Abstract:

It is challenging to transform the style of 3D shapes and generate diverse outputs with learning-based methods. The reasons are twofold: (1) the lack of training data in different styles and (2) the multi-modal information of 3D shapes, which is hard to disentangle. In this work, a multi-view-based neural network model is proposed to learn style transformation between unpaired domains while preserving the content of 3D shapes. Given two sets of shapes from different style domains, such as Japanese chairs and Ming chairs, multi-view representations of each shape are computed, and the style transformation between the two sets is learned from these representations. This multi-view representation not only preserves the structural details of a 3D shape but also ensures the richness of the training data. At the test stage, transformed maps are generated by the trained network by combining the style/content features extracted from the multi-view representation with new style features. The transformed maps are then consolidated into a 3D point cloud by solving a domain-stability optimization problem: depth maps from all viewpoints are fused to obtain a shape whose style resembles the target shape. Experimental results demonstrate that the proposed method outperforms the baselines and state-of-the-art approaches on style transformation.
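The final stage described above, fusing depth maps from all viewpoints into one point cloud, can be sketched as plain pinhole back-projection followed by a union of the per-view points. This is a minimal illustration, not the paper's implementation: the function names, the intrinsics tuple, and the 4x4 camera-to-world pose format are assumptions, and the paper's domain-stability optimization is omitted.

```python
def backproject_depth(depth, fx, fy, cx, cy, pose):
    """Back-project a depth map (rows of depth values) into 3D points.

    Uses the pinhole camera model; `pose` is a 4x4 camera-to-world
    matrix given as nested lists (an assumed convention, not the
    paper's). Pixels with depth <= 0 are treated as background.
    """
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:
                continue
            # pixel (u, v) with depth z -> camera-frame coordinates
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            # rigid camera-to-world transform
            wx = pose[0][0] * x + pose[0][1] * y + pose[0][2] * z + pose[0][3]
            wy = pose[1][0] * x + pose[1][1] * y + pose[1][2] * z + pose[1][3]
            wz = pose[2][0] * x + pose[2][1] * y + pose[2][2] * z + pose[2][3]
            points.append((wx, wy, wz))
    return points


def fuse_views(depth_maps, intrinsics, poses):
    """Union of back-projected points from every viewpoint -- a minimal
    stand-in for the depth-map fusion step described in the abstract."""
    cloud = []
    fx, fy, cx, cy = intrinsics
    for depth, pose in zip(depth_maps, poses):
        cloud.extend(backproject_depth(depth, fx, fy, cx, cy, pose))
    return cloud
```

In practice the transformed maps coming out of the network would be back-projected this way per viewpoint, and the consolidation step would then reconcile the overlapping points rather than simply concatenating them.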
