
2024 (B.E. 2567)
Creating 3D Asset Models from 2D Images
- ภัทรนันท์ เศรษฐภักดี
- อาจอง มุขเงิน
จิตรลดา
The aim of this study is to reconstruct a 3D model from a 2D image. First, character images from the dataset are used to generate a 360-degree turntable video with Stable Video Diffusion. Feature extraction and feature matching are then performed with the SuperPoint and SuperGlue models. Next, Structure from Motion recovers the camera motion across the video frames, with a neural network serving as the main processing unit. Multi-View Stereo then generates a point cloud from the multiple camera angles in the video, and the Iterative Closest Point algorithm aligns the point clouds from the different views. The model is rigged through Blender's Python API, which adds a skeleton and makes it ready for animation. Finally, the 3D model is displayed with Open3D, the reprojection error is reported, and the similarity between the 2D image and the 3D model rendered from the same angle is compared.
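To illustrate the geometric core shared by the Structure from Motion and Multi-View Stereo steps, the following is a minimal NumPy sketch of linear (DLT) triangulation of a single point observed in two camera views. The camera intrinsics and point values below are hypothetical examples, not data from the project's pipeline.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: 3x4 camera projection matrices.
    x1, x2: observed pixel coordinates (u, v) in each view.
    Builds the homogeneous system A X = 0 and takes the right
    singular vector with the smallest singular value as X.
    """
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]          # dehomogenize

# Hypothetical setup: two cameras sharing intrinsics K, the second
# shifted one unit along x; the triangulated point should match X_true.
K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

def project(P, X):
    h = P @ np.append(X, 1.0)
    return h[:2] / h[2]

X_true = np.array([0.5, -0.2, 4.0])
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

In a real pipeline the projection matrices come from the estimated camera poses, and many such points are triangulated at once; this sketch only shows the per-point linear algebra.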
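The point-cloud alignment step can be sketched with a bare-bones point-to-point Iterative Closest Point in NumPy. This brute-force nearest-neighbour version is only for illustration under synthetic data; a production pipeline would typically use a library implementation (e.g. Open3D's registration module) with k-d tree search.

```python
import numpy as np

def best_fit_transform(src, dst):
    """Rigid transform (R, t) minimizing ||R src + t - dst|| (Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # correct an improper rotation (reflection)
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=50, tol=1e-10):
    """Point-to-point ICP: alternate nearest-neighbour matching
    and best-fit rigid alignment until the error stops improving."""
    cur, prev = src.copy(), np.inf
    for _ in range(iters):
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        nn = d.argmin(axis=1)                     # closest dst per src point
        err = d[np.arange(len(cur)), nn].mean()
        if prev - err < tol:
            break
        prev = err
        R, t = best_fit_transform(cur, dst[nn])
        cur = cur @ R.T + t
    R, t = best_fit_transform(src, cur)           # accumulated transform
    return R, t, err

# Synthetic check: a small rotation + translation of a random cloud,
# with the target shuffled so no correspondence is given.
rng = np.random.default_rng(0)
src = rng.random((40, 3))
th = 0.02
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0,         0.0,        1.0]])
t_true = np.array([0.01, -0.01, 0.02])
dst = (src @ R_true.T + t_true)[rng.permutation(40)]
R_est, t_est, err = icp(src, dst)
```

The O(N^2) distance matrix here is fine for toy clouds but not for the dense Multi-View Stereo output, where spatial indexing is essential.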
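The reprojection-error metric used in the final evaluation can be written compactly: project the reconstructed 3D points through the estimated camera and measure the mean pixel distance to the observed 2D features. The intrinsics and points below are hypothetical.

```python
import numpy as np

def reprojection_error(K, R, t, points3d, points2d):
    """Mean pixel distance between observed 2D points and the
    3D points projected through the camera (K, R, t)."""
    P = K @ np.hstack([R, t.reshape(3, 1)])            # 3x4 projection matrix
    X = np.hstack([points3d, np.ones((len(points3d), 1))])
    proj = (P @ X.T).T
    proj = proj[:, :2] / proj[:, 2:3]                  # perspective divide
    return np.linalg.norm(proj - points2d, axis=1).mean()

# Hypothetical camera at the origin looking down +z.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
pts3d = np.array([[0.0, 0.0, 4.0],
                  [1.0, 1.0, 5.0],
                  [-1.0, 0.5, 3.0]])
# Exact projections of pts3d -> error is zero.
h = (K @ pts3d.T).T
obs = h[:, :2] / h[:, 2:3]
err_exact = reprojection_error(K, np.eye(3), np.zeros(3), pts3d, obs)
# Nudging one observation by 3 px raises the mean error to 3/3 = 1 px.
obs_noisy = obs.copy()
obs_noisy[0] += [3.0, 0.0]
err_noisy = reprojection_error(K, np.eye(3), np.zeros(3), pts3d, obs_noisy)
```

A low mean reprojection error indicates that the recovered camera poses and 3D structure are mutually consistent, which is why it is a standard quality measure for Structure from Motion output.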