Week 5: 3D Scanning and Printing
Understanding the 3D Scanning Device — EinScan Pro HD
At the beginning of this week, I started working with a handheld 3D scanner called the EinScan Pro HD, developed by Shining 3D. Before actually scanning, the first important step was understanding how the device sees objects and converts real-world geometry into digital data.

The scanner works using structured light scanning. Instead of touching the object or using lasers, the device projects a patterned light beam onto the surface. Two cameras inside the scanner observe how this projected pattern deforms across the object. From this deformation, the software calculates depth, curvature, and spatial position.
The front section of the scanner contains:
A projector that emits patterned light.
Dual high-resolution cameras that capture geometry.
Internal processing that continuously records spatial change while moving.

When scanning begins, the scanner does not capture the entire object instantly. It records thousands of small frames while the operator moves around the object. Each frame contains partial geometry, and later the software aligns all the frames together to reconstruct the full 3D form.
Role of Markers (Stickers)
During scanning, small circular reflective stickers were placed on the object.
These markers serve an important purpose:
They act as reference points for tracking movement.
The scanner constantly identifies these markers to understand its position relative to the object.
This prevents loss of tracking when surfaces are smooth or repetitive.
Without markers, the scanner may lose orientation, especially on plain or symmetric surfaces.
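The tracking role of markers can be sketched in code. If the same set of markers is identified in two consecutive frames, the scanner can recover how it moved between them as a rigid rotation and translation. The sketch below uses the standard Kabsch algorithm on matched 3D points; it is a simplified illustration with made-up coordinates, not Shining 3D's actual (proprietary) tracking implementation.

```python
import numpy as np

def align_frames(markers_a, markers_b):
    """Estimate the rigid rotation R and translation t that map marker
    positions seen in one frame onto the same markers seen in the next
    frame (Kabsch algorithm on matched (N, 3) point sets)."""
    ca, cb = markers_a.mean(axis=0), markers_b.mean(axis=0)
    H = (markers_a - ca).T @ (markers_b - cb)   # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against a reflection solution
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t

# Toy check: "frame B" sees the markers of "frame A" rotated 10 degrees
# about Z and shifted a few centimetres (hypothetical values).
theta = np.radians(10)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
b = a @ Rz.T + np.array([0.05, -0.02, 0.10])
R, t = align_frames(a, b)   # recovers the motion between the two frames
```

This is why plain or symmetric surfaces are a problem: without distinctive points to match between frames, there is nothing to feed an alignment like this, and tracking drifts or fails.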
How the Scanner Processes Input
The workflow happening internally is:
1. Light pattern projected onto the object.
2. Cameras capture the distorted pattern.
3. Software calculates depth using triangulation.
4. Multiple frames recorded while moving.
5. Frames aligned using geometry or markers.
6. Point cloud generated.
7. Point cloud converted into a mesh surface.

So essentially, the scanner converts light → image data → spatial coordinates → digital geometry. Understanding this helped me realise that scanning quality depends more on movement, lighting, and tracking stability than on just pressing the scan button.
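The triangulation step can be illustrated with the textbook rectified two-camera relation: depth = focal length × baseline ÷ disparity. This is a simplified model with hypothetical numbers, not the EinScan's actual calibration, but it shows how a pixel shift between the two cameras becomes a depth value.

```python
def stereo_depth(focal_px, baseline_m, x_left_px, x_right_px):
    """Rectified two-camera triangulation: the same projected-pattern
    feature is found at x_left and x_right in the two images, and the
    horizontal shift between them (the disparity) encodes depth."""
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("feature must shift between the two cameras")
    return focal_px * baseline_m / disparity  # depth in metres

# A pattern feature at pixel 1000 in the left image and 600 in the
# right, with a 2400 px focal length and an 8 cm camera baseline:
depth = stereo_depth(2400, 0.08, 1000, 600)   # 2400 * 0.08 / 400 = 0.48 m
```

The relation also explains why close, high-disparity features give finer depth resolution than distant ones.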
2️⃣ Live Scanning Process
While scanning the object, a visible projected pattern appeared on the surface. This pattern is not random — it allows the scanner to read depth variation across the object.
A few practical observations during scanning:
The scanner must be moved slowly and steadily.
Sudden motion causes tracking loss.
Maintaining a consistent distance improves accuracy.
Overlapping scan areas help alignment.

I understood that scanning is similar to filming a video — continuous smooth movement produces better reconstruction than fast movements.
Before scanning, selecting the correct settings is important because they directly affect accuracy and processing quality.

Texture vs Non-Texture Scan
Texture Scan — captures colour information along with geometry.
Non-Texture Scan — focuses only on geometry and surface accuracy.
Non-texture scan was selected because shape capture was more important than colour data.

Alignment Mode
Determines how the software tracks motion:
Markers → uses stickers for tracking.
Features → tracks based on geometric edges.
Hybrid → combination of both.
Markers mode provided stable tracking for this object.

Operation Mode
Classic → faster scanning.
High Detail → better surface accuracy.
High Detail mode improves mesh precision but increases processing time.

Resolution Selection
Resolution controls scan density:
High → maximum detail, heavy file size.
Medium → balanced workflow.
Low → faster but less detailed scan.
Medium resolution was chosen to maintain both performance and quality.
Key Learning
Scanning accuracy depends heavily on operator movement.
Markers improve tracking reliability.
Correct software settings reduce reconstruction errors.
Understanding scanning principles helps avoid post-processing issues.

This session established the foundation for converting physical objects into digital models before moving into CAD editing and fabrication workflows.
Issue Faced During Scanning — Camera Preview Not Working
During the scanning process, an issue occurred where the camera preview suddenly stopped displaying inside the Shining3D scanning software. Even though the device was connected, the preview window appeared blank and scanning could not proceed.
This helped me understand that 3D scanning systems depend on continuous communication between hardware sensors and software drivers. If any part of this communication breaks, the scanner cannot capture geometry.
Exploring Mobile 3D Scanning Using LiDAR
After facing issues with the handheld scanner camera preview, I decided to experiment with an alternative approach — mobile 3D scanning. Modern Apple iPhone Pro devices include a built-in LiDAR sensor, which allows quick depth capture directly from the phone. I used the application Polycam to test whether a phone-based workflow could produce usable scan geometry.
How LiDAR Scanning on Phone Works

Unlike the EinScan device, which projects structured light patterns and captures highly detailed deformation through dual cameras, the iPhone uses LiDAR (Light Detection and Ranging).
The working principle is different:
The phone emits invisible infrared light pulses.
These light rays bounce back from surfaces.
The sensor measures the time taken for the light to return.
Distance is calculated from this timing.
The software continuously combines depth frames into a 3D model.

So instead of capturing fine surface distortion like structured light scanners, LiDAR mainly builds a depth map of the environment.
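The timing-to-distance step above is simple arithmetic: the pulse travels out and back, so the distance to the surface is half the round trip at the speed of light. A minimal sketch, with a hypothetical return time:

```python
C = 299_792_458  # speed of light, m/s

def tof_distance(round_trip_s):
    """Time-of-flight ranging: the infrared pulse travels out to the
    surface and back, so the one-way distance is half the round trip."""
    return C * round_trip_s / 2

# A pulse returning after ~6.67 nanoseconds has travelled ~2 m in
# total, so the surface is roughly 1 m away:
d = tof_distance(6.67e-9)
```

The nanosecond scale of these timings is part of why phone LiDAR depth is coarser than structured-light scanning: tiny timing errors translate into centimetre-level distance errors.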
Observation During Scanning
The scanning process was fast and easy:
No markers required.
Real-time mesh appeared instantly.
Entire object captured in a single continuous pass.

However, the final model did not reproduce the exact form accurately. Certain areas looked smoothed, merged, or slightly inflated compared to the real object.
Why the Form Was Not Accurate
My initial assumption was partly right, but the reason is more specific:
LiDAR collects fewer depth points compared to professional scanners.
It prioritizes creating a complete 3D scene rather than precise geometry.
The software fills missing data by interpolating surfaces.
Small features and sharp edges get averaged into smoother shapes.
Mesh reconstruction algorithms merge multiple frames into one continuous render.
So the issue was not simply combining geometry, but rather:
👉 The phone generates a low-density point cloud, and the software predicts surfaces between the points to create a full model. Professional scanners like the EinScan capture millions of precise points, while phone LiDAR captures coarser spatial information optimized for AR and room scanning.
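The edge-averaging effect described above can be demonstrated on a one-dimensional depth profile. Repeatedly averaging each sample with its neighbours is a crude stand-in for the interpolation a low-density reconstruction performs, and it visibly rounds a sharp step into a slope — the same way small features on the object came out smoothed and slightly inflated. A toy sketch, not Polycam's actual algorithm:

```python
def smooth(profile, passes=3):
    """Replace each sample with the average of itself and its two
    neighbours, repeated a few times — a crude stand-in for the
    surface interpolation a low-density reconstruction performs."""
    for _ in range(passes):
        profile = [
            (profile[max(i - 1, 0)] + profile[i]
             + profile[min(i + 1, len(profile) - 1)]) / 3
            for i in range(len(profile))
        ]
    return profile

step = [0.0] * 5 + [10.0] * 5   # a sharp edge on the real object
rounded = smooth(step)
# The abrupt 0 → 10 jump is now spread across several samples: the
# sharp edge has become a gentle slope, just like fine details in
# the phone scan.
```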
Transition to 3D Printing and Machine Understanding
After understanding 3D scanning workflows, the next phase of the week focused on 3D printing experimentation. Instead of directly printing final models, the approach was to first understand how the printer behaves mechanically and materially.
The goal was to learn: how geometry affects strength, how slicer settings influence results, and how motion mechanisms translate into physical performance.
1️⃣ Preparing G-Code Using Creality Print 6
Before printing, models were prepared using Creality Print (Version 6).
The slicing stage converts CAD models into machine instructions called G-code, which defines: nozzle movement (X, Y, Z motion), extrusion amount, printing speed, layer formation strategy.
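A typical sliced move looks like `G1 X102.4 Y85.1 E0.0342 F1800`, and the fields map directly onto the list above: X/Y/Z for nozzle position, E for extrusion amount, F for feed rate. The sketch below parses such a line into those fields; the coordinate values are hypothetical, and real Creality Print output contains many more command types than this minimal parser handles.

```python
def parse_gcode_move(line):
    """Parse a G0/G1 move line into its parameter fields:
    X/Y/Z = nozzle position (mm), E = extruded filament length (mm),
    F = feed rate (mm/min). Returns None for non-move lines."""
    parts = line.split(";")[0].split()   # drop any trailing comment
    if not parts or parts[0] not in ("G0", "G1"):
        return None
    return {p[0]: float(p[1:]) for p in parts[1:]}

# One hypothetical printing move from a sliced file:
move = parse_gcode_move("G1 X102.4 Y85.1 E0.0342 F1800 ; perimeter")
```

Reading a few raw lines of the exported file this way is a good sanity check that the slicer produced the motion and extrusion you expect.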
While preparing prints, the following parameters were explored:
Key Settings Tested
Layer height — affects surface finish and accuracy.
Infill type & density — controls internal strength and material usage.
Support generation — required for overhang regions.
Brim adhesion — improves bed sticking and prevents warping.
Printing speed — influences dimensional accuracy.

Rather than using default settings, adjustments were made intentionally to observe how each parameter changed print behavior.
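The cost of the first setting in the list, layer height, is easy to quantify: the slicer generates one layer per increment of height, so halving the layer height roughly doubles both the layer count and the print time. A small sketch with hypothetical part dimensions:

```python
import math

def layer_count(part_height_mm, layer_height_mm):
    """Number of layers the slicer will generate for a part of the
    given height — each layer adds one pass of print time."""
    return math.ceil(part_height_mm / layer_height_mm)

# A hypothetical 40 mm tall part sliced at two common layer heights:
fine   = layer_count(40, 0.12)   # more layers: smoother finish, slower print
coarse = layer_count(40, 0.28)   # fewer layers: rougher finish, faster print
```

This is why layer height was worth testing deliberately rather than left at the default: it trades surface quality against time in a directly predictable way.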
