Wildcard: Digital Embroidery Process with Brother Machine and Ink/Stitch

Inkscape already seemed like an excellent tool to me, but I never imagined it could end up being a full-fledged tool for generating embroidery files. The moment I realized it was possible to assign stitch types, simulate outputs, and export ready-to-use formats from a vector design, I knew I had just unlocked new creative potential.

Working with a Brother embroidery machine, the NS1850L, requires an understanding of both the digital design process and the physical setup. This week, I explored the complete path from digital creation to fabric output, using Inkscape enhanced with the Ink/Stitch extension. Below, I share an overview of that workflow and the key steps and insights gained along the way.

Figure 1: Brother embroidery machine (NS1850L)

The process begins in Inkscape, where a design can either be created from scratch or imported from a raster image. If you start with a PNG or JPEG, it needs to be vectorized using the 'Trace Bitmap' tool under the 'Path' menu. This allows the design to be converted into paths that can later be assigned different stitch types.
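
Trace Bitmap itself lives inside Inkscape's GUI, but to give an idea of what it does conceptually, here is a rough Python sketch of the same idea: threshold the raster, trace the outlines, and write them out as SVG paths. It assumes opencv-python is installed, the file names are placeholders, and it is only an illustration rather than what Inkscape actually runs.

  import cv2

  def raster_to_svg_paths(image_path, svg_path, threshold=128):
      # Load as grayscale and binarize (roughly what Trace Bitmap's brightness cutoff does)
      gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
      _, binary = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY_INV)

      # Find the outline of every dark region
      contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

      height, width = gray.shape
      paths = []
      for contour in contours:
          points = contour.reshape(-1, 2)
          d = "M " + " L ".join(f"{x},{y}" for x, y in points) + " Z"
          paths.append(f'<path d="{d}" fill="black"/>')

      # Write a minimal SVG that Inkscape (and later Ink/Stitch) can open
      with open(svg_path, "w") as f:
          f.write(f'<svg xmlns="http://www.w3.org/2000/svg" width="{width}" height="{height}">\n')
          f.write("\n".join(paths))
          f.write("\n</svg>\n")

  # Placeholder file names for illustration only
  raster_to_svg_paths("logo.png", "logo_traced.svg")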

While learning to use the embroidery machine, Leonardo Zamora and I quickly realized that stitching any type of image required separating it into individual color layers — a process that can be quite time-consuming if done manually. To streamline this step, we collaborated with ChatGPT to develop a Python script that automatically segments an image based on its color regions, making the layer separation process much faster and more efficient. That script allowed us to isolate parts of a design by dominant color and generate individual SVGs for each layer.

Prompt for GPT 4.0

Me: Can you create Python code where, if I insert a JPG or PNG image, it returns several PNGs that segment the original image into color layers? That is, each generated PNG should contain the hues that are most predominant in the image. For example, for the following image of Miles Morales' Spider-Man logo, it should return only a red layer. Now, for the Green Goblin image, while it could return the entire vast range of colors, the truth is that's impossible, so it should instead return the layers of the most dominant colors—in this case, green, purple, black, and maybe yellow. Before you give me the code, do you understand? I want it to be as good as possible—I honestly don't know the methods, but what I do know is that I want it in Python, with a transparent or white background, a maximum of 5 layers, a minimum of 1 layer, and for the color detection to automatically detect colors based on relevance.

ChatGPT: Here's the code (ChatGPT nailed it on the first try... damn):

The image I'm going to use

Script used:

  import cv2
  import numpy as np
  from sklearn.cluster import KMeans
  from PIL import Image
  import os
  
  def extract_dominant_colors(image, num_colors=5):
      # Convert to RGB and reshape for clustering
      img_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
      reshaped = img_rgb.reshape((-1, 3))
  
      # Apply KMeans clustering
      kmeans = KMeans(n_clusters=num_colors, random_state=42, n_init='auto')
      kmeans.fit(reshaped)
  
      # Get dominant colors and labels
      colors = kmeans.cluster_centers_.astype(int)
      labels = kmeans.labels_
      return colors, labels.reshape(image.shape[:2])
  
  def create_color_mask(label_mask, cluster_idx):
      # Create binary mask for the specified cluster
      return (label_mask == cluster_idx).astype(np.uint8) * 255
  
  def create_solid_color_layer(mask, color, size):
      # Create transparent RGBA image
      layer = np.zeros((size[1], size[0], 4), dtype=np.uint8)

      # Apply the solid color (fully opaque) wherever the mask is 255
      layer[mask == 255] = [color[0], color[1], color[2], 255]  # R, G, B, A

      return layer
  
  def extract_color_layers(image_path, output_dir="output_layers2", max_layers=10):
      # Load original image
      image_bgr = cv2.imread(image_path)
      image_rgb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB)
      height, width = image_rgb.shape[:2]
  
      # Create output directory
      os.makedirs(output_dir, exist_ok=True)
  
      # Detect dominant colors
      colors, label_mask = extract_dominant_colors(image_bgr, num_colors=max_layers)
  
      # Generate each color layer with a solid color fill
      for i, color in enumerate(colors):
          mask = create_color_mask(label_mask, i)
          layer = create_solid_color_layer(mask, color, (width, height))
  
          output_path = os.path.join(output_dir, f"layer_{i+1}_solid_{color.tolist()}.png")
          Image.fromarray(layer).save(output_path)
  
      print(f"Generated {len(colors)} solid-color layers in '{output_dir}'.")
  
  #File
  extract_color_layers(r"C:\Users\Pingu\OneDrive\Documentos\Codes\Assasins_creed_image.jpg")
  

Figure 2: Layer generator from image

Figure 3: Import images
Figure 4: Create a new layer
Figure 5: Enable or disable image display for better layer control
Figure 6: Erase unnecessary layers
Figure 7: Sort and rename layers for better control
Figure 8: Homogeneous color due to the color of the thread available in the lab
Figure 9: Vectorize each layer with Inkscape's Trace Bitmap, with the threshold set almost at maximum to make sure every detail is captured
Figure 10: Capture the color of the image with the eyedropper and then apply this color to the vectorized image
Figure 11: Copy the color
Figure 12: Capture the color of the image with the eyedropper and then apply this color to the vectorized image
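
If you prefer typing an exact hex value into Inkscape's Fill & Stroke dialog instead of eyedropping it from the image, a small sketch like the following prints the dominant color of one of the generated layer PNGs. It assumes Pillow and NumPy are installed, and the file name is a placeholder (the script's real output names also include the detected RGB values).

  import numpy as np
  from PIL import Image

  # Placeholder name: one of the PNGs produced by the layer script
  layer = np.array(Image.open("output_layers2/layer_1.png").convert("RGBA"))

  # Only look at opaque pixels (the background is transparent)
  opaque = layer[layer[..., 3] == 255][:, :3]
  r, g, b = opaque.mean(axis=0).astype(int)
  print(f"Dominant color: #{r:02X}{g:02X}{b:02X}")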

Once the vector shapes are in place, the Ink/Stitch extension (click to go to the official page) allows each shape to be configured with specific embroidery parameters. You can apply running stitches, satin stitches, or fills — all of which have customizable settings like angle, density, and compensation. To be honest, I experimented with adjusting some of the parameters, but in several tests, I didn’t notice any significant differences in the output. If you’re unsure about making changes, I’d recommend sticking with the default settings — they work well enough in most cases.
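
To get a feel for what the density setting implies, here is a rough back-of-envelope estimate of my own (not Ink/Stitch's actual algorithm): the row spacing of a fill directly drives how many stitches the machine has to make, and therefore how long the job runs.

  # Rough estimate: a denser fill means more rows, so more stitches and a longer run time.
  def estimate_fill_stitches(width_mm, height_mm, row_spacing_mm=0.25, max_stitch_mm=3.0):
      rows = height_mm / row_spacing_mm
      stitches_per_row = width_mm / max_stitch_mm
      return int(rows * stitches_per_row)

  # A 50 x 50 mm filled square at 0.25 mm row spacing
  print(estimate_fill_stitches(50, 50))          # ~3333 stitches
  # Halving the spacing (doubling the density) roughly doubles the count
  print(estimate_fill_stitches(50, 50, 0.125))   # ~6666 stitches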

After setting everything up, the next step is to export the design in a format the machine can read — in this case, ".pes". This is done directly from Inkscape via 'File > Save As'. Just plug a USB drive into your computer, save the design to it, and then plug the drive into the embroidery machine.
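
As a sanity check before walking over to the machine, you can also open the exported file in Python with the pyembroidery library (which, as far as I know, is what Ink/Stitch uses under the hood for these formats). A minimal sketch, assuming pyembroidery is installed and using a placeholder file name:

  import pyembroidery

  # Placeholder file name: point it at the .pes you just exported
  pattern = pyembroidery.read("design.pes")

  print("Stitch entries:", len(pattern.stitches))
  print("Threads in the file:", len(pattern.threadlist))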

Figure 13: Select every layer
Figure 14: Select the Ink/Stitch extension to generate the file
Figure 15: After waiting, the extension simulates the operations the Brother machine will perform; once finished, you can _apply and remove_ the changes so that Inkscape remembers the G-code
Figure 16: Save in ".pes" format

From here, it’s all about fabric preparation and machine setup. The basic requirements are simple: load your upper and bobbin thread, stabilize the fabric with the appropriate backing, and properly align it on the hoop. The Brother NS1850L provides a straightforward interface for loading the file and adjusting final settings like thread colors or position.

Figure 17: Threads selected according to colors
Figure 18: Cut both the fabric and stabilizer (pellon) to the appropriate size (1)
Figure 19: Cut both the fabric and stabilizer (pellon) to the appropriate size (2)
Figure 20: Correctly tensioned fabric

Figure 21: Remove the side carriage to properly position the piece to be embroidered

Figure 22: If the sewing thread spool runs out, simply follow the numbered threading instructions displayed on the top of the machine. At no point should any of these steps extend downward — they remain entirely within the upper section
Figure 23: File parameters
Figure 24: Lower the safety lever to start the process
Figure 25: Press ONLY when the button is green to initiate the process

To be honest, I wasn't disappointed with the sewing process itself—but I may have overestimated the capabilities of the Brother machine. The Python script I worked on with Leo is truly impressive when it comes to separating images into color layers. It's very precise—almost too precise. It can capture extremely fine details, down to tiny dots, which is great in theory. However, this level of precision became a challenge during stitching, as the machine attempted to sew even the smallest details. While these aren't inherently difficult for the machine, the accumulation of layers and fine elements made the overall process more time-consuming and complex than expected.

Figure 26: Complex figure result

To showcase what the machine can handle, I decided to stitch the iconic spider emblem from Sony's The Amazing Spider-Man with Andrew Garfield. The process to make the file is literally the same; I used only one layer for the whole project simply because it's faster and easier, and because the logo doesn't need many colors anyway.

Figure 27: Simple figure result


HEROSHOT! AMAZING STITCH

LEARNINGS

At first, I honestly didn't understand how to use the machine—stress definitely got the better of me—but once I calmed down, I realized how straightforward it actually is. The instructions are clearly printed on the machine, making the process easy to follow for anyone.

Perhaps one of the reasons I struggled to understand this at first is that it differs significantly from what I've done throughout FabAcademy. While the underlying concepts remain the same—generating G-code, uploading it to a machine, and possibly configuring certain parameters—the implementation here is quite different. The use of threads, in particular, adds a level of abstraction that makes the logic feel less direct and harder to grasp at first.

As for the file generation, while the Python script does an excellent job, it's important to realistically assess how it handles layer creation. Even after manually deleting some unnecessary layers, the script still captured a level of detail that made the stitching process surprisingly complex. Because of this, I’d recommend refining the script and carefully selecting which layers to embroider, even if it takes more time.
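
One concrete refinement I have in mind (just a sketch under assumptions, not something we have integrated yet) would be to drop tiny specks from each color mask before it becomes a layer, for example by discarding connected regions below a minimum area. The 500-pixel threshold here is an arbitrary value to tune:

  import cv2
  import numpy as np

  def remove_small_regions(mask, min_area=500):
      # Label connected blobs in the binary mask and measure their areas
      num_labels, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)

      cleaned = np.zeros_like(mask)
      for label in range(1, num_labels):  # label 0 is the background
          if stats[label, cv2.CC_STAT_AREA] >= min_area:
              cleaned[labels == label] = 255
      return cleaned

  # In the layer script, this would slot in right after create_color_mask():
  #   mask = remove_small_regions(create_color_mask(label_mask, i))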

The spider design, although simple, turned out great. It shows that with proper layer management, you can achieve impressive and clean results—even with relatively basic designs.

Documents and software links!