I cannot figure out how to make the Glowforge read the QR code

I do not know how to make my Glowforge read the QR code.

How do I get it to attempt to read the QR code?

Thank you for any suggestions!

Welcome. As you probably know, it isn’t necessary for the machine to actually read the code. You can simply select the material from the drop-down menu and all of the settings for that material will auto-populate.

That being said, if it is important to you that the QR code is read, make certain that the lid camera lens is clean and try placing the material code in another place on the bed. Rotate the material so the code is in the upper left if it isn’t being read in the lower right. For material smaller than 12 x 20, place the material in the center of the bed with the code directly under the camera.


Take a picture of each QR code for future reference, then print them out. You can lay the material QR-code-side down and place the printed QR code in the middle of the material, where it should read just fine. The lid camera is… somewhat shoddy, and it appears that ALL of them are blurry at the edges, which just so happens to be where the QR codes are placed on the PG materials. But it does focus well on the center of the bed and should read the code there without trouble.

Thank you very much. Your suggestion works well!


I disagree with your assertion that the camera is low quality. This has been explained many, many times. The camera is a fisheye design so that it can see the entire working area and still be close to the work area. This results in some distortion when looking at the extreme limits of the camera. My machine has never had a problem reading any of the QR codes and provides images that are sharp, if somewhat distorted due to processing, all the way up to the corners of the bed.

This camera is of the appropriate quality for the use. It is not “shoddy”. The majority of issues with QR codes are due to glare from the label, outside lighting, a dirty camera lens, or the code simply being so far from the camera that the post-processing distorts the image beyond the software’s ability to decipher it.


Select the gear icon in the upper right and tell it to ‘refresh bed image’.


In case you’re curious - this is what the camera image looks like before they make it pretty for us. The fact that it can ever read the QR codes is because their programming frickin’ rocks! The focal distance on that fisheye is tiny.


Two… three cameras across the lid, with the images stitched together, would cut down on the fisheye and eliminate the need for them to “pretty up” the image. I like the machine. I’ve recommended it to multiple people. But I can see room for improvement.

Room for design improvement doesn’t mean the hardware is poor quality. I think that your suggestion will likely solve some of the issues. However, I also know that aligning those cameras and the additional processing and support hardware may make a significant price difference.

While researching a Raspberry Pi dashcam for my car, I found that the hardware required just to record two independent camera streams at the same time is almost beyond the capability of the RPi. With your solution, the onboard hardware would have to handle two or three times the amount of image data: higher-capacity processing and memory in the brains, more time required, more compression required to send the data to the GF server, and more programming to properly de-fisheye three cameras, even if each needs less correction.

Think about your cell phone - record video for several minutes and see how much it heats up. Image processing takes a huge amount of resources, and the higher the image quality, the more resources it requires.

This may be something they look into in the future, but the reasons above may make it infeasible given their other goals and requirements.


There is no “stream” here. It’s a snapshot. The stitching of 2-3 images together would take less processing power than what they are doing to correct for the fisheye.

Except they still have to remove the fisheye from three images instead of one, and there is already more than one image being processed and transmitted. Every time the machine homes or calibrates, a series of images is taken with the head at different locations. These images are processed to remove the fisheye, and the location of the head is found from them. With three cameras, you would have to remove the fisheye, tie the images together using reference indicators to create a composite, and then locate the head.

Having done image processing on a project myself, I can tell you it is not as easy as you may think, and it takes a lot of processing power.
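For a concrete sense of why de-fisheye processing stretches the edges (and why the corner QR codes come out blurry after correction), here is a minimal sketch. It assumes the simple equidistant fisheye model (r = f·θ), which may not match the actual lens in the Glowforge, and the focal length is a made-up number purely for illustration:

```python
import math

def undistort_radius(r_fisheye: float, f: float) -> float:
    """Map a pixel's radial distance from the image center in an
    equidistant fisheye image (r = f * theta) to the radius it would
    have in an ideal rectilinear/pinhole image (r = f * tan(theta))."""
    theta = r_fisheye / f           # incidence angle recovered from the fisheye radius
    return f * math.tan(theta)

f = 500.0  # hypothetical focal length in pixels
for r in (0.0, 100.0, 300.0, 500.0):
    r_out = undistort_radius(r, f)
    # The stretch factor grows toward the edge: the same fisheye pixels
    # must cover more output pixels, so corners look soft after correction.
    stretch = r_out / r if r else 1.0
    print(f"fisheye r={r:5.0f} -> rectilinear r={r_out:7.1f} (stretch {stretch:.2f}x)")
```

The takeaway is that the correction is nonlinear and increasingly aggressive toward the image edges, so detail there gets spread thin regardless of how good the sensor is.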


I’m a programmer. Have been for 30 years. I’ve done a lot of image processing at a very low level myself. I’d have to do some wiresharking to see exactly how much data is sent when I turn my machine on and verify what you are saying. Not saying you are wrong, but I’m a little skeptical. 🙂

Go for it. My recollection is that not much data is used, which is why I have laid out the operation as I have, with the initial processing happening on the local machine. If that is incorrect and the processing is done by the remote server, it means that the image is compressed. Even here you would be tripling the amount of data being sent with three cameras. Maybe that is a permissible operation mode, but you are still going to have multiple images being sent.

Of course we’re in the hypothetical here, so grains of salt and all that… But I would definitely not send three times the data. I don’t know what the resolution of their current camera is, but if I were spreading the job across three cameras with less fisheye (or optimally none at all), I would go with a slightly lower resolution per image. Meaning, if it is currently a 1080p camera, I would use three 1080p cameras but cut each image down to 720p for processing, since you don’t have to worry about compensating for fisheye distortion (which just creates blurry images). That’s only 33% more data being sent, and the image would still be clearer than the resulting de-fisheyed 1080p image. And that’s assuming the processor on the Glowforge can’t do the stitching itself. If it CAN, you likely wouldn’t need the entirety of the three 720p images; you could likely get away with, say, 80% of each image, since there has to be overlap for the stitching. And NOW you are down to only about 7% more data being sent, and a much higher-fidelity bed image.

Again… just hypothetical… but that’s how I’d do it.
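As a quick sanity check on the arithmetic above (raw pixel counts only, ignoring compression; all resolutions are the hypothetical ones from the post):

```python
# Rough pixel-count comparison for the hypothetical camera setups discussed above.
# Ignores compression and assumes standard 1080p and 720p frame dimensions.

px_1080p = 1920 * 1080          # single fisheye camera, full frame
px_720p = 1280 * 720            # each of three lower-resolution cameras

three_full = 3 * px_720p        # three full 720p frames sent to the server
ratio_full = three_full / px_1080p
print(f"three full 720p frames vs one 1080p frame: {ratio_full:.2f}x")

three_cropped = 3 * 0.8 * px_720p   # keep ~80% of each frame after trimming stitch overlap
ratio_cropped = three_cropped / px_1080p
print(f"after dropping 20% overlap per frame:      {ratio_cropped:.2f}x")
```

Three full 720p frames come to about 1.33× the pixels of one 1080p frame (+33%), and trimming 20% overlap brings it down to roughly 1.07× (+7%).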

I’m so sorry you ran into trouble. It looks like you’ve been able to print since contacting us. That’s great! Could you please follow the steps on our support page here for “Things That Need Wiping”? After thoroughly cleaning, please try again and let me know if your unit is then able to recognize Proofgrade material as expected.

It’s been a little while since we’ve heard from you. I’d like to follow up to check out some more details and make sure your Glowforge is working well, so I’ll be in touch via email soon.