Good point: there's no constraint for the printable area when you're doing a trace of something large on the bed. The no-go zebra stripes don't pop up.
Did you just back out and redo the initial rectangle, making the selection rectangle smaller and using the known boundaries in relation to the GFUI rulers?
No. Just gave up. But decided on a variant test (because I care less about tracing to be honest).
So now I decided to make a circle with a crosshair in the middle and see how accurately I could place them, from the camera image, onto the grid on the wood. The squares are approximately 1".
Even the center one is off. Again, the disclaimer: my PRU lid is torqued, but here I did put the weight on the lid to make it sit flush. Someone should repeat this on a production unit.
The lines are scores; the cross is an engrave. bullseye.svg.zip (1001 Bytes)
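For anyone who wants to reproduce the target without the attachment, here's a sketch of how such a bullseye sheet could be generated as SVG. The sizes, colors, and marker positions are my assumptions, not the contents of the original bullseye.svg.

```python
# Sketch of a bullseye test target: a circle with a crosshair through
# its center, emitted as SVG with inch units. All dimensions here are
# illustrative assumptions, not values from the original file.

def bullseye_svg(cx: float, cy: float, r: float = 0.5) -> str:
    """Return an SVG fragment for one bullseye marker centered at (cx, cy)."""
    return (
        f'<circle cx="{cx}in" cy="{cy}in" r="{r}in" '
        f'fill="none" stroke="red" stroke-width="0.01in"/>\n'
        f'<line x1="{cx - r}in" y1="{cy}in" x2="{cx + r}in" y2="{cy}in" '
        f'stroke="blue" stroke-width="0.01in"/>\n'
        f'<line x1="{cx}in" y1="{cy - r}in" x2="{cx}in" y2="{cy + r}in" '
        f'stroke="blue" stroke-width="0.01in"/>'
    )

def bullseye_sheet(width_in: float = 20, height_in: float = 12) -> str:
    """A sheet with one bullseye in the center and one near each corner."""
    centers = [(width_in / 2, height_in / 2),
               (2, 2), (width_in - 2, 2),
               (2, height_in - 2), (width_in - 2, height_in - 2)]
    body = "\n".join(bullseye_svg(x, y) for x, y in centers)
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{width_in}in" height="{height_in}in">\n{body}\n</svg>')
```

With red as a Score color and blue as an Engrave color in the GFUI, this mirrors the lines-as-scores, cross-as-engrave split described above.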
I sort of did. I took two sheets of 8.5x11" paper, printed a grid on them, taped them down to a sheet of 1/8" Baltic Birch (Woodcrafts), and placed it against the left edge of the tray.
In my case shift-arrow (or ctrl-arrow or ctrl-shift) had no effect. That’s an issue by itself. But I figured I’d see how it scored the lines it was able to “see”. Here’s what it looked like ready to specify the operations.
Notice there's also an engrave operation the GFUI found. I told it to Ignore that and turned the other set of lines to Score. Even though it wouldn't do a complete grid, I figured it would give me an idea of how the lines would align.
Final result. Parts weren't done because the scan was bad. Others have decent registration on the vertical lines but not the horizontal, and there are also areas where the horizontal alignment was good but not the vertical. It wasn't caused by the air assist moving the paper, as the path was not a straight up-and-down grid pattern. The head followed a random path, doing a couple/three inches and then skipping to another part of the design, so there weren't a lot of cuts/scores causing loose, flying paper.
Bottom line: the current scan and registration are not up to the promises. Looks like work for @dan, @Tony, and crew awaits.
It could be resolved in software not yet released to PRUs, or to my machine, but it's not what I expected. I figured it'd be pretty spot-on, based on my other projects from SVGs.
I don't think they ever promised lid-camera-only alignment. When they get the head camera integrated into the process, it will be so much better. But this is a great benchmark of where they are now! Thank you so much!
Glowforge’s onboard cameras can cut & engrave directly over top of your drawing.
It doesn’t say only in the middle of the bed.
And when you manually align your design to the material with the lid cam, you can't get right to the edge if it isn't accurate.
The results are interesting (thanks for the experiments), but they are both PRUs. We are assuming they run the same software as the production units, but the factory calibration or construction might have been improved. It would be interesting to see if a production unit is better; feedback from the show seems to be that they are.
In case anyone wants to run one but doesn’t want to make/find a grid, I made a grid SVG with varying line widths that I figured might be helpful with further tests. The lines of the grid are spaced at 1" intervals (center to center). The line widths start at 0.010" and increase by 0.010".
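As a sketch of how such a grid could be generated programmatically (the bed dimensions and color are my assumptions, not the original file's values), following the spacing and line-width rule just described:

```python
# Sketch of a calibration grid: lines on 1" centers, with the stroke
# width of the i-th line growing by 0.010" per line, starting at 0.010".
# Bed size (19x11") and stroke color are illustrative assumptions.

def grid_svg(width_in: int = 19, height_in: int = 11) -> str:
    parts = [f'<svg xmlns="http://www.w3.org/2000/svg" '
             f'width="{width_in}in" height="{height_in}in">']
    # Vertical lines, one per inch, widths 0.010", 0.020", 0.030", ...
    for i in range(width_in + 1):
        w = 0.010 * (i + 1)
        parts.append(f'<line x1="{i}in" y1="0" x2="{i}in" y2="{height_in}in" '
                     f'stroke="black" stroke-width="{w:.3f}in"/>')
    # Horizontal lines, same width progression.
    for j in range(height_in + 1):
        w = 0.010 * (j + 1)
        parts.append(f'<line x1="0" y1="{j}in" x2="{width_in}in" y2="{j}in" '
                     f'stroke="black" stroke-width="{w:.3f}in"/>')
    parts.append('</svg>')
    return "\n".join(parts)
```

The varying widths make it easy to tell, from the engrave alone, which grid line you are looking at when checking registration.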
Oh boy, that’s a bit of a mess. Thanks for testing @jamesdhatch
Would we get better results on thicker board, to avoid warping? Could you just bond the printed sheet to cardboard?
Another (different) test, for anybody who's willing and does not see it as an unreasonable demand, is to engrave a simple 10x10 mm grid on material covering the full bed, with no camera involved. I am sure it will engrave dimensionally accurately, but I think there's a bit of confusion as to the accuracy of the GF generally.
The rulers are coming out pretty accurately when photographed without parallax, so I think it is probably more accurate than you can measure by eye with a ruler.
I would cut out an L shape and measure the inside leg length with calipers. That gives a kerf-independent dimension to measure.
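A quick numeric check of why that works: both edges of the inside leg are cut with the material on the same side, so each edge shifts by half the kerf in the same direction and the distance between them is unchanged, while an outside width loses a full kerf. A minimal sketch, with an illustrative kerf value:

```python
# Model an L whose vertical bar spans x = 0..10 mm and whose horizontal
# bar ends at x = 50 mm. The kerf value is an assumption for illustration.

def cut_edge(nominal: float, material_side: int, kerf: float) -> float:
    """Actual edge position after cutting: the beam removes kerf/2 on each
    side of the path, so the edge retreats into the material
    (material_side = +1 if material lies above nominal, -1 if below)."""
    return nominal + material_side * (kerf / 2)

kerf = 0.2  # mm, illustrative

# Outside width of the vertical bar: material lies between x=0 and x=10,
# so both edges retreat toward each other and the width shrinks by a full kerf.
outer = cut_edge(10, -1, kerf) - cut_edge(0, +1, kerf)   # ~9.8 mm

# Inside leg: from the inner vertical edge (x=10, material at x<10) to the
# end of the horizontal bar (x=50, material at x<50). Both edges shift by
# -kerf/2, so the measured distance stays at the nominal 40 mm.
inner_leg = cut_edge(50, -1, kerf) - cut_edge(10, -1, kerf)  # ~40.0 mm
```

So the inside leg isolates positioning accuracy from beam width, which is exactly what you want when checking the machine's dimensional accuracy.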
I don't think so. I was watching the laser operations and it was wrong from the start: the first vertical and horizontal lines were not lined up with the grid lines on the paper. Those were on the upper-left side of the paper, and it showed similar behavior when it got to the lower right. It was a fairly random walk across the paper, so it was a fair amount into the project before paper started to blow around or get cut free and fly off.
I was running this same test the other day and ran into some of the same problems, specifically the scanned artwork falling outside the print area. Another issue: I had been cutting thicker material previously. I scanned the artwork, then set the material thickness, which caused the scanned artwork and the picture of the bed to become misaligned. I had to rescan.
I didn’t actually do the engrave, because I had other projects to do and it was longer than I thought it would be, but I’m planning on it later this week.
There is a weird anomaly that happened in a few places: the engrave lines bounced around (see image below). This is significantly wider than what was depicted in the scan. I'm curious if it is caused by the 10 LPI setting, which is why I'm going to do this test again.
Note they use the plural of the word “camera.” So while alignment isn’t up to snuff with the lid camera alone, I have high hopes for what they can do with the head camera and some clever programming. I’m pretty confident (from a geometric analysis I did a while back) they will never be able to align satisfactorily with the lid camera alone.
I can imagine using the head camera to do some spot sampling of the artwork, and comparing the results with the lid camera image. Then based on that, warping the lid camera image locally to match the head camera reality. Like I’ve always said, a monumental programming task!
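As a toy sketch of that idea (the control points and the inverse-distance weighting are stand-ins I chose for illustration; a real implementation would use a proper image warp such as thin-plate splines):

```python
# Toy sketch of the local-warp idea: given a handful of control points
# where the head camera has measured the "true" position, correct any
# lid-camera coordinate by blending the control-point displacements with
# inverse-distance weighting. All coordinates are hypothetical.

import math

# (lid_x, lid_y) -> (head_x, head_y): hypothetical spot samples, inches.
CONTROL = [((2.0, 2.0), (2.05, 1.98)),
           ((10.0, 2.0), (9.97, 2.03)),
           ((2.0, 10.0), (2.02, 10.04)),
           ((10.0, 10.0), (10.06, 9.95))]

def correct(x: float, y: float, power: float = 2.0) -> tuple:
    """Displace (x, y) by the IDW-blended control-point displacements."""
    num_dx = num_dy = denom = 0.0
    for (lx, ly), (hx, hy) in CONTROL:
        d = math.hypot(x - lx, y - ly)
        if d < 1e-9:                      # exactly on a control point
            return (hx, hy)
        w = 1.0 / d ** power
        num_dx += w * (hx - lx)
        num_dy += w * (hy - ly)
        denom += w
    return (x + num_dx / denom, y + num_dy / denom)
```

Near a control point the correction converges to the head camera's measurement; far from all of them it blends toward the average displacement, which is roughly the behavior you'd want from a local warp.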
Look at screenshots of the Muse's bed photo stitching and you'll see why it's so hard. It depends greatly on the depth of whatever it is you're trying to capture; with the Muse it looks like they assume a certain depth. Something sloped, like a laptop, would require what I imagine to be some pretty serious processing: take the depth map from the laser rangefinder, marry that up with the photos from the lid and head cameras, remove any lens aberration (fisheye, etc.), and then stitch things up accurately.
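The depth sensitivity is easy to quantify for a single overhead pinhole camera: a point of height h, sitting a horizontal distance r from the spot directly under the lens, back-projects onto the bed plane at r * H / (H - h), where H is the lens height. A sketch with made-up numbers (the camera height and distances are assumptions, not Glowforge's actual geometry):

```python
# Back-of-the-envelope parallax model for a single overhead camera.
# A raised point appears farther from the camera's nadir than it really
# is, and the error grows with both height and distance from center.

def apparent_bed_position(r: float, h: float, camera_height: float) -> float:
    """Where a point at height h and horizontal distance r projects onto
    the bed plane, for an overhead pinhole camera at camera_height."""
    return r * camera_height / (camera_height - h)

H = 10.0   # inches, lens above the bed (illustrative)
r = 8.0    # inches from the camera's nadir (illustrative)
for h in (0.0, 0.125, 0.5):            # bed, 1/8" stock, 1/2" stock
    shift = apparent_bed_position(r, h, H) - r
    print(f'h = {h:5.3f}"  apparent shift = {shift:.3f}"')
```

Even modest material heights produce shifts of a tenth of an inch or more near the bed's edges under these assumptions, which is consistent with alignment being worst away from the center.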
Thanks @palmercr, looking forward to your results.
To me there is a clear distinction between the accuracy of the laser in 'normal' operation (i.e., from a vector or raster file) and, conversely, GF's attempt at optical alignment via the camera.
I expect the former to be flawless, since the hardware seems to be quality, but any work involving the camera, whether scanning images or aligning materials for cutting or engraving, still seems flawed at this point.
All of this comes back to the requirement for an absolute zero/homing point, which has been requested by many. Currently we seem resigned to makeshift solutions of rigs & jigs & trial & error. If we're not able to reliably cut or engrave from a known, physically calibrated 0,0 point on the bed, it will hugely limit the unit's functionality for us, and it will be a major design flaw.
Yes, without limit switches the cameras need to be very accurate. But they also need to be accurate for trace mode, double-sided cuts, and the pass-through, not to mention dynamic autofocus. So they are fundamental to the machine.
You are not, so you may want to consider a refund to purchase a system that better meets your needs. I’d hate to lose you as a customer, but I want you to get the system that’s the best fit for your requirements. I’d hate more for it to arrive and you to be surprised or disappointed.