XY home position

The accuracy of those alignments, at the time of Maker Faire NYC, was quite different from what was shown on the screen. They gave themselves a couple of mm of play to make sure the pieces they were cutting wouldn’t fall off the edge of the material. Barrel distortion can be a pain to deal with.

I’m sure they will be dialing this in even more in the future, but it could potentially be different for every laser if there is any variance in the mounting of the lid cams.


I think this is what they will be using for flip-over cutting: edge-finding and corner detection. Same process for alignment. So it might not be ready yet, as that part isn’t done, but it will be! Maybe they can parlay that into a solution for this situation.


@dan, you’ve lost me here. Why would the ‘bottom’ be better than the top? Aren’t there covers on both sides?
What would we fix by using acrylic and adhesive?

‘Register large materials’?
We’re trying to find a way of placing a jig in the same place every time.

I know you must be swamped right now but when you do get a moment, a few of us here would love to better understand how to do this with our GF.

We know that the lid camera is doing the looking, but I’d be surprised if it did the alignment by looking at the logo when the head is over in the corner. (It might, but I’d be surprised.) You’d get the same homing effect with much less optical trouble by running the head directly under the camera, or in the center of its field of view or through the region where the camera has the best focus. That spot will have XY coordinates that are determined by the mechanical stability of the whole thing, so you can just offset.


I’ve been meaning to ask - just how accurate is that camera?

I vaguely recall it being 5 megapixels. If that’s correct, each pixel represents about 0.007" of distance on the bed, assuming that the camera exactly covers the 20x12 cutting area and there is no noise in the image at all, so that may be a best-case answer. I’m wondering how much worse the actual accuracy is, especially if there is any image distortion.
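For what it’s worth, that back-of-the-envelope figure checks out. A quick sketch, assuming a 4:3 5 MP sensor of 2592x1944 pixels exactly covering the bed (both assumptions on my part, not published specs):

```python
# Assumed sensor geometry: 5 MP at 4:3 is roughly 2592 x 1944 pixels.
# These dimensions are a guess, not a Glowforge spec.
px_w, px_h = 2592, 1944
bed_w_in, bed_h_in = 20.0, 12.0

in_per_px_x = bed_w_in / px_w  # best-case horizontal resolution per pixel
in_per_px_y = bed_h_in / px_h  # best-case vertical resolution per pixel
print(round(in_per_px_x, 4), round(in_per_px_y, 4))  # -> 0.0077 0.0062
```

So roughly 0.007" per pixel in the long direction, a bit finer in the short direction, before lens distortion and noise eat into it.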

Didn’t see this from the Pre-release letter mentioned in this topic yet:

On one hand we have the “preview exactly where it’s going,” and on the other, improvements still to come for accuracy with thicker material.


Just to be pedantic: with a high-contrast target like a logo, you could plausibly do sub-pixel processing by watching how the light/dark value changes as you step (slowly) across the camera’s field of view near your alignment position. Worst case would be several pixels of slop on a moving image (which for a 5 MP camera these days seems to mean 1080p video).
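To illustrate the sub-pixel idea with a toy example (the numbers here are made up, purely for illustration): the intensity-weighted centroid of a bright feature lands between pixel centers, so you can localize it finer than one pixel.

```python
# Toy sub-pixel localization: intensity-weighted centroid of a 1-D
# strip of pixel values. The sample data is invented for illustration.
def subpixel_centroid(row):
    total = sum(row)
    return sum(i * v for i, v in enumerate(row)) / total

samples = [0, 1, 8, 10, 9, 2, 0]   # a bright feature near pixel 3
print(subpixel_centroid(samples))  # -> 3.1 (between pixels 3 and 4)
```

With a clean high-contrast edge and enough samples, that fractional answer is exactly the sort of thing that beats the raw one-pixel resolution.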


There’s one feature of Full Spectrum’s mysterious Muse that I thought was well thought out. Instead of capturing one image of the whole bed with a very expensive wide angle lens, yielding severe distortion and the need for significant image processing to mitigate it, the Muse apparently takes many images from distributed vantage points, and stitches them together. Makes a whole lot more sense to me.


True… but you could sure pitch the idea if you wanted to.

“stressing” was probably the wrong word… “spending mental energy upon” might have been more appropriate. And of course a CNC needs to be accurate and repeatable; I’m not at all saying that you are wrong, merely suggesting that sometimes a new tool means learning new ways of doing things, instead of adapting the old way to the new tool. This may be one of those times… or not.

I prefer real vinyl, but I do run Ms. Pinky’s Interdimensionally Wrecked System and Torq. There are areas of the digital vinyl control system which must, and do, act exactly the same as traditional vinyl. But there are options and functions and settings to play with that enable you to do stuff that isn’t possible at all.
I can tell the software to ignore absolute positioning and do a needle drop anywhere on the record, and it will start playing at my designated cue point. Or I can tell it to be relative, and dropping the needle in the middle of the record will play the middle of the song. Marking, crayons, little round stickers, tic-marks… no longer needed. Let the software do the work.
That is what I am hoping for with the GF. Between the two cameras and fancy machine vision, you should be able to rely on the GF to accurately find the 0,0 point on rectangular media, no matter where it is in the bed.
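A toy sketch of what “find the 0,0 point on rectangular media” could reduce to once a vision step hands back corner points (function name and coordinates are invented here; I’m assuming the usual image convention of +x right, +y down):

```python
# Toy sketch: given four corner points of rectangular stock, as some
# machine-vision step might report them, pick the top-left corner as
# the origin. All names and numbers here are invented for illustration.
def stock_origin(corners):
    """Return the corner closest to the image's top-left (smallest x + y)."""
    return min(corners, key=lambda p: p[0] + p[1])

corners = [(412, 310), (1890, 305), (1893, 1120), (415, 1124)]
print(stock_origin(corners))  # -> (412, 310)
```

The hard part, of course, is getting reliable corner points out of a distorted wide-angle image in the first place; once you have them, defining that corner as 0,0 is the easy bit.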

My plotter is, technically, a CNC-type machine. It uses an Automatic Registration-Mark System to account for misaligned media, so that I can accurately cut shapes that were printed on a different machine, as long as I set the file up with reg marks. Since the Glowforge can see the actual design, it should be able to use that instead of printed reg marks for re-positioning.

It will all be a matter of getting the positional accuracy dialed all the way in… and seeing how well the cameras hold up over time. If they can get two-sided engraving to work, they will have gotten the machine vision dialed in enough to accurately find the top corner of rectangular material and define that as 0,0. Right? Don’t we have machine vision that can guide a missile with remarkable accuracy at very high speeds? Would you have thought, ten years ago, that a mobile telephone could handle the incredibly complex process of multiple image recognition while simultaneously processing digital video and playing it back in real time? (Snapchat filters.)

Granted: I am assuming the positional accuracy has increased since Maker Faire, and will continue to do so. I am assuming that, eventually, “snap to object edge”, “align vertical/horizontal to top/center/bottom”, “set origin at (X,Y) relative to object”, and other similar options will come along in the interface.

…and I let this sit while a customer came in, and the thread has progressed right on through what I was trying to get at:

[quote=“takitus, post:144, topic:3386”]
I think this is what they will be using for flip-over cutting; Edge-finding and corner detection. Same process for alignment. So it might not be yet, as that part isn’t done, but it will be! Maybe they can parlay that into a solution for this situation.
[/quote]


You’ve nailed the problem…

if you place the cardboard in the same relative location each time.

For it to work it’d have to be precisely the same location each time. Without a 0,0… a place I know to exist relative to everything else… how could I ever do that?


Up against the back walls. Big piece of cardboard.

Honestly, I’m not sure if it will work or not, I haven’t seen the machine yet. :slight_smile:

But there should be a way. It might not be the most convenient method for certain specific needs, but we’ll figure something out. :wink:


Yeah… I think I’m going to bail on this topic, at least until a staff member provides some uber-clear answer. Otherwise we’re all just going around and around.

(NOTE: That is not specifically aimed at you, @Jules! :wink: )


No prob…I don’t have any answers either! Chuckle! :wink:


This is exactly correct. It’s very repeatable between homing operations.

To see how repeatable, here’s a head image from a 'forge after homing:

Here’s another image after power cycling the machine, moving the head, and re-homing:

As you can see, we can still make things a little better… the second one’s about a thousandth of an inch off.

The ‘bottom’ (side towards you) is close to the printing area. The side away from you is farther away. So it’s easier to reference from the side towards you.

Someone in the thread wanted a permanent stop.

You could rig something up that registers against the front wall and side rail.

I love to explain this stuff but I’m obviously falling far short here, as I’ve tried several times and still failed.


Also correct. :slight_smile:

It does. :slight_smile:


Once the head is “zeroed” using the Glowforge symbol, does the software also have a corresponding “zero” point on the computer interface? You can zero the physical head in the Glowforge as accurately as you like, but if you don’t have a zero point on the screen to move your drawing to, it won’t do you any good.


It seems like there may be an unasked question here: if I make a design to engrave a square 200 mm to the right and 100 mm down from the upper-left extreme of the Glowforge work area, then laser away at different projects for 18 months, can I come back, plop in another full-bed piece of material, and have the square hit the exact same spot as it did a year and a half prior? What is the expected drift/margin of error if not?

Not using optical alignment to move everything around here; just absolute positioning: I want a square exactly here in my design file.

And therein lies the rub. I doubt it. Not if it’s been set up this way.

We should be able to come up with some temporary relative indexing, but I would need to observe how the machine moves, and see the software interface (and frankly think about it a lot), before I could say for sure.

I know very little about G-code: just what it is, and what it does to move the machine head. But I do know that it is possible in certain programs to initiate something like a startup script to move the machine to a certain fixed point; in other words, an origin. If this plotter movement was coded with G-code, there may be a way to add something like a startup script for an origin later. But the interface that lets individual users move the head to a specific location would have to be written, or anyone who wants to do that would need to learn G-code. And that step will take time. (And/or tutorials out the wazoo.)
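The kind of startup script described above is generic CNC idiom, not anything the Glowforge actually exposes; as a sketch only, with an invented function name, it might amount to three commands:

```python
# Sketch only: the Glowforge doesn't expose G-code, so this is generic
# CNC idiom, not a real Glowforge interface. startup_macro() is an
# invented name; it emits the usual home / move / re-zero sequence.
def startup_macro(x0=0.0, y0=0.0):
    return [
        "G28",                    # home all axes against the endstops
        f"G1 X{x0} Y{y0} F3000",  # move to the chosen origin point
        "G92 X0 Y0",              # declare the current position to be 0,0
    ]

print("\n".join(startup_macro()))
```

That last `G92` is the “in other words, an origin” step: from then on, every coordinate in the job is measured from that fixed point.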

Most folks don’t realize how much work goes into making something One-Button easy. There’s an awful lot of action behind the scenes that has to happen to make it work. (Which is why GF could not immediately jump on this request for those who want it.)

It can be worked on later, by Glowforge, without impacting people’s use of the machines now, because of how they have this set up. In the interim, we just need to come up with something that works temporarily.

(Oh…and let me mention here that I do not have any inside information on any of this…I’m guessing, based on a very limited knowledge of how plotter movements happen, and a total lack of regard for making a complete fool out of myself.) :smile:


Yep. I figure I’ll put a piece of acrylic down (lower left of the bed) that extends beyond the bed a bit, then cut out the piece that’s at the absolute edge of the cutting area. That should leave me with an acrylic “carpenter’s square.” Another narrow piece, cut and glued vertically to the edges of the new acrylic square so it drops down the sides of the bed, would let me drop it into place exactly each time. (I’d actually glue the two vertical pieces first, before I cut out the L for the square, so I wouldn’t get any errors introduced by my gluing placement.) The square with its lip will let me get the material in the same place every time, and if I don’t need it I can take it out so it doesn’t get in the way of the pass-through slot or anything. The only question is whether that corner will be at the same relative address from the homing position each time, and @Dan indicated it should be, based on his response (except that it can currently be a thousandth of an inch off).

Seems to take care of my repeatability needs. It’s similar to what I did on one of my current lasers to get the same thing. (There, the 0,0 homing was always the same upper-right corner, but because it’s got a passthru there wasn’t a backstop against the back of the bed, so it was a matter of feel to check whether the edge of the material lined up with the edge of the bed. The L with the lip provides a repeatable physical stop that’s not affected by pressure deforming my finger pad and making it feel like things were aligned when they were off by a hair.)


I think if their software library were opened up, it would be as easy as moveHead(29.5, 35.2) to get it where you need it to go. In G-code it’s a single command as well: G1 X29.5 Y35.2. A console for the laser seems like something we’re not going to have, since everything is cloud-based, but if we did, G-code is a pretty simple command system.
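To make that concrete: moveHead() is an invented name from the post, not a real Glowforge API, but if such a call existed and spoke G-code underneath, it might be nothing more than formatting that one G1 linear-move command:

```python
# Hypothetical: moveHead() is an invented name, not a real Glowforge
# API. If it existed and spoke G-code underneath, it could be as thin
# as formatting a single G1 linear move with a feed rate.
def move_head(x, y, feed=6000):
    return f"G1 X{x} Y{y} F{feed}"

print(move_head(29.5, 35.2))  # -> G1 X29.5 Y35.2 F6000
```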

Moving a design around in the UI would ultimately be very similar. Each design will be placed in the virtual bed with a virtual location. Changing its coordinates will move it. They probably have a similar function for that.

The front end of their UI is in React, and the great thing about JavaScript is that it isn’t compiled, so it’s all end-user accessible. If someone is so inclined, they can write a Greasemonkey script to add functionality to the UI.


I have a feeling somebody way smarter than I am will release an open-source application for using a Glowforge, even if solely for using the machine 100% offline. I don’t even think it’ll take that long before that happens. And that person may be open to putting “features” in place that don’t, and may never, exist in the official Glowforge software.