Feature request: Absolute positioning - CNC Mill style

It can.

1 Like

I went to cut my full bed jig. My idea was to put a larger piece of plywood up against the back and side and cut out a 20x12" hole. That would give me a fixed corner to use as my reference corner. It would be up off the honeycomb a bit because it would be resting on the frame of the tray but that shouldn’t be a problem (need to order cuts so the pieces fall after they’ve been engraved).

Short of it - doesn’t work.

The back does make a good fixed surface to butt the plywood against. The problem is the rails for the Y axis travel are the only reliably fixed things on the sides. Unfortunately, the X axis rail rides on the Y rails with some pretty robust brackets. Those will bump into any frame or jig that extends past the tray edges.

If the back & side aren’t both used to lock the jig into place it won’t provide a repeatable reference corner. Anything keyed off the tray is potentially skewed due to the slight variation that can occur with the tray only resting in the dimples on the floor.

4 Likes

I want you all to know I have realy enjoyed the thoughtful and articulate discusion on this.

I agree that it is absolutely necessary that it be possible to do fast and efficient piece work. Especially for owners of the Pro. I’m curious what process will win out.

@dan. As a special request for business users, I’m going to simply request to see progress, or a direction the team is going to use to solve this problem.

I must say that it is quite impressive and pleasing that issues like these are now dominating the front line; they may seem small, but they are important nonetheless.

4 Likes

I’m a bit rusty at this, but my intuition is screaming that a Fourier transform could somehow be used to solve this positioning problem. I can’t quite put my finger on it. Any engineers or mathematicians want to weigh in? I remember something about reducing the spherical intensity profile around a beam spot to a single point. I think it is called spatial filtering.

All this talk of jigs registered to the case of the machine seems to ignore the fact that the origin of the motion system is registered to the lid camera, so any play in the lid hinges would lose registration with the jig. The only way to make a jig accurate is for the cameras to locate it, in which case there is no benefit from it being registered to the case.

This is unless the lid has special zero play, zero wear hinges, but why would it when Glowforge intend to register with cameras?

2 Likes

It’s definitely been discussed in the other thread starting about here: XY home position - #35 by takitus

It was the whole reason I floated the idea of a fiducial ruler, should the 0,0 point from the camera registration change too much and they be unable to compensate by using the head camera for homing. I also floated the idea of a fixed fiducial marker the head camera could use to set 0,0, which would always be the same: maybe attached to the shifting bed, or maybe attached to the frame somewhere, in which case it would be our job to stabilize the bed.

Either way it’s definitely been brought up, just not in this thread til now =).

2 Likes

I agree. It means we’ve gotten over the doubts about delivery (“will I get mine?”) to “when I get mine I’m going to need to do this.”

It’s a seismic shift in forum attitude - doubt appears to be mostly extinguished and we’re moving on to practicalities of use :slightly_smiling_face:

6 Likes

I think you guys have hit on the philosophical difference at the heart of the glowforge in this area, and I’m starting to feel like this is going to be the real long term make or break. Like you’ve said a couple of times in this thread @palmercr, the hardware of the glowforge just doesn’t support a hard 0,0 because everything is relative to the camera in this setup. If the camera skews just a little (microns) then 0,0 moves as well relative to the physical bed of the glowforge.

I know I’m really just restating what you guys have already said, but just two things are missing from this system working as flawlessly as a hard (mechanical) 0,0: the computer vision has to be able to detect the edges/corners of the piece at sub-kerf accuracy, and then it has to be able to place the design onto what it sees with that same accuracy. The second part is easy if the first part is working. Looking back at what Glowforge has said in the past, it seems obvious (now) that this is the system they are shooting for.

In reality, it would make things very easy, much easier than a mechanical 0,0. You could, for example, make yourself a jig to hold your 20 dog tags (the jig is just there to save time and hold the dog tags, so you don’t have to place each engrave on each dog tag individually). The camera finds and sets 0,0 at the top left edge of that piece of material and sets the orientation based on the overall skew of the material on the bed, or scales to fit, or whatever. If it is precise enough it won’t really matter how you lay your material in there; it will just work.

The real problem right now is that the computer vision isn’t that accurate yet. Here’s hoping they get there soon! :smile:
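
The “find the corner, measure the skew, map the design onto it” step described above is just a 2D rigid transform. Here is a minimal sketch of that math in Python; the function name, coordinates, and angle are all hypothetical, not anything Glowforge has published:

```python
import math

def material_to_bed(points, origin, angle_deg):
    """Map design coordinates (relative to the material's top-left corner)
    into bed coordinates, given the corner position the camera found and
    the skew angle it measured. All numbers here are hypothetical."""
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    ox, oy = origin
    # Rotate each design point by the skew, then shift it to the corner.
    return [(ox + x * cos_a - y * sin_a,
             oy + x * sin_a + y * cos_a) for x, y in points]

# Suppose the camera reports the sheet's corner at (31.5, 12.2) mm,
# skewed 2 degrees; two engrave anchor points on the sheet:
design = [(10.0, 5.0), (50.0, 5.0)]
print(material_to_bed(design, (31.5, 12.2), 2.0))
```

With zero skew this reduces to a plain offset from the detected corner, which is why sub-kerf corner detection is the part that has to work first.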

6 Likes

Yes, well summarised. I don’t know if all the people asking for 0,0 have given up on the cameras being accurate enough or just don’t get the GF philosophy.

With the current state of the art computer vision and accurate cameras you should even be able to scatter 20 dog tags at random and have GF place the same design on each one. I.e. if it can locate one and can orient it accurately it can recognise and locate many. It just needs a bit more coding. The only time you should need a jig is when things need holding in place, like pens that would roll.

The main issue, and this is fundamental to GF, is whether the cameras will ever be sub-kerf accurate. This is where faith is required, as there has never been an example of them being anywhere near accurate enough, and lots of examples of them not. The pass-through example was very encouraging, but it was an out-of-focus, low-res photo amongst a group of much better photos, and when somebody asked for a close-up the cleaners had binned it! So I suspect sub-kerf accuracy is not there yet, even in development. But then I will be accused of being negative about the announcement that most people found positive.

8 Likes

Not exactly what the GF philosophy is, but they can still have the camera-based approach as they envisioned it while providing numeric inputs for us to use. If they don’t want it to clutter up the UI, they can just have an ‘advanced mode’ checkbox in the settings for those of us who want that.

I’ve looked at the code, and it would be pretty easy to implement something like this. Hopefully it’s the case that they’re just working on promised functionality and this will come as soon as they have some time to slip it in.

I like and appreciate your objectivity. I much prefer it to rabid fanboyism and made-up facts.

5 Likes

Amen.

2 Likes

@takitus, how do you envision that working? I’m seriously curious, not arguing; engineering isn’t my background :stuck_out_tongue:. Without physical limiters at the top left corner, is there a place the head can go repeatedly and be sure it is in the exact same place? Right now, if I understand it right, it is choosing that place based on where it perceives its home position to be in relation to the camera.

If I’m missing this (and believe me that wouldn’t surprise me at all), then the problem with giving a mechanical 0,0 is that they didn’t engineer a physical spot to be 0,0, but tied it all to the camera instead. Now if I’m wrong, that’s great, because like you said they can just put two software options in the UI and let you choose, and everyone wins! :smile: But if there is not a way to run that head into the top left corner and hit a hard stop on both axes (is that the plural of axis?), then the mechanical method is moot and the best it will ever be able to do is emulate it, still based on the camera vision.

Am I making any sense? If not I’ll shut up :blush: After all, tomorrow is Sunday and I still have a lot to do. It’s been one of those kinds of weeks!

4 Likes

I’m sorry, I do machine vision, and that’s simply not possible (well, with deep learning for a given object, sure, but not in the general case).

Let’s take 20 dog tags as a random thing to place on the bed. OK, so you have some SVG file somewhere with a bunch of art. How would it know the orientation (e.g. is the hole on the top? the side?)? Sure, you could teach it what a given dog tag looks like (I do something like this for different tissue types), but if I used to use dog-bone-shaped tags and now use hydrant-shaped tags (or heck, both), how would it know what orientation each is supposed to be in? We only know because we as humans know what that object is supposed to look like in real life.

Now let’s imagine that @Dan gets his team to take every dog tag made on earth and load them into his deep learning system, so he can magically orient any given dog tag. To keep going with the problem at hand and why you need a jig: now I have this SVG with a bunch of content for each dog tag. How would it know what to put on each one? If they don’t precisely match the SVG, how would I denote which content can be rotated to fit on which tag? Does this dog’s name go onto the star or the hydrant? That’s why you need a jig: the content in the file is laid out according to the jig, so you know which content digitally goes with which object on the bed.
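
The pairing problem described above is exactly what a jig file resolves: each slot has fixed coordinates, so slot *i* always gets content *i* and nothing needs to be recognized. A toy sketch, with entirely hypothetical slot names and coordinates:

```python
# A jig makes the content-to-object pairing explicit: slot coordinates
# live in the jig file, so the machine never has to guess which art
# belongs on which tag. All names and numbers here are made up.
jig_slots = [
    {"slot": 0, "x_mm": 20.0, "y_mm": 15.0, "shape": "bone"},
    {"slot": 1, "x_mm": 70.0, "y_mm": 15.0, "shape": "hydrant"},
]
engraves = ["Rex", "Fido"]

def lay_out(slots, texts):
    """Pair each engrave with the slot at the same index; the jig's
    fixed geometry replaces any need for vision to infer orientation."""
    if len(slots) != len(texts):
        raise ValueError("one text per slot")
    return [{"text": t, **s} for s, t in zip(slots, texts)]

for job in lay_out(jig_slots, engraves):
    print(f'{job["text"]} -> slot {job["slot"]} at ({job["x_mm"]}, {job["y_mm"]}) mm')
```

The vision system then only has to locate the jig as a whole, not every object on it.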

5 Likes

It is all relative to the lid camera right now. However, within that environment we can do things to minimize the margin of error in engravings: secure the bed so it’s ALWAYS in the same place; make sure the lid sits the same way every time. The margin of error coming from the lid cam during homing is incredibly small right now. We have access to a pretty low margin of error overall; we just have to have the GFUI tools to utilize it.

Eyeballing anything will never be precise. That’s why we invented tools to help us gain precision. The camera positioning has its place, and is great for making use of scrap pieces, etc. I love that part. But leaving out the ability to specify exactly where we want something numerically (and at the same time removing our ability to recreate exactly what we previously did) removes almost the whole benefit of having computers and hyper-accurate CNC machines. For anyone who needs precision this is a must-have.

There are also a LOT of options moving forward to make sure that jigs created in the future are 100% accurate, like the ability to put fiducials on them and have everything align to those. Not only would that fix the 0,0 issue and quiet our begging, it would open the doors to some really cool and time-saving stuff (saving settings per jig, having them automatically entered, etc.). The homing process could also be improved by letting us home with the head cam to something fixed to the frame that we know will ALWAYS be in the same spot: a little fiducial in the back corner somewhere, or embedded in the bed. There’s nothing to say we always have to use the lid cam for homing. It would be super cool to see this implemented in the future.
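
Homing off a frame-fixed fiducial like that amounts to measuring one offset and applying it to every commanded position. A minimal sketch, with hypothetical coordinates (nothing here reflects actual Glowforge firmware):

```python
def homing_offset(expected, observed):
    """Offset between where a frame-fixed fiducial should be and where
    the head camera actually found it. Coordinates are hypothetical."""
    return (observed[0] - expected[0], observed[1] - expected[1])

def correct(point, offset):
    # Shift a commanded position to cancel the measured homing error.
    return (point[0] - offset[0], point[1] - offset[1])

# Fiducial nominally at (5.0, 5.0) mm; head cam finds it at (5.3, 4.9),
# so every subsequent move gets shifted by that error.
off = homing_offset((5.0, 5.0), (5.3, 4.9))
print(correct((100.0, 50.0), off))
```

Because the fiducial never moves relative to the frame, the same correction holds across sessions, which is what makes repeatable jigs possible.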

The fact that they put a camera on the head can fix all of our pain points around accuracy and consistency. It’s where the laser comes out, so we know homing with that camera will always be the same distance from the laser lens. I can see this coming down the road. If they really want this machine to destroy the market, adding that functionality would be killer.


TL;DR - We have enough tools right now to make numeric positioning work incredibly well. There is a lot of room in the future to make this process 100% accurate by always using the head cam. Numeric entry is a must-have for anyone needing precision. Their goal was to make this laser easier to use than others, but without numeric entry it is actually HARDER to use for a large portion of their users. I hope they realize that soon and take care of it.

10 Likes

Thanks @takitus, I see where you’re coming from now :smile: We’re actually not that far apart, I just don’t always speak the right language.

2 Likes

I wasn’t suggesting 20 random shaped dog tags. What I meant was you should be able to scatter 20 identical blanks and have 20 bits of artwork that fit them.

3 Likes

If camera alignment worked then it would be easier. Put in a blank, press the magic button. No need to make a jig. If it doesn’t work like that then yes the traditional CNC methods are needed.

And those would be much easier with a couple of limit switches. Without them, you are suggesting fiducials, but those need the same accurate cameras.

Also I agree that when you manually position and scale something you should be able to get back to that state by entering numbers. CNC equipment should not rely on manual dexterity with a mouse.

I use an L shaped origin jig at 0,0 on my CNC router and micro switch limit switches to make thousands of parts all the same. I also have edge finders if I need to index off the work piece. I added limit switches to my CNC lathe because it came with none. It was expected that one would touch off the blank, but that is no good when you want to make a thousand identical parts across multiple sessions.

5 Likes

Amen!

6 Likes

Well, the good thing is that they have 2 cameras. I think the head camera is going to end up being very important. It’s the one thing that makes me feel a lot more comfortable about accuracy moving forward.

I think the weak link in this whole thing is the camera being attached to the lid. It introduces so much inconsistency that they’re having to deal with. If there were instead a camera mounted on each of the side panels, it would give them a much more stable platform to work with.

Now that I think about it, if they wanted to, they could use the head cam to calibrate the lid cam at any time: stick in a calibration sheet with particular features, use the lid cam to detect the location of a feature, send the head cam to see if it’s where it thinks it is, report back the discrepancy, rinse, repeat. It could build a nice distortion matrix to deal with camera warp on a per-unit basis this way.
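
The detect/verify/record loop described above would leave you with a table of per-point discrepancies, which is the crudest useful form of a distortion map. A minimal sketch, assuming made-up calibration coordinates (a real system would interpolate between points rather than snap to the nearest one):

```python
def build_correction_table(lid_points, head_points):
    """For each calibration feature, record the discrepancy between
    where the lid cam thinks it is and where the head cam verified it.
    All coordinates here are hypothetical."""
    return {lid: (head[0] - lid[0], head[1] - lid[1])
            for lid, head in zip(lid_points, head_points)}

def apply_correction(point, table):
    # Nearest-neighbour correction: reuse the error measured at the
    # closest calibration point. Real warp maps would interpolate.
    nearest = min(table, key=lambda p: (p[0] - point[0])**2 + (p[1] - point[1])**2)
    dx, dy = table[nearest]
    return (point[0] + dx, point[1] + dy)

lid = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
head = [(0.2, -0.1), (100.5, 0.0), (0.1, 100.3)]
table = build_correction_table(lid, head)
print(apply_correction((10.0, 5.0), table))  # → (10.2, 4.9)
```

Repeating the loop over a denser grid of features would tighten the map wherever the lens distortion is worst.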

4 Likes

I don’t know how accurate the honeycomb is, but the lid cam could photograph it and the head cam could go around looking at specific points of the mesh to see if they tally, and make a warp compensation matrix from that. It would be very slow, though, uploading thousands of images to the cloud.

It makes me wonder how the 3D auto-focus will work on curved objects. If it has to process images in the cloud, it would have to scan the object with the head camera and laser pointer and upload thousands of tiny images.

2 Likes