If they present hi-res images from the head camera, it would not need to be limited like that. They could scan an area of interest, stitching together multiple images to give a full-screen view of the magnified area.
True, but that requires some very clever stitching. You only have to look at the Muse to see how to do it badly. Not only do you need to de-warp each image perfectly so edge pixels line up exactly, you also have to match brightness and contrast, since the lighting changes across the bed.
Anything with any 3D depth would look different from different viewpoints, so you would need a full depth scan and a 3D model to make the stitched result look right.
I’d love to be able to zoom into a 1" square to find the origin. If the head camera worked like an optical edge finder you could use it to establish corners, center points, change the angle, pretty much anything you could want. It would be a lot like zooming in to position your work, but… you know… actually accurate. You could get an overview of the bed using the lid camera, click on the part you want to zoom in on, and the machine would move the head camera to that position for fine tuning.
I still hold out hope that this is in the pipeline.
… is still in the spec list. Even if it doesn’t go down to 50 microns, it would still be a big improvement.
Does the GF have a visible laser pointer on the head? Imagine a manual registration mode where you drive the head (and visible dot) with buttons, park the dot somewhere on your material and tell the machine, “this is the upper right corner of the artwork.”
Now you are not relying on a preview: you have TOLD the machine where to go in terms it understands and can trust, namely the location of the head.
To confirm the cut plan, the GF could do a quick physical preview, with the visible laser dot simply moving around the edges of the artwork bounding box… or even around the outer border of the artwork, if it is a closed shape.
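The registration-and-preview idea can be sketched in a few lines. This is hypothetical Python with made-up function names, not any real Glowforge API: given the artwork outline and the machine position where the user parked the visible dot, translate the artwork so its upper-right bounding-box corner lands on that spot, then emit the bounding-box path the dot could trace as a physical preview.

```python
# Hypothetical sketch of manual registration via a parked visible dot.
# Assumes +x is right and +y is up in machine coordinates.

def register_upper_right(points, dot_x, dot_y):
    """Translate artwork so its bounding box's upper-right corner
    sits at (dot_x, dot_y), where the user parked the visible dot."""
    max_x = max(x for x, _ in points)
    max_y = max(y for _, y in points)
    dx, dy = dot_x - max_x, dot_y - max_y
    return [(x + dx, y + dy) for x, y in points]

def bounding_box_path(points):
    """Closed rectangular path around the artwork's bounding box,
    suitable for tracing with the visible dot as a motion preview."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    lo_x, hi_x = min(xs), max(xs)
    lo_y, hi_y = min(ys), max(ys)
    return [(lo_x, lo_y), (hi_x, lo_y), (hi_x, hi_y),
            (lo_x, hi_y), (lo_x, lo_y)]

# Park the dot at (10, 5) and declare it the artwork's upper-right corner:
placed = register_upper_right([(0, 0), (2, 0), (2, 1)], 10.0, 5.0)
# The point (2, 1) now sits at (10.0, 5.0).
```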
My vinyl cutter works kind of like this and it makes it extremely easy to accurately cut things out of odd scraps. It doesn’t do a motion preview, but you drive the head around and push a button to say “origin is here” and then Bob’s your uncle.
I love the idea of the camera, it is one of the reasons I pre-ordered… but if placement via the camera doesn’t become automagical, we are going to need a Plan B. People are already resorting to things like doing a quick score on the masking material to verify placement. We’re already taking extra steps. If those extra steps are going to be a way of life, then the UI should find a way to make them as easy as possible, however that may happen. I am sure there are many solutions.
I can glue a laser onto the head that gets me better than 1/4 inch accuracy. Maybe I can 3D print up a kit.
But it’s ludicrous to need to do so.
Especially as the head already has a red laser pointer in it.
Yeah, but I don’t think it’s on-axis with the cutting laser.
If it isn’t on-axis it should be possible to determine the offset, and at least the offset probably won’t change with head location.
- Score a special grid pattern provided by GF
- Pip the center of the pattern with your visible dot
- GF then scores a dot, without moving the head
- Read X/Y coordinates for the dot off the grid, type into GF UI
- Offset stored permanently
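The arithmetic behind those steps is trivial, which is part of the appeal. A sketch under the same assumptions (hypothetical names, no such Glowforge feature exists): if the visible dot was parked exactly on the grid origin, the scored dot's grid coordinates ARE the offset.

```python
# Hypothetical offset-calibration arithmetic for the procedure above.
# The user pips the grid center (0, 0) with the visible dot, the machine
# scores a dot without moving the head, and the user reads that dot's
# position off the printed grid.

def compute_offset(scored_x_mm, scored_y_mm):
    """Visible-dot-to-cutting-beam offset, assuming the visible dot
    was parked exactly on the grid origin when the dot was scored."""
    return (scored_x_mm, scored_y_mm)

def dot_to_beam(dot_x, dot_y, offset):
    """Where the cutting beam actually lands when the visible dot
    is parked at (dot_x, dot_y)."""
    ox, oy = offset
    return (dot_x + ox, dot_y + oy)

def aim_dot_for_target(target_x, target_y, offset):
    """Where to park the visible dot so the beam lands on the target."""
    ox, oy = offset
    return (target_x - ox, target_y - oy)

offset = compute_offset(1.5, -0.75)        # scored dot read off the grid
beam = dot_to_beam(100.0, 50.0, offset)    # -> (101.5, 49.25)
```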
Fun to talk about… I’d rather the camera just work though! But if that does not come to pass, I sincerely hope we get official support for a good alignment procedure, rather than being left to figure it out ourselves.
The visible laser is what they use to determine the material thickness, iirc. (And yeah, I know: why do you still have to tell the GFUI the thickness? Because you might want to lie deliberately…)
We’re working on the solution “fix the camera”.
Is the red laser y’all are talking about the one used for material height sensing? If so, doesn’t it fire down at an angle? (which would make determining the offset futile as it will vary depending on material height)
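If it does fire at an angle, the error is easy to quantify with basic trigonometry: a beam tilted theta from vertical lands h·tan(theta) sideways for every h of surface height, so the dot's position shifts as the material gets thicker. A quick sketch (the angle and thickness here are made up for illustration, not Glowforge specs):

```python
import math

def dot_shift_mm(surface_height_mm, beam_angle_deg):
    """Horizontal shift of an angled beam's dot as surface height rises.

    A beam tilted theta from vertical travels tan(theta) sideways per
    unit of vertical drop, so raising the surface by h moves the dot
    by h * tan(theta)."""
    return surface_height_mm * math.tan(math.radians(beam_angle_deg))

# With a made-up 20-degree tilt, 6 mm of material shifts the dot
# roughly 2.2 mm, far more than the placement accuracy being sought:
shift = dot_shift_mm(6.0, 20.0)
```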
@dan, you’ve mentioned camera improvements before and I am looking forward to that, as I’m sure we all are. The details in the October update were very encouraging. Until release I am sure we’ll continue to play what-if!