Specification reduction

You are arguing with yourself at this point. I said in my original post and every other post that I don’t know, and I still don’t. I have never considered it a sign of weakness to admit I don’t know.

This started with a comment about doing real-time continuous focus using the cloud. It just surprised me that you even entertained the idea that it might be tried by competent engineers.

3 Likes

If you fall and no one sees it, did you really fall?

5 Likes

Yes, I don’t think it can be done in the cloud practically, so it would have to be done locally. If the controller has enough grunt to do real-time analysis of video, then the argument that the cloud allows them to use a cheap controller doesn’t really stand: it would easily be able to do all the other things the cloud does, because real-time continuous autofocus looks like the most demanding task by far.

Double-sided cutting is difficult if the walls of the material are not exactly vertical, because then you need to work out the alignment of the bottom face by looking at the bottom edges from the side, since you can’t see them from above.

These two tasks look the hardest to implement and just happen to be the ones dropped from the specification to “clean it up”. Good to hear they are still planned to be implemented. Odd to remove them if they are fully confident they will succeed.

Regarding the head apertures: we know there must be a red laser, a camera and a UV light source. If one port is the camera and the other port has two devices, my guess is a laser diode and a UV LED. If it turns out to be an integrated laser distance sensor with 0.1 mm accuracy, I will be amazed.

2 Likes

I believe the bruising says “yes.”

3 Likes

Actually, thinking about it some more: the laser spot will only move side to side when the height changes, so you only need to look at a thin strip of video. Using the fact that it will generally be close to where it was last frame, you only need to look at a few pixels on most frames. So it actually wouldn’t need very much processing time to track the spot in real time and compute the depth. I don’t know why it has been kicked into the long grass.
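
To make that concrete, here is a rough sketch of the per-frame work (the band rows, the reference column and the mm-per-pixel factor are made-up calibration values, and I’m assuming a plain grayscale frame from the head camera):

```python
import numpy as np

# Illustrative calibration constants (made up): the column where the red dot
# sits at the reference focus height, and how many mm of height change one
# pixel of sideways shift corresponds to (set by the triangulation geometry).
REF_COL = 320
MM_PER_PIXEL = 0.05

def track_spot(frame, band=(230, 250), prev_col=REF_COL, search=8):
    """Find the red-dot column in a thin horizontal strip of one frame.

    frame    : 2-D uint8 array (one grayscale video frame)
    band     : the rows the dot can ever occupy (the "thin strip")
    prev_col : where the dot was last frame - it won't have moved far
    search   : half-width, in pixels, of the window searched this frame
    """
    strip = frame[band[0]:band[1], :].astype(np.float32)
    profile = strip.sum(axis=0)                  # collapse the strip to one row

    lo = max(prev_col - search, 0)
    hi = min(prev_col + search + 1, profile.size)
    window = profile[lo:hi]

    # Sub-pixel estimate: intensity-weighted centroid of the small window.
    cols = np.arange(lo, hi)
    col = float((cols * window).sum() / window.sum())

    depth_mm = (col - REF_COL) * MM_PER_PIXEL    # sideways shift -> height change
    return depth_mm, int(round(col))
```

On most frames that is a few dozen multiplies, which is the point: the per-frame work is tiny once you exploit the thin strip and the previous position.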

1 Like

I think it makes sense to clean up the feature list for what is available now. That way, customers shopping for the laser now don’t wonder why it can’t do those things when they get their laser in November.

It also may not need to go to the cloud for processing since it’ll need to make small adjustments in real-time to the job that already exists in RAM. So possibly what little brains the unit has in its head could be used for that purpose.

Possibly; it depends on whether the head camera is linked to the main MCU or to one in the head. But yes, it certainly can’t go to the cloud and back in real time.

Depends on your cloud :wink: With 50-100 msec ping times it could be done with a really thin software stack.

That would be nowhere near fast enough if you consider how far the head moves in that amount of time.

1 Like

So at a top speed of about 80 mm/sec, 100 msec would be 8 mm and 16 msec would be about 1.3 mm. Good enough for warp, but not for carved surfaces. (And yes, I’m ignoring the time for the mechanics to move the lens, because you’d be stuck with that anyway. And I’m ignoring the faster movement for engraving, because if the previous line isn’t good enough you’re already up the creek.)
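
The same sums in code, just to have the numbers in one place (the 80 mm/sec top speed is the figure above; the latencies are the ones being discussed):

```python
# How far the head travels before a cloud round trip can adjust the focus.
head_speed_mm_s = 80.0                       # top cutting speed quoted above

for latency_ms in (16, 50, 100):
    error_mm = head_speed_mm_s * latency_ms / 1000.0
    print(f"{latency_ms:3d} ms round trip -> {error_mm:.1f} mm of travel")

# 16 ms  -> 1.3 mm   (roughly one video frame)
# 50 ms  -> 4.0 mm
# 100 ms -> 8.0 mm
```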

Which raises another question: what are the offsets among the camera, the visible laser dot and the IR aim point? Because that’s going to limit the resolution of anything you can do live. (Also, with live work, does this mean while the laser is firing? Because then the signal processing gets interesting at best, unless the camera can just switch to seeing where the burn point is.)

3 Likes

Good point. As the lens is between the red laser and the camera, I don’t think it will be able to see the red laser spot while it is burning stuff, because of the glare and smoke.

Perhaps it will have to do two passes over everything, in which case it could send video to the cloud, but that would slow down the job a lot.

I have a toaster with a live video feed. I would really like to get the same from my Laser 3D Printer.

1 Like

Put it on Twitch and monetize that sucker!

1 Like

I’ve been wondering how/if they were going to pull off two-sided cutting and magically-fast continuous autofocus. I guess pulling them off the website was their best option.

1 Like

The “advertised” continuous focus is, as I see it, a matter of dealing with warp and contour. This is best handled in motion planning: a quick measurement pass to build a 3D mesh representation of the surface, along with a perfected image scan of the material.

Then the focal point lens positions are just another waveform to encode and send to the :glowforge:
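
Roughly, the planner side could look like this (everything here is an assumption for illustration, the regular-grid height map and the linear height-to-lens mapping included, not how Glowforge actually encodes it):

```python
import numpy as np

def lens_waveform(height_map, cell_mm, toolpath_xy, focus_gain=1.0, focus_zero=0.0):
    """Turn a measured height map plus a toolpath into per-point lens setpoints.

    height_map  : 2-D array of surface heights (mm) on a regular grid from the
                  measurement pass, one sample every `cell_mm` in x and y
    toolpath_xy : (N, 2) array of planned head positions (mm)
    focus_gain, focus_zero : invented linear model mapping surface height to a
                  lens actuator setpoint; the real mapping is not public
    """
    xs = toolpath_xy[:, 0] / cell_mm
    ys = toolpath_xy[:, 1] / cell_mm

    # Bilinear interpolation of the height map at every toolpath point.
    x0 = np.clip(np.floor(xs).astype(int), 0, height_map.shape[1] - 2)
    y0 = np.clip(np.floor(ys).astype(int), 0, height_map.shape[0] - 2)
    fx, fy = xs - x0, ys - y0
    h = (height_map[y0,     x0    ] * (1 - fx) * (1 - fy) +
         height_map[y0,     x0 + 1] * fx       * (1 - fy) +
         height_map[y0 + 1, x0    ] * (1 - fx) * fy +
         height_map[y0 + 1, x0 + 1] * fx       * fy)

    # The "waveform": one lens setpoint per toolpath point, streamed to the
    # machine alongside the X/Y motion plan rather than computed live.
    return focus_zero + focus_gain * h
```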

Trying to do that processing in real time while the laser is firing would mean closing the loop with fire and smoke obscuring the point of interest. The cloud services do not schedule jobs with real-time process handling in mind. While a ping might be 16 ms, the round trip including processing and job latency will be higher, and 16 ms is an entire video frame’s display time.

Perhaps, but as the red laser will only scan a very narrow line, it would take a long time to do a full scan. It also wouldn’t cope with material that warps or drops as it is cut.

Hard to say what they had in mind, but they changed “continuous” to “multi-point”, so perhaps just a sparse mesh. That would work for warped material but not for following the contours of 3D objects like smartphones, though it could optically recognise those and use a mesh from a database.
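
For warp, a handful of probe points is plenty; something like this would cover it (purely illustrative: it assumes the multi-point measurements arrive as x, y, z samples and that the warp is smooth enough for a quadratic fit):

```python
import numpy as np

def fit_warp(points_xyz):
    """Fit a smooth quadratic surface z(x, y) to a few probed points.

    points_xyz : (N, 3) array of multi-point depth measurements (mm), N >= 6.
    Returns a callable height(x, y). A quadratic handles gentle warp but
    obviously can't follow an arbitrary 3D object.
    """
    x, y, z = points_xyz.T
    # Design matrix for z = a + b*x + c*y + d*x*y + e*x^2 + f*y^2
    A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)

    def height(xq, yq):
        return (coeffs[0] + coeffs[1] * xq + coeffs[2] * yq +
                coeffs[3] * xq * yq + coeffs[4] * xq**2 + coeffs[5] * yq**2)
    return height

# e.g. six probes on a warped sheet (all numbers invented):
# h = fit_warp(np.array([[0, 0, 0.0], [300, 0, 0.4], [0, 200, 0.3],
#                        [300, 200, 0.9], [150, 100, 0.5], [150, 0, 0.2]]))
# h(150, 100)  # interpolated surface height at the job centre
```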

Or perhaps it could construct a 3D model from the head video, as the new Sony phone does, and then take a few depth measurements to anchor it. That would be impressive!

1 Like