VERY long “Preparing your design” time

Again… this was just one example of something not working right today. I took another file from DeltaCad, and everything was centered in Inkscape, but it wouldn’t show up in the UI.

Maybe it’s all the coordinates involved in cutting gear teeth. At the end of the day, it still takes almost an hour to import at times, and then it doesn’t even show up.

The PDF has a lot of tiny little segments in it. It really bogs down Inkscape, and it looks like more than the Glowforge app can handle, but at least it makes it clear what needs to be nuked from the DXF. So I’ve done that here. I also connected the line segments into objects and joined the nodes. The gears still have a lot of nodes, but Inkscape isn’t dragging as badly with this as it was with the PDF import:

planetary_card.zip (15.6 KB)

This one took about 12 seconds for me to upload into the app.

I have been working from CorelDRAW all day and exporting to SVG and PDF. When it works, I can get some detailed stuff to print; when it doesn’t, it just sits and loads.

[image: engraved sphere with text]

That is text in the sphere: 16 minutes of print time.

I’ve noticed with gear teeth that it’s reaaaaaally dependent on how many segments you end up with. Yesterday I was doing some tests with shapes exported from geargenerator.com, which I then mucked around with in Rhino3D v4, then exported to .dxf… I was reliably able to get both Inkscape and the GFUI to hang until I went in and manually simplified the 800 segments per tooth that were in the original version.
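If you want to find out where the segments are hiding before uploading, counting them per path is quick. Here’s a minimal sketch using the svgpathtools Python library (`pip install svgpathtools`); the filename and the 200-segment threshold are just placeholders:

```python
# Count segments per path in an SVG to spot the paths worth simplifying.
# "planetary_card.svg" is a stand-in filename; 200 is an arbitrary threshold.
from svgpathtools import svg2paths

paths, attributes = svg2paths("planetary_card.svg")
for path, attr in zip(paths, attributes):
    n = len(path)  # number of line/arc/Bezier segments in this path
    if n > 200:
        print(f"id={attr.get('id', '?')}: {n} segments: candidate for simplifying")
```

Anything that prints here is a good target for Inkscape’s Path → Simplify before it goes anywhere near the GFUI.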

Side note: the circle around the largest gear is really close to the next cut over on the card. A cursory look makes me think it would work just as well if they were moved farther apart, right? With cuts that close, one right after the other, you may get it to start burning, especially if you’re cutting cardboard.

Another interesting bit of trivia about planetary gear cards: this one on Thingiverse is thing 211 (https://www.thingiverse.com/thing:211), uploaded by phooky, who is Adam Mayer, one of the founders of MakerBot along with Zach and Bre. If I understand correctly, Bre is one of the investors in Glowforge, and he posted a video of Adam with the gear cards back in 2008:


I left the software overnight at the “Preparing your design” step. I gave up at 12:30 am; it is now 8:25 am, and it still hasn’t finished prepping the design.

Same thing happening to me. I can print a very trivial object, but the job I’m TRYING to print, which is 142 KB and a considerably more involved job, fails on preparing the design. It gives me no good reason as to WHY it’s failing, and not being able to run my job is A HUGE issue. Please come up with a way we can do this locally; this is ridiculous.

Having the same experience: small works fine, more complex fails. The sad thing is, they aren’t that complex. An 843 KB file fails, but a 383 KB one is fine. One other thing, and I can’t quantify this, but I believe PDFs are working better than SVGs for me.

[image]


And it even works (though I think I placed some of the gears poorly).

I did not change much from what I had done yesterday… I think I deleted a few lines from the file and that was about it.

Perhaps while everyone is at church this morning, the servers are less busy. =)


I think everyone is home; I am back to very long waits and no longer getting to print anything. :frowning:

Can you expand on this? I’m very familiar with systems administration at scale as well as Docker internals, and I’m not sure I understand what you mean.

Just trying to print this small file today, and I can’t get past the “Preparing your design” step.

A very small circle with a cut through it and an engrave, plus a little text that has been converted to curves.

The best example I can give is comparing it to something like VMware or Xen. With either of those, you create VMs separately; let’s say they are all Linux VMs. So say you’ve created 10 Linux VMs on either platform. Each VM has its own kernel, so in this case the kernel is duplicated 10 times.

With Docker, you install a single Docker instance, and then you can create 10 Linux containers that all share a single kernel (more or less… there are better ways to explain this) and some other code.
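If you want to see that sharing for yourself, here’s a minimal sketch (assuming Docker and the alpine image are available on a Linux host): every container reports the host’s kernel version, because there is no per-container kernel.

```python
import subprocess

def one_line(cmd):
    """Run a command and return its single-line output."""
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout.strip()

# The host's kernel version...
print("host:", one_line(["uname", "-r"]))

# ...and the kernel version as seen from inside two separate containers.
# Both print the same string as the host: containers share the host kernel.
for i in (1, 2):
    print(f"container {i}:", one_line(["docker", "run", "--rm", "alpine", "uname", "-r"]))
```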

For quick deployments… Docker is great. For high-performance computing… meh.

Granted… this could be just from my experience with the product… but generally speaking, I am not a fan.

Solaris dabbled in Containers years ago, and they shared SOME code, but not as much as Docker does. I’m more of a fan of LDOMs for this sort of thing, but it comes at a cost.

Your examples of VMware and Xen are interesting, because these are both virtualized (nowadays, paravirtualized) platforms. In both cases, you’re spending computing resources performing the same tasks in each of your virtual machines. If we’re talking about a modern Linux, as a single example, each VM is running a completely independent instance of sshd.

With Docker, meanwhile, best practice dictates that containers run only a single process. If you are running ten Docker containers, you still have only one instance of sshd running, that belonging to the “hypervisor,” i.e., the system itself, running dockerd.

I’m trying not to simply express disagreement, but everything I understand about Docker internals tells me that there is zero performance hit for a running process inside a Docker container. Containerization, unlike virtualization, does not add additional layers (even if, as in the case of paravirtualization, the heaviest layers have been implemented in hardware).

It can take longer for a docker container to start up than it might normally require for a system to run the binary directly, which is why I qualified a “running” process.
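To make the distinction concrete, here’s a rough, unscientific timing sketch (assuming Docker and the python:3 image are available locally). The steady-state compute should come out close; the container’s extra cost shows up almost entirely as startup time.

```python
import subprocess, time

# The same CPU-bound one-liner, run natively and inside a container.
WORK = "sum(i * i for i in range(10_000_000))"

def timed(cmd):
    """Return the wall-clock seconds a command takes to run."""
    t0 = time.perf_counter()
    subprocess.run(cmd, check=True, capture_output=True)
    return time.perf_counter() - t0

native = timed(["python3", "-c", WORK])
container = timed(["docker", "run", "--rm", "python:3", "python", "-c", WORK])
print(f"native:    {native:.2f}s")
print(f"container: {container:.2f}s  (includes container startup)")
```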

I was hoping to elicit a more specific explanation of why you think Docker hinders performance or scalability.

Let’s take your example of sshd running on 10 different VMs. That means I have 10 different instances of sshd for clients to log in to, which are spread across different hardware in the system.

Now if I take Docker and create 10 containers that all share the same sshd… I am putting all my eggs in that single basket. Just as a hypothetical scenario, think of it this way:

Let’s say each sshd instance can support 100 clients. If I take 10 “regular” VMs and have 10 sshd daemons running, I can support 1,000 clients total. Now with containers, if I have a single sshd instance that still has the 100-client limitation, I am sharing those 100 connections across 10 “systems”. So if container 1 has 75 connections going and container 2 has 25, containers 3-10 are SOL until resources are freed up by containers 1 and 2.
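To put numbers on that hypothetical (the 100-client cap is made up for illustration, not a real sshd limit):

```python
# Toy capacity model from the hypothetical above.
MAX_CLIENTS_PER_SSHD = 100  # made-up cap, purely for illustration

vms = 10  # one sshd per VM
print("10 VMs, one sshd each:     ", vms * MAX_CLIENTS_PER_SSHD, "clients")  # 1000

# 10 containers funneled through a single shared sshd on the host.
print("10 containers, shared sshd:", 1 * MAX_CLIENTS_PER_SSHD, "clients")    # 100
# If containers 1 and 2 hold 75 + 25 connections, containers 3-10 get
# nothing until those connections close.
```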

You run into the same problem with hardware being split up via VMware and whatnot as well; I am not saying VMware doesn’t have limitations. There is a reason VMware allows “thick” provisioning of memory and storage… for certain things you want guaranteed bandwidth. I am sure Docker has a similar provision, but sharing is NOT always caring in the computer world.

So now apply all of this to running on Google or Amazon servers, where your Docker container may be sharing headspace with any number of other resource hogs. With VMs in a “normal” hypervisor situation, I can alleviate congestion by adding hardware, migrating to other clusters, and all sorts of good noise like that. Not as much with Docker, depending on how it was deployed. At the end of the day, you are still sharing your sshd process with others.

It has its place… I just don’t think it’s for anything that requires heavy compute. Not unless Google and Amazon are running Docker on some super-heavy metal boxes (which I highly doubt).

I have 25+ years in the computer industry, and this is just my opinion on the matter. My personal experience tells me that if I run something in a container next to a bare-metal box… the bare-metal box is usually going to spank the container when it comes to performance. In all my years… no one ever sizes their installs for proper growth, and no one does jack about increasing anything until it all falls on the floor.


Now my machine is stuck offline.

I rebooted everything; I even held “The Button” for 10 seconds and then rebooted.

About to reconnect it. Any thoughts?

Good news is I finally got the art uploaded. One step forward, one or two back.

I’m so sorry for the trouble over the weekend. I believe you should have a better experience now. Can you try again and let me know how it goes?

Thanks @jaz, last night went a touch smoother with small designs. Will try something larger tonight.