Why computing on servers?


Why did you choose to do the computing on your servers and not on the owner's desktop computer? It seems to me it is more expensive for you to maintain servers than to leave that job to others.
Thank you.


As someone developing an actual cloud offering (one of the big ones), I can say there are many reasons for doing it in the cloud.

  1. One platform (instead of Mac, PC and others)
  2. Support. You know everyone is using the same version of the software.
  3. Easy deployment of new versions
  4. Heavy CPU requirement for a feature? Just throw more servers at the problem.

And more…

There are drawbacks of course, but there is a reason almost every product is going cloud.

If I had to guess, cloud development with a modern process is half the cost of old-style development.

That can all be done via a web app running on automatically updated local servers. See the Lasersaur project for a good implementation of platform-independent, browser-based control of a laser cutter with great support and rapid development. And I can still feed it a regular old g-code file anytime I want.

One reason that could justify the expense of running this on your own servers and locking people out of their machines is to monetize the massive library of designs being built free of charge by users. There's nothing wrong with that. We're all getting a perfectly decent laser cutter (waaaay better than the Chinese eBay stuff) at a ridiculous price. They gotta make up that money somewhere.

Luckily making a laser cutter work is not rocket science, it’s just a couple of stepper motors and a laser at the end of the day. It can be made to run from any controller and interface you like with a bit of hacking and warranty-voiding.
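Since the thread is about feeding plain g-code to a hacked-together controller, here is a minimal sketch of what that g-code actually looks like: a tiny Python generator for a square cut. The feed rate and laser power values are made-up illustrations, not tuned for any real machine.

```python
# Sketch: generate g-code for a square cut.
# Feed rate (F) and laser power (S) values are illustrative only --
# real values depend on the machine and material.

def square_gcode(x0, y0, size, feed=1000, power=300):
    """Return g-code lines tracing a square of `size` mm starting at (x0, y0)."""
    corners = [(x0, y0), (x0 + size, y0), (x0 + size, y0 + size),
               (x0, y0 + size), (x0, y0)]
    lines = [
        "G21        ; units: millimetres",
        "G90        ; absolute positioning",
        f"G0 X{corners[0][0]} Y{corners[0][1]}  ; rapid move to start",
        f"M3 S{power}  ; laser on",
    ]
    for x, y in corners[1:]:
        lines.append(f"G1 X{x} Y{y} F{feed}  ; cut to next corner")
    lines.append("M5         ; laser off")
    return lines

for line in square_gcode(10, 10, 50):
    print(line)
```

At the end of the day that really is all the controller needs: moves and on/off, which is why swapping the brains of the machine is feasible.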

Of course running your own auto-updating web server would work, but that would still force them to support different OSes, different OS versions, etc. And they still would not be certain everyone is running the latest version.

And what would happen if something breaks? You'd send logs to them and hope they release an update? With the cloud, any error happens on their servers, where they can debug it easily and fix the issue for everyone with one small update.

The more I think about it, the more genius I think it is (having this as a cloud service).

There was a brief period of time when predictions for the future of computing were that absolutely everything would go cloud-based. You would pay a monthly fee for a computer, rather than own one. This would let you have supercomputer power, all happening somewhere else.

Not sure precisely what drove us away from this prediction. Likely that was before 4K displays, which raised the bandwidth required to keep your monitor showing whatever the remote computer is telling it to show.

But yes… tons of advantages to being in the cloud, offset by low-probability but catastrophic drawbacks (if "the system" goes down, it is down for everyone, and all you can do is complain online until someone else fixes it for you).

It's very unlikely the servers will go down; it's Google, after all. I've worked with Google servers and they are nifty and resilient. But the Amazon Web Services debacle and Apple's service problems show no one is immune to an outage. I'm guessing Google has learnt from others' (and their own) mistakes, so this is a lot less common. Most IT services aim for 99% uptime, but Google does better than that.
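To put those uptime percentages in perspective, here is a quick back-of-the-envelope calculation of how much downtime per year each availability level actually allows:

```python
# Downtime per year implied by a given availability percentage.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def downtime_minutes(availability_pct):
    """Minutes of allowed downtime per year at the given availability."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.0, 99.9, 99.99):
    print(f"{pct}% uptime allows about {downtime_minutes(pct):.0f} minutes down per year")
```

So a bare 99% uptime still allows roughly 5,256 minutes (about 3.6 days) of downtime a year, which is why the big providers chase extra nines.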

Eventually everything will be cloud, even 4K-8K+ (who needs that?); we have barely tapped the fiber bandwidth cap. Just being able to upgrade your server hardware and update your servers remotely is a huge deal. Imagine adding computing power to your computer without having to buy any hardware yourself. (Get the input and output devices right and you never have to upgrade it again until it breaks, which would be less often.)

The predictions turned out to be right the vast majority of the time. These days the winning programming model is to do the heavy lifting on the server side, where deployment and scaling are now very easy, and do the interaction on the client side in JavaScript using a framework such as Angular or React. This is a much easier model to develop for and operate than distributing desktop software, because it's a much simpler environment, without all the variation of desktop OSes or the pain of distributing software and dealing with user-environment issues. Admittedly, the user interface depends a bit on the specific browser a user is running, but given modern browsers and standards, that is vastly easier to support than a full stack on Mac, Windows, and Linux. And of course it gives you access to highly scalable resources: when you're doing something compute-intensive, you can spread a job across a huge farm of servers, allowing you to do things like image processing and slicing in a tiny fraction of the time it would take on a single desktop computer. The downside, of course, is that users have to be on the Internet, but that's increasingly common.
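The "spread a job across a farm of servers" idea can be sketched locally with Python's multiprocessing pool as a stand-in for remote workers. The `process_tile` function here is a dummy placeholder for a heavy step like slicing one image tile; it is an assumption for illustration, not anyone's actual pipeline.

```python
# Sketch: fan a big job out over a pool of workers, as a local stand-in
# for spreading slicing/image-processing work across a server farm.
from multiprocessing import Pool

def process_tile(tile):
    """Dummy stand-in for a heavy step, e.g. processing one image tile."""
    return sum(x * x for x in tile)

if __name__ == "__main__":
    # Split the work into 8 tiles, scatter to 4 workers, gather results.
    tiles = [list(range(i, i + 1000)) for i in range(0, 8000, 1000)]
    with Pool(4) as pool:                        # 4 workers ~ 4 servers
        results = pool.map(process_tile, tiles)  # scatter, then gather
    print(len(results), "tiles processed")
```

The cloud version replaces the local pool with real machines, but the scatter/gather shape of the job is the same, and that is what makes "just throw more servers at it" work.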

And the major upside is you can do the same quality of computing from any device with a screen and internet. OnShape is an example I hadn’t considered in this musing of doing just that.

Best experience I had switching from local programs to the cloud was going from a proprietary ILS for our school library to open-source Koha. Instead of being stuck on one non-networked computer, I could access it anywhere from any platform. That's me being IT for the school. That is huge even aside from the command-and-control software not being local. Platform agnostic. Yes. How much will we curse what it can't do versus what we are amazed it can do?