
CoCalc News

Recent news about CoCalc. You can also subscribe via the RSS Feed or JSON Feed.

Deepnote is one of CoCalc's direct competitors. Today (November 30, 2023) they announced a major price cut on their pay-as-you-go rates:

"As you may have already heard, starting December 1, we're slashing the pay-as-you-go rates across all our machines – making them more budget-friendly without any hidden terms."


At CoCalc, we recently launched pay-as-you-go machines, which was one of our main development priorities for 2023. These are fully integrated with CoCalc, and were a huge amount of work to bring to market. I was terrified that Deepnote's major price cuts would make Deepnote a much better deal than CoCalc.

Here is how Deepnote and CoCalc pricing compares (all prices are USD per hour):

Machine                                  Deepnote's New Price   CoCalc Standard   CoCalc Spot
64GB RAM, 16 vCPU                        $1.54                  $0.59             $0.12
128GB RAM, 16 vCPU (32 vCPU on CoCalc)   $2.02                  $1.17             $0.23
K80 GPU (newer L4 GPU on CoCalc)         $2.02                  $0.93             $0.30

Conclusion: CoCalc's prices are still highly competitive, even in light of Deepnote's major price cuts.
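To put the hourly rates above in perspective, here is a quick sketch of what a month of continuous use would cost. The rates come from the table; running the machine around the clock for 30 days is an illustrative assumption:

```python
# Hourly rates (USD) for the 64GB RAM / 16 vCPU tier, taken from the table above.
RATES = {"Deepnote": 1.54, "CoCalc Standard": 0.59, "CoCalc Spot": 0.12}

HOURS_PER_MONTH = 24 * 30  # assume the machine runs around the clock for 30 days

def monthly_cost(hourly_rate: float) -> float:
    """Cost in USD of running the machine continuously for a month."""
    return round(hourly_rate * HOURS_PER_MONTH, 2)

for name, rate in RATES.items():
    print(f"{name}: ${monthly_cost(rate)}/month")
# Deepnote: $1108.8/month; CoCalc Standard: $424.8/month; CoCalc Spot: $86.4/month
```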

Also, spot instances do work very well for many applications. For more details and how to get these prices, read the rest of this post.

CAVEAT: Comparing RAM and vCPUs across providers is not straightforward, so I may be completely wrong.

More Details

I don't know exactly what Deepnote means by the above machine specs. However, according to my benchmarks, one of the very best machine families we offer via Google Cloud is AMD EPYC Milan. Their single-core performance is excellent, and a vCPU is equivalent to an entire core, which makes them up to twice as fast as a lot of "vCPU" options out there. We offer both spot instances and standard instances.

Performance: 16 vCPU and 64GB RAM

Our best pricing on an AMD EPYC with 64GB RAM and 16 cores is $0.59/hour for standard instances.


If you select a region in Europe, the cost is only $0.12/hour for a spot instance. Spot instances may be stopped or become unavailable, but our statistics so far show they often run well for days to weeks, perhaps because Google has built out such massive CPU capacity:
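The spot price works out to roughly an 80% discount over the standard price; a one-line sanity check using the two rates quoted in this section:

```python
standard = 0.59  # $/hour for the standard instance (from this section)
spot = 0.12      # $/hour for the spot instance in a European region (from this section)

discount_pct = round((1 - spot / standard) * 100, 1)
print(f"Spot discount vs. standard: {discount_pct}%")  # → 79.7%
```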


In CoCalc the region where the machine is located is transparent, so you can take advantage of the best prices in the world.

High Memory: 32 vCPU and 128GB RAM

Our analogue of the "High memory" machine above is a t2d-standard-32 with 32 cores and 128GB of RAM; it costs $1.17/hour as a standard instance, or $0.23/hour as a spot instance.


Again, the best price on spot instances is in a different region than for standard:



Deepnote offers a K80 GPU for $1.80/hour. We do not offer K80s on CoCalc since they are so old, but we have L4s, which have the same 24GB of RAM and a much newer architecture. Our GPU price is $0.93/hour for standard instances, and $0.30/hour for spot instances:


Conclusion: CoCalc's prices are still competitive, even against Deepnote's new ones.

Happy Holidays! 🎄


It is finally easy to run Mathematica Jupyter notebooks on CoCalc via the free Wolfram Engine! You only have to pay for the compute resources you use, which start at about $0.02/hour. For more details, see the guide.


It is now possible to run your own instance of cocalc-docker directly on your own server. This is a self-hosted way to use CoCalc's Jupyter notebooks, LaTeX, VS Code Server, JupyterLab, and much more. It has many advantages involving performance and privacy over using cocalc.com directly:

  • You can run the server geographically close to yourself, which makes it potentially much faster

  • Your data is not stored or backed up as part of cocalc.com in any way, which may be important for use cases involving privacy or storing large amounts of data.

  • You can use massive amounts of compute resources and disk space, with high performance

  • Cocalc-docker fully supports using all of CoCalc's own editors, in addition to JupyterLab and VS Code

  • You can be root and install your own software

  • You can run any Docker containers

  • If something goes wrong, you can get hands-on support.

  • GPU support

There is a detailed step-by-step tutorial.

CoCalc now features robust compute servers, enabling users to connect a remote computer to CoCalc and utilize it for terminals and Jupyter notebooks. These compute servers open up possibilities for enhanced computing resources, extending far beyond the bounds of local machines. Users simply create a compute server in a project, select the software image and (optional) GPU they require, and can then start running any terminal or Jupyter notebook on this server for an on-demand fee, charged by the second when the server is in use.

The GPU support is extensive, offering variants including A100 80GB, A100 40GB, L4, and T4 GPUs with finely configured software stacks. These stack images include SageMath, Google Colab, Julia, PyTorch, Tensorflow and CUDA Toolkit, accommodating a versatile range of uses. The compute servers integrating these GPUs come at highly competitive pricing, particularly for spot instances. CoCalc's compute servers represent a massive enhancement to default projects, offering increased speed, flexibility, and computational power, transforming the way users can utilize CoCalc for their projects.


To set up a compute server in CoCalc, open your project, create a compute server via the "Servers" button, and select your desired software image and optionally a GPU. To use the server, create a terminal file or a Jupyter notebook, move it to the server through the upper-left menu, and remember to sync files when editing during computations.

Finally, here is a quick tutorial on how to get started with compute servers on CoCalc:

  1. Once logged in, navigate to your project where you intend to use the compute server.

  2. Click on the "Servers" button on the left side of the screen and select "Create Compute Server".

  3. You will be prompted to select the desired software image and optionally a GPU. A GPU is selected by default but you can disable it if you don't need one. If you are going to write code using CUDA libraries, choose the "Cuda Toolkit" image. If you want to accelerate PyTorch computations with a GPU, choose the "PyTorch" image. If you want to use SageMath, choose the Sage image.

  4. Start your compute server.

  5. If you want to use the Linux command line (e.g., compilers), create a terminal file (one ending in .term) and, using the upper-left menu, select your compute server. If you chose the "Cuda Toolkit" image, the 'nvcc' command will be available for compiling .cu code.

  6. If you need to edit the files during your computations on the compute server, remember to click the 'Sync' button at the top left of the terminal for the files to get copied to your compute server.

  7. If you chose the "PyTorch" image or similar, create a Jupyter notebook and move it to the compute server via the upper-left menu. You can then select a Jupyter kernel that's available on the compute server, and your Jupyter notebook will run there.

Remember, compute servers are billed by the second only when they exist.
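Per-second billing means a short session costs exactly a prorated fraction of the hourly rate. A minimal sketch of the arithmetic (the $0.59/hour rate and the 90-minute session length are illustrative numbers, not fixed CoCalc prices):

```python
def session_cost(hourly_rate: float, seconds: int) -> float:
    """Per-second billing: prorate the hourly rate over the seconds used."""
    return round(hourly_rate * seconds / 3600, 4)

# Example: a 90-minute session at an assumed $0.59/hour rate.
print(session_cost(0.59, 90 * 60))  # → 0.885
```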



The default "Ubuntu 22.04" software environment has just been updated. It includes SageMath 10.1, which is now the default version of Sage in CoCalc. In your existing Jupyter notebooks, you may have to refresh the list of kernels and switch to the "Sage 10.1" kernel.

Check out the release tour to learn what's new. E.g., you can now instantiate the 27-dimensional exceptional Jordan algebra:

O = OctonionAlgebra(GF(7), 1, 3, 4)
J = JordanAlgebra(O)
J
Exceptional Jordan algebra constructed from Octonion algebra over Finite Field of size 7 with parameters (1, 3, 4)

For more general information, visit the SageMath documentation.

In other news, many tools and utilities have been updated, and as a new addition, bun, a fast JavaScript runtime, is available as well.

CoCalc now provides GPT-4 on a pay-for-what-you-use basis, in addition to our free GPT-3.5 functionality. As an instructor or student, you might have some questions about how this works!

  1. I don't see a warning about fees when I select "@GPT-4" in the chat window. Does the platform remind users about the fee before the chat is sent?

Anybody can select GPT-4 (in chat and other places), but the first time you use it, there is a big confirmation dialog. This lets you set a specific monthly spending limit (you can set anything you want), which is $0 by default. You can always adjust this limit later under "Self-Imposed Spending Limits", where you can also see the rates.

The dialog also lets you add credit to your account, in case you don't have any, and you can check on the status of that credit at any time. After you explicitly set a limit and add credit, you aren't explicitly asked again every time you use GPT-4. Also, on any day when you use GPT-4, you'll receive an email statement at the end of the day listing how much you spent (and this is easy to disable).

  2. If a student uses GPT-4 once, will CoCalc default to GPT-4 thereafter?

Currently no. The default is always GPT-3.5. That said, several people have been requesting a way to default to GPT-4, to save themselves a click, so we will very likely make that an option sometime in the near future. But it will be possible to configure it either way.

  3. From the "tokens" pricing scheme on OpenAI's site, it is difficult for me to get a good approximation for how much GPT-4 use would cost my students. I recognize that there are too many unknowns for a specific dollar amount, but can you give me any information that would help estimate the cost per semester? A sense of scale ($1 vs. $10 vs. $100 per semester) would be helpful.

Since all use is explicit and manual, e.g., via chat or clicking, in practice it's difficult to use very much. My guess is that a typical student might use about $10 for an entire semester. A typical interaction costs a few cents, so hundreds of interactions cost about $10. You'll quickly get a sense of your spending because it's listed in the daily statements mentioned above. For comparison, OpenAI charges $20/month for their GPT-4 chat site, and Microsoft charges $30/month for their CoPilot integration. CoCalc's model, where you pay only for what you actually use, is more affordable.
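A minimal sketch of the arithmetic behind that estimate; both numbers are illustrative assumptions ("a few cents" per interaction, a few hundred interactions per semester), not measured values:

```python
# Both inputs are assumptions for illustration, not measured CoCalc data.
cost_per_interaction = 0.03      # dollars -- "a few cents" per interaction
interactions_per_semester = 300  # hundreds of interactions over a semester

total = round(cost_per_interaction * interactions_per_semester, 2)
print(f"Estimated GPT-4 cost per semester: ${total}")  # → $9.0
```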

Note that GPT-3.5 is significantly faster (and completely free to users, though it costs me), and for some things it's pretty good, so people often use it just because the output appears so quickly.

Some other notes:

  • In case you're worried, it's also possible to fully or partly disable ChatGPT for students in your class, e.g., during an exam. That's in course configuration.

  • We're planning to add other Large Language Models, e.g., Claude 2 from Anthropic, pretty soon.


The 22.04 line of software environments just received an update. If you encounter a problem, the previous one is accessible under "Ubuntu 22.04 // Previous" in Project Settings → Control → Software Environment. Please report any issues!

There are no major changes, just regular updates to many packages and binaries.


CoCalc now has Cash Voucher Codes. These are single-use codes that you can purchase and give to somebody else, who can then redeem them for that amount of credit on their CoCalc account. They can then buy anything in CoCalc using that credit, including upgrade licenses, dedicated VMs and disks, pay-as-you-go project upgrades, student-pay course upgrades, GPT-4 chat, more vouchers, etc.

To buy a cash voucher code, select "Add Cash Voucher".


Then fill out the number and description, and customize the voucher codes:


Then create your voucher codes:


You get this:


You can then redeem your own code... thus getting your money right back (as credit in your account)!


Note that this has no impact on my balance -- I just made a $5 voucher, which reduced my balance by $5, then I redeemed it, increasing my balance back to exactly where it was:


I hope you find this useful. E.g., if you're teaching a workshop and you want everybody to have an easy way to upgrade their projects for a few hours or use GPT-4 for more sophisticated AI help, you can just issue each participant a $2 voucher...


CoCalc's new purchasing system is now live! Instead of directly buying licenses, you add credit to your account. You can then use that money in a massively more flexible way to buy licenses, pay-as-you-go upgrades of projects (a new thing), GPT-4 (new), GPUs (coming soon), and we have many more plans. There's a log of exactly what you purchased, with daily and monthly statements, and as you make purchases your balance goes down.

Payments to add credit now work in your local currency anywhere in the world with a wide variety of local payment methods, instead of just credit cards! You can also buy a subscription without enabling any form of automatic payments -- you just have to manually add credit to cover the subscription periodically.

Another massive improvement in our license system is that if you purchase a license and find that you need to increase or decrease the RAM, disk space, run limit (number of upgraded projects), or anything else, you can just directly edit the license at any time and your account will be debited or credited accordingly. If you need a license for only a week, or want to extend an existing license, you can also do that at any time using the balance in your account (you're charged the prorated difference).
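The proration described above can be sketched roughly as follows; the prices, dates, and exact formula here are hypothetical illustrations, not CoCalc's actual billing code:

```python
from datetime import date

def prorated_charge(old_price: float, new_price: float,
                    start: date, end: date, today: date) -> float:
    """Charge (or credit, if negative) the price difference spread over
    the remaining fraction of the license term. A rough sketch only."""
    total_days = (end - start).days
    remaining_days = (end - today).days
    return round((new_price - old_price) * remaining_days / total_days, 2)

# Hypothetical example: upgrade a $100/year license to a $150/year
# configuration about halfway through its term.
print(prorated_charge(100.0, 150.0,
                      date(2023, 1, 1), date(2024, 1, 1), date(2023, 7, 2)))
# → 25.07
```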

I think this is much better than what we had before, and it's now fully live, as you can see in the screenshots below. These improvements to purchasing are the result of feedback from thousands of users over many years.





Edit a license



Pay as you go project upgrade