
CoCalc News

Recent news about CoCalc. You can also subscribe via RSS Feed or JSON Feed.

The new WireGuard encrypted VPN between all compute servers in a project is now live and fully working in all the testing I've done. This is a critical foundation for building other things -- clusters, the distributed filesystem, etc.

If you want to try the encrypted WireGuard VPN, just start two compute servers in the same project. Then type more /etc/hosts and see that compute-server-[n] resolves to the VPN address of the compute server (which will be of the form 10.11.x.y). Run apt-get install -y iputils-ping and then you can ping from one server to another, e.g., ping compute-server-[n]. Also, if you have set up a subdomain for a compute server, you can use that subdomain name to connect to it as well. The exciting thing is that:

  • all ports are opened on the vpn

  • all traffic is fully encrypted

  • only compute servers in the same project have access to the vpn

  • this fully works across clouds, i.e., some nodes on Google Cloud and some on Hyperstack, and they all connect to each other in a unified way.
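
The hostname lookup described above can be scripted. Here is a minimal Python sketch (the 10.11.0.0/16 range is my assumption, based on the 10.11.x.y addresses mentioned) that pulls the compute-server-[n] peers out of /etc/hosts-style text:

```python
import ipaddress

# Assumption: the project VPN hands out addresses of the form 10.11.x.y.
VPN_NET = ipaddress.ip_network("10.11.0.0/16")

def vpn_peers(hosts_text):
    """Extract compute-server-[n] entries from /etc/hosts-style text,
    keeping only names whose address falls inside the VPN range."""
    peers = {}
    for line in hosts_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments
        if not line:
            continue
        addr, *names = line.split()
        for name in names:
            if name.startswith("compute-server-") and ipaddress.ip_address(addr) in VPN_NET:
                peers[name] = addr
    return peers
```

Feeding this the contents of /etc/hosts on a compute server should list the sibling servers you can ping.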

Note that on-prem still has one limitation: on-prem nodes can connect to all cloud nodes, and all cloud nodes can connect to on-prem nodes, but on-prem nodes can't connect to each other. Making this work in general is complicated and expensive, requiring TURN servers, so we're not doing that for now. Some special cases will be supported in the future. This isn't the highest priority, since probably nobody but me uses on-prem with more than one server so far...

Anyway, I think now that this is in place, implementing our new high performance distributed filesystem will be possible! Stay tuned.


We released another round of large language model updates. You can now use GPT-4o (Omni) and Gemini 1.5 Flash. Both are not only very capable, but also extremely quick with their replies.

Here is an example of how I improved a plot of a t-test using R in a Jupyter notebook. This is a visual check to see whether the data really differs significantly. The plot looks a bit boring, though:

Via AI Tools → Improve, I can tell GPT-4 Omni to make this a violin plot and more colorful.

I get a response and can review the changes in the side chat. The result looks like this:

Much better!

OK, but wait, what's a t-test? Here, I'm asking Gemini Flash to explain it to me, and there was also something called shapiro (the Shapiro–Wilk normality test). To learn more, I opened a new chat and asked away. I told Gemini to also show me how to do this in R – which I can run directly in the chat.
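
For a language-agnostic sense of what the t statistic measures, here is a minimal Python sketch of Welch's two-sample t statistic (the default behavior of R's t.test); the function name is mine:

```python
import math
import statistics

def welch_t(a, b):
    """Welch's two-sample t statistic: the difference in group means,
    scaled by the combined standard error of the two samples."""
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)  # sample variances
    se = math.sqrt(var_a / len(a) + var_b / len(b))
    return (mean_a - mean_b) / se
```

A |t| near 0 means the group means are indistinguishable relative to their spread; the p-value that R reports translates t into a probability. R's shapiro.test checks the normality assumption behind this comparison.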


Get ready, everyone! Over the past month, our very own William Stein has been creating an array of videos highlighting various aspects of CoCalc's compute server functionality! The series covers topics ranging from understanding memory usage to employing popular software images such as TensorFlow, Sage, and Lean on powerful GPU/CPU machines. William's comprehensive walkthroughs showcase CoCalc's brand-new capabilities for advanced mathematical research, machine learning, and data science!

Feel free to browse this curated playlist which houses these enlightening videos. Dive in and discover how to harness the full potential of CoCalc like never before! The power of CoCalc is at your fingertips - explore, learn, and elevate your experience! Browse the playlist.


The project software environment has been updated. Version 2024-05-13 is the default now: it includes R 4.4 as the default R. Many packages were updated as well.

Note: if you had installed R packages locally in your project before, you have to re-compile them.

The default "R Statistics" compute server image also includes R 4.4.

As usual, there are also a ton of upgrades for the Python 3 (system-wide) environment, and various underlying Linux packages.

If you run into problems, please let us know in support. You can also always switch back to the previous environment in Project Settings → Control → Software Environment and select "Ubuntu 22.04 // Previous".

If you are using GPUs on CoCalc, there's an entirely new cloud option you should see: Hyperstack.


Once you select Hyperstack after starting to create a compute server, click the A100 tag and you'll see this:


Note that for $3.60/hour you get an 80GB A100, and these are all standard instances. You can also see that, at least right now, many are available. Everything else works very similarly to Google Cloud, except that:

  • startup time is slower -- definitely expect about 5-10 minutes from when you click "Start" until you can use the compute server. However, it's very likely to work, unlike Google Cloud GPUs (especially spot instances). Google Cloud is extremely good for CPU, but not as good for GPU.

  • Many of the server configurations have over 500GB of very fast local ephemeral disk, in case you need that for scratch. It's ephemeral, so it goes away when you stop the server.

  • The local disk on the server should be as fast as or faster than Google Cloud's, but cheaper.

  • All network usage is free, whereas egress from Google Cloud is quite expensive.

  • There's a different range of GPUs. Sometimes there are a lot of H100s, but in the middle of the day on Wednesday there aren't; yesterday there were dozens of them.

  • By default only a Python (Anaconda) image and an Ollama image are visible, since they are small. When you select the Python image, you'll likely have to type conda install ... in a terminal to install the packages you need. If you check the "Advanced" checkbox when selecting an image, you can choose from the full range of images. However, the first startup of your server may be MUCH slower for big images (e.g., think "20-30 minutes" for the huge Colab image). Starting the server a second time is fast again.


  • Live disk enlarging does work, but due to Hyperstack's architecture it is limited to at most 25 enlargements.
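
A back-of-envelope cost check for the pricing above (a sketch: the $3.60/hour rate is from the A100 example, while the job length is a made-up illustration):

```python
def run_cost(hourly_rate, hours):
    """Total on-demand cost of keeping a compute server running,
    rounded to whole cents."""
    return round(hourly_rate * hours, 2)

# e.g., a 24-hour training run on the $3.60/hour 80GB A100:
a100_day = run_cost(3.60, 24)  # $86.40
```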



We add an on-prem compute server running on my MacBook Pro laptop to a CoCalc project, and use the compute server via a Jupyter notebook and a terminal. This involves creating an Ubuntu 22.04 virtual machine via multipass and pasting a line of code into the VM to connect it to CoCalc.


After using a compute server running on my laptop, I create another compute server running on Lambda Cloud. This involves renting a powerful server with an H100 GPU, waiting a few minutes for it to boot up, then pasting in a line of code. The compute server gets configured, starts up, and we are able to confirm that the H100 is available. We then type "conda install -y pytorch" to install PyTorch, and use Claude 3 to run a demo involving the GPU and train a toy model.
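
The H100 check from the video can be scripted. Here is a hedged Python sketch (the function name is mine) that reports what PyTorch sees and degrades gracefully when PyTorch or a GPU is absent:

```python
import importlib.util

def gpu_status():
    """Report whether a CUDA GPU is visible to PyTorch, if installed."""
    if importlib.util.find_spec("torch") is None:
        return "pytorch not installed"
    import torch  # only imported once we know it exists
    if torch.cuda.is_available():
        return f"cuda device: {torch.cuda.get_device_name(0)}"
    return "pytorch installed, but no GPU visible"
```

On the Lambda Cloud server from the video, this should name the H100; on a laptop without CUDA it reports that no GPU is visible.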


There are many ways to quickly launch Visual Studio Code (VS Code) on CoCalc.


Open a project, then with one click in the file explorer, launch VS Code running on the project. You can then install and use a Jupyter notebook inside VS Code, edit Python code, and use a terminal.

When you need more power, add a compute server to your project. For example, in the video we demo adding a compute server that has 128GB of RAM and the latest Google Cloud n4 machine type. It's a spot instance, which is great for a quick demo. We configure DNS and autorestart, launch our compute server, and watch it boot via the serial console. Once the server is running, launch VS Code with one click, use a Jupyter notebook, edit Python code, and open a terminal to confirm that the underlying machine has 128GB of RAM.

You can also make a CoCalc terminal that runs on the compute server by clicking "+New --> Linux Terminal", then clicking the Server button and selecting your compute server.

This costs just a few cents, as you can confirm using the "Upgrades" tab (and scrolling down). When you're done, deprovision the server, unless you need to keep data that is only on the server.

CoCalc now makes it very easy to run a hosted JupyterLab instance in the cloud, either a lightweight instance on our shared cluster, or a high powered instance on a dedicated compute server with a custom subdomain.

Check it out, or watch the video.


I saw a new announcement today about "Multibot chat on Poe": "Today we are adding an important new capability to Poe: multi-bot chat. This feature lets you easily chat with multiple models in a single thread. [...] Multi-bot chat is important because different models have different strengths and weaknesses. Some are optimized for specific tasks and others have unique knowledge. As you query a bot on Poe, you now can compare answers from recommended bots with one click, and summon any bot you prefer by @-mentioning the bot - all within the same conversation thread. This new ability lets you easily compare results from various bots and discover optimal combinations of models to use the best tool for each step in a workflow. [...] With Poe, you’re able to access all of the most powerful models, and millions of user-created bots built on top of them, all with a single $20/month subscription. "

Due to major recent work by Harald Schilly, CoCalc also has very similar functionality! Also, in CoCalc, you pay as you go for exactly the tokens you use with each model, and it typically costs our users far less than $20/month, with many of the models being free. Instead of paying $20/month, add $10 in credit to your CoCalc account (which never expires) and pay for exactly what you actually use.
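
To see how pay-as-you-go compares with a flat subscription, here is a toy estimator (the per-million-token prices and request volumes below are hypothetical parameters for illustration, not CoCalc's actual rates):

```python
def request_cost(tokens_in, tokens_out, price_in_per_m, price_out_per_m):
    """Cost in dollars of one LLM request under per-token billing.

    Prices are per million tokens; pass in whatever your model charges."""
    return tokens_in / 1e6 * price_in_per_m + tokens_out / 1e6 * price_out_per_m

# e.g., 500 requests/month of ~1000 tokens in and ~500 out,
# at hypothetical $5/$15 per million tokens:
monthly = 500 * request_cost(1000, 500, 5.0, 15.0)  # $6.25, well under $20
```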


Then ask a question, and follow up using DIFFERENT MODELS, regenerating the response with any model.


You can see all responses in the history:


The superpower of Poe's LLMs is their integration with web search. The superpower of CoCalc's LLMs is their integration with computation (including high powered HPC VMs, GPUs, Jupyter notebooks, LaTeX, R, etc.). For example, continuing our thread above:


But you can also generate code in Jupyter notebooks that run either in a lightweight shared environment or on high powered dedicated compute servers:

Finally, you can always check and see exactly how much every interaction costs:


Try it out today!!!

We just added RStudio support to CoCalc projects (restart your project and refresh your browser if this doesn't work):

Run RStudio directly in your project

Open the "Servers" Tab to the left, then scroll down and click the "RStudio" button:


In a second, RStudio Server will appear in another tab:


Simple as that. You can also run JupyterLab and VS Code just as easily.

Run RStudio on a compute server

If you need vastly more compute power (e.g., 80 cores for only $0.47/hour!!!), scroll up a little and create a compute server:




After that, create the compute server and when it starts up, click the https link. You may have to copy/paste a token to access the RStudio instance.