
CoCalc News
RSS Feed

Recent news about CoCalc. You can also subscribe via the RSS Feed or JSON Feed.

Over the past two days I finished and made live a new API key implementation; for example, this is now in account settings:


There is something similar in the settings page for all projects.

These new API keys have an expiration date and a name (which you can change at any time, and which may repeat); the secret key itself is not stored in the database (which is much more secure); and there are project-specific API keys that only work for API calls to a specific project, rather than for everything. I left the old API key functionality in place, with messages encouraging people to delete old keys, though they remain fully supported for now.

With the new API keys you can have up to 100 different keys active at once. A key can be set to expire at any time, and when it does it is automatically deleted. You can edit the expiration date and the name of a key at any time. It's a much better model. Behind the scenes we don't store the key in the database; instead, we store only a hash of it (the same SHA-512 with 1000 rounds and salt as for passwords), so we can confirm somebody knows their API key without the key itself ever being in the database; this is much more secure. I also really like that I can make a key with a 1-day expiration, play around with it, and know it's not just going to be a ticking time bomb.
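The hash-and-verify scheme above can be sketched like this. This is a minimal illustration with hypothetical function names, using PBKDF2-HMAC-SHA512 from Python's standard library as a stand-in for the exact salted SHA-512 variant CoCalc uses for passwords:

```python
import hashlib
import secrets

ROUNDS = 1000  # matches the "1000 rounds" mentioned above


def hash_api_key(key: str, salt: bytes) -> str:
    # PBKDF2-HMAC-SHA512: only this hash is ever stored, never the key itself
    return hashlib.pbkdf2_hmac("sha512", key.encode(), salt, ROUNDS).hex()


def new_api_key() -> tuple[str, bytes, str]:
    # Generate the secret, show it to the user exactly once,
    # and store only the salt and the hash in the database.
    key = "sk-" + secrets.token_urlsafe(32)
    salt = secrets.token_bytes(16)
    return key, salt, hash_api_key(key, salt)


def verify_api_key(presented: str, salt: bytes, stored_hash: str) -> bool:
    # Constant-time comparison of the recomputed hash against the stored one.
    return secrets.compare_digest(hash_api_key(presented, salt), stored_hash)


key, salt, h = new_api_key()
assert verify_api_key(key, salt, h)
assert not verify_api_key("wrong-key", salt, h)
```

The point of the design: even a full database dump never reveals a usable key, because only the salted hash is persisted.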

Read more about the API in our documentation.

The motivation for doing this is that project-specific API keys are needed for some new functionality we're implementing right now that will support connecting external computers to a CoCalc project to provide much more powerful compute. Among other things, this will greatly expand the sort of compute we can offer, including GPUs and other vastly more powerful options, and also support people plugging in their own compute resources.



Say hello to the new "flyout panels" - a side panel designed specifically for common aspects of CoCalc projects. Located right next to the vertical buttons on the left-hand side, this feature can be easily accessed by clicking on the "▸" icon. Once expanded, you'll have a compact representation of various project aspects at your fingertips.

With the flyout panel, you can now effortlessly explore files, conveniently check recently modified files, keep track of running processes, perform quick searches, and much more. This initial release is just the beginning, as we have plans to continuously enhance and refine the feature in the upcoming weeks.

Experience the efficiency and ease-of-use that the flyout panel brings to your projects on CoCalc. Try it out today and stay tuned for exciting updates in the near future!

Screenshots: the "Explore files" and "Running processes" flyout panels.

CoCalc's family of software environments has several lines. For the past few years, the default for a new project was the "Ubuntu 20.04" line, which received periodic updates. That changed today!

We're happy to announce that the Ubuntu 22.04 line of images is now the default for new projects. You can update your existing projects to use this new image, or switch back to 20.04, at any time in Project Settings → Control → Software Environment.

Most notably, the system-wide Python 3 environment is much more recent, several Octave kernels are available, and many small changes make this a much better environment for modern scientific computing.

Updates for 20.04 will become less frequent, and it will eventually be deprecated, just like the lines before it. However, we'll keep it around in case you depend on older software or on tools that did not make it into 22.04.

For more details you can study our software environment inventory.


"Neural AI Search" is now live in CoCalc. The actual application right now is minimal compared to what it could be. I just want to get the backend foundations in place, and make it so content starts getting indexed, before building a bunch of new frontend capabilities on this. Right now the only thing you can do is click on the Find page in a project, click "Neural Search" on the right, and do a search in that directory. It searches only Jupyter notebooks, task lists, chats, whiteboards, and slides that you have had open for at least 7.5 seconds since I made this live a few minutes ago. It then updates the backend search index as you edit them.

The potential with this is extensive, and this is just a VERY tiny step. E.g., the underlying thing could work across many projects whether or not they are running, and of course it would also be extremely useful to search only within a specific file (like this chat). Also, this provides the foundation to make it so when interacting with ChatGPT it can be aware of content across your files and in relevant technical documentation (e.g., sagemath docs from now instead of 2021).
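The core idea behind neural search, reduced to its essence, is to rank documents by vector similarity between their embeddings and the query's embedding. A toy sketch with made-up 3-dimensional vectors (real embeddings live in R^1536):

```python
import math


def cosine(u: list[float], v: list[float]) -> float:
    # cosine similarity: dot product normalized by vector lengths
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)


# Toy "index": text fragments mapped to dummy embedding vectors.
index = {
    "derivative of sin": [0.9, 0.1, 0.0],
    "grocery list": [0.0, 0.2, 0.9],
}


def search(query_vec: list[float], k: int = 1) -> list[str]:
    # Rank every indexed fragment by similarity to the query vector.
    ranked = sorted(index, key=lambda t: cosine(index[t], query_vec), reverse=True)
    return ranked[:k]


assert search([1.0, 0.0, 0.0]) == ["derivative of sin"]
```

A real vector database like qdrant does essentially this, but at scale, with indexing structures so it never scans every vector.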

Technical Architectural Remarks

The basic thing seems to work fine, and the design I finally came up with (after numerous painful iterations this week!) uses git- and sync-like trickery to be, I think, very efficient and robust, at the expense of a small chance ε of a wrong answer (which hardly matters for search).

In admin settings there is a new box:

When this is "no", everything is disabled, including any backend APIs and frontend UI. When set to "yes", a person can put in the address and API key of a qdrant server (hosted or self-run), and neural network search then works automatically. This involves three tables:

  • postgres: openai_embeddings_logs -- logs every call somebody makes to the OpenAI embeddings API, and how much it costs. It implements an "elaborate" throttling strategy to ensure that we don't spend too much...

  • postgres: openai_embeddings_cache -- a cache of the expensive-to-compute map from text to a vector in R^1536 that comes from the OpenAI embeddings API. Entries in this cache expire after 6 weeks of not being touched. That said, PostgreSQL seems to store vectors of doubles pretty compactly, and we aren't doing anything but using this as a key:value cache.

  • qdrant: cocalc -- a "vector collection" of embeddings and metadata
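A minimal sketch of how the embeddings cache layer works, with a dummy in-memory dict standing in for the postgres table and a deterministic fake in place of the real (and costly) OpenAI API call; all names here are hypothetical:

```python
import hashlib

# Stand-in for the postgres openai_embeddings_cache table:
# maps hash-of-text -> embedding vector.
openai_embeddings_cache: dict[str, list[float]] = {}


def fake_openai_embedding(text: str) -> list[float]:
    # Placeholder for the real OpenAI embeddings API call,
    # which returns a vector in R^1536 and costs money per token.
    digest = hashlib.sha1(text.encode()).digest()
    return [b / 255 for b in digest]  # deterministic dummy vector


def get_embedding(text: str) -> list[float]:
    # Cache key is a hash of the text, so identical content
    # is never sent to (or billed by) the API twice.
    key = hashlib.sha1(text.encode()).hexdigest()
    if key not in openai_embeddings_cache:
        openai_embeddings_cache[key] = fake_openai_embedding(text)
    return openai_embeddings_cache[key]


v1 = get_embedding("hello world")
v2 = get_embedding("hello world")
assert v1 is v2  # second call served from the cache
```

This also illustrates the robustness property described below: wiping the cache costs extra API calls but breaks nothing.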

Yes, this is all available in cocalc-docker.

The "robust" part of the design is that if you delete any data from any subset of the above tables, things will just keep humming along fine; there's no dependence. Delete some of the cache and we just pay more (and things are a little slower); delete some of the vector database and you'll just get fewer search results. This is very different from my original design, which tightly coupled qdrant and postgres in such a way that it was very easy for one to break the other.

The data model for qdrant uses a lot of techniques to ensure security and limited data access (similar to what we do with postgresql), which is fairly easy to do with qdrant, but NOT with more basic vector databases. It also wouldn't have worked with qdrant back in Nov 2022, since they have improved a lot recently.

The final piece of this whole puzzle is that we run qdrant itself in our Kubernetes cluster, with regular snapshots that we back up. Qdrant's design is very much NOT a pig: it's written in tight, memory-efficient Rust, and uses quantization to massively reduce the space used to store vectors, so I think it'll scale pretty well for us.

There's also the potential of providing this vector search capability via our API on a "pay for what you use" basis, and that could be of interest as its own product, since I developed a way to have a large number of independent, organized vector databases that are "multi-tenant", so the cost per user is excellent. It's something to explore, since it could be useful to sell to a lot of people. It's actually already available (for free), just not documented.

Hey folks! As the Chief Sales Officer for CoCalc, I am delighted to share my incredible experiences attending some of the most inspiring events across academia, industry, and government over the last month. It was a fantastic opportunity to participate in meaningful conversations and explore potential collaborations with the goal of breaking down silos and fostering innovation 🎉.


First, we kicked off the month at the Startup Grind Global Conference 2023 in Redwood City, exploring the power of Big Data and building recession-resilient startups. I connected with tech innovators and enterprising entrepreneurs, energized by the spirit of collaboration and shared insights.


Next, we ventured into the realm of high-energy physics at the American Physical Society (APS) April 2023 Meeting. We engaged in stimulating discussions on fusion experiments at Lawrence Livermore National Laboratory, diversity in STEM fields, and learned about the groundbreaking James Webb Space Telescope 🌌.


Then, I immersed myself in the world of data science at PyData Seattle 2023 💻. This exceptional event offered incredible talks and hands-on workshops covering a wide range of topics, from scaling Altair visualizations with VegaFusion to the open source quantum ecosystem.


Finally, I am currently attending JupyterCon 2023 in Paris, which CoCalc is proudly attending thanks to our contributions within the Jupyter ecosystem 🌐. It is a truly wonderful opportunity to engage with potential hires, partners, and other professionals in the data science field while strengthening CoCalc's brand visibility.

A heartfelt thank you to my connections, new friends, and colleagues I met along the way. As we continue our journey, let's pave the path for an even brighter, more collaborative future! 🤝💡

Stay up to date with our events here and keep the collaboration flowing!

Can't wait to see you all at the next events! Until then, stay curious and collaborative!

We have been adding many exciting new features to CoCalc recently! Over the last few months, we have introduced numerous enhancements and additions across various aspects of the site. Check out the highlights:

  1. 💡 Jupyter API and kernel pool: There have been significant improvements to Jupyter notebooks, including the introduction of the Jupyter API and a new kernel pool, providing a more seamless and efficient experience to users. You can also embed executable code anywhere in CoCalc where you use Markdown by just making a fenced code block.

  2. 📖 First Steps Guide: A helpful First Steps Guide is now available to guide newcomers and make it easier for them to dive into CoCalc. Click "Start the first steps guide" at the top of your CoCalc project.

  3. 🌟 Tab completion: Tab completion for LaTeX, Python, JavaScript, and other languages has been introduced, making coding and document editing faster and more convenient. Use the tab key when editing code or latex outside Jupyter.

  4. 🔧 "Help me fix this..." for LaTeX and Sage Worksheets: A very popular new feature using ChatGPT that helps you quickly identify and fix errors in LaTeX documents and Sage Worksheets, improving productivity. Whenever an error occurs, click a button and get a context-sensitive suggestion about how to fix it.

  5. 🔍 Better share server searching and sorting: It's now even easier to find and organize your work with improved search and sorting capabilities on the share server. Browse now! You can also run and edit code in any published notebook, directly from the share server, without having to sign in or make a copy.

  6. 📝 Task lists: Task lists have been upgraded to a frame editor, allowing you to view a single task list in multiple ways simultaneously. In particular, you can split your list horizontally or vertically and set separate search parameters for each frame. You can also easily analyze any subset of tasks using ChatGPT.

  7. 🤖 OpenAI ChatGPT integration: You can now integrate OpenAI ChatGPT with your Jupyter notebooks or Linux Terminal in CoCalc, opening up endless possibilities to leverage AI in your work.

  8. 🎙️ CoCalc Slides: Create amazing presentations with CoCalc Slides, which supports Jupyter code and LaTeX math in your slides. These are based on the whiteboard, but with a presentation mode and fixed-size slides.

  9. Better display of points in time: Easily switch between relative and absolute time displays for improved clarity and understanding, anywhere that times are displayed in CoCalc.

  10. 🎛️ Vouchers: Transferable codes for licenses can now be renewed later, giving more flexibility to users. Did you read this far? If so, send us a message to try out a voucher for free!

These developments make CoCalc an even more powerful and user-friendly platform for those seeking an all-in-one fully collaborative scientific computing environment! 🎉


UPDATE: this wasn't that popular, and we are rolling out Guided Tours, so this is now deprecated.

I just spent the morning bringing back the "First steps guide".


You probably have it off if you're reading this, but to see it check this box in account prefs:


It looks like this at the top of any project:


When clicked, it copies these files over:

NOTE: If you haven't restarted your project, then it copies files from the library instead, which are a lot older.

The content is still relatively dated and it'll get further updates soon.


By default, projects run the "Ubuntu 20.04" line of software environments. Soon, the default for newly created projects will change to the Ubuntu 22.04 line. It offers a similar software stack, but with many updates and newer versions. Today brought yet another update to it, and now, e.g., Octave 8.2 is available.

You can try 22.04 right now by going to Project Settings → Project Control → Software Environment and selecting "Ubuntu 22.04 (Current)". You can switch back at any time as well.

As always, please let us know about issues you encounter.

Exciting update: Squashed LaTeX error log bugs and integrated ChatGPT "Help me fix this..." buttons into our LaTeX editor for real-time assistance! 💻🚀🙌

This was partly inspired by Terry Tao's recent remark about ChatGPT: "Just being able to resolve >90% of LaTeX compilation issues automatically would be wonderful..."


When clicked, you get a chat like this:


The "Details" contain the error, what you're doing (LaTeX), and a selection from the file to help ChatGPT better assist you.

Here's another example:


As a reminder, TimeTravel is another feature of CoCalc that helps in fixing errors -- if you had everything compiling 5 minutes ago, just zip back in time and see what you did to mess things up.



I have just released a Jupyter kernel pool optimization for all projects and the use of Jupyter notebooks. I wrote and deleted this 3 or 4 times before getting something that works robustly (I hope). The final version should not break anything or negatively impact any functionality, as it falls back to not using a pool in every subtle case where something could go wrong (e.g., environment variable customization changes, etc.).


  • To benefit from this optimization, you need to restart your project (or start a new one). Refreshing your browser will not make any difference.

  • The very first time you start your project and open a notebook, there will be no difference except for one extra kernel starting in the background (that's the pool). Moreover, a file ~/.config/cocalc-jupyter-pool will be created with the parameters of that kernel.

  • If you then open another notebook with the same kernel, running code should start MUCH more quickly. Additionally, "restart and run all" in a notebook should be significantly faster than before (e.g., less than a second instead of 10 seconds, say, for Sage!).

  • If your project stops and you start it up again later, your first use of a kernel should be MUCH faster, assuming you're using the same kernel as before and you haven't changed any custom environment variables. Under the hood, whatever was stored in ~/.config/cocalc-jupyter-pool will be started next time.
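The pool-with-fallback logic described in the points above might be sketched like this. All names here are hypothetical, and real kernel startup is replaced by a stub; the key behavior is that a pooled kernel is reused only on an exact match of kernel name and custom environment, and ANY mismatch falls back to a fresh start:

```python
from dataclasses import dataclass, field


@dataclass(frozen=True)
class KernelSpec:
    name: str
    env: tuple  # sorted (key, value) pairs of custom environment variables


@dataclass
class KernelPool:
    pooled: dict = field(default_factory=dict)   # at most one prestarted kernel per spec
    started: list = field(default_factory=list)  # log of actual (slow) kernel starts

    def _start(self, spec: KernelSpec) -> str:
        self.started.append(spec)  # stands in for the slow real startup
        return f"kernel<{spec.name}>"

    def prestart(self, spec: KernelSpec) -> None:
        # Start one extra kernel in the background, keyed by its exact spec.
        self.pooled[spec] = self._start(spec)

    def get(self, spec: KernelSpec) -> str:
        # Reuse the pooled kernel only on an exact match;
        # fall back to a fresh start on ANY difference (e.g., changed env vars).
        if spec in self.pooled:
            return self.pooled.pop(spec)
        return self._start(spec)


pool = KernelPool()
sage = KernelSpec("sagemath", ())
pool.prestart(sage)
k = pool.get(sage)  # served instantly from the pool, no new start
k2 = pool.get(KernelSpec("sagemath", (("FOO", "1"),)))  # env changed: fresh start
```

The conservative fallback is what makes the optimization safe: the worst case is simply the old, pool-free behavior.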

That's it! One interesting thing I needed was code to generate code for each language that would set custom environment variables on the fly. I had something that would generate code to change directories, and I pointed GPT-3.5 at that code and said, "rewrite this to instead generate code for custom environment variables..." and it worked perfectly on the first try.
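That code-generating code might look roughly like this (a hypothetical sketch, not CoCalc's actual implementation): given a target language and a dict of custom environment variables, emit a snippet in that language which sets them at runtime.

```python
def env_setup_code(language: str, env: dict[str, str]) -> str:
    # Emit a snippet, in the kernel's own language, that sets the
    # given custom environment variables when executed.
    if language == "python":
        lines = ["import os"] + [
            f"os.environ[{k!r}] = {v!r}" for k, v in env.items()
        ]
    elif language == "r":
        lines = [f'Sys.setenv({k} = "{v}")' for k, v in env.items()]
    elif language == "julia":
        lines = [f'ENV["{k}"] = "{v}"' for k, v in env.items()]
    else:
        raise ValueError(f"unsupported language: {language}")
    return "\n".join(lines)


print(env_setup_code("python", {"MY_VAR": "42"}))
# import os
# os.environ['MY_VAR'] = '42'
```

Generated snippets like this can be run invisibly inside an already-started pooled kernel, so the pool doesn't need to be restarted just because a user changed an environment variable.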