CoCalc News
SageMath, Inc. is pleased to announce successfully passing the SOC 2 Type II audit!
Service Organization Controls 2 (SOC 2) is a framework governed by the American Institute of Certified Public Accountants (AICPA). In a SOC 2 audit, an independent service auditor reviews our policies, procedures, and evidence to determine whether our controls are designed and operating effectively. A SOC 2 report communicates our commitment to data security and the protection of our customers' information.
Request access to the report and view the current status of controls at https://trust.cocalc.com/
Come Visit CoCalc's Exhibition at INFORMS 2024
We wanted to send out this quick message to let everyone know that CoCalc will be hosting booth #208 at INFORMS 2024 in Seattle, WA!
We will be giving live demos of our platform in the exhibit hall all week and presenting a Technology Showcase on Tuesday, Oct. 22nd, at 2:15 p.m. PT.
As a short aside, you might also be interested to know that on-demand H100 GPUs start at $1.98 per hour via our compute server functionality (all metered per second). Other, more budget-friendly options are available as well.
We are excited to announce that CoCalc has launched a brand-new "How-To" tutorial series on our YouTube Channel! These tutorials are designed to help you get the most out of our platform by providing quick, two-minute videos that cover everything from navigating the CoCalc UI to auto-generating LaTeX/Markdown documents and Jupyter Notebooks.
Our goal is to make your experience with CoCalc as smooth and efficient as possible. To ensure you stay up-to-date with the latest tips and tricks, we highly encourage you to subscribe! That way, you'll never miss a new tutorial and can always access the latest content right as it's released.
Please let us know what you want to see next! For more in-depth information, don't forget to visit our detailed Documentation.
Also, if you have any questions or want to chat with us directly, don't hesitate to book a video chat with our team.
Our most recent update to the LaTeX editor shows you more information about a running build process and lets you stop it if it takes too long or causes problems. The build frame now displays memory and CPU usage, as well as the tail of the log, while the job is running.
Under the hood, this is accomplished by an improved `/api/v2/exec` endpoint. The parameter `async_call` returns an ID, which you can use to monitor and control the underlying process. See the `/api/v2/exec` documentation for more details.
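For example, here is a rough sketch in Python of how a client might start and then poll such a build. This is hedged: the polling fields `async_get`/`async_stats` and the response field `job_id` are my reading of the API docs, so treat them as assumptions and consult the `/api/v2/exec` documentation for the authoritative interface.

```python
import time
import requests

API_KEY = "sk-..."                   # hypothetical placeholder for your CoCalc API key
PROJECT_ID = "your-project-uuid"     # hypothetical placeholder
URL = "https://cocalc.com/api/v2/exec"
auth = (API_KEY, "")                 # API key as the basic-auth username

# Start the job; async_call=True returns an ID instead of blocking until completion.
start = requests.post(URL, auth=auth, json={
    "project_id": PROJECT_ID,
    "command": "pdflatex",
    "args": ["paper.tex"],
    "async_call": True,
}).json()
job_id = start["job_id"]             # response field name is an assumption

# Poll for status, output, and stats (CPU and memory usage) until the job finishes.
while True:
    status = requests.post(URL, auth=auth, json={
        "project_id": PROJECT_ID,
        "async_get": job_id,         # parameter name is an assumption
        "async_stats": True,
    }).json()
    if status.get("state") != "running":
        break
    time.sleep(1)

print(status.get("stdout"))
```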
SageMath 10.4 is now available in all CoCalc projects and via the SageMath compute server images.
Older versions of Sage remain available. For Sage Worksheets and on the command line, you can configure the version of `sage` in your project by running `sage_select ...` in a Terminal. For Jupyter notebooks, you can switch to the newer kernel at any time!
You can now use MermaidJS anywhere you use Markdown in CoCalc: text between code cells in Jupyter notebooks, whiteboards, slideshows, Markdown files, etc. MermaidJS is a diagramming and charting tool that renders Markdown-inspired text definitions into diagrams dynamically. Just put the mermaid diagram description in a fenced code block, like this (a simple flowchart as an illustrative stand-in; any Mermaid diagram works):
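```mermaid
graph TD
  A[Write Markdown] --> B{Contains a mermaid fence?}
  B -- yes --> C[Rendered as a diagram]
  B -- no --> D[Shown as plain text]
```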
and CoCalc will render it as a diagram in place.
Using Mermaid in exactly this way is also fully supported in JupyterLab and on GitHub. Moreover, if you publish documents on the CoCalc share server, Mermaid also gets rendered properly, e.g., see these examples:
CoCalc's Jupyter notebooks now have the latest version of IPyWidgets and vastly improved support for custom widgets! We spent much of July doing a complete rewrite of the IPyWidgets implementation in CoCalc's Jupyter notebook to fully support the latest version of widgets and to support arbitrary custom widgets. This is done and now live, and it is a major improvement. All widget layouts should now be exactly the same as upstream, and custom widgets work as long as they are hosted on the jsDelivr CDN. Before this, almost no custom widgets were supported (basically only k3d); now almost all custom widgets work, including ipyvolume, ipympl, the newest k3d, bqplot, and much more.
Widgets in CoCalc work almost the same as in the latest official upstream version. The main difference is that widgets support realtime collaboration: the full state of each widget is stored on the backend server, so if multiple people use a notebook at once (or you open the same notebook in multiple browsers), the widget state stays in sync. You also don't have to re-evaluate code for widgets to appear after refreshing your browser.
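For instance, this standard ipywidgets snippet (a minimal sketch using only upstream APIs, nothing CoCalc-specific) renders and behaves the same in CoCalc as in JupyterLab, with the slider position synced between collaborators:

```python
import ipywidgets as widgets
from IPython.display import display

slider = widgets.IntSlider(value=5, min=0, max=10, description="n:")
label = widgets.Label()

def on_change(change):
    # runs whenever the slider value changes
    label.value = f"n squared is {change['new'] ** 2}"

slider.observe(on_change, names="value")
display(slider, label)
```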
- An open source library that came out of this project: https://github.com/sagemathinc/cocalc-widgets
- Upstream IPyWidgets documentation (all of its examples are now supported): https://ipywidgets.readthedocs.io/en/stable/
- CoCalc widget docs: https://doc.cocalc.com/jupyter-enhancements.html#widgets-in-cocalc
#feature Several times in the last few weeks I've "lost" a file I wanted to find in CoCalc and wasn't quite sure where it was. Yes, I could open 10 different projects and search the project logs, but that is tedious. So I directly queried the file_access_log table in our database and quickly found the file; e.g., this happened to me today trying to find a tex file. So... I figured everybody using CoCalc might want to do this, and added it as a feature in the upper right of the projects page:
Basically, you type a string in that search box, hit return, and see the last 100 files (over the last year) that you edited whose names contain that string. You can use any PostgreSQL ILIKE pattern, e.g., % as a wildcard. It's just a tiny thing that was easy to implement, but it could really help when you're not sure where a file is.
The new WireGuard encrypted VPN between all compute servers in a project is now live and fully working in all the testing I've done. This is a very critical foundation for building other things: clusters, the distributed filesystem, etc.
If you want to try the encrypted WireGuard VPN, just start two compute servers in the same project. Then type `more /etc/hosts` and see that `compute-server-[n]` resolves to the VPN address of that compute server (which will be of the form 10.11.x.y). Run `apt-get install -y iputils-ping`, and then you can ping from one server to another, e.g., `ping compute-server-[n]` (the commands are collected in a short sketch after the list below). Also, if you set a subdomain so that https://foo.cocalc.cloud works, then you can use `foo` as a name to connect to. The exciting thing is that:
- all ports are open on the VPN
- all traffic is fully encrypted
- only compute servers in the same project have access to the VPN
- this fully works across clouds, i.e., some nodes on Google Cloud and some on Hyperstack, and they all connect to each other in a unified way
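Putting the steps from above together on one of the servers (a sketch; `compute-server-[n]` stands for the actual name you'll see in `/etc/hosts`):

```bash
# On compute server A, with compute server B running in the same project:
more /etc/hosts                    # compute-server-[n] maps to a 10.11.x.y VPN address
apt-get install -y iputils-ping    # ping isn't installed by default
ping compute-server-[n]            # traffic between the servers goes over the encrypted VPN
```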
Note that on-prem still has one limitation: on-prem nodes can connect to all cloud nodes, and all cloud nodes can connect to on-prem nodes, but on-prem nodes can't connect to each other. Making that work in general is complicated and expensive, requiring TURN servers, so we're not doing it for now; some special cases will be supported in the future. This isn't the highest priority, since probably nobody but me uses on-prem with more than one server so far...
Anyway, now that this is in place, I think implementing our new high-performance distributed filesystem will be possible! Stay tuned.
We released another round of large language model updates. You can now use GPT-4o Omni and Gemini 1.5 Flash. Both are not only very capable, but also extremely quick with their replies.
Here is an example of how I improved a plot of a t-test using R in a Jupyter Notebook. The plot is a visual check of whether the data really is significantly different, but it looks a bit boring:
Via AI Tools → Improve, I can tell GPT-4 Omni to make this a violin plot and more colorful.
I get a response and can review the changes in the side-chat. The result looks like this:
Much better!
OK, but wait, what's the t-test? Here, I'm asking Gemini Flash to explain it to me; there was also something called shapiro. To learn more, I opened a new chat and asked away. I told Gemini to also show me how to do this in R, which I can run directly in the chat.
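For context, here is roughly what that looks like in R (a minimal sketch with synthetic data, not the exact code from my notebook; the Shapiro-Wilk test checks the normality assumption behind the t-test):

```r
set.seed(1)
x <- rnorm(50, mean = 10)   # synthetic sample 1
y <- rnorm(50, mean = 11)   # synthetic sample 2

shapiro.test(x)   # Shapiro-Wilk: is each sample plausibly normal?
shapiro.test(y)
t.test(x, y)      # Welch two-sample t-test

# Violin plot as a visual check that the groups really differ:
library(ggplot2)
df <- data.frame(value = c(x, y), group = rep(c("x", "y"), each = 50))
ggplot(df, aes(x = group, y = value, fill = group)) +
  geom_violin() +
  geom_boxplot(width = 0.1)
```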