
CoCalc News

Recent news about CoCalc. You can also subscribe via the RSS Feed or the JSON Feed.

If you are using GPUs on CoCalc, there's an entirely new cloud option you should check out, called Hyperstack:

image

Once you select Hyperstack when creating a compute server, click the A100 tag and you'll see this:

image

Note that for $3.60/hour you get an 80GB A100, and these are all standard instances. You can also see that, at least right now, many are available. Everything else works very similarly to Google Cloud, except that:

  • startup time is slower -- expect about 5-10 minutes from when you click "Start" until you can use the compute server. However, startup is very likely to succeed, unlike with Google Cloud GPUs (especially spot instances). Google Cloud is extremely good for CPU, but not as good for GPU.

  • Many of the server configurations have over 500GB of very fast local ephemeral disk, in case you need scratch space. Since the disk is ephemeral, it goes away when you stop the server.

  • The local disk on the server should be as fast as or faster than Google Cloud's, but cheaper.

  • All network usage is free, whereas egress from Google Cloud is quite expensive.

  • There's a different range of GPUs, and availability fluctuates: sometimes there are dozens of H100s available, and sometimes there are none.

  • By default only a Python (Anaconda) image and an Ollama image are visible, since they are small. When you select the Python image, you'll likely have to type conda install ... in a terminal to install the packages you need (see the sketch after this list). If you click the "Advanced" checkbox when selecting an image, you can select from the full range of images. However, the first startup of your server may be MUCH slower for big images (e.g., think "20-30 minutes" for the huge Colab image). Starting the server a second time is fast again.

image

  • Live disk enlarging does work, but only up to 25 times per server, due to Hyperstack's architecture.
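For example, with the Python (Anaconda) image, installing a few packages from a terminal on the compute server looks like this (the package list is just an illustration; install whatever you actually need):

  conda install -y numpy pandas matplotlib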

VIDEO: https://youtu.be/NkNx6tx3nu0

LINK: https://github.com/sagemathinc/cocalc-howto/blob/main/onprem.md

We add an on-prem compute server running on my MacBook Pro laptop to a CoCalc (https://cocalc.com) project, and use the compute server via a Jupyter notebook and a terminal. This involves creating an Ubuntu 22.04 virtual machine via multipass, then pasting a line of code into the VM to connect it to CoCalc.
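If you want to try this yourself, the multipass part looks roughly like the following sketch (the VM name and resource sizes are example values; the actual connection command is generated for you in the compute server's configuration dialog):

  # create an Ubuntu 22.04 VM -- example sizes, adjust to taste
  multipass launch 22.04 --name cocalc-compute --cpus 4 --memory 8G --disk 40G
  # open a shell inside the VM
  multipass shell cocalc-compute
  # inside the VM: paste the one-line connection command shown by CoCalc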

image

After using a compute server running on my laptop, I create another compute server running on Lambda cloud (https://lambdalabs.com/). This involves renting a powerful server with an H100 GPU, waiting a few minutes for it to boot, then pasting in a line of code. The compute server gets configured, starts up, and we confirm that the H100 is available. We then type "conda install -y pytorch" to install PyTorch, and use Claude 3 to run a demo involving the GPU and train a toy model.
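A minimal way to confirm the GPU is visible on the compute server (assuming the PyTorch install above succeeded):

  # show the GPU from the driver's perspective
  nvidia-smi
  # check that PyTorch can see the H100
  python -c "import torch; print(torch.cuda.is_available(), torch.cuda.get_device_name(0))"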

jupyter
vscode
2024-04-18

There are many ways to quickly launch Visual Studio Code (VS Code) on https://cocalc.com.

VIDEO: https://youtu.be/c7XHYBDTplw

Open a project on https://cocalc.com, then with one click in the file explorer, launch VS Code running on the project. You can then install and use a Jupyter notebook inside VS Code, edit Python code, and use a terminal.

When you need more power, add a compute server to your project. For example, in the video we demo adding a compute server with 128GB of RAM using the latest Google Cloud n4 machine type. It's a spot instance, which is great for a quick demo. We configure DNS and autorestart, launch the compute server, and watch it boot via the serial console. Once the server is running, launch VS Code with one click, use a Jupyter notebook, edit Python code, and open a terminal to confirm that the underlying machine has 128GB of RAM.

You can also make a CoCalc terminal that runs on the compute server by clicking "+New --> Linux Terminal", then clicking the Server button and selecting your compute server.

This costs just a few cents, as you can confirm using the "Upgrades" tab (and scrolling down). When you're done, deprovision the server, unless you need to keep data that is only on the server.

CoCalc now makes it very easy to run a hosted JupyterLab instance in the cloud, either a lightweight instance on our shared cluster, or a high powered instance on a dedicated compute server with a custom subdomain.

Check out https://github.com/sagemathinc/cocalc-howto/blob/main/jupyterlab.md or the video at https://youtu.be/LLtLFtD8qfo

ai
llm
python
2024-04-16

I saw a new announcement today about "Multibot chat on Poe": "Today we are adding an important new capability to Poe: multi-bot chat. This feature lets you easily chat with multiple models in a single thread. [...] Multi-bot chat is important because different models have different strengths and weaknesses. Some are optimized for specific tasks and others have unique knowledge. As you query a bot on Poe, you now can compare answers from recommended bots with one click, and summon any bot you prefer by @-mentioning the bot - all within the same conversation thread. This new ability lets you easily compare results from various bots and discover optimal combinations of models to use the best tool for each step in a workflow. [...] With Poe, you’re able to access all of the most powerful models, and millions of user-created bots built on top of them, all with a single $20/month subscription. "

Due to major recent work by Harald Schilly, https://CoCalc.com also has very similar functionality! Moreover, in CoCalc you pay as you go for exactly the tokens you use with each model, which typically costs our users far less than $20/month, and many of the models are free. Instead of paying $20/month, add $10 in credit to your CoCalc account (credit never expires) and pay only for what you actually use.

image

Then ask a question, follow up using DIFFERENT MODELS, and regenerate the response with any model.

image

You can see all responses in the history:

image

The superpower of poe.com's LLMs is their integration with web search. The superpower of CoCalc.com's LLMs is their integration with computation (including high-powered HPC VMs, GPUs, Jupyter notebooks, LaTeX, R, etc.). For example, continuing our thread above:

image

But you can also generate code in Jupyter notebooks, running either in a lightweight shared environment or on high-powered dedicated compute servers:

Finally, you can always check and see exactly how much every interaction costs:

image

Try it out today!!!

We just added RStudio support to CoCalc projects (restart your project and refresh your browser if this doesn't work):

Run RStudio directly in your project

Open the "Servers" Tab to the left, then scroll down and click the "RStudio" button:

image

In a second, RStudio Server will appear in another tab:

image

Simple as that. You can also run JupyterLab and VS Code just as easily.

Run RStudio on a compute server

If you need vastly more compute power (e.g., 80 cores for only $0.47/hour!!!), scroll up a little and create a compute server:

image

then:

image

After that, create the compute server and when it starts up, click the https link. You may have to copy/paste a token to access the RStudio instance.

image

sagemath
software
2024-03-25

The project software environment has been updated. Version 2024-03-25 is now the default, and it includes SageMath 10.3 as the default Sage. As usual, you can still use older versions by switching to a different Jupyter kernel, or use the sage_select command-line utility to change which version of sage is actually running.

As usual, there are also a ton of upgrades for the Python 3 (system-wide) environment, R, and various underlying Linux packages.

If you run into problems, please let us know in support. You can also always switch back to the previous environment in Project Settings → Control → Software Environment and select "Ubuntu 22.04 // Previous".


Update:

2024-03-29: a small patch update has been released, which mainly fixes a pandas vs. openpyxl incompatibility involving reading *.xlsx files.

If you are running a compute server, click "Edit" (or "Details"), then scroll down to the new "Applications" section, and in most cases you'll find three new buttons -- "JupyterLab", "VS Code" and "X11 Desktop".

image

Click a button and CoCalc installs and runs JupyterLab, VS Code, or an X11 Desktop directly on the compute server.

image

If your compute server is geographically close to you, then using this application will have low latency.

Each application runs on the compute server and has full access to your files and the compute server's resources. Any project collaborator can also access this link. Moreover, if you share the link with the auth token, then anybody you share it with can use the app (even if they do not have a CoCalc account).

For JupyterLab, you must configure a DNS subdomain, which is easy to do in the Network section directly above:

image

For the X11 Desktop, almost no applications are installed by default. Fortunately, you can use apt-get install ... to install apps. For example, after apt-get install gimp, you can run gimp:

image
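In a terminal on the compute server, that looks like this (gimp is just the example from above; any Ubuntu package works the same way):

  # refresh the package index, then install and launch gimp
  apt-get update
  apt-get install -y gimp
  gimp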

compute
2024-03-14

You can now run arbitrary x86 virtual machines inside compute servers on https://cocalc.com.

Select an Intel machine type, e.g., n2-standard-2, then scroll down and check "Enable Nested Virtualization":

image

image
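Once the server is running, a quick sanity check from a terminal confirms that hardware virtualization is exposed to the VM (vmx is the Intel VT-x CPU flag; a nonzero count means nested virtualization is available):

  grep -Ec '(vmx|svm)' /proc/cpuinfo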

There are now 3 new compute server images:

  • Anaconda

  • JupyterHub

  • Kubernetes Node

image

Anaconda

The Anaconda image is a lightweight image with the conda command installed and configured (via mambaforge), with two channels, anaconda and conda-forge, enabled by default. You get Python 3.11 and can very easily install packages into your compute server's environment using the conda command, e.g., to install Matplotlib:

(compute-server-1540) ~/anaconda$ conda install matplotlib

The packages you install are stored in /conda only on the compute server, so installing and using the packages is fast, and if you make the compute server disk large, you can install many packages.
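To see what is configured, you can ask conda directly (a quick check; the exact output will vary with the image version):

  # list the enabled channels (anaconda and conda-forge by default)
  conda config --show channels
  # show environment and package cache locations (under /conda)
  conda info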

image

JupyterHub

The JupyterHub image is a single-node Kubernetes install of JupyterHub, which can be fully customized by you exactly as explained in the official docs (or email [email protected] for support!). Click to create it, and wait for everything to install. It can take several minutes to start the first time, so please be patient. There is a random registration token that has to be entered to connect to JupyterHub; once you do that, the default auth is that anybody can sign in with any login/password (that's just the JupyterHub default). The default image is also very simple, but you can easily change it as documented above.

This is a single node deployment by default, but scaling up to multiple nodes does work, though it requires some copy/paste on the command line. (We will automate this in the future.)

Kubernetes Node

You can create a Kubernetes node. This is a single-node Kubernetes cluster by default. However, you can join it to an existing cluster by following the microk8s directions, as in the sketch below. E.g., you could expand a JupyterHub install to have multiple nodes.
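With microk8s, joining a cluster roughly works as follows (a sketch of the standard microk8s workflow; the actual IP and token in the join command are printed by add-node):

  # on an existing cluster node: generate a join invitation
  microk8s add-node
  # add-node prints a command like the one below; run it on the new node
  microk8s join <ip>:25000/<token>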