<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <atom:link href="https://cocalc.com/news/rss.xml" rel="self" type="application/rss+xml"/>
    <title>CoCalc News</title>
    <description>News about CoCalc. Also available at https://cocalc.com/news</description>
    <link>https://cocalc.com/news/rss.xml</link>
    <pubDate>Sun, 12 Apr 2026 03:10:00 GMT</pubDate>
    <item>
      <title><![CDATA[Coding Agent]]></title>
      <link>https://cocalc.com//news/coding-agent-92</link>
      <description><![CDATA[<p>CoCalc now includes a built-in <strong>AI Coding Agent</strong>: an LLM-powered assistant that lives right inside your editor's side panel.</p>
<p><img src="https://storage.googleapis.com/cocalc-extra/cocalc-code-assist-jupyter-20260331.png" alt="AI Coding Agent in a Jupyter Notebook"></p>
<h2>What it does</h2>
<p>Instead of just chatting about code, the Coding Agent can <strong>directly edit your files</strong>. Ask it to fix a bug, refactor a function, or add error handling, and it responds with precise, line-level edits that you review as diffs before applying.</p>
<h2>Key features</h2>
<ul>
<li><strong>Works everywhere</strong> — Python, R, Julia, JavaScript/TypeScript, C/C++, HTML, Markdown, and many more file types, plus Jupyter notebooks.</li>
<li><strong>Context-aware</strong> — The agent sees your cursor position, selected text, and surrounding code. No need to copy-paste context manually.</li>
<li><strong>Safe edits</strong> — Either trust it with auto-accept (you can always revert using TimeTravel), or review every change before it is applied.</li>
<li><strong>Shell commands</strong> — The agent can suggest terminal commands (install packages, run tests) that you confirm before execution.</li>
<li><strong>Jupyter integration</strong> — In notebooks, the agent can create, edit, and run cells. Use the &quot;Generate&quot; button between cells or per-cell tools (Explain, Fix, Modify) to open the agent with context pre-filled.</li>
<li><strong>Session history</strong> — Conversations are organized into &quot;turns&quot;. You can revisit, rename, or start fresh.</li>
<li><strong>Cost estimation</strong> — See estimated token usage before sending each message.</li>
</ul>
<p>Open the <strong>Assistant</strong> tab in the side chat panel of any code file or notebook to get started.</p>
<hr>
<h2>LaTeX Agent</h2>
<p>Similar to the Jupyter and code editors, there is also an Assistant tab in <code>.tex</code> files. It understands LaTeX document structure and applies edits that keep your document valid.</p>
<p><img src="https://storage.googleapis.com/cocalc-extra/cocalc-code-assist-latex-20260331.png" alt="AI Agent editing a LaTeX document"></p>
<h2>LaTeX-specific capabilities</h2>
<ul>
<li><strong>Environment-aware edits</strong> — When a change touches a theorem, proof, align block, or math environment, the agent includes enough surrounding context to ensure the result is syntactically valid.</li>
<li><strong>Minimal but correct</strong> — Edits are as small as possible, but never at the cost of leaving broken LaTeX.</li>
<li><strong>Instant error fixing</strong> — When a LaTeX build fails, click <strong>&quot;Help me fix&quot;</strong> in the error gutter — the agent analyzes the error and proposes a fix. Enable <strong>auto-accept edits</strong> and compilation errors get resolved almost instantly: click, fix, done.</li>
</ul>
<p>This works alongside CoCalc's existing LaTeX build system, and with auto-build on save enabled, it can speed up your LaTeX workflow significantly.</p>
<p>Open any <code>.tex</code> file and switch to the <strong>Assistant</strong> tab to try it out.</p>
]]></description>
      <pubDate>Thu, 02 Apr 2026 14:27:07 GMT</pubDate>
      <guid>https://cocalc.com/news/92</guid>
    </item>
    <item>
      <title><![CDATA[Software Update 2026-03-26]]></title>
      <link>https://cocalc.com//news/software-update-2026-03-26-91</link>
      <description><![CDATA[<p>We just released a software update after a lengthy break. It contains updates to essentially all of the software stacks we offer. If you need to roll back to the state before today's update, select <code>Ubuntu 24.04 (2025-08-27)</code> in Project Settings → Project Control → Software Environment.</p>
]]></description>
      <pubDate>Thu, 26 Mar 2026 14:26:14 GMT</pubDate>
      <guid>https://cocalc.com/news/91</guid>
    </item>
    <item>
      <title><![CDATA[Pre-Alpha Testing of a Locally Installable Free Version of CoCalc]]></title>
      <link>https://cocalc.com//news/pre-alpha-testing-of-locally-installable-free-version-of-cocalc-90</link>
      <description><![CDATA[<p>If you're interested in trying out an early, single-user alpha version of CoCalc that you can install on your laptop or a server, email us at <a href="mailto:help@sagemath.com">help@sagemath.com</a>, or just try it out [1]. You should expect this to be &quot;alpha quality&quot;, but we would be interested in feedback to help with development. The complete source code is visible at [3], in case you are curious about how it works.</p>
<p>There will also be very early, unstable alpha releases of CoCalc Launchpad at [2]; this is a <em>multiuser</em> version of CoCalc that requires connecting external VMs for running code.</p>
<p>Email <a href="mailto:help@cocalc.com">help@cocalc.com</a> and ask to be added to the official alpha testers group if you want more details, guidance, or to provide feedback.</p>
<p>[1] <a href="https://software.cocalc.ai/software/cocalc-plus/index.html">https://software.cocalc.ai/software/cocalc-plus/index.html</a></p>
<p>[2] <a href="https://software.cocalc.ai/software/cocalc-launchpad/index.html">https://software.cocalc.ai/software/cocalc-launchpad/index.html</a></p>
<p>[3] <a href="https://github.com/sagemathinc/cocalc-ai">https://github.com/sagemathinc/cocalc-ai</a></p>
]]></description>
      <pubDate>Fri, 06 Feb 2026 03:54:27 GMT</pubDate>
      <guid>https://cocalc.com/news/90</guid>
    </item>
    <item>
      <title><![CDATA[MCP Server]]></title>
      <link>https://cocalc.com//news/mcp-server-89</link>
      <description><![CDATA[<p>CoCalc now has an updated <a href="https://github.com/sagemathinc/cocalc/tree/master/src/python/cocalc-api">API client</a> with preliminary support for connecting to an MCP server. After you install cocalc-api from source and <a href="https://github.com/sagemathinc/cocalc/blob/master/src/python/cocalc-api/src/cocalc_api/mcp/README.md">configure the MCP server</a>, you can talk to your projects.</p>
<p>Here is a simple demo example:</p>
<p><img src="https://cocalc.com/share/raw/bd967854f0d0134257f9e04de1b5afe12cf4487a/mcp-cocalc.png" alt=""></p>
<p>This indeed modified a file in my project:</p>
<p><img src="https://cocalc.com/share/raw/75770fa26c0015d3d9a57f4775be103a78167127/mcp-demo-data01.png" alt=""></p>
]]></description>
      <pubDate>Thu, 11 Dec 2025 10:48:41 GMT</pubDate>
      <guid>https://cocalc.com/news/89</guid>
    </item>
    <item>
      <title><![CDATA[CoCalc+: very easy way to run CoCalc on your laptop or a remote server]]></title>
      <link>https://cocalc.com//news/cocalc-very-easy-way-to-run-cocalc-on-your-laptop-or-a-remote-server-88</link>
      <description><![CDATA[<p>I made the first-ever early alpha release of CoCalc+!   It's here:  <a href="https://github.com/sagemathinc/cocalc/releases/tag/0.2.5">https://github.com/sagemathinc/cocalc/releases/tag/0.2.5</a></p>
<p>What is this?  It's a locally installable version of CoCalc that does NOT require Docker, and lets you fully use whatever Jupyter kernels, latex, etc. you have on your own laptop directly. <strong>It's a single self-contained binary, and is very lightweight.</strong></p>
<p>Try it, with the caveats of course that it's probably horribly broken.</p>
<ul>
<li>There is an Apple Silicon mac binary that is <em>properly signed</em>.</li>
<li>There's a Linux binary for x86_64 as well.</li>
</ul>
<p>Most importantly, this is the first version of CoCalc+ ever that actually has AI support, which was a significant amount of work to add.  To configure AI, click on the settings icon in the UPPER RIGHT, then click on &quot;AI&quot;, paste in a key, and click save.  You might need to refresh your browser, but you'll get full AI integration for the providers you configure.</p>
<p>You can also run this on a remote server and use the CoCalc UI, TimeTravel, Jupyter notebooks, whiteboards, the LaTeX editor, terminals, etc., making it much easier to work with that remote server.</p>
<p>The design is very lightweight in that it is single user and there is only one project -- your computer.  There is also no separate database.</p>
<p>This does not currently support integration with <a href="http://cocalc.com">cocalc.com</a>, but that is of course planned soon as an option.  The model is that CoCalc+ will be free, but some integrations with our cloud services will require a subscription.</p>
<p><a href="https://github.com/sagemathinc/cocalc/discussions/8651">https://github.com/sagemathinc/cocalc/discussions/8651</a></p>
]]></description>
      <pubDate>Thu, 20 Nov 2025 04:07:08 GMT</pubDate>
      <guid>https://cocalc.com/news/88</guid>
    </item>
    <item>
      <title><![CDATA[New Python API and Organization Admins]]></title>
      <link>https://cocalc.com//news/new-python-api-and-organization-admins-87</link>
      <description><![CDATA[<p>I developed a brand-new Python API for using CoCalc!</p>
<p><img src="https://cocalc.com/blobs/paste-0.8797696783902372?uuid=1256f403-caa2-4c8f-b74e-ede8cf903362" alt=""></p>
<p><a href="https://pypi.org/project/cocalc-api/">https://pypi.org/project/cocalc-api/</a></p>
<p>and you should be able to pip or uv install it as normal.</p>
<p>The docs are at</p>
<p><a href="https://cocalc.com/api/python/">https://cocalc.com/api/python/</a></p>
<p>You might want to try a few basic things like:</p>
<ul>
<li>create an API key in account preferences on CoCalc (refresh your browser if you get an error setting the expiration date, since I just fixed an issue)</li>
<li>use the api to list your projects,</li>
<li>create a project,</li>
<li>copy files between two projects</li>
</ul>
<h2>Organizations</h2>
<p>CoCalc now has a notion of organizations with admins, who can manage the users of an organization.  This is currently accessible only via the Python API.  It is designed to make it much easier to build things like asynchronous courses involving Jupyter notebooks, where you want to build your own custom workflow and user management instead of using CoCalc's course management UI.</p>
<p>For managing users, we will need to create a new &quot;organization&quot; for you and make you an admin of that organization.  You can then create users in your org, provide a URL so they can use CoCalc (without having to worry about creating accounts themselves), etc.  You can also make projects for them, add them as collaborators to those projects, copy files to their projects (from your own template project), list all members of your org, and more.  It's also easy to broadcast a message to all org members.</p>
<p>It's also fairly easy to add new functionality to this API.  What is missing that you want?  An obvious gap is compute servers, right now.</p>
<p>-- William</p>
]]></description>
      <pubDate>Sat, 30 Aug 2025 19:08:35 GMT</pubDate>
      <guid>https://cocalc.com/news/87</guid>
    </item>
    <item>
      <title><![CDATA[Ubuntu 24.04-Based environment is the default]]></title>
      <link>https://cocalc.com//news/ubuntu-24-04-based-environment-is-the-default-78</link>
      <description><![CDATA[<p>Starting today, our default software environment for <strong>new</strong> projects is based on <strong>Ubuntu 24.04</strong>.<br>
You can still select the previous default, <strong>Ubuntu 22.04 (available until June 2025)</strong>, when creating new projects. While we plan to support Ubuntu 22.04 for a while longer, our main focus going forward will be on Ubuntu 24.04.</p>
<p>Existing projects are unaffected. If you want to switch, you can do this any time via <strong>Project Settings → Project Control → Software Environment</strong>.</p>
<p>To see what’s included, check out our <a href="https://cocalc.com/software">software inventory</a>.</p>
<h3>Programming Languages and Features</h3>
<ul>
<li>
<p><strong>Python</strong>: There is now a new &quot;CoCalc Python&quot; environment, featuring a curated set of popular packages. This replaces what was previously called the &quot;system-wide&quot; environment. Terminals now run inside this environment by default. The main benefit is that it lets you manage Python packages without depending on the system-wide packages installed for system utilities. As before, you can also use the Anaconda-based environment via <code>anaconda2025</code>, and we continue to offer a Colab-compatible environment.</p>
</li>
<li>
<p><strong>R</strong>: We now provide a broader selection of R packages, powered by <a href="https://eddelbuettel.github.io/r2u/">r2u</a>, making it easier and more convenient to get started.</p>
</li>
<li>
<p><strong>SageMath</strong>:  The latest version of <a href="https://www.sagemath.org">SageMath</a> is available in the new environment. For earlier SageMath releases, please switch to the &quot;Ubuntu 22.04&quot; environment.</p>
</li>
<li>
<p><strong>LaTeX</strong>: This is now running a full and up-to-date <a href="https://www.tug.org/texlive/">TeX Live</a> distribution. We plan to update its packages with each new software environment update.</p>
</li>
</ul>
]]></description>
      <pubDate>Thu, 26 Jun 2025 17:54:52 GMT</pubDate>
      <guid>https://cocalc.com/news/78</guid>
    </item>
    <item>
      <title><![CDATA[CoCalc and NVIDIA CUDA-Q: Revolutionizing Quantum Education]]></title>
      <link>https://cocalc.com//news/cocalc-and-nvidia-cuda-q-revolutionizing-quantum-education-76</link>
      <description><![CDATA[<p><img src="https://cocalc.com/share/raw/fb8f75e82c384704cdee9885a0e654ff99a6f85d/Images/NVIDIA%20CUDA-Q/Screenshot%202025-06-11%20at%203.58.40%E2%80%AFPM.png" alt=""></p>
<p>This week, NVIDIA highlighted CoCalc as a key platform for teaching with its CUDA-Q academic materials. In their technical blog post, NVIDIA mentions how CoCalc can provide a seamless learning environment for the next wave of quantum computing specialists.</p>
<p>This might also be a good time to add that we were officially accepted as an NVIDIA Inception Program Member a while back!</p>
<p>Our Chief Sales Officer, Blaec Bejarano, has had the pleasure of meeting Monica Van Dieren, a Senior Technical Marketing Engineer at NVIDIA, at the Joint Mathematics Meeting in Seattle this past January. Their discussions continued at the NVIDIA GTC conference in San Jose in March, solidifying our shared vision for accessible and powerful quantum computing education.</p>
<p>NVIDIA's CUDA-Q Academic program is a comprehensive suite of Jupyter notebooks designed to bridge the gap between theoretical quantum mechanics and practical application. These resources are now readily available via CoCalc, allowing students and instructors to dive into complex topics like quantum machine learning and variational algorithms without the hassle of a complex setup.</p>
<p><img src="https://cocalc.com/share/raw/3238e4adaf86c967df5f80188b645a570786833a/Images/NVIDIA%20CUDA-Q/Screenshot%202025-06-11%20at%204.02.29%E2%80%AFPM.png" alt=""></p>
<p>The synergy between CoCalc's collaborative platform and NVIDIA's cutting-edge educational content creates an unparalleled learning experience. Students can work through CUDA-Q modules, leveraging CoCalc's powerful computational resources and real-time collaboration features. This integration is particularly highlighted in NVIDIA's post, which notes the ease of getting started on platforms like CoCalc.</p>
<p><img src="https://cocalc.com/share/raw/a1392785b6fba7e16f11aa0af5389ef087d047b0/Images/NVIDIA%20CUDA-Q/Screenshot%202025-06-11%20at%203.07.59%E2%80%AFPM.png" alt=""></p>
<p>For those eager to explore these resources, the CUDA-Q Academic GitHub repository is the perfect starting point: <a href="https://github.com/NVIDIA/cuda-q-academic/tree/main?tab=readme-ov-file">https://github.com/NVIDIA/cuda-q-academic/tree/main?tab=readme-ov-file</a></p>
<p>We are thrilled to be at the forefront of education, providing the tools necessary to train the quantum workforce of the future. The journey with NVIDIA is just beginning, and we look forward to empowering more learners around the globe.</p>
]]></description>
      <pubDate>Wed, 11 Jun 2025 21:47:31 GMT</pubDate>
      <guid>https://cocalc.com/news/76</guid>
    </item>
    <item>
      <title><![CDATA[Using CoCalc's Compute Servers with Course Management for Teaching]]></title>
      <link>https://cocalc.com//news/using-cocalc-s-compute-servers-with-course-management-for-teaching-70</link>
      <description><![CDATA[<p>You can now use compute servers very easily with CoCalc's course management system.   This video shows how to create a compute server associated with an assignment in a CoCalc course, then make private copies of that compute server available to all students in the class.  You can easily set an idle timeout, spending limits, and a shutdown time for all student compute servers.  You can also control some or all servers in a class, or install custom software on all of them.</p>
<p>This new functionality is the result of extensive discussions with many teachers who are already using CoCalc in their courses and want to expand their classes to give students real experience with AI, deep learning, and more using state-of-the-art GPUs.</p>
<p><a href="https://youtu.be/ikktaiw14Tw?si=_a6HxTRgDeN2NrVg">https://youtu.be/ikktaiw14Tw?si=_a6HxTRgDeN2NrVg</a></p>
<p><img src="https://cocalc.com/blobs/paste-0.355051748077599?uuid=75c54bcc-047a-41e3-b1ef-67d3dda80f23" alt=""></p>
]]></description>
      <pubDate>Sat, 11 Jan 2025 19:04:14 GMT</pubDate>
      <guid>https://cocalc.com/news/70</guid>
    </item>
    <item>
      <title><![CDATA[New Compute Server Automatic Shutdown Controls]]></title>
      <link>https://cocalc.com//news/new-compute-server-automatic-shutdown-controls-69</link>
      <description><![CDATA[<p>There are now four new compute server automatic shutdown and health check strategies: idle timeout, shutdown time, spending limit, and generic health check.  Each can give you better insight into how your compute servers are used and save you substantial money.  This video describes each in detail:</p>
<p><a href="https://youtu.be/Kx_47fs_xcI?si=99Ex4yNQ14IVzkmD">https://youtu.be/Kx_47fs_xcI?si=99Ex4yNQ14IVzkmD</a></p>
<p><img src="https://cocalc.com/blobs/paste-0.4832372080324592?uuid=790e89ff-4e5f-46f4-8425-cee6e6bed259" alt=""></p>
]]></description>
      <pubDate>Sat, 11 Jan 2025 19:02:39 GMT</pubDate>
      <guid>https://cocalc.com/news/69</guid>
    </item>
    <item>
      <title><![CDATA[Introduction to Computational Physics with CoCalc]]></title>
      <link>https://cocalc.com//news/introduction-to-computational-physics-with-cocalc-86</link>
      <description><![CDATA[<p><em>Bridging Theory and Computation in Physics</em></p>
<h2>Transforming Physics Understanding Through Computation</h2>
<p>Physics—the study of matter, energy, and their interactions—has always been deeply mathematical. Today, computational methods have become essential tools for understanding complex physical phenomena that resist analytical solutions. CoCalc provides an ideal environment for learning physics through the powerful combination of theoretical understanding and computational exploration.</p>
<p><em>For information about available scientific computing tools and environments, see the <a href="https://doc.cocalc.com/">CoCalc documentation</a>.</em></p>
<p>Whether you're modeling planetary motion, analyzing quantum systems, or exploring electromagnetic fields, CoCalc's integrated tools help you visualize, simulate, and understand the physical world in ways that traditional methods alone cannot achieve.</p>
<h2>Your Computational Physics Toolkit</h2>
<h3>Python for Physics: The Foundation</h3>
<p>Python has become the lingua franca of computational physics, offering powerful libraries and intuitive syntax:</p>
<pre><code class="language-python"># Essential physics imports
import numpy as np
import matplotlib.pyplot as plt
from scipy import integrate, optimize
import sympy as sp

# Welcome to computational physics!
print(&quot;Welcome to Computational Physics with CoCalc!&quot;)

# Physical constants (in SI units)
c = 299792458          # Speed of light (m/s)
h = 6.62607015e-34     # Planck constant (J⋅s)
hbar = h / (2 * np.pi) # Reduced Planck constant
k_B = 1.380649e-23     # Boltzmann constant (J/K)
e = 1.602176634e-19    # Elementary charge (C)
m_e = 9.1093837015e-31 # Electron mass (kg)
m_p = 1.67262192369e-27 # Proton mass (kg)

print(f&quot;Speed of light: {c:.0e} m/s&quot;)
print(f&quot;Planck constant: {h:.3e} J⋅s&quot;)
print(f&quot;Electron mass: {m_e:.3e} kg&quot;)
</code></pre>
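<p>As a quick, illustrative sanity check of these constants (an addition, not part of the original code above), you can compute the energy of a 500 nm photon via E = hc/λ, which should come out near 2.48 eV:</p>

```python
# CODATA values, matching the constants block above
h = 6.62607015e-34   # Planck constant (J*s)
c = 299792458        # speed of light (m/s)
e = 1.602176634e-19  # elementary charge (C)

wavelength = 500e-9              # green light (m)
E_photon = h * c / wavelength    # photon energy E = h*c/lambda (J)
print(f"E = {E_photon:.3e} J = {E_photon / e:.2f} eV")  # ~2.48 eV
```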
<h3>Mechanics: Motion and Forces</h3>
<p>Start your physics journey with classical mechanics:</p>
<pre><code class="language-python"># Classical mechanics: projectile motion
def projectile_motion():
    &quot;&quot;&quot;
    Simulate projectile motion with air resistance
    &quot;&quot;&quot;
    
    # Parameters
    g = 9.81          # Gravitational acceleration (m/s²)
    v0 = 50           # Initial velocity (m/s)
    angle = 45        # Launch angle (degrees)
    m = 1.0           # Mass (kg)
    b = 0.1           # Air resistance coefficient
    
    # Convert angle to radians
    theta = np.radians(angle)
    
    # Initial conditions
    vx0 = v0 * np.cos(theta)
    vy0 = v0 * np.sin(theta)
    
    # Equations of motion with quadratic air drag: F_drag = -b * |v| * v
    def equations_of_motion(t, state):
        x, y, vx, vy = state
        
        # Air resistance force
        v_magnitude = np.sqrt(vx**2 + vy**2)
        if v_magnitude &gt; 0:
            ax = -b * vx * v_magnitude / m
            ay = -g - b * vy * v_magnitude / m
        else:
            ax = 0
            ay = -g
        
        return [vx, vy, ax, ay]
    
    # Solve the differential equation
    t_span = (0, 10)
    initial_state = [0, 0, vx0, vy0]
    
    # Event function to stop when projectile hits ground
    def hit_ground(t, state):
        return state[1]  # y-coordinate
    hit_ground.terminal = True
    hit_ground.direction = -1
    
    solution = integrate.solve_ivp(
        equations_of_motion, t_span, initial_state,
        events=hit_ground, dense_output=True, rtol=1e-8
    )
    
    # Extract trajectory
    t_flight = solution.t
    trajectory = solution.y
    
    # Plot trajectory
    plt.figure(figsize=(10, 6))
    plt.plot(trajectory[0], trajectory[1], 'b-', linewidth=2, label='With air resistance')
    
    # Compare with no air resistance (analytical solution)
    t_no_air = np.linspace(0, t_flight[-1], 100)
    x_no_air = vx0 * t_no_air
    y_no_air = vy0 * t_no_air - 0.5 * g * t_no_air**2
    
    plt.plot(x_no_air, y_no_air, 'r--', linewidth=2, label='No air resistance')
    
    plt.xlabel('Horizontal Distance (m)')
    plt.ylabel('Height (m)')
    plt.title(f'Projectile Motion (v₀={v0} m/s, θ={angle}°)')
    plt.legend()
    plt.grid(True, alpha=0.3)
    plt.show()
    
    # Calculate range
    range_with_air = trajectory[0][-1]
    range_no_air = v0**2 * np.sin(2*theta) / g
    
    print(f&quot;Range with air resistance: {range_with_air:.1f} m&quot;)
    print(f&quot;Range without air resistance: {range_no_air:.1f} m&quot;)
    print(f&quot;Air resistance reduces range by: {(1 - range_with_air/range_no_air)*100:.1f}%&quot;)
    
    return t_flight, trajectory

# Execute projectile motion simulation
time, trajectory = projectile_motion()
</code></pre>
<h3>Oscillations and Waves</h3>
<p>Explore periodic motion and wave phenomena:</p>
<pre><code class="language-python"># Simple harmonic motion and damped oscillations
def harmonic_oscillator():
    &quot;&quot;&quot;
    Study simple harmonic motion and damping effects
    &quot;&quot;&quot;
    
    # Parameters
    omega_0 = 2.0     # Natural frequency (rad/s)
    gamma = 0.1       # Damping coefficient
    F0 = 1.0          # Driving force amplitude
    omega_d = 1.8     # Driving frequency
    
    # Equation of motion (with m = 1): x'' + 2*gamma*x' + omega_0^2*x = F0*cos(omega_d*t)
    # where omega_0^2 = k/m is the natural frequency and gamma the damping coefficient
    
    def oscillator_equation(t, state):
        x, v = state
        
        # Driving force
        driving_force = F0 * np.cos(omega_d * t)
        
        # Acceleration
        a = -2*gamma*v - omega_0**2*x + driving_force
        
        return [v, a]
    
    # Time array
    t = np.linspace(0, 20, 1000)
    
    # Solve for different initial conditions
    solutions = {}
    initial_conditions = [
        (1.0, 0.0, &quot;x₀=1, v₀=0&quot;),
        (0.0, 2.0, &quot;x₀=0, v₀=2&quot;),
        (0.5, 1.0, &quot;x₀=0.5, v₀=1&quot;)
    ]
    
    plt.figure(figsize=(12, 8))
    
    for i, (x0, v0, label) in enumerate(initial_conditions):
        solution = integrate.solve_ivp(
            oscillator_equation, (0, 20), [x0, v0],
            t_eval=t, rtol=1e-8
        )
        
        solutions[label] = solution
        
        plt.subplot(2, 2, i+1)
        plt.plot(solution.t, solution.y[0], 'b-', linewidth=1.5)
        plt.xlabel('Time (s)')
        plt.ylabel('Position (m)')
        plt.title(f'Damped Oscillator: {label}')
        plt.grid(True, alpha=0.3)
    
    # Phase space plot
    plt.subplot(2, 2, 4)
    for label, solution in solutions.items():
        plt.plot(solution.y[0], solution.y[1], linewidth=1.5, label=label)
    
    plt.xlabel('Position (m)')
    plt.ylabel('Velocity (m/s)')
    plt.title('Phase Space')
    plt.legend()
    plt.grid(True, alpha=0.3)
    
    plt.tight_layout()
    plt.show()
    
    # Analyze frequency response
    frequencies = np.linspace(0.1, 4.0, 100)
    amplitude_response = []
    
    for omega in frequencies:
        # Steady-state amplitude for driven oscillator
        amplitude = F0 / np.sqrt((omega_0**2 - omega**2)**2 + (2*gamma*omega)**2)
        amplitude_response.append(amplitude)
    
    plt.figure(figsize=(10, 6))
    plt.plot(frequencies, amplitude_response, 'r-', linewidth=2)
    plt.axvline(omega_0, color='b', linestyle='--', alpha=0.7, label=f'ω₀ = {omega_0}')
    plt.xlabel('Driving Frequency (rad/s)')
    plt.ylabel('Steady-State Amplitude')
    plt.title('Frequency Response of Damped Harmonic Oscillator')
    plt.legend()
    plt.grid(True, alpha=0.3)
    plt.show()
    
    return solutions, amplitude_response

# Execute oscillator analysis
oscillator_solutions, frequency_response = harmonic_oscillator()
</code></pre>
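<p>The frequency-response curve above peaks slightly below the natural frequency: for the damping convention used here, the resonance frequency is ω_r = sqrt(ω₀² − 2γ²). A short check (an illustrative addition, using the same parameter values as above) comparing the analytic resonance to the numerical peak of the steady-state amplitude formula:</p>

```python
import numpy as np

omega_0, gamma, F0 = 2.0, 0.1, 1.0   # same parameters as in the oscillator code

def steady_state_amplitude(omega):
    # Steady-state amplitude of the driven, damped oscillator
    return F0 / np.sqrt((omega_0**2 - omega**2)**2 + (2 * gamma * omega)**2)

# Analytic resonance frequency for this damping convention
omega_r = np.sqrt(omega_0**2 - 2 * gamma**2)

# Numerical peak of the response curve on a fine grid
omegas = np.linspace(0.1, 4.0, 100001)
omega_peak = omegas[np.argmax(steady_state_amplitude(omegas))]

print(f"analytic resonance: {omega_r:.4f} rad/s, numerical peak: {omega_peak:.4f} rad/s")
```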
<h3>Electromagnetism: Fields and Forces</h3>
<p>Explore electric and magnetic phenomena:</p>
<pre><code class="language-python"># Electromagnetic field visualization
def electromagnetic_fields():
    &quot;&quot;&quot;
    Visualize electric and magnetic fields
    &quot;&quot;&quot;
    
    # Electric field of point charges
    def electric_field_point_charges():
        &quot;&quot;&quot;Visualize electric field of multiple point charges&quot;&quot;&quot;
        
        # Define point charges: (x, y, charge)
        charges = [
            (2, 2, 1.0),    # Positive charge
            (-2, -2, -1.0), # Negative charge
            (0, 3, 0.5),    # Smaller positive charge
        ]
        
        # Create grid
        x = np.linspace(-4, 4, 20)
        y = np.linspace(-4, 4, 20)
        X, Y = np.meshgrid(x, y)
        
        # Calculate electric field
        Ex = np.zeros_like(X)
        Ey = np.zeros_like(Y)
        V = np.zeros_like(X)  # Electric potential
        
        k = 8.99e9  # Coulomb constant (N⋅m²/C²)
        
        for charge_x, charge_y, q in charges:
            # Distance from each grid point to the charge
            dx = X - charge_x
            dy = Y - charge_y
            r = np.sqrt(dx**2 + dy**2)
            
            # Avoid division by zero
            r = np.where(r &lt; 0.1, 0.1, r)
            
            # Electric field components
            Ex += k * q * dx / r**3
            Ey += k * q * dy / r**3
            
            # Electric potential
            V += k * q / r
        
        # Plot electric field and equipotential lines
        plt.figure(figsize=(12, 5))
        
        # Electric field vectors
        plt.subplot(1, 2, 1)
        plt.quiver(X, Y, Ex, Ey, np.sqrt(Ex**2 + Ey**2), cmap='viridis', alpha=0.7)
        
        # Mark charge locations
        for charge_x, charge_y, q in charges:
            color = 'red' if q &gt; 0 else 'blue'
            size = abs(q) * 100
            plt.scatter(charge_x, charge_y, c=color, s=size, edgecolors='black')
        
        plt.xlabel('x (m)')
        plt.ylabel('y (m)')
        plt.title('Electric Field')
        plt.axis('equal')
        plt.colorbar(label='Field Magnitude')
        
        # Equipotential lines
        plt.subplot(1, 2, 2)
        contour = plt.contour(X, Y, V, levels=20, colors='blue', alpha=0.6)
        plt.clabel(contour, inline=True, fontsize=8)
        
        # Mark charge locations
        for charge_x, charge_y, q in charges:
            color = 'red' if q &gt; 0 else 'blue'
            size = abs(q) * 100
            plt.scatter(charge_x, charge_y, c=color, s=size, edgecolors='black')
        
        plt.xlabel('x (m)')
        plt.ylabel('y (m)')
        plt.title('Electric Potential')
        plt.axis('equal')
        
        plt.tight_layout()
        plt.show()
        
        return X, Y, Ex, Ey, V
    
    # Magnetic field of current loop
    def magnetic_field_current_loop():
        &quot;&quot;&quot;Magnetic field of a circular current loop&quot;&quot;&quot;
        
        # Parameters
        I = 1.0      # Current (A)
        R = 1.0      # Loop radius (m)
        mu_0 = 4*np.pi*1e-7  # Permeability of free space
        
        # Calculate field along axis
        z = np.linspace(-3, 3, 100)
        Bz = mu_0 * I * R**2 / (2 * (R**2 + z**2)**(3/2))
        
        plt.figure(figsize=(10, 6))
        
        plt.subplot(1, 2, 1)
        plt.plot(z, Bz*1e6, 'b-', linewidth=2)
        plt.xlabel('Distance along axis (m)')
        plt.ylabel('Magnetic Field (μT)')
        plt.title(f'Magnetic Field of Current Loop (I={I}A, R={R}m)')
        plt.grid(True, alpha=0.3)
        
        # Field lines visualization (simplified)
        plt.subplot(1, 2, 2)
        theta = np.linspace(0, 2*np.pi, 100)
        loop_x = R * np.cos(theta)
        loop_y = R * np.sin(theta)
        
        # Draw current loop
        plt.plot(loop_x, loop_y, 'r-', linewidth=3, label='Current Loop')
        
        # Simplified field line representation
        r_field = np.linspace(0.2, 3, 10)
        for r in r_field:
            # Approximate field lines
            phi = np.linspace(0, 2*np.pi, 50)
            field_x = r * np.cos(phi)
            field_y = 0.5 * r * np.sin(phi)
            plt.plot(field_x, field_y, 'b-', alpha=0.5, linewidth=1)
            plt.plot(field_x, -field_y, 'b-', alpha=0.5, linewidth=1)
        
        plt.xlabel('x (m)')
        plt.ylabel('y (m)')
        plt.title('Magnetic Field Lines')
        plt.axis('equal')
        plt.legend()
        
        plt.tight_layout()
        plt.show()
        
        return z, Bz
    
    # Execute electromagnetic field calculations
    electric_results = electric_field_point_charges()
    magnetic_results = magnetic_field_current_loop()
    
    return electric_results, magnetic_results

# Execute electromagnetic field analysis
em_results = electromagnetic_fields()
</code></pre>
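<p>A quick analytic check of the on-axis formula used above (an illustrative addition): at the center of the loop (z = 0), it reduces to the textbook result B = μ₀I/(2R), about 0.63 μT for I = 1 A and R = 1 m:</p>

```python
import numpy as np

mu_0 = 4 * np.pi * 1e-7  # permeability of free space (T*m/A)
I, R = 1.0, 1.0          # current (A) and loop radius (m), as above

# On-axis field formula evaluated at z = 0
B_center = mu_0 * I * R**2 / (2 * (R**2 + 0.0**2)**1.5)

# Should match the center-of-loop result mu_0*I/(2R)
print(f"B at loop center: {B_center * 1e6:.3f} uT")
```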
<h3>Thermodynamics and Statistical Mechanics</h3>
<p>Explore thermal phenomena and statistical behavior:</p>
<pre><code class="language-python"># Thermodynamics and kinetic theory
def thermal_physics():
    &quot;&quot;&quot;
    Explore thermodynamics and statistical mechanics
    &quot;&quot;&quot;
    
    # Ideal gas law and kinetic theory
    def kinetic_theory_simulation():
        &quot;&quot;&quot;Simulate kinetic theory of gases&quot;&quot;&quot;
        
        # Parameters
        k_B = 1.38e-23   # Boltzmann constant (J/K)
        N = 1000         # Number of particles
        T = 300          # Temperature (K)
        m = 4.65e-26     # Mass of N2 molecule (kg)
        
        # Maxwell-Boltzmann distribution
        def maxwell_boltzmann(v, T, m):
            &quot;&quot;&quot;Maxwell-Boltzmann speed distribution&quot;&quot;&quot;
            k_B = 1.38e-23
            factor = 4 * np.pi * (m / (2 * np.pi * k_B * T))**(3/2)
            return factor * v**2 * np.exp(-m * v**2 / (2 * k_B * T))
        
        # Generate random velocities: each Cartesian velocity component of a
        # Maxwell-Boltzmann gas is an independent Gaussian
        sigma = np.sqrt(k_B * T / m)  # Standard deviation of each component
        
        vx = np.random.normal(0, sigma, N)
        vy = np.random.normal(0, sigma, N)
        vz = np.random.normal(0, sigma, N)
        
        # Calculate speeds
        speeds = np.sqrt(vx**2 + vy**2 + vz**2)
        
        # Plot speed distribution
        plt.figure(figsize=(12, 8))
        
        plt.subplot(2, 2, 1)
        plt.hist(speeds, bins=50, density=True, alpha=0.7, label='Simulation')
        
        # Theoretical Maxwell-Boltzmann distribution
        v_theory = np.linspace(0, np.max(speeds), 200)
        mb_theory = maxwell_boltzmann(v_theory, T, m)
        plt.plot(v_theory, mb_theory, 'r-', linewidth=2, label='Theory')
        
        plt.xlabel('Speed (m/s)')
        plt.ylabel('Probability Density')
        plt.title(f'Maxwell-Boltzmann Distribution (T={T}K)')
        plt.legend()
        plt.grid(True, alpha=0.3)
        
        # Calculate characteristic speeds
        v_avg = np.mean(speeds)
        v_rms = np.sqrt(np.mean(speeds**2))
        v_mp = np.sqrt(2 * k_B * T / m)  # Most probable speed
        
        plt.axvline(v_avg, color='blue', linestyle='--', label=f'Average: {v_avg:.0f} m/s')
        plt.axvline(v_rms, color='green', linestyle='--', label=f'RMS: {v_rms:.0f} m/s')
        plt.axvline(v_mp, color='red', linestyle='--', label=f'Most probable: {v_mp:.0f} m/s')
        plt.legend()
        
        # Energy distribution
        plt.subplot(2, 2, 2)
        kinetic_energies = 0.5 * m * speeds**2
        plt.hist(kinetic_energies / k_B, bins=50, density=True, alpha=0.7)
        plt.xlabel('Kinetic Energy / k_B (K)')
        plt.ylabel('Probability Density')
        plt.title('Kinetic Energy Distribution')
        plt.grid(True, alpha=0.3)
        
        # Temperature dependence
        plt.subplot(2, 2, 3)
        temperatures = [200, 300, 400, 500]
        v_range = np.linspace(0, 1500, 200)
        
        for T_temp in temperatures:
            mb_dist = maxwell_boltzmann(v_range, T_temp, m)
            plt.plot(v_range, mb_dist, linewidth=2, label=f'T = {T_temp}K')
        
        plt.xlabel('Speed (m/s)')
        plt.ylabel('Probability Density')
        plt.title('Temperature Dependence')
        plt.legend()
        plt.grid(True, alpha=0.3)
        
        # Pressure calculation from kinetic theory
        plt.subplot(2, 2, 4)
        volumes = np.linspace(0.01, 0.1, 100)  # m³
        n_moles = 1.0  # mol
        R = 8.314  # J/(mol⋅K)
        
        # Ideal gas law: PV = nRT
        pressures_ideal = n_moles * R * T / volumes
        
        plt.plot(volumes*1000, pressures_ideal/1000, 'b-', linewidth=2, label='Ideal Gas Law')
        plt.xlabel('Volume (L)')
        plt.ylabel('Pressure (kPa)')
        plt.title(f'P-V Diagram (T={T}K)')
        plt.grid(True, alpha=0.3)
        plt.legend()
        
        plt.tight_layout()
        plt.show()
        
        print(f&quot;Theoretical values at T = {T}K:&quot;)
        print(f&quot;Average speed: {np.sqrt(8*k_B*T/(np.pi*m)):.0f} m/s&quot;)
        print(f&quot;RMS speed: {np.sqrt(3*k_B*T/m):.0f} m/s&quot;)
        print(f&quot;Most probable speed: {np.sqrt(2*k_B*T/m):.0f} m/s&quot;)
        
        return speeds, kinetic_energies
    
    # Heat transfer
    def heat_transfer_simulation():
        &quot;&quot;&quot;Simulate heat conduction&quot;&quot;&quot;
        
        # One-dimensional heat equation: ∂T/∂t = α ∂²T/∂x²
        # where α is thermal diffusivity
        
        # Parameters
        L = 1.0          # Length (m)
        alpha = 1e-4     # Thermal diffusivity (m²/s)
        dx = 0.01        # Spatial step
        dt = 0.1         # Time step
        t_max = 3600     # Maximum time (s)
        
        # Grid
        x = np.arange(0, L + dx, dx)
        nx = len(x)
        
        # Check stability condition
        stability = alpha * dt / dx**2
        print(f&quot;Stability parameter: {stability:.3f} (should be ≤ 0.5)&quot;)
        
        # Initial condition: hot at one end, cold at the other
        T = np.zeros(nx)
        T[0] = 100    # Hot end (°C)
        T[-1] = 0     # Cold end (°C)
        
        # Time stepping
        times = np.arange(0, t_max + dt, dt)
        n_steps = len(times)
        store_every = max(1, n_steps // 10)  # keep ~10 snapshots
        T_history = []
        
        for step in range(n_steps):
            if step % store_every == 0:
                T_history.append(T.copy())
            
            # Explicit finite-difference update of the interior points
            T_new = T.copy()
            for i in range(1, nx-1):
                T_new[i] = T[i] + alpha * dt / dx**2 * (T[i+1] - 2*T[i] + T[i-1])
            
            T = T_new
        
        # Plot temperature evolution
        plt.figure(figsize=(10, 6))
        
        plt.subplot(1, 2, 1)
        for i, T_snapshot in enumerate(T_history):
            time_label = f't = {i * t_max / 10 / 60:.0f} min'
            plt.plot(x, T_snapshot, linewidth=2, label=time_label)
        
        plt.xlabel('Position (m)')
        plt.ylabel('Temperature (°C)')
        plt.title('Heat Conduction Over Time')
        plt.legend()
        plt.grid(True, alpha=0.3)
        
        # Steady-state solution (linear)
        plt.subplot(1, 2, 2)
        T_steady = 100 * (1 - x / L)  # Linear temperature profile
        plt.plot(x, T_history[-1], 'b-', linewidth=2, label='Numerical solution')
        plt.plot(x, T_steady, 'r--', linewidth=2, label='Analytical steady-state')
        plt.xlabel('Position (m)')
        plt.ylabel('Temperature (°C)')
        plt.title('Final Temperature Profile')
        plt.legend()
        plt.grid(True, alpha=0.3)
        
        plt.tight_layout()
        plt.show()
        
        return x, T_history
    
    # Execute thermal physics simulations
    speeds, energies = kinetic_theory_simulation()
    x_heat, T_heat = heat_transfer_simulation()
    
    return speeds, energies, x_heat, T_heat

# Execute thermal physics analysis
thermal_results = thermal_physics()
</code></pre>
<h2>Building Physics Intuition</h2>
<h3>Dimensional Analysis and Scaling</h3>
<p>Physics understanding begins with dimensional analysis:</p>
<pre><code class="language-python"># Dimensional analysis tools
def dimensional_analysis():
    &quot;&quot;&quot;
    Explore dimensional analysis and scaling laws
    &quot;&quot;&quot;
    
    # Fundamental dimensions
    dimensions = {
        'length': 'L',
        'mass': 'M', 
        'time': 'T',
        'electric_current': 'I',
        'temperature': 'Θ',
        'amount': 'N',
        'luminous_intensity': 'J'
    }
    
    # Common physical quantities and their dimensions
    quantities = {
        'velocity': 'L T⁻¹',
        'acceleration': 'L T⁻²',
        'force': 'M L T⁻²',
        'energy': 'M L² T⁻²',
        'power': 'M L² T⁻³',
        'pressure': 'M L⁻¹ T⁻²',
        'electric_field': 'M L T⁻³ I⁻¹',
        'magnetic_field': 'M T⁻² I⁻¹'
    }
    
    print(&quot;DIMENSIONAL ANALYSIS EXAMPLES&quot;)
    print(&quot;=&quot;*40)
    
    # Example 1: Pendulum period
    print(&quot;Example 1: Simple Pendulum Period&quot;)
    print(&quot;Variables: length L, gravity g, mass m&quot;)
    print(&quot;Dimensions: [L] = L, [g] = L T⁻², [m] = M&quot;)
    print(&quot;Proposed form: T = L^a × g^b × m^c&quot;)
    print(&quot;Dimension equation: T = L^a × (L T⁻²)^b × M^c&quot;)
    print(&quot;                   T = L^(a+b) × T^(-2b) × M^c&quot;)
    print(&quot;Matching dimensions:&quot;)
    print(&quot;  Time: 1 = -2b  →  b = -1/2&quot;)
    print(&quot;  Length: 0 = a + b  →  a = 1/2&quot;)
    print(&quot;  Mass: 0 = c  →  c = 0&quot;)
    print(&quot;Result: T ∝ √(L/g)&quot;)
    
    # Numerical verification
    L_values = np.array([0.5, 1.0, 1.5, 2.0])  # lengths in meters
    g = 9.81
    T_predicted = 2 * np.pi * np.sqrt(L_values / g)
    
    plt.figure(figsize=(10, 6))
    plt.subplot(1, 2, 1)
    plt.plot(L_values, T_predicted, 'bo-', linewidth=2, markersize=8)
    plt.xlabel('Length (m)')
    plt.ylabel('Period (s)')
    plt.title('Pendulum Period vs Length')
    plt.grid(True, alpha=0.3)
    
    # Example 2: Drag force scaling
    print(&quot;\nExample 2: Drag Force at High Speed&quot;)
    print(&quot;Variables: velocity v, density ρ, area A, coefficient C_d&quot;)
    print(&quot;Expected form: F_drag = ½ × C_d × ρ × A × v²&quot;)
    
    # Show velocity scaling
    velocities = np.linspace(10, 100, 20)
    rho = 1.225  # Air density (kg/m³)
    A = 2.0      # Cross-sectional area (m²)
    C_d = 0.3    # Drag coefficient
    
    F_drag = 0.5 * C_d * rho * A * velocities**2
    
    plt.subplot(1, 2, 2)
    plt.plot(velocities, F_drag, 'r-', linewidth=2)
    plt.xlabel('Velocity (m/s)')
    plt.ylabel('Drag Force (N)')
    plt.title('Drag Force vs Velocity (∝ v²)')
    plt.grid(True, alpha=0.3)
    
    plt.tight_layout()
    plt.show()
    
    return quantities, T_predicted, F_drag

# Execute dimensional analysis
dim_analysis = dimensional_analysis()
</code></pre>
<h3>Error Analysis and Uncertainty</h3>
<p>Understanding measurement uncertainty is crucial in physics:</p>
<pre><code class="language-python"># Error analysis and uncertainty propagation
def error_analysis():
    &quot;&quot;&quot;
    Demonstrate error analysis and uncertainty propagation
    &quot;&quot;&quot;
    
    print(&quot;ERROR ANALYSIS IN PHYSICS&quot;)
    print(&quot;=&quot;*30)
    
    # Example: Measuring gravitational acceleration
    # Using pendulum: g = 4π²L/T²
    
    # Measurements with uncertainties
    L = 1.000          # Length (m)
    dL = 0.001         # Uncertainty in length (m)
    T = 2.006          # Period (s)
    dT = 0.010         # Uncertainty in period (s)
    
    # Calculate g
    g = 4 * np.pi**2 * L / T**2
    
        # Uncertainty propagation for g = 4π²L/T²
        # Worst-case (linear) relative uncertainty: δg/g = δL/L + 2·δT/T
        # (independent errors are often combined in quadrature instead)
        relative_uncertainty = dL/L + 2*dT/T
    dg = g * relative_uncertainty
    
    print(f&quot;Measurement results:&quot;)
    print(f&quot;Length: L = {L:.3f} ± {dL:.3f} m&quot;)
    print(f&quot;Period: T = {T:.3f} ± {dT:.3f} s&quot;)
    print(f&quot;Calculated g = {g:.2f} ± {dg:.2f} m/s²&quot;)
    print(f&quot;Accepted value: 9.81 m/s²&quot;)
    print(f&quot;Percent error: {abs(g - 9.81)/9.81 * 100:.1f}%&quot;)
    
    # Monte Carlo error propagation
    def monte_carlo_errors(n_samples=10000):
        &quot;&quot;&quot;Use Monte Carlo method for error propagation&quot;&quot;&quot;
        
        # Generate random samples
        L_samples = np.random.normal(L, dL, n_samples)
        T_samples = np.random.normal(T, dT, n_samples)
        
        # Calculate g for each sample
        g_samples = 4 * np.pi**2 * L_samples / T_samples**2
        
        # Statistics
        g_mean = np.mean(g_samples)
        g_std = np.std(g_samples)
        
        print(f&quot;\nMonte Carlo results ({n_samples} samples):&quot;)
        print(f&quot;g = {g_mean:.2f} ± {g_std:.2f} m/s²&quot;)
        
        # Plot distribution
        plt.figure(figsize=(10, 6))
        
        plt.subplot(1, 2, 1)
        plt.hist(g_samples, bins=50, density=True, alpha=0.7, edgecolor='black')
        plt.axvline(g_mean, color='red', linestyle='--', linewidth=2, label=f'Mean: {g_mean:.2f}')
        plt.axvline(9.81, color='green', linestyle='--', linewidth=2, label='True value: 9.81')
        plt.xlabel('g (m/s²)')
        plt.ylabel('Probability Density')
        plt.title('Distribution of g Values')
        plt.legend()
        plt.grid(True, alpha=0.3)
        
        # Error vs number of measurements
        plt.subplot(1, 2, 2)
        n_measurements = np.logspace(1, 4, 20).astype(int)
        standard_errors = []
        
        for n in n_measurements:
            # Standard error decreases as 1/√n
            standard_error = g_std / np.sqrt(n)
            standard_errors.append(standard_error)
        
        plt.loglog(n_measurements, standard_errors, 'bo-', linewidth=2)
        plt.loglog(n_measurements, 1/np.sqrt(n_measurements), 'r--', 
                  linewidth=2, label='∝ 1/√n')
        plt.xlabel('Number of Measurements')
        plt.ylabel('Standard Error (m/s²)')
        plt.title('Error Reduction with More Measurements')
        plt.legend()
        plt.grid(True, alpha=0.3)
        
        plt.tight_layout()
        plt.show()
        
        return g_samples, g_mean, g_std
    
    # Execute Monte Carlo analysis
    g_samples, g_mean, g_std = monte_carlo_errors()
    
    return g, dg, g_samples

# Execute error analysis
error_results = error_analysis()
</code></pre>
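<p>The linear rule used above (δg/g = δL/L + 2δT/T) adds relative errors as a worst case. When the measurement errors are independent, they are more often combined in quadrature, which yields a smaller and usually more realistic uncertainty. A minimal sketch comparing the two conventions, reusing the values from the example:</p>
<pre><code class="language-python"># Compare worst-case (linear) and quadrature error propagation
import numpy as np

L, dL = 1.000, 0.001   # pendulum length and its uncertainty (m)
T, dT = 2.006, 0.010   # period and its uncertainty (s)

g = 4 * np.pi**2 * L / T**2

dg_linear = g * (dL/L + 2*dT/T)                 # worst-case sum
dg_quad = g * np.sqrt((dL/L)**2 + (2*dT/T)**2)  # independent errors

print(f'g = {g:.2f} m/s²')
print(f'Linear:     ±{dg_linear:.3f} m/s²')
print(f'Quadrature: ±{dg_quad:.3f} m/s²')
</code></pre>
<p>The quadrature estimate never exceeds the linear one; the two agree only when a single error source dominates.</p>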
<h2>Next Steps in Your Physics Journey</h2>
<h3>Immediate Explorations</h3>
<ol>
<li><strong>Planetary Motion</strong>: Simulate orbital mechanics and Kepler's laws</li>
<li><strong>Wave Interference</strong>: Explore wave superposition and interference patterns</li>
<li><strong>Quantum Basics</strong>: Introduction to wave-particle duality</li>
<li><strong>Thermal Equilibrium</strong>: Statistical mechanics fundamentals</li>
</ol>
<h3>Building Toward Advanced Physics</h3>
<ul>
<li><strong>Classical Field Theory</strong>: Maxwell's equations and electromagnetic waves</li>
<li><strong>Quantum Mechanics</strong>: Schrödinger equation and quantum systems</li>
<li><strong>Statistical Physics</strong>: Phase transitions and critical phenomena</li>
<li><strong>Relativity</strong>: Special and general relativistic effects</li>
</ul>
<h3>Research Skills Development</h3>
<ul>
<li><strong>Experimental Design</strong>: Planning and analyzing physics experiments</li>
<li><strong>Data Analysis</strong>: Statistical methods for physics data</li>
<li><strong>Modeling</strong>: Creating and validating physical models</li>
<li><strong>Communication</strong>: Presenting physics results effectively</li>
</ul>
<p>Computational physics in CoCalc opens new ways to understand the physical world. Start with fundamental concepts, build your computational skills, and gradually explore more sophisticated phenomena. The combination of theory, computation, and visualization makes complex physics accessible and engaging.</p>
<hr>
<p><em>Begin your computational physics journey. Access physics simulations and start exploring at <a href="https://cocalc.com">cocalc.com</a></em></p>
]]></description>
      <pubDate>Thu, 09 Jan 2025 20:01:55 GMT</pubDate>
      <guid>https://cocalc.com/news/86</guid>
    </item>
    <item>
      <title><![CDATA[Connect with William Stein at JMM]]></title>
      <link>https://cocalc.com//news/connect-with-william-stein-at-jmm-68</link>
      <description><![CDATA[<h4>Join Us at JMM for an Exclusive Meet &amp; Greet and Live Demo with William Stein!</h4>
<p><strong>We are thrilled to announce a special opportunity to meet William Stein, the CEO and Founder of CoCalc, at the <a href="https://jointmathematicsmeetings.org/meetings/national/jmm2025/2314_intro">JMM meeting in Seattle, WA.</a></strong></p>
<p>Don't miss this chance to engage with William as he presents a live demo of CoCalc! With numerous publications under his belt, William previously served as a tenured professor at the University of Washington until 2019, at which point he committed full-time to growing the <a href="https://cocalc.com/">CoCalc Platform</a>.</p>
<p>William has made significant contributions to the field of Computational Algebraic Number Theory and is the creator of the Computer Algebra System Sage.</p>
<p><strong>Come find us at booth 507 in the Exhibit Hall during the Grand Opening Reception.</strong></p>
<img src="https://cocalc.com/blobs/William%20In%20His%20Prime%20Image.jpeg?uuid=31af2d93-0c9d-4f8a-a1d1-077e9139d114"   width="400px"  height="400px"  style="object-fit:cover"/>]]></description>
      <pubDate>Fri, 03 Jan 2025 00:21:59 GMT</pubDate>
      <guid>https://cocalc.com/news/68</guid>
    </item>
    <item>
      <title><![CDATA[Introduction to Pure Mathematics with CoCalc]]></title>
      <link>https://cocalc.com//news/introduction-to-pure-mathematics-with-cocalc-85</link>
      <description><![CDATA[<p><em>Getting Started with Mathematical Computing</em></p>
<h2>Welcome to Mathematical Discovery</h2>
<p>Pure mathematics—the exploration of mathematical concepts for their own beauty and elegance—has never been more accessible. CoCalc provides a comprehensive environment where you can explore algebra, number theory, geometry, and analysis without the barriers of complex software installation or expensive licenses.</p>
<p><em>For detailed information about SageMath and mathematical computing features, see the <a href="https://doc.cocalc.com/howto/sage-question.html">CoCalc SageMath documentation</a>.</em></p>
<p>Whether you're a student encountering abstract mathematics for the first time, a researcher exploring new mathematical territories, or simply someone fascinated by the beauty of mathematical truth, CoCalc offers the tools you need to transform abstract concepts into concrete understanding.</p>
<h2>Your Mathematical Toolkit</h2>
<h3>SageMath: Your Mathematical Companion</h3>
<p>SageMath is your primary tool for mathematical exploration—a comprehensive system that combines the power of dozens of specialized mathematical packages into one unified interface.</p>
<p><strong>Note</strong>: Use a SageMath kernel/worksheet for these examples, not Python.</p>
<pre><code class="language-sage"># Welcome to mathematical computing!
# Let's start with basic symbolic mathematics

# Working with symbolic variables
x, y, z = var('x y z')

# Symbolic expressions and algebra
expr = (x + 1)^3
expanded = expand(expr)
factored = factor(x^3 + 3*x^2 + 3*x + 1)

print(&quot;Original expression:&quot;, expr)
print(&quot;Expanded form:&quot;, expanded)
print(&quot;Factored form:&quot;, factored)

# Basic calculus
f = x^2 + 3*x + 2
derivative = diff(f, x)
integral = integrate(f, x)

print(&quot;\nFunction: f(x) =&quot;, f)
print(&quot;Derivative: f'(x) =&quot;, derivative)
print(&quot;Integral: ∫f(x)dx =&quot;, integral)
</code></pre>
<p>Output:</p>
<pre><code>Original expression: (x + 1)^3
Expanded form: x^3 + 3*x^2 + 3*x + 1
Factored form: (x + 1)^3
Function: f(x) = x^2 + 3*x + 2
Derivative: f'(x) = 2*x + 3
Integral: ∫f(x)dx = 1/3*x^3 + 3/2*x^2 + 2*x
</code></pre>
<h3>Exploring Number Theory</h3>
<p>Number theory—the study of integers and their properties—comes alive with computational exploration:</p>
<pre><code class="language-sage"># Number theory explorations
print(&quot;=== NUMBER THEORY ADVENTURES ===&quot;)

# Prime numbers and factorization
n = 2023
print(&quot;\nIs&quot;, n, &quot;prime?&quot;, is_prime(n))
print(&quot;Prime factorization of&quot;, n, &quot;:&quot;, factor(n))

# Finding primes
primes_up_to_100 = [p for p in range(2, 101) if is_prime(p)]
print(&quot;\nFirst few primes:&quot;, primes_up_to_100[:10])
print(&quot;Number of primes up to 100:&quot;, len(primes_up_to_100))

# The Euclidean algorithm
a, b = 48, 18
gcd_result = gcd(a, b)
print(&quot;\ngcd(&quot;, a, &quot;,&quot;, b, &quot;) =&quot;, gcd_result)

# Extended Euclidean algorithm
g, u, v = xgcd(a, b)
print(&quot;Extended:&quot;, u, &quot;·&quot;, a, &quot; + &quot;, v, &quot;·&quot;, b, &quot; = &quot;, g)
</code></pre>
<p>Output:</p>
<pre><code>=== NUMBER THEORY ADVENTURES ===

Is 2023 prime? False
Prime factorization of 2023 : 7 * 17^2

First few primes: [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
Number of primes up to 100: 25

gcd( 48 , 18 ) = 6
Extended: -1 · 48  +  3 · 18  =  6
</code></pre>
<h3>Algebraic Structures</h3>
<p>Explore the fundamental structures that underlie mathematics:</p>
<pre><code class="language-sage"># Group theory basics
print(&quot;=== ALGEBRAIC STRUCTURES ===&quot;)

# Modular arithmetic
print(&quot;\nModular arithmetic:&quot;)
for i in range(12):
    print(i, &quot;mod 5 =&quot;, i % 5)

# Working with permutations
# Create a simple permutation group
S3 = SymmetricGroup(3)
print(&quot;\nSymmetric group S3 has order:&quot;, S3.order())
print(&quot;Elements of S3:&quot;, list(S3))

# Group operations
g1 = S3([2, 1, 3])  # Swap first two elements
g2 = S3([1, 3, 2])  # Swap last two elements
product = g1 * g2   # Sage composes left to right: apply g1 first, then g2
print(&quot;\nPermutation multiplication:&quot;)
print(g1, &quot;*&quot;, g2, &quot;=&quot;, product)
</code></pre>
<p>Output:</p>
<pre><code>=== ALGEBRAIC STRUCTURES ===

Modular arithmetic:
0 mod 5 = 0
1 mod 5 = 1
2 mod 5 = 2
3 mod 5 = 3
4 mod 5 = 4
5 mod 5 = 0
6 mod 5 = 1
7 mod 5 = 2
8 mod 5 = 3
9 mod 5 = 4
10 mod 5 = 0
11 mod 5 = 1

Symmetric group S3 has order: 6
Elements of S3: [(), (2,3), (1,2), (1,2,3), (1,3,2), (1,3)]

Permutation multiplication:
(1,2) * (2,3) = (1,3,2)
</code></pre>
<h3>Calculus and Analysis</h3>
<p>Move from discrete to continuous mathematics:</p>
<pre><code class="language-sage"># Calculus explorations
print(&quot;=== CALCULUS AND ANALYSIS ===&quot;)

# Limits
x = var('x')
limit_expr = sin(x)/x
limit_result = limit(limit_expr, x=0)
print(&quot;lim(x→0) sin(x)/x =&quot;, limit_result)

# Series expansions
f = e^x
taylor_series = f.taylor(x, 0, 5)  # Taylor series around x=0, degree 5
print(&quot;\nTaylor series of e^x:&quot;, taylor_series)

# Definite integrals
integral_result = integrate(x^2 * e^(-x^2), x, -oo, oo)
print(&quot;\n∫_{-∞}^{∞} x²e^(-x²) dx =&quot;, integral_result)

# Plotting functions
p1 = plot(sin(x), (x, -2*pi, 2*pi), color='blue', legend_label='sin(x)')
p2 = plot(cos(x), (x, -2*pi, 2*pi), color='red', legend_label='cos(x)')
combined_plot = p1 + p2
combined_plot.show()
</code></pre>
<p>Output:</p>
<pre><code>=== CALCULUS AND ANALYSIS ===
lim(x→0) sin(x)/x = 1

Taylor series of e^x: 1/120*x^5 + 1/24*x^4 + 1/6*x^3 + 1/2*x^2 + x + 1

∫_{-∞}^{∞} x²e^(-x²) dx = 1/2*sqrt(pi)
</code></pre>
<h3>Linear Algebra: The Language of Modern Mathematics</h3>
<p>Linear algebra provides the foundation for understanding higher mathematics:</p>
<pre><code class="language-sage"># Linear algebra basics
print(&quot;=== LINEAR ALGEBRA ===&quot;)

# Matrices and vectors
A = Matrix([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
v = vector([1, 2, 3])

print(&quot;Matrix A:&quot;)
print(A)
print(&quot;Vector v:&quot;, v)

# Matrix operations
det_A = A.determinant()
print(&quot;\nDeterminant of A:&quot;, det_A)

# Eigenvalues and eigenvectors
B = Matrix([[2, 1], [1, 2]])
eigenvals = B.eigenvalues()
eigenvects = B.eigenvectors_right()

print(&quot;\nMatrix B:&quot;)
print(B)
print(&quot;Eigenvalues:&quot;, eigenvals)
print(&quot;Eigenvectors:&quot;)
# (named &quot;ev&quot; to avoid shadowing Python's built-in eval)
for ev, evect, mult in eigenvects:
    print(&quot;  λ =&quot;, ev, &quot;, v =&quot;, evect[0], &quot;, multiplicity =&quot;, mult)
</code></pre>
<p>Output:</p>
<pre><code>=== LINEAR ALGEBRA ===
Matrix A:
[1 2 3]
[4 5 6]
[7 8 9]
Vector v: (1, 2, 3)

Determinant of A: 0

Matrix B:
[2 1]
[1 2]
Eigenvalues: [3, 1]
Eigenvectors:
  λ = 3 , v = (1, 1) , multiplicity = 1
  λ = 1 , v = (1, -1) , multiplicity = 1
</code></pre>
<h2>Mathematical Visualization</h2>
<p>Mathematics becomes more intuitive when you can see it:</p>
<pre><code class="language-sage"># 2D plotting
x = var('x')
p1 = plot(sin(x), (x, -2*pi, 2*pi), color='blue', legend_label='sin(x)')
p2 = plot(cos(x), (x, -2*pi, 2*pi), color='red', legend_label='cos(x)')

combined_plot = p1 + p2
combined_plot.axes_labels(['x', 'y'])
combined_plot.legend(True)
combined_plot.show(title='Trigonometric Functions')

# 3D surface
x, y = var('x y')
surface_plot = plot3d(x^2 + y^2, (x, -2, 2), (y, -2, 2))
surface_plot.show()
</code></pre>
<h2>Getting Started: Your First Mathematical Explorations</h2>
<h3>Exercise 1: Number Theory Investigation</h3>
<pre><code class="language-sage"># Investigate perfect numbers
def is_perfect(n):
    &quot;&quot;&quot;Check if n is a perfect number&quot;&quot;&quot;
    divisors = [i for i in range(1, n) if n % i == 0]
    return sum(divisors) == n

# Find perfect numbers up to 100
perfect_numbers = [n for n in range(1, 101) if is_perfect(n)]
print(&quot;Perfect numbers up to 100:&quot;, perfect_numbers)

# Investigate their properties
for p in perfect_numbers:
    divisors = [i for i in range(1, p) if p % i == 0]
    print(p, &quot;: divisors =&quot;, divisors, &quot;, sum =&quot;, sum(divisors))
</code></pre>
<p>Output:</p>
<pre><code>Perfect numbers up to 100: [6, 28]
6 : divisors = [1, 2, 3] , sum = 6
28 : divisors = [1, 2, 4, 7, 14] , sum = 28
</code></pre>
<h3>Exercise 2: Polynomial Exploration</h3>
<pre><code class="language-sage"># Explore polynomial behavior
x = var('x')
polynomials = [
    x^2 - 1,           # Quadratic
    x^3 - 2*x + 1,     # Cubic
    x^4 - 5*x^2 + 6    # Quartic
]

for poly in polynomials:
    print(&quot;\nPolynomial:&quot;, poly)
    
    # Find roots
    roots = solve(poly == 0, x)
    print(&quot;Roots:&quot;, roots)
    
    # Factor if possible
    try:
        factored = factor(poly)
        print(&quot;Factored form:&quot;, factored)
    except:
        print(&quot;Cannot factor over rationals&quot;)
    
    # Plot the polynomial
    plot(poly, (x, -3, 3)).show()
</code></pre>
<p>Output:</p>
<pre><code>Polynomial: x^2 - 1
Roots: [x == -1, x == 1]
Factored form: (x - 1)*(x + 1)

Polynomial: x^3 - 2*x + 1
Roots: [x == -1/2*sqrt(5) - 1/2, x == 1/2*sqrt(5) - 1/2, x == 1]
Factored form: (x - 1)*(x^2 + x - 1)

Polynomial: x^4 - 5*x^2 + 6
Roots: [x == -sqrt(3), x == sqrt(3), x == -sqrt(2), x == sqrt(2)]
Factored form: (x^2 - 3)*(x^2 - 2)
</code></pre>
<h3>Exercise 3: Matrix Magic</h3>
<pre><code class="language-sage"># Explore special matrices
print(&quot;=== SPECIAL MATRICES ===&quot;)

# Identity matrix
I = identity_matrix(3)
print(&quot;Identity matrix:&quot;)
print(I)

# Rotation matrix (2D)
angle = pi/4  # 45 degrees
rotation_2d = Matrix([[cos(angle), -sin(angle)], 
                      [sin(angle), cos(angle)]])
print(&quot;\n45° rotation matrix:&quot;)
print(rotation_2d)

# Apply rotation to a vector
v = vector([1, 0])
rotated_v = rotation_2d * v
print(&quot;Original vector:&quot;, v)
print(&quot;Rotated vector:&quot;, rotated_v)

# Verify the rotation preserves length
print(&quot;Original length:&quot;, v.norm())
print(&quot;Rotated length:&quot;, rotated_v.norm())
</code></pre>
<p>Output:</p>
<pre><code>=== SPECIAL MATRICES ===
Identity matrix:
[1 0 0]
[0 1 0]
[0 0 1]

45° rotation matrix:
[1/2*sqrt(2) -1/2*sqrt(2)]
[1/2*sqrt(2)  1/2*sqrt(2)]

Original vector: (1, 0)
Rotated vector: (1/2*sqrt(2), 1/2*sqrt(2))
Original length: 1
Rotated length: 1
</code></pre>
<h2>Building Mathematical Intuition</h2>
<h3>Understanding Through Computation</h3>
<p>Mathematical intuition develops through active exploration. Use CoCalc to:</p>
<ol>
<li><strong>Test Conjectures</strong>: Try examples and look for patterns</li>
<li><strong>Visualize Concepts</strong>: Plot functions and geometric objects</li>
<li><strong>Verify Calculations</strong>: Check hand computations with symbolic math</li>
<li><strong>Explore Edge Cases</strong>: See what happens at boundaries and special values</li>
</ol>
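<p>For instance, step 1 can be as simple as a brute-force check of a classical conjecture. The sketch below (the helper <code>is_prime_py</code> is defined here just to keep it self-contained; Sage's built-in <code>is_prime</code> works equally well) tests Goldbach's conjecture for small even numbers:</p>
<pre><code class="language-sage"># Conjecture: every even number from 4 to 100 is a sum of two primes
def is_prime_py(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

counterexamples = [n for n in range(4, 101, 2)
                   if not any(is_prime_py(p) and is_prime_py(n - p)
                              for p in range(2, n//2 + 1))]
print('Counterexamples up to 100:', counterexamples)  # prints []
</code></pre>
<p>No counterexamples appear, which does not prove the conjecture but builds intuition about where to look next.</p>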
<h3>Example: The Fibonacci Sequence</h3>
<pre><code class="language-sage"># Fibonacci explorations
def fibonacci_sequence(n):
    &quot;&quot;&quot;Generate first n Fibonacci numbers&quot;&quot;&quot;
    fib = [1, 1]
    for i in range(2, n):
        fib.append(fib[i-1] + fib[i-2])
    return fib

# Generate Fibonacci numbers
fib_nums = fibonacci_sequence(20)
print(&quot;First 20 Fibonacci numbers:&quot;, fib_nums)

# Investigate the golden ratio connection
golden_ratio = (1 + sqrt(5))/2
ratios = [fib_nums[i+1]/fib_nums[i] for i in range(len(fib_nums)-1)]

print(&quot;\nGolden ratio:&quot;, float(golden_ratio))
print(&quot;Consecutive Fibonacci ratios:&quot;)
for i, ratio in enumerate(ratios[-10:], len(ratios)-9):
    print(&quot;F(&quot;, i+1, &quot;)/F(&quot;, i, &quot;) =&quot;, float(ratio))

# Plot convergence to golden ratio
data_points = [(i, float(ratios[i-1])) for i in range(1, len(ratios)+1)]
ratio_plot = list_plot(data_points, color='blue', plotjoined=True, legend_label='F(n+1)/F(n)')
ratio_plot += line([(1, float(golden_ratio)), (len(ratios), float(golden_ratio))], color='red', linestyle='--', legend_label='Golden Ratio')
ratio_plot.axes_labels(['n', 'Ratio'])
ratio_plot.show(title='Fibonacci Ratios Converging to Golden Ratio')
</code></pre>
<h2>Next Steps in Your Mathematical Journey</h2>
<h3>Immediate Explorations</h3>
<ol>
<li><strong>Prime Number Patterns</strong>: Investigate prime gaps and distributions</li>
<li><strong>Geometric Sequences</strong>: Explore convergence and divergence</li>
<li><strong>Function Transformations</strong>: See how parameters affect graphs</li>
<li><strong>Matrix Powers</strong>: Discover patterns in repeated matrix multiplication</li>
</ol>
<h3>Preparing for Advanced Topics</h3>
<ul>
<li><strong>Abstract Algebra</strong>: Group and ring theory</li>
<li><strong>Real Analysis</strong>: Rigorous foundations of calculus</li>
<li><strong>Complex Analysis</strong>: Functions of complex variables</li>
<li><strong>Topology</strong>: Properties preserved under continuous deformations</li>
</ul>
<h3>Mathematical Research Skills</h3>
<ul>
<li><strong>Conjecture Formation</strong>: Making educated mathematical guesses</li>
<li><strong>Proof Techniques</strong>: Direct, indirect, and inductive reasoning</li>
<li><strong>Mathematical Writing</strong>: Communicating ideas clearly</li>
<li><strong>Collaboration</strong>: Working with others on mathematical problems</li>
</ul>
<h2>Resources for Continued Learning</h2>
<h3>Documentation and Help</h3>
<ul>
<li>SageMath Documentation: Comprehensive guides and examples</li>
<li>CoCalc Help: Platform-specific tutorials and tips</li>
<li>Mathematical Communities: Online forums and discussion groups</li>
</ul>
<h3>Practice Problems</h3>
<ul>
<li>Project Euler: Computational mathematics challenges</li>
<li>Mathematical Olympiad Problems: Contest mathematics</li>
<li>Research Papers: Current mathematical investigations</li>
</ul>
<p>Pure mathematics in CoCalc opens doors to a universe of mathematical beauty and discovery. Start with simple explorations, build your computational skills, and gradually tackle more sophisticated problems. Every mathematician started with curiosity—let CoCalc help you transform that curiosity into deep mathematical understanding.</p>
<hr>
<p><em>Begin your mathematical journey today. Access SageMath and start exploring at <a href="https://cocalc.com">cocalc.com</a></em></p>
]]></description>
      <pubDate>Thu, 02 Jan 2025 19:44:30 GMT</pubDate>
      <guid>https://cocalc.com/news/85</guid>
    </item>
    <item>
      <title><![CDATA[Connect with William Stein at JMM]]></title>
      <link>https://cocalc.com//news/connect-with-william-stein-at-jmm-80</link>
      <description><![CDATA[<img src="https://cocalc.com/blobs/William%20In%20His%20Prime%20Image.jpeg?uuid=31af2d93-0c9d-4f8a-a1d1-077e9139d114"   width="400px"  height="400px"  style="object-fit:cover"/>
<h4>Join Us at JMM for an Exclusive Meet &amp; Greet and Live Demo with William Stein!</h4>
<p><strong>We are thrilled to announce a special opportunity to meet William Stein, the CEO and Founder of CoCalc, at the <a href="https://jointmathematicsmeetings.org/meetings/national/jmm2025/2314_intro">JMM meeting in Seattle, WA.</a></strong></p>
<p>Don't miss this chance to engage with William as he presents a live demo of CoCalc! With numerous publications under his belt, William previously served as a tenured professor at the University of Washington until 2019, at which point he committed full-time to growing the <a href="https://cocalc.com/">CoCalc Platform</a>.</p>
<p>William has made significant contributions to the field of Computational Algebraic Number Theory and is the creator of the Computer Algebra System Sage.</p>
<p><strong>Come find us at booth 507 in the Exhibit Hall during the Grand Opening Reception.</strong></p>
]]></description>
      <pubDate>Thu, 02 Jan 2025 08:00:00 GMT</pubDate>
      <guid>https://cocalc.com/news/80</guid>
    </item>
    <item>
      <title><![CDATA[SOC 2 Type II]]></title>
      <link>https://cocalc.com//news/soc-2-type-ii-66</link>
      <description><![CDATA[<p><img src="https://cocalc.com/share/raw/ecad1d3b4e5ee23abf5be5ba5f2a21347a1d3ec7/SOC_CPA_Blue.png" alt=""></p>
<h3>SageMath, Inc. is pleased to announce successfully passing the SOC 2 Type II audit!</h3>
<p>Service Organization Controls 2 (SOC 2) is a framework governed by the American Institute of Certified Public Accountants (AICPA). In a SOC 2 audit, an independent service auditor reviews our policies, procedures, and evidence to determine whether the controls are designed and operating effectively. A SOC 2 report communicates our commitment to data security and the protection of our customers' information.</p>
<p>Request access to the report and view the current status of controls at <a href="https://trust.cocalc.com/">https://trust.cocalc.com/</a></p>
]]></description>
      <pubDate>Wed, 18 Dec 2024 03:42:42 GMT</pubDate>
      <guid>https://cocalc.com/news/66</guid>
    </item>
    <item>
      <title><![CDATA[CoCalc Exhibit at Informs]]></title>
      <link>https://cocalc.com//news/cocalc-exhibit-at-informs-65</link>
      <description><![CDATA[<h4>Come Visit CoCalc's Exhibition at INFORMS 2024</h4>
<p>We wanted to send a quick message to let everyone know that CoCalc will be hosting a booth (<a href="https://meetings.informs.org/wordpress/seattle2024/exhibit-hall/#Exhibits">#208</a>) at INFORMS 2024 in Seattle, WA!</p>
<p>We will live demo our platform in the exhibit hall all week and present a <a href="https://meetings.informs.org/wordpress/seattle2024/technology-showcases/#cocalc">Technology Showcase on Tuesday, Oct. 22nd, at 2:15 p.m. PT</a>.</p>
<p>As a short aside, you might also be interested to know that accessing on-demand H100 GPUs starts at $1.98 per hour via our <a href="https://doc.cocalc.com/compute_server.html">compute server functionality</a>. (It is all metered per second.) Other more budget-friendly options are available as well.</p>
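<p>Since usage is metered per second, the cost of a short session is easy to estimate. The following snippet is just arithmetic on the advertised $1.98/hour rate:</p>

```python
def cost_usd(hourly_rate: float, seconds: float) -> float:
    """Cost of running a per-second-metered server for `seconds` seconds."""
    return hourly_rate * seconds / 3600

# A 10-minute session on an on-demand H100 at $1.98/hour:
print(round(cost_usd(1.98, 600), 2))  # -> 0.33
```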
<p><img src="https://cocalc.com/share/raw/6d9b9f4654c451a4d78f079676efc275337daad9/Images/events/Informs%20Event%20Image.png" alt=""></p>
]]></description>
      <pubDate>Mon, 14 Oct 2024 13:00:00 GMT</pubDate>
      <guid>https://cocalc.com/news/65</guid>
    </item>
    <item>
      <title><![CDATA[New "How-To" Tutorial Series on YouTube!]]></title>
      <link>https://cocalc.com//news/new-how-to-tutorial-series-on-youtube--64</link>
      <description><![CDATA[<p>We are excited to announce that CoCalc has launched a brand-new &quot;How-To&quot; tutorial series on our <a href="https://www.youtube.com/@cocalc-cloud">YouTube Channel</a>! These tutorials are designed to help you get the most out of our platform by providing quick, two-minute videos that cover everything from navigating the CoCalc UI to auto-generating LaTeX/Markdown documents and Jupyter Notebooks.</p>
<p><img src="https://cocalc.com/share/raw/ef58b6ff496724351808a96bdbe288e190e6c879/Social%20Media/CoCalc/News/Images/YouTube%20Channel.png" alt=""></p>
<p>Our goal is to make your experience with CoCalc as smooth and efficient as possible. To ensure you stay up-to-date with the latest tips and tricks, we highly encourage you to subscribe! That way, you'll never miss a new tutorial and can always access the latest content right as it's released.</p>
<p>Please let us know what you want to see next! For more in-depth information, don't forget to visit our detailed <a href="https://doc.cocalc.com/">Documentation</a>.</p>
<p>Also, if you have any questions or want to chat with us directly, don't hesitate to <a href="https://calendly.com/cocalc/quick-meeting">book a video chat</a> with our team.</p>
]]></description>
      <pubDate>Fri, 30 Aug 2024 14:00:00 GMT</pubDate>
      <guid>https://cocalc.com/news/64</guid>
    </item>
    <item>
      <title><![CDATA[LaTeX Build Improvement]]></title>
      <link>https://cocalc.com//news/latex-build-improvement-63</link>
<description><![CDATA[<p>Our most recent update to the LaTeX editor lets you see more information about a running build process, and gives you the opportunity to stop it if it takes too long or causes problems. The build frame now shows memory and CPU usage, as well as the tail of the log, while the job is running.</p>
<p><img src="https://cocalc.com/share/raw/2c986513ef6703645b2bf694dc43ec8676c22934/latex-build-information-2024-08-20.png" alt=""></p>
<p>Under the hood, this is accomplished by an improved <code>/api/v2/exec</code> endpoint. When called with the <code>async_call</code> parameter, it returns an ID that you can use to monitor and control the underlying process. See
<a href="https://cocalc.com/api/v2#tag/Utils/operation/exec">/api/v2/exec</a> for more details.</p>
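<p>As a concrete illustration of the pattern this enables, here is a minimal sketch in Python with the HTTP call stubbed out. The exact request and response fields of <code>/api/v2/exec</code> are documented at the link above; the field names below (<code>job_id</code>, <code>async_get</code>, etc.) are assumptions for illustration only.</p>

```python
import time

def fake_exec(payload):
    """Stand-in for an authenticated POST to /api/v2/exec.

    The returned fields are invented for illustration; consult the
    API docs for the real response shape.
    """
    if payload.get("async_call"):
        # Starting a job asynchronously returns an ID immediately.
        return {"job_id": "job-123", "status": "running"}
    if payload.get("async_get") == "job-123":
        # Polling that ID eventually reports completion.
        return {"job_id": "job-123", "status": "completed", "exit_code": 0}
    return {"status": "error"}

def run_async(exec_call, command, poll_seconds=0.0):
    """Start `command` asynchronously, then poll until it finishes."""
    job = exec_call({"command": command, "async_call": True})
    while job["status"] == "running":
        time.sleep(poll_seconds)  # use a real delay against the live API
        job = exec_call({"async_get": job["job_id"]})
    return job

result = run_async(fake_exec, "latexmk -pdf paper.tex")
print(result["status"])  # -> completed
```

<p>Against the live API, <code>fake_exec</code> would be replaced by an authenticated request to <code>https://cocalc.com/api/v2/exec</code>.</p>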
]]></description>
      <pubDate>Tue, 20 Aug 2024 12:37:00 GMT</pubDate>
      <guid>https://cocalc.com/news/63</guid>
    </item>
    <item>
      <title><![CDATA[Sage 10.4]]></title>
      <link>https://cocalc.com//news/sage-10-4-62</link>
      <description><![CDATA[<p><strong>SageMath 10.4</strong> is now available in all CoCalc projects and via the SageMath compute server images.</p>
<ul>
<li><a href="https://github.com/sagemath/sage/wiki/Sage-10.4-Release-Tour">Release Tour Sage 10.4</a></li>
<li><a href="https://cocalc.com/haraldschilly/ideas/sage-10.4">Examples from the tour</a></li>
</ul>
<p>Older versions of Sage remain available. For Sage Worksheets and command-line use, you can configure which version of <code>sage</code> your project uses by running <code>sage_select ...</code> in a terminal. In Jupyter, you can switch to the newer kernel at any time!</p>
]]></description>
      <pubDate>Thu, 01 Aug 2024 12:33:52 GMT</pubDate>
      <guid>https://cocalc.com/news/62</guid>
    </item>
    <item>
      <title><![CDATA[MermaidJS now fully supported]]></title>
      <link>https://cocalc.com//news/mermaidjs-now-fully-supported-61</link>
<description><![CDATA[<p>You can now use <a href="https://mermaid.js.org/">MermaidJS</a>, a diagramming and charting tool that renders Markdown-inspired text definitions into diagrams, anywhere you use Markdown in CoCalc: text between code cells in Jupyter notebooks, whiteboards, slideshows, Markdown files, etc. Just put the Mermaid diagram description in a fenced code block, like this:</p>
<pre><code class="language-md">```mermaid
block-beta
columns 1
  db((&quot;DB&quot;))
  blockArrowId6&lt;[&quot;&amp;nbsp;&amp;nbsp;&amp;nbsp;&quot;]&gt;(down)
  block:ID
    A
    B[&quot;A wide one in the middle&quot;]
    C
  end
  space
  D
  ID --&gt; D
  C --&gt; D
  style B fill:#969,stroke:#333,stroke-width:4px
```
</code></pre>
<p>and it will get rendered like this:</p>
<pre><code class="language-mermaid">block-beta
columns 1
  db((&quot;DB&quot;))
  blockArrowId6&lt;[&quot;&amp;nbsp;&amp;nbsp;&amp;nbsp;&quot;]&gt;(down)
  block:ID
    A
    B[&quot;A wide one in the middle&quot;]
    C
  end
  space
  D
  ID --&gt; D
  C --&gt; D
  style B fill:#969,stroke:#333,stroke-width:4px
</code></pre>
<p>Using Mermaid in exactly this way is also fully supported in JupyterLab and on GitHub. Moreover, if you publish documents on the CoCalc share server, Mermaid also gets rendered properly; for example, see:</p>
<ul>
<li><a href="https://cocalc.com/wstein/dev/mermaid">https://cocalc.com/wstein/dev/mermaid</a></li>
</ul>
]]></description>
      <pubDate>Thu, 01 Aug 2024 04:01:22 GMT</pubDate>
      <guid>https://cocalc.com/news/61</guid>
    </item>
    <item>
      <title><![CDATA[Major IPyWidgets Update]]></title>
      <link>https://cocalc.com//news/major-ipywidgets-update-60</link>
<description><![CDATA[<p>CoCalc's Jupyter notebooks now have the latest version of IPyWidgets and vastly improved support for custom widgets! We spent much of July completely rewriting the IPyWidgets implementation in CoCalc's Jupyter notebook to fully support the latest version of widgets, as well as arbitrary custom widgets. This work is done and now live, and is a major improvement. All widget layouts should now match upstream exactly, and custom widgets work as long as they are hosted on the jsDelivr CDN. Before this, almost no custom widgets were supported (basically only K3d); now almost all custom widgets work, including ipyvolume, ipympl, the newest k3d, bqplot, and much more.</p>
<p><img src="https://cocalc.com/blobs/paste-0.48477617834897413?uuid=5cdd652a-f7cb-4b1e-b248-d06bc3441c2e" alt=""></p>
<p>Widgets in CoCalc work almost the same as in the latest official upstream version. The main difference is that widgets support realtime collaboration, and the full state of widgets is stored on the backend server. This means that if multiple people use a notebook at once (or you open the same notebook in multiple browsers), the state of the widgets stays in sync. Also, you don't have to re-evaluate code for widgets to appear after you refresh your browser.</p>
<ul>
<li>An open source library that came out of this project: <a href="https://github.com/sagemathinc/cocalc-widgets">https://github.com/sagemathinc/cocalc-widgets</a></li>
<li>Upstream IPyWidgets, now all examples are supported: <a href="https://ipywidgets.readthedocs.io/en/stable/">https://ipywidgets.readthedocs.io/en/stable/</a></li>
<li>CoCalc Widget docs: <a href="https://doc.cocalc.com/jupyter-enhancements.html#widgets-in-cocalc">https://doc.cocalc.com/jupyter-enhancements.html#widgets-in-cocalc</a></li>
</ul>
]]></description>
      <pubDate>Thu, 01 Aug 2024 03:44:17 GMT</pubDate>
      <guid>https://cocalc.com/news/60</guid>
    </item>
    <item>
      <title><![CDATA[Find files you have edited (new feature)]]></title>
      <link>https://cocalc.com//news/find-files-you-have-edited-new-feature--59</link>
<description><![CDATA[<p><span style="color:#1b95e0;background-color:#fafafa;border:1px solid #d9d9d9;padding:0 7px;border-radius:5px">#feature</span> Several times in the last few weeks I've &quot;lost&quot; a file I wanted to find in CoCalc and wasn't quite sure where it was. Yes, I could open 10 different projects and search the project logs, but that is tedious. So I directly queried the &quot;file_access_log&quot; table in our database and quickly found the file; e.g., this happened to me today trying to find a tex file. So... I figured everybody using CoCalc might want to do this, and added it as a feature in the upper right of <a href="https://cocalc.com/projects">the projects page</a>:</p>
<p><img src="https://cocalc.com/blobs/paste-0.7575265878632247?uuid=49d47637-fc04-4c99-b650-5ea49d60e28d" alt=""></p>
<br/>
<br/>
<p>Basically, you can type a string in that search box, hit return, and see the last 100 files (over the last year) that <em>you</em> edited whose name has that as a substring. You can use any PostgreSQL <code>ILIKE</code> pattern, e.g., <code>%</code> as a wildcard. It's just a tiny thing that was easy to implement, but could really help sometimes when you're not sure where a file is.</p>
]]></description>
      <pubDate>Thu, 01 Aug 2024 03:40:42 GMT</pubDate>
      <guid>https://cocalc.com/news/59</guid>
    </item>
    <item>
      <title><![CDATA[Compute Server VPN ]]></title>
      <link>https://cocalc.com//news/compute-server-vpn--58</link>
<description><![CDATA[<p>The new WireGuard encrypted VPN between all compute servers in a project is now live and fully working in all the testing I've done. This is a critical foundation for building other things -- clusters, the distributed filesystem, etc.</p>
<p>If you want to try the encrypted WireGuard VPN, just start two <a href="https://doc.cocalc.com/compute_server.html">compute servers</a> in the same project. Then type <code>more /etc/hosts</code> and see that <code>compute-server-[n]</code> resolves to the VPN address of the compute server (which will be of the form 10.11.x.y). Run <code>apt-get install -y iputils-ping</code> and then you can ping from one server to another, e.g., <code>ping compute-server-[n]</code>. Also, if you set a subdomain so <a href="https://foo.cocalc.cloud">https://foo.cocalc.cloud</a> works, then you can use <code>foo</code> as a name to connect to. The exciting things are that:</p>
<ul>
<li>all ports are opened on the vpn</li>
<li>all traffic is fully encrypted</li>
<li>only compute servers in the same project have access to the vpn</li>
<li>this fully works across clouds, i.e., some nodes on google cloud and some on hyperstack, and they all connect to each other in a unified way.</li>
</ul>
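<p>To make the host lookup concrete, here is a sketch with an invented <code>/etc/hosts</code> excerpt (real VPN addresses have the form 10.11.x.y, but these exact values and names are made up for illustration):</p>

```shell
# Hypothetical excerpt of /etc/hosts as written on a compute server:
cat > /tmp/hosts.example <<'EOF'
10.11.0.2  compute-server-1
10.11.0.3  compute-server-2
EOF

# Look up a peer's VPN address by name:
grep 'compute-server-2' /tmp/hosts.example | awk '{print $1}'

# On a real server, install ping and test connectivity over the VPN:
#   apt-get install -y iputils-ping && ping -c 3 compute-server-2
```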
<p>Note that on-prem still has one limitation: on-prem nodes can connect to all cloud nodes and all cloud nodes can connect to on-prem nodes, but on-prem nodes can't connect to each other. Making this work in general is complicated and expensive, requiring TURN servers, so we're not doing that for now. There are some special cases that will be supported in the future. This isn't the highest priority, since probably nobody but me uses on-prem with more than one server so far...</p>
<p>Anyway, I think now that this is in place, implementing our new high performance distributed filesystem will be possible!  Stay tuned.</p>
]]></description>
      <pubDate>Tue, 28 May 2024 16:54:54 GMT</pubDate>
      <guid>https://cocalc.com/news/58</guid>
    </item>
    <item>
      <title><![CDATA[GPT-4o and Gemini 1.5 Flash]]></title>
      <link>https://cocalc.com//news/gpt-4o-and-gemini-1-5-flash-57</link>
      <description><![CDATA[<p>We released another round of large language model updates. You can now use <a href="https://openai.com/index/hello-gpt-4o/">GPT-4o Omni</a> and <a href="https://deepmind.google/technologies/gemini/flash/">Gemini 1.5 Flash</a>. Both are not only very capable, but also extremely quick with their replies.</p>
<p>Here is an example of how I improved a plot of a t-test using R in a Jupyter Notebook. This is a visual check to see if the data really is significantly different. The plot looks a bit boring, though:</p>
<p><img src="https://cocalc.com/share/raw/ec225235f8d347b7849a61cca6d448f30a486f21/published/2024-05-16-t-test.png" alt=""></p>
<p>Via <strong>AI Tools</strong> → <strong>Improve</strong>, I can tell GPT-4 Omni to make this a violin plot and more colorful.</p>
<p><img src="https://cocalc.com/share/raw/ec225235f8d347b7849a61cca6d448f30a486f21/published/2024-05-16-t-test-improve.png" alt=""></p>
<p>I get a response and can review the changes in the side chat. The result looks like this:</p>
<p><img src="https://cocalc.com/share/raw/ec225235f8d347b7849a61cca6d448f30a486f21/published/2024-05-16-plot-improved.png" alt=""></p>
<p>Much better!</p>
<p>OK, but wait, what's the t-test? And there was also something called <code>shapiro</code> (the Shapiro–Wilk normality test). To learn more, I opened a new chat and asked Gemini Flash to explain both to me, and told it to also show me how to do this in R – which I can run directly in the chat.</p>
<p><img src="https://cocalc.com/share/raw/ec225235f8d347b7849a61cca6d448f30a486f21/published/2024-05-16-ttest-chat-gemini-flash.png" alt=""></p>
]]></description>
      <pubDate>Thu, 16 May 2024 07:29:58 GMT</pubDate>
      <guid>https://cocalc.com/news/57</guid>
    </item>
    <item>
      <title><![CDATA[Over a Dozen New Videos]]></title>
      <link>https://cocalc.com//news/over-a-dozen-new-videos-55</link>
<description><![CDATA[<p>Get ready, everyone! Over the past month, our very own William Stein has been creating an array of videos highlighting various aspects of CoCalc's compute server functionality! This series covers topics ranging from understanding memory usage to using popular software images such as TensorFlow, Sage, and Lean on powerful GPU/CPU machines. William's comprehensive walkthroughs showcase CoCalc's brand-new capabilities for advanced mathematical research, machine learning, and data science!</p>
<p>Feel free to browse the curated playlist that houses these videos. Dive in and discover how to harness the full potential of CoCalc like never before! The power of CoCalc is at your fingertips: explore, learn, and elevate your experience! <a href="https://www.youtube.com/playlist?list=PLOEk1mo1p5tJmEuAlou4JIWZFH7IVE2PZ">Browse the playlist.</a></p>
<p><img src="https://cocalc.com/share/raw/fa7420def489f8586f9a174856567ac76e55f427/Images/events/Screenshot%202024-05-15%20at%204.07.07%E2%80%AFPM.png" alt=""></p>
]]></description>
      <pubDate>Wed, 15 May 2024 16:42:13 GMT</pubDate>
      <guid>https://cocalc.com/news/55</guid>
    </item>
    <item>
      <title><![CDATA[R 4.4 now available]]></title>
      <link>https://cocalc.com//news/r-4-4-now-available-54</link>
      <description><![CDATA[<p>The project software environment has been updated. Version <code>2024-05-13</code> is the default now: it includes <strong><a href="https://www.r-bloggers.com/2024/04/whats-new-in-r-4-4-0/">R 4.4</a></strong> as the default R. Many packages were updated as well.</p>
<p>Note: if you had installed R packages locally in your project before, you have to re-compile them.</p>
<p>The default &quot;R Statistics&quot; compute server image also includes R 4.4.</p>
<p>As usual, there are also a ton of upgrades for the Python 3 (system-wide) environment, and various underlying Linux packages.</p>
<p>If you run into problems, please let us know in support. You can also always switch back to the previous environment in Project Settings → Control → Software Environment and select &quot;Ubuntu 22.04 // Previous&quot;.</p>
]]></description>
      <pubDate>Mon, 13 May 2024 11:28:51 GMT</pubDate>
      <guid>https://cocalc.com/news/54</guid>
    </item>
    <item>
      <title><![CDATA[ICLR (International Conference on Learning Representations)]]></title>
      <link>https://cocalc.com//news/iclr-international-conference-on-learning-representations--82</link>
      <description><![CDATA[<h3>Our Big Takeaway from ICLR: It's All About Access to Compute</h3>
<img src="https://cocalc.com/share/raw/a9017d05d688a6907c9690e23806c5f75ab0ea21/Images/events/IMG_5822%202.jpeg"   width="500px"  height="400px"  style="object-fit:cover"/>
<p>We were in Vienna for ICLR in May 2024, and as always, the scale of the research was incredible. You couldn't walk a single poster aisle without seeing massive transformers, complex generative models, or new self-supervised learning techniques that required immense computational power.</p>
<p>But it was the conversations in the hallways between talks that were most interesting. Beyond the model architectures, the one question that came up constantly was a practical one: &quot;How did you get enough GPU time for that?&quot;</p>
<p>It was clear that access to compute is a huge bottleneck. We talked to many researchers who were tired of maintaining their own hardware and just wanted to spin up a powerful GPU via the cloud for a big training run without a lot of hassle. Moments later, we’d talk to another team that had invested heavily in their own local cluster and wanted to use their own machines for their work.</p>
<p>It was a great reminder that there's no single &quot;right&quot; way to do research. The real need is for flexibility—the ability to use local machines for development and then scale up to the cloud when it's time to really push a model.</p>
<p>As a team that thinks about research infrastructure, it was fascinating to see the community navigating this hybrid future. It’s a complex but really interesting challenge, and it was great to see it being discussed so openly.</p>
<p>A huge thank you to the ICLR organizers for another fantastic conference.</p>
<p>— The CoCalc Team</p>
]]></description>
      <pubDate>Mon, 06 May 2024 23:23:16 GMT</pubDate>
      <guid>https://cocalc.com/news/82</guid>
    </item>
    <item>
      <title><![CDATA[Visit us at MLSys in Santa Clara, CA]]></title>
      <link>https://cocalc.com//news/visit-us-at-mlsys-in-santa-clara-ca-81</link>
      <description><![CDATA[<h3><strong>MLSys 2024: Focusing on the Full Workflow</strong></h3>
<img src="https://cocalc.com/share/raw/fd86a63c9323d0d312427f2752eb1334870c5ed1/Images/events/IMG_5971.jpeg"   width="500px"  height="400px"  style="object-fit:cover"/>
<p>We just wrapped up our time at MLSys in Santa Clara, an event that’s special because it focuses on the &quot;how&quot; of machine learning. While other conferences often center on model architectures and theoretical breakthroughs, MLSys dives deep into the systems and engineering that make those breakthroughs possible.</p>
<p>The conference is dedicated to the full, end-to-end workflow of ML. The conversations were about the crucial, complex engineering that underpins every great result: building efficient data pipelines, managing training infrastructure at scale, and solving the challenges of deploying and serving models robustly.</p>
<p>It was great to see a whole community focused on these operational hurdles. So much of a researcher's time is spent moving between different tools for data prep, training, and analysis. For our team, it was inspiring to see so many smart people working to streamline that entire journey from raw data to finished paper. It’s clear that closing the gaps between these different stages is a huge priority for the field.</p>
<p>Thanks to the MLSys organizers for creating a space for these crucial conversations, and to everyone we spoke with at the event.</p>
<p>— The CoCalc Team</p>
<p><a href="https://cocalc.com">cocalc.com</a></p>
]]></description>
      <pubDate>Sun, 05 May 2024 23:08:53 GMT</pubDate>
      <guid>https://cocalc.com/news/81</guid>
    </item>
    <item>
      <title><![CDATA[CoCalc Exhibiting at AISTATS 2024]]></title>
      <link>https://cocalc.com//news/cocalc-exhibiting-at-aistats-2024-83</link>
      <description><![CDATA[<h3><strong>Our View from AISTATS: The Two-Toolbox Problem</strong></h3>
<img src="https://cocalc.com/share/raw/7336a8da311e6fc8d75a42f331e79b1364a5980b/Images/events/0BE73DC6-D4E6-4DCA-958E-003D447198BE_1_102_o.jpeg"   width="500px"  height="400px"  style="object-fit:cover"/>
<p>We recently spent time at AISTATS, and what makes that conference so unique is that it’s where <em>two distinct worlds collide</em>: the rigor of classical statistics and the scale of modern AI.</p>
<p>One talk might be a deep dive into causal inference or Bayesian methods, and the very next could be about the <em>statistical interpretability of a massive deep learning model</em>. The whole event is focused on answering fundamental questions about AI through a statistical lens, tackling big issues like uncertainty quantification and building more transparent models.</p>
<p>In the hallways, this led to a lot of conversations about a common workflow challenge. Many researchers we spoke with feel like they're working with <strong>two separate toolboxes</strong>. They might do their heavy statistical lifting in <code>R</code> or with Python’s <code>statsmodels</code>, but then have to apply those insights to a neural network built in <code>PyTorch</code> or <code>TensorFlow</code>.</p>
<p>It’s a subtle but constant friction—moving data, translating code, and trying to get these two distinct environments to talk to each other seamlessly. As a team that thinks a lot about integrated scientific workflows, it was fascinating to see. It’s a classic problem: how do you make two toolboxes feel like one?</p>
<p>A huge thank you to the AISTATS organizers for creating a space where these two critical fields can meet and push each other forward.</p>
<hr>
<p>— The CoCalc Team</p>
<p><a href="https://cocalc.com">cocalc.com</a></p>
]]></description>
      <pubDate>Wed, 01 May 2024 23:33:56 GMT</pubDate>
      <guid>https://cocalc.com/news/83</guid>
    </item>
    <item>
      <title><![CDATA[CoCalc Exhibit at AISTATS 2024]]></title>
      <link>https://cocalc.com//news/cocalc-exhibit-at-aistats-2024-84</link>
      <description><![CDATA[<h2>Connecting with the AI and Statistics Community at AISTATS</h2>
<img src="https://cocalc.com/share/raw/7336a8da311e6fc8d75a42f331e79b1364a5980b/Images/events/0BE73DC6-D4E6-4DCA-958E-003D447198BE_1_102_o.jpeg"   width="500px"  height="400px"  style="object-fit:cover"/>
<p>We had a fantastic time connecting with folks at AISTATS (the International Conference on Artificial Intelligence and Statistics). This is a cornerstone event for anyone working on the statistical foundations of AI, and it was great to meet so many of you in person.</p>
<h3>We Want to Support Your Community</h3>
<p>For researchers in AI and statistics, CoCalc is your collaborative, cloud-based workspace. We bridge the gap between powerful tools and seamless teamwork.</p>
<ul>
<li><strong>All Your Tools in One Place:</strong> Access R, SciPy, TensorFlow, and PyTorch in a single, browser-based environment. No more setup headaches.</li>
<li><strong>Collaborate in Real-Time:</strong> Work together on code, data analysis, and model building with shared projects and simultaneous editing.</li>
<li><strong>Powerful Computing:</strong> From complex Bayesian models to large-scale AI, get the computational resources you need.</li>
<li><strong>Reproducible Research:</strong> Easily track your work, manage data, and document your entire process for transparent and reliable results.</li>
</ul>
<h3>Inspiration at AISTATS in Valencia, Spain</h3>
<p>The conversations we had at AISTATS were certainly insightful. Understanding the computational challenges you face in areas like causal inference, model interpretability, and probabilistic methods helps us make CoCalc even better for the AI and statistics community.</p>
<p>Thanks for all of the engagement at our booth!</p>
]]></description>
      <pubDate>Wed, 01 May 2024 21:35:08 GMT</pubDate>
      <guid>https://cocalc.com/news/84</guid>
    </item>
    <item>
      <title><![CDATA[New GPU Cloud Integration with Hyperstack! ]]></title>
      <link>https://cocalc.com//news/new-gpu-cloud-integration-with-hyperstack--51</link>
<description><![CDATA[<p>If you are using GPUs on CoCalc, there's an entirely new cloud option you should check out: <a href="https://www.hyperstack.cloud/">Hyperstack</a>:</p>
<img width="348" alt="image" src="https://github.com/sagemathinc/cocalc/assets/1276278/70ef02da-35b5-4a7f-a3e8-c4a9f51dd617">
<p>Once you select Hyperstack after starting to <a href="https://doc.cocalc.com/compute_server.html">create a compute server</a>, click the A100 tag and you'll see this:</p>
<img width="1379" alt="image" src="https://github.com/sagemathinc/cocalc/assets/1276278/61ef50f8-7590-4561-9b99-e053b2ab2f81">
<p>Note that for $3.60/hour you get an 80GB A100, and these are all standard instances. You can also see that, at least right now, many are available. Everything else works very similarly to Google Cloud, except that:</p>
<ul>
<li>startup time is slower -- definitely expect about 5-10 minutes from when you click &quot;Start&quot; until you can use the compute server. However, it's very likely to start successfully, unlike Google Cloud GPUs (especially spot instances). Google Cloud is extremely good for CPU, but not as good for GPU.</li>
<li>Many of the server configurations have over 500GB of very fast local ephemeral disk, in case you need scratch space. It's ephemeral, so it goes away when you stop the server.</li>
<li>The local disk on the server should be as fast as or faster than Google Cloud's, but cheaper.</li>
<li>All network usage is free, whereas egress from Google Cloud is quite expensive.</li>
<li>There's a different range of GPUs, and availability fluctuates: sometimes there are a lot of H100s, but in the middle of the day on Wednesday there aren't; yesterday there were dozens of them.</li>
<li>By default, only a Python (Anaconda) image and an Ollama image are visible, since they are small. When you select the Python image, you'll likely have to type <code>conda install ...</code> in a terminal to install the packages you need. If you click the &quot;Advanced&quot; checkbox when selecting an image, you can choose from the full range of images. However, the first startup of your server may be MUCH slower for big images (think &quot;20-30 minutes&quot; for the huge Colab image). Starting the server a second time is fast again.</li>
</ul>
<img width="891" alt="image" src="https://github.com/sagemathinc/cocalc/assets/1276278/19cdaa95-d9c4-4dc8-beed-1fc69e24cf8a">
<ul>
<li>Live disk enlarging does work, but at most 25 times per server, due to Hyperstack's architecture.</li>
</ul>
]]></description>
      <pubDate>Wed, 24 Apr 2024 20:37:38 GMT</pubDate>
      <guid>https://cocalc.com/news/51</guid>
    </item>
    <item>
      <title><![CDATA[Running On-Prem Compute Servers on CoCalc (Video)]]></title>
      <link>https://cocalc.com//news/running-on-prem-compute-servers-on-cocalc-video--49</link>
      <description><![CDATA[<p>VIDEO: <a href="https://youtu.be/NkNx6tx3nu0">https://youtu.be/NkNx6tx3nu0</a></p>
<p>LINK: <a href="https://github.com/sagemathinc/cocalc-howto/blob/main/onprem.md">https://github.com/sagemathinc/cocalc-howto/blob/main/onprem.md</a></p>
<p>We add an on-prem compute server running on my MacBook Pro laptop to a CoCalc (<a href="https://cocalc.com">https://cocalc.com</a>) project, and use it via a Jupyter notebook and a terminal. This involves creating an Ubuntu 22.04 virtual machine via Multipass and pasting a line of code into the VM to connect it to CoCalc.</p>
<img width="1462" alt="image" src="https://github.com/sagemathinc/cocalc/assets/1276278/3aed1b8a-4af9-4379-aa67-9ee6eaf72de2">
<p>After using a compute server running on my laptop, I create another compute server running on Lambda Cloud (<a href="https://lambdalabs.com/">https://lambdalabs.com/</a>). This involves renting a powerful server with an H100 GPU, waiting a few minutes for it to boot up, and then pasting in a line of code. The compute server gets configured, starts up, and we are able to confirm that the H100 is available. We then type &quot;conda install -y pytorch&quot; to install PyTorch, and use Claude 3 to run a demo involving the GPU and train a toy model.</p>
]]></description>
      <pubDate>Thu, 18 Apr 2024 19:53:31 GMT</pubDate>
      <guid>https://cocalc.com/news/49</guid>
    </item>
    <item>
      <title><![CDATA[Using VS Code on CoCalc]]></title>
      <link>https://cocalc.com//news/using-vs-code-on-cocalc-48</link>
      <description><![CDATA[<p>There are many ways to quickly launch Visual Studio Code (VS Code) on <a href="https://cocalc.com">https://cocalc.com</a>.</p>
<p>VIDEO: <a href="https://youtu.be/c7XHYBDTplw">https://youtu.be/c7XHYBDTplw</a></p>
<p>Open a project on <a href="https://cocalc.com">https://cocalc.com</a>; then, with one click in the file explorer, launch VS Code running on the project. You can then install and use a Jupyter notebook inside VS Code, edit Python code, and use a terminal.</p>
<p>When you need more power, add a compute server to your project. For example, in the video we demo adding a compute server with 128GB of RAM and the latest Google Cloud n4 machine type. It's a spot instance, which is great for a quick demo. We configure DNS and auto-restart, launch the compute server, and watch it boot via the serial console. Once the server is running, we launch VS Code with one click, use a Jupyter notebook, edit Python code, and open a terminal to confirm that the underlying machine has 128GB of RAM.</p>
<p>You can also make a CoCalc terminal that runs on the compute server by clicking &quot;+New --&gt; Linux Terminal&quot;, then clicking the Server button and selecting your compute server.</p>
<p>This costs just a few cents, as you can confirm using the &quot;Upgrades&quot; tab (and scrolling down). When you're done, deprovision the server, unless you need to keep data that is only on the server.</p>
]]></description>
      <pubDate>Thu, 18 Apr 2024 16:39:34 GMT</pubDate>
      <guid>https://cocalc.com/news/48</guid>
    </item>
    <item>
      <title><![CDATA[Using JupyterLab on CoCalc Compute Servers]]></title>
      <link>https://cocalc.com//news/using-jupyterlab-on-cocalc-compute-servers-47</link>
      <description><![CDATA[<p>CoCalc now makes it very easy to run a hosted JupyterLab
instance in the cloud, either a lightweight instance on our shared cluster,
or a high powered instance on a dedicated compute server with a custom
subdomain.</p>
<p>Check out <a href="https://github.com/sagemathinc/cocalc-howto/blob/main/jupyterlab.md">https://github.com/sagemathinc/cocalc-howto/blob/main/jupyterlab.md</a> or the video at <a href="https://youtu.be/LLtLFtD8qfo">https://youtu.be/LLtLFtD8qfo</a></p>
]]></description>
      <pubDate>Thu, 18 Apr 2024 04:58:32 GMT</pubDate>
      <guid>https://cocalc.com/news/47</guid>
    </item>
    <item>
      <title><![CDATA[Multibot chat on CoCalc ]]></title>
      <link>https://cocalc.com//news/multibot-chat-on-cocalc--45</link>
      <description><![CDATA[<p>I saw a new announcement today <a href="https://quorablog.quora.com/Multi-bot-chat-on-Poe">about &quot;Multibot chat on Poe&quot;</a>: &quot;Today we are adding an important new capability to Poe: multi-bot chat. This feature lets you easily chat with multiple models in a single thread. [...] Multi-bot chat is important because different models have different strengths and weaknesses. Some are optimized for specific tasks and others have unique knowledge. As you query a bot on Poe, you now can compare answers from recommended bots with one click, and summon any bot you prefer by @-mentioning the bot - all within the same conversation thread. This new ability lets you easily compare results from various bots and discover optimal combinations of models to use the best tool for each step in a workflow. [...] <strong>With Poe, you’re able to access all of the most powerful models, and millions of user-created bots built on top of them, all with a single $20/month subscription.</strong> &quot;</p>
<p>Due to major recent work by Harald Schilly, <a href="https://CoCalc.com">https://CoCalc.com</a> also has very similar functionality!   Also, in CoCalc, you pay as you go for exactly the tokens you use with each model, and it typically costs our users far less than $20/month, with many of the models being free.  Instead of paying $20/month, add $10 in credit to your CoCalc account (which never expires) and pay for exactly what you actually use.</p>
<img width="1575" alt="image" src="https://github.com/sagemathinc/cocalc/assets/1276278/a4390717-0265-439e-9da8-eda88e4752ab">
<p>Then ask a question, and follow up using <strong>different models</strong>: you can regenerate the response with any model you like.</p>
<img width="1360" alt="image" src="https://github.com/sagemathinc/cocalc/assets/1276278/8d7f60c1-20f8-412c-9921-903fea0a96fa">
<p>You can see all responses in the history:</p>
<img width="1224" alt="image" src="https://github.com/sagemathinc/cocalc/assets/1276278/7e5a3591-a6f6-47d5-b702-cce371a254c6">
<p>The superpower of <a href="http://poe.com">poe.com</a>'s LLMs is their integration with web search.  The superpower of <a href="http://CoCalc.com">CoCalc.com</a>'s LLMs is their integration with computation (including high-powered HPC VMs, GPUs, Jupyter notebooks, LaTeX, R, etc.).  For example, continuing our thread above:</p>
<img width="1279" alt="image" src="https://github.com/sagemathinc/cocalc/assets/1276278/e64a0473-8cf8-41f6-8391-41a5fb7634b5">
<p>You can also generate code in Jupyter notebooks, which run either in a lightweight shared environment or on <a href="https://doc.cocalc.com/compute_server.html">high powered dedicated compute servers</a>.</p>
<p>Finally, you can always check and see exactly how much every interaction costs:</p>
<img width="1594" alt="image" src="https://github.com/sagemathinc/cocalc/assets/1276278/f6c6c727-9c26-40a7-9c15-01584d773daf">
<p>Try it out today!!!</p>
]]></description>
      <pubDate>Tue, 16 Apr 2024 16:43:04 GMT</pubDate>
      <guid>https://cocalc.com/news/45</guid>
    </item>
    <item>
      <title><![CDATA[Run RStudio in your project or a compute server]]></title>
      <link>https://cocalc.com//news/run-rstudio-in-your-project-or-a-compute-server-44</link>
      <description><![CDATA[<p>We just added RStudio support to CoCalc projects (restart your project and refresh your browser if this doesn't work):</p>
<h2>Run RStudio directly in your project</h2>
<p>Open the &quot;Servers&quot; Tab to the left, then scroll down and click the &quot;RStudio&quot; button:</p>
<img width="1463" alt="image" src="https://github.com/sagemathinc/cocalc/assets/1276278/03a3e909-27d0-4792-9b1c-7348a9dbbc6f">
<p>In a second, RStudio Server will appear in another tab:</p>
<img width="1280" alt="image" src="https://github.com/sagemathinc/cocalc/assets/1276278/7a90fc48-1dce-4312-b713-91d4d3355ec8">
<p>Simple as that.    You can also run JupyterLab and VS Code just as easily.</p>
<h2>Run RStudio on a compute server</h2>
<p>If you need vastly more compute power <strong>(e.g., 80 cores for only $0.47/hour!!!)</strong>, scroll up a little and create a <a href="https://doc.cocalc.com/compute_server.html">compute server</a>:</p>
<img width="766" alt="image" src="https://github.com/sagemathinc/cocalc/assets/1276278/994431a4-f862-45ea-bb92-8c21beaa6654">
<p>then:</p>
<img width="823" alt="image" src="https://github.com/sagemathinc/cocalc/assets/1276278/b02276ae-f4b5-4848-8004-503c093eb9a3">
<p>After that, create the compute server and when it starts up, click the https link.  You may have to copy/paste a token to access the RStudio instance.</p>
<img width="672" alt="image" src="https://github.com/sagemathinc/cocalc-compute-docker/assets/1276278/ef75e077-0da6-4ef6-8787-008463b93e65">]]></description>
      <pubDate>Tue, 16 Apr 2024 00:01:38 GMT</pubDate>
      <guid>https://cocalc.com/news/44</guid>
    </item>
    <item>
      <title><![CDATA[Sage 10.3 and many other updates]]></title>
      <link>https://cocalc.com//news/sage-10-3-and-many-other-updates-43</link>
      <description><![CDATA[<p>The project software environment has been updated. Version <code>2024-03-25</code> is now the default. It includes <strong><a href="https://github.com/sagemath/sage/wiki/Sage-10.3-Release-Tour">SageMath 10.3</a></strong> as the default. As usual, you can still use older versions by switching to a different Jupyter Kernel or use the <code>sage_select</code> command-line utility to change what <code>sage</code> is actually running.</p>
<p>As usual, there are also a ton of upgrades for the Python 3 (system-wide) environment, R, and various underlying Linux packages.</p>
<p>If you run into problems, please let us know in support. You can also always switch back to the previous environment in Project Settings → Control → Software Environment and select &quot;Ubuntu 22.04
// Previous&quot;.</p>
<hr>
<p>Update:</p>
<p><code>2024-03-29</code>: a small patch update has been released, which mainly fixes a <code>pandas</code> vs. <code>openpyxl</code> incompatibility involving reading <code>*.xlsx</code> files.</p>
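<p>For reference, the affected path is <code>pandas</code> reading <code>.xlsx</code> files through <code>openpyxl</code>. A minimal round-trip check, as a sketch (assuming <code>pandas</code> and <code>openpyxl</code> are installed, as they are in this software environment):</p>

```python
import pandas as pd

# Write a tiny spreadsheet, then read it back. pandas uses openpyxl
# under the hood for .xlsx files, which is the code path this patch fixes.
df = pd.DataFrame({"a": [1, 2], "b": [3.0, 4.0]})
df.to_excel("demo.xlsx", index=False)
roundtrip = pd.read_excel("demo.xlsx")
print(roundtrip["a"].tolist())
```

If the round trip raises an error instead of reproducing the data, you are likely on the older environment and should update.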
]]></description>
      <pubDate>Mon, 25 Mar 2024 13:28:13 GMT</pubDate>
      <guid>https://cocalc.com/news/43</guid>
    </item>
    <item>
      <title><![CDATA[VS Code, JupyterLab and X11 Desktop on Compute Servers]]></title>
      <link>https://cocalc.com//news/vs-code-jupyterlab-and-x11-desktop-on-compute-servers-42</link>
      <description><![CDATA[<p>If you are running a compute server, click &quot;Edit&quot; (or &quot;Details&quot;), then scroll down to the new &quot;Applications&quot; section, and in most cases you'll find three new buttons -- &quot;JupyterLab&quot;, &quot;VS Code&quot; and &quot;X11 Desktop&quot;.</p>
<img width="876" alt="image" src="https://github.com/sagemathinc/cocalc/assets/1276278/e6d767a9-be22-46be-9df7-f335087fdda6">
<p>Click a button and CoCalc installs and runs JupyterLab, VS Code, or an X11 Desktop directly on the compute server.</p>
<img width="866" alt="image" src="https://github.com/sagemathinc/cocalc/assets/1276278/209e3fb0-f062-48bf-ad1b-e3057594ca19">
<p>If your compute server is geographically close to you, then using this application will have low latency.</p>
<p>Each application is running on the compute server and has full access to your files and any compute resources of the compute server.  Any project collaborator can also access this link.  Moreover, if you share the link with the auth token, then anybody you share it with can use the app (even if they do not have a cocalc account).</p>
<p>For JupyterLab, you must configure a DNS subdomain, which is easy to do in the Network section directly above:</p>
<img width="783" alt="image" src="https://github.com/sagemathinc/cocalc/assets/1276278/e40847ec-3092-4138-b6d1-4b224fc5b2e0">
<p>For the X11 Desktop, almost no applications are installed by default. Fortunately, you can do <code>apt-get install ...</code> to install apps. For example, after <code>apt-get install gimp</code>, you can run <code>gimp</code>:</p>
<img width="1714" alt="image" src="https://github.com/sagemathinc/cocalc/assets/1276278/20bfee00-017b-4ea0-a2d3-a5c1a24344da">
]]></description>
      <pubDate>Sat, 23 Mar 2024 22:51:24 GMT</pubDate>
      <guid>https://cocalc.com/news/42</guid>
    </item>
    <item>
      <title><![CDATA[Nested Virtualization]]></title>
      <link>https://cocalc.com//news/nested-virtualization-41</link>
      <description><![CDATA[<p>You can now run arbitrary X86 virtual machines inside compute servers on <a href="https://cocalc.com">https://cocalc.com</a>.</p>
<p>Select an intel machine type, e.g., n2-standard-2, then scroll down and check &quot;Enable Nested Virtualization&quot;:</p>
<img width="799" alt="image" src="https://github.com/sagemathinc/cocalc/assets/1276278/09ac234f-384b-4e1f-9c1b-6471dcbe2647">
<img width="457" alt="image" src="https://github.com/sagemathinc/cocalc/assets/1276278/759abe16-31c5-4b91-83bd-5c523a22416e">
]]></description>
      <pubDate>Thu, 14 Mar 2024 15:58:06 GMT</pubDate>
      <guid>https://cocalc.com/news/41</guid>
    </item>
    <item>
      <title><![CDATA[Compute Servers: Anaconda and JupyterHub]]></title>
      <link>https://cocalc.com//news/compute-servers-anaconda-and-jupyterhub-40</link>
      <description><![CDATA[<p>There are now 3 new <a href="https://doc.cocalc.com/compute_server.html">compute server</a> images:</p>
<ul>
<li>Anaconda</li>
<li>JupyterHub</li>
<li>Kubernetes Node</li>
</ul>
<img width="816" alt="image" src="https://github.com/sagemathinc/cocalc/assets/1276278/d698c96c-f2b8-4067-ac9f-28ebbcd32d8d">
<h2>Anaconda</h2>
<p>The Anaconda image is a lightweight image with the conda command installed and configured (via mambaforge), and two channels, anaconda and conda-forge, enabled by default. You get Python 3.11 and can very easily install packages into your compute server's environment using the conda command, e.g., install Matplotlib:</p>
<pre><code class="language-sh">(compute-server-1540) ~/anaconda$ conda install matplotlib
</code></pre>
<p>The packages you install are stored in /conda only on the compute server, so installing and using the packages is fast, and if you make the compute server disk large, you can install many packages.</p>
<img width="1014" alt="image" src="https://github.com/sagemathinc/cocalc/assets/1276278/f657d682-4151-4a0b-adf9-53a6031a0e74">
<h2>JupyterHub</h2>
<p>The JupyterHub image is a single-node Kubernetes install of JupyterHub, which can be fully customized by you <a href="https://z2jh.jupyter.org/en/stable/jupyterhub/customization.html">exactly as explained</a> in the official docs (or email <a href="mailto:help@sagemath.com">help@sagemath.com</a> for support!).  Click to create it, and wait for everything to install.  <strong>It can take several minutes to start the first time, so please be patient.</strong>  There is a random registration token which has to be entered to connect to JupyterHub; once you do that, the default auth is that anybody can sign in with any login/password (that's just the JupyterHub default).  The default image is also very simple, but you can easily change it as documented above.</p>
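<p>For example, customization follows the standard z2jh pattern of a Helm values file; a minimal sketch (the image name and tag below are placeholders for illustration, not what ships by default):</p>

```yaml
# config.yaml -- passed to helm per the z2jh customization docs, e.g.:
#   helm upgrade --install jupyterhub jupyterhub/jupyterhub --values config.yaml
singleuser:
  image:
    name: jupyter/datascience-notebook   # placeholder single-user image
    tag: latest
```

See the z2jh customization guide linked above for authentication, storage, and other options.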
<p>This is a single node deployment by default, but scaling up to multiple nodes does work, though it requires some copy/paste on the command line.  (We will automate this in the future.)</p>
<h2>Kubernetes Node</h2>
<p>You can create a Kubernetes node. This is a single node Kubernetes cluster by default.  However, you can join it to an existing cluster following the microk8s directions.  E.g., you could expand a JupyterHub install to have multiple nodes.</p>
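<p>The copy/paste in question follows the usual microk8s pattern; a sketch (the IP and token below are placeholders that <code>add-node</code> prints for you, not literal values):</p>

```sh
# On the existing cluster (e.g., the JupyterHub compute server):
microk8s add-node
# ...this prints a one-time join command, something like:
#   microk8s join 10.0.0.5:25000/<token>

# On the new Kubernetes-node compute server, paste that command:
microk8s join 10.0.0.5:25000/<token>
```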
]]></description>
      <pubDate>Sun, 10 Mar 2024 19:46:55 GMT</pubDate>
      <guid>https://cocalc.com/news/40</guid>
    </item>
    <item>
      <title><![CDATA[Compute Servers -- Serial Port Output]]></title>
      <link>https://cocalc.com//news/compute-servers-serial-port-output-39</link>
<description><![CDATA[<p>I just made it so you can view the serial port output of CoCalc <a href="https://doc.cocalc.com/compute_server.html">compute servers</a>.   A compute server in <a href="https://cocalc.com">CoCalc</a> is a remote computer whose resources (GPUs, CPUs, RAM, disks) you can utilize via CoCalc’s collaborative interface in Jupyter notebooks and terminals, providing hundreds of CPUs, thousands of GBs of RAM, full root privileges, the ability to run Docker containers, and much more.</p>
<p>There is a new option &quot;Serial&quot; that is visible for any running compute server:</p>
<img width="1097" alt="image" src="https://github.com/sagemathinc/cocalc/assets/1276278/d9440389-0438-4564-b8d3-91eccccbde56">
<p>Clicking the Serial button shows the serial console from when the compute server started running:</p>
<img width="1384" alt="image" src="https://github.com/sagemathinc/cocalc/assets/1276278/5ef729c2-0cbe-4407-92a6-7416e5c90a6f">
<p>This shows you what happened as the VM booted up, including starting the various cocalc services, periodically reporting on status, etc.:</p>
<img width="1350" alt="image" src="https://github.com/sagemathinc/cocalc/assets/1276278/4ee146fb-e803-4590-8218-5da8dc659d29">
<p>The serial console is rendered using <a href="https://xtermjs.org/">xtermjs</a>, using whatever color scheme you have configured for your terminal (in account prefs).</p>
]]></description>
      <pubDate>Sat, 02 Mar 2024 01:25:47 GMT</pubDate>
      <guid>https://cocalc.com/news/39</guid>
    </item>
    <item>
      <title><![CDATA[Compute Server - Automatic Restart]]></title>
      <link>https://cocalc.com//news/compute-server-automatic-restart-38</link>
<description><![CDATA[<p>I just implemented a new feature for CoCalc <a href="https://doc.cocalc.com/compute_server.html">compute servers</a>.   A compute server in CoCalc is a remote computer whose resources (GPUs, CPUs, RAM, disks) you can utilize via CoCalc’s collaborative interface in Jupyter notebooks and terminals, providing hundreds of CPUs, thousands of GBs of RAM, full root privileges, the ability to run Docker containers, and much more.</p>
<p>You can now make it so a compute server will automatically restart whenever it stops responding (for about a minute) for any reason, including crashing due to running out of RAM or if it is a spot instance that is killed due to a surge in demand.  Just check &quot;Automatically Restart&quot; in the compute server configuration dialog:</p>
<img width="880" alt="image" src="https://github.com/sagemathinc/cocalc/assets/1276278/79649d1c-b634-4a08-9ee8-559876c688de">
<p>This is especially useful for the &quot;Spot&quot; provisioning type, since they are up to 91% cheaper, and tend to be killed randomly between 12 hours and 1 week from when you start them.  Spot compute servers with &quot;Automatic Restart&quot; enabled are ideal for hosting a powerful but affordable web service or a computation that you update periodically (e.g., using crontab), or checkpoint and can automatically resume.</p>
<img width="645" alt="image" src="https://github.com/sagemathinc/cocalc/assets/1276278/e6fbc592-f08a-47bd-a518-6a2c9669af22">
]]></description>
      <pubDate>Sat, 02 Mar 2024 01:24:49 GMT</pubDate>
      <guid>https://cocalc.com/news/38</guid>
    </item>
    <item>
      <title><![CDATA[Full RStudio Server Support for Compute Servers]]></title>
      <link>https://cocalc.com//news/full-rstudio-server-support-for-compute-servers-37</link>
      <description><![CDATA[<p>I've updated the R Compute server image so that it now fully supports RStudio Server:</p>
<img width="803" alt="image" src="https://github.com/sagemathinc/cocalc/assets/1276278/bb980f1a-f8a1-4e17-b51c-4bcef9ed6999">
<p>Just create a compute server (starting at about $0.01/hour) with the R image, click on the https link, paste in your random token, and you have the latest version of RStudio Server running, with full access to your files from your project and the full power of whatever compute server you're using.</p>
<img width="1721" alt="image" src="https://github.com/sagemathinc/cocalc/assets/1276278/5cea60c0-a68a-41c3-bbfa-92482bfe9d68">
<p>This does not have any support for realtime collaboration or AI integration, etc.   If you need all that, you can also just use a Jupyter notebook on the compute server:</p>
<img width="1060" alt="image" src="https://github.com/sagemathinc/cocalc/assets/1276278/a170e419-bb84-409c-b43b-053dc78dac47">
<p>This is not using X11 or anything like that. It's the normal open source RStudio Server web application.</p>
<p>For extra speed, select a compute server in a region that is geographically close to you.  When using RStudio, all communication is directly between you and the compute server, so it will potentially be much faster.</p>
]]></description>
      <pubDate>Tue, 20 Feb 2024 22:34:26 GMT</pubDate>
      <guid>https://cocalc.com/news/37</guid>
    </item>
    <item>
      <title><![CDATA[CoCalc was at AGU's Ocean Sciences Meeting]]></title>
      <link>https://cocalc.com//news/cocalc-was-at-agu-s-ocean-sciences-meeting-36</link>
      <description><![CDATA[<p>We were thrilled to participate in the American Geophysical Union's Ocean Sciences Meeting in beautiful New Orleans, Louisiana, from February 19th to 23rd, 2024.</p>
<p>We were in booth 411, where we interacted with distinguished attendees from both academic institutes and esteemed research organizations like NOAA and the U.S. Naval Research Laboratory.</p>
<p>As a company dedicated to providing high-end, competitive resources for research, we are excited to announce the release of our new Compute Server functionality. This development will greatly benefit those working with large-scale ocean current models, similar big data applications, or graphically intensive simulations.</p>
<p>We would be delighted to walk you through our updated system and help you discover the increased efficiency, enhanced accessibility, and more powerful resources that our new Compute Server functionality brings.</p>
<p>This was an excellent opportunity to explore the tools CoCalc provides and to see what we can bring to your research endeavors.</p>
<p>Sincerely, -Blaec Bejarano,
Chief Sales Officer,
CoCalc by SageMath, Inc.</p>
<p><img src="https://imgur.com/a/diwH8yE" alt=""></p>
]]></description>
      <pubDate>Mon, 19 Feb 2024 19:06:50 GMT</pubDate>
      <guid>https://cocalc.com/news/36</guid>
    </item>
    <item>
      <title><![CDATA[How to use Mojo on CoCalc via a Compute Server]]></title>
      <link>https://cocalc.com//news/how-to-use-mojo-on-cocalc-via-a-compute-server-35</link>
      <description><![CDATA[<p>You can easily install and use <a href="https://www.modular.com/max/mojo">Mojo</a> on <a href="https://cocalc.com">https://cocalc.com</a> as explained in <a href="https://github.com/sagemathinc/cocalc-howto/blob/main/mojo.md">our new tutorial</a>.</p>
<p>Mojo is a new programming language that &quot;combines the usability of Python with the performance of C, unlocking unparalleled programmability of AI hardware and extensibility of AI models.&quot;</p>
<p>In <a href="https://github.com/sagemathinc/cocalc-howto/blob/main/mojo.md">the tutorial</a>, you will create a compute server, install Mojo in two minutes, use the Mojo Jupyter kernel, edit and run .mojo files and call Python code.    Along the way you can try out any of the new features of Mojo and go through <a href="https://docs.modular.com/mojo/manual/">the Mojo intro manual</a> to find out what this new programming language really feels like.</p>
]]></description>
      <pubDate>Sat, 17 Feb 2024 21:08:56 GMT</pubDate>
      <guid>https://cocalc.com/news/35</guid>
    </item>
    <item>
      <title><![CDATA[AI-powered Formula Assistant]]></title>
      <link>https://cocalc.com//news/ai-powered-formula-assistant-34</link>
      <description><![CDATA[<p>Have you ever struggled typing in LaTeX formulas? Our new <strong>AI-powered formula assistant</strong> eliminates the hassle! Use this tool to transform the description of a formula into perfect LaTeX code in seconds.</p>
<p>Try it in our LaTeX or Markdown editor: Insert → AI Generated Formula (or &quot;AI Formula&quot; button)</p>
<p>The description text can range from simplified algebraic notation, through a mix of description and formula notation, up to just a few natural words describing the formula. Additionally, if some text in your document is selected, it will be used as the input and replaced when inserted.</p>
<p><img src="https://storage.googleapis.com/cocalc-extra/2024-02-09-ai-formula.png" alt=""></p>
<h3>Examples</h3>
<ul>
<li>
<p>text: <strong>product of k=1 to infinity of (N choose k) / (1+k)!</strong></p>
<p>is converted to <code>$$\prod_{k=1}^{\infty} \frac{{N \choose k}}{{1+k}!}$$</code></p>
<script type="math/tex; mode=display">\prod_{k=1}^{\infty} \frac{{N \choose k}}{{1+k}!}</script></li>
<li>
<p>text:  <strong>&quot;heisenberg uncertainty&quot;</strong></p>
<p>is interpreted correctly and you immediately get the formula – no need to fiddle around with uppercase deltas and so on <code>$$\Delta x \Delta p \ge \frac{h}{4\pi}$$</code></p>
<script type="math/tex; mode=display">\Delta x \Delta p \ge \frac{h}{4\pi}</script></li>
<li>
<p>text:  <strong>d/dx f(x) = d^2/dx g(x) + C</strong></p>
<p>transformed to <code>$$\frac{d}{dx} f(x) = \frac{d^2}{dx} g(x) + C $$</code></p>
<script type="math/tex; mode=display">\frac{d}{dx} f(x) = \frac{d^2}{dx} g(x) + C </script></li>
<li>
<p>text:  <strong>sqrt(x^2 + y^2) &lt; k logical-and x + y &gt; 0</strong></p>
<p>two inequalities with a logical and <code>$$\sqrt{x^2 + y^2} &lt; k \land x + y &gt; 0$$</code></p>
<script type="math/tex; mode=display">\sqrt{x^2 + y^2} < k \land x + y > 0</script></li>
</ul>
<p>Here are two short clips showing this in action:</p>
<p><video width="100%" autoplay muted controls loop src="https://storage.googleapis.com/cocalc-extra/2024.02.09-ai-formula-2.webm" type="video/webm"></video></p>
<p><video width="100%" autoplay muted controls loop src="https://storage.googleapis.com/cocalc-extra/2024.02.09-ai-formula.webm" type="video/webm"></video></p>
]]></description>
      <pubDate>Fri, 09 Feb 2024 15:45:29 GMT</pubDate>
      <guid>https://cocalc.com/news/34</guid>
    </item>
    <item>
      <title><![CDATA[New "Menu Toolbar"]]></title>
      <link>https://cocalc.com//news/new-menu-toolbar--33</link>
      <description><![CDATA[<p>CoCalc's user interface received a significant update. A new <strong>&quot;Menu Toolbar&quot;</strong> unifies how most editors present their functionality.</p>
<p><img src="https://storage.googleapis.com/cocalc-extra/2024-02-07-menu-toolbar.png" alt=""></p>
<p>A familiar-looking menu bar collects all available options for the edited document, the layout, and the underlying file. Below that menu bar, a newly designed toolbar exposes commonly used menu items for quick access.</p>
<p>You can adjust which buttons are shown by clicking on the icon in the menu entry. This toggles its visibility in the row of buttons. You can also completely disable the menu toolbar via the menu for a specific type of editor.</p>
<p>Stay tuned for upcoming updates and as usual, please report problems via our <a href="https://cocalc.com/support/new">help channel</a>.</p>
]]></description>
      <pubDate>Wed, 07 Feb 2024 14:27:34 GMT</pubDate>
      <guid>https://cocalc.com/news/33</guid>
    </item>
    <item>
      <title><![CDATA[Octave output fixes]]></title>
      <link>https://cocalc.com//news/octave-output-fixes-32</link>
      <description><![CDATA[<p>A new update to the Ubuntu 22.04 based software environment has been released.</p>
<p>On top of various updates to many system packages and libraries,
this update also tweaks <a href="https://github.com/sagemathinc/cocalc/issues/6695">how Octave output is processed</a>.
This should eliminate spurious extra characters
that were not properly processed by the kernel itself.</p>
<hr>
<p>As usual, if you encounter any issues, you can switch back to the &quot;Ubuntu 22.04 // Previous&quot; edition. This configuration is in Project Settings → Project Control → Software Environment.
Also, please contact us via support so that we can investigate the problem.
]]></description>
      <pubDate>Wed, 07 Feb 2024 14:22:44 GMT</pubDate>
      <guid>https://cocalc.com/news/32</guid>
    </item>
    <item>
      <title><![CDATA[Flyout: Active files]]></title>
      <link>https://cocalc.com//news/flyout-active-files-31</link>
<description><![CDATA[<p>This is a quick rundown of our newest addition to the flyout tabs. The <strong>Active</strong> panel aims to enhance and streamline how you work with multiple open files. Instead of scanning through the names of tabs, which might even be hidden, you see a vertical list of file names. This is easier to read, and there are three modes for organizing the tabs.</p>
<h2>Tabs View</h2>
<p>The first layout mode is called &quot;Tabs&quot; and shows the open files in the same order as the tabs across the top. You can use drag and drop to re-order them, use the arrow buttons to change the ordering, or even use the filter box to help you find a tab more quickly.</p>
<p><img src="https://storage.googleapis.com/cocalc-extra/flyout/flyout-active-tabs.png" alt=""></p>
<h2>Starring files</h2>
<p>Starred files, indicated by the filled star icon, are kept in the list even when the file is closed – this resembles how bookmarks work. Toggle the &quot;Starred&quot; button at the top left to show or hide closed bookmarked files. To star a file, just click on the empty star on the left of the file name. To unstar it, click the filled star of an opened file. <em>Note: unstarring</em> <u>closed</u> <em>files is not possible, because that could lead to accidental unstarring.</em></p>
<p>The main use case is to mark often used files to get back to them later. They can serve as a static anchor point in a larger project with many files!</p>
<h2>Folder View</h2>
<p>The second layout is <strong>Folder</strong>. Instead of showing all tabs in the &quot;natural&quot; ordering, this view groups them by directory and sorts the files by name. This gives the files a consistent place.</p>
<p>Since directories are logical groupings of files, this view naturally inherits these groups from the folder layout.</p>
<p><img src="https://storage.googleapis.com/cocalc-extra/flyout/flyout-active-folder-starred.png" alt=""></p>
<p>The &quot;Home&quot; directory has a special place at the top. Clicking the &quot;(X)&quot; close button of a directory closes all opened files in that folder.</p>
<p>By starring a directory, it is kept in the list of tabs at all times. This is helpful to quickly open that directory in the full page file explorer: just click on the folder and you jump there immediately.</p>
<h2>Typed View</h2>
<p>Finally, these opened and starred files can also be grouped by their type. So, if you work with a couple of Jupyter notebooks, they will appear at the top of that view at all times.</p>
<p>Here are two screenshots, showing all starred files vs. only grouping the opened ones:</p>
<p>showing starred files:</p>
<p><img src="https://storage.googleapis.com/cocalc-extra/flyout/flyout-active-type-starred.png" alt=""></p>
<p>no starring:</p>
<p><img src="https://storage.googleapis.com/cocalc-extra/flyout/flyout-active-type-unstarred.png" alt=""></p>
<p>Finally: remember that any time you open a file, the &quot;current directory&quot; in your project changes as well. This means you can see all files in that current directory by clicking on &quot;Explorer&quot;, or add files to the current directory via &quot;New&quot;.</p>
]]></description>
      <pubDate>Mon, 08 Jan 2024 15:46:08 GMT</pubDate>
      <guid>https://cocalc.com/news/31</guid>
    </item>
    <item>
      <title><![CDATA[Julia 1.10 and Sage 10.2]]></title>
      <link>https://cocalc.com//news/julia-1-10-and-sage-10-2-30</link>
      <description><![CDATA[<p><strong>Happy new 2024!</strong></p>
<p>Let's start the year with a rather large software update. The default Ubuntu 22.04 based environment has Sage 10.2 and Julia 1.10 as default. As usual, you can still access older versions by using the versioned command-line utility – e.g. <code>julia-1.9</code> – or switch the Jupyter kernel.</p>
<ul class="simple">
<li><p>(new) <a class="reference external" href="https://www.sagemath.org/">SageMath</a> version <code class="docutils literal notranslate"><span class="pre">10.2</span></code> is the default now: <a class="reference external" href="https://github.com/sagemath/sage/wiki/Sage-10.2-Release-Tour">10.2 release tour</a>. You can use the <code class="docutils literal notranslate"><span class="pre">sage_select</span></code> utility to switch what version to use in a <a class="reference internal" href="https://doc.cocalc.com/sagews.html#sage-worksheet"><span class="std std-ref">Sage Worksheets</span></a> or switch the kernel in a <a class="reference internal" href="https://doc.cocalc.com/jupyter.html#jupyter-notebook"><span class="std std-ref">Jupyter Notebook</span></a>.</p></li>
<li><p>(new) <a class="reference external" href="https://julialang.org/">Julia</a> <code class="docutils literal notranslate"><span class="pre">1.10</span></code>: <a class="reference external" href="https://docs.julialang.org/en/v1/NEWS/">1.10 release notes</a>: Use <code class="docutils literal notranslate"><span class="pre">julia-&lt;version&gt;</span></code> on the command line or a different kernel to switch between versions.</p></li>
<li><p>(upd) system-wide, there are a lot of updates, not only the Python 3 based environment, but also various Linux utilities. E.g. <a class="reference external" href="https://gcc.gnu.org/">gcc</a> 12 is available now.</p></li>
</ul>
<hr>
<p>As usual, the version available up until this update remains available under &quot;Ubuntu 22.04 // Previous&quot; in Project Settings → Control → Software Environment. Please let us know about major problems so that we can try to address them with the next update!</p>
]]></description>
      <pubDate>Tue, 02 Jan 2024 16:05:18 GMT</pubDate>
      <guid>https://cocalc.com/news/30</guid>
    </item>
    <item>
      <title><![CDATA[Sage 10.2 and Python 3 (Colab)]]></title>
      <link>https://cocalc.com//news/sage-10-2-and-python-3-colab--29</link>
<description><![CDATA[<p>A new software environment has been released.  It includes <strong><a href="https://github.com/sagemath/sage/wiki/Sage-10.2-Release-Tour">Sagemath 10.2</a></strong>. Note: <a href="https://github.com/sagemathinc/cocalc/issues/7103">3D graphics</a> are currently broken in Jupyter Notebooks; once this and related details are resolved, 10.2 will become the default Sage version. You can already use the <code>sage_select</code> command-line utility to switch the version of Sage in your project.</p>
<p>There is also a new Jupyter Kernel for Python3: <strong>Python 3 (Colab)</strong>, which is very similar to the Python environment at <a href="https://colab.research.google.com/">Google Colab</a>. This makes moving notebooks from that service to CoCalc very easy. Additionally, if you're interested in running it on a GPU, launch your own <a href="https://doc.cocalc.com/compute_server.html">Compute Server</a> and select the &quot;Colab&quot; image.</p>
<p>Apart from the above, many system-wide packages and software libraries across R, Julia, and Python 3 have been updated.</p>
<p>If you encounter any issues, please let us know. Use the &quot;22.04 // Previous&quot; image to switch back temporarily.</p>
]]></description>
      <pubDate>Mon, 11 Dec 2023 12:56:48 GMT</pubDate>
      <guid>https://cocalc.com/news/29</guid>
    </item>
    <item>
<title><![CDATA[Major Price Cuts: Deepnote Versus CoCalc]]></title>
      <link>https://cocalc.com//news/major-price-cuts-deepnote-versus-cocalc-28</link>
      <description><![CDATA[<p><a href="https://deepnote.com">Deepnote</a> is one of <a href="https://cocalc.com">CoCalc</a>'s direct competitors. Today (November 30, 2023) <a href="https://deepnote.com/changelog/2023-11-30#reduced-machine-prices">they announced a major price cut</a> on their pay-as-you-go rates:</p>
<blockquote>
<p><em>&quot;As you may have already heard, starting December 1, we're slashing the pay-as-you-go rates across all our machines – making them more budget-friendly without any hidden terms.&quot;</em></p>
</blockquote>
<br/>
<img width="550" alt="image" src="https://github.com/sagemathinc/cocalc/assets/1276278/de29afbd-794c-4431-b935-46db6fb2ae36">
<br/>
<p>At CoCalc, we recently <a href="https://github.com/sagemathinc/cocalc/discussions/7048">finally launched pay as you go machines</a>, which was one of our main development priorities for 2023. These are fully integrated with CoCalc, and were a huge amount of work to bring to market. I was terrified that Deepnote's major price cuts would make Deepnote a much better deal than CoCalc.</p>
<p>Here is how the Deepnote and CoCalc pricing compares:</p>
<table>
<thead>
<tr>
<th style="text-align:left"></th>
<th style="text-align:center">Deepnote's New Price</th>
<th style="text-align:center">CoCalc Standard</th>
<th style="text-align:center">CoCalc Spot</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align:left">64GB RAM, 16vCPU</td>
<td style="text-align:center">$1.54</td>
<td style="text-align:center">$0.59</td>
<td style="text-align:center">$0.12</td>
</tr>
<tr>
<td style="text-align:left">128GB RAM, 16vCPU (32 vCPU on CoCalc)</td>
<td style="text-align:center">$2.02</td>
<td style="text-align:center">$1.17</td>
<td style="text-align:center">$0.23</td>
</tr>
<tr>
<td style="text-align:left">K80 GPU (newer L4 GPU on CoCalc)</td>
<td style="text-align:center">$2.02</td>
<td style="text-align:center">$0.93</td>
<td style="text-align:center">$0.30</td>
</tr>
</tbody>
</table>
<p><strong>Conclusion: CoCalc's prices are still highly competitive, even in light of Deepnote's major price cuts.</strong></p>
<p>Also, spot instances <em>do work very well</em> for many applications.  For more details and <a href="https://doc.cocalc.com/compute_server.html">how to get these prices </a>at <a href="https://cocalc.com">https://cocalc.com</a>, read the rest of this post.</p>
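<p>To put the hourly rates in the table above in perspective, here is a minimal monthly-cost sketch in Python; the usage pattern (8 hours/day, 22 workdays) is my assumption for illustration, not part of the comparison:</p>
<pre><code class="language-python"># Hourly rates from the 64GB RAM / 16 vCPU row of the table above.
rates = {
    "Deepnote": 1.54,
    "CoCalc standard": 0.59,
    "CoCalc spot": 0.12,
}

hours_per_month = 8 * 22  # assumed: 8 hours/day, 22 workdays

for name, rate in rates.items():
    print(f"{name}: ${rate * hours_per_month:.2f}/month")
</code></pre>
<p>Even under this modest usage assumption, the spot rate works out to roughly an order of magnitude cheaper per month than Deepnote's new price.</p>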
<p><strong>CAVEAT:</strong> comparing RAM and vCPU specs across providers is not necessarily apples-to-apples. Maybe I'm completely wrong.</p>
<h2>More Details</h2>
<p>I don't know exactly what Deepnote means by the above machine specs. However, according to my benchmarks, one of the very best machines we offer via Google Cloud is the <a href="https://cloud.google.com/compute/docs/general-purpose-machines#t2d_machines">AMD EPYC Milan family</a>. Their single core performance is excellent, and <em>a vCPU is equivalent to an entire core</em>, which makes them up to twice as fast as a lot of &quot;vCPU&quot; options out there. We offer both spot instances and standard instances.</p>
<h3>Performance: 16 vCPU and 64GB RAM</h3>
<p>Our best pricing on an AMD EPYC with 64GB RAM and 16 cores is <strong>$0.59/hour for standard instances</strong>.</p>
<br/>
<img width="631" alt="image" src="https://github.com/sagemathinc/cocalc/assets/1276278/98fe8e64-ef08-42ef-8e1f-ea97fd98f507">
<br/>
<p>By selecting a region in Europe, the cost is only <strong>$0.12/hour for a spot instance.</strong> Spot instances may stop or not be available, but our stats so far show they often work well for days to weeks, perhaps because Google has built out such massive CPU capacity:</p>
<br/>
<img width="651" alt="image" src="https://github.com/sagemathinc/cocalc/assets/1276278/78439ac9-3d78-4972-9bf9-bc1db0008852">
<br/>
<p>In CoCalc the region where the machine is located is transparent, so you can take advantage of the best prices in the world.</p>
<h3>High Memory: 16 vCPU and 128GB RAM</h3>
<p>Our analogue of &quot;High memory&quot; above is a t2d-standard-32 with 32 cores and 128GB of RAM, and it costs <strong>$1.17/hour for a standard instance</strong>, or <strong>$0.23/hour for a spot instance</strong>.</p>
<br/>
<img width="665" alt="image" src="https://github.com/sagemathinc/cocalc/assets/1276278/9541e081-24f6-4296-884c-2c4a625c8569">
<br/>
<p>Again, the best price on spot instances is in a different region than for standard:</p>
<br/>
<img width="676" alt="image" src="https://github.com/sagemathinc/cocalc/assets/1276278/e7438f94-b4ce-4bd3-bf60-8e3c8fb67844">
<br/>
<h3>GPU</h3>
<p>Deepnote offers a K80 GPU for $1.80/hour. We do not offer K80's on CoCalc since they are so old, but we have L4's that have the same 24GB of RAM and are a much newer architecture. Our GPU price is <strong>$0.93/hour for standard</strong> instances, and <strong>$0.30/hour for spot instances</strong>:</p>
<br/>
<img width="585" alt="image" src="https://github.com/sagemathinc/cocalc/assets/1276278/20ad5deb-d694-47f7-88b9-fd4a24d355cf">
<br/>
<p><strong>Conclusion: CoCalc's new prices are still competitive.  Yeah.</strong></p>
<p><strong>Happy Holidays!</strong> 🎄</p>
]]></description>
      <pubDate>Thu, 30 Nov 2023 20:09:37 GMT</pubDate>
      <guid>https://cocalc.com/news/28</guid>
    </item>
    <item>
      <title><![CDATA[Use the Mathematica Jupyter Kernel]]></title>
      <link>https://cocalc.com//news/use-the-mathematica-jupyter-kernel-27</link>
      <description><![CDATA[<p>It is finally easy to run Mathematica Jupyter notebooks on <a href="https://cocalc.com">https://cocalc.com</a> via the free Wolfram Engine! You only have to pay for the compute resources you use, which start at about $0.02/hour.  For more details <a href="https://github.com/sagemathinc/cocalc-howto/blob/main/mathematica.md">see the guide</a>.</p>
<img width="1278" alt="image" src="https://github.com/sagemathinc/cocalc/assets/1276278/d62af273-8149-47f8-a2ea-d5e982efeec0">
]]></description>
      <pubDate>Fri, 24 Nov 2023 04:21:02 GMT</pubDate>
      <guid>https://cocalc.com/news/27</guid>
    </item>
    <item>
      <title><![CDATA[Run cocalc-docker on https://cocalc.com on extremely powerful VM's]]></title>
      <link>https://cocalc.com//news/run-cocalc-docker-on-https-cocalc-com-on-extremely-powerful-vm-s-26</link>
<description><![CDATA[<p>It is now possible to run your own instance of cocalc-docker directly on <a href="https://cocalc.com">https://cocalc.com</a>. This is a hosted way to use CoCalc's Jupyter notebooks, LaTeX, VS Code Server, JupyterLab, and much more. It has many advantages involving performance and privacy over just using <a href="https://cocalc.com">https://cocalc.com</a> directly:</p>
<ul>
<li>You can run the server geographically close to yourself, which makes it potentially much faster.</li>
<li>Your data is not backed up as part of the rest of CoCalc in any way, which may be important for use cases involving privacy or storing large amounts of data.</li>
<li>You can use massive amounts of compute resources and disk space, with high performance.</li>
<li>cocalc-docker fully supports all of CoCalc's own editors, in addition to JupyterLab and VS Code.</li>
<li>You can be root and install your own software.</li>
<li>You can run any Docker containers.</li>
<li>If something goes wrong, you can get hands-on support.</li>
<li>GPU support.</li>
</ul>
<p>See <a href="https://github.com/sagemathinc/cocalc-docker/blob/master/docs/cocalc.com.md">https://github.com/sagemathinc/cocalc-docker/blob/master/docs/cocalc.com.md</a> for a detailed step-by-step tutorial.</p>
]]></description>
      <pubDate>Wed, 22 Nov 2023 19:09:47 GMT</pubDate>
      <guid>https://cocalc.com/news/26</guid>
    </item>
    <item>
      <title><![CDATA[CoCalc has GPU's and powerful VM's]]></title>
      <link>https://cocalc.com//news/cocalc-has-gpu-s-and-powerful-vm-s-25</link>
      <description><![CDATA[<p>CoCalc now features robust compute servers, enabling users to connect a remote computer to CoCalc and utilize it for terminals and Jupyter notebooks. These compute servers open up possibilities for enhanced computing resources, extending far beyond the bounds of local machines. Users simply create a compute server in a project, select the software image and (optional) GPU they require, and can then start running any terminal or Jupyter notebook on this server for an on-demand fee, charged by the second when the server is in use.</p>
<p>The GPU support is extensive, offering variants including A100 80GB, A100 40GB, L4, and T4 GPUs with finely configured software stacks. These stack images include SageMath, Google Colab, Julia, PyTorch, TensorFlow, and the CUDA Toolkit, accommodating a versatile range of uses. The compute servers integrating these GPUs come at highly competitive pricing, particularly for spot instances. CoCalc's compute servers represent a massive enhancement over default projects, offering increased speed, flexibility, and computational power, transforming the way users can utilize CoCalc for their projects.</p>
<img width="903" alt="image" src="https://github.com/sagemathinc/cocalc/assets/1276278/02e86826-8127-4030-bb13-7611526763dd">
<p>To set up a compute server in CoCalc, log into your project, create a compute server through the &quot;Servers&quot; button, selecting your desired software image and optionally a GPU. To use the server, create a terminal file or a Jupyter notebook, move it to the server through the upper left menu, and remember to sync files for editing during computations.</p>
<p>Finally, here is a quick tutorial on how to get started with compute servers on CoCalc:</p>
<ol>
<li>Once logged in, navigate to your project where you intend to use the compute server.</li>
<li>Click on the &quot;Servers&quot; button on the left side of the screen and select &quot;Create Compute Server&quot;.</li>
<li>You will be prompted to select the desired software image and optionally a GPU.  A GPU is selected by default but you can disable it if you don't need one.  If you are going to write code using CUDA libraries, choose the &quot;Cuda Toolkit&quot; image. If you want to accelerate PyTorch computations with a GPU, choose the &quot;PyTorch&quot; image.  If you want to use SageMath, choose the Sage image.</li>
<li>Start your compute server.</li>
<li>If you want to use the Linux command line (e.g., compilers), create a terminal file (one ending in .term) and, using the upper-left menu, select your compute server. If you chose the 'Cuda Toolkit' image, then the 'nvcc' command will be available for compiling .cu code.</li>
<li>If you need to edit files during your computations on the compute server, remember to click the 'Sync' button at the top left of the terminal so that the files get copied to your compute server.</li>
<li>If you chose the &quot;PyTorch&quot; image or similar, create a Jupyter notebook and move it to the compute server via the upper-left menu.  You can then select a Jupyter kernel that's available on the compute server, and your Jupyter notebook will run there.</li>
</ol>
<p>Remember, compute servers are billed by the second only when they exist.</p>
<img width="811" alt="image" src="https://github.com/sagemathinc/cocalc/assets/1276278/969ab39d-caa4-4682-8552-146306b29ecd">
]]></description>
      <pubDate>Mon, 20 Nov 2023 04:30:51 GMT</pubDate>
      <guid>https://cocalc.com/news/25</guid>
    </item>
    <item>
<title><![CDATA[New Theme Panel -- don't like rounded corners? Want a much more compact overall design?]]></title>
      <link>https://cocalc.com//news/new-theme-panel-don-t-like-rounded-corners-want-a-much-more-compact-overall-design-24</link>
<description><![CDATA[<p>In <a href="https://cocalc.com/settings/account">https://cocalc.com/settings/account</a> there's a new theme panel:</p>
<img width="793" alt="image" src="https://github.com/sagemathinc/cocalc/assets/1276278/8518d9c3-a24a-4c57-b289-c9a082a4fa88">
]]></description>
      <pubDate>Tue, 17 Oct 2023 18:02:32 GMT</pubDate>
      <guid>https://cocalc.com/news/24</guid>
    </item>
    <item>
      <title><![CDATA[Sage 10.1]]></title>
      <link>https://cocalc.com//news/sage-10-1-23</link>
      <description><![CDATA[<p>The default &quot;Ubuntu 22.04&quot; software environment has just been updated. This includes <strong><a href="https://www.sagemath.org/">SageMath 10.1</a></strong> and makes it the default version of Sage in CoCalc. In your existing Jupyter Notebooks, you have to update the list of kernels (if necessary) and switch to the &quot;Sage 10.1&quot; kernel.</p>
<p>Check out the <a href="https://github.com/sagemath/sage/wiki/Sage-10.1-Release-Tour">release tour</a> to learn what's new. E.g. you can now instantiate the 27 dimensional exceptional <a href="https://en.wikipedia.org/wiki/Jordan_algebra">Jordan algebra</a>:</p>
<pre><code class="language-sage">O = OctonionAlgebra(GF(7), 1, 3, 4)
J = JordanAlgebra(O)
J
</code></pre>
<pre><code class="language-md">Exceptional Jordan algebra constructed from Octonion algebra over Finite Field of size 7 with parameters (1, 3, 4)
</code></pre>
<p>For more general information, visit the <a href="https://doc.sagemath.org/html/en/">SageMath documentation</a>.</p>
<hr>
<p>In other news, many tools and utilities have been updated, and as a new addition, <a href="https://bun.sh/">bun, a fast JavaScript runtime</a>, is available as well.</p>
]]></description>
      <pubDate>Mon, 11 Sep 2023 12:50:35 GMT</pubDate>
      <guid>https://cocalc.com/news/23</guid>
    </item>
    <item>
      <title><![CDATA[Concerns about GPT-4 Fees for Students]]></title>
      <link>https://cocalc.com//news/concerns-about-gpt-4-fees-for-students-22</link>
      <description><![CDATA[<p>CoCalc now provides GPT-4 on a pay-for-what-you-use basis, in addition to our free GPT-3.5 functionality.  As an instructor or student, you might have some questions about how this works!</p>
<blockquote>
<ol>
<li>I don't see a warning about fees when I select &quot;@GPT-4&quot; in the chat window. Does the platform remind users about the fee before the chat is sent?</li>
</ol>
</blockquote>
<p>Anybody can select GPT-4 (in chat and other places), but the first time you use it, there is a big confirmation dialog.  This lets you set a specific monthly spending limit (you can set anything you want), which is by default $0.    You can always adjust this limit later at <a href="https://cocalc.com/settings/purchases">https://cocalc.com/settings/purchases</a> under &quot;Self-Imposed Spending Limits&quot;, where you can also see the rates.</p>
<p>The dialog also lets you add credit to your account, in case you don't have any, and you can check on the status of that credit at <a href="https://cocalc.com/settings/purchases">https://cocalc.com/settings/purchases</a>.   After you explicitly set a limit and add credit, you don't get explicitly asked again every time you use GPT-4.  Also, on any day when you use GPT-4, <em>you'll receive an email statement at the end of the day listing how much you spent</em> (and this is easy to disable).</p>
<blockquote>
<ol start="2">
<li>If a student uses GPT-4 once, will CoCalc default to GPT-4 thereafter?</li>
</ol>
</blockquote>
<p>Currently no.  The default is always GPT-3.5.  That said, several people have been requesting a way to default to GPT-4, to save themselves a click, so we will very likely make that an option sometime in the near future.  But it will be possible to configure it either way.</p>
<blockquote>
<p>From the &quot;tokens&quot; pricing scheme on OpenAI's site, it is difficult for me to get a good approximation for how much GPT-4 use would cost my students. I recognize that there are too many unknowns for a specific dollar amount, but can you give me any information that would help estimate the cost per semester? A sense of scale ($1 vs. $10 vs. $100 per semester) would be helpful.</p>
</blockquote>
<p>Since all use is explicit and manual, e.g., via chat or clicking, in practice it's very difficult to use very much. My guess is that a typical student <strong>might use $10 for an entire semester's worth of use</strong>. A typical interaction is a few cents, so hundreds of interactions cost about $10. You'll quickly get a sense of your spending because it's listed in the daily statements mentioned above. For comparison, OpenAI charges $20/month for their GPT-4 chat site, and Microsoft charges $30/month for their CoPilot integration. CoCalc's model, where you pay for what you actually use, is more affordable.</p>
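<p>The &quot;few cents per interaction&quot; figure translates into a semester estimate like this; both numbers below are assumptions for illustration, not measured values:</p>
<pre><code class="language-python">cost_per_interaction = 0.03  # assumed: "a few cents" per GPT-4 interaction
interactions = 300           # assumed: a fairly heavy semester of use

print(f"Estimated semester cost: ${cost_per_interaction * interactions:.2f}")
</code></pre>
<p>which lands right around the $10-per-semester ballpark mentioned above.</p>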
<p>Note that GPT-3.5 is significantly faster (and completely free to users, though it costs me), and for some things it's pretty good, so people often use it just because the output appears so quickly.</p>
<p>Some other notes:</p>
<ul>
<li>
<p>In case you're worried, it's also possible to fully or partly disable ChatGPT for students in your class, e.g., during an exam. That's in course configuration.</p>
</li>
<li>
<p>We're planning to add other Large Language Models, e.g., Claude 2 from Anthropic, pretty soon.</p>
</li>
</ul>
]]></description>
      <pubDate>Mon, 21 Aug 2023 14:05:51 GMT</pubDate>
      <guid>https://cocalc.com/news/22</guid>
    </item>
    <item>
      <title><![CDATA[22.04 software update]]></title>
      <link>https://cocalc.com//news/22-04-software-update-21</link>
      <description><![CDATA[<p>The 22.04 line of software environments just received an update. If you encounter a problem, the previous one is accessible under &quot;Ubuntu 22.04 // Previous&quot; in Project Settings → Control → Software Environment. Please report any issues!</p>
<p>There are no major changes, just regular updates to many packages and binaries.</p>
]]></description>
      <pubDate>Mon, 21 Aug 2023 13:24:45 GMT</pubDate>
      <guid>https://cocalc.com/news/21</guid>
    </item>
    <item>
      <title><![CDATA[Cash Vouchers]]></title>
      <link>https://cocalc.com//news/cash-vouchers-20</link>
      <description><![CDATA[<p>CoCalc now has <strong>Cash Voucher Codes</strong>. These are single-use codes that you can purchase and make available to somebody else, who can then redeem them at <a href="https://cocalc.com/redeem">https://cocalc.com/redeem</a> for that amount of credit on their CoCalc account.  They can then buy anything in CoCalc using that credit, including <a href="https://cocalc.com/store/site-license">upgrade licenses</a>, <a href="https://cocalc.com/store/dedicated">dedicated VM's and disks</a>, <a href="https://doc.cocalc.com/paygo.html">pay-as-you-go project upgrades</a>, <a href="https://doc.cocalc.com/teaching-upgrade-course.html#students-pay-for-upgrades">student-pay course upgrades</a>, GPT-4 chat evaluation, more vouchers, etc.</p>
<p>To buy a cash voucher code, visit <a href="https://cocalc.com/store/vouchers">https://cocalc.com/store/vouchers</a> and click &quot;Add Cash Voucher&quot;.</p>
<img width="910" alt="image" src="https://github.com/sagemathinc/cocalc/assets/1276278/619837ec-f829-4e4d-9e2f-2874c21c3d82">
<p>then fill out the number and description, and customize the voucher codes:</p>
<img width="1022" alt="image" src="https://github.com/sagemathinc/cocalc/assets/1276278/4ac0db78-0d8c-498c-9042-0c290b49a96c">
<p>Then create your voucher codes:</p>
<img width="998" alt="image" src="https://github.com/sagemathinc/cocalc/assets/1276278/2f872c31-14bf-4793-a611-4733e2f05aa8">
<p>You get this:</p>
<img width="960" alt="image" src="https://github.com/sagemathinc/cocalc/assets/1276278/038d42f1-9d95-4c6d-b14f-1e1ed8c56624">
<p>Go to <a href="https://cocalc.com/redeem">https://cocalc.com/redeem</a> and redeem your own code... thus getting your money right back (as credit in your account)!</p>
<img width="474" alt="image" src="https://github.com/sagemathinc/cocalc/assets/1276278/8029c576-a92f-4883-a8cb-e7d2be2239b8">
<p>Note that this has no impact on my balance -- I just made a $5 voucher, which reduced my balance by $5, then I redeemed it, increasing my balance back to exactly where it was:</p>
<img width="1045" alt="image" src="https://github.com/sagemathinc/cocalc/assets/1276278/2957520b-0405-4511-bc49-7d330199ec20">
<p>I hope you find this useful.  E.g., if you're teaching a workshop and you want everybody to have an easy way to upgrade their projects for a few hours or use GPT-4 for more sophisticated AI help, you can just issue each participant a $2 voucher...</p>
]]></description>
      <pubDate>Mon, 07 Aug 2023 04:15:36 GMT</pubDate>
      <guid>https://cocalc.com/news/20</guid>
    </item>
    <item>
      <title><![CDATA[CoCalc's New Purchasing System]]></title>
      <link>https://cocalc.com//news/cocalc-s-new-purchasing-system-19</link>
<description><![CDATA[<p>CoCalc's new purchasing system is now live! Instead of directly buying licenses, you add credit to your account. You can then use that money <strong>in a massively more flexible way</strong>: to buy licenses, pay-as-you-go project upgrades (a new thing), GPT-4 usage (new), GPU's (coming soon), and more, with many more plans on the way. There's a log of exactly what you purchased, with daily and monthly statements, and as you make purchases your balance goes down.</p>
<p>Payments to add credit now work in your local currency anywhere in the world with a wide variety of local payment methods, instead of just credit cards!  You can also buy a subscription without enabling any form of automatic payments -- you just have to manually add credit to cover the subscription periodically.</p>
<p>Another massive improvement in our license system is that if you purchase a license and find that you need to increase or decrease the RAM, disk space, run limit (number of upgraded projects), or anything else at any time, you can just directly <em>edit the license</em> and your account will be debited or credited accordingly. If you need a license for only a week, or want to extend an existing license, you can also do that at any time using the balance in your account (you're charged the prorated difference).</p>
<p>I think this is much better than what we had before, and it's now fully live as you can see at</p>
<ul>
<li><a href="https://cocalc.com/settings/purchases">https://cocalc.com/settings/purchases</a></li>
<li><a href="https://cocalc.com/settings/subscriptions">https://cocalc.com/settings/subscriptions</a></li>
<li><a href="https://cocalc.com/settings/statements">https://cocalc.com/settings/statements</a></li>
</ul>
<p>and in the screenshots below.  These improvements to purchasing are the result of feedback from thousands of users over many years.</p>
<h2>Purchases:</h2>
<img width="1185" alt="image" src="https://github.com/sagemathinc/cocalc/assets/1276278/45ef10f7-f8df-48db-ad10-d1471bb7338e">
<h2>Statements:</h2>
<img width="1185" alt="image" src="https://github.com/sagemathinc/cocalc/assets/1276278/5523191a-46eb-4317-aa22-5f013c9c4ca0">
<h2>Edit a license</h2>
<img width="878" alt="image" src="https://github.com/sagemathinc/cocalc/assets/1276278/14996515-7135-4962-b8e8-3556f2bacc89">
<img width="614" alt="image" src="https://github.com/sagemathinc/cocalc/assets/1276278/5c049cd8-bee4-4129-91d9-b33d9e2d28d4">
<h2>Pay as you go project upgrade</h2>
<img width="954" alt="image" src="https://github.com/sagemathinc/cocalc/assets/1276278/4b30c488-5ed2-4800-a690-50bfe345c13c">
<h2>GPT-4</h2>
<img width="830" alt="image" src="https://github.com/sagemathinc/cocalc/assets/1276278/0919020b-6c6b-4918-97c7-7d5357fb9c72">
]]></description>
      <pubDate>Fri, 04 Aug 2023 03:52:29 GMT</pubDate>
      <guid>https://cocalc.com/news/19</guid>
    </item>
    <item>
      <title><![CDATA[22.04 software environment: Macaulay2]]></title>
      <link>https://cocalc.com//news/22-04-software-environment-macaulay2-18</link>
      <description><![CDATA[<p>The 22.04 line of software environments just got an update!</p>
<p>This update includes the <a href="http://www2.macaulay2.com/Macaulay2/">Macaulay2</a> Jupyter Kernel, which is a powerful tool for symbolic computation. Here is an <a href="https://cocalc.com/hsy/ubuntu-22.04-testing/macaulay2">m2 example notebook</a>.</p>
<p>In addition to the new kernel, many new and updated packages are now available. For example, the Python3 (system-wide) environment now includes:</p>
<ul>
<li><a href="https://gerrychain.readthedocs.io">GerryChain</a> - a library for using Markov Chain Monte Carlo methods to study the problem of political redistricting.</li>
<li><a href="https://github.com/quantumlib/Cirq">cirq</a> - a library for creating, editing, and invoking Noisy Intermediate Scale Quantum (NISQ) circuits.</li>
<li><a href="https://github.com/tequilahub/tequila">tequila</a> - a high-level abstraction framework for quantum algorithms.</li>
</ul>
<p>As usual, you can switch back to the previous 22.04 environment via Project Settings → Project Control → Software Environment: &quot;Ubuntu 22.04 // Previous&quot;. Please report any issues you encounter!</p>
]]></description>
      <pubDate>Wed, 28 Jun 2023 17:13:55 GMT</pubDate>
      <guid>https://cocalc.com/news/18</guid>
    </item>
    <item>
      <title><![CDATA[Update on Flyouts]]></title>
      <link>https://cocalc.com//news/update-on-flyouts-17</link>
<description><![CDATA[<p>We are delighted to announce the release of an update to our flyout side-panels. These represent a modernized and condensed variant of our existing full-size pages. Their aim is to reduce the cognitive load associated with frequent page toggling and navigation, and hence improve the speed and ease of using CoCalc.</p>
<p><img src="https://cocalc.com/share/raw/e1fb154c652f6005ae18391458cb558444a8d9bb/news/2023-06-28-flyout3.png" alt=""></p>
<h2>Files</h2>
<p>The files flyout hosts a compact file explorer that retains the familiar filter, sort, and hidden files functionalities. In addition, it introduces a terminal fully synchronized with your directory navigation. Any directory changes within the terminal are reflected in the interface and vice-versa. This synchronization can be disabled through a toggle button.</p>
<p>When one or more files are selected, file actions become available; these currently open the existing panels. This functionality will be further refined in future iterations. A double-click on a file opens an editor, echoing the usual behavior observed on traditional operating systems.</p>
<h2>Log</h2>
<p>A new feature is a button that lets you load the full history. As with files, color-coded borders on the left signify recent usage (in the past few hours, last day, or past week), which helps you quickly identify recently modified files, whether changed by yourself or your collaborators.</p>
<h2>Servers</h2>
<p>Our updated interface also encompasses a command button to restart the Sage Worksheet server.</p>
<h2>Users (formerly known as &quot;Collaborators&quot;)</h2>
<p>The revamped interface offers improved control over project access. Future updates will streamline rough edges associated with the invite token and sandbox project components.</p>
<h2>Upgrades</h2>
<p>This flyout panel provides a succinct overview of your project quotas, their current usage, and limits. Below the quota overview, licenses and upgrades can be configured.</p>
<h2>Settings</h2>
<p>This new settings panel comprises a neatly organized and expandable list of all remaining project settings. This simplified representation makes it easier to focus on just the setting you intend to modify.</p>
<h2>Future Developments</h2>
<p>While the current flyouts might not fully embody all the features from large configuration pages, we assure you that these will soon be incorporated. Several additional elements, including Git integration, a project-wide chat, an improved homepage, and various minor tweaks are also under consideration. We value and look forward to your feedback.</p>
]]></description>
      <pubDate>Wed, 28 Jun 2023 10:08:49 GMT</pubDate>
      <guid>https://cocalc.com/news/17</guid>
    </item>
    <item>
      <title><![CDATA[SageMath 10.0]]></title>
      <link>https://cocalc.com//news/sagemath-10-0-16</link>
      <description><![CDATA[<p>The <a href="https://www.sagemath.org/">SageMath</a> team has released Sage 10.0 and it is now available in the newest Ubuntu 22.04 line of software environments. Users who want to switch to this version should check their Jupyter Kernel menu and update their environment or restart their project if necessary. We recommend updating to this new version to benefit from its latest features and improvements. For more information on Sage 10.0, you can visit the <a href="https://github.com/sagemath/sage/wiki/Sage-10.0-Release-Tour">SageMath 10.0 Release Tour</a> page or <a href="https://cocalc.com/wstein/support/sage-10.0">interactively use the release tour examples on CoCalc</a>.</p>
<p><img src="https://cocalc.com/share/raw/5e575ce4b5f4664e8631a18da5090e70375e8b84/2023-06-07-sage-10.0.png" alt=""></p>
]]></description>
      <pubDate>Wed, 07 Jun 2023 14:45:37 GMT</pubDate>
      <guid>https://cocalc.com/news/16</guid>
    </item>
    <item>
      <title><![CDATA[Recent Updates to Notifications, File Tabs, Buttons, and Real-time AI Outputs]]></title>
      <link>https://cocalc.com//news/recent-updates-to-notifications-file-tabs-buttons-and-real-time-ai-outputs-15</link>
<description><![CDATA[<p><a href="http://Cocalc.com">CoCalc.com</a> has some exciting new features and updates to improve your online coding experience. For users tired of pesky notification badges, using @ChatGPT in chat will no longer increase the counter. Further enhancing the user interface, file tabs no longer resize upon closing, akin to Chrome's behavior, making it simpler to close multiple tabs.</p>
<p>The Save and TimeTravel buttons are improved for seamless file management. Moreover, all buttons will stay visible in non-focused panels – a feature that will benefit LaTeX users greatly.  In addition, the left flyout panel is now resizable, featuring a number of other subtle improvements. But what's probably most exciting is the new capability of ChatGPT to stream its output in all situations, whether in side chats or when generating new notebook cells. You can now watch the AI response unfold in real-time as it is being written, significantly enhancing interactivity on the platform.</p>
<img width="1719" alt="image" src="https://github.com/sagemathinc/cocalc/assets/1276278/8e718c8c-489a-4a42-800b-b69eaeb59b5d">
<p>These updates are set to make your coding experience smoother, more convenient, and enjoyable. Don't forget to refresh your browser!</p>
<ul>
<li>Using @ChatGPT in chat no longer results in a notification badge counter increase</li>
<li><a href="https://www.modular.com/mojo">Mojo</a> code editing support (syntax highlighting).</li>
<li>File tabs no longer resize when closing them, like in Chrome, so it's easier to close many tabs.</li>
<li>Improved Save and TimeTravel buttons</li>
<li>All buttons now stay visible in non-focused panels, which is especially nice with LaTeX:  <img width="1649" alt="image" src="https://github.com/sagemathinc/cocalc/assets/1276278/ce9dac66-5b9c-4df4-85c0-19b7c3f46060"></li>
<li>The left flyout panel is now resizable, and has many other subtle improvements.</li>
<li>ChatGPT streams its output in all cases now, both in side chats, and when generating new notebook cells, so you can see the response as it is written.  E.g., via the Home screen or +New --&gt; &quot;Generate Jupyter Notebook&quot;: <img width="566" alt="image" src="https://github.com/sagemathinc/cocalc/assets/1276278/8413da04-889b-4c38-9cc8-2f3735929cea"></li>
</ul>
]]></description>
      <pubDate>Tue, 06 Jun 2023 17:32:47 GMT</pubDate>
      <guid>https://cocalc.com/news/15</guid>
    </item>
    <item>
      <title><![CDATA[API Keys!]]></title>
      <link>https://cocalc.com//news/api-keys--14</link>
<description><![CDATA[<p>Yesterday and today I finished and made live a new API key implementation; e.g., this is now in account settings:</p>
<img width="805" alt="image" src="https://github.com/sagemathinc/cocalc/assets/1276278/2a37a82f-2fc2-4c42-8111-0e98853a6e6a">
<p>There is something similar at <a href="https://cocalc.com/config/account/api">https://cocalc.com/config/account/api</a> and ALSO in the settings page for all projects.</p>
<p>These new API keys have an expiration date and a name (which you can change at any time, and names can repeat); the secret key itself doesn't get stored in the database (which is much more secure); and there are project-specific API keys that only work for API calls to a specific project, rather than for everything.    I left the old API key functionality in place, with messages that people should delete those keys, though for now the old keys remain fully supported.</p>
<p>With the new API keys you can have up to 100 different keys active at once.  A key can be set to expire at any time, and when it does it is automatically deleted.  You can edit the expiration date and the name of a key at any time.  It's a much better model.  Behind the scenes we don't store the key in the database; instead, we just store a hash of it (the same SHA-512 with 1000 rounds and salt as for passwords), so we can confirm somebody knows their API key without having the key in the database; this is much more secure.  I also really like that I can make a key with a 1-day expiration, play around with it, and know it's not just going to be a ticking time bomb.</p>
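<p>For the curious, the hash-then-verify idea described above can be sketched roughly like this in Python. This is only an illustrative sketch, not CoCalc's actual implementation: the function names and salt handling here are made up, though the SHA-512-with-1000-rounds-and-salt scheme follows this post.</p>

```python
import hashlib
import os

ROUNDS = 1000  # iteration count mentioned in this post


def hash_api_key(key: str, salt: bytes) -> str:
    """Return a salted, iterated SHA-512 hash of an API key.

    Only this hash (plus the salt) would be stored in the database;
    the plaintext key is never stored.
    """
    digest = salt + key.encode()
    for _ in range(ROUNDS):
        digest = hashlib.sha512(digest).digest()
    return digest.hex()


def verify_api_key(key: str, salt: bytes, stored_hash: str) -> bool:
    # Recompute the hash from the presented key and compare.
    return hash_api_key(key, salt) == stored_hash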
<p>Read more about the API here: <a href="https://doc.cocalc.com/api/">https://doc.cocalc.com/api/</a> and <a href="https://doc.cocalc.com/api2/index.html">https://doc.cocalc.com/api2/index.html</a></p>
<p>The motivation for doing this is that project-specific API keys are needed for some new functionality we're implementing right now that will support connecting external computers to a CoCalc project to provide much more powerful compute.  Among other things, this will greatly expand the sort of compute we can offer to include GPU's and other vastly more powerful options, and also to support people plugging in their own compute resources.</p>
<img width="2559" alt="paste-0 4698690247357571" src="https://github.com/sagemathinc/cocalc/assets/1276278/222f66b8-b6ce-4704-b370-7d56f05417ab">
]]></description>
      <pubDate>Wed, 24 May 2023 02:59:39 GMT</pubDate>
      <guid>https://cocalc.com/news/14</guid>
    </item>
    <item>
      <title><![CDATA[Flyout Panels for Projects]]></title>
      <link>https://cocalc.com//news/flyout-panels-for-projects-13</link>
      <description><![CDATA[<p>Say hello to the new &quot;flyout panels&quot; - a side panel designed specifically for common aspects of CoCalc projects. Located right next to the vertical buttons on the left-hand side, this feature can be easily accessed by clicking on the &quot;▸&quot; icon. Once expanded, you'll have a compact representation of various project aspects at your fingertips.</p>
<p>With the flyout panel, you can now effortlessly explore files, conveniently check recently modified files, keep track of running processes, perform quick searches, and much more. This initial release is just the beginning, as we have plans to continuously enhance and refine the feature in the upcoming weeks.</p>
<p>Experience the efficiency and ease-of-use that the flyout panel brings to your projects on CoCalc. Try it out today and stay tuned for exciting updates in the near future!</p>
<table>
<thead>
<tr>
<th style="text-align:center">Explore files</th>
<th style="text-align:center">Running processes</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align:center"><img src="https://cocalc.com/share/raw/98ea113b327497d28c8209a4a04160f9575850d3/img/2023-05-23-flyout-files.png" alt=""></td>
<td style="text-align:center"><img src="https://cocalc.com/share/raw/eec06c8532c820cbb8c8909fdceddf3947d23595/img/2023-05-23-flyout-processes.png" alt=""></td>
</tr>
</tbody>
</table>
]]></description>
      <pubDate>Tue, 23 May 2023 08:32:34 GMT</pubDate>
      <guid>https://cocalc.com/news/13</guid>
    </item>
    <item>
      <title><![CDATA[SIAM Conference on Dynamical Systems 2023]]></title>
      <link>https://cocalc.com//news/siam-conference-on-dynamical-systems-2023-79</link>
      <description><![CDATA[<h4>Our Notes from SIAM Dynamical Systems 2023</h4>
<img src="https://cocalc.com/share/raw/980f13d3376aef50499364780d37d0bc801fce91/MARKETING/Conference%20Posts/Images/Portland.jpeg"   width="500px"  height="400px"  style="object-fit:cover"/>
<p>Our team was at the SIAM Conference on Dynamical Systems in Portland, and we wanted to share some of the research that really stood out to us. It was a great chance to see the kinds of complex problems people are tackling right now.</p>
<p>Here are a few highlights that got us thinking:</p>
<ul>
<li>
<p><strong>Alethea Barbaro’s work on collective motion</strong> was fascinating. She’s modeling how groups (like flocks or swarms) move when individuals have different sensing abilities. This is a huge challenge in multi-agent simulations. It immediately made us think about how you could use a tool like CoCalc to run a dozen parallel simulations, tweaking the sensing parameters in each to see how the group behavior changes without getting your local machine tangled up.</p>
</li>
<li>
<p><strong>Sunghwan &quot;Sunny&quot; Jung's presentation on animals and fluid instabilities</strong> was another great example of messy, real-world science. He explored how animals navigate and even use things like turbulence. Research like this is so interdisciplinary. You might need to run a fluid dynamics simulation, analyze observational data with Python, and build a mathematical model in a LaTeX paper, all for one project. It’s a perfect case for having all those different tools and files in one shared online space.</p>
</li>
<li>
<p>We also really enjoyed the talks that went back to fundamentals. <strong>Hugh Hunt’s discussion on the physics of pendulum clocks</strong> and <strong>Alexandria Volkening’s deep dive into stability analysis</strong> were great reminders of how powerful the core principles are. This is the kind of stuff you want to code up from scratch in a notebook to really get a feel for it—and it’s a perfect teaching example to share with a student who can then play with the code themselves.</p>
</li>
</ul>
<img src="https://cocalc.com/share/raw/f54b73ee018f096f86ebaeba72cef7020acdb7b0/MARKETING/Conference%20Posts/Images/Portland%202.jpeg"   width="500px"  height="400px"  style="object-fit:cover"/>
<p>Going to conferences like this is incredibly valuable for us. Seeing the actual computational hurdles you face is what helps us figure out how to make CoCalc a more useful and practical tool. A huge thank you to the SIAM organizers for putting on such a productive event.</p>
<p>All the best,</p>
<p>The CoCalc Team</p>
]]></description>
      <pubDate>Sat, 20 May 2023 22:27:29 GMT</pubDate>
      <guid>https://cocalc.com/news/79</guid>
    </item>
    <item>
      <title><![CDATA[Ubuntu 22.04 Default Software Environment]]></title>
      <link>https://cocalc.com//news/ubuntu-22-04-default-software-environment-9</link>
<description><![CDATA[<p>CoCalc's family of software environments has several lines. For the past few years, the default for a new project was &quot;Ubuntu 20.04&quot;, which received periodic updates. This changed today!</p>
<p>We're happy to announce that the <strong>Ubuntu 22.04</strong> line of images became the default for new projects. You can update your existing projects to use this new image, or switch back to 20.04 any time – that's in <a href="https://doc.cocalc.com/project-settings.html#software-environment">Project Settings → Control → Software Environment</a>.</p>
<p>Most notably, the system-wide Python 3 environment is much more recent, several Octave kernels are available, and many small changes make this a much better environment for modern scientific computing.</p>
<p>Updates for 20.04 will become less frequent and it will eventually be deprecated, just like the lines before it. However, we'll keep it around in case you depend on older software or on tools that did not make it into 22.04.</p>
<p>For more details you can study our <a href="https://cocalc.com/software">software environment inventory</a>.</p>
]]></description>
      <pubDate>Mon, 15 May 2023 15:52:02 GMT</pubDate>
      <guid>https://cocalc.com/news/9</guid>
    </item>
    <item>
      <title><![CDATA[Neural Search]]></title>
      <link>https://cocalc.com//news/neural-search-12</link>
      <description><![CDATA[<p><img src="https://user-images.githubusercontent.com/1276278/238100769-2a35de37-aba9-42ac-89bd-4ad70be98496.png" alt=""></p>
<p>&quot;Neural AI Search&quot; is now live in CoCalc.  The actual application right now is minimal compared to what it <em>could</em> be.  I just want to get the backend foundations in place, and make it so content starts getting indexed, before building a bunch of new frontend capabilities on top of this.  Right now the only thing you can do is open the Find page in a project, click &quot;Neural Search&quot; off to the right, and do a search in that directory.  It searches only Jupyter notebooks, task lists, chats, whiteboards, and slides that you have opened for at least 7.5 seconds after I made this live a few minutes ago.  It then updates the backend search index as you edit them.</p>
<p>The potential with this is extensive, and this is just a VERY tiny step.  E.g., the underlying functionality could work across many projects whether or not they are running, and of course it would also be extremely useful to search only within a specific file (like this chat). Also, this provides the foundation to make ChatGPT aware of content across your files and in relevant technical documentation (e.g., SageMath docs from now instead of 2021).</p>
<h2>Technical Architectural Remarks</h2>
<p>The basic thing seems to work fine, and the design I finally came up with (after numerous painful iterations this week!) uses git- and sync-like trickery to be, I think, very efficient and robust, at the expense of an <script type="math/tex">\varepsilon</script> chance of a wrong answer (which hardly matters for <em>search</em>).</p>
<p>In admin settings there is a new box:</p>
<p><img src="https://user-images.githubusercontent.com/1276278/238101040-9d9ea00d-ca57-45b7-9222-2ddd8926d8ff.png" alt=""></p>
<p>When this is &quot;no&quot;, everything is disabled, including any backend api's and frontend UI.  When set to &quot;yes&quot;, a person can put in the address and api key of a qdrant server, e.g., from <a href="https://cloud.qdrant.io/">https://cloud.qdrant.io/</a>  or run their own, and then they automatically get neural network search working.  This involves three tables:</p>
<ul>
<li>postgres: openai_embeddings_logs -- logs every time somebody calls the OpenAI embeddings API, and how much it costs.  It has an &quot;elaborate&quot; throttling strategy to ensure that we don't spend too much...</li>
<li>postgres: openai_embeddings_cache -- a cache of the <em>expensive</em>-to-compute map from text to a vector in <script type="math/tex">\mathbf{R}^{1536}</script> that comes from the OpenAI embeddings API. Entries in this cache expire after 6 weeks of not being touched.  That said, PostgreSQL seems to store vectors of doubles pretty compactly, and we aren't doing anything but using this as a key:value cache.</li>
<li>qdrant: cocalc -- a &quot;vector collection&quot; of embeddings and metadata</li>
</ul>
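<p>The openai_embeddings_cache table is essentially just a key:value cache with idle expiration. A minimal in-memory sketch of that idea (purely hypothetical names here; the real cache lives in PostgreSQL) looks like:</p>

```python
import time


class EmbeddingCache:
    """Toy key:value cache from text to an embedding vector.

    Entries expire after a fixed idle period (6 weeks in this post);
    an expired or missing entry is simply recomputed later via the
    embeddings API, so losing cache data is harmless.
    """

    def __init__(self, ttl_seconds=6 * 7 * 24 * 3600):
        self.ttl = ttl_seconds
        self._data = {}  # text -> (last_used_timestamp, vector)

    def get(self, text):
        entry = self._data.get(text)
        if entry is None:
            return None
        last_used, vector = entry
        if time.time() - last_used > self.ttl:
            del self._data[text]  # expired: caller recomputes and re-puts
            return None
        self._data[text] = (time.time(), vector)  # touch on access
        return vector

    def put(self, text, vector):
        self._data[text] = (time.time(), vector)
```

<p>A cache miss just costs another call to the embeddings API, which is exactly the loose coupling that keeps the overall design robust.</p>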
<p>Yes, this is all available in cocalc-docker.</p>
<p>The &quot;robust&quot; part of the design is that if you delete any data from any subset of the above tables, things will just keep humming along fine -- there's no dependence.  Delete some of the cache and we just pay more (and things are a little slower); delete some of the vector database, and you'll just get fewer search results.  This is very different from my original design, which tightly coupled qdrant and postgres in such a way that it was very easy for one to break the other.</p>
<p>The data model for qdrant uses a lot of techniques to ensure security and limited data access (similar to what we do with postgresql), which is fairly easy to do with qdrant, but NOT with more basic vector databases.  It also wouldn't have worked with qdrant back in Nov 2022, since they have improved a lot recently.</p>
<p>The final piece of this whole puzzle is that for <a href="http://cocalc.com">cocalc.com</a>, we run qdrant itself in our Kubernetes cluster, and have regular snapshots that we back up.   Qdrant's design is very much NOT a pig -- it's written in tight, memory-efficient Rust, and uses quantization to massively reduce the space used to store vectors, so I think it'll scale pretty well for us.</p>
<p>There's also the potential of providing this vector search capability via our API on a &quot;pay for what you use&quot; basis, and that could be of interest as its own product, since I developed a way to have a large number of independent, organized vector databases that are &quot;multi-tenant&quot;, so the cost per user is excellent.  It's something to explore for &quot;<a href="http://cocalc.ai">cocalc.ai</a>&quot;, since it could be useful to a lot of people.  It's actually already available (for free), just not documented.</p>
]]></description>
      <pubDate>Sat, 13 May 2023 08:18:41 GMT</pubDate>
      <guid>https://cocalc.com/news/12</guid>
    </item>
    <item>
      <title><![CDATA[🚀 A Busy Month for CoCalc: Unlocking the Power of Collaboration at 🌐 Startup Grind, APS April, Pydata, and JupyterCon!]]></title>
      <link>https://cocalc.com//news/-a-busy-month-for-cocalc-unlocking-the-power-of-collaboration-at-startup-grind-aps-april-pydata-and-jupytercon--11</link>
<description><![CDATA[<p>Hey folks! As the Chief Sales Officer for CoCalc, I am delighted to share my incredible experiences attending some of the most inspiring events across academia, industry, and government over the last month. It was a fantastic opportunity to participate in meaningful conversations and explore potential collaborations with the goal of breaking down silos and fostering innovation 🎉.</p>
<img width="169" alt="image" src="https://i.imgur.com/7DD435S.jpg">
<p>First, we kicked off the month at the <a href="https://about.cocalc.com/2023/05/07/startup-grind-global-conference-2023/">Startup Grind Global Conference 2023</a> in Redwood City, exploring the power of Big Data and building recession-resilient startups. I connected with tech innovators and enterprising entrepreneurs, energized by the spirit of collaboration and shared insights.</p>
<img width="169" alt="image" src="https://i.imgur.com/FELRtAI.jpg">
<p>Next, we ventured into the realm of high-energy physics at the <a href="https://about.cocalc.com/2023/05/11/american-physical-society-april-2023-meeting/">American Physical Society (APS) April 2023 Meeting</a>. We engaged in stimulating discussions on fusion experiments at Lawrence Livermore National Laboratory, diversity in STEM fields, and learned about the groundbreaking James Webb Space Telescope 🌌.</p>
<img width="169" alt="image" src="https://i.imgur.com/XpmjoJQ.jpg">
<p>Then, I immersed myself in the world of data science at <a href="https://about.cocalc.com/2023/05/11/pydata-seattle-2023/">PyData Seattle 2023</a> 💻. This exceptional event offered incredible talks and hands-on workshops covering a wide range of topics, from scaling Altair visualizations with VegaFusion to the open source quantum ecosystem.</p>
<img width="169" alt="image" src="https://i.imgur.com/5r8zfz5.jpg">
<p>Finally, I am currently attending <a href="https://about.cocalc.com/2023/05/11/jupytercon-paris-2023/">JupyterCon 2023</a> in Paris, where CoCalc is proudly represented thanks to our contributions to the Jupyter ecosystem 🌐. It is a wonderful opportunity to engage with potential hires, partners, and other professionals in the data science field while strengthening CoCalc's brand visibility.</p>
<p>A heartfelt thank you to my connections, new friends, and colleagues I met along the way. As we continue our journey, let's pave the path for an even brighter, more collaborative future! 🤝💡</p>
<p>Stay up to date with our events <a href="https://about.cocalc.com/where-to-find-us/">here</a> and keep the collaboration flowing!</p>
<p><em>Can't wait to see you all at the next events! Until then, stay curious and collaborative!</em></p>
]]></description>
      <pubDate>Fri, 12 May 2023 09:36:05 GMT</pubDate>
      <guid>https://cocalc.com/news/11</guid>
    </item>
    <item>
      <title><![CDATA[Unleashing the Power of CoCalc: A Summary of Exciting New Features! ]]></title>
      <link>https://cocalc.com//news/unleashing-the-power-of-cocalc-a-summary-of-exciting-new-features--10</link>
      <description><![CDATA[<p>We have been adding many exciting new features to CoCalc recently! Over the last few months, we have  introduced numerous enhancements and additions across various aspects of the site. Check out the highlights:</p>
<ol>
<li>💡 <a href="https://github.com/sagemathinc/cocalc/blob/master/docs/api/jupyter.md">Jupyter API and kernel pool:</a> There have been significant improvements to Jupyter notebooks, including the introduction of the Jupyter API and a new kernel pool, providing a more seamless and efficient experience to users.  You can also embed executable code anywhere in CoCalc where you use Markdown by just making a fenced code block.</li>
<li><a href="https://doc.cocalc.com/#first-steps-guide">📖 First Steps Guide:</a> A helpful First Steps Guide is now available to guide newcomers and make it easier for them to dive into CoCalc.  Click &quot;Start the first steps guide&quot; at the top of your cocalc project.</li>
<li>🌟 <strong>Tab completion</strong>: Tab completion for LaTeX, Python, JavaScript, and other languages has been introduced, making coding and document editing faster and more convenient.  Use the tab key when editing code or latex outside Jupyter.</li>
<li><a href="https://doc.cocalc.com/chatgpt.html">🔧 &quot;Help me fix this...&quot; for LaTeX and Sage Worksheets: </a>A new very popular feature using ChatGPT, which helps you quickly identify and fix errors in LaTeX documents and Sage Worksheets, improving productivity.  Whenever an error occurs, click a button, and get a context sensitive suggestion about how to fix it.</li>
<li>🔍 <strong>Better share server searching and sorting</strong>: It's now even easier to find and organize your work with improved search and sorting capabilities on the share server. <a href="https://cocalc.com/share/public_paths/page/1">Browse now!</a>  You can also run and edit code in any published notebook, directly from the share server, without having to sign in or make a copy.</li>
<li><a href="https://doc.cocalc.com/tasks.html">📝 Task lists:</a> Task lists have been upgraded to a frame editor, allowing you to view a single task list in multiple ways simultaneously. In particular, you can split your list horizontally or vertically and set separate search parameters for each frame.  You can also easily analyze any subset of tasks using ChatGPT.</li>
<li><a href="https://doc.cocalc.com/chatgpt.html">🤖 OpenAI ChatGPT integration:</a>  You can now integrate OpenAI ChatGPT with your Jupyter notebooks or Linux Terminal in CoCalc, opening up endless possibilities to leverage AI in your work.</li>
<li><a href="https://doc.cocalc.com/slides.html">🎙️ CoCalc Slides:</a> Create amazing presentations with CoCalc Slides, which supports Jupyter code and LaTeX math in your slides. These are based on <a href="https://doc.cocalc.com/whiteboard.html">the whiteboard </a>but with a presentation mode and specific sized slide.</li>
<li>⏰ <strong>Better display of points in time</strong>: Easily switch between relative and absolute time displays for improved clarity and understanding, anywhere that times are displayed in CoCalc.</li>
<li><a href="https://doc.cocalc.com/vouchers.html">🎛️ Vouchers: </a>Transferable codes for licenses can now be renewed later, giving more flexibility to users.  Did you read this far? If so, <a href="mailto:help@cocalc.com">send us a message</a> to try out a voucher for free!</li>
</ol>
<p>These developments make CoCalc an even more powerful and user-friendly platform for those seeking an all-in-one fully collaborative scientific computing environment! 🎉</p>
]]></description>
      <pubDate>Mon, 01 May 2023 22:00:40 GMT</pubDate>
      <guid>https://cocalc.com/news/10</guid>
    </item>
    <item>
      <title><![CDATA[First Steps Guide]]></title>
      <link>https://cocalc.com//news/first-steps-guide-8</link>
      <description><![CDATA[<p>UPDATE: this wasn't that popular, and we are rolling out <a href="https://github.com/sagemathinc/cocalc/discussions/6672">Guided Tours</a>, so this is now deprecated.</p>
<p>I just spent the morning bringing back the &quot;First steps guide&quot;.</p>
<img width="875" alt="image" src="https://user-images.githubusercontent.com/1276278/233722803-2039d542-7262-437a-b4ce-62faf00ccfd6.png">
<p>You probably have it off if you're reading this, but to see it check this box in account prefs:</p>
<img width="247" alt="image" src="https://user-images.githubusercontent.com/1276278/233722225-6d8b761c-44ac-416b-837a-1e8c4190ffca.png">
<p>It looks like this at the top of any project:</p>
<img width="498" alt="image" src="https://user-images.githubusercontent.com/1276278/233722197-46c9bbd3-292c-4ffe-bba3-c5f9adb7ece7.png">
<p>When clicked, it copies these files over:</p>
<p><a href="https://github.com/sagemathinc/cocalc/tree/master/src/smc_pyutil/smc_pyutil/templates/first-steps">https://github.com/sagemathinc/cocalc/tree/master/src/smc_pyutil/smc_pyutil/templates/first-steps</a></p>
<p>NOTE: If you haven't restarted your project then it copies files from the library instead, which are a lot older.</p>
<p>The content is still relatively dated and it'll get further updates soon.</p>
]]></description>
      <pubDate>Fri, 21 Apr 2023 19:50:20 GMT</pubDate>
      <guid>https://cocalc.com/news/8</guid>
    </item>
    <item>
      <title><![CDATA[Ubuntu 22.04 Updated]]></title>
      <link>https://cocalc.com//news/ubuntu-22-04-updated-7</link>
<description><![CDATA[<p>By default, projects run the &quot;Ubuntu 20.04&quot; line of software environments. Soon, the default for newly created projects will change to the <strong>Ubuntu 22.04</strong> line. It offers a similar software stack, but with many updates and newer versions. Today it received yet another update, and now, e.g., Octave 8.2 is available.</p>
<p>You can try 22.04 right now by going to Project Settings → Project Control → Software Environment and selecting &quot;Ubuntu 22.04 (Current)&quot;. You can switch back at any time as well.</p>
<p>As always, please <a href="mailto:help@cocalc.com">let us know</a> about issues you encounter.</p>
]]></description>
      <pubDate>Wed, 19 Apr 2023 11:06:20 GMT</pubDate>
      <guid>https://cocalc.com/news/7</guid>
    </item>
    <item>
      <title><![CDATA[LaTeX + ChatGPT "Help me fix this..." buttons into our LaTeX editor]]></title>
      <link>https://cocalc.com//news/latex-chatgpt-help-me-fix-this-buttons-into-our-latex-editor-6</link>
      <description><![CDATA[<p>Exciting update: Squashed LaTeX error log bugs and integrated ChatGPT &quot;Help me fix this...&quot; buttons into our LaTeX editor for real-time assistance! 💻🚀🙌</p>
<p>This was partly inspired by <a href="https://mathstodon.xyz/@tao/110172426733603359">Terry Tao's recent remark about ChatGPT</a>: &quot;Just being able to resolve &gt;90% of LaTeX compilation issues automatically would be wonderful...&quot;</p>
<img width="712" alt="image" src="https://user-images.githubusercontent.com/1276278/232986067-2886f513-0439-4396-9356-9d70967e64d4.png">
<p>When clicked, you get a chat like this:</p>
<img width="430" alt="image" src="https://user-images.githubusercontent.com/1276278/232986260-7398e9fc-0a7a-4017-977a-20683ef37e36.png">
<p>The &quot;Details&quot; contain the error, what you're doing (LaTeX), and a selection from the file to help ChatGPT better assist you.</p>
<p>Here's another example:</p>
<img width="1664" alt="image" src="https://user-images.githubusercontent.com/1276278/232987534-1a66a862-9fd6-40dc-be54-961332b4edb4.png">
<p>As a reminder, TimeTravel is another feature of CoCalc that helps in fixing errors -- if you had everything compiling 5 minutes ago, just zip back in time and see what you did to mess things up.</p>
<img width="1513" alt="image" src="https://user-images.githubusercontent.com/1276278/232986612-bba8526e-dfbe-4ada-bfb1-0711fea6b490.png">
]]></description>
      <pubDate>Wed, 19 Apr 2023 06:35:36 GMT</pubDate>
      <guid>https://cocalc.com/news/6</guid>
    </item>
    <item>
      <title><![CDATA[Jupyter kernel pool]]></title>
      <link>https://cocalc.com//news/jupyter-kernel-pool-5</link>
      <description><![CDATA[<p>I have just released a Jupyter kernel pool optimization for all projects and the use of Jupyter notebooks. I wrote and deleted this 3 or 4 times before getting something that works robustly (I hope). The final version should not break anything or negatively impact any functionality, as it falls back to not using a pool in every subtle case where something could go wrong (e.g., environment variable customization changes, etc.).</p>
<img width="169" alt="image" src="https://user-images.githubusercontent.com/1276278/232261594-f24853e4-fa57-4d40-ba8b-b9bb90a3f6fd.png">
<ul>
<li>To benefit from this optimization, you need to restart your project (or start a new one). Refreshing your browser will not make any difference.</li>
<li>The very first time you start your project and open a notebook, there will be no difference except for one extra kernel starting in the background (that's the pool). Moreover, a file ~/.config/cocalc-jupyter-pool will be created with the parameters of that kernel.</li>
<li>If you then open <em>another</em> notebook with the same kernel, running code should start MUCH more quickly. Additionally, &quot;restart and run all&quot; in a notebook should be significantly faster than before (e.g., less than a second instead of 10 seconds, say, for Sage!).</li>
<li>If your project stops and you start it up again later, your first use of a kernel should be MUCH faster, assuming you're using the same kernel as before and you haven't changed any custom environment variables. Under the hood, whatever was stored in ~/.config/cocalc-jupyter-pool will be started next time.</li>
</ul>
<p>That's it! One interesting thing I needed was <a href="https://github.com/sagemathinc/cocalc/blob/master/src/packages/util/jupyter-api/setenv-commands.ts">code to generate code</a> for each language that would set custom environment variables on the fly. I <a href="https://github.com/sagemathinc/cocalc/blob/master/src/packages/util/jupyter-api/chdir-commands.ts">had something</a> that would generate code to change directories, and I pointed GPT-3.5 at that code and said, &quot;rewrite this to instead generate code for custom environment variables...&quot; and it worked perfectly on the first try.</p>
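<p>The &quot;code that generates code&quot; idea for environment variables is simple enough to sketch. Something like the following (a hypothetical Python sketch covering only a few kernels; the real version is the TypeScript file linked above):</p>

```python
def setenv_code(lang: str, env: dict) -> str:
    """Generate a snippet, in the given kernel language, that sets
    custom environment variables at kernel startup (illustrative only)."""
    if lang == "python":
        lines = ["import os"] + [
            f"os.environ[{k!r}] = {v!r}" for k, v in env.items()
        ]
    elif lang == "r":
        lines = [f'Sys.setenv("{k}" = "{v}")' for k, v in env.items()]
    elif lang == "julia":
        lines = [f'ENV["{k}"] = "{v}"' for k, v in env.items()]
    else:
        raise ValueError(f"unsupported language: {lang}")
    return "\n".join(lines)
```

<p>The generated snippet is then run inside the freshly claimed pool kernel, so a generic pooled kernel can be customized after the fact instead of being started from scratch.</p>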
]]></description>
      <pubDate>Sun, 16 Apr 2023 01:45:35 GMT</pubDate>
      <guid>https://cocalc.com/news/5</guid>
    </item>
    <item>
      <title><![CDATA[CoCalc News]]></title>
      <link>https://cocalc.com//news/cocalc-news-3</link>
<description><![CDATA[<p>Today, CoCalc launched a new way of sharing news. It will help keep you updated on the latest developments and upcoming changes, so you can get the most out of CoCalc.</p>
<p>News items are available at <a href="https://cocalc.com/news">cocalc.com/news</a> and as notifications in the application itself. You can also grab the <a href="https://cocalc.com/news/rss.xml">RSS feed</a> to ingest it in a reader of your choice. In the future, we plan to send out newsletters as well.</p>
<p><img src="https://cocalc.com/share/raw/7ee91e9695a25ef36adf72419f97d34d3b984d6d/news/2023-04-14-news-announcement.png" alt=""></p>
]]></description>
      <pubDate>Fri, 14 Apr 2023 16:49:24 GMT</pubDate>
      <guid>https://cocalc.com/news/3</guid>
    </item>
    <item>
      <title><![CDATA[Share server searching and sorting]]></title>
      <link>https://cocalc.com//news/share-server-searching-and-sorting-4</link>
      <description><![CDATA[<p>William improved the share server <a href="https://cocalc.com/share">https://cocalc.com/share</a> so now you can search by path and description, and sort by when modified, stars, and views.</p>
<p><img src="https://user-images.githubusercontent.com/1276278/231672215-0babbc5f-c72a-4b77-8a6b-53bda669c9f1.png" alt=""></p>
]]></description>
      <pubDate>Thu, 13 Apr 2023 17:08:36 GMT</pubDate>
      <guid>https://cocalc.com/news/4</guid>
    </item>
    <item>
      <title><![CDATA["Fix this" button in Jupyter Notebooks]]></title>
      <link>https://cocalc.com//news/-fix-this-button-in-jupyter-notebooks-2</link>
      <description><![CDATA[<p>Have you ever been using a Jupyter notebook and got an error message?</p>
<p>You can now click a button and ChatGPT will automatically try to figure out how to fix the error and tell you in a chat next to your main document.</p>
<p><img src="https://cocalc.com/share/raw/84edec2c977ca7731638655a76c219d1678e3056/news/2023-04-14-jupyter-fixbug-button.png" alt=""></p>
<p>Then in the Chat on the side, ChatGPT replies:</p>
<p><img src="https://cocalc.com/share/raw/027a568a58fa7cf8b704c55ff1d2feba4d032adb/news/2023-04-14-jupyter-fixbug-chatgpt.png" alt=""></p>
]]></description>
      <pubDate>Sun, 26 Mar 2023 16:46:48 GMT</pubDate>
      <guid>https://cocalc.com/news/2</guid>
    </item>
    <item>
      <title><![CDATA["Explain" code in Jupyter Notebooks]]></title>
      <link>https://cocalc.com//news/-explain-code-in-jupyter-notebooks-1</link>
      <description><![CDATA[<p>Code cells in a Jupyter Notebook now have an &quot;Explain&quot; button in the upper right corner.</p>
<p>Click it to start a chat with ChatGPT and get a detailed explanation of what this code does. You can even ask follow-up questions.</p>
<p><img src="https://user-images.githubusercontent.com/1276278/227661462-e050cf0f-f23f-44b9-9688-4ee3255ba24b.png" alt=""></p>
]]></description>
      <pubDate>Sat, 25 Mar 2023 17:33:26 GMT</pubDate>
      <guid>https://cocalc.com/news/1</guid>
    </item>
  </channel>
</rss>