
CoCalc News

Recent news about CoCalc. You can also subscribe via the RSS Feed or JSON Feed.
gemini
gpt4
llm
2024-05-16

We released another round of large language model updates. You can now use GPT-4o (Omni) and Gemini 1.5 Flash. Both are not only very capable, but also extremely quick to reply.

Here is an example of how I improved a plot of a t-test using R in a Jupyter Notebook. The plot is a visual check to see whether the data really differs significantly. It looks a bit boring, though:

Via AI Tools → Improve, I can tell GPT-4o to make this a violin plot and more colorful.

I get a response and can review the changes in the side chat. The result looks like this:

Much better!

OK, but wait, what's a t-test? Here, I'm asking Gemini Flash to explain it to me; there was also something called "shapiro" (the Shapiro-Wilk normality test). To learn more, I opened a new chat and asked away. I told Gemini to also show me how to do this in R, which I can run directly in the chat.
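The post does all of this in R, but the workflow is easy to sketch. Here is a minimal, self-contained Python version (the data and group names are invented purely for illustration): Shapiro-Wilk to check normality, a two-sample t-test, and a violin plot as the visual check.

```python
# Minimal sketch in Python (the post uses R): synthetic two-group data,
# Shapiro-Wilk normality checks, a two-sample t-test, and a violin plot.
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(42)
a = rng.normal(loc=10.0, scale=2.0, size=50)   # group A (invented)
b = rng.normal(loc=11.5, scale=2.0, size=50)   # group B (invented)

# Shapiro-Wilk: a p-value above 0.05 is consistent with the normality
# assumption that the t-test relies on.
print("Shapiro A:", stats.shapiro(a).pvalue)
print("Shapiro B:", stats.shapiro(b).pvalue)

# Two-sample t-test: is the difference in means significant?
t, p = stats.ttest_ind(a, b)
print(f"t = {t:.2f}, p = {p:.4f}")

# Violin plot as the visual check.
sns.violinplot(data=[a, b])
plt.xticks([0, 1], ["A", "B"])
plt.show()
```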

video
2024-05-15

Get ready everyone! Over the past month, our very own William Stein has been creating an array of videos highlighting various aspects of CoCalc's compute server functionality! The series covers significant topics, from understanding memory usage to employing popular software images such as TensorFlow, Sage, and Lean on powerful GPU/CPU machines. William's comprehensive walkthroughs showcase CoCalc's brand-new capabilities for advanced mathematical research, machine learning, and data science!

Feel free to browse this curated playlist which houses these enlightening videos. Dive in and discover how to harness the full potential of CoCalc like never before! The power of CoCalc is at your fingertips - explore, learn, and elevate your experience! Browse the playlist.

r
software
2024-05-13

The project software environment has been updated. Version 2024-05-13 is the default now: it includes R 4.4 as the default R. Many packages were updated as well.

Note: if you had installed R packages locally in your project before, you have to re-install (re-compile) them for the new R version.

The default "R Statistics" compute server image now also includes R 4.4.

As usual, there are also a ton of upgrades for the Python 3 (system-wide) environment, and various underlying Linux packages.

If you run into problems, please let us know in support. You can also always switch back to the previous environment in Project Settings → Control → Software Environment and select "Ubuntu 22.04 // Previous".

If you are using GPUs on CoCalc, there's an entirely new cloud option you should check out: Hyperstack.

image

Once you select Hyperstack after starting to create a compute server, click the A100 tag and you'll see this:

image

Note that for $3.60/hour you get an 80GB A100, and these are all standard instances. You can also see that, at least right now, many are available. Everything else works very similarly to Google Cloud, except that:

  • Startup time is slower -- definitely expect about 5-10 minutes from when you click "Start" until you can use the compute server. However, startup is very likely to succeed, unlike Google Cloud GPUs (especially spot instances). Google Cloud is extremely good for CPU, but not as good for GPU.

  • Many of the server configurations have over 500GB of very fast local ephemeral disk, in case you need scratch space. It's ephemeral, so it goes away when you stop the server.

  • The local disk on the server should be as fast as or faster than Google Cloud's, but cheaper.

  • All network usage is free, whereas egress from Google Cloud is quite expensive.

  • There's a different range of GPUs, and availability fluctuates: sometimes there are a lot of H100s, but in the middle of the day on a Wednesday there aren't any; yesterday there were dozens of them.

  • By default only a Python (Anaconda) image and an Ollama image are visible, since they are small. When you select the Python image, you'll likely have to type conda install ... in a terminal to install the packages you need (see the sketch after this list). If you click the "Advanced" checkbox when selecting an image, you can select from the full range of images. However, the first startup of your server may be MUCH slower for big images (think "20-30 minutes" for the huge Colab image). Starting the server a second time is fast again.

image

  • Live disk enlarging does work, but only up to 25 times per server, due to Hyperstack's architecture.
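For example, after selecting the Python (Anaconda) image and installing PyTorch yourself (e.g. via conda install -y pytorch -- an assumption; any CUDA-enabled install works), a quick sanity check that the A100 is actually visible might look like this:

```python
# Minimal sanity check that the GPU is visible from Python.
# Assumes PyTorch was installed into the Anaconda environment first,
# e.g. with: conda install -y pytorch
import torch

print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print("Device:", props.name)                        # e.g. an A100
    print(f"VRAM: {props.total_memory / 1e9:.0f} GB")   # ~80 GB on an 80GB A100
```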

VIDEO: https://youtu.be/NkNx6tx3nu0

LINK: https://github.com/sagemathinc/cocalc-howto/blob/main/onprem.md

We add an on-prem compute server running on my MacBook Pro laptop to a CoCalc (https://cocalc.com) project, and use the compute server via a Jupyter notebook and a terminal. This involves creating an Ubuntu 22.04 virtual machine via multipass and pasting a line of code into the VM to connect it to CoCalc.

image

After using a compute server running on my laptop, I create another compute server running on Lambda cloud (https://lambdalabs.com/). This involves renting a powerful server with an H100 GPU, waiting a few minutes for it to boot, then pasting in a line of code. The compute server gets configured, starts up, and we confirm that the H100 is available. We then type "conda install -y pytorch" to install PyTorch, and use Claude 3 to run a demo involving the GPU and train a toy model.
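The demo code itself isn't reproduced here; as a rough, hedged sketch of the kind of toy training run Claude 3 produces (model, data, and hyperparameters all invented):

```python
# Illustrative toy-model training run on the GPU, in the spirit of the
# demo in the video -- not the exact code Claude 3 generated.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
print("Training on:", device)

# Tiny synthetic regression problem: learn y = 3x + 1 from noisy samples.
x = torch.linspace(-1, 1, 1024, device=device).unsqueeze(1)
y = 3 * x + 1 + 0.1 * torch.randn_like(x)

model = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1)).to(device)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for step in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()

print("Final loss:", loss.item())
```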

jupyter
vscode
2024-04-18

There are many ways to quickly launch Visual Studio Code (VS Code) on https://cocalc.com.

VIDEO: https://youtu.be/c7XHYBDTplw

Open a project on https://cocalc.com, then with one click in the file explorer, launch VS Code running in the project. You can then install and use a Jupyter notebook inside VS Code, edit Python code, and use a terminal.

When you need more power, add a compute server to your project. For example, in the video we demo adding a compute server with 128GB of RAM and the latest Google Cloud n4 machine type. It's a spot instance, which is great for a quick demo. We configure DNS and auto-restart, launch the compute server, and watch it boot via the serial console. Once the server is running, launch VS Code with one click, use a Jupyter notebook, edit Python code, and open a terminal to confirm that the underlying machine has 128GB of RAM.

You can also make a CoCalc terminal that runs on the compute server by clicking "+New --> Linux Terminal", then clicking the Server button and selecting your compute server.

This costs just a few cents, as you can confirm using the "Upgrades" tab (and scrolling down). When you're done, deprovision the server, unless you need to keep data that is only on the server.

CoCalc now makes it very easy to run a hosted JupyterLab instance in the cloud, either a lightweight instance on our shared cluster, or a high-powered instance on a dedicated compute server with a custom subdomain.

Check out https://github.com/sagemathinc/cocalc-howto/blob/main/jupyterlab.md or the video at https://youtu.be/LLtLFtD8qfo

ai
llm
python
2024-04-16

I saw a new announcement today about "Multibot chat on Poe": "Today we are adding an important new capability to Poe: multi-bot chat. This feature lets you easily chat with multiple models in a single thread. [...] Multi-bot chat is important because different models have different strengths and weaknesses. Some are optimized for specific tasks and others have unique knowledge. As you query a bot on Poe, you now can compare answers from recommended bots with one click, and summon any bot you prefer by @-mentioning the bot - all within the same conversation thread. This new ability lets you easily compare results from various bots and discover optimal combinations of models to use the best tool for each step in a workflow. [...] With Poe, you’re able to access all of the most powerful models, and millions of user-created bots built on top of them, all with a single $20/month subscription. "

Due to major recent work by Harald Schilly, https://CoCalc.com also has very similar functionality! Also, in CoCalc, you pay as you go for exactly the tokens you use with each model, and it typically costs our users far less than $20/month, with many of the models being free. Instead of paying $20/month, add $10 in credit to your CoCalc account (which never expires) and pay for exactly what you actually use.

image

Then ask a question, and follow up using DIFFERENT MODELS: you can regenerate the response with any model.

image

You can see all responses in the history:

image

The superpower of poe.com's LLMs is their integration with web search. The superpower of CoCalc.com's LLMs is their integration with computation (including high-powered HPC VMs, GPUs, Jupyter notebooks, LaTeX, R, etc.). For example, continuing our thread above:

image

But you can also generate code in Jupyter notebooks that run either in a lightweight shared environment or on high-powered dedicated compute servers:

Finally, you can always check and see exactly how much every interaction costs:

image

Try it out today!!!

We just added RStudio support to CoCalc projects (restart your project and refresh your browser if this doesn't work):

Run RStudio directly in your project

Open the "Servers" Tab to the left, then scroll down and click the "RStudio" button:

image

In a second, RStudio Server will appear in another tab:

image

Simple as that. You can also run JupyterLab and VS Code just as easily.

Run RStudio on a compute server

If you need vastly more compute power (e.g., 80 cores for only $0.47/hour!!!), scroll up a little and create a compute server:

image

then:

image

After that, create the compute server and when it starts up, click the https link. You may have to copy/paste a token to access the RStudio instance.

image

sagemath
software
2024-03-25

The project software environment has been updated. Version 2024-03-25 is now the default. It includes SageMath 10.3 as the default. As usual, you can still use older versions by switching to a different Jupyter kernel or by using the sage_select command-line utility to change what sage actually runs.

As usual, there are also a ton of upgrades for the Python 3 (system-wide) environment, R, and various underlying Linux packages.

If you run into problems, please let us know in support. You can also always switch back to the previous environment in Project Settings → Control → Software Environment and select "Ubuntu 22.04 // Previous".


Update:

2024-03-29: a small patch update has been released, which mainly fixes a pandas vs. openpyxl incompatibility when reading *.xlsx files.
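A quick way to verify the patch in your own project is to read any .xlsx file, which exercises exactly that pandas + openpyxl code path (the filename below is a placeholder):

```python
# Reading an .xlsx file goes through the pandas + openpyxl path
# that the 2024-03-29 patch fixes. "data.xlsx" is a placeholder.
import pandas as pd

df = pd.read_excel("data.xlsx", engine="openpyxl")
print(df.head())
```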