# Calysto/metakernel

Jupyter/IPython Kernel Tools
A Jupyter kernel base class in Python which includes core magic functions (including help, command and file path completion, parallel and distributed processing, downloads, and much more). See Jupyter's docs on wrapper kernels. Additional magics can be installed within the new kernel package under a `magics` subpackage.

## Features

- Basic set of line and cell magics for all kernels.
- Python magic for accessing the Python interpreter.
- Run kernels in parallel.
- Shell magics.
- Classroom management magics.
- Tab completion for magics and file paths.
- Help for magics using `?` or Shift+Tab.
- Plot magic for setting default plot behavior.

## Kernels based on Metakernel

- matlab_kernel
- octave_kernel
- calysto_scheme
- calysto_processing
- java9_kernel
- xonsh_kernel
- calysto_hy
- gnuplot_kernel
- spylon_kernel
- wolfram_kernel
- sas_kernel
- pysysh_kernel
- calysto_bash
- mit_scheme

...and many others.

## Installation

You can install Metakernel with `pip` or `conda`. To install with conda, first add the conda-forge channel to your configured channels; once the channel has been enabled, the `metakernel` package can be installed, and you can list all of the versions available on your platform.

## Use MetaKernel Magics in IPython

Although MetaKernel is a system for building new kernels, you can use a subset of its magics in the IPython kernel by registering them from your (or a system-wide) IPython configuration file.

## Use MetaKernel Languages in Parallel

To use a MetaKernel language in parallel, do the following:

1. Make sure that the `ipyparallel` Python module is installed.
2. Enable the extension in the notebook (from the shell).
3. Start up a cluster, with 10 nodes, on a local IP address (from the shell).
4. Initialize the code to use the 10 nodes, inside the notebook, from a host kernel (which can be any metakernel kernel), using the `%parallel` magic.
5. Run code in parallel inside the notebook: execute a single line in parallel with the `%px` line magic, or execute an entire cell in parallel with the `%%px` cell magic.

Results come back in a Python list (a Scheme vector, for Scheme kernels), in order. (This will be a JSON representation in the future.) You can get the results back from any of the parallel magics (`%px`, `%%px`, or `%pmap`) in the host kernel by accessing the variable `_` (single underscore), or by using the `--set_variable` flag to store them in a named variable, which you can then access in the next cell. Notice that you can use the `cluster_rank` variable to partition parts of a problem so that each node is working on something different. The parallel magics can also evaluate the code in the host kernel as well; note that `cluster_rank` is not defined on the host machine, and that this assumes the host kernel is the same as on the parallel machines.

## Configuration

MetaKernel subclasses can be configured by the user. The configuration file name is determined by a property of the subclass, and the user of the kernel can add a correspondingly named config file to their Jupyter config path. The base class already exposes configurable traits, and subclasses can define other traits that they wish to make configurable.

## Documentation

Example notebooks can be viewed online, and documentation is available online. Magics have interactive help (also documented online). For version information, see the Changelog.
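Inside the notebook, the parallel workflow might look like the following sketch. The kernel module and class names are illustrative (a Scheme-based kernel is assumed for the example code); `%parallel`, `%px`, and `%%px` are the parallel magics described above:

```
# initialize: host kernel uses the cluster nodes (module/class names illustrative)
%parallel calysto_scheme CalystoScheme

# execute a single line on every node
%px (+ 1 1)

# execute an entire cell on every node, storing the results in a named variable
%%px --set_variable results
(* cluster_rank cluster_rank)
```

After a parallel magic runs, the ordered list of per-node results is available in the host kernel as `_`, or under the name given to `--set_variable`.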
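The installation options described above can be sketched as shell commands. The PyPI package name is `metakernel`; the `conda-forge` channel is assumed as the conda source:

```shell
# Install from PyPI
pip install metakernel --upgrade

# Or install with conda: enable the conda-forge channel first (assumed channel)
conda config --add channels conda-forge
conda install metakernel

# List all versions available on your platform
conda search metakernel --channel conda-forge
```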
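To use MetaKernel magics inside the IPython kernel, a minimal configuration sketch is shown below, assuming metakernel's `register_ipython_magics` helper and the standard IPython startup-file location:

```python
# e.g. in ~/.ipython/profile_default/startup/00-metakernel.py
# (path is the standard IPython startup location; helper name assumed
# from metakernel's public API)
from metakernel import register_ipython_magics

register_ipython_magics()
```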
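The cluster setup steps for parallel use might look like the following shell sketch, assuming `ipyparallel` as the parallel backend (the IP address is illustrative, and the `nbextension` subcommand may differ across ipyparallel versions):

```shell
# parallel support module (assumed: ipyparallel)
pip install ipyparallel

# enable the notebook extension
ipcluster nbextension enable

# start a cluster with 10 nodes on a local IP address (address illustrative)
ipcluster start --n=10 --ip=192.168.1.108
```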
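The `cluster_rank` partitioning idea can be illustrated in plain Python. This is a sketch of the pattern (each node selects a distinct slice of the work based on its rank), not metakernel API:

```python
def partition(work, rank, n_nodes):
    """Return the slice of `work` that node `rank` of `n_nodes` handles."""
    # strided slicing gives each rank a disjoint subset of the items
    return work[rank::n_nodes]

work = list(range(10))

# with 10 nodes, the node whose cluster_rank is 3 gets exactly one item
assert partition(work, 3, 10) == [3]

# each node then computes only its own share, e.g. squaring its items
results = [x * x for x in partition(work, 3, 10)]
# → [9]
```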
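As a sketch of the configuration mechanism described above, a subclass can expose a traitlets-style configurable trait. The class and trait names below are hypothetical, not metakernel's actual names:

```python
from traitlets import Unicode
from metakernel import MetaKernel  # assumes metakernel is installed

class MyKernel(MetaKernel):
    implementation = "my_kernel"

    # hypothetical configurable trait; a user could then set it from the
    # kernel's config file on their Jupyter config path, e.g.:
    #   c.MyKernel.greeting = "hi there"
    greeting = Unicode("hello").tag(config=True)
```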