Mojo is a compiled programming language designed by Modular for AI developers to combine the usability of Python with the performance of systems languages like C++. Mojo is being developed to become a full superset of Python over time, allowing existing Python code to run directly within Mojo programs while also offering new features for high-performance computing. This enables developers to create faster, more portable code for CPUs, GPUs, and other accelerators.
How Mojo is a superset of Python
Mojo’s design philosophy is Python-first, aiming to meet Python developers where they are by embracing the Python syntax and ecosystem. The “superset” concept for Mojo is similar to how TypeScript is a superset of JavaScript, adding advanced capabilities for performance-critical scenarios.
Key features and differences include:
- Python compatibility: Mojo allows seamless interoperability with the Python ecosystem. You can import Python modules and libraries (like NumPy and TensorFlow) and use them in your Mojo code.
- Gradual performance adoption: Python developers can write code using familiar `def` functions for dynamic, Python-like behavior. When higher performance is needed, they can transition to Mojo’s native features.
- Systems programming capabilities: Mojo adds new language features that don’t exist in standard Python:
  - `fn` vs. `def`: Mojo uses the `fn` keyword for defining functions that use static types and are compiled for maximum performance. In contrast, `def` is used for functions with dynamic, Python-like semantics.
  - `struct` vs. `class`: The `struct` type in Mojo provides low-level control over memory layout for memory-efficient, predictable performance, unlike Python’s dynamic classes.
  - Memory safety: Mojo incorporates a borrow checker, influenced by Rust, to ensure memory safety and prevent common bugs like use-after-free, but with a simpler syntax.
  - Variable declaration: Mojo uses `var` for mutable and `let` for immutable variables, providing more control than Python’s single form of assignment.
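A short sketch can make the `fn`/`def`, `struct`, and interop points above concrete. This is illustrative Mojo, not canonical code: the language is still evolving, so details such as the constructor convention may differ between Mojo versions.

```mojo
# Illustrative Mojo sketch; syntax details may vary between releases.
from python import Python

struct Point:                 # fixed memory layout, unlike a Python class
    var x: Float64
    var y: Float64

    fn __init__(out self, x: Float64, y: Float64):
        self.x = x
        self.y = y

fn dot(a: Point, b: Point) -> Float64:
    # `fn` enforces static types and compiles to efficient machine code
    return a.x * b.x + a.y * b.y

def main():
    # `def` keeps dynamic, Python-like semantics, and Python modules
    # such as NumPy can be imported and used directly
    var np = Python.import_module("numpy")
    print(np.hypot(3, 4))
    print(dot(Point(3, 4), Point(3, 4)))
```

The point of the split is that a codebase can start entirely in `def` functions and migrate hot paths to `fn` and `struct` one at a time.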
Core components and goals
- Multi-Level Intermediate Representation (MLIR): Mojo is built on MLIR, a modern compiler infrastructure that enables it to target diverse and heterogeneous hardware, such as CPUs, GPUs, and specialized AI accelerators, with a single codebase.
- Performance: Mojo was explicitly designed to fix Python’s performance problems. For certain workloads, Mojo has demonstrated speed improvements of orders of magnitude over native Python. This is achieved by moving away from Python’s Global Interpreter Lock (GIL) and leveraging modern compilation and parallel processing techniques.
- AI and machine learning focus: Initially created to simplify the fragmented AI ecosystem, Mojo is particularly suited for high-performance AI and machine learning tasks. Its vision is to unify the tooling required for AI compute, replacing the need to juggle multiple languages like Python, C++, and CUDA.
- Future development: Mojo is still in active development, and not every Python feature is currently implemented. The goal is to continue expanding compatibility to encompass the full range of Python functionality over time.
Metaprogramming
Mojo’s metaprogramming features are integrated directly into the language, building upon Python’s familiar syntax to enable high-performance, compile-time code generation. The core of this system revolves around parameters, which are compile-time values, and the @parameter decorator.
How it works
- Parameters: Mojo functions and structs can be declared with `[]` brackets to indicate compile-time parameters. These are values known at compilation time that act as constants at runtime; the compiler evaluates all expressions involving them at compile time.
- Compile-time evaluation: The compiler can perform significant computation at compile time, including conditional branching. For example, an `if` statement can be evaluated by the compiler and, if its condition is false, the unreachable code is removed entirely, leading to zero runtime overhead.
- Compile-time looping: When a `for` loop is placed inside a function parameterized by a compile-time value, the loop can be unrolled by the compiler. This effectively generates a unique version of the function for each different parameter value.
- Decorator-driven generation: The `@parameter` decorator can force a block of code to be evaluated at compile time. This allows the compiler to generate code based on a compile-time constant, leading to optimizations such as function specialization.
- LLVM and MLIR: Under the hood, Mojo uses the MLIR compiler framework to handle its metaprogramming. MLIR can represent explicitly parametric code before instantiation, which leads to better error messages, faster compile times, and powerful hardware-specific optimizations.
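The parameter and unrolling mechanics above can be sketched in a few lines. This is an illustrative example, not canonical code; Mojo’s metaprogramming surface is still changing, so the exact syntax may differ by version.

```mojo
# Illustrative sketch of Mojo compile-time parameters.
fn repeat[count: Int](msg: String):
    # `count` is a compile-time parameter declared in square brackets.
    @parameter
    for i in range(count):        # the compiler can fully unroll this loop
        print(i, msg)

fn main():
    repeat[3]("hello")            # specializes `repeat` for count == 3
```

Because `count` is fixed at compilation, `repeat[3]` and `repeat[5]` are distinct, independently optimized functions with no loop overhead at runtime.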
Mojo metaprogramming vs. Circle
Mojo’s compile-time metaprogramming uses a declarative parameterization system, while Sean Baxter’s Circle C++ employs an imperative, same-language reflection model with a built-in interpreter. Both approaches aim to provide powerful, zero-cost abstractions that surpass the capabilities and ergonomics of traditional C++ template metaprogramming, but they arrive there from different philosophical standpoints.
Circle’s metaprogramming
See also: Circle
Sean Baxter’s Circle is an extension of C++17 that adds powerful imperative metaprogramming capabilities. Its system is built on three core pillars: an integrated interpreter, same-language reflection, and introspection keywords.
How it works
- Integrated interpreter: The Circle compiler contains a bit-accurate interpreter that can execute any function during compilation, not just `constexpr` functions. This allows for powerful compile-time computation and even side effects, such as printing to the console or reading from files during compilation.
- Same-language reflection: The `@meta` keyword can prefix any statement, causing it to be executed by the interpreter at compile time. Critically, any non-meta statements within a meta scope are “deposited” into the innermost enclosing runtime scope. This allows developers to use imperative control flow (`if`, `for`) to programmatically generate or inject code into the final program.
- Introspection keywords: Circle provides built-in introspection keywords (e.g., `@member_count`, `@member_name`) to query type information at compile time. This information can then be used with `@meta` statements to generate code based on the structure of existing types, avoiding convoluted C++ template patterns.
- Configuration-oriented programming: The ability to execute arbitrary code at compile time lets developers read external configuration sources like JSON files or Lua scripts and use that data to programmatically generate C++ code, separating configuration from code.
Similarities between Mojo and Circle
- Motivation: Both languages were created to address the complexity and limitations of C++ template metaprogramming. They offer a more direct, imperative, and easier-to-read approach to compile-time code generation compared to the recursion-based nature of C++ templates.
- Integrated execution: Both leverage a compile-time execution engine or interpreter to perform computations during compilation, enabling powerful optimizations and dynamic code generation.
- Compile-time control flow: Both provide mechanisms for compile-time branching (`if`) and looping (`for`), which allow for loop unrolling and the elimination of dead code.
- Improved developer experience: Both prioritize better ergonomics, with cleaner syntax and improved error messages compared to the cryptic errors often associated with C++ templates.
- Hardware optimization: The powerful metaprogramming capabilities in both languages are used to generate highly specialized code tailored for specific hardware, such as different CPU architectures, GPUs, and other accelerators.
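On the Mojo side, the shared idea of compile-time branching and dead-code elimination looks like the sketch below (in Circle, a `@meta if` plays the analogous role). This is an illustrative example; exact syntax may vary between Mojo versions.

```mojo
# Illustrative sketch of compile-time branching in Mojo.
fn describe[use_fast_path: Bool]():
    @parameter
    if use_fast_path:
        # only this branch survives compilation when use_fast_path is True
        print("specialized fast path")
    else:
        print("generic fallback")

fn main():
    describe[True]()   # the false branch is eliminated at compile time
```

Because the condition is a compile-time parameter, each instantiation contains exactly one branch and pays no runtime cost for the other.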
Differences between Mojo and Circle
| Aspect | Mojo Metaprogramming | Circle C++ Metaprogramming |
|---|---|---|
| Core paradigm | Declarative/Parametric: You declare parameters on functions and structs, and the compiler uses these for compile-time specialization and evaluation. | Imperative/Reflective: You use the @meta keyword to explicitly execute code in a compile-time interpreter, which deposits new code into the AST. |
| Compiler integration | Based on the MLIR compiler framework, which was designed from the ground up for this kind of parametric compilation. Mojo source is effectively syntactic sugar for MLIR. | A custom C++ compiler built by Sean Baxter with an integrated C++ interpreter. It extends C++17 and its metaprogramming uses standard C++ syntax where possible. |
| Language syntax | Blends Python-like syntax for readability with statically-typed features necessary for low-level systems programming. | Retains C++ syntax but extends it with new keywords like @meta and built-in introspection operators. |
| Extensibility | The MLIR foundation makes it easy to target various hardware backends. This is a core part of Modular’s mission for AI accelerators. | The ability to execute arbitrary code at compile time allows for powerful data-driven code generation, such as reading configuration from JSON or Lua files. |
| Use case | Aims to be a general-purpose language that specifically addresses the needs of high-performance AI and ML, combining Python’s ease of use with the performance of C++. | Aims to be a comprehensive successor to C++ that tackles problems in systems programming and generic libraries, often through a lens of separating high-level goals from low-level implementation. |