Implementing a kernel

In most cases, the base kernel implementation is enough, and creating a kernel only means implementing the interpreter part.

The structure of your project should at least look like the following:

└── example/
    ├── src/
    │   ├── custom_interpreter.cpp
    │   ├── custom_interpreter.hpp
    │   └── main.cpp
    ├── share/
    │   └── jupyter/
    │       └── kernels/
    │           └── my_kernel/
    │               └── kernel.json.in
    └── CMakeLists.txt

The xeus-cookiecutter project provides a template for a xeus-based kernel and already includes this base structure.

Implementing the interpreter

Let’s start by editing the custom_interpreter.hpp file; it should contain the declaration of your interpreter class:

/***************************************************************************
* Copyright (c) 2016, Johan Mabille, Sylvain Corlay, Martin Renou          *
* Copyright (c) 2016, QuantStack                                           *
*                                                                          *
* Distributed under the terms of the BSD 3-Clause License.                 *
*                                                                          *
* The full license is in the file LICENSE, distributed with this software. *
****************************************************************************/

#ifndef CUSTOM_INTERPRETER
#define CUSTOM_INTERPRETER

#include "xeus/xinterpreter.hpp"

#include "nlohmann/json.hpp"

using xeus::xinterpreter;

namespace nl = nlohmann;

namespace custom
{
    class custom_interpreter : public xinterpreter
    {
    public:

        custom_interpreter() = default;
        virtual ~custom_interpreter() = default;

    private:

        void configure_impl() override;

        void execute_request_impl(xrequest_context request_context,
                                  send_reply_callback cb,
                                  int execution_counter,
                                  const std::string& code,
                                  execute_request_config config,
                                  nl::json user_expressions) override;

        nl::json complete_request_impl(const std::string& code,
                                       int cursor_pos) override;

        nl::json inspect_request_impl(const std::string& code,
                                      int cursor_pos,
                                      int detail_level) override;

        nl::json is_complete_request_impl(const std::string& code) override;

        nl::json kernel_info_request_impl() override;

        void shutdown_request_impl() override;
    };
}

#endif

Note

Almost all custom_interpreter methods return an nl::json instance. nl::json is an alias for nlohmann::json, from the nlohmann json library, a modern C++ implementation of a JSON data structure.
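For example, building a mime bundle (the kind of value passed to publish_execution_result below) looks like the following; the keys and values here are purely illustrative:

nl::json bundle;
bundle["text/plain"] = "42";          // fallback representation
bundle["text/html"] = "<b>42</b>";    // richer representation, used by capable frontends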

In the following sections we will detail each of the methods that need to be implemented in order to have a functional kernel. You can either use the reply API, which creates specification-compliant replies to send to the client, or build the replies yourself.

Code Execution

You can implement all the methods described here in the custom_interpreter.cpp file. The main method is of course execute_request_impl, which executes the code whenever the client sends an execute request.

void custom_interpreter::execute_request_impl(xrequest_context request_context,
                                              send_reply_callback cb,
                                              int execution_counter,
                                              const std::string& code,
                                              execute_request_config config,
                                              nl::json user_expressions)
{
    // You can use the C-API of your target language for executing the code,
    // e.g. `PyRun_String` for the Python C-API
    //      `luaL_dostring` for the Lua C-API

    // Use this method for publishing the execution result to the client.
    // It takes the request context as first argument, the ``execution_counter``
    // as second argument, the data to publish (mime type data) as third
    // argument, and metadata as fourth argument.
    // Replace "Hello World !!" with what you want to be displayed under the execution cell
    nl::json pub_data;
    pub_data["text/plain"] = "Hello World !!";
    publish_execution_result(request_context, execution_counter, std::move(pub_data), nl::json::object());

    // You can also use this method for publishing errors to the client, if the code
    // failed to execute:
    // publish_execution_error(request_context, error_name, error_value, error_traceback);
    publish_execution_error(request_context, "TypeError", "123", {"!@#$", "*(*"});

    // Call the callback parameter to send the reply
    cb(xeus::create_successful_reply());
}

The result and arguments of the execution request are described in the execute_request documentation.

Note

The other methods are all optional, but we encourage you to implement them in order to have a fully-featured kernel.

Within this method, the create_error_reply and create_successful_reply helpers might be useful.
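For instance, a sketch of the error path; the argument order of create_error_reply is reconstructed from the Error reply section below, so check xeus/xhelper.hpp for the authoritative declaration:

// Hypothetical failure while executing the code
publish_execution_error(request_context, "NameError",
                        "name 'foo' is not defined",
                        {"Traceback (most recent call last):",
                         "NameError: name 'foo' is not defined"});
// Argument order (evalue, ename, trace_back) follows the "Error reply"
// section below
cb(xeus::create_error_reply("name 'foo' is not defined",
                            "NameError",
                            {"NameError: name 'foo' is not defined"}));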

Input request

For input request support, you need to monkey-patch the language functions that prompt for user input (input and raw_input in Python, io.read in Lua, etc.) and call xeus::blocking_input_request instead. The first parameter is the request context forwarded from your execute_request_impl implementation. The third parameter should be set to false if what the user types should not be visible on the screen.

#include "xeus/xinput.hpp"

xeus::blocking_input_request(request_context, "User name:", true);
xeus::blocking_input_request(request_context, "Password:", false);

Configuration

The configure_impl method allows you to perform some operations after the custom_interpreter creation and before executing any request. This is optional but can be useful; for example, xeus-python uses it to initialize the auto-completion engine.

void custom_interpreter::configure_impl()
{
    // Perform some operations
}

Code Completion

The complete_request_impl method allows you to implement the auto-completion logic for your kernel.

nl::json custom_interpreter::complete_request_impl(const std::string& code,
                                                   int cursor_pos)
{
    // Code starting with 'H' could have the following completions,
    // replacing the text from position 0 to the cursor
    if (!code.empty() && code[0] == 'H')
    {
        return xeus::create_complete_reply({"Hello", "Hey", "Howdy"}, 0, cursor_pos);
    }
    // No completion result
    else
    {
        return xeus::create_complete_reply({}, cursor_pos, cursor_pos);
    }
}

The result and arguments of the completion request are described in the complete_request documentation.

Code Inspection

This request allows the user to inspect a variable, class, or type appearing in the code. It takes the code and the cursor position as arguments; it is up to the kernel author to extract the token at the given cursor position in order to know which name the user wants to inspect.

nl::json custom_interpreter::inspect_request_impl(const std::string& code,
                                                  int /*cursor_pos*/,
                                                  int /*detail_level*/)
{
    if (code.compare("print") == 0)
    {
        // Note the double braces: the data argument is a mime-type -> content mapping
        return xeus::create_inspect_reply(true,
                                          {{"text/plain", "Print objects to the text stream file, [...]"}});
    }
    }
    else
    {
        return xeus::create_inspect_reply();
    }
}

The result and arguments of the inspection request are described in the inspect_request documentation; the create_inspect_reply helper might be useful to build a reply that matches the specification.

Code Completeness

This request is not sent by the Notebook or JupyterLab clients, but it is sent by the Jupyter console client. It allows the client to know whether the user has finished typing their code before sending an execute request. For example, in Python, the following code is not considered complete:

def foo():

So the kernel should return “incomplete” with an indentation hint of four spaces for the next line.

The following code is considered complete:

def foo():
    print("bar")

So the kernel should return “complete”.

nl::json custom_interpreter::is_complete_request_impl(const std::string& /*code*/)
{
    return xeus::create_is_complete_reply("complete");
}

The result and arguments of the completeness request are described in the is_complete_request documentation. Both the create_default_complete_reply and create_is_complete_reply helpers are recommended.
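In a real kernel you would inspect the code itself. A purely illustrative sketch, assuming create_is_complete_reply accepts the status and the indent hint described in the Is complete reply section below:

nl::json custom_interpreter::is_complete_request_impl(const std::string& code)
{
    // Toy heuristic: treat code ending with a colon as an unfinished block
    if (!code.empty() && code.back() == ':')
    {
        return xeus::create_is_complete_reply("incomplete", "    ");
    }
    return xeus::create_is_complete_reply("complete");
}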

Kernel info

This request allows the client to get information about the kernel: language, language version, kernel version, etc.

nl::json custom_interpreter::kernel_info_request_impl()
{
    return xeus::create_info_reply("",
                                   "my_kernel",
                                   "0.1.0",
                                   "python",
                                   "3.7",
                                   "text/x-python",
                                   ".py");

The result and arguments of the kernel info request are described in the kernel_info_request documentation. The create_info_reply method makes it easy to provide complete information about your kernel.

Kernel shutdown

This allows you to perform some operations before shutting down the kernel.

void custom_interpreter::shutdown_request_impl()
{
    // Requires #include <iostream> for std::cout
    std::cout << "Bye!!" << std::endl;
}

Kernel replies

Error reply

Creates a default error reply or lets you customize it. A sketch of the signature, reconstructed from the parameter description below (check xeus/xhelper.hpp for the authoritative declaration):
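// Reconstructed sketch -- argument order and defaults are assumptions
nl::json create_error_reply(const std::string& evalue = std::string(),
                            const std::string& ename = std::string(),
                            const nl::json& trace_back = nl::json::array());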

Where evalue is the exception value, ename is the exception name, and trace_back is a vector of strings holding the exception stack.

Successful reply

Creates a default success reply or lets you customize it. A sketch of the signature (the parameter set and defaults are assumptions; check xeus/xhelper.hpp for the authoritative declaration):
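// Reconstructed sketch -- the data mime bundle described below is typically
// delivered via publish_execution_result rather than the reply itself
nl::json create_successful_reply(const nl::json& payload = nl::json::array(),
                                 const nl::json& user_expressions = nl::json::object());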

Where payload is a way to trigger frontend actions from the kernel (payloads are deprecated, but since there is still no replacement for them, you might need to use them); you can find more information about the different kinds of payloads in the official documentation. data is a dictionary whose keys are MIME types (the type of data to be shown; it must be a valid MIME type, and while MDN provides a list of possibilities, note that you are not limited to these types) and whose values are the content to be displayed in the frontend. Finally, user_expressions is a dictionary mapping names to arbitrary code strings; more information about it can be found in the official documentation.

Complete reply

Creates a custom completion reply. A sketch of the signature, reconstructed from the parameter description below (check xeus/xhelper.hpp for the authoritative declaration):
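// Reconstructed sketch -- the metadata default is an assumption
nl::json create_complete_reply(const nl::json& matches,
                               int cursor_start,
                               int cursor_end,
                               const nl::json& metadata = nl::json::object());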

Where matches is the list of all matches to the completion request; it is a mandatory argument. cursor_start and cursor_end mark the range of text that should be replaced by the above matches when a completion is accepted; typically cursor_end is the same as cursor_pos in the request, and both arguments are mandatory. metadata is a dictionary of strings containing information that frontend plugins might use to display extra information about completions.

If you do not wish to implement completion in your kernel, instead of creating a complete reply you can use create_successful_reply with its default arguments.

Is complete reply

Creates a default is_complete reply or lets you customize it. A sketch of the signature, reconstructed from the parameter description below (check xeus/xhelper.hpp for the authoritative declaration):
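// Reconstructed sketch -- the defaults are assumptions
nl::json create_is_complete_reply(const std::string& status = "complete",
                                  const std::string& indent = "");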

Where status is one of “complete”, “incomplete”, “invalid”, or “unknown”. If status is “incomplete”, indent should contain the characters to use to indent the next line; this is only a hint, and frontends may ignore it and use their own auto-indentation rules. For other statuses, this field is omitted.

Create info reply

Thorough information about the fields of the kernel info reply can be found in the Jupyter kernel docs.

Implementing the main entry

Now let’s edit the main.cpp file, which is the entry point of the kernel executable.

/***************************************************************************
* Copyright (c) 2016, Johan Mabille, Sylvain Corlay, Martin Renou          *
* Copyright (c) 2016, QuantStack                                           *
*                                                                          *
* Distributed under the terms of the BSD 3-Clause License.                 *
*                                                                          *
* The full license is in the file LICENSE, distributed with this software. *
****************************************************************************/

#include <memory>

#include "xeus/xkernel.hpp"
#include "xeus/xkernel_configuration.hpp"
#include "xeus-zmq/xserver_zmq.hpp"

#include "custom_interpreter.hpp"

int main(int argc, char* argv[])
{
    // Load the configuration file; the kernel is launched as
    // `my_kernel -f {connection_file}`, hence argv[2]
    std::string file_name = (argc == 1) ? "connection.json" : argv[2];
    xeus::xconfiguration config = xeus::load_configuration(file_name);

    auto context = xeus::make_context<zmq::context_t>();

    // Create interpreter instance
    using interpreter_ptr = std::unique_ptr<custom::custom_interpreter>;
    interpreter_ptr interpreter = interpreter_ptr(new custom::custom_interpreter());

    // Create kernel instance and start it
    xeus::xkernel kernel(config, xeus::get_user_name(), std::move(context), std::move(interpreter), xeus::make_xserver_zmq);
    kernel.start();

    return 0;
}

Kernel file

The kernel.json file is a JSON file used by Jupyter to discover the available kernels.

It must be installed in the INSTALL_PREFIX/share/jupyter/kernels/my_kernel directory; we will see how to do that in the next chapter.

This json file contains:

  • display_name: the name that the Jupyter client should display in its interface (e.g. on the main JupyterLab page).

  • argv: the command that the Jupyter client runs in order to start the kernel. You should leave this value unchanged unless you know what you are doing.

  • language: the target language of your kernel.

You can edit the kernel.json.in file as follows. This file will be used by CMake to generate the actual kernel.json file, which will be installed.

{
    "display_name": "my_kernel",
    "argv": [
        "@CMAKE_INSTALL_PREFIX@/@CMAKE_INSTALL_BINDIR@/@EXECUTABLE_NAME@",
        "-f",
        "{connection_file}"
    ],
    "language": "python"
}

Note

You can provide logos that will be used by the Jupyter client. They should be in files named logo-32x32.png and logo-64x64.png (32x32 and 64x64 being the size of the image in pixels), placed next to the kernel.json.in file.

Compiling and installing the kernel

Your CMakeLists.txt file should look like the following:

############################################################################
# Copyright (c) 2016, Johan Mabille, Sylvain Corlay, Martin Renou          #
# Copyright (c) 2016, QuantStack                                           #
#                                                                          #
# Distributed under the terms of the BSD 3-Clause License.                 #
#                                                                          #
# The full license is in the file LICENSE, distributed with this software. #
############################################################################

cmake_minimum_required(VERSION 3.4.3)
project(my_kernel)

set(EXECUTABLE_NAME my_kernel)

# Configuration
# =============

include(GNUInstallDirs)

# We generate the kernel.json file, given the installation prefix and the executable name
configure_file (
    "${CMAKE_CURRENT_SOURCE_DIR}/share/jupyter/kernels/my_kernel/kernel.json.in"
    "${CMAKE_CURRENT_SOURCE_DIR}/share/jupyter/kernels/my_kernel/kernel.json"
)

option(XEUS_STATIC_DEPENDENCIES "link statically with xeus dependencies" OFF)
if (XEUS_STATIC_DEPENDENCIES)
    set(xeus-zmq_target "xeus-zmq-static")
else ()
    set(xeus-zmq_target "xeus-zmq")
endif ()

# Dependencies
# ============

# Be sure to use recent versions
set(xeus-zmq_REQUIRED_VERSION 1.0.2)

find_package(xeus-zmq ${xeus-zmq_REQUIRED_VERSION} REQUIRED)
find_package(Threads)

# Flags
# =====

include(CheckCXXCompilerFlag)

if (CMAKE_CXX_COMPILER_ID MATCHES "Clang" OR CMAKE_CXX_COMPILER_ID MATCHES "GNU" OR CMAKE_CXX_COMPILER_ID MATCHES "Intel")
    CHECK_CXX_COMPILER_FLAG("-std=c++14" HAS_CPP14_FLAG)

    if (HAS_CPP14_FLAG)
        set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++14")
    else()
        message(FATAL_ERROR "Unsupported compiler -- xeus requires C++14 support!")
    endif()
endif()

# Target and link
# ===============

# my_kernel source files
set(MY_KERNEL_SRC
    src/custom_interpreter.cpp
    src/custom_interpreter.hpp
)

# My kernel executable
add_executable(${EXECUTABLE_NAME} src/main.cpp ${MY_KERNEL_SRC} )
target_link_libraries(${EXECUTABLE_NAME} PRIVATE ${xeus-zmq_target} Threads::Threads)

set_target_properties(${EXECUTABLE_NAME} PROPERTIES
    INSTALL_RPATH_USE_LINK_PATH TRUE
)

# Installation
# ============

# Install my_kernel
install(TARGETS ${EXECUTABLE_NAME}
        RUNTIME DESTINATION ${CMAKE_INSTALL_BINDIR})

# Configuration and data directories for jupyter and my_kernel
set(XJUPYTER_DATA_DIR "share/jupyter" CACHE STRING "Jupyter data directory")

# Install Jupyter kernelspecs
set(MY_KERNELSPEC_DIR ${CMAKE_CURRENT_SOURCE_DIR}/share/jupyter/kernels)
install(DIRECTORY ${MY_KERNELSPEC_DIR}
        DESTINATION ${XJUPYTER_DATA_DIR}
        PATTERN "*.in" EXCLUDE)

Now you should be able to install your new kernel and use it with any Jupyter client.

For the installation, you first need to install the dependencies; the easiest way is to use conda:

conda install -c conda-forge cmake jupyter xeus xeus-zmq xtl nlohmann_json cppzmq

Then create a build folder in the repository and build the kernel from there:

mkdir build
cd build
cmake -D CMAKE_INSTALL_PREFIX=$CONDA_PREFIX ..
make
make install

That’s it! Now if you run the Jupyter Notebook interface, you should be able to create a new notebook with the my_kernel kernel selected. Congrats!
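To check that the kernelspec was picked up, you can list the installed kernels; my_kernel should appear in the output, with a path that depends on your install prefix:

jupyter kernelspec list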

Writing unit-tests for your kernel

For writing unit tests for your kernel, you can use the jupyter_kernel_test Python library. It allows you to test the results of the requests you send to the kernel.
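A minimal sketch in Python, assuming the kernel was installed as my_kernel and declares python as its language; the code_hello_world attribute must be adapted to what your kernel actually prints:

import unittest
import jupyter_kernel_test

class MyKernelTests(jupyter_kernel_test.KernelTests):
    # The kernel name, as registered in the kernelspec
    kernel_name = "my_kernel"
    # The language declared in kernel.json
    language_name = "python"
    # Code that prints "hello, world" in the target language
    code_hello_world = "print('hello, world')"

if __name__ == "__main__":
    unittest.main()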