Implementing a kernel
In most cases, the base kernel implementation is enough, and creating a kernel only means implementing the interpreter part.
The structure of your project should at least look like the following:
└── example/
    ├── src/
    │   ├── custom_interpreter.cpp
    │   ├── custom_interpreter.hpp
    │   └── main.cpp
    ├── share/
    │   └── jupyter/
    │       └── kernels/
    │           └── my_kernel/
    │               └── kernel.json.in
    └── CMakeLists.txt
Implementing the interpreter
Let’s start by editing the custom_interpreter.hpp file; it should contain the declaration of your interpreter class:
/***************************************************************************
* Copyright (c) 2016, Johan Mabille, Sylvain Corlay, Martin Renou          *
* Copyright (c) 2016, QuantStack                                           *
*                                                                          *
* Distributed under the terms of the BSD 3-Clause License.                 *
*                                                                          *
* The full license is in the file LICENSE, distributed with this software. *
****************************************************************************/

#ifndef CUSTOM_INTERPRETER
#define CUSTOM_INTERPRETER

#include "xeus/xinterpreter.hpp"
#include "nlohmann/json.hpp"

using xeus::xinterpreter;

namespace nl = nlohmann;

namespace custom
{
    class custom_interpreter : public xinterpreter
    {
    public:

        custom_interpreter() = default;
        virtual ~custom_interpreter() = default;

    private:

        void configure_impl() override;

        nl::json execute_request_impl(int execution_counter,
                                      const std::string& code,
                                      bool silent,
                                      bool store_history,
                                      nl::json user_expressions,
                                      bool allow_stdin) override;

        nl::json complete_request_impl(const std::string& code,
                                       int cursor_pos) override;

        nl::json inspect_request_impl(const std::string& code,
                                      int cursor_pos,
                                      int detail_level) override;

        nl::json is_complete_request_impl(const std::string& code) override;

        nl::json kernel_info_request_impl() override;

        void shutdown_request_impl() override;
    };
}

#endif
Note
Almost all custom_interpreter methods return an nl::json instance. This uses nlohmann json, a modern C++ implementation of a JSON data structure.
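If you are new to the library, here is a small, self-contained illustration (not part of the kernel itself) of how such JSON values are typically built:

#include "nlohmann/json.hpp"

namespace nl = nlohmann;

nl::json make_example()
{
    nl::json j;
    j["status"] = "ok";                 // string member
    j["matches"] = {"Hello", "Hey"};    // array member, built from an initializer list
    j["metadata"] = nl::json::object(); // empty JSON object
    j["payload"] = nl::json::array();   // empty JSON array
    return j;
}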
Code Execution
Then you need to implement each of these methods in the custom_interpreter.cpp file. The main one is of course execute_request_impl, which executes the code whenever the client sends an execute request.
nl::json custom_interpreter::execute_request_impl(int execution_counter, // Typically the cell number
                                                  const std::string& /*code*/, // Code to execute
                                                  bool /*silent*/,
                                                  bool /*store_history*/,
                                                  nl::json /*user_expressions*/,
                                                  bool /*allow_stdin*/)
{
    // You can use the C-API of your target language for executing the code,
    // e.g. `PyRun_String` for the Python C-API
    //      `luaL_dostring` for the Lua C-API

    // Use this method for publishing the execution result to the client,
    // this method takes the ``execution_counter`` as first argument,
    // the data to publish (mime type data) as second argument and metadata
    // as third argument.
    // Replace "Hello World !!" by what you want to be displayed under the execution cell
    nl::json pub_data;
    pub_data["text/plain"] = "Hello World !!";
    publish_execution_result(execution_counter, std::move(pub_data), nl::json::object());

    // You can also use this method for publishing errors to the client, if the code
    // failed to execute
    // publish_execution_error(error_name, error_value, error_traceback);
    publish_execution_error("TypeError", "123", {"!@#$", "*(*"});

    nl::json result;
    result["status"] = "ok";
    return result;
}
The result and arguments of the execution request are described in the execute_request documentation.
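Note that publish_execution_result is meant for the final result of the cell. For text that the executed code prints while running, xeus also provides a publish_stream method on the interpreter (check the xinterpreter.hpp of your xeus version for the exact signature). A typical use inside execute_request_impl:

// Forward text printed by the executed code to the client's streams
publish_stream("stdout", "some output\n");
publish_stream("stderr", "some warning\n");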
Note
The other methods are all optional, but we encourage you to implement them in order to have a fully-featured kernel.
Input request
To support input requests, you need to monkey-patch the language functions that prompt for user input (input and raw_input in Python, io.read in Lua, etc.) and call xeus::blocking_input_request instead. The second parameter should be set to false if what the user is typing should not be visible on the screen.
#include "xeus/xinput.hpp"
xeus::blocking_input_request("User name:", true);
xeus::blocking_input_request("Password:", false);
Configuration
The configure_impl method allows you to perform some operations after the custom_interpreter creation and before executing any request. This is optional but can be useful; for example, xeus-python uses it to initialize the auto-completion engine.
void custom_interpreter::configure_impl()
{
    // Perform some operations
}
Code Completion
The complete_request_impl method allows you to implement the auto-completion logic for your kernel.
nl::json custom_interpreter::complete_request_impl(const std::string& code,
                                                   int cursor_pos)
{
    nl::json result;

    // Code starts with 'H', it could be the following completion
    if (code[0] == 'H')
    {
        result["status"] = "ok";
        result["matches"] = {"Hello", "Hey", "Howdy"};
        result["cursor_start"] = 5;
        result["cursor_end"] = cursor_pos;
    }
    // No completion result
    else
    {
        result["status"] = "ok";
        result["matches"] = nl::json::array();
        result["cursor_start"] = cursor_pos;
        result["cursor_end"] = cursor_pos;
    }

    return result;
}
The result and arguments of the completion request are described in the complete_request documentation.
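In a real kernel, the cursor_start value should be computed rather than hard-coded. A common approach, sketched below with generic identifier rules, is to scan backwards from cursor_pos to the beginning of the token being completed:

#include <cctype>
#include <string>

// Scan backwards from the cursor over identifier characters to find where
// the token under completion starts. Adapt the character set to your language.
int token_start(const std::string& code, int cursor_pos)
{
    int pos = cursor_pos;
    while (pos > 0 && (std::isalnum(static_cast<unsigned char>(code[pos - 1])) || code[pos - 1] == '_'))
    {
        --pos;
    }
    return pos;
}

The reply would then set result["cursor_start"] = token_start(code, cursor_pos) and result["cursor_end"] = cursor_pos, so that the client replaces exactly the token being completed.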
Code Inspection
This request allows the kernel user to inspect a variable, class, or type in the code. It takes the code and the cursor position as arguments; it is up to the kernel author to extract the token at the given cursor position in order to know which name the user wants to inspect.
nl::json custom_interpreter::inspect_request_impl(const std::string& code,
                                                  int /*cursor_pos*/,
                                                  int /*detail_level*/)
{
    nl::json result;

    if (code.compare("print") == 0)
    {
        result["found"] = true;
        result["text/plain"] = "Print objects to the text stream file, [...]";
    }
    else
    {
        result["found"] = false;
    }

    result["status"] = "ok";
    return result;
}
The result and arguments of the inspection request are described in the inspect_request documentation.
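The token extraction mentioned above can reuse the backward scan from the completion sketch, extended forward past the cursor. A hypothetical helper (the identifier rules are language-specific):

#include <cctype>
#include <string>

// Hypothetical helper: extract the identifier under the cursor by scanning
// in both directions over identifier characters.
std::string token_at(const std::string& code, int cursor_pos)
{
    auto is_ident = [](char c)
    {
        return std::isalnum(static_cast<unsigned char>(c)) || c == '_';
    };

    int start = cursor_pos;
    while (start > 0 && is_ident(code[start - 1]))
    {
        --start;
    }

    int end = cursor_pos;
    while (end < static_cast<int>(code.size()) && is_ident(code[end]))
    {
        ++end;
    }

    return code.substr(start, end - start);
}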
Code Completeness
This request is never sent by the Notebook or JupyterLab clients, but it is sent by the Jupyter console client. It allows the client to know whether the user has finished typing their code before sending an execute request. For example, in Python, the following code is not considered complete:
def foo():
So the kernel should return “incomplete” with an indentation value of 4 for the next line.
The following code is considered complete:
def foo():
    print("bar")
So the kernel should return “complete”.
nl::json custom_interpreter::is_complete_request_impl(const std::string& /*code*/)
{
    nl::json result;

    // if (is_complete(code))
    // {
        result["status"] = "complete";
    // }
    // else
    // {
    //     result["status"] = "incomplete";
    //     result["indent"] = 4;
    // }

    return result;
}
The result and arguments of the completeness request are described in the is_complete_request documentation.
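Completeness rules are entirely language-specific, and the best option is to reuse the language’s own parser (for example, Python exposes this check through its codeop module). If nothing like that is available, a crude bracket-balancing heuristic can serve as a starting point; a sketch, which deliberately ignores strings and comments:

#include <string>

// Naive completeness heuristic: the code is "incomplete" while brackets are
// still open. Real kernels should rely on the language's own parser instead.
bool naive_is_complete(const std::string& code)
{
    int depth = 0;
    for (char c : code)
    {
        if (c == '(' || c == '[' || c == '{')
        {
            ++depth;
        }
        else if (c == ')' || c == ']' || c == '}')
        {
            --depth;
        }
    }
    return depth <= 0;
}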
Kernel info
This request allows the client to get information about the kernel: language, language version, kernel version, etc.
nl::json custom_interpreter::kernel_info_request_impl()
{
    nl::json result;
    result["implementation"] = "my_kernel";
    result["implementation_version"] = "0.1.0";
    result["language_info"]["name"] = "python";
    result["language_info"]["version"] = "3.7";
    result["language_info"]["mimetype"] = "text/x-python";
    result["language_info"]["file_extension"] = ".py";
    return result;
}
The result and arguments of the kernel info request are described in the kernel_info_request documentation.
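The kernel_info_reply message defines further optional fields that some clients use, such as the banner displayed by the Jupyter console. Assuming your xeus version forwards the returned fields to the client unchanged, they can be added to the same result object:

// Optional fields from the kernel_info_reply specification
result["banner"] = "my_kernel: a minimal xeus-based kernel";
result["help_links"] = nl::json::array();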
Kernel shutdown
This allows you to perform some operations before shutting down the kernel.
// Requires #include <iostream> at the top of the file
void custom_interpreter::shutdown_request_impl()
{
    std::cout << "Bye!!" << std::endl;
}
Implementing the main entry
Now let’s edit the main.cpp file, which is the main entry point for the kernel executable.
/***************************************************************************
* Copyright (c) 2016, Johan Mabille, Sylvain Corlay, Martin Renou          *
* Copyright (c) 2016, QuantStack                                           *
*                                                                          *
* Distributed under the terms of the BSD 3-Clause License.                 *
*                                                                          *
* The full license is in the file LICENSE, distributed with this software. *
****************************************************************************/

#include <memory>
#include <string>

#include "xeus/xkernel.hpp"
#include "xeus/xkernel_configuration.hpp"

#include "custom_interpreter.hpp"

int main(int argc, char* argv[])
{
    // Load the configuration file. Jupyter starts the kernel with
    // `my_kernel -f {connection_file}`, hence argv[2].
    std::string file_name = (argc == 1) ? "connection.json" : argv[2];
    xeus::xconfiguration config = xeus::load_configuration(file_name);

    // Create interpreter instance
    using interpreter_ptr = std::unique_ptr<custom::custom_interpreter>;
    interpreter_ptr interpreter = interpreter_ptr(new custom::custom_interpreter());

    // Create kernel instance and start it
    xeus::xkernel kernel(config, xeus::get_user_name(), std::move(interpreter));
    kernel.start();

    return 0;
}
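As the kernel.json file described below shows, Jupyter clients start the kernel with the -f flag followed by the connection file, which is why the connection file is read from argv[2]. You can launch the executable the same way by hand when debugging:

./my_kernel -f /path/to/connection.json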
Kernel file
The kernel.json file is a JSON file used by Jupyter to discover the available kernels. It must be installed in the INSTALL_PREFIX/share/jupyter/kernels/my_kernel directory; we will see how to do that in the section on compiling and installing below.
This JSON file contains:
display_name: the name that the Jupyter client should display in its interface (e.g. on the main JupyterLab page).
argv: the command that the Jupyter client runs in order to start the kernel. You should leave this value unchanged unless you know what you are doing.
language: the target language of your kernel.
You can edit the kernel.json.in file as follows. This file will be used by CMake to generate the actual kernel.json file that will be installed.
{
    "display_name": "my_kernel",
    "argv": [
        "@CMAKE_INSTALL_PREFIX@/@CMAKE_INSTALL_BINDIR@/@EXECUTABLE_NAME@",
        "-f",
        "{connection_file}"
    ],
    "language": "python"
}
Note
You can provide logos that will be used by the Jupyter client. Those logos should be in files named logo-32x32.png and logo-64x64.png (32x32 and 64x64 being the size of the image in pixels), and they should be placed next to the kernel.json.in file.
Compiling and installing the kernel
Your CMakeLists.txt file should look like the following:
############################################################################
# Copyright (c) 2016, Johan Mabille, Sylvain Corlay, Martin Renou          #
# Copyright (c) 2016, QuantStack                                           #
#                                                                          #
# Distributed under the terms of the BSD 3-Clause License.                 #
#                                                                          #
# The full license is in the file LICENSE, distributed with this software. #
############################################################################

cmake_minimum_required(VERSION 3.4.3)
project(my_kernel)

set(EXECUTABLE_NAME my_kernel)

# Configuration
# =============

include(GNUInstallDirs)

# We generate the kernel.json file, given the installation prefix and the executable name
configure_file (
    "${CMAKE_CURRENT_SOURCE_DIR}/share/jupyter/kernels/my_kernel/kernel.json.in"
    "${CMAKE_CURRENT_SOURCE_DIR}/share/jupyter/kernels/my_kernel/kernel.json"
)

option(XEUS_STATIC_DEPENDENCIES "link statically with xeus dependencies" OFF)
if (XEUS_STATIC_DEPENDENCIES)
    set(xeus_target "xeus-static")
else ()
    set(xeus_target "xeus")
endif ()

# Dependencies
# ============

# Be sure to use recent versions
set(xeus_REQUIRED_VERSION 0.19.1)
set(cppzmq_REQUIRED_VERSION 4.3.0)

find_package(xeus ${xeus_REQUIRED_VERSION} REQUIRED)
find_package(cppzmq ${cppzmq_REQUIRED_VERSION} REQUIRED)
find_package(Threads)

# Flags
# =====

include(CheckCXXCompilerFlag)

if (CMAKE_CXX_COMPILER_ID MATCHES "Clang" OR CMAKE_CXX_COMPILER_ID MATCHES "GNU" OR CMAKE_CXX_COMPILER_ID MATCHES "Intel")
    CHECK_CXX_COMPILER_FLAG("-std=c++14" HAS_CPP14_FLAG)

    if (HAS_CPP14_FLAG)
        set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++14")
    else()
        message(FATAL_ERROR "Unsupported compiler -- xeus requires C++14 support!")
    endif()
endif()

# Target and link
# ===============

# my_kernel source files
set(MY_KERNEL_SRC
    src/custom_interpreter.cpp
    src/custom_interpreter.hpp
)

# My kernel executable
add_executable(${EXECUTABLE_NAME} src/main.cpp ${MY_KERNEL_SRC})

target_link_libraries(${EXECUTABLE_NAME} PRIVATE ${xeus_target} Threads::Threads)

if (APPLE)
    set_target_properties(${EXECUTABLE_NAME} PROPERTIES
        MACOSX_RPATH ON
    )
else()
    set_target_properties(${EXECUTABLE_NAME} PROPERTIES
        BUILD_WITH_INSTALL_RPATH 1
        SKIP_BUILD_RPATH FALSE
    )
endif()

set_target_properties(${EXECUTABLE_NAME} PROPERTIES
    INSTALL_RPATH_USE_LINK_PATH TRUE
)

# Installation
# ============

# Install my_kernel
install(TARGETS ${EXECUTABLE_NAME}
        RUNTIME DESTINATION ${CMAKE_INSTALL_BINDIR})

# Configuration and data directories for jupyter and my_kernel
set(XJUPYTER_DATA_DIR "share/jupyter" CACHE STRING "Jupyter data directory")

# Install Jupyter kernelspecs
set(MY_KERNELSPEC_DIR ${CMAKE_CURRENT_SOURCE_DIR}/share/jupyter/kernels)
install(DIRECTORY ${MY_KERNELSPEC_DIR}
        DESTINATION ${XJUPYTER_DATA_DIR}
        PATTERN "*.in" EXCLUDE)
Now you should be able to install your new kernel and use it with any Jupyter client.
For the installation, you first need to install the dependencies; the easiest way is using conda:
conda install -c conda-forge cmake jupyter xeus xtl nlohmann_json cppzmq
Then create a build folder in the repository and build the kernel from there:
mkdir build
cd build
cmake -D CMAKE_INSTALL_PREFIX=$CONDA_PREFIX ..
make
make install
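You can check that the kernelspec was picked up by listing the kernels known to Jupyter:

jupyter kernelspec list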
That’s it! Now if you run the Jupyter Notebook interface, you should be able to create a new notebook by selecting the my_kernel kernel. Congrats!
Writing unit-tests for your kernel
To write unit-tests for your kernel, you can use the jupyter_kernel_test Python library. It allows you to test the results of the requests you send to the kernel.
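jupyter_kernel_test is distributed on PyPI, so assuming a working Python environment it can be installed with pip:

pip install jupyter_kernel_test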