if you're here, then there is a chance you have a notebook (.ipynb) in a directory saved as Untitled.ipynb. it is just sitting there, but what if it could be used as a python module? importnb is here to answer that question.
use importnb's `Notebook` finder and loader to import notebooks as modules:
```python
import importnb

# with the explicit API
with importnb.Notebook():
    import Untitled

# with the extensible API
with importnb.imports("ipynb"):
    import Untitled
```

the snippet begins
with a context manager that modifies the files python can discover. it will find the `Untitled.ipynb` notebook and import it as a module with a `__name__` of `Untitled`. the module's `__file__` attribute will have `.ipynb` as its extension.
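the pattern behind the context manager is general: extra import machinery is installed on entry and removed on exit. a toy illustration of that pattern follows; it is not importnb's implementation (importnb registers in `sys.path_hooks`, while this sketch uses `sys.meta_path` for brevity), and `NullFinder` and `extended_imports` are hypothetical names:

```python
import contextlib
import sys
from importlib.abc import MetaPathFinder

class NullFinder(MetaPathFinder):
    """a stand-in finder; importnb's real finder locates .ipynb files."""
    def find_spec(self, name, path, target=None):
        return None  # defer to the next finder in line

@contextlib.contextmanager
def extended_imports(finder):
    """install a finder only for the duration of the with block."""
    sys.meta_path.insert(0, finder)
    try:
        yield finder
    finally:
        sys.meta_path.remove(finder)

with extended_imports(NullFinder()) as finder:
    assert finder in sys.meta_path    # extra machinery is active inside the block
assert finder not in sys.meta_path    # and gone once the block exits
```

because the machinery is removed on exit, imports outside the block behave exactly as stock python.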
maybe when we give notebooks new life they eventually earn a better name than Untitled?
the importnb command line interface mimics python's. it permits running notebook files, modules, and raw json data.
the commands below execute a notebook module and file respectively.
```shell
importnb -m Untitled    # call the Untitled module as __main__
importnb Untitled.ipynb # call the Untitled file as __main__
```

use either pip or conda/mamba:

```shell
pip install importnb
# or
conda install -c conda-forge importnb
# or
mamba install -c conda-forge importnb
```

`importnb.Notebook` offers parameters to customize how modules are imported:

- imports Jupyter notebooks as python modules
- fuzzy finding conventions for finding files that are not valid python names
- works with top-level await statements
- integrates with `pytest`, `IPython`, and `coverage`
- extensible machinery and entry points
- translates Jupyter notebook files (i.e. `.ipynb` files) line-for-line to python source, providing natural error messages
- command line interface for running notebooks as python scripts
- has no required dependencies
the Notebook object has a few features that can be toggled:
- `lazy: bool = False` lazy load the module; the namespace is populated when the module is accessed for the first time.
- `position: int = 0` the relative position of the import loader in `sys.path_hooks`
- `include_fuzzy_finder: bool = True` use fuzzy searching syntax when underscores are encountered.
- `include_markdown_docstring: bool = True` markdown blocks preceding a `class` or `def` become docstrings.
- `include_non_defs: bool = True` when disabled, import only function and class definitions and ignore intermediate expressions.
- `no_magic: bool = False` when disabled, execute `IPython` magic statements from the loader.
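the `lazy` flag defers module execution until the namespace is first accessed. python's standard library exposes the same mechanism through `importlib.util.LazyLoader`; the sketch below shows that underlying mechanism on a stdlib module, not importnb's own loader (`lazy_import` is a hypothetical helper name):

```python
import importlib.util
import sys

def lazy_import(name):
    """import a module whose execution is deferred to first attribute access."""
    spec = importlib.util.find_spec(name)
    loader = importlib.util.LazyLoader(spec.loader)
    spec.loader = loader
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module
    loader.exec_module(module)  # installs the lazy shim; the module body has not run yet
    return module

json = lazy_import("json")       # nothing executed so far
print(json.dumps({"a": 1}))      # first attribute access triggers the real import
```

for a notebook full of slow cells, deferring execution this way keeps `import` cheap until the module is actually used.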
some identifying properties of the loader can be customized:
- `name: str | None = None` a module name for the imported source
- `path: str | None = None` a path to a source file
- `extensions: tuple[str, ...] = (".ipy", ".ipynb")` file extensions to be considered importable
- `module_type: type[M] = SourceModule` the class used to store a module
these features are defined in the `importnb.loader.Interface` class and can be controlled through the command line interface.
the primary goal of this library is to make it easy to reuse python code in notebooks. below are a few ways to invoke python's import system within the context manager.
```python
import importnb
from importlib import import_module

with importnb.imports("ipynb"):
    import Untitled
    import Untitled as nb
    __import__("Untitled")
    import_module("Untitled")
```

importnb can import more than notebooks. json-like data from disk can be loaded and stored on the module with rich representations.
```python
import importnb

with importnb.imports("toml", "json", "yaml"):
    pass
```

all the available entry points are found with:

```python
from importnb.entry_points import list_aliases

list_aliases()
```

a notebook can also be loaded directly from a path:

```python
from importnb import Notebook

Untitled = Notebook.load("Untitled.ipynb")
```

often notebooks have names that are not valid python names, which are restricted to alphanumeric characters and underscores. the importnb fuzzy finder converts python's import convention into globs that will find modules matching specific patterns. consider the statement:
```python
import importnb

with importnb.Notebook():
    import U_titl__d  # U*titl**d.ipynb
```

importnb translates `U_titl__d` to a glob format that matches the pattern `U*titl**d.ipynb` when searching for the source. that means that importnb should find `Untitled.ipynb` as the source for the import.
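the underscore-to-glob translation can be sketched with the standard library alone. this is an illustration of the convention described above, not importnb's actual implementation; the rule assumed here is simply that each run of underscores becomes the same number of `*` wildcards:

```python
import re
from fnmatch import fnmatch

def fuzzy_glob(name, extension=".ipynb"):
    """translate a python identifier into a glob: '_' -> '*', '__' -> '**'."""
    pattern = re.sub("_+", lambda m: "*" * len(m.group()), name)
    return pattern + extension

print(fuzzy_glob("U_titl__d"))                              # U*titl**d.ipynb
print(fnmatch("Untitled.ipynb", fuzzy_glob("U_titl__d")))   # True
```

`fnmatch` stands in for the file-system search: any notebook whose name matches the glob is a candidate source for the import.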
```python
import importnb

with importnb.Notebook():
    import _ntitled  # *ntitled.ipynb
    import __d       # **d.ipynb
    import U__       # U**.ipynb
```

a motivation for this feature is naming notebooks as if they were blog posts using the `YYYY-MM-DD-title-here.ipynb` convention. there are a few ways we could import this file explicitly. the fuzzy finder syntax means all of the following are equivalent:
```python
import importnb

with importnb.Notebook():
    import __title_here
    import YYYY_MM_DD_title_here
    import __MM_DD_title_here
```

it is possible that a fuzzy import may be ambiguous and return multiple files.
the importnb fuzzy finder will prefer the most recently changed file.
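the "most recently changed" rule can be sketched with the standard library; this is a hypothetical resolver (`resolve_fuzzy` is not importnb's api), not the library's code:

```python
import glob
import os

def resolve_fuzzy(pattern):
    """of all files matching the glob, pick the most recently modified."""
    matches = glob.glob(pattern)
    if not matches:
        raise ModuleNotFoundError(pattern)
    return max(matches, key=os.path.getmtime)
```

given `2023-01-01-title.ipynb` and `2024-01-01-title.ipynb` in the same directory, `resolve_fuzzy("*title*.ipynb")` returns whichever file was touched last.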
ambiguity can be avoided by using more explicit fuzzy imports to reduce collisions. another option is to use python's explicit import functions.
```python
from importlib import import_module
import importnb

with importnb.Notebook():
    __import__("YYYY-MM-DD-title-here")
    import_module("YYYY-MM-DD-title-here")
```

an outcome of resolving the most recently changed file is that you can import your most recent notebook with:

```python
import importnb

with importnb.Notebook():
    import __  # **.ipynb
```

since importnb transforms notebooks to python documents, we can use them as sources for tests.
importnb's pytest extension is not fancy: it only allows conventional pytest test discovery, and it must be explicitly enabled.
... to discover tests with importnb installed ...

add one of:

- call the pytest CLI with the plugin enabled

  ```shell
  pytest -pimportnb.utils.pytest_importnb
  ```

- set the `PYTEST_PLUGINS` environment variable

  ```shell
  PYTEST_PLUGINS=importnb.utils.pytest_importnb pytest
  ```

- add to `[tool.pytest.ini_options]` in `pyproject.toml`

  ```toml
  [tool.pytest.ini_options]
  addopts = ["-pimportnb.utils.pytest_importnb"]
  ```

- add to `conftest.py`

  ```python
  pytest_plugins = [
      "importnb.utils.pytest_importnb",
  ]
  ```
coverage can tell you how much of your code runs.
... to gather coverage from notebooks ...

- add to `[tool.coverage.run]` in `pyproject.toml`

  ```toml
  [tool.coverage.run]
  plugins = ["importnb.utils.coverage"]
  ```
the importnb.Notebook machinery is extensible. it allows other file formats to be used. for example, pidgy uses importnb to import markdown files as compiled python code.
```python
import importnb

class MyLoader(importnb.Notebook):
    pass
```

a challenge with Jupyter notebooks is that they are json data. this poses problems:
- every valid line of code in a Jupyter notebook is a quoted json string
- json parsers don't have a reason to return line numbers.
python's json module is not pluggable in the way we need to find line numbers. since importnb is meant to be dependency-free on installation, we couldn't look to other packages like ujson or json5.
the need for line numbers is enough that we ship a stand-alone json grammar parser. to do this without extra dependencies we use the lark grammar package at build time:
- we've defined a minimal grammar in `json.g`
- we invoke `lark-standalone`, which generates a stand-alone parser for the grammar.
- the generated file is shipped with the package.
- this code is licensed under the Mozilla Public License 2.0
the result of importnb is json data translated into vertically sparse, valid python code.
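the shape of that translation can be illustrated with a toy example. this sketch is not importnb's parser (it uses the stdlib json module, so it gives up the real line numbers the lark parser preserves, and `to_python` is a hypothetical name): code cells pass through as python lines, and markdown cells become string expressions:

```python
import json

# a minimal in-memory notebook; real notebooks are read from .ipynb files
nb = {
    "cells": [
        {"cell_type": "markdown", "source": ["# a title\n"]},
        {"cell_type": "code", "source": ["a = 1\n", "b = a + 1\n"]},
    ]
}

def to_python(notebook):
    """translate cells to python source: code passes through,
    markdown becomes a string expression."""
    lines = []
    for cell in notebook["cells"]:
        source = "".join(cell["source"])
        if cell["cell_type"] == "code":
            lines.append(source)
        else:
            lines.append(repr(source) + "\n")
    return "".join(lines)

source = to_python(json.loads(json.dumps(nb)))
namespace = {}
exec(compile(source, "<notebook>", "exec"), namespace)
print(namespace["b"])  # 2
```

because the output is valid python, ordinary tools like `compile`, tracebacks, and coverage work on it directly.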