A common problem when testing a new business model is how to quickly build a basic version that validates the idea. Quite often there is an existing Python codebase that already implements basic infrastructure, secret management, and data access. In that case it's possible to piggyback on the existing goodies and build on top of them. The whole application can be a folder on a Jupyter server, with multiple notebooks representing different use cases.
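As an illustration, such a folder might look like the sketch below. The notebook names and exact layout are made up; the package names match the examples later in this post.

```
notebook-server/
├── my_project/              # existing codebase: infrastructure, secrets, data access
├── my_notebook_folder/
│   ├── controller.py        # logic and ipywidgets presentation
│   ├── changelog.ipynb      # one single-cell notebook per use case
│   └── another_use_case.ipynb
└── Makefile                 # deploy target, see Deployment below
```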
The beauty of this approach is that the main codebase stays untouched while the notebook layer is thin and easy to maintain. If the idea fails, there is nothing to clean up in the main repo, as all the code lives in the notebook repo. Deployment is quick and easy too: the whole folder can be rsynced to a notebook server.
Next, it's best to keep the notebook surface minimal: a single cell that just wires the controller to the application. The controller implements the actual logic and the presentation using ipywidgets.
Example notebook:
```python
from my_project.imports import create_app
from my_notebook_folder.controller import Controller

# application factory similar to Flask's
app = create_app()
Controller(app).render()
```
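The notebook can stay this small because `create_app()` hands back an object that already carries the project's infrastructure. The factory itself lives in the main codebase; the following is only a hypothetical sketch of what it might provide, with `load_secrets`, `ChangelogRepo`, and the sqlite connection as made-up stand-ins for whatever the existing project already has.

```python
# Hypothetical sketch of my_project/imports.py; all names here are illustrative.
import sqlite3
from dataclasses import dataclass


def load_secrets() -> dict:
    # In the real project this would use the existing secret management.
    return {"db_path": ":memory:"}


class ChangelogRepo:
    def __init__(self, connection):
        self.connection = connection

    def changelog(self, user_id: int, since: str = "", limit: int = 100):
        # In the real project this would run an actual query.
        return []


@dataclass
class App:
    changelog_repo: ChangelogRepo


def create_app() -> App:
    # Reuse whatever the main codebase already provides: secrets, connections, repos.
    secrets = load_secrets()
    connection = sqlite3.connect(secrets["db_path"])
    return App(changelog_repo=ChangelogRepo(connection))
```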
Implementation of the controller:
```python
from dataclasses import dataclass

import IPython.display as display
import ipywidgets as widgets
import pandas as pd


@dataclass
class Request:
    user_id: int
    since: str = ""
    limit: int = 100


# could be imported from the main codebase
def get_changelog(repo, request: Request) -> pd.DataFrame:
    # Placeholder: wire this to the actual repository method.
    return pd.DataFrame()


def run(repo, request: Request):
    df = get_changelog(repo, request)
    display.display(df)


class Controller:
    def __init__(self, app):
        self.app = app
        self.repo = getattr(app, "changelog_repo", None)
        self.create_widgets()

    @property
    def widgets(self):
        return [
            self.user_id_input,
            self.since_input,
            self.limit_input,
            self.run_button,
            self.output,
        ]

    @property
    def request(self) -> Request:
        # Collect the current widget values into a plain Request object.
        return Request(
            user_id=int(self.user_id_input.value),
            since=self.since_input.value,
            limit=self.limit_input.value,
        )

    def create_widgets(self):
        self.user_id_input = widgets.IntText(
            value=0,
            placeholder="Enter user id",
            description="User:",
        )
        self.since_input = widgets.Text(
            value="",
            placeholder="YYYY-MM-DD (optional)",
            description="Since:",
        )
        self.limit_input = widgets.IntText(
            value=100,
            description="Limit:",
        )
        self.run_button = widgets.Button(
            description="Fetch changelog",
            disabled=False,
            button_style="",
            icon="check",
        )
        self.output = widgets.Output()
        self.run_button.on_click(self.run)

    def run(self, _):
        # Route all output into the Output widget so the notebook stays clean.
        with self.output:
            self.output.clear_output()
            run(self.repo, self.request)

    def render(self):
        display.display(widgets.VBox(self.widgets))
```
Deployment
```make
deploy:
	rsync -ahivr --delete --exclude='tests/' --exclude='.git/' --exclude='.venv' --exclude='__pycache__/' ./ $(SSH_USER)@$(SSH_HOST):$(REMOTE_PATH)/
```
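SSH_USER, SSH_HOST, and REMOTE_PATH need to be set elsewhere in the Makefile, in the environment, or on the command line, e.g. `make deploy SSH_USER=me SSH_HOST=notebook-host REMOTE_PATH=/srv/notebooks` (placeholder values).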
Benefits of this approach
- Leverages the existing project infrastructure: secret management, database connections, business logic, etc.
- Separation of concerns: the internal tool is built on the side, without touching the main codebase.
- Logic is decoupled from presentation and can be tested independently and locally (see the test sketch after this list).
- Instant deployment: rsync local folder to notebook server.
- The notebook surface is minimal and therefore easier to review.
- Agent-friendly: coding agents work with plain Python files instead of Jupyter's JSON notebook format.
- User-friendly: a single-cell notebook with easy-to-use widgets.
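Because the controller's logic is plain Python, it can be exercised locally with a normal test runner, without starting Jupyter. Below is a minimal sketch, assuming pytest, ipywidgets, and pandas are installed and using a made-up `FakeApp` stub in place of the real application.

```python
# test_controller.py: minimal sketch, run with pytest.
import pandas as pd

from my_notebook_folder.controller import Controller, Request, get_changelog


class FakeApp:
    # Stand-in for the object returned by create_app().
    changelog_repo = object()


def test_request_is_built_from_widget_values():
    controller = Controller(FakeApp())
    controller.user_id_input.value = 42
    controller.since_input.value = "2024-01-01"
    controller.limit_input.value = 10
    assert controller.request == Request(user_id=42, since="2024-01-01", limit=10)


def test_get_changelog_returns_a_dataframe():
    df = get_changelog(FakeApp().changelog_repo, Request(user_id=42))
    assert isinstance(df, pd.DataFrame)
```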