The Real Problem¶
The problem isn’t that we lack ideas or eloquence - it’s that our thoughts are fragmented across platforms and protocols. I can write extensively about software development in conversations with AI, but translating that into long-form content feels like an insurmountable task.
So I want to do something to help with this. Solve your own problem and you’ll end up solving a problem a lot of people face, because...
nobody’s that special.
So great, I’m looking to create Yet Another AI Writing Tool, and I’m dressing it up like “oh, I’m just trying to stick MyST markdown in the preview of whtwnd and also enable it across my fork.” Yeah, that’s just the pretext for cramming another AI assistant down everyone’s throats. But I feel like there’s a better way to interface with your assistant than just through chat. It should be listening to what you write and doing tasks in the background that help with your projects, interests, and goals. A full assistant, but through your journal.
So this isn’t about creating another AI writing tool to automate college essay slop generation. It’s about building a system that helps organize our scattered knowledge into coherent structures, whether that’s a tweet, a blog post, or a book. The goal is to help people maintain consistency, identify patterns in their thinking, and build on their ideas without losing the human element. Because the truth is, most of us are more capable than we realize - we just need help organizing our thoughts into something meaningful.
Today’s Mathematical Journey¶
I’ve been thinking a lot about high-dimensional spaces lately. It’s one of those topics that keeps coming up in different contexts - from machine learning to quantum mechanics. The weird thing about high dimensions is how they completely break our 3D intuition. Let me walk through a proof that blew my mind when I first saw it.
The N-Ball Properties¶
Let’s prove two fascinating properties of high-dimensional spaces:
- Points cluster near the surface of the $n$-ball
- Any two points are approximately $\sqrt{2}$ units apart
Radial Distance Distribution¶
What does the phrase “points cluster near the surface” really mean?
Let $R$ be the distance from the origin to a point picked uniformly at random inside the $n$-dimensional unit ball $B^n$.
The probability that $R$ falls in $[r, r+dr]$ equals the fraction of the ball’s volume occupied by the thin spherical shell at that radius. In symbols,

$$
P\bigl(R \in [r, r+dr]\bigr) = \frac{A_{n-1}(r)\,dr}{V_n},
$$

where $A_{n-1}(r)$ is the surface area of the $(n-1)$-sphere of radius $r$ and $V_n$ is the volume of the unit $n$-ball. Because the volume of the ball of radius $r$ is $V_n(r) = V_n\,r^n$, differentiating with respect to $r$ shows that $A_{n-1}(r) = n\,V_n\,r^{\,n-1}$. Plugging this into the fraction above immediately gives the probability-density function

$$
f_R(r) = n\,r^{\,n-1}, \qquad 0 \le r \le 1.
$$
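As a sanity check, here’s a small Monte-Carlo sketch (the sampler and variable names are my own, nothing beyond plain numpy): it draws uniform points in the unit $n$-ball by normalising an $(n+2)$-dimensional Gaussian and keeping the first $n$ coordinates, a standard trick that avoids assuming the radial law we want to verify, then compares the empirical CDF of $R$ against $r^n$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Uniform points in the unit n-ball WITHOUT using the radial law we want to
# verify: normalise an (n+2)-dimensional Gaussian to get a uniform point on
# S^(n+1), then keep the first n coordinates -- the result is uniform in B^n.
dim, n_points = 10, 200_000
g = rng.standard_normal((n_points, dim + 2))
pts = g[:, :dim] / np.linalg.norm(g, axis=1, keepdims=True)
r = np.linalg.norm(pts, axis=1)

# The CDF of R should be F(r) = r^n, matching the density f_R(r) = n r^(n-1).
for q in (0.5, 0.8, 0.95):
    print(f"P(R <= {q}): empirical {np.mean(r <= q):.4f}, exact {q**dim:.4f}")
```

The empirical fractions land right on $q^{10}$, which is exactly the CDF you get by integrating $n\,r^{\,n-1}$.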
Meaning of $f_R(r) = n\,r^{\,n-1}$¶

Why $f_R(r) = n\,r^{\,n-1}$?¶

A thin shell at radius $r$ with thickness $dr$ has volume

$$
A_{n-1}\,r^{\,n-1}\,dr,
$$

where $A_{n-1}$ is the surface area of the unit $(n-1)$-sphere.

Because the point is chosen uniformly, its probability of landing in that shell is

$$
f_R(r)\,dr = \frac{A_{n-1}\,r^{\,n-1}\,dr}{V_n},
$$

with $V_n$ the total volume of the unit $n$-ball.

The ratio $A_{n-1}/V_n$ equals $n$ for the unit ball, so

$$
f_R(r) = n\,r^{\,n-1}.
$$

The factor $r^{\,n-1}$ captures how the “room to place points” grows with radius, while the leading $n$ ensures the PDF integrates to 1.

That single formula encapsulates the “surface concentration” effect: for large $n$ the density is negligible until $r$ is very close to 1. Indeed, $P(R \le 1-\varepsilon) = (1-\varepsilon)^n \to 0$ and $\mathbb{E}[R] = \frac{n}{n+1} \to 1$, so almost every point lies within an $O(1/n)$ shell of the boundary.
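To watch the concentration kick in, here’s a quick numerical sketch (throwaway code of mine, not from any library): for growing $n$ it estimates the fraction of uniform ball points in the thin shell $(1-\varepsilon, 1]$ and the mean radius, next to the exact values $1 - (1-\varepsilon)^n$ and $n/(n+1)$.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_ball(n_points, dim, rng):
    """Uniform samples in the unit dim-ball: normalise a (dim+2)-dimensional
    Gaussian and keep the first dim coordinates."""
    g = rng.standard_normal((n_points, dim + 2))
    return g[:, :dim] / np.linalg.norm(g, axis=1, keepdims=True)

eps, n_points = 0.05, 100_000
for dim in (2, 20, 200):
    r = np.linalg.norm(sample_ball(n_points, dim, rng), axis=1)
    print(f"n={dim:3d}: P(R > {1 - eps}) = {np.mean(r > 1 - eps):.3f} "
          f"(exact {1 - (1 - eps)**dim:.3f}), "
          f"mean R = {r.mean():.3f} (n/(n+1) = {dim / (dim + 1):.3f})")
```

Already at $n = 200$, essentially every sample sits inside the outer 5% shell.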
Distance Between Two Random Points Concentrates at $\sqrt{2}$¶

**Goal** For independent, uniformly-random points

$$
X,\,Y \sim \operatorname{Unif}\bigl(S^{\,n-1}\bigr),
$$

show

$$
\lVert X - Y\rVert \xrightarrow{\;P\;} \sqrt{2} \tag{7}
$$

as $n \to \infty$.

Rewrite the distance via a dot product¶

Because $\lVert X\rVert = \lVert Y\rVert = 1$,

$$
\lVert X - Y\rVert^2 = \lVert X\rVert^2 + \lVert Y\rVert^2 - 2\langle X, Y\rangle = 2 - 2\langle X, Y\rangle. \tag{8}
$$

Thus it suffices to prove $\langle X, Y\rangle \xrightarrow{\;P\;} 0$.

Model the sphere points with Gaussians¶

Represent each uniform point as a normalised Gaussian vector:

$$
X = \frac{G}{\lVert G\rVert}, \qquad Y = \frac{H}{\lVert H\rVert},
$$

where $G, H \sim \mathcal{N}(0, I_n)$ and $G$, $H$ are independent.

Then

$$
\langle X, Y\rangle = \frac{\langle G, H\rangle}{\lVert G\rVert\,\lVert H\rVert}. \tag{10}
$$

Asymptotics of the numerator and denominators¶

**Numerator** Set $S_n = \langle G, H\rangle = \sum_{i=1}^{n} G_i H_i$. Since $\mathbb{E}[G_i H_i] = 0$ and $\operatorname{Var}(G_i H_i) = 1$, $S_n/\sqrt{n} \Rightarrow \mathcal{N}(0, 1)$ (central-limit theorem).

**Denominator** By the law of large numbers $\lVert G\rVert^2 / n \to 1$ almost surely, hence $\lVert G\rVert/\sqrt{n} \to 1$ (and likewise for $H$).

Combine with (10):

$$
\langle X, Y\rangle = \frac{S_n/\sqrt{n}}{\bigl(\lVert G\rVert/\sqrt{n}\bigr)\bigl(\lVert H\rVert/\sqrt{n}\bigr)} \cdot \frac{1}{\sqrt{n}} = O_P\!\bigl(n^{-1/2}\bigr) \xrightarrow{\;P\;} 0.
$$

Distance convergence¶

Insert $\langle X, Y\rangle \xrightarrow{\;P\;} 0$ into (8):

$$
\lVert X - Y\rVert^2 = 2 - 2\langle X, Y\rangle \xrightarrow{\;P\;} 2, \qquad\text{so}\qquad \lVert X - Y\rVert \xrightarrow{\;P\;} \sqrt{2},
$$

which establishes the claim in (7).
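The whole argument is easy to check numerically. A minimal sketch (variable names mine): sample pairs of normalised Gaussian vectors, exactly as in the proof, and watch the pairwise distances pile up at $\sqrt{2}$ with shrinking spread.

```python
import numpy as np

rng = np.random.default_rng(2)

# Pairs of independent uniform points on S^(n-1), built by normalising
# Gaussian vectors exactly as in the proof above.
n_pairs = 50_000
for dim in (3, 30, 300):
    g = rng.standard_normal((n_pairs, dim))
    h = rng.standard_normal((n_pairs, dim))
    x = g / np.linalg.norm(g, axis=1, keepdims=True)
    y = h / np.linalg.norm(h, axis=1, keepdims=True)
    d = np.linalg.norm(x - y, axis=1)
    print(f"n={dim:3d}: mean distance {d.mean():.4f}, "
          f"std {d.std():.4f} (sqrt(2) = {np.sqrt(2):.4f})")
```

The standard deviation shrinks like $1/\sqrt{n}$, which is exactly the $O_P(n^{-1/2})$ rate from the CLT step.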
From Geometry to Robots¶
All right, brain back in applied mode. The same tooling that renders those crisp equations is powering my day-to-day robotics stack, so the next section pivots from hollow $n$-balls back to slow and boring coding.
What I’m Actually Working On¶
I’m knee-deep in robotics! I spent a good hour or so working on getting maniskill hooked up to lerobot. I didn’t end up connecting it to my robot or moving the configuration over into the newly cloned repo, because it’s Saturday night and I’ve just been crunching on my website and typesetting projects.
The Big Picture¶
I already live in the power-user endgame—VS Code, GitHub Pages, and an AI co-pilot wired straight into my editor. Spinning up a static site or a $\LaTeX$-heavy blog post is possible today; the real pain is in the dozen tiny steps that sit between inspiration and a shareable link. Those paper-cuts don’t just slow me down—they keep everyone else from even trying.
So the goal isn’t to invent new powers, it’s to compress the ones we have into a single, low-friction loop that anyone who can type Markdown can ride.
- Capture → Journal extension. Works right inside whatever editor you love. Jot an idea, tag it, and it’s instantly part of the knowledge graph—no copy-pasting, no context switching.
- Refine → whtwnd editor. One click promotes a note into a full-blown article lab. Live code blocks, $\LaTeX$ rendering, and a design system that makes even dense proofs look readable.
- Publish → Zero-touch deploy. The build spins up locally, ships through a DO jump-box, and lands in GitHub Pages (or your own server) without you touching a terminal. Good-looking, citation-ready pages, every time.
If we can make that cycle feel as effortless as pressing ⌘-S, then the subject matter—robotics tutorials, category-theory deep dives, garden-variety blog posts—becomes almost incidental. And when the tooling handles my edge-case chaos, chances are it will feel like magic for everyone else.
Okay one bad thing¶
Now I feel like I have to make every damn post all sparkly and sexy and cool like I give a single solitary fuck. Yeah that’s a definite reason to work hard to get whtwnd up and running. You can have posts that aren’t visible to the public, though I don’t know if that means people can’t go see if they’re on your PDS lol.
So I am going to use the designs for the base, but I think I’m going to modify both the bottom and the top of the camera posts. I’ll try to get them pointing at the same basic work area for a nice degree of separation and difference in POV. I’m going to turn them about 30 degrees inward, even though that might not be enough. It will create more overlapping field of view in the cameras than having them both stare straight down.
They just seem like a collision obstacle right now, so I’m going to kick them out by extending the STL file somehow. I wish people shared the actual parametric models instead of just the STL meshes. Oh well. I can probably reinvent the part I need just looking at it, which I’ve already started doing.
Multiple posts in a day, or one big ass journal entry?¶
I don’t know! It certainly seems like I’m doing one big post a day, but they can be themed, of course. Posts can and definitely should be longer, thought out, and revisited.
I’ve got the framework for writing my book on robotics now. I need to do that.
My daily journal has already grown to be too unwieldy after focusing on it for a day! I guess I finally have something to say.
It’s sort of blowing my mind that I’m technically going to be publishing my book as I write it but that’s sort of the whole point of having my own journal/book site.
Okay, I commented out that part of my myst.yml:

```yaml
version: 1
project:
  id: 91143d4d-0120-4764-9c1d-b16de83a4b9c
  title: Welcome
  authors:
    - name: Thomas Wood
      website: https://odellus.github.io/
      id: thomas
      orcid: 0009-0001-6099-2115
      github: odellus
      affiliations:
        - name: Phytomech Industries
          url: https://phytomech.com
  github: https://github.com/odellus/odellus.github.io
  plugins:
    - type: executable
      path: src/blogpost.py
    - src/socialpost.mjs
  toc:
    - file: index.md
    - file: about.md
    - file: projects.md
    # - file: books.md
    #   children:
    #     - title: Scientific Computing with Python
    #       children:
    #         - pattern: books/Scientific-Computing-with-Python/**{.ipynb,.md}
    - file: journal.md
      children:
        - title: '2025'
          children:
            - title: June
              children:
                - pattern: journal/2025/Jun/**{.ipynb,.md}
  thumbnail: _static/social-banner.jpg
site:
  template: book-theme
  options:
    folders: true
    logo_text: Thomas Wood
    favicon: _static/profile-color-circle-small.png
    style: _static/custom.css
  domains:
    - odellus.github.io
  nav:
    - title: About
      url: /about
    - title: Projects
      url: /projects
    # - title: Books
    #   url: /books
    - title: Journal
      url: /journal
```
Unlike with journal posts, I don’t necessarily want to share this automatically.
I can also add the drafts in journal/drafts to this. I did, but I don’t know when I’ll ever use it.
Kinda feel like I’m on big brother but whatever. I mean I’m a poster on deer.social ffs. I’m already living in the bright eye of the panopticon.
In other words, I decided to start publishing my Scientific Computing with Python book live. Fuck it. I’m still not entirely sure what the final structure will be, but the very first thing was going to be “here’s numpy.”
I guess I do need to introduce more practical stuff in there and not just punt and say “here’s `dir()` and `help()`, you’re welcome.” Yeah, what am I talking about, this is terrible. Oh well. Somewhere to start.
23:11¶
Okay what’s up?
What did I accomplish this weekend?
- Ripped off Chris Holdgraf’s website shamelessly and published to my own personal site
- Got lerobot, maniskill, and lerobot-sim2real set up and ready for further configuration with `uv` in a highly reproducible manner
- Printed 10 grow towers
- Stopped being a wimp about filling up the hydroponics tank over the course of the day by setting timers to remind me when to turn it on and off
- Put the first chapter of the scientific computing with python book together
I’ve been meaning to put the book together for fucking years now. Now I have my own website that’s not just a place to keep notes on what I’ve done but a place to publish them. No joke, I can just make a book out of Jupyter notebooks and markdown files and host it at odellus.github.io like a crazy person.
I’m going to be revisiting each chapter in succession as I build up the subject material and yeah honestly I definitely need to motivate the reader with some worked examples showing the power of numpy to solve real world problems in science and engineering.
I think we really should start by building up towards the methods in maniskill and lerobot. Screw the nonsense. We picked the simulation framework and the learning/hardware framework because we’re going to teach people about applied mathematics and that means simulations and optimization.
That’s what we do. That’s our whole business, and business is good.
I don’t know every last little thing about how maniskill works right now, I’m not going to lie. But I know Emo Todorov and took some classes from him, and I studied scientific and numerical computing under J. Nathan Kutz, who has written several books about the subject. That’s my background. Heavy on SVD. Heavy on `ode45` and all of that jazz.
I might not get right into Partial Differential Equations, but yeah we’re going to have to go pretty deep into math to talk about what lerobot and maniskill do.