Well hello there

My name is Adam.

As you may surmise from my website, I build things. I’ve been doing so since the age of 6.

Over the years, I’ve designed too many things to tally. Everything from particle accelerators and x-ray machines, to induction heaters and flashlamp-pumped lasers. I’ve designed robots, computers, and even thermal identification / body tracking systems.

I have a bad habit of not documenting most of what I design, but what I do happen to record eventually finds its way onto this website.

I’m currently enrolled in the physics program at the Rochester Institute of Technology, where I’m continuing my experiments and private research. Lately, this has been in the domain of robotics and cutting-edge power electronics.

I’m a thrice-failed entrepreneur, and founder of the RIT makerspace.

Please enjoy my varied musings,


Premature optimization & modeling: The root of scientific evil

As a spectator in the sport, I feel it’s worth saying that most scientists, and most engineers, are misguided. Misguided in that many spend their time trying to make new discoveries mathematically; a fool’s errand in my mind, given that models are not the nonlinear, stochastic world we live in.

Premature optimization, that is, spending all of your time analytically modeling and/or simulating systems before they’re built, is bad. It takes far too much time and mental labor, and without any experimental link to reality to keep models on track, it may very well be (and often is) time just wasted!

For example, the hours you spend modeling an IGBT in SPICE, or a transformer in COMSOL, perhaps with data pulled from spec-sheets, are time gambled. I’ve found that in almost every case, it takes far less effort, and far less time, to just build a test rig for the device and see, for example, what waveforms or flux densities can be achieved. Often such measurements are far different from what your models predict, and more often still, one finds there are many variables that weren’t accounted for in the model!

If you have an idea in mind, use your ingenuity and available tools to “just build it”. With practice one soon finds that often your guesses will work! Perhaps your design is not optimized, but, once your system is stable, it becomes a far, far easier task to optimize it thereafter than it would have been initially (mathematically). Sometimes, you’ll even find that what was initially “proven” impossible by theoretical modeling is in fact, entirely feasible, and maybe even the right way to go.

I shall now provide a case study:

Some time ago, perhaps one year to date, I set out to design a 100kHz, 30kV transformer for flyback converter purposes. The system goals were simple:

1) Take 72VDC
2) Turn it into 20kV, at 100kHz, with a power throughput of 5kW.
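For a sense of scale, those two goals alone fix the turns ratio and the primary-side current; quick arithmetic, no modeling involved:

```python
# Rough figures implied by the spec alone; not a design, just a sense of scale.
V_IN = 72.0     # V, DC input
V_OUT = 20e3    # V, output
P_OUT = 5e3     # W, throughput

turns_ratio = V_OUT / V_IN   # secondary:primary turns, ideal transformer
i_primary = P_OUT / V_IN     # average primary current, lossless case

print(f"turns ratio ~ {turns_ratio:.0f}:1")    # ~278:1
print(f"primary current ~ {i_primary:.0f} A")  # ~69 A
```

Nearly seventy amps on the primary is what makes this transformer hard; it is why the winding choices below matter so much.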

Initially, I did what I could to model and simulate some designs. After 5 days’ work, I eventually found that something to this effect should be physically possible, given ideal conditions:


I then built such a transformer, and lo and behold, it sucked something fierce. This was concerning, as there were several parasitic elements taken into account; flux leakage, nonlinear core losses, stray capacitances, etc, yet still, the thing just sucked. And so, it was thrown out, and I adopted a new philosophy.

Instead of returning to my model, I proceeded to go about it the way I’m most familiar: the intuitive way. Instead of differential equations, I considered the physical relationships between wire spacing, flux density, and core size, among other parameters, and picked values I felt were best.

When choosing a core, instead of taking into account permeabilities and field modeling, I looked instead at published BH curves. That is, data collected in experiment by the manufacturers of cores. Graphs of core loss vs temperature and saturation flux density vs frequency replaced my Mathematica equations and scaling constants, and the “proper” physical core size was determined simply by ripping apart a switch-mode welder, and seeing what has thus far worked.

Once a suitable core was obtained, my flux density setpoint was determined not by Maxwell’s equations in \(\mathbb{R}^3\), but rather, by a few turns of wire on the core, a car audio amplifier, and an oscilloscope. It took all of 5 minutes to determine that my chosen core saturates (i.e., distorts my sinusoidal test signal) at about 40V per turn. Mathematically, that should have been 18-ish.
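For the curious, a volts-per-turn measurement maps to a flux density through Faraday’s law; a minimal sketch, where the effective core area and test frequency are assumed illustrative values, not my actual setup:

```python
# Faraday's law for sine excitation: V_rms = 4.44 * f * N * A_e * B_pk.
# F and A_E below are assumed illustrative values, not my actual test rig.
V_PER_TURN = 40.0  # V (rms) per turn, measured at the onset of distortion
F = 100e3          # Hz, assumed test frequency
A_E = 6e-4         # m^2, assumed effective core cross-section

b_peak = V_PER_TURN / (4.44 * F * 1 * A_E)  # N = 1 turn
print(f"saturation flux density ~ {b_peak:.2f} T")  # ~0.15 T
```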

In an effort to keep stray capacitance low, guided, of course, by the simple concept of “a capacitor is two parallel plates”, the secondary winding was redesigned to be physically large and of few turns. Considering, of course, that every flux line passing through the solenoid would contribute to induced emf, making the coil physically large wasn’t as much of a problem as some texts made it out to be.

Leakage inductance was kept low by replacing primary turns of wire with primary turns made from copper sheet. I decided this was OK after considering that most of the current in a high frequency conductor would be flowing on the surface anyway, so there’s little advantage to using wire. (SPICE model that, I dare you!)
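The “current flows on the surface” intuition is the skin effect, and at 100kHz the depth works out tiny; a sketch using textbook copper constants:

```python
import math

RHO_CU = 1.68e-8       # ohm*m, copper resistivity at room temperature
MU_0 = 4e-7 * math.pi  # H/m, permeability of free space
F = 100e3              # Hz, switching frequency

# Skin depth: delta = sqrt(2 * rho / (omega * mu))
delta = math.sqrt(2 * RHO_CU / (2 * math.pi * F * MU_0))
print(f"skin depth ~ {delta * 1e3:.2f} mm")  # ~0.21 mm
```

Any copper more than a fifth of a millimeter below the surface is mostly dead weight, which is why sheet works as well as wire here.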


Several more “educated guesses” eventually led to a transformer. One that worked much better than my simulated transformer, but probably wasn’t optimized. Making notes of some errata, I then followed up the design with another revision*

*Eg, I noticed some very high density flux escaping the core’s mating surfaces via experiments with iron filings, so I cut my primary in two to leave space for these to escape unhindered. This prevented magnetically induced, circulating currents in my primary.

…which, when all was done, worked, with only 2 days of patience and effort. Hot damn!


That, my reader, is how science & engineering is done: by seeing what works. Doing so experimentally is fast, efficient, and fun! Most others unfortunately don’t see benefit to such activity; they are too caught up in the beauty and false reassurance of mathematics to understand that models are only models. They are not reality, and they take time to build. Time that’s often better spent inventing.

This applies elsewhere. If, for example, you wish to design a protein, don’t waste months simulating Schrödinger’s equation in MATLAB. Instead, look at NMR data of proteins similar to the ones you wish to design, understand how they function, and tinker with them. Add new groups, see what happens. When designing an airplane, CAD some interesting designs and toss them in a wind tunnel. When designing a nuclear reactor, consider not intense mathematics but rather your alloys, their characteristics, and the tools you have to measure and machine them. I dare you to build a model that perfectly predicts metal creep under extreme neutron bombardment!

Science is not as concrete as many imagine it to be. When going about it, don’t get lost in your head; instead, follow in the footsteps of Faraday, Tesla, Edison, Hedy Lamarr, Curie, Röntgen, Jack Kilby, Compton and countless others who’ve changed the way we see the world.

Just do it!

Plan your experiments with a pen and paper, make them work, and if you stumble on something of value, model it afterwards. Often it’s not as expensive to do so as you might think, and it gets work done faster too.

What I’ve learned, Δt = 2 – The world is broken

In many eyes I’m a bad student, consistently earning B’s and C’s in my classes. Perhaps an occasional “A” in some physics course. I’ve now come to accept this; that there are others who have better short-term memories than I do. Others who see an equation and read it through as if it were a sentence; who can find an error in a collection of symbols faster than I can even read them.

For those like me who are without the opportunity of riding through these fields on the stallion of true mathematics, this is discouraging. So much so that as of late, I have become convinced that my classes and their closed-box curricula are not helping me much in accomplishing the goals I have set for myself.

I’m learning, of course, how to solve many simplified, general-case problems. Problems others have fabricated to be solved correctly in only one way, for the ease of those hired to evaluate how well students solve such problems given limited time. And while that is great exercise, and gives some children the self-esteem and courage needed to pursue a career in their chosen field, that’s not the way such fields work. The real world today is numerically simulated, usually, with the help of software.

To say I’ve learned nothing though in my stay thus far at RIT would be a lie in its most general form. I’ve had the fortunate opportunity to learn things about life many others haven’t yet seemed to notice; things too countless to tally.

I have learned that unfortunately the world is a sad place to live in. Many, if not two-thirds, of all people are only out to cheat you. They’ll do so if it’s beneficial to them financially or socially, and whether they take on the title of car dealer, insurance agent, lawyer, businessman, scientist, doctor, artist or engineer is irrelevant; for many, dollars speak louder than friendship.

I’ve learned that people in power are often there solely because they wish to be. I’ve learned that police departments, and the systems of law we have in place to empower them, are fundamentally broken, in that criminal records and jail time do nothing for this society but socially force others back onto the same roads that such punishment intends to keep untraveled.

I’ve learned to avoid political discussions at all costs. I’ve learned that many drug ‘abusers’ have some of the biggest hearts you’ll ever find in a person, and that saving your own money with the hope of paying others in the future is fruitless if you wish to pay them well.

I’ve learned that academia is in a state of despair; papers are published not for content but instead for numbers, and the other metadata attached. They are evaluated based on how complex and unintuitive their simple concepts have been twisted to be, and professors are hired and judged not on the importance of their work, but instead on their number of citations and the value of the grant dollars they bring in. It’s heartbreaking to see them, at one time young wide-eyed students themselves, forced to continually write unread proposals instead of moving forward the arts and sciences they so love.

What I feel is my most prudent lesson of all however, is that everyone needs help.

Whether it is help in something as simple as a math operation, or as complex as coaching someone off the brink of suicidal despair, help from others, is fundamental to solving problems. No one goes about large projects on their own and if they claim to, they are fools and pompous liars.

Forgoing such help in my past projects has brought me to a state of mind where hours of the day have become irrelevant, and it has brought me to a state where the only thing that ever matters, is “what needs to be done next?”. It has brought me to such a state of physical and mental torment that I have forgotten to eat for days, and on occasion forgotten to sleep as well. It’s now unclear even, how to recognize that I am tired.

Forgoing help has brought me to a state where my only means of available relaxation and rest were the forced escape of alcohol and cannabinoids. I’ve found myself drawn socially into a world of work-abusers; a world where people, even ones as young as junior undergraduates, have become reliant on cocaine and amphetamines just to get done the work they have promised others. Work that could be done easily, if they would just ask for help.

This ends now.

It is exceedingly humbling to recognize that the advice offered by my first mentors at this university was correct. If I’m going to move forward with the projects I anticipate completing, I need to be talking to people and not transistors. I am very fortunate to have spent nearly 15 years of my life doing the latter, in that the experience I’ve thus far gained is one most haven’t the chance to acquire until age 30. However, it takes many hands to build an airplane, or in my case, what I wish to be a paradigm-shifting x-ray machine.

That said, the state of research science is still broken; broken enough that I cannot expect such a project to move forward under the roof of my, or other, universities in finite polynomial time. Instead I need to find a team, money, and some modern equivalent of Dave Packard’s Palo Alto garage to make this happen. I shall look for that, then, once I have finished the hackerspace project I’ve promised to oversee and complete.

It’s time to learn how to ask for help, and how to coherently organize thoughts and people. It’s time to learn how to write, and how to intuitively understand others’ complex emotions, as well as the mass emotions of a crowd. It’s time to learn how to rip through published articles and extract their useful content, and it’s time to understand the mess that is law; both patent, and criminal. It’s time to do this while I’m young, impressionable and have relatively little to lose. In any case, a degree in these modern days of social connection and access to limitless information is irrelevant to my desired career of “inventor”.

Heisenberg’s Uncertainty Principle: The actual content of quantum theoretical kinematics and mechanics


Upon reading chapter four of my assigned physics textbook [Modern Physics, Krane], I grew both tired and annoyed with the generalizations, or “leaps of faith”, which the author continually made. I soon found it more useful instead to spend time reading the papers from which these principles were derived. Astonishingly however, I failed to find a modern, usable English translation of Werner Heisenberg’s landmark paper! More unfortunately even, the closest I did come on the hunt for such a translation was the discovery of a broken-English NASA OCR transcript from 1988, hosted on the web archive. That won’t do.

Thus, utilizing a day’s time, Google Translate, MathJax and my personal skill at reading broken-English datasheets, I have provided below a modern translation of W. Heisenberg’s paper. For the reader’s convenience, I have replaced some of the paper’s original variables to better match those found in common texts today. New notations, such as Euclidean norms (i.e., \(|f(x)|\)), have been instated as well.

Dr. Heisenberg’s various justifications alone make for an interesting (and perhaps very useful!) read, but for those short on time I have also prepared a “too long; didn’t read” summary, immediately below.


TLDR Summary

If we are to derive a model that quantizes space, perhaps into cells of some finite dimension \(h\), then we are left with, in the space \(\mathbb{Q}^2\) for example, a 2-dimensional grid of possible positions. Objects in this grid may then be given some arbitrarily-defined co-ordinate, \(q\).

\(q\), of course, is a function of \((x,y)\) inside \(\mathbb{Q}^2\). \(x\) and \(y\) may only be integer multiples of \(h\), or specifically:

\(q = \left \{ \forall (x, y)*h\in\mathbb{Q}^2 \right \}\)

(don’t be scared, I’m just having fun with LaTeX!)

Now, if \(q\) is a function of yet another quantized variable, \(t\), then \(q(x,y)\) may be broken into \(q(x(t),y(t))\).

Thus if it’s fair to say “\(q\) can move as time advances integer multiples of h”, then it is possible to define some distance \(q_x\), that \(q\) has moved in that elapsed time \(\Delta t\). We may thus define a 1-dimensional “velocity” \(v_x = \frac{\Delta q_x}{\Delta t}\).

\(q\) however, is not a continuous function in this space, as it may only take on discrete values, themselves integer multiples of \(h\). Therefore it is useless to define “the velocity at a point”. More generally, \(q\)’s average velocity for any time interval \(\Delta t\) smaller than \(h\) is not definable.

Restated, only values of \(q_x\), or \(v_x\), can satisfy the below statement;

If time advances as \((integers) * h\), then \(\Delta q_x \geq h\) if our definition of “velocity” is to make any sense.

By extension, momentum in this direction, which is defined as \(m v_x\), must satisfy \(p_x \geq h\), if \(m\) can be no smaller than \(h\) as well.

Now consider the thought:

What if we were to look at the object \(q\), with absolute precision? That is, \(q_x\) is exactly defined, and \(\Delta q_x = 0\).

Then, if \(v_x\) is a function of \(\Delta q_x\), as \(\Delta q_x(t \rightarrow 0)\), or “the change in \(q_x\)”, approaches zero, the function \(v_x(\Delta q_x(t \rightarrow 0))\) becomes indeterminate. This relation works in the converse as well, such that the relation:

\(\Delta q_x * m \Delta v_x \geq h\) is justified!

In our 3 dimensional world \(\mathbb{Q}^3\), this equation becomes the familiar Heisenberg uncertainty principle:

\(\Delta q_x\;\Delta p_x \geq \frac{h}{2 \pi}\)

The factor of \(2 \pi\) is a geometric normalization.
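To make the scale concrete, one can plug numbers in; a quick sketch for an electron confined to roughly one atomic radius (the values are illustrative, and use the \(h/2\pi\) form above):

```python
import math

H = 6.626e-34    # J*s, Planck's constant
M_E = 9.109e-31  # kg, electron mass
dq = 1e-10       # m, position uncertainty (roughly one atomic radius)

dp_min = H / (2 * math.pi * dq)  # minimum momentum uncertainty
dv_min = dp_min / M_E            # corresponding velocity uncertainty

print(f"dp >= {dp_min:.2e} kg*m/s")  # ~1.05e-24
print(f"dv >= {dv_min:.2e} m/s")     # ~1.16e+06
```

Squeeze an electron to atomic dimensions and its velocity becomes uncertain by a thousand kilometers per second; this is not a small effect.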

The origins of this relation’s elegance are plain to see: it is one derived from simple principles! Below, Heisenberg argues that a similar relation exists for energy and time, and proves both relations are just as true for wave-functions as they are for discrete, “particle” functions. I’ll leave that lesson to be a test of your reading comprehension skills, however.


Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik (or, the actual content of quantum theoretical kinematics and mechanics)

W. Heisenberg, a modern translation by Adam Munich

Continue reading

Bad documentation: Engineers can’t write, and Marketing folks are full of crap

No, really.

For the past week or so I’ve been configuring my STM32F37x series microcontroller. Specifically one from the chip-tray STmicro custom-fabbed and hand-delivered to me (all I wanted was fedex ground!). This adventure has been one with fortune, and failure.

Delightfully, this microcontroller was engineered properly, unlike the TMS320s which gave me so much woe last month. That is to say, it responded to my initial ST-Link JTAG requests immediately and without issue, and hasn’t given any connection troubles thereafter. “Bricking it” does not appear to be a possibility, given that all fuse bits, the bootloader, and other really important things are write-protected during its user-flash sequence. TI, please take a lesson from ST and write-protect your non-program pages!

ST unfortunately does not maintain an IDE for their microcontrollers. Instead, users are forced to use one of four commercial IDEs, the most notable of which are MDK-ARM (Keil) and IAR Workbench. I assume that this course of action is what [legally] allows them to provide *so many* freeware peripheral drivers (many are, in fact, developed by those who make these IDEs), but still, it’s a bit awkward having a 32k code limit on the freeware versions of such workbenches.


That’s where the fun appears to end, however. ST needs a librarian. Really. Or a new webmaster. After much struggle and a somewhat-implicit “go to hell, plebeian” response from IAR support, I settled on Keil/MDK-ARM. While its UI leaves a lot to be desired, overall I’m quite impressed with its capabilities. Its compiler (not GCC!) compresses code like a champ, it has built-in core config files for nearly every ARM available, and its debugger is nothing short of amazing. I can snoop on local variables, even!

Although the company has many examples, datasheets and programming guides for their microcontrollers, it took communication with an engineer to find any of it. As one would have it, the collection of STM32F37x peripheral drivers was buried in a folder, inside a folder, inside a zip file, inside a lonely webpage, itself linked to only by one non-emphasized link in the “design resources” tab on the STM32F373RB’s  product page. I kid you not.

It gets better though; as it turns out, while much documentation exists on the microcontroller’s busses, core, peripherals and their registers, there’s absolutely nothing about their C drivers.  That is, nothing but the little /* @brief statements */ contained within them.

While inconvenient, this wasn’t a showstopper. Fortunately many of these helper functions directly correlate with the configuration registers documented in their big book, and every peripheral has several C examples to give the designer an idea of how to use them. That said, some functions do not work unless others are called beforehand, and it’s often a mystery as to what their proper order is. 

Some tips, for my reader:

• Enable your clocks! Every peripheral has an RCC_blah clock command which must be run before the device will work, and some others also have a PWR_blah command that must be set before even that. Nothing throws an error if these aren’t turned on,  and quite frankly it took me 4 hours to debug timers because of it!

• Disable your watchdog. There’s an unconfigured watchdog timer enabled by default in the option bytes, that will repeatedly kill your program unless it’s turned off. This took another 4 hours to debug.

• Use this tool to write your sys config file (for the stm32f37x only). The microcontroller won’t throw any errors if the sysclock is misconfigured, and again, this was a real bitch to debug.

• Be forewarned; despite what some of ST’s configuration programs (CubeMX, MicroXplorer) might imply, you cannot arbitrarily define analog pins as differential pairs. Rather, the pins must be labeled in the form SDADCx_AIN#P, SDADCx_AIN#N, where # is some channel number. I’m going to need to fab new boards because of this. ಠ_ಠ

• This powerpoint has been the most useful piece of documentation I’ve found yet.

Some 30 hours after first contact I eventually did get things configured. As of now I have clocks running, timers counting and ΔΣADCs sampling. There’s still an ADC, USART peripheral, DMA and interrupt table to configure before I can *actually* start crunching vectors in \(\mathbb{R}^3\), and given the pace of current progress I expect that to take at least another week. Oh well.

However, I did run into an unexpected problem last night around 2. Evidently, marketing engineers lie.


As one would have it, all of TI/BB’s micro-tiny “rail to rail” amplifiers are in fact, not “rail to rail”. Rather, the ones I chose to buffer my current shunts (OPA244) bottom out at 100mV, and are nonlinear for a good 50mV thereafter! This means that my nice getcurrent_float() function bottoms out at 0.9A, and doesn’t start properly working until we pass 2.5A or so through the phase in question.
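The dead zone follows directly from the amp’s output floor divided by the net volts-per-amp gain; a sketch, where the shunt and gain are hypothetical values chosen only to reproduce the 0.9A figure:

```python
# Assumed values, picked only to reproduce the ~0.9 A floor described above.
V_FLOOR = 0.100  # V, where the OPA244 output bottoms out
R_SHUNT = 0.010  # ohm, hypothetical current shunt
GAIN = 11.11     # hypothetical amplifier gain

i_min = V_FLOOR / (R_SHUNT * GAIN)  # smallest properly-measurable current
print(f"measurement floor ~ {i_min:.2f} A")  # ~0.90 A
```

Shrinking the shunt or the gain only trades the floor for resolution elsewhere; the real fix is an output stage that actually reaches its rail.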

Good job Burr-Brown, you get a medal. Also included is one angry customer, who now has to redesign their board to include a negative-rail charge pump that should not have been needed.

Oh well, such goes “progress”. Despite their idiosyncrasies I’m happy with the STM32 product line, and will do my best to master the machine.

How many things can we build (and burn) in one week?

Last week I was pretty peeved, due to excessive homework among other things. Now what do we do when that happens?


So by forgoing sleep, how many things can we construct in one week?  As one would have it, a good deal of things.


Thing 1: A UL approved power box for bigcoil

Ignore the wire-nut e-stop.



There’s really not a whole lot to see here. It’s little else but a laser-cut box, a contactor, a breaker, some buttons, LEDs and a phasing switch for operation on Y-only, or Y,U,V hookups. Yes wood is flammable, but quite frankly, there’s not much that can go wrong in a box like this.

If something does go wrong, I’ll be right there to act, anyway.

Thing 2: A pulse generator

This one is a little bit more interesting. Sans one missing knob, it’s an optical (fibre) pulse generator for bigcoil. Dubbed ‘the turboencabulator’, she’s capable of generating up to 300µs pulses at either timed intervals, or to the tune of 4 channels’ worth of MIDI music [4-polyphony]. The four possible super-positioned notes, each run from a dedicated interrupt timer, are FIFO scheduled (that is, scheduled on a first-come, first-served basis).

The turboencabulator is powered from a lithium battery and may either enumerate as a USB device, or accept a legitimate DIN-MIDI signal from whatever source you so choose. There are still some bugs, but hey, what do you want from one day’s work?
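The FIFO voice allocation above is simple enough to sketch in a few lines; hypothetical Python standing in for the firmware’s interrupt-timer logic:

```python
class FifoVoices:
    """First-come, first-served allocation of MIDI notes to 4 timer channels."""
    def __init__(self, n_voices=4):
        self.voices = [None] * n_voices  # None = timer channel is free

    def note_on(self, note):
        for i, v in enumerate(self.voices):
            if v is None:
                self.voices[i] = note  # first free channel wins
                return i
        return None                    # all voices busy: note is dropped

    def note_off(self, note):
        for i, v in enumerate(self.voices):
            if v == note:
                self.voices[i] = None
                return

sched = FifoVoices()
played = [sched.note_on(n) for n in (60, 64, 67, 72, 76)]
print(played)             # [0, 1, 2, 3, None]: fifth note finds no free timer
sched.note_off(64)
print(sched.note_on(76))  # 1: the freed channel is reused
```

Dropping the fifth note, rather than stealing a sounding voice, is the simplest policy; voice-stealing would be the obvious next refinement.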

Thing 3: A second monitor for my tablet.


It’s very inconvenient to split-screen my tablet while doing math homework. After all, we only have 1080 lines to play with here!

As a solution, I put together a USB-powered, second display for my tablet. It is an iPad Retina-display™®© (specifically, the Lp097qx1-spa2 from LG), powered from USB, and fed video through a specially-shortened displayport cable.  Conveniently, the LG glass supports embedded displayport as its video signal input, so, this project was little else than a bit of level-shifting, and constant-current LED driving.

Sadly, I cannot run this panel at above 50% brightness or we’ll blow the polyfuse in my tablet’s USB port. But my, that glass is purdy nonetheless.

Thing 4: More segue-boards


Advanced Circuits managed to get my PCBs in by Friday, which of course means they’re populated by 3AM Saturday morning! Overall there didn’t appear to be many design errors with this run, and in fact I’m quite happy with how Brushbuster IV turned out.

We’re still waiting for my ST-Link JTAG programmer.

Unfortunately, Linear Technology seems to have overstated the reliability of their LTC6802-1 battery-monitor ICs; that is, their “we can float them at 45V and use current to daisy-chain the SPI lines” promise.

I don’t know why I believed that; the first few moments of real pack loading made short order of the ICs, and the $60 pads they were connected to. Dang.

Luckily, it wasn’t too hard to redesign the BMS to use the addressable LTC6801-2 ICs, and *real* optoisolators for their communication. I hope then, to order new boards sometime this weekend. 

Thing 5: Bigcoil herself

Bigcoil version two finally got tested. The good news is that she worked. The bad news is, we:

• Melted the capacitor-discharge relay.
• Replaced that relay, then melted it again.
• Broke the $2 AliExpress e-stop button.
• Tripped some GFIs, a couple times*
• Eventually, caught one of my sketchy ebay switches on fire, and by extension, exploded my gate drivers.

There are still issues that need to be solved :-3



* This one is interesting, actually. In bigcoil, there is no non-catastrophic method for GFI-tripping differential currents to form. It’s concerning, then, why the GFIs actually tripped! It’s even more concerning that, try as we might, we were unable to trip the AFCI breaker on the circuit bigcoil was tested on! Cool protection products, Leviton.

Well that was fun.

Now it’s time for five exams.  (•_•’)

I want to punch out a window right now

I can’t keep this up much longer. These classes, that is.

Modern physics, differential equations, vector calculus, AC circuit analysis and another physics course, for those curious.

It’s not that I don’t understand their concepts: allow me the use of a normed vector space, abelian under addition, and most anything is possible. It just takes me a long time to do math; almost twice as long when I don’t have colored markers to keep track of things. Memorization, without binary relation to anything else, takes weeks.

In fact, I’m probably dyslexic. I’m also going blind in my right eye. More problems, to add to my list I suppose, which includes a chemically burned lung.

I spend all of my free waking hours, from 6:30 to 1, doing homework. After it’s finished, there’s no time to check over it as another class’s problem set isn’t yet done. I do that, then some more work, I hand it in, and it comes back covered in red ink.

“Make corrections”, some say. I would, if a DIY differential equation solver wasn’t due tomorrow. Or another 5 hour problem set.

I find it hard to believe that most people understand the scope of the work they ask others to do. It’s work I haven’t the time to do effectively, work that teaches me little, and brings about only feelings of discouragement and dismay, upon its return.

And all for what? A $16/hr job offer from another organization, where it’s likely that more people will tell me I’m wrong? To appease professors so hell-bent in the ways they’ve been taught that even the idea of “vector division”, possible in 2D and 4D spaces, or negative refractive indices, is horrific in their eyes, even if it’s a valid solution to the problem? To please people afraid to read between the lines?

There is more to the world than most ever see.

It’s hard to find the desire to keep myself on this path. Most, I suppose, take solace in the fact that “a degree means money in the future”. They bandage their psyches with relationships, alcohol and fraternity life. How jejune.

I am tired of kids and their bullshit ‘life problems’. My car’s engine has seized, I can’t afford housing and quite frankly, I am disgusted with the ungrateful attitude most members of my age group promote.

The fact of the matter is, I don’t care about this crap any more. This is not how I learn. There is no pedagogical reason why 25 problems a week, per class, is a good idea. I have no time to -study-; to sit down, conceptualize, and put into effect the new techniques I’ve set out to learn. Memorization in preparation for testing is not learning. Abusing alkaloids in an effort to stay afloat in the work they assign and maintain my measly 3.2 is not learning. Clearly however, it’s been proven time and time again that such a system produces wonderful engineers.

Caffeine is a horrible drug. Alcohol is a horrible drug. It boggles my mind why these ones are uniquely legal.

I don’t forget the things I learn, ever. Just give me some time to learn them, please.

This life is unsustainable. I have no time to pursue the things I enjoy; the things that keep me sane (the segue project, for example).

I need to go out and do what I’m good at; I need to devote my time, to building something great. Governor Cuomo, expect a rather interesting entrant in your 43north competition.

“Little hellions kids feeling rebellious, embarrassed, their parents still listen to Elvis, they start feeling like prisoners, helpless.”

Evolution of Segue: Δt = 2

Hardware development is painfully slow. But it does, consistently albeit slowly, carry on.

(or at least I try to convince myself it does)


Le’ Software: Motor Modeling

What is a brushless motor?

Well, in its simplest case, it is the following:

3 coils, configured in either a wye or delta arrangement, each having an inductance, and a mutual inductance with the other coils in the network. Surrounding these coils is an arrangement of magnets; rare-earth, usually, which create a static field with which the stator coils interact. In a plebeian sense, it’s the ‘job’ of the stator to generate a rotating dipole vector that the rotor will do its best to ‘follow’.

Now, because the total flux in the system is constant, it’s possible to write a Kirchhoff voltage relation for the stator:


Where VR is the voltage drop due to ohmic losses in a coil, VL is the voltage dropped across a coil’s inductance, and Vgenerated is the motional back-emf generated by the rotor’s flux lines traversing the stator coil. For fun, we can break this up into the following matrix form, assuming a 3 phase motor:


Where VR becomes a [resistance matrix] * [current vector], VL becomes an [inductance matrix] * the time derivative of the [current vector], and Vgenerated remains unchanged. Take special notice of the inductance matrix, however; specifically note that “L AC” (et cetera) are the mutual inductances between stator coils. That is, the transformer-like inductance which will induce voltage in neighboring coils when the current in the coil of interest is changing in time.

This can be broken up further:



And, assuming that all of our stator windings are reasonably equivalent, and that our windings are magnetically distributed 120 degrees apart from each other, we may simplify to a somewhat nicer-looking equation:

Va = R Ia + L İa + M İb + M İc + Vmax cos(θe)
Vb = R Ib + L İb + M İa + M İc + Vmax cos(θe − 120°)
Vc = R Ic + L İc + M İa + M İb + Vmax cos(θe + 120°)


R = the average stator winding resistance

L = the average stator inductance

M = the average stator mutual inductance

I dot = the time derivative of current

Vmax = the maximum back-emf that will be generated

Theta_e = the electrical angle which describes the current back-emf, and also, the currents I(theta_e) and Idot(theta_e). For aesthetic reasons, I do not write I or V as functions of theta_e, but they indeed are!

The above equation also assumes that back-EMF is generated sinusoidally. That is, when you spin the motor and look at the voltage across a stator coil, you’ll see a sine wave. Due to reasons I won’t delve into here, this *is not true for every motor*.

With these substitutions made, it’s a bit more convenient now to smash everything back into as few matrices as possible:


Admittedly though, this equation still doesn't do us much good. It requires explicit values for every current in the circuit, and who even knows what the value of "M" is?

What if however, we assume our currents to be balanced? That is, there is some relation Ia + Ib + Ic = a constant, which in theory, would let us simplify our matrices a bit.

Could this be the case? Well… certainly not if the motor is grounded! Just refer to the left figure, and you’ll soon see why.

( For those bad at where’s waldo: If a ground exists, Ia and Ib and Ic can all leak out through it, independent of each other! )



A simple solution to this problem is to *not* ground our motor. In doing so, we end up with the relation Ia + Ib + Ic = 0, which lets us make the following simplifications:

İb + İc = −İa,  so  M İb + M İc = −M İa

Which then becomes the rather trivial relation:

Va = R Ia + (L − M) İa + Vmax cos(θe)

Look at that, our inductance matrix is now isomorphic to an inductance scalar. Hot damn!

There are still some more unknowns to kill, however. Specifically, what exactly is Vmax?

Well, Vmax, by our prior definition, is the maximum electromotive force generated by the rotor's magnets flying past the stator's coils. As such, per one of Maxwell's relations, it's a linear function of the rate of change of magnetic flux in the stator coil.

With this argument in mind, and the constraint that no magnetic components are changing in physical size, Vmax must then only be a function of flux linkage and angular velocity of the motor!

Vmax = λ ω

And with that…

Va = R Ia + (L − M) İa + λ ω cos(θe)

Look at that: a model which contains quantities that are either all constants, or physical elements we can directly measure. Nice!
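To make that concrete, here's a minimal numerical sketch of the per-phase relation V = R·I + (L − M)·İ + λω·cos(θe). Every constant below is a made-up placeholder, not a measurement from my motor:

```python
import math

# Hypothetical motor constants (placeholders, not measured values)
R_S = 0.05      # stator resistance, ohms
L_S = 120e-6    # stator self-inductance, henries
M_S = 30e-6     # stator mutual inductance, henries
LAM = 0.02      # flux linkage, webers

def phase_voltage(i, di_dt, omega, theta_e):
    """Per-phase terminal voltage: resistive drop, plus the (L - M)
    inductive drop, plus the speed-dependent back-emf term."""
    return R_S * i + (L_S - M_S) * di_dt + LAM * omega * math.cos(theta_e)

# At standstill (omega = 0), only the resistive and inductive terms remain:
v_standstill = phase_voltage(i=10.0, di_dt=0.0, omega=0.0, theta_e=0.0)
```

Swap in measured values of R, L − M and λ, and this one line is the whole electrical model of a phase.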

But, this matrix representation is still not all that useful to us. I say this because it’s impossible to fully represent the model in this form, with conventional engineer’s tools like phasors. So, we need some sort of transform to take us from this balanced, 3-phase representation into something of the form e^ix, or restated, from a 3-tuple representation into a 2-tuple representation.

Thankfully, a lady by the name of Edith Clarke figured out how to do this back in the 1940s.


Yay, we just gave one component in our current/voltage tuples the boot!

Edith's transform can be thought of as a geometric transformation: three vectors, rotating synchronously in space, are projected onto the complex plane. Assuming these vectors are "at theta = zero", that is, oriented in such a way that vector A is co-linear with the real (alpha) axis, we are left with a situation where the two other vectors point the other direction, offset 60 degrees from the real axis, or correspondingly, 30 degrees from the imaginary (beta) axis.

Now what is the real part of a 30-60-90 triangle on the complex plane? cos(60) = 1/2.

What is the imaginary part? sin(60) = sqrt(3)/2.

That in mind, take a look at the transformation matrix, and it all should make a lot more sense :-).
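In code, the projection looks like this. Note that I'm using the common amplitude-invariant convention with a 2/3 prefactor here; other scalings exist, so treat that factor as an assumption rather than gospel:

```python
import math

def clarke(ia, ib, ic):
    """Project three-phase currents onto the alpha (real) / beta (imaginary)
    plane. The matrix rows are the 1/2 and sqrt(3)/2 projections from above."""
    alpha = (2.0 / 3.0) * (ia - 0.5 * ib - 0.5 * ic)
    beta = (2.0 / 3.0) * (math.sqrt(3.0) / 2.0) * (ib - ic)
    return alpha, beta

# A balanced three-phase set at electrical angle theta collapses to a single
# vector at angle theta on the complex plane:
theta = 0.7
alpha, beta = clarke(math.cos(theta),
                     math.cos(theta - 2.0 * math.pi / 3.0),
                     math.cos(theta + 2.0 * math.pi / 3.0))
```

With this scaling, alpha comes out as cos(theta) and beta as sin(theta): the three-phase set and the complex vector have the same amplitude.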


Now, we still have that ugly back-emf vector, |cos(x), sin(x)|. What can we do about that?

Well, in vector space, a 2-tuple is for all intents and purposes isomorphic to a complex number, which is also a 2-tuple. (read: “ordered pair”). So, with the convenient relation:

e^ix = cos(x) + i sin(x)

We can say that:

|cos(x), sin(x)| for our purposes, is equal to e^ix

Ain’t that somethin? 

In hat notation, our equation is now beautifully simple!


V_hat = R I_hat + (L − M) İ_hat + λω e^(i θe)

Where V_hat, or any other similarly hatted vector, is shorthand for ( V_max e^(i θe) ).

One might ask though, how is this two-dimensional equation useful in our 3-phase motor? What good does it really do us?

To answer that question: it does us good because instead of three phase-shifted cosines, we now only need to keep track of one sine and one cosine component. This is a much easier proposition for DSPs to handle, and, because we used a linear transformation to get this equation, it's possible to use an equally trivial transformation to bring us out of the complex plane and back into three-phase space. Specifically like so:

I_phase_a = 3/2 Re[I_hat]

I_phase_b = 3/2 ( -1/2 * Re[I_hat] + √3/2 * Im[I_hat] )

I_phase_c = 3/2 ( -1/2 * Re[I_hat] – √3/2 * Im[I_hat] )

Where I_phase_n is a current, itself linearly related to the PWM value you’d feed to some leg of a mosfet bridge.
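As a sanity check on those formulas: feeding a unit vector e^(iθ) through them should produce three cosines spaced 120 degrees apart that sum to zero, as an ungrounded motor requires. A quick sketch:

```python
import cmath
import math

def inverse_clarke(i_hat):
    """Map the complex current vector back to three phase currents,
    using the 3/2-scaled relations above."""
    re, im = i_hat.real, i_hat.imag
    ia = 1.5 * re
    ib = 1.5 * (-0.5 * re + (math.sqrt(3.0) / 2.0) * im)
    ic = 1.5 * (-0.5 * re - (math.sqrt(3.0) / 2.0) * im)
    return ia, ib, ic

theta = 1.1
ia, ib, ic = inverse_clarke(cmath.exp(1j * theta))
# ia, ib, ic are three cosines of amplitude 3/2, offset 120 degrees,
# and they sum to zero.
```

The 3/2 shows up as the output amplitude; whatever scaling convention you pick, just make sure the forward and inverse transforms agree with each other.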

But that's enough of this; let's toss everything into Mathematica to prove I'm not giving you crap!



Bam, would you look at that. Those curves sure look like a brushless motor to me.

With the proper constants chosen they should represent any motor, even a Segue motor!


Le’ Hardware: Another $100 worth of power electronics

It is often the case that something you discover during a project completely invalidates all prior work on the project. Sometimes this may happen more than once.

After modeling my motors, I soon needed to fill in the constants; flux linkage in particular. How does one go about finding that?

Well, you use lots of scotch tape, a power drill, aluminum foil, two LEDs and an oscilloscope.


Too much of it to handle.




Anyway… in this case, the maths above soon revealed that my 44V of lithium power was not going to be enough for Segue. That is, the back-emf generated by my motor will equal 44V when Segue reaches a linear speed of 21 km/h… which is *not* 40 km/h!
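The arithmetic there is just the linearity of back-emf: if 44 V of back-emf corresponds to 21 km/h, the no-load top speed scales directly with bus voltage. The helper below assumes a lossless motor, which is optimistic:

```python
def top_speed_kmh(bus_voltage, ke=44.0 / 21.0):
    """Speed at which back-emf cancels the bus voltage entirely.
    ke is the back-emf constant in volts per (km/h), from 44 V at 21 km/h."""
    return bus_voltage / ke

# The 44 V pack tops out at 21 km/h; an 88 V pack doubles that to 42 km/h.
```

Which is exactly why the new design moves to an 88V stack.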

So ladies, gentlemen and children of the internet, I present Segue v5.

After getting fed-up with de-, then re-soldering hundreds of components every time a board was revised, I decided it was in my best interest to go back to the “multi-pcb-with-ribbon-everywhere” solution. That is, Segue will have 5 independent boards:


Cellsniffer – A 24 cell (88V) lithium-ion pack monitor and balancer.

S.T.S. Whitetail – A board full of buck converters, and an 88V 4A boost converter.

Brushbuster(s) 3 – A new motor controller board, equipped with super-badass 8 mΩ, 150V mosfets and a C2000 DSP

Seguebrain – A board with a bluetooth radio, an IMU and a cortex-M4, among other things.


“Lolomgwaffles it’s a penis” screams the internet. Yeah, get over it.

Now, as much as I would love to say I've gotten things working, i.e. having a nice stochastically-spun motor available to show you, I don't.



Because programming a C2000 Piccolo is evidently quite a pain in the ass. That is, Code Composer Studio for some reason likes to hang during flash erases, which just so happens to brick my DSP.

I've yet to figure out why this is happening, but to be honest, I'm a bit fed-up with these ICs already. I mean really, *WHO STORES LOCKOUT PASSWORDS IN PROGRAM FLASH*. Is it really that much more expensive to have a 64 byte bank of EEPROM memory, explicitly for storing such configuration data? Evidently so!

Assuming all other TMS320 products do the same, remind me to *never* use a TI DSP inside an aircraft, or some other similar mission-critical system. I say so, because it would be really, really bad if you locked-up such an IC, buried deep inside an apollo-computer construction actuator control box and/or something of equal “this is going to be really hard to repair” stature.

“Oh hay guys, our JTAG flash bricked ur airplane”



Hyperspecialization ought to be a thing of the past

It is my firm belief that hyperspecialization stifles innovation. That is to say: one can never expect a PhD engineer to design something new, if all they have been taught are electrical engineering concepts.

Let me elaborate.

I credit most of my design ability to time spent in multiple disciplines. Drawing, sculpture, mechanics and photography, among others, have all had nothing but positive impact on my ability to visualize and design systems. So much so that, for those without such experience, I fear for their ability to design similar systems.

Restated, it is immeasurably more likely that an electrical engineer who has fixed cars in the past will know what to expect from a motor they wish to integrate in their product. It's fair to argue that an industrial designer who has been exposed to power electronics knows what's practical to fit inside such a thing as a powered cell phone case, and may even attempt such a project. It's fair to argue that even musicians who have studied physics are likely to have less reflection and echo in the things they record.

It's possible to propose many further scenarios, but my point is this: by forcing students through highly tailored paths during their undergraduate careers, we set them up for failure when it's time for them to innovate. How can a mechanical engineer be expected to design a new powered scooter for Kickstarter, if they know nothing about electronics? Similarly, how can a physicist be expected to create a new particle detector, if all they have been exposed to is mathematics? Especially so, when neither party knows where to begin in such studies!

My philosophy is a simple one: do your best to gain a plebeian knowledge of everything there is to know, past and present. When a problem arises, call upon this resourceful memory of creative solutions, and if something looks promising, go study it in further detail. With the tools we now have available as a human race, we're beyond the time where an engineer must be able to recite from memory all there is to know about control theory, or where a photographer must know the granularity of every film available.

I’d much rather have engineers who know the physics of camera obscura, and photographers who know how to build high-speed triggers from a microcontroller.

That is, I'd much rather have a world where people put forth the effort to become jacks of all trades, even if they choose to master just one.

Evolution of Segue

Now that finals are complete I once again have time to work on projects. Video glasses for James, rebuilding super-DRSSTC among others, but the most important one right now, is Segue!

For the past year or so I've been working on and off on Segue: the world's first DIY, brushless-motor segway-robot-thing. My end goal is to design a Segue kit of sorts, to offer to all the ambitious nerds who would like to have their own DIY, aluminum-frame self-balancing transporter (for, in theory, $1700).

Here's how that has happened thus far.


Around last Christmas:

Someone stole my bike on a cold Friday night. This did not sit well with me, so I did the only logical thing I could do at the time: I built a segway.

Fortunately, I was able to find on campus:

  • An old power scooter
  • A gyroscope
  • An accelerometer
  • An arduino
  • A piece of plywood
  • An aluminum tube
  • A machine shop with a broken door lock

Monday morning, I had this:

Oddly enough, my evil plans worked. This doomsday device worked well enough to ride around campus for several weeks, and it probably sealed my reputation as a nerd at this university. Awesome!

But, “Segue” had several shortcomings:

  • The motors were too slow to travel at speeds greater than 4mph, which made me sad.
  • The motors’ gearboxes chattered… a lot. Often, this led to some interesting, positive gain oscillations!
  • Lead acid batteries + winter = bad batteries in a week.

…among other things.

So I set out to build a new Segue, one that wouldn’t suck too much to commute around campus with. It was to have;

  • Brushless motors
  • Lithium batteries
  • A metal frame with a “bad-ass” factor > 1.0
  • Really, really good system control.

This turned out to be harder than first expected.


Segue 2: The Beginnings

First, I needed motors. Specifically brushless hub motors with a single, thick stator shaft. Where oh where, can one expect to find such a thing?


As one would expect, only in China. One month of negotiations and a $500 PayPal transfer later, DHL showed up with my new toys!

Soon thereafter, the next question was "what should Segue 2 look like?", followed soon by "how would I be able to manufacture more, if others want one?". The answer to both of these questions was immediately obvious:

–> Water jet cut industrial aluminum (⌐■_■)

A week of yelling at Autodesk Inventor, and about $300 at Klein Steel and Nifty-Bar Inc, provided me with something worthwhile.

This, specifically! A sturdy, waterjet-cut frame held together with interleaving aluminum tongues, and friction through an interesting bolt-nut arrangement. Though that might sound worrying, this proved to be quite strong indeed!

The idea behind this frame is to require nothing but a wrench and a screwdriver to assemble it. This is so because TIG welding for assembly of a kit isn't quite my idea of a weekend project, nor do I expect it to be anyone else's.



Now, it was electronics time. It’s fair to argue that it’s still electronics time, but let’s not go there just yet.

The question to ask again was "what does a Segue need?", soon followed by "how can I do that without spending $2,000 on Mouser?".

A Segue has batteries; batteries need a battery management system. A Segue needs a computer, it needs an inertial sensor and it needs power supplies for such. A Segue needs two motor controllers, and three FET half-bridges on each of those. We found that a Segue needs at minimum, $100 worth of electronic components –which isn’t too bad.

I won't go into much detail on the early prototypes. It should be rather obvious what I've done there from this next photo, so here it is:



I will point out though, the following decisions:

  • The main computer is a Teensy 3.0, which itself is a Cortex-M4 running Arduino. I chose to use this because I want Segue 2 to be easily hackable: C purists, go home.
  • I chose to use a finite-state machine as my motor controller. This was a bad decision. I probably should have foreseen so, but alas, I assumed it might work anyway. Nope.



That is not how we Segue around these parts.


The re-Segueing:

Board two combined everything into one PCB. The state machines were dropped in favor of a nice BLDC motor controller IC, bugs in the battery management system and SPI bus were fixed, the power supplies were made a bit more reliable, and the layout was generally improved. The mosfets no longer suffered from random explosions.

I seem to have lost the photos I had of this board, but once again, we had motor control issues. Namely, I learned from this design that commercial motor control ICs suck something fierce!

I learned also, the importance of proper inrush current limiting!

The re-re-Segueing:


Board three dumped the motor control IC in favor of an AVR32. Inrush current limiting was added, along with a proper fuse, and a bluetooth radio, once I realized I could use a smart watch to steer the Segue!

This is the board where I learned that AVR32 is a mess beyond comprehension, where somehow Atmel took the Harvard architecture and broke it.

There are 9 ways to multiply two numbers in that instruction set. THIS IS NOT USEFUL.  (╯°□°)╯︵ ┻━┻

AVR32 was soon abandoned in favor of TI's C2000 Piccolo series DSPs. They are in fact real RISC machines, and I can understand their instruction set, which is just lovely when I'm trying to use them.

Final exams haven’t given me time to play with these just yet, unfortunately…




The current state of Segue:

The main goals of the project as of now are to design a reliable, self-balancing segue-bot with:

  1. A frame supporting two 2kW brushless hub motors, together providing, in theory, a top speed of 25mph.
  2. A 500Wh lithium ion battery pack, with associated charging, and battery management systems.
  3. A bayesian inertial measurement engine, to provide accurate ground normal, velocity and acceleration measurements for the rest of the system.
  4. On-board, electronically switched power converters, to provide, among other things, 5V, 15V and 3V rails from the 51V battery stack.
  5. Bluetooth connectivity, for hands-free steering with a smart-watch.
  6. Suitable power electronics, for heat-free switching of the brushless motors’ stators.
  7. A dead-reckoning, bayesian-corrected motor simulation engine, to allow for “0 speed”, constant torque brushless motor driving. Gimballing, if you will.

Of these goals, five have thus far been accomplished. The frame is built, the battery management system is done and does not explode, the power converters are working reliably from the 51V they're given, the bluetooth works, and the MOSFET buffers work and don't require heatsinking. That took the whole of last summer!

What’s left to do at this point is nothing but math.

Brushless motors are tricky devils. In order to spin one's rotor, you must provide, via its three stator coils, a rotating magnetic field. That in itself isn't too hard:

Stator_n = A cos(ωt + Φ_n)

Where each Φ is offset either 60 or 120 degrees from the last coil’s, depending on the design of your motor. Advancing t would spin the motor in one direction, regressing t moves it the other direction. ω is simply your angular frequency of electrical rotation, which itself is related to the motor’s physical rotation by some constant.
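A sketch of what that looks like in software, for a motor with 120-degree phase spacing (the amplitude and angular frequency here are arbitrary):

```python
import math

def stator_drive(omega, t, amplitude=1.0):
    """Instantaneous drive value for each of the three stator coils:
    A*cos(omega*t + phi_n), with the phi spaced 120 degrees apart."""
    return [amplitude * math.cos(omega * t + n * 2.0 * math.pi / 3.0)
            for n in range(3)]

# At t = 0 the dipole points along coil 0; advancing t rotates it one way,
# regressing t rotates it the other.
drive = stator_drive(omega=100.0, t=0.0)
```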

Simple, right? Well… not quite. Physics likes to screw things up.

A is the maximum amplitude of your wave function; in the case of DSP control, this would be the maximum PWM duty cycle of your mosfets, and thus the maximum current given to the stator winding in question. Therein lies one problem: voltage and current aren't necessarily aligned in the motor. Why?

Think about a loop of wire moving in a non-uniform B field. As flux lines cut this loop, an emf will be generated in it, as per the Maxwell-Faraday equation:

\oint_{\partial \Sigma} \mathbf{E} \cdot \mathrm{d}\boldsymbol{\ell}  = - \frac{d}{dt} \iint_{\Sigma} \mathbf{B} \cdot \mathrm{d}\mathbf{S}

Notice something though: in no way does current play a role in this guy! That is to say, the emf generated in the loop of wire moving in a B field is *independent* of the current in the wire.

Of course if we want to push back on the B field as we do with a motor; one needs a current in the loop. This is so, because of the Lorentz relationship:

\mathbf{F} = q\left(\mathbf{E} + \mathbf{v} \times \mathbf{B}\right)

Where charge in motion (coulombs per second) is just current. Note here that voltage plays no role! Ruh roh.

As non-intuitive as it may seem, a voltage generated, and a current generating a force, in a wire with no resistance, are two entirely different phenomena. Ain’t that something?

But here's the kicker: see that little (−) sign in front of dΦ/dt in the Maxwell-Faraday equation? That's the little bit of hell right there which makes this job hard. It's of opposite sign to the voltage we need to apply to our real, resistive wire to make current flow in the direction we need. Clarifying:

Spinning the motor generates a voltage of opposite sign as the one you are applying to spin it. This is velocity dependent.

So that means A in our equation is also a function of ω. Not only that, but it is also a function of the motor's stator resistance and B field strength, and these themselves are functions of temperature! Delightful.

There is no general solution for A, as it will be different for every motor. Typically, though, it's an empirically derived function of ω which scales I to properly offset the back-emf of the motor, such that the system becomes linear.
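As a rough illustration of what that scaling has to accomplish (not a real controller; the constants and the linear back-emf model are placeholders):

```python
def drive_amplitude(i_target, omega, r_stator=0.05, k_emf=0.02, v_bus=88.0):
    """PWM duty fraction needed to push i_target through the stator while
    cancelling the speed-dependent back-emf. All constants are hypothetical."""
    v_needed = i_target * r_stator + k_emf * omega
    return min(v_needed / v_bus, 1.0)  # the bridge saturates at 100% duty

# At standstill only the resistive drop matters; at high speed the back-emf
# term dominates and eventually eats the whole bus voltage.
duty_slow = drive_amplitude(i_target=10.0, omega=0.0)
duty_fast = drive_amplitude(i_target=10.0, omega=4000.0)
```

That saturation point is exactly the top-speed limit discussed elsewhere on this site: once back-emf equals the bus voltage, there's nothing left to push current with.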

Got that? Good. There are more wrenches to toss in!

Magnets are not of infinite strength, and motors are not perfect. Thus, it’s possible to force them in positions no one likes; positions where the magnets we’re trying to move are no longer aligned with the field of our stator. It’s possible to lose track of them, for all intents and purposes.

To avoid this, ω needs to be correctly adjusted, such that if the motor is decidedly slowing down and we're unable to do anything about it, the electrical RPM stays synced with the mechanical one [ω_m]. If they don't stay synced, then the game's over.

So now, we need to make ω a function of ω_m. Ok, but what is ω_m?

Therein lies big problem #2. I don’t know.

Mechanical RPM is difficult to measure at low speed. This is so, because nearly every velocity sensor in existence, relies solely on either:

  • Discrete reference marks passing by some point, (hall effect sensors, rotary encoders, etc)
  • Non-discrete references, such as the voltage generated in a wire, as a magnet rotates over it.

As one would have it, both fall apart at low speed. Discrete references don't pass by fast enough to properly infer some (delta reference) / (delta t) value, and non-discrete measurements such as the one described become signals too small to measure. As a bonus, how the hell am I to define what ω_m is at zero speed?

In reality we need a rotor position measurement, and a really precise one to boot. That’s expensive!

Expensive doesn’t work on a DIY segue-bot kit. It’s the same reason why I’m not just using a small motor and a planetary gearbox to begin with.

The Plan From here:

I have hall effect sensors; 6 of them. I have a motor whose flux linkage, stator resistance, and rotor field strength I can measure, and whose stator currents I have PWM control of, and can measure with kelvin sensing. I have Maxwell’s equations, Wikipedia articles describing Bayesian statistics, and 6 weeks of winter break to figure all that math out.

Let’s do this:

With a known flux linkage, moment of inertia, B field strength and stator resistance, I can run a simulation of my motor inside the DSP. From this, I can know its momentum, and guess how it should respond to a given current. Based on its current angular velocity, I can then scale this current value to compensate for phase lead / lag. Then, based on some a priori measurement, give the mosfets some (ideally correct) PWM values, and increase/decrease that duty cycle based on the phase’s current sense resistor.

The angular velocity will be found through 1) what I'm modeling the motor to do, and 2) some correction based on information from the hall effect sensors. At zero speed, I'll just "gimbal" the motor and hope for the best. In theory, its application (balancing the robot) won't torque the motor to such an extreme that I skip commutation steps.
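In sketch form, the predict/correct loop I have in mind looks something like this. The single blend gain is a crude stand-in for a proper Bayesian update, and every number is made up:

```python
def predict(omega, torque_cmd, inertia, dt):
    """Model step: advance the estimated angular velocity using the
    commanded torque and the rotor's moment of inertia."""
    return omega + (torque_cmd / inertia) * dt

def correct(omega_model, omega_hall, gain=0.1):
    """Measurement step: nudge the model toward the (coarse) hall-sensor
    estimate. gain plays the role of a Kalman-style blend factor."""
    return omega_model + gain * (omega_hall - omega_model)

# Toy run: the estimate converges toward the sensor reading, while the
# model keeps it smooth between (sparse) hall transitions.
omega_est = 0.0
for _ in range(200):
    omega_est = predict(omega_est, torque_cmd=0.5, inertia=0.01, dt=0.001)
    omega_est = correct(omega_est, omega_hall=5.0)
```

The real thing would live in the DSP's control interrupt and weight the correction by how trustworthy the hall data is at the current speed.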

So in essence, I need to cram:

  Hall effect sensor information
  Current information
  A priori knowledge
  A desired speed              — Into —>  [ Magic filter ]  —>  [ PWM value register ]
  A desired torque
  Maxwell's Equations
  Ohm's Law

All in some way that makes the DSP not catch fire.


You’ve got mail

I have a bad habit of missing voicemails. When they arrive I usually do not know about them, and for some reason Verizon doesn't warn me when my inbox is full.

I do not have a smartphone for many reasons. They’re expensive, fragile devices, and they have the ability to distract the hell out of you. That is, I don’t care if Jody broke up with her boyfriend, and I don’t want to be notified of it.

Unfortunately though, this means I don’t have an ACPI LED at my disposal. :-(


~ Get Ghetto ~


To remedy my situation, we decided at 11:30 last night to get ghetto. By this, I mean it was high time to cram an ATtiny into whatever space was available in my phone.

Of course there wasn’t much…



I’m not particularly proud of this one; it’s quite the hack. I’ve told the little man in that SOIC to look for when the ground line of my vibramotor goes low, and to initiate an LED blink sequence when this occurs.

There's no easy way to determine when I've "read" my mail though, so the best 2AM solution to that problem was to break out the ATtiny's reset line to a button glued on the phone.


Yeah, it’s pretty bad. But hey, it worked.

Hacking the neural network

Lately, I've been intrigued by the idea of brain hacking, triggered in part by the talk Lee von Kraus gave at the NY Hall of Science some months ago.

By “hacking”, let’s be clear that I do not mean hardware hacking of the type Ben Krasnow seems to be so fond of. I tried that in fact, and it didn’t work.

Rather, let’s define ‘hacking’ as a form of software manipulation; reprogramming the neural network, if you will.

What is a neural network?

Our brains are amazing devices. Unlike most computers which process data sequentially (albeit, at amazing speeds), evolution has decided the parallel processing route was a bit better. No doubt, in my mind, because it’s *very hard* to completely disable a huge, decentralized network. Phineas Gage is the best example of this that comes to mind, but all it takes is one look at the RIAA’s attempts to destroy such networks, to gain a good understanding of what I mean!

But I digress.

A brain is a huge parallel computation machine.  One with a “clock speed” of about 10 hertz, but a node count something akin to 230*10^9. A massive number yes, but what exactly does it mean?

Given its low processing rate, yet amazing IO capability, a brain is interesting compared to normal von Neumann machines. To make the distinction a bit more clear, I'm going to define some quantity, "computing power"; that is, the time required to perform some group of operations:

(tick) * (operations) * (# of operations performed in one tick)^-1 = computing power (P)

Ticks can be thought of as the time it takes for an operation, or group of operations, to be processed. Propagation delay or reaction time, if you will. In humans this works out to be, on average, 200 milliseconds; in a typical desktop PC, 0.294 nanoseconds.

Operations are just that; operations. Memory lookups, inputs, outputs… anything.

The maximum number of operations that can be performed in one tick can be thought of as the "bus width". For humans, this is practically limitless, and can be assumed to be equal to the number of operations which are given to the machine. For your typical 4 thread CPU, one can assume this to be no more than 4.
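Written out as a helper (note that P, despite the name, comes out in units of time, so smaller means faster):

```python
def compute_time(tick, ops, ops_per_tick):
    """Time to chew through `ops` operations on a machine with the given
    propagation delay (tick) and per-tick operation capacity."""
    return tick * ops / ops_per_tick

# A human doing an 8-operation arithmetic task, with bus width 8:
human_math = compute_time(tick=0.2, ops=8, ops_per_tick=8)  # 0.2 seconds
```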

With this in mind, let’s compare some simple tasks:

Elementary mathematics, as an example, typically deals with one input, one output, a few memory/character lookups, and two memory writes at any given time (8 operations). For your brain, this becomes a problem, since your time to compute such algebra is, resultantly:

(0.2) * (8)  * (8)^-1 = 0.2 seconds

Very quickly you can see that summing an infinite series doesn't quite work with our hardware. For a von Neumann machine though, the opposite is true:

(.294 * 10^-9) * (8) * (4)^-1 = 0.588 nanoseconds

Obviously, humans were not designed to do simple math.

Here's where the fun comes in, however. Our eyes have on average 190 million photo-receptors each, and we have two of them. This corresponds to an image roughly 390 megapixels in size that gets processed in continuous time by our brain. What, then, is our ability to process a single "frame"?

(0.2) * (390 * 10^8) * (390 * 10^8) ^-1 = 0.2 seconds

Now it’s fun to see that for a Turing computer:

(.294 * 10^-9) * (390 * 10^8) * (4)^-1  = 2.85 seconds

…it takes longer.

This is the beauty of parallel architectures. They are limited only by (A) their ability to handle IO, and (B), their total propagation delay. For operations that process a large amount of data such as filtering, image processing, and voice recognition, they work really, really well. For operations like arithmetic, such is not the case.

It's prudent to note, though, that neural networks are fundamentally bottlenecked by their reaction time. How is it, then, that you are able to drive a car if your visual "frame time" is 0.2 seconds?

Now it gets even more fun.

Our brains regularly compensate for the hardware available. This hardware runs in continuous time –time not governed by a “system clock” like every other state machine out there. An input now, comes out about 200ms later. An input (now + 20ms), comes out about (200 + 20)ms later. It’s a continuous signal processing machine, which means it can do fun tricks to compensate for scarcity in free hardware.

This text that you’re reading right now is text that you are in fact reading right now. It’s visually processed, recognized as words, and sent through a specialized piece of your network trained for comprehension.

But the edge of your computer monitor, or the room in your periphery… is not.

That's actually the world as it was some noticeable time ago, with some continually added information. That is, it's a hallucination about how things are expected to be, one that is updated only when inputs dramatically change. And rightly so; why would you waste finite computing resources on what the back wall is doing, when you "know" with certainty that it's not going to be doing anything interesting? Of course, this has its downfalls when someone unexpectedly chucks a baseball your way, but it's certainly nice to have enough neurons left over to process sound while watching a movie!

This is among one of many tricks.

Why is it, that you only notice the lead tracks in a 20 track studio production? Why is it, that you don’t notice the 3rd chair violinist’s contribution in an orchestra rendition? Why is it so that you don’t notice the noise your computer fan makes, unless it changes unexpectedly? All of this information is there; it’s just… not important.

I’m sure many of you have seen this:

It illustrates my point exactly.

I feel it’s safe to argue, that a person’s experience of the world is merely a simulation. A simulation that responds to inputs in such a way to better handle future inputs, trained with what’s familiar. 

This is why it's possible to reach for your right toe without looking and (usually) succeed. You know what degrees of freedom your body is capable of, and you know the position of your toe relative to the rest of your body. With this model in mind, it's a simple task to tell your arm to move in such a way that all it takes is a little bit of a priori SLAM at the end to make fine adjustments. Walking, a task that took Honda 30 years to accomplish on von Neumann machines, is simple for us because of this.

This is also why abstract concepts such as charged particles moving in B fields are such confusing ones to comprehend. Your network isn't trained to recognize that type of pattern. If I were a philosopher, I'd say it's reasonable to argue that sentience is nothing but the ability of such a network to modify its surroundings such that the inputs it receives are familiar; self-preserving. But I'm not, so that tangent ends here.


The good stuff: Hacking such a network

It's not really possible to 'program' a neural network. Can you imagine giving explicit instructions to 10^11 nodes, in such a way that they recognize text from an image?

Code that in C. I dare you.

No. Rather, the only feasible way to perform such a task is to 'teach' the network. Give it inputs, monitor the outputs, then "tell it" to remember its connections if the output is good. Do this enough times, and eventually you'll get a predictable response worth looking at. Do this continuously for 80 years, with millions of inputs and a pre-programmed 'instinct' to get a head start, and you get an old wise man.

With this in mind, I’ll postulate that ‘personality’ (habits, reactions, ideals, tastes and preferences) is entirely learned. These are statistical patterns formed through past experience, and by the very nature of a neural network, they are patterns that can be changed. Hacked, if you will.

And how does one hack them? Tell nodes that what they’re doing is wrong, useless even. Eventually, they will change.

These past two years I’ve been doing just that. Making note of my habits, my responses, and recognizing the patterns they reveal. When I see something I don’t particularly like, I do what I can to tell myself to “change”. And quite frankly, it works.

I used to be a very depressive person. When something went wrong, the response was to feel sad about it. Hobbies, drugs, you name it, did not change that.

Of course this didn’t get much accomplished, so I told myself that. Consistently.

Now, when something breaks or goes wrong, there is no longer any depression. Maybe I’m sad or annoyed for 20 minutes, but the pattern of pouting about it for days is gone.

The fun thing is, there seems to be no “limit” to the extent to which this works. If I break a $2,000 laser, it doesn’t sadden me. If a family member passes on, it’s an unfortunate event, but it’s not likely to bother me for more than a half hour. If my oscilloscope breaks, rather than call up Tektronix and scream over the phone, I take the thing apart, see what went wrong, and determine whether it was my own doing or simply an inherent machine fault. I can act rationally.

Some might call that dehumanizing, but I won’t. It’s nice to be able to react to such situations, without the bias that sadness otherwise provides. That was a good hack.


So what else can we hack?

Well, I don’t know. This is not a science.

We’re going to try things and see what’s possible. At the time of this writing, my current goals are:

  • To immediately respond, as much as feasible, when given tasks. I intend to start by immediately responding to emails and such, until it becomes habitual.
  • To reprogram my fight-flight patterns, such that mathematics does not trigger such a response. We need to associate integrate(x^2+3x/e^4x) with “fun”, and do so effectively.
  • To reprogram my fight-flight patterns, such that exams do not trigger the response. We’re making progress here.
  • To dissociate “fear” from “unknown”. There is little good reason to be fearful of what one doesn’t yet know, and doing so only wastes time. I imagine this is why so many entrepreneurs fail in their ventures; they waste too much time planning in fear.
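For what it’s worth, that integral is a friendly one. Reading the expression as x² + 3x·e^(−4x) (my assumption about the intended grouping), one round of integration by parts on the second term gives:

```latex
\int \left( x^2 + 3x e^{-4x} \right)\,dx
  \;=\; \frac{x^3}{3} \;-\; \frac{3x}{4}\,e^{-4x} \;-\; \frac{3}{16}\,e^{-4x} \;+\; C
```

Differentiating the right-hand side recovers the integrand, so checking your own work can be part of the “fun” being trained in.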

…and that’s it, currently.

I must stress to the reader: this is hard. It is very difficult to tell billions of nodes that their connections are now wrong, and even more difficult to tell them that the new ones they make are worth keeping. Do not try to change too much at once, or it will not work.

Old patterns are strong; archaic ones even stronger. One must never accept their familiar outputs, if a hack is to succeed.

Now, go break some synapses!

Faith in TEK has been restored, somewhat.

How about that: it looks like my voice was heard, and Tektronix sent me a new TDS2024C!

And what do we do with new $1500 toys?

WE RIP THEM APART   (⌐■_■)っ═一 

Let’s look at her from behind. Not much going on there, apart from a USB device port, FCC information, a Kensington lock and various certification info. All in all, pretty boring.

However, inside it’s still pretty boring! There’s a power supply, a mainboard, a ribbon for the display panel and a wire harness for the human-interface panel. The power supply itself is noteworthy, though. Interestingly enough, it’s all through-hole construction, on a single-sided PCB. Like most SMPSes, it’s a flyback converter, and a very well-filtered one at that (second-order LC on the input). Everything is well glued, isolation slots are properly routed, what needs cooling is heatsinked well, and overall it’s a very nice power supply. The switch is hefty, and the IEC connector is nicely screwed into a big metal frame.

I don’t see this failing anytime soon. Though it doesn’t mean much to anyone looking to buy one (it’s custom), it’s an Emerson Networks 7001574-J100 power supply, Rev 1A. Its output voltages are -4.22V, +3.3V and +5.8V. Some of these are odd voltages, but I know exactly why they’ve been chosen.

Take a look in the top-left corner of the mainboard: there’s a whole bunch of linear regulators! Specifically, a circuit with a 0.7V intrinsic potential loss, hence the 5.8V supply for this board. There’s also a negative voltage source on there; similarly, only a few tenths of a volt below what’s fed to it. One might wonder why these are even here.

Two words: noise floor. In a scope it must be as low as possible, and the way you get there is through linear regulation and active filtering.
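Back of the envelope, the odd rail voltages make sense. A linear regulator only regulates while its input sits at least one dropout above its output, and every volt of extra headroom gets burned as heat in the pass element. In the sketch below, only the 5.8V input and the 0.7V loss come from the board; the 5.0V output rail and 0.5A load are my own illustrative guesses, not measurements from the scope:

```python
# Sketch of the headroom/heat trade-off behind the +5.8V rail.
def regulator_budget(v_in, v_out, dropout, i_load):
    headroom = v_in - v_out            # volts dropped across the regulator
    in_regulation = headroom >= dropout
    p_dissipated = headroom * i_load   # watts turned into heat
    return in_regulation, p_dissipated

# Assumed: a 5.0V logic rail at 0.5A, fed from the 5.8V supply.
ok, watts = regulator_budget(v_in=5.8, v_out=5.0, dropout=0.7, i_load=0.5)
print(ok, round(watts, 2))  # just enough headroom, about 0.4 W of heat
```

Pick the rail any higher and you waste power; any lower and the regulator drops out. Hence 5.8V rather than a round number.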

What else is on this board though?


  • A Xilinx Spartan FPGA.
  • Firmware EEPROMs.
  • Several megabytes of asynchronous RAM (71V016SA12BFG) for the sample memory.
  • A pair of custom ASICs, marked 32zcc87639858-00. I’m assuming these do the signal analysis for each channel pair.
  • Another pair of custom ASICs, marked 31zceg79857-00. Without doubt these are the analog front-ends/DAQs, which handle all of the level shifting, attenuation and signal sampling.
  • A CY7C67300 “EZ-Host” embedded USB host controller, from Cypress. This looks like an interesting IC…
  • A DS1339C I2C real-time clock IC, with an associated CMOS battery. This is a data-logging scope, so it makes sense that it should know what time it is!
  • A pair of 48LC2M32B2 64Mb SDRAMs from Micron, likely the system memory.
  • An AC16244, which seems to be a mysterious TI ASIC. And last but not least…
  • A Freescale MC6800 processor (on the back)! This is really the main system chip: an 8-bit microprocessor from the ’70s.


What the hell? Evidently it was fast enough to run the LCD. Or not; that’s probably the FPGA’s job. Still, why would anyone, in this day and age when 32-bit ARMs are $5 each, use a 6800 in an oscilloscope of all things? Was it because they were military-approved? Or was it just lazy engineering? I’d appreciate it if someone could shed light on that.


For comparison, here’s the TDS2024A’s mainboard. It hasn’t changed much.


Interestingly enough, this board has no EMI shielding. At all. There are some little castle-wall plate type things, but I’d be shocked if those actually did anything. Not only that, but there’s no metal enclosure around any of this circuitry; not even metallized plastic. Essentially, all of these signal paths are wide open for the world to see, and thus open for ambient RF to come and screw things up. Noticeably, too: turning on my Metcal near this scope increases the noise floor by almost 20mV!

Not good, yo.



On the front panel there’s nothing special: buttons, LEDs, rotary encoders and an LED-backlit display. Interestingly, though, this is an IPS display. For those who don’t know, IPS, or in-plane switching, is a type of LCD technology that allows for excellent color rendition, ultra-wide viewing angles and an all-around better LCD experience. It’s actually a huge improvement over past models, if you ask me.

Overall, I give the scope a 6/10.

On the plus side, the Tektronix UI is as simple and elegant as ever, and the scope is a joy to use. The FFT function is much improved, there’s a crap-ton of measurement and data-logging features, and the ability to save waveforms to flash is very useful when archiving reference signals.

However: -1 point for not having a reasonable sample depth, as 10k points is just not enough in these days of cheap RAM. -0.5 points for not having a modern processor, and minus another 0.5 for a USB host that can’t understand more than 2GB of flash memory. Lastly, minus another 2 points for no EMI shielding. That’s a real pain in the ass when working with RF and power electronics, and I had to add it myself for that purpose!

That’s not to say I’m complaining; thank you, Tektronix, for doing what you could to make a customer happy. But it’s high time for some innovation in those engineering labs if you want to stay on top!

We’re on the verge of a social revolution

I went to Maker Faire this weekend at the New York Hall of Science. For those who don’t know what the event is, consider it a large show-and-tell of things people have built, things people do, and things people want to sell to those who build and do things. Comic Con, perhaps, but for hardware hackers.


While I could talk for days about the faire, I’ll share only some of its secrets, and those via pictures!

Let’s start with 3D Printers. Everywhere. Entrepreneurs capitalizing on the recent expiration of Dimension Engineering and 3D Systems patents, and a lot of them, at that.

Even if you weren’t selling a printer, it was unusual not to have one in your booth.




Look what happens when patents expire: you get a whole bunch of very determined people building very cool things. Now who would have thought that competition could lead to something as absurd as innovation?

But let’s not make this post about how much I detest patent law. Rather, let’s look at more of makerfaire!

There were also…





Flame-throwing weed eaters…

Bicycle-powered vegetable washers…




A sound system….

Built from an array of transistor radios.



Wave vessels, a project I hold very near and dear to my heart.




Ham-radio antenna deploying robots (for emergency communications)

 Eepybird in all their glory.



Cardboard furniture, supporting an arduino workshop.

All of my friends at MITERS





Their electric vehicles…

 And Charles.




Jeri’s CastAR.

 Open-source, DIY prosthetics.





TI calculators, linked over IP.





Incredibly sketchy go-karts.

And kids. Lots, and lots of kids.



Lots, and lots of kids. Kids building robots, kids programming microcontrollers. Kids lockpicking, kids listening to science talks and art talks, and kids immersed in the wonder of all the projects at hand.

Kids who learned it’s possible to build something yourself. 

This is the beginning of something big.

Hackerspacing: I’m a pain in the butt

Things are looking bright for the hackerspace as of late; no doubt due to the TI attention, the administrators are now taking us seriously. So seriously, in fact, that I was directly called a pain in the rear by the president of our university himself.

I’m still not sure whether the hidden subtext is ribbing or endearment, but it’s good nonetheless. Dr. Destler seems to support the idea, and after the speech, offered in private conversation to find us a room and to support our first funding option.

…which is good, partly. But we already have a small room, and it’s completely unsuitable for machine tools. Not only is it impossible to reasonably fit a router in there, but it has no power, no ventilation; not even a window that opens! We need a new room.

Regarding the first funding option: it’s good that the administration is willing to support a $55k purchase, but in reality, it’s not money well spent. To fit that budget we had to make some serious cuts, typically by choosing used, single-purpose equipment. Take, for example, a knee mill: though useful for its purpose, if we later buy the Haas, the Bridgeport is $5k wasted. The same goes for the cheap FDM 3D printer if we later buy a stereolithography printer, or the Modela if we get a laser for the router. Simply put: with more money, we can make intrinsically better purchases for the lab. Though more expensive up front, an extra $20,000 now saves us $40,000 in the future. Not only that, but with tools that are 2x more useful, the hacking gets 10x as good.

The trouble is getting the admins to see that, too.