Yes, net neutrality is conservative. That’s the problem.

Fred Wilson makes the argument (here and here) that net neutrality is a conservative idea. That’s correct, and that’s the problem.

My definition of “conservative” is “seeking to preserve accreted value”. It fights change, because change is considered dangerous. It is (perhaps rational) risk-aversion and loss-aversion.

For example, environmental protection is conservative. Preferring traditional marriage is conservative.

(You’ll note that people who support those respective ideas are not typically found within the same political tribe, which is my semantic point.)

Net neutrality wishes to “preserve the internet”. It wishes to lock in a certain model, in the belief that doing so protects value.

But the internet is not conservative. It allows for many unpredictable outcomes. It is emergent and adaptive.

To me, neutrality locks in the worst of the internet (last-mile monopoly) while hindering its best qualities (routing around damage).

Imagine it’s 1996 and the internet is emergent. It is largely designed around discrete text protocols – email and HTTP and such. Binary data is supported, but pipes are narrow.

Now, companies start popping up to stream video over that network. Video was not considered in the network’s design – in 1996, video over IP is wildly inefficient. Coax cables can stream dozens of (analog) channels, but that modem on the phone line? Not so much.

So netizens quite reasonably wish to prevent this change. Imagine if bandwidth-hungry video providers crowd out those of us sending email! It’s an abuse of the network, greedy, and a departure from history. At the very least, the FCC should step in.

That’s a conservative case, and it looks a lot like the argument we are having today.

My security and privacy tools

A quick list of the things I use to improve my web experience:

HTTPS Everywhere – Does its best to detect whether a site offers HTTPS, and switches to it if so.

DNSCrypt – DNS traffic can be encrypted, too.

Block third-party cookies – In Chrome it’s a checkbox at chrome://settings/content. In Safari, it’s the default. Fluid is a nice Safari wrapper, btw.

AdBlock Plus – Ad blocker that’s open source.
Funny story: I work for a company that makes money on ads, and I wrote one of our ad servers. Every six months or so, I give myself a heart attack when I don’t see ads on our site, before remembering the above.

I don’t hate ads, by the way; just the 90% of them that are worthless.

Disconnect – Blocks analytics trackers and such.

All of the above can cause trouble under certain conditions, but they work well for me. DNSCrypt, for one, takes a few seconds to get going on a new connection.

A type system in runtime’s clothing

Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp. – Greenspun

I am reminded of this quote when I see a common pattern in data stores, especially RDBMSes. It’s a key-value pattern, along the lines of:

Columns: IDType | ID | Value

…combined with application code that branches on the IDType column. The Value (and heck, even the ID) is interpreted differently based on the type.

This is a fine pattern depending on one’s goals. But it’s important to understand the choice one is making here: we’ve created a dynamic type system. Those ifs and switches are type resolution, happening at runtime.

With an RDBMS, a table typically maps to a single type, say “Person”. One can completely express the shape of that entity in code, requiring no conditionals at runtime. Values flow into known slots.
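Expressed in Go, that mapping might be nothing more than a plain struct (a minimal sketch, with hypothetical columns):

type Person struct {
    ID    int
    Name  string
    Email string
}

Every value lands in a known, statically-typed slot.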

Using the pattern at top, by contrast, one might create a “Documents” table. IDType might be “PDF” or “Section” or whatever; the Value may be a complex payload or a reference to another entity.
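In Go, a minimal sketch of that branching might look like this (parsePDF and parseSection are hypothetical helpers, assumed to return (interface{}, error); imports elided):

type Row struct {
    IDType string
    ID     string
    Value  []byte
}

func decode(r Row) (interface{}, error) {
    switch r.IDType { // the runtime "type resolution"
    case "PDF":
        return parsePDF(r.Value)
    case "Section":
        return parseSection(r.Value)
    default:
        return nil, fmt.Errorf("unknown IDType %q", r.IDType)
    }
}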

And it can work great. As a key-value store, the store is “dumb”: meaning happens in code. This can give you great performance and a lot of presentation choices at runtime.

But one gives up a large class of static (compile-time) type guarantees; one will inevitably do “type” checks at runtime, to combat newly-possible illegal states.

Too often, such code looks like an ad hoc, informally-specified, slow type system.

The upshot is, it’s a trade-off between safety and flexibility, exactly as with static and dynamic type systems. If one chooses the latter, plan on accounting for legal and illegal states in application code — and be clear about guarantees the system will and won’t offer.

Hierarchy and orthogonality in C# and Go

Prompted by this question, I got to thinking about methods in C# and Go. It’s another example, I realized, of Go’s (logical) insistence on orthogonality and (stylistic) insistence on flatness/lack of hierarchy.

Go does not allow methods to be defined within a struct (ersatz class) definition. Instead of this, where the method lives in the declaration:

type Foo struct {
    Count, Price int
    Total() int {     // nope
        return Count * Price
    }
}

…one writes this:

type Foo struct {
    Count, Price int
}

func (f Foo) Total() int {
    return f.Count * f.Price
}

Which is to say, the method is its own free-standing declaration.
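Call sites don’t care about that choice; a quick sketch:

f := Foo{Count: 3, Price: 5}
fmt.Println(f.Total()) // 15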

In C#, you have a choice of doing either (eliding access modifiers):

struct Foo {
    int Count;
    int Price;
    int Total() {
        return Count * Price;
    }
}

…or, using extension methods (eliding the required outer static class):

struct Foo {
    int Count;
    int Price;
}

static int Total(this Foo foo) {
    return foo.Count * foo.Price;
}

Go achieves two things in this design decision. First, orthogonality: there is one way to write a method. Go effectively chooses extension methods.

Second, a matter of taste perhaps, there is less hierarchy in Go. Methods are just funcs alongside all the others; they don’t represent a new level of “indentation” or membership.

There are other design justifications described in the above link, but these advantages are the ones that jump out for me.

Know your guarantees, Go edition

I was directed to a thread about a poor soul who started a project in Go, eventually had to hand it off to the community, and discovered that his original source no longer compiled, due to third-party dependencies having changed. Key quote:

Not even the original programmer, with the original files on his original dev machine, can compile the source anymore.

I feel for the guy. Unfortunately, the above quote is not correct. He didn’t have all of the original files. He only had his own.

See, he took dependencies on third-party code he doesn’t control. That’s a fundamental choice to make. It is a fundamental characteristic of his program (and his dev process).

The complaint is that Go does not prevent this. It’s true! Versioned dependencies are not a feature of the base platform.

It’s also a deliberate choice: the Go authors declined to implement a feature when they felt the trade-offs were no good.

One low-level reason they made this choice is to avoid slow compilation and bloated binaries (which are two sides of the same coin). Remember, packages depend on other packages. So Foo might depend on Bar 2.1. Foo might also depend on Baz which in turn depends on Bar 1.9, and on down the tree. So that would mean compiling and linking several copies of nearly identical code.
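Sketched as a tree (hypothetical packages, to make the shape concrete):

Foo
├── Bar 2.1
└── Baz
    └── Bar 1.9   (nearly identical code, compiled and linked again)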

Depending on several versions of the same package also means knowing which version one is calling; the dependency mess seeps into your source code.

Which leads us to the high-level reasoning behind the Go platform punting on this feature: they did not have a solution they considered acceptable. It’s not that they don’t understand the problem; it’s that, at the moment, there is not a solution they like. So they choose no feature over a regressive one.

It’s a controversial stance. After all, npm and bundler and many other systems have dependency versioning built-in, and people work with them every day. But if you’ve used them, you know they are not without flaws.

I’ll speculate further: perhaps dependency versioning is not unlike “high availability” software. It makes promises of reliability, at the expense of increased complexity and bloat. After all, you are running a lot more code with a lot more relationships.

Often, “high availability” solutions don’t net out to being more reliable, due to this underestimated complexity. Perhaps it’s better to make no guarantee than to make one you can’t keep.

This is also something of a cultural experiment on Go’s part. By withholding these (possibly) false guarantees, Go forces you to be much more deliberate about managing your code and its dependencies. Trust, but verify.

It sounds like our poor soul from the opening paragraph trusted, but didn’t verify.

As a practical matter, here’s what he should have done. First, recognize that he was depending on code that is out of his control. Second, make a choice: don’t depend on it, or turn it into code that he does control.

He could fork those repos and depend on his own copy. He could “vendor” them into his solution with something like godep. He could depend on the weak (but not non-existent) promise of tagged versions.
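Forking, for example, is little more than a change of import path. A sketch, with hypothetical paths:

import (
    // was: bar "github.com/original/bar"
    bar "github.com/yourname/bar" // a copy he controls
)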

Remember: everything about third-party code is a decision about trust. Waving the wand of “versioned dependencies” doesn’t change that. In fact, it might fool us into seeing guarantees that don’t exist.

discuss on hacker news

The two edges of “culture fit”

Here’s a smart and funny bit about culture fit. I think it’s brutal and by and large true. [1]

When I first heard the term “culture fit”, I thought intuitively, yes. Great way to build a company. Companies are cultures and we should aim to build them.

Then, not long after, I thought, wait. “Culture fit” among highly-educated-20-something-males-on-the-spectrum might not actually make for a great culture, if it’s going to be of any size.

That last bit is key: “culture fit” is probably unavoidable in early days, but rapidly transitions to creepy when your company starts growing.

When you start from scratch, you choose partners. Partnering up means, almost by definition, that you share a vision. Otherwise, why would you be partners?

In all likelihood, sharing a vision means sharing priors. So your early team will not be diverse. It will likely be a tight group of people whose cultures, um, fit.

(It doesn’t have to be this way. But odds are it will.)

Now, if your company’s goal is to be a tight, focused lifestyle business, great. That’s a good way to go. You don’t need much diversity.

However, if you are funded, your goals are probably a big audience and a big exit. Your ability to reach new and diverse audiences requires diversity in your company. I don’t just mean skin color and gender; I mean people of varying experiences who see opportunities that you don’t.

You need type-A glad-handing salespeople. You need neckbeard sysadmins. You need touchy-feely designers. You need nagging accountants. They will grow the business. And you won’t get them if they need to be like you.

(If you achieve the above, you’ll get the skin-tone-and-gender diversity for free, btw.)

So, start with culture fit. Then know when to lay off.

 

[1] See also

How a savvy landlord would handle Airbnb

Megan McArdle describes the understandable discomfort that some tenants and property owners have about Airbnb’ers coming into their buildings.

Airbnb will naturally drive up rents in places where it’s being used. If a tenant has the capacity to pay (e.g.) $2000 for an apartment, but can earn an additional (e.g.) $300 a month by hosting, the tenant’s capacity to pay is now $2300.

A savvy landlord or condo association would want to capture this. Because it’s new, it’s seen as a disturbance. A little bit of thinking would reveal that there is now $XXX new money coming in the door (due to better utilization of the property).

It’s in the owners’ ultimate interest to figure out how to either a) accommodate this explicitly and safely, capturing some of the new revenue, or b) prohibit it because the tenants prefer it that way, understanding that a “non-shared” building will carry a premium.

One way a condo association might sell the idea to co-tenants is to point out that the new Airbnb money might be used for building upgrades or to reduce condo fees.

They might set up rules allowing only Airbnb’ers above a specified rating. They might limit the total number of nights per month. They might upgrade their keying systems.

I’d also point out that Airbnb’ers are likely nothing special, behavior-wise. I have recalcitrant noisy neighbors (across the courtyard out back) who are legitimate, paying leaseholders. There will be a non-zero but manageable number of problems, no more than under the status quo.

Another way this is nothing new: some single-family homeowners raise an eyebrow at condos; some condo owners look askance at renters; and so on down the line. These are trade-offs with pros and cons.

It seems clear there is a net economic gain in sharing real estate. The challenge is for parties to agree on the distribution of that gain, seeing beyond the natural status quo bias.

discuss on hacker news

PS: a commenter on HN (on Megan’s original story) mentions that Fannie & Freddie, the ultimate buyers of many mortgages, wouldn’t countenance this, and this would hamper the whole market. True?