Episode 48: Yourythmics

This week we conclude our coverage of the Viacom v. Google case, so we discuss digital media and copyright on the Internet.

Show notes available at http://wiki.whatstherumpuspodcast.com/48

Open Source FTW!

I feel pretty strongly about copyright. For me it started because of “free” software. I’m cheap and I don’t like to pay for things if I don’t have to, so I was always happy to try out a free piece of software (and open to figuring out how to use a not-so-free piece of software too). This led me to find open source software, which at first just meant “open source = free” to me. As my understanding of the open source movement grew, so did my opinion on copyright.

Copyright and patent laws in this country are outdated, outmoded, and generally ludicrous. The fact that Microsoft can patent some of the most basic algorithms in computer science for the sole purpose of suing someone for infringement if they don’t like a product is absurd to me. Patenting what can be considered common or fundamental knowledge in the field is just plain stupid, and I feel like it’s a travesty that all these worthless patents and copyrights are out there, miring people in the terrible legality of things. This kind of thing stunts innovation and hurts the industry.

While the open source movement is relatively young and still developing, look at how much innovation has come from that sector: projects like OpenOffice (an office application suite), Apache (the software most web servers run to host web sites), Linux (an operating system), and GIMP (image editing software) are all open source and all very well-developed projects.

Each project has a community that builds up around it, drawing more people as it becomes more popular. People report bugs, people fix bugs, the project becomes better, more people start to use it, rinse and repeat. The most insignificant person in the world has a say in the project, and anybody with the know-how is capable of patching bugs. If the project weren’t open source, would things remain this way? Look at the way Microsoft handles things with their products. Most of the time, when a bug is found, it’s treated the way most of corporate America operates: hush the person who found it, stick your head in the sand, and hope it goes away. There was recently a security flaw discovered in Windows that reaches back to 1993 and affects every operating system Microsoft has released since, up to and including Windows 7. A 17-year-old vulnerability that Microsoft just pretended wasn’t there until somebody made it public. I’m not saying that every piece of closed source software is maintained in the same way; I’m just saying it’s harder to find and patch bugs when only your own people can look at the code.

At this point I’ve made it pretty clear that I like free software and that I don’t want to have to pay for things. How then, in my Utopian world where all software is free, does a developer or company make enough money to justify creating the software in the first place? I honestly don’t know, but I think the Red Hat folks are on the right path. Red Hat creates the Red Hat Linux distribution under an open source license, so it’s free to use and free to be tinkered with. The way they make money is by charging enterprises for support, plus a nominal fee for CDs/DVDs (this isn’t required; you can download a copy for free from their site). With a support contract, you can call them up when you have a problem you can’t solve yourself and they’ll have someone help you solve it to the best of their ability. If other companies can come up with similar ways to make money and just let us poor folks have our free software, the world would be a better place.

The Internet and Its Impact On My Concentration

One of the interesting points made in the Stanford study is that switching tasks has a ‘startup cost.’  This is the amount of time it takes your brain to stop thinking about those TPS reports, and start focusing on reading the ‘LOL’ reply to your super-witty ‘Dear inanimate object…’ tweet you posted earlier.  By turning off the notifications, I went from checking Twitter as soon as there was a new update, to checking it when I wasn’t already thinking about something else.

One other thing I discovered about myself that I thought was interesting was that I intentionally interrupt myself when working, simply because I don’t like the work I am doing.  The more I disliked what I had to do, the easier it was for me to hop into the Google Reader tab I keep open and check RSS feeds, letting them keep me from whatever unpleasant task I had in store.  I found that this happened almost exclusively when I was switching from one task to another.  For example, if I resolved a case, I would update the ticket system, and then, despite knowing the other task I had was time sensitive, I would still hop from the ticket system tab to the Google Reader tab.

I also noticed that my behaviour towards reading websites has changed dramatically.  Before, I would read something until something else distracted me, then move on and never look back.  If it was something I found genuinely interesting, I would go back and try to pick up where I left off, but I would abandon the page quickly.  Now that all those alerts are turned off, I instead continue reading the article, either to completion or until I am no longer genuinely interested, at which point I move on to something else.  One interesting side note: I think this has as much to do with the fact that I hate my job as it does with my concentration.  Before I stopped caring, I would often click away from what I was reading because someone I work with would walk behind my cube, and I was worried about someone seeing me wasting time.  Now that I care much less about losing my job, I am more willing to put my full attention into what I am doing, both because I don’t stop in the middle of what I am doing, and because I am not spending energy trying to detect incoming coworkers who might see me wasting time.

There were two tests posted with the New York Times article: Test Your Focus and Test How Fast You Juggle Tasks.  I took them both and have posted my results below.  I hope to take them again in a few weeks to see if my results improve.

Also, I mentioned AJ Jacobs’s book The Year of Living Biblically in the podcast, but I didn’t remember the name.  Since I don’t expect it to be in the show notes, here is a link.

Test Your Focus Results

Test How Fast You Juggle Tasks Results

The Case for Tablet Computing

The problems I have with the iPhone are all minor things that most consumers probably don’t give a crap about:

  • Only syncing with iTunes
  • App DRM problems
  • No access to the file system
  • No DivX support
  • No multitasking (and don’t tell me about the new OS, because that doesn’t count – it isn’t full multitasking)

Rather than go into a long-winded article about my assumptions about a device I have never used, I thought I would talk about the platform.  I won’t attack or defend Apple and the iPad here, so if that is what you are looking for, move on.  What I want to talk about is tablet computing.  Like I said, I didn’t see a need for it.  But it occurred to me after reading blog posts on TheModernDayPirates.com and Wil Wheaton’s blog that there might be something to the platform after all.  Tablet computing started out as just a laptop with a screen that folded backwards.  No one used them, because the touch screens were resistive, which made them unresponsive, so they weren’t good as touch input devices; and when you tried to use the thing as a normal laptop, the touch film required got in the way, so it wasn’t a useful laptop either.  It was the worst of both worlds.  Add to that the fact that it was still a full-blown PC, which meant slow boot times, heat, and battery issues, and it’s no wonder that doctors and professors were the only people who used them.

As much as I dislike the iPhone, it did a lot for computing on small hardware.  With a capacitive touch screen and a well thought out design, it was possible to use fingers effortlessly to drive input on the device.  It also meant more screen real estate, since there was no keyboard, and the display was crystal clear because either there is no touch film or it is not nearly as intrusive as on a resistive screen.  (I am not a touch-screen expert, obviously.)

As much crap as I give the iPhone, it does do a lot right.  Getting so many developers on board is a colossal reason for the device’s success.  Even people using BlackBerrys and Android phones have Apple to thank for making developing for mobile devices a possibility.  It existed before, but not in such a major way.  Apple did what they always do, and brought some niche thing mainstream.

I would argue that that is exactly what they did for tablet computing with the iPad.  It is a movie screen, board game, ebook, photo viewer, and much, much more, I am sure.  Sure, all of those things can be done on the iPhone (or other phones), but the point is that the screen is too small to do most of them well.  The iPhone put a decent computer in remote-control-sized hardware.  The iPad is a coffee table book.  And that is where I see it excelling.  It isn’t enough of a computer to be a realistic laptop replacement, although some people would disagree, but it is perfect for sitting on the couch and checking IMDb for some actor on screen, or ordering a pizza while the game is on.  Then at the end of the day, it is a backlit ebook reader.

I have tried to do all of those things with my laptop (it is netbook sized, but old enough that I had it long before the term netbook was around), and it works, but it isn’t quite the right tool for the job.  Much like I had a BlackBerry Pearl, and it was a good smartphone, but not quite there yet.  Much like I have a Nook, and it is a great ebook reader, but it just isn’t quite there yet.

I don’t like using Apple products.  The longer I have and use the iPhone, the more convinced I am of that.  But they are pushing boundaries and making normal people accept these new gadgets.  Old school tablet computers, ebook readers, portable DVD players, and netbooks are all goofy one-shot solutions that can be answered with the iPad.  I am very excited not about the iPad, but about its competitors that are surely only a few more miles down the road.  Apple has finally convinced me, and most of the American public, that tablet computing can work, and now I want one.  Just not theirs.