Digifesto

Academia vs. FOSS: The Good, The Bad, and the Ugly

Mel Chua has been pushing forward on the theme of FOSS culture in academia, and has gotten a lot of wonderful comments, many about why it’s not so simple to just port one culture over to the other. I want to try to compile items from Mel, comments on that post, and a few other sources. The question is: what are the salient differences between FOSS and academia?

I will proceed using the now-standard Spaghetti Western classification schema.

The Good

  • Universities tend to be more proactive about identifying and aiding newcomers who are struggling, as opposed to many FOSS projects, which have high failure-and-dropout rates due to poorly designed scaffolding.
  • Academia is much more demographically inclusive. FOSS communities are notoriously imbalanced in terms of gender and race.

The Bad

  • The academic fear of having one's results scooped or stolen leads to redundant, secretive, and lonely effort. FOSS communities get around this by having good systems for attributing incremental progress.
  • Despite scientific ideals, academic scientific research is getting less reproducible, and therefore less robust, because of closed code and data. FOSS work is often more reproducible (though not if it's poorly documented).
  • Closed-access academic journals hold many disciplines hostage by holding a monopoly on prestige. This is changing with the push for open access research, but it is still a significant issue. FOSS communities may care about community prestige, but that prestige often comes from helpfulness or from a stake in a project. If metrics are used, they are often implicit ones extractable from the code repository itself, as on Ohloh. Altmetrics are one proposed solution to this problem.

The Ugly

  • In both FOSS and academia, a community of collaborators needs to form around shared interests and skills. But FOSS has come to exemplify the power of distributed collaboration towards pragmatic goals. One is judged more by one's contributions than by one's academic pedigree, which means that FOSS does not have as much institutional gatekeeping.
  • Tenure committees look at papers published, not software developed. So there is little incentive for making robust software as part of the research process, however much that might allow reproducibility and encourage collaboration.
  • Since academics are often focused on “the frontier”, they don’t pay much attention to “building blocks”. Academic research culture tends to encourage this because it’s a race for discovery. FOSS regards care of the building blocks as a virtue and rewards the effort with stronger communities built on top of those blocks.
  • One reason for the difference between academia and FOSS is bandwidth. Since publications have page limits and are also the main means of academic communication, one wants to dedicate as much space as possible to juicy results at the expense of process documentation that would aid reproducibility. Since FOSS developed using digital communication tools with fewer constraints, it doesn’t have this problem. But academia doesn’t yet value contributions to this amorphous digital wealth of knowledge.

Have I left anything out?

Don’t use Venn diagrams like this

Today I saw this whitepaper by Esri about their use of open source software. It’s old, but still kept my attention.

There are several reasons why this paper is interesting. One is that it reflects the trend of companies that once used FUD tactics against open source software now singing a soothing song of compatibilism. It makes an admirable effort to explain the differences between open source software, proprietary software, and open standards to its enterprise client audience. That is the good news.

The bad news is that since this new compatibilism is just bending to market pressure after the rise of successful open source software complements, it lacks an understanding of why the open source development process has caused those market successes. Of course, proprietary companies have good reason to blur these lines, because otherwise they would need to acknowledge the existence of open source substitutes. In Esri’s case, that would mean products like the OpenGeo Suite.

I probably wouldn’t have written this post if it were not for its Venn diagram, which is presented with the caption “A hybrid relationship”.

I don’t think there is a way to interpret this diagram that makes sense. It correctly identifies that Closed Source, Open Source, and Open Standards are different. But what do the overlapping regions represent? Presumably they are meant to indicate that a system may both be open source and use open standards, or have open standards and be closed, or…be both open and closed?

It’s a subtle point, but the semantics of set containment implied by the Venn diagram really don’t apply here. A system that’s a ‘hybrid’ of closed and open software is not “both” closed and open in the same way that closed software using open standards is “both” closed and open. Rather, the hybrid system is just that, a hybrid, which means its architecture is going to suffer tradeoffs as different components have different properties.

I don’t think that the author of this whitepaper was trying to deliberately obscure this idea. But I think that they didn’t know or care about it. That’s a problem, because it’s marketing material like this that clouds the picture about the value of open source. At a pointy-haired managerial level, one can answer the question “why aren’t you using more open source software” with a glib, “oh, we’re using a hybrid model, tailored to our needs.” But unless you actually understand what you’re talking about, your technical stack may still be full of buggy and unaccountable software, without you even knowing it.

Another rant about academia and open source

A few weeks ago I went to a great talk by Victoria Stodden about how there’s a crisis of confidence in scientific research that depends on heavy computing. Long story short, because the data and code aren’t openly available, the results aren’t reproducible. That means there’s no check on prior research, and bad results can slip through and be the foundation for future work. This is bad.

Stodden’s solution was to push forward within the scientific community and possibly in legislation (i.e., as a requirement on state-funded research) for open data and code in research. Right on!

Then, something intriguing: somebody in the audience asked how this relates to open source development. Stodden, who just couldn’t stop saying amazing things that needed to be said that day, answered by saying that scientists have a lot to learn from the “open source world”, because open source developers know how to build strong communities around their (open) work.

Looking around the room at this point, I saw several scientists toying with their laptops. I don’t think they were listening.

It’s a difficult thing coming from an open source background and entering academia, because the norms are close, but off.

The other day I wrote to an informal departmental mailing list with criticism of and questions about a theorist with a lot of influence in the department, Bruno Latour. The reactions to that thread ranged pretty much all across the board, but one of the surprising ones was along the lines of “I’m not going to do your work for you by answering your question about Latour.” In other words, RTFM. Except, in this case, “the manual” was a book or two of dense academic literature in a field that I was just beginning to dip into.

I don’t want to make too much of this response, since there were a lot of extenuating circumstances, but it did strike me as an indication of one of the cultural divides between open source development and academic scholarship. In the former, you want as many people as possible to understand and use your cool new thing, because that enriches your community and makes you feel better about your contribution to the world. For some kinds of scholars, being the only one who understands a thing is a kind of distinction that brings pride and job opportunities, so you don’t really want other people to know as much as you do about it.

Similarly for computationally heavy sciences: if you think your job is to get grants to fund your research, you don’t really want anybody picking through it and telling you your methodology was busted. In an Internet Security course this semester, I’ve had the pleasure of reading John McHugh’s Testing Intrusion Detection Systems: A Critique of the 1998 and 1999 DARPA Off-line Intrusion Detection System Evaluation as Performed by Lincoln Laboratory. In this incredible paper, McHugh explains why a particular DARPA-funded Lincoln Labs Intrusion Detection research paper is BS, scientifically speaking.

In open source development, we would call McHugh’s paper a bug report. We would say, “McHugh is a great user of our research because he went through and tested for all these bugs, and even has recommendations about how to fix them. This is fantastic! The next release is going to be great.”

In the world of security research, Lincoln Labs complained to the publisher and got the article pulled.

Ok, so security research is a new field with a lot of tough phenomena to deal with and not a ton of time to read up on 300 years of epistemology, philosophy of science, statistical learning theory, or each other’s methodological critiques. I’m not faulting the research community at all. However, it does show some of the trouble that happens in a field that is born out of industry and military funding concerns without the pretensions or emphasis on reproducible truth-discovery that you get in, say, physics.

All of this, it so happens, is what Lyotard describes in his monograph, The Postmodern Condition (1979). Lyotard argues that because of cybernetics and information technologies, because of Wittgenstein, because of the “collapse of metanarratives” that would make anybody believe in anything silly like “truth”, there’s nothing left to legitimize knowledge except Winning.

You can win in two ways: you can research something that helps somebody beat somebody else up or consume more, so that they give you funding. Or you can win by not losing, by pulling some wild theoretical stunt that puts you out of range of everybody else so that they can’t come after you. You become good at critiquing things in ways that sound smart, and tell people who disagree with you that they haven’t read your canon. You hope that if they call your bluff and read it, they will be so converted by the experience that they will leave you alone.

Some, but certainly not all, of academia seems like this. You can still find people around who believe in epistemic standards: rational deduction, dialectical critique resolving to a consensus, sound statistical induction. Often people will see these as just a kind of meta-methodology in service to a purely pragmatic ideal of something that works well or looks pretty or makes you think in a new way, but that in itself isn’t so bad. Not everybody should be anal about methodology.

But these standards are in tension with the day to day of things, because almost nobody really believes that they are after true ideas any more. It’s so easy to be cynical or territorial.

What seems to be missing is a sense of common purpose in academic work. Maybe it’s the publication incentive structure, maybe it’s because academia is an ideological proxy for class or sex warfare, maybe it’s because of a lot of big egos, maybe it’s the collapse of meta-narratives.

In FOSS development, there’s a secret ethic that’s not particularly well articulated by either the Free Software Movement or the Open Source Initiative, but which I believe is shared by a lot of developers. It goes something like this:

I’m going to try to build a totally great new thing. It’s going to be a lot of work, but it will be worth it because it’s going to be so useful and cool. Gosh, it would be helpful if other people worked on it with me, because this is a lonely pursuit and having others work with me will help me know I’m not chasing after a windmill. If somebody wants to work on it with me, I’m going to try hard to give them what they need to work on it. But hell, even if somebody tells me they used it and found six problems in it, that’s motivating; that gives me something to strive for. It means I have (or had) a user. Users are awesome; they make my heart swell with pride. Also, bonus, having lots of users means people want to pay me for services or hire me or let me give talks. But it’s not like I’m trying to keep others out of this game, because there is just so much that I wish we could build and not enough time! Come on! Let’s build the future together!

I think this is the sort of ethic that leads to the kind of community building that Stodden was talking about. It requires a leap of faith: that your generosity will pay off and that the world won’t run out of problems to be solved. It requires self-confidence because you have to believe that you have something (even something small) to offer that will make you a respected part of an open community without walls to shelter you from criticism. But this ethic is the relentlessly spreading meme of the 21st century and it’s probably going to be victorious by the start of the 22nd. So if we want our academic work to have staying power we better get on this wagon early so we can benefit from the centrality effects in the growing openly collaborative academic network.

I heard David Weinberger give a talk last year on his new book Too Big to Know, in which he argued that “the next Darwin” is going to be actively involved in social media as a research methodology. Tracing their research notes will involve an examination of their inbox and Facebook feed to see what conversations were happening, because so much knowledge transfer is happening socially and digitally, and it’s faster and more contextual than somebody spending a weekend alone reading books in a library. He’s right, except maybe for one thing: this digital dialectic (or pluralectic) implies that “the next Darwin” isn’t just one dude, Darwin, with his own ‘-ism’ and its pernicious Social Darwinist adherents. Rather, it means that the next great theory of the origin of species is going to be built by a massive collaborative effort in which lots of people take an active part. The historical record will show their contributions not just with the clumsy granularity of conference publications and citations, but with the minute granularity of thousands of traced conversations. The theory itself will probably be too complicated for any one person to understand, but that’s OK, because it will be well architected and there will be plenty of domain experts to go to if anyone has problems with any particular part of it. And it will be growing all the time and maybe competing with a few other theories. For a while people might have to dual-boot their brains until somebody figures out how to virtualize Foucauldian Quantum Mechanics on an Organic Data Splicing ideological platform, but one day some crazy scholar-hacker will find a way.

“Cool!” they will say, throwing a few bucks towards the Kickstarter project for a musical instrument that plays to the tune of the uncollapsed probabilistic power dynamics playing out between our collated heartbeats.

Does that future sound good? Good. Because it’s already starting. It’s just an evolution of the way things have always been, and I’m pretty sure, based on what I’ve been hearing, that it’s a way of doing things that’s picking up steam. It’s just not “normal” yet. Generation gap, maybe. That’s cool. At the rate things are changing, it will be here before you know it.

Computation and Economic Risk

I’m excited to be working with Prof. John Chuang this semester on an independent study in what we’re calling “Economics of Data and Computation”. For me, it’s an opportunity to explore the literature in the area and hunt for answers to questions I have about how information and computation shape the economy, and vice versa. As my friend and mentor Eddie Pickle would put it, we are looking for the “physics of data”: what rules does data obey? How can its flows be harnessed for energy and useful work?

A requirement for the study is that I blog weekly about our progress. If these topics interest you, I hope you will stay tuned and engage in conversation.

To get things going, I dipped a toe into computational finance. Since I have no background in this area, I googled it and discovered Peter Forsyth’s aptly titled Introduction to Computational Finance without Agonizing Pain. What I found there surprised me.

The first few chapters of Forsyth’s work involve the pricing of stock options. A stock option is an agreement wherein the owner has the right, but not the obligation, to sell a stock at a particular (strike) price at a future time. These can be valued by imagining them as part of a portfolio with the stock in question, and determining the price that would hedge all the risk out of the portfolio.

Since stock prices are dynamic, evaluating the price of a stock option requires a model of how those prices change. Forsyth models changes in stock prices as Brownian motion, in which a variable moves according to a combination of drift and noise. The accuracy of the estimated value of a stock option is going to depend on the accuracy of the estimated expected value of this random stock price.
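For reference, the standard way to write this model (my gloss, in the usual textbook notation, not necessarily Forsyth’s) is geometric Brownian motion, where the change in the stock price $S$ over a small time step $dt$ is

    dS = \mu S \, dt + \sigma S \, dW

with $\mu$ the drift, $\sigma$ the volatility, and $dW$ the increment of a random walk (a Wiener process).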

How do you estimate the expected value of a stock price that is subject to a complex stochastic process, such as Brownian motion? Forsyth starts by recommending a Monte Carlo method. This involves running lots of randomized simulations based on the model of stock price fluctuation and averaging the results.

This is great, but there’s a catch: Monte Carlo methods are computationally expensive. Forsyth goes into detail about how to tune the parameters to account for the time it takes these Monte Carlo simulations to converge on a result. Basically, the more iterations of simulation, the more accurate the estimate, and the error shrinks only as the inverse square root of the number of simulations.
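To make this concrete, here is a minimal Python sketch (mine, not Forsyth’s code; the parameter values at the bottom are hypothetical, chosen only for illustration) that estimates the value of a European put by simulating terminal stock prices under geometric Brownian motion. Under the standard risk-neutral pricing argument, the drift is replaced by the risk-free rate r.

    import numpy as np

    def mc_put_price(S0, K, r, sigma, T, n_sims, seed=0):
        # Monte Carlo estimate of a European put under geometric Brownian motion.
        rng = np.random.default_rng(seed)
        # Simulate terminal prices directly from the closed-form GBM solution.
        Z = rng.standard_normal(n_sims)
        ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
        # Discounted put payoffs: max(K - S_T, 0), pulled back to present value.
        payoffs = np.exp(-r * T) * np.maximum(K - ST, 0.0)
        # The standard error of the estimate shrinks like 1/sqrt(n_sims).
        return payoffs.mean(), payoffs.std(ddof=1) / np.sqrt(n_sims)

    # Hypothetical parameters: spot 100, strike 100, 5% rate, 20% vol, one year.
    price, stderr = mc_put_price(S0=100, K=100, r=0.05, sigma=0.2, T=1.0, n_sims=100_000)
    print("put estimate:", price, "+/-", stderr)

Sampling the terminal price straight from the closed-form solution avoids simulating whole paths; a path-dependent option would need full time-stepping, making the computation even more expensive.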

This is a very promising early result for us, because it suggests a link between computational power and economic risk. Even when all the parameters of the model are known, deriving useful results from the model requires computation. So we can in principle derive a price of computation (or, at least, of iterations of Monte Carlo simulation) as a function of the risk aversion of the stock option trader.
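Here is a back-of-envelope version of that derivation (mine, not Forsyth’s): if the discounted payoff has standard deviation $s$, the standard error after $N$ simulations is $s / \sqrt{N}$. A trader who will tolerate a pricing error of at most $\epsilon$ therefore needs roughly

    N \approx (s / \epsilon)^2

simulations, so halving the tolerable error quadruples the required computation. Risk aversion, in the form of a smaller $\epsilon$, translates directly into a computational bill.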

Is that counter-intuitive? If computation makes risk cheaper to manage, it would be consistent with some findings that high-frequency trading reduces market volatility. It also suggests a possible economic relationship between finance, insurance, and the cloud computing market.

One question I’d like to look into is to what extent computational power can be seen as a strategic advantage to adversaries–for example, in a stock trading situation–and what the limits of that power are. At some point, the effects of computation are limited by the amount of data one has to work with. But too much data without the computational means to process it is a waste.

See where this is going? I’m interested in hearing your thoughts.

Truth vs. Power: Buddy Roemer, SOPA, money in politics and liberation technology

Buddy Roemer is a former Governor and Congressman of Louisiana who is running for president as a Republican. He has so far not been allowed to take part in any televised debates, and so is relatively unknown. The television stations say that he is not eligible to debate because he has not raised sufficient campaign contributions. This is a problem for Buddy, because he has refused to accept Super PAC money and caps individual donations at $100.

Whatever else one may say about Roemer as a candidate, there is something wrong with this picture. Putting aside the other tools of the modern campaign (advertising, for example), the debate is the cornerstone of rational politics. In these events, we pretend for a moment that we are led by those who are able to persuade us to follow them. That pretense becomes pure fantasy when reasonable candidates are barred from entry.

Of course, politics is not a fair fight for our approval as citizens. Citizens are pawns. Or, perhaps more appropriately, ants ready to swarm to any greasy slick of propaganda spewed from the orifices of power. So must we be viewed by the billionaire Super PAC donors who have been investing in the Romney campaign, shareholders ready to instate their loyal CEO.

Is it going too far to say that these Romney shareholders aim to turn a profit on the presidency? We could consider the alternative: that these are philosopher-king oligarchs who have spent their lives earning their billions through honest business, only to turn their attention to national politics and endorse Mitt Romney. Out of selfless benevolence, they seek a consistent champion of the middle and lower classes. Some of them think Gingrich would be a better one.

No, that seems unlikely.

If there is any iron law of politics, it is that those in power aim to keep themselves in power. Companies that succeed will try to maintain their market power, even when their products face obsolescence. Unions that triumph will shift their demands from workers’ rights to excluding the unorganized. Non-profits that form out of genuine selfless action contort themselves to chase funding and become whatever will justify their existence. Prison systems will fight to incarcerate more people. Political parties will try to maintain control of political messaging to keep out political diversity. And so on.

Truth erodes the grip of power. By recognizing these patterns as what they are, we can choose to deny them. We can liberate ourselves by holding institutions of power to account.

However, truth is something we transmit to one another. Truth travels as information. In our era, that means the spread of truth is controlled by mass media and information technology. But media and IT are themselves part of our economy and politics. Herein lies the problem.

SOPA is a good example of this. Media companies that want to use the power of the state to enforce monopolies on their works (Hollywood, the RIAA, etc.) are battling with Internet companies that profit from easy sharing of information across networked users (Google, Facebook, Twitter) over control of the Web. The media companies have been playing politics for much longer than the internet companies. One friend of mine explains to me that the Hollywood lobbyists are physically older than Google’s. They have been on K Street longer. They have better connections with legislators and other lobbyists. So they are winning.

Buddy Roemer is trying to expose this truth about how politics works–that policies are determined not by citizens but by lobbyists paid for by the rich and powerful. He has other politics but he has ripped this plank from his platform and sharpened it into a spear fit for the head of Mitt Romney.

But the media companies by and large control the spread of truth. Enmeshed in their tangle of alliances with powerful political parties and corporations, these media companies have no incentive to let in a candidate who is so eager to blow the lid off the whole complex. So they raise the requirements of debate eligibility to exclude anyone who isn’t playing their power games.

So Roemer has turned to non-mass media to launch his campaign. He has been working hard on his Web campaign, using social media (especially Twitter) to get his message out.

Perhaps Roemer’s faith in this alternative structure is due in part to his witnessing of the Occupy movement. I believe it can be uncontroversially said at this point that social media was necessary (though not sufficient) for the successes of the Occupy movement, whether in organizing, in gaining publicity, or in responding tactically to suppression. Its success in raising the issue of inequality in national politics has been due largely to its independence from centralized media. It continues to use the Internet to organize itself over the winter in order to plan its next moves for 2012. Perhaps Roemer can raise awareness about political inequality through similar channels.

It is worth watching and studying these events because the question of whether and under what conditions information technology can be liberation technology will determine our future. Is it possible for a message that is true but unpopular with power to spread? Under what conditions? This is not just a question of theoretical interest. It is a strategic question for those concerned with their own freedom.

We have many clues to this question already. We have the efficacy of the open Web, as opposed to centralized media channels, in assisting a politics of truth. In SOPA, we see how the centralized hub of the Internet, its DNS system, is where it is most vulnerable to attack by powers that are threatened by it.

On the other hand, open data programs by governments show that there is also a politics of mutual empowerment through sharing information with citizens. Government transparency initiatives allow the kinds of analysis and awareness of money in politics that show us who is supporting SOPA and help us verify the claims of Buddy Roemer and the like. And SOPA has shown examples of industries that gain power from openness and wage political battles to defend it.

What technologies are needed to further embolden truth? What strategies will get these technologies into the hands of those that can use them? How can truth be sifted from fiction, anyway? Can we find out before a growing concentration of power stamps out our ability to search and disseminate our answers?

I am eager to discuss these topics with anyone interested and collaborate on solutions.

A vote for Roemer is a vote for Obama

I’ve spent New Year’s with friends from DC whom I think of as “Washington Insiders” because they work in or with various parts of the government. Unlike the people I normally talk shop with, they have never even heard of Richard Stallman. They are dismissive of the Occupy movement or just don’t want to talk about it. They are pessimistic about the next election, because they see it as a sure victory for Mitt Romney. Many of them were active in the Obama campaign, and will likely be involved in the campaign in some capacity this coming year. They are grim.

When I brought him up, one of them told me that “Buddy Roemer is a joke”–as if there was nothing at all sensible about a former Congressman running as a government reform protest candidate after two years of Tea Party and Occupy press. I have to remind them that Buddy was once Louisiana’s Governor, not just a Congressman. One friend jokes, “Good people don’t become Governor of Louisiana.” I don’t really know what he’s talking about, but Buddy seems like good people to me.

I ask if he could be a third party spoiler. “No, that’s unrealistic. I mean the last time there was a third party spoiler was…” It gets him thinking. “Well, there was a minor spoiler effect with Nader in 2000, but the last real spoiler was Perot in 1992.” That sounds like once a decade to me. We’re due.

Let’s play it out. Roemer is running as a Republican currently. He has a slim to nothing chance of winning the primary. Suppose he continues to run as an independent. Suppose he is allowed to debate nationally and get public attention.

Buddy is an old Southern white man who will spend his time at the debate telling Mitt Romney that he is fake and bought, which is the elephant in the room around Romney and the root of the flip-flopping that so pisses off his base. No wonder the GOP won’t let Buddy debate with them. But in a national debate, Buddy could easily steal elements of Romney’s base in addition to swing voters.

If things are as dismal for Obama as some say (though at the moment he’s InTrading at 51%…) then Roemer on that ballot could be the spoiler he needs to pull things through. Obama, after all, ran on “Change” originally, and could have plenty to agree with Roemer about, but with the spin that it’s only the Republican party that is as influenced by money in politics.

At this point, I don’t see a stronger move for the center-left than backing Roemer and helping him get on the ballot.

This is critical

Connecting the dots

SOPA is backed by a large industry coalition led presumably by the industries that on-line piracy hurts most, including Hollywood and the RIAA. These industries have tremendous influence over Congress because of their campaign contributions, despite the fact that the education sector and human rights organizations oppose the bill.

Campaign finance reform is a hot political topic right now, but mostly only among the netroots and those who get their political news through the Internet. The Internet has allowed grassroots activists to get national attention despite the lack of coverage by traditional media, through, for example, viral video. And the Internet has offered an alternative means of nominating a presidential candidate and allowing them to appear on the ballot.

If SOPA passes, the value of the Internet as a platform for political organizing will be greatly diminished. And the political influence of those industries who are fighting for SOPA will be secure.

Is it possible that SOPA is being pushed through Congress to deliberately destroy the Internet, in order to break the one platform that has potential to truly change politics?

Would Congress rather destroy the Internet than adapt to a new technology that makes a united and informed citizenry, politically represented by those that honor its rights and values, possible?

Would it smash the greatest engine of innovation the United States has ever seen in order to enshrine powers whose time has come and passed?

Perhaps, SOPA is more than an assault on the Internet. Maybe it’s an assault on what’s left of democracy in our once great nation.

The open source acqui-hire

There’s some interesting commentary around Twitter’s recent acquisition, Whisper Systems:

Twitter has begun to open source the software built by Whisper Systems, the enterprise mobile security startup it acquired just three weeks ago. …This move confirms the, well, whispers that the Whisper Systems deal was mostly made for acqui-hire purposes.

Another acquisition like this that comes to mind is Etherpad, which Google bought (presumably to get the Etherpad team working on Wave) and then open sourced. The logic of these acquisitions is that the talent is what matters; the IP is incidental, or perhaps better served by an open community.

When I talk to actual or aspiring entrepreneurs, they often make the assumption that it would spoil their business to start building out their product open source. For one thing, they argue, there will be competitors who launch their own startups off of the open innovation. Then, they will miss their chance at a big exit because there will be no IP to tempt Facebook or whoever else to buy them out.

These open source acqui-hires belie those concerns. Demonstrating talent is part of what makes one acquirable. Logically, then, starting a competing company based on technology in which you don’t have talent makes you less competitive, from the perspective of a market exit. It’s hard to see what kind of competitive advantage the copycat company would have, really, since it doesn’t have the expertise that comes from building the technology. If they do find some competitive advantage (perhaps they speak a foreign language and so are able to target a different market), then they are natural partners, not natural competitors.

One can take this argument further. Making open and available software is one of the best ways for a developer to make others aware of their talents and increase the demand (and value) for their own labor. So the talent in an open source company should be on average more valuable in case of an acqui-hire.

This doesn’t seem like a bad way out for a talented entrepreneur. Why, then, is this not a more well-known model for startups?

One reason is that the real winners in the startup scene are not the entrepreneurs but the funders, and to the funders it is more worthwhile to invest in several different technologies with a small chance of selling one off big than to invest in the market value of their entrepreneurs. Because, after all, venture capitalists are in the same war for engineering talent as Google, Facebook, and the rest. This should become less of an issue, however, as crowdfunding becomes more viable.

Computing power

I’m working on a project analyzing Twitter data with Sean Chen for a class. I am learning one of the simple pleasures of scientific computing, which is watching your machine ramp up to use all its processing power because numpy is crunching some big arrays.

There’s a sense in which computing power is the limited resource for humanity these days.

We have the vast canon of recorded human thought available as digitized text. We have countless sensors, oceans of data, very accurate models of the fundamental mechanics of our universe. What we lack is the ability to synthesize that data and learn as much as we could from it.

This isn’t new; brains are an important source of computing power, and in fact a remarkably efficient one. But digital processing and memory have accelerated human thought to such a degree that we have outpaced ourselves.

It doesn’t help that so much of this precious resource is used against itself. The processing power that spammers use to spread new spam is pitted against the processing needed to identify and block it. We revere projects like reCAPTCHA because they harness, for something good, computing power that would otherwise go to waste.

So, there is something heartwarming about seeing that my little lappy is running at full steam. It’s actualizing some potential. I hope I’m putting it to good use.

EDIT: Ironically, just an hour or so after I wrote this, my laptop shut down spontaneously and wouldn’t restart until I took the battery out and put it back in. Maybe lappy couldn’t handle it after all. I’ll be doing more intensive computing in the cloud from now on.