Posts Tagged ‘computer science’

New Image for Computing (NIC) is a project put together by WGBH and the ACM to spice up the image of computing professions among teens, especially girls and minorities. They released a study showing that, at least among boys, the mission has pretty much been achieved for minorities: Black and Hispanic male teens have a more favorable image of computing as a profession than white males do. Girls, on the other hand, think it really sucks. 45% of teen males think computing would make a very good profession, whereas only 10% of girls think so. 35% of girls think it's a bad choice, as opposed to 10% of males. Ouch!


The papers are out for WWW2009 (and have been for a bit), but I’ve only just gotten a chance to start looking at them. First of all, kudos to the ePrints people for improving the presentation of conference proceedings. This is a lot easier than having to do a Google Scholar search for each paper and hoping I find something, like I have to do with some conferences.

WWW2009 Madrid

There are a lot of very interesting ones, and here are a few that bubbled to the top of my reading list:

Data Mining Track

Semantic/Data Web

Social Networks and Web 2.0


Luis von Ahn has an insightful post lamenting the fact that we are holding onto a paper-world philosophy of academic publishing in a digital age. He floats the fledgling idea that a "wiki, karma, and a voting method like reddit" hybrid might supplant our current method. I'm always a little confused by the reluctance to change publishing models in academia. Granted, I have never struggled to get tenure at a university, nor is it remotely likely that I ever will. But still, computer scientists, of all people, should be willing to change and adopt a more sensible model. It turns out we're just people after all.

What might a wikarmeddit version of academic publishing look like? A good place to start might be Stack Overflow. It's a self-proclaimed combination of wiki, blog, reddit, and forum, with karma on top. Perfect, right?

The benefits of peer review by the herd are great, but not without pitfalls. First of all, you can be herd-reviewed by morons. Moron 1 might think everything Researcher A publishes is GOLD and give the thumbs-up no matter how badly the research was done. Ditto on the flip side, with Moron 2 hating everything Researcher A does. I'm not really being fair; the number of real morons who bother with this sort of thing is probably low, but the number of non-experts is a different matter.

On the other hand, open-sourcing research results like this allows all sorts of insights that you wouldn't get from traditional peer review. First of all, has a reviewer ever tried implementing an algorithm described in a paper? If you are a reviewer who has, I salute you. I doubt it's very common. But when I come across a paper that is interesting for a problem I'm working on, I do try to implement it. If it gives me fits, I either abandon the method or try to contact the authors. That feedback loop is built into a Stack Overflow-style academic review setting, where the herd gives this sort of feedback to the authors as part of the review process. You can see how this level of communication would be beneficial. Inane non-expert commenters will either be filtered out (if they are truly inane) or they will shed light on confusing parts of your research presentation, allowing your research to reach an even wider audience. That last goal is often given lip service by the scientific community, but I have rarely seen actual attempts to pursue it.

So the next question is: do we reinvent the wheel? Stack Overflow already has a community of smart people in place. Why don't we just start using it? Maybe SO could include some functionality for more research-oriented questions. All research can be viewed as a set of questions. Is this a good way of attacking this problem? Is there a better way of doing it? Is the methodology sound? Isn't my method the shiz?

Note: I’m fairly certain I’m not the first person making this call. I’m pretty sure I heard someone else recently make this point (maybe it was John Cook?) but i can’t find the reference.  Please comment.

I got most of the books at the top of my Christmas list this year. It was a great haul that will keep me busy for a while. Among them were:

The books on string and tree algorithms and collective intelligence should be self-explanatory. I wanted the book on data visualization because that was an overlooked skill in my education; I appreciate great data visualizations, and taking some steps to improve my understanding and skills in that area is worth doing. Finally, the book on evolutionary computing is for personal enrichment. I've been playing around with genetic algorithms since 1994, before I even got out of high school. It has always been just playing, though, and I wanted a more rigorous introduction.

With any luck, I’ll be posting some thoughts on these books in the coming months.

Git is a version control system that has been gaining popularity recently. If you have heard of or used Subversion or CVS, you are familiar with the basic principle: keeping track of changes made by multiple users to a set of documents (source code, text files, etc.). One of the chief benefits of version control in software is that you can roll back when the code has become broken or corrupted. It's easy to see which changes were made where, and broken code can be fixed much more easily than if you had no version control and had to reconstruct the working code from scratch. Unlike Subversion and CVS, Git is a distributed version control system: each user has their own copy of the entire repository and its history, branching and merging are much easier, and it's extremely simple to get started. Plus, having used all three, Git is the most fun.

Academic settings impose different constraints on code base management. The goal is usually less about code quality and more about exploring possibilities. Academic code is often quite shitty: hacked together by some grad student(s), with dozens of false starts and changes in requirements. Recreating previous experiments is often very difficult unless the grad student made provisions for such rollbacks. And if they have, it's probably done in a way that seemed logical to the grad student at the time but is a nightmare for someone new to the project. There are ways to avoid this by placing more of an emphasis on software engineering, but sometimes projects are so small or short-lived that it doesn't seem worth the trouble at first. And if you don't even have a clear picture of where you are heading, it might not even be possible (though you are probably doomed to many problems in that case).

To help combat these issues, I contend that every academic software project should use version control. Git makes that easy, and here's why.

1.  Creating the first repository is a no-brainer.

To create a new repository you simply type:

git init

It’s so easy, you can use it for anything.  To clone someone else’s repository, just type:

git clone git://location.of.origin.repository

Cloning is very similar to checking out in Subversion and CVS, except that you can now work completely independently if you desire.  And you can tunnel it through ssh (substitute ssh:// for git:// above), if you’re worried about security.
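
For example, a clone over ssh would look something like this (the user, host, and path here are placeholders):

git clone ssh://user@some.host/path/to/repository.git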

2.  You can still use it while off the grid.

In Subversion, creating the initial repository means needing some central place where all of the code goes.  If you are collaborating with several people, chances are this repository is not on your own machine so if you cannot access the network, you cannot access the repository.  With Git, you store the entire repository and history on your own machine so even if you are off the network, you can take advantage of all of the features of having version control.
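
A minimal sketch of that offline workflow (the file name and commit message are hypothetical); everything happens locally, and you sync up once you're back on the network:

git add analysis.py
git commit -m "Try a different smoothing parameter"
# later, back on the network:
git pull origin master
git push origin master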

3.  Branch your experiments.

Often the need arises to try out different approaches in academic coding.  Branching in Git is ridiculously simple:

git checkout -b new-branch-name

You can easily switch between multiple branches, merge branches, or discard them. One approach might be to keep the main architecture stuff in your master branch (the original) and use branches for different parameters in experiments. This lets you easily and logically separate functionality so that running an old experiment is just a matter of checking out the branch that pertained to it, as sketched below. Update: Thanks to Dustin Sallings for the shorter version of checking out a new branch.
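
For example, to rerun an old experiment and then fold a successful change back into master (the branch names here are hypothetical):

git checkout old-experiment    # switch to the branch for that experiment
git checkout master            # switch back to the main line
git merge old-experiment       # merge the experiment's changes into master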

4.  Version control your paper.

Why use a shared folder or email to edit your paper?  You can easily create a Git repository to collaborate and merge changes.  You can quickly see who contributed what to a paper.  Dario Taraborelli wrote about this a few months ago, though his point was that you would need your collaborators to be familiar with a version control system and they usually aren’t.  I am arguing that they should be.  On a side note, another VCS, Bazaar, is listed as an alternative in the comments to Dario’s post.
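
A sketch of what that might look like for a LaTeX paper (the file names are hypothetical):

git init
git add paper.tex references.bib
git commit -m "First draft"
# later:
git blame paper.tex    # see who last touched each line
git log -p paper.tex   # see every revision's changes to the paper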

5.  Convert into an open source project.

SourceForge has been around for a while, but the UI is absolute garbage. There is a better solution out there: GitHub. GitHub is free for open source projects and offers some great visualizations for tracking the life of your project. Of course, there is also Google Code, which is quite nice and easy to use, but it doesn't support Git, just Subversion. The other drawback to Google Code is that you have a lifetime max of 10 open source projects; there is no such limit with GitHub. Moving your Git repository to GitHub is also a simple matter of creating a project there and pushing to it.
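
Assuming you've already created an empty project on GitHub, pushing your existing repository up looks something like this (the user and project names are placeholders):

git remote add origin git@github.com:username/project.git
git push origin master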

Why does this even matter?  Check out Ted Pedersen‘s Empiricism is not a matter of faith [pdf] in the September issue of Computational Linguistics.  He contends that you should create academic software with the goal of releasing it.  This ensures the survivability of your project, increases the impact of your work, and allows reproducibility of your results.  Git makes that easier, n’est-ce pas?

6.  Keep track of your grad students.

Suspect your grad students are slacking? Check the commit logs! And now I prepare for hate mail from grad students. However, I think that if I had had this form of accountability, it would have made me more productive. Of course, you don't need Git for this; any version control system would do, but of all the systems I've used, Git's presentation of changes is the most user-friendly.
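
For instance, to see what one person has been committing lately (the author name is a placeholder):

git log --author="Grad Student" --since="2 weeks ago" --stat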

7.  Version control helps you write the paper.

When it comes time to write the paper, the version control logs can be used to provide a roadmap of what you have done.  Even though you probably have kept good notes, version control keeps a calendar of events that can add useful perspective (or fill in gaps when your notes are inadequate).
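
One way to get that calendar is a chronological, one-line-per-commit history:

git log --reverse --date=short --pretty=format:"%ad %s"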

8.  Git is faster and leaner than other version control systems.

Because you have the complete repository on your own system, most operations are much faster in Git. The Git project reports an order of magnitude improvement in speed for some operations, and Git's packed format reportedly uses less storage in most circumstances as well. Git has been reported to be almost three times more space efficient than Bazaar, the other distributed version control system mentioned above. Git also features an easy binary search (git bisect) for locating the commit that introduced a bug.
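
A bisect session looks roughly like this (the commit id is a placeholder):

git bisect start
git bisect bad              # the current version is broken
git bisect good a1b2c3d     # the last commit known to be good
# Git checks out a commit halfway in between; test it, mark it with
# `git bisect good` or `git bisect bad`, and repeat until Git names
# the offending commit. Then clean up:
git bisect reset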

9.  Version control makes it easier to bring new team members up to speed.

Speaking from experience, having a record of commits (especially well-documented commits) makes it much easier to get up to speed on an existing project. This applies not only to academic coding but to any coding endeavor. Good documentation doesn't hurt either.

10.  Save yourself some headaches.

I think you’ll minimize headaches if you use Git.  If not Git, at least use some version control system.  A lot of the things I listed above are covered by most version control systems, but Git combines regular advantages of version control in a way that is very friendly to non-linear coding situations.  Git also makes it a cinch to move your code into an open source project that can have a significant impact on your career as a researcher.  And Git is so easy to use, you have to ask yourself, why not?

Latent Dirichlet Allocation (LDA) is an unsupervised method for finding topics in a collection of documents. It posits a set of possible topics; each document gets its own mixture over those topics, and each word in the document is generated from that mixture. As a quick example, suppose we had a short document mixing the topics geology and astronomy:

The rover traveled many millions of miles through space to arrive at Mars. Once there, it collected soil samples and examined them to determine if liquid water had ever been present on the surface.

In this case, words like "rover," "space," and "Mars" would come from the astronomy topic, while words like "soil," "samples," and "surface" would come from geology. LDA finds these latent topics in an unsupervised fashion using the EM algorithm. EM is a two-step process for estimating parameters in a statistical model. The nice thing about it is that it's guaranteed to converge to a local maximum (not necessarily the global one!). However, it can take a while to converge, depending on the size and nature of the data and model. While I was in school, EM was one of the most confusing concepts, and I'm still not 100% on it, but it makes a lot more sense now than it did before.

In the context of LDA, EM is basically doing two things. First, we come up with a guess about how the topics are distributed. Next, we look at the actual words and re-estimate the model's probabilities based on those hypothesized topics. Eventually we converge to a locally "best" set of topics. These may not correspond to realistic topics, but they locally maximize the likelihood of the model (equivalently, they minimize the negative log likelihood). Usually LDA does a pretty good job of finding explainable topics given a decent amount of data.
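
Written out, the generative story from Blei et al. goes roughly like this for each document d (a simplified version of the paper's notation, where α is the Dirichlet prior over topic mixtures and β holds the per-topic word distributions):

\begin{aligned}
\theta_d &\sim \operatorname{Dirichlet}(\alpha) && \text{(topic mixture for document } d\text{)} \\
z_{d,n} &\sim \operatorname{Multinomial}(\theta_d) && \text{(topic for the $n$th word)} \\
w_{d,n} &\sim \operatorname{Multinomial}(\beta_{z_{d,n}}) && \text{(word drawn from that topic)}
\end{aligned}

Roughly speaking, EM (a variational flavor of it in Blei et al.'s case) alternates between inferring the hidden θ and z values for each document and updating α and β to fit them.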

For more details about LDA, check out the paper by Blei et al (2003).  LDA has been extended in a number of different directions since the original paper, so it’s essential reading if you’re doing any sort of topic modeling [citation needed].

References

D. M. Blei, A. Y. Ng, and M. I. Jordan, "Latent Dirichlet Allocation," Journal of Machine Learning Research, vol. 3, 2003, pp. 993-1022. [pdf]

GWAP Promo

Posted: 18 May 2008 in Uncategorized

Figured I’d post this promo video the GWAP group did.  Unfortunately, I wasn’t able to participate in the filming of it since I was visiting my dad and family in Ohio for the first time after many years.  So unfortunate in that I missed the filming, but the alternative was worth it.  Johnny Lee had a not insignificant role in the making of the video, I believe.  Check out his stuff if you haven’t, he’s doing some pretty amazing things with Wii remotes.