
Thursday, May 29, 2008

Absolutely Amazing

Since it would take quite a patchwork of quotes to summarize this story, I'll just give a few bullet-points of my own as a summary.
- Revision3 uses BitTorrent to distribute its own content (legal distribution, in other words)
- Everybody's second-favorite company MediaDefender decided to play with R3's tracker. Once they found a hole that allowed the tracker to serve torrents not published by R3, they began using the tracker to track their own files.
- R3 discovered that somebody was using their tracker for external content and banned MD's torrents
- MD's servers (the ones actually uploading the files that they were using R3's tracker to track) responded by DoSing R3's tracker (according to one person on Slashdot, MD has a 9 Gbps internet connection for this purpose), taking R3's tracker and other systems completely offline
- The FBI is currently investigating the incident. Some have suggested - and are praying - that the PATRIOT Act could be used to charge MD with cyber-terrorism, as defined by law.

Various coverage:
Inside the Attack that Crippled Revision3 (mirror)
MediaDefender, Revision3, screw-up
Revision3 Sends FBI after MediaDefender

Thursday, May 22, 2008

More on kd-tree Splitting

I was just reading the paper Analysis of Approximate Nearest Neighbor Searching with Clustered Point Sets, which, as the title indicates, analyzes the performance of three different kd-tree splitting policies: the standard policy, the sliding-midpoint policy, and the minimum-ambiguity policy, which is new in this paper.
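For reference, the standard policy splits the points at the median along the dimension of greatest spread. Here's a minimal sketch of that idea; this is my own illustrative code (the function name and representation of points as tuples are my choices, not anything from the paper):

```python
# Sketch of the "standard" kd-tree splitting policy: split at the
# median of the points along the dimension with the greatest spread.

def standard_split(points):
    """Return (axis, left_points, right_points) for one tree node."""
    dims = len(points[0])
    # Pick the axis along which the points are most spread out.
    spread = lambda d: max(p[d] for p in points) - min(p[d] for p in points)
    axis = max(range(dims), key=spread)
    # Split the points at the median along that axis.
    pts = sorted(points, key=lambda p: p[axis])
    mid = len(pts) // 2
    return axis, pts[:mid], pts[mid:]
```

Building the tree is then just a matter of applying this recursively until each node holds few enough points.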

The minimum-ambiguity method considers not only the full set of data points, like the others, but also a set of training regions which represent the distribution of regions to find points within - that is, the future searches themselves. As with the other methods, the goal of the algorithm is to minimize the average number of nodes in the tree that overlap each search region; however, when both the searches and the data points are known, the minimum-ambiguity method can do this better than the others.

Two of the scenarios analyzed are of particular interest. In all cases the data points were clustered; the two scenarios differ in the distribution of the training regions: either the same clustered distribution as the data points, or a uniform distribution. When both use the same clustered distribution, the minimum-ambiguity policy > the standard policy > the sliding-midpoint policy (here using the internet sense of '>' as "is superior to"). When the searches are distributed uniformly, the sliding-midpoint policy > the minimum-ambiguity policy, with both far superior to the standard policy.
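For concreteness, here's a minimal sketch of the sliding-midpoint rule: split the node's cell at the midpoint of its longest side, and if all the points end up on one side, slide the plane to the nearest point so neither child is empty. Again, this is my own illustrative code, not anything from the paper:

```python
# Sketch of the sliding-midpoint kd-tree splitting policy. The cell
# bounds (cell_lo, cell_hi) are the node's region, which can be much
# larger than the points' bounding box when the data is clustered -
# that's exactly when the "slide" kicks in.

def sliding_midpoint_split(points, cell_lo, cell_hi):
    """Return (axis, split_value, left_points, right_points)."""
    dims = len(cell_lo)
    # Split perpendicular to the cell's longest side, at its midpoint.
    axis = max(range(dims), key=lambda d: cell_hi[d] - cell_lo[d])
    split = (cell_lo[axis] + cell_hi[axis]) / 2.0
    left = [p for p in points if p[axis] <= split]
    right = [p for p in points if p[axis] > split]
    if not left:
        # All points right of the midpoint: slide to the nearest point.
        split = min(p[axis] for p in points)
        left = [p for p in points if p[axis] <= split]
        right = [p for p in points if p[axis] > split]
    elif not right:
        # All points left of the midpoint: slide the other way.
        split = max(p[axis] for p in points)
        right = [p for p in points if p[axis] >= split]
        left = [p for p in points if p[axis] < split]
    return axis, split, left, right
```

Note how a cluster tucked into one corner of a large cell would leave one side of a plain midpoint split empty; the slide guarantees both children get at least one point.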

So, what does this mean for writing a kd-tree for a game? Well, it provides some pretty interesting information, though it doesn't change the bottom line. As mentioned in my paper, more complex splitting policies like sliding-midpoint and minimum-ambiguity are only viable for data sets that are essentially fixed. In a game, this corresponds to immobile objects that are either unchanging (e.g. cannot die) or change extremely infrequently; in E Terra, this corresponds to doodads - objects which take up space but do not have any function - and immobile game objects such as grass (which is edible but not consumable).

As also mentioned previously, the distribution of points is not expected to be uniform - it's expected that there will be clusters of things at various focal points on the map. Furthermore, in the case of mobile objects, the search distribution will roughly equal the distribution of the data points themselves.

Unfortunately, neither of these facts is useful to us. Despite the mostly known distribution of searches, we cannot use the minimum-ambiguity policy in any of our trees because the set of search regions - corresponding mostly to the mobile game objects - is dynamic. Furthermore, it wouldn't be of any particular benefit to use the data points in the static trees as the search region distribution, as the majority of searches will be from the mobile objects, for things like collision detection and sight radii.

Thursday, May 15, 2008


Well, I've finally finished my last final of the semester, and the last homework assignment/term project was turned in last week. Now comes the much more pleasant task of purging myself of any knowledge acquired this semester. A list of some of the things I ought to work on this summer:
- Clean up and post kdTrieDemo
- Clean up and post TextBreaker
- Clean up and post the GUI code from MPQDraft
- Work on E Terra
- Do follow-up experiments to my AI term project
- Work on Secret Project V (no, that doesn't stand for "vendetta")
- Work more on my various languages

Though somehow I'm betting the only one I'll do in any great amount is:
- Play World of Warcraft

Saturday, May 10, 2008

The Story of Comcast

As companies like Comcast increasingly appear in technology news, some might wonder how the entire situation with Comcast came to be. Well, this abridged version of the actual shareholder delegate meetings, in two acts, might shed some light on the topic.

Act 1

Shareholder Delegate #1: Gentlemen, I propose that we advertise higher bandwidth, increase prices, and sign up more customers. Yay or nay?
Delegate #4: What is the business impact of this proposal?
Delegate #1: More money for us.
Delegate #4: Yay.
Delegate #3: Yay.
Delegate #5: Yay.
Delegate #2: Yay!
Delegate #5: I propose we increase network capacity to accommodate those additional customers.
Delegate #2: What is the business impact of this proposal?
Delegate #5: We'll need to spend some money in the short term to...
Delegate #2: Nay!
Delegate #4: Nay!
Delegate #3: Yay.
Delegate #1: Nay.

Repeat 37 times

Act 2

Delegate #2: Holy shit! We've got way more network traffic than the network can handle! It's strangling our network to death!
Delegate #4: This is a disaster! Quick, somebody find a scapegoat!
Network Technician: Well, it looks like BitTorrent is using up a fair amount of bandwidth.
Delegate #4: Kill it, quick!
Network Technician: BitTorrent traffic blocked. Network performance has returned to mediocre levels.
Delegate #2: Whew. Crisis averted.
Delegate #2: Now then, I propose we increase advertised bandwidth and sign up more users. Oh, and we can't forget to increase the price; we are selling a limited resource, after all.
Delegate #3: What is the business impact of this proposal?
Delegate #2: More money for us.
Delegate #2: Yay.
Delegate #5: Yay.
Delegate #4: Yay!
Delegate #3: Yay.
Delegate #5: I propose we upgrade our network to...
Delegate #2: I thought we voted you out months ago. Security!
*delegate #5 is dragged out of the room by security guards*

*cut to next business meeting*

Delegate #4: Gentlemen, I propose that we raise advertised bandwidth, increase prices, and sign up more customers.


This post inspired by Mac Chaos' various parodies over the years, which are probably better than mine.

Friday, May 09, 2008

Spatial Sorting with kd-trees - Part 2

Here's the paper itself. The teacher required that I use the ACM format, but I cut corners where possible (e.g. he only said we needed to have those four sections) :P I should note that the sections themselves are from specifications given by the teacher. I think if it were an ACM paper, or even if I were coming up with the structure myself, it would have been organized significantly differently.

Thursday, May 08, 2008

Spatial Sorting with kd-trees - Part 1

So, today I turned in my graphics programming term project... what's done of it, anyway. Spatial Sorting with kd-trees is the name of the paper/presentation; not quite as impressive sounding as my AI project, but then, this project as a whole is less impressive than my AI project (and my third term project even less) :P

Well, I was supposed to give the presentation today, on the last day of class. Unfortunately, due to gross lack of time, that didn't actually happen. If I recall correctly, 12 people were scheduled to go today, in 75 minutes of class time; that's about 6 minutes per person. Problem is, everybody made their presentations with at least 10 minutes in mind (not unreasonable), and it took everybody a couple minutes to get their Powerpoint presentation running on the overhead projector. By the end we'd gone from allowing the first 3 or 4 people to give their full presentations, to "7 minutes each", to "5 minutes each", to "3 minutes each", to "run your demo program and sit down"; and at least one person (me) never did get a chance to go (I'm not sure if there were others). So, I wrote the URL of my blog on the whiteboard and told people I'd post the presentation there.

The presentation itself is here. I'd recommend that anyone who hasn't seen it before also look at my previous description of kd-trees and kd-tries, as I'm not sure the slides alone, without the spoken part of the presentation, will give you a clear explanation.

The demo program is also here, compiled with XNA 1.0 Refresh and XNA 2.0. The controls:
- Space bar: toggles pause (initially paused)
- H: toggles display of the kd-tree divisions, the simulated view frustum, and statistics about the search for objects in the view frustum (the statistics are, in order: number of objects in the view frustum, number of objects evaluated, the percentage of objects evaluated that are in the view frustum, the number of tree nodes navigated, and the ratio of objects in the frustum to combined nodes and objects visited)
- Arrow keys: move the view frustum
I'll probably post a rant about XNA in the near future; just make sure you download the version of my demo for the version of XNA you have installed. I'll probably post the written portion of the project tomorrow, after I finish it.

I'm still hoping to post the source to TextBreaker and kdTrieDemo, although that's gonna be at least a week from now (at the earliest), as finals are next week.

Sunday, May 04, 2008

Orthographic Language Identification Using Artificial Neural Networks

Finally finished adding in the figures from the presentation that weren't in the original version of the paper. I have no idea why those three Visio figures are so ugly; they don't look that way in Visio, Word, or PowerPoint, but magically turn ugly when printed with PDF reDirect (which had worked well before those three figures).

Orthographic Language Identification Using Artificial Neural Networks

I'd like to post the full source to TextBreaker, but a lot of it was written in a hurry and needs cleaning up and commenting. Given my infamous laziness (and past experience with MPQDraft), time will tell if I ever get around to it :P

Saturday, May 03, 2008

The Grand Unification

The assumptions you start with can often limit the set of conclusions you are able to arrive at through logical reasoning. Computer science people were having a heck of a time attempting to adapt the binary tree - the standard for in-memory search structures - to media with large seek time, particularly disk drives. Progress was slow and fairly unproductive while the basic definition of the binary tree held. Finally, somebody questioned the assumption and thought: what if we built the tree from the bottom up? And so the B-tree was born, and it remains the standard in index structures to this day, with only incremental improvements.

Other times, when creating, the assumptions themselves are fun to play with and observe the results. In Caia, I initially envisioned all of the major words - nouns, adjectives, and verbs - as nouns. Nouns became adjectives when used with an attributive particle analogous to "having" (e.g. "woman having beauty" vs. "beautiful woman"). Taking an idea from Japanese, nouns became verbs with an auxiliary verb meaning more or less "do" or "make" (e.g. "make cut" vs. "cut"). Unfortunately, I ultimately concluded that verbs had to be separate, due to both practical concerns (specifically, concerns about making thoughts too long) and theoretical concerns (different nouns require different semantics, which would produce inconsistent theoretical behavior).

With another one of my languages, I took a different route, with some very interesting results. As this language is synthetic (unlike Caia, which is strongly analytic and isolating), I had quite a bit more flexibility. This language was actually modeled on the Altaic languages - Japanese, Korean, Mongolian, Turkish, etc.; as such, I suppose I can't claim that I invented this (what I'm getting to), but merely took what existed in Altaic languages and perfected it to a degree that doesn't exist in nature - at least to my knowledge.

The result is a language in which nouns, adjectives, and verbs are, in fact, all verbs; they are conjugated exactly the same, and play the same grammatical role. Even though this particular language is fictional, and I'm not expecting anybody to actually speak it, this idea might show up in other languages of mine (possibly a real one), as I find it exceptionally elegant. However, I do like to refer to "nouns" as substantives, "adjectives" as attributes, and "verbs" as actions; this is done because there are slight differences in meaning between the bases in the three cases.

I explained previously that Japanese verbs had several base forms - usually distinguished by one vowel - which were then agglutinated with other things to form complex verb forms. I won't describe them again, as I use different names for the ones in my language, which might result in confusion. In my language, there are at least six different base forms of each verb: neutral, conclusive, attributive, participial, instantiative, and conjunctive (note that these names are not final, and I'm open to suggestions).

The conclusive form is the same as the Japanese conclusive: it's the main verb of the last clause in a sentence; it indicates the end of the sentence, and often has a number of suffixes indicating various details about the sentence. The attributive form is the same as one of the two uses of the Japanese attributive: it's the verb of a relative clause; adjectives are attached to nouns by forming relative clauses (e.g. "cat that is fat" vs. "fat cat"). The participial form resembles the second use of the Japanese attributive form: it is a noun referring to the act named by the verb (essentially an English gerund or participle, as in "watching FLCL makes me want to kill people"). The instantiative is unique to my language, and refers to an instance of the verb's action; for example, "a run" would be an instance of the verb "run". The conjunctive is similar to the Japanese -te form (and also includes the Japanese conjunctive base); specifically, it is used in verbs not in the last clause of a sentence (I'll come back to that). Finally, the neutral form is used in agglutination, and has something of a flexible, context-dependent meaning.

So, how exactly does this framework allow the grand unification? Let's look at an example of the specific meanings of the different bases for each word type (substantive, attribute, and action). I should note, though, that not every base is necessary to unify the three; some are simply part of the bigger picture for the language.

For the substantive "human":
Conclusive form: "is [a] human"
Attributive form: "who is [a] human"
Participial form: "being human"
Instantiative form: "human"
Conjunctive form: "is [a] human, and..."

For the attribute "fat":
Conclusive form: "is fat"
Attributive form: "who is fat"/"fat"
Participial form: "being fat"
Instantiative form: "fat thing/person"
Conjunctive form: "is fat, and..."

For the action "travel":
Conclusive form: "travel"
Attributive form: "who travels"
Participial form: "traveling"
Instantiative form: "journey"
Conjunctive form: "travel, and..."

Thus we are able to use identical conjugation for each type of word, treating the first two as stative verbs and the last as an active verb, in an elegant unified system. The real key to this, I think, was the separation of participial and instantiative forms. Note that not all actions have an instantiative form; it only exists where it makes sense: where something is produced or performed.

Now that I've explained how the unification works, there's just one more loose end to tie up: the meaning of the conjunctive form. This form is somewhat foreign to English speakers, because English only works this way in one of the circumstances this language uses it for (specifically, conjunction - e.g. "Murasaki is seven years old and [is] surprisingly well-spoken").

In English, when we have multiple clauses in a sentence that are related in a particular way, they are generally joined by some linker word that carries information about the relationship between the clauses; furthermore, the verbs in all clauses are conjugated normally. In Japanese, all non-main clauses are simply joined, often without any indication of what the relationship is (for matters of time, this is not that unusual; many languages lack such words); as well, the verbs in all but the main clause are deficient - they lack various things like tense, mood, politeness auxiliaries, etc. This is a matter of economy; all that stuff they stick on the verb at the end of the sentence can be rather lengthy. Thus it uses a generic verb form which in some ways resembles the -ing form of English verbs; this form indicates a conjunctive relationship between sentences (note that the literal conjunctive, as in the example a bit above, is actually indicated with a separate form in Japanese, appropriately known as the "conjunctive form/base"; what I've done is merged the two uses).

Here are some examples of things that would use this conjunctive form in Japanese. The first version is how it would be said in Japanese (note that I'm conjugating all verbs here, even though only the last one would be conjugated in Japanese); the second version shows how we would typically say the same thing in English.

Simultaneity: "I looked at manga and she looked at novels"/"I looked at manga while/as she looked at novels"
Coincidence: "I went shopping and ran into a friend"/"I ran into a friend when I went shopping"
Sequence: "I got a haircut, [and] went to the bank, and went to the supermarket"/"I got a haircut, went to the bank, [and] then went to the supermarket"
Consequence: "I overslept and was late for class"/"I was late for class because I overslept"

So that's all the conjunctive form is. On an interesting random note, you might notice that none of the first versions in those examples sound unnatural; native English speakers might very well use them in place of the more precise second versions (though of course it would have sounded extremely strange if I had only conjugated the last verb, as Japanese does). This indicates that even English tolerates this kind of vagueness; and for that matter, there are ways of indicating some of those relationships explicitly in Japanese as well - they just aren't always used.