Tuesday, December 27, 2005

What a Pain

One of the things I got for Christmas was the Memoirs of a Geisha soundtrack. As with all my other CDs, I immediately wanted to rip it for my MP3/Ogg collection. This sounded simple, but ended up being rather unpleasant.

The first sign of trouble came when AudioCatalyst couldn't calculate checksums for the tracks. According to the manual, this happens when there isn't at least one frame of silence at the beginning and end of each track. One thing this may mean (which is what I initially suspected) is that the tracks are not discrete, but transition directly from the end of one track to the beginning of another, with no inter-track delay (the Gladiator soundtrack did this, just to name one prominent example). This is troublesome because it's difficult to perfectly mimic in lossy audio formats, and because it sounds bad when you play the tracks on shuffle (like I do).

Well, the real reason turned out to be even more annoying. A quick look at the tracks revealed that there was noise at the beginning and end of each track; but even just looking at the waveform, it was oddly shaped noise. The frequency graph confirmed what I'd suspected from the waveform: there was a prominent spike at an upper frequency, embedded in otherwise brown noise; further investigation revealed that this spike was centered at 15,733 Hz. At the beginning and end of each track (it fades out toward the center), this peak stood about 50 dB above the surrounding noise.

While that spike was only present at the beginning and end of the tracks, I soon found that a different spike was present throughout most of each track. Some careful isolation revealed that this one was centered at 15,700 Hz. Its intensity varied anywhere from 14 to 27 dB, and could differ between the two channels.

The 15,700 Hz spike I managed to dispose of by manually applying a notch filter to each track. The 15,733 Hz spike, however, I left alone: it would be a pain to deal with, it only appears at the beginning and end, and it's absent during most of each track.
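
For the curious, a notch filter isn't anything exotic. Here's a rough C++ sketch of one, using the standard "audio EQ cookbook" biquad coefficients - this is just an illustration, not the filter my audio editor actually uses, and the frequency and Q values are only examples:

    #include <cmath>
    #include <cstddef>
    #include <vector>

    // Sketch only: a standard biquad notch filter.
    std::vector<float> Notch(const std::vector<float>& in, double sampleRate,
                             double centerHz, double q)
    {
        const double pi = 3.14159265358979323846;
        double w0    = 2.0 * pi * centerHz / sampleRate;
        double alpha = std::sin(w0) / (2.0 * q);
        double a0    = 1.0 + alpha;
        // Coefficients, already normalized by a0.
        double b0 = 1.0 / a0, b1 = -2.0 * std::cos(w0) / a0, b2 = 1.0 / a0;
        double a1 = -2.0 * std::cos(w0) / a0, a2 = (1.0 - alpha) / a0;

        std::vector<float> out(in.size());
        double x1 = 0, x2 = 0, y1 = 0, y2 = 0;   // previous inputs and outputs
        for (std::size_t n = 0; n < in.size(); ++n)
        {
            double x = in[n];
            double y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2;
            x2 = x1; x1 = x;
            y2 = y1; y1 = y;
            out[n] = (float)y;
        }
        return out;
    }

Running each channel through something like Notch(samples, 44100.0, 15700.0, 30.0) carves out the hum while leaving the surrounding spectrum mostly untouched; the higher the Q, the narrower the notch.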

So, now that we know what these spikes (hums) are, why are they there? Unfortunately, I don't know the answer, though I can think of a number of possibilities. It could be the hum of some piece of equipment in the studio, during the recording of the music (though I've never encountered this kind of thing before); I'm told that power supplies could produce hums in this frequency range. It could also be some sort of resonance with a stray instrument or item in the studio, causing it to vibrate and amplify those particular frequencies. Finally, it could be some kind of watermark or rip-protection system; if it was the latter, I could test for it by recording the music directly from my stereo (a significant amount of work, considering that the stereo's in a different room than my computer), but I don't really have any way to test the first.

So, there's Q's unsolved mystery of the week.

Sunday, December 25, 2005

Q's Do-It-Yourself Anime Christmas

Yeah, so that title kind of sucks. What it's supposed to mean is: this is the anime/manga (note the glossary at the end of the post) Q recommends you get yourself for Christmas (and he'd consider them all worth the money).

Ai Yori Aoshi (manga & anime). A "guys'" love story about two rich (before the guy ran away from his home, that is) kids betrothed to be married, and trying to get along living together (and, of course, plenty more female characters that love the guy as well). Lots of humor, an amusing story, and typical Japanese ecchi* (though not hentai). I originally started watching this series because it sounded similar to one of the stories I wrote; this similarity ended up being pretty shallow, but I still liked the series a lot.
Azumanga Daioh (manga). Awesome comedy series about the daily life of a group of... unique Japanese high school girls, and their equally unique teachers. There's also an anime version, but I don't consider it nearly as good, as the conversion from Azumanga Daioh's comic strip format to anime is quite poor.
Elfen Lied (anime). An extremely vicious diclonius (a two-horned human subspecies, with a number of invisible arms - vectors) escapes from a research facility that has been experimenting on her (and other diclonius) for years. A second, infantile personality develops, allowing her to live peacefully with a boy and his cousin, until the research facility attempts to reclaim her, and her original personality reappears. Primarily drama and action (with very graphic violence), but also a good amount of comedy, and ecchi. There's also a manga, which covers almost twice as much material as the anime, but it has yet to be released in English (and fan translator groups have only gotten about 1/4 through translating).
Full Metal Panic (manga and anime). An elite mercenary (raised in war zones worldwide) has difficulty adjusting when he is sent to live the life of a Japanese high school student while protecting a female student from a terrorist organization. Best for its comedy, but also a moderate amount of drama. Can occasionally be ecchi.
Great Teacher Onizuka (manga and anime). None-too-bright biker gang leader turned teacher takes on the school's most delinquent class, and proves that he can do a better job teaching with his brawn and heart than the other teachers can with their Ivy League diplomas. Comedy and drama, with a bit of ecchi. The manga is almost twice as long as the anime.
Love Hina (manga). The lord of ecchi (more than my taste, actually, but it has more than enough good stuff to make up for it). The story of a boy (attempting to enter Tokyo University, the most prestigious college in Japan) who inherits an inn turned girls' dormitory from his grandmother. In the process, he develops various relationships (of various natures, and only one of them ending up being romantic) with the girls in the dorm. Hilarious comedy, as well as drama, and a bit (okay, maybe more than a bit) too much ecchi. There's also an anime, which contains some material not in the manga (although the reverse is true, as well), but I don't like it as much as the manga (although I do prefer the lighter ecchi of the anime, which is the version I saw first).
Noir (anime). A pair of assassins make their services available for hire under the name of 'Noir', but soon discover that the name was already used by an ancient secret society. An interesting and amusing series (primarily drama), though things sort of start to suck near the end.
Rurouni Kenshin (manga and anime). Fictional story set in historical Japan (in the years following the revolution of the 1860s) and anchored around real events and people. Kenshin, a legendary assassin/warrior from the Meiji Revolution (a fictional character, although inspired by the real Hitokiri Gensai from that time), vowed that after the revolution he would no longer kill, but would find another way to continue helping the people of the world. So, he wanders from place to place, doing what he can, using his legendary swordsmanship and sakabato (a reverse-bladed katana, with the sharp edge on the inside of the curve and only a blunt edge where a katana's cutting edge would normally be, so it cannot be used to kill). Primarily drama, with a good dose of comedy. I'd consider the manga better than the anime, although both contain material not in the other.
School Rumble (manga and anime). Kenji (male) enters high school with a proud reputation as a delinquent, and has his eyes set firmly on Tenma (female). Tenma, however, is too thoroughly smitten by Oji (male) to notice; Oji also seems to be pretty clueless in general, let alone about her interest in him. Primarily comedy, some drama.
Stars (Seikai) Trilogy (Crest of the Stars, Banner of the Stars, Banner of the Stars II; manga and anime). A sci-fi series about a noble human boy entering the Abh (you just know the original design concept was 'blue-haired elves in space') military, where he becomes a friend of an Abh princess also in training, who commands her own small ship. Ultimately, a war breaks out between human and Abh, which they must participate in. War, culture clash, growing up, falling in love, etc.; primarily drama, but some comedy (and it's rare for the Japanese to not throw in at least a tiny bit of ecchi for good measure).
Yotsuba &! (manga). Another hilarious comedy series by the author of Azumanga Daioh, about an exuberant (and rather psycho) little girl, enjoying life to the fullest.

Glossary:
Anime: Japanese "cartoons". Unlike American cartoons, anime may be for any age group, and may, in the extreme case, be pornographic.
Ecchi: Kind of like 'risque' or 'lewd'. Anything from panty shots to actual nudity (in Japan this is acceptable material for network television).
Hentai: Animated pornography (note that nudity alone is not sufficient to call something hentai).
Manga: Japanese "comics". Again, may be made for any age group, and may even be pornographic.

Thursday, December 22, 2005

& Debates - Fidelity

This one (registration required) is actually not one of my debates, although it's a timely topic that was brought up in another thread (and thus I'd been thinking about it, as well):
So this is the place to discuss serious matters. I love it.

Anyway, I also have something that I'd like us to talk about. Throughout the generations, we're taught that once one reaches a certain age, one marries, has children and continues to live on, without thinking much about it anymore.

At the center of all of this lies fidelity. When you're in love, you can't imagine ever wanting someone or something else. But I've been thinking about this for most of my adolescent life, and even though I am romantic and I really want to believe in eternal love and faithfulness, I think fidelity is against human nature. And that love and desire sometimes go hand in hand, but that they're not one and the same.

Please do not think that I'm pleading for adultery here, I just want to know what you think.

Sunday, December 18, 2005

Q1 Register Set

As mentioned previously, the Q1 has 32 32-bit general purpose registers, as well as the instruction pointer, instruction register, and various flags. I've adopted the MIPS register naming convention (registers 0-31 being $0 to $31), because it elegantly allows the use of register nicknames: aliases for certain registers, based on the designated function of the register (we'll come back to this in a couple paragraphs).

Of these 32, 31 are truly general purpose - the software can do anything it wants with them, and the CPU doesn't care. Register 0, however, has a very special purpose: it is the zero register (a concept I read about in the MIPS architecture, and found particularly useful). The zero register always contains the value 0, regardless of what is written to it; this allows the value 0 to always be available to the program, as well as allowing unneeded values (such as the result of subtraction, when all you really care about is the value of the flags - for a conditional branch) to be discarded without overwriting another register.
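
To make the zero-register behavior concrete, here's how it might look in an emulator's register file (just an illustration, not actual Q1 emulator code):

    #include <cstdint>

    // Illustration only: reads of $0 always return 0, and writes to $0 are discarded.
    struct RegisterFile
    {
        uint32_t regs[32];

        uint32_t Read(unsigned index) const
        {
            return (index == 0) ? 0 : regs[index];
        }

        void Write(unsigned index, uint32_t value)
        {
            if (index != 0)
                regs[index] = value;
        }
    };

In hardware the effect is the same: register 0 either isn't backed by real storage at all, or its output is simply wired to zero.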

$31 is nicknamed $ra - the return address. This register is supposed to hold the return address for the current function (at least when the function returns). If the function calls other functions, it must be sure to save and restore its own $ra. As with all the registers that follow, this is not a hardware restriction (as it was in the case of $ra on MIPS), but a software convention that should be followed so that all code remains compatible.

$30 is $sp: the stack pointer. As previously described, the stack is a software structure on Q1; however, as it's integral to the operation of functions, it must be standardized. As with MIPS, $sp always points to the NEXT available stack location (rather than the first occupied stack location, as with x86).

$29 is $gp: the global pointer. This is more of a reserve for function use. Well-behaved Q1 programs should not hard-code the addresses of their global variables. Rather, should they need to access global variables, they are expected to construct a pointer to them by adding a (negative) offset to the instruction pointer (which can be obtained with the LRA - load relative address - instruction). The global variables can then be accessed through $gp + offset. This register is nonvolatile; if a function uses it, it must save and restore the previous value, so that it does not overwrite the previous function's $gp.

$2-$5 are $p0-$p3: the first four 32-bit (or smaller) parameters passed to a called function. On return, $2 is $rv: the return value (if the function has a return value of 32 bits or less).

$2-$15 are also $v0-$v13: the volatile registers. Functions may use these registers however they like, without regard to saving previous values. Thus, if a function needs a value preserved while calling another function, it must either use a different register or save the value on the stack. These registers allow functions to do short-term calculations (where the intermediate values are inconsequential and may be overwritten, and only the end result is important) without the need to save registers on the stack.

$16-$25 are $n0-$n9: the nonvolatile registers. Functions may use these registers however they like, but must save and restore the previous values of any registers they use. These registers are thus preserved across function calls, and may be used to store values needed after a call, without the caller needing to save them to the stack.

Having both volatile and nonvolatile registers allows for best-of-both-worlds optimization. Intermediate values that will not be needed more than a few instructions later (and not across a function call) can be placed in the volatile registers, and no prolog or epilog code is needed to clean them up. On the other hand, important values can be placed in nonvolatile registers, where cleanup overhead is only needed if the called function actually uses the registers (a probability that is decreased by the fact that the volatile registers are available for quick computations).

The remaining 4 registers are reserved, at this time. The register set isn't 100% final, and I may yet find a use for them.

Saturday, December 10, 2005

Asynchronous I/O - Emulation Plan

One of the really cool things I have planned for LibQ is an emulated asynchronous I/O server. This server emulates asynchronous I/O either on systems that don't support it natively (like Windows 9x) or on data sources that operate synchronously. The latter, in particular, is useful, because it lets you add asynchronous I/O support to almost anything (archive file libraries, for one example).

The basic design of CAsyncServer (a singleton) is pretty straightforward. It maintains two thread pools (with separate request lists), which function almost identically: dequeue a request from the request list, perform the operation synchronously, perform the completion notification, then grab another request, sleeping if none are available.

The first thread pool is exactly what you'd expect: a pool for issuing calls on file system files (things you access directly through the operating system). As these operations are almost exclusively I/O (save for the bit of processing overhead required for each call), the size of this pool is capped only at a fairly large number of threads (just enough of a cap to keep the system from getting swamped), and all of them execute in parallel. If the list of outstanding requests becomes too long, the dispatcher will spawn another worker thread for the pool.

There are two pools because CAsyncServer is intended to also work as a server for things other than file system files. One thing I plan on using it for is QuarkLib, the library for Quark, my archive file format. In this case, most operations will require a moderate amount of CPU, as most data will be compressed. If you tried issuing a bunch of operations like this on the first thread pool, with its sky-high limit on the number of concurrent threads, the entire system would grind to a halt. The second thread pool, then, is limited to a small number of threads (around the number of CPUs in the system, give or take). While there may be some waste due to file system access, this ensures that the asynchronous I/O server won't strangle the entire system.
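
For the curious, each pool's workers boil down to something like this (a sketch only - the names and the queue type here are placeholders, not the real LibQ classes):

    #include <condition_variable>
    #include <cstddef>
    #include <deque>
    #include <functional>
    #include <mutex>
    #include <thread>

    // Placeholder names; not the actual LibQ API.
    struct Request
    {
        std::function<void()> Perform;   // the synchronous I/O (or decompression) work
        std::function<void()> Notify;    // the completion notification
    };

    class WorkerPool
    {
    public:
        explicit WorkerPool(std::size_t maxThreads) : m_maxThreads(maxThreads) { }

        void Queue(Request request)
        {
            std::unique_lock<std::mutex> lock(m_lock);
            m_requests.push_back(std::move(request));
            // If requests are piling up and we're under the cap, add a worker.
            // Threads are detached for brevity; fine for a singleton that lives
            // as long as the process.
            if (m_requests.size() > m_threadCount && m_threadCount < m_maxThreads)
            {
                ++m_threadCount;
                std::thread(&WorkerPool::WorkerLoop, this).detach();
            }
            m_wake.notify_one();
        }

    private:
        void WorkerLoop()
        {
            for (;;)
            {
                Request request;
                {
                    std::unique_lock<std::mutex> lock(m_lock);
                    // Sleep until a request is available.
                    m_wake.wait(lock, [this] { return !m_requests.empty(); });
                    request = std::move(m_requests.front());
                    m_requests.pop_front();
                }
                request.Perform();   // do the operation synchronously
                request.Notify();    // then tell the caller it finished
            }
        }

        std::size_t m_maxThreads;
        std::size_t m_threadCount = 0;
        std::mutex m_lock;
        std::condition_variable m_wake;
        std::deque<Request> m_requests;
    };

CAsyncServer then just owns two of these: one with a generous thread cap for plain file system I/O, and one capped near the CPU count for CPU-heavy sources like QuarkLib.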

Friday, December 09, 2005

& More Infatuation

Just finished watching Elfen Lied, again, and I still like it a lot. Grabbed a beautiful new desktop from it, too. Again, if you can get your hands on it (and can stomach the gore), go watch Elfen Lied.

Wednesday, December 07, 2005

& Debates - Truth and Fiction

The latest installment (registration required) in Q's never-ending supply of debating material, and this one's a whopper:
So, I've got this thing called a-p mail that I invented for one of the stories I'm writing. That's anonymous private mail: an e-mail system that allows a piece of encrypted mail to be routed on a public network from a sender to the intended recipient (which the sender must know) without the mail server ever knowing the identity of either. This is not to be confused with something like an Onion network, which can act as an anonymous, untraceable proxy server; this is a method of getting an e-mail to the intended receiver without so much as an e-mail address to identify the receiver.

This is something I invented from scratch; while I don’t know that no one else has thought of it before, I can say that I have no knowledge of anyone else ever implementing or considering this method. This brings up the question of whether I should look into patenting it.

However, if nobody has considered this method previously, a question of ethics arises. This method was invented as a means of secure and anonymous communication between terrorists and other underworld persons, and could, if introduced to the real world, be used just the same. Assuming nobody else has publicly described the method, would it be ethical to bring it into the real world?

Not Quite Idle

Well, since the last few posts, I haven't been completely idle. I've been (from time to time) researching possible solutions to the distortions in a lot of the Mega Man 6 music. This has been complicated by the fact that I'm not that good at math (at least not anything at or above calculus). Right now I have some ideas that sound promising, but I'll have to try them before I post about them.

Monday, December 05, 2005

& Debates - Propaganda?

My latest debate topic (registration required):
As part of an information offensive in Iraq, the U.S. military is secretly paying Iraqi newspapers to publish stories written by American troops in an effort to burnish the image of the U.S. mission in Iraq.

The articles, written by U.S. military "information operations" troops, are translated into Arabic and placed in Baghdad newspapers with the help of a defense contractor, according to U.S. military officials and documents obtained by the Los Angeles Times.

Many of the articles are presented in the Iraqi press as unbiased news accounts written and reported by independent journalists. The stories trumpet the work of U.S. and Iraqi troops, denounce insurgents and tout U.S.-led efforts to rebuild the country.

Though the articles are basically factual, they present only one side of events and omit information that might reflect poorly on the U.S. or Iraqi governments, officials said. Records and interviews indicate that the U.S. has paid Iraqi newspapers to run dozens of such articles, with headlines such as "Iraqis Insist on Living Despite Terrorism," since the effort began this year.
Los Angeles Times (found via Mac's blog)

So, couple questions for debate. First, is this propaganda, when it's factual? And in either case, is this acceptable practice (not poor ethics)? If yes, how do you handle the fact that the stories lie about their authors? If no, what do you have to say about the fact that the citizens would be less likely to trust (factual, in this case) news from the US-promoted government?

AMD Opteron Architecture

Dark_Brood found me an amazingly detailed site with more than you ever wanted to know about the Opteron architecture.

Spoils - UPDATED

Just a couple of the tracks I've extracted (and had time to edit and encode):

http://www.campaigncreations.org/starcraft/mpq2k/Misc/data4.aif_0083_E4C2E552.ogg
http://www.campaigncreations.org/starcraft/mpq2k/Misc/data4.aif_0084_3FD54DC5.ogg
http://www.campaigncreations.org/starcraft/mpq2k/Misc/data4.aif_0089_85EE44B9.ogg

UPDATE: Reuploaded the new files, after recompressing them in standard quality mode, so they should be significantly better quality.

Friday, December 02, 2005

Vorbis Bad!

Well, this isn't a happy discovery. While editing the music from Mega Man 5 (to archive it in my library of game music), I discovered that Vorbis was destroying the upper frequencies of the tracks at 70 kbps (I set it to encode at 80 kbps, as they're all mono, but for some reason on MM5 it decided to use 70). Take a look at the results:

The original track, after ADPCM decompression:


The same track, after Vorbis compression at 70 kbps:


The track at 80 kbps (forced by reducing the range of allowable bitrates):


The track again, at 96 kbps:


UPDATE: Hmm. Looks like this is actually only a problem when not using standard quality mode (was using average bit rate mode before). In other news, lots of Mega Man 6 music is recorded at high volume, and noticeably distorted. Maybe I should send an e-mail to Capcom...

Thursday, December 01, 2005

Little Change to Asynchronous I/O

I've decided to make a slight change to how LibQ handles one of the three methods of completion notification. Instead of using a CEvent to notify the application of I/O completion, the new method internalizes the waiting process. Practically, instead of waiting on a CEvent you specify when you begin the I/O, you'll now wait on the I/O's CAsyncStatus itself (this requires that the operation be marked as waitable when initiated).

There are a couple reasons for this. First, it should be a little faster on POSIX systems, due to more direct OS support. Second, it will allow timed waits on I/O completion (remember that timed waits could not be implemented with the CEvent on POSIX, due to technical difficulties).

A third benefit, although I don't know yet whether I'm even going to do this, would be the possibility of waiting on multiple waitable I/O operations at once, returning when one of them completes (or the timeout expires). However, due to incomplete OS support, this might be slower than desired.

& Debates - Pornography

My latest attempt (registration required) to incite flame wars for my own amusement:
From Social Psychology, eighth edition (my social psychology text book), by David G. Myers, pages 399-400:
Evidence also suggests that pornography contributes to men's actual aggression toward women. Correlational studies raise that possibility. John Court (1986) noted that across the world, as pornography became more widely available during the 1960s and 1970s, the rate of reported rapes sharply increased - except in countries and areas where pornography was controlled. (The examples that counter this trend, such as Japan, where violent pornography is available but the rape rate is low, remind us that other factors are also important.) In Hawaii, the number of reported rapes rose ninefold between 1960 and 1974, dropped when restraints on pornography were temporarily imposed, and rose again when the restraints were lifted.

In another correlational study, Larry Baron and Murray Straus (1984) discovered that the sale of sexually explicit magazines (such as Hustler and Playboy) in the 50 states correlated with the state rape rates, even when controlling for other factors, such as the percentage of young males in each state. Alaska ranked first in sex magazine sales and first in rape. Nevada was second on both measures.

When interviewed, Canadian and American sexual offenders commonly acknowledged pornography use. For example, William Marshall (1989) reported that Ontario rapists and child molesters used pornography much more than men who were not sexual offenders. An FBI study also reported considerable exposure to pornography among serial killers, as did the Los Angeles Police Department among most child sex abusers (Bennett, 1991; Ressler & others, 1988).

Although limited to the sorts of short-term behaviors that can be studied in the laboratory, controlled experiments reveal what correlational studies cannot - cause and effect. A consensus statement by 21 leading social scientists summed up the results: "Exposure to violent pornography increases punitive behavior toward women" (Koop, 1987). One of these social scientists, Edward Donnerstein (1980), had shown 120 University of Wisconsin men a neutral, an erotic, or an aggressive erotic (rape) film. Then, the men, supposedly as part of another experiment, "taught" a male or female confederate some nonsense syllables by choosing how much shock to administer for incorrect answers. The men who had watched the rape film administered markedly stronger shocks, especially when angered with a female victim.

In a sidebar on page 400:
Repeated exposure to erotic films featuring quick, uncommitted sex also tends to
  • decrease attraction for one's partner,
  • increase acceptance of extramarital sex and of women's sexual submission to men, and
  • increase men's perceiving women in sexual terms.
(Source: See Myers, 200)

Wednesday, November 30, 2005

My New Cell Phone Ring Tone

How awesome is that (the old NES version of the Mega Man 4 last boss music)? It's even perfectly sized for a ring tone.

& Iraqi Civilian Casualties

I usually try to limit my news items to technical ones, but this was just so surprising I had to share it with you guys (assuming, of course, that there's anyone actually reading this blog):
There is indeed a mind-blowing story about collateral damage that needs to be told, but that story is one in which we honor the extraordinary achievement of the United States military: two years of combat since the fall of Baghdad, much of it urban warfare, with less than 1,000 civilians killed as a result of U.S. action.

What is the source for these numbers? The most comprehensive study of civilian casualties is available from a group opposed to the Coalition intervention in Iraq called Iraq Body Count. This summer, the Iraq Body Count project published an analysis of casualties in the Iraq War that must be admired for its meticulous documentation.

This study reports 24,865 civilian deaths in the first two years of the Iraq War, an apparent ringing endorsement of the "Iraq in chaos" position. But a curious statistical anomaly jumps right off page one: over 81% of the civilian casualties are men. Even stranger, over 90% of civilian casualties are adults in a country with a disproportionate percentage of the population under 18 (44.5%). This contradicts a basic tenet of the civilian casualty argument, namely that we are describing collateral damage during a time of war. Collateral damage does not differentiate between male and female, between child and adult. A defective smart bomb falling in a marketplace, stray bullets ripping through bedroom walls, city warfare in Fallujah – all these activities should produce casualties that reflect the ratio of men to women or adults to children that prevail in Iraq as a whole.

This question is particularly relevant when one side in the conflict does not wear uniforms, is predominantly adult and of one gender, and engages in a practice of concealing its combatants within the civilian population. The statistics are further distorted if the Iraqi security forces – essentially the free Iraqi military on the side of the U.S. coalition – are classified as civilians, as they are in this study.

Real Life Adventures: Mega Man Fun - Part 3

At this point, I was feeling rather discouraged, as the file format wasn't anything recognizable. I found a really nice site that explains the various flavors of ADPCM; but alas, none of them described the format I was seeing.

That left an exceedingly painful alternative: reverse-engineer the game and find the decompression code. While I do know MIPS assembly language (which the PS2 uses), debugging an unfamiliar platform is hell.

Next idea: talk to nameless programmer friend who programs, among other things, the PS2. I explained the situation, and asked if he thought it was remotely feasible to reverse-engineer the thing. He doubted it. However, he offered an opinion even more valuable: he suggested it might be VAG, a hardware format. Now that was a format I'd never heard of before, other than seeing it listed among the formats MFAudio supports. I smell an opportunity...

I whipped out a random WAV file and ran it through MFAudio (which can encode as well as decode). While the header was obviously different (for reasons I wouldn't realize till later), the distinctive data block structure was evident in the generated VAG file.



That left one thing to do to confirm it: splice the AUS file data into the VAG file and see if MFAudio could play it. I deleted the VAG file data, pasted in the AUS data, and transplanted the AUS header fields to the VAG header, as best as I could guess what they were. The result: MFAudio played it. The length and sample rate were wrong (due to my incomplete understanding of the header fields of the two formats), but it played, devoid of pops, clicks, or other distortions. This was a positive identification of the compression format.

Desktop Linux Survey Results

I just saw this posted on Slashdot, and thought it was pretty interesting.
Encouraged by a solid 3,300 user responses to its Desktop Linux survey, the Open Source Development Labs (OSDL) Desktop Linux Working Group (DTL) Tuesday thanked all its respondents by email and began sifting through the mountain of data the survey provided.

The month-long online survey focused on determining the key issues driving Linux on the desktop as well as the major barriers to Linux desktop adoption, OSDL officials said.

Tuesday, November 29, 2005

& Debates - Responsibility (Again)

My attempt (registration still required) to revive the previous debate, after seeing a post on Raging Right-Wing Republican (which my post references):
The first paragraph is pretty much how I feel about the matter. Nobody asks to get raped, but certain things (wearing skanky clothes, getting drunk on a date, etc.) are playing with fire. The rapist is always the Bad Guy ™; there's no question about that. But only a complete idiot would hand a Bad Guy another chance to do something bad!

Imagine you're driving through a traffic light. The light is totally green, and you're following every letter of the law. Then some psycho goes zinging through the intersection, unmistakably running the red light, and is headed straight for you. You have two options: continue, reassured that the guy is completely in the wrong, and the accident will be his fault, or slam on the brakes and avoid the accident completely (let's assume you still have time to do so)? You'd have to be a friggin' idiot to do the former, yet people try to justify that in things like this topic. That you didn't stop doesn't excuse the law breaker - the guy that ran the red light - but the fact is that you could have prevented it and you didn't. And with something as painful as rape (or car accidents, for that matter), do you think you'll CARE that it wasn't your fault, after it happens?

To me, date rape is something of another beast. I consider rape to be, by definition, one person forcing sex on another, when they know that the other is not willing. The real distinction of date rape is that, while common rape is pretty clear about what happened (it's extraordinarily rare for a woman to consent to sex with someone she knows nothing about, and isn't even on a date with), date rape is significantly more muddy, as it's very difficult to prove that it meets that definition. That men tend to misunderstand female signals as sexual invitation (and not understand when 'no' means 'no', especially when hormone-crazed) is thoroughly documented in social psychology, and things get even more difficult if the girl had previously consented to sex with the guy (as it makes it that much more difficult to tell whether she meant no, or was just playing*).

This is one of the definitions for arguing lack of responsibility for a crime in court: when the perpetrator did not, at the time of committing the crime, have the ability to tell right from wrong. If the guy doesn't know the girl doesn't want to have sex (in his mind, they're having consensual sex), how can you say that he could tell what he was doing was wrong (as most people would not consider consensual sex in and of itself a crime)?

This, of course, leads to even more sticky issues. Even if the guy didn't know that the girl wasn't willing, the (substantial) damage was still done, to the girl. What do you do with a real victim without a real criminal?

* And even worse is the (halfway commonly held) belief that girls that say (and perhaps mean) no will still enjoy the sex once things get going (a common porn scenario). Although this doesn't fall under the same category as mistaking 'no' as play, as in this case the guy may know, at the time, that the girl does not want sex (and thus meets the definition of rape).
UPDATE: Perhaps I should clarify something confusing, a little. It sounds like me saying that being raped is not the fault of the victim is contradictory to me saying that the victim was playing with fire. Here's what I mean: I believe that it takes a special disposition (either by nature or by nurture) to rape. I don't believe that people lacking this disposition will end up raping girls just because they were wearing skanky clothes. I do believe, however, that someone disposed to rape will be more likely to rape a girl like that. It's not the girl's fault that the guy was disposed to rape, but it wasn't very bright to intentionally do something that increased the probability of being raped, either.

Q's Instant Fun in Three Easy Steps

1. Enter a chat room, forum, or online game (preferably one with women in it)
2. Post this quote: "And on another note, to the subset of moral relativists who are communists, socialists, and other leftists, who believe that no one person can have a claim on any property, then how can a woman object if a rapist decides to make use of that which belongs to him?"
3. Sit back and enjoy the show

Monday, November 28, 2005

*fume*

Have a listen at this.

Now, what do you suppose is responsible for that Gord-awful distortion? The Ogg encoder? Nope. My decompression code? I wish. The AUS encoder? Also negative. No, ladies and gentlemen, that is the sound of some idiot at Capcom recording this track at such high volume that a good 15-20% of the samples get clipped to fit in a signed 16-bit integer.

How exactly did this get past quality control? Even a deaf person could have told you this would sound funny, just from looking at the wave form.
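
If you'd rather quantify it than eyeball the waveform, counting the samples pinned at the 16-bit limits gives a decent estimate (a rough sketch; samples that legitimately hit full scale get counted too):

    #include <cstddef>
    #include <cstdint>

    // Rough estimate of clipping: the fraction of samples sitting at the extremes
    // of a signed 16-bit integer.
    double ClippedFraction(const int16_t* samples, std::size_t count)
    {
        std::size_t clipped = 0;
        for (std::size_t i = 0; i < count; ++i)
            if (samples[i] == 32767 || samples[i] == -32768)
                ++clipped;
        return count ? static_cast<double>(clipped) / count : 0.0;
    }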

Sunday, November 27, 2005

More Cell Info

Well, not any new info, but if you haven't read much of the technical documentation, there's a new summary you might want to look at.

Q's Live and Learn of the Day

Never, ever, ever, ever access large amounts of data (more than will fit in the system cache) from a slow drive (e.g. a DVD) via memory mapped files; you will kill Windows (or at least make it wish it was dead).

Saturday, November 26, 2005

Real Life Adventures: Mega Man Fun - Part 2

Well, fortunately I was too stubborn (or perhaps bored) to quit. So, I continued the next day (only had like 2 hours to work on it on Monday). Scouring the archives for information, I found several general types of files: 'ASF ', 'AUS ', and a large file type that had no header tag. So, where's the music?



Perhaps rather aimlessly, I began searching through the binaries, looking for some piece of information that might lead me to the music. As luck would have it, that's exactly what I found: the strings "SELECT JUNGLE" and "FrontEnd/Music/Jungle.aus" in close proximity. The former I had seen before - it was in the secrets menu in the game, and played a remix of one of the music tracks. This offered pretty convincing evidence that the AUS files were the ones I was looking for.



Naturally, my next step was to extract a couple of them and look at the format. While it was nothing I recognized, and had no apparent waveforms (and searching for AUS format on various sites yielded no information), the file format was striking: rows and rows of 16-byte data blocks. The fact that the blocks were 16 bytes large was obvious, due to the near invariance displayed by the first two bytes of each block. This immediately made me think of ADPCM, as some variants of it used 16-byte blocks of data. However, the format didn't resemble any ADPCM variant I'd seen before; nor did any of the couple dozen audio formats I tried saving with Cool Edit Pro have the striking block structure.

I again went searching the web, looking for a decoder. This time I searched for any type of PS2 audio file player, hoping that perhaps it was a common compression format in a different package (the AUS file). I happened upon the Mozzle Flash (MFAudio) player, which claimed to play several different game audio formats. I was disappointed to see that the only formats it would attempt to play without the proper file header were generic ADPCM and PCM. But I supposed that I should at least give the ADPCM a shot at the data.

Much to my surprise, music came out! Not only that, but loud music; fortunately, I had turned my sound way down, on the chance that it would play garbage and damage my speakers. Despite the obnoxious volume, it was playing music from the game, and I recognized it. Unfortunately, it wasn't playing it perfectly; crackles and distortions were clearly audible. Now what?

Friday, November 25, 2005

Real Life Adventures: Mega Man Fun - Part 1

It all began on a Monday afternoon. My computer was broken, the friend I wanted to play World of Warcraft with was at work, and I was bored. So, I decided I'd pop the Mega Man Anniversary Collection into my Playstation 2 and play some Mega Man 4 on our new 35" TV (the one that weighs 190 pounds). One of the very first things that struck me (I'd had the anniversary collection for a while now, but this was the first time I'd played one of the NES games on it) was the music... it was different!

After listening to it for a bit, I realized it was remixed versions of the original music, in something resembling MIDI quality. As I used to collect video game music, I wanted it in my (MP3) collection. However, I'm a lot lazier than I used to be (back when I used to record music directly from the consoles), and I wanted an easier way. Particularly because I'd heard the music, after several minutes, fade out and restart from the beginning; this was a strong indication that the game was using digital (and thus fairly easy to rip) music. So began the hunt.

Sticking the DVD into the computer of my friend (who was preoccupied playing Dragon Quest 8 for about a week before and a couple of days after this) turned up the basic PS2 configuration files (SYSTEM, SLUS, etc.), a number of IRX modules, and 13 AIF files. No XA files (for those who aren't familiar with them, the XA extension refers to CDXA format files, a multi-stream, ADPCM-compressed digital format popular with Playstation games), which I originally expected to find. Okay, now what?

The absence of anything else, and the fact that the AIF files consumed 3.5 gigs of the DVD, made it probable that they were archives. A quick look at the files in a hex editor seemed to agree. As you can see in the picture, there's a table of 16-byte structures with 5 apparent fields in little endian order (three 32-bit fields followed by two 16-bit fields). As well, the first 4 bytes of the file listed the offset of the end of the table; this was probably a file table.



The fact that the second 32-bit field of each file table entry was generally equal to the second 32-bit field of the previous entry plus the third 32-bit field of the previous entry agreed with this; it seemed as though the second field was the file offset, and the third the file size. This was confirmed by following some of the file offsets and finding what appeared to be file headers; in addition, this also made it apparent that the files in the archives were neither encrypted nor compressed. The lack of any type of pattern in the first 32-bit field, and the complete absence of any file names in the archive, made me think that field was a file name hash.
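
Put into a struct, an entry looks something like this (the field names are just my guesses from the observations above, and the two 16-bit fields remain a mystery):

    #include <cstdint>

    #pragma pack(push, 1)
    struct AifTableEntry              // 16 bytes, little endian
    {
        uint32_t nameHash;            // no pattern, and no file names anywhere: probably a hash
        uint32_t offset;              // usually the previous entry's offset + size
        uint32_t size;
        uint16_t unknown1;            // purpose unknown
        uint16_t unknown2;            // purpose unknown
    };
    #pragma pack(pop)

    // The first 4 bytes of the archive hold the offset of the end of this table, so
    // (assuming the table starts right after them) the entry count is roughly
    // (tableEndOffset - 4) / sizeof(AifTableEntry).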

My first thought was to scan the archives looking for XA files. So I did a text search for 'CDXA', the format tag of the CDXA format. No dice. I then tried searching for 'RIFF', the tag of RIFF format files (of which CDXA files are one variety). This turned up two matches: one apparently in an executable, and the other in a RIFF/WAVE file (a standard issue .WAV file). I followed the offset back to the file table, cut and pasted the WAV file out of the archive, and played it. From the sound, it seemed to be background music for the title screen, nothing more.

As well, in the process of various text searches, I'd found strings referring to various files with names such as BGM.XA. This made me wonder if the files were stored outside the DVD file system. I don't really know why you would do that, but I've seen it done in other games before. So, I whipped out Visual Studio 2003 and the MSDN library, and started coding a text searcher. This one would open the DVD drive as a volume (look at CreateFile for information about this), then search for the text. In the process I amused myself by writing my first ever complete program using asynchronous I/O, which used dual read buffers to read one block while searching the other. But in the end, it was futile. No occurrences of 'CDXA' or 'RIFF' were found outside the archive files.
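
The core of such a searcher looks something like this (a much-simplified sketch: one buffer, synchronous reads, and no error handling; the block size is just an example):

    #include <windows.h>
    #include <cstring>

    // Simplified sketch. The real program used overlapped I/O with two buffers,
    // reading one block while searching the other.
    bool SearchVolumeForTag(const char* volumePath, const char* tag)
    {
        const DWORD blockSize = 64 * 1024;   // a multiple of the disc's sector size

        HANDLE volume = CreateFileA(volumePath, GENERIC_READ,
                                    FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                                    OPEN_EXISTING, 0, NULL);
        if (volume == INVALID_HANDLE_VALUE)
            return false;

        // VirtualAlloc returns page-aligned memory, which satisfies the
        // sector-alignment requirement for raw volume reads.
        char* buffer = (char*)VirtualAlloc(NULL, blockSize, MEM_COMMIT, PAGE_READWRITE);
        const size_t tagLen = strlen(tag);
        bool found = false;

        DWORD bytesRead = 0;
        while (!found && ReadFile(volume, buffer, blockSize, &bytesRead, NULL)
               && bytesRead >= tagLen)
        {
            for (DWORD i = 0; !found && i + tagLen <= bytesRead; i++)
                if (memcmp(buffer + i, tag, tagLen) == 0)
                    found = true;
            // Note: a tag spanning two blocks would be missed by this sketch.
        }

        VirtualFree(buffer, 0, MEM_RELEASE);
        CloseHandle(volume);
        return found;
    }

Call it with the volume path (\\.\D: for drive D, which is "\\\\.\\D:" as a C string literal) and 'CDXA' or 'RIFF' as the tag.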

Okay, now what?

Thursday, November 24, 2005

The Q1 Instruction Format

Q1 is a load-store architecture. That means that the only instructions that read/write memory are load and store instructions; all math and binary operations are performed on registers and/or immediates encoded in the instruction itself. Q1 uses three different instruction formats, which maximize the amount of the encoded instruction that is the same for all three formats, to minimize the amount of work that must be done by the instruction decoder circuitry. All instructions have the primary opcode in the highest 6 bits. As well, all immediate values are signed.

The simplest format of instructions is the long immediate format. In this format, the high 6 bits contain the opcode for the instruction, and the remaining 26 bits contain the long (signed) immediate. This format is used primarily in conditional branch instructions, in which the immediate represents the relative address of the branch target, and the opcode indicates the condition being tested.

Next is the short immediate format. In this format, the high 6 bits contain the opcode, the next 5 bits contain the destination register index, the next 5 bits the source register index, and the final 16 bits contain the short (signed) immediate. This format is used for all instructions that take a register and an immediate as parameters, such as load and store instructions (which add the immediate to the value of the source register to form the address for the operation) and math operations that take an immediate value.

Last is the register format. Just like the short immediate format, the top 16 bits contain the opcode, destination register, and first source register, respectively. After that, 5 bits contain the second source register, the next 5 bits the second destination register, and the last 6 bits contain the extended opcode. In this instruction format, the primary opcode is always 0, indicating that this is a register format instruction, and the extended opcode indicating the operation to be performed; this was chosen to allow a greater number of instructions when possible.

The second source register is used in any instruction that takes two inputs, with neither being an immediate. The second destination register is used only in instructions that have two outputs; right now the only instructions which do are the multiply (64-bit result) and divide (32-bit quotient and 32-bit remainder) instructions.
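
To illustrate how little work the decoder has to do, here's a sketch (mine, not actual Q1 tooling) of pulling the fields out of a 32-bit instruction word in C++:

    #include <cstdint>

    // Every field is extracted unconditionally; the opcode then determines which
    // fields are actually meaningful.
    struct DecodedInsn
    {
        uint32_t opcode;      // bits 31-26: primary opcode (0 = register format)
        uint32_t rd, rs;      // bits 25-21 and 20-16: destination and first source
        uint32_t rs2, rd2;    // bits 15-11 and 10-6: second source/destination (register format)
        uint32_t extOpcode;   // bits 5-0: extended opcode (register format)
        int32_t  imm16;       // bits 15-0, sign-extended (short immediate format)
        int32_t  imm26;       // bits 25-0, sign-extended (long immediate format)
    };

    DecodedInsn Decode(uint32_t insn)
    {
        DecodedInsn d;
        d.opcode    = insn >> 26;
        d.rd        = (insn >> 21) & 0x1F;
        d.rs        = (insn >> 16) & 0x1F;
        d.rs2       = (insn >> 11) & 0x1F;
        d.rd2       = (insn >> 6)  & 0x1F;
        d.extOpcode = insn & 0x3F;
        d.imm16     = (int32_t)(int16_t)(insn & 0xFFFF);
        d.imm26     = ((int32_t)(insn << 6)) >> 6;   // shift up and back to sign-extend
        return d;
    }

Because the primary opcode and the register indices sit in the same bit positions in every format, the same extraction logic serves all three formats - which is exactly the point of keeping them aligned.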

If this instruction format looks familiar (to, say, MIPS), that's probably because I've been studying MIPS all semester in my Low Level Languages class, which handily coincides with the time that I've been designing the Q1. Nevertheless, a lot of it is just common sense. The maximum that can be stored in one instruction is a 16-bit immediate and two 5-bit register indices, leaving 6 bits for the opcode. As well, in each case the order of instruction fields is such that the maximum amount of similarity between formats is achieved, minimizing the decoding hardware necessary.

Wednesday, November 23, 2005

Conditions and Overflow - The Q1 Way

Finally, the end of this thread of posts: what I'm going to use for the Q1.

Q1 will use a mix-and-match of features from the x86 and PPC. Conditions and overflow are both handled by means of a condition register, with flags for carry (unsigned overflow), overflow (signed overflow), signed (negative) result, and zero result. This condition register will be set only by the flag-setting versions of the math and binary instructions (add!, sub!, and!, or!, xor!, nor!).

I decided on this method because I consider it too slow and cumbersome to have to manually determine whether overflow or carry has occurred, or whether a comparison of two numbers is true. As well, exceptions are too slow to execute; not only that, but to support both carry and overflow exceptions, there would have to be separate signed and unsigned instructions for every math operation.

I also considered making add and subtract operations 4-register operations (two inputs, and two outputs forming a doubleword result), which would have made it very easy to chain math operations on values larger than the word; while this is a neat idea, it seemed impractical, as not only would it have required signed and unsigned variants of those operations (so that the Q1 would be able to determine whether the high word should be 1 or -1 if a carry occurs), but it would have made comparisons against zero more difficult.

Q1 supports two methods of handling conditions, once the condition register has been set. First, it supports conditional jumps for carry/unsigned less than, overflow/signed less than, unsigned greater than, signed greater than, signed result, and zero result. It also supports conditional moves that are 3-register operations - the destination register will be set to one value (in another register) if the condition is true, or a second value if it is false. I may also add an instruction to invert the condition register flags; I'm still thinking about that.

To me, conditional moves were a necessity, for speed reasons. Any conditional branch has the potential to be slow, with that potential directly proportional to the frequency of the less taken branch; conditional moves do not have that possibility. However, if you think about it, it's logically possible to implement conditional branches without any conditional branch instructions at all: perform a conditional move with the two target addresses, then do an unconditional branch. While that would have cut down the number of instructions in the Q1 by half a dozen, I thought it would be too slow. A conditional branch takes only a single instruction, while using a conditional move in that way requires four: two loads to load the target addresses, the conditional move, and the unconditional branch.

Tuesday, November 22, 2005

Practical MMORPG Math

We now interrupt your normally scheduled viewing for this unimportant math lesson.

As has been mentioned previously, I spend a substantial amount of time playing World of Warcraft (WoW), Blizzard's MMORPG. More than anything else, I like playing with my friends. However, as is especially the case with friends who have a very limited amount of time they can play (or only use a single character), sometimes they play without me (I play on many chars, so it's rarely a problem the other way around). For catching up, I've developed a strategy, one that has left two of my friends with blank (uncomprehending) stares, thus far; so, I'll explain the math behind it, here.

First, let me give a brief summary of the relevant features of WoW, for those who haven't played it:
- Enemies near your level give experience (XP) when you kill them
- When in a party, XP for kills (only kills) is divided by the number of players in the party
- Quests come in many shapes and sizes; kill X number of Y, and collect X number of Y, where Y drops at some frequency from enemy Z are two examples
- Quests give XP when you complete them
- Each quest can generally only be completed once per character
- Thus, quest rewards are not a good way of playing catch-up, as the person you're playing with will have to do them in the future, and you won't have gained anything
- Grinding (killing enemies without any purpose other than to get XP) is boring

So, here's my strategy: when playing catch-up solo, do quests that require collection of items that drop off enemies. If you think about it, you can imagine the reasoning for my friends' skepticism: if you have to kill Y (the number of people in the party) times as many enemies, each giving 1/Y XP when in a party, shouldn't that mean that you get the same amount of XP doing the quest solo as when you do it in a group?

No. And here's why. While it's true that you will get the same amount of XP from the enemies when you do the quest, remember that the people you're playing with still need to do the quest. If you tag along when they do, not only will you have already gotten the full XP from doing it solo (100%), but you will also get a proportionate share of the XP from the party's kills ((Y - 1)/Y * 100% more, since the party has to kill enough for the remaining Y - 1 members). And by doing so, you decrease the amount of XP the other party members get, proportionally (100% * (Y - 1)/Y each). This comes out, for example, to a 150%/50% split (as percentages of the XP for doing the quest solo), or 3:1, between you and your companion in a group of two (166%/66%/66%, 5:2:2, for three, etc.).

And on a completely unrelated note, there's a term in psychology called the hindsight bias. It describes the tendency of people who know the solution to a problem (especially in the case of nontrivial problems) to think that the answer was unavoidably obvious, even when the problem is difficult enough that it is likely that they themselves could not have solved it. Also known as the "I could have told you that" syndrome. A prime example of this is the media and others' response to the "intelligence failures" that prevented the 9/11 attacks on the World Trade Center from being stopped, despite the previously obtained evidence that the attack was coming.

Monday, November 21, 2005

Dilemma - Conditions and Overflow

Now that I've discussed conditions and overflow, I can explain what the dilemma is (or was, back when I was thinking about it). The way I see it, there are three methods of handling conditions and overflow (although two are much more similar than the third).

MIPS treats signed overflow (but not unsigned carry, for which it has no flag) as an exception. When an arithmetic instruction generates signed overflow, an overflow exception is generated, and the exception handler is called. Separate unsigned arithmetic instructions exist, which will not throw overflow exceptions.

Conditions, on the other hand, are implemented by a series of conditional branch instructions: beq (branch if two values are equal), bne (branch if two values are not equal), bltz (branch if a value is less than zero), blez (branch if less than or equal to zero), bgtz (branch if greater than zero), and bgez (branch if greater than or equal to zero).

While overflow exceptions can be convenient, this method has many shortcomings. First, comparing two values is cumbersome and slow, as it must be done using a number of instructions. Testing for carry is similarly slow, and also requires multiple instructions. Finally, exceptions are slow. Even in a single-tasking system (like the Playstation, which uses a MIPS CPU), where the OS doesn't need to do complicated exception handling before control returns to the program (I benchmarked that taking more than 100,000 cycles on my NT computer), if the exception handler is called in kernel mode (as is the case for MIPS, x86, etc.), a full kernel mode transition is still required before the user mode handler (i.e. the catch block) can get invoked (I don't know about MIPS, but on x86 this kind of thing can take hundreds of cycles). Compare this to the worst case scenario on a Pentium 4 (the worst performing CPU I know of with respect to mispredicted branches), where an incorrectly predicted branch can stall the CPU for 29 cycles.

x86 uses perhaps the most obvious method of handling conditions and overflow: a condition register. This register has flags for a wide variety of conditions, including carry, overflow, zero, and sign, all four being set (or cleared, as the case may be) by math and binary (and, or, etc.) instructions. In addition (and likely on account of the fact that the x86 only has 8 registers), x86 has two comparison instructions: CMP, which is equivalent to a subtraction save that the result is not written to any register (thus conserving a register while setting the flags from the operation), and TEST, which performs a binary and, then discards the result.

x86 offers three ways of responding to conditions. First, conditional branches allow for branching based on various conditions, such as greater than, less than, carry, signed, etc. As well, conditional set instructions set a register depending on whether the condition is true (1) or false (0); this is commonly used for complex boolean algebra expressions. Finally, conditional move instructions perform a move only if the condition is true. The conditional set and condition move instructions are of particular value, as they allow actions other than branches (which can be mispredicted) to be taken based on conditions.

PowerPC uses a similar but simpler method of handling conditions and overflow. It also uses a condition register, comparison instructions (similar to the x86 CMP command), and conditional branches, but does not support conditional moves or sets. What is noteworthy, however, is that each math and logical instruction comes in two flavors: those that set the condition register, and those that don't. This allows other math operations to come between the condition register being set and the action taken as a result.

Sunday, November 20, 2005

Conditions

Now that I've explained the topic of overflow, I can get to the second part of the problem: conditions. Conditions are any manner of expression that can produce different behavior when the very same instruction is executed multiple times. The most common types of conditions are equal, not equal, less than, and greater than.

The reason I put overflow and conditions under the same heading is that conditions are also based on carry and overflow. If we compare two values, one of the following must be true: they are equal, the first is less than the second, or the first is greater than the second. Computers perform this comparison using subtraction, then checking for overflow. Compare unsigned 5 and 10 (in that order): 5 - 10 wraps around to a huge unsigned value (-5, if you read the bits as signed), with a carry. If we reverse these, 10 - 5 = 5, with no carry.

Thus, a carry indicates that the first is less than the second (this is always true, not just in these two examples). In the case of both values being equal, the result will be 0. Note that reading 'no carry' as 'greater than' is only valid when the result is nonzero.

Signed comparisons are a bit more complicated, as it's not as simple as whether or not there is overflow. I won't go into examples of why this is (as you'd have to construct a full truth table to see the relationships), but this is how it works: excluding the case of both being equal, then the first value is less than the second if the overflow state is different than the sign of the result of the subtraction (overflow != result_sign) .
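
To summarize the rules in code form (my own illustration; 'carry' here means the subtraction borrowed), given the flags produced by computing a - b:

    struct Flags { bool carry, overflow, sign, zero; };   // produced by a - b

    bool IsEqual(Flags f)         { return f.zero; }
    bool UnsignedLess(Flags f)    { return f.carry; }                        // borrow means a < b
    bool UnsignedGreater(Flags f) { return !f.carry && !f.zero; }
    bool SignedLess(Flags f)      { return f.overflow != f.sign; }           // the rule above
    bool SignedGreater(Flags f)   { return f.overflow == f.sign && !f.zero; }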

Saturday, November 19, 2005

Overflow

One of the major design decisions I had to make for Q1 was how to handle two things that seem unrelated, but really aren't: arithmetic overflow and conditions. Arithmetic overflow occurs when the result of arithmetic (either addition or subtraction) is a number that is too large to be represented in a single word (a register; as Q1 is a 32-bit CPU, its words are 32 bits).

Take the case of the unsigned addition of 0xFFFFFFFF and 0xFFFFFFFF (the largest possible numbers). The correct result of this addition is 0x1FFFFFFFE. However, this result requires 33 bits, and is thus truncated to 0xFFFFFFFE when placed in a register; one bit of significant data is lost.

Now, at the risk of confusing you, I should make it clear that the lost 33rd bit is not always significant. Take, for example, the subtraction of 5 from 10. In two's complement math this is performed by negating the 5 (to get 0xFFFFFFFB) and then adding it to 10. The result of this is 0x100000005, which is 33 bits. In this case, one bit is lost, but it contains no actual information. A single example such as this isn't sufficient to prove it is so, so I'll tell you straight out: for unsigned subtraction, overflow occurs if and only if there is NO loss of the 33rd bit - exactly the opposite of the case for addition.

However, it gets even more complicated. Consider the signed addition of 0x40000000 and 0x50000000. Both of these numbers are positive, so the result must also be positive. However, the result of addition is 0x90000000; the fact that the highest bit has been set indicates that the number is negative. Overflow has occurred, even though the 33rd bit hasn't been touched. Now consider the addition of -1 and -1 (0xFFFFFFFF). In this case the result is 0x1FFFFFFFE, or 0xFFFFFFFE (-2) when truncated. Here, the 33rd bit is lost, but no overflow has occurred.

What this means is that there are different methods of detecting overflow for signed and unsigned arithmetic. Unsigned arithmetic is fairly simple: if ((33rd_bit != 0) != is_subtraction), overflow has occurred. For signed arithmetic, it's more complicated. First of all, let me tell you that this equation, although it appears in some computer architecture books (like mine), is NOT correct: if (33rd_bit != 32nd_bit) overflow. The correct equation is: if ((32nd_bit_of_operand_1 == 32nd_bit_of_operand_2) && (32nd_bit_of_result != 32nd_bit_of_operand_1)) overflow. In other words, if both operands have the same sign (remember that with subtraction one operand will be negated; this must be taken into account), but the sign of the result is different, then overflow has occurred. Traditionally, unsigned overflow is referred to as a carry, and signed overflow is referred to as overflow (neither of which are particularly good names).
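Here's the same thing in code form - a C++ sketch (mine, not anything from Q1) of the unsigned and signed overflow rules for 32-bit addition; for subtraction, negate the second operand first and remember that the carry rule inverts, as described above:

#include <stdint.h>

// Unsigned addition: overflow (a carry) occurred iff the 33rd bit was lost.
int unsigned_add_overflow(uint32_t a, uint32_t b)
{
    uint64_t wide = (uint64_t)a + (uint64_t)b;   // keep the 33rd bit around
    return (wide >> 32) != 0;
}

// Signed addition: overflow occurred iff both operands have the same sign
// but the (truncated) result has a different sign.
int signed_add_overflow(uint32_t a, uint32_t b)
{
    uint32_t sum = a + b;                        // truncated to 32 bits
    int same_operand_signs  = ((a ^ b)   & 0x80000000u) == 0;
    int result_sign_differs = ((a ^ sum) & 0x80000000u) != 0;
    return same_operand_signs && result_sign_differs;
}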

Friday, November 18, 2005

Ssssssssssssssmokin'!

Pop quiz, kids: is it a good thing or a bad thing when your CPU is hot enough to boil water? Because mine is! And gosh, just as I was typing this message my grandparents called and asked if there were any forest fires in our area (there are some in our state right now, but not too close to us). Heh, earlier today I was talking to Dark_Brood about CPUs, and mentioning that the new CPUs are twice as fast as mine (by raw clock speed). Wonder if I'll have to replace any other parts while I'm at it (and no, I'm not on my own computer, right now).

!#@$, It Broke!

Okay, I just broke the VC++ optimizer, or something. All of a sudden it started adding a copy of the return path to EACH INSTRUCTION (there are about 50 of them). While each one isn't so bad on its own (7 bytes), this adds up to 350 bytes (16% of the total of 2,162 bytes for the emulation core).

Thursday, November 17, 2005

& Immunology

BahamutZero just informed me that one of the major books on immunology is available freely online. If that sort of thing interests you (like it does me and BZ), you should go check it out.

Not for the Faint of Heart

Q1Emu is becoming quite a noteworthy piece of software. I wonder if, somewhere, there's a prize for most creative use of code structure that implies a compiler optimization strategy; I could be in the running for it. I've already broken the Visual Studio debugger's ability to match source code lines to instruction addresses; I'm sure the VS coders will be warning everyone "friends don't let friends' compilers do Q1Emu" :P

Incidentally, the Q1 emulation core is now done. It currently weighs in at 1.8 KB, but I may be able to shrink it some. So far all the optimization has been stuff I've done as I've coded. Now that I'm all done, I can go back and look for new things to optimize.

Tank-Top and Shorts?

While not the first to point out the lack of malware utility response to the Sony rootkit, Groklaw is pointing out something few seem to have noticed (original source):

The creator of the copy-protection software, a British company called First 4 Internet, said the cloaking mechanism was not a risk, and that its team worked closely with big antivirus companies such as Symantec to ensure that was the case. The cloaking function was aimed at making it difficult, though not impossible, to hack the content protection in ways that have been simple in similar products, the company said.
So the antivirus companies were working with the maker of the rootkit to begin with? Hope you brought some cool clothes, because it's hot where we're going.

& Debates - Origin of Homosexuality

Star Alliance forums are back up, and I've got a new debate to go with them:
So, I was looking for articles on gender roles for a psychology class paper. Well, I found something really interesting, because it's totally NOT what I was expecting to find. First of all, about the person being interviewed:
"Dr. Anne Fousto-Sterling, 56, a professor of biology and women's studies at Brown... lesbian... Her 1985 book, 'Myths of Gender: Biological Theories About Women and Men,' is used in women's studies courses throughout the country."

Q. Among gay people, there is a tendency to embrace a genetic explanation of homosexuality. Why is that?
A. It's a popular idea with gay men. Less so with gay women. That may be because the genesis of homosexuality appears to be different for men than women. I think gay men also face a particularly difficult psychological situation because they are seen as embracing something hated in our culture - the feminine - and so they'd better come up with a good reason for what they're doing.
Gay women, on the other hand, are seen as, rightly or wrongly, embracing something our culture values highly - masculinity. Now that whole analysis that gay men are feminine and gay women are masculine is itself open to big question, but it provides a cop-out and an area of relief. You know, "It's not my fault, you have to love me anyway."
It provides the disapproving relatives with an excuse: "It's not my fault, I didn't raise 'em wrong." It provides a legal argument that is, at the moment, actually having some sway in court. For me, it's a very shaky place. It's bad science and bad politics. It seems to me that the way we consider homosexuality in our culture is an ethical and moral question.
The biology here is poorly understood. The best controlled studies performed to measure genetic contributions to homosexuality say that 50 percent of what goes into making a person homosexual is genetic. That means 50 percent is not. And while everyone is very excited about genes, we are clueless about the equally important nongenetic contributions.
Q. Why do you suppose lesbians have been less accepting than gay men about genetics as the explanation for homosexuality?
A. I think most lesbians have more of a sense of the cultural component in making us who we are. If you look at many lesbians' life histories, you will often find extensive heterosexual experiences. They often feel they've made a choice. I also think lesbians face something that males don't: at the end of the day, they still have to be a woman in a world run by men. All of that makes them very conscious of complexity.

Hallelujah!

At a conference for its management software customers, company executives detailed its plans to add support for 64-bit microprocessors in its server applications and operating systems.

By late next year, Microsoft expects to deliver Exchange 12, which will run only on x86-compatible 64-bit servers, said Bob Kelly, general manager of infrastructure server marketing at Microsoft.

Kelly said 64-bit chips will make the greatest impact on the performance of applications such as Exchange and its SQL Server database.

"IT professionals will be able to consolidate the total number of servers running 64-bit (processors) and users will be able to have bigger mailbox size," he said.

Longhorn Server R2 and a small-business edition of Longhorn Server will be available only for x86-compatible 64-bit chips, as will the company's Centro mid-market bundle. Longhorn server is expected to be released in 2007 and the R2 follow-up could come two years after that.
Frankly, I was disappointed when MS announced that Longhorn would run on x86-32 at all. Now that x86-64 CPUs are starting to appear on the desktop, and should be the majority by the time Longhorn ships, having Longhorn only run on x86-64 would have drastically simplified application design. But I guess something is better than nothing.

Tuesday, November 15, 2005

It Runs! - UPDATED

That title pretty much says it all. Today I have a paper to write; that means it's procrastination time! Fortunately, I had plenty of stuff to procrastinate with. So, I started working on an emulator for Q1 (my CPU). After only an hour or so of coding, it runs a simple test program using a 6-instruction subset of the instruction set. Of course, as each instruction only requires 3 lines of code, adding the other 50 or so will be quite easy. I'll have to do that on Thursday.

UPDATE: Now it's up to 24 instructions (half the instruction set). The emulation core is about 2/3 KB of optimized assembly. Should be no problem keeping the emulation function and the CPU context (most importantly the registers) in L1 cache, making it about as fast as possible.
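For the curious, here's roughly what a 'three lines per instruction' core can look like. This is purely my own sketch - the opcode numbers, field layout, and context structure are made up for illustration, not Q1's actual encoding or Q1Emu's actual code:

#include <stdint.h>

struct CpuContext { uint32_t regs[32]; uint32_t pc; };   // hypothetical layout

void Run(CpuContext &cpu, const uint32_t *mem, uint32_t steps)
{
    while (steps--) {
        uint32_t instr = mem[cpu.pc++];
        uint32_t op = instr >> 26;                        // made-up encoding
        uint32_t rd = (instr >> 21) & 31, ra = (instr >> 16) & 31,
                 rb = (instr >> 11) & 31;
        switch (op) {
        case 0: cpu.regs[rd] = cpu.regs[ra] + cpu.regs[rb]; break;  // ADD
        case 1: cpu.regs[rd] = cpu.regs[ra] - cpu.regs[rb]; break;  // SUB
        case 2: cpu.regs[rd] = cpu.regs[ra] & cpu.regs[rb]; break;  // AND
        default: return;                                  // not implemented
        }
    }
}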

News From the Front Lines

durandal255 (11:16 PM) :
in the latest wow patch, they finally admit to searching your computer for viruses and cheats
Quantam (11:17 PM) :
I saw that
durandal255 (11:17 PM) :
it pokes through my IE history!
Quantam (11:17 PM) :
That's not cool
durandal255 (11:17 PM) :
god knows what it does after that!
durandal255 (11:18 PM) :
it also looks at your autoexec.bat, and um, your start menu and desktop shortcuts
Quantam (11:18 PM) :
Dude
durandal255 (11:18 PM) :
old news?
Quantam (11:18 PM) :
I bet MS' malicious software removal tool does less than that
durandal255 (11:19 PM) :
rofl
durandal255 (11:19 PM) :
it looks at ntuser.log and both the 9x and NT temp folders
durandal255 (11:20 PM) :
i bet i'd find it inspecting my MBR if i knew how to look for that
Quantam (11:20 PM) :
Probably

MD5 Is Officially Dead - UPDATED

Patrick Stach has announced that he has created a program that can find MD5 hash collisions in 45 minutes on a 1.6 GHz Pentium 4. If that's true, MD5 isn't just insecure, it's downright dead.

If you've got any digital signatures using MD5, I suggest you FIX THEM, NOW! (not that I know exactly what to use; SHA-1 is on its last legs)

UPDATE: Fortunately, it isn't as bad as I thought it was. This program can only produce two randomly generated messages that hash to the same value; it cannot find a new message that matches a given hash. No complete security meltdown yet, but you could still safely say that MD5 is no longer safe to use.

& Debates - Responsibility - UPDATED

One of the sites I frequent is the Star Alliance. Star Alliance is a game and modding site, but it also is known for something else: its debates. While not exactly the Socrates, Plato, and Aristotle of our time, the forumers manage to regularly engage in at least halfway intellectual debates (some more than others), often involving religion or philosophy. While there are some exceptions, these debates are moderately mature, particularly as the site ages, and the 'old school' forumers are in college, now.

Now that there's a formal debate forum (requires registration to view/post in) with more strict rules for posts, I've begun to periodically start debates (in fact I have a list of four or so I plan to start in the foreseeable future). The flavor of the week is the nature of indirect responsibility for something. The opening post (which is just to get the thinking started, before the debate begins):

This is, to my knowledge, a fictitious story (although it would hardly surprise me if sometime, somewhere in history it actually happened).

There once was a husband and wife. The husband worked nights, and the wife frequently became lonely, and went out to meet lovers during the night, although she always returned home before her husband. The wife always cut off the relationships if the lovers wanted it to get serious (as in, endangering her marriage).

One night, she was doing just that: dumping a lover. She had just left the lover's apartment, and was about to go home, when she realized she didn't have money for the ferry she would have to take to get back to her house. Reluctantly, she went back and asked the lover if she could borrow some money. Not surprisingly, the lover slammed the door in her face.

She then went and asked her previous lover (call him #2) for money, who lived nearby. He also slammed the door in her face. So she went back to the ferry, and begged the ferry operator to let her ride for free, and she would pay him back. He refused.

Finally, she remembered there was a bridge a ways away, but she thought she could still make it home in time. However, this bridge was known as being a dangerous area, especially at night. So, she takes the bridge, and, as luck would have it, gets mugged. Angered that she did not have any money, the mugger stabs her, and she dies.

Now, how would you assign blame for the death of the woman? Rank the six characters (the husband, the wife, lover #1, lover #2, the ferry operator, and the mugger) from most responsible to least responsible in your post.
A more recent post, which introduces the debate itself:

Well, where I was hoping to go with this was a debate about what constitutes responsibility for something like this.

As for myself, I'd say the blame belongs first and foremost to the mugger, as the mugger is the one who actually killed her. But I don't think it's correct to say the woman didn't contribute to it. She made several choices that contributed directly or indirectly to her death. In chronological order:
- She chose to be out there having an affair. While getting killed by a mugger is not a foreseeable outcome of having an affair, I have little sympathy for people who get hurt or killed as a result of perpetrating some crime (in this case the crime is a moral one; as I said a couple posts up, I consider being faithful to your spouse part of the job description for anyone who's married). If a terrorist gets blown up due to a bomb malfunction while trying to bomb some place, all I'm going to say is "Haha, loser!"
- She chose to take a dangerous route in the middle of the night. While that isn't to say she was "asking" to be killed, the fact remains that when you do something fairly dangerous (and you have viable alternatives), you have to take some responsibility for the plausible, predictable outcomes, of which this was one. If I'm welding something while not paying attention, and I end up burning myself (a plausible, predictable outcome for welding carelessly), it's my fault for not being more careful. If, on the other hand, the propane tank explodes and kills me due to some manufacturing defect (neither a plausible nor predictable outcome), that's the manufacturer's fault.

In other news, the usual response to that story (it's commonly used in college psychology classes) is that about half the people blame the woman primarily, and the other half the mugger. I guess this goes to show that you become more conservative with education... rolleyes.gif
Do not think to reply here. If you want to join the debate (which is the whole point of me posting about it), go to the debate itself.

UPDATE: As those of you (assuming there are any of you out there reading this blog) probably noticed, the Star Alliance site went down a day after I posted this entry, and has been down ever since. Seems they had some problems with their host, and are in the process of relocating. I'll try to remember to post when they come back up.

Saturday, November 12, 2005

Errata

Skape (I don't know who that is other than that it's someone Skywing knows) has informed me that my reasoning about why NTDLL has a fixed address was incorrect. There are kernel mode facilities for loading and preparing user mode modules, so this is not an issue. Rather, the reason is that the kernel expects some functions in NTDLL that it calls to be in the same place for all processes.

Friday, November 11, 2005

Of Wizards and Quantum Physics

Back when I made the post about Singularity, I sent Merlin (of Camelot Systems fame, and who now works as a coder for Microsoft) the link, to ask what he thought of Singularity. He provided me with some food for thought, although I didn't get around to writing about it until now (story of my life...).

His overall conclusion of Singularity was that the idea was 'idiotic'. He had two reasons for this conclusion. First, he claimed that the quality of the JITer is not sufficient for this kind of thing, given that the JITer becomes the single most important piece of software in Singularity, with respect to security and stability (as I said in my post).

Second, he claims that the idea of the JITer being the gatekeeper to system security is fundamentally flawed in that it can't control the hardware. It can certainly ensure that software doesn't have access to the hardware, and that drivers communicate only in well-defined (and legal) ways, but the JITer has no way to verify that the data drivers actually send to the hardware is valid. Even with a JITed system, it's possible a driver might give the wrong address or buffer size to the hardware, and the hardware writes to it, corrupting program or system data (or even worse).

This second point is particularly valid, as I've seen first-hand (my knowledge of the JITer itself is insufficient to comment on the first point). Take DD3D (that's 'DirectDraw 3D'), a little library I was writing that displays a DirectDraw surface as a Direct3D texture map, as an example. The test program would recreate the Direct3D device every time you resized the window, so that it could use the right size of back buffer (for optimal image quality). This meant frequent destruction and creation of Direct3D devices. Well, as it turned out, the program initially had a reference count leak that prevented the Direct3D device from actually being destroyed before another one was created; Direct3D even complied with the requests to create new devices.
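For illustration, that kind of bug looks something like the following sketch (a hypothetical reconstruction in Direct3D 9 terms; the real DD3D test code certainly differed):

#include <d3d9.h>

// Called on every window resize to get a correctly sized back buffer.
void RecreateDevice(IDirect3D9 *d3d, HWND hwnd,
                    D3DPRESENT_PARAMETERS *pp, IDirect3DDevice9 **device)
{
    // BUG: the old device is never released (or an extra AddRef is held
    // somewhere), so it's never actually destroyed - and Direct3D happily
    // keeps creating new devices on top of it.
    // if (*device) { (*device)->Release(); *device = NULL; }

    d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hwnd,
                      D3DCREATE_HARDWARE_VERTEXPROCESSING, pp, device);
}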


Eventually, this exhausted some system resource, and it broke. And by 'broke' I don't mean it threw a "screw you, I'm not making any more devices" error (which would have been an appropriate response in this situation). Nor did the program crash, or even blue-screen. Nope; once it got above some number of Direct3D devices created, it hard-reset the computer. That is, blammo, black screen, "testing memory", "press DEL to enter setup", "starting Windows XP...". Yeah, that's not supposed to happen. Whatever the driver had sent to the video card made the whole computer go boom (this was an NVidia card and non-WHQL approved driver, by the way; I reverted to the WHQL driver and the hard-resetting went away).

So, maybe this isn't such a viable idea after all.

& California Housing

Random fact: our house here (which is about 50 years old, 1 story, and moderately large - but not huge or very ornate) is worth more than half a million dollars, up from the $65k my parents paid 30 years ago. That always boggles my mind.

Also, on a totally unrelated note, I seem to be accumulating quite a harem in my Temp tab of my ICQ contact list (where I put all the people who spontaneously add me to their contact list with no prior contact). 11 girls and counting (and those are the ones that aren't porn bots - I've broken half a dozen porn bots this week alone with my first reply); apparently 'Justin' is a popular name girls from countries all over the planet search for to find people to chat with (one of them said that's how she found me). *shakes head* Heck, at least half of them have never even messaged me.

Asynchronous I/O - Notification Types Summary

Event-Based Notification
Pros
- Only method that allows threads to wait until the operation completes
Cons
- Not useful in most other cases
- Potentially high latency on POSIX

Asynchronous Procedure Calls
Pros
- Calls only occur in the thread that requests the I/O
- Calls can be deferred until the thread is ready for them
- Most convenient method for single-threaded programs
Cons
- Must be polled for in the thread that requested the I/O
- Potentially high latency on POSIX

I/O Completion Ports
Pros
- Potentially fastest (highest throughput), most scalable method on multi-CPU systems, due to optimized thread pooling architecture
Cons
- Cumbersome to use, as often requires construction of a finite state machine
- Not particularly suitable for single-threaded programs
- Potentially high latency on POSIX

Unpredictable Callbacks
Pros
- Lowest latency method on POSIX
- Potentially fast on multi-CPU systems, if the OS does CPU load balancing of callbacks
Cons
- Calls may occur in any thread at any time
- Must be polled for in the thread that requested the I/O

Wednesday, November 09, 2005

Dilemma: Predictable and Unpredictable?

I'm seriously considering adding a fourth method of asynchronous I/O notification: unpredictable callbacks. Unlike asynchronous procedure calls (APCs), which will only be executed in a predictable place (the thread that requested the I/O) and time (when the APC dispatch function is called), unpredictable callbacks are just that: unpredictable. They could take the form of an APC queued to the thread that requested the I/O, or they could be called at some random time in a totally different thread.

Thus, the practical difference between APCs and unpredictable callbacks is that unpredictable callback functions have to be thread-safe. If they access any shared data, it must be protected by thread synchronization (with APCs you could sometimes get away without thread safety, if the "shared" data was only used by the thread that started the I/O). Of course, because it's possible that unpredictable callbacks will be implemented as APCs, it's still necessary to regularly dispatch APCs for the threads that request I/O.

So, what's the point? Seems like a lot of work for a lot of uncertainty. Well, the usual answer for questions like that about LibQ is speed. Windows implements APCs natively; POSIX does not. Instead, POSIX implements asynchronous I/O notifications via signals, one form of which is unpredictable callbacks. All other types of notification can be readily emulated using POSIX callbacks (as they're the single most flexible method of asynchronous I/O notification), but this comes at a speed cost.

The speed cost itself isn't very large (several dozen cycles), but the latency introduced is much worse (could be hundreds of milliseconds, in the worst case). For situations where a low latency response is needed, this might be too much. If, on the other hand, latency isn't important, other methods might be more convenient (or even faster, i.e. completion ports).
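To show what 'have to be thread-safe' means in practice, here's a small sketch (my own illustration - LibQ's actual callback signature may look nothing like this, and std::mutex is just a stand-in for whatever synchronization primitive you'd really use) of a completion callback that can fire on any thread at any time, so the shared data it touches must be locked:

#include <mutex>
#include <queue>

std::mutex g_lock;                 // protects the shared completion list
std::queue<int> g_completed;       // hypothetical list of finished requests

// May be invoked on ANY thread, at ANY time - so no thread-affinity
// assumptions are allowed, and every shared structure must be protected.
void OnIoComplete(int request_id)
{
    std::lock_guard<std::mutex> hold(g_lock);
    g_completed.push(request_id);
}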

Tuesday, November 08, 2005

& Psychology Fun - UPDATED

"Call girl nymphomaniac in front of entire social psychology class"

*crosses item off life todo list*

That'll teach her not to be so ambiguous in playing her role that people had to ask what she was supposed to be (and nymphomaniac was the first thing that came to my mind after the first couple things she said). :P Was also hilarious when the nerd listed Screech Powers as his hero.

And for your information, the purpose of that exercise was to demonstrate the stereotypes associated with various roles.

BahamutZero's response to this post: "I do think I said you were a devious, corrupt, manipulative and all around dangerous person."

UPDATE: Today (Thursday) in psychology class the teacher was asking for attributes (taken from a list she passed out) that we thought were more typical of women than men. I answered 'tact'. I heard several people chuckle; I have a guess as to why :P

Friday, November 04, 2005

Q's Fact of the Day

Eating 2/3 of a pound (like 1/3 kg) of gummy worms before tae kwon do is a remarkably stupid idea.

Rootkits, Spyware, and Hacks, Oh My!

So yeah, this news is a bit old now, but I thought I should post it, if for no reason other than to use that post title. All of this stuff I discovered (or, more accurately, was linked to, by people or sites).

First, we have Sony installing a rootkit on the computers of anyone (with admin privileges) that puts the Get Right With the Man CD in their drive. This rootkit is a driver that hides itself from detection by hooking the Windows system call table and preventing any files with file names beginning with "$sys$" from showing up in Explorer or anywhere else (you can readily test for the presence of this rootkit by renaming a file that way, and observing if it disappears). After the public outrage from the Slashdot readers and others, Sony released a none-too-effective uninstaller.
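If you'd rather automate that test than watch files vanish in Explorer, something like this C++ sketch (my code, not from any of the linked articles) does the job: create a file whose name starts with "$sys$", then ask Windows to find it again; if the file was created successfully but can no longer be found, the cloaking driver is active:

#include <windows.h>
#include <stdio.h>

int main()
{
    const char *name = "$sys$canary.txt";
    HANDLE h = CreateFileA(name, GENERIC_WRITE, 0, NULL,
                           CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE) { printf("couldn't create test file\n"); return 1; }
    CloseHandle(h);

    WIN32_FIND_DATAA fd;
    HANDLE find = FindFirstFileA(name, &fd);
    if (find == INVALID_HANDLE_VALUE)
        printf("test file vanished - $sys$ cloaking appears to be active\n");
    else { printf("test file is visible - no cloaking detected\n"); FindClose(find); }

    DeleteFileA(name);   // clean up; opening by exact name should still work
    return 0;            // even when directory enumeration is being filtered
}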

In the same week (at least for me), news of the Warden got around. The Warden is Blizzard's anti-hacking tool for World of Warcraft (in the legacy of Work, Blizzard's neato hack detector for Starcraft, Diablo II, and Warcraft III). This one has the enjoyable function of scanning the programs running on your computer, and sending such things as the title of open windows to Blizzard.

Finally, in a move of minor brilliance (and what makes an ideal final entry in summary posts such as this), hackers decide that it would be worth their time to use one to thwart the other; that is, to use the Sony rootkit to hide their WoW hacks from the Warden. Looks like it's gonna be a war between the video game and music industries for who's responsible for this mess.

Industrial Strength Spin

Okay, I usually try to avoid the Slashdot bashing, but this one I just couldn't resist. One person (who should be thankful they remain nameless) writes:
I'm only replying to the parent so that this post is high up the screen.

Look at page 31 of this PDF. Microsoft publish benchmark statistics showing Linux (and FreeBSD) to be better than Windows.

Okay, so this post is so important he decided to ignore posting etiquette. The post refers to a table of benchmarks that shows the number of cycles needed for each of 6 things, on Singularity, XP, FreeBSD, and Linux. If we ignore Singularity, which has the lowest - and thus best - scores in 5 of 6 categories, Windows XP holds the lowest score in 3 categories, Linux in 3 categories, and FreeBSD in none (however, FreeBSD does have a lower score than XP in 1 category). As far as proof of Linux/FreeBSD superiority goes, that's pretty underwhelming (and while I could be grossly ignorant, I don't recall MS ever claiming that Windows was superior to Linux on every single data point).


Of course, these are simply statistics showing off the abilities of Singularity (that's just common sense - when you go to great lengths to make something faster than its competitors, you want to show that it's faster than its competitors), and a much too small sample size to draw any kind of conclusions.

Even more disturbing is that most of the replies to this post are along the lines of "Well, duh. Everybody knows that Windows blows; MS just finally stopped lying about it." And they wonder why Slashdot has a reputation for being a bunch of fanatic Linux zealots who couldn't think rationally if their lives depended on it...

Thursday, November 03, 2005

Groovy

So, today Slashdot (and by association myself) learned about Singularity: Microsoft labs' new playtoy OS. I immediately went and read part of (was already late for class at this time...) the overview paper on the MS labs site.

This thing is pretty sweet. It's like .NET (or Java) applied to an entire computer (OS, drivers, and applications), and then some new ideas. From what I've read, there are two basic ideas that set Singularity apart from any existing production OS. First, the entire system, with the exception of the microkernel, is JITed code. This is a huge benefit because it allows the JITer to verify that the code is safe before it ever gets executed. In a garbage-collected language without pointers, this means no more access violations or buffer overflows, period. It also means the code can't pull exploits like those that can give elevated permissions, or screw up some other thread/process' data.

In fact, because the OS audits all code before it ever gets executed, there's no need for multiple processes at all; indeed, all logical processes in Singularity run in the same virtual address space (commonly known as a 'process' on today's OSs; and in theory you could have everything running in kernel mode and it would still be safe). The fact that code can be guaranteed to be well-behaved on load also removes the need for most (but not all) checks for things like parameter validity, access control, etc., making programs run faster than has ever been possible.

The other major premise of Singularity is strict modularization. All code exists in its own "sandbox". Code is loaded as a unit, either as an application, with multiple code modules, or as a single library. Once a sandbox is created, no new code can be loaded in it.

However, it's possible to call from one sandbox to another. This communication is governed by interface metadata that dictates exactly what is and is not allowed, and is mediated by the JITer. While inter-process communication (IPC) has always been painfully slow in the past, it is not so in this case. Because all code is JITed, the JITer can verify that new code follows the rules, then give it direct access to what it's trying to reach, creating 0-overhead IPC.

Unfortunately, as MS has so much invested in Windows already, it's not looking to make this into an actual product. However, I think it's a really promising idea, and hope that somebody at some point will try to make a commercial OS based on this kind of thing.

Oh, and on a completely unrelated note, that overview paper has a benchmark comparing the speed of creating processes on various OSs (one of the things I'd been wondering about for a while): 5.4 million cycles for XP, compared to Linux's 720k.

Tuesday, November 01, 2005

& Halloween

Happy halloween! No, Q didn't forget it until today, nor was he too drunk to post until now. No, today's the day: the day Q and friends go and raid all the local stores to get halloween candy at half price. Now it's time to eat candy till your teeth bleed!

Sunday, October 30, 2005

Lazy is Better

So, I'm playing around with code for the asynchronous I/O system. In a number of ways I'm finding that doing things "the right way" (fully thread safe, fully error checked and tolerant, etc.) is both cumbersome to code and slow/bloated when compiled (if you haven't figured it out by now, I step through most new code in release build, to see what the generated assembly looks like). Both of these are very much in opposition to the entire LibQ paradigm; so, I've decided to use a lazy model for the design of this thing.

What that means is that it's sensitive to how you use it. If you follow the rules (particularly with respect to call orders, and what operations you do from different threads simultaneously), nobody gets hurt; if you don't, you can expect that sometime, somewhere, anything ranging from subtle errors to spectacular crashes will slink into your program.

For a couple examples, calling CFile::Open (on the same CFile variable) at the same time from two different threads means death. Closing a file from one thread while another thread is doing a read/write on that file means death. Trying to use the same CAsyncStatus for an operation before the last operation using that CAsyncStatus has completed means death. Get the picture? Most of it's just common sense, but some of it I'll have to explicitly document.
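Put in code terms, the contract looks something like this (the method signatures and stub classes here are my guesses purely for illustration - only the class names CFile and CAsyncStatus come from LibQ):

// Hypothetical stand-ins so the usage below compiles; LibQ's real classes
// obviously have actual implementations and different signatures.
struct CAsyncStatus { };
struct CFile {
    bool Open(const char *path) { (void)path; return true; }
    void Read(void *buf, unsigned bytes, CAsyncStatus &status)
        { (void)buf; (void)bytes; (void)status; }
    void Close() { }
};

void LazyContractExamples()
{
    CFile file;
    CAsyncStatus statusA, statusB;
    char bufA[16], bufB[16];

    file.Open("data.bin");                   // OK: opened once, from one thread
    file.Read(bufA, sizeof bufA, statusA);   // OK
    file.Read(bufB, sizeof bufB, statusB);   // OK: a different CAsyncStatus

    // Death (per the rules above), if done from the wrong place or time:
    //  - two threads calling Open on the same CFile simultaneously
    //  - one thread calling Close while another is mid-read/write
    //  - reusing statusA before its previous operation has completed
}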

Saturday, October 29, 2005

Positional Operations vs. the File Pointer

Traditionally, file I/O was sequential. You read (or wrote) from the beginning of the file to the end. While my history of operating systems isn't sufficient to say that it was the first, Unix was (and still is) particularly fond of this model, because it allows for piping (that is, redirecting the output of one program to the input of another, etc.).

Traditional I/O APIs reflect this behavior, in that they feature a file pointer associated with each file. Reads and writes always begin at the current file pointer, and advance the file pointer on completion (assuming the operation succeeded). If you wanted to do random access on a file (that is, read or write nonsequentially in the file), you had to call a seek function. On Windows, these functions are ReadFile, WriteFile, and SetFilePointer; on Unix, there's read, write, and lseek; and in the C standard library, there's fread, fwrite, and fseek. These functions work perfectly for sequential file access, and work sufficiently for random file access from a single thread (remember that DOS, Win16, and Unix were single-threaded operating systems, although Win16 and Unix could run multiple single-threaded processes simultaneously).

Then came NT and later versions of Unix (actually, it would hardly surprise me if other OSs supported this earlier; I just don't know of them), which introduced multithreaded apps. This introduced the possibility that multiple threads could share access to a single file handle (Unix always allowed multiple programs to share access to files, but in that case each process had its own file handle, with its own file pointer, so this wasn't a problem).

This is a good thing, certainly, but it created problems. Since it was not possible to atomically set the file pointer and perform the file operation (and it would probably even require two trips to kernel mode), the entire procedure was fundamentally thread-unsafe. If two threads tried to perform random file access on the same file at the same time, it would be impossible to tell exactly where each operation would take place.

The simplest solution to this problem would be to protect each file with a mutex. By ensuring mutually exclusive access to the file, you ensure that you will always know exactly where the file pointer is. However, by definition it also causes all threads to wait if more than one thread attempts a file operation at the same time. While this might be acceptable when file I/O occupies a very small portion of the thread's time, this is a distinctly sub-optimal solution.

This is where positional operations come in. Positional operations are read/write functions which explicitly specify where the operation is supposed to occur, and do not alter the file pointer. Windows NT was originally created with this ability (in fact, as previously mentioned, all I/O on NT is internally performed asynchronously, which mandates positional operations) - the very same ReadFile and WriteFile, only used in a different way - but I don't know exactly when the POSIX positional file functions, pread and pwrite, were introduced. Windows 9x, again bearing more resemblance to Windows 3.1 than to Windows NT, and again the most primitive of the three, does not support positional operations.
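Here's the difference in miniature (POSIX shown; on NT the equivalent is filling in the Offset/OffsetHigh fields of the OVERLAPPED structure passed to ReadFile). The function itself is just my illustration:

#include <fcntl.h>
#include <unistd.h>

// Read 10 bytes at offset 150, first through the shared file pointer,
// then positionally.
void ReadBothWays(int fd)
{
    char buf[10];

    // File-pointer model: two calls, and the seek + read pair is not atomic,
    // so another thread using the same descriptor can move the pointer
    // out from under us in between.
    lseek(fd, 150, SEEK_SET);
    read(fd, buf, sizeof buf);

    // Positional model: one call, explicit offset, file pointer untouched.
    pread(fd, buf, sizeof buf, 150);
}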

The merit of truly simultaneous operations on the same file may not be immediately obvious. If this is a disk, or some other type of secondary storage, the nature of the device dictates that it can only perform one operation at any point in time; so what is the benefit of the OS being able to accept multiple operation requests on the same file simultaneously? The benefit is that when the OS supports this in the kernel (as opposed to funneled kernels, or kernels that emulate this with per-file mutexes), neat optimizations can be done. For example, if thread A wants to read 10 bytes from offset 0 in a file, and thread B wants to read 10 bytes from offset 10 in the file, the operations can be combined into one physical disk operation (reading 20 bytes from offset 0), and the OS can then copy the data into the two output buffers.

But even if it isn't the case that the operations can be combined, there are still optimizations that can be done. For example, if thread A wants to read 10 bytes from the file at offset 50, and thread B wants to read 10 bytes from the file at offset 150, does it matter which of these reads gets physically performed first? It does, actually, because the hard drive has a "file pointer" of its own - the head location. If the head location is at offset 0 in the file, it will probably (I say probably because in reality things are a lot more complicated than I've described here; this is just a simple illustration) be faster to perform thread A's read first, then thread B's, because the total distance the head will move in this order is 160 bytes (50 + 10 + 90 + 10); if it did the reads in the opposite order, it would have to move the head forward 160 bytes, then back 110 bytes (150 + 10 - 50), and finally forward 10 bytes, totalling 280 bytes - almost twice as far.

Conclusion: positional file I/O is a Good Thing (tm).