Author: Jonathan Zdziarski

Dumpster Diving in Forensic Science

There has been recent speculation about a plan to unlock Farook’s iPhone simply so that investigators can walk through the evidence right on the device, rather than forensically image it, which would supposedly provide no information beyond what is already in an iCloud backup. Going through the applications by hand on an iPhone is on the dumpster-diving level of forensic science, and let me explain why.

The device in question appears to have been powered down already, which has frozen the crypto as well as a number of processes on the device. While in this state, the data is inaccessible – but at least it’s in suspended animation. At the moment, the device is incapable of connecting to a WiFi network, running background tasks, or giving third-party applications access to their own data for housekeeping. This all changes once the device is unlocked. Now, when a PIN code is brute forced, the task actually runs from a separate copy of the operating system booted into memory. This creates a sterile environment in which the tasks on the device itself don’t start, but which provides a platform to break into the device. This is how my own forensics tools used to work on the iPhone, as well as some commercial solutions that later followed my design. The device can be safely brute forced without putting data at risk. Using the phone is a different story.

Read More

On FBI’s Interference with iCloud Backups

In a letter emailed from FBI Press Relations in the Los Angeles Field Office, the FBI admitted to performing a reckless and forensically unsound password change that they acknowledge interfered with Apple’s attempts to re-connect Farook’s iCloud backup service. In attempting to defend their actions, the following statement was made in order to downplay the loss of potential forensic data:

 “Through previous testing, we know that direct data extraction from an iOS device often provides more data than an iCloud backup contains. Even if the password had not been changed and Apple could have turned on the auto-backup and loaded it to the cloud, there might be information on the phone that would not be accessible without Apple’s assistance as required by the All Writs Act Order, since the iCloud backup does not contain everything on an iPhone.”

This statement implies one of only two possible outcomes:

Read More

10 Reasons Farook’s Work Phone Likely Won’t Have Any Evidence

Ten reasons to consider regarding Farook’s work phone:

Read More

Apple, FBI, and the Burden of Forensic Methodology

Recently, the FBI obtained a court order compelling Apple to create a forensics tool that would let the FBI brute force the PIN on a suspect’s device. But let’s look at the difference between this and simply bringing a phone to Apple; maybe you’ll start to see why this is so significant, not to mention underhanded.

First, let me preface this with the fact that I am speaking from my own personal experience, both in the courtroom and in law enforcement forensics circles, since around 2008. I’ve testified as an expert in three cases in California; many others have pleaded out or had other outcomes not requiring my testimony. I’ve spent considerable time training law enforcement agencies around the world specifically in iOS forensics, met LEOs in the middle of the night to work on cases right off of airplanes, gone through the forensics validation and clearance processes, and dealt with red tape and all the other terrible aspects of forensics that you don’t see on CSI. It was a lot of fun, but also an incredibly sobering experience, as I have not been trained to deal with the evidence (images, voicemail, messages, etc.) that I’ve been exposed to the way LEOs have; my faith has kept me grounded. I’ve developed an amazing amount of respect for what they do.

For years, the government could come to Apple with a warrant and a phone, and have the manufacturer provide a disk image of the device. This largely worked because Apple didn’t have to hack into their own phones to do it. Up until iOS 8, the encryption Apple chose to use in their design was easily reversible when you had code execution on the phone (which Apple does). So all through iOS 7, Apple only needed to insert the key into the safe and provide the FBI with a copy of the data.

Read More

Code is Law

For the first time in Apple’s history, they’ve been forced to confront the reality that an overreaching government can make Apple its own adversary. When we think about computer security, our threat models are almost always without, but rarely ever within. This is ultimately reflected in our designs, and Apple is no exception. Engineers working on encryption projects are at a particular disadvantage, as the use (or abuse) of their software is gradually becoming more at the mercy of legislation. The functionality of encryption-based software boils down to its design: is its privacy enforced through legislation, or is it enforced through code?

My philosophy is that code is law. Code should be the angry curmudgeon that doesn’t trust even its creator without the end user’s consent. Even at the top, there may be outside factors affecting how code is compromised, and at the end of the day you can’t trust yourself when someone’s got a gun to your head. When the courts can press the creator of code into becoming an adversary against it, there is ultimately only one design conclusion that can be drawn: once the device is provisioned, it should trust no one – not even its creator – without direct authentication from the end user.

Read More

tl;dr technical explanation of #ApplevsFBI

  • Apple was recently ordered by a federal magistrate to assist the FBI in brute forcing the PIN of a device used by the San Bernardino terrorists.
  • The court ordered Apple to develop custom software for the device that would disable a number of security features to make brute forcing possible.
  • Part of the court order also instructed Apple to design a system by which PINs could be remotely sent to the device, allowing for rapid brute forcing while still giving Apple plausible deniability that they hacked a customer device in a literal sense.
  • All of this amounts to the courts compelling Apple to design, develop, and protect a backdoor into iOS devices.

Read More

tl;dr Notes on iOS 8 PIN / File System Crypto

Here’s iOS file system / PIN encryption as I understand it. I originally pastebin’d this but folks thought it was worth keeping around. (Thanks to Andrey Belenko for his suggestions for edits).

Block 0 of the NAND is used as effaceable storage, and a series of encryption “lockers” is stored on it. This is the portion that gets wiped when a device is erased, as it is the base of the key hierarchy. These lockers are encrypted with a hardware key that is derived from a unique hardware ID fused into the secure space of the chip (the Secure Enclave, on A7 and newer chipsets). Only the hardware AES routines have access to this key, and there is no known way to extract it without chip deconstruction.

One locker, named EMF!, stores the encryption key that makes the file system itself readable (that is, the directory and file structure, but not the actual content). This key is entirely hardware-dependent and is not entangled with the user passcode at all. Without the passcode, the directory and file structure is readable, including file sizes, timestamps, and so on. The only thing not included, as I said, is the file content.

Another locker, called BAGI, contains an encryption key that encrypts what’s called the system keybag. The keybag contains a number of encryption “class keys” that ultimately protect files in the user file system; they’re locked and unlocked at different times, depending on user activity. This lets developers choose whether files should get locked when the device is locked, or stay unlocked after the user enters their PIN, and so on. Every file on the file system has its own random file key, and that key is encrypted with a class key from the keybag. The keybag keys are encrypted with a combination of the key in the BAGI locker and the user’s PIN. NOTE: The operating system partition is not encrypted with these keys, so it is readable without the user passcode.

There’s another locker in the NAND holding what Apple calls the class 4 key, and what we call the Dkey. The Dkey is not encrypted with the user PIN, and in previous versions of iOS (<8) it was used as the foundation for encrypting any files that were not specifically protected with “data protection”. Most of the file system at the time used the Dkey instead of a class key, by design. Because the PIN wasn’t involved in the crypto (as it is with the class keys in the keybag), anyone with root-level access (such as Apple) could easily open that Dkey locker, and therefore decrypt the vast majority of the file system that used it for encryption. The only files protected with the PIN up until iOS 8 were those with data protection explicitly enabled, which did not include the majority of Apple’s files storing personal data. In iOS 8, Apple finally pulled the rest of the file system out of the Dkey locker, and now virtually the entire file system uses class keys from the keybag that *are* protected with the user’s PIN. The hardware-accelerated AES crypto functions allow for very fast encryption and decryption of the entire disk, and have made this technologically possible since the 3GS; for no valid reason whatsoever (other than design decisions), Apple simply decided not to properly encrypt the file system until iOS 8.
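To make the hierarchy concrete, here’s a minimal, dependency-free Python sketch of the chain described above. XOR stands in for the hardware AES key wrapping, PBKDF2 stands in for the PIN entanglement, and every name and value is illustrative – this is not Apple’s implementation:

    import hashlib
    import os

    def wrap(kek: bytes, key: bytes) -> bytes:
        # Toy stand-in for AES key wrapping; real devices use the hardware AES engine.
        return bytes(a ^ b for a, b in zip(key, kek))

    unwrap = wrap  # XOR is its own inverse in this toy

    # Effaceable storage (block 0): lockers wrapped with the fused hardware key.
    hardware_key = os.urandom(32)  # fused into the chip; no known way to extract it
    emf_key = os.urandom(32)       # EMF!: makes the file system structure readable
    bagi_key = os.urandom(32)      # BAGI: protects the system keybag
    lockers = {
        "EMF!": wrap(hardware_key, emf_key),
        "BAGI": wrap(hardware_key, bagi_key),
    }

    # Keybag class keys are protected by the BAGI key *and* the user's PIN,
    # so neither the hardware alone nor the PIN alone can recover them.
    pin_kek = hashlib.pbkdf2_hmac("sha256", b"1234", b"per-device salt", 10_000)
    class_key = os.urandom(32)
    wrapped_class_key = wrap(wrap(bagi_key, pin_kek), class_key)

    # Every file has its own random key, wrapped with a class key from the keybag.
    file_key = os.urandom(32)
    wrapped_file_key = wrap(class_key, file_key)

    # Erasing the device only requires wiping block 0: without the lockers,
    # everything below them in the hierarchy is unrecoverable.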

Read More

Running Invisible in the Background in iOS 8

A number of security improvements have been made in iOS 8 since I published my findings last July. Many services that posed a threat to user privacy have since been closed off, and are now only open in beta versions of iOS. One small point I made in the paper was the threat that invisible software poses on the operating system:

“Malicious software does not require a device be jailbroken in order to run. … With the simple addition of an SBAppTags property to an application’s Info.plist (a required file containing descriptive tags identifying properties of the application), a developer can build an application to be hidden from the user’s GUI (SpringBoard). This can be done to a non-jailbroken device if the attacker has purchased a valid signing certificate from Apple. While advanced tools, such as Xcode, can detect the presence of such software, the application is invisible to the end-user’s GUI, as well as in iTunes. In addition to this, the capability exists of running an application in the background by masquerading as a VoIP client (How to maintain VOIP socket connection in background) or audio player (such as Pandora) by adding a specific UIBackgroundModes tag to the same property list file. These two features combined make for the perfect skeleton for virtually undetectable spyware that runs in the background.”
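For concreteness, here’s a short sketch, using Python’s plistlib, of an Info.plist carrying the two keys described in the quote; the bundle identifier is hypothetical, and “hidden” is the commonly cited SBAppTags value:

    import plistlib

    # Hypothetical Info.plist combining the two techniques quoted above:
    # SBAppTags hides the icon from SpringBoard, and UIBackgroundModes keeps
    # the process alive in the background by masquerading as a VoIP client.
    info = {
        "CFBundleIdentifier": "com.example.innocuous",  # hypothetical bundle ID
        "SBAppTags": ["hidden"],
        "UIBackgroundModes": ["voip"],
    }
    with open("Info.plist", "wb") as f:
        plistlib.dump(info, f)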

As of iOS 8, Apple has closed off the SBAppTags feature set so that applications cannot use it to hide themselves; however, it looks like there are still some ways to manipulate the operating system into hiding applications on the device. I have contacted Apple with the specific technical details, and they have assured me that the problem has been fixed in iOS 8.3. For now, however, it looks like iOS 8.2 and lower are still vulnerable to this attack. The attack allows software to be loaded onto a non-jailbroken device (which typically requires a valid pairing, or physical possession of the device) that runs in the background, invisible to the SpringBoard user interface.

The presence of a vulnerability such as this should heighten user awareness that invisible software may still be installed on a non-jailbroken device, and would be capable of gathering information that could be used to track the user over a period of time. If you suspect that malware may be running on your device, you can view software running invisibly with a copy of Xcode. Unlike the iPhone’s UI and iTunes, Xcode’s device organizer will show invisible software installed on the device.

Read More

Testing for the Strawhorse Backdoor in Xcode

In the previous blog post, I highlighted the latest Snowden documents, which reveal a CIA project out of Sandia National Laboratories to author a malicious version of Xcode. This Xcode malware targeted App Store developers by installing a backdoor on their computers to steal their private codesign keys.


So how do you test for a backdoor you’ve never seen before? By verifying that the security mechanisms it disables are working correctly. Based on the document, the malware apparently infects Apple’s securityd daemon to prevent it from warning the user prior to exporting developer keys:

“… which rewrites securityd so that no prompt appears when exporting a developer’s private key”

A good litmus test to see if securityd has been compromised in this way is to attempt to export your own developer keys and see if you are prompted for permission.
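From the command line, that test might look something like the following sketch, which drives Apple’s security tool from Python (the keychain name and output file are illustrative, and the exact flags may vary by OS X version). On a healthy system, securityd should present an authorization prompt before the private key is allowed to leave the keychain:

    import subprocess

    # Attempt to export signing identities (certificate + private key) as PKCS#12.
    # If securityd has not been tampered with, OS X should prompt for permission
    # before the private key is exported.
    subprocess.run(
        ["security", "export",
         "-k", "login.keychain",        # keychain to export from (illustrative)
         "-t", "identities",            # certificate + private key pairs
         "-f", "pkcs12",                # a format that includes private keys
         "-o", "/tmp/identities.p12"],  # output file (illustrative)
        check=True,
    )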

Read More

The Implications of CIA’s Jamboree

Early this morning, The Intercept posted several documents pertaining to the CIA’s research into compromising iOS devices (along with other things) through Sandia National Laboratories, a major research and development contractor to the government. The documents outlined a number of project talks taking place in 2012 at a closed government conference referred to as the Jamboree. The projects listed in the documents included the following.

Rocoto

Rocoto is a chip-like implant that would likely be soldered to the 30-pin connector on the main board and act like a flasher box, performing the task of jailbreaking a device using existing public techniques. Once the device was jailbroken, a chip like Rocoto could easily install and execute code on it for persistent monitoring or other forms of surveillance. Upon a firmware restore, a chip like Rocoto could simply re-jailbreak the device. Such an implant could likely have worked persistently on older devices (like the 3G mentioned); however, the wording of the document (“we will discuss efforts”) suggests the implant was not complete at the time of the talk. This may, however, have later been adopted into the DROPOUTJEEP implant, which was portrayed as an operational product in the NSA’s catalog published several months ago. The DROPOUTJEEP project, however, claimed to be software-based, whereas Rocoto seems to have involved a physical chip implant.

Strawhorse

Strawhorse is a malicious implementation of Xcode, in which App Store developers (likely not suspected of any crimes) would be targeted and their dev machines backdoored to give the CIA injection capabilities into compiled applications. The malicious Xcode variant was capable of stealing the developer’s private codesign keys, which would be smuggled out with compiled binaries. It would also disable securityd so that it would not warn the developer that this was happening. The stolen keys could later be used to inject and sign payloads into the developer’s own products without their permission or knowledge, which could then be widely disseminated through App Store channels. These could include trojans or watermarks, as the document suggests. With the developer keys extracted, binary modifications could also be made at a later time, if such an injection framework existed.

In spite of what The Intercept wrote, there is no evidence that Strawhorse was slated for use en masse, or that it even reached an operational phase.

NOTE: At the time these documents were reportedly created, the vast majority of App Store developers were American citizens. Based on the wording of the document, the project was still in the middle stages of development, and an injection mechanism (the complicated part) does not appear to have been developed yet, as there was no mention of it.

Read More

Lenovo’s Domain Record Appears Jacked

Early reports came in from The Verge that Lenovo was hacked; however, upon visiting the website, many reported no problems. Lenovo’s servers were not, in fact, hacked, but it appears that the lenovo.com domain record may have been hijacked. Two whois queries below show that the domain was updated today and its name servers were changed over from

Read More

CVF: SPF as a Certificate Validator for SSL

In light of the recent widespread MiTM goings-on with Superfish and Lenovo products, I dusted off an old technique introduced in the anti-spam communities several years ago that would have prevented this, and that could, more importantly, put a giant dent in the capabilities of government-sponsored SSL MiTM.

The Core Problem

The core of the problem with SSL is twofold. After all these years, thousands of Snowden documents, and more reason than ever to distrust governments and be paranoid about hackers, we’re still putting an enormous amount of trust in certificate authorities to:

  1. Play by the rules according to their own verification policies and never be socially engineered
  2. Never honor any secret FISA court order to issue a certificate for a targeted organization
  3. Be secure enough to never be compromised, or to always know when they’ve been compromised
  4. Never hire any rogue employees who would issue false certificates

Not only are we putting immense trust in our CAs, but we’re also putting even more trust in our own computers, and in the idea that the root certificates loaded into our trust store are actually trustworthy. Superfish proved that not to be the case; however, Superfish only did what we’ve been doing in the security community for years to conduct pen-tests: insert a rogue certificate into the trust store of a device. We’ve done this with iOS, OS X, Windows PCs, and virtually every other operating system in conducting pen-tests and security audits.

Sure, there is cert pinning, you say… however, in most cases, when it comes to web browsers at least, cert pinning only pins your certificate to a trusted certificate authority. In the case of Superfish’s malware, cert pinning doesn’t appear to have prevented the interception of SSL traffic whatsoever. In fact, Superfish broke the root store so badly that, in some cases, even self-signed certificates could validate! In the case of CAs that have been compromised (either by an adversary or via secret court orders), cert pinning can also be rendered ineffective, because it still primarily depends on trusting the CA and the root store.

We have solid existing means of validating the chain of trust, but SSL is still missing one core component: a means of validating with the (now trusted) host itself, to ensure that it thinks there’s nothing fishy about your connection. Relying on the trust store alone is why, after potentially tens of thousands of website visits, none of the web browsers thought to ask, “hey, why am I seeing the same cert on every website I visit?”
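To illustrate the idea concretely: a client could compare the certificate a server actually presents against a fingerprint the domain owner publishes in DNS, SPF-style. In this rough Python sketch, the “_cvf” record name and TXT format are invented for illustration, and the third-party dnspython package is assumed:

    import hashlib
    import ssl

    import dns.resolver  # third-party: dnspython

    def cvf_check(host: str, port: int = 443) -> bool:
        # Fingerprint of the certificate the server actually presented to us.
        pem = ssl.get_server_certificate((host, port))
        der = ssl.PEM_cert_to_DER_cert(pem)
        observed = hashlib.sha256(der).hexdigest()

        # Fingerprint(s) the domain owner published in DNS, SPF-style.
        answers = dns.resolver.resolve(f"_cvf.{host}", "TXT")
        published = {part.decode() for rdata in answers for part in rdata.strings}

        # A MiTM certificate chained to a rogue or compelled CA would still
        # satisfy the trust store, but its fingerprint would not match what
        # the site's owner published.
        return observed in published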

Read More

Superfish Spyware Also Available for iOS and Android

For those watching the Superfish debacle unfold, you may also be interested to note that Superfish has an app titled LikeThat available for iOS and Android. The app is a visual search tool apparently for finding furniture that you like (whatever). They also have other visual search apps for pets and other idiotic things, all of which seem to be quite popular. Taking a closer look at the application, it appears as though they also do quite a bit of application tracking, including reporting your device’s unique identifier back to an analytics company. They’ve also taken some rather sketchy approaches to how they handle photos so as to potentially preserve the EXIF data in them, which can include your GPS position and other information.

To get started, a quick look at the binary using ‘strings’ can give you some sketchy information.
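Something along these lines pulls printable URL strings out of the app binary – a rough Python equivalent of running ‘strings’ and grepping for URLs, with a hypothetical file name:

    import re

    # Scan the raw app binary for printable, URL-like byte runs.
    with open("LikeThat", "rb") as f:
        data = f.read()

    for match in re.finditer(rb"https?://[\x21-\x7e]{4,}", data):
        print(match.group().decode())

Here are some of the URLs in the binary: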

Read More

Lenovo Enabled Wiretapping of Your Computer

Robert Graham recently uncovered software that came preinstalled on Lenovo computers hiding under the guise of advertising-ware. While the media rushes to understand the technical details behind this, many are making the mistake of chalking it up to some poorly designed advertising / malvertising software with vulnerabilities. This is not the case at all, and it’s important to note that what’s been done here by Lenovo and SuperFish is by all accounts far more serious: a very intentionally designed eavesdropping / surveillance mechanism that allows Lenovo PCs’ encrypted traffic to be wiretapped anywhere it travels on the Internet. We’ll never know the true motives behind the software, but someone went to great lengths to maliciously transform encrypted traffic in a way that allows this electronic wiretapping, then bundled it with new Lenovo computers.

Based on Graham’s notes, what the media is reporting is commonly referred to as a Man-in-the-Middle attack on the victim’s computer; this is only where the trouble begins. When the user goes to establish an encrypted connection with, say, Bank of America, the SuperFish software pretends that it’s Bank of America right on your computer, using a phony certificate to masquerade as if it were actually the bank. SuperFish then talks to the real Bank of America, using its own private keys to decrypt traffic coming back to it. Where this becomes dangerous is that it transforms the traffic while it’s in transit across the Internet, so that data coming back to the PC is encrypted with a key that SuperFish can decrypt and read.

The threat here goes far beyond that of just the victim’s computer or advertisements: by design, this allows for wiretapping of the PC’s traffic from anywhere it travels on the Internet. In addition to the local MiTM / advertising concerns the media is focusing on, it appears as though the way SuperFish designed their software allows anyone who has either licensed or stolen SuperFish’s private key to intercept and read any encrypted traffic from any affected Lenovo PC across the Internet, without ever having access to the computer. How is this possible? Because SuperFish appears to use the same private keys on every reported installation of the software, according to what Graham’s observed so far.

Read More

Waze: Google’s New Spying Tool

In 2013, Google acquired Waze, a tool designed to find you the best route while driving. Upon hearing of the application, I thought I’d check it out. Unfortunately, I didn’t get past the privacy policy, which was updated only six months ago. While Waze’s policy begins with “Waze Mobile Limited respects your privacy”, reading the policy demonstrates that they do no such thing.

Interesting note: Waze will not let you view the privacy policy inside the app until you’ve already agreed to let it track your location.

Unique Tracking Identifiers

The first thing I noticed about Waze is that they function in the same way Whisper does: under the false guise of anonymity. The average user would wrongly assume that by not registering an account, their identity remains unknown. Even if you don’t create an account in Waze, the privacy policy states that their software creates a unique identifier on your device to track you; to my knowledge, this is a violation of Apple’s own App Store guidelines, but it seems that Google (and Whisper) have gotten a free pass on this. From the policy:

“If you choose to use the Services without setting up a username you may do so by skipping the username setup stage of the application installation process. Waze will still link all of your information with your account and a unique identifier generated by Waze in accordance with this Privacy Policy.”

I’ve previously written about Whisper and how this technique, combined with multiple GPS data points, can easily identify who you are and where you live, even if the GPS queries are fuzzed. With Google as a parent company, your location information is not only particularly identifying on its own, but, cross-referenced with Google’s data and massive analytics, could easily yield a complete profile of you, including your web search history (interests, fetishes, etc.). Even if you don’t have a Google account, any Google searches you’ve done through local IP addresses or applications that track your geolocation can easily be used to link your Waze data to your search history, to your social networking profiles, and to virtually any other intelligence Google or its subsidiaries are collecting about you. Simply by using Waze just once, you’ve potentially granted Google license to identify you by GPS or geolocation, and to associate an entire web search history with your identity – to de-anonymize you to Google.

Of course, Waze doesn’t come out and admit this; if you read their privacy policy, however, you’ll see that they’ve granted themselves a number of interesting (some unconventional) rights to your data that make this possible. Perhaps this is why the company may have been worth over a billion dollars to Google.

Read More

Pawn Storm Fact Check

Fortinet recently published a blog entry analyzing the Pawn Storm malware for iOS. There were some significant inaccuracies, however, and since Fortinet seems to be censoring website comments, I thought I’d post my critique here. Here are a few important things to note about the analysis that were grossly inaccurate.

The first thing of note is the researcher’s claim that the LSRequiresIPhoneOS property indicates that iPads are not targeted, and that the malware only runs on iPhone. Anyone who understands the iOS environment knows that the LSRequiresIPhoneOS tag simply indicates that the application is an iOS application; this tag can be set to true, and an application can still support iPad and any other iOS-based devices (iPod, whatever). I mention this because anyone reading the Fortinet post may assume that their iPad or iPod is not a potential target, and therefore never check it. If you suspect you could be a target of Pawn Storm, you should check all of your iOS-based devices.

The second important thing to note: most of the information the researcher claims the application gathers can only be gathered on jailbroken devices. This is because the jailbreak process in and of itself compromises Apple’s sandbox in order to allow applications to continue to run correctly after Cydia has relocated crucial operating system files onto the user data partition. When Cydia runs for the first time, several different folders get moved to the /var/stash folder on the user partition. Since this folder normally would not be accessible outside of Apple’s sandbox, the geniuses writing jailbreaks decided to break Apple’s sandbox so that you could run your bootleg versions of Angry Birds. Smart, huh?
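Incidentally, this is why the readability of /var/stash from inside an app is a quick (if crude) jailbreak indicator; a minimal sketch, with the obvious caveat that the path check is a heuristic, not proof:

    import os

    # On a stock iOS device, /var/stash does not exist and would be walled off
    # by the sandbox anyway; on a Cydia-jailbroken device it typically exists
    # and is readable, because the sandbox has been broken.
    print("possible jailbreak:", os.path.exists("/var/stash"))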

Read More

A Meta-Data Resilient, Self-Funding “Dark” Internet Mail Idea

I’m still reading the specs on DIME, but it’s already leaving a bad taste in my mouth. It feels like it’s more or less trying to band-aid an already broken mail system that really isn’t anonymous at all and leaves far too much metadata lying around. Even with DIME, it looks like too much information is still exposed to be NSA-proof (like sender and recipient domain names), and with all of the new moving parts, it leaves a rather large attack surface. It feels more as if DIME gives you plausible deniability, but not necessarily NSA-proof anonymity, especially in light of TAO and the likelihood that at least one end of the conversation will be compromised or compelled by FISA. I could be wrong, but it at least got me thinking about what my idea of an Internet dark mail system would look like.

Let me throw this idea out there for you. We all want to be able to just write an email, then throw it anonymously into some large vortex where it will magically and anonymously end up in the recipient’s hands, right? What’s preventing that from being a reality? Well, a few things.

Read More

What You Need to Know About WireLurker

Mobile security company Palo Alto Networks has released a new white paper titled WireLurker: A New Era in iOS and OS X Malware. I’ve gone through their findings, and also managed to get ahold of the WireLurker malware to examine it first-hand (thanks to Claud Xiao from Palo Alto Networks, who sent me the samples). Here’s the quick and dirty on WireLurker: what you need to know, what it does, what it doesn’t do, and how to protect yourself.

How it Works

WireLurker is a trojan that has reportedly been circulating in a number of Chinese pirated software (warez) distributions. It targets 64-bit Mac OS X machines, as there doesn’t appear to be a 32-bit slice. When the user installs or runs the pirated software, WireLurker waits until it has root, and then installs itself into the operating system as a system daemon. The daemon, which uses libimobiledevice, sits and waits for an iOS device to be connected to the desktop, then abuses the trusted pairing relationship your desktop has with it to read the device’s serial number, phone number, iTunes Store identifier, and other identifying information, which it sends to a remote server. It also attempts to install malicious copies of otherwise benign-looking apps onto the device itself. If the device is jailbroken and has afc2 enabled, a much more malicious piece of software gets installed onto the device, which reads and extracts identifying information from your iMessage history, address book, and other files on the device.

WireLurker appears to be most concerned with identifying the device owners, rather than stealing a significant amount of content or performing destructive actions on the device. In other words, WireLurker seems to be targeting the identities of Chinese software pirates.

Read More

How App Store Apps are Hacked on Non-Jailbroken Phones

This brief post will show you how hackers are able to download an App Store application, patch the binary, and upload it to a non-jailbroken device using its original App ID, without the device being aware that anything is amiss – this can be done with a $99 developer certificate from Apple and [optionally] an $89 disassembler. Also, with a $299 enterprise enrollment, a modified application can be loaded onto any iOS device, without first registering its UDID (great for black bag jobs and the intelligence community).


Now, it’s been known for quite some time in the iPhone development community that you can sign application binaries using your own dev certificate. Nobody’s taken the time to write up exactly how people are doing this, so I thought I would explain it. This isn’t considered a security vulnerability, although it could certainly be used to load a malicious copycat application onto someone’s iPhone (with physical access). It’s more a byproduct of developer signing rights on a device, once the device has been enabled with a custom developer profile. What it should be is a lesson to developers (such as Snapchat, and others who rely on client-side logic) that the client application cannot be trusted for critical program logic. What does this mean for non-technical readers? In plain English, it means that Snapchat, as well as any other self-expiring messaging app in the App Store, can be hacked (by the recipient) so that the photos and messages you send them never expire. This should be a no-brainer, but it seems there is a lot of confusion about this, hence the technical explanation.
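To give a flavor of how little is involved, here’s a sketch of the re-signing step at the heart of this technique, driving Apple’s codesign tool from Python; the app path, entitlements file, and signing identity are all hypothetical:

    import subprocess

    APP = "Payload/Example.app"  # unpacked from the .ipa; name is hypothetical
    IDENTITY = "iPhone Developer: Jane Doe (ABCDE12345)"  # a $99 dev certificate

    # After patching the app binary, re-sign the bundle with your own identity;
    # a device provisioned with the matching developer profile will then accept
    # and run the modified app under its original App ID.
    subprocess.run(
        ["codesign", "-f", "-s", IDENTITY,
         "--entitlements", "entitlements.plist", APP],
        check=True,
    )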

As a developer, putting your access control on the client side is taboo. Most developers understand that applications can be “hacked” on jailbroken devices to manipulate the program, but very few realize it can be done on non-jailbroken devices too. There are numerous jailbreak tweaks that give you unlimited skips in Pandora, prevent Snapchat messages from expiring, and even add favorites to your mentions in TweetBot. The ability to hack applications is why (the good) applications do it all server-side. Certain types of apps, however, are designed in such a way that they depend on client logic to enforce access controls. Take Snapchat, for example, whose expiring messages require that the client make photos inaccessible after a certain period of time. These types of applications put the end user at risk in the sense that they are more likely to send compromising content to a party they don’t necessarily trust – thinking, at least, that the message has to expire.

Read More

A Warning to the Tech Community on Abusive Journalists

Below is a letter I sent to Royal Media today regarding a journalist who has gone far beyond his ethical and professional boundaries to harass and attack me. Why, you ask? Because I didn’t think a particular subject I was researching was credible enough yet to warrant a story. I wanted to bring this to the attention of the tech community as a lesson to be very careful about which journalists you choose to speak with. When you have new findings to share, the choice of which journalists you discuss them with can be harmful if you choose unethical or unprofessional reporters who are unwilling or unable to come to an understanding of the details surrounding your work.

Unfortunately, this is not the first time I have had to deal with less-than-ethical journalists. If you recall, I recently had to deal with a smear campaign from a ZDNet writer, who seemingly used her position in journalism to launch a libelous attack against me, motivated by my religious beliefs (or what she thinks they are), with the full support of the ZDNet staff, who never took any action. Sadly, any hack can become a “reporter” in today’s sense of the word, regardless of what kind of journalism training, or even ethics training, they’ve had. News agencies rarely hold their own writers accountable, especially in tech, where misogyny / misandry thrive, and where personal attacks generate headlines.

Read More
