Hacker News | IgorBog61650384's comments

Hi, author of the blog post. This is correct - keeping PII protected has always been their concern, but recent breaches in theirs and other industries (including some they heard of that were not publicized) made them even more concerned.


Yep!


Thanks, fixed it


The SolarWinds incident was detected because of bad opsec by the operators who performed the FireEye op. I would imagine the capability was developed by an expert group in some intelligence agency, and then used as an entry point by a different operator group with lower standards. But who is to say there aren't more of these kinds of attacks out there, just with no one having made a foolish error using them yet? If we assume that, we have to assume this operation was somewhere in the middle of a normal curve of complexity, and that there are even more sophisticated backdoored systems out there we just don't know about.

Imagine any medium-to-large code base (100+ KLoC) that is deployed widely and has an auto-update mechanism. Most companies don't have very strict access controls around the build process (and even if they do, all you need is to corrupt one employee), so it shouldn't be too hard to patch binaries before they are signed (especially bytecode in .NET and Java) and add another URL and/or signature for verification (for the signature, the attacker needs access to the web site/CDN too). The change will be only a few lines, so it is very hard to detect automatically - it will look like regular code to tools.


I love the idea, but it scares me a bit security-wise. This could be a really well-hidden persistence method for malware.

Imagine the following scenario: memory-only malware lands on my computer, identifies the keyboard, uploads malicious firmware, and disappears. Using basic heuristics like timing and entropy, the firmware detects when I log on to the machine, captures my passwords, figures out my OS, and waits for a hidden signal from the memory-only malware. If the signal is not detected for a while because I rebooted or reinstalled my computer, it unlocks the computer with the password at a time of inactivity and types in a command like wget/curl to download the malware again, and so on.

I think this could even be used for virtual machine escape, as many VMs just pass HID commands through, so it's possible the firmware could be updated from inside a VM.

Kudos to System76 though for providing the firmware; this helps in auditing it and running tools like lint or PVS-Studio to decrease the chance of bugs like these. They are consistent in being open source, and I hope more vendors that ship firmware follow their lead.


QMK requires a RESET keycode to be added to your keymap in order to reboot into firmware update mode while it's running. Alternatively, depending on the configured options at compile-time, you can either hold ESC as you plug it in, or hold Space+B.
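For illustration, a minimal keymap sketch (a made-up 2x4 layout, not the Launch's real matrix) showing how RESET is typically exposed on a function layer:

    /* Minimal sketch, not the Launch's real keymap: a hypothetical 2x4
     * matrix with RESET on a function layer. RESET (QK_BOOT in newer
     * QMK releases) reboots the controller into its bootloader. */
    #include QMK_KEYBOARD_H

    enum layers { _BASE, _FN };

    const uint16_t PROGMEM keymaps[][MATRIX_ROWS][MATRIX_COLS] = {
        [_BASE] = LAYOUT(
            KC_ESC, KC_1,  KC_2,  KC_3,
            KC_TAB, KC_Q,  KC_W,  MO(_FN)    /* hold for the function layer */
        ),
        [_FN] = LAYOUT(
            RESET,  KC_NO, KC_NO, KC_NO,     /* Fn+Esc position: enter bootloader */
            KC_NO,  KC_NO, KC_NO, KC_TRNS
        ),
    };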

system76's launch requires holding ESC on bootup (well, the top-left matrix position).

Under normal circumstances there's no "unattended update" functionality built in. Unsure if system76 has modified this behaviour to do so.

(Full disclosure, am a QMK maintainer)


Unfortunately, system76 developers added exactly that functionality to their firmware: https://github.com/system76/qmk_firmware/commit/a1ab70c3a28a...

So it is possible to reboot into the firmware update mode just by sending some bytes to the raw HID interface. Apparently they did not think about the security aspect of this feature.
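To make "sending some bytes" concrete: on the host side this is just a raw HID report write. A minimal hidapi sketch, where the VID/PID and command byte are made-up placeholders rather than System76's actual values:

    /* Host-side sketch of writing to a raw HID interface with hidapi.
     * The VID/PID and command byte below are placeholders only. */
    #include <hidapi/hidapi.h>
    #include <stdio.h>

    int main(void) {
        if (hid_init() != 0) return 1;

        hid_device *dev = hid_open(0x1234, 0x5678, NULL);  /* hypothetical VID/PID */
        if (!dev) { fprintf(stderr, "device not found\n"); return 1; }

        /* Byte 0 is the report ID; the rest is the command payload. */
        unsigned char report[33] = {0x00, 0x0B /* hypothetical command byte */};
        int written = hid_write(dev, report, sizeof report);
        printf("wrote %d bytes\n", written);

        hid_close(dev);
        hid_exit();
        return 0;
    }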


Would you mind creating a GitHub issue to track this? We may decide to change the behavior before production.

EDIT: I have created an issue here: https://github.com/system76/launch/issues/17

We do not intend for the production firmware to include any software reset to bootloader functionality. It will require a physical keypress (Fn+ESC)


There is an automatic reset to bootloader feature that will be removed prior to production, customer units will always require a physical keypress (Fn+ESC) in order to flash firmware: https://github.com/system76/launch/issues/17


There is a pretty simple solution for this: require physical input to enter firmware-upload mode.

Most open source keyboards have a button you press to switch from KB mode to flash target mode. Not sure how this laptop works but a solution would be to require the user to hold fn and some other key for a few seconds before it will accept firmware.
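A rough sketch of that hold-for-a-few-seconds idea in QMK terms (the FW_FLASH keycode and the 3-second threshold are hypothetical, not anything System76 actually ships):

    /* Hypothetical hold-to-flash guard: FW_FLASH is a custom keycode and
     * the 3000 ms threshold is arbitrary. reset_keyboard() is QMK's call
     * to jump to the bootloader. */
    #include QMK_KEYBOARD_H

    enum custom_keycodes { FW_FLASH = SAFE_RANGE };

    static uint32_t flash_hold_timer = 0;
    static bool     flash_held       = false;

    bool process_record_user(uint16_t keycode, keyrecord_t *record) {
        if (keycode == FW_FLASH) {
            if (record->event.pressed) {
                flash_hold_timer = timer_read32();  /* start timing the hold */
                flash_held = true;
            } else {
                flash_held = false;                 /* released too early */
            }
            return false;                           /* swallow the keycode */
        }
        return true;
    }

    void matrix_scan_user(void) {
        /* Only enter the bootloader after an uninterrupted 3-second hold. */
        if (flash_held && timer_elapsed32(flash_hold_timer) > 3000) {
            reset_keyboard();
        }
    }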

This problem, btw, is actually present in a lot of devices you already use, but the firmware is proprietary. People have found ways to flash firmware to USB devices, SD cards and hard drives before; it just requires reverse engineering to work it out.


You're right, as long as the hardware switch is really hardware and not fake-hardware-implemented-in-software, as many vendors do.


That's a real risk but closing the design doesn't really mitigate it, famously: https://semiaccurate.com/2009/07/31/apple-keyboard-firmware-...


They didn't write the firmware. QMK is a popular open source keyboard firmware.

And fwiw, user-updatable keyboards are usually configured to require a particular key be pressed to flash them, or have a dedicated button to hit with a paper clip.


I'm not too fond of the idea of having the firmware updatable in-place either, but given that the board picture shows it has an ISP header, you could probably disable the self-updating functionality and then you would need to do firmware modifications physically.


Nice article, thanks for the tips at the end!

Personal perspective: I've been trying intermittent fasting for about four years, and found it helped a little in the beginning but then petered out.

What happened was I did not eat until 7 PM, then ate a little bit, but once I started eating I could not stop. It was never a matter of hunger - I barely ever feel hunger at all - it was like a deep primal drive to eat once the gates had opened, each and every evening.

The one thing that actually worked for me for a while was almost complete fasting - I would eat one meal on Saturday, one meal on Sunday, and that's all. The first two weeks were hard, but then it became quite easy, except for the aches on Monday mornings. It also lowered my resting heart rate by 10 BPM. I managed to drop about 40 lbs in 6 months. Then I started on a really tough and stressful project at work, stayed late with coworkers, they ordered pizza night after night, I ate, and the fasting was over. Couldn't get back on the wagon since...


Credit to Jason Fung (author of The Obesity Code) for the tips!

And I can very much relate to not being able to stop once I start eating something. I find this especially hard with processed/refined carbs, which I believe play tricks on our hunger-related hormones so our body can't detect when we've had enough food. If I stick mostly to protein/fat and unprocessed carbs, I find it much easier to control how much I eat.

It's interesting to hear your success with an extended fast. I haven't ventured beyond the 16:8 but am intrigued to experiment with something a bit longer this year.


I love hearing great news about the space industry - that's the way to get us humans into outer space.

Can anyone a little more knowledgeable explain the difference between this 3D-printed engine and Rocket Lab's? Thanks!


I don't think this should surprise anyone. The FBI has multiple methods for accessing locked phones: physical exploits like those provided by Cellebrite; baseband attacks, i.e. first attacking the cellular modem and from there using an exploit to reach the main ARM CPU; or exploits or backdoors in any app on the phone that does background refresh over the web while the phone is locked. I think the current state of infosec means that anyone who is the target of a nation-state intelligence or counter-intelligence agency can be hacked. Whether that is actually done depends on how interesting the person is and the lawfulness of the action, not on technical capabilities.


Your understanding of baseband attacks is not correct. Having a baseband exploit would not facilitate this. Nor would exploits/backdoors in any particular app.


Why couldn't a baseband attack facilitate this? It was shown at least as far back as 2017[0] that a program on a baseband could affect the memory of the application processor, and in 2018[1] that a specially crafted message can achieve an RCE on a baseband. Since then, cell modems have gotten even more integrated with APs.

[0] https://comsecuris.com/blog/posts/path_of_least_resistance/ [1] https://i.blackhat.com/us-18/Thu-August-9/us-18-Grassi-Explo...


>Why couldn't a baseband attack facilitate this?

Because this is about the iPhone, where the baseband is just a USB peripheral. There simply is no DMA. iPads and Macs have DMA controls in place as well. There are other iPhone attacks for sure, but they have been fairly conscious about keeping the baseband isolated for a good long while. So it's less likely to be the vector. Apple didn't spend a ton of money on a custom security processor and OS stack just to let a 3rd party vendor firmware walk all over it. From page 41 of their old iOS Security Guide:

>"To protect the device from vulnerabilities in network processor firmware, network interfaces including Wi-Fi and baseband have limited access to application processor memory. When USB or SDIO is used to interface with the network processor, the network processor can’t initiate Direct Memory Access (DMA) transactions to the application processor. When PCIe is used, each network processor is on its own isolated PCIe bus. An IOMMU on each PCIe bus limits the network processor’s DMA access to pages of memory containing its network packets or control structures."

You'll notice that "iPhone" and "Apple" do not appear as subjects in the papers you link. Cellebrite and the like are probably doing other things.


Exploits are possible even without DMA. Windows had a slew of USB stack exploits, ranging from the serial and modem drivers to HID devices and more. There have also been (and probably still exist) exploits over serial lines, over I2C and SMBus, etc. Not having DMA makes it much, much harder, but not impossible.

So having the modem connected by USB does not make attacking through it impossible - how can you tell there are no bugs in the iOS USB stack?


As far as I understand, the isolation between basebands and the main SoC has also been improved (using IOMMUs etc.)


Love the post, good luck!

Re the “productivity equation” from Ali Abdaal: Productivity = (useful output / time) × f, where f is the 'fun factor'. But treating f as constant is wrong - even fun projects can get tiresome. A better measure might instead be an inverse 'tediousness' factor, e.g. how tedious the worst expected parts of the project are (the 'fun' bottleneck), and making sure you don't get stuck when you reach those points.

Also, fun projects tend to be novel, but novel projects have the highest amount of unexpectedness, so it's good to decide on a few 'novel' things and let the rest of the project be based on stuff you know well.


The only reason this was detected was very overt behavior - opening ad popups. So I'd guesstimate that for each one of these, there are 10 that go undetected. This means the whole ecosystem is broken, as there is no reason this would happen only for updates and not for new apps as well. Apple's ecosystem is somewhat better, but I can't imagine they go through every line of code in each package, so most of their review is probably done with some combination of automatic static and dynamic analysis, and those can be fooled. The problem with both platforms is that they don't give run-of-the-mill users the option of installing an effective firewall and security solutions.


This happened on iOS for me years ago.

I had two apps that radically changed their business model (owner?) through updates with no recourse.

I had an app called gas cubby, which let me locally - on the phone - keep track of all my vehicles. I could enter detailed information about each car such as year, make, model, vin, insurance policy, gas purchases, oil changes and the like. It would tell you gas mileage and remind you of upcoming maintenance. One day, I updated the app and all my local data was uploaded to the cloud.

Another app I updated was CamScanner from Tencent, which basically did the same thing. Think of all the PDFs you scan going to their cloud.


I've been writing apps for a long time. They are usually free/Tier 1 apps.

A while back, I was approached by a [NATION OBFUSCATED] developer, asking to buy up one of my older apps (they are all open-source).

I ignored the request, and reported the approach to Apple, as I'm sure that this actor has been doing the same for many other apps.

This is apparently a common method for malware-slingers. They buy established, older apps, that they assume the developer has abandoned (I hadn't abandoned it, but it's a simple app that hardly ever needs tweaking. If I stop supporting an app, I remove it from the store).

They then "update" the app, with a little "extra flavoring."


> One day, I updated the app and all my local data was uploaded to the cloud

This happened to me with Chrome. It auto-updated, then automatically synced browser history, passwords, and who knows what else, to Google. They soon changed it to opt-in sync, but it was too late for me at that point; they had already hoovered up my personal data. That was when I stopped using Chrome and switched fully to Firefox.


Camscanner was a blatant bait and switch. When I first started using it, I paid for a license to get full functionality with no ads/watermarks/etc. Magically, years later I got reverted to the ad-supported/free version, and my license was nowhere to be found. This was at the same time they moved to "cloud features" and a subscription model. Their reviews are littered with people having the same issue and the developer copy-pasting some response that doesn't work.


I haven't had this issue with Camscanner, but I've had it with other apps. One outright disappeared from my library, as if I had never had it installed.


yeah this is one reason why I can't take mobile app end to end encryption, or client side only, claims seriously. a single update at any time could undermine all of that

and secondly, they or an analytics package can just read everything client side and upload it to a server anyway

doesn't matter if it's whatsapp, or signal, or some protonmail client if such a thing exists

I just don't use them with that assurance in mind, I use them for other things.


>yeah this is one reason why I can't take mobile app end to end encryption, or client side only, claims seriously.

If it's a large company like Facebook that values a product like Whatsapp at billions, I trust them at least on this issue. I'm pretty sure they're not going to put junk third-party malware for 50k into the Whatsapp client.

This is mostly an issue for apps done by individual developers who have huge incentive to take these deals, like the barcode scanner in question.


They've been sideloading with React Native, allowing updates even for people without automatic updates enabled, and have abused enterprise/privileged developer keys, which allow access to additional parts of the system. I just don't see how you can draw that conclusion.

I use the apps for other things, not for any assurance of privacy.


> I trust them

You literally mentioned a company that betrayed trust so bad a government tried to call them to account.


Are people capable of enough nuance to distinguish between issues that large tech firms are likely trustworthy on and issues that they aren't?

When they stand to make billions from breaking my trust, I'm sceptical. When they stand to make a penny and ruin their entire product, then no, I'm not.

The problem in question here, that rogue developers sell out their product to third parties, is not an issue that Facebook, Google etc have. They have every incentive to keep their software secure.


A betrayal of trust will not "ruin their entire product", we've already seen that it won't (no matter the scale). Believing a small betrayal to be worse than a big one is your right, but that doesn't mean it isn't naive.


Your whole premise is based on a very arbitrarily low value for collecting your plain-text data? From a company that is a machine built for monetizing this specific thing? And that they won't because their users care about trust too much - users of Facebook products, and specifically whatsapp? And you think the rest of us aren't compartmentalizing our issues with that company enough?

this is.... I’m speechless, I ran out of words for this absurdity


I get what you're saying, but it's funny because what the dodgy small players do with the data is actually sell it to facebook. You're just cutting out the middleman here.


>If it's a large company like Facebook that values these products like Whatsapp at billions I trust them at least on this issue. I'm pretty sure they're not going to put junk third party malware for 50k into the Whatsapp client.

Zuck: They "trust me"

Zuck: Dumb fucks.


That's a one dimensional way to think.

You may not be able to trust Facebook with your privacy, but you can trust them not to install malware that swipes your bitcoins.

That being said, I despise the current state of affairs with cellphones. I don't like needing to trust any corp. I'm jumping to a Linux native phone when my current device dies.


>you can trust them not to install a malware that swipes your bitcoins

Sure, they might not take malware that swipes my crypto, but I wouldn't put it past them to take malware that uses my resources to mine for crypto. What is the downside for them?


School tried to make me use camscanner, glad I took the extra effort to do something else. Thanks for the anecdote.


Try OpenScan, open source document scanner app...

Source: I am a user


Thank you. Unfortunately, it seems that OpenScan does not have a feature to straighten out photographed documents. CamScanner has its own camera app, which has features specific to photographing documents.


I absolutely love CamScanner, and I have been on the old version for over a year because I refuse to update to the new version, which requires network permissions. I suspected this is exactly why it needs those permissions.

To what did you switch? Camscanner is otherwise an excellent app, especially for combining multiple images and straightening them out.


Not OP, but I switched to using Microsoft Office Lens.


Thank you! This one seems to have the features of Camscanner that I use: straightening documents and combining multiple images into a single PDF.


I just continue to use the Brother scanner in the other room. I don't recommend Brother; they updated the software and somehow took away features.


Unfortunately the HP scanner doesn't fit into my meeting bag!


Adobe Scan is a solid option as well.


Adobe lost my trust years ago, and I see that viewpoint vindicated often enough to never use Adobe software again. The only Adobe product that I still use is Magento, and only on client sites. I would love to find a non-Adobe alternative.


> This happened on ios for me years ago.

Neither of the 2 scenarios you describe are even remotely what's happened here. Not sure how you got from 'malicious ad popups' to 'app added cloud feature'.


I gave Slacker Radio the big heave-ho when they decided they wanted to help themselves to my contact list. They did that just before I was about to pony up for a paid subscription. Bullet dodged.


You probably overestimate Apple here. I'm pretty sure you can do a lot of fuckery with WebView, JavaScript, an innocent-looking API, and feature flags in JS that get swapped for bad behavior remotely after the review process is complete.


> The problem with both platforms is that they don't provide run of the mill users the option of installing an effective firewall and security solutions.

Google does allow no-root firewalls on the PlayStore which rely on VPN APIs. Here are some open source ones: https://www.reddit.com/r/androidapps/comments/jhtvn4/a_list_...

