
This extension alone is sufficient justification to shift to Firefox.


Wow, what a great idea. This is incredibly fun.


This setting is not available in my Facebook video settings.


I think you're understating how tough it can be.

There are applications that

  * are mature and complex

  * have hundreds of tables

  * serve millions of users

  * have to be broken into multiple micro-services

  * have developer resource constraints

So you're easily looking at a 1-2 year project, not 1-8 weeks.

You've also ignored some of the complexities, such as resharding (moving data between shards), which may significantly add to the cost of the project.


Also, when architecting for shards, you must take availability into account.

Having several shards can lower the availability of your application if it cannot handle the absence of a shard.

For example, if you have 99.9% availability on each individual DB and you split your data across 10 shards, overall availability drops to about 99% (roughly 9 hours vs. 3.6 days of downtime a year).

To handle that, you need to add replication and automatic fail-overs, adding even more complexity.
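
To make the math above concrete, here is a minimal sketch of the composite-availability calculation, assuming shard failures are independent and the application needs every shard to be up (same 99.9%, 10-shard example):

  # Availability of an app that needs all N shards up, assuming
  # independent failures: overall = per_shard ** N.
  HOURS_PER_YEAR = 24 * 365

  def composite_availability(per_shard, shards):
      return per_shard ** shards

  for shards in (1, 10):
      a = composite_availability(0.999, shards)
      downtime_h = (1 - a) * HOURS_PER_YEAR
      print(f"{shards:>2} shard(s): {a:.4%} available, ~{downtime_h:.0f} h/year down")

  #  1 shard(s): 99.9000% available, ~9 h/year down
  # 10 shard(s): 99.0045% available, ~87 h/year down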


At Prosperworks we offer a CRM which integrates closely with G Suite applications like Gmail and Calendar.

We consider our app to be maturing, if not mature. It is certainly complex - we integrate with dozens of partners and external APIs. We have 80 tables and 300k LoC of Rails code which runs on several TB of PostgreSQL data. We have not broken our app into multiple micro-services. Like everybody, we always feel that our developer resources are constrained.

Our data model is very similar to the CRM example in Ozgun's article: _mostly_ we have a master customer table and a wide halo of tables which are directly or transitively associated with customers. We called this the "company sharding domain". Since we allow one user to be associated with multiple accounts, we shard our user table independently: there is a smaller halo of tables in the "user sharding domain". And we have a handful of global tables for content and feature configuration in the "unsharded domain".
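
As a rough illustration of how we think about those domains - this is a hypothetical sketch, with invented table names rather than our real schema (in Citus terms, the company- and user-domain tables end up as distributed tables via create_distributed_table() and the global ones as reference tables via create_reference_table()):

  # Hypothetical sketch of the three sharding domains described above.
  # Table names and keys are illustrative, not the real schema.
  SHARDING_DOMAINS = {
      # company sharding domain: everything hangs off a customer/company
      "companies":     {"domain": "company", "distribution_key": "id"},
      "contacts":      {"domain": "company", "distribution_key": "company_id"},
      "opportunities": {"domain": "company", "distribution_key": "company_id"},
      # user sharding domain: one user can belong to several accounts,
      # so the user halo is sharded independently
      "users":         {"domain": "user", "distribution_key": "id"},
      "user_settings": {"domain": "user", "distribution_key": "user_id"},
      # unsharded domain: small global content/configuration tables
      "feature_flags": {"domain": "global", "distribution_key": None},
  }

  def distribution_key(table):
      """Column every query on `table` must be scoped by,
      or None for global (unsharded) tables."""
      return SHARDING_DOMAINS[table]["distribution_key"]

  assert distribution_key("contacts") == "company_id"
  assert distribution_key("feature_flags") is None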

We kicked off our migration project from unsharded Postgres to sharded CitusCloud in early Q4 2016. We had one dev work on it solid for one quarter updating our code to be shard-ready. Then another 1.5 devs joined for a month in the final build up to the physical migration. We migrated in late Feb 2017, then consumed perhaps another 3 dev-months on follow-up activities like stamping out some distributed queries which we had unwisely neglected and updating our internal process for our brave new world.

Two years ago at another company I was tech lead on a migration of two much larger Mongo collections to sharded Mongo. That was a larger PHP application which was organized partly into microservices. That effort had a similar cost: as I recall I spent one quarter and two other devs spent about one month, and there were some post-migration follow-up costs as well.

I am confident that real world applications of significant complexity can be migrated from unsharded to sharded storage with a level of effort less than 1 year. I admit that 8 weeks feels fast but I'm sure I could have done it if we had been willing to tie up more devs.

Why were these efforts easier than 2 years? Because we didn't have to build the sharding solutions ourselves - those came off the shelf from some great partners (shout outs to CitusData and mLabs). We just had to update our applications to be shard-smart and coordinate a sometimes complicated physical migration, derisking, and cutover process.

That said, I can imagine the work growing slowly but linearly in the number of tables, and quickly but linearly in the number of micro-services.


I used to think similarly several years ago. I now think differently for the following reasons:

* Citus and other technologies can now provide features that do a lot of the heavy lifting. Some examples are resharding, shard rebalancing, and the high availability features mentioned below.

* My estimates are for B2B (multi-tenant) apps. For those apps, we found that the steps you need to take in re-modeling your data and changing your app are fairly similar. At Citus, we used to shy away when we saw 200-300 tables. These days, complex apps and schemas have become commonplace.

* We saw dozens of complex B2B databases migrate in similar time frames. Yes, some took longer - I'm in the tech business and always an optimist. :)

I also don't want to generalize without knowing more about your setup. If you drop me a line at ozgun @ citusdata.com, happy to chat more!


These perspectives are very negative. An alternative view is that there are tons of interesting problems and projects and applications to be worked on, and to some degree at least you get a say in which ones you work on.


The USA has a 93% conviction rate. Both figures include plea bargains. https://en.wikipedia.org/wiki/Conviction_rate


This gets no comments? Oracle cloud engineers will be a little disappointed I'm sure.


So Gates wants heavy taxes on consumers, but no taxes on entrepreneurs and philanthropists. Interesting. I wonder which of these categories applies to Gates?


ASICs consume very little power compared to PCs. One watt per gigahash is a rough value, although most promise quite a bit less. You'd need about 100 PCs, at 300W each, to produce a gigahash, using the assumptions in this article.


Ah, but that's the unfortunate twist. If you can get a gigahash for 30,000W with PCs, you would have no incentive to go down to consuming only 1W with an ASIC. You would consume 30,000W with ASICs and get 30,000 gigahashes.

[EDIT: Corrected the numbers to match what you wrote. It is irrelevant, though -- if you get a gigahash for X watts with PCs, you'll use X watts to get X gigahashes with a 1W/gigahash ASIC.]
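
To put rough numbers on that, a small worked sketch using the figures from this thread (100 PCs at 300W per gigahash, i.e. ~30kW/GH, versus a nominal 1W/GH ASIC):

  # Same power budget, wildly different hashrate: the thread's numbers.
  PC_W_PER_GH = 100 * 300   # ~30,000W to get 1 GH/s out of PCs
  ASIC_W_PER_GH = 1         # ~1W per GH/s claimed for ASICs

  power_budget_w = 30_000   # keep spending the same electricity as before

  pc_ghs = power_budget_w / PC_W_PER_GH       # 1 GH/s
  asic_ghs = power_budget_w / ASIC_W_PER_GH   # 30,000 GH/s

  print(f"PCs:   {pc_ghs:,.0f} GH/s for {power_budget_w:,}W")
  print(f"ASICs: {asic_ghs:,.0f} GH/s for {power_budget_w:,}W")
  # PCs:   1 GH/s for 30,000W
  # ASICs: 30,000 GH/s for 30,000W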


...assuming that you have the resources to purchase 30kW worth of ASICs. A 1W ASIC does not cost 1/300th of a 300W PC.


All you are saying is that the transition to 30kW of ASICs will not happen overnight. If you stopped at 1W, you would quickly find that you are making less money than you were before -- your competitors will also switch to ASICs, and will consume more than 1W because they want to make more money.

To put it another way, not going to 30kW of ASICs is equivalent to taking 30kW of PC mining hardware and turning some of it off (in a pre-ASIC world where PCs are the most efficient mining hardware available).


If it's true, it isn't that bad (yet). 30 million watts is, what, 150,000 TVs? 400 cars? A few buildings' worth of AC/heating?

I don't think we'll get to a point where BTC has a significant footprint anytime soon.


Could not figure out how to add request headers.


You type them, e.g. Header: Value


Typing the below

GET XUsername:ADMIN https://localhost:443/resources/myresource/1 HTTP/1.1 Host: target

gives an error in the REST client: "Your request contains errors"


This request is invalid.
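
For what it's worth, headers normally go on their own lines (or in a header map) rather than in the request line itself. A minimal sketch of the same request using Python's standard library instead of that tool - the host, path, and XUsername header are just the placeholders from the comment above:

  # Sketch only: placeholder host/path/header, not a real endpoint.
  import http.client

  conn = http.client.HTTPSConnection("localhost", 443)
  conn.request(
      "GET",
      "/resources/myresource/1",
      headers={"XUsername": "ADMIN"},  # Host header is added automatically
  )
  resp = conn.getresponse()
  print(resp.status, resp.reason)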

