
Does anybody have an equally short summary of any differences today?

By the way, there are links below the text; here's the one to the HN discussion from 2016: https://news.ycombinator.com/item?id=10885727



Today we use the following:

For the database, RDS/DynamoDB.

Redis for caching.

DynamoDB is better in cases where we want to localize the latency of our regional Lambdas.

RDS for everything else, like dashboard entity storage.

CloudWatch prints the logs, Kinesis takes them to S3, where they are transformed in batches with Lambda, and the data is then moved to Redshift. Redshift is for stats/reports.
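A rough sketch of the last hop of that pipeline, assuming the aws-lambda-go and lib/pq packages (Redshift speaks the Postgres wire protocol); the table name, environment variables, and COPY options are illustrative, not the actual setup:

    package main

    import (
        "context"
        "database/sql"
        "fmt"
        "os"

        "github.com/aws/aws-lambda-go/events"
        "github.com/aws/aws-lambda-go/lambda"
        _ "github.com/lib/pq" // Redshift accepts the Postgres wire protocol
    )

    // handler fires when a transformed batch lands in S3 and issues a
    // Redshift COPY for each new object.
    func handler(ctx context.Context, evt events.S3Event) error {
        // DSN and IAM role are hypothetical; in practice they would come
        // from environment variables or a secrets store.
        db, err := sql.Open("postgres", os.Getenv("REDSHIFT_DSN"))
        if err != nil {
            return err
        }
        defer db.Close()

        for _, rec := range evt.Records {
            copyCmd := fmt.Sprintf(
                `COPY click_events FROM 's3://%s/%s'
                 IAM_ROLE '%s' FORMAT AS JSON 'auto' GZIP`,
                rec.S3.Bucket.Name, rec.S3.Object.Key, os.Getenv("REDSHIFT_COPY_ROLE"),
            )
            if _, err := db.ExecContext(ctx, copyCmd); err != nil {
                return fmt.Errorf("copy %s failed: %w", rec.S3.Object.Key, err)
            }
        }
        return nil
    }

    func main() {
        lambda.Start(handler)
    }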

Converted the whole ad network to serverless. Used the Rust Lambda runtime for CPU-intensive tasks.

Using Go for the rest of the Lambdas.
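For context, a plain Go Lambda with a typed request/response looks roughly like this; the AdRequest/AdResponse shapes are made up for illustration, not the real types:

    package main

    import (
        "context"

        "github.com/aws/aws-lambda-go/lambda"
    )

    // AdRequest and AdResponse are illustrative shapes only.
    type AdRequest struct {
        PlacementID string `json:"placement_id"`
        Country     string `json:"country"`
    }

    type AdResponse struct {
        CreativeURL string `json:"creative_url"`
    }

    func handler(ctx context.Context, req AdRequest) (AdResponse, error) {
        // Real selection logic would go here; this just echoes a placeholder.
        return AdResponse{CreativeURL: "https://example.com/creative/" + req.PlacementID}, nil
    }

    func main() {
        lambda.Start(handler)
    }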

I love Go and Rust, and optimizing just one Lambda at a time brought the joy of programming back into my life.

Used Apex + Terraform to manage the whole infra for the ad network.

We managed to deploy Lambda in all AWS regions, ensuring minimum latency for the ad viewers.

What once took a team of over 50 people (tech side only) now takes 10 people to run the whole ad network.

The network is doing 20M in profit per year and 9 billion clicks per day.

It's my greatest achievement so far because we made it from scratch without any investor money.

But on the other side, we'll have to shrink our team size next year, as the growth opportunity is limited and we want to optimize efficiency further.


Pretty awesome!

I'm currently planning to write a book about AWS. It should teach people how to build their MVPs without restricting themselves in the future.

Are you available for an interview in January?


I've been looking for a book on this topic for a while now!


Some people told me they were searching for this.

I think I'll put up a small splash page for email gathering in the next few days to keep people up to date :)


https://goo.gl/forms/S66Z9sPTaJLbokHI3

I'll start gathering information in January; feel free to share this form.


Did you mean 9 billion clicks or impressions daily?

A 50-person team to run an ad network, on the tech side only? I'm really curious why it took that many people before going to Lambdas. We're also in the adtech space, and a 5-person team (on-call ops + 2 devs) runs our own data collection, third-party data providers, RTB doing half a million QPS, and our own ad servers doing hundreds of millions of impressions daily.


Sounds really interesting, kudos for building a profitable business from scratch. I have no experience with Redshift; we mostly use the ELK stack, so Kibana does all the log analysis. Is Redshift significantly better?


Using Redshift for metrics, mostly OLAP.

Think drilldowns up to 3 levels deep, based on device, OS, placement, country, ISP, etc., along with click stats per variable.

I've never used Elasticsearch for this.

Before that we used BigQuery, but every query took at least 2 seconds.

So we had to move to a dedicated Redshift cluster.
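To give a sense of the shape of those queries, here is a hypothetical 3-level drilldown (table and column names invented), run over the Postgres wire protocol that Redshift exposes:

    package main

    import (
        "database/sql"
        "fmt"
        "log"
        "os"

        _ "github.com/lib/pq"
    )

    func main() {
        db, err := sql.Open("postgres", os.Getenv("REDSHIFT_DSN"))
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // Hypothetical 3-level drilldown: country -> OS -> placement,
        // with click counts per combination over the last week.
        rows, err := db.Query(`
            SELECT country, os, placement_id, COUNT(*) AS clicks
            FROM click_events
            WHERE event_date >= dateadd(day, -7, current_date)
            GROUP BY country, os, placement_id
            ORDER BY clicks DESC
            LIMIT 100`)
        if err != nil {
            log.Fatal(err)
        }
        defer rows.Close()

        for rows.Next() {
            var country, osName, placement string
            var clicks int64
            if err := rows.Scan(&country, &osName, &placement, &clicks); err != nil {
                log.Fatal(err)
            }
            fmt.Println(country, osName, placement, clicks)
        }
    }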


Make your next project getting off of AWS and you'll save enough money to keep people on your team. :)


So what is the alternative? Maintaining your own infrastructure like we did before "cloud" providers, i.e. your own dedicated servers in managed locations unless you were huge enough to have your own locations? Or just a different cloud provider? It is hard to check if your suggestion is any better since you only say "don't do that", but not what else to do instead...


What's your definition of huge? Just curious as it's still really cheap to rent racks even in top tier datacenters.


> What's your definition of huge?

Quoting myself:

> unless you were huge enough to have your own locations

Those locations are investments of millions, sometimes hundreds of millions, of dollars, with backup power generators large enough to power a comfortably sized village. So, "large enough to a) need and b) be able to afford owning such a location just for your own needs", e.g. Google, Amazon. Even companies like large banks have their servers co-hosted in a separate section of a location owned by a 3rd-party co-hosting provider. To own one, you either are one of those providers or you are in the "Google tier". For the purposes of the current context, the linked article, one would even need multiple such locations all over the world. I think that qualifies as "huge" (the company owning such infrastructure just to run its own servers; co-hosting firms do it for others).


You don't need to build and run your own datacenter to self-host. That's just ridiculous to think that's a requirement. Colo is more than fine.


We did go that route as well in the past, but costs were insane, and experienced talent is hard to find and doesn't come cheap.

The cloud has talent working in the background on AWS's payroll; they're better able to hire at scale than we are.

So we decided to use them, and no, we don't regret it. It's a more predictable cost than hiring and managing a team, which might prove to be less reliable.


They do filter the malicious traffic if you use their load balancer.

The load balancer is shared across user accounts, so Amazon has to stop the DDoS.

They have very effective network-level DDoS detection/filtering.


I forgot to mention: the Lambda model is very easy to reason about, and costs can be forecast with more accuracy than running a VM.
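For a rough sense of that forecastability, a back-of-the-envelope sketch; the workload numbers are hypothetical, and the rates are the published on-demand Lambda prices, which should be checked against current AWS pricing:

    package main

    import "fmt"

    func main() {
        // Assumed on-demand Lambda pricing (verify against current AWS rates):
        const (
            pricePerMillionRequests = 0.20         // USD per 1M invocations
            pricePerGBSecond        = 0.0000166667 // USD per GB-second
        )

        // Hypothetical workload: 9 billion invocations/day,
        // 128 MB memory, 50 ms average duration.
        invocationsPerDay := 9e9
        memoryGB := 128.0 / 1024.0
        avgSeconds := 0.050

        requestCost := invocationsPerDay / 1e6 * pricePerMillionRequests
        computeCost := invocationsPerDay * memoryGB * avgSeconds * pricePerGBSecond

        fmt.Printf("daily request cost: $%.0f\n", requestCost)
        fmt.Printf("daily compute cost: $%.0f\n", computeCost)
    }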

Once you have set up Lambda in one region, you just need to loop through the list of regions and deploy your Lambda in ALL AVAILABLE REGIONS. Yes, it's that simple!
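A minimal sketch of that loop using the aws-sdk-go v1 API; the function name, role ARN, and zip artifact are hypothetical (in practice this was driven by Apex + Terraform):

    package main

    import (
        "io/ioutil"
        "log"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/ec2"
        awslambda "github.com/aws/aws-sdk-go/service/lambda"
    )

    func main() {
        sess := session.Must(session.NewSession())

        // Ask EC2 for the list of enabled regions.
        regionsOut, err := ec2.New(sess, aws.NewConfig().WithRegion("us-east-1")).
            DescribeRegions(&ec2.DescribeRegionsInput{})
        if err != nil {
            log.Fatal(err)
        }

        zip, err := ioutil.ReadFile("ad-server.zip") // hypothetical build artifact
        if err != nil {
            log.Fatal(err)
        }

        // Create the same function in every region.
        for _, r := range regionsOut.Regions {
            svc := awslambda.New(sess, aws.NewConfig().WithRegion(*r.RegionName))
            _, err := svc.CreateFunction(&awslambda.CreateFunctionInput{
                FunctionName: aws.String("ad-server"),                          // hypothetical
                Role:         aws.String("arn:aws:iam::123456789012:role/ads"), // hypothetical
                Runtime:      aws.String("go1.x"),
                Handler:      aws.String("main"),
                Code:         &awslambda.FunctionCode{ZipFile: zip},
            })
            if err != nil {
                log.Printf("%s: %v", *r.RegionName, err)
                continue
            }
            log.Printf("deployed to %s", *r.RegionName)
        }
    }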

API Gateway doesn't charge for 4xx responses, so it's very good for fending off layer 7 DDoS too.

Add Cognito and use a Lambda authorizer; it generates API keys and emails them to your users.
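A bare-bones custom authorizer in Go, assuming the aws-lambda-go event types; the token check, principal, and usage-plan key are placeholders rather than a real Cognito integration:

    package main

    import (
        "context"
        "errors"

        "github.com/aws/aws-lambda-go/events"
        "github.com/aws/aws-lambda-go/lambda"
    )

    // authorize validates the caller's token and, on success, returns an IAM
    // policy plus the API key to meter the request against a usage plan.
    func authorize(ctx context.Context, req events.APIGatewayCustomAuthorizerRequest) (events.APIGatewayCustomAuthorizerResponse, error) {
        // Placeholder check: a real implementation would validate a Cognito
        // token and look up the caller's API key.
        if req.AuthorizationToken == "" {
            return events.APIGatewayCustomAuthorizerResponse{}, errors.New("Unauthorized")
        }

        return events.APIGatewayCustomAuthorizerResponse{
            PrincipalID: "user-from-token", // placeholder principal
            PolicyDocument: events.APIGatewayCustomAuthorizerPolicy{
                Version: "2012-10-17",
                Statement: []events.IAMPolicyStatement{{
                    Action:   []string{"execute-api:Invoke"},
                    Effect:   "Allow",
                    Resource: []string{req.MethodArn},
                }},
            },
            // Ties the request to an API key in a usage plan.
            UsageIdentifierKey: "api-key-for-this-user", // placeholder
        }, nil
    }

    func main() {
        lambda.Start(authorize)
    }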

Add latency-based DNS routing using Route 53 on top and you ensure minimum latency in all regions!
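And a sketch of registering one region's API endpoint as a latency-based record via the aws-sdk-go v1 Route 53 client; the hosted zone ID, domain, and endpoint are hypothetical:

    package main

    import (
        "log"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/route53"
    )

    func main() {
        svc := route53.New(session.Must(session.NewSession()))

        // One latency record per region, all sharing the same name; Route 53
        // answers with the record closest (in latency) to the resolver.
        _, err := svc.ChangeResourceRecordSets(&route53.ChangeResourceRecordSetsInput{
            HostedZoneId: aws.String("Z123EXAMPLE"), // hypothetical zone
            ChangeBatch: &route53.ChangeBatch{
                Changes: []*route53.Change{{
                    Action: aws.String("UPSERT"),
                    ResourceRecordSet: &route53.ResourceRecordSet{
                        Name:          aws.String("api.example.com"),
                        Type:          aws.String("CNAME"),
                        TTL:           aws.Int64(60),
                        SetIdentifier: aws.String("eu-west-1"), // one per region
                        Region:        aws.String("eu-west-1"), // latency-based routing
                        ResourceRecords: []*route53.ResourceRecord{{
                            Value: aws.String("abc123.execute-api.eu-west-1.amazonaws.com"),
                        }},
                    },
                }},
            },
        })
        if err != nil {
            log.Fatal(err)
        }
    }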


What happens if someone sends a layer 7 DDoS that you do respond to with a 200? Or 300-level?


Then you're screwed I suppose. To do that they'd likely be performing some sort of replay attack, in which case you should be mitigating against this. There's no magic bullet anywhere.



