
Bitcoin will rise again.

Bitcoin removes the need to trust a bookie/agent to move money offshore.

Whenever the flow of money is possible between a safe haven and a corrupt/socialist big country with high tax rates and a larger population burden,

money flows to the safe heaven.

Bitcoin is one such way to move money from African nations/India/China to safe havens like Switzerland, Singapore, the UK, etc.

I am talking about illicit gains/unreported assets which can't be moved through official channels.

Moving your unreported assets to a different country might not be illegal in that country if the deal is structured correctly. That's where lawyers/accountants/consultants come into play.

Bitcoin will rise again once the need to move money to a safe haven rises again.

It's an efficient medium for moving large sums of money without trusting a bookie.

I can buy a container load of coffee from an African country where inflation has killed the local currency and the banks either lack the infrastructure for international settlement, take a much bigger cut, or simply aren't trusted by local people. For this I make a simple Bitcoin transfer from Hong Kong and get my coffee container unloaded in Hong Kong.

Edit: it's haven not heaven.


First of all the word you're looking for is haven (as in a harbor), not heaven.

Secondly, are you saying that bitcoin exists solely for the purpose of money laundering and tax evasion?


No, but it certainly can be used for money laundering and tax evasion.

It removes the need to trust a bookie or agent to move money offshore.

There are lawyers/accountants in Switzerland and Hong Kong who will help you launder money from third world countries.

Moving your assets offshore without the local government knowing is not a crime in the other country!

Edit: why downvotes? What's wrong?


My point was that you are assuming some sort of libertarian viewpoint where nation-states trying to tax your money are fundamentally corrupt/socialist.

I personally tend to think that in the vast majority of cases government spending of tax money does more for the greater good than whatever a self-appointed authority would. There are many exceptions to this, but I still think this is largely true in democracies. But then again, I tend to identify as a social democrat.


Yeah, but if you go to some third world countries, you'll see high tax rates yet not enough benefits in return. A prime example is India, where people still have power cuts yet the business/salaried middle class pays north of 30% tax.

People there would happily pay 60% tax if they got even Rome-level infrastructure.

So it's a no-brainer that a lot of them move their money offshore.

Edit: people who feel they are not getting a fair share of infrastructure quality will, with this line of thinking, move money abroad, whether or not it's an ethical/legal thing to do.


This would require some discussion of international wealth and income distribution, but to me the injustice here is not at the state level; rather, it lies in the fact that India is a third world country at all. The Indian upper-middle class should really be demanding reparations from the UK for the effects of colonization rather than moving money offshore, which damages only their poorer countrymen.


Basically a scam then?


[flagged]


Dogmatic is the word you’re looking for


Nah.

> strong religious feeling or belief

Religiosity


Can we extract the parasite and sell it as an entrepreneur drink now?


I thought this was the concept behind raw water? Expose yourself to all of nature's risky and risk-taking-inducing fecally transmitted parasites!


I got water poisoning in the backcountry... the delusion is not that dissimilar to a psychedelic experience, and it's actually intensified by the pervasive, unavoidable, and very real fear of death.


What exactly does it run on the GPU? Is it a drop-in replacement for iTerm2, or do I lose a lot of things?


This doesn't apply to product companies (Flipkart/Alibaba/Zoho/Chargebee) in India/China; they are on par with the West in their workflow/culture.


It would be cool if it showed code in the most common programming languages.


The code is mostly equivalent to the recursive form found in the Wikipedia article on the Cooley-Tukey algorithm. This is a good one to learn from, as it's not only a simple formulation but also forms the basis of modern optimised FFTs such as FFTW (source [1], from the authors of FFTW).

As an aside, I also find the non-recursive, breadth-first form easy to derive through a process of code transformations of the depth-first form; explanations that start breadth-first are somewhat bewildering.

[1] https://cnx.org/contents/ulXtQbN7@15/Implementing-FFTs-in-Pr...
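
For anyone who wants to play with it, here is a minimal sketch of that recursive radix-2 form in JavaScript (my own illustration rather than the Wikipedia pseudocode; it assumes the length is a power of two and represents complex numbers as [re, im] pairs):

  // Recursive radix-2 Cooley-Tukey FFT (decimation in time).
  // x is an array of [re, im] pairs whose length is a power of two.
  function fft(x) {
      const n = x.length;
      if (n === 1) return [x[0]];
      const even = fft(x.filter((_, i) => i % 2 === 0));
      const odd  = fft(x.filter((_, i) => i % 2 === 1));
      const out = new Array(n);
      for (let k = 0; k < n / 2; k++) {
          // twiddle factor w = exp(-2*pi*i*k/n)
          const ang = -2 * Math.PI * k / n;
          const wr = Math.cos(ang), wi = Math.sin(ang);
          const [er, ei] = even[k];
          const [or_, oi] = odd[k];
          // complex multiply t = w * odd[k], then the butterfly
          const tr = wr * or_ - wi * oi;
          const ti = wr * oi + wi * or_;
          out[k]         = [er + tr, ei + ti];
          out[k + n / 2] = [er - tr, ei - ti];
      }
      return out;
  }

  // Example: an 8-point FFT of a real-valued step signal.
  console.log(fft([1, 1, 1, 1, 0, 0, 0, 0].map(v => [v, 0])));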


Actually my quest is to write the FFT code at the lambda-calculus level. Why? First for the fun, and also because I consider that lambda-calculus is for the mind what assembly language is for the computer. See http://lambdaway.free.fr/lambdaspeech/?view=PLR. In this page I would like to replace the inefficient unary numeration with a more efficient decimal positional numeration, and I guess that FFT, Karatsuba, or any divide & conquer algorithm could be useful. It should be pleasant for the mind and could overcome the limits of the JS number implementation.
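
To give a feel for the unary numeration I am talking about, here is a rough JS sketch of Church numerals (my own illustration, not the lambda talk encoding itself); the cost of arithmetic grows with the value, which is exactly what a positional numeration avoids:

  // Church numerals: the number n is "apply f n times".
  const ZERO = f => x => x;
  const SUCC = n => f => x => f(n(f)(x));
  const ADD  = (m, n) => f => x => m(f)(n(f)(x));
  const MUL  = (m, n) => f => n(m(f));   // n repetitions of "apply f m times"

  // Read a Church numeral back as a JS number by counting applications.
  const toInt = n => n(k => k + 1)(0);

  const THREE = SUCC(SUCC(SUCC(ZERO)));
  console.log(toInt(ADD(THREE, THREE)));   // 6
  console.log(toInt(MUL(THREE, THREE)));   // 9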


"The code is mostly equivalent to the recursive form found in the wikipedia article on the cooley-tukey algorithm" I don't think so. A JS translation of the code shown in wikipedia can be seen in http://lambdaway.free.fr/lambdaspeech/?view=lispology.


You copy rather than use strided accesses. It's the same algorithm.


" You copy rather than use strided accesses. " I don't understand "strided accesses". « Plagiarism is stealing, copying is create. » I just translated the code in the lambdatalk language and added missing examples, something that I consider mandatory.


Sorry, I'm not accusing you of anything. By strided accesses I mean when you access elements of an array sequentially using some step size. Think loops with `i += stride` instead of `i++`. In the wikipedia pseudocode this is done implicitly using the 's' parameter. Notice that, in the version you implemented, you split the input into even and odd parts explicitly; you can achieve the same end by accessing the input array in a certain order as you are performing the mathematical operations. This is what the wikipedia pseudocode does. If you've seen other versions of the FFT with a bit reversal step, this is also where that comes in.

Check this out (JavaScript):

  function permute1(x) { 
      if (x.length == 1) return x;
      let even = [];
      let odd = []; 
      for (let i = 0; i < x.length; i += 2) { 
           even[i / 2] = x[i];
           odd[i / 2] = x[i + 1];
      }
      return [].concat(permute1(even), permute1(odd));
  }

  function permute2(x, offset, stride) {
      if (!offset) offset = 0;
      if (!stride) stride = 1;
      if (stride >= x.length) return [x[offset]];
      return [].concat(permute2(x, offset, stride * 2), permute2(x, offset + stride, stride * 2));
  }
  
  function permute3(x) {
      let result = [];
      for (let i = 0; i < x.length; i++) {
          let k = i;
          // pretend 32-bit ints
          k = ((k >> 1) & 0x55555555) | ((k & 0x55555555) << 1);
          k = ((k >> 2) & 0x33333333) | ((k & 0x33333333) << 2);
          k = ((k >> 4) & 0x0F0F0F0F) | ((k & 0x0F0F0F0F) << 4);
          k = ((k >> 8) & 0x00FF00FF) | ((k & 0x00FF00FF) << 8);
          k = ( k >> 16             ) | ( k               << 16); 
          k = k >> (64 - Math.log2(x.length)); // JS shift counts use only the low 5 bits, so this is effectively >> (32 - log2(length))
          if (k < 0) k += x.length; // fix up due to signed ints
          result[i] = x[k];
      }
      return result;
  }
For arrays with power-of-two sizes, these perform the same permutation (but fail differently for non-power-of-two sizes). Note that, with permute1, we effectively iterate over the entire input log2(n) times, so this is an O(n log n) algorithm!

Edit: also, I think I may have misunderstood the relationship between your JS version and your lambdatalk version. They seem to be the same to me?


Thank you for your clever code. This morning I improved the JS code a little at http://lambdaway.free.fr/lambdaspeech/index.php?view=lispolo... and I plan to learn, understand, and insert your code into the fft() function.

Yes, there is a relationship between the JS version and its translation into lambdatalk. My project is to replace the array-based version with a list-based version so that, in this page http://lambdaway.free.fr/lambdaspeech/?view=PLR, I can replace the inefficient unary-numeration-based implementation of numbers (using standard Church numerals or just lists) with a decimal positional numeration. Standard multiplication of words seen as polynomials being O(n^2), I need to go further and implement fast multiplication. Hence my interest in FFT.
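
For contrast, the schoolbook multiplication I want to go beyond looks roughly like this (a sketch over arrays of decimal digits, least significant digit first, not the lambdatalk code itself); an FFT-based convolution would replace the double loop:

  // O(n^2) schoolbook product of two numbers stored as decimal digit arrays,
  // least significant digit first: every digit meets every digit.
  function mulDigits(a, b) {
      const out = new Array(a.length + b.length).fill(0);
      for (let i = 0; i < a.length; i++) {
          for (let j = 0; j < b.length; j++) {
              out[i + j] += a[i] * b[j];
          }
      }
      // carry propagation back to single decimal digits
      for (let k = 0; k < out.length - 1; k++) {
          out[k + 1] += Math.floor(out[k] / 10);
          out[k] %= 10;
      }
      return out;
  }

  console.log(mulDigits([2, 1], [3, 1]));   // 12 * 13 = 156 -> [6, 5, 1, 0]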

As you can see at http://lambdaway.free.fr/lambdaspeech/meca/JS.js, the lambdatalk interpreter is a regular-expression window running over the code (not an AST), replacing expressions in situ with their values. A kind of Turing machine. I like the idea of overcoming the limits of JS numbers using nothing but words and simple substitutions on words.


You are right. You can find a JavaScript version on this page http://lambdaway.free.fr/lambdaspeech/?view=lispology, and the initial LISP code at https://www.physik.uzh.ch/~psaha/mus/fourlisp.php, on which the {lambda talk} code has been built, with some missing examples added.


Here are a few implementations in various languages. https://codegolf.stackexchange.com/questions/12420/too-fast-...


Most implementations on http://rosettacode.org/wiki/Fast_Fourier_transform use the same algorithm.


I didn't see one like it, done in a purely functional way. (http://www.rosettacode.org/wiki/Fast_Fourier_transform#lambd...)


Today we use:

For databases: RDS/DynamoDB.

Redis for caching.

DynamoDB is better in cases where we want to localize the latency of our regional Lambdas.

RDS for everything else, like dashboard entity storage, etc.

CloudWatch collects the logs, Kinesis ships them to S3 where they're transformed in batches with Lambda, and then the data is moved to Redshift. Redshift is used for stats/reports.
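
Roughly, the transform step is a small Lambda like the sketch below (illustration only; the bucket layout and field names here are made up, and it assumes the Node.js runtime with the bundled aws-sdk):

  // Hypothetical batch-transform Lambda: an S3 put event -> read the raw log
  // object, reshape each line, and write it back under a prefix that Redshift
  // later COPYs from.
  const AWS = require('aws-sdk');
  const s3 = new AWS.S3();

  exports.handler = async (event) => {
      for (const record of event.Records) {
          const bucket = record.s3.bucket.name;
          const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '));

          const raw = await s3.getObject({ Bucket: bucket, Key: key }).promise();

          // Example transform: keep a few JSON fields as tab-separated columns.
          const rows = raw.Body.toString('utf8')
              .split('\n')
              .filter(line => line.trim().length > 0)
              .map(line => {
                  const e = JSON.parse(line);
                  return [e.timestamp, e.campaignId, e.country, e.device].join('\t');
              });

          await s3.putObject({
              Bucket: bucket,                    // or a separate curated bucket
              Key: `transformed/${key}`,
              Body: rows.join('\n'),
          }).promise();
      }
      return { batches: event.Records.length };
  };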

We converted the whole ad network to serverless and used the Rust Lambda runtime for CPU-intensive tasks.

Using Go for the rest of the Lambdas.

I love Go and Rust, and optimizing just one Lambda at a time brought the joy of programming back into my life.

We used Apex+Terraform to manage the whole infra for the ad network.

We managed to deploy Lambda in all AWS regions, ensuring minimum latency for the ad viewers.

What used to take a team of over 50 people (tech side only) now takes 10 people to run the whole ad network.

The network is doing 20M profit per year / 9 billion clicks per day.

It's my greatest achievement so far because we made it from scratch without any investor money.

But on the other side, we'll have to shrink our team size next year, as growth opportunities are limited and we want to optimize efficiency further.


Pretty awesome!

I'm currently planning to write a book about AWS. It should teach people how to build their MVPs without restricting themselves in the future.

Are you available for an interview in January?


I've been looking for a book on this topic for a while now!


Some people told me they were searching for this.

I think I'll put up a small splash page for email gathering in the next few days to keep people up to date :)


https://goo.gl/forms/S66Z9sPTaJLbokHI3

I'll start gathering information in January; feel free to share this form.


Did you mean 9 billion clicks or impressions daily?

A 50-person team to run an ad network on the tech side only? I am really curious why it took that many people before going to Lambdas. We are in the adtech space too, and a 5-person team (on-call ops + 2 devs) runs our own data collection, third-party data providers, RTB doing half a million QPS, and our own ad servers doing hundreds of millions of impressions daily.


Sounds really interesting; kudos for building a profitable business from scratch. I have no experience with Redshift; we mostly use the ELK stack, with Kibana to do all the log analysis. Is Redshift significantly better?


We use Redshift for metrics, mostly OLAP.

Think drilldowns three levels deep, based on device, OS, placement, country, ISP, etc., along with click stats per variable.

I've never used Elasticsearch for this.

Before that we used BigQuery, but every query took at least 2 seconds.

So we had to move to a dedicated Redshift cluster.


Make your next project getting off of AWS and you'll save enough money to keep people on your team. :)


So what is the alternative? Maintaining your own infrastructure like we did before "cloud" providers, i.e. your own dedicated servers in managed locations unless you were huge enough to have your own locations? Or just a different cloud provider? It is hard to check if your suggestion is any better since you only say "don't do that", but not what else to do instead...


What's your definition of huge? Just curious as it's still really cheap to rent racks even in top tier datacenters.


> What's your definition of huge?

Quoting myself:

> unless you were huge enough to have your own locations

Those locations are millions, sometimes hundreds of millions, of dollars in investment, with backup power generators large enough to provide power to a comfortably sized village. So, "large enough to a) need and b) be able to afford owning such a location just for your own needs", e.g. Google, Amazon. Even companies like large banks have their servers co-hosted in a separate section of a location owned by a 3rd-party co-hosting provider. To own one you either are one of those providers or you are in the "Google tier". For the purposes of the current context, the linked article, one would even need multiple such locations all over the world. I think that qualifies as "huge" (the company owning such infrastructure just to run its own servers; co-hosting firms do it for others).


You don't need to build and run your own datacenter to self-host. That's just ridiculous to think that's a requirement. Colo is more than fine.


We did go that route as well in the past, but costs were insane, and experienced talent is hard to find and doesn't come cheap.

The cloud has talent working in the background on AWS's payroll; they have a better ability to hire at scale than we do.

So we decided to use them, and no, we don't regret it. It's a more reliable cost than hiring and managing a team which might prove to be less reliable.


They do filter the malicious traffic if you use their load balancer.

The load balancer is shared across user accounts, so Amazon has to stop the DDoS.

They have very effective network-level DDoS detection/filtering.

I forgot to mention: the Lambda model is very easy to reason about, and costs can be forecast with more accuracy than running a VM.
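
Back of the envelope, the forecast is just arithmetic like the sketch below (the rates are placeholders; plug in the current published Lambda pricing):

  // Rough monthly Lambda cost forecast. The rates below are example values,
  // not quoted AWS prices.
  function forecastMonthlyCost({ invocationsPerMonth, avgDurationMs, memoryMb },
                               { perMillionRequests, perGbSecond }) {
      const gbSeconds = invocationsPerMonth * (avgDurationMs / 1000) * (memoryMb / 1024);
      const requestCost = (invocationsPerMonth / 1e6) * perMillionRequests;
      return requestCost + gbSeconds * perGbSecond;
  }

  // Example with made-up traffic and example rates.
  console.log(forecastMonthlyCost(
      { invocationsPerMonth: 100e6, avgDurationMs: 50, memoryMb: 256 },
      { perMillionRequests: 0.20, perGbSecond: 0.0000167 }
  ));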

Once you have set up Lambda in one region, you just need to loop through the list of regions and deploy your Lambda in ALL AVAILABLE REGIONS. Yes, it's that simple! (There's a rough sketch at the end of this comment.)

API Gateway doesn't charge for 4xx responses, so it's very good for fending off layer 7 DDoS too.

Add Cognito and use a Lambda authorizer; it generates API keys and emails them to your users.

Add latency-based DNS routing using Route 53 on top, and you ensure minimum latency in all regions!
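
To make the region loop concrete, here is a rough sketch with the Node.js aws-sdk; the function name, role ARN, and zip path are placeholders, and in practice we drive this through Apex+Terraform rather than calling the API by hand:

  // Sketch: enumerate the available regions, then create (or update) the same
  // Lambda function in every one of them. Names and ARNs are placeholders.
  const AWS = require('aws-sdk');
  const fs = require('fs');

  async function deployEverywhere() {
      const zip = fs.readFileSync('./build/ad-handler.zip');   // placeholder artifact
      const { Regions } = await new AWS.EC2({ region: 'us-east-1' })
          .describeRegions()
          .promise();

      for (const { RegionName } of Regions) {
          const lambda = new AWS.Lambda({ region: RegionName });
          try {
              await lambda.createFunction({
                  FunctionName: 'ad-click-handler',             // placeholder
                  Runtime: 'go1.x',
                  Handler: 'main',
                  Role: 'arn:aws:iam::123456789012:role/lambda-exec',   // placeholder
                  Code: { ZipFile: zip },
              }).promise();
          } catch (err) {
              if (err.code !== 'ResourceConflictException') throw err;
              // The function already exists in this region: just push new code.
              await lambda.updateFunctionCode({
                  FunctionName: 'ad-click-handler',
                  ZipFile: zip,
              }).promise();
          }
          console.log(`deployed to ${RegionName}`);
      }
  }

  deployEverywhere().catch(console.error);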


What happens if someone sends a layer 7 DDoS that you do respond to with a 200? Or 300-level?


Then you're screwed I suppose. To do that they'd likely be performing some sort of replay attack, in which case you should be mitigating against this. There's no magic bullet anywhere.


When you get a network-level DDoS on DigitalOcean, they can't save you.

This rules out small clouds for us.

So we use GCP/Azure/AWS exclusively because of their ability to defy network-level DDoS.


This is an interesting point. Did you originally use a CSP without network-level DDoS protection and then switch? I am curious when this became a variable to consider as an item to explicitly pay more for (e.g. going from AWS Shield Standard to Advanced, or picking a big-3 CSP with a higher price because of DDoS protection), i.e. when the inflection point occurred where DDoS protection became a serious consideration for the business due to financial/user impact (if possible to point to).


Yes, we used bare metal and DigitalOcean before that.

Why? Mostly because of cost, and it seemed simpler than building things out on cloud services.

Every Friday night, our services got DDoSed by our competitors.

The provider would null-route our IPs and our service would go down.

We struggled with it a lot, since we were not big enough to afford a premium DDoS solution.

Once we realized that big clouds like AWS and GCP do not suffer from this, we had to make the switch.


How do you manage to mitigate a DDoS on a public cloud?

When you say GCP, Azure, and AWS have the capacity to defy the DDoS, what capacity are you referring to?

Are you talking about actually scaling and serving the bogus requests? Or the capacity to have enough bandwidth and firewall power to fend it off?


I can only talk for AWS, not GCP or Azure, but there are services that can help mitigate DDoS attacks:

Shield (https://aws.amazon.com/shield/) is AWS's DDoS protection service. It's free and provides basic protection against L3/4 attacks.

Shield Advanced (same URL as above) is a big step up in price, but gives you access to 'improved protection' and a global response team.

Cloudfront (https://aws.amazon.com/cloudfront) is a CDN with global edge locations.

WAF (https://aws.amazon.com/waf) is AWS's web application firewall service. It's less about DDoS and more about specific application attacks, but it is part of the whole solution.

For more detail, you can have a look at AWS's DDoS whitepaper: https://d0.awsstatic.com/whitepapers/Security/DDoS_White_Pap...


The clouds don't do any filtering or mitigation for free. They just have enough bandwidth to pass the attack through to your servers and services. You're just moving the bottleneck here, as now you need to use a cloud DDoS mitigator like Silverline, Cloudflare, or Prolexic.

You probably would have been better off using a cloud mitigator from the start. Their pricing is competitive when you factor in all of the costs.


They do filter the malicious traffic if you use their load balancer. The load balancer is shared across user accounts, so Amazon has to stop the DDoS.

They have very effective network-level DDoS detection/filtering.


Put Cloudflare in front then; you can use it with DigitalOcean.

So you're saved then.


Or Akamai’s mitigation services if you can afford it. I’m curious how big that value proposition is these days.

I wonder how much DDoS it takes before a cloud provider starts dropping a customer’s packets now. Do they even bother anymore?


Yeah, improvising can really produce good stuff.

See: https://youtu.be/lHXkQDanKow

Nor is improvisation lost in the East.


A deadline-based pipeline seems to work best for our case.


Especially in async message-based systems. ActiveMQ (and the JMS API) has built-in support for setting a message expiration upon sending, but you can also configure all of these policies on the broker itself.


If the system is down for a day, and the reports are only valid for that day, there is no sense in "catching up". Throw the old work away and start with the fresh stuff.


Haha, Chinese is my second language. With Vietnamese or Chinese, you need "absolute pitch" to be able to speak it like a native when learning as an adult.

China/Vietnam have a higher proportion of people with absolute pitch.

It has to do with the number of phonemes in a language.

There is also a genetic aspect to it. Mandarin speakers will have a higher IQ than the general population.

