Hacker News | Malipeddi's comments

I will believe it when I see it. I have been feeling really helpless and hopeless about Windows recently. So hopefully this news turns into something real.


Same thing caught my eye. They are usually sparse.


Yep, for any decent-sized graph, sparse is an absolute necessity, since a dense matrix grows with the square of the node count. Sparse matrices and sparse matrix multiplication are complex, and there are multiple kernel approaches depending on density and other factors. SuiteSparse [1] handles these cases, has a kernel JIT compiler for different scenarios and graph operations, and supports CUDA as well. Worth checking out if you're into algebraic graph theory.

Using SuiteSparse and the standard GAP benchmarks, I've loaded graphs with 6 billion edges into 256GB of RAM, and can BFS that graph in under a second. [2]

[1] https://github.com/DrTimothyAldenDavis/GraphBLAS

[2] https://onesparse.com/
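For readers who haven't seen the linear-algebraic view: BFS can be phrased as repeated sparse matrix–vector products, which is the idea GraphBLAS builds on. A toy scipy.sparse sketch of that idea (not SuiteSparse itself, and nowhere near its performance):

```python
import numpy as np
from scipy.sparse import csr_matrix

def bfs_levels(adj, source):
    """BFS levels via repeated sparse matrix-vector products.

    adj: CSR adjacency matrix with integer data; source: start vertex.
    Returns an array of levels, with -1 for unreachable vertices.
    """
    n = adj.shape[0]
    levels = np.full(n, -1)
    frontier = np.zeros(n, dtype=np.int64)
    frontier[source] = 1
    level = 0
    while frontier.any():
        levels[frontier > 0] = level
        # Next frontier: neighbors of the current one, not yet visited.
        frontier = (adj.T @ frontier) * (levels == -1)
        level += 1
    return levels

# Tiny example: undirected path graph 0-1-2-3.
edges = [(0, 1), (1, 2), (2, 3)]
rows = [a for a, b in edges] + [b for a, b in edges]
cols = [b for a, b in edges] + [a for a, b in edges]
data = np.ones(len(rows), dtype=np.int64)
adj = csr_matrix((data, (rows, cols)), shape=(4, 4))
print(bfs_levels(adj, 0))  # [0 1 2 3]
```

GraphBLAS generalizes the `(adj.T @ frontier)` step to arbitrary semirings and fuses the masking into the kernel, which is where the real speed comes from.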


There are other quite elegant methods for triangles and simplices.

For a triangle, drawing α and β uniformly over [0,1), the barycentric coordinates (1−√α, √α(1−β), β√α) are uniform over the triangle. No rejection and no test for flipping.

For simplices (triangle, tetrahedron, 5-cell, etc.), barycentric coordinates obtained by drawing each coordinate uniformly from (0,1], taking the negative log, and normalizing so they sum to 1 will be uniform within the simplex.

I wrote about this and other related sampling methods below.

https://abhila.sh/writing/5/Random_Sampling.html

https://abhila.sh/writing/8/Random_Sampling_2.html
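A minimal NumPy sketch of both constructions (function names are my own):

```python
import numpy as np

def sample_triangle(rng):
    """Uniform barycentric coordinates on a triangle via the sqrt trick."""
    a, b = rng.random(2)
    s = np.sqrt(a)
    return np.array([1.0 - s, s * (1.0 - b), s * b])

def sample_simplex(k, rng):
    """Uniform barycentric coordinates on a simplex with k vertices:
    draw k uniforms in (0,1], take -log, normalize (Dirichlet(1,...,1))."""
    e = -np.log(1.0 - rng.random(k))  # 1 - u keeps the argument in (0, 1]
    return e / e.sum()

rng = np.random.default_rng(0)
print(sample_triangle(rng))   # three nonnegative weights summing to 1
print(sample_simplex(4, rng)) # four nonnegative weights summing to 1
```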


This is a neat and accessible introduction to the topic. Nice work! I want to mention nested quadratures such as Gauss–Kronrod and Clenshaw–Curtis/Fejér. Nesting means that the quadrature points of a lower order are a subset of the points required at higher orders. This allows you to increase your order of accuracy, if required, with fewer total evaluations.
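A quick numerical illustration of the nesting for Clenshaw–Curtis (a sketch using the standard Chebyshev-extrema abscissae): doubling the order reuses every point already evaluated.

```python
import numpy as np

def cc_points(n):
    """Clenshaw-Curtis abscissae: the n+1 Chebyshev extrema on [-1, 1]."""
    return np.cos(np.arange(n + 1) * np.pi / n)

coarse = cc_points(4)
fine = cc_points(8)
# Every coarse point reappears at the even indices of the fine grid, so
# refining from n=4 to n=8 reuses all 5 earlier function evaluations.
print(np.allclose(coarse, fine[::2]))  # True
```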


This is quite nice! Thank you for sharing it. I think I came here after the good feedback had been incorporated, so I did not find anything confusing. It was quite fun to solve.


I'm a computational physicist (PhD, Mechanical & Aerospace Engineering) and postdoc at UofM, researching high-performance algorithms for multiscale, multiphysics phenomena. I co-develop Crimson (www.crimson.software), a blood flow simulation tool for vascular research and surgical planning. Skilled in high-performance computing, optimization, and novel computational methods, I also consult extensively within and beyond my research group. Transitioning from academia to industry, I seek roles in applied research, computational engineering, or software development tackling open-ended problems.

Location: Anywhere in US

Remote: Open to remote.

Willing to Relocate: Yes

Technologies: C++, Fortran, Python, HPC stack; experienced with Docker deployments, GPU computing, and modern infrastructure strategy.

Resume/CV: https://abhila.sh

Email: absh [at] umich.edu


Related write-ups below, along with a comparison of the performance of different approaches. The method based on the BMI2 instruction set seems to be the fastest of the lot.

1. https://www.forceflow.be/2013/10/07/morton-encodingdecoding-...

2. https://www.forceflow.be/2016/01/18/libmorton-a-library-for-...

3. https://github.com/Forceflow/libmorton

I used it to accelerate nearest-neighbor detection for collision processing in particle-laden flow when modeling complicated 3D domains (for biological fluids simulation). I was using it as a locality-sensitive hash to put nearby particles in the same bucket of a hash map. Along the way I came across the ideas of BIGMIN (big minimum) and LITMAX (little maximum) for range search over Morton-encoded data, which I found to be cool.
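Where BMI2's pdep/pext isn't available, the classic portable fallback is the "magic bits" interleave covered in the write-ups above. A Python sketch of the 3D encode, using the standard 64-bit spread masks (this is an illustration, not code lifted from libmorton):

```python
def part1by2(x):
    """Spread the low 21 bits of x so each bit lands at position 3*i."""
    x &= 0x1FFFFF
    x = (x | (x << 32)) & 0x1F00000000FFFF
    x = (x | (x << 16)) & 0x1F0000FF0000FF
    x = (x | (x << 8))  & 0x100F00F00F00F00F
    x = (x | (x << 4))  & 0x10C30C30C30C30C3
    x = (x | (x << 2))  & 0x1249249249249249
    return x

def morton3d(x, y, z):
    """Interleave the bits of three 21-bit coordinates into a 63-bit key."""
    return part1by2(x) | (part1by2(y) << 1) | (part1by2(z) << 2)

print(morton3d(1, 0, 0))  # 1
print(morton3d(0, 0, 1))  # 4
print(morton3d(3, 5, 7))  # 431
```

Nearby coordinates mostly share high-order key bits, which is exactly what makes the hash-bucket trick work.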


A normal distribution is not the same as a uniform distribution. I edited a normal distribution sampler into your codepen example.

https://pastebin.com/raw/CP4yNRE9
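For reference, the standard way to turn uniform samples into normal ones is the Box–Muller transform. A sketch (this is not the code in the pastebin):

```python
import math
import random

def box_muller(rng=random.random):
    """Return two independent standard normal samples from two uniforms."""
    u1 = 1.0 - rng()  # shift [0, 1) to (0, 1] so log(u1) is finite
    u2 = rng()
    r = math.sqrt(-2.0 * math.log(u1))
    theta = 2.0 * math.pi * u2
    return r * math.cos(theta), r * math.sin(theta)
```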


I see, thanks!


Btw, the following two are equivalent (in JavaScript syntax):

    var a = Math.max(Math.random(), Math.random());
    var b = Math.sqrt(Math.random());


Curious, how come? Does it become a statistical thing?


Well, not sure whether you'd call it statistics. It's a mathematical fact from probability theory: both have the CDF F(x) = x² on [0, 1], since P(max(U₁, U₂) ≤ x) = P(U₁ ≤ x)·P(U₂ ≤ x) = x², and P(√U ≤ x) = P(U ≤ x²) = x². And it doesn't matter how often you sample: the distributions themselves are the same.

I came across that one in the other direction, when trying to work out how to make weighted rendezvous hashing work. See https://en.wikipedia.org/wiki/Rendezvous_hashing
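A quick numerical check of the equivalence, for anyone who wants to convince themselves (a sketch; variable names are my own):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# max of two uniforms vs. sqrt of one uniform: both should have CDF x^2.
max_samples = np.maximum(rng.random(n), rng.random(n))
sqrt_samples = np.sqrt(rng.random(n))

def empirical_cdf(samples, x):
    """Fraction of samples at or below x."""
    return (samples <= x).mean()

for x in (0.25, 0.5, 0.75):
    print(x, empirical_cdf(max_samples, x), empirical_cdf(sqrt_samples, x))
```

Both empirical CDFs track x² to within Monte Carlo noise.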

