
The Trader Joe's at University and MLK is quite recent - it has only been there since 2010 [1]. Before that it was an auto-parts store. Perhaps it was a grocery store back in 1955?

[1] https://www.sfgate.com/bayarea/place/article/Complex-houses-...


Because WWI and Nazi Germany drove the best scientists from Europe to the US?


It's been 80 years. No other great scientists were born in Europe in the past 80 years?


Yes, and then they too moved to the US to study or work.

Look at any top US university's STEM faculty: what percentage is native-born? It is shockingly low.


I don’t think that’s inconsistent with the article’s narrative that the US succeeded by offering strong incentives and prioritizing research.

If these brilliant people had the option to do this in their home country, they probably would. Moving to a completely different country is difficult.


To put it in terms of what the audience here can relate to:

multiplication is an overloaded operation - or, in more modern terms, it is doing multiple dispatch - depending on whether the inputs are whole numbers, integers, rationals, reals, or complex numbers.
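
A minimal sketch in Java, using plain method overloading as a stand-in (overloading is resolved at compile time, so it is only a static analogue of true multiple dispatch; the array encodings for rationals and complex numbers are purely illustrative):

    // "The same" multiply resolves to different code depending on operand types.
    static long times(long a, long b) {              // integers
        return a * b;
    }
    static long[] times(long[] p, long[] q) {        // rationals as {num, den}
        return new long[] { p[0] * q[0], p[1] * q[1] };
    }
    static double[] times(double[] x, double[] y) {  // complex as {re, im}
        return new double[] { x[0]*y[0] - x[1]*y[1], x[0]*y[1] + x[1]*y[0] };
    }

    // times(6, 7)                                 -> 42
    // times(new long[]{1,2}, new long[]{2,3})     -> {2, 6}   (1/2 * 2/3 = 2/6)
    // times(new double[]{0,1}, new double[]{0,1}) -> {-1, 0}  (i * i = -1)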


It's not about curvature; it's the moment of inertia that stops it from bending. See:

https://news.ycombinator.com/item?id=8275112
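
To spell out the mechanism (the standard beam-bending argument, not specific to that thread): resistance to drooping scales with E*I, where

    I = integral of y^2 dA    (y measured from the neutral axis)

so curling the slice sideways moves material away from the neutral axis, and I grows rapidly even though the material itself hasn't changed.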


Here's my rant of the day:

The meme that the GPL is "less free" sounds very Orwellian.

What it really means is that I cannot freely use someone else's creation without restriction, and that I have to respect the wishes of the original author about how their work should be used.

That is because the GPL was intended to be a hack on the copyright system (hence "copyleft"), but it has always respected the idea of copyright - it does not call for the abolition of copyright.

This talk of the GPL being "not free" smells of propaganda aimed at letting people (big corporations?) reuse and repackage others' valuable work for free, without any kind of compensation or acknowledgement. (TFA says the same thing.)

Feel free not to use GPL'ed code if you find it too restrictive, but please don't contribute to spreading propaganda against it (and get off my lawn while you're at it).

[For the record, I work on proprietary software myself and haven't written any FOSS code, but I use GPL software daily - Linux, gcc, emacs, etc.]


Made me remember the free software song.

    When we have enough free software
    At our call, hackers, at our call,
    We'll kick out those dirty licenses
    Ever more, hackers, ever more.


"Resonance occurs when two sources of excitation fall completely in sync, and reinforce one another endlessly."

This is incorrect, even according to the Wikipedia article it links to.

Resonance occurs when the excitation frequency matches a natural frequency of the system being excited, causing it to vibrate at larger amplitudes, even perhaps uncontrollably.
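
For a damped driven oscillator, for example, the textbook steady-state amplitude is

    A(w) = (F0/m) / sqrt( (w0^2 - w^2)^2 + (gamma*w)^2 )

which peaks as the driving frequency w approaches the natural frequency w0 - nothing about two sources being "completely in sync" and "reinforcing one another endlessly" is required.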

The author also seems to misunderstand mixing - the graph he presents for the mixed signal appears to be incorrect according to the other Wikipedia article he links to.
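
For reference, mixing means multiplying the two signals, which is what creates the sum and difference frequencies:

    sin(f1*t) * sin(f2*t) = 1/2 * [ cos((f1 - f2)*t) - cos((f1 + f2)*t) ]

whereas simply adding them superposes the originals (interference, beats) without producing any new frequencies.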


Yes, the author is describing constructive interference instead of resonance, although the fact that they said multiplication instead of addition indicates that they aren't really speaking correctly about either one.


In my experience, the method described here works very well in practice:

http://wwwf.imperial.ac.uk/~rn/distance2ellipse.pdf

(this is linked from the Stack Overflow post)
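
The method in the note is essentially Newton's method on the angle parametrization; for the 2-d case, a rough Java sketch of that kind of iteration looks like this (my own paraphrase, not a transcription of the note - it uses a naive first-quadrant starting guess and no safeguarding):

    // Closest point on the ellipse (a*cos t, b*sin t) to (px, py), via
    // Newton's method on g'(t) = 0, where g(t) is half the squared distance.
    static double[] closestPointOnEllipse(double a, double b, double px, double py) {
        double t = Math.PI / 4;                            // naive starting guess
        for (int i = 0; i < 50; i++) {
            double c = Math.cos(t), s = Math.sin(t);
            double dx = a * c - px, dy = b * s - py;
            double g1 = -a * s * dx + b * c * dy;          // g'(t)
            double g2 = a * a * s * s - a * c * dx
                      + b * b * c * c - b * s * dy;        // g''(t)
            double step = g1 / g2;
            t -= step;
            if (Math.abs(step) < 1e-12) break;
        }
        return new double[] { a * Math.cos(t), b * Math.sin(t) };
    }

Unsafeguarded Newton like this can jump to the wrong root or a local maximum for query points near the center, so a real implementation needs bracketing or damping.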

Finding the closest point on an ellipsoid is more challenging in practice, because one needs to switch to a different parametrization near a pole, and sometimes conjugate gradient is necessary to get convergence.


Although I didn't implement this myself, the plot accompanying the answer shows that this method, even if correctly implemented, is unstable in some regions inside the ellipse.


I couldn't identify all the car company logos. Specifically, in 1-based (row, column) format:

(1,3), (3,2), (3,6), (4,4), (4,6), (4,7), (5,9)

Can people please chime in with the correct identifications?

(There seems to be a logo behind the speaker, which I am not counting)



You seem to be conflating "numerical computing" with machine learning. However, numerical computing typically involves solving PDEs via e.g. finite elements or finite differences, or solving large systems of linear equations associated with such methods.

The difference between the two is that when solving PDEs, accuracy is paramount, so even using single precision is a bit of a compromise, whereas in machine learning, the trend seems to be to use half-precision or lower, sacrificing accuracy for speed.
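
As a tiny, library-independent illustration of the gap (plain Java): at a magnitude of 1e8, a float can no longer even resolve an increment of 1, while a double still can.

    float  f = 1.0e8f;
    double d = 1.0e8;
    System.out.println(f + 1.0f == f);   // true: float's spacing at 1e8 is 8
    System.out.println(d + 1.0  == d);   // false: double's spacing at 1e8 is ~1.5e-8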

For classical numerical computing, e.g. solving PDEs or linear equations, Java may not be the best choice; see, for example, the following paper:

How Java's Floating Point Hurts Everyone Everywhere:

https://www.cs.berkeley.edu/~wkahan/JAVAhurt.pdf


FWIW, we use BLAS just like MATLAB, NumPy, R, and Julia.

The speed depends on the BLAS implementation.

Nd4j has a concept of "backends". Backends allow us to sub in different BLAS implementations as well as different ways of doing operations.

We have a data buffer type that allows people to specify float or double. Those data buffers then have an allocation type that can be JavaCPP pointers, NIO byte buffers (direct/off-heap), or normal arrays.

We are also currently working on surpassing the JVM's memory limits ourselves via JavaCPP's pointers. That gives us the 64-bit addressing that people normally have access to in C++.

Every current JVM matrix lib that uses netlib-java or jblas is going to have problems with JVM communication as well. The reason is that passing around Java arrays and byte buffers is slower than how we handle it, which is passing around longs (raw pointer addresses) that are accessed via Unsafe or allocated in JNI, where we retain the pointer address directly. We expose that to our native operators.

We are solving this by writing our own C++ backend called libnd4j, which supports CUDA as well as normal OpenMP-optimized for loops for computation.

We also offer a unified interface to cuBLAS and CBLAS (which is implemented by OpenBLAS as well as MKL). FWIW, I more or less agree with you, but that doesn't mean it shouldn't exist.

JVM-based environments can bypass the JVM just like Python does now.

The fundamental problem with the JVM is that no one just took what works on other platforms and mapped the concepts one to one.

Our idea with nd4j is not only to allow people to write their own backends, but also to provide a sane default platform for numerical computing on the JVM. Things like garbage collection shouldn't be a hindrance to what is otherwise a great platform for bigger workloads (Hadoop, Spark, Kafka, ...).

In summary, we know the JVM has been bad at this till now - nd4j is our hope for fixing that.


Looking at it, I don't see anything dealing with sparse matrices or factorization (e.g., LU, QR, SVD). All the Java libraries for SVD are pretty bad. Plus, none of your examples mention double precision. Does the library support it?

I find it interesting that the NumPy comparison doesn't mention which BLAS NumPy is linked against, though it does for Nd4j. NumPy is highly dependent on a good BLAS, and the basic Netlib one isn't that great.


Those are implemented by LAPACK as part of an nd4j backend.

Yes, we have double precision - we have a default data type on the data buffer.

If you're curious how we do storage: https://github.com/deeplearning4j/nd4j/blob/master/nd4j-buff...

We have allocation types and data types.

Data types are double/float/int (int is mainly for storage)

Allocation types are the storage medium, which can be arrays, byte buffers, or what have you.

If you have a problem with the docs, I highly suggest filing an issue on our tracker: https://github.com/deeplearning4j/nd4j/issues

We actually appreciate feedback like this - thank you.

As for netlib-java, it links against any BLAS implementation you give it. It has this idea of a JNILoader, which can dynamically link against the fallback BLAS (which you mentioned) or, more typically, OpenBLAS or MKL. The problem there can actually be licensing, though. The Spark project runs into this: https://issues.apache.org/jira/browse/SPARK-4816

If we don't mention something on the site, it's probably because we haven't thought about it or haven't gotten enough feedback on it.

Unfortunately, we're still in heavy development mode.

FWIW, we have one of the most active Gitter channels out there. You can come find me there anytime if you're interested in getting involved.


Lapack doesn't implement any sparse linear algebra. If you think the landscape of "Java matrix libraries" is fragmented, when really they're all just different takes on wrapping Blas and Lapack or writing equivalent functionality in pure Java, wait until you look into sparse linear algebra libraries. There's no standard API, there are 3ish common and a dozen less common different storage formats, only one or two of these libraries have any public version control or issue tracker whatsoever, licenses are all over the map. The whole field is a software engineering disaster, and yet it's functionality you just can't get anywhere else.
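
For anyone unfamiliar, here is what one of the common formats (CSR, compressed sparse row) looks like; the Java arrays below are just an illustration:

    //       [ 5 0 0 ]
    //   A = [ 0 8 0 ]
    //       [ 3 0 6 ]
    double[] values   = { 5, 8, 3, 6 };   // nonzeros, row by row
    int[]    colIndex = { 0, 1, 0, 2 };   // column of each nonzero
    int[]    rowPtr   = { 0, 1, 2, 4 };   // row i owns values[rowPtr[i] .. rowPtr[i+1])

CSC, COO, and the various block formats carve up the same information differently, and every library picks its own favorite.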


I'm aware of the different storage formats. However, there are quite a few sparse BLAS and LAPACK implementations now.

I'm aware of the software engineering logistics that go into doing sparse right, which is why I held off.

We are mainly targeting deep learning with this, but sparse is becoming important enough for us to add it.

As for disparate standards, I managed to work past that for cuBLAS/BLAS.

I'm not going to let it stop me from doing it right. If you want to help us fix it, we are hiring ;).


> However there are quite a few sparse blas and lapack implementations now.

There's the NIST sparse BLAS, and MKL has a similar but not exactly compatible version. These never really took off in adoption (MKL is widely used of course, but I'd wager these particular functions are not). What sparse LAPACK are you talking about?

> If you want to help us fix it we are hiring ;).

We were at the same dinner a couple of weeks ago, actually. I'm enjoying where I am, using Julia and LLVM; not sure you could pay me enough to make me want to work on the JVM.


You can read Sokal's various commentaries on this topic. His point was that many philosophers tried to make grandiose statements about science and mathematics that did not make sense at all. The philosophers were making statements based on their imagination of what certain terms meant, and not on the actual meaning of those terms.

See the links here:

https://news.ycombinator.com/item?id=10715872

