Hacker News | nimrody's comments

It's beautiful, and the demo video shows how someone with a music background can make even such a limited tool sound amazing.


No. String.hashCode() was already memoized. After the first call to hashCode(), future calls just retrieve the cached value from the hash field of the String object.

This optimization is about avoiding even calling the method, because the JVM knows the returned value will be the same.


No. The string hash is stored as part of the String object. It is initialized to 0 but gets set to the real hash of the string on the first call to hashCode()

(which is why it will be recomputed over and over again if your particular string happens to hash to 0).
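A minimal sketch of that caching pattern in plain Java. The class itself is illustrative, not the real java.lang.String source, but the 0-means-uncomputed field and the 31-based polynomial match how String.hashCode() actually behaves:

```java
// Illustrative sketch of String.hashCode() memoization. The hash field
// defaults to 0, which doubles as the "not yet computed" marker.
class CachedHashString {
    private final char[] value;
    private int hash; // 0 means "not computed yet"

    CachedHashString(String s) {
        this.value = s.toCharArray();
    }

    @Override
    public int hashCode() {
        int h = hash;
        if (h == 0 && value.length > 0) {
            for (char c : value) {
                h = 31 * h + c; // same polynomial as java.lang.String
            }
            hash = h; // cached; later calls return immediately
        }
        return h;
    }
}
```

Note the corner case from above: a string whose real hash is 0 never takes the caching branch's fast path, because 0 is indistinguishable from "not yet computed".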


The biggest problem Intel had was that their process was optimized for their high-end processors. Everything else (within the company) suffered.

For Intel to succeed as a foundry it needs customers that target the same "high end, power hungry" market segment. I don't see how Qualcomm (low power) fits that niche. More likely candidates are big AI accelerators (like those Microsoft is perhaps planning).

As long as Intel depends on its high-end processors for most of its profits, it will be difficult to develop a low-power process for other, less profitable, customers.


Weren't they trying to fight for this market segment with their partnerships with Tower/UMC?


How can it tie a request arriving at a service to the additional downstream requests that request generates?

Distributed tracing needs some common token all requests share to identify all RPCs that should be associated with a specific incoming request.
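Conventionally that shared token is propagated in-band by the application itself. A toy sketch (hypothetical names, loosely modeled on the W3C `traceparent` header; not any real tracing SDK):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Conventional (non-eBPF) propagation: every incoming request carries a
// trace token, and the service copies it onto each downstream request.
class TraceContext {
    static final String HEADER = "traceparent"; // name borrowed from W3C Trace Context

    // Reuse the caller's trace id, or start a new trace at the edge.
    static Map<String, String> downstreamHeaders(Map<String, String> incoming) {
        String traceId = incoming.getOrDefault(HEADER, UUID.randomUUID().toString());
        Map<String, String> out = new HashMap<>();
        out.put(HEADER, traceId);
        return out;
    }
}
```

Every hop that forwards the header this way ends up sharing one trace id, which is exactly the plumbing an eBPF-based approach would have to replace.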


VP of DeepFlow here. Thank you for your interest in DeepFlow!

Yes, we have implemented distributed tracing using eBPF. In simple terms, we use thread-id, coroutine-id, and tcp-seq to automatically correlate all spans. Most importantly, we use eBPF to calculate a syscall-trace-id (without the need to propagate it between upstream and downstream), enabling automatic correlation of a service's ingress and egress requests. For more details, you can refer to our paper presented at SIGCOMM'23: https://dl.acm.org/doi/10.1145/3603269.3604823.
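A toy version of the ingress/egress correlation idea (my own simplification for illustration, not DeepFlow's actual algorithm): if a thread issues a downstream call after reading one request and before reading the next, the two syscalls can be joined by thread id and ordering alone, with no propagated token.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy correlator: joins each egress syscall with the most recent ingress
// syscall observed on the same thread, assuming events arrive in time order.
class SyscallCorrelator {
    record Event(int threadId, String kind, String spanId) {} // kind: "ingress" or "egress"

    // Returns {ingressSpan, egressSpan} pairs.
    static List<String[]> correlate(List<Event> events) {
        Map<Integer, String> lastIngress = new HashMap<>();
        List<String[]> links = new ArrayList<>();
        for (Event e : events) {
            if (e.kind().equals("ingress")) {
                lastIngress.put(e.threadId(), e.spanId());
            } else if (lastIngress.containsKey(e.threadId())) {
                links.add(new String[] { lastIngress.get(e.threadId()), e.spanId() });
            }
        }
        return links;
    }
}
```

The real system also uses coroutine ids and tcp-seq, which is what lifts the "same thread, sequential handling" assumption this toy version depends on.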

Of course, this kind of Zero Code distributed tracing currently has some limitations. For specific details, please see: https://deepflow.io/docs/features/distributed-tracing/auto-t...

These limitations are not entirely insurmountable. We are actively working on resolving them and continually making breakthroughs.


would it be reasonable to assume that because this is entirely network-based, it would work best with systems which really emphasize the "micro" in microservices?

how well does this work if, say, my system has a legacy monolith in addition to microservices?


I believe the current situation is like this.

The advantage of eBPF lies in *request granularity* (i.e. RPC, API, SQL, etc.) distributed tracing. To trace the internal functions of an application, instrumentation is still required for coverage. Therefore, the finer the service decomposition, the more effective eBPF's distributed tracing becomes.


It looks like it depends on applications using either threads or goroutines for concurrency:

> When collecting invocation logs through eBPF and cBPF, DeepFlow calculates information such as syscall_trace_id, thread_id, goroutine_id, cap_seq, tcp_seq based on the system call context. This allows for distributed tracing without modifying application code or injecting TraceID and SpanID. Currently, DeepFlow can achieve Zero Code distributed tracing for all cases except for cross-thread communication (through memory queues or channels) and asynchronous invocations.


Take a look at Core Feature #2 in this post - https://deepflow.io/ebpf-the-key-technology-to-observability...

It looks like it's using tcp flow tuple + tcp_seq to join things.
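A minimal illustration of that join (my own sketch): the client-side and server-side observations of the same request share a flow tuple and TCP sequence number, so the two spans can be matched after the fact without any in-band token.

```java
import java.util.HashMap;
import java.util.Map;

// Toy join of two capture points on (flow tuple, tcp_seq). Keys are
// "srcIp:srcPort-dstIp:dstPort@tcpSeq"; values are span ids. A real
// system must also normalize direction and handle sequence wraparound.
class FlowJoin {
    static Map<String, String[]> joinBySeq(Map<String, String> clientSide,
                                           Map<String, String> serverSide) {
        Map<String, String[]> joined = new HashMap<>();
        for (Map.Entry<String, String> e : clientSide.entrySet()) {
            String serverSpan = serverSide.get(e.getKey());
            if (serverSpan != null) {
                joined.put(e.getKey(), new String[] { e.getValue(), serverSpan });
            }
        }
        return joined;
    }
}
```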


Is this similar to MySQL's InnoDB? (which is also MVCC and does not require vacuum)


Unrelated: anything similar for increasing photo resolution? I frequently encounter cases where users upload low-resolution images (transferred via WhatsApp or similar) and need to increase the resolution to get something suitable for printing.


Currently there's nothing automated that produces reliably good results across a variety of content.

I've tried:
https://bigjpg.com
https://imglarger.com
https://vanceai.com/image-enlarger/
https://www.upscale.media
https://photoaid.com/en/tools/ai-image-enlarger
https://waifu2x.org <- specifically for animated stuff

Given the similarity of the results of some of these, I'd guess they're using the same model with some input parameter tweaks.

Waifu2x is probably the closest to "reliable" if you're looking to enlarge some sort of animated content.


I was checking out one of the suggestions in another comment for background removal, and they seem to have this functionality to increase resolution: https://clipdrop.co/apis/docs/super-resolution



I recently came across https://imglarger.com/ which is an AI image enlarger. I haven't used it yet so can't give a review, but there is a free plan you can try, and even their paid tiers seem reasonably priced.


My take: have a few database instances (machines) each holding the data for a group of customers. With postgres you can even put different customers on the same database instances but in different schemas.

This way you get all the benefits of the relational model (you can use foreign keys, transactions consisting of multiple tables, etc.) and the performance benefits of additional machines that are not just read-replicas.

Centralized shared tables can be in a separate database which can also hold the mapping between customer-ids and database instances.

The only drawback is that management is more difficult -- backup, migrations, etc. Specifically, you need to handle the case where some customers have migrated their database and others have not yet.
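A sketch of the routing layer this setup implies (hypothetical names; assumes the central database's customer-to-instance mapping has been loaded into a directory):

```java
import java.util.HashMap;
import java.util.Map;

// Toy shard router: the central directory maps each customer to a
// database instance (connection URL) and a schema on that instance.
class ShardRouter {
    record Location(String jdbcUrl, String schema) {}

    private final Map<String, Location> directory = new HashMap<>();

    void register(String customerId, String jdbcUrl, String schema) {
        directory.put(customerId, new Location(jdbcUrl, schema));
    }

    // All queries for one customer go to one instance, so foreign keys
    // and multi-table transactions still work within that tenant.
    Location locate(String customerId) {
        Location loc = directory.get(customerId);
        if (loc == null) {
            throw new IllegalArgumentException("unknown customer: " + customerId);
        }
        return loc;
    }
}
```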


This one is about compression and reliable communication. While interesting and well-written, I don't think it matches the original request for "papers on AI, ML, ...".


> "I don't think it matches the original request for "papers on AI, ML, ..."."

I feel like most who understand this paper and who also understand AI and ML will disagree.


I had a good quick skim. It has a language model (ok, trigrams!) and the cross-entropy formula with the reasoning behind it. On my reading list for sure! We did some information theory at uni, but I don't recall all of this stuff; maybe I just forgot.
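In case it's useful to other skimmers, the quantity in question (my summary from memory, not a quotation from the paper):

```latex
% Cross-entropy of a model q measured against the true source distribution p:
H(p, q) = -\sum_{x} p(x) \log q(x)

% For a trigram language model, the prediction conditions on two words of context:
q(w_i \mid w_{i-2}, w_{i-1})
```

The better the model q approximates p, the lower the cross-entropy, which is what ties language modeling directly to compression.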


A research paper on AI doesn't necessarily mean a meme title and chasing dubious "SOTA" status on some benchmark. I would say foundational work is more worth reading.


Can you explain what's non-deterministic about swagger/open-api?

