"... A comprehensive evaluation was performed on a suite of models, including Qwen3-Omni-30B-A3B-
Instruct, Qwen3-Omni-30B-A3B-Thinking, and two in-house developed variants, designated Qwen3-
Omni-Flash-Instruct and Qwen3-Omni-Flash-Thinking. These “Flash” models were designed to improve
both computational efficiency and performance efficacy, integrating new functionalities, notably the
support for various dialects. ..."
Another thing we are trying to understand is whether the 2D element adds value to the simulation. A simpler option would be a pure text/chat interface. Still, the hypothesis here is that it is easier to comprehend what's going on in the environment with an actual 2D world and characters, and it might be more immersive compared to just a text interface.
Curious to see what other things you will simulate in the future!
Shameless plug: recently we've built a demo that allows you to search for objects in San Francisco using natural language. You can look for things like Tesla cars, dry patches, boats, and more. Link: https://demo.bluesight.ai/
We tried using Clay embeddings, but we quickly found that they perform poorly for similarity search compared to embeddings produced by CLIP fine-tuned on OSM captions (SkyScript).
Howdy! Clay makers here. Can you share more? Did you try Clay v1 or v0.2?
What image size and embeddings, from what instrument?
We did try to relate OSM tags to Clay embeddings, but it didn't scale well. We have not given up, but we are reconsidering (https://github.com/Clay-foundation/earth-text). I think SatClip plus OSM is a better approach, or LLM embeddings mapped to Clay embeddings...
Hey hey! We tried Clay v1 with embedding size 768, following your tutorials. We then split NAIP SF into chips and indexed them. Afterwards, we performed image-to-image similarity search like in your explorer.
We tried to search for bridges, beaches, tennis courts, etc. It worked, but not well: the top of the ranking was filled with unrelated objects. We found that the similarity scores are bunched too tightly together (across ~200k tiles, values fall between 0.91 and 0.92, differing only in the fourth decimal place), so the encoder made very little distinction between objects.
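For context, a minimal sketch of the kind of image-to-image search we ran (the embeddings file name and shapes are hypothetical; we assume chip embeddings were precomputed with the Clay encoder and saved as a NumPy array):

```python
import numpy as np

# Hypothetical precomputed Clay v1 embeddings for the NAIP SF chips, shape (N, 768)
emb = np.load("naip_sf_clay_v1_embeddings.npy")
emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)  # L2-normalize once up front

def search(query_idx: int, k: int = 10):
    """Rank all chips by cosine similarity to a single query chip."""
    sims = emb @ emb[query_idx]          # cosine similarity after normalization
    top = np.argsort(-sims)[1 : k + 1]   # skip the query chip itself
    return top, sims[top]

# The spread of scores is a quick proxy for how discriminative the embeddings are;
# in our runs almost everything landed in a narrow ~0.91-0.92 band.
idx, scores = search(query_idx=0)
print(scores.min(), scores.max())
```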
I believe that Clay can be used with additional fine-tuning for classification and segmentation, but standalone embeddings are pretty poor.
Check this: https://github.com/wangzhecheng/SkyScript. It is a dataset of OSM tags and satellite images. CLIP fine-tuned on that gives good embeddings for text-to-image search as well as image-to-image.
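A rough sketch of the text-to-image search this enables, using the open_clip library; the checkpoint path is a placeholder for SkyScript-style fine-tuned weights, ViT-B-32 is just an example architecture, and the chip file names are hypothetical:

```python
import torch
import open_clip
from PIL import Image

# Base CLIP architecture; loading a SkyScript-style fine-tuned checkpoint is an assumption here
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="path/to/skyscript_finetuned.pt"
)
tokenizer = open_clip.get_tokenizer("ViT-B-32")
model.eval()

@torch.no_grad()
def embed_text(query: str) -> torch.Tensor:
    feats = model.encode_text(tokenizer([query]))
    return feats / feats.norm(dim=-1, keepdim=True)

@torch.no_grad()
def embed_chip(path: str) -> torch.Tensor:
    img = preprocess(Image.open(path)).unsqueeze(0)
    feats = model.encode_image(img)
    return feats / feats.norm(dim=-1, keepdim=True)

# Rank chips by cosine similarity to a free-text query
chip_paths = ["chip_0001.png", "chip_0002.png"]            # hypothetical chip files
chips = torch.cat([embed_chip(p) for p in chip_paths])
scores = (embed_text("tennis court") @ chips.T).squeeze(0)
print(sorted(zip(scores.tolist(), chip_paths), reverse=True))
```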
Thanks for sharing the Brooklyn text demo. Hadn't seen it!
Captioning images with a VLM would definitely help as an additional conditioning feature. Maybe it would even be enough to use only the embeddings of the captions for search!
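A sketch of that caption-only variant, assuming the chip captions were already generated by some VLM (the captions below are made up) and using sentence-transformers with an example embedding model:

```python
from sentence_transformers import SentenceTransformer, util

# Example sentence-embedding model; any reasonable text encoder would do
encoder = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical VLM-generated captions, one per chip
captions = {
    "chip_0001.png": "a tennis court next to a parking lot",
    "chip_0002.png": "a sandy beach with picnic tables",
}
cap_emb = encoder.encode(
    list(captions.values()), convert_to_tensor=True, normalize_embeddings=True
)

def search(query: str, k: int = 5):
    """Rank chips by cosine similarity between the query and their captions."""
    q = encoder.encode([query], convert_to_tensor=True, normalize_embeddings=True)
    scores = util.cos_sim(q, cap_emb)[0]
    ranked = scores.argsort(descending=True)[:k]
    return [(list(captions)[int(i)], float(scores[i])) for i in ranked]

print(search("tennis courts"))
```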
We chose aerial/satellite imagery instead of street view because we plan to apply the same technology where street view is not available, e.g. crop fields or forests. Another reason is that we plan to monitor areas that change frequently, and street view data isn't refreshed often enough to keep up. But the idea is great! Although your query "palace of fine arts" is not extremely exciting, because it's already searchable via Google Maps :D
"USF" by itself doesn't work, "USF word" pointed me where needed xD
"beach" and "picnic tables" indeed doesn't work in object mode, but works great in "big" mode, probably because they needs some context around themselves
"lots of people" didn't work, "a crowd of people" seems to work. Interesting, that almost the same (semantically) queries produce very different results!
I really like using pandoc as a build system [1] for my personal website to convert .md to .html. I can use templates, automatically generate a table of contents, and run some Lua scripts to get the desired result, such as clickable headers.
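A minimal sketch of such a build step in Python; the directory layout, template, and Lua filter names are placeholders, but the pandoc flags themselves (--standalone, --template, --toc, --lua-filter, -o) are standard:

```python
import subprocess
from pathlib import Path

# Convert every Markdown page to HTML with a template, a table of contents,
# and a Lua filter (e.g. one that turns headers into clickable anchors).
Path("site").mkdir(exist_ok=True)
for md in Path("content").glob("*.md"):
    out = Path("site") / md.with_suffix(".html").name
    subprocess.run(
        [
            "pandoc", str(md),
            "--standalone",
            "--template", "templates/page.html",           # hypothetical template
            "--toc",
            "--lua-filter", "filters/anchor-headers.lua",  # hypothetical filter
            "-o", str(out),
        ],
        check=True,
    )
```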
You are correct: training solely in fp16/bf16 can lead to imprecise weight updates or even gradients flushing to zero. Because of that, mixed precision is used. In mixed-precision training, we keep a copy of the weights in fp32 (the master model), and the training loop looks like this:
compute the output with the fp16 model, then the loss
-> back-propagate the gradients in half precision
-> copy the gradients to fp32 precision
-> do the update on the master model (in fp32 precision)
-> copy the master model back into the fp16 model.
We also do loss scaling, which means multiplying the output of the loss function by some scalar before backprop (necessary in fp16 but not required in bf16).
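In PyTorch, AMP automates an equivalent scheme: the weights stay in fp32, eligible ops run in fp16 under autocast, and GradScaler handles the loss scaling. A minimal sketch with a stand-in model and random data:

```python
import torch

model = torch.nn.Linear(1024, 10).cuda()                    # stand-in for a real model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)  # params and optimizer state stay in fp32
scaler = torch.cuda.amp.GradScaler()                        # loss scaling (needed for fp16)

def train_step(x, y):
    optimizer.zero_grad(set_to_none=True)
    # Forward pass and loss computed in half precision
    with torch.cuda.amp.autocast(dtype=torch.float16):
        loss = torch.nn.functional.cross_entropy(model(x), y)
    # Scale the loss, backprop, then unscale and apply the update to the fp32 weights
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
    return loss.item()

# Example step with random data
x = torch.randn(32, 1024, device="cuda")
y = torch.randint(0, 10, (32,), device="cuda")
print(train_step(x, y))
```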
Mixed precision is the default method for pretraining and full fine-tuning right now. It is especially effective for transformers, because their memory bottleneck is in activations (the outputs of intermediate layers stored for backprop), and running the forward pass in fp16/bf16 cuts that VRAM by almost half (and speeds up the forward pass as well).
This may sound stupid, but from my perspective renting random VMs on vast.ai is safe in general, and might even be safer than using traditional cloud providers. Consider this: on your VM a new image starts several times a day, each time with a fresh volume. It downloads tens of GBs of data and weights for training. Once training is done, everything gets cleaned up and the process starts again for a new tenant. This constant churn makes it quite difficult to track and extract any meaningful data from it.