Hacker News | theCrowing's comments

The problem is that dang looks the other way as soon as someone uses a thesaurus instead of spewing misinformation and hate in plain words.


You can't have no opinion.


I guess I mean that everybody should and does care.


I wish we lived in a world where it was possible not to care about what billionaires like Musk did, but capitalism has placed civilization at their feet to dispose of as they will, and Musk in particular controls a primary engine of political communication, propaganda and discourse shaping for the Western world, so the rest of us should care at least a little.


I don't get it. Do you want to be the biggest but only biomass left on the planet, just to own the libs?


That's not your decision to make one way or the other.


What? If you answer, answer the question as worded and meant, not your deranged interpretation of it. People are voluntarily having fewer children in almost every Western society.


I'm not "all humans", I'm one. I like being here. You're asking me to decide for others if they should also be here. It's not for me to say.

Also, the future isn't necessarily all humans and nothing else.


[flagged]


> do you want to be the biggest but only biomass left on the planet just to own the libs?

What does this mean? You're asking about the human race right? Or are you literally asking the parent if they want to grow into the fattest sole survivor on the planet? I'm confused by what you're even asking.


I know what they meant but your interpretation made me chuckle


Good for them. Not that much for ecological reasons, though. The point of your parent was, I believe, that it's not your place to decide for others.


exactly


[flagged]


Apparently, three-quarters of Earth's food supply draws on just 12 crops and five livestock species. Is that a good thing? A bad thing? Or, as I believe, not really either good or bad. That situation has been sustained for quite a while already, right?


> three-quarters of Earth's food supply draws on just 12 crops and five livestock species.

All of the biodiversity on the planet is being reduced to 12 crops and five livestock species, at least by weight.


Why do we not choose to broaden the range of species that we cultivate? Are most just another species of rat or weed? Maybe more varieties of carrot are just better. Or just more carrots in general for that matter.


If it's so wonderful that Western nations are having fewer kids, why are uneducated people from elsewhere being brought in in such numbers?


With the noble aim of saving the planet?


I bet that's the case for more than you would expect.


lol.


> are voluntarily having fewer children

It has been definitively established that this is entirely voluntary? Do you have a source you can cite?


source is their mouth


The 2012 video "Everything is a Remix" rings even truer today than it did 12 years ago.

https://www.youtube.com/watch?v=nJPERZDfyWc


No.

Dunno but the AWS cost for your knowledge base would be comical.

Everyone with enough VRAM and data.


I know OpenAI describes the current costs of operating ChatGPT as "eye watering", but I wonder how quickly these costs will start to come down. Improvements seem to be accelerating, and those improvements should be in efficiency as well as performance.


It seems like you have no real knowledge about what you're talking about. If the last few years told us anything, especially with the bad performance of GPT-J and BLOOM, it's that we will see these models ballooning for at least 2-3 more years. On the hardware side, there is also nothing that shows the costs will go down.


I admittedly have little knowledge in this area. I just know that the models have reached an entirely new level of performance, based on what I see with ChatGPT and limited reading in recent weeks.

I’m trying to get a read on how things will play out over the next year or two. I’m bracing for big change and trying to get my head around the enabling and gating factors.

If you can point me to any articles/resources that would help with that effort, I’d appreciate it.


If you have no work experience as an Eng Manager/Consultant, is it really fair to falsely advertise?


For consultancy, I've worked as a freelancer on a few projects outside of my day job, most recently for the local government here on a recent expo.

For Engineering Management, I'm currently doing this within my current Senior Dev role. I'm basically responsible for developing our processes and software stack and advising on hiring and selection of tools/frameworks for our clients (I work in an agency.)

It's just that our main client does not have a specific Eng Manager title, so the role isn't official in my job title.


When you get gaslit by TikTok influencers and write a blog post about it.


That's true. A lot of people using it are not influencers though, and while I don't expect there to be a real transition for this use case, it's clear that for newer generations things are going to be less monolithic in terms of the job market.

Also in terms of social media.

Wouldn't you agree?


I don't really feel that it's monolithic right now. Your perspective might be skewed because of the field you work in.


Also true.

Is there anything you see that is changing or that should though?

Asking since I imagine you may have your own perspective on this.


Most of us were rooting for the demise of Twitter before Musk took over. Having Musk fuck it up as hard as he does now is just the cherry on top.


Him losing billions makes it just so much better, as someone who has really never cared for his antics and the cult surrounding him.


You can work with embeddings.
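A minimal sketch of what that looks like: nearest-neighbour lookup over precomputed embedding vectors. The document list and vectors below are random stand-ins for illustration; in practice the vectors would come from whatever embedding model you choose.

```python
import numpy as np

# Hypothetical sketch: retrieve the closest document by cosine similarity.
# The vectors are random stand-ins for real model-produced embeddings.
docs = ["refund policy", "shipping times", "account deletion"]
doc_vecs = np.random.default_rng(0).normal(size=(len(docs), 384))

def cosine_sim(query, matrix):
    # Cosine similarity between one query vector and a matrix of doc vectors.
    return (matrix @ query) / (np.linalg.norm(matrix, axis=1) * np.linalg.norm(query))

def nearest(query_vec):
    # Index of the highest-similarity document.
    return docs[int(np.argmax(cosine_sim(query_vec, doc_vecs)))]

# Querying with a document's own vector should return that document.
print(nearest(doc_vecs[1]))  # -> "shipping times"
```

Since the index is just vectors plus a similarity search, it runs anywhere numpy runs; no GPU needed for retrieval itself.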


The best is probably Tortoise, but you have to run it yourself: https://github.com/neonbjb/tortoise-tts

Here are some demos: https://nonint.com/static/tortoise_v2_examples.html


@yacineMTB (Twitter) used Tortoise to DIY his own podcast replicating Joe Rogan (with the script by ChatGPT), and the results are amazing; worth a quick listen to get the gist [1]:

   I wrote a script that 
   - pulled @_akhaliq's last 7 days of tweets
   - fished out the arxiv links
   - downloaded raw paper .tex
   - parsed out intros & conclusions
   - automated a podcast dialogue about the papers w/ web automation & GPT
   - generated a podcast
[1] https://scribepod.substack.com/p/scribepod-1#details
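One of the steps above, fishing the arxiv links out of tweet text, can be sketched with a regex. The tweet texts and URL pattern here are made-up examples, not the author's actual script:

```python
import re

# Hypothetical sketch of the "fished out the arxiv links" step:
# pull arxiv URLs out of a batch of tweet texts with a regex.
ARXIV_RE = re.compile(r"https?://arxiv\.org/(?:abs|pdf)/\d{4}\.\d{4,5}")

tweets = [
    "New paper! https://arxiv.org/abs/2212.09748 looks great",
    "no links here",
    "pdf: https://arxiv.org/pdf/2203.15556",
]

links = [m.group(0) for t in tweets for m in ARXIV_RE.finditer(t)]
print(links)
# -> ['https://arxiv.org/abs/2212.09748', 'https://arxiv.org/pdf/2203.15556']
```

From there each link's paper source can be fetched and the intro/conclusion sections fed to GPT, as the list describes.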


As someone who works in generative modeling (vision), there's something here that sparks suspicion: they note that these are hand-picked results. Has anyone used this who can report the actual quality of results? I bring this up because anyone who has used Stable Diffusion or DALL-E will know why. Hand-picked results are good, but median results matter a lot too.


I'm the author of FakeYou.com and can speak to Tortoise and the TTS field.

Tortoise produces quality results with limited training data, but is an extremely slow model that is not suitable for real time use cases. You can't build an app with it. It's good for creatives making one-off deepfake YouTube videos, and that's about it.

You're looking for Tacotron 2 or one of its offshoots that add multi-speaker support, TorchMoji, etc. You'll want to pair it with the HiFi-GAN vocoder to get end-to-end text-to-speech. (Avoid Griffin-Lim and WaveGlow.)

Your pipeline looks like this at a high level:

  Input text => Text pre-processing => Synthesizer => Vocoder => [ Optional transcoding ] => Output audio
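A hedged sketch of how those stages compose. The functions below are placeholder stubs, not a real Tacotron 2 or HiFi-GAN API; each would wrap the corresponding model's inference call, and the 80 mel bins and 256-sample hop size are just common assumed defaults:

```python
# Hypothetical pipeline sketch; every stage is a stub standing in
# for a real model's inference call.

def preprocess(text: str) -> list[str]:
    # Normalize and tokenize. Real pipelines also expand numbers and
    # abbreviations and may map graphemes to phonemes.
    return text.lower().split()

def synthesize(tokens: list[str]) -> list[list[float]]:
    # Stand-in for the synthesizer (e.g. Tacotron 2): tokens -> mel frames.
    # One frame per token here; 80 mel bins is a common configuration.
    return [[0.0] * 80 for _ in tokens]

def vocode(mel: list[list[float]]) -> list[float]:
    # Stand-in for the vocoder (e.g. HiFi-GAN): mel frames -> waveform,
    # assuming ~256 audio samples per mel frame (the hop size).
    return [0.0] * (len(mel) * 256)

def tts(text: str) -> list[float]:
    # Compose the stages exactly as in the pipeline above
    # (transcoding omitted).
    return vocode(synthesize(preprocess(text)))

audio = tts("Hello world")
print(len(audio))  # 2 tokens -> 2 mel frames -> 512 samples
```

The point of the sketch is the composition: swapping TalkNet for the synthesizer or MelGAN for the vocoder only changes one stage.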
TalkNet is also popular when a secondary reference pitch signal is supplied. You can mimic singing and emotion pretty easily.

These three models are faster than real time, and there's a lot of information available and a big community built up around them. FakeYou's Discord has a bunch of people that can show you how to train these models, and there are other Discord communities that offer the same assistance.

If you want to train your own voice using your own collected sample data, you can experiment with it on Google Colab and on FakeYou, then reuse the same model file by hosting it in a cloud GPU instance. We can also do the hosting for you if that's not your desire or forte.

In any case, these models are solid choices for building consumer apps. As long as you have a GPU, you're good to go. If you're not interested in building or maintaining your own, you can use our API! I'd be happy to help.


> "Tortoise produces quality results with limited training data, but is an extremely slow model that is not suitable for real time use cases"

What would you run if you had a large set of training data (and time and money) but your focus was on quality? Still Tortoise?



Thanks for this, I actually appreciate the honesty. It is always difficult for me to parse the actual quality of things I don't have intimate experience with.

Can I ask another question? If I wanted to hack around with STT and TTS (inference only) on a Pi (4B+), is there anything that is approximately appropriate and can be done on-device? (I could process on my main machine, but I'd love to do it on the Pi, even with a decent delay.)


For STT, take a look at Wenet: https://github.com/wenet-e2e/wenet

They provide support for running on a Raspberry Pi, and it runs in real time. I have tried the desktop version and the quality is good enough when the audio is clean.


No problem!

There are other ML TTS models that are both lightweight and can run on a CPU. Check out Glow-TTS for something that will probably work.

Also swap out the HiFi-GAN vocoder for MelGAN or MB-MelGAN, as these will better support your use case.

I ran this exact setup on cheap Digital Ocean droplets (without GPUs) and it ran faster than real time. It should work on a Pi.

Unfortunately I'm not aware of STT models that operate under these same hardware constraints, but you should be good to go for TTS. With a little bit of poking around, I'm sure you can find a solution for STT too.


From the link:

> Tortoise is a bit tongue in cheek: this model is insanely slow. It leverages both an autoregressive decoder and a diffusion decoder; both known for their low sampling rates. On a NVidia Tesla K80, expect to generate a medium sized sentence every 2 minutes.

I suspect that for a real(-ish) time TTS system, something else is needed. OTOH if you want to record some voice acting for a game or other multimedia product, it still may be more cost-effective than recording a bunch of live humans.

(K80 = NVidia Tesla K80 GPU, $800-900 for a 24GB version right now.)


I see 24GB Tesla K80s on eBay for $90... what am I missing?


A K80 is extremely old by now, so I'd expect this to be maybe an order of magnitude faster.


Would it still require a 3080 to run adequately, that is, with 1-2 seconds of delay? I've no idea what consumer-grade hardware works well for ML loads.


I haven't tried it, but the K80 is about 6 years old / 5 generations back. There have been massive leaps since then.


6 years is nowadays more like 3 generations, and it's definitely not an order of magnitude (10x) of difference.


Kepler, Maxwell, Pascal, Volta, Turing, Ampere, Lovelace, Hopper. It's 6 generations old when you include the microarchitectures. It would be about a 10x improvement.


Oh, if it's Kepler, absolutely. I thought 6 years, thus Ampere.


Does anybody run Tortoise on cloud serverless GPUs? If yes, can you please recommend a setup?


English only, from a cursory glance.


NLP is going to have this problem for a long time. Obviously most original research is done by Americans in English. There are really only valid training sets for languages that NLP researchers or engineers speak.


Chinese is well-represented among ML researchers.


Because 14% of the world's population speaks Mandarin Chinese. But what about Yoruba, Burmese or even Hakka Chinese?


Speech will have this problem, but text-based NLP can be translated, and we have pretty good translators.


ChatGPT works in Russian, for example; I don't know about other languages.


I suspect it might be translated


It also works in German and I'm relatively certain it's not translated outside the model itself. I've asked it to generate puns incorporating certain words and while the English results were subjectively somewhat better, the German ones were still "fine" and definitely wouldn't work in English.


ChatGPT is so crazy it even works in fluent Thai. That's better than any machine translation I've ever tried so far. It even takes cultural differences into account. For example, when you ask it to translate "I love you" into Thai, it mentions that normally you would not say this in the same circumstances as you would to your lover in the West, correctly explaining in what circumstances people would really use it, and what to use instead. That's revolutionary for minority languages without a lot of learning material available online.

Also, I am a native Swiss German speaker. For those who don't know: Swiss German is a dialect continuum, very, very different from standard German, to the extent that most untrained German speakers don't understand us. There is no orthography (writing rules), no grammar rules, etc. It's a mostly undocumented/unofficial writing system, only spoken, and the varieties are vast. And guess what: I can write in a completely random, informal Swiss German dialect and ChatGPT understands everything, but answers in standard German.


Unless it was trolling, I saw evidence it was trained on Russian texts; how else could it do convincing style transfer from Russian poets, for example?

But as always, only successful prompts are shared, so I don't know how hit-or-miss it is.


Ooh, that is a nice one!


Examples 4 and 5 sound like George Clooney for some reason.

