Hey guys, I just launched my add-in, LegalLint. LegalLint is a Word add-in designed to make life easier for legal professionals and anyone working with large Word documents by streamlining formatting tasks. Please check it out. It's free for now; at some point I'll add pricing, but it will stay affordable, most likely as cheap as a buy-me-a-coffee per month hehe
Hey yodon, valid feedback. And yes, we are building a very general cloud-based spreadsheet that supports large datasets and extensible Python functions. As a young company we need to focus, though, and we chose pricing as our first domain because my co-founder has extensive domain knowledge in that area. Our vision is to open it up, so I can understand that our website currently isn't sexy enough, but we are working on it :) In the end it's the product and the tech that count for us. I believe we are building something nice here and hope to get as much feedback as possible from people using it!
Priceloop.ai | Full-Stack Scala Software Engineer | Mid-Senior | Competitive compensation with ESOP | Berlin, Germany (fully remote also welcome)
At Priceloop, we are working on a completely novel way for businesses to run their pricing. Like, really novel, not just another AI model for pricing. We strive to redefine this software category.
That's why we're looking for full-stack Scala software engineers to build our no-code pricing platform. The team is small but experienced. We also love open source; in fact, some of us are maintainers of and contributors to well-known projects such as outwatch, a functional and reactive web-frontend library for Scala.js. Once we've developed the core platform, we are planning to open-source it as well. Our tech stack is Postgres, Docker, and Kubernetes, and we deploy on AWS.
We've released the Forward version of our TransformerTTS implementation, a text-to-speech Transformer in TensorFlow 2/Keras. The model is now more robust, faster, and controllable.
Your work looks good, and the code is very clean. Well done! What hardware did you use for training? Do you see any potential improvements/applications for multi-speaker tasks?
Also, what does a publishing house's in-house ML team generally do? Are you building the implementations for peer review replication?
I cannot speak for the authors, but my impression is that the vocoder (WaveRNN) part of current TTS systems takes far more compute than the part this research addresses, so it may not help all that much.
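To put rough numbers on that (back-of-envelope only, assuming LJSpeech-style settings of 22,050 Hz audio and a hop length of 275 samples, i.e. ~80 mel frames per second):

```python
# Back-of-envelope comparison; the settings below are assumptions,
# not numbers from the paper or any particular repo.
sample_rate = 22050     # WaveRNN runs one autoregressive step per audio sample
hop_length = 275        # the acoustic model emits one mel frame per hop
seconds = 5.0           # length of the utterance

vocoder_steps = sample_rate * seconds                  # ~110,000 sequential steps
acoustic_frames = seconds * sample_rate / hop_length   # ~400 frames

print(f'{vocoder_steps:,.0f} vocoder steps vs. {acoustic_frames:,.0f} '
      f'acoustic frames (~{vocoder_steps / acoustic_frames:.0f}x)')
```

So even a big speedup on the text-to-spectrogram side barely moves end-to-end synthesis time while the vocoder stays autoregressive.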
We've just open-sourced our implementation of TransformerTTS, a text-to-speech Transformer based on the Microsoft paper "Neural Speech Synthesis with Transformer Network". It's written in TensorFlow 2 and uses all of its cool new features.
The best thing about our implementation, though, is that you can easily use the WaveRNN vocoder to generate human-level synthesis. We also provide samples and a Colab notebook, so make sure to check it out, and please star ⭐️ the repo and share it! We're already working on the Forward version of TransformerTTS and will release it soon as well.
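For anyone new to two-stage TTS, here is a minimal conceptual sketch of the pipeline; the function names, shapes, and bodies are placeholders of mine, not the repo's actual API (see the README and Colab for that):

```python
# Conceptual two-stage TTS pipeline; illustrative placeholders only.
import numpy as np

def text_to_mel(text: str) -> np.ndarray:
    """Stage 1 (the TransformerTTS part): predict a mel spectrogram from text.
    Placeholder body: returns a random (frames x 80) mel."""
    n_frames = 20 * len(text.split())  # stand-in for the predicted length
    return np.random.rand(n_frames, 80).astype(np.float32)

def mel_to_wav(mel: np.ndarray, hop_length: int = 275) -> np.ndarray:
    """Stage 2 (the WaveRNN vocoder part): generate one audio sample per
    autoregressive step, conditioned on the mel. Placeholder body: silence."""
    return np.zeros(mel.shape[0] * hop_length, dtype=np.float32)

mel = text_to_mel('Scientists say they have discovered a new particle.')
wav = mel_to_wav(mel)   # assuming 22,050 Hz LJSpeech-style audio
print(mel.shape, wav.shape)
```

The mel spectrogram is the interface between the two stages, which is what makes it easy to train and swap the acoustic model and the vocoder independently.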
We've just open-sourced our first text-to-speech project! It's also our first public PyTorch project. Inspired by Microsoft's FastSpeech, we modified Tacotron (a fork of fatchord's WaveRNN repo) to generate speech in a single forward pass without any attention. Hence, we call the model ⏩ ForwardTacotron.
The model has several advantages:
* Robustness: No repeats or failed attention modes on complex sentences
* Speed: Generating a spectrogram takes about 0.04s on an RTX 2080
* Controllability: You can control the speed of the speech synthesis (see the sketch below)
* Efficiency: No attention, so memory grows linearly with text length
We also provide a Colab notebook to try out our pre-trained model (trained for 100k steps on LJSpeech), along with some samples. Check it out!
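To make the controllability point concrete: in a duration-based ("forward") model, speed control falls out of scaling the predicted phoneme durations before length regulation. A minimal PyTorch sketch of that mechanism (my own illustration, not code from the repo):

```python
# Illustration of FastSpeech-style speed control; not ForwardTacotron's code.
import torch

def scale_durations(durations: torch.Tensor, alpha: float) -> torch.Tensor:
    # alpha > 1 -> fewer frames per phoneme -> faster speech; alpha < 1 -> slower
    return torch.clamp((durations.float() / alpha).round().long(), min=1)

def length_regulate(encodings: torch.Tensor, durations: torch.Tensor) -> torch.Tensor:
    # Repeat each phoneme encoding by its duration to get the frame-level
    # sequence the decoder consumes. encodings: (num_phonemes, dim)
    return torch.repeat_interleave(encodings, durations, dim=0)

enc = torch.randn(4, 8)            # 4 phoneme encodings
dur = torch.tensor([3, 5, 2, 4])   # predicted frames per phoneme
normal = length_regulate(enc, dur)
fast = length_regulate(enc, scale_durations(dur, alpha=1.25))
print(normal.shape, fast.shape)    # fewer frames -> shorter, faster audio
```

Because there is no attention to destabilize, changing the speed factor doesn't risk the failure modes mentioned above.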
Hey, my teammate Dr. Christian Schäfer and I just published a blog article about our library Headliner, where we discuss why we decided to create it and how we use it internally at Axel Springer. The coolest part is that we integrated BertSum, a SOTA summarizer based on fine-tuning pre-trained BERT language models, into our library. We also talk a bit about TensorFlow 2.x and why we used it. Check it out if you're interested. We'd love to see people try out our library for their text summarization problems.
We've just open-sourced our library Headliner, a sequence modeling library that eases the training and, in particular, the deployment of custom sequence models. It was originally built for our own research at Axel Springer AI to generate headlines from Welt news articles, hence the name Headliner. Although the library was created internally to generate headlines, you can also use it for other tasks like machine translation, text summarization, and many more.
We built this library with the following goals in mind. Firstly, it offers a simple API for training and deployment of models (only a few lines of code). Secondly, it uses TensorFlow 2.0 with all its new features. Thirdly, it has modular classes for text preprocessing, modeling, and evaluation, and is easily extensible with new models. Finally, it works well on large text data.
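To make the "few lines of code" claim concrete, training a model looks roughly like this. This is a sketch from memory of the repo's README, so treat the import paths and class names as assumptions and check the repo for the real API:

```python
# Rough Headliner workflow; import paths and class names are assumptions,
# see the repo README for the exact API.
from headliner.trainer import Trainer
from headliner.model.transformer_summarizer import TransformerSummarizer

# training data: (input text, target summary) pairs
data = [('You are the stars, earth and sky for me!', 'I love you.'),
        ('You are great, but I have other plans.', 'I like you.')]

summarizer = TransformerSummarizer(embedding_size=64, max_prediction_len=20)
trainer = Trainer(batch_size=2, steps_per_epoch=100)
trainer.train(summarizer, data, num_epochs=2)

print(summarizer.predict('You are the stars, earth and sky for me!'))
```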
Headliner is our first open-sourced NLP project, and we're happy about that. Please try out our library, star it on GitHub, and spread the word! We'd love to get feedback.