I would say if you really wanted to make this easier for streamers to use, you could use Helm to package up a Kubernetes configuration (a bit of work) and then put it into something like Google Cloud Launcher so you could just point and click to set this up.
I agree, one-click deploy would be the next step. I hadn't seen Helm; I was debating between k8s and Rancher.
If possible I'd like to avoid an opinionated hosting service like Google Cloud Launcher or AWS ECS, but I recognize that it's probably necessary. The only open-source one-click deploy I've seen is a church service[1], but the author of that project accomplished it with a separate app that provisions the necessary DO droplets via API and presents the credentials to the user.
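For context, the provisioning part of an app like that is mostly a couple of calls to the DigitalOcean v2 API. Here's a minimal sketch in Python; the token handling and the droplet name/region/size/image are made up for illustration, not taken from that project:

    import requests

    # Hypothetical values; pick a real image slug and size for your stack.
    DO_TOKEN = "your-personal-access-token"
    payload = {
        "name": "streamer-stack",
        "region": "nyc3",
        "size": "s-2vcpu-4gb",
        "image": "docker-18-04",
    }

    # Create the droplet via the DigitalOcean v2 API.
    resp = requests.post(
        "https://api.digitalocean.com/v2/droplets",
        headers={"Authorization": "Bearer " + DO_TOKEN},
        json=payload,
    )
    resp.raise_for_status()
    print("created droplet", resp.json()["droplet"]["id"])

After that it's just polling the droplet until it's active, grabbing its IP, and presenting the credentials to the user.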
Is there a name for the pattern recognition section? I'd be interested in reading more about it and how you use it as a measurement/how the results correlate with the decision process. Does it tie in with the behavioral aspect at all or are they completely independent?
We built a PWA with vanilla JavaScript that does a lot of augmented-reality work with the camera. The main issue is the lack of iOS support. Apple recently updated iOS to include support for service workers (literally like this week), but it still lacks a huge number of features that Chrome on Android and basically every desktop browser have.
You do get access to a lot of the core sensors on the device, but the inconsistencies on iOS just make life a lot harder. Hopefully this will change over the next year, but I'm not sure how quickly it will happen. So basically yes, you should build PWAs, but until Apple gets completely on board you can't really leverage them to their full extent like you can with Chrome on Android.
Yeah, they added it in the most recent version of iOS (11.3), but progressive web apps have a very loose definition, the most basic of which is, as you correctly stated, using a service worker. However, service workers are just the ground floor of a much bigger building of features that should be usable across browsers.
In practice they work just fine; it's just annoying to set one up initially, and you have to do some patching to get the latest version of TensorFlow working with it. I use one every day and I haven't run into any problems while running models with both PyTorch and TensorFlow.
So I've been using an eGPU for about 3 months now and it is amazing. This isn't officially supported, so you end up having to do a lot of workarounds to get things like TensorFlow/PyTorch working.
It doesn't let you hot-plug your eGPU into your computer (you have to restart for it to be detected), but officially supported eGPUs can now be hot-plugged.
I've trained various models using a Titan Xp and it's so awesome to be able to keep the portability of your laptop and still get all of that power. Another big benefit is not having to move training data around between servers and other machines if you have it on an external drive or just on your laptop.
After you get everything up and running there isn't much maintenance or anything you have to do regularly to keep it working. The speedup is incredible and it was definitely worth it for me personally; however, it's not a walk in the park to set up initially.
I've tried it with an external monitor, which works great, but I haven't tried it with any games. From looking at the eGPU forums it seems like people aren't running into that many problems with it.
I've been using an eGPU for a few months now and it's fantastic. It's a bit annoying to deal with all of the workarounds for libraries like PyTorch/TensorFlow so you can use the latest version, but other than that it's great.
For that I didn't really follow any blog post; there are a lot of gists about setting it up, but they go out of date fairly quickly and are usually specific to the author's setup.
A lot of people have put in a lot of work to make it as easy as possible to set up. I would just make sure you set things up one at a time and don't immediately jump to trying to get TF or PyTorch working right after installing the drivers or following a guide. Verify your CUDA installation first by building the sample programs and running them (see http://docs.nvidia.com/cuda/cuda-installation-guide-mac-os-x...).
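For example, something along these lines works as a smoke test; the paths are placeholders for wherever your CUDA toolkit version put the samples:

    import subprocess

    # Hypothetical path: adjust for your CUDA version/install location.
    SAMPLE_DIR = "/Developer/NVIDIA/CUDA-9.1/samples/1_Utilities/deviceQuery"

    # Build the deviceQuery sample, then run it; it should list the eGPU.
    subprocess.run(["make"], cwd=SAMPLE_DIR, check=True)
    result = subprocess.run(
        [SAMPLE_DIR + "/deviceQuery"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout)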
The main thing is just getting the GPU drivers set up. After that, installing TensorFlow requires some modifications to the source (a relatively trivial find and replace), and I don't think you have to do anything special for the latest version of PyTorch.
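Once the drivers and CUDA look good, a quick sanity check from Python confirms the frameworks actually see the card (this assumes a TF 1.x-era install; adjust for your versions):

    import torch
    from tensorflow.python.client import device_lib

    # PyTorch: should print True and the card's name.
    print(torch.cuda.is_available())
    if torch.cuda.is_available():
        print(torch.cuda.get_device_name(0))

    # TensorFlow 1.x: the list should include a /device:GPU:0 entry.
    print([d.name for d in device_lib.list_local_devices()])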
Other tips:
1. Make sure you're using a Thunderbolt 3 cable with whatever eGPU you have; other cables will not work even if they're USB-C. (USB 3.1 != Thunderbolt 3 != USB-C.) Read up on the differences.
2. I would recommend the AKiTiO Node enclosure. It seems like most people use it, and the community is really small already, so if you aren't using it, debugging issues might be more difficult. That said, I wouldn't say you can't use something else.
3. You'll want a Docker container with the CPU version of TensorFlow (or whatever you use) so you can fall back to it when the GPU isn't readily available, since TensorFlow won't run if you installed it with GPU support and the GPU isn't there. (A quick sketch of that fallback follows this list.)
4. If you're trying to use nvidia-docker, I'm not sure anyone has gotten that to work on a Mac, because device binding isn't supported in Docker for Mac. You might be able to get it to work by modifying the Docker VM, but I'm not sure.
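On point 3, the fallback can be as simple as a tiny launcher that picks an image based on whether the card is attached. A rough sketch; the image names and training command are made up:

    import subprocess

    def gpu_attached():
        # Crude check: does system_profiler report an NVIDIA device?
        out = subprocess.run(
            ["system_profiler", "SPDisplaysDataType"],
            capture_output=True, text=True,
        ).stdout
        return "NVIDIA" in out

    # Hypothetical image names; use whatever CPU/GPU TF images you've built.
    image = "my-tf-gpu" if gpu_attached() else "my-tf-cpu"
    subprocess.run(["docker", "run", "--rm", "-it", image, "python", "train.py"])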
This is huge, and it's only an alpha. I began reading about AutoML/neural architecture search around a year ago, and something I've been thinking about is:
Why doesn't this just move the optimization problem? Aren't you now just optimizing your DeepRL network rather than the network you're trying to optimize?
The idea of AutoML (in this case[1]) is to improve the NN architecture for a given type of problem.
In "normal" machine learning this is basically hyperparmater optimization for a given dataset (eg, the depth of a random forest, XGB parameters, the best random seed/jk )
In this case it tests different combinations of operators on a known dataset to see what performs best. So it is optimizing the prediction network.
(Also this isn't DeepRL, it's a deep neural network. I think that was a typo)
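To make the "it's optimizing the prediction network" point concrete: architecture search is an outer loop that samples a combination of operators, trains/evaluates that candidate, and keeps the best. A toy sketch, where the operator vocabulary and the evaluate() stub are illustrative, not Google's actual search space:

    import random

    # Illustrative operator vocabulary; real search spaces are much richer.
    OPERATORS = ["conv3x3", "conv5x5", "maxpool3x3", "identity"]

    def sample_architecture(n_ops=5):
        # The "controller": here just uniform random sampling.
        return [random.choice(OPERATORS) for _ in range(n_ops)]

    def evaluate(arch):
        # Stub: in reality you'd build the network, train it on the
        # dataset, and return its validation accuracy.
        return random.random()

    best_arch, best_score = None, float("-inf")
    for _ in range(100):
        arch = sample_architecture()
        score = evaluate(arch)
        if score > best_score:
            best_arch, best_score = arch, score

    print(best_arch, best_score)

The twist in AutoML is that the sampler isn't uniform: a controller (trained with RL in Google's case) learns which combinations tend to score well.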
Jeff Dean talks about AutoML using RL, and the paper "Neural Architecture Search with Reinforcement Learning" also discusses this.
Also, it seems different from more traditional hyperparameter optimization because it makes novel cells, so the structure of the network isn't limited to our existing library of layers/cells.
"Novel Cells" are combinations of existing operators.
It's entirely true that these are combinations that humans haven't (and probably wouldn't) come up with.
I don't want to underplay this. "It's similar to hyperparameter search" makes it sound like it isn't interesting or novel, which is untrue. I completely believe it is a revolutionary way to build software (so much so that I quit my job, raised funding, and am working on a similar space of problems).
But it isn't doing something like inventing new math operations akin to the operators humans put together to form cells/layers. It is rearranging and choosing among those operators in new ways.
The first thing that comes to mind is peer reviews. Although they're not inherently quantitative and don't remove any of the bias/politics/etc., they can be a good indicator of who others feel is helping them.
However, as the writer of the article pointed out, oftentimes you can't see when other people are improving your ability to do work, whether that's through writing more documentation, improving some internal infrastructure, or other often-invisible things that can drastically improve your life as an engineer.
I think an effective approach would be time tracking (i.e., a writeup of how you spent your time each day) in conjunction with some analysis of how the time you spend on certain things correlates with what other engineers are doing. This would of course rely on everyone being detailed in their writeups, and the analysis still might not catch certain things because it's difficult to write down exactly how you spent your time. Perhaps you could write down only the three most difficult/annoying problems you ran into and just track that.
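As a toy version of that correlation analysis, assuming the daily writeups were coded into time categories and you had some peer signal to compare against (all of the column names and numbers here are made up):

    import pandas as pd

    # Entirely hypothetical data: hours per category from daily writeups,
    # plus a peer-review "helpfulness" score for the same periods.
    df = pd.DataFrame({
        "hours_docs": [2, 0, 1, 3, 1],
        "hours_infra": [1, 4, 2, 0, 3],
        "hours_features": [5, 3, 4, 4, 3],
        "peer_helpfulness": [4.1, 3.2, 3.8, 4.5, 3.9],
    })

    # Which time categories move with how helpful peers say someone is?
    print(df.corr()["peer_helpfulness"].drop("peer_helpfulness"))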
I remember reading about some company that is trying to make this easier by tracking computer use and creating graphs of how people interact within their organization and how that changes over time, but that doesn't seem like it would work everywhere.
I don't know if there is a way to truly quantify teamwork or organizational impact. I think the important thing is to encourage your organization to communicate more: talk about what you're doing, why you're doing it, and how it helps others within your organization. If your organization is fairly big, write about it and share your writeup internally. Train people within your organization to speak up if they feel they aren't being appreciated for the impact they've made.
Ultimately I think it's an extremely hard problem because it stems from the question of what having a positive impact means. The answer to that can branch off into millions of different small tasks that could be totally unique to a subset of people within your org.
----
The best you can do is:
* Measure what you can.
* Know that's not the entire picture and it never will be.
After that try to create a culture that:
1. Speaks up about their own impact
2. Shares the impact of others' work, especially of those who might not speak out much
3. Is openly appreciative of and encourages 1 and 2.
Googler here: the Google performance and promotion process does, in fact, rely on reviews from your peers. The process happens every 6 months, with every other cycle being optional.
That said, your peers saying you were helpful is not enough on its own; you need to demonstrate impact (and other attributes, depending on your level).