> Here's what a healthy online community has: respect for other members, maturity, empathy, self-awareness, strong moderation, and a diverse enough set of views from participants to make conversations well-rounded and thought-provoking.
How do you know this? What data verifies that this is what a healthy online community looks like? Why is strong moderation part of a healthy online community, when strong moderation in real life would be a signal of unhealthiness (it's censorship)? If an online community is mature, empathetic, and self-aware - why would it require strong moderation?
Strong moderation in real life occurs all the time, but because the feedback is immediate, behavior modification of the individual generally occurs much faster than online. For example, if someone calls you a derogatory name to your face, you will either disengage or get mad back, both of which provide negative reinforcement for that behavior. Such negative feedback loops exist everywhere in our social communications.
Those are not the only two options available when someone calls you a derogatory name. A fun third option is to completely ignore it and, through that very indifference, demonstrate that you've already won the conversation, which breaks the negative reinforcement you're describing.
Seriously, next time someone goes after you, try it. Laugh and move on. It's really, really fun to watch what people do in response, because their anger at not landing the desired effect often manifests in physical twitches. Don't give people what they want until it benefits you.
> Why is strong moderation part of a healthy online community
Membership criteria are central to maintaining the competency level of the members. And competency on the topic (along with intellectual honesty) is the single most important, perhaps the only important, component of a useful discussion: it would be insanely destructive for the New England Journal of Medicine to publish every single thing it ever received, with equal weight, in massive weekly tomes.
Moderation acts as competency monitoring to a degree.
The problem with literal membership is that it precludes autodidacts, people new to the field, and people who can't be bothered with a complicated joining process. Thus unrestricted membership with moderation. And indeed on reddit, the highest-quality subreddit, /r/science, is the one most aggressively moderated.
For fact-based, purpose-driven venues, if you believe you were censored because of your opinion, you should not have posted an opinion in the first place. That's certainly how it works in the workplace, which is how some people need to use the web.
There can be unlimited venues where partially informed people post their opinions; maybe we call those "healthy", maybe not. But some people want to work on problems that actually do require knowledge, and they should not have their discussions constantly vandalized. A "functional" venue, one might say.
> And indeed on reddit, the highest-quality subreddit, /r/science, is the one most aggressively moderated.
Although IMHO their extreme stance on moderation is not always a good thing. For example, they have an unwritten (at least, the last time I checked) policy of nuking entire threads that contain some poor comments, even when this also deletes many useful comments later in the same thread.
The first time I contributed substantially to a discussion in /r/science, on a subject where I did have something resembling an expert opinion to offer, my entire contribution (which took several hours to write across a handful of comments, with carefully cited sources, etc.) was summarily deleted without warning. I queried this with the mods, and they explained the policy about nuking entire threads. Given the nature of some of the early comments around that thread, I couldn't disagree with the assessment that they were not a constructive contribution. However, I also immediately filed the whole sub in the same dustbin as SO and have made no further attempts to contribute, for much the same reasons.
> If an online community is mature, empathetic, and self-aware - why would it require strong moderation
First off, I think you have a point. However, the reason I think strong moderation is still required, even if those previous qualifiers are met, is that online communities are generally built around a focus on some shared interest. Real flesh-and-blood people, even if they are mature, empathetic, and so on, will invariably have a variety of passions, some of which may conflict with the focus of the community.
If a passion that conflicts with the focus surfaces in a discussion, it may _ignite_ that passion among a subset of the members of the community. The conversation derails, the community loses focus, and the members abandon the forum. Strong moderation keeps the community focused, and for that reason the community thrives and is "healthy".
Think of the evening news on TV. You have half an hour to fill, so you have to pick the most relevant things. Some things will make the cut, some won't.
Once you get to 24-hour news it is another thing, because they are repeating the same 15 minutes over and over again. The model that describes that is not censorship; it is spam, electronic warfare, jamming -- an entirely disingenuous communication that superficially looks like "speech" but has nothing to do with "free speech" and such. In fact, it inhibits real communication the same way weed killer kills plants.
Online isn't the evening news or a 24-hour news station - I'm not sure what point you're trying to make. HN doesn't only moderate so that the "best" content makes it to the top; they also moderate (censor) certain topics - you don't see a lot about politics on HN because they don't want those subjects discussed on the platform.
Counterfeiting is creating something that is fake but presenting it as real. Counterfeiting has nothing to do with quantitative easing, and you shouldn't conflate them.
The government is considered the sole producer of currency. As such, it is the hallmark of valid money. Anyone printing money in a basement is creating a knock-off.
What's the advantage of this over just using pip (other than that it has a GUI, which I don't see as an advantage, but I understand others might)? How do you handle updates to packages? I'm just really struggling to see why this would be useful.
Forgive me, I was confusing strongly typed with statically typed. My point still stands, though: for someone used to a dynamically typed language like Python, having database columns that aren't locked to a single rigid type will seem quite natural.
It doesn't, though. There are almost no dynamically typed languages where you supply a type yet don't enforce it; maybe Forth is the closest, with its stack-effect comments.
If you give me a type, I expect that type to be enforced; if you omit the type, fine, I'll assume it'll be dynamic at runtime.
All these dynamic languages are also strongly typed for a reason; even JavaScript uses === everywhere instead of the weakly typed ==. The bugs that occur because of automagic conversions are just ridiculously hard to track down, and even statically typed languages can fall into this pit, for example Scala with its implicits (shudder).
But SQLite does the worst of all worlds: you have to specify a type, yet you cannot rely on it; it sometimes does type conversion for you (e.g. int to string and back) and sometimes does not.
They should've just implemented a single ANY type, or had the syntax not use types at all and deviate from the SQL spec (which they do a lot anyway).
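To make that concrete, here's a minimal sketch of SQLite's flexible typing ("type affinity") using Python's sqlite3 module against a throwaway in-memory database; the table and column names are purely illustrative:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (n INTEGER, s TEXT)")

# '123' is silently coerced to the integer 123 because column n has INTEGER
# affinity, and 456 is coerced to the text '456' for column s. In the second
# row, 'abc' cannot be converted, so it is stored as text in the INTEGER
# column without any error being raised.
con.execute("INSERT INTO t VALUES ('123', 456)")
con.execute("INSERT INTO t VALUES ('abc', 'def')")

for n, s in con.execute("SELECT n, s FROM t"):
    print(repr(n), repr(s))
# 123 '456'      <- both values converted to match the declared affinity
# 'abc' 'def'    <- no conversion possible, stored as-is, still no error
```

The declared types behave as hints rather than constraints, which is exactly the "sometimes converts, sometimes doesn't" behaviour described above.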
There's no other way to say this without sounding rude, but you weren't unable to learn R because you couldn't understand what a dataframe was - you were unable to learn R because you gave up. Blaming a data structure for the failure seems like a bit of a stretch.
That is what I said. I didn't understand what a dataframe was. Note that I didn't really blame the data structure. I said it was an irritating factor. Someone else might not have been as irritated. But if R was closer to what I already knew, learning would have been easier. For me.
Maybe you could try saving the pre-trained model to a storage bucket (e.g. S3) and then using Flask (or whatever framework you like) to create the endpoints. When the Flask app starts, the model can be loaded into memory from the storage bucket, and then you could create, for example, a /predict endpoint that accepts whatever data is needed to make the prediction. Deploy this to some PaaS (Heroku, AWS Elastic Beanstalk, GCP App Engine) that has auto-scaling as a feature and you're sorted.
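A rough sketch of that shape, assuming boto3 for the S3 download and a scikit-learn-style pickled model; the bucket name, object key, and payload format are all placeholders you'd swap for your own:

```python
import pickle

import boto3
from flask import Flask, jsonify, request

app = Flask(__name__)

# Load the pre-trained model from the bucket once, at startup, so each
# /predict request only pays for inference (bucket/key are placeholders).
s3 = boto3.client("s3")
obj = s3.get_object(Bucket="my-models-bucket", Key="model.pkl")
model = pickle.loads(obj["Body"].read())


@app.route("/predict", methods=["POST"])
def predict():
    # Expect a JSON payload like {"features": [[1.0, 2.0, 3.0]]};
    # the exact shape depends on how the model was trained.
    payload = request.get_json(force=True)
    prediction = model.predict(payload["features"])
    return jsonify({"prediction": prediction.tolist()})


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```

Behind a production WSGI server (gunicorn or similar), each worker holds its own copy of the model in memory, and the PaaS's auto-scaling just adds more instances of the same app.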