The allegation that driver payouts are manipulated to:
1) Hook new drivers with better-than-average rates before tapering off
2) Take into account the vehicle's age/model/value and what its payments would look like in the market, then dole out enough to cover costs but not so much that the driver gets ahead of other drivers
Totally baseless, sourceless hearsay, though. Still, if true, it really plays into the image that "there's no depth they won't go to".
Add another: the various platforms talk to each other (or analyze driver movement) in order to manipulate order offerings in such a way as to discourage drivers from taking orders from more than one app at once. One app will wait until the other has confirmed an accepted order before deluging you with its own orders, all taking you in the opposite direction (which makes you late for one or more deliveries, giving them cause to terminate your contract).
Pure anecdata. However, the change was stark between the first two days I multi-apped, when I made almost 3 times my usual hourly rate, and the following weekend, when:
>neither app would send me orders for up to half an hour
>as soon as one had assigned me an order, the other would start sending me multiple per minute
>all of these orders were either comically low-compensation (no tip), a 15-minute-plus drive away from the order I'd just accepted (to areas it had never sent me before), or both
> Uber Intelligence will let advertisers securely combine their customer data with Uber's to help surface insights about their audiences, based on what they eat and where they travel.
So the companies have the identities. It sounds like they're going to be learning something about their customers; the question is just how much detail they'll get.
One of the easiest methods is to find a different data source with overlap and use that to map real people to anonymized lists. Big tech companies find this super easy to do because of all the internal data they already have on everyone.
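A minimal sketch of that kind of linkage attack in Python, with entirely made-up data and column names, just to show the shape of the idea (a toy join on surviving quasi-identifiers, not anyone's actual pipeline):

    # Toy linkage attack: join an "anonymized" dataset with a second dataset
    # the attacker already has, using shared quasi-identifiers
    # (ZIP code, birth date, sex). All values here are invented.

    anonymized_orders = [
        # No names, but quasi-identifiers survived the "anonymization".
        {"zip": "94107", "birth_date": "1990-03-14", "sex": "F", "orders": 212},
        {"zip": "10001", "birth_date": "1985-11-02", "sex": "M", "orders": 37},
    ]

    public_records = [
        # e.g. voter rolls, breach dumps, loyalty-card data
        {"name": "Alice Example", "zip": "94107", "birth_date": "1990-03-14", "sex": "F"},
        {"name": "Bob Example",   "zip": "10001", "birth_date": "1985-11-02", "sex": "M"},
    ]

    def key(row):
        # The quasi-identifier tuple used to link the two datasets.
        return (row["zip"], row["birth_date"], row["sex"])

    lookup = {key(r): r["name"] for r in public_records}

    for row in anonymized_orders:
        name = lookup.get(key(row))
        if name:
            print(f"{name}: {row['orders']} orders re-identified")

The more internal data a company already holds, the richer that lookup table gets, which is the point being made above.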
"When done properly" is doing a lot of heavy lifting there. Time and time again it's been found that most aggregates are not filtered properly and can be deanonymized with ease.
It's not that one is big bad and one is little bad; it's that the little bad can become big bad with a small amount of work by an attacker/company. Then when you add in zero external third-party verification of these companies' claims, you really don't have any reason to believe them.
> "When done properly" is doing a lot of heavy lifting there.
Not really. There are common practices for it. Yes, it hits HN when deanonymization happens at a well-known company, just like it hits HN when there's a security vulnerability that gets patched at a well-known company.
But "it's the little bad can become big bad" is what's doing the heavy lifting in your argument. No, that's not how it works. There's no universe in which aggregate data can be deanonymized to anywhere close to what all of the individual profiles would be. It's a completely false equivalenace, period.
And I'm tired of people acting like companies putting on a show of protecting our privacy is doing anything actually helpful. But you're right. I'm wrong and clearly don't care about privacy.
As a completely unrelated aside, I wonder how much social progress is hindered by people alienating people on their own side.
It's more challenging to encourage correct implementation of semantics than implementation of visuals, which is a great reason for using the element that was designed for this use case.
Technically it's the country code for the North American Numbering Plan, which is used by several other countries as well as the US.
But in this context it'd be the first digit of the area code, with no country code being used because the call is within the US. There are no area codes in the North American Numbering Plan that start with a 1.
"The syntax rules for area codes do not permit the digits 0 and 1 in the leading position."
My guess would be it's to avoid ambiguity with the fact that 1 is also the country code. If I recall correctly, historically, dialing the 1 was necessary for any long distance call (even if not international).
>If I recall correctly, historically, dialing the 1 was necessary for any long distance call (even if not international).
You recall correctly. I haven't had a landline for a number of years now, but I think it was still required toward the end, when I still had one. I don't think it was ever needed on a cell (or maybe even valid) when I first got one at some point in the 1990s.
It used to be the case that the middle digit of an area code had to be a 0 or a 1. All the O.G. "cool" area codes like 212 are in this format, and the less desirable new area codes like 646 are not (yes, this is an accidental Seinfeld reference).
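To make the syntax rules above concrete, a quick Python sketch (this only checks the shape of the digits, not the list of actually assigned codes; the second pattern is just the historical pre-1995 middle-digit rule mentioned above):

    import re

    # Modern NANP syntax: an area code is "NXX", where N is 2-9 and X is 0-9,
    # so nothing starting with 0 or 1 is valid.
    MODERN_AREA_CODE = re.compile(r"^[2-9][0-9]{2}$")

    # Pre-1995 syntax: the middle digit had to be 0 or 1 ("N0X"/"N1X"),
    # which is why the original codes like 212 have that shape and newer
    # ones like 646 don't.
    HISTORICAL_SHAPE = re.compile(r"^[2-9][01][0-9]$")

    for code in ["212", "646", "101"]:
        print(f"{code}: valid={bool(MODERN_AREA_CODE.match(code))}, "
              f"pre_1995_shape={bool(HISTORICAL_SHAPE.match(code))}")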
Another interesting point: if someone thinks that robots can be a better partner, then they also no longer understand what it means to be human.
Maybe it depends on what you want in a relationship. AI is sycophantic, and that could help people who might have trust issues with humans in general or with the other sex (which is happening way more than you might think in younger generations, whether that's involuntary celibates or whatever).
I don't blame people for having trust issues, but living even longer in the false hope that robots are partners would just keep them stuck and wouldn't help them.
Whether there should be regulation on this depends on whether it becomes a bigger issue, but most people, including myself, feel like the government shouldn't intervene in many things. Still, I don't think regulation is happening any time soon, since AI, big tech money, and the stock markets are so bedded together it's wild.
That is what I wrote, if I wasn't clear. Thanks for putting it into clearer words, I suppose.
I 100% agree. I mean, that was what I was trying to convey, I guess, even if I got sidetracked thinking about government regulation, but yeah, I agree completely.
It's sort of self-sabotage, but hey, one thing I have come to know about humans is that judging them for things is gonna push them even further into us vs. them; we need to understand the reasons why people find it so easy to turn to LLMs. I guess sycophancy is the idea. People want to know that they are right and the world is wrong, and most people sometimes don't give a fuck about other people's problems, and if they do, they try to help, and that can involve pointing to reality. AI just delays that by saying something sycophantic, which drives a person even further into the hole.
I guess we need to understand them. It's become a reality, dude; there are people already marrying chatbots or whatnot. I know you must have heard these stories...
We are not talking about something in the distant future; it's happening as we speak.
I think the answer to why is desperation. There are so many things broken in society and in dating that young people feel like being alone is better, with chatbots to satisfy whatever they are feeling.
I feel like some people think they deserve love, and there's nothing wrong with that, but at the same time you can't expect any particular person to just love you; they are right to think about themselves too. So those people who feel they deserve love flock to a chatbot, which showers them with sycophancy and fake love, but people are down bad for fake love as well; they will chase anything that resembles love, even if it's a chatbot.
It's a societal problem, I suppose. Maybe the internet fueled it accidentally: we fall in love with people over just texts, so we have equated a person with texts and thus with love, and now we have clankers writing texts and fellow humans interpreting them as love.
Honestly, I don't blame them; I sympathize with them. They just need someone to tell their day to. Putting them down isn't the answer, but talking with them and encouraging them to get professional therapy in the process could be great. So many people can't afford therapy that they go to LLMs instead, so that's definitely something. We might need to invest some funding to make therapy more accessible for everybody, I guess.
> [..] we need to understand the reasons why people find it so easy to turn to LLMs.
I think it's ultimately down to risk, and wanting to feel secure. There's little risk in confiding in something you can turn off, reset, and compartmentalise.
I can't imagine any depth they wouldn't dive to, in order to get a morsel to feed on.