Google has another secret browser (matan-h.com)
777 points by matan-h on Feb 2, 2024 | hide | past | favorite | 219 comments


A team that handles security vulnerability reports should never say "oh - that's another internal team. Go ask them...".

In fact, almost any staff member inside an organisation who receives a plausible vulnerability report should ensure it reaches the right people. It's not something you should shrug off.


The "that's another internal team" reply was presumably more about the bounty than the vulnerability itself. Still, my contrarian take: support - whether for external customers or internal stakeholders - is a game of hot potato: the first person who fails to forward it to someone else gets burned.

It would be great if everyone were happy to drop whatever they're doing and lead resolution of a customer's complaint, regardless of who the actually empowered/responsible person/team is. Alas, we live in a world where most people subscribe to the Copenhagen Interpretation of Ethics. In this world, even forwarding a request to those responsible is dangerous. Anything more than that entangles you with the problem, meaning you'll be held responsible for it, no matter your actual connection to it.

We can call it the "principal-agent problem", or just "survival in a world where requesters are hunting for anyone willing to engage with their requests".

(Source: I used to be the one willing to handle any internal request even tangentially related to my work, until my line manager told me to ask requesters for a project ID or billing code before giving any help that required more than a minute, because otherwise I'd end up doing none of the work we're actually being paid for.)


> support - whether external customers or internal stakeholders - is a game of hot potato

I’d like to shift this a little:

Support whose primary metric is handle time is in a game of hot potato.

From a business perspective, the managers and leaders always feel like there are too many fires, which inevitably leads to one of two things: pressure on the front lines to “go faster” and “stop doing unnecessary work” (aka “taking time away from the fires”), or some level of management intentionally blocking higher-ups from seeing those fires so that they look like they are managing the department well (and in this case, not only is there the same pressure on the front lines, but there's additional pressure not to reach out to anyone except through that manager).

When the primary metric is handle time, the issues pile up, there are never enough people to handle them, and the business slowly sinks as no one with a budget sees the “ounce of prevention [that can prevent a pound of cure]”.

However: if the metric is the minimum number of departments an issue touches before it's resolved, it's a whole different thing. Suddenly playing hot potato is a problem and “problem ownership” is praised. There are other metrics too that produce different support cultures (and sometimes different games), but the reason hot potato is so popular is that those other metrics all require top-level execs to be comfortable with spending now to save down the road.


This seems difficult to resolve because staff time is limited and you can’t do everything. That’s why tasks need to be prioritized. But how?

Distributed prioritization seems like a problem; you can get priority inversion if you’re not careful.


How is a question whose edges are unique to the company, but if you're coming into a situation like that, one generally solid approach is to accept that things will suck for a while, and might even be worse short-term, then prioritize chipping away at the underlying causes.

Something like reducing staff ticket time by 20%, then using that 20% for feedback, strategy, and structure. Some (maybe even most, if you're lucky) of the front-line staff will have enough experience and insight to be invaluable here (though it is likely they won't yet have the language to express it in ways that make sense to management). As the company goes through the process of communication, discovery, awareness, planning, and execution (preferably with a tight feedback loop), some of the underlying causes will be addressed, the front-line pressure will ease off, the cascade effect will bring ease-off in other departments as well, and that 20% can go down to 5% or even 0 (not recommended lol), which will further reduce the workload and give longer-lasting relief.

Then with staff time “out of the red” the company can start thinking about what they will do in a fire-free future.


It's a classic triage problem. But "tell them to go away" isn't a triage strategy.


> But "tell them to go away" isn't a triage strategy.

The stubbornness of a large organization can be maintained for several hundred years, assuming it exists for that long, so this isn't quite true.


Exactly it. I’d like this as a tshirt that people can wear when consulting on support team management.


lol first they came up with the phone menu. Then they came up with complicated phone menus meant to confuse you and get you to give up. Then they replaced those with voice recognition menus that are straight up infuriating. Then they replaced that with AI that tries to act like it's a person.


All because telling you to GTFO isn't an option. Typical customer support is thus a proof-of-work scheme: to get it, you need to waste some significant amount of time and energy up front.


The problem is that it's often easier to just bypass all that and find ways to bug higher-level employees whose time costs the company significantly more. If more people realized that, then regular support would quickly be told to actually solve customers' problems.


> There are other metrics too that produce different support cultures

Could you elaborate?


Not them, but I worked at a University in their IT dept, and our chief metric was solely 'customer satisfaction' (and 'customer' meant 'faculty'). We almost never wanted to pass tickets, because that tended to make the 'customers' upset. They wanted white glove service, and that's what they got.

It also meant that some people had "forever tickets", that were a continuous series of tacked-on asks by the same faculty member. HigherEd IT can be crazy (and crazily-laid back).


I've heard IBM used to be the same, if the customer had an unlimited expense account with them.


I feel like my experience with big cos is that the "that's another team" might go like this:

Parental controls is essentially in maintenance mode and has 1 dev nominally responsible for it; maybe their workload is divided between that and a bunch of other stuff that they deem their "real" work. The way the component works means that bugs typically get assigned elsewhere in the system, very far away from parental controls; you, the owner of Contacts, land a bug like "Your feature XXX has the following failure in parental controls mode." The team responsible is like... "Why do I care about this? Why should I take a code change for this? Isn't that your problem?" Whoever is responsible for parental controls might not care, but even if they do, they don't have political leverage over the owner of the Contacts app or whatever. Therefore, won't fix.


Yes, and the worst part is I don't think it's even a side effect of organizational structure because I've seen it in so many places. There is just a quirk of human psychology where "if you touch a problem it belongs to you now," and the result is a situation where everyone would be genuinely happy and eager to help but nobody (except the newbie) dares try because the consequences for trying are immediate and dire.


This seems related to what I think of as the “jurisdictional hack.” Nobody can solve every problem, so you define a realm that’s your responsibility and anything outside it is someone else’s problem.

Keeping your jurisdiction small means you can do more within that jurisdiction, by ignoring even important problems that are outside it.

But the alternative is ineffective doomscrolling because all the world’s problems are yours.


By definition Google's board, and shareholders who elected them, can 'solve every problem' within Google, since they have the authority to wind down the company.

It's just that practically nobody in the world can credibly demand that Google's board, or even just upper middle management, make a decision on small matters.


Sure, but that's because they can delegate to lower management tiers.

When a ticket reaches your average IT guy, they can't usually delegate it to a lower tier employee unless there are formal support tiers, like L1 and L2, and the ticket was sent to one of the upper layer techs (which usually doesn't happen straight away, because of how L1 and L2 support teams work).

If this is not the case, the only way out is forwarding the issue to another team.


There’s a lot you can criticize Google about, but that would have extremely disruptive worldwide effects. Google is load-bearing infrastructure. Dismantling it would require solving whole lot of new problems.


This is how we have a really urgent problem, which I can fix in the non-production environments in an afternoon, only to have people emailing me for the next month asking why this extremely important problem is not fixed in prod yet.

Sorry guys, that’s another team. They’re extremely reluctant to deploy even very important things. Separation of responsibilities means we have no other choice.


So, the future is Universal Basic Income and Open Source.


> The "that's another internal team" reply was presumably more about bounty than vulnerability itself.

Yeah, that's my read. Basically the first line of support said "parental controls and screen pinning don't count as security boundaries", and the author is upset not because of an abstract argument about impact but because they want to get paid.

Should they be security boundaries? Honestly I'm mixed on this. First because the threat model is totally different when the attacker is your teenager (i.e. who exactly is the harmed victim? The parent?).

But mostly because the whole idea behind bug bounties is to encourage disclosure of vulnerabilities that would otherwise be sold and deployed against the public at large. That is, the bugs have "value", and we're all better off if the purchase price is borne by the software developer rather than the criminal. There's no market for parental controls bypasses in that sense.


> parental controls and screen pinning

Will be used in scenarios with much more at stake than someone's belligerent teenager.

Think of these features in the broadest sense you possibly can.


Are you thinking about something specific? What's the scenario where the public harm to a usage control bypass becomes more valuable to an attacker than the bug bounty?

(Edit: <sigh> than the bug bounty that the linked author desires. Really?)

Remember that neither of these technologies allows the device to do anything it isn't able to do in its default configuration. They're essentially a form of DRM: disallowing otherwise useful activities because of the desires of the owner (and not the user). Would you demand, say, Apple pay a bug bounty for a DRM bypass that let people rip Netflix videos? Probably not, right?


> What's the scenario where the public harm to a usage control bypass becomes more valuable to an attacker than the bug bounty?

Since the bug bounty is zero. All the time?


Or the value of the exploit is zero too.


You're totally right in the general case, but in the specific case of security vulnerabilities it makes sense for there to be an exception (even if the action taken is just to hot potato on your side).


Google is the king of "not my department." "No, I don't have contact with any other department within Google." "No, I don't have the email address of anyone on any other team in Google." WTF, Google?


So having worked there, this was absolutely true, and the parent complaint about hot-potato is also absolutely true.

The problem as I see it is that Google came to be dominated by an egalitarizing culture which at first wasn't necessarily a problem. This was an explicit choice by Larry and Sergey, that your manager should not be able to unilaterally fire you just because of a personal disagreement, nor stiff you out of financial rewards, none of that. So, your manager lacks any formal authority over your day-to-day work: they have to use politics and soft power. Instead, performance is reviewed by a committee of your manager’s peers, who can “calibrate” that manager’s opinion of you against others and against empirical data.

The result of being judged by a faceless committee is that implicitly, some things generate the empirical data that they look at, and other things don't. It's helpful to oversimplify this to a common currency of “perfcoin” Ⓟ even though that was never explicit at Google. Some activities generate Ⓟ, some don't. Google has built dozens of new chat apps because whenever you can have a good excuse for how this aligns with your business priorities, they generate lots of Ⓟ. The design documents are rich in Ⓟ, the tracking issues for each feature are rich in Ⓟ, getting the thing privacy-analyzed and internationalized can get you some Ⓟ, the inevitable work to merge it into another chat app is also worth Ⓟ. But please understand that the existence of Ⓟ is a result of semi-hierarchy. The manager exists (hierarchy) but has to point to an objective measure (Ⓟ) to say that you're not doing what you're supposed to (semi-), it is almost a mathematical deduction that this has to exist given that structure.

Now, networking with people outside of your team will never get you any Ⓟ. And this is not for lack of trying! When I was there it was a job responsibility to do some things that were not your job responsibility (“community contributions”) to try and associate Ⓟ with some form of networking! And everyone hated it, and it didn't work anyway. Manager-committees immediately decided that Ⓟ would not be awarded for excessive networking, just that you had to prove a little bit of networking or else Ⓟ would be deducted. Furthermore, the most reliable community contributions were noncommunal—conducting hiring interviews being the easiest: probably this person will not be hired, but even if they are, you will never interact with them ever again. But you conducted N interviews in the quarter, and that is just barely enough to not get docked some Ⓟ for being a shut-in.

I am giving somewhat of a negative portrait and it is not all negative, see Laszlo Bock’s Work Rules for the better parts. I'm just saying that the culture of not-my-department has been created by, and is sustained by, incentivization.


This is a great comment that probably deserves a post of its own.


Very high in Ⓟ


It's most likely because when you forward any internal information to any outsiders, you will get a stern dressing-down by your manager.


That's the ultimate cop-out. There are ways to convey coordination with internal teams and colleagues without exposing them:

"I've reached out to a colleague who has provided me with some additional context" or "this work requires some additional input from another team - I'm working to establish this and will get back to you with more details"

Neither of the above examples provides any more context on internal teammates or their organizations. However, they do require additional work and a culture of customer support (which Larry Page was infamously against for years).


That’s an explanation, not a cop-out. It’s saying it’s a problem with management’s incentives, and presumably not easily corrected until they have a good CEO.


In my experience, this is common with most large corporations, not just Google.


It’s not really a vulnerability in the sense that it leads to any sort of system compromise. It’s definitely a design flaw in whatever features they added to the OS, but not necessarily something that warrants a huge investigation.


I think anyone expecting these security-related features to work as expected would regard it as a vulnerability.


There are also just normal bugs and known limitations and acceptable risks.


So are nearly all security vulnerabilities.

Is bypassing the lock screen a security bug?


The lock screen isn't bypassed in either of these.


He didn't say it was.


Is parental lock really "security-related" ?

Like it's a frustrating response to this valid bug report, but it's not really a security risk here, either. You don't actually bypass the lock screen or anything.


I think it really is and could have serious safeguarding issues.

Also, other features are affected, like kiosk mode etc. The implications are unclear but could conceivably be quite serious in some scenarios.


> Also, other features are affected, like kiosk mode etc

Is it? That's neither demonstrated nor claimed in the linked article.

> I think it really is and could have serious safeguarding issues.

Elaborate. What's the security risk from your child using a browser after the parental control timeout expired? It's annoying that the automatic limits didn't fully take effect, but data isn't compromised as a result, either.


Browse the open internet (or internal network?!) from a McDonalds ordering kiosk?

No skin in the game, but this is very similar to the old Win95 "About... Help... $BROWSER" style bypasses.


>Win95 "About... Help... $BROWSER" style bypasses

Could you tell me more about this?



We are worried about children being compromised. This is as much about data getting into their heads as it is about basic exfiltration.


That is still out of scope. And as a parent, you have to accept that you cannot keep control of everything. Your child might see stuff in the streets, might see stuff on someone else's device for which you weren't prepared, or find ways to circumvent any limitation you put on their life.

And being educated != being in jail.


Oh sure, kind of like we adults cannot keep control of everything, like secret browser loopholes. Hey, one dev parent's scope is another dev parent's creep!

But honestly, maybe re-read the HN guidelines: https://news.ycombinator.com/newsguidelines.html

> Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith.

These are parental controls. They aren't working in this one specific way. That's. The. Scope.


The scope relates to the security risk. The device and its data are not being compromised by the kid escaping to a browser.


I think “your threatmodel is not my threatmodel” applies here. I can easily see a parental lock being relevant in certain abuse cases for example.


If I read that correctly, in the second case someone can bypass the pinning feature to access your personal information via the default browser's active sessions. That would be a compromise if that's the case.


Sure, it's not arbitrary code execution, but it's certainly privilege escalation.


They said 'go ask them' about why they decided to close the issue (which also implies that someone went over this already), not 'go ask them because we simply don't care to look', as your comment seems to imply...


The result was the same. Someone was reporting a bad thing. The bad thing never got fixed.


This is too reductive.

The 'we analyzed the issue and decided it won't be fixed' is NOT the same as 'we don't care about this, go talk to some other team and maybe they'll fix it'.

Deciding something is not a bug is not the same as just ignoring the bug and not fixing it.


In this case it is - because someone outside the org - who has no responsibility for your company fixing its stuff - is being asked to make sure the issue isn't lost.

Google lost out in this case - because an employee pushed responsibility onto an outside party.


What did they lose out on?


1. They are still shipping a product with a fairly serious flaw because the report didn't get to the right people.

2. The flaw was publicly exposed, which caused reputational damage.


And also this crisp new $50 bill I have in my pocket, if that’s all they care about.


How do you even find the right people? I have no idea how I’d do that at my company.


It's a vulnerability. Report it to infosec. Even if they likely don't fix vulns themselves, they are the ones tracking SLAs for remediation and coordinating tickets, etc.

If your infosec team does not have a contact email (or preferably, several, e.g. "incident@", "security@", etc), slack/teams/webex channel, or escalation process for reporting security issues, that is decently well-known to employees, they are not doing their jobs well.

Quarterly infosec bulletins? Infosec day? "Cool incidents" annual recaps?

I cannot imagine working on an infosec team that doesn't make itself very visible, and very *available*, to other groups, and always position themselves as the 'catch-all' place for any security issues (which these bugs clearly are).


Send it to the CTO, the receptionist, or even HR. Hell, CC them all with a note saying that it is unknown where to send it, in the hope that someone will know where to forward it. It also sounds like your company needs better internal communication about how to route things within the company.


I worked at Google for many years.

Google is so huge that it's extremely common to know you have an important bug for another team, but not to be able to route it to them because you can't find their team name.

Most teams have "code names" that have nothing to do with the public name of the project. For example, the parental controls team might be named "pigglewiggle-team" and the Android contacts team might be named "katniss-team" and their bug components might have similarly obscure code names. If you don't work with those teams frequently it can be really daunting to find.

Even when the bug components have hints that get you close to the right place, it's not unusual to learn that most of the engineers are busy working on the new version of the app that isn't released yet, and the old version of the app (the one with the bug) has been destaffed and bugs are supposed to be routed to some other random team that's literally never touched the code.


This giving codenames to projects thing seemed so endearing when I was working in a team and company of 10, but now that I’m working in a team of 100, and an org of 2000, it’s so extremely aggravating.


So, it isn’t just my company that does all of the above. I guess that’s good to know. At least we aren’t the only dysfunctional ones.


I think any company that has more than 1 employee leans towards dysfunctional


Any large bureaucracy seems to do that. Either you have a relatively flat org chart, but then teams have limited visibility past their "neighbors". Or you have a deeply nested hierarchy, where there's a clear path to route requests to any given place, but it takes ages because of all the red tape involved in escalating it sufficiently to route it properly (and then people try to avoid that hassle).


I think it's safe to say their competitor does not have this issue.


My company has 80,000+ people. I doubt those people have any clue either.


Regardless of the number of employees, if the CTO can't figure out where it needs to go, then WTF does that CTO have a job?


Try contacting the CTO in a company of 100k+ people. Your email probably goes to their junk folder. I worked in a similarly sized company and I didn't even know the name of the CTO, nor if there even was one.


They have these exact phrases in their best practices list, but with no instead of any, and always instead of never.


I assume that's the reason he just made a writeup about it instead.


When I was a kid, banks and similar institutions would have computer terminals in the lobby, loaded up with the company website in a special browser that would only let you view that site. This was also the age of Best Viewed In badges.

Whenever my parents would take me with them to run errands, I would find one of these computers and click around until I got a Best Viewed In badge. (It was often on the Help page.) That would bypass the restrictions of the single-site browser and take you to e.g. netscape.com, where you could find a link to a search engine and browse wherever you wanted.


Reminds me of when the PCs on display at Sears, CompUSA, etc. would have their demo software running, which didn't let you do the one thing you should do when buying a PC... actually use it. So I used to CTRL+ALT+DEL and kill the demo software with the task manager. For the more stubborn demo software that would instantly relaunch, I'd then open msconfig, uncheck the demo software from running on startup, and reboot.

I remember the amazement and joy of other people nearby watching, who would then ask me to do it on their PC so they too could click around and play Minesweeper or Pinball :-)

Sounds so amateur now, but it's hard to remember that CTRL+ALT+DEL in the mid-90's was -- dare I say it? -- sort of a power tool that only more experienced users knew about.


I would actually say that Ctrl+Alt+Del was better known in mid-90s simply because so many people were still on DOS, and that was the proper and documented way to reboot it. Which was not uncommon to do especially since if a DOS app locked up, your entire OS was unusable.


I should probably clarify that I meant "known" by average consumers you'd see in a retail shopping mall, most of which at that point in their life have never owned or possibly even used a PC.


This reminds me of trying to get around the web filter on the computers in high school. Going to MS Word, selecting open from URL, then entering a search engine or whatever site you wanted, would open the browser and circumvent the web filters.


This is a thing on the PS5. You can't just browse the web like on the PS4, but you can open links in messages. So put https://google.com in a message and open it.


Why bother? You can use a pihole/etc. and redirect the user's guide to wherever you want, like a WebKit exploit and HEN enabler. (Which is what they've been doing since the PS3.)


This is used for homebrew. There is (of course) a WebKit exploit on some firmwares and there is a way to get arbitrary page load by going into the help section then bouncing around in several Google help pages until you get to a search box, search for the exploit page and click it.


wow, mind blown at this ps5 "detail" yuk


My school (where I work) seems to be blocking certain SSL certificates, or blocking at the DNS level, but changing the DNS server to 1.1.1.1 is futile. Using Mullvad's free DoH or DoT is futile as well; the website is unable to load.

The VPN is one of the few reasons why I pay $5 for proton unlimited and I could make do without it.

My only alternative is to set up my own WireGuard exit node at home with a Raspberry Pi.
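For anyone weighing the same option: the server side of such an exit node boils down to a short config. This is only a sketch, not a hardened setup; the subnet, port, and `eth0` interface name are assumptions, and the key placeholders must be replaced with keys generated via `wg genkey`/`wg pubkey`:

```ini
# /etc/wireguard/wg0.conf on the Raspberry Pi at home (illustrative values)
[Interface]
Address = 10.0.0.1/24          ; VPN subnet, chosen arbitrarily here
ListenPort = 51820             ; default WireGuard port; forward it on the router
PrivateKey = <server-private-key>
; NAT tunnel traffic out through the Pi's LAN interface (assumed to be eth0)
PostUp = iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE

[Peer]
; The device behind the school's filter
PublicKey = <client-public-key>
AllowedIPs = 10.0.0.2/32
```

On the client side, setting `AllowedIPs = 0.0.0.0/0` and a `DNS =` entry in its `[Interface]` section routes all traffic, including DNS lookups, through the tunnel, which sidesteps both DNS-level and SNI-based filtering.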


Your school is likely using an NGFW like a Palo Alto. These provide very granular, application-based filtering of traffic.


Back in my day, I'd go to the library and telnet into various edu hosts. Those servers often showed a motd piped into more before dropping you into whatever software (eg lynx). However, they didn't use "restricted mode", which meant I could open a shell by typing "!sh". Good times.


This is similar to how I used the internet in the early 90s as well, but instead of the library I was lucky enough to have an older cousin who got me the University’s dialup number to some VAX system gateway that was supposed to do something or other but probably not let me telnet anywhere without signing in.

I had to hunt for servers that had gopher/lynx on them which didn’t have any login. I vaguely remember having to do something similar to this post to navigate somewhere that would allow me to bring up search or enter a URL.

This was all done on a C64 and a 1200 baud modem, along with this incredible software I was able to find on local BBSes called NovaTerm - an 80 column terminal on a Commodore 64 was black magic at the time. That disk was my prized possession and I almost wore it out.


I remember in university, our computers in the rec center had restrictions like this. Our email could be opened via the web, so I would email myself a link to a search engine, open the email, right-click, and open in the same frame. At that point you could go anywhere you could find a search result for. Good times.


> I would find one of these computers and click around until I got a Best Viewed In badge. (It was often on the Help page.) That would bypass the restrictions of the single-site browser and take you to e.g. netscape.com, where you could find a link to a search engine and browse wherever you wanted.

I’m not familiar with a “Best Viewed In” badge and I am intrigued as to how this bypass worked (based on my own 2000s high school experiences of working around restrictions). Would appreciate if any passersby could elaborate.


It linked to the browser’s website. The only actual restriction on these computers was usually that you couldn’t access the address bar to type in arbitrary URLs.


I don't remember the specific restrictions, but I remember that you could only click links that were on the allowed site.

So if you tried to go to example.com, you would be redirected to bank.com. But if a link on bank.com got you to example.com, you could visit it that way.

Obviously address bars are one of the restricted things. I feel like there might have been others, but I don't recall the specifics.
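The restriction scheme described above amounts to something like this (a hypothetical sketch of the logic, not any kiosk product's actual code):

```python
# Sketch of a single-site kiosk browser's navigation policy: typed/direct
# URLs get forced back to the home site, but link clicks are trusted
# unconditionally. One outbound link (e.g. a "Best Viewed In" badge)
# therefore escapes the sandbox entirely.

HOME_SITE = "bank.com"  # the single site the kiosk is meant to show

def handle_navigation(dest: str, via_link: bool) -> str:
    """Return the host the browser actually loads."""
    if via_link:
        # The flaw: a clicked link is allowed no matter where it points.
        return dest
    # Typed/direct navigation is redirected back to the allowed site.
    return HOME_SITE

# Typing example.com into the (hidden) address bar goes nowhere...
assert handle_navigation("example.com", via_link=False) == "bank.com"
# ...but following a link from the home site to example.com works.
assert handle_navigation("example.com", via_link=True) == "example.com"
```

Once the first off-site page loads, every subsequent click is also "via a link", so the whole web opens up from there.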


These sites just didn’t link to any other site that had anything cool. And you couldn’t type in your own link obviously.

Except when they slapped on that badge, it linked to netscape.com which had a search engine.


If I remember correctly, these were href images used to recommend a specific browser for the best experience.


Indeed. They were small images (https://old.reddit.com/r/nostalgia/comments/iv8hgg/the_netsc...) at the bottom of webpages linking you to netscape.com/microsoft.com so you could download the browser “recommended” (or targeted) by the page creator. These icons were usually a standard size (88x31 px?).

Wikipedia explains the context in which these were born a bit more: https://en.m.wikipedia.org/wiki/Browser_wars


I remember when the Windows NT login screen had this feature. Also found through help.


As a current student, I can confirm we're still doing this stuff to bypass the network filter.


What's the chance the computer was also on the bank's internal LAN?


Shhh hehe. This still works somewheres


Any computer back in the day:

10 print "hello world"
20 goto 10

Owned ;)


Google's parental controls leave so much to be desired. There is a long-running request for the ability to disable the Play Store app. Still, this is not possible without using adb (which is not a good solution; it leads to other problems).

It feels like no real kids are testing the parental controls: for a long time it was trivially easy to circumvent a set YouTube time limit by just opening the Play Store, browsing to an app with a video in its screenshot list, and heading over to YouTube from there. My son actually showed me this when he discovered it.


Google parental controls are basically abandonware: they function poorly and awkwardly, and don't integrate well with other products/services such as Google Home and Google TV. I have a dense 5-page document I wrote up detailing all of this, and I have nobody to send it to.

Most of my frustrations come from the challenge of having 2 older (not toddler) kids, plus multiple Google devices (phones, tablets, Google TVs, PCs signed into Google Accounts). Google imagines parental control in the context of supervision, i.e. this is Billy's phone and I'm going to physically hand it to him to use until he's done. And it's fundamentally device-level rather than account-level, making it very cumbersome and easily circumvented -- let's say Jill has access via her account to 3 different Google TVs in the house. Family Link makes you say how much time she's allowed to spend on each TV per day. But to Jill, TV is TV, so if you leave her home alone unsupervised she'll just watch her quota on the first TV and then move on to the next.

My prevailing theories, mind you I have no evidence at all for this:

- These disparate product teams don't actually work closely together (and are probably incentivized to NOT work together)

- These are dead-end teams at Google. If you end up on one, your goal is to nominally ship something so you can go somewhere else.

- The product and engineering people who end up on the Family Link team don't actually have kids (they're too young), or if they do, they have like, one young kid.


Google itself is abandonware at this point. All of their services have bugs every single day. UI feels unpolished. And God forbid you have to talk to support. Some dude in India who doesn't understand English responds through email and keeps repeating the same script.


Even Search, the rumors are true, has really gotten worse. After hearing about it here, I did a Kagi trial and didn't hesitate to buy a year of unlimited searches when it was up.

It's hard to quantify the difference but Kagi really feels like it's working with me instead of just trying to sell me garbage.


Remember "I'm Feeling Lucky"?

It was absolutely bananas that a search engine could be SO GOOD the first result would probably be the right one.

Now the first link (and second and third) are never the right one. Usually something different entirely.


First link? You are lucky if the first page has the result you want.


This is hyperbole. Search has 100% gotten so much worse. But 99% of the time, even for technical questions, I can find the correct result on the first page. If I don't, for example in cases where the names clash like the word atlas that has a ton of results, I can add a word of additional context and get the correct result


The fact they are planning to drop the cache feature makes you wonder if Search might be heading for the Google Graveyard eventually.

I know it seemed unthinkable for most of Google's history. For the longest time Search was Google. But now...

We must consider the fact that Google is essentially a user data broker. All their products are in various proportions collecting user data and/or serving ads.

If Search quality and features keep reducing then its usage and relevance will drop too, reducing its effectiveness at both data collecting and ad serving.

I don't think Google's DNA allows them to consider radical alternatives. They were born in an era when data automation was in its infancy and they rode it all the way to the top. They abhor the human touch, the manual intervention. Their services are set to function automatically, set and forget.

But AI is changing the automation landscape, it's bringing a transformative paradigm shift. It's irresistible bait for Google but it will ruin Search. They'll deal with it the only way they know how, drop it.


For comparison, and as a rant to substantially agree with you....

Microsoft has e.g. actual child Hotmail accounts, where the parent can whitelist who the child can email and who can contact them. Gmail does not, and has no intention of adding such a thing. They did eventually add child accounts as part of the Family Link effort, but there are really no controls at that level. I recall seeing internally at Google that instead of adding such facilities to Gmail, they preferred to a) cover their eyes and pretend that <13-year-olds didn't have accounts, by tossing that into the agreement and asking for a birthday, and b) propose some blue-sky alternative communication system for children (I forget the name of this effort, but it was I think in the 2016ish time frame). It had mock-ups and hand-waving and big discussions and PRDs etc., but was guaranteed to go nowhere because it was a giant vision parallel to, y'know... actual reality...

Microsoft also has global time limits across all devices. And far more granular control.

Anyways, you're substantially right about all this. It drove me nuts that we struggled to deal with access to harmful content and had no control over it, and the worst part of it was working at Google at the time and seeing just how not--seriously this is taken or at least the level of organizational paralysis that was preventing action.

Apple's system isn't much better than Google's FWIW.


My kid (under 13 y.o.) and all her friends know that when you sign up for some online service, you always need to give a birthday such that your age is 13+. Many services are totally nerfed to the point of uselessness if you say you are under 13, and/or won't even let you sign up. Also, some companies blacklist your E-mail address if you ever say you are under 13, so you can't try to sign up as a 10 year old, realize your mistake, and then try to re-sign up with the same E-mail as a "14 year old". Consequently, circumvention techniques get around pretty quickly among the pre-teen circles.


Microsoft’s family thing is better but still not good. Simple things like adding more time or allowing purchases or adding funds with gift cards routinely fail with meaningless errors along the lines of “failed”. Or worse, a success message but there was a failure and the action doesn’t actually take effect.

The moral here is all parental controls are crap because they don’t directly drive revenue. (Yes, you can say “but I would prefer to use a service with better controls, which drives adoption”, but let’s be honest - we are a minority there). Nobody’s getting promoted for making the best parental control suite.


Sure, and for ad-revenue driven companies like Google and Meta having good parental controls actually harms their business instead of improving on it. The incentives are all wrong.

That, and parents only have kids in the age range where this is relevant for maybe 4, 5, 6 years. So you're targeting a feature not only at a small segment, but at one that is transient.

And if you screw it up, there's all sorts of potential for liability. You have to be careful about what you promise, etc. etc.

All the more reason why the answer probably comes down to: gov't regulation instead of expecting them to do this voluntarily.


Microsoft just nuked their shared global time limits across devices. They force you to split the time up now.


Sigh.

Well, I was thinking of buying an XBox but now...


At least with Family Link I can control everything on my kid's Android tablet from the web or my (Android) phone. My other kid has an iPad and there's nothing I can do remotely; I have to mess about with Screen Time codes on the device, and it's a nightmare. What does Apple expect, that I'm going to buy an iPhone just to have parental controls on my kid's iPad? Why can't I do anything from icloud.com or something?


Yes, they do expect that. That's Apple's MO.


1 and 3. The quiet detente at Google is that product won't be too ambitious if engineering doesn't go out of its way to do anything:

90% of the time product lays out a minimal rushed vision, engineering huffs and puffs that it might be impossible, then people work about 20-30 hours a week complaining that the designers didn't tell them exactly what to do and that the teams they need to integrate with won't help, and you deliver 80-90% of the original minimal "vision" and slap each other on the back.

And that was _before_: A) they spent 18 months firing people, while some managers took advantage of that situation to punch down, and B) they nuked the performance review system, so 80% get exactly the same "Significant Impact" rating, another 10-15% get scarlet letters, and 5-10% get rewards.

Any deviation from that and someone perceives you as being on their turf and finds a way to punch down.

And good luck getting management to care, just like the real world, no one wants to get within 100 feet of trouble.

Then you're faced with the invitation to appeal to a VP, a coin flip where you have to guess at if they're going to back you, and even if they do, facing the fact you nuked your career anyway because you broke omerta.


They probably realize it is hopeless to try to lock motivated kids out of their devices. And it might be a nostalgic memory for them, maybe they aren’t too motivated to implement this stuff. Most of us had full control of our devices growing up, right?


Nostalgia time.

> Most of us had full control of our devices growing up, right?

Devices, yes. Internet connection: not at first. With the AOL app (not a separate appliance or OS feature) responsible for establishing the dialup connection, it only bridged internet access to the OS when the current AOL user had no parental controls. As a 10-14 year old whose AOL account was set to "young teen" (DNS allow list) and then "mature teen" (DNS deny list), I was free to use non-AOL apps but they had no internet access. Solution: download a keylogger* and subsequently use a parent's AOL account for the next few years until they removed controls from mine, giving full Internet access to the whole computer, without being found out.

*The free version had a "pay for this" nag popup every few minutes. I opened the exe in a hex editor, typed over that nag string, and managed to corrupt it just enough that it would crash (with a totally generic fatal error) instead of nag. Launched it right before finding mom or dad to help me do some safe but blocked activity, which they were always happy to do, with increased supervision.


> Most of us had full control of our devices growing up, right?

I dunno about you but we had one computer, one TV, and one phone line. If by "full control" you mean "constant negotiations" then yeah we had that. By the time I was able to drive and earn enough coin to build my own computer, I was practically an adult.


Locking kids out of a device is what Family Link excels at. You go in, you select the kid, you select the device, you click Lock.

Here are things you cannot do:

- Create a global screen time limit across all devices regardless of how that time is used.

- Create a PIN for Google TV that prevents kids from switching to an adult account in order to access more apps. Right now it's the other way around, you can set a PIN that prevents a kid from accessing their own account (again, because supervision!) but that kid can easily switch over to the adult account if they want.

- Require that an idle Google TV exit to the 'user selection' screen after a timeout. Right now a kid can just walk up to the TV and start using my profile and apps because I was the last one to use it.

As for nostalgia, I think that's overthinking it.


No, parental controls don’t drive revenue so they get abandoned. No manager type is getting promoted for creating an amazing parental control product.


I don't see why they wouldn't, with the right marketing. Surely there's a large market of parents desiring locked-down devices for their children? It shouldn't be a hard sale.


Parental controls and enterprise management/MDM are the same product. That's important to the kind of people who'll buy hundreds of your devices.

(Although they do have different fine grained features, like time limits and web site approval specifically aren't enterprise things.)


> Most of us had full control of our devices growing up, right?

Yes, but the devices could do a lot less.

Like, I had full control over my bike too. But not an automobile.


> it is hopeless to try to lock motivated kids out of their devices

What? Apple's successfully made it very hard for adults to get into their own devices. This claim is absurd.


Once the dust from the YouTube Ad-pocalypse settled and advertisers started spending money again, I feel like Google lost interest in a lot of child safety stuff.


That must be when Google realized that child safety directly hurts its bottom line.


Can you post the doc?


Yeah I can do that. Need a couple days and I'll get it to you.


Post it publicly, and maybe even submit it to HN.


>It feels like no real kids are testing the parental controls

I feel the same can be said about the accessibility service: once an app gets the accessibility permission, it has FULL control over the user's device. They could split those permissions and expose a more fine-grained control API, but they (I suspect) have some single, very extreme use case in mind and design the service around it (e.g. the phone user being completely blind and requiring the accessibility app to be an interface for literally all interactions with the device).

Which means that whenever you want to use some feature of that API, you have to trust an app completely and give it carte blanche to do whatever it wants on your device.

Which ultimately leads to a gigantic hole in platform security, for no other reason than 'this is the way and these are the scenarios we intend people to use it in, and we offer no compromises for anyone who has any other usage in mind'.


Parental controls are a strange beast. In general, they stand zero chance against even a mildly interested kid, unless you're going to lock them up in a basement to isolate them entirely from their peer group. Those controls work best as a soft limit - strong enough that going around them would be clear, unambiguous disobedience. After all, they're parental controls, not NSA-proof security. Making them technically bulletproof would arguably be worse for everyone.


> Those controls work best as a soft limit - strong enough that going around them would be clear, unambiguous disobedience.

Which could very likely go undetected, therefore unpunished. It's not like it's a family-room computer that's easily monitored.

> After all, they're parental controls, not NSA-proof security. Making them technically bulletproof would arguably be worse for everyone.

It sounds like they're about as bulletproof as a screen door. It would be much better to have them as strong as a locked exterior door: maybe not "NSA-proof" (the door is vulnerable to locksmiths and battering rams), but strong enough to keep a kid out.


I think you're describing the point of view of the phone makers. Parents I've interacted with are in a whole other world. If you limit YouTube, you're limiting YouTube. There should be no caveats.


I'm a parent myself. But I also keep in mind that I owe my career and life comfort mostly to my parents being clueless about technology and too busy to supervise my use of computers since I was 9 or so.


I agree with you, yet I can set up a sensibly controlled Linux notebook, but I cannot do that with iOS, macOS, or Windows.

I think that tells me something about the actual priorities of the people building all those systems.


I don't think many people really use parental controls on Android or iOS. It's a feature that's there to make consumers feel safer, but anyone who tries to actually use it is going to quickly give up.

Small example: in iOS 'Screen Time' you can restrict websites to a whitelist, which seems useful. But so many things break if you do that - all kinds of login screens for different apps - and you aren't given any clues as to which URLs need to be whitelisted to un-break things.

Sometimes with modern tech you're using a feature and you think "this is incredibly complicated and broken, there can't be many people actually using this" and I tend to get that feeling with parental controls.


Most of the parents I know with kids do use iOS Screen Time. And yes, it's super frustrating.


I was using parental controls; however, I have stopped using them.

In the end it is a surrogate for a parent. Either you care about your child and you know what they are doing, or not.

If you think that your child would be vulnerable to anything on the web, then most likely you should not give the phone to your kid.

If the kid is old enough to understand things, then they don't require software parental controls, but a parent. A good parent does not need parental controls in their children's apps.

Parental controls also disable the ability to install apps from other sources, and I prefer F-Droid apps over Play Store apps.

The last thing is that it teaches that we are controlled by some software company, and 'kept safe from harm'. It gives that illusion. It trains that illusion. It enforces it.


True, but maybe a little idealistic?

Every parent can use a little help. There's so much that your child sees and hears, you want to be there to help explain it to them when they have questions.

Hand them a device that shows anything happening anywhere in the world? Maybe a little help there, limiting what they can easily stumble upon, is a good thing.


Raising a child is difficult. Children, as people, are different. My children do not require it. It is therefore not idealistic.

I do not say that everybody can now safely remove their safeguards.


On iOS, the ability to prevent (say) the use of social media after 10pm is very useful. What would you, as someone who "cares about your child" do instead?


> On iOS, the ability to prevent (say) the use of social media after 10pm is very useful. What would you, as someone who "cares about your child" do instead?

Have your child hand you their phone at 10PM before they go to bed? Why do they need a phone on their person 24/7?


Because they charge it in their room?

Because they use it as an alarm?

Because you have a late date night and won't be there in person?


In the end, it's your choice as a parent whether this is "good parenting" or not. The aforementioned roadblocks ("gee dad, it's my alarm clock! insurmountable obstacle! guess I keep the phone, sucker!") are not good-faith arguments.


It's not insurmountable.

There are solutions.

E.g. parental controls.


E.g. you charge it in your room and give them an analog alarm clock.

Or "you need to put the phone down at 10pm, if I come home and see you on it you will be grounded"

Why must we use technology to enforce parenting choices?


Technology isn't a requirement.

Humans predate computers.

But technology can come in handy, don't you think?


Yep, I do think that. Does everything have to rely on it though?

(also, we're in a thread complaining about how technology doesn't properly support parental controls)


In my case, my child likes to listen to various Podcasts of music and it helps them get to sleep. "Hand over your phone" feels draconian, not caring. My


> If you think that your child would be vulnerable to anything on the web, then most likely you should not give the phone to your kid.

phone has many utilities I want kids to use: make calls, check mail, maps, weather etc.

The issue is that they are using it for secretly watching tiktok for example.


I have yet to find a really good electronic parental control system. Having worked with the Apple, Windows, and Sony systems, they all suck.

The Apple ones seem to have a hundred holes kids can exploit to extend screen time or download apps, and sometimes it takes a while for a change to take effect. Windows was completely broken last time I checked on my son's gaming machine. And the Sony PlayStation - oh, so so painful.

So it isn’t just Android. It’s everyone.


If Google cared in the least for kids they would scrub all of those games that are predatory, that introduce gambling-addiction mechanics, that use annoying and confusing in-game ads, and that act as a gateway to older, even more addiction-focused apps. Notice I didn't even mention all of the information hoovering.

And of course the Play Store is desperate for you to provide a credit card at every single opportunity, so as to maximize the potential for kids to make accidental purchases.

It is a complete scam.

I honestly don't know how television got such strict laws and regulations on children's programming when compared to the complete wild west that is the modern app store.


The sheer fact that I can't differentiate between "Has ads, and you can pay to get rid of them" and "Has 15 different currencies that make the game no fun unless you pay a fortune" in the Play Store is proof that Google doesn't want to promote good business practices.


“Television” doesn’t have strict laws and restrictions.

Over the air broadcasts do. The broadcast spectrum is considered publicly owned and is leased to television operators.

I guess you could say the same about the cellular spectrum. But how deep do you want government regulation to go since Google operates over the internet? Do you really want the government controlling internet content “for the children”?

And if they regulate app stores, especially on Android, do they also regulate what you can distribute from your own website?


I see your point to some degree, but an app store is not the open internet; there is no freedom, nor any assumption of freedom. It is curated, strictly and in anticompetitive terms, by both Google and Apple.

So yes, I do expect strict child regulation in it, especially since there isn't the open internet issue of a) who pays for it and b) who is the central regulatory nexus point. It is google/apple in both cases.


And how many of those apps could just be websites? Do you then regulate websites?

If the US does something like the EU DMA, do you regulate all app stores for content?

Exactly how do you do either in a way that the government doesn't come in and regulate content that they don't like?


As a parent far on the liberal spectrum I am just fine with proper functioning government regulation, and the job they did in the 70s/80s/90s was perfectly fine.

You are obviously arguing for zero regulation, or you think you are, in the libertarian sense. I always like to ask libertarians if they think murder should be illegal, which they usually do. Well, guess what: you are in favor of government regulation; it simply comes down to what line in the sand you are drawing, one that is obviously convenient to your own self-interest.


Not sure who is downvoting you, but you're absolutely right.

Just like Meta/Instagram, they're paying lip service to the concept, but not really taking action.

Frustratingly, out of all the platforms & BigCorps, Microsoft's parental controls and support for child accounts seems the best.

For many parents this might be no big deal. But there are genuinely children who've ventured into self-harm, eating disorder, etc. content on account of the wild-westness of the Internet combined with weakness of this crap. And it's absolutely maddening to see how pathetic they all (including Apple) are treating this.


> I honestly don't know how television got such strict laws and regulations on children's programming, when viewed in comparison to the complete wild west, that is the modern app store.

With time and pressure.

Right now you have a fun new technology which people are still infatuated with, bought by one of the biggest companies to ever exist, in a country which openly permits business-to-politician payments through lobbying.

Fifty years from now, the wild west won't look anything like it does today.


No, it's because television was regulated as a tradeoff for using the limited resource of airwaves, but there isn't a limited resource of internet connections to game servers.

> in a country which openly permits business-to-politician payments through lobbying.

It's actually amazing how good of a tell this is. Nobody who says this ever knows anything about politics. I'm sorry, but politicians genuinely believe in most of the stuff they do that you don't like, and so do their voters.


If you believe an entity has the ability to use funds to influence public policy, we agree.

If you don't, you could let them know and save them billions of USD per year.


Do you have evidence for this claim?


The claim that an entity has the ability to use funds to influence public policy?


That they spend billions on it.

But yes, that too, because even when they try it doesn't work.

Here's today's election results in SC where the billionaire-backed* candidate got 1.4%.

https://x.com/armanddoma/status/1753947901972418952

* morally, not financially, since you can't really do that unlike what people think


It's not a secret that mega-companies spend huge amounts on lobbying to influence politics.

E.g. Amazon in 2023, 19.8M USD: https://www.opensecrets.org/federal-lobbying/top-spenders

It's not like America is alone in this; they just have gigantic dollar figures. Australia is a country which really struggles to move away from fossil fuels, and it also has gigantic coal-mining companies paying huge amounts to keep it that way.

https://www.bbc.com/news/world-europe-16797862

https://www.cnbc.com/2023/01/23/apple-ramped-up-lobbying-spe...

https://www.theguardian.com/environment/2023/dec/09/big-meat...


> It's not a secret that mega-companies spend huge amounts on lobbying to influence politics

Companies also spend billions on marketing. But there's no reason to believe either of these things actually /work/. And lobbying is not giving money to politicians.


In my experience (granted, I haven't been a kid in a long time), all parental controls leave much to be desired. When I was a kid I never once encountered something I couldn't eventually get around (at home and at school). Obviously, I wasn't a "typical" kid: I LOVED breaking through these types of things and eventually faced potential expulsion for this love. But my point is that these things are typically half-baked, and security is only ever as strong as the weakest link. The world's strongest door doesn't matter if it's next to a glass window.


I bought my son a used iPhone SE specifically for parental controls.

I have a legacy Google Workspace account that all my family is on, and now that it's considered a "business" product, I can't enable parental controls on my kids' Google accounts.


Amazon's Fire tablet allows you to block apps like YouTube with a password (or completely hide them), but I haven't ever tried to "hack" them, so I don't know how effective they are.


My kid has an Amazon kids Fire tablet. It allows parents to prevent specific apps from appearing on the home screen, and it seems to work fine, but it doesn't allow any kind of keyword blocking. For example, I have blocked a bunch of apps/videos/books by the YouTuber 'Blippi', but he releases new stuff all the time, and the new ones often pop up on the home screen. If I could preemptively block everything containing 'blippi' (or any other keywords) my life would be easier.

Regarding YouTube, the YouTube Kids app lets me block videos and whole channels, and it's great. But there is no equivalent functionality in the main YouTube app. On various devices (e.g., Roku, Google Nest) the kid sometimes manages to find some YouTube channels I'd rather they didn't watch. But I have to manually intervene each time, I can't just block those channels from being watched or recommended in the future. YouTube/Google obviously know that it's valuable to block videos/channels, since that feature exists in the kids app, yet they omit the feature from the main app, even for a paying customer like me. It is obnoxious.


"It feels like no real kids are testing the parental controls"

Or... hear me out... they don't really want adequate controls to be put in place in the first place?

And, yeah, I have many many beefs to pick with Family Link.


This is some Windows 98 login screen bypass hack trick.

https://i.imgur.com/BULPmCI.gif

Honestly, I would have never expected Google to become Microsoft Windows 98 level bad at designing their systems.


Such a great exploit because it doesn’t require you to know about arcane stuff like buffer overflows. Even a casual user could follow the process. So much of security seems to be just minimizing the attack surface so you have less to think about. Why does someone need to be able to print a tooltip in the sign in dialog? It’s absurd. Once you involve printing, you are letting in all kinds of third party stuff that isn’t secure at all. Even if you want to permit printing of tooltips or help in general, they should have had a “secure context” where such features are disabled. Similar stuff in PDFs too, where 99% of the use of random features in PDFs like 3D models or scripting was for security exploits. Keep it simple by default and avoid that stuff!


> Why does someone need to be able to print a tooltip in the sign in dialog? It’s absurd.

I doubt that it was intentional. Although careful when deploying systems, I often have a feeling that we must have forgotten something that is trivially exploitable by someone. I wonder if there are provably secure systems in use somewhere...


> provably secure systems in use somewhere...

There are. Check out seL4 and dependent type systems.


Yes, I've heard of those. What I wanted to write is "widely used" even at the user level (GUI programs, web applications, etc.).

You prompted me to read more about seL4; the white paper is nice [1].

[1] https://sel4.systems/About/seL4-whitepaper.pdf


This doesn't bypass the lock screen...


This lets you run any app. And it works in any window that has the context help icon in the titlebar, including the "Welcome to Windows" lock screen.


No, the Android "secret browser" doesn't bypass the lock screen.


Sorry, I misunderstood your comment - I thought you were referring to the Win9x lock screen.


Reminds me of this classic gem: https://imgur.com/BULPmCI?r


Exactly what I thought of. I've used a technique similar to the OP for bypassing FRP on a Pixel 2 that I bought used on Craigslist. Also provoked a similar thought of how my entire life was set in motion staring at this screen for hours as a bored little kid, finally breaking in and experiencing my first "hit".


This takes me back to my own first experiences 'hacking' and tinkering with the guts of a computer system, which put me on the path to a career in IT and engineering:

- Breaking the family computer with a trojan pirating Halo PC, which I then had to figure out how to fix before my dad got home

- Circumventing the NetNanny, etc parental controls my parents randomly decided to install on our personal computers several years after us kids had already been using the internet (edit: okay, there may have been a letter from Comcast re: the above sloppy piracy). Restoring my netbook to useful functionality without leaving a trace of modification introduced me to Linux Live CDs, and Linux!

Good to know tomorrow's hackers are still getting that education today!


Similarly: I remember spending quite a lot of time exploring [National Capital FreeNet](https://www.ncf.ca/en/)'s Gopher pages in an attempt to find a link-to-a-link-to-a-page that would let me make arbitrary telnet connections, thereby bypassing their efforts to ensure that their free dialup service was only used to access their own services, rather than, say, to play Nethack on a public server at tamu.edu (whose exact domain name I can, alas, no longer recall).


Circumventing NetNanny definitely forced me to level up my computer skills. Classic.


Related. Others?

Google has a secret browser hidden inside the settings - https://news.ycombinator.com/item?id=36478206 - June 2023 (312 comments)


I read that Google Play Services can even grant itself new permissions[1]. How does that work? Does it have root?

[1]https://developers.google.com/android/guides/permissions


> Does it have root?

Not really, but it is a privileged system app, which pretty much means it can do a boatload of things that installed apps cannot do without root.


System (or vendor) apps also have to predeclare permissions in their manifest, so not every system app can do everything. But the list of permissions accessible to those apps is so broad that they can effectively have root, as long as you declare enough of them as the developer.
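For readers unfamiliar with Android packaging, here is a minimal, hypothetical sketch of what that manifest predeclaration looks like (the package name is made up; INSTALL_PACKAGES is a real permission with protectionLevel signature|privileged, so ordinary sideloaded apps can never hold it):

```xml
<!-- Hypothetical priv-app manifest sketch. Being installed under
     /system/priv-app only makes privileged permissions *grantable*;
     the app must still list each one it wants. -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.example.privapp">
    <!-- Granted only to platform-signed or priv-app packages. -->
    <uses-permission android:name="android.permission.INSTALL_PACKAGES" />
</manifest>
```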


Yes. (It's not really the 'root' user, but it is blindly trusted and can do things such as installing apps without user confirmation.) In my other blog post about GMS, the JS bridges would be running in the privileged scope.

You agreed to this in Google's privacy policy when installing Android.


Luckily it can be installed sandboxed and not privileged. (E.g. with GrapheneOS).


This is called a back door, not to mention it can already install (and uninstall!) apps without your permission. And yes, they have already gotten in trouble for it in the past, but not enough happened to them.


GrapheneOS with sandboxed Play Services seems unaffected.

The Phone app doesn't seem to be able to view or open web URLs in contacts, and the Contacts app fails to open the two URLs given in the article in pinned mode.


In case anyone is interested in an earlier discussion from 2023 (312 comments): https://news.ycombinator.com/item?id=36478206


I expected Google Ultron


My first thought too!


Surprised I had to scroll this far to see this


Didn't work for me. Has it been fixed? Or did I do something wrong, or does it behave differently on different Android phones?


Same here. Perhaps Samsung did something in my case?


I'm on Lineage with standard Google services, but I couldn't get the 'Learn More' link.


Me neither. OnePlus Nord N30, Android 13. Last security update 1/5/24.


I have a faint memory of having seen an HN article about this hidden browser before.

In any case, the Google response you've seen shows how messed up the company is. Google has become the Microsoft of the 90s.


Eh, calling an embedded web-view a 'secret Google browser' smells a bit clickbaity.

In situations where it bypasses things like parental controls it's fair to bring it up as an issue, but it's not exactly a 'vulnerability' in the way a vulnerability is commonly understood.


Perhaps not, unless there is a security vulnerability in the web-view itself. I think it shows that there's a problem with the usage and implementation of the web-view and its permissions.

I can see why Google wouldn't want to apply the permissions and parental controls from the browser to the web-view: that would break a bunch of stuff, and it would be hard to explain to the user that a link in the Contacts app doesn't work because Chrome is locked down. Others would argue that is exactly what they expect to happen.

In this case I fail to see why Contacts embeds its own web-view rather than just asking the browser to open the link. Not every app needs a web-view.
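For what it's worth, handing the link off to the default browser is a one-liner on Android. A minimal sketch (the function and variable names are made up for illustration; this has to run inside an Activity or with an activity context):

```kotlin
import android.app.Activity
import android.content.Intent
import android.net.Uri

// Hypothetical sketch: send the URL to whatever browser the user (or a
// parental-control policy) has configured as the default, instead of
// rendering it in an app-embedded WebView.
fun openInBrowser(activity: Activity, contactUrl: String) {
    val intent = Intent(Intent.ACTION_VIEW, Uri.parse(contactUrl))
    activity.startActivity(intent)
}
```

With this approach, browser-level restrictions (parental controls, enterprise policy) still apply, which is exactly the property the embedded web-view loses.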


Android's WebView is Chrome, process sandboxing and all. Unless the Contacts app injects a JS handler, of which we have no evidence, it's no less secure than Chrome is.


This is how the current PS5 jailbreak works, IIRC.


The pinning feature looks like something other applications could assume is safe when it actually isn't, which is a classic recipe for a vulnerability. Worth investigating, at least.


I remember, in my teens, using something similar to bypass the lock on an old phone a colleague had forgotten the password to. It involved downloading an APK from some shady site with this "in-built browser" that did something to unlock the phone, then factory-resetting it.


I'm pretty sure these parental control holes are left on purpose to train the next generation of enterprising hackers.


If everyone here knows about it, is it still secret?


What's the consequence of this? Kids can bypass parental controls? Just making sure I understand.


Wasn't Google's slogan "Don't be evil" years ago?


"Never attribute to malice that which can be explained by incompetence."


folly is the cloak of knavery


toad pond


Is this the same Google that pours millions of dollars into its Project Zero securi-tainment blog where they specifically use hamfisted disclosure policies to discredit competing products?

Oh, well color me shocked!

