
This is cool, but honestly I wish it were built-in language syntax rather than a magic comment; magic comments are kind of ugly. Maybe some day…

(I realise there are some architectural issues with making it built-in syntax: magic comments are easier for external tools to parse, whereas the Python core has very limited knowledge of packaging and dependencies… still, one of these days…)



It IS built-in language syntax. It's defined in the PEP; that's built-in. It's syntax:

"Any Python script may have top-level comment blocks that MUST start with the line # /// TYPE where TYPE determines how to process the content. That is: a single #, followed by a single space, followed by three forward slashes, followed by a single space, followed by the type of metadata. Block MUST end with the line # ///. That is: a single #, followed by a single space, followed by three forward slashes. The TYPE MUST only consist of ASCII letters, numbers and hyphens."

That's the syntax.

Built-in language syntax.
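
Concretely, a block of the `script` TYPE defined in PEP 723 looks something like this (the body between the markers is TOML):

    # /// script
    # requires-python = ">=3.11"
    # dependencies = [
    #   "requests<3",
    #   "rich",
    # ]
    # ///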


Suppose I have such a begin block without the correct corresponding end block - will Python itself give me a syntax error, or will it just ignore it?

It might be “built-in syntax” from a specification viewpoint, but does CPython itself know anything about it? Does CPython’s parser reject scripts which violate this specification?

And even if CPython does know about it (or comes to do so in the future), the fact that it looks like a comment makes its status as “built-in syntax” non-obvious to the uninitiated.


Semantic whitespace was bad enough, now we have semantic comment blocks?

I'm mostly joking, but normally when people say language syntax they mean something outside a comment block.


Shebang lines are semantic comments too


Not necessarily: while many languages which accept shebangs use hash as a line comment introducer, others don’t, yet will still accept a shebang, though only at the start of a file. E.g. some Lisp interpreters will ignore the first line of a source file if it starts with #!, but otherwise don’t accept # as comment syntax.


I am, while greatly valuing uv, also with the part of the audience wishing this capability were a language feature.


The PEP explains why it isn't part of the regular Python syntax.

uv and other tools would be forced to implement a full Python parser, and they would need to update that parser whenever the language changes.

This approach doesn't have that problem.

Making it a "language feature" has no upside and lots of downside. As the PEP explains.
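
To illustrate, a tool can extract the metadata with a simple line scan plus a TOML parse, with no Python grammar involved. A rough sketch (the PEP specifies a reference regex for this; the helper name below is made up):

    import tomllib  # standard library TOML parser, Python 3.11+

    def read_script_metadata(source: str, block_type: str = "script"):
        """Extract a '# /// TYPE ... # ///' comment block and parse its body as TOML."""
        lines = iter(source.splitlines())
        for line in lines:
            if line.strip() == f"# /// {block_type}":
                body = []
                for inner in lines:
                    if inner.strip() == "# ///":
                        return tomllib.loads("\n".join(body))
                    # Drop the leading '# ' (or a bare '#') from each block line.
                    body.append(inner[2:] if inner.startswith("# ") else inner[1:])
                raise ValueError("unterminated metadata block")
        return None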


Beyond that, the needed name to download from PyPI doesn't necessarily have anything at all to do with the name used for an `import` statement. And a given PyPI download may satisfy multiple `import` statements. And it can even be possible to require a download from PyPI that doesn't install any importable code (it may legally be a meta-package that runs one-shot configuration code when "built from source").
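
Some familiar examples of that mismatch (assuming those distributions are installed):

    import PIL       # installed as "Pillow"
    import bs4       # installed as "beautifulsoup4"
    import sklearn   # installed as "scikit-learn"
    import dateutil  # installed as "python-dateutil"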


> Beyond that, the needed name to download from PyPI doesn't necessarily have anything at all to do with the name used for an `import` statement. And a given PyPI download may satisfy multiple `import` statements.

I think this is a design issue with PyPI though. It really should have some kind of index which goes from module names to packages which provide that module. (Maybe it already does but I don't know about it?)

Of course, that doesn't help if multiple packages provide the same module; but then, if there were a process to reserve a module name – either one which no package is currently using, or, if it is currently used only by a single package, by giving the owner of that package ownership of the module – the module name owner could bless a single package as the primary package for that module name.

Once that were done, it would be possible to implement the feature where "import X", if X can't be found locally, finds the primary package on PyPI which provides module X, installs it into the current virtualenv, and then loads it.

Obviously it shouldn't do this by default... maybe something like "from __future__ import auto_install" to enable it. And CPython might say the virtualenv configuration needs to nominate an external package installer (pip, pipx, poetry, uv, whatever) so CPython knows what to do in this case.

You could even build this feature initially as an extension installed from PyPI, and then later move it into the CPython core via a PEP. It's just that, in the case of an extension, you couldn't use the "from __future__" syntax.
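
As a rough, hypothetical sketch of what such an extension could look like (it naively assumes the distribution name matches the module name, which is exactly the problem raised above; AutoInstallFinder is a made-up name):

    import importlib
    import importlib.abc
    import importlib.util
    import subprocess
    import sys

    class AutoInstallFinder(importlib.abc.MetaPathFinder):
        """Try to pip-install a missing top-level module, then retry the lookup."""

        _attempted = set()  # guard against retrying (and recursing on) the same name

        def find_spec(self, fullname, path=None, target=None):
            if path is not None or fullname in self._attempted:
                return None  # only handle top-level modules, and only once each
            self._attempted.add(fullname)
            result = subprocess.run(
                [sys.executable, "-m", "pip", "install", fullname],
                capture_output=True,
            )
            if result.returncode != 0:
                return None  # fall through to the usual ModuleNotFoundError
            importlib.invalidate_caches()
            return importlib.util.find_spec(fullname)

    # Appended last, so every import that already works is untouched.
    sys.meta_path.append(AutoInstallFinder())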

> it may legally be a meta-package that runs one-shot configuration code when "built from source"

True, but if Python were to provide this auto-install via "import X" feature, packages of that nature could be supported by including in them a dummy main module. All it would need would be an empty __init__.py. You could include some metadata in the __init__.py if you wished.

Once "import X" auto-install is supported, you could potentially extend the "import" syntax with metadata to specify you want to install a specific package (not the primary package for the module), and with specific versions. Maybe some syntax like:

    import foobarbaz ("foo-bar-baz>=3.0")

I doubt all this is going to happen any time soon, but maybe Python will eventually get there.


> I think this is a design issue with PyPI though. It really should have some kind of index which goes from module names to packages which provide that module. (Maybe it already does but I don't know about it?)

PyPI never really saw much "design" (although there is a GitHub project for the site, https://github.com/pypi/warehouse/, as well as for a mirror client, https://github.com/pypa/bandersnatch). But an established principle now is that anyone can upload a distribution with whatever name they want: first come, first served by default. Further, nobody has to worry about what names anyone else's existing software uses in order to do this. There are restrictions to avoid typo-squatting and other social engineering attempts, though (and in the modern era, names of standard library modules are automatically blacklisted).
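
For locally installed distributions the standard library does expose that mapping (since Python 3.10); as far as I know there is nothing equivalent covering PyPI as a whole:

    import importlib.metadata

    # Maps top-level module names to the distribution(s) providing them,
    # e.g. something like {"PIL": ["Pillow"], "bs4": ["beautifulsoup4"], ...}
    print(importlib.metadata.packages_distributions())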

> Of course, that doesn't help if multiple packages provide the same module; but then, if there were a process to reserve a module name – either one which no package is currently using, or, if it is currently used only by a single package, by giving the owner of that package ownership of the module – the module name owner could bless a single package as the primary package for that module name.

These kinds of conflicts are actually by design. You're supposed to be able to have competing implementations of the same API.

> Obviously it shouldn't do this by default... maybe something like "from __future__ import auto_install" to enable it.

The language is not realistically going to change purely to support packaging. The time to propose this was in 2006. (Did you know pip was first released before Python 3.0?)

> but maybe Python will eventually get there.

That would require the relevant people to agree with heading in that direction. IMX, they have many reasons they don't want to.

Anyway, this isn't the place to pitch such ideas. It would be better to try the Ideas and/or Packaging forums on https://discuss.python.org — but be prepared for them to tell you the same things.


Makes perfect sense.


Funny how all this time, the only commonly-used import syntax like this was in HTML+JS with the <script> tag.



