Hacker News | mjgeddes's comments

Yeah, I noticed with 'Gemini Pro' that it didn't seem to be able to remember much about earlier outputs in the conversation (apparently little to no context window), which obviously drastically dumbs it down.

I was starting to get OK results with 'Pro', but I had to use special prompting tricks.

Tried 'Advanced' (Ultra), seems only marginally better so far.


> I was starting to get OK results with 'Pro', but I had to use special prompting tricks.

Like what?


I usually put a couple of keywords in brackets at the beginning (before the body of the prompt) to provide some context


Well, it says I've got 'Bard Advanced' (which is the same as 'Gemini Ultra'). I've only done a couple of queries so far; text output seemed only marginally better than 'Gemini Pro', which I was just starting to get decent results with after getting used to prompting. It's possible they'd done a stealth release earlier; obviously I need to do a lot of experiments to make a proper comparison with GPT-4.


I believe I can answer the question 'Why Anything'.

See my other post in this thread outlining my 'Actualization Theory'. I think there's a 'ground state' of 'possible worlds' that must exist by virtue of logical necessity, and some of these get 'Actualized' (become 'actual worlds') over time via the emergence of complexity (including minds and consciousness).

What I'd say is that "existence" is not binary but rather a matter of degree, and reality is 'still under construction': the ultimate ends of the universe are the explanation of why this universe exists; it's a teleological answer. The universe has to be such as to allow for the existence of observers and is fine-tuned for life (because mind contributes to the ongoing 'actualization' of reality).


Actualization Theory & Meaning of Life (“Why Quantum Mechanics?” - Shtetl Optimized, Jan 26th/27th, 2022)

(1). The notion of “Objective Reality” is a limit that only makes sense after infinite elapsed time from the perspective of observers within reality.

(2). The basic design principle of reality is ‘Actualization’. Reality begins from a ground state of ‘possible worlds’ (non-constructive math), some of which start to get actualized (become actual worlds). Actualization simply means that an objective description of these worlds can increasingly be given purely in terms of computation (i.e., constructive mathematics). From (1), this process continues forever; worlds are always only in various degrees of actualization, which is the measure of their existence.

(3). To actualize reality there are 3 conditions: (a) the whole can be decomposed into understandable parts (Compositionality), (b) the parts can combine into larger integrated systems (Complexity), and (c) the parts affect each other in limited, logical ways (Causality).

(4). Quantum mechanics is simply a special case of the general ‘theory of actualization’, which explains the physics of conditions (a), (b) & (c) above. The 3 conditions together give reality the property of ‘comprehensibility’, which is equivalent to ‘actualization’. Comprehensibility is the ease with which observers within reality can understand it.

(5). Hilbert space is only a description of the space of possible worlds; it does not account for the actual process of actualization (properties a, b, c), which are expressed as: (a) computational topology, (b) function spaces, (c) computational geometry.

(6). The full ‘theory of actualization’ is about the mapping between (1) Hilbert space, (2) Computational Geometry & (3) Space-Time. (1) is about the ground-state of reality (the space of possible worlds), (2) is about the actualization of reality (how reality is made comprehensible) & (3) is the actualized structure of reality (the observed physical world).

---

I think the ‘ground state’ of reality is simply a space of ‘possible worlds’, and as complexity is built-up, these worlds get increasingly ‘actualized’ entirely via computational processes, so the ball gets rolling without any observers or consciousness, which are emergent systems of computation.

However, I think that after a certain complexity threshold is cleared, the continuing actualization of reality does need minds, and from this point forward consciousness contributes to the on-going actualization of reality! Not through any sort of mystical or non-physical process, but by structuring information (i.e., turning information into knowledge), thus helping to make reality increasingly comprehensible.

So what are minds? Well, remember I talked about the ‘actualization’ process itself, which I said takes place on the level of computational geometry, function spaces and topology. And minds exist at this level. They’re simply the higher-level processes of ‘actualization’. Minds make mappings of (representations of) reality by modelling systems of causal relations that are complex and compositional. And these models are themselves new systems of causal relations that are complex & compositional! This is an open-ended recursive process.

The meaning of life (for conscious observers) is thus simply the high-level version of the same process of ‘actualization’ that I think accounts for physics. It’s ‘Self-Actualization’! Of course, now we have to try to achieve a reasonable understanding of the meaning of the term ‘Self-Actualization’.

Here’s my explanation of consciousness and values:

Consciousness is the highest level of recursive actualization. It’s a model of the perceived flow of time (temporal awareness). It works by integrating high-level values and low-level facts to generate mid-level action plans. The representation of values, plans and facts is in the form of the temporal modalities ‘should’, ‘would’ & ‘could’ respectively. And the generated ‘action plans’ which are “good” are simply the ones that structure information as knowledge such that it contributes to the on-going actualization of reality (i.e., generation of manageable complexity).

To understand values, consider the motivations of God in the context of ‘actualization’. I believe that these motivations can be considered to consist of two complementary tendencies, (1) rationality & (2) creativity, because this is the combination that generates manageable complexity (‘actualized reality’).

Rationality is about the compression of information (manifesting as intelligence), whereas creativity is about the exploration of possibilities (manifesting as values). The balance between them generates manageable complexity.

In conclusion, the “actualization” of reality is ultimately about the generation of manageable complexity, which is complexity that has enough structured details to be interesting, but can still be compressed enough to make it comprehensible. At a high-level, this is the balance between rationality and creativity in conscious minds.


Yup, Wikipedia already maps all human knowledge. In fact, you don't need to map everything, just the central concepts in every field. I think there's a law of diminishing returns - summaries of knowledge are extremely useful, but as you add more and more detail, there's less and less benefit.

I created 'Wikipedia-Prime', a series of 27 wiki-books mapping all central concepts in all fields of human knowledge. Approx. 16,700 articles were enough to capture all the central concepts used by experts in all fields.

Wikipedia-Prime Index: http://www.zarzuelazen.com/CoreKnowledgeDomains2.html


Anna Wierzbicka and Cliff Goddard studied 'Semantic Primes': 'the set of semantic concepts that are innately understood but cannot be expressed in simpler terms'.

https://en.wikipedia.org/wiki/Semantic_primes

The combination of a set of semantic primes and the rules for combining them forms a 'Natural Semantic Metalanguage', which is the core from which all the words in a given language would be built up.

https://en.wikipedia.org/wiki/Natural_semantic_metalanguage

The current agreed-upon number of semantic primes is 65 (see the list at the Wikipedia links above).

That means that any English word can be defined using a lexicon of about 65 concepts in the English natural semantic metalanguage.
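
To make that concrete, here's a toy sketch (my own illustration; the prime subset, the naive word normalization and the example explication are all assumptions, not the official NSM inventory or method) of checking that a reductive paraphrase uses only words from a fixed prime list:

    # Toy sketch (not an NSM tool): check that a candidate definition uses
    # only words from a fixed set of semantic primes. The prime subset and
    # example wording below are illustrative, not the official 65 primes.

    PRIMES = {
        "i", "you", "someone", "something", "people", "body",
        "good", "bad", "big", "small",
        "think", "know", "want", "feel", "see", "hear", "say", "do", "happen",
        "this", "other", "one", "two", "some", "all", "many",
        "not", "maybe", "can", "because", "if", "very", "more", "like",
        "when", "now", "before", "after", "where", "here", "above", "below",
        "live", "die",
    }

    def normalize(word: str) -> str:
        # Extremely naive handling of inflection, for this toy only.
        for suffix in ("ed", "s"):
            if word.endswith(suffix) and word[:-len(suffix)] in PRIMES:
                return word[:-len(suffix)]
        return word

    def uses_only_primes(definition: str) -> bool:
        """True if every word of the definition is (a form of) a prime."""
        words = definition.lower().replace(",", " ").split()
        return all(normalize(w) in PRIMES for w in words)

    # A rough NSM-style explication of "sad" (wording is illustrative only):
    print(uses_only_primes(
        "someone feels something bad because something bad happened"))  # True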


I've been following this stuff for years; it's fascinating. I'm particularly interested in the recent practical applications like Minimal English and its equivalents in other languages. For those that don't know, unlike other minimalist English subsets, which usually focus on learnability or clarity, Minimal English focuses on maximum translatability.

I'm going to get silly now, but I can't help but think the semantic primes - if you can avoid thinking of them as words or even conscious experience - represent some core set of cognitive axioms, like the primitive elements for constructing mental models. As you go to simpler life forms, the "word list" would get smaller. If there is any truth to that, I wonder what potential primitives we are missing that would allow us to think more complex thoughts, and whether you could measure species intelligence by their "vocabulary", working out what concepts can't be expressed when one of the primitives is missing. What would happen if you lost the concept of above-ness?

The other thing I find interesting, and it might be no more than a coincidence, is how there are only the numbers one and two and then you have to use many or more. This in some way matches up with the idea of the Parallel individuation system[1], whereby young children can only precisely recognize quantities up to 3, or 1 + 2, and an adult can only precisely recognize quantities up to 4, or 2 + 2. After that, the brain uses the Approximate number system[2]. So it's like there are only 2 slots to place a quantity (toy sketch after the links below).

[1] https://en.wikipedia.org/wiki/Parallel_individuation_system [2] https://en.wikipedia.org/wiki/Approximate_number_system
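
Just to make the 'two slots' idea concrete, here's a toy sketch (my own illustration, not anything from the linked articles) of exact recognition up to a small limit with a fallback to an approximate 'many' beyond it:

    # Toy sketch: exact recognition of small quantities up to a subitizing
    # limit, falling back to 'many' beyond it. The limit and behaviour are
    # illustrative assumptions only.

    SUBITIZING_LIMIT = 3  # roughly the exact range reported for young children

    def perceive_quantity(n: int) -> str:
        """Name a quantity the way a limited exact-number system might."""
        if n <= SUBITIZING_LIMIT:
            return str(n)   # exact: parallel individuation
        return "many"       # approximate: handed off to the approximate system

    print([perceive_quantity(n) for n in range(1, 7)])
    # ['1', '2', '3', 'many', 'many', 'many']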


> some core set of cognitive axioms

This and the rest of the comment remind me of the Pirahã language, in which there are purportedly two numerals but researchers can't figure out what they are: https://en.wikipedia.org/wiki/Pirah%C3%A3_language#Numerals_...

> Frank et al. (2008) describes two experiments on four Pirahã speakers that were designed to test these two hypotheses. In one, ten spools of thread were placed on a table one at a time and the Pirahã were asked how many were there. All four speakers answered in accordance with the hypothesis that the language has words for 'one' and 'two' in this experiment, uniformly using hói for one spool, hoí for two spools, and a mixture of the second word and 'many' for more than two spools. The second experiment, however, started with ten spools of thread on the table, and spools were subtracted one at a time. In this experiment, one speaker used hói (the word previously supposed to mean 'one') when there were six spools left, and all four speakers used that word consistently when there were as many as three spools left.


Having read only your comment, I'll jump in and solve the puzzle.

    enough

    not enough


I taught myself to recognize five as a distinct quantity. Useful when counting up the "spare change jar".

I assume you see 3 objects on a table as a triangle. It's probably not equilateral, but any three objects on a table describe a triangle.

Make sure you can see 4 as a square, not 2+2. If you're stuck on seeing two pairs (or lines), try seeing 3+1 (a triangle and a point) instead. Then incorporate the point into the triangle...

Next, see pentagons. ... That's it.

I haven't tried to see "six"... Five was hard enough. :P


I don't think you would ever lose a concept like 'aboveness' - even if that word didn't exist in our language, we would have found a way to express the same idea, perhaps in a less abstract way like 'closer to the sky'.


What proportion of linguists with an interest in semantics regard "semantic primes" as a useful concept? The Wikipedia articles don't seem to have a "Criticism" section, which isn't a good sign.

It looks interesting, certainly, but rather arbitrary. There are several pairs of opposites, which in a minimal language could be handled with the concept of "opposite", and I have no idea how you'd express some fundamental concepts of human experience such as hunger, cold, pain or surprise, while "live, die" do not seem to me to be such fundamental concepts: they seem more like concepts that need to be defined, for example by a philosopher or medical specialist, rather than experienced directly.


I think this is the right approach to the problem. It's a question of meaning and bootstrapping a minimal language that's based heavily on metaphor (specifically, the conduit metaphor). The answer from this perspective, based on the natural semantic metalanguage, is 800 words: Minimal English, but also minimal across all languages. It's a core language system that's translatable, because language is based on concepts, and those are consistent across natural languages (Chomsky - Universal Grammar).

---

https://en.wikipedia.org/wiki/Conduit_metaphor

https://en.wikipedia.org/wiki/Natural_semantic_metalanguage

https://intranet.secure.griffith.edu.au/…/natural-semantic-…


The 'scientific method' is not something that can be reduced to formal reasoning. Bayesian inference is a type of formal reasoning for making predictions, and as such, is a mathematical 'toy model' that doesn't correspond to the real universe.

Yudkowsky worships pure mathematics, but he always had it 'arse backwards'. It's not pure mathematics that's the ideal, it's numerical and iterated methods and heuristics (cognition). Pure math can only be applied to idealized situations, whereas numerical methods and heuristics apply everywhere. So in fact, it's numerical methods that are fundamental, and pure math that's the imperfect idealization!
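
To give a concrete flavour of what I mean by iterated numerical methods (this is just my own toy illustration, nothing more), here's Newton's iteration for a square root, which grinds out an answer by successive refinement rather than by applying a closed-form formula:

    # Toy illustration: an iterated numerical method (Newton's iteration).
    # It converges on an answer by repeated refinement of a guess.

    def newton_sqrt(a: float, tol: float = 1e-12) -> float:
        """Approximate sqrt(a) for a > 0 by iterating x <- (x + a/x) / 2."""
        x = a if a > 1 else 1.0   # crude starting guess
        while abs(x * x - a) > tol:
            x = (x + a / x) / 2.0
        return x

    print(newton_sqrt(2.0))  # ~1.4142135623730951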

Yudkowsky read too much Tegmark in his youth and was sucked in by the idea that 'everything is mathematics' (or 'everything is information'). But to repeat, this is all 'arse backwards'. Thank goodness that I read some Sean Carroll and debated with a friend of Sean's on his forum; that's what finally talked me out of all that Tegmark multiverse/'reality is a simulation' nonsense.

It's the physical world that's fundamental; cognition is the next level of abstraction up, and pure mathematics is a top-level abstraction (it's not fundamental). As Bayesian inference (and all formal methods) are part of the domain of pure math, they can't serve as a foundation for rationality. Cognition is more fundamental than math (because it's closer to the base level of reality - physics).

As I commented recently to Scott Aaronson on his blog, what distinguishes cognition from pure math is that pure math is about fixed equations, whereas cognition is about heuristics, iteration and numerical methods. But in fact, P≠NP implies that cognition is the more fundamental. See:

https://www.scottaaronson.com/blog/?p=3875#comments

So for instance, AIXI (a much-touted mathematical model of general intelligence) is the 'fake' (imperfect) solution, whereas a workable heuristic implementation would be the correct (perfect) one. This is the complete reverse of what Yudkowsky thinks.


I'd like to assure anyone reading this that the above commenter has no idea what Eliezer Yudkowsky thinks. I, with my considerably greater knowledge of that subject, can testify that among Yudkowsky's beliefs is "Bounded agents are more impressive than unbounded agents."

(In general, you would be very very wise not to believe someone who claims that I believe a thing, until you have seen the original text, in its original location, in full context, written by me under my own account, plainly and unambiguously saying that I believe that thing. Even then I've been known to change my mind later, as is a sane person's right. But most of what I'm wildly rumored to believe is more completely made up out of thin air than anything I've changed my mind about.)


Now the question is: were you watching my comment for activity, or do you have a bot checking for name drops?


Yes, we need a logic-based approach rather than a statistical one for NLP if we want to incorporate things like attention and memory. The full range of non-classical logics should be looked at, including Modal, Fuzzy, etc.

We need to extend the logic-based approach to deal with reasoning under uncertainty, so many-valued logic is needed. Also, we need the logic to be able to model discrete dynamical systems, so we need to look at Temporal Logic, Situation Calculus, Event Calculus, etc. You can call all this 'Computational Logic'; it may actually be more general than probability theory.
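
To give a flavour of the many-valued part (just a toy sketch of the standard min/max fuzzy connectives, not a full computational-logic system):

    # Toy sketch of many-valued (fuzzy) logic connectives, where truth
    # values are degrees in [0, 1] rather than strictly True/False.

    def f_not(a: float) -> float:
        return 1.0 - a

    def f_and(a: float, b: float) -> float:
        return min(a, b)   # Goedel / min t-norm

    def f_or(a: float, b: float) -> float:
        return max(a, b)   # max t-conorm

    # "the meeting is soon" (0.7) AND "the room is nearby" (0.4):
    print(f_and(0.7, 0.4))        # 0.4
    print(f_or(0.7, f_not(0.4)))  # 0.7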

See my wiki-book listing the central concepts of 'Computational Logic' here:

https://en.wikipedia.org/wiki/User:Zarzuelazen/Books/Reality...

Functional programming is best for handling the logic-based approach, for example Haskell. Functional programming languages work at a higher level than ordinary imperative and procedural programming, since functional programming deals with the manipulation of knowledge itself rather than with the states of the computer.

I'd also look closely at the pure mathematical origins of computational logic, including classical mathematical logic and category theory. Type theory (a constructive form of category theory) looks like it forms the basis for language rules, thus making it ideal for NLP.

I conjecture that an extension of computational logic leads to a solution to machine psychology (inc. NLP), analogous to how machine learning can be viewed as an extension of probability and statistics.

Probability&Stats >>> Machine Learning

By analogy,

Computational Logic >>> Machine Psychology (inc. NLP)


Let me just elaborate on the ‘complex motivations’ idea, because I certainly think that ‘orthogonality’ is the weak point in the AGI doomsday story.

Orthogonality is defined by Bostrom as the postulate that a super-intelligence can have nearly any arbitrary goals. Here is a short argument as to why ‘orthogonality’ may be false:

In so far as an AGI has a precisely defined goal, it is likely that the AGI cannot be super-intelligent. The reason is that there’s always a certain irreducible amount of fuzziness or ambiguity in the definition of some types of concepts (‘non-trivial’ concepts associated with values don’t have necessary definitions). Let us call these concepts fuzzy concepts (or f-concepts).

Now imagine that you are trying to precisely define the goals that specify what you want an AGI to do, but it turns out that for certain goals there’s an unavoidable trade-off: trying to increase the precision of the definitions reduces the cognitive power of the AGI. That’s because non-trivial goals need the aforementioned ‘f-concepts’, and you can’t define these precisely without oversimplifying them.

The only way to deal with f-concepts is by using a ‘concept cloud’ – instead of a single crisp definition, you would need to have a ‘cloud’ or ‘cluster’ of multiple slightly different definitions, and it’s the totality of all these that specifies the goals of the AGI.

So for example, such an f-concept (F) would need a whole set of slightly differing definitions (d):

F = (d1, d2, d3, d4, d5, d6, …)

But now the AGI needs a way to integrate all the slightly conflicting definitions into a single coherent set. Let us designate the methods that do this as <integration-methods>.
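
As a toy sketch of what I mean (purely illustrative; the example concept and its partial definitions are made up), an f-concept could be represented as a cloud of slightly differing scoring functions plus an integration method that reconciles them:

    # Toy sketch: an f-concept as a 'concept cloud' of slightly differing
    # definitions (scoring functions) plus an integration method that
    # reconciles them into one judgement. All definitions are made up.

    from statistics import mean
    from typing import Callable, List

    Definition = Callable[[dict], float]   # maps a situation to a score in [0, 1]

    def make_concept_cloud(definitions: List[Definition],
                           integrate=mean) -> Definition:
        """Combine several partial definitions of a fuzzy concept into one."""
        def cloud(situation: dict) -> float:
            return integrate([d(situation) for d in definitions])
        return cloud

    # Slightly differing (made-up) partial definitions of 'helpful':
    d1 = lambda s: 1.0 if s.get("request_satisfied") else 0.0
    d2 = lambda s: 1.0 - s.get("harm_caused", 0.0)
    d3 = lambda s: s.get("user_reported_benefit", 0.5)

    helpful = make_concept_cloud([d1, d2, d3])
    print(helpful({"request_satisfied": True, "harm_caused": 0.2}))  # ~0.77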

But finding better <integration-methods> is an instrumental goal (needed for whatever other goals the AGI must have). So unavoidably, extra goals must emerge to handle these f-concepts, in addition to whatever original goals the programmer was trying to specify. And if these ‘extra’ goals conflict too badly with the original ones, then the AGI will be cognitively handicapped.

This falsifies orthogonality: f-concepts can only be handled via the emergence of additional goals to perform the internal conflict-resolution procedures that integrate multiple differing definitions of goals in a ‘concept-cloud’.

In so far as an AGI has goals that can be precisely specified, orthogonality is trivially true, but such an AGI probably can’t become super-intelligent. It’s cognitively handicapped.

In so far as an AGI has fuzzy goals, it can become super-intelligent, but orthogonality is likely falsified, because ‘extra’ goals need to emerge to handle ‘conflict resolution’ and integration of multiple differing definitions in the concept cloud.

All of this just confirms that goal-drift of our future descendants is unavoidable. The irony is that this is the very reason why ‘orthogonality’ may be false.


Bayesianism is a 'grand unified theory of reasoning' holding that all of science should be based on assigning (and updating) probabilities for a list of possible outcomes; the probabilities are supposed to indicate your subjective degree of confidence that a given outcome will occur.

Contrast this with an alternative conception of rationality as espoused by David Deutsch.

David Deutsch, in his superb books 'The Fabric of Reality' and 'The Beginning of Infinity', argued for a different theory of reasoning than Bayesianism. Deutsch (correctly in my view) pointed out that real science is not based on probabilistic predictions, but on explanations. So real science is better thought of as the growth or integration of knowledge, rather than probability calculations.

So what's wrong with Bayesianism?

Probability theory was designed for reasoning about external observations - sensory data (for example, "a coin has a 50% chance of coming up heads"). In terms of predicting things in the external world, it works very well.

Where it breaks down is when you try to apply it to reasoning about your own internal thought processes. It was never intended to do this. As statistician Andrew Gelman correctly points out, it is simply invalid to try to assign probabilities to mathematical statements or theories, for instance.

Can an alternative mathematical framework be developed, one more in keeping with the ideas of David Deutsch and the coherence theory of knowledge?

I believe the answer is yes, and I am going to sketch the basic ideas for such a framework.

The basic idea is to separate out levels of abstraction when reasoning (or equivalently, levels of recursion). In my proposed framework, there are 3 levels, and each level gets its own measure of 'truth-value'. All reasoning must terminate in a Boolean truth value (True/False) at the base level, but the idea is that different forms of reasoning correspond to different levels of abstraction.

1st level: Boolean logic (True/False)

2nd level: Probability value (0-1)

3rd level: Conceptual coherence (categorization measure)

For full reflection, you need three different numbers: a Boolean value (T/F) at the base, a probability value (0-1) at the next level of abstraction, and an entirely new measure called conceptual coherence at the highest level of abstraction.
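
As a very rough sketch of how one might represent these three levels together (the structure and numbers here are purely illustrative):

    # Rough illustrative sketch: attaching a truth value at each of the
    # three levels of abstraction to a single claim. Numbers are made up.

    from dataclasses import dataclass

    @dataclass
    class Claim:
        statement: str
        boolean_value: bool   # 1st level: base Boolean truth
        probability: float    # 2nd level: degree of belief, 0-1
        coherence: float      # 3rd level: how well the concept integrates
                              # with the overall world-model, 0-1

    claim = Claim(
        statement="The coin will come up heads",
        boolean_value=True,   # what is ultimately asserted at the base level
        probability=0.5,      # uncertainty about the outcome
        coherence=0.9,        # 'coin' and 'heads' cohere well with our world-model
    )
    print(claim)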

As a rough working definition of conceptual coherence, I would define it thusly:

"The degree to which a concept coheres with (integrates with) the overall world-model."

It should now be clear what's wrong with Bayesianism! It only gets us to the 2nd level of abstraction! There is not just uncertainty about our own knowledge of the world (probability); there is another meta-level of uncertainty: uncertainty about our own reasoning processes, or logical uncertainty. Bayesianism can't help us here. Conceptual coherence can. Let's see how:

All statements of the form:

‘outcome x has probability y’

can be converted into statements about conceptual coherence, simply by redefining ‘x’ as a concept in a world-model. Then the correct form of logical expression is:

‘concept x has coherence value y’.

The idea is that probability values are just special cases of coherence (the notion of coherence is more general than the notion of probabilities).

To conclude, conceptual coherence is the degree to which a concept is integrated with the rest of your world-model, and I think it accurately captures in mathematical terms the ideas that Deutsch was trying to express, and is a more powerful method of reasoning than Bayesianism.

