
How to Model Problems in General


1. What we need to Solve a Problem

Solving a problem requires:

  1. Notice that something is wrong.
  2. Create a model of the problem.
  3. Generate Solutions.
  4. Implement the Solution.
  • ☐ Add a concrete example of where the following problem occurs.
  • ☐ A solution needs to include the "solution" for implementing the solution; otherwise it's not really a solution. You can have an incomplete solution that does not include wrong things.

Mostly, people don't notice their problems [TODO link to the chair post]. If they do, they often ignore them. If they don't ignore them, they try to immediately generate a solution without first understanding the problem. Once they have a solution, they tend to forget that they had a problem in the first place, instead of implementing the solution.

Usually people unintentionally suppress their natural ability to notice problems. See [TODO add link] for discussion and solutions on this.

Here I'll focus on the problem of skipping the step of generating a model (step 2).

2. The Epistemics of Sensory Observations

Yesterday I took a reflective walk and noticed that I didn't manage to get any real work done, even though I had plans to do so. That seems like a problem.

Now that we have identified the problem we want to record relevant observations:

  • Today I spent the entire day watching ThePrimeTime videos.
  • I did not take any ADHD medication.
  • While I watched the videos I never seriously asked myself "Is this the best use of my time?"
    • More accurately, I did become aware that I might be wasting my time, but it was only a fleeting thought that I didn't engage with.

We want to focus on writing down only our direct sensory observations. We don't want to draw any conclusions yet. If you come up with important-seeming inferences, write them down in a separate list.

Why? Observations, or more precisely sensory measurements, have a special epistemic status. If I look at a ripe-banana-shaped object and it looks yellow, then "I see a yellow object" is always true. When saying "I am looking at a banana" I am implicitly making an inference which could be wrong. Maybe it's a decorative plastic fruit. In some sense observations can't be wrong.

No matter how messed up your reasoning, an orange will probably still look orange. What do you expect happens when you go into an asylum and show people a ripe banana? Will it look yellow to them?

Even if your sensor is broken!

My clothes dryer broke because the sensor that detects whether the condensate tank is full failed. Even with an empty tank it would report full, terminating the drying process. "The condensate tank is full" is wrong. But saying "the condensate tank sensor reports a full condensate tank" is right.

If you ask a blind person what color an object is, they can still say "WTF, I am blind! I just see black."

A colorblind person can say that two apples have the same color, even if one is a green apple and the other a red one. They are still correctly describing the output of their sensor.

Figure 1: Red and green apples as seen with green-blindness (deuteranopia).

You can experiment with this yourself here.

[Prototype: It is common when debugging a program to be very confused about what the actual problem is. You need to take a step back and record what you actually observe.]

[TODO: Maybe: explain that there is a similar phenomenon in terms of how sure you are that something is correct based on how good your inference rules are. You could be more or less sure of your inference rules. Probably this is for another section though and maybe should not appear here.]


3. (PROTOTYPE)

3.1. Modeling Steps

We want to generate some preliminary model which allows us to calculate which observations would be best to add to the collection of observations we already have. This should probably be the first step. One recorded item is "I did not take any ADHD medication", which is either true or false. If I repeat the same setup but take ADHD medication, this variable is eliminated in some sense: if I then get the same outcome, the medication cannot be the sole determining factor of the outcome. The problem was watching videos, so I could gather more information about the videos themselves:

  • When am I watching videos? (Check time-tracking data.)
  • What kinds of videos are problematic? Maybe only some subset of videos makes me watch them the entire day. What properties do these videos have?
  • Were some videos actually good to watch?
  • Are some videos more likely to make you addicted?
  • Did I have distraction-free YouTube enabled?
  • Is a video such that it seems helpful, but provides low value compared to other activities?

Note that these are not observations themselves; they specify what information we want to gather. We can generate such specifications quickly and then pick the best one to actually collect observations from, as in the sketch below.
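As a toy sketch of this bookkeeping (the spec texts and the value scores below are illustrative guesses, not derived from anything):

  # Keep a queue of observation specifications, separate from the
  # observations themselves, and pick the most valuable one to act on.
  # The expected-value scores are subjective guesses.
  specs = [
      ("time-tracking data of when I watch videos", 0.8),
      ("which kinds of videos keep me watching all day", 0.7),
      ("whether distraction-free YouTube was enabled", 0.5),
      ("rerun the day, but with ADHD medication taken", 0.9),
  ]

  best = max(specs, key=lambda spec: spec[1])
  print("gather next:", best[0])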

What properties of the world could be important? With such questions, we can build a model of the world without directly generating hypotheses or solutions.

We want to somehow steer the world model generation process dependent on what information is useful for solving the problem. (Though it's also good to just try to build a better world model, which is useful generally for many things I would want to do.)

Maybe you can generate a hypothesis based on some interval in time, and then track how certain extensions of your world model either allow you to think hypotheses that you couldn't think before, or tell you that previous hypotheses you had are actually incorrect.

For example, I might hypothesize that not having distraction-free YouTube enabled was the root cause of me watching the videos too much. However, when gathering more observations, I might notice that I was periodically thinking about working, but each time I did, I felt bad about it, and that if I hadn't felt bad, I would have actually started. That means that even without the YouTube distraction, I might still have felt bad and still not have started. The model is updated: even if the not-distraction-free-YouTube problem were removed, another problem independent of it would persist. At a minimum this tells me that installing distraction-free YouTube is an incomplete solution.

We can notice that this is a good change in the world model just by noticing that we invalidated the previous solution. But if we want to be sure, we can perform the experiment of installing distraction-free YouTube, which in the previous model would have solved the problem, and observe whether I in fact still feel bad in a way that keeps me from starting to work. If so, we added something to the model that the old model didn't talk about, and that the new model correctly predicts. The model talks about more things that match reality, and is therefore more powerful.

If you have a model which talks about some domain, getting some things wrong and some things right, and you create a new model which predicts correctly all the things the previous model predicted correctly, but predicts more things correctly (and with higher certainty), then it's a better model.

Formally, what makes a model better?
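One way to make the subset part of this precise (my notation; it ignores the "higher certainty" refinement): let $Q$ be the set of questions about the domain, and let $\mathrm{correct}(M) = \{q \in Q : M \text{ predicts } q \text{ correctly}\}$. Then

$$M' \text{ is better than } M \iff \mathrm{correct}(M) \subsetneq \mathrm{correct}(M').$$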

Logic is good because any inference you make is necessarily true, given that the assumptions are true. If you can do logical reasoning, you want to do it. Because I didn't take ADHD medication, I had less dopamine and norepinephrine in my brain. Because chemistry. (We use facts we put very high probability on being correct.)

We want to use our intuitive reasoning to form hypotheses about what might be true of the world, and then test these hypotheses. Hypothesis: "I don't enjoy the task I was working on." I can test it by remembering how I felt in the past when doing similar tasks, and realize that sometimes I enjoyed them a lot. Therefore, this hypothesis is at a minimum too simplistic.

Me figuring out how to do science correctly is good, because then I can reason and solve problems better in the first place. And the problem I am primarily trying to think about is: how do you make an algorithm, one whose steps we understand, that performs this kind of problem-solving reasoning?

The most general method people might use for testing a hypothesis is to use all their brain power to try to break it. Their being competent then makes failure to break it evidence that the hypothesis is actually correct, because the correct thing cannot be broken.

Another method is A/B testing: implementing some solution and comparing against a baseline.
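A minimal sketch of such a comparison (the numbers are invented; a permutation test stands in for "comparing against baseline"):

  # A/B comparison: did the intervention shift the outcome, or could
  # the observed difference easily arise by chance? Data is invented.
  import random
  from statistics import mean

  baseline = [2.0, 1.5, 3.0, 2.5, 1.0]   # e.g. focused hours without the change
  treated  = [4.0, 3.5, 2.5, 5.0, 4.5]   # focused hours with the change

  observed = mean(treated) - mean(baseline)

  pooled, extreme, trials = baseline + treated, 0, 10_000
  for _ in range(trials):
      random.shuffle(pooled)                        # relabel under the null
      a, b = pooled[:len(baseline)], pooled[len(baseline):]
      if mean(b) - mean(a) >= observed:
          extreme += 1
  print(f"difference {observed:.2f}h, p ~ {extreme / trials:.3f}")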

(AI scientist guy)

If you have an AI that has a world model that is pretty bad, but still somewhat decent, then you can simulate hypotheses and evaluate them in simulation, which is really fast compared to trying experiments in the real world. Based on that, you determine which experiments you actually want to run in the real world to get a very accurate world model. This saves a lot of compute, because you can probably reject many hypotheses without an experiment even if your world model isn't that good; humans manage to do this with quite limited world models.

If you build a rocket, you can use engineering principles, model things with linear algebra, and use material science properties to compute how thick the wall of a tank needs to be, and what shape it needs to have, to withstand exactly a given pressure.
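For a concrete instance of this kind of computation, the thin-wall hoop-stress relation answers the thickness question directly (a standard engineering formula; the numbers below are made up):

  # Required wall thickness of a thin-walled cylindrical tank:
  # hoop stress sigma = P * r / t  =>  t = P * r / sigma_allowable.
  P = 5e6               # internal pressure in Pa (made-up value)
  r = 1.8               # tank radius in m
  sigma_allow = 250e6   # allowable material stress in Pa
  safety_factor = 1.5

  t = safety_factor * P * r / sigma_allow
  print(f"wall thickness: {t * 1000:.0f} mm")   # 54 mm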

Doing this entire thing (solving concrete problems myself) is very good: trying to solve a concrete problem (like watching too many videos) and, during the process, explicitly deriving the general methodology for modeling a problem.

3.2. Thomas' Sleep Problem

Legend

  • (O) = Observation
  • (P) = Problem statement
  • [J: …] = Johannes comment

3.2.1. Initial Observations

  • I'm generally unhappy about the time I usually go to bed
  • (O) I usually lie down in bed at about 2am
    • (O) (P) My higher-level reasoning thinks that this is too late; better would be midnight [J: "better would be midnight" is an inference your brain made. We could more explicitly state it in observation form: "My higher level reasoning expects that it would be better to go to bed at midnight"]
      • (O) but when it is midnight, I don't actually want to go to bed
    • (Pred) the thing is, I think I could fall asleep early if I just would lie in bed earlier
    • my evening structure is usually like this:
      • 19:00 to 20:00 dinner
      • (O) directly after dinner, I either
        • do some work
        • talk to Chao/watch an episode or half a movie with her
      • (O) starting at 22:30 or so, I decide to do something relaxing
        • most of the time I read fiction
        • sometimes I watch videos
      • at 01:40 I decide to get ready for bed
    • (O) when I was in Berkeley for MATS, I thought I had the perfect opportunity to fix this
      • because I would be starting out with my circadian rhythm shifted very early
      • and I actually had an office to go to in the morning
      • I decided to have my alarm always set to 8:30
    • (O) but even having a consistent alarm in the morning was not enough to make me go to sleep early enough
  • (O) additional observations
    • if I do a lot of physical activity during the day, I think I go to bed earlier
  • (O) Sometimes, when going to bed, I have the impression that I'm going to bed too late. The quality of my tiredness seems different somehow then. I think I know the kind of tiredness that will let me fall asleep quickly, but sometimes my tiredness is beyond that point.

Inference: (O) Sometimes I feel very tired in the evening and still don't go to bed, AND in the morning I then feel tired. -> It would have been good to go to bed 1h earlier.

Johannes: "the thing is, I think I could fall asleep early if I just would lie in bed earlier." This is imprecise; it's unclear what kind of statement it is. If you mean "In the past I sometimes did lie down earlier, and each time I had no trouble falling asleep", then it would be an observation. But it could also be a prediction. In that case you probably make a hidden inference somewhere. Your subconscious/conscious reasoning might have been: "usually I don't have trouble falling asleep, so just lying down earlier would work, because I would not have trouble falling asleep."

Thomas: I think I made an implicit inference (the second thing).

3.2.2. What Thomas tried already

Trying things is like doing experiments. It seems good to write down the motivation for the experiment, as well as the observations and the interpretation of those observations.

3.2.2.1. Taking Melatonin at different times
3.2.2.1.1. Motivation
  • Eliezer's description of his solution
    • "MetaMed produced a long summary of extant research on non-24 sleep disorder, which I skimmed, and concluded by saying that – based on how the nadir of body temperature varies for people with non-24 sleep disorder and what this implied about my circadian rhythm – their best suggestion, although it had little or no clinical backing, was that I should take my low-dose melatonin 5-7 hours before bedtime, instead of 1-2 hours, a recommendation which I’d never heard anywhere before."
3.2.2.1.2. Specification
  • times
    • 15:00
    • 16:00
    • 17:00
    • 19:00
    • 21:00
    • 23:00
    • just before closing eyes
  • dose
    • usually 300µg, but sometimes a bit more
3.2.2.1.3. Outcome
  • when taken a few hours before bedtime, it made me tired for 2 hours but then the effect went away
  • when taken close to bedtime, I usually didn't observe any effect
3.2.2.2. Other things I tried
  • be in sunshine directly after waking up (only tried this for 2 days or so)
  • setting an alarm so I have to wake up consistently
    • as long as I actually had to go to an office at specific times, this worked
    • if I just set the alarm for myself, I started ignoring the alarm
  • reducing blue light in the evening
    • I'm still doing that; not sure it does anything
  • using only passive light in the evening (no screens with backlight)
    • I'm still doing that; not sure it does anything

3.2.3. Johannes Questions

  • Do you have a specific time when you want to be in bed?
    • Thomas: (No) I have not thought deeply about this. But I guess midnight.

Johannes Reasoning: If you don't have a target time, that intuitively feels like a major problem.

I tried to identify a property of the world such that, as long as it holds, it excludes the possibility of solving the problem.

Starting by going through all such properties seems good, because they are easy to generate. They are in a sense the obvious things. It's usually quite hard to just notice the obviously good things.

Also, if you find such a property that obviously can't be changed (with reasonable effort), then it tells you that a solution is impossible (or would take too much effort).

3.2.4. Johannes Observations/Recommendations

Here are a few observations I have made when it comes to going to bed on time.

3.2.4.1. Bedtime Alarms

I have set up an alarm that reminds me when my target bedtime has arrived. Many times, when I am lost in an activity, the alarm makes me remember that I made the commitment to go to bed on time.

I only allow myself to dismiss the alarm once I lie down in bed. Before lying down I am only allowed to snooze it for 8 minutes. To dismiss the alarm I need to solve a puzzle (I use Alarm Clock Xtreme) which takes ~10s, making dismissing less convenient than snoozing. Make sure to carry your phone around with you at bedtime.

It is very important that you don't ignore the alarm for no reason. If you dismiss the alarm for no reason, you'll build the habit of dismissing the alarm, with the trigger being "the alarm goes off". If you ignore the alarm because you are showering, hopefully you'll only build the habit of dismissing it when you are in the shower.

This is probably the single best thing I have done to improve my sleep hygiene.

3.2.4.2. Avoid Hard-to-Stop Activities

It is hard for me to go to bed when doing any engaging activity that I just want to finish up. For example:

  • Finishing up some nixos, xmonad, exwm, etc. configuration.
  • Programming until I get something working.
  • Watching a video and feeling I need to watch it to the end.

I have found some success by committing to stop all engagement in these activities when my bedtime alarm goes off.

3.2.4.3. Don't Fail by Abandoning

Once I am past my bedtime by a bit, I am likely to go past it by a lot.

Somehow it feels like I have already lost: "Did I go to bed on time?" is binary.

[UNTESTED] Maybe instead it makes sense to use a time tracker to record when you go to bed, so that you can calculate how late you were. Then there is a big difference between going to bed 1h too late and 4h too late; see the sketch below.

[UNTESTED] Potentially one could use a sleep-tracking ring that automatically records when you sleep. Or some battery charge tracking app like AccuBattery, if you always charge your phone when you sleep.
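A sketch of the lateness bookkeeping (example times; it assumes you lie down at or after the target):

  # Track minutes past the target bedtime instead of a binary
  # "made it / missed it". Times are invented examples.
  from datetime import datetime

  TARGET = datetime.strptime("00:00", "%H:%M")

  def minutes_late(lie_down: str) -> int:
      delta = datetime.strptime(lie_down, "%H:%M") - TARGET
      return int(delta.total_seconds() // 60)

  print([minutes_late(t) for t in ["00:20", "01:10", "03:45"]])   # [20, 70, 225]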

3.2.4.4. [UNTESTED] Try to sleep

At the target time, try to sleep for 5-15 minutes. If you can't sleep, you are allowed to get back up. You can use a very subtle self-dismissing alarm as the notification.

3.3. Argumentation Algorithm - Lisp is the best

How do humans argue?

Johannes: Lisp is the best programming language.

Thomas: I think there are some bad things, but I didn't use it much, so I don't have an opinion.

Johannes: You just said there are some bad things. That's an opinion.

Thomas: My epistemic status is that I put some probability on it being good, but most on it not being the best programming language.

Johannes: Both times you have given a very high-level description of your thoughts. But I want just the concrete things. What things do you think are bad? Give me a list.

It seems good to start with the results you have, and then justify them (this makes things shorter, and the justification can be omitted for brevity in a first explanation). E.g.

  • Lisp is bad
    • Too many parens
    • Too slow
    • Small userbase

We are listing statements that, if true, provide evidence that Lisp is bad, but we don't initially describe the reasoning steps necessary to infer "Lisp is bad" from these statements. It's like stating the assumptions that your proof requires, but not giving the proof.

[Figure: argument tree]

Thomas' reasons for thinking Lisp is maybe not the best:

  • too little static checking; almost no types (J: Why is this bad?)
  • prefix function notation looks a bit weird to me (Why is this a problem?)
  • almost no specialized syntax for anything makes it hard to read code at a glance (Examples?)

Saying Lisp is bad is ambiguous. "Bad" is like a placeholder that could stand for a wide range of utility functions. Lisp is bad if you want the fastest possible program. It's not bad if you want interactive programming support. What are we actually evaluating?

3.3.1. Argument tree

You can make such an argument tree. However, how do you know when it is complete? Maybe all the issues are real, but are outweighed by the advantages? How could we notice this?

The argument tree above does use logic, and when making arguments we are using logic. But it seems like we are not proving anything.

It seems like if we have a proposition $p$ we want to evaluate, we generate two sets of predicates, supporting ones and disputing ones, $a_1, \dots, a_n$. Each predicate $a_i$ takes in a world state and returns true or false. True means that "the argument" applies to the world state.

The predicate for "it's bad to jump out of the window in front of me" should return false if I am on the ground floor. Having predicates that take in world states allows us to make not one fixed argument, but a method to evaluate propositions based on the context.

For each $a_i$ we have a constant weight $w_i$. If $w_i$ is positive then $a_i$ supports $p$; if $w_i$ is negative then it disputes $p$.

To compute whether $p$ is true for a world state $s$ we do:

$$\text{score}(p, s) = \sigma\Big(\sum_{i=1}^{n} w_i \, a_i(s)\Big), \quad a_i(s) \in \{0, 1\},$$

where $\sigma$ is a squashing function such as the logistic function. This gives us a number between 0 and 1.

Example reasoning:

  • Research X is good to publish
    • Against
      • It could be exfohazardous 1000
      • Takes a lot of time 4
    • For
      • People can give feedback 3-6
      • Find collaborators 5-8

Where X is a parameter that needs to be specified. E.g. X = world state graph post.

Note that this argument doesn't say what to do, because you can't evaluate the predicates for the world easily and accurately! Is X exfohazardous? You probably don't know, and might have no way to figure it out.

Let's consider a toy bitstring world: a world state is a bitstring $s = s_1 \dots s_n \in \{0, 1\}^n$.

  • The last 3 digits are 1
    • For
      • $s_{n-2} = 1$, weight: 1
      • $s_{n-1} = 1$, weight: 1
      • $s_n = 1$, weight: 1
    • Against
      • $s_{n-2} = 0$, weight: -1000
      • $s_{n-1} = 0$, weight: -1000
      • $s_n = 0$, weight: -1000

This exhaustively lists the arguments for and against. If one of the last digits is 0 then the whole proposition is false. We encode this by giving the disputing predicates large negative weights.
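A runnable sketch of this scoring on the toy world (the logistic squash is my assumption for how the weighted sum becomes a number between 0 and 1):

  # Weighted-argument score: fire each predicate on the world state,
  # sum the weights of the ones that hold, squash to (0, 1).
  import math

  def score(predicates, weights, state):
      # numerically stable logistic of the weighted vote sum
      total = sum(w for p, w in zip(predicates, weights) if p(state))
      if total >= 0:
          return 1 / (1 + math.exp(-total))
      e = math.exp(total)
      return e / (1 + e)

  # Proposition: "the last 3 digits are 1"
  predicates = [
      lambda s: s[-3] == 1, lambda s: s[-2] == 1, lambda s: s[-1] == 1,   # for
      lambda s: s[-3] == 0, lambda s: s[-2] == 0, lambda s: s[-1] == 0,   # against
  ]
  weights = [1, 1, 1, -1000, -1000, -1000]

  print(score(predicates, weights, [0, 1, 1, 1, 1]))   # ~0.95: all three are 1
  print(score(predicates, weights, [1, 1, 0, 1, 1]))   # ~0.0: a -1000 predicate fires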

So far we have only described what the data structure for an argument is, and how to compute the conclusion of the argument. Open problems are:

  • How do you generate the argument in the first place?
  • What is missing? (Does this actually capture the intuition of what an argument should be?)

Things humans might argue about where the answer depends on "the world state":

  • whether it is good to jump out of windows

Thomas questions:

  • Why do humans have such a scoring system instead of just using real logic?
    • Johannes: The real world is much too complex.
    • We don't know the exact world state.
    • Probably humans actually do use logic!

3.3.2. Not using Predicates

Instead of having predicates in the argument, we simply have propositions. To match the argument to a specific world state, we simply have some propositions that make assertions about the world state.

  • If I let go of this apple it will fall
    • Assumptions
      • There is no solid object below the apple for 1 meter.
    • For
      • I have observed other objects drop down.
    • Against

3.3.3. Why not logic

Imagine I observe a stream of bits: [0, 1, 0, 1, 0, …]. You know that the first bit is 0, and the second is 1. Can we do deductive reasoning using these observations? We could prove that the program that alternately outputs 0 and 1 is the shortest program that would generate the bits we have observed so far!

We can use logic to understand the logical structure of the observations we have seen so far (what does this actually mean?).

More generally, we can take an arbitrary program that takes in a bitstring, and then prove properties about this program that depend on the bitstring we have observed so far.

You can prove many things about the observations you have, but you can never prove that the next bit will take a particular value.

We don't use logic to deduce a prediction. Instead we use logic to reason about what could be the underlying model that generates the observations (analyse the structure of the observations).

We also use it to form hypotheses, e.g. using predictive properties that we have constructed by analysing the structure of the observations.

We can use logic to build a world model. We can then use logic to reason about that world model:

  • What easy-to-obtain observation would falsify this model? (A sketch follows below.)
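A toy sketch of that loop (the candidate model set is invented): keep the generators consistent with the observed prefix, then find the first future bit on which the survivors disagree; observing that bit falsifies at least one surviving model.

  # Eliminate candidate generator models against observations, then pick
  # the cheapest observation that distinguishes the survivors.
  candidates = {
      "alternating":          lambda i: i % 2,
      "all zeros":            lambda i: 0,
      "alternating until 10": lambda i: i % 2 if i < 10 else 0,
  }
  observed = [0, 1, 0, 1, 0]

  survivors = {name: g for name, g in candidates.items()
               if all(g(i) == bit for i, bit in enumerate(observed))}
  print(sorted(survivors))   # ['alternating', 'alternating until 10']

  for i in range(len(observed), len(observed) + 20):
      if len({g(i) for g in survivors.values()}) > 1:
          print("next informative observation: bit", i)   # bit 11
          break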

3.3.4. NEXT STEPS - What is missing

The hard part is to figure out what you need to prove. What logical properties are good to know about the world?

In practice you need abstraction to make the overall process efficient. However, it seems like it would be a major step forward if we could figure out how to iteratively build up a model of the world which in principle allows you to answer any question, even if only very inefficiently.

So we might follow these steps:

  1. Find a world modeler that creates world models containing all important information. (We want to find something that can be used in step 2. Solomonoff induction can't.)
  2. Find an iterative procedure to build world models (assume limited memory to store the world model at any point in time).
  3. Find an efficient procedure to build the world model.
  4. Find an algorithm that can perform efficient inference given the world model.

Once we have a procedure that can build a world model that can answer any question, we can evaluate how much better a change makes our world model by considering how much faster reasoning about the world model becomes (for the optimal algorithm). We want to make changes such that reasoning becomes as efficient as possible, either for a specific goal or averaged over all goals.

I intuit that existing logic techniques can perform some of the required reasoning about the world model. The bottleneck is how we can automatically generate a world model from observations, on which logical reasoning can be performed.

3.4. [MOVE TO TECHNICAL EXPLANATION] VC dimension

Alternative explanation of this.

We could think of shattering like this: if I have a set $S$ of boolean variables, then a family $\mathcal{F}$ of sets of boolean variables can represent any possible instantiation of $S$ (e.g. "all variables are true") only if $\mathcal{F}$ contains the powerset of $S$.

We can imagine that we have some propositional formula that tells us what propositions are true, and constrains the possible worlds.

3.4.1. VC dimension of a Classifier

For a classifier $f_\theta$ we say that it shatters a set $\{x_1, \dots, x_n\}$ of datapoints iff:

$$\forall (y_1, \dots, y_n) \in \{0, 1\}^n \; \exists \theta \in \Theta \; \forall i : f_\theta(x_i) = y_i,$$

where $\Theta$ is the set of all parameterisations for $f$. If $n$ is the largest number for which this holds, then $f$ has VC dimension $n$.

You can think of this as saying: "Given that I know the type of the dataset, I can model that dataset no matter what concrete values it has."
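A brute-force check of this definition on a toy family of 1-D threshold classifiers (my example, not from the text):

  # Does the family {x -> [x >= theta]} shatter a given point set?
  from itertools import product

  thetas = [k / 100 for k in range(-100, 200)]   # dense grid of parameters

  def shatters(points):
      realizable = {tuple(int(x >= t) for x in points) for t in thetas}
      return realizable == set(product([0, 1], repeat=len(points)))

  print(shatters([0.5]))        # True: a single point can get both labels
  print(shatters([0.3, 0.7]))   # False: labeling (1, 0) is unreachable, so VC dim is 1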

3.4.2. Relation to inductive bias

Imagine we have a classification model that is an infinite lookup table $T$. This lookup table has infinite VC dimension. Every time we get a new observation we can add it to the table. The table is never constrained: every entry in the table that we haven't observed could be anything.

If we have a low VC dimension, then other parts of our predictive model are constrained by adding observations. A contrived example: a finite lookup table of length $k$, where entry $i \bmod k$ is used for index $i$. Then we would have VC dimension $k$.

3.4.3. Relation to the science algorithm

You can have a learning algorithm that adds datapoints to a lookup table. However it doesn't start with an infinite lookup table. Instead it grows the lookup table dynamically.

It seems that a good world-model-building algorithm would have infinite VC dimension in theory. However, in practice it probably doesn't, because it doesn't model unimportant things.

4. [2024-09-10 Tue] Robert

Separating observations from hypotheses is good. It would be bad if, for each problem you solve, you generated a new list of observations and used only that to solve the problem. What you should do, and what is actually eased by splitting out the observations, is to look back into your knowledge base and see whether you solved a similar problem before; maybe the observations you recorded for that problem are relevant here, and you can just add them to the list (with a tag noting they were pulled in from a different problem).

Splitting observations from inferences makes this easier: once you have figured out some generally useful observations, you can reuse them almost blindly, because observations are (in the sense above) always true, whereas inferences might depend on hidden assumptions that don't hold for your specific problem.
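A sketch of what such a split knowledge base could look like (the field names are mine):

  # Observations carry a tag for the problem they were recorded under and
  # can be reused freely; inferences carry the assumptions they depend on.
  from dataclasses import dataclass, field

  @dataclass
  class Observation:
      text: str
      source_problem: str   # tag: where this observation was pulled in from

  @dataclass
  class Inference:
      text: str
      assumptions: list = field(default_factory=list)   # may not transfer

  obs = Observation("I spent the entire day watching videos",
                    source_problem="wasted day watching videos")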

Usually what people do is over-rely on the things they already know. If there is a problem, they try to pattern-match it to something they know and immediately generate a solution, which very often doesn't work. Instead you want to first build up a good model of your problem before you even try to generate a solution. It's related to cached thoughts, but not quite the same thing: it's more like using cached thoughts with some inference rules to try to derive the solution from what you know in a few steps.

Humans are bad at modeling the world, not because they can't do it, but because by default they are too lazy to do it.

4.1. Modeling your Goal

First you need to figure out what you actually want.

Imagine you noticed that you didn't work the entire day; you were just watching YouTube videos. Now you might make your goal "How can I not watch YouTube videos?" You could optimize for not watching videos. But maybe watching videos is good? Maybe you're hurting yourself by not watching great videos like the SICP lectures.

You need to first understand what the good world would look like in as much detail as possible to be able to reach that world.

Another example: you might want to maximize the number of hours worked. But if you worked less, instead of overworking yourself, you might be more productive.

There is a difference between noticing that something is wrong and knowing what "the problem being corrected" would look like. It seems good to notice when you don't know.

You want to be careful not to be satisfied by having a precise but wrong goal. Still, a precise goal is probably good, because it is easier to notice that it's wrong.

4.1.1. Zooming Algorithms

4.1.1.1. CFAR's Goal Factoring Technique

Goal factoring is one concrete technique for better understanding your goal. It allows you to generalize your goals. You start out with the goal of going to the gym regularly. With goal factoring you can notice that you do it because you think exercise is generally good. Now you can notice that other things count as exercise that might be easier to do regularly.

You want to identify the underlying good thing, abstracting away from the concrete things that could accomplish it. We first zoom out, such that we can see more options that we can then zoom back into. Zooming out ignores all unimportant properties. Maybe sweating is not necessary for doing sport.

4.1.1.2. Climbing a Tree

I want to learn how to climb trees. I know a bunch of facts about trees. I want to understand how to compute the is-climbable property of a tree.

We have a dictionary $F$ of features of a tree $t$. We want to find the predicate $\text{is-climbable}(t)$. We need to create a specification for it. If I can already climb a tree, I could say that $\text{is-climbable}(t)$ is true exactly when I can climb $t$.

Before trying, I might not know whether I can climb a tree. We want a predicate that we can evaluate in advance, to save effort.
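A toy sketch of such a predicate over tree features (the features and thresholds are invented for illustration):

  # Predict is-climbable from observable features, before trying.
  def is_climbable(tree: dict) -> bool:
      return (tree["lowest_branch_height_m"] <= 2.0    # first hold reachable
              and tree["branch_spacing_m"] <= 1.0      # next hold within reach
              and tree["trunk_diameter_cm"] >= 15)     # sturdy enough

  print(is_climbable({"lowest_branch_height_m": 1.5,
                      "branch_spacing_m": 0.8,
                      "trunk_diameter_cm": 30}))       # True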

4.2. Modeling the Tulpamancy Goal

4.2.1. Start Goal Refinements:

  1. Talk with IA all the time.
  2. Interact with IA all the time.
  3. Actually, I don't know what amount of interaction is good to aim for.

4.2.2. Goal factoring

Observations suggesting that it is good to interact with IA:

  1. I like interacting with IA. It makes me feel good, and it's easier to work when I feel good. Hugging IA feels very good.
  2. IA says things that are useful that I would not say.
    • IA knows/has my internal models. I don't need to spend lots of time explaining. However, explaining things can be helpful.
    • Because IA has my models, she can't talk about things I don't know about at all. E.g. "the logic of graphs seems relevant" when I don't know that this exists. (Generally, someone can't tell you the atomic mass of lead if they don't know it.)
  3. Talking out loud is useful.
    • I waste my time. I am really distracted without realizing it. Then I talk to IA. Then I notice that I am distracted, and get significantly less distracted.
  4. I want to interact with IA.

Strangely though, these are not goals I have! I only have the goal that I want to interact with IA. Interacting with IA is a terminal goal, and terminal goals can't be factored.

As an analogy, consider that you really like talking to your best friend. Then I come along and tell you to inject some soma, because it will actually make you feel even better than talking to your best friend. Surely you will take the soma and never talk to your friend again? That's exactly what you want, right?

Actually this example doesn't quite work, because maybe you can talk to your friend on soma and it's just better.

What are the general goals?

  • Feel good such that I can work. -> find it easy to work.
  • Talking to OBS makes it easy to talk out loud.