A project to fight misinformation and model critical thinking: Gulli Bot

I’ve been working on misinformation for almost 10 years, and I’m asking for your help with this project and some related quests. My current plan is to create an online character, named Gulli Bot, that models and documents critical thinking on contentious issues.

Gulli is just a character powered by humans (no artificial intelligence or machine learning), but it represents a debate in a transparent and thorough way so people can quickly and easily understand the scope of all the claims about an issue.

The first topic is “Should you take the COVID-19 vaccine?” and I could use help in choosing popular issues to analyze and in researching them.

Let me know if you would like more info or would be interested in helping with this project.

I’d be interested in how Gulli Bot models and represents human debates. I have been working for years on outlines for what I call a ‘Planning Discourse Support Platform’, with the aim of reaching agreements/decisions based on the merit of discussion contributions (arguments). In this, obviously, the reliability/trustworthiness of information, and how to detect false and, in the extreme, deliberate misinformation, is a key concern.

@thor thank you for engaging. I would enjoy a video chat to learn more about your ideas and to go in depth about my ideas for Gulli Bot. If you would like to video chat you can find a time you like at BentleyDavis.com/schedule .

Thanks for the reply. Unfortunately, my hearing problems have gotten to the point where phone conversations and video chats are useless – I simply cannot understand what people are saying. This is why I have not joined the regular video discussions here; I have to rely on written communication. For information about my ideas, the simplest way would be to check the array of my papers on the subject on my Academia.edu account (Thorbjoern Mann) – for example, the PDSS (Planning Discourse Support System) paper and its Appendix. If not, I can attach them by email or here if you prefer. Is there a written summary of what Gulli Bot would do in such a discourse? That would greatly help me decide how I might contribute.

I certainly understand that we have different preferred modalities; I find it difficult to consume academic papers. I had a look at your papers, and I think Gulli Bot would be a small feature in your PDSS process. I think it falls under a system/tool in the Public Open Structured Discourse and also in the decision making, and it would be only one tool in that area, as you mention. I don’t have a good written description of the whole process; I can share that when I have it.

Have you looked at the brief website and the example? It is not very detailed but it might help you decide if you want to pursue more.

I did look at the Gulli Bot link but did not see an ‘example’ – other than the ‘announcement’ of the Covid issue. Is it your intention to invite a discussion on that site, or should that happen here?

There are two very different issues, if I understand things correctly. One is the Covid discussion itself: people post their pro and con arguments. It could end there – if everybody could decide that for themselves. In a people-only discourse aiming at some common decision or agreement, there would next be a procedure to make that decision. A simple, familiar one is to take a vote. This is a ‘decision mode’ that does not make clear how the vote is based on the merit of the arguments; in fact, it allows the ‘winning’ side to completely ignore the concerns of the ‘losing’ side – among other shortcomings.

I describe some procedures to create a more transparent link between the merit of discussion entries and the decision. This might involve making some more detailed judgments – e.g. plausibility and weight or importance – that the human participants would have to make. To get from those to a ‘decision indicator statistic’ would require some calculation – no higher math, though some faster/better calculating entity would help. One of my papers – the fictional ‘Tavern Experiment’ – goes through that with a simple spreadsheet. That is presumably something Gulli Bot would help with? Or are you thinking about checking the validity of the arguments (your term ‘critical thinking’ suggests something like that)? Is the example you mention showing something like that?

Did you look at https://GulliBot.com/covid-vaccine? The discussion on that topic happens on Twitter, and I put that discussion into Gulli for analysis. In this process there is no voting. Gulli Bot makes its decision based on the claims that are provided, with each claim having an unlimited argument tree of pros and cons. This video walks through an example of a previous design; the YouTube captions seem to be accurate.

I read your simplified demonstration example, and we have quite a bit of overlap in concepts and goals. My process does not yet handle multiple proposals or the complete process. Gulli is closer to a transparent fact check than a multi-proposal decision process, although I hope to get there one day.

It took quite a while to read through your documents, and it would take quite a while to write up a comparison. I’m not sure that level of effort would be fruitful at the moment. If you are willing to participate, what I probably need most is participation in real-world examples like the vaccine one above. If you have any ongoing efforts with your process, I’d be happy to try to participate in them.

Thanks. I tried to see the Covid discussion but didn’t find it. It could be that the software doesn’t work on my old system (too old to accept more upgrades). I could see the video, and would be interested to see the reasoning behind it. (Or do I have to sign up for one of the monthly-fee programs to actually see the discussion?) I have tried to put all my articles and my blog up on sites for free because I think they address issues needing public discussion (though I don’t have the funds to develop e.g. the programming and marketing for them). So having to pay to contribute my ideas doesn’t sit right with me; I hope you understand.

Some comments: I have a problem with any system or algorithm that tries to decide whether an argument is a pro or a con, even when its author sees it as one or the other. The reason is that if an ‘evaluator’ – another person – disagrees with one or more of the (usually two or three) premises of an intended ‘pro’ argument, it turns into a ‘con’ for that person. Thus, an overall score must be derived from people’s judgments about the premises. An algorithm presuming to do that will be a party in the discussion about a controversial issue, not an independent referee.

Example: the argument that points out the $2M cost of the proposed solution in the video. The contractor and industries involved in implementing it would see that as a ‘pro’, wouldn’t they? And aren’t they entitled to that opinion? So the bot declaring that to be a ‘con’ ‘for everybody’ is, in my opinion, an unacceptable part of such systems, one that I try to eliminate. That does not mean there couldn’t be useful analysis algorithms helping, e.g., to identify contradictions in the discourse (of the kind that are only visible via a chain of inferences) that must be resolved – again, by the human participants. Clarifying the difference between what an algorithm can do and what must be left to human judgment is one of the big problems that should be discussed in a forum like this.

There are no fees associated with this. I do accept donations so the project can continue if people find it useful.

It is possible that some of the technology I am using requires modern browsers. Unfortunately, the cost of making it work on older browsers is prohibitive at this time.

The system does not determine what is a pro or a con, but it does record what people say and allows people to debate it. To leave that out of a system entirely is to leave out the core of the discussion, which leads to ambiguity and miscommunication.

The example you used leaves in several unspoken assumptions that should be added as people notice them. Instead of leaving out that the cost is a pro for the contractors and a con for the taxpayers, you put those claims in explicitly. People are not only entitled to their opinions but obligated to enter them explicitly; to not have them in the system would deprive people of expressing their opinions. The bot does not decide anything – it just records explicitly what it is told.

Example: A taxpayer expresses that the cost is a con that should be outweighed by the benefits. A contractor would clarify that it is not a con for everyone by adding “For taxpayers” to the previous claim, then add an additional claim that it will benefit the contractors and the economy. The algorithm simply tallies up the claims on which people are basing their opinions.

The algorithm does not override human judgment, but it does speed up interpretation of the claims, helps find where the claims are wrong, and lets human judgment explicitly change the claims the algorithm uses so that the result is correct.

“…disagrees with one or more of the (usually two or three) premises of an intended ‘pro’ argument, it turns into a ‘con’ for that person.”

Also, in the system you are to put in the premises (claims), which can turn a pro into a con automatically. Just leaving the premises in people’s heads is the ambiguity that encourages disagreement and confusion.

Maybe it would be beneficial to use the process. Here is a start:

Main Claim: "Using Gulli Bot to make group decisions will increase agreement"
- pro main: It speeds up understanding because people can see the confidence of the claims based on the reasons provided so far.
- con main: It declares whether a claim is a pro or a con when claims may be pro for some people and con for others.
-- pro main: In this case the claim should be edited to say who it is a pro for, and a new claim can be added for who it is a con for, thus reducing ambiguity.
- con main: It declares a claim a pro or a con when the same claim can be interpreted as a pro or a con based on the validity of its premises.
-- pro main: The premises (claims) should be added into the system and debated. The system can reverse a pro into a con if the tree of premises in the current debate pushes it negative.

Does that express our debate properly so far?
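As an aside, a claim tree like the one above can be scored mechanically. Here is a minimal sketch in Python; the `Claim` structure and the simple averaging rule are my assumptions for illustration, not Gulli Bot’s actual formula:

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    pros: list["Claim"] = field(default_factory=list)
    cons: list["Claim"] = field(default_factory=list)

    def score(self) -> float:
        """Confidence in this claim, in [0, 1].

        A claim with no reasons defaults to fully accepted (1.0).
        Otherwise the total score of the pros is weighed against
        the total score of the cons, recursively down the tree.
        """
        if not self.pros and not self.cons:
            return 1.0
        pro = sum(c.score() for c in self.pros)
        con = sum(c.score() for c in self.cons)
        return 1.0 if pro + con == 0 else pro / (pro + con)

# The debate tree sketched above, in miniature:
main = Claim(
    "Using Gulli Bot to make group decisions will increase agreement",
    pros=[Claim("It speeds up understanding")],
    cons=[
        Claim("It declares a claim pro or con when that may differ per person",
              pros=[Claim("Claims can be edited to say who they are pro for")]),
    ],
)
print(main.score())  # one unrebutted pro vs one supported con -> 0.5
```

Adding further pro or con children to any node shifts the scores of every claim above it, which is the "unlimited argument tree" behavior described earlier.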

Thanks for the attempt at demonstrating its use. I’m not sure the example can be seen as ‘our debate’ – I thought I mainly asked questions so far. The only claim–counterclaim is the one about the representation of an argument as ‘pro’ or ‘con’. My comment – “I have a problem with any system or algorithm that tries to decide whether an argument is a pro or a con, even when its author sees it as one or the other. The reason is that if an ‘evaluator’, another person, disagrees with one or more of the (usually two or three) premises of an intended ‘pro’ argument, it turns into a ‘con’ for that person,” etc. – wasn’t intended as an argument against Gulli Bot so much as against that detail of any system making such a declaration, which can easily be fixed.
Perhaps I should have explained the background for my concern in more detail (repeating what’s in my papers, which I assumed you had by now glanced at – an assumption for which I apologize).
I am primarily concerned with ‘planning arguments’ about plan proposals. Their basic structure can be described as follows:
“Plan A ought to be adopted” (‘Conclusion’), because:

  1. Adopting plan A will result in outcome B, given conditions C; (‘Factual-instrumental’ premise)
  2. Outcome B ought to be pursued; (‘Deontic’ or ‘ought’ premise)
  3. Conditions C are (or will be, when A is implemented) present. (‘Factual’ premise)

For participant p making that argument, this is obviously a ‘pro’ argument. But participant q may not be convinced that A will produce B. Or she does not agree with B as a goal. Or she doubts that conditions C will be given, or suspects there may be other conditions preventing A from becoming successful. For person q, any one of those reasons will turn that three-premise argument into a con argument. She could express that by simply adding a ‘not’ into any of those claims and the ‘conclusion’.
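The flip described here can be sketched in code: each evaluator combines their own plausibility judgments of the premises into a signed weight for the argument. The [-1, +1] scale and the `min()` combination rule are my illustrative assumptions, not the formalism from the papers:

```python
def argument_weight(premise_plausibilities: list[float],
                    importance: float = 1.0) -> float:
    """Combine one evaluator's premise judgments (-1 = certainly false,
    +1 = certainly true) into a signed argument weight: positive reads
    as 'pro', negative as 'con' for that evaluator. An argument is only
    as strong as its weakest premise, hence min()."""
    return min(premise_plausibilities) * importance

# Participant p accepts all three premises: the argument is a 'pro'.
p_weight = argument_weight([0.9, 0.8, 0.7])
# Participant q doubts premise 1 ("A will produce B"): the very same
# argument becomes a 'con' for q.
q_weight = argument_weight([-0.6, 0.8, 0.7])
print(p_weight, q_weight)  # positive for p, negative for q
```

The point is that 'pro' and 'con' fall out of each person's judgments rather than being a property attached to the argument itself.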

I perceive, and agree, that your process would recommend that participants express such arguments with all their premises. (Any additional claims offering further support for any of the premises would form a ‘new’, ‘successor’ argument, and a visual ‘map’ of the discourse should show them as such.)

If the group decides on a more ‘systematic’ evaluation – for example, each participant explaining how the ‘weight’ of each argument depends on that person’s confidence in the truth or plausibility of its premises – the ‘worksheet’ for doing so should not, in my opinion, present the argument as ‘pro’ or ‘con’ but simply state the claims for each participant to judge for themselves. It is possible, of course, to calculate statistics of these assessments for the whole group. They should NOT, in my opinion, be taken as a common ‘Group Judgment’ determining the decision without further analysis – especially if there are large differences (disagreements) in the premise assessments or in any overall group score. To reach a decision, the group can use those results in very different ways: they could take some ‘average’, or they could select, from a set of alternatives, a plan that takes care of the worst-off affected parties (maximizing the minimum scores), etc. There are many other such ‘decision indicators’ that could be used.

The usual ‘taking a vote’, or the facilitators declaring ‘consent’ (participants having gotten too tired to continue the discussion, so that “no more questions?” is taken for agreement), takes the principle of making decisions transparently, based on due and thorough consideration, somewhat lightly.
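The difference between these decision indicators is easy to show with made-up numbers. In this sketch, each list holds the overall plausibility score that each of three hypothetical participants assigns to a candidate plan; the plan names and scores are invented for illustration:

```python
scores = {
    "Plan A": [0.9, 0.8, -0.9],  # great for two participants, bad for one
    "Plan B": [0.3, 0.2, 0.1],   # modest for everyone
}

# Two different 'decision indicator statistics' over the same judgments:
mean = {plan: sum(s) / len(s) for plan, s in scores.items()}   # 'average'
worst = {plan: min(s) for plan, s in scores.items()}           # worst-off party

# An 'average' indicator prefers Plan A; a maximin indicator (maximize
# the minimum score, protecting the worst-off party) prefers Plan B.
print(max(mean, key=mean.get))    # Plan A
print(max(worst, key=worst.get))  # Plan B
```

The same set of judgments yields different decisions depending on the indicator, which is why the choice of indicator itself needs to be agreed on by the group.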

So my question is: where in this process – that could be done by the participants themselves, if necessary – would the Gulli bot make its contributions, and how?

  • Gulli Bot never makes contributions. It listens, represents the collection of information, and synthesizes it into a naïve analysis based on the information provided so far. Gulli Bot is a hierarchical spreadsheet calculating the weight of each claim based on the evidence/assumptions/claims provided so far.

  • “But participant q may not be convinced that A will produce B” – then they should be encouraged to express the reasons why in a way that will affect Gulli Bot’s analysis. Leaving it unexpressed and unanalyzed encourages division. In this way of expressing things, the participant wouldn’t express it as a con; they would express it as a false pro, and should be able to make that clear in the system by providing evidence. Thus the opinion and the reasons behind it are clear instead of being left open to misinterpretation.

  • I agree that it should not be taken as a common “Group Judgment” determining the decision without further analysis. Gulli Bot is just a representation of the current state of the argument.

  • It sounds like Gulli Bot is the “more systematic evaluation” that you refer to.

Fundamentally, the core of Gulli Bot is the debate about whether something is a pro or a con, so you can’t leave the opinion of its pro-ness or con-ness unexpressed in the system. You label it as pro or con, then the user agrees or disagrees through the system by adding reasons. If I am wrong about the value of expressing pros and cons, then this experiment fails, Gulli Bot fails, and we move on to the next experiment – but you cannot take that out and still have this system.

In this system people do not express the weight of each argument directly. Instead, they see how Gulli Bot has weighted each claim based on the evidence provided so far, and they are challenged to provide additional pro and con evidence to change the weight to match their intuition.

Reading through your reply, it seems to me that you have a base of terminology and assumptions that makes communication on this very difficult. I’m not sure I have the time to continue this line of thinking. I feel I understand your concerns, but I am not able to express how they are mitigated in this system in a way you would understand, because for any word I use you have a slightly different interpretation, and the level of effort needed to align our meanings is likely beyond the value to either of us. Once there are better working examples, it may be easier than over just text.

I have a youtube playlist of me trying to align with 5 other people on their ideas in this field and it has taken hundreds of hours so far. I don’t have the stamina to do that again at the moment.

You can follow the project, and when you feel confident about how it works you can ask specific questions; if I have time I can attempt to provide coherent answers. If I make a better explainer about the inner workings, I can share it with you.

Here is my transcript for the video which may help:

In this example of Reason Score, we imagine a fictional city deciding whether it would benefit overall from converting Elm Street to pedestrian-only use.
The circle above the statement indicates whether the city overall
benefits from converting Elm Street, is disadvantaged, or if it’s unknown,
based on the information provided so far.
Reason Score does not have any information except what you provide.
Reason Score accepts the claim as true because it has no other reasons to score.
We shouldn’t trust the score until we make sure it has all the correct information.
To start, we add that this will increase foot traffic to local shops by 15%.
Since the score is already maxed out, this pro does not affect the result.
Someone points out that the conversion will divert traffic down residential streets, endangering the lives of children.
With one pro and one con, the system adjusts the rating to the middle.
We know that child safety is more important than local shops’ profit, so we let Reason Score know by adding the reason it is more important.
Now the score shifts to recommending that the city not convert Elm Street.
The city realizes that a set of railroad tracks is no longer in use, and the city can convert that to a new street.
By adding that, the system notes that the traffic problem is cancelled out, so the score moves towards recommending the conversion.
We look to see if all the important information has been entered.
A con is that the conversion will cost 2 million dollars.
With that, we are back to unknown.
But is everything there?
The increase in revenue is expected to pay off the expense in under 2 years, meeting the city’s investment requirements.
That raises the score more towards converting the street.
Once we have everyone’s input, we can quickly explore the reasons and know what we can do together.
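The transcript's steps can be replayed as a back-of-envelope calculation. The unit weights and the pro/(pro + con) ratio below are my assumptions for illustration; the actual Reason Score formula is not given in the transcript:

```python
def score(pro_weight: float, con_weight: float) -> float:
    """1.0 = fully for converting, 0.0 = fully against, 0.5 = unknown.
    With no reasons at all, the claim is accepted as true (1.0)."""
    total = pro_weight + con_weight
    return 1.0 if total == 0 else pro_weight / total

# Each step adds or cancels a reason; weights are assumed unit values,
# with 'more important' modeled as doubling the con's weight.
steps = [
    ("claim alone, no reasons",            0.0, 0.0),  # 1.00
    ("+ pro: foot traffic up 15%",         1.0, 0.0),  # 1.00 (already maxed)
    ("+ con: traffic endangers children",  1.0, 1.0),  # 0.50 (the middle)
    ("safety weighted above shop profit",  1.0, 2.0),  # 0.33 (leans against)
    ("rail-line street cancels the con",   1.0, 0.0),  # 1.00 (leans for)
    ("+ con: $2M conversion cost",         1.0, 1.0),  # 0.50 (unknown again)
    ("+ pro: pays off in under 2 years",   2.0, 1.0),  # 0.67 (leans for)
]
for label, pro, con in steps:
    print(f"{label}: {score(pro, con):.2f}")
```

Each new reason moves the indicator, matching the circle shifts described in the video walkthrough.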

Has anyone started modeling critical thinking yet?

Yes. Critical thinking is a broad category, and it has been modelled in many different ways and in subsections. Some of it doesn’t really belong in a model. Can you be more specific? I don’t have a general list under that scope.