Thanks for the attempt at demonstrating its use. I'm not sure the example can be seen as 'our debate'; I thought I had mainly asked questions so far. The only claim-counterclaim exchange is the one about representing an argument as 'pro' or 'con'. My comment was: "I have a problem with any system or algorithm that tries to decide whether an argument is a pro or a con, even when its author sees it as one or the other. The reason is that if an 'evaluator', another person, disagrees with one or more of the (usually two or three) premises of an intended 'pro' argument, it turns into a 'con' for that person," etc. It was not intended as an argument against the Gulli bot so much as against that detail, in any system, of making such a declaration, a detail which can easily be fixed.
Perhaps I should explain the background of my concern in more detail (repeating what is in my papers, which I had assumed you had by now glanced at, an assumption for which I apologize).
I am primarily concerned with ‘planning arguments’ about plan proposals. Their basic structure can be described as follows:
"Plan A ought to be adopted (‘Conclusion’)
because
- Adopting plan A will result in outcome B given conditions C; ('Factual-Instrumental premise)
and
- Outcome B ought to be pursued; (‘Deontic’ or ‘ought’-premise)
and
- Conditions C are (or will be , when A is implemented) present." (Factual premise).
For participant p making that argument, it is obviously a 'pro' argument. But participant q may not be convinced that A will produce B, may not agree with B as a goal, may doubt that conditions C will be present, or may believe that other conditions will prevent A from succeeding. For person q, any one of those reasons turns that three-premise argument into a 'con' argument. She could express that by simply inserting a 'not' (or a minus sign) into any of those claims and into the 'conclusion'.
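To make that concrete in a form a support system could work with, here is a minimal sketch. The names and the -1 to +1 plausibility scale are my own assumptions for illustration, not the Gulli bot's actual representation, and the product rule is only one possible way to combine premise judgments. The point is that the premises are stored as neutral claims with no built-in 'pro' or 'con' label; each evaluator assigns their own plausibility to each premise, and the argument's direction emerges only per evaluator.

    # Premises stored as neutral claims, with no built-in 'pro'/'con' label.
    argument = {
        "conclusion": "Plan A ought to be adopted",
        "premises": {
            "factual_instrumental": "Adopting A will result in B, given conditions C",
            "deontic": "Outcome B ought to be pursued",
            "factual": "Conditions C are (or will be) present",
        },
    }

    def argument_weight(plausibilities):
        """One possible aggregation: the product of the premise plausibility
        judgments, each on a -1..+1 scale. A single negative judgment
        flips the sign of the whole argument for that evaluator."""
        weight = 1.0
        for value in plausibilities.values():
            weight *= value
        return weight

    # Participant p (the author) finds all three premises plausible:
    p_view = {"factual_instrumental": 0.8, "deontic": 0.9, "factual": 0.7}
    # Participant q doubts that outcome B ought to be pursued:
    q_view = {"factual_instrumental": 0.8, "deontic": -0.5, "factual": 0.7}

    print(argument_weight(p_view))  # > 0: the argument works as a 'pro' for p
    print(argument_weight(q_view))  # < 0: the same argument is a 'con' for q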
I understand, and agree, that your process would include recommendations for participants to express such an argument with all its premises. (Any additional claims offering further support for any of the premises would constitute a 'new', 'successor' argument, and in a visual 'map' of the discourse should be shown as such.)
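A hypothetical sketch of how such 'successor' arguments might be recorded in a discourse map follows; the keys and labels are my invention for illustration. A further claim offered in support of a premise becomes a new entry pointing at that premise, rather than being folded into the original argument.

    discourse_map = {
        "ARG-1": {
            "conclusion": "Plan A ought to be adopted",
            "premises": ["Adopting A will result in B, given conditions C",
                         "Outcome B ought to be pursued",
                         "Conditions C will be present"],
            "supports": "PROPOSAL: Plan A",
        },
        # A further claim backing one premise is a new, 'successor' argument,
        # shown in the map as pointing at that premise of ARG-1:
        "ARG-2": {
            "conclusion": "Outcome B ought to be pursued",
            "premises": ["Outcome B serves goal G",
                         "Goal G ought to be pursued"],
            "supports": ("ARG-1", "Outcome B ought to be pursued"),
        },
    }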
If the group decides on a more 'systematic' evaluation, for example by each participant expressing how the 'weight' of each argument depends on that person's confidence in the truth or plausibility of its premises, the 'worksheet' for doing so should not, in my opinion, present the argument as 'pro' or 'con', but simply state the claims for each participant to judge for themselves. It is possible, of course, to calculate statistics of these assessments for the whole group, but they should NOT, in my opinion, be taken as a common 'Group Judgment' that determines the decision without further analysis, especially if there are large differences (disagreements) in the premise assessments and in any overall group score. To reach a decision, the group can use those results in very different ways: they could take some 'average', or they could select, from a set of alternatives, the plan that takes care of the worst-off affected parties (maximizing the minimum scores), etc. There are many other such 'decision indicators' that could be used.
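To illustrate how different 'decision indicators' can point to different plans, here is a small sketch; the numbers are made up, and neither rule is being recommended here.

    # Hypothetical overall plausibility scores for three candidate plans,
    # one score per participant, on an assumed -1..+1 scale.
    scores = {
        "Plan A": [0.9, 0.8, -0.6],  # high average, but one strong objection
        "Plan B": [0.4, 0.3, 0.2],   # modest but broadly acceptable
        "Plan C": [0.6, -0.1, 0.5],
    }

    def average_indicator(values):
        return sum(values) / len(values)

    def maximin_indicator(values):
        # Attend to the worst-off assessment: the minimum score.
        return min(values)

    print(max(scores, key=lambda k: average_indicator(scores[k])))  # Plan A
    print(max(scores, key=lambda k: maximin_indicator(scores[k])))  # Plan B

The averaging rule picks Plan A despite one participant's strong objection; the maximin rule picks Plan B, the plan whose worst assessment is least bad.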
The usual 'taking a vote', or the facilitator's declaring 'consent' (participants having become too tired to continue the discussion, so that the offer of 'no more questions?' is taken as agreement), takes the principle of making decisions transparently, 'based on due and thorough consideration', somewhat lightly.
So my question is: where in this process, which could be carried out by the participants themselves if necessary, would the Gulli bot make its contributions, and how?