Research Article

Trusting: Alone and together

Received 03 Mar 2023, Accepted 30 Jan 2024, Published online: 23 May 2024

Figures & data

Table 1. Modelling decisions in selected social network learning literature.

Figure 1. (a) The single-agent learner model illustrated conceptually and (b) two example paths of the model dynamics in the random walk interpretation. B(ϑ) denotes the Bernoulli distribution with parameter ϑ.

Table 2. Sample paths and agent beliefs as random walks.
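Table 2 casts agent beliefs as random walks. A minimal sketch of the standard conjugate Beta–Bernoulli update behind that view is below; the Beta(2, 2) prior and the outcome sequence are illustrative assumptions, not values taken from the paper.

```python
def update(alpha, beta, outcome):
    """One conjugate Beta-Bernoulli step: a success increments alpha,
    a failure increments beta."""
    return (alpha + 1, beta) if outcome else (alpha, beta + 1)

def posterior_mean(alpha, beta):
    """Posterior mean estimate of the Bernoulli parameter."""
    return alpha / (alpha + beta)

# An illustrative sample path from a Beta(2, 2) prior: the posterior
# mean moves up on successes and down on failures, so the belief
# itself traces a random walk driven by the observed outcomes.
a, b = 2, 2
for outcome in [1, 0, 1, 1]:
    a, b = update(a, b, outcome)
print(a, b, posterior_mean(a, b))  # → 5 3 0.625
```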

Figure 2. The probability of a single agent quitting pquit(α,β,c,r,ϑ) plotted against ϑ for different values of r while α=β=2 and c=1. Analytical results are plotted as lines, while simulated results (4 000 iterations) are shown as points with 95% confidence intervals.
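The simulated points in Figure 2 come from Monte Carlo runs. A minimal sketch of such an estimator is below; the specific stopping rule used here — quit as soon as the posterior mean payoff rϑ̂ − c(1 − ϑ̂) turns negative — is an assumption for illustration, not necessarily the paper's exact quitting rule.

```python
import random

def simulate_quit_prob(alpha, beta, c, r, theta, n_iter=4000, max_t=1000, seed=0):
    """Monte Carlo estimate of the probability that the agent quits.

    Hypothetical quitting rule (an assumption): after each Bernoulli(theta)
    outcome the agent updates a Beta(alpha, beta) posterior and quits once
    the posterior mean payoff r * mean - c * (1 - mean) falls below zero.
    """
    rng = random.Random(seed)
    quits = 0
    for _ in range(n_iter):
        a, b = alpha, beta
        for _ in range(max_t):
            if rng.random() < theta:   # success: reward r observed
                a += 1
            else:                      # failure: cost c incurred
                b += 1
            mean = a / (a + b)         # posterior mean of theta
            if r * mean - c * (1 - mean) < 0:
                quits += 1
                break
    return quits / n_iter
```

Sweeping `theta` over a grid and repeating for each value of `r` reproduces the shape of a figure like this one under the assumed rule.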

Figure 3. Expected time in the system conditioned on the agent quitting at some t<∞ plotted against ϑ for different values of r while α=β=2 and c=1 on a log-linear axis. Analytical results are plotted as lines, while simulated results are shown as points with 95% confidence intervals.

Figure 4. The probability of a single agent quitting plotted against ϑ for different values of c while α=3,β=1 and r=1. Analytical results are plotted as lines, while simulated results (4 000 iterations) are shown as points with 95% confidence intervals.

Figure 5. Expected time in the system conditioned on the agent quitting at some t<∞ plotted against ϑ for different values of c while α=3,β=1 and r=1. Analytical results are plotted as lines, while simulated results are shown as points with 95% confidence intervals.

Figure 6. The observable rewards model of learning.

Figure 7. The observable actions model of learning.

Figure 8. An illustration of the weighting wx(t) used in the interpretation of the observed action.

Figure 9. The random walk interpretation of the OR model as well as the individual sample paths of the respective agents.

Table 3. Estimates ϑ̂t in the respective models.

Figure 10. Simulation results for c=r=1 in the u∗=1 case.

Figure 11. Simulation results for c=r=1 in the u∗=3 case.

Figure 12. Simulation results for c=1 and r=2 in the u∗=1 case.

Figure 13. Simulation results for c=1 and r=2 in the u∗=3 case.

Figure 14. Simulation results for c=2 and r=1 in the u∗=1 case.

Figure 15. Simulation results for c=2 and r=1 in the u∗=3 case.

Figure 16. The results of extra simulation runs for the probability of quitting. In these runs c=3, r=2, and u∗=2.

Table 4. Two random walks.

Table 5. Probability of quitting and the expected time to quitting in the single agent model (Sol), the observable actions model (OA) and the observable rewards model (OR). Parameters α and β set such that u∗=1.

Table 6. Probability of quitting and the expected time to quitting in the single agent model (Sol), the observable actions model (OA) and the observable rewards model (OR). Parameters α and β set such that u∗=2.

Table 7. Probability of quitting and the expected time to quitting in the single agent model (Sol), the observable actions model (OA) and the observable rewards model (OR). Parameters α and β set such that u∗=3.