(Audio book)
Algorithms sometimes decide things based on insufficient data, and that can cause life-changing problems.
Everyone has mental models, which are simplifications of reality. They often work but sometimes fail because they don't take everything into account. They need to be constantly updated or they become stale.
Racism is an example of a flawed model: overgeneralization, not seeking information that could overturn it, and confirmation bias.
Need clear, understandable models based on the actual situation rather than proxies, updated frequently in both data and assumptions.
Models can create feedback loops that prove themselves right (e.g. a high recidivism risk score leads to more jail time, which leads to more recidivism, which confirms the score).
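A minimal sketch of that loop (my own illustration with made-up policy functions and parameters, not anything from the book):

```python
# Hypothetical feedback loop: the risk score drives the sentence,
# the sentence worsens reoffense odds, and the outcome feeds back
# into the next score. All numbers here are arbitrary assumptions.

def sentence_months(risk_score: float) -> float:
    # Assumed policy: longer sentence for a higher risk score.
    return 12 + 48 * risk_score

def reoffense_probability(months_served: float) -> float:
    # Assumed effect: longer incarceration raises reoffense odds.
    return min(0.9, 0.2 + 0.01 * months_served)

score = 0.5  # starting risk score for a hypothetical person
for cycle in range(5):
    months = sentence_months(score)
    p = reoffense_probability(months)
    # The model "learns" from the outcome it helped cause:
    score = min(1.0, 0.5 * score + 0.5 * p)
    print(f"cycle {cycle}: sentence={months:.0f}mo, "
          f"P(reoffend)={p:.2f}, next score={score:.2f}")
```

Each pass nudges the score up, and the model reads its own side effects as validation.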
WMD (Weapon of Math Destruction) – defined by opacity, scale, and damage.
Used in many areas (finance, jail terms, university rankings, etc.).
Sometimes the rankings are feedback loops or self-fulfilling prophecies.
The variables/proxies selected can become the goal instead of the actual goal – i.e. people strive to improve their ranking, not necessarily to be better.
Some industries are focused on preying on people's vulnerabilities.
E.g. some college admissions offices target vulnerable groups, rope them into loans, and spend less money on their education than on recruiting more students. Administrators make millions while the students are stuck with debt and a useless degree.
Scheduling algorithms are focused on revenue and efficiency rather than the human aspect.
Teacher evaluation models have very high variability due to small sample sizes; a change in student composition makes a big difference.
Using a secondary model (i.e. predicting with a model what students should score and comparing with the actual results) is also unreliable.
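A minimal sketch of why the small samples matter (invented numbers, not the actual value-added formula): the same teacher, rescored on fresh draws of ~25 students, lands all over the map.

```python
import random

random.seed(0)
TRUE_EFFECT = 0.1    # assumed true teacher effect on test-score gains
STUDENT_NOISE = 1.0  # assumed student-to-student variation in gains
CLASS_SIZE = 25

yearly_scores = []
for year in range(10):
    # Each year's score is just the average gain of one small class.
    gains = [TRUE_EFFECT + random.gauss(0, STUDENT_NOISE)
             for _ in range(CLASS_SIZE)]
    yearly_scores.append(sum(gains) / CLASS_SIZE)

print([round(s, 2) for s in yearly_scores])
# The class averages swing by several tenths: the sampling noise
# (sd ~ 1.0 / sqrt(25) = 0.2) is twice the assumed true effect of 0.1.
```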
Credit scores have clear attributes: the inputs are knowable, errors can be fixed, and they are continually updated.
Proxy scores, by contrast, might not be representative: they group people together, can judge on unfair factors, and can lead to feedback loops.
Incorrect links or outdated info in credit scores or background checks can cause big problems. Humans sometimes need to filter that data (e.g. you are not that convicted criminal if you don't fit the description, even with the same name), but often people just look at the results and accept them, and it is hard to get errors corrected.
Subgrouping occurs in things like car insurance: in some cases credit scores can significantly change car insurance rates, even though the proxy is not relevant to actual driving risk.
Insurance tracking with on-board sensors may disadvantage those in poorer neighborhoods: longer commutes, driving through riskier areas, and driving patterns at certain hours of the day.
People are grouped into buckets for these calculations. In some cases the grouping is done by algorithms with no real explanation of why people end up together – essentially a black box deciding which group you are part of.
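A minimal sketch of that kind of opaque bucketing (invented feature names and data; k-means stands in for whatever the vendor actually runs): the algorithm hands each person a cluster label, and the label carries no human-readable rationale.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Columns: credit_score, zip_income, browsing_hours - assumed proxies.
people = rng.normal(loc=[650, 50_000, 3],
                    scale=[80, 15_000, 2],
                    size=(200, 3))

# In practice features would be scaled first; skipped to keep this short.
buckets = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(people)
print(buckets[:10])  # e.g. [2 0 0 3 ...] - group labels with no explanation attached
```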
Health scores are starting to be a factor for employees: you pay additional health insurance fees if you don't meet the requirements.
Most of the time these programs use bad evaluators, and in some cases the flagged conditions don't actually cost more while people are employed, only later in life (i.e. the fees are a way to reduce pay and recover costs beyond the actual costs).
There is potential for a WMD in social networks showing content that may sway populations, votes, etc. It may not exist yet, but the potential is there.
Profiling occurs with data gathered from some of these networks, used to target certain voters with certain ad campaigns.
Targeted advertising is based on behaviours (often on the internet).
A lot of misleading advertising or incorrect facts get targeted at receptive audiences. People may believe them and take action (e.g. claims that Obama is Muslim and wasn't born in the US).
WMDs essentially abuse the poor: they extract more money from them and block them from opportunities.
It's difficult to get out of the loop, and the poor have little political influence. People blame the poor for being poor, not knowing that the WMDs that advantage them disadvantage the poor.