Sunday, January 21, 2018
Wednesday, January 17, 2018
- Ted Chiang is a science fiction writer specializing in short stories. When I read his collection Stories of Your Life and Others I said to myself "wow, this guy is almost better than Greg Egan" (but let me withhold final judgement on that comparison). The book opens with Tower of Babylon, which explores a beautiful alternative cosmology more in line with what people believed in ancient times, and continues with Understand, which, albeit lacking somewhat in realism, gives what is probably the best account I've read on what it might be like to attain superintelligence - an impossible topic, yet important in view of possible future transhumanistic developments. Among the other stories in the book is the title one, Story of Your Life, which was later adapted to the Hollywood movie Arrival; I recommend both reading the story and seeing the movie (the plots diverge somewhat in interesting respects) and then listening to the podcast Very Bad Wizards discussing them.
- Scott Alexander blogs about science, philosophy, future technologies and related topics. He often penetrates quite deeply into his chosen topic, and his posts are often longish to very long. Several of his blog posts have influenced me significantly, such as...
- Meditations on Moloch, which is an original, wide-ranging and important discussion of the long-term future of humanity that comes out so pessimistically that, although I am deeply impressed by it, I still badly wish to see it debunked,
- Book Review: Age of Em which is the best critical comment I've seen on Robin Hanson's seminal 2016 futurology book,
- Book Review: Inadequate Equilibria which gives such an instructive account of Eliezer Yudkowsky's recent and perhaps equally important book that I am (at least for the time being) prepared to accept the review as a substitute for reading (anything beyond Chapters 1-3 in) the book,
- and Contra Grant on Exaggerated Differences which offers a (much-needed) level-headed discussion on the underlying reasons behind gender imbalances in various STEM fields.
Saturday, January 13, 2018
- Scientists have an obligation to be involved, says Tegmark, because the risks are unlike any the world has faced before. Every time new technologies emerged in the past, he points out, humanity waited until their risks were apparent before learning to curtail them. Fire killed people and destroyed cities, so humans invented fire extinguishers and flame retardants. With automobiles came traffic deaths—and then seat belts and airbags. "Humanity's strategy is to learn from mistakes," Tegmark says. "When the end of the world is at stake, that is a terrible strategy."
Friday, January 5, 2018
- Emma Frans is often first among Swedish-language writers with exciting news about behavioral science and other research, and if anyone wants to name Emmas selektion Sweden's best science blog right now, I have no objection.
- The association Vetenskap och Folkbildning (VoF) has named Emma Frans, PhD in medical epidemiology, Public Educator of the Year 2017 (Årets folkbildare). Emma Frans receives the award for her ability to spread knowledge and dispel myths and misunderstandings about science in a pedagogical and humorous way.
– Emma Frans has attracted a great deal of attention over the past year, and she deserves it. She is the first public educator to have begun her work on social media, and that is still where her strongest platform lies. Social media has proven to be the foremost channel of our time for spreading misinformation – source criticism and public education are needed there more than anywhere else, says Peter Olausson, chairman of Vetenskap och Folkbildning.
– But mastering a single channel is not enough. Emma Frans is active on Twitter, in a major newspaper, and now also in book form. This is how a modern public educator needs to work: in different channels with different possibilities and, above all, different audiences.
In 2017, Dr. Emma Frans published the book Larmrapporten - Att skilja vetenskap från trams with the publisher Volante. In the book, Frans gives a pedagogical account of how we as individuals can navigate the jungle of information that surrounds us every day. Concepts such as the placebo effect and cherry picking are described with the help of amusing anecdotes from everyday life. Sections on source criticism, information gathering and statistics are also covered with clear examples, often drawn from real life.
Because Emma Frans expresses herself in easily understood terms, often with a generous dose of humor and without compromising the scientific approach, she is a good public educator for the digital age.
Saturday, December 23, 2017
Anonymous sources at the Department of Health and Human Services told the National Review's Yuval Levin this week that any language changes did not originate with political appointees, but instead came from career CDC officials who were strategizing how best to frame their upcoming budget request to Congress. What we're seeing, his interviews suggest, is not a top-down effort to stamp out certain public-health initiatives, like those that aim to help the LGBTQ community, but, in fact, the opposite: a bottom-up attempt by lifers in the agency to reframe (and thus preserve) the very work they suspect may be in the greatest danger.
- Most of the power of authoritarianism is freely given. In times like these, individuals think ahead about what a more repressive government will want, and then offer themselves without being asked. A citizen who adapts in this way is teaching power what it can do.
Friday, December 22, 2017
- O. Häggström: Remarks on artificial intelligence and rational optimism, accepted for publication in a volume dedicated to the STOA meeting of October 19.
Introduction. The future of artificial intelligence (AI) and its impact on humanity is an important topic. It was treated in a panel discussion hosted by the EU Parliament’s STOA (Science and Technology Options Assessment) committee in Brussels on October 19, 2017. Steven Pinker served as the meeting’s main speaker, with Peter Bentley, Miles Brundage, Thomas Metzinger and myself as additional panelists; see the video at [STOA]. This essay is based on my preparations for that event, together with some reflections (partly recycled from my blog post [H17]) on what was said by other panelists at the meeting.
- O. Häggström: Aspects of mind uploading, submitted for publication.
Abstract. Mind uploading is the hypothetical future technology of transferring human minds to computer hardware using whole-brain emulation. After a brief review of the technological prospects for mind uploading, a range of philosophical and ethical aspects of the technology are reviewed. These include questions about whether uploads will have consciousness and whether uploading will preserve personal identity, as well as what impact on society a working uploading technology is likely to have and whether these impacts are desirable. The issue of whether we ought to move forwards towards uploading technology remains as unclear as ever.
- O. Häggström: Strategies for an unfriendly oracle AI with reset button, in Artificial Intelligence Safety and Security (ed. Roman Yampolskiy), CRC Press, to appear.
Abstract. Developing a superintelligent AI might be very dangerous if it turns out to be unfriendly, in the sense of having goals and values that are not well-aligned with human values. One well-known idea for how to handle such danger is to keep the AI boxed in and unable to influence the world outside the box other than through a narrow and carefully controlled channel, until it has been deemed safe. Here we consider the special case, proposed by Toby Ord, of an oracle AI with reset button: an AI whose only available action is to answer yes/no questions from us and which is reset after every answer. Is there a way for the AI under such circumstances to smuggle out a dangerous message that might help it escape the box or otherwise influence the world for its unfriendly purposes? Some strategies are discussed, along with possible countermeasures by human safety administrators. In principle it may be doable for the AI, but whether it can be done in practice remains unclear, and depends on subtle issues concerning how the AI can conceal that it is giving us dishonest answers.
Friday, December 15, 2017
- [F]ear mongering now about possible Terminator scenarios is a bit like saying in the mid 19th century that the automobile will destroy humanity because, although we might someday figure out how to build internal combustion engines, we have no idea how to build brakes and safety belts, and we should be very, very worried.
- [t]he emergence of human-level AI will not be a singular event (as in many Hollywood scenarios). It will be progressive over many, many years. I'd love to believe that there is a single principle and recipe for human-level AI (it would make my research program a lot easier). But the reality is always more complicated. Even if there is a small number of simple principles, it will take decades of work to actually reduce it to practice.