
AI and Human Thought and Emotion
- Length: 266 pages
- Edition: 1
- Language: English
- Publisher: Auerbach Publications
- Publication Date: 2019-08-20
- ISBN-10: 0367029294
- ISBN-13: 9780367029296
- Sales Rank: #9407427
The field of artificial intelligence (AI) has grown dramatically in recent decades from niche expert systems to the current myriad of deep machine learning applications that include personal assistants, natural-language interfaces, and medical, financial, and traffic management systems. This boom in AI engineering masks the fact that all current AI systems are based on two fundamental ideas: mathematics (logic and statistics, from the 19th century), and a grossly simplified understanding of biology (mainly neurons, as understood in 1943). This book explores other fundamental ideas that have the potential to make AI more anthropomorphic.
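The "neurons, as understood in 1943" mentioned above refers to the McCulloch-Pitts threshold unit: binary inputs, a fixed threshold, and an all-or-nothing output. The sketch below is only an illustration of that model, not code from the book; the function name and example wiring are ours.

```python
# A minimal sketch of a 1943 McCulloch-Pitts threshold neuron (illustrative only).
# Binary inputs are summed against a fixed threshold; any active inhibitory
# input vetoes the output entirely.

def mcp_neuron(inputs, threshold, inhibitory=()):
    """Return 1 iff no inhibitory input is active and the count of active
    excitatory inputs reaches the threshold."""
    if any(inputs[i] for i in inhibitory):
        return 0
    excitatory = [x for i, x in enumerate(inputs) if i not in inhibitory]
    return 1 if sum(excitatory) >= threshold else 0

# Logical AND and OR fall out of the same unit by varying the threshold.
assert mcp_neuron([1, 1], threshold=2) == 1   # AND
assert mcp_neuron([0, 1], threshold=2) == 0
assert mcp_neuron([0, 1], threshold=1) == 1   # OR
# An active inhibitory line suppresses firing regardless of excitation.
assert mcp_neuron([1, 1, 1], threshold=2, inhibitory=(2,)) == 0
```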
Most books on AI are technical and do not consider the humanities. Most books in the humanities treat technology in a similar manner. AI and Human Thought and Emotion, however, is about AI: how academics, researchers, scientists, and practitioners came to think about AI the way they do, and how they can think about it afresh with a humanities-based perspective. The book walks a middle line, sharing insights between the humanities and technology. It starts with philosophy and the history of ideas and goes all the way to usable algorithms.
Central to this work are the concepts of consciousness, which is accessible to humans as they reflect on their own experience, and introspection, the process by which that reflection is carried out. The main argument of this book is that basing AI on introspection and emotion can produce more human-like AI. To discover the connections among emotion, introspection, and AI, the book travels far from technology into the humanities and then returns with concrete examples of new algorithms. At times philosophical, historical, and technical, this exploration of human emotion and thinking poses questions and provides answers about the future of AI.
Table of Contents
Cover
Half Title
Title Page
Copyright Page
Contents
Author
0 Introduction
  0.1 Frustrations and Opportunities in AI Research
  0.2 Central Questions
  0.3 Structure of This Volume
  0.4 How to Read This Book
PART I: INTELLIGENCE IN COMPUTERS, HUMANS AND SOCIETIES
1 Artificial Intelligence as It Stands
  1.1 About AI
    1.1.1 AI’s Relation to Psychology, Cognitive Science, etc.
    1.1.2 What Are Intelligence, Consciousness, and Introspection
    1.1.3 Defining and Viewing AI
  1.2 First Approach: Logic and Mathematics
  1.3 Second Approach: Biological Inspiration
  1.4 A Half-Approach, and a Point or Two
  1.5 Watson
    1.5.1 Explicit Motivations
    1.5.2 Arguments against Introspection
    1.5.3 Interesting Points
    1.5.4 Watson – Summary
  1.6 Simon
    1.6.1 Economics
    1.6.2 Hostile to Subjectivity – Rationalistic
    1.6.3 Artificial Intelligence
    1.6.4 Against His Critics
    1.6.5 Flirting with Subjectivity
  1.7 AI as It Stands – Summary
2 Current Critiques of Artificial Intelligence
  2.1 Background: Phenomenology and Heidegger
    2.1.1 Phenomenology
    2.1.2 Heidegger
  2.2 The Cognition vs Phenomenology Debate
  2.3 Dreyfus
    2.3.1 Part I – Ten Years of Research in Artificial Intelligence (1957–1967)
    2.3.2 Part II – Assumptions Underlying Persistent Optimism
    2.3.3 Part III – Alternatives to the Traditional Assumptions
    2.3.4 Dreyfus’s Updated Position
  2.4 Winograd and Flores
    2.4.1 Cognition as a Biological Phenomenon
    2.4.2 Understanding and Being
    2.4.3 Language as Listening and Commitment
  2.5 Hermeneutics and Gadamer
    2.5.1 Hermeneutics
    2.5.2 The Hermeneutics of Heidegger and Gadamer
  2.6 AI’s Inadequate Response to Dreyfus and Other Critiques
  2.7 Locating This Project amongst Existing Thinkers
  2.8 Current Critiques of AI: Summary
3 Human Thinking: Anxiety and Pretence
  3.1 Individual Thinking
    3.1.1 Our Thinking Processes Are Embarrassing
    3.1.2 Anxiety, Pretence, Stories, and Comfort
    3.1.3 Can We Even Tell the Truth?
    3.1.4 Motivations
  3.2 Society’s Thinking
    3.2.1 Politics
    3.2.2 Social Perceptions of Science
    3.2.3 Interrelation of Politics and Science
    3.2.4 Distinct Disciplines and Education
    3.2.5 Education as Indoctrination
  3.3 Adapting to Social Norms
    3.3.1 Social Pressure – the Game of Life
    3.3.2 Conforming
    3.3.3 Escape to a Role, Arrogance
    3.3.4 Needs Must
  3.4 Relevance to AI
    3.4.1 Anxiety and Pretence Are Immediately Relevant to Thinking
    3.4.2 Implications for AI, a Rudimentary Human-Like Mind
    3.4.3 Meaning-for-Me vs Big Data
    3.4.4 Relevance to AI – the Future
  3.5 Human Thinking: Anxiety and Pretence: Summary
4 Prevailing Prejudices Pertaining to Artificial Intelligence
  4.1 A History of an Idea: Positivism
  4.2 Knowledge
    4.2.1 Truth Exists, Is Knowable, and Can Be Expressed in Language
    4.2.2 There Is Only One Truth System
    4.2.3 Kinds of Illumination
    4.2.4 Polarisation of Knowledge and Doubt
  4.3 Science
    4.3.1 The Scientific Clean Sweep
    4.3.2 Science Is Distinct from Magic or Religion
    4.3.3 The World Is Modular, Logical Atomism, Determinism
  4.4 “Wooly” vs “Rigorous” Thinking
    4.4.1 Secularisation
    4.4.2 Philosophy Is Seen as Bad
    4.4.3 Especially, Continental Philosophy Is Seen Negatively
  4.5 Humans and Minds
    4.5.1 The Human Mind Is a Natural Kind
    4.5.2 Humans Are Like Computers
    4.5.3 Lower and Higher Human Functions
    4.5.4 Humans Are Rational
  4.6 Other Worries about Religions
    4.6.1 Genesis
    4.6.2 Heresies
  4.7 Prejudices Pertaining to AI: Summary
PART II: AN ALTERNATIVE: AI, SUBJECTIVITY, AND INTROSPECTION
5 Central Argument Outline
  5.1 Context for Central Argument
    5.1.1 Science vs Technology and Human-Like vs Rational
    5.1.2 Philosophy of AI
    5.1.3 Philosophy of Technology
  5.2 Notions of Truth
    5.2.1 The Idea of a Single Truth
    5.2.2 Perspectivism
    5.2.3 Perspectives, Realities, Agendas, Occam
    5.2.4 In What Sense Is This Book True?
    5.2.5 Notions of Truth: Summary
  5.3 Outline of “Is Recommended for Developing”
    5.3.1 “Recommended”
    5.3.2 “For”
    5.3.3 “Developing”
  5.4 Central Argument Outline – Summary
6 Main Term: “Anthropic AI”
  6.1 Human vs Ideal/Rational
  6.2 Motivations for Human-Like AI
    6.2.1 Rational AI’s Interaction Is “Clunky”
    6.2.2 The Versatility of Human Intelligence
    6.2.3 Getting along with People
  6.3 Characteristics of Human-Like AI
  6.4 Human-Like vs Anthropic
  6.5 Perspectives and Levels in Human Modelling
    6.5.1 Are There Really Levels or Layers in the Mind/Brain?
    6.5.2 Multiple Levels of Discussion
    6.5.3 The Cognitive Level Is Problematic
    6.5.4 Simultaneous Multiple Levels in Computers
  6.6 Anthropic AI so far
  6.7 Knowing That vs Knowing How, and a Hint on Data Structure
  6.8 Metaphysical Non-problems
  6.9 Ethics
  6.10 Anthropic AI: Summary
7 Main Term: “Introspection”
  7.1 Studying Subjectivity
    7.1.1 Why Subjectivity?
    7.1.2 Locating Subjectivity
    7.1.3 What Is Subjectivity
    7.1.4 Subjectivity Can Be Studied
    7.1.5 Phenomenology, Hetero-Phenomenology
  7.2 Defining Introspection
  7.3 A Boundary between Introspection and Science Collapses
    7.3.1 “Thinking Aloud” (TA) Can Be Seen as Introspective
    7.3.2 Two Distinctions between TA and Introspection
    7.3.3 Inferences and Confusion
    7.3.4 Non-inferential Observation Is Impossible
    7.3.5 A Boundary between Introspection and Science Collapses: Conclusion
  7.4 What Kind of Introspection Is Recommended
  7.5 Main Term: “Introspection”: Summary
8 Introspection Is Legitimate
  8.1 Introspection as “Impossible”
  8.2 Introspection as “Forbidden”
    8.2.1 Watson
    8.2.2 Cognitive Psychology’s Attitude to Introspection
    8.2.3 Other Objections
    8.2.4 Contexts of Discovery and Justification
    8.2.5 Truth in Science vs Technology
    8.2.6 Example and Summary of “Introspection Is Forbidden”
  8.3 Introspection as “Commonplace”
    8.3.1 Sweeping Testimony
    8.3.2 Specific Apparent Cases
    8.3.3 Mainstream Cognitive Science Uses Introspection
    8.3.4 Introspection Is “Commonplace”: Summary
  8.4 Introspection as “Desirable”
    8.4.1 Introspection and Phenomenology
    8.4.2 The Neisser–Dreyfus Debate
    8.4.3 Introspection vs Phenomenology
  8.5 Introspection as “Unavoidable”
  8.6 A Hybrid Position
  8.7 Types of Truth in Introspection
  8.8 Introspection Is Legitimate: Summary
9 Introspection Is Likely to Be Profitable
  9.1 Conceptual Arguments
  9.2 An Argument from Education
    9.2.1 Skill Questions
    9.2.2 Teaching Skills
    9.2.3 Self-Observations
    9.2.4 Mental Self-Observation Is Introspection
    9.2.5 Examples of Mental Skills Being Transmitted by Introspection
    9.2.6 Skills Only Part-Acquired by Explicit Instruction
    9.2.7 An Argument from Education: Summary
  9.3 Programming Impossible without Introspection
    9.3.1 Role-Playing
    9.3.2 Programming Is Introspective
    9.3.3 If So, What Is the Point of This Book?
  9.4 Introspection Is Likely to Be Profitable: Summary
PART III: GETTING PRACTICAL
10 Details and How to Use Introspection for Artificial Intelligence
  10.1 Definitions and Delineations
    10.1.1 Definition for “AI Based on Introspection”
    10.1.2 Non-human-Like Inspirations
      10.1.2.1 Genetic Algorithms (Twice)
      10.1.2.2 Neural Nets
    10.1.3 Human-Like Inspirations (Non-introspective)
    10.1.4 Types of Introspection for AI
  10.2 The Process of Introspection for AI
  10.3 Comments on the Process of Introspection for AI
    10.3.1 Introspection Is a Witness Account
    10.3.2 Looking/Listening For
    10.3.3 Pollution
    10.3.4 Introspection: Is It Above or Below the Culture Line?
    10.3.5 Interpolation and Approximation
      10.3.5.1 The Holes in Introspection
      10.3.5.2 Opportunistic Approximation
      10.3.5.3 Analogue Cannot Arise Out of Digital
      10.3.5.4 Being Analogue Does Not Mean It Is Not Digital
    10.3.6 Multiple Iterations, Multiple Mechanisms
    10.3.7 Personnel
  10.4 Project Expectations
  10.5 Testing and Evaluation
11 Examples
  11.1 Fuzzy Logic
  11.2 Case-Based Reasoning
  11.3 AIF0
    11.3.1 Introspection
    11.3.2 Implementation
    11.3.3 Example Run, Statistics
    11.3.4 Discussion
      11.3.4.1 Details and Parameters
      11.3.4.2 Why This Is More Anthropic
      11.3.4.3 Similarity
  11.4 AIF1
12 A More Sophisticated Example
  12.1 Introspection
  12.2 Introspective Model
  12.3 Software Design
    12.3.1 Preliminary: Sequences in Software
    12.3.2 A Novel Data Type
    12.3.3 Decision Process
    12.3.4 More Details of AIF2’s Implementation
    12.3.5 Dynamics of the Sequences Table
    12.3.6 Initial Conditions and Decisions
    12.3.7 Further Parameters
  12.4 AIF2 Example Runs
    12.4.1 Learn 1
    12.4.2 Learn 2
    12.4.3 Learn 3
  12.5 Discussion of AIF2
  12.6 Consequences of the Examples
    12.6.1 AIF Is More Like CBR Than Like Reinforcement Learning
    12.6.2 The “Sequence” Data Type
    12.6.3 Dynamic Symbols
    12.6.4 How AIF2 Is Gadamerian
13 Summary, Consequences, Conclusion
  13.1 Summary
  13.2 Future Technical Work
  13.3 Possible Consequences for Cognitive Science
    13.3.1 Models for Scientific Psychology
    13.3.2 A Response to Dreyfus’s Critique of AI
    13.3.3 Natural Language Processing
    13.3.4 Cognitive Models
  13.4 “Underpinning” Models in Philosophy
    13.4.1 Wittgenstein’s “Seeing As”
    13.4.2 Gadamer
    13.4.3 Dreyfus’s Demands from AI
    13.4.4 Wheeler’s Action-Oriented Representations
    13.4.5 Adhyasa/Superimposition
  13.5 Open Questions
    13.5.1 Dilthey vs Gadamer
    13.5.2 Further Unexplored Terrain
  13.6 Conclusion
Bibliography
Index