Tuesday 20 September 2022

Building robust AI systems: Is an artificially intelligent agent just a probabilistic Boolean function?


Preamble
    George Boole (Wikipedia)

The terms agent, AI agent, and intelligent agent are often used to describe algorithms or AI systems recently released by research teams. However, the definition of an intelligent agent (IA) is a bit opaque. Naïvely, it is nothing more than a decision maker that shows some intelligent behaviour. However, making a decision intelligently is hard to quantify computationally, and for our purposes an IA is probably something that can be represented as a Turing machine. Here, we argue that an intelligent agent in current AI systems should be seen as a function without side effects producing a Boolean output, and shouldn't be extrapolated or compared to human-level intelligence. Causal inference capabilities should be seen as scientific guidance for this decomposition into functions without side effects, i.e., human-in-the-loop Probabilistic Boolean Functions (PBFs).
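One way to state this claim formally (a sketch of the intended definition, not a standard textbook one): an intelligent agent is a map

$$f: \{0,1\}^{n} \longrightarrow \{0,1\}, \qquad \mathbb{P}\left(f(x)=1\right)=p(x),$$

where the output depends only on the input $x$ and an explicit source of randomness, i.e., the function has no side effects.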

Computational learning theories are based on binary learners

Two of the major theories of statistical learning, PAC learning and the VC dimension, build upon "binary learning".

PAC stands for Probably Approximately Correct. It sets out a basic framework and the mathematical building blocks for defining a machine learning problem from the standpoint of complexity theory. Being probably approximately correct implies finding a weak learning function over the binary instance set $X=\{0,1\}^{n}$. Subsets of this binary set are mathematically called concepts, and under certain mathematical conditions a system is said to be PAC learnable. There are equivalences to the VC dimension and other computational learning frameworks.
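To make the binary setting concrete, here is a minimal runnable sketch of the classical PAC learner for monotone conjunctions over $X=\{0,1\}^{n}$; the hidden concept and all names below are illustrative assumptions, not taken from the text.

import random

def sample(n):
    """Draw a uniform instance from the binary instance set X = {0,1}^n."""
    return tuple(random.randint(0, 1) for _ in range(n))

def concept(x, literals):
    """A concept: here a monotone conjunction over the index set `literals`."""
    return all(x[i] == 1 for i in literals)

def pac_learn_conjunction(n, m, hidden):
    """Classical PAC learner for monotone conjunctions: start with all
    variables and drop any variable that is 0 in some positive example."""
    hypothesis = set(range(n))
    for _ in range(m):
        x = sample(n)
        if concept(x, hidden):
            hypothesis = {i for i in hypothesis if x[i] == 1}
    return hypothesis

random.seed(7)
hidden = {0, 3, 7}                       # hypothetical hidden concept c
h = pac_learn_conjunction(n=10, m=500, hidden=hidden)
test = [sample(10) for _ in range(10_000)]
error = sum(concept(x, hidden) != concept(x, h) for x in test) / len(test)
print(f"learned: {sorted(h)}, empirical error: {error:.4f}")

With enough samples $m$, the empirical error falls below any $\epsilon$ with probability at least $1-\delta$, which is exactly the "probably approximately correct" guarantee.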

Robust AI systems: Deep reinforcement learning and PAC

Even though a theory of learning for deep (reinforcement) learning is not yet established and remains an active area of research, there is an intimate connection with the composition of concepts, i.e., binary instance subsets, since almost all operations within deep RL can be viewed as probabilistic Boolean functions (PBFs).
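As a minimal illustration of this reading (the single sigmoid unit and its weights are hypothetical, standing in for one operation inside a deep RL policy, not any specific system):

import math
import random

def pbf(x, weights, bias, rng):
    """A probabilistic Boolean function: a side-effect-free map from a
    binary input x in {0,1}^n, fixed parameters, and an explicit random
    source to a Boolean output in {0,1}."""
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    p = 1.0 / (1.0 + math.exp(-z))       # sigmoid activation
    return 1 if rng.random() < p else 0  # Bernoulli draw, e.g. a binary action

rng = random.Random(0)                   # randomness passed in, not hidden state
weights, bias = [1.5, -2.0, 0.5], -0.25  # hypothetical learned parameters
print(pbf((1, 0, 1), weights, bias, rng))

Making the random source an explicit argument is what keeps the function free of side effects; a deep RL policy with a thresholded or sampled output can then be read as a composition of such maps.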

Conclusion 

Current research and practice in robust AI systems could focus on producing learnable probabilistic Boolean functions (PBFs) as intelligent agents, rather than aiming at human-level intelligent agents. This modest purpose might bear more practical fruit than the long-term aim of replacing human intelligence. Moreover, the theory of computation for deep learning and causality could benefit from this approach.



(c) Copyright 2008-2024 Mehmet Suzen (suzen at acm dot org)

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.