<!DOCTYPE HTML>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="X-UA-Compatible" content="IE=Edge">
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<meta name="viewport"
content="width=device-width, initial-scale=1.0, minimum-scale=1.0, maximum-scale=1.0, user-scalable=no">
<meta name="apple-mobile-web-app-capable" content="yes" />
<meta name="monitor-signature" content="monitor:player:html5">
<meta name="keywords" content="Beta Bit, interpretable machine learning, explainable artificial intelligence, XAI" />
<meta property="og:image" content="files/thumb/1.jpg" />
<meta property="og:title" content="We must explain! We will explain!" />
<meta property="og:description" content="We must explain! We will explain!" />
<meta property="og:image" content="files/thumb/1.jpg" />
<link rel="image_src" href="files/thumb/1.jpg" />
<meta name="description" content="What if AI could be explained — and what if it must be?
Join Beta and Bit, two fearless adventurers in the world of artificial intelligence, as they traverse iconic research hubs like Singapore, Berlin, Pisa, London, and Warsaw to confront one of the most pressing challenges in modern AI: the need for explainability. With style, wit, and rigor, this comic dives into historical, philosophical, and scientific debates—from Gödel’s theorems to Hinton’s neural net critiques—making a compelling case for post-hoc interpretability techniques in AI systems.
Science, ethics, and storytelling collide in this visually stunning journey.
From uncovering biases in recidivism risk scores to learning strategy from AlphaZero, Beta and Bit collect evidence that explanations aren’t just possible—they're essential. We Must Explain! is more than an accessible introduction to eXplainable AI (XAI); it’s a manifesto for transparency and accountability in machine learning, packaged in a brilliant, illustrated adventure that both informs and inspires.">
<title>We must explain! We will explain!</title>
<link rel="stylesheet" href="style/scrollbar.css" />
<link rel="stylesheet" href="style/style.css" />
<link rel="stylesheet" href="style/player.css" />
<link rel="stylesheet" href="style/phoneTemplate.css" />
<link rel="stylesheet" href="style/app.css" />
</head>
<body>
<script src="javascript/jquery-3.5.1.min.js"></script>
<script src="javascript/config.js"></script>
<script src="javascript/LoadingJS.js"></script>
<script async src="https://www.googletagmanager.com/gtag/js?id=G-ZWLGMSWT14"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', 'G-ZWLGMSWT14');
</script>
<script type="text/javascript">/*userDefineScript*/</script>
<script src="javascript/main_preview.js"></script>
<script src="javascript/editor.js"></script>
<script src="files/search/book_config.js"></script>
<link rel="stylesheet" href="style/template.css" />
<script type="text/javascript">
var sendvisitinfo = function (type, page) { };
</script>
<script src="javascript/FlipBookPlugins.min.js"></script>
<link rel="stylesheet" href="style/FlipBookPlugins.min.css" />
<script src="javascript/flipHtml5.hiSlider2.min.js"></script>
<link rel="stylesheet" href="style/hiSlider2.min.css" />
</body>
</html>