The US Department of Defense Is Using It Too: What Is Explainable AI (XAI)?


Many machine learning applications today achieve performance that surpasses humans. The breakthrough lies in our ability to train models with enormous numbers of parameters: from traditional methods that use only a few dozen parameters (such as SVMs) to deep learning networks with tens of millions (or billions) of parameters, and these models are steadily spreading into every field.
Deep learning is gradually being turned into products, but as it takes on a central role in machine learning it also faces criticism: these enormous algorithms are complex "black boxes". We do not know when they will succeed or fail, why they make a particular decision, or why not a different one. As we come to rely on intelligent systems, we need to be able to trust them fully.
To make intelligent services more trustworthy, Explainable AI (XAI) has recently become a research priority. Especially in critical decision-making applications such as finance, medicine, the military, and disaster prevention, we need more than a final prediction: we need sound reasoning that explains why a decision was made. For example, if an MRI scan is judged to indicate risk, we want the intelligent system to go further, explain why, and annotate the relevant regions of the image; when it decides to buy or sell certain stocks, we want the factors behind the decision listed and explained, just as a domain expert would explain them.
Because of its importance, the US Defense Advanced Research Projects Agency (DARPA) has recently listed XAI as a priority for upcoming technology development. This is in fact very challenging research, because so far "explainability" and "accuracy" have traded off against each other. Highly explainable algorithms, such as traditional linear models and decision trees, can explain the logic behind a judgment directly from their parameters or decision process, but their overall accuracy tends to be lower; conversely, deep learning algorithms perform well but offer little explainability.
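As a minimal sketch of what "reading the logic directly from the parameters" means, the snippet below fits a small logistic regression (using scikit-learn, with invented loan-approval features and data, not anything from the article) and prints each coefficient as an explanation of how it pushes the decision.

```python
# Minimal sketch: reading an explanation directly out of a linear model.
# Feature names and data are invented purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "debt_ratio", "late_payments"]
X = np.array([[50, 0.2, 0], [20, 0.9, 3], [35, 0.5, 1], [60, 0.1, 0]])
y = np.array([1, 0, 1, 1])  # 1 = approve loan, 0 = reject

model = LogisticRegression().fit(X, y)

# Each coefficient says how strongly a feature pushes the decision,
# which is the kind of direct "logic" a linear model exposes.
for name, weight in zip(feature_names, model.coef_[0]):
    print(f"{name}: {weight:+.3f}")
print("intercept:", model.intercept_[0])
```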
Today's recommendation systems (for shopping, movies, music, and so on) already attach explanations to the items they recommend: for example, "because you bought a snowboard, you may also like these snow pants", or "people who watched movie A also watched movie B". Because most recommendation systems rely on item/user similarity or co-occurrence associations, such explanations are relatively easy to derive.
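The snippet below is a toy sketch of that co-occurrence idea (the baskets and item names are invented): each recommendation comes with a "bought together" reason attached.

```python
# Sketch of the co-occurrence idea behind "people who bought A also bought B"
# explanations. The purchase data is invented for illustration.
from collections import Counter
from itertools import combinations

baskets = [
    {"snowboard", "snow pants", "goggles"},
    {"snowboard", "snow pants"},
    {"snowboard", "gloves"},
    {"goggles", "gloves"},
]

# Count how often each pair of items appears in the same basket.
co_counts = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        co_counts[(a, b)] += 1

def recommend_with_reason(item, top_n=2):
    """Recommend items that co-occur with `item`, plus a human-readable reason."""
    scored = []
    for (a, b), count in co_counts.items():
        if item == a:
            scored.append((b, count))
        elif item == b:
            scored.append((a, count))
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return [
        (other, f"bought together with '{item}' in {count} baskets")
        for other, count in scored[:top_n]
    ]

print(recommend_with_reason("snowboard"))
```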
XAI for deep learning still needs work. There are generally two approaches: transparent design and post-hoc explanation. The former pairs the deep network with interpretable components so that the decision process exposes different levels of transparency. The latter uses an additional algorithm to explain how the black box made its decision. No clearly best approach has emerged yet.
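One common post-hoc flavor, sketched below under illustrative assumptions (a random forest standing in for the black box, synthetic data), is a global surrogate: train an interpretable decision tree to mimic the black box's own predictions, check how faithfully it imitates the original, and read the explanation off the surrogate.

```python
# Sketch of a post-hoc "global surrogate": approximate a black-box model
# with an interpretable decision tree trained on the black box's own outputs.
# Models, data, and hyperparameters are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Train the surrogate to imitate the black box, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Agreement ("fidelity") tells us how much to trust the surrogate's explanation.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity: {fidelity:.2%}")
print(export_text(surrogate, feature_names=["f0", "f1", "f2"]))
```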
In our view, the key technical question is who the explanation is for: executives, end users, or engineers? The depth of what is presented differs for each. At this stage, when deploying deep learning is still a matter of trial and error, providing explainability to the developers of intelligent systems seems the most valuable; through XAI we can better understand how these deep models with their tens of thousands of parameters actually work.
Over the past three years we have helped several hardware and software companies develop face recognition products. While designing deep models we kept running into cases where the recognition result did not match intuition, with no way to tell what the judgment was based on. To solve this, we spent more than a year developing a module that can be plugged into different face recognition models, and found that it could successfully explain why two faces are recognized as the same person (or as different people). An XAI module like this can speed up the development of face recognition networks, and we believe similar ideas can be extended to other deep learning applications to accelerate deployment.
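The module itself is not described in detail here. As a purely illustrative sketch of one common way to probe a face-match decision (not the authors' method), the code below occludes patches of one image and measures how much its embedding similarity to the other image drops, so the regions driving the "same person" decision stand out. The `embed` function is a placeholder standing in for a real face recognition network.

```python
# Illustrative only: NOT the module described above, just an occlusion-based
# probe of why two face images are judged to match.
import numpy as np

def embed(image):
    # Placeholder embedding (average-pooled 8x8 patches); a real system would
    # run a face recognition network here.
    h, w = image.shape
    return image.reshape(h // 8, 8, w // 8, 8).mean(axis=(1, 3)).reshape(-1)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def occlusion_map(img_a, img_b, patch=16):
    """How much the match score falls when each patch of img_a is hidden."""
    base = cosine(embed(img_a), embed(img_b))
    h, w = img_a.shape[:2]
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = img_a.copy()
            occluded[i:i + patch, j:j + patch] = 0
            heat[i // patch, j // patch] = base - cosine(embed(occluded), embed(img_b))
    return heat  # high values mark regions that drive the "same person" decision

img_a = np.random.rand(64, 64)
img_b = np.random.rand(64, 64)
print(occlusion_map(img_a, img_b).round(3))
```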
Deep learning is steadily making its way into all kinds of intelligent applications. The "black box" is unavoidable for now, but it is no reason to block the deployment of intelligent technology. Researchers are gradually opening the black box with a variety of XAI methods, and the next step, explainability for critical decisions, is something we can reasonably look forward to.

As artificial intelligence becomes an ever larger part of our daily lives, from the image and facial recognition systems appearing in all manner of applications to machine learning-powered predictive analytics, conversational applications, autonomous machines, and hyperpersonalized systems, the need to trust these AI-based systems with all manner of decisions and predictions is becoming paramount. AI is finding its way into a broad range of industries such as education, construction, healthcare, manufacturing, law enforcement, and finance. The decisions and predictions being made by AI-enabled systems are becoming much more consequential and, in many cases, critical to life, death, and personal wellness. This is especially true for AI systems used in healthcare, in driverless cars, or even in drones deployed during war.
However, most of us have little visibility into or knowledge of how AI systems make the decisions they do, and, as a result, of how the results are being applied in the various fields in which AI and machine learning are deployed. Many of the algorithms used for machine learning cannot be examined after the fact to understand specifically how and why a decision was made. This is especially true of the most popular algorithms currently in use, namely deep learning neural network approaches. As humans, we must be able to understand how decisions are being made if we are to trust the decisions of AI systems, and this lack of explainability hampers that trust. We want computer systems to work as expected and to produce transparent explanations and reasons for the decisions they make. This is known as Explainable AI (XAI).
Making the black box of AI transparent with Explainable AI (XAI)
Explainable AI (XAI) is an emerging field in machine learning that aims to address how the black-box decisions of AI systems are made. It inspects and tries to understand the steps and models involved in making those decisions. Owners, operators, and users therefore expect XAI to answer pressing questions such as: Why did the AI system make a specific prediction or decision? Why didn't the AI system do something else? When did the AI system succeed and when did it fail? When does the AI system give enough confidence in a decision that you can trust it, and how can the AI system correct errors that arise?
One way to gain explainability in AI systems is to use machine learning algorithms that are inherently explainable. For example, simpler forms of machine learning such as decision trees, Bayesian classifiers, and other algorithms with a degree of traceability and transparency in their decision making can provide the visibility needed for critical AI systems without sacrificing too much performance or accuracy. More complicated, but also potentially more powerful, algorithms such as neural networks, ensemble methods including random forests, and similar approaches sacrifice transparency and explainability for power, performance, and accuracy.
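As a small sketch of that traceability (using scikit-learn and the Iris dataset purely for illustration; none of this is from the article), the snippet below prints the exact sequence of threshold tests a decision tree used to reach one prediction.

```python
# Sketch of the traceability a decision tree offers: for one prediction, print
# the sequence of threshold tests the model applied. Dataset and settings are
# illustrative choices.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(iris.data, iris.target)

sample = iris.data[100:101]          # one flower to explain
node_ids = tree.decision_path(sample).indices
prediction = iris.target_names[tree.predict(sample)[0]]

print(f"predicted class: {prediction}")
for node in node_ids:
    if tree.tree_.children_left[node] == -1:   # leaf node, nothing to test
        continue
    feature = iris.feature_names[tree.tree_.feature[node]]
    threshold = tree.tree_.threshold[node]
    value = sample[0, tree.tree_.feature[node]]
    op = "<=" if value <= threshold else ">"
    print(f"  {feature} = {value:.2f} {op} {threshold:.2f}")
```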
However, there is no need to throw out the deep learning baby with the explainability bath water. Recognizing the need to provide explainability for deep learning and other more complex algorithmic approaches, the US Defense Advanced Research Projects Agency (DARPA) is pursuing explainable AI solutions through a number of funded research initiatives. DARPA describes AI explainability in three parts: prediction accuracy, meaning models will explain how conclusions are reached in order to improve future decision making; decision understanding and trust from human users and operators; and inspection and traceability of the actions undertaken by AI systems. Traceability will enable humans to get into AI decision loops and to stop or control the system's tasks whenever the need arises. An AI system is expected not only to perform a certain task or hand down decisions but also to provide a transparent account of why it reached particular conclusions.
Levels of explainability and transparency
So far, there is only early, nascent research and work on making deep learning approaches to machine learning explainable. However, it is hoped that sufficient progress can be made so that we can have power and accuracy as well as the required transparency and explainability. The actions of an AI system should be traceable to a certain level, and that level should be determined by the consequences that can arise from the system. Systems with more serious or potentially deadly consequences should carry significant explanation and transparency requirements, so that everything can be understood when something goes wrong.
Not all systems need the same level of transparency. While it might not be possible to standardize algorithms or even XAI approaches, it might certainly be possible to standardize levels of transparency and explainability according to requirements. Product recommendation systems, for example, carry relatively low stakes and so might accept a lower level of transparency. On the other hand, medical diagnosis systems or autonomous vehicles might require greater levels of explainability and transparency. There are efforts through standards organizations to arrive at common, standard understandings of these levels of transparency to facilitate communication between end users and technology vendors.
Organizations also need governance over the operation of their AI systems. Oversight can be achieved through the creation of committees or bodies to regulate the use of AI. These bodies would oversee AI explanation models to prevent the roll-out of flawed systems. As AI plays an ever more profound role in our lives, explainable AI becomes even more important.