I gave a talk at the workshop on how the synthesis of logic and machine learning, primarily areas like statistical relational learning, can enable interpretability.
I will be offering a tutorial on logic and learning with a focus on infinite domains at this year's SUM. Link to the event here.
The lab carries out research in artificial intelligence, by unifying learning and logic, with an emphasis on explainability.
If you are attending NeurIPS this year, you may be interested in checking out our papers that touch on morality, causality, and interpretability. Preprints can be found on the workshop webpages.
Our paper (joint with Amelie Levray) on learning credal sum-product networks has been accepted to AKBC. Such networks, along with other kinds of probabilistic circuits, are attractive because they guarantee that certain kinds of probability estimation queries can be computed in time linear in the size of the network.
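To see where the linear-time guarantee comes from, here is a minimal sketch (not the paper's implementation, and ordinary rather than credal weights) of a sum-product network: a query is answered by one bottom-up pass that visits each node exactly once, with marginalised variables simply contributing 1 at their leaves.

```python
class Leaf:
    """Bernoulli leaf over a single variable."""
    def __init__(self, var, p):
        self.var, self.p = var, p
    def value(self, evidence):
        if self.var not in evidence:      # marginalised out: integrates to 1
            return 1.0
        return self.p if evidence[self.var] else 1.0 - self.p

class Sum:
    """Weighted mixture of children (weights sum to 1)."""
    def __init__(self, weights, children):
        self.weights, self.children = weights, children
    def value(self, evidence):
        return sum(w * c.value(evidence)
                   for w, c in zip(self.weights, self.children))

class Product:
    """Factorisation over disjoint sets of variables."""
    def __init__(self, children):
        self.children = children
    def value(self, evidence):
        out = 1.0
        for c in self.children:
            out *= c.value(evidence)
        return out

# A toy circuit over two Boolean variables X and Y.
spn = Sum([0.4, 0.6], [
    Product([Leaf('X', 0.9), Leaf('Y', 0.2)]),
    Product([Leaf('X', 0.3), Leaf('Y', 0.7)]),
])

p_x = spn.value({'X': True})               # marginal P(X=1) = 0.54
p_xy = spn.value({'X': True, 'Y': True})   # joint P(X=1, Y=1) = 0.198
```

Both queries above traverse the same seven nodes once each, which is exactly the "linear in the size of the network" property.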
A consortium project on trustworthy systems and governance was accepted late last year. Details here.
Interested in training neural networks with logical constraints? We have a new paper at AAAI-2022 that aims towards full satisfaction of Boolean and linear arithmetic constraints during training. Congrats to Nick and Rafael!
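As a generic illustration of the idea (not the AAAI-2022 paper's method), one common way to steer a network towards a Boolean constraint is to relax the constraint into a differentiable probability of satisfaction and penalise its negative log; here for the constraint "a OR b", assuming the two literal probabilities are independent:

```python
import math

def prob_or(p_a, p_b):
    # P(a OR b) under an independence assumption on the two literals
    return 1.0 - (1.0 - p_a) * (1.0 - p_b)

def constraint_loss(p_a, p_b):
    # -log P(constraint holds); approaches 0 as the constraint
    # approaches full satisfaction, so gradient descent pushes the
    # network's outputs towards satisfying assignments
    return -math.log(prob_or(p_a, p_b))

# Outputs nearly violating the constraint incur a large penalty;
# outputs satisfying it incur almost none.
bad = constraint_loss(0.1, 0.1)
good = constraint_loss(0.99, 0.01)
```

Methods in this space differ in how they relax the logic (t-norms, weighted model counting, etc.) and in whether they merely penalise or actually guarantee satisfaction; the paper targets the latter.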
Bjorn and I are advertising a two-year postdoc on integrating causality, reasoning and knowledge graphs for misinformation detection. See here.
Recently, he (https://vaishakbelle.com/) has consulted with major banks on explainable AI and its impact on financial institutions.
, to allow systems to learn faster and more accurate models of the world. We are interested in developing computational frameworks that can explain their decisions, and that are modular and re-usable.
Extended abstracts of our NeurIPS paper (on PAC-learning in first-order logic) and our journal paper on abstracting probabilistic models were accepted to KR's recently published research track.
The paper discusses how to handle nested functions and quantification in relational probabilistic graphical models.
I gave an invited tutorial at the Bath CDT ART-AI. I covered current trends and future directions in explainable machine learning.
Our work on symbolically interpreting variational autoencoders, along with a new learnability result for SMT (satisfiability modulo theories) formulas, was accepted at ECAI. Conference link here.