Parisa Kordjamshidi
UIUC
kordjam@illinois.edu
Bio
Parisa Kordjamshidi is a postdoctoral researcher at the University of Illinois at Urbana-Champaign, in the Cognitive Computation Group of the Department of Computer Science. She obtained her PhD from KU Leuven in July 2013. During her PhD research she introduced the first Semantic Evaluation (SemEval) task and benchmark for Spatial Role Labeling (SpRL). She has worked on structured output prediction and relational learning models for mapping natural language onto formal spatial representations suitable for spatial reasoning, as well as for extracting knowledge from biomedical text. She is also involved in an NIH (National Institutes of Health) project, extending her research experience on structured and relational learning to Declarative Learning Based Programming (DeLBP) and performing biological data analysis. DeLBP is a research paradigm whose goal is to facilitate programming for building systems that require a number of learning and reasoning components interacting with each other; it aims to help experts in various domains who are not machine learning experts to design complex intelligent systems. The results of her research have been published in several international peer-reviewed conferences and journals, including ACM-TSLP, JWS, BMC Bioinformatics, and IJCAI.
Saul: Towards Declarative Learning Based Programming
Developing intelligent problem-solving systems for real world applications requires addressing a range of scientific and engineering challenges.
I will present Saul, a learning based programming language designed to address some of the shortcomings of existing programming languages in advancing and simplifying the development of intelligent systems.
Such a language needs to interact with messy, naturally occurring data; to allow a programmer to specify what needs to be done at an appropriate level of abstraction rather than at the data level; to be built on a solid theory that supports moving to, and reasoning at, this level of abstraction; and, finally, to support flexible integration of the resulting learning and inference models within an application program.
Saul is an object-functional programming language written in Scala that meets these requirements by (1) allowing a programmer to learn, name and manipulate named abstractions over relational data; (2) supporting seamless incorporation of trainable (probabilistic or discriminative) components into the program; and (3) providing a level of inference over trainable models to support composition and to make decisions that respect domain and application constraints.
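To make these three constructs concrete, the following is a minimal Scala sketch in the spirit of Saul's declarative style, written here only as an illustration: the identifiers (DataModel, node, edge, property, Learnable, using) follow the constructs described in the Saul papers, but the exact package names and signatures, as well as the domain classes Sentence and Token and the spatial-role example, are assumptions rather than a verbatim copy of the library's API.

  // A sketch, not the verbatim Saul API: declaring named abstractions over
  // relational data and a trainable component over them. Assumes the Saul
  // library is on the classpath and that Sentence/Token domain classes exist.
  object SpatialDataModel extends DataModel {
    // Named abstractions: typed node collections and the relation between them.
    val sentences = node[Sentence]
    val tokens    = node[Token]
    val sentenceToTokens = edge(sentences, tokens)

    // Named properties (features) declared as functions over a node type.
    val wordForm         = property(tokens) { t: Token => t.surfaceForm }
    val spatialRoleLabel = property(tokens) { t: Token => t.goldRole }
  }

  // A trainable (discriminative) component: what to predict and which named
  // properties to use as features, declared over the data model above.
  object SpatialRoleClassifier extends Learnable[Token](SpatialDataModel.tokens) {
    def label = SpatialDataModel.spatialRoleLabel
    override def feature = using(SpatialDataModel.wordForm)
  }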
Saul is built over a declaratively defined relational data model; it can use piecewise-learned factor graphs with declaratively specified learning and inference objectives, and it supports inference over probabilistic models augmented with declarative, knowledge-based constraints.
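As a complementary, self-contained illustration of the last point, the toy below is plain Scala rather than the Saul API; the labels, scores and the constraint are invented for this example. It shows the kind of decision such inference makes: choosing the highest-scoring joint assignment produced by piecewise-trained local scoring functions, subject to a declarative knowledge-based constraint.

  object ConstrainedInferenceToy extends App {
    val labels = Seq("TRAJECTOR", "LANDMARK", "SPATIAL_INDICATOR", "NONE")

    // Per-token label scores, as piecewise-trained local classifiers might produce.
    val scores: Map[String, Map[String, Double]] = Map(
      "kayak" -> Map("TRAJECTOR" -> 0.8, "LANDMARK" -> 0.1, "SPATIAL_INDICATOR" -> 0.0, "NONE" -> 0.1),
      "left"  -> Map("TRAJECTOR" -> 0.1, "LANDMARK" -> 0.0, "SPATIAL_INDICATOR" -> 0.4, "NONE" -> 0.5),
      "house" -> Map("TRAJECTOR" -> 0.2, "LANDMARK" -> 0.7, "SPATIAL_INDICATOR" -> 0.0, "NONE" -> 0.1)
    )

    // Declarative constraint: a TRAJECTOR may only be predicted if some token is
    // labeled as a SPATIAL_INDICATOR (a simplified spatial-role consistency rule).
    def consistent(assignment: Map[String, String]): Boolean =
      !assignment.valuesIterator.contains("TRAJECTOR") ||
        assignment.valuesIterator.contains("SPATIAL_INDICATOR")

    // Exhaustive joint inference: the best-scoring assignment satisfying the constraint.
    val tokens = scores.keys.toSeq
    val allAssignments = tokens.foldLeft(Seq(Map.empty[String, String])) { (acc, tok) =>
      for (a <- acc; l <- labels) yield a + (tok -> l)
    }
    val best = allAssignments
      .filter(consistent)
      .maxBy(a => tokens.map(tok => scores(tok)(a(tok))).sum)

    // The constraint flips "left" from NONE (0.5) to SPATIAL_INDICATOR (0.4),
    // because "kayak" is confidently predicted to be a TRAJECTOR.
    println(best)
  }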
I will describe the key constructs of Saul and exemplify its use in case studies of developing intelligent applications in the domains of natural language processing and computational biology. I will also argue that, apart from simplifying programming for complex models, one main advantage of such a language is the reusability of the designed inference models, learning models and features, thereby increasing the replicability of research results. Moreover, the models can be extended to use newly emerging algorithms, new data resources and background knowledge with minimal effort.