Abstract:
Mathematics is about modeling concepts. In the age of big data and AI, we seek a mathematical model for "concept" itself. One proposed model for concepts, a semantic vector space, benefits from vector operations that capture semantic relations. Word2Vec builds vector spaces of words from their contextual proximity to other words. This has produced miraculous statements such as King - Man + Woman = Queen. However, there is ambiguity as to what conceptual addition should actually mean. Using vectors to model concepts leaves no way to ask "What makes up this thing?" and hence "What makes up the sum of two things?" We will discuss various models for concepts that carry this internal information, each with benefits over the previous. Without choosing a "best" model for conceptual representation, we will use categorical semantics to recover richer operations. With these operations, conceptual "addition" becomes less ambiguous.
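The vector arithmetic behind "King - Man + Woman = Queen" can be sketched concretely. The embedding values below are made up for illustration (real Word2Vec vectors are learned from corpus co-occurrence), but the procedure is the standard one: add and subtract word vectors componentwise, then return the nearest remaining word by cosine similarity.

```python
import math

# Toy 3-dimensional embeddings. These values are hypothetical;
# real Word2Vec embeddings are learned from contextual proximity.
vecs = {
    "king":  [0.9, 0.8, 0.1],
    "man":   [0.5, 0.9, 0.0],
    "woman": [0.5, 0.1, 0.0],
    "queen": [0.9, 0.0, 0.1],
    "apple": [0.0, 0.2, 0.9],
}

def cosine(u, v):
    # Cosine similarity: dot product over the product of norms.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# "King - Man + Woman" as componentwise vector arithmetic.
target = [k - m + w for k, m, w in zip(vecs["king"], vecs["man"], vecs["woman"])]

# Nearest word to the result, excluding the query words.
best = max(
    (w for w in vecs if w not in {"king", "man", "woman"}),
    key=lambda w: cosine(target, vecs[w]),
)
print(best)  # → queen
```

Note that the "addition" here is purely coordinatewise; the abstract's point is that nothing in this operation explains what the sum means conceptually.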
Abstract: We will explore two variables in the theory of ontological representation: the ontological shape, which describes how things are composed and identified, and the propositional category, which allows us to make statements about the things in our ontology. We will motivate this with simplicial sets, whose shape determines compositions of relations and whose propositional category is Set.
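For concreteness, the standard definition behind the motivating example: a simplicial set is a presheaf on the simplex category $\Delta$, which is why Set appears as the category of "values" (the propositional category, in the abstract's terminology).

```latex
% A simplicial set is a functor from the opposite of the simplex
% category to Set:
X \colon \Delta^{\mathrm{op}} \longrightarrow \mathbf{Set},
\qquad X_n := X([n]) \ \text{is the set of $n$-simplices},
% with face and degeneracy maps induced by the order-preserving
% maps between the finite ordinals $[n] = \{0 < 1 < \dots < n\}$.
```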
4176 Campus Drive - William E. Kirwan Hall
College Park, MD 20742-4015