no code implementations • 27 Mar 2024 • Hazhar Rahmani, Abhishek N. Kulkarni, Jie Fu
In the second step, we prove that finding a most preferred policy is equivalent to computing a Pareto-optimal policy in a multi-objective MDP that is constructed from the original MDP, the preference automaton, and the chosen stochastic ordering relation.
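Below is a minimal sketch of the kind of product construction this reduction relies on, assuming plain dict-based encodings of the MDP and a deterministic preference automaton; the field names (`states`, `trans`, `delta`) and the labeling function are illustrative assumptions, not the paper's notation, and the multi-objective reward induced by the stochastic ordering is omitted.

```python
def product_mdp(mdp, automaton, labeling):
    """Compose an MDP with a deterministic automaton over MDP-state labels.

    mdp:       {'states': [...], 'trans': {(s, a): {s_next: prob}}}
    automaton: {'states': [...], 'init': q0, 'delta': {(q, label): q_next}}
    labeling:  {s: label}  -- label the automaton reads when s is entered
    """
    prod_trans = {}
    for (s, a), dist in mdp['trans'].items():
        for q in automaton['states']:
            new_dist = {}
            for s_next, p in dist.items():
                q_next = automaton['delta'][(q, labeling[s_next])]
                key = (s_next, q_next)
                new_dist[key] = new_dist.get(key, 0.0) + p
            prod_trans[((s, q), a)] = new_dist
    return {
        'states': [(s, q) for s in mdp['states'] for q in automaton['states']],
        'init': None,  # e.g. (s0, delta(q0, labeling[s0])) for the MDP's initial state s0
        'trans': prod_trans,
    }
```

A Pareto-optimal policy of the multi-objective MDP built on top of such a product then corresponds to a most preferred policy of the original problem, which is the equivalence the paper establishes.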
no code implementations • 23 Apr 2023 • Lening Li, Hazhar Rahmani, Jie Fu
We demonstrate the efficacy and applicability of the logic and the algorithm on several case studies with detailed analyses for each.
no code implementations • 3 Apr 2023 • Chongyang Shi, Abhishek N. Kulkarni, Hazhar Rahmani, Jie Fu
Furthermore, if no such strategy exists, then P1 can win only at the price of revealing his secret to the observer.
no code implementations • 25 Sep 2022 • Hazhar Rahmani, Abhishek N. Kulkarni, Jie Fu
We prove that a weak-stochastic nondominated policy given the preference specification is Pareto-optimal in the constructed multi-objective MDP, and vice versa.
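The Pareto-optimality side of this equivalence can be illustrated with a small, self-contained sketch of nondominated filtering over outcome vectors (for instance, per-objective satisfaction probabilities); this is purely illustrative, since the paper reasons about policies of the constructed multi-objective MDP rather than a pre-computed finite list of vectors.

```python
def dominates(u, v):
    """True if u weakly dominates v: u >= v componentwise and u > v somewhere."""
    return all(a >= b for a, b in zip(u, v)) and any(a > b for a, b in zip(u, v))

def pareto_front(vectors):
    """Keep only the nondominated (Pareto-optimal) vectors."""
    return [v for i, v in enumerate(vectors)
            if not any(dominates(u, v) for j, u in enumerate(vectors) if j != i)]

# Example: three candidate policies scored on two objectives.
print(pareto_front([(0.9, 0.2), (0.5, 0.5), (0.4, 0.4)]))
# -> [(0.9, 0.2), (0.5, 0.5)]; (0.4, 0.4) is dominated by (0.5, 0.5)
```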
no code implementations • 6 Nov 2020 • Yulin Zhang, Hazhar Rahmani, Dylan A. Shell, Jason M. O'Kane
Reduction of combinatorial filters involves compressing state representations that robots use.
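For intuition, here is a minimal, conservative sketch of one way to compress such a state representation: a partition-refinement merge of states of a deterministic filter that share the same output and have indistinguishable successor behavior. It is an assumption-laden illustration, not the paper's method; it preserves behavior but does not compute a minimum-size filter, which is the harder problem that filter reduction targets.

```python
def reduce_filter(states, outputs, delta, observations):
    """Merge behaviorally indistinguishable states of a deterministic filter.

    states:       iterable of state names
    outputs:      {state: output}                 -- the filter's output at each state
    delta:        {(state, observation): state}   -- may be partial
    observations: iterable of observation symbols
    Returns {state: representative state of its block}.
    """
    block = {s: outputs[s] for s in states}  # split by output first
    while True:
        sig = {s: (block[s],
                   tuple(block.get(delta.get((s, o))) for o in observations))
               for s in states}
        ids = {v: i for i, v in enumerate(sorted(set(sig.values()), key=repr))}
        new_block = {s: ids[sig[s]] for s in states}
        stable = len(set(new_block.values())) == len(set(block.values()))
        block = new_block
        if stable:
            break
    rep = {}
    for s in states:
        rep.setdefault(block[s], s)
    return {s: rep[block[s]] for s in states}
```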
no code implementations • 5 Aug 2020 • Hazhar Rahmani, Jason M. O'Kane
In this paper, we consider a temporal logic planning problem in which the objective is to find an infinite trajectory that satisfies an optimal selection from a set of soft specifications expressed in linear temporal logic (LTL), while also satisfying a hard specification expressed in LTL.
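The core search step underlying this kind of planning can be sketched as an accepting-lasso check on the product of a finite transition system with a Büchi automaton for the specification. The dict encodings below and the convention that the automaton reads a state's label on entry are illustrative assumptions, and the paper's actual algorithm additionally optimizes the selection of soft formulas.

```python
def reach(succ, sources):
    """States reachable from `sources` in the graph given by `succ`."""
    seen, stack = set(sources), list(sources)
    while stack:
        for v in succ.get(stack.pop(), ()):
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def accepting_lasso_exists(ts_succ, ts_init, aut, labeling):
    """True if some infinite run of the system satisfies the Büchi automaton.

    ts_succ:  {state: [successor states]};  ts_init: initial system state
    aut:      {'states': [...], 'init': q0, 'accepting': {...},
               'delta': {(q, label): [q_next, ...]}}
    labeling: {state: label read by the automaton when the state is entered}
    """
    succ = {}
    for s, nxt in ts_succ.items():
        for q in aut['states']:
            succ[(s, q)] = [(s2, q2) for s2 in nxt
                            for q2 in aut['delta'].get((q, labeling[s2]), ())]
    start = {(ts_init, q)
             for q in aut['delta'].get((aut['init'], labeling[ts_init]), ())}
    for node in reach(succ, start):
        s, q = node
        if q in aut['accepting'] and node in reach(succ, succ.get(node, ())):
            return True  # a reachable accepting state that lies on a cycle
    return False
```

A lasso found this way, the prefix leading to the accepting state followed by the cycle through it repeated forever, is an infinite trajectory witnessing satisfaction of the specification.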