From autonomous cars and factories to households, we envision a future where robots work safely and efficiently alongside humans. While robots today thrive in industrial and lab settings, we must develop theories and algorithms that enable the transition of these systems from robot-centric workspaces to the real world. Hence, we want robotic systems not only to observe and react to changes in the environment but also to guarantee completion of the given task while ensuring safe execution. For robots operating in the presence of humans with non-contradicting goals, it is crucial to work together to save resources and time and to ensure that everyone achieves their goals. Combining these objectives is challenging given the complexity of the tasks and the various ways the system could interact with the environment. This thesis blends theories developed by the Formal Methods, Game Theory, and Human-Robot Interaction communities into a Regret-Minimizing Framework (RMF) that addresses these problems. Previous work using these approaches has assumed the human to be either purely adversarial or probabilistic; both assumptions are conservative and rule out the possibility of cooperation. This thesis instead proposes a notion of regret to synthesize high-level strategies for the robot that explore possible cooperation with the human while ensuring completion of the given task within a user-defined bound on the total energy spent by the robot. The work analyzes and implements various notions of regret and reasons about their emergent behaviors. Finally, an end-to-end synthesis toolbox is developed to compute regret-minimizing strategies; it implements several optimization techniques that scale the algorithms to the robotics domain, and the efficacy of the optimal strategies is illustrated through various case studies.
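As a point of reference for the central idea, the following is a minimal sketch of the classical regret objective from two-player games on graphs, which this line of work builds on; the symbols below ($\Sigma_{\mathrm{sys}}$, $\mathcal{T}_{\mathrm{env}}$, and the cost function $\mathrm{Val}$) are illustrative assumptions, and the exact formulation in the thesis may differ:

\[
\operatorname{reg}(\sigma) \;=\; \max_{\tau \in \mathcal{T}_{\mathrm{env}}} \Big( \operatorname{Val}(\sigma, \tau) \;-\; \min_{\sigma' \in \Sigma_{\mathrm{sys}}} \operatorname{Val}(\sigma', \tau) \Big),
\qquad
\sigma^{\ast} \in \operatorname*{arg\,min}_{\sigma \in \Sigma_{\mathrm{sys}}} \operatorname{reg}(\sigma),
\]

that is, the regret of a robot strategy $\sigma$ is the worst-case gap, over human behaviors $\tau$, between the cost that $\sigma$ actually incurs and the best cost achievable in hindsight against that same $\tau$. Minimizing this gap hedges between the purely adversarial worst case and the best response to a cooperative human, which is what allows the robot to exploit cooperation without giving up task guarantees.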