(1) In "Learning the Optimal Recommendation from Explorative Users", we show how a recommender system and a user can learn simultaneously and collaboratively to reach the globally optimal recommendation for the user.
(2) Worried about attacks on ML algorithms? In "Saving Stochastic Bandits from Poisoning Attacks via Limited Data Verification", we show that adversarial attacks on stochastic bandits can be fully defended against by selectively verifying only O(log T) of the rewards.
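The tweet only states the verification budget, not the paper's defense, so here is a minimal hypothetical sketch of the general idea of limited reward verification: a standard UCB learner that replaces a possibly poisoned reward with one from a trusted oracle only when an arm's pull count hits a power of two, so each arm is checked at most O(log T) times (O(log T) total for a constant number of arms). The function and variable names are illustrative, not from the paper, and the sketch carries no claim about the paper's actual guarantee.

```python
# Toy sketch of "limited data verification" in a stochastic bandit.
# NOT the algorithm from the paper -- just an illustration of spending
# only a logarithmic verification budget inside a UCB loop.
import math
import random

def ucb_with_limited_verification(arms, T, verify):
    """arms[i](): a (possibly corrupted) reward in [0, 1] for arm i.
    verify(i): a trusted, attack-free reward draw for arm i."""
    counts = [0] * len(arms)
    sums = [0.0] * len(arms)
    n_verified = 0
    for t in range(1, T + 1):
        if t <= len(arms):
            i = t - 1                      # pull every arm once first
        else:
            i = max(range(len(arms)),
                    key=lambda a: sums[a] / counts[a]
                    + math.sqrt(2.0 * math.log(t) / counts[a]))
        r = arms[i]()
        counts[i] += 1
        # Verify on a geometric schedule: an arm is checked only when its
        # pull count is a power of two, i.e. at most ~log2(T) times.
        if (counts[i] & (counts[i] - 1)) == 0:
            r = verify(i)                  # replace possibly poisoned reward
            n_verified += 1
        sums[i] += r
    return counts, n_verified

# Toy usage: three Bernoulli arms; an attacker sometimes zeroes the best arm.
if __name__ == "__main__":
    random.seed(0)
    means = [0.3, 0.5, 0.7]
    noisy = [lambda m=m: 0.0 if (m == 0.7 and random.random() < 0.3)
             else float(random.random() < m) for m in means]
    clean = lambda i: float(random.random() < means[i])
    print(ucb_with_limited_verification(noisy, 5000, clean))
```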
(3) Given a sequence of input points and a sequence of derivatives (but no function values), when is it possible to find a convex function that fits both? Our paper "First-order Convex Fitting and its Application to Economics and Optimization" gives a clean characterization.
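For intuition about what such a characterization can look like, here is a classical fact from convex analysis (Rockafellar's theorem on cyclic monotonicity), offered as a reference point rather than as the paper's result: pairs (x_i, g_i) admit a convex f with g_i in the subdifferential of f at x_i exactly when they are cyclically monotone.

```latex
% Cyclic monotonicity (Rockafellar): there exists a convex f with
% g_i \in \partial f(x_i) for all i  iff  for every cycle
% i_1, i_2, \dots, i_k, i_{k+1} = i_1,
\[
  \sum_{j=1}^{k} \langle g_{i_j},\; x_{i_{j+1}} - x_{i_j} \rangle \;\le\; 0 .
\]
```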
(4) Does a more accurate prediction necessarily benefit a decision maker? Interestingly, no -- in "The Strange Role of Information Asymmetry in Auctions", we show that a bidder can be harmed by a more accurate value prediction and may benefit from a less accurate one.
(5) But information does help sometimes --- in "When Can the Defender Effectively Deceive Attackers in Security Games?", we study when a defender can leverage its informational advantage to deceive an attacker, and when this is not possible.