Ed Fish

PhD Student in Machine Learning at the University of Surrey, UK.


University of Surrey

Guildford, GU2 7XH

Hello there! 👋

I’m a PhD candidate and research associate at the University of Surrey’s Innovative Media Lab, where I specialize in multi-modal video understanding and efficient ML.

My recent contributions include publications at NeurIPS, Interspeech, BMVC, and ICIP, with work on video classification, action recognition, ASR, quantization, temporal action localization, and efficient video processing.

I recently returned from a research internship at Samsung Research UK’s Advanced Research Team, where I explored quantization, explainable AI, and speech recognition. A highlight was my work on personalized quantization of foundation ASR models, which led to several pending patents.

Before moving into ML, I worked as a creative technologist and educator with companies and institutions such as Foster & Partners, Sky, Grimshaw, the University of East London, and the Mayor of London, helping to develop creative tech initiatives that get more young people interested in programming. In 2019 I was elected a Fellow of the Royal Society of Arts (FRSA) in recognition of my commitment to positive social change.

As 2023 unfolds, I’m setting my sights on the next chapter. I’m actively seeking a research scientist role where I can work on difficult problems with interesting people. While my expertise is in video understanding, I’m excited by, and have some experience in, other areas of ML research, including LLMs, generative AI, federated learning, optimization, and multi-modal CLIP. Applied research also interests me, and you can see some applications of my own work on the projects page of this site. If you believe my expertise aligns with your organization’s vision, I’d be thrilled to connect. You can reach out via LinkedIn here or contact me for an up-to-date CV.


Nov 1, 2023 Our paper “A model for every user and budget: data-free mixed-precision ASR quantization” was accepted at INTERSPEECH 2023 (Aug 23)
Apr 1, 2023 I received research funding from the Horizon EU research and innovation program No 951911 to continue my research in efficient multi-modal video understanding as a post-doctoral associate. I’m completing this work while writing up my PhD.
Feb 1, 2023 I completed my research internship at Samsung Research UK, with a paper and a pending patent on quantization of foundation ASR models.
Oct 20, 2022 Our paper “Two-Stream Transformer Architecture for Long Video Understanding” was accepted at BMVC 2022 — the preprint is available here
Aug 26, 2021 Our paper “Rethinking genre classification with fine-grained semantic experts” was accepted at the IEEE International Conference on Image Processing. Read the paper here