FUN-Media will enable next-generation immersive networked media communications, ensuring the expected Quality of Experience (QoE), allowing for empathic communication, providing a real feeling of presence, and guaranteeing content and user authenticity. This is achieved through technological advances in digital twins, multimodal and multi-sense communications, audio/acoustic user interaction, QoE-aware distribution of trustworthy content, and media generation and representation for humans and machines.

FUN-Media is part of Spoke 4 – Programmable Networks for Future Services and Media 

Project PI: Enrico Magli

Technical advances have been made in several areas, including:
  • project management and purchases for the Spoke Lab
  • adaptive metronome algorithms and packet loss concealment for mitigating the impact of latency
  • methods for detecting audio manipulation
  • study of the impact of compression and transmission artifacts on dynamic, dense point clouds, with subjective tests exploring users’ QoE under varying combinations of degradations (compression and packet loss)
  • QoE-aware motion control of a swarm of drones for video surveillance
  • study of how the adoption of augmented and virtual reality affects the quality perceived by the user
  • learning-based viewport prediction
  • learned compression schemes built on diffusion models
  • methods for network sparsification and quantization
  • compression of point clouds and light fields
  • an approach to asynchronous federated continual learning.
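One of the topics above, network sparsification and quantization, can be sketched in a few lines. The following Python/NumPy example is purely illustrative and is not the project's actual method: it shows magnitude-based pruning (zeroing the smallest-magnitude weights) followed by uniform symmetric quantization, with all function names being hypothetical:

```python
import numpy as np

def sparsify(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Magnitude-based pruning: zero out the given fraction of smallest-magnitude weights."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value serves as the pruning threshold
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

def quantize(weights: np.ndarray, bits: int = 8):
    """Uniform symmetric quantization to signed integers; returns codes and scale."""
    scale = np.abs(weights).max() / (2 ** (bits - 1) - 1)
    if scale == 0:
        return np.zeros(weights.shape, dtype=np.int32), 0.0
    q = np.clip(np.round(weights / scale), -(2 ** (bits - 1)), 2 ** (bits - 1) - 1)
    return q.astype(np.int32), scale

w = np.array([0.5, -0.02, 0.3, 0.01, -0.7])
w_sparse = sparsify(w, sparsity=0.4)   # the two smallest-magnitude weights become 0
q, scale = quantize(w_sparse, bits=8)
w_restored = q * scale                 # dequantized approximation of w_sparse
```

In practice such techniques are applied per layer or per channel and combined with fine-tuning to recover accuracy; this sketch only conveys the basic idea.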
The project has already generated several practical outcomes, many of which have been consolidated in scientific publications.

These include:
  • a content-aware compression and transmission method for automotive Lidar data
  • a continual learning method for semantic image segmentation
  • methods for detection of synthetic and manipulated speech
  • a method for deepfake detection
  • a method for viewport prediction
  • a federated continual learning method
  • a study on the impact of VR on user attention.

Several of these methods are expected to lead to industry-exploitable technologies during the course of the project, as the related use cases were chosen to be relevant to the market.
  • Publications
    Total number of publications (including journals and conference papers):
    Expected: 36
    Accomplished: 15
    Readiness: 42%

  • Joint publications
    (at least 30% of total number of publications)

    Expected: 12
    Accomplished: 2
    Readiness: 17%

  • Talk, dissemination and outreach activities
    (does not include conference presentations)

    Expected: 9
    Accomplished: 4
    Readiness: 44%

  • Innovations
    Expected: 10 items
    Accomplished: 2 items submitted to mission 7
    Readiness: 20%

  • Demo/PoC
    Expected: 5 PoCs by the end of the project
    Accomplished: 0
    Readiness: 0% (work is proceeding according to plan, as demos/PoCs are expected starting from the second year of the project).

  • M1.1 First release of exploitation, dissemination and impact monitoring
    Expected M12
    Accomplished M12
    Readiness 100%

  • M1.2 Second release of exploitation, dissemination and impact monitoring
    Expected M24
    Accomplished M12
    Readiness 50%

  • M1.3 Third release of exploitation, dissemination and impact monitoring
    Expected M36
    Accomplished M12
    Readiness 33%

  • M3.1 First release of audio and acoustic signal processing system
    Expected M12
    Accomplished M12
    Readiness 100%

  • M3.2 Advanced release of audio and acoustic signal processing system
    Expected M24
    Accomplished M12
    Readiness 50%

  • M3.3 Release of proof-of-concept of audio and acoustic signal processing system
    Expected M36
    Accomplished M12
    Readiness 33%

  • M4.1 First release of experience-aware distribution system for authentic contents
    Expected M12
    Accomplished M12
    Readiness 100%

  • M4.2 Advanced release of experience-aware distribution system for authentic contents
    Expected M24
    Accomplished M12
    Readiness 50%

  • M4.3 Release of proof-of-concept of experience-aware distribution system for authentic contents
    Expected M36
    Accomplished M12
    Readiness 33%

  • M6.1 First release of innovative media generation and representation system
    Expected M12
    Accomplished M12
    Readiness 100%

  • M6.2 Advanced release of innovative media generation and representation system
    Expected M24
    Accomplished M12
    Readiness 50%

  • M6.3 Release of proof-of-concept of innovative media generation and representation system
    Expected M36
    Accomplished M12
    Readiness 33%

Collaboration proposals:

Provisional list (contact project PI for more info):

  • a collaboration on networked music performance, which allows musicians to collaborate and perform together in real time, transcending geographical boundaries. The objective is to develop a more seamless and engaging collaborative musical experience;
  • a collaboration on efficient viewport-based algorithms for omnidirectional video streaming systems, employing machine learning methods and taking advantage of saliency information;
  • a collaboration on deepfake detection models for visual information employing deep neural networks;
  • a collaboration on neural radiance fields and Gaussian splatting for scene rendering;
  • a collaboration on low-complexity (e.g. binary) neural networks for inference and compression on embedded devices.
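As a hedged illustration of the last proposal, a binary neural network reduces each weight to a single bit. The sketch below shows the classic sign-based binarization with a per-tensor scale, approximating W ≈ α·sign(W) with α = mean(|W|); this is an assumed, textbook-style example, not the project's implementation:

```python
import numpy as np

def binarize(weights: np.ndarray):
    """Binarize weights to {-1, +1} with a per-tensor scale alpha = mean(|W|).

    Illustrative sign-based binarization; the scaling choice and names
    are assumptions, not the project's actual method.
    """
    alpha = np.abs(weights).mean()
    b = np.where(weights >= 0, 1.0, -1.0)
    return b, alpha

w = np.array([[0.4, -0.2], [-0.6, 0.1]])
b, alpha = binarize(w)
w_approx = alpha * b  # 1-bit approximation of the full-precision weights
```

On embedded hardware, the appeal is that multiply-accumulate operations over {-1, +1} weights reduce to bitwise operations and popcounts, cutting both memory and compute.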

For any collaboration proposal within the project, please contact the project PI.
