FUN-Media will enable next-generation immersive networked media communications, ensuring the expected QoE, enabling empathic communication, providing a real feeling of presence, and guaranteeing content and user authenticity. This is achieved through technological advances in digital twins, multimodal and multisense communications, audio/acoustic user interaction, QoE-aware distribution of trustworthy content, and media generation and representation for humans and machines.

FUN-Media is part of Spoke 4 – Programmable Networks for Future Services and Media.

The project has been active in all work packages except those reserved for the cascade calls. Alongside project management and procurement for the Spoke Lab, technical advances have been made in several areas, including:
  • adaptive metronome algorithms and packet loss concealment for mitigating the impact of network latency (a toy sketch follows this list)
  • methods for detecting audio manipulation
  • a study of the impact of compression and transmission artifacts on dense, dynamic point clouds, with subjective tests exploring users’ QoE under varying combinations of degradations (compression and packet loss)
  • a study of the effect of augmented and virtual reality adoption on user-perceived quality
  • learning-based viewport prediction
  • learned compression schemes based on diffusion models
  • methods for neural network sparsification and quantization
  • compression of point clouds and light fields
  • an approach to asynchronous federated continual learning.
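To make the adaptive-metronome idea above concrete, the following is a minimal Python sketch of a phase-locked-loop-style tempo correction: the local metronome nudges its tempo toward a remote peer's beats after compensating for estimated network delay. This is an illustration under stated assumptions, not the project's actual algorithm; all class names, parameters, and thresholds are hypothetical.

    class AdaptiveMetronome:
        """Hypothetical sketch: a local metronome that nudges its tempo
        toward a remote peer's beats, compensating for network delay."""

        def __init__(self, bpm: float = 120.0, alpha: float = 0.05):
            self.bpm = bpm        # current local tempo (beats per minute)
            self.alpha = alpha    # gain of the tempo-correction loop
            self.next_beat = 0.0  # local time (s) of our next scheduled beat

        def period(self) -> float:
            """Seconds between beats at the current tempo."""
            return 60.0 / self.bpm

        def tick(self, now: float) -> None:
            """Advance the local beat schedule up to the current time."""
            while self.next_beat <= now:
                self.next_beat += self.period()

        def on_remote_beat(self, recv_time: float,
                           est_one_way_delay: float) -> None:
            """Handle a peer's beat packet.

            The peer actually played its beat est_one_way_delay seconds
            before recv_time; compare that instant with our own beat grid
            and nudge the tempo to close the phase gap.
            """
            peer_beat = recv_time - est_one_way_delay
            # Signed phase error in [-period/2, period/2): offset between
            # the peer's beat and our nearest scheduled beat.
            err = (peer_beat - self.next_beat) % self.period()
            if err >= self.period() / 2:
                err -= self.period()
            # Positive err: the peer is ahead, so speed up slightly;
            # negative: the peer lags, so slow down. A small alpha keeps
            # the tempo musically stable.
            self.bpm *= 1.0 + self.alpha * (err / self.period())

In practice the one-way delay estimate would come from round-trip measurements (e.g., RTT/2), and the gain alpha trades convergence speed against tempo stability.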
The project has already generated several practical outcomes, currently mostly in the form of algorithms. While several activities and methods are not yet mature enough for reporting, others have been consolidated in scientific publications. These include:
  • a content-aware compression and transmission method for automotive LiDAR data (a toy illustration follows this list)
  • a continual learning method for semantic image segmentation
  • methods for detection of synthetic and manipulated speech
  • a method for deepfake detection
  • a method for viewport prediction
  • a federated continual learning method
  • a study on the impact of VR on user attention.
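As a toy illustration of what "content-aware" can mean for LiDAR compression, the sketch below quantizes points near the sensor more finely than distant ones, spending bits where detail matters most for driving tasks. This is a hypothetical example, not the published method; the function name, steps, and radius are invented for illustration.

    import numpy as np

    def quantize_lidar(points: np.ndarray,
                       near_step: float = 0.02,
                       far_step: float = 0.20,
                       near_radius: float = 20.0) -> np.ndarray:
        """Quantize an (N, 3) point cloud with distance-dependent precision.

        Points within near_radius metres of the sensor snap to the fine
        near_step grid; farther points snap to the coarser far_step grid.
        """
        dist = np.linalg.norm(points, axis=1)  # range of each point
        step = np.where(dist < near_radius, near_step, far_step)[:, None]
        return np.round(points / step) * step

The deduplication of coincident quantized points and their entropy coding, which yield the actual bitrate savings, are omitted here.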
Several of these methods are expected to mature into industry-exploitable technologies during the course of the project, as the related use cases have been chosen for their market relevance.
Status of the project's main KPIs:
  1. Publications (journals and conference papers)
    • Expected: 36
    • Accomplished: 8
    • Readiness: 22%
  2. Joint publications (at least 30% of total number of publications)
    • Expected: 12
    • Accomplished: 0
    • Readiness: 0%
  3. Talks, dissemination and outreach activities (does not include conference presentations)
    • Expected: 9
    • Accomplished: 2
    • Readiness: 22%
  4. Innovations
    • Expected: 10 items
    • Accomplished: 2 items submitted to Mission 7
    • Readiness: 20%
  5. Demo/PoC
    • Expected: 5 PoCs by the end of the project
    • Accomplished: 0
    • Readiness: 0% (work proceeding according to plan, as demos/PoCs are expected starting from the second year of the project).

Collaboration proposals:

  • a collaboration on networked music performance, which allows musicians to collaborate and perform together in real time across geographical boundaries; the objective is to develop a more seamless and engaging collaborative musical experience.
  • a collaboration on efficient viewport-based algorithms for omnidirectional video streaming systems, employing machine learning methods and taking advantage of saliency information (a toy sketch follows this list).
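As a rough sketch of how saliency information could aid viewport prediction in omnidirectional streaming, the Python snippet below blends a linear extrapolation of recent head motion with the most salient region of the frame. It is a hypothetical illustration, not the proposed method; the function name, the per-column saliency input, and the blend weight are all invented.

    import numpy as np

    def predict_viewport(yaw_history: np.ndarray,
                         saliency_map: np.ndarray,
                         horizon: int = 10,
                         w_saliency: float = 0.3) -> float:
        """Predict the viewport yaw (degrees) `horizon` samples ahead.

        yaw_history:  recent yaw samples in degrees, oldest first
        saliency_map: per-column saliency of the equirectangular
                      frame, shape (W,)
        w_saliency:   blend weight between motion and saliency cues
        """
        # Motion cue: linear extrapolation of the recent head trajectory.
        t = np.arange(len(yaw_history))
        slope, intercept = np.polyfit(t, yaw_history, 1)
        motion_pred = intercept + slope * (len(yaw_history) - 1 + horizon)

        # Saliency cue: yaw of the most salient image column.
        width = saliency_map.shape[0]
        salient_col = int(np.argmax(saliency_map))
        saliency_pred = salient_col / width * 360.0 - 180.0  # [-180, 180)

        # Blend the two cues along the shorter arc to handle wrap-around.
        diff = (saliency_pred - motion_pred + 180.0) % 360.0 - 180.0
        pred = motion_pred + w_saliency * diff
        return (pred + 180.0) % 360.0 - 180.0

A tiled streaming client could then fetch the tiles around the predicted yaw at a higher bitrate than the rest of the sphere.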

For any collaboration proposal within the project, please contact riccardo.leonardi at unibs.it.

