
10.2 - Some research projects that use techniques you learned here

  • 0:01 - 0:04
    Module 10.2.
    Some research projects that use the
  • 0:04 - 0:08
    techniques you have learned in the
    digital signal processing class.
  • 0:09 - 0:12
    We're going to talk about some current
    research in the lab.
  • 0:12 - 0:16
    There is a whole slew of them; here is a
    selection of interesting research
  • 0:16 - 0:22
    projects that we can briefly discuss.
    The first one, eFacsimile, is a project
  • 0:22 - 0:27
    on artwork acquisition.
    The second one is about signal processing
  • 0:27 - 0:30
    in sensor networks.
    Then there is a network science result on
  • 0:30 - 0:35
    source localization in graphs.
    Then we talk about a sampling result, the
  • 0:35 - 0:39
    so-called finite rate of innovation
    sampling.
  • 0:39 - 0:42
    Then we talk again about sampling, that
    of physical fields, using some new
  • 0:42 - 0:47
    techniques for sampling.
    Then, we have a project on image
  • 0:47 - 0:53
    acquisition where we change the sensors
    used in acquiring images.
  • 0:53 - 0:55
    Then, an old classic, predicting the
    stock market.
  • 0:57 - 1:01
    Then, we talk about inverse problems.
    The next three projects are actually
  • 1:01 - 1:04
    inverse problems.
    The first one is on the diffusion
  • 1:04 - 1:08
    equation, the second one is trying to
    understand the nuclear fallout from
  • 1:08 - 1:14
    Fukushima, and last but not least is an
    inverse problem in acoustics.
  • 1:17 - 1:20
    The eFacsimile project.
    This is a project that we do together
  • 1:20 - 1:27
    with Google to try to improve how artwork
    is represented on the internet.
  • 1:27 - 1:32
    It's led by [INAUDIBLE] researcher Loic
    Baboulaz, and several PhD students are
  • 1:32 - 1:36
    involved in this.
    The questions are how to capture,
  • 1:36 - 1:41
    represent, and render artwork as well as
    possible.
  • 1:41 - 1:45
    And to do this, we need some advanced
    techniques for relighting, manipulation
  • 1:45 - 1:50
    of the so-called light fields that are
    acquired, and potentially high-resolution
  • 1:50 - 1:58
    solutions for mobile devices.
    There are some demos online that I
  • 1:58 - 2:02
    encourage you to actually watch,
    because this really doesn't show the
  • 2:02 - 2:05
    idea.
    This is of course a static version.
  • 2:05 - 2:09
    For example, in one of the demos you
    take an oil painting here.
  • 2:09 - 2:13
    And you acquire it in such a way that if
    you show it on a tablet and you move it,
  • 2:13 - 2:19
    it will actually look exactly like the
    original oil painting.
  • 2:19 - 2:22
    So you get the illusion that you actually
    have the oil painting in your hand.
  • 2:22 - 2:27
    So if the light changes, the
    visualization will change.
  • 2:27 - 2:29
    If you turn the tablet, the visualization
    will change.
  • 2:29 - 2:34
    So then, what is quite stunning, and I
    suggest you actually watch it: similarly,
  • 2:34 - 2:39
    there is another demo which deals with
    stained glass.
  • 2:39 - 2:43
    Stained glass windows are very interesting art
    objects, but very difficult to render on
  • 2:43 - 2:46
    the internet.
    And so here we simulate the stained
  • 2:46 - 2:50
    glass, so if you have a tablet in your
    hand and you move it, it looks as if
  • 2:50 - 2:57
    you had the stained glass in your hand.
    So the tools we use: we use
  • 2:57 - 3:02
    traditional cameras, but we also use
    so-called light field cameras.
  • 3:02 - 3:04
    You might have heard of the light
    [INAUDIBLE] for example.
  • 3:04 - 3:07
    That's a new generation of camera.
    It's extremely interesting.
  • 3:07 - 3:12
    And so we need to fully understand light
    transport theory and [INAUDIBLE],
  • 3:12 - 3:15
    which uses sparse recovery methods or
    compressed sensing.
  • 3:15 - 3:21
    The website of the project is given here.
    And as I indicated, there is a YouTube demo
  • 3:21 - 3:26
    that shows quite realistically the demos
    that we discussed just a minute
  • 3:26 - 3:31
    ago.
    So, the next project is about wireless sensor
  • 3:31 - 3:37
    networks, in particular about visual
    monitoring in a wireless sensor network.
  • 3:37 - 3:39
    So, sensor networks have been deployed for
    many years.
  • 3:39 - 3:44
    We have large projects here, in the lab,
    on environmental monitoring.
  • 3:44 - 3:48
    And the current generation is actually
    equipped with cameras, and then you have a
  • 3:48 - 3:52
    problem of compression, of
    representation.
  • 3:52 - 3:55
    So even though the trend is towards
    smaller and smaller devices, they are
  • 3:55 - 3:59
    still power hungry and in particular if
    you have a sophisticated camera, the
  • 3:59 - 4:03
    number of images or the number of pixels
    that is generated might actually overwhelm
  • 4:03 - 4:09
    the power budget of the system.
    And so Dr.
  • 4:09 - 4:13
    Zichong Chen, who finished his PhD here
    and did his postdoc, together with
  • 4:13 - 4:17
    Guillermo Barrenetxea, is looking at
    creating large scale sensor networks
  • 4:17 - 4:22
    equipped with cameras that are energy
    efficient.
  • 4:24 - 4:29
    So why do we want images?
    Well, here are a few examples.
  • 4:29 - 4:33
    This is from, I think, a Berkeley project
    about monitoring a bird's nest.
  • 4:33 - 4:37
    Unfortunately a snake shows up and
    actually eats all the eggs in the
  • 4:37 - 4:40
    nest.
    So that's monitoring for wildlife
  • 4:40 - 4:43
    protection.
    Here is an example from the Swiss Alps
  • 4:43 - 4:48
    monitoring for avalanche detection.
    Here is also an example from the Swiss
  • 4:48 - 4:52
    Alps; it's monitoring weather
    conditions.
  • 4:52 - 4:56
    And finally, here is a monitoring network
    that is installed on the EPFL campus.
  • 4:56 - 5:01
    In all these cases you have many cameras
    using small communication devices.
  • 5:01 - 5:05
    And so compression and representation is
    extremely critical.
  • 5:07 - 5:13
    So there are a number of results that you
    can find in the thesis of Dr.
  • 5:13 - 5:17
    Zichong Chen, given here on this
    website, and essentially the idea is
  • 5:17 - 5:22
    that, cameras can help each other to
    reduce the amount of information that
  • 5:22 - 5:26
    actually has to be sent to the base
    station or into the cloud, for doing
  • 5:26 - 5:33
    efficient monitoring.
    So, signal processing today is
  • 5:33 - 5:36
    actually moving toward signal processing
    on graphs.
  • 5:36 - 5:40
    I don't have to explain to you the
    importance, for example, of social
  • 5:40 - 5:44
    networks.
    And so Pedro Pinto, who was involved here
  • 5:44 - 5:48
    in the class and is a postdoc in the
    lab, together with Patrick Thiran, has
  • 5:48 - 5:52
    worked on the problem of source
    localization.
  • 5:52 - 5:57
    So you have some graph here, let's say
    a social network, and somebody launches a
  • 5:57 - 6:01
    rumor.
    Here is the source and the rumor gets
  • 6:01 - 6:05
    forwarded along the edges of the graph at
    different times and you have some
  • 6:05 - 6:13
    observers, say green nodes here, that
    receive the rumor at some instant of time.
  • 6:13 - 6:16
    They know where the rumor comes from, you
    know who told you the gossip, and you
  • 6:16 - 6:20
    know when you got the gossip information.
    So, the question is, you know the
  • 6:20 - 6:23
    structure of the graph, or you have an
    approximation of the structure of the
  • 6:23 - 6:26
    graph.
    You have these observations.
  • 6:26 - 6:30
    Can you figure out who actually spread
    the rumor first?
  • 6:30 - 6:33
    It turns out this has an interesting
    solution.
  • 6:33 - 6:38
    And using only a few observers, about 20%,
    you can achieve a very high accuracy in
  • 6:38 - 6:44
    finding the source of a rumor on a large
    scale network.
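
To give a feel for the kind of estimator involved, here is a toy sketch (a simplified brute-force illustration of my own, not the estimator from the paper mentioned below): assume the rumor travels along shortest paths at unit speed, and score every candidate source by how well its shortest-path distances to the observers explain the observed arrival-time differences.

```python
# Toy source localization on a graph from observer arrival times.
# Assumes the rumor spreads along shortest paths at unit speed; this is a
# simplified brute-force scorer, not the estimator from the published paper.
from collections import deque

def bfs_distances(adj, start):
    """Hop distances from `start` to every reachable node of an unweighted graph."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def locate_source(adj, arrivals):
    """adj: {node: [neighbors]}, arrivals: {observer: arrival time}.
    Return the candidate whose shortest-path distances best match the observed
    arrival-time differences (the unknown start time drops out by centering)."""
    observers = list(arrivals)
    t = [arrivals[o] for o in observers]
    t_c = [x - sum(t) / len(t) for x in t]
    best, best_cost = None, float("inf")
    for s in adj:
        d = bfs_distances(adj, s)
        if not all(o in d for o in observers):
            continue                     # this candidate cannot reach every observer
        h = [d[o] for o in observers]
        h_c = [x - sum(h) / len(h) for x in h]
        cost = sum((a - b) ** 2 for a, b in zip(t_c, h_c))
        if cost < best_cost:
            best, best_cost = s, cost
    return best

# Small example: a path graph 0-1-2-3-4 with observers at nodes 0 and 4.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(locate_source(adj, {0: 2.0, 4: 0.0}))  # node 4 heard it 2 hops earlier -> estimates node 3
```

The published estimator is of course far more sophisticated (it models random propagation delays, for example), but its inputs and outputs are of this kind.
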
  • 6:44 - 6:47
    And there are many interesting questions
    here, to pursue in this source
  • 6:47 - 6:51
    localization in social networks.
    And there was a paper that came out last
  • 6:51 - 6:55
    year, "Locating the Source of Diffusion in
    Large-Scale Networks", that had quite a bit
  • 6:55 - 6:59
    of impact.
    The project is actually funded by the
  • 6:59 - 7:03
    Bill and Melinda Gates Foundation.
    The reason is that one of the
  • 7:03 - 7:06
    applications is to monitor health
    problems.
  • 7:06 - 7:11
    For example, here is a map of a cholera
    outbreak in Africa, and the map shows the
  • 7:11 - 7:15
    river network.
    Cholera is a waterborne disease, and so
  • 7:15 - 7:20
    typically cholera will actually diffuse
    along waterways.
  • 7:20 - 7:24
    But you know when people fell sick at
    certain locations and then you can infer
  • 7:24 - 7:27
    the source of the actual Cholera
    outbreak.
  • 7:27 - 7:31
    There is another example here, which is a
    simulation: if there were some pollution
  • 7:31 - 7:35
    or an attack on the New York subway,
    could you figure out, knowing the
  • 7:35 - 7:39
    network of the New York subway and when
    you start detecting the problems, where
  • 7:39 - 7:46
    the source of the problem
    actually was?
  • 7:47 - 7:51
    The next project is on sampling, so we
    have worked on a new theory of sampling
  • 7:51 - 7:55
    here called Finite Rate of Innovation
    Sampling, and it is used in
  • 7:55 - 7:59
    communications problems, and in
    monitoring problems to reduce the number
  • 7:59 - 8:04
    of samples being transmitted or acquired.
    Dr.
  • 8:04 - 8:08
    Freris, who is a senior scientist, together
    with doctoral students and MS students, is
  • 8:08 - 8:13
    actually working on ECG monitoring
    at very low power for wireless health
  • 8:13 - 8:17
    monitoring.
    So here is a block diagram, it's
  • 8:17 - 8:21
    relatively complicated so let me not get
    into this, but it uses some fairly
  • 8:21 - 8:25
    sophisticated techniques to reduce the
    sampling rate so as to reduce the energy
  • 8:25 - 8:33
    consumption on these wireless devices.
    So, this project is actually sponsored by
  • 8:33 - 8:40
    somebody well known, Qualcomm, which is
    interested in the theory of sampling.
  • 8:40 - 8:44
    And the extension here for this
    particular project has been a
  • 8:44 - 8:50
    generalization of the initial finite rate
    of innovation sampling methodology,
  • 8:50 - 8:54
    to get better compression and better
    modeling of the signals.
  • 8:54 - 8:58
    So here we have the ECG signal, and then
    there are sophisticated models that
  • 8:58 - 9:03
    allow you, with very few parameters, to
    model the ECG signal.
  • 9:03 - 9:06
    There are a number of papers here, the
    initial paper on finite rate of
  • 9:06 - 9:11
    innovation sampling is this 2002 paper,
    and a number of recent papers have done
  • 9:11 - 9:17
    extensions to this theory.
    So if you like sampling I welcome you to
  • 9:17 - 9:23
    actually read up on this stuff, it's one
    of my favorite research topics.
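
To give a small taste of the machinery, here is a bare-bones sketch under ideal, noiseless assumptions (an illustration only, not the ECG pipeline from the block diagram): the classic finite rate of innovation result recovers a stream of K Diracs from just 2K+1 Fourier-series coefficients using the so-called annihilating filter.

```python
# Noiseless sketch of finite rate of innovation recovery: K Diracs on [0, 1)
# are recovered from 2K+1 Fourier-series coefficients via the annihilating
# filter (Prony's method). Practical pipelines add denoising and model extensions.
import numpy as np

K = 3
t_true = np.array([0.12, 0.47, 0.80])   # Dirac locations
a_true = np.array([1.0, -0.5, 2.0])     # Dirac amplitudes

# Fourier-series coefficients X[m] = sum_k a_k exp(-j 2 pi m t_k), m = -K..K.
m = np.arange(-K, K + 1)
X = (a_true * np.exp(-2j * np.pi * m[:, None] * t_true)).sum(axis=1)

# Annihilating filter h (length K+1): sum_i h[i] X[m - i] = 0 for m = 0..K.
# It is the null-space vector of this small Toeplitz system.
A = np.array([[X[(r - i) + K] for i in range(K + 1)] for r in range(K + 1)])
h = np.linalg.svd(A)[2][-1].conj()

# The roots of h are exp(-j 2 pi t_k): they encode the Dirac locations.
t_est = np.sort(np.mod(-np.angle(np.roots(h)) / (2 * np.pi), 1.0))

# The amplitudes then follow from a small linear (Vandermonde) system.
V = np.exp(-2j * np.pi * m[:, None] * t_est)
a_est = np.linalg.lstsq(V, X, rcond=None)[0].real

print(np.round(t_est, 4), np.round(a_est, 4))   # matches t_true and a_true
```
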
  • 9:25 - 9:29
    When we talked about sampling in
    sensor networks, we mentioned that
  • 9:29 - 9:34
    placing a sensor is like taking a sample.
    And so that is spatial sampling; now if you
  • 9:34 - 9:39
    do spatial sampling, you can also use
    mobile sensors and Dr.
  • 9:39 - 9:43
    Unnikrishnan here, a postdoc in the lab,
    has worked on a generalization of
  • 9:43 - 9:48
    the theory of sampling when you have
    mobile sensors that can actually go over
  • 9:48 - 9:56
    a field in an arbitrary fashion.
    Let me maybe show this in an example.
  • 9:56 - 10:00
    It's again a temperature monitoring
    example, here on the EPFL campus, where you
  • 10:00 - 10:04
    have buildings.
    You have that open space between
  • 10:04 - 10:07
    buildings.
    Those buildings are, of course, hot.
  • 10:07 - 10:11
    The open spaces are cool.
    And you would like to have monitoring of
  • 10:11 - 10:17
    this temperature field not with static
    spatial sensors, but with people running
  • 10:17 - 10:24
    around, having a thermometer, let's say,
    on their mobile phone.
  • 10:24 - 10:28
    And the question is, how accurately can you
    actually measure temperature using a
  • 10:28 - 10:32
    device like this?
    And so, this is being done actually for
  • 10:32 - 10:38
    pollution monitoring in the city of
    Lausanne so there's some equipment put on
  • 10:38 - 10:45
    buses to measure pollution parameters.
    And what we do here is we try to develop
  • 10:45 - 10:50
    a theory of how well you can sample when
    you have these mobile sensors going over
  • 10:50 - 10:57
    a surface and measuring a field.
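
As a toy version of the question (my own simplified illustration, far from the actual results in the papers mentioned next): if the field is spatially bandlimited, the samples collected at whatever positions the mobile sensor happens to visit can be fitted by least squares, exactly as in classical non-uniform sampling.

```python
# Toy mobile-sensor sampling: a spatially bandlimited 1D "field" is sampled at
# the arbitrary positions visited by a moving sensor and reconstructed by a
# least-squares fit on the low-frequency Fourier basis. Simplified sketch only.
import numpy as np

rng = np.random.default_rng(2)
B = 5                                   # the field contains frequencies up to B

def basis(t):
    """Real Fourier basis up to frequency B, evaluated at positions t in [0, 1)."""
    k = np.arange(1, B + 1)
    return np.column_stack([np.ones_like(t),
                            np.cos(2 * np.pi * np.outer(t, k)),
                            np.sin(2 * np.pi * np.outer(t, k))])

coeffs_true = rng.standard_normal(2 * B + 1)
positions = np.sort(rng.random(40))     # wherever the mobile sensor happened to be
samples = basis(positions) @ coeffs_true + 0.01 * rng.standard_normal(positions.size)

# Reconstruction: least-squares fit of the coefficients, then evaluate anywhere.
coeffs_hat = np.linalg.lstsq(basis(positions), samples, rcond=None)[0]
grid = np.linspace(0, 1, 200, endpoint=False)
print("max error on a fine grid:", np.max(np.abs(basis(grid) @ (coeffs_hat - coeffs_true))))
```
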
    The results are very mathematical but are
  • 10:57 - 11:01
    interesting because they are non-trivial
    extensions of sampling theory to
  • 11:01 - 11:05
    multiple dimensions.
    And a few papers are mentioned here if
  • 11:05 - 11:11
    you are interested in more detail.
    The next project is about a new way of
  • 11:11 - 11:15
    doing image acquisition.
    So in this class, we have seen sampling
  • 11:15 - 11:20
    and we have seen quantization.
    And when we do quantization typically we
  • 11:20 - 11:25
    say, let's take [UNKNOWN] samples and
    then take as many bits as possible.
  • 11:25 - 11:31
    Let's say eight bits for speech, 12 bits
    for images, 24 bits maybe for audio,
  • 11:31 - 11:35
    etcetera.
    Now here we took the other extreme:
  • 11:35 - 11:40
    we said, let's build an image
    sensor that has many, many, many pixels,
  • 11:40 - 11:47
    but the pixels only detect whether there is
    enough light or not.
  • 11:47 - 11:49
    So the pixels are actually binary
    detectors.
  • 11:49 - 11:55
    And so you have a light intensity here,
    which changes over space.
  • 11:55 - 11:58
    You have a lens that smooths the light
    intensity.
  • 11:58 - 12:01
    So what reaches the camera is this smooth
    curve here.
  • 12:01 - 12:05
    And this smooth curve you sample very,
    very, very finely.
  • 12:05 - 12:09
    But you only decide if it's above or
    below a certain threshold.
  • 12:09 - 12:13
    So the sensor only generates a sequence
    of binary digits.
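
Here is a rough simulation of that acquisition model (a toy sketch, not the reconstruction algorithm developed in the project): a smooth light intensity is oversampled, each tiny pixel reports one bit per frame, and the intensity is estimated back from the fraction of ones observed at each pixel.

```python
# Toy one-bit image sensor: a smooth light intensity is oversampled, each pixel
# reports only whether it detected at least one photon in a frame, and the
# intensity is estimated from many binary frames. Simplified illustration only.
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_frames = 400, 200

# Smooth light intensity reaching the sensor (mean photons per pixel per frame).
x = np.linspace(0, 1, n_pixels)
intensity = 1.0 + 0.8 * np.sin(2 * np.pi * x) + 1.5 * np.exp(-((x - 0.7) / 0.1) ** 2)

# Each pixel in each frame: Poisson photon count, thresholded at one photon.
photons = rng.poisson(intensity, size=(n_frames, n_pixels))
bits = (photons >= 1).astype(np.uint8)          # the sensor output: only 0s and 1s

# Reconstruction: the fraction of ones per pixel estimates P(count >= 1)
# = 1 - exp(-intensity), which we invert (the maximum-likelihood estimate
# for this simple threshold-one model).
p_hat = bits.mean(axis=0).clip(1e-3, 1 - 1e-3)
intensity_hat = -np.log(1 - p_hat)

print("mean absolute error:", np.mean(np.abs(intensity_hat - intensity)))
```
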
  • 12:13 - 12:18
    So that's the imaging model.
    And this has been studied by Dr.
  • 12:18 - 12:22
    Feng Yang, who did his PhD thesis on this and is
    now a post-doc working on this project,
  • 12:22 - 12:27
    and a whole slew of other people.
    This was a very extensive project.
  • 12:27 - 12:33
    And what is interesting is that this new
    way of acquiring images, for example,
  • 12:33 - 12:40
    allows you to do high dynamic range imaging.
    Here is a simulation of a high dynamic
  • 12:40 - 12:46
    range image, obtained in a much easier way than
    with conventional cameras.
  • 12:46 - 12:50
    That's one advantage.
    Another one is that you can have very,
  • 12:50 - 12:53
    very cheap sensors.
    So here's an example of one that was
  • 12:53 - 12:57
    built in the lab.
    And then, you take many, many frames.
  • 12:57 - 13:00
    They look extremely noisy.
    If they look noisy, it is because they are
  • 13:00 - 13:04
    simply binary, so you only have zeros and ones,
    but you have enough of these, and you do
  • 13:04 - 13:10
    an optimal reconstruction method.
  • 13:10 - 13:15
    You can actually recognize here the logo
    of EPFL.
    There are publications here that you are
  • 13:15 - 13:19
    welcome to look up.
    And the thesis is online.
  • 13:19 - 13:23
    Last but not least, Rambus, a Silicon Valley
    company, actually works with us on this
  • 13:23 - 13:29
    and has acquired some of the technology
    that was developed in this project.
  • 13:31 - 13:34
    An old classic is trying to predict the
    stock market.
  • 13:34 - 13:38
    So, we gave it another shot.
    Lionel Coulot did his PhD thesis on this,
  • 13:38 - 13:43
    co-advised by Peter Bossaerts, who is at
    Caltech.
  • 13:43 - 13:47
    And we were trying to understand if
    methods from information theory would
  • 13:47 - 13:52
    allow us to build predictive models for the stock
    market, and that requires statistical
  • 13:52 - 13:56
    models for what the stock market might
    be.
  • 13:56 - 14:00
    And what is interesting is that you have
    to decide between very sophisticated
  • 14:00 - 14:04
    models that might be overkill and are
    hard to estimate, and very simple models
  • 14:04 - 14:08
    which might be too simplistic, but which
    might be very robust to things that
  • 14:08 - 14:15
    happen in the stock market.
    And, in the end we used coding theory and
  • 14:15 - 14:20
    classic algorithmic methods like dynamic
    programming to come up with a method that
  • 14:20 - 14:24
    decides what is the correct model at
    every time of observation of the
  • 14:24 - 14:32
    stock market.
    So I'm just going to show a picture.
  • 14:32 - 14:36
    And the picture is a value on the
    stock market.
  • 14:36 - 14:40
    And the question is, can you detect if
    the stock market is in a bear market or a
  • 14:40 - 14:45
    bull market?
    So when the stock market goes up, it's
  • 14:45 - 14:48
    called a bull market.
    If it goes down, it's a bear market.
  • 14:48 - 14:53
    What is very hard is to decide, by
    watching every day what's happening,
  • 14:53 - 14:57
    whether currently the trend is going up or the
    trend is going down, and you need to do
  • 14:57 - 15:02
    this with an online algorithm.
    Okay, you cannot look into the future and
  • 15:02 - 15:07
    this method developed by Lionel allows you to
    do model fitting and to very quickly
  • 15:07 - 15:14
    detect when the stock market changes from
    a bull market to a bear market.
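
The actual method combines coding-theoretic model selection with dynamic programming; as a much simpler stand-in, here is a generic online trend detector (an exponential-moving-average crossover, entirely my own placeholder and not Lionel's algorithm) just to make the "online, no peeking into the future" constraint concrete.

```python
# Generic online bull/bear detector: two exponential moving averages of the
# price are updated one day at a time, and the regime flips when the fast
# average crosses the slow one. A placeholder to illustrate the online
# constraint -- not the coding-theory / dynamic-programming method of the thesis.
def online_regimes(prices, fast=0.3, slow=0.05):
    """Yield (day, regime) using only past prices; regime is 'bull' or 'bear'."""
    ema_fast = ema_slow = prices[0]
    for day, p in enumerate(prices):
        ema_fast += fast * (p - ema_fast)    # reacts quickly to recent prices
        ema_slow += slow * (p - ema_slow)    # tracks the long-term level
        yield day, "bull" if ema_fast >= ema_slow else "bear"

# Example: an up-trend followed by a down-trend; the switch to "bear" is
# detected with some delay, which is the price of working online.
prices = [100 + t for t in range(40)] + [140 - 2 * t for t in range(40)]
regimes = list(online_regimes(prices))
changes = [(d, r) for (d, r), (_, prev) in zip(regimes[1:], regimes[:-1]) if r != prev]
print(changes)
```
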
  • 15:16 - 15:22
    The thesis is online, and this was sponsored,
    as you may guess, by a bank.
  • 15:22 - 15:26
    And the results are interesting, but we
    still have a regular day job, so you
  • 15:26 - 15:29
    can guess that the method is not
    a completely foolproof way to predict the
  • 15:29 - 15:34
    stock market.
    But the methods, the algorithms, and the
  • 15:34 - 15:39
    theory behind it are quite cool.
    The next few projects are so called
  • 15:39 - 15:43
    inverse problems.
    So inverse problems are problems where
  • 15:43 - 15:46
    you have some measurements but the
    measurements do not describe the signal
  • 15:46 - 15:51
    you're interested in directly; they are
    some indirect measurement of the
  • 15:51 - 15:56
    signal, so you try to invert the system
    to go back to the original signal.
  • 15:56 - 16:00
    You all know about computerized
    tomography, a medical imaging method, where
  • 16:00 - 16:04
    you can see inside the body without
    really going there.
  • 16:04 - 16:08
    And that's a typical inverse problem.
    Here we are interested in inverse
  • 16:08 - 16:13
    problems in environmental monitoring.
    So, the first example is the diffusion
  • 16:13 - 16:17
    equation.
    And we have a physical phenomenon, for
  • 16:17 - 16:22
    example temperature, which has been discussed,
    or atmospheric dispersal of pollution.
  • 16:22 - 16:27
    We want to measure the field at locations
    where we can put sensors, and the goal is
  • 16:27 - 16:32
    to find where the sources are, for
    example, of pollution.
  • 16:32 - 16:37
    Now this is a hard problem because you
    have to model how, for example, pollution
  • 16:37 - 16:41
    is being diffused.
    That depends on weather patterns and so
  • 16:41 - 16:44
    on.
    But the tools we are using are typical
  • 16:44 - 16:49
    signal processing tools for analysis:
    sampling theory, for example finite
  • 16:49 - 16:52
    rate of innovation sampling, or
    compressive sensing, which has also been
  • 16:52 - 16:57
    mentioned earlier.
    Let's look at the picture.
  • 16:57 - 17:01
    That's a very simple example of this.
    Assume you have two smokestacks
  • 17:01 - 17:07
    inside a factory compound, and the
    smokestacks produce pollution which
  • 17:07 - 17:12
    changes every day.
    You don't know how much pollution is
  • 17:12 - 17:17
    being released, and you're working for an
    environmental monitoring agency.
  • 17:17 - 17:22
    You put sensors outside of the compound
    and you measure what arrives, in terms of
  • 17:22 - 17:27
    pollution, at these sensors.
    And the goal is to figure out if what
  • 17:27 - 17:30
    came out of the smokestacks was within the
    bounds allowed, let's say by the
  • 17:30 - 17:36
    Environmental Protection Agency.
    So this is an interesting and non-trivial
  • 17:36 - 17:41
    problem, but there are some interesting
    results that were produced by Juri
  • 17:41 - 17:46
    Ranieri, whom you all know because he was
    the famous chief teaching assistant for the
  • 17:46 - 17:54
    DSP class.
    So we are able to recover sparse sources
  • 17:54 - 17:58
    using this inversion method.
    We use these finite rate of innovation
  • 17:58 - 18:03
    sampling techniques to actually do it.
    And here is a list of publications
  • 18:03 - 18:10
    that came out of this research.
    This problem is also an inverse problem.
  • 18:10 - 18:14
    It's a Fukushima inverse problem.
    It is a PhD project of Marta
  • 18:14 - 18:19
    Martinez-Camara, and a few other of us
    are involved in this, and we collaborate
  • 18:19 - 18:26
    with a specialist Andreas Stohl.
    Who is a specialist of monitoring of
  • 18:26 - 18:31
    radioactive diffusion.
    So what we would like to do is figure out how
  • 18:31 - 18:36
    many radionuclides were actually released
    at the time of the
  • 18:36 - 18:43
    nuclear accident at Fukushima.
    We have only very few sensors; they are
  • 18:43 - 18:46
    located around the world very far away
    from Fukushima.
  • 18:46 - 18:51
    And the question is, is it possible from
    these few measurements around the world
  • 18:51 - 18:56
    taken later, to invert the entire diffusion
    process and estimate the initial release of
  • 18:56 - 19:03
    radioactive material into the atmosphere?
    What tools are we using?
  • 19:03 - 19:06
    Sparse regularization, so that's
    compressed sensing.
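
To make "sparse regularization" concrete, here is a minimal sketch of a linear inverse problem y = Ax + noise solved with an l1 penalty by iterative soft thresholding (ISTA). Everything below is a toy: the random matrix A merely stands in for the (much more complicated) atmospheric transport model.

```python
# Minimal sparse-regularization sketch: estimate a sparse emission vector x
# from few indirect measurements y = A x + noise, using an l1 penalty solved
# by ISTA (iterative soft thresholding). Toy illustration; A is a stand-in
# for the atmospheric dispersion model.
import numpy as np

rng = np.random.default_rng(1)
n_sensors, n_times = 30, 100                    # few sensors, many possible release times

A = rng.standard_normal((n_sensors, n_times))   # surrogate transport matrix
x_true = np.zeros(n_times)
x_true[[20, 55, 70]] = [3.0, 1.5, 2.0]          # sparse releases
y = A @ x_true + 0.05 * rng.standard_normal(n_sensors)

lam = 0.1
step = 1.0 / np.linalg.norm(A, 2) ** 2          # step size from the Lipschitz constant
x = np.zeros(n_times)
for _ in range(500):
    z = x - step * (A.T @ (A @ x - y))          # gradient step on the data-fit term
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft threshold

print("estimated release times:", np.nonzero(np.abs(x) > 0.5)[0])   # ~ [20, 55, 70]
```
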
  • 19:06 - 19:10
    And we need to use an atmospheric
    dispersion model to understand how
  • 19:10 - 19:14
    radioactive material from Fukushima
    was actually transported across the
  • 19:14 - 19:20
    world.
    So one result that we have, and which is
  • 19:20 - 19:24
    very interesting, is that we were able to
    estimate the emission of xenon, a
  • 19:24 - 19:29
    radioactive gas that was released at the
    time of the explosions at Fukushima, went up
  • 19:29 - 19:37
    into the atmosphere, and was transported by
    weather patterns all over the world.
  • 19:37 - 19:41
    And from the measurements all over the
    world, we were able to pinpoint exactly
  • 19:41 - 19:49
    when the xenon was released, and how much
    xenon was released into the atmosphere.
  • 19:49 - 19:52
    And it turns out we actually know the
    total amount of xenon that was released,
  • 19:52 - 19:58
    because after the accident no xenon was
    actually left in the nuclear power plant.
  • 20:00 - 20:04
    Currently we're trying to go beyond this
    and estimate the cesium release, but that
  • 20:04 - 20:09
    turns out to be a harder problem.
    The paper that describes this will be
  • 20:09 - 20:14
    published at ICASSP this year and is
    available online here on Infoscience.
  • 20:16 - 20:21
    Last but not least is a project we call
    "Can One Hear the Shape of a Room?" It's
  • 20:16 - 20:21
    a PhD project of Ivan Dokmanic, and
    several other people in the lab, in
  • 20:21 - 20:25
    particular Reza Parhizkar and Andreas
    Walther, have worked on this.
  • 20:31 - 20:34
    And also we have a collaboration with Yue
    Lu.
  • 20:34 - 20:39
    He's now at Harvard.
    Now you know about this problem because
  • 20:39 - 20:44
    Ivan gave module 512 about
    dereverberation and echo cancellation.
  • 20:44 - 20:49
    And the next step is to say: if I
    listen to echoes, can I actually
  • 20:44 - 20:49
    understand what the room shape is?
    So if I know the room shape, then I know
  • 20:55 - 21:00
    how to generate the echoes.
    But if you give me the echoes, can I know
  • 21:00 - 21:03
    the room shape?
    It's a classic inverse problem, a very cute
  • 21:03 - 21:06
    one.
    And we usually explain it by saying,
  • 21:06 - 21:10
    let's say you enter a room, you're
    blindfolded.
  • 21:10 - 21:14
    And so you don't see the room at all.
    You snap your finger.
  • 21:14 - 21:19
    You therefore elicit echoes, you listen
    very carefully to the echoes.
  • 21:19 - 21:22
    Can you exactly see or hear the shape of
    the room?
  • 21:24 - 21:28
    Now this has a beautiful theory, which we
    won't have time to really explain, but
  • 21:28 - 21:32
    that you can read up about because it's
    published material.
  • 21:32 - 21:36
    But if you have a source and a receiver, you
    have a direct path between the source and
  • 21:36 - 21:42
    the receiver, and you have echoes given
    by the walls.
  • 21:42 - 21:46
    The echoes given by the walls correspond
    to so-called mirror or image sources, so
  • 21:46 - 21:50
    this is the same as if you had a source
    here and the sound would have gone
  • 21:50 - 21:55
    straight here.
    So if you can locate all these image
  • 21:55 - 21:59
    sources, then you can actually locate the
    room.
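
Here is a tiny 2D sketch of the image-source idea (a stripped-down illustration, not the actual reconstruction algorithm): the echo from a wall behaves as if it came from the source mirrored across that wall, and once an image source is known, the wall is the perpendicular bisector of the segment joining the real source and its image.

```python
# Tiny 2D sketch of the image-source model: mirror a source across a wall to
# get the image source that explains the echo, and recover the wall back from
# the source/image pair. Stripped-down illustration of the geometry only.
import numpy as np

def mirror(source, wall_point, wall_normal):
    """Image of `source` across the wall through `wall_point` with normal `wall_normal`."""
    n = wall_normal / np.linalg.norm(wall_normal)
    return source - 2 * np.dot(source - wall_point, n) * n

def wall_from_image(source, image):
    """Recover the wall (a point on it and its unit normal) from an image source."""
    n = image - source
    n = n / np.linalg.norm(n)
    midpoint = (source + image) / 2      # the wall passes through the midpoint
    return midpoint, n

source = np.array([1.0, 2.0])
wall_point, wall_normal = np.array([4.0, 0.0]), np.array([1.0, 0.0])   # the wall x = 4

img = mirror(source, wall_point, wall_normal)            # -> [7., 2.]
print("image source:", img)
print("recovered wall:", wall_from_image(source, img))   # midpoint [4., 2.], normal [1., 0.]
```
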
  • 21:59 - 22:03
    You locate the walls, and therefore the room.
    And this is, you know, in principle
  • 22:03 - 22:09
    doable; the question was, is it always
    true that this can be done?
  • 22:09 - 22:12
    And is it also realistic to do it in
    practice?
  • 22:12 - 22:16
    So, here is an example of a system with
    five microphones: you have one source and five
  • 22:16 - 22:19
    microphones.
    You have somebody snap their fingers, and you
  • 22:19 - 22:23
    have the echoes related to the walls, and
    you see there is a complication:
  • 22:23 - 22:27
    these echoes come in random order because
    different walls are at different
  • 22:27 - 22:34
    distances from the microphones.
    And the question is, can we find out the
  • 22:34 - 22:38
    shape from a set of measurements as we
    see here?
  • 22:38 - 22:44
    How many measurements do we need?
    Can we have a robust algorithm?
  • 22:44 - 22:48
    So the answer is summarized as: yes, we
    can.
  • 22:48 - 22:52
    And there are some experiments we did,
    both in the lab and outside.
  • 22:52 - 22:56
    So this is one of our seminar rooms; we
    created an artificial wall here to
  • 22:56 - 23:01
    have different shapes of rooms.
    So this is a typical shape of room.
  • 23:01 - 23:06
    Then in this case, with five microphones
    and one source, we were able to estimate
  • 23:06 - 23:12
    the size and shape of the room very
    accurately, to better than 1%.
  • 23:12 - 23:17
    And once we had this, we said, well,
    let's see how robust this is.
  • 23:17 - 23:21
    We went to Lausanne Cathedral, and that's
    actually a foyer of the Lausanne
  • 23:21 - 23:26
    Cathedral, which does not at all meet
    the assumptions of the algorithm that
  • 23:26 - 23:32
    I've described very briefly here.
    And it was still possible to see the
  • 23:32 - 23:36
    major reflectors, meaning the major walls,
    here in the Lausanne Cathedral.
  • 23:36 - 23:40
    And so the answer is yes, one can hear
    the shape of a room.
  • 23:40 - 23:45
    And you can visit Ivan's web page to see
    more details.
  • 23:47 - 23:51
    Now, this was just a selection of
    projects, of work that is being done by
  • 23:51 - 23:55
    PhD students, post-docs, and senior researchers
    in the lab.
  • 23:55 - 23:59
    Please go to the website, as that gives
    the entire portfolio of the research
  • 23:59 - 24:02
    that the lab is currently doing.
Title:
10.2 - Some research projects that use techniques you learned here
Description:

From the official description of 10.. videos:

Goodbye!

As a parting message, we prepared an extra Module (yes, a "bonus feature", just like in DVDs!) with the following purposes:

give you some pointers if you want to learn more about signal processing
show you some of the cool research topics in signal processing that are currently being pursued in our lab
show you how signal processing translates to the real world by introducing several startups founded by current and former members of our lab
