Open Access article published by De Gruyter on November 10, 2023, under a CC BY 4.0 license.

Exploring relationships among soundscape perception, spatiotemporal sound characteristics, and personal traits through social media

  • Ta-Chien Chan, Bing-Sheng Wu, Yu-Ting Lee, Ping-Hsien Lee and Ren-Hao Deng
From the journal Noise Mapping

Abstract

Understanding the causes of noise annoyance requires recognizing the factors that affect soundscape perception. This study aims to explore multilevel factors of acoustic comfort and the perceived health effects of sound exposure, including personal traits, sound characteristics, and spatiotemporal features of the surrounding environment. We designed the Soundmap chatbot to collect data from participants during May 16–July 16, 2022. Participants completed two tasks: sound recording and questionnaires. Sound features were extracted and sound types identified computationally. Two soundscape perception variables were measured as outcome variables, and cumulative link mixed models were applied for statistical analysis. Results showed that for female participants, lower acoustic comfort was associated with sound exposure at night, in transportation and built-up land use areas, and with the sounds of machines, vehicles, and airplanes. Low-frequency sound exposure and the sounds of nature, silence, music, and human activity were associated with higher acoustic comfort; these exposures were also associated with positive perceptions of health in rural areas and on weekends. Individuals with extraversion had a higher tolerance for sound, whereas those with high noise sensitivity had a lower tolerance. Understanding sound perception is crucial to maintaining a sustainable urban acoustic environment.

1 Introduction

Noise annoyance is a global public nuisance [1,2,3], and improving noisy environments has become a sustainable development goal in European countries [4]. Noise exposure not only directly affects emotional and mental health [5,6] but also amplifies the subsequent disease burden, owing to the increased risk of cardiovascular diseases [7], metabolic syndromes [8], sleep disorders [9,10], and decreased cognition [11,12]. Noise annoyance is an important threat to public health. Fundamentally, continuous noise measurements are limited by spatiotemporal coverage, which makes quantifying noise exposure difficult. In addition, noise annoyance can sometimes be a very subjective feeling, depending on the sound, and varies in different social and geographical contexts. Personality [13,14] and noise sensitivity [15] influence noise annoyance. Acoustic comfort is the basic feeling of users toward an acoustic environment, which can provide either psychological relaxation or stress relief [16]. By contrast, noise annoyance is the result of a stress response to cumulative or repetitive noise exposure and poor acoustic perception. Therefore, the evaluation of soundscape perception can improve the understanding of the causes of noise annoyance attributed to multilevel factors.

Studies have primarily focused on the contributions of road traffic [17,18] and aircraft noise [19] to noise annoyance. Indicators of noise exposure primarily rely on sound energy measured as the equivalent continuous sound level (Leq), which can be treated as the average outdoor noise exposure level, particularly in residential areas. If noise evaluation focuses on long-term exposure alone, this traditional approach may suffice. However, to quantify short-term noise exposure and the subjective feelings involved, the traditional approach cannot capture an individual's dynamic mobility pattern, diverse sources of sound exposure, or fleeting impressions of sound. Therefore, some studies have proposed global positioning system (GPS)-enabled tracking with a personal sound meter to help understand the relationship between spatiotemporal features and noise exposure [20]. However, the types of sounds and their locations cannot be clearly identified from decibel levels and GPS information alone. In addition, when the feeling associated with a specific sound is short-lived, a recall-survey design has an increased probability of recall bias.

There have been several studies on participatory noise sensing based on mobile phones. A mobile phone participatory noise mapping project called “NoiseTube” was proposed in 2010 [21]. The research team conducted experiments in Antwerp, Belgium, using laboratory-calibrated mobile phones so that participants could record sound pressure in their daily lives. The results revealed a spatial distribution similar to the official simulated noise map, with some differences owing to the real participation records. In addition to noise exposure, subjective perceptions and images of the locations of recorded sounds are essential for understanding human–environment interactions in the soundscape. A mobile app named Hush City (https://opensourcesoundscapes.org/hush-city/), developed in Berlin and currently available in five languages, allows volunteers to locate quiet places in their communities. Many similar applications, such as NoiseCapture [22], are available worldwide. Given the huge number of mobile phone users, noise mapping need not rely solely on noise simulation models; real-world noise monitoring can also be conducted through smartphones [23].

Owing to the aforementioned difficulties in evaluating soundscape perception, this study adopted a prospective participatory research plan to recruit subjects and innovatively designed an interactive survey platform on the LINE app (https://line.me/en/), the social media application with the most users in Taiwan. Furthermore, this study adopted artificial intelligence technology to identify types of sounds in real time. The purpose of this study was to explore multilevel factors of acoustic comfort and perceived health effects of sound exposure in daily life. These factors include personal characteristics, characteristics of the sound, and the spatiotemporal features of the surrounding environment.

2 Methods

2.1 Recruitment of the subjects

We designed the Soundmap chatbot to collect data through the LINE app, which has 21 million users. Participants were recruited mainly through two channels: (1) Facebook posts and (2) a snowball strategy inviting our collaborators' students and their friends or family members. All subjects were aged ≥20 years. Each participant completed two tasks during the 2-month study period (May 16–July 16, 2022). In the first task, participants were asked to record sounds at least three times per day for a minimum of 15 days, including at least four weekends. For each sound recording, we suggested that participants record sounds occurring in their daily lives. Our platform also actively sent three notifications at random times in the morning (08:00–12:00), afternoon (12:01–18:00), and evening (18:01–22:00) to remind participants to record sounds; the random timing was intended to capture diverse sources of sound at different times and places. Along with each sound recording, six additional questions had to be answered. We suggested that participants keep each recording within 10 s; the average recording length in this study was 12.48 s. The total time required to complete each task was less than 1 min.

The six questions were as follows (screenshots of Task 1 in the Soundmap are shown in Appendix 1). The collection of different kinds of soundscapes, and their perception, is based on concepts from ISO 12913-1 [24] and ISO 12913-2 [25].

  1. After a sound was recorded through the Soundmap, the A-weighted decibel level (dBA) and types of sounds were identified using YAMNet (https://hub.tensorflow.google.cn/google/yamnet/1), and the results were returned as LINE messages. A note below the dBA value told participants that the value was for reference only, because sound measurement varies with the microphone quality of different mobile phones and with the recording direction and distance from the sound source. YAMNet is a pre-trained artificial intelligence model covering 521 audio event classes from the AudioSet-YouTube corpus. Our platform returned the three identification results with the highest probabilities. Participants were asked to confirm the accuracy of the identification results; if most of the results were incorrect, the chatbot asked them to report the actual type of sound in the recording.

  2. How comfortable are you with the recorded sounds? (1: very uncomfortable; 5: very comfortable).

  3. If you were exposed to this sound every day, would you agree that it would have a positive impact on your mental and physical health? (1: strongly disagree; 5: strongly agree).

  4. Where were the sounds recorded? (three options: indoor, outdoor, and transportation). During our pilot study, participants on public or private transportation (such as a car, bus, metro, or train) could not decide whether their position counted as an indoor or outdoor environment. Therefore, we treated “transportation” as a distinct category alongside “indoor” and “outdoor” in the formal survey.

  5. If the answer was indoors or outdoors, there were 15 options to select from: industrial areas, homes, campuses, quiet workplaces, noisy workplaces, restaurants, roadsides, quiet leisure places, lively entertainment venues, hospitals or clinics, stations, markets, near the railway, near the airport, and other places inhabited or visited by the participants. If the answer was transportation, there were seven options to select from: cars, buses, motorcycles, high-speed rail, trains, mass rapid transit, and bicycles.

  6. Location information was collected with assisted GPS at the time of recording. If the automatic geolocation was incorrect, participants could manually enter the correct address or point of interest in the search window, or click the correct place on the map. The corresponding x- and y-coordinates were stored in the database.
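The top-3 identification step described in item 1 can be sketched in a few lines. This is a minimal illustration, not the production chatbot code: the class names and probabilities below are hypothetical stand-ins for YAMNet's 521-class output.

```python
# Minimal sketch of returning the three most probable sound classes.
# Class names and scores below are hypothetical stand-ins for YAMNet output.
def top_three(scores):
    """Return the three (label, probability) pairs with the highest scores."""
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:3]

example_scores = {
    "Speech": 0.62, "Vehicle": 0.21, "Music": 0.08,
    "Silence": 0.05, "Dog": 0.04,
}
print(top_three(example_scores))
# -> [('Speech', 0.62), ('Vehicle', 0.21), ('Music', 0.08)]
```

In the survey flow, these three labels were shown to the participant, who then confirmed or corrected them.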

In Task 2, five questionnaires were designed using Google Forms and embedded in our chatbot. A unique scrambled key linked each participant's Task 1 results to the corresponding questionnaires. The scrambled key is the encrypted ID provided by the LINE platform to identify a unique person; the platform provides different encrypted IDs for the same person across different chatbots. The questionnaires covered demographics, the Big-5 personality traits (scale: 1–4) – openness, conscientiousness, extraversion, agreeableness, and neuroticism [26] – a noise sensitivity questionnaire (scale: 1–5) [27], and total noise exposure time within 1 week. Screenshots of Task 2 in the Soundmap are shown in Appendix 2.

2.2 Sound feature extraction, identification, and dimension reduction

For each recording, we computed LAeq (the A-weighted equivalent continuous sound level) and the A-weighted sound pressure levels exceeded at different percentiles of the recording time, namely LAmin, LA90, LA50, LA10, and LAmax. Python [28] and the SciPy package (version 1.7.3, https://scipy.org/) were used to compute sound pressure levels. We also calculated the percentage of time each recording spent within specific sound pressure intervals: ≥75.0, 65.0–74.9, 55.0–64.9, 45.0–54.9, and <45.0 dBA. In addition, we calculated the frequency spectrum of each recording and converted it to the percentage of time occupied by three frequency intervals: high frequency (>4 kHz), middle frequency (200 Hz–4 kHz), and low frequency (<200 Hz). We applied the SciPy package to perform a Fourier transform of each recording, converting it from the time domain to the frequency domain. Because the 14 sound characteristics were highly correlated, we used principal component analysis (PCA) to reduce the dimensions with the “SciViews” package [29] in R version 4.1.2 [30]. Based on Kaiser's rule of retaining components with eigenvalues larger than 1, the first four components explained 80.39% of the variance: sound pressure-related factors (PC 1), middle frequency (PC 2), low sound pressure (PC 3), and low frequency (PC 4). The scree and PCA loading plots are presented in Figure A1. For sound identification, the participants reported an accuracy of 83.98% (8,526/10,152), and inaccurate identifications were corrected through self-reporting. Because YAMNet has 521 classes, we returned the three results with the highest probabilities for each recording.
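The percentile-based indicators can be computed directly from a time series of A-weighted levels. A minimal sketch, assuming a short list of per-frame dBA values (the actual study used the SciPy package on the raw waveforms; the sample series below is hypothetical):

```python
import math

def laeq(levels_dba):
    """A-weighted equivalent continuous level: energy average of dBA samples."""
    mean_energy = sum(10 ** (l / 10) for l in levels_dba) / len(levels_dba)
    return 10 * math.log10(mean_energy)

def exceedance_level(levels_dba, n):
    """LA_n: the level exceeded n% of the time (n=90 -> LA90, n=10 -> LA10)."""
    ordered = sorted(levels_dba, reverse=True)  # loudest first
    idx = min(int(len(ordered) * n / 100), len(ordered) - 1)
    return ordered[idx]

series = [40, 42, 45, 50, 55, 60, 48, 43, 41, 39]  # hypothetical dBA frames
print(round(laeq(series), 1))                       # -> 52.0
print(exceedance_level(series, 90))                 # -> 39 (LA90)
print(exceedance_level(series, 10))                 # -> 55 (LA10)
```

Note that LAeq is an energy average, so loud moments dominate: the single 60 dBA frame pulls LAeq well above the median level of the series.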
During data analysis, we subjectively combined the participants' answers – drawn from the 15 indoor/outdoor location options and the seven transportation options listed in Appendix 1, plus the free text entered under the “others” option – into ten sound categories: human activity, white or pink noise, natural sounds, silence, music, machines, vehicles, railways, boats, and airplanes.
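Collapsing reported sound types into the ten analysis categories can be illustrated as a simple lookup. The mapping entries below are hypothetical examples; the actual assignment was made manually by the authors.

```python
# Hypothetical mapping from reported sound types to the ten study categories.
CATEGORY_MAP = {
    "birdsong": "natural sounds", "rain": "natural sounds",
    "conversation": "human activity", "footsteps": "human activity",
    "air conditioner": "machines", "car": "vehicles",
    "metro": "railways", "pop song": "music",
}

def categorize(reported_types):
    """Map a recording's one-to-three reported types to study categories."""
    return sorted({CATEGORY_MAP[t] for t in reported_types if t in CATEGORY_MAP})

print(categorize(["birdsong", "conversation"]))
# -> ['human activity', 'natural sounds']
```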

2.3 Demographic information, recording time, and questionnaires

The demographic information collected comprised age, sex, and socioeconomic status (SES). SES was computed from four education levels (below or equal to junior high school, high school, university or college, and graduate school) and five occupation levels (senior professionals/managers, professionals/middle managers, semi-professionals or general civil servants, skilled workers, and unskilled workers) covering 14 occupations. Based on the formula suggested in the literature [31], we converted SES into five classes: three computed from the formula (upper-middle and upper class, middle class, and lower-middle class) plus two additional classes for students and for retired or unemployed individuals. As mentioned above, we sent notifications reminding participants to record sounds three times per day, and each recording carried a timestamp. Recordings were classified into three groups within a day based on the day-evening-night level (Lden) definition from the 2002 European Standard (https://www.eea.europa.eu/help/glossary/eea-glossary/lden); we borrowed the Lden time definition to compute the average dBA separately within the three temporal windows of the day. We also recorded whether each recording date fell on a weekday or weekend. Participants with noise sensitivity scores ≥3 were classified into the high-noise-sensitivity group, and those with scores <3 into the low-noise-sensitivity group.
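The temporal grouping of timestamps can be sketched as follows; the window boundaries follow the Lden definitions above, and the datetime values are arbitrary examples:

```python
from datetime import datetime

def lden_window(ts):
    """Classify a timestamp into the Lden windows: day 07:00-19:00,
    evening 19:00-23:00, night 23:00-07:00."""
    if 7 <= ts.hour < 19:
        return "day"
    if 19 <= ts.hour < 23:
        return "evening"
    return "night"

def is_weekend(ts):
    """True for Saturday or Sunday (weekday() returns 5 or 6)."""
    return ts.weekday() >= 5

print(lden_window(datetime(2022, 5, 16, 8, 30)))   # -> day
print(lden_window(datetime(2022, 5, 16, 23, 5)))   # -> night
print(is_weekend(datetime(2022, 5, 21)))           # -> True (a Saturday)
```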

2.4 Processing spatial information

To protect participants' identity and privacy, we geomasked the point-based geospatial data into basic statistical units (BSAs), the smallest geographic census units in Taiwan, namely, blocks below the village level. We then used the spatial join function in QGIS 3.28 [32] to link the land use and population density groups to the BSA spatial polygons (in shapefile format) where the sounds were recorded. Land use data for 2022 and population density for June 2022 at the BSA level were obtained from the socioeconomic database maintained by the Ministry of the Interior, Taiwan (https://segis.moi.gov.tw/STAT/Web/Portal/STAT_PortalHome.aspx). We used the first set of land use data, with nine categories in the first tier, 41 subcategories in the second tier, and 103 subcategories in the third tier; our classification used the first tier, treating the largest percentage of land use as the representative land use for each BSA. The population density at the BSA level was classified into three groups based on census data: areas without any registered residents (e.g., parks, schools, hospitals, or government agencies), urban areas (≥1,000 individuals/km2), and rural areas (<1,000 individuals/km2). The major land use type and population density group for each record were further considered in the model. Moreover, we used QGIS and an open-source digital terrain layer [33] to visualize the relationship between acoustic comfort and land use or population density at a finer spatial resolution.
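The population-density grouping of BSAs reduces to a simple threshold rule. A sketch, where the argument is residents per km² from the census data (the parameter name is illustrative):

```python
def density_group(density_per_km2):
    """Assign a BSA to the study's population-density groups:
    no registered residents, urban (>=1,000/km2), or rural (<1,000/km2)."""
    if density_per_km2 == 0:
        return "without any registered persons"
    return "urban" if density_per_km2 >= 1000 else "rural"

print(density_group(0))      # -> without any registered persons
print(density_group(2500))   # -> urban
print(density_group(120))    # -> rural
```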

2.5 Statistical analysis

Two soundscape perception variables were used as the outcome variables, each measured on a 5-point Likert scale. The first was the acoustic comfort of the recorded sound (Y1); higher values indicate higher acoustic comfort. The second was whether the recorded sound would exert a positive impact on the participant's physical and mental health (Y2) under daily exposure; higher values represent stronger agreement. Because both response variables are ordinal, we applied cumulative link mixed models using the “ordinal” package [34] in R for statistical analysis. The model accounts for repeated recordings by the same participants through participant-level random effects. The explanatory variables (X) included age, sex, SES, noise exposure, Big-5 personality traits, noise sensitivity, Lden group, population density, weekend/weekday, the four PCA components, representative land use, and types of locations and sounds. We used a stepwise approach to retain variables at a statistical significance level (alpha) of 0.05. Because the types of locations and sounds were highly correlated, as measured using Pearson's chi-square test (p < 0.05), we ran two models for each outcome variable to retain the significant variables: one with types of locations and the other with types of sounds.

$$\log\frac{P\left(Y_{c[i]} \le j\right)}{1 - P\left(Y_{c[i]} \le j\right)} = \alpha_j - \left(Z_{c[i]}\,\gamma_c + X_{c[i]}\,\beta\right), \qquad \gamma_c \sim N\left(0, \sigma_c^2\right),$$

where

- $Y_{c[i]}$ = the outcome variable ($Y_1$ or $Y_2$) for the $i$th recording of user $c$;
- $j$ = level of the ordinal variable ($Y_1/Y_2$), $1 \le j \le 5$;
- $\alpha_j$ = threshold for level $j$;
- $c$ = user; $i$ = the $i$th recorded sound of user $c$;
- $Z_{c[i]}$ = the design matrix for the random effects;
- $X_{c[i]}$ = the covariate matrix.

This cumulative logit model satisfies the proportional odds assumption. Model interpretation can use odds ratios (ORs) of cumulative probabilities and their complements. For two values $X_{c[i]1}$ and $X_{c[i]2}$ of $X_{c[i]}$, the OR comparing the cumulative probabilities can be expressed as

$$\frac{P\left(Y_{c[i]} \le j \mid X_{c[i]} = X_{c[i]1}\right)\big/P\left(Y_{c[i]} > j \mid X_{c[i]} = X_{c[i]1}\right)}{P\left(Y_{c[i]} \le j \mid X_{c[i]} = X_{c[i]2}\right)\big/P\left(Y_{c[i]} > j \mid X_{c[i]} = X_{c[i]2}\right)}.$$

The OR of cumulative probabilities is called a cumulative OR [35]. Under the model above, the cumulative odds of a response $\le j$ at $X_{c[i]} = X_{c[i]1}$ are $\exp\left[-\left(X_{c[i]1} - X_{c[i]2}\right)\beta\right]$ times the odds at $X_{c[i]2}$. The log cumulative OR is therefore proportional to the distance between $X_{c[i]1}$ and $X_{c[i]2}$, and the same proportionality constant applies to each logit. An OR greater than 1 indicates that the subjects in the numerator have higher odds of the event than their counterparts in the denominator.
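The proportional odds property can be checked numerically. The sketch below uses the minus-sign parameterization fitted by the ordinal package, logit P(Y ≤ j) = α_j − Xβ, with hypothetical thresholds and a single covariate; the cumulative OR comes out identical at every threshold:

```python
import math

def cum_prob(alpha_j, eta):
    """P(Y <= j) under the cumulative logit model logit P(Y<=j) = alpha_j - eta."""
    return 1 / (1 + math.exp(-(alpha_j - eta)))

alphas = [-2.0, -0.5, 0.8, 2.2]   # hypothetical thresholds for a 5-level outcome
beta, x1, x2 = 0.7, 1.0, 0.0      # hypothetical effect and two covariate values

for a in alphas:
    p1, p2 = cum_prob(a, x1 * beta), cum_prob(a, x2 * beta)
    cum_or = (p1 / (1 - p1)) / (p2 / (1 - p2))
    # Identical at every threshold j: exp(-(x1 - x2) * beta) ~= 0.4966
    print(round(cum_or, 4))
```

Because the cumulative OR does not depend on j, a single OR per covariate (as reported in Tables 3 and 4) summarizes the shift across all five response levels.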

3 Results

3.1 Participants’ inclusion and distribution

After the 2-month survey, 11,774 sound records were obtained from 314 participants (Figure 1). We then cleaned the data, linking each participant's daily recordings in Task 1 to their questionnaires in Task 2. We excluded 895 records from 22 participants because of missing demographic data or incomplete records. We then joined the two waves of Task 2 questionnaires to the Task 1 dataset and excluded 214 sound records meeting at least one of the following criteria: poor sound quality (LAeq < 20 dBA), missing land use information, or recorded on a boat. This left 10,152 sound records and questionnaires completed by 207 participants. The spatial distribution of the recordings covered all of Taiwan (Figure 2). Although participants were clustered mostly in northern Taiwan, those in the rest of Taiwan still covered different types of soundscapes. In general, uncomfortable acoustic perceptions clustered in the center of the Taipei metropolitan area, whereas comfortable perceptions clustered in its surroundings and in eastern and central Taiwan. Zooming into the Taipei metropolitan area (Figure 3), comfortable sounds (Figure 3c) tended to be concentrated closer to forest land than to the city center. Similar trends were found for the perception of positive health effects at the same zoom scale (Figure A2).

Figure 1: Flowchart of participants' inclusion.

Figure 2: Spatial distribution of all included recordings.

Figure 3: Spatial relationship between acoustic comfort and different land use types in the Taipei metropolitan area: (a) very uncomfortable and uncomfortable, (b) moderate, (c) very comfortable and comfortable.

3.2 Descriptive statistics of variables

The descriptive statistics for soundscape perception, demographics, personality, and noise exposure indicators are listed in Table 1. Each participant contributed an average of 49.04 records (standard deviation [SD]: 47.45) over 19.11 days (SD: 14.83) during the study period. Among the soundscape perception indicators, the average acoustic comfort was 3.18, and the average agreement on the positive health impact of sound was 3.19. Regarding demographic factors, the average age was 31.66 years; females accounted for 61.8% of participants, students for 45.41%, and the upper-middle and upper classes for 38.65%. For the Big-5 personality traits, the average values for open-mindedness, conscientiousness, extraversion, agreeableness, and neuroticism were 1.74, 1.84, 1.61, 1.88, and 1.75, respectively. Among the noise exposure indicators, the average time exposed to noise within 1 week was 3.6% (approximately 6 h per week). Based on the noise sensitivity scale, 110 participants (53.14%) were classified into the high-sensitivity group.

Table 1

The descriptive statistics of soundscape perception, demographic, personality, and noise exposure indicators

Variables Mean ± sd/n (%)
Average recordings per participant 49.04 ± 47.45
Average participating days per participant 19.11 ± 14.83
Soundscape perception indicators
Acoustic comforts (Y1) 3.18 ± 0.89
Agree on positive impact from the sound (Y2) 3.19 ± 1.02
Demographic factors
Age (years) 31.66 ± 11.21
Sex (n = 207)
 Male 79 (38.2%)
 Female 128 (61.8%)
SES
 Lower-middle class 5 (2.42%)
 Middle class 21 (10.14%)
 Upper-middle and upper class 80 (38.65%)
 Students 94 (45.41%)
 Retired or unemployed 7 (3.38%)
Personality
 Open mindedness 1.74 ± 1.38
 Conscientiousness 1.84 ± 1.45
 Extraversion 1.61 ± 1.34
 Agreeableness 1.88 ± 1.45
 Neuroticism 1.75 ± 1.4
Noise exposure indicators
Time exposed to noise within one week (%) 3.6 ± 9.45
Noise sensitivity scale
 Low sensitivity group 97 (46.86%)
 High sensitivity group 110 (53.14%)

The descriptive statistics of the sound characteristics and spatiotemporal features are listed in Table 2. Among sound pressure indicators, the average values of LAeq, LAmax, LA10, LA50, LA90, and LAmin were 43.8, 54.4, 45.9, 40.6, 36.6, and 26.0 dBA, respectively. In terms of the percentages of sound pressure and frequency, most recording time fell below 45 dBA (63.63%) and within 200 Hz–4 kHz (81.6%). The representative land use at the BSAs of the recording sites was 56.13% built-up areas, 15.26% water conservancy and public facilities, 12.66% agriculture, 9.08% transportation, 3.83% forests, and 3.04% recreational facilities. The recording sites were mostly in urban areas (68.64%), with 18.44% in rural areas. The LAeq was 37.8 dBA at night, 44.9 dBA in the daytime, and 43.2 dBA in the evening; on weekdays and weekends, it was 43.3 and 44.8 dBA, respectively. The top three recording locations were indoor residences (47.08%), outdoor transport facilities (13.9%), and indoor workplaces and campuses (12.48%). The top six sound identification results were human activities (45.17%), natural sounds (36.64%), vehicles (25.45%), silence (22.17%), machines (19.06%), and music (12.03%).

Table 2

Descriptive statistics of sound characteristics and spatiotemporal features

Variables Mean ± sd/n (%)
Sound pressure indicators
LAmin (dBA) 26.0 ± 12.3
LA90 (dBA) 36.6 ± 11.2
LA50 (dBA) 40.6 ± 11.4
LA10 (dBA) 45.9 ± 12.0
LAmax (dBA) 54.4 ± 12.6
LAeq (dBA) 43.8 ± 11.5
Percentage of sound pressure (%)
≥75.0 dBA 0.00 ± 0.8
65.0–74.9 dBA 2.03 ± 8.62
55.0–64.9 dBA 12.13 ± 24.61
45.0–54.9 dBA 22.16 ± 29.49
<45.0 dBA 63.63 ± 41.02
Percentage of frequency (%)
>4 kHz (%) 10.13 ± 4.82
200 Hz–4 kHz (%) 81.60 ± 6.27
<200 Hz (%) 8.27 ± 5.46
Representative land use at BSAs
Agricultural 1,285 (12.66%)
Forest 389 (3.83%)
Transportation 922 (9.08%)
Water conservancy and public facility 1,549 (15.26%)
Built-up area 5,698 (56.13%)
Recreational facility 309 (3.04%)
Population density groups
Without any registered persons 1,312 (12.92%)
Urban 6,968 (68.64%)
Rural 1,872 (18.44%)
Time groups based on Lden (dBA)
Night (23:00–07:00) 37.8 ± 11.2
Day (07:00–19:00) 44.9 ± 11.3
Evening (19:00–23:00) 43.2 ± 11.3
LAeq on weekend/workday (dBA)
Weekday 43.3 ± 11.4
Weekend 44.8 ± 11.6
Recording locations
Indoor workplace and campus 1,267 (12.48%)
Indoor transport facility 238 (2.34%)
Indoor residence 4,780 (47.08%)
Indoor others 670 (6.6%)
Outdoor workplace and campus 319 (3.14%)
Outdoor transport facility 1,411 (13.9%)
Outdoor residence 450 (4.43%)
Outdoor others 332 (3.27%)
Train and metro 324 (3.19%)
Vehicles on the road 361 (3.56%)
Sound category (each recording might contain one to three categories)
Human activity 4,586 (45.17%)
Nature 3,720 (36.64%)
Silence 2,251 (22.17%)
Music 1,221 (12.03%)
Machine 1,935 (19.06%)
Vehicle 2,584 (25.45%)
White or pink noise 607 (5.98%)
Rail 465 (4.58%)
Boat 52 (0.51%)
Airplane 43 (0.42%)

3.3 Factors related to acoustic comfort

As shown in Table 3, females tended to have lower acoustic comfort than males (OR: 0.6, 95% CI: 0.42, 0.86). Recordings in rural areas compared to urban areas (OR: 1.19, 95% CI: 1.01, 1.39) and on weekends compared to weekdays (OR: 1.12, 95% CI: 1.02, 1.23) had higher acoustic comfort. Recordings during the daytime (OR: 0.76, 95% CI: 0.65, 0.88) and evening (OR: 0.81, 95% CI: 0.69, 0.94) had lower acoustic comfort than those at night. Among the sound-related features, low frequency (OR: 1.91, 95% CI: 1.39, 2.61) was associated with higher acoustic comfort, whereas sound pressure-related factors (OR: 0.92, 95% CI: 0.90, 0.94) and middle frequency (OR: 0.41, 95% CI: 0.26, 0.64) were associated with lower acoustic comfort. Compared with agricultural land use, transportation (OR: 0.74, 95% CI: 0.59, 0.93) and built-up areas (OR: 0.82, 95% CI: 0.68, 0.98) were associated with lower acoustic comfort. Compared with indoor workplaces and campuses, the traffic-related locations, including indoor transport facilities (OR: 0.7, 95% CI: 0.52, 0.96), outdoor transport facilities (OR: 0.7, 95% CI: 0.58, 0.85), trains and metros (OR: 0.57, 95% CI: 0.43, 0.76), and vehicles on the road (OR: 0.62, 95% CI: 0.48, 0.81), were all associated with lower acoustic comfort, whereas the other outdoor locations had higher acoustic comfort. In Table A1, most variables are the same as in Table 3, but the recording sites are replaced with the sound categories. Among the sound categories, human activity (OR: 1.16, 95% CI: 1.05, 1.27), natural sounds (OR: 2.17, 95% CI: 1.96, 2.41), silence (OR: 1.22, 95% CI: 1.06, 1.40), and music (OR: 2.19, 95% CI: 1.91, 2.51) were associated with higher acoustic comfort, whereas machines (OR: 0.58, 95% CI: 0.51, 0.65), vehicles (OR: 0.37, 95% CI: 0.33, 0.42), and airplanes (OR: 0.25, 95% CI: 0.13, 0.47) were associated with lower acoustic comfort.

Table 3

Estimated effects on acoustic comfort by type of location

Variables OR (95% CI) P value
Sex
Male ref
Female 0.6 (0.42, 0.86) 0.0052
Population density groups
Without any registered persons 0.97 (0.82, 1.16) 0.7593
Urban ref
Rural 1.19 (1.01, 1.39) 0.038
Time groups based on Lden
Night (23:00–07:00) ref
Day (07:00–19:00) 0.76 (0.65, 0.88) <0.001
Evening (19:00–23:00) 0.81 (0.69, 0.94) 0.0075
Weekday/weekend
Weekday ref
Weekend 1.12 (1.02, 1.23) 0.0133
Sound features by principal component analysis
Sound pressure-related factors (PC 1) 0.92 (0.90, 0.94) <0.001
Middle frequency (PC 2) 0.41 (0.26, 0.64) <0.001
Low frequency (PC 4) 1.91 (1.39, 2.61) <0.001
Land use
Agricultural ref
Forest 1.22 (0.93, 1.61) 0.1536
Transportation 0.74 (0.59, 0.93) 0.0091
Water conservancy and public facility 0.88 (0.71, 1.1) 0.2652
Built-up area 0.82 (0.68, 0.98) 0.029
Recreational facility 0.83 (0.61, 1.11) 0.2091
Recording locations
Indoor workplace and campus ref
Indoor transport facility 0.7 (0.52, 0.96) 0.0251
Indoor residence 1.46 (1.25, 1.7) <0.001
Indoor others 1.76 (1.43, 2.18) <0.001
Outdoor workplace and campus 2.6 (1.98, 3.42) <0.001
Outdoor transport facility 0.7 (0.58, 0.85) <0.001
Outdoor residence 3.12 (2.41, 4.02) <0.001
Outdoor others 3.14 (2.38, 4.13) <0.001
Train and metro 0.57 (0.43, 0.76) <0.001
Vehicles on the road 0.62 (0.48, 0.81) <0.001

3.4 Factors related to positive perception of health impacts from sound exposure

As shown in Table 4, older adults had a more positive perception of the impact of sound on health (OR: 1.02, 95% CI: 1.01, 1.04), whereas females (OR: 0.63, 95% CI: 0.44, 0.90) were less positive. Participants recording during the daytime (OR: 0.76, 95% CI: 0.66, 0.87) and evening (OR: 0.78, 95% CI: 0.67, 0.91) had less positive perceptions than those recording at night. Individuals with an extraverted personality had a more positive perception (OR: 1.36, 95% CI: 1.15, 1.61), and individuals with high noise sensitivity a less positive one (OR: 0.51, 95% CI: 0.33, 0.79). Low-frequency sound was associated with a more positive perception (OR: 1.87, 95% CI: 1.42, 2.47), whereas sound pressure-related factors (OR: 0.92, 95% CI: 0.90, 0.94) and middle frequency (OR: 0.4, 95% CI: 0.27, 0.59) were associated with a less positive perception. Compared with agricultural land use, transportation (OR: 0.71, 95% CI: 0.57, 0.87) and built-up areas (OR: 0.84, 95% CI: 0.71, 0.99) were associated with a less positive perception. Compared with indoor workplaces and campuses, the traffic-related locations, including indoor transport facilities (OR: 0.75, 95% CI: 0.56, 1.01), outdoor transport facilities (OR: 0.76, 95% CI: 0.63, 0.90), trains and metros (OR: 0.49, 95% CI: 0.37, 0.63), and vehicles on the road (OR: 0.61, 95% CI: 0.48, 0.78), were all associated with less positive perceptions, whereas the other outdoor locations had more positive perceptions. In Table A2, we further found that recordings on weekends (OR: 1.11, 95% CI: 1.02, 1.21) and the sounds of nature (OR: 1.86, 95% CI: 1.69, 2.04), silence (OR: 1.26, 95% CI: 1.11, 1.44), and music (OR: 1.86, 95% CI: 1.64, 2.11) were associated with a more positive perception. In contrast, the sounds of machines (OR: 0.62, 95% CI: 0.56, 0.69), vehicles (OR: 0.46, 95% CI: 0.41, 0.51), and airplanes (OR: 0.45, 95% CI: 0.25, 0.82) were associated with a less positive perception.

Table 4

Estimated effects on agreement with positive health impacts from sound exposure, by type of location

Variables OR (95% CI) P value
Age 1.02 (1.01, 1.04) 0.0036
Sex
Male ref
Female 0.63 (0.44, 0.90) 0.0102
Time groups based on Lden
Night (23:00–07:00) ref
Day (07:00–19:00) 0.76 (0.66, 0.87) <0.001
Evening (19:00–23:00) 0.78 (0.67, 0.91) <0.001
Extraversion 1.36 (1.15, 1.61) <0.001
Noise sensitivity scale
Low sensitivity group ref
High sensitivity group 0.51 (0.33, 0.79) 0.0026
Sound features by principal component analysis
Sound pressure-related factors (PC 1) 0.92 (0.90, 0.94) <0.001
Middle frequency (PC 2) 0.40 (0.27, 0.59) <0.001
Low frequency (PC 4) 1.87 (1.42, 2.47) <0.001
Land use
Agricultural ref
Forest 1.25 (0.96, 1.61) 0.0932
Transportation 0.71 (0.57, 0.87) 0.0013
Water conservancy and public facility 1.01 (0.82, 1.23) 0.9489
Built-up area 0.84 (0.71, 0.99) 0.0383
Recreational facility 0.85 (0.64, 1.13) 0.2596
Location
Indoor workplace and campus ref
Indoor transport facility 0.75 (0.56, 1.01) 0.0545
Indoor residence 1.51 (1.31, 1.74) <0.001
Indoor others 1.37 (1.12, 1.67) 0.0018
Outdoor workplace and campus 2.42 (1.87, 3.13) <0.001
Outdoor transport facility 0.76 (0.63, 0.90) 0.0022
Outdoor residence 3.51 (2.75, 4.47) <0.001
Outdoor others 2.90 (2.23, 3.77) <0.001
Train and metro 0.49 (0.37, 0.63) <0.001
Vehicles on the road 0.61 (0.48, 0.78) <0.001
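The odds ratios above are the exponentiated coefficients of the cumulative link mixed model, with confidence limits obtained as exp(β ± 1.96·SE). A minimal sketch of this back-transformation (the coefficient and standard error below are hypothetical, for illustration only, not values from the fitted model):

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Convert a log-odds coefficient and its standard error
    into an odds ratio with a 95% confidence interval."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Hypothetical coefficient and SE for illustration only.
or_, lo, hi = odds_ratio_ci(beta=0.625, se=0.085)
print(f"OR: {or_:.2f} (95% CI: {lo:.2f}, {hi:.2f})")
```

The same transformation applies to every row of Tables 4, A1, and A2: a coefficient above zero maps to an OR above 1 (more positive perception), and one below zero maps to an OR below 1.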

4 Discussion

This study successfully identified multilevel factors influencing the perception of acoustic comfort and of the health impacts of different soundscape exposures. The study design introduced a social media platform to collect data from the participants in their daily lives. Simultaneous prospective repeated measurements of sound and perception can further reduce recall bias and represent daily exposure at different times. The findings indicated that sociodemographic factors, sound features, and spatiotemporal characteristics, including recording time, day (weekday or weekend), and location, were associated with acoustic comfort, and that personality and noise sensitivity were further associated with the perception of positive health impacts from soundscape exposure. With this innovative approach, complex and subjective feelings about sound can be investigated, and personal and environmental confounders can be controlled. For example, recordings at traffic-related locations, in transportation land uses, and of vehicle-related sounds were associated with both lower acoustic comfort and a less positive health perception.

According to the United Nations Environment Programme's "Frontiers 2022" report, noise in urban areas is a major environmental issue that affects public health (https://www.unep.org/resources/frontiers-2022-noise-blazes-and-mismatches). Our findings indicate that lower acoustic comfort is associated with transportation and built-up land uses, whereas higher acoustic comfort is associated with natural and quiet places. Therefore, we recommend that green spaces, tree belts, noise barriers, and green transportation be incorporated into urban planning to reduce traffic noise exposure and provide residents with a more pleasant soundscape.

The sound records revealed certain spatial patterns through a GPS-enabled tracking system and highlighted individuals' perceptions of sound through a spatial lens. First, most records came from cities, including the three metropolitan areas of Taipei (northern), Taichung (central), and Kaohsiung (southern). This spatial pattern implies that individuals are sensitive to sound in urban acoustic environments. In addition, records rated comfortable and very comfortable were clustered at certain locations, whereas records rated uncomfortable and very uncomfortable were not spatially concentrated. This finding indicates that specific locations tend to provide pleasant acoustic environments. Second, at the city level, such as in the Taipei metropolitan area, there was a strong spatial relationship between records and land use types. Most records were located in built-up areas, including residential and commercial areas. More than 50% of records were labeled as moderate acoustic comfort, which indicates awareness of ambient sound. Records tagged "uncomfortable" or "very uncomfortable" fell mostly in the built-up land use type. Some records with high acoustic comfort were clustered around green spaces, including agricultural, forest, and recreational areas. Urban green spaces play an essential role in improving perceptions [36], and one study in Nigeria found that birdsong and vegetation density in parks were associated with higher acoustic comfort [37]. These results support our finding that natural sounds in urban green spaces were associated with higher acoustic comfort. Finally, the spatial distribution of acoustic comfort levels in a city reflected that, in general, individuals felt neutral toward sound exposure around the city; the main factor underlying this spatial nexus is individuals' perception of sounds.
The survey revealed that most participants were upper-middle class, upper class, or students in terms of SES, implying that their workplaces or schools were likely located in cities. These urban dwellers could easily recognize noise because of the obvious difference between high-amplitude sound events (LA10) and background sound (LA90). Thus, the vibrancy of urban soundscapes and the diversity of urban landscapes triggered individuals' sense of noise and generated variation in acoustic comfort.
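LA10 and LA90 are the A-weighted levels exceeded 10% and 90% of the time, i.e. high percentiles and low percentiles of the sampled level distribution; the gap between them indexes how strongly foreground events stand out from the background. A minimal sketch on hypothetical 1-second dBA samples (nearest-rank percentile; real exceedance levels are computed from much longer, calibrated measurements):

```python
import math

def exceedance_level(levels_dba, n):
    """LAn: the level exceeded n% of the time, approximated as
    the (100 - n)th nearest-rank percentile of the samples."""
    ordered = sorted(levels_dba)
    rank = math.ceil((100 - n) * len(ordered) / 100)
    return ordered[max(rank - 1, 0)]

# Hypothetical samples: a quiet background with two loud events.
samples = [45, 46, 44, 47, 45, 72, 46, 44, 70, 45]
la10 = exceedance_level(samples, 10)   # foreground/event level
la90 = exceedance_level(samples, 90)   # background level
print(la10, la90, la10 - la90)  # a large gap marks salient foreground noise
```

A small LA10 − LA90 gap characterizes steady soundscapes, while a large gap marks the intermittent, event-driven noise that urban dwellers readily notice.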

In addition to the spatial characteristics of acoustic comfort, the temporal dimension of sound exposure affected the perception of environmental sounds. Based on the sound measurements, noise levels during the daytime and evening were higher than those during the nighttime, and individuals had lower acoustic comfort during the daytime and evening than during the nighttime. This result indicates that individuals tend to be mentally affected by exposure to loud noise, and it is consistent with literature analyzing the importance of time-related factors of sound exposure on mood [20,38]. However, this study further highlighted that individuals had higher acoustic comfort on weekends than on weekdays, which has not been discussed in the literature. On weekdays, human activities prevailed during the daytime and evening, and human-made sounds, such as those from vehicles and machines, were generated during these periods [38]. These sound sources were considered unwanted noise and made individuals feel acoustically uncomfortable. According to our results, the noise level on weekends was slightly, but not significantly, higher than that on weekdays; nevertheless, individuals felt pleased with the acoustic environment. This could be attributed to the pleasant or eventful environments in which individuals stayed [39,40]. When individuals stayed indoors, including in indoor residences and workplaces/campuses, they enjoyed music or silence, and these types of sounds raised their perception of acoustic comfort. When individuals stayed in an outdoor environment, they either used transport vehicles for transit, reached their destination to enjoy natural sounds, or experienced a quiet environment. Under these circumstances, sound pressure was not the only factor affecting sound perception. Instead, the types of sound sources forming the acoustic environment became the key drivers of how individuals felt, on weekends more than on weekdays.
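The day/evening/night grouping used throughout the analysis follows the Lden convention in Tables 4, A1, and A2 (day 07:00–19:00, evening 19:00–23:00, night 23:00–07:00, wrapping past midnight). A minimal sketch of that classification:

```python
from datetime import time

def lden_period(t: time) -> str:
    """Classify a clock time into the Lden day/evening/night bands."""
    if time(7, 0) <= t < time(19, 0):
        return "day"
    if time(19, 0) <= t < time(23, 0):
        return "evening"
    return "night"  # 23:00-07:00 wraps past midnight

print(lden_period(time(8, 30)))   # day
print(lden_period(time(22, 15)))  # evening
print(lden_period(time(2, 0)))    # night
```

Each recording's timestamp would be bucketed this way before entering the model, so "night" serves as the reference band for both outcome variables.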

Because noise affects physical and mental health [41], it is essential to examine how the physiological and psychological conditions of humans are associated with the awareness of positive health-related effects [39]. From a physiological perspective, older adults were aware of the health impacts of sound exposure because they perceived annoyance, which affected their work performance in unpleasant acoustic environments [42]. From a psychological perspective, participants in the high noise sensitivity group tended to be highly reactive to sounds, including their loudness and frequency. They were generally less pleased with the acoustic environment and thus had a less positive perception of health impacts, as they suffered higher stress related to their SES and occupations. These features could lower the tolerance to sound and readily generate negative noise perception. By contrast, extraverted individuals tended to experience stronger positive health impacts from exposure to sound. Their focus was relatively broad; therefore, they took delight in interacting with their environment. This characteristic increases their tolerance to sound and helps them develop a positive sound perception. As a result, although the measurement of acoustic comfort in this study was not identical to that in conventional research, the relationship between human characteristics and the assessment of noise perception was consistent with that in the systematic review by Aletta and colleagues [39].

The survey platform used in this study is the LINE application, a popular social media platform in many countries. Our interface is currently available in Chinese (Appendix 1). Researchers in other countries who wish to use this LINE chatbot through the LINE API SDK (https://developers.line.biz/en/docs/downloads/) will need to change the language and deploy sound recording and sound recognition algorithms on their servers. Those interested in pursuing international collaborations via the proposed method can directly contact the corresponding author.

5 Limitations

First, as in other online surveys, the sample cannot be considered representative of the entire population. Females, students, and younger participants accounted for a large proportion of the sample; therefore, the generalizability of the results is limited. Second, the extraction and identification of sound features are highly dependent on the quality of the sound recording. In our platform, there was high heterogeneity in the quality of the participants' mobile phone microphones and in the distances from the sound source. Sound measurements acquired from different mobile phones and platforms (Android and iOS) presented a major challenge, and acquiring measurements without a standard calibration procedure was a limitation of this collection approach. An in-house controlled experiment conducted by the research team, which compared the mobile phones against a Class-1 sound meter, revealed that most of the tested phones underestimated dBA levels. Although the current approach might therefore underestimate dBA, it can still provide an objective reference in addition to the subjective ratings. In addition, sound recognition by YAMNet is affected by the volume of the waveform. We tracked the accuracy of the automatic classification: if the result reported by YAMNet was incorrect, the participants self-reported the correct category. For sound identification, the reported accuracy was 83.98% (8,526/10,152) in our study. Third, a few seconds of a recording cannot represent daily exposure. We therefore attempted to collect recordings at three different time periods per day for at least 15 days, a sampling strategy that approximates random repeated sampling of acoustic exposure in daily life.
To strike a balance among survey feasibility, privacy concerns, and participants' willingness, the recordings should not be overly long. Finally, our research approach was based on voluntary reporting. Because mobile phones are not used while sleeping, subjective perceptions during sleep may be underestimated. However, we found that participants reported lower acoustic comfort when exposed to noise at night before bedtime: noise from neighbors, entertainment, and traffic in residential areas is amplified as the background soundscape becomes quieter at night.
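The phone-versus-Class-1-meter comparison described above amounts to estimating a per-device correction. A minimal sketch, assuming a single additive offset per phone model (the paired readings below are hypothetical, not data from the in-house experiment):

```python
def calibration_offset(phone_dba, meter_dba):
    """Mean difference between reference-meter and phone readings,
    taken over paired measurements of the same sounds."""
    diffs = [m - p for p, m in zip(phone_dba, meter_dba)]
    return sum(diffs) / len(diffs)

# Hypothetical paired readings: the phone underestimates dBA.
phone = [58.1, 62.4, 70.0, 54.3]
meter = [61.0, 65.2, 72.9, 57.5]
offset = calibration_offset(phone, meter)
corrected = [p + offset for p in phone]
print(round(offset, 2))  # positive offset = phone reads low
```

A fuller calibration would fit a slope as well as an offset and would vary by frequency band, but even a constant per-model offset like this would narrow the underestimation noted above.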

6 Conclusion

Sound perception is a complex and subjective result of multilevel factors, including demography, personality, sound features, and the spatiotemporal characteristics of the surrounding environment. In addition to traditional approaches for monitoring long-term sound exposure, the LINE chatbot captures short-term exposure to soundscapes and immediately records individuals' soundscape perceptions. Although considerable uncertainty exists in the sound quality of recordings taken with different mobile phones, the subjective experience accompanied by instant short recordings can still provide an immediate, interactive observation of people and the soundscape. This novel methodological framework has also yielded rich findings. For female participants, lower acoustic comfort was associated with sound exposure at night, in transportation and built-up land uses, and with the sounds of machines, vehicles, and airplanes. Low-frequency sound exposure and the sounds of nature, silence, music, and human activity were associated with higher acoustic comfort, and these exposures were also associated with positive perceptions of health in rural areas and on weekends. The valence of acoustic comfort and the positive health effects of the sounds depended heavily on the spatiotemporal features of the soundscape and the types of sound. Personal characteristics also affected sound perception: extraverted individuals had a higher tolerance to sound, whereas those with high noise sensitivity had a lower tolerance. Understanding sound perception is an important step toward maintaining a sustainable urban acoustic environment.



Acknowledgments

The authors are grateful to all participants in the study and to Dr. Shu-Hui Hsieh from Academia Sinica, Taiwan, for providing valuable suggestions on designing the questionnaires.

  1. Funding information: This work was supported by Academia Sinica, Taiwan [grant number AS-SS-109-02].

  2. Author contributions: Ta-Chien Chan: conceptualization, supervision, funding acquisition, writing the original draft, writing the review, and editing; Bing-Sheng Wu: conceptualization, writing – original draft; Yu-Ting Lee: investigation, formal analysis, and visualization; Ping-Hsien Lee: investigation and formal analysis; Ren-Hao Deng: formal analysis and visualization.

  3. Conflict of interest: The authors declare no conflict of interest. The funders had no role in the design of the study; collection, analyses, or interpretation of data; writing of the manuscript; or decision to publish the results.

  4. Data availability statement: Data will be made available on request.

  5. Ethics approval and informed consent: This study was approved by the Institutional Review Board of Humanities & Social Science Research, Academia Sinica (AS-IRB-HS-22009). Individual identification data were not collected during the study period. This study was performed in accordance with the Declaration of Helsinki and followed an approved protocol. Informed consent was obtained from all subjects via our chatbot (named "Soundmap") in the LINE app, which presented detailed consent information to the participants. The interactive survey was administered only after participation was confirmed.

Appendix 1 Screenshots of Soundmap in Task 1

  1. Welcome message: Welcome. After recording the sound from your environment (around 10 seconds), Soundmap will help you analyze the types of sound.

  2. Press the microphone button to begin recording the sound (around 10 s).

  3. Return recognition results in decibels (dBA) and AI sound type: Your recording is around XXX dBA. (caution: Estimated dBA values vary from phone to phone. Results are for reference only.)

  4. Rate the subjective feeling of the sound: How comfortable are you with the recorded sounds? (1: very uncomfortable; 5: very comfortable).

  5. Rate the health impact of the sound: If you were exposed to this sound every day, would you agree that it would have a positive impact on your mental and physical health? (1: strongly disagree; 5: strongly agree).

  6. Indoors/Outdoors/Transportation mode: Where were the sounds recorded?

  7. Types of location: If the answer was indoors or outdoors, there were 15 options to select from: industrial areas, homes, campuses, quiet workplaces, noisy workplaces, restaurants, roadsides, quiet leisure places, lively entertainment venues, hospitals or clinics, stations, markets, near the railway, near the airport, and other places inhabited or visited by the participants. If the answer was transportation, there were seven options to select from: cars, buses, motorcycles, high-speed rail, trains, mass rapid transit, and bicycles.

  8. Assisted Global Positioning System (AGPS) location by LINE application: Sound was recorded with AGPS to facilitate sharing of location information. If the location was incorrectly predicted by automatic geolocation, the correct address or point of interest could be manually input in the search window, or the correct place on the map could be clicked.
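The decibel estimate returned in step 3 necessarily maps recorded sample amplitudes to a level. A minimal, uncalibrated sketch of that mapping (omitting the A-weighting filter, which requires frequency-domain processing, and expressing the result relative to full scale rather than as absolute sound pressure):

```python
import math

def level_db(samples, ref=1.0):
    """Uncalibrated level in dB from raw waveform samples:
    20 * log10(RMS / reference amplitude)."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms / ref)

# A full-scale sine tone sits about 3 dB below 0 dB full scale,
# because the RMS of a sine is its peak divided by sqrt(2).
tone = [math.sin(2 * math.pi * 440 * n / 8000) for n in range(8000)]
print(round(level_db(tone), 1))  # ≈ -3.0
```

Mapping such a relative level onto absolute dBA is exactly where the per-phone variation warned about in step 3 enters, which is why the chatbot labels its estimate as "for reference only."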

Appendix 2 Screenshots of Soundmap in Task 2

  1. Start filling in the Task 2 questionnaires: the questionnaire sets cover demographics, noise exposure in the household, quality of life and healthy daily measures, sleep quality and mental health, Big-5 personality, and noise sensitivity and fatigue.

  2. The five questionnaire sets are sent out sequentially every five days. Participants are required to complete the questionnaires in order, and questions cannot be skipped.

  3. The questionnaire (Google form) is embedded in the LINE chatbot.

  4. The chatbot issues convenience store e-coupons directly to qualified participants who pass our quality control process. Participants can use the e-coupon to purchase items at nearby convenience stores.

Figure A1: Scree plot and principal component analysis (PCA) loading plots: (a) scree plot to choose the optimal number of components; (b) PCA loading plots.

Figure A2: Spatial distribution of the perception of positive health effects from sound exposure in the Taipei metropolitan area: (a) strongly disagree and disagree, (b) undecided, and (c) strongly agree and agree.

Table A1

Estimated effects on acoustic comfort, by type of sound

Variables OR (95% CI) P value
Sex
Male ref
Female 0.59 (0.41, 0.85) 0.0051
Time groups based on Lden
Night (23:00–07:00) ref
Day (07:00–19:00) 0.74 (0.64, 0.86) <0.001
Evening (19:00–23:00) 0.83 (0.71, 0.97) 0.0213
Weekday/weekend
Weekday ref
Weekend 1.17 (1.07, 1.28) <0.001
Sound features by principal component analysis
Sound pressure-related factors (PC 1) 0.93 (0.91, 0.95) <0.001
Middle frequency (PC 2) 0.41 (0.26, 0.65) <0.001
Low sound pressure (PC 3) 1.22 (1.07, 1.39) 0.0033
Low frequency (PC 4) 1.58 (1.17, 2.13) 0.0026
Land use
Agricultural ref
Forest 1.21 (0.92, 1.59) 0.1662
Transportation 0.64 (0.51, 0.79) <0.001
Water conservancy and public facility 0.80 (0.66, 0.98) 0.0335
Built-up area 0.77 (0.65, 0.91) 0.0026
Recreational facility 0.95 (0.71, 1.27) 0.716
Sound category
Human activity 1.16 (1.05, 1.27) 0.003
Nature 2.17 (1.96, 2.41) <0.001
Silence 1.22 (1.06, 1.40) 0.0057
Music 2.19 (1.91, 2.51) <0.001
Machine 0.58 (0.51, 0.65) <0.001
Vehicle 0.37 (0.33, 0.42) <0.001
Airplane 0.25 (0.13, 0.47) <0.001
Table A2

Estimated effects on agreement with positive health impacts from sound exposure, by type of sound

Variables OR (95% CI) P value
Age 1.03 (1.01, 1.04) 0.0012
Sex
Male ref
Female 0.63 (0.44, 0.90) 0.0104
Time groups based on Lden
Night (23:00–07:00) ref
Day (07:00–19:00) 0.74 (0.64, 0.85) <0.001
Evening (19:00–23:00) 0.81 (0.69, 0.94) 0.0052
Extraversion 1.31 (1.11, 1.55) 0.0013
Noise sensitivity scale
Low sensitivity group ref
High sensitivity group 0.53 (0.34, 0.82) 0.0043
Weekday/weekend
Weekday ref
Weekend 1.11 (1.02, 1.21) 0.0182
Sound features by principal component analysis
Sound pressure-related factors (PC 1) 0.94 (0.92, 0.96) <0.001
Middle frequency (PC 2) 0.58 (0.40, 0.85) 0.0049
Low frequency (PC 4) 1.37 (1.05, 1.80) 0.0215
Land use
Agricultural ref
Forest 1.18 (0.92, 1.52) 0.1984
Transportation 0.62 (0.50, 0.76) <0.001
Water conservancy and public facility 0.94 (0.77, 1.13) 0.4941
Built-up area 0.84 (0.72, 0.99) 0.039
Recreational facility 0.94 (0.72, 1.25) 0.6875
Sound category
Nature 1.86 (1.69, 2.04) <0.001
Silence 1.26 (1.11, 1.44) <0.001
Music 1.86 (1.64, 2.11) <0.001
Machine 0.62 (0.56, 0.69) <0.001
Vehicle 0.46 (0.41, 0.51) <0.001
Airplane 0.45 (0.25, 0.82) 0.0095

References

[1] Guski R, Schreckenberg D, Schuemer R. WHO environmental noise guidelines for the European region: A systematic review on environmental noise and annoyance. Int J Env Res Public Health. 2017;14(12):1539. doi:10.3390/ijerph14121539.

[2] Tang JH, Lin BC, Hwang JS, Chen LJ, Wu BS, Jian HL, et al. Dynamic modeling for noise mapping in urban areas. Env Impact Asses. 2022;97:106864. doi:10.1016/j.eiar.2022.106864.

[3] Lan YL, Roberts H, Kwan MP, Helbich M. Transportation noise exposure and anxiety: A systematic review and meta-analysis. Env Res. 2020;191:110118. doi:10.1016/j.envres.2020.110118.

[4] King EA. Here, there, and everywhere: How the SDGs must include noise pollution in their development challenges. Environment. 2022;64(3):17–32. doi:10.1080/00139157.2022.2046456.

[5] Dzhambov AM, Markevych I, Tilov B, Arabadzhiev Z, Stoyanov D, Gatseva P, et al. Pathways linking residential noise and air pollution to mental ill-health in young adults. Env Res. 2018;166:458–65. doi:10.1016/j.envres.2018.06.031.

[6] Jensen HAR, Rasmussen B, Ekholm O. Neighbour noise annoyance is associated with various mental and physical health symptoms: results from a nationwide study among individuals living in multi-storey housing. BMC Public Health. 2019;19(1):1508. doi:10.1186/s12889-019-7893-8.

[7] Munzel T, Sorensen M, Daiber A. Transportation noise pollution and cardiovascular disease. Nat Rev Cardiol. 2021;18(9):619–36. doi:10.1038/s41569-021-00532-5.

[8] Huang T, Chan TC, Huang YJ, Pan WC. The association between noise exposure and metabolic syndrome: A longitudinal cohort study in Taiwan. Int J Env Res Public Health. 2020;17(12):4236. doi:10.3390/ijerph17124236.

[9] Beutel ME, Brahler E, Ernst M, Klein E, Reiner I, Wiltink J, et al. Noise annoyance predicts symptoms of depression, anxiety and sleep disturbance 5 years later. Findings from the Gutenberg Health Study. Eur J Public Health. 2020;30(3):516–21. doi:10.1093/eurpub/ckaa015.

[10] Mucci N, Traversini V, Lorini C, De Sio S, Galea RP, Bonaccorsi G, et al. Urban noise and psychological distress: A systematic review. Int J Env Res Public Health. 2020;17(18):6621. doi:10.3390/ijerph17186621.

[11] Jafari Z, Kolb BE, Mohajerani MH. Noise exposure accelerates the risk of cognitive impairment and Alzheimer's disease: Adulthood, gestational, and prenatal mechanistic evidence from animal studies. Neurosci Biobehav R. 2020;117:110–28. doi:10.1016/j.neubiorev.2019.04.001.

[12] Zhang YF, Zhu M, Sun YT, Tang BL, Zhang GM, An PY, et al. Environmental noise degrades hippocampus-related learning and memory. Proc Natl Acad Sci USA. 2021;118(1):e2017841117. doi:10.1073/pnas.2017841117.

[13] Abbasi M, Tokhi MO, Falahati M, Yazdanirad S, Ghaljahi M, Etemadinezhad S, et al. Effect of personality traits on sensitivity, annoyance and loudness perception of low- and high-frequency noise. J Low Freq Noise Vibr Act Control. 2021;40(2):643–55. doi:10.1177/1461348420945818.

[14] Moghadam SMK, Alimohammadi I, Taheri E, Rahimi J, Bostanpira F, Rahmani N, et al. Modeling effect of five big personality traits on noise sensitivity and annoyance. Appl Acoust. 2021;172:107655. doi:10.1016/j.apacoust.2020.107655.

[15] Cerletti P, Eze IC, Schaffner E, Foraster M, Viennau D, Cajochen C, et al. The independent association of source-specific transportation noise exposure, noise annoyance and noise sensitivity with health-related quality of life. Env Int. 2020;143:105960. doi:10.1016/j.envint.2020.105960.

[16] Zhang X, Ba MH, Kang J, Meng Q. Effect of soundscape dimensions on acoustic comfort in urban open public spaces. Appl Acoust. 2018;133:73–81. doi:10.1016/j.apacoust.2017.11.024.

[17] Brink M, Schaffer B, Vienneau D, Foraster M, Pieren R, Eze IC, et al. A survey on exposure-response relationships for road, rail, and aircraft noise annoyance: Differences between continuous and intermittent noise. Env Int. 2019;125:277–90. doi:10.1016/j.envint.2019.01.043.

[18] Paiva KM, Cardoso MRA, Zannin PHT. Exposure to road traffic noise: Annoyance, perception and associated factors among Brazil's adult population. Sci Total Env. 2019;650:978–86. doi:10.1016/j.scitotenv.2018.09.041.

[19] Lefevre M, Chaumond A, Champelovier P, Allemand LG, Lambert J, Laumon B, et al. Understanding the relationship between air traffic noise exposure and annoyance in populations living near airports in France. Env Int. 2020;144:106058. doi:10.1016/j.envint.2020.106058.

[20] Ma J, Li CJ, Kwan MP, Kou LR, Chai YW. Assessing personal noise exposure and its relationship with mental health in Beijing based on individuals' space-time behavior. Env Int. 2020;139:105737. doi:10.1016/j.envint.2020.105737.

[21] D'Hondt E, Stevens M, Jacobs A. Participatory noise mapping works! An evaluation of participatory sensing as an alternative to standard techniques for environmental monitoring. Pervasive Mob Comput. 2013;9(5):681–94. doi:10.1016/j.pmcj.2012.09.002.

[22] Can A, Audubert P, Aumond P, Geisler E, Guiu C, Lorino T, et al. Framework for urban sound assessment at the city scale based on citizen action, with the smartphone application NoiseCapture as a lever for participation. Noise Mapp. 2023;10(1):20220166. doi:10.1515/noise-2022-0166.

[23] Murphy E, King EA. Smartphone-based noise mapping: Integrating sound level meter app data into the strategic noise mapping process. Sci Total Env. 2016;562:852–9. doi:10.1016/j.scitotenv.2016.04.076.

[24] International Organization for Standardization. ISO 12913-1:2014 Acoustics – Soundscape – Part 1: Definition and conceptual framework. Geneva, Switzerland: ISO; 2014.

[25] International Organization for Standardization. ISO/TS 12913-2:2018 Acoustics – Soundscape – Part 2: Data collection and reporting requirements. Geneva, Switzerland: ISO; 2018.

[26] Gosling SD, Rentfrow PJ, Swann WB. A very brief measure of the Big-Five personality domains. J Res Pers. 2003;37(6):504–28. doi:10.1016/S0092-6566(03)00046-1.

[27] Aletta F, Van Renterghem T, Botteldooren D. Influence of personal factors on sound perception and overall experience in urban green areas. A case study of a cycling path highly exposed to road traffic noise. Int J Env Res Public Health. 2018;15(6):1118. doi:10.3390/ijerph15061118.

[28] Van Rossum G, Drake FL. Python 3 Reference Manual. Scotts Valley (CA), USA: CreateSpace; 2009.

[29] Grosjean P. SciViews-R. Mons, Belgium: UMONS; 2022.

[30] R Core Team. R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing; 2021.

[31] Lin SC. Sociology of education. Taipei, Taiwan: Chuliu; 2005. p. 394.

[32] QGIS.org. QGIS Geographic Information System. QGIS Association; 2023.

[33] Earth Resources Observation and Science (EROS) Center. Shuttle Radar Topography Mission (SRTM) 1 Arc-Second Global. U.S. Geological Survey; 2017.

[34] Christensen RHB. ordinal – Regression models for ordinal data. R package version 2022.11-16; 2022.

[35] McCullagh P. Regression models for ordinal data. J R Stat Soc Ser B (Methodol). 1980;42(2):109–27. doi:10.1111/j.2517-6161.1980.tb01109.x.

[36] Gozalo GR, Morillas JMB, Gonzalez DM. Perceptions and use of urban green spaces on the basis of size. Urban For Urban Gree. 2019;46:126470. doi:10.1016/j.ufug.2019.126470.

[37] Nwankwo M, Meng Q, Yang D, Liu FF. Effects of forest on birdsong and human acoustic perception in urban parks: A case study in Nigeria. Forests. 2022;13(7):994. doi:10.3390/f13070994.

[38] Zhang L, Zhou SH, Kwan MP. The temporality of geographic contexts: Individual environmental exposure has time-related effects on mood. Health Place. 2023;79:102953. doi:10.1016/j.healthplace.2022.102953.

[39] Aletta F, Oberman T, Kang J. Associations between positive health-related effects and soundscapes perceptual constructs: A systematic review. Int J Env Res Public Health. 2018;15(11):2392. doi:10.3390/ijerph15112392.

[40] Axelsson O, Nilsson ME, Berglund B. A principal components model of soundscape perception. J Acoust Soc Am. 2010;128(5):2836–46. doi:10.1121/1.3493436.

[41] Themann CL, Masterson EA. Occupational noise exposure: A review of its effects, epidemiology, and impact with recommendations for reducing its burden. J Acoust Soc Am. 2019;146(5):3879–905. doi:10.1121/1.5134465.

[42] Di Blasio S, Shtrepi L, Puglisi GE, Astolfi A. A cross-sectional survey on the impact of irrelevant speech noise on annoyance, mental health and well-being, performance and occupants' behavior in shared and open-plan offices. Int J Env Res Public Health. 2019;16(2):280. doi:10.3390/ijerph16020280.

Received: 2023-08-21
Revised: 2023-10-12
Accepted: 2023-10-14
Published Online: 2023-11-10

© 2023 the author(s), published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
