| lang (stringclasses 1) | s2FieldsOfStudy (listlengths 0–8) | url (stringlengths 78–78) | fieldsOfStudy (listlengths 0–5) | lang_conf (float64 0.8–0.98) | title (stringlengths 4–300) | paperId (stringlengths 40–40) | venue (stringlengths 0–300) | authors (listlengths 0–105) | publicationVenue (dict) | abstract (stringlengths 1–10k ⌀) | text (stringlengths 1.94k–184k) | openAccessPdf (dict) | year (int64 1.98k–2.03k ⌀) | publicationTypes (listlengths 0–4) | isOpenAccess (bool, 2 classes) | publicationDate (timestamp[us], 1978-02-01 to 2025-04-23 ⌀) | references (listlengths 0–958) | total_tokens (int64 509–40k) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Business",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/00c5185c5a5c68761ff4b22fb523d734af9c5c16
|
[
"Computer Science",
"Business"
] | 0.81413
|
Down with the #Dogefather: Evidence of a Cryptocurrency Responding in Real Time to a Crypto-Tastemaker
|
00c5185c5a5c68761ff4b22fb523d734af9c5c16
|
Journal of Theoretical and Applied Electronic Commerce Research
|
[
{
"authorId": "1435375867",
"name": "Michael Cary"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"J Theor Appl Electron Commer Res"
],
"alternate_urls": [
"http://www.jtaer.com/"
],
"id": "890beb40-ba59-4681-9bb6-88ed97b7decb",
"issn": "0718-1876",
"name": "Journal of Theoretical and Applied Electronic Commerce Research",
"type": "journal",
"url": "http://www.scielo.cl/scielo.php?lng=en&pid=0718-1876&script=sci_serial"
}
|
Recent research in cryptocurrencies has considered the effects of the behavior of individuals on the price of cryptocurrencies through actions such as social media usage. However, some celebrities have gone as far as affixing their celebrity to a specific cryptocurrency, becoming a crypto-tastemaker. One such example occurred in April 2021 when Elon Musk claimed via Twitter that “SpaceX is going to put a literal Dogecoin on the literal moon”. He later called himself the “Dogefather” as he announced that he would be hosting Saturday Night Live (SNL) on 8 May 2021. By performing sentiment analysis on relevant tweets during the time he was hosting SNL, evidence is found that negative perceptions of Musk’s performance led to a decline in the price of Dogecoin, which dropped 23.4% during the time Musk was on air. This shows that cryptocurrencies are affected in real time by the behaviors of crypto-tastemakers.
|
_Article_
# Down with the #Dogefather: Evidence of a Cryptocurrency Responding in Real Time to a Crypto-Tastemaker
**Michael Cary**
Division of Resource Economics and Management, West Virginia University, Morgantown, WV 26506, USA;
macary@mix.wvu.edu
**Abstract:** Recent research in cryptocurrencies has considered the effects of the behavior of individuals on the price of cryptocurrencies through actions such as social media usage. However, some celebrities have gone as far as affixing their celebrity to a specific cryptocurrency, becoming a crypto-tastemaker. One such example occurred in April 2021 when Elon Musk claimed via Twitter that “SpaceX is going to put a literal Dogecoin on the literal moon”. He later called himself the “Dogefather” as he announced that he would be hosting Saturday Night Live (SNL) on 8 May 2021. By performing sentiment analysis on relevant tweets during the time he was hosting SNL, evidence is found that negative perceptions of Musk’s performance led to a decline in the price of Dogecoin, which dropped 23.4% during the time Musk was on air. This shows that cryptocurrencies are affected in real time by the behaviors of crypto-tastemakers.
**Keywords: cryptocurrency; crypto-tastemaker; Dogecoin; price dynamics; sentiment analysis**
[Check for updates](https://www.mdpi.com/article/10.3390/jtaer16060123?type=check_update&version=1)
**Citation: Cary, M. Down with the #Dogefather: Evidence of a Cryptocurrency Responding in Real Time to a Crypto-Tastemaker. J. Theor. Appl. Electron. Commer. Res. 2021, 16, 2230–2240. [https://doi.org/10.3390/jtaer16060123](https://doi.org/10.3390/jtaer16060123)**
Academic Editor: Arcangelo Castiglione
Received: 13 August 2021
Accepted: 2 September 2021
Published: 3 September 2021
**Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.**
**Copyright: © 2021 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons [Attribution (CC BY) license](https://creativecommons.org/licenses/by/4.0/).**
**JEL Classification: G41; G10**
**1. Introduction**
The number of cryptocurrencies has grown rapidly over the past decade. With such
diversity, choosing a specific cryptocurrency to use can be a daunting task, especially for
more casual cryptocurrency users. While some users are concerned with price dynamics,
others are concerned with the popularity of the cryptocurrency [1]. In fact, herding behavior
in cryptocurrency markets has become a well documented phenomenon in the literature [2],
and even cryptocurrencies such as Bitcoin are traded at least in part due to emotional
cues [3].
Herding behavior occurs in cryptocurrency markets for many different reasons and is
commonly observed during periods where higher levels of risk aversion are exhibited [4].
Herding behavior is particularly strong in smaller cryptocurrencies [5]. Such behavior is
a market inefficiency and can lead to market destabilization, particularly in the case of
smaller cryptocurrencies [6]. Combined with the effects of the ongoing COVID-19 global
pandemic on cryptocurrency markets, the potential for market destabilization among
smaller cryptocurrencies is only exacerbated [7]. It is important to note, however, that the
choice of empirical framework can potentially impact whether or not evidence of herding
is found [8].
On the other side of this phenomenon are the cryptocurrency tastemakers (crypto-tastemakers) who attach their notoriety to a particular cryptocurrency, advocating for its
growth. There is evidence that social influences can affect cryptocurrencies [9]. However, current research on the impact of crypto-tastemakers is extremely limited, with no
papers looking at the real time effects of the actions of a major celebrity on the price of
a cryptocurrency to which the celebrity has affixed themselves as a crypto-tastemaker.
The literature that does exist considers the impact of social media on cryptocurrencies,
which, while extremely valuable, analyzes the impact of pre-planned, low risk activities
such as sending a single Tweet, e.g., the impact of a president’s tweets on Bitcoin [10],
predicting the price of a cryptocurrency using social media data [11,12], and predicting
bubbles in cryptocurrency markets with social media data [13]. This is in contrast to what is
studied in this paper, an extended period of heavily scrutinized, riskier actions performed
live, for a public audience.
One recent example of a celebrity becoming a crypto-tastemaker is Elon Musk, who
affixed his celebrity status to Dogecoin. On 1 April 2021, Elon Musk claimed via Twitter
that “SpaceX is going to put a literal Dogecoin on the literal moon”. Shortly after making
this hyperbolized claim, it was announced that Musk would be hosting the 8 May 2021
episode of Saturday Night Live (SNL). Musk confirmed this in a personal announcement
on his Twitter account on 28 April 2021 in which he called himself the “Dogefather”. All of
these specific examples of Musk’s Twitter activity are part of a much larger corpus of
crypto-tastemaking, dating back to January 2021 when Musk started giving Dogecoin
attention on Twitter during the GameStop short squeeze [14]. Musk also went on to call
Dogecoin mining “fun” in order to increase the popularity of the cryptocurrency [14].
Elon Musk makes for a great example of a crypto-tastemaker since he has been a public
figure for decades, largely due to his business ventures and immense wealth. Moreover,
during this time he has become a rather divisive figure. He has an ardent core of
followers, currently numbering 59.5 million on Twitter, but also many
detractors: a common nickname for Musk, which appears hundreds of times in our
data set, is “Muskrat”. This level of notoriety and divisiveness, along with his longstanding
interest in cryptocurrencies, means that once Musk coupled his name to Dogecoin, he was
indeed a crypto-tastemaker.
In this paper we test for evidence of the real time impact of the highly publicized
actions of a crypto-tastemaker by performing sentiment analysis on real time data from
Twitter during the time that Musk was hosting SNL and estimating its effect on the price of
Dogecoin. Using standard VAR techniques, we document for the first time in the literature
a definite instance of the price of a cryptocurrency responding in real time to the actions of a
crypto-tastemaker. Specifically, we find that Elon Musk’s performance on SNL significantly
and negatively affected the price of Dogecoin.
**2. Dogecoin**
Dogecoin is a cryptocurrency alternative to Bitcoin, or an altcoin, that was created in
2013 [15]. Originally created as a joke currency with a randomized reward for mining [14],
for most of its history Dogecoin was a niche cryptocurrency that had some degree of
cultural relevance due to the peculiarity of its name, but was not a target of significant
investment [15]. Prior to 2021, the price of Dogecoin had never been above $0.02 [14]. The
technical development of Dogecoin was also underwhelming, with the most recent consistent activity on its main branch on GitHub as of the writing of Young [15] occurring in 2015
(the rise in popularity experienced by Dogecoin in 2021 has led to renewed development,
[per the commit history found at https://github.com/dogecoin accessed on 13 August](https://github.com/dogecoin)
2021). However, Dogecoin users have performed some noteworthy, attention grabbing
events including sponsoring an American stock car race in 2013 and the Jamaican bobsled
team in the 2014 Winter Olympics [15].
Functionally, Dogecoin is based on the Scrypt algorithm and is a derivative of Litecoin,
another cryptocurrency derived from Bitcoin [15]. However, unlike Bitcoin and most
other cryptocurrencies, there is no limit to the amount of Dogecoin that can theoretically
exist [15]. Consequently, mining Dogecoin remains a quicker and easier process than
mining other cryptocurrencies.
From a research perspective, Dogecoin remains essentially unstudied in the literature.
This is likely due to its effective irrelevance as a potential investment prior to 2021. In fact,
in the case of this paper, Dogecoin is studied not for anything intrinsic to Dogecoin itself,
but rather for the fact that a crypto-tastemaker affixed themselves to Dogecoin.
**3. Data and Methodology**
The ultimate goal of this paper is to test whether the price of Dogecoin responded in
real time to the public perception of Musk’s performance on SNL using a standard vector
autoregression (VAR) approach. To do this, we need data on the price of Dogecoin as well
as a measure of the public perception of Musk’s performance. While the former data set
is easily obtained, in this case from CoinDesk.com, the latter data requires some effort to
obtain. Twitter is an excellent source of public opinions and tweets are widely used in the
quantitative social sciences, e.g., [16–19], thus we will use data collected from Twitter as
the basis for measuring public opinion of Musk’s performance.
To create the final data on the public’s perception of Musk’s performance, two primary
steps were performed. First, relevant tweets from the time period of Musk’s performance
were collected from Twitter. A window of one hour before and after the event was included in our sample to account for delayed responses, since there was no a priori knowledge
of the lag time from the opinion motivating a trade to the trade itself. Tweets containing any of the
following key words as text, hashtags, and/or cashtags were collected: {SNL, SNLmay8,
Dogefather, tothemoon, Elon, Musk, Dogecoinrise, Doge, Dogecoin}. Once these tweets
were collected, sentiment analysis was performed on the tweets.
Sentiment analysis is a form of textual analysis which assigns quantitative values to
subjective statements [20]. Positive values are assigned to tweets with a positive opinion,
and negative values are assigned to tweets with a negative opinion. In our case, portions of
Musk’s performance that were well received by the public obtain positive scores from sentiment analysis, while poorly received portions receive negative
scores. To obtain these scores, each individual tweet was assigned
its own, unique score using the nltk module in Python.
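The paper does not say which nltk analyzer was used; a minimal sketch of the filtering and scoring steps, assuming nltk's bundled VADER analyzer and a small made-up list of tweet texts, might look like this:

```python
# A minimal sketch, assuming nltk's VADER analyzer; the paper only says
# "the nltk module" was used, so the analyzer choice is an assumption.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon")  # one-time download of the VADER lexicon
sia = SentimentIntensityAnalyzer()

KEYWORDS = {"snl", "snlmay8", "dogefather", "tothemoon", "elon",
            "musk", "dogecoinrise", "doge", "dogecoin"}

def is_relevant(text: str) -> bool:
    """Keep tweets whose text, hashtags, or cashtags mention a keyword."""
    tokens = text.lower().replace("#", " ").replace("$", " ").split()
    return any(tok in KEYWORDS for tok in tokens)

def score(text: str) -> float:
    """Signed sentiment score in [-1, 1]; positive for positive opinions."""
    return sia.polarity_scores(text)["compound"]

tweets = ["Musk killing it on #SNL, $DOGE tothemoon!",
          "that dogefather sketch was painful"]       # illustrative inputs
scored = [(t, score(t)) for t in tweets if is_relevant(t)]
```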
Once every tweet had been assigned a score via sentiment analysis, two time series
measuring the overall public perception of the performance were created—one for positive
opinions and one for negative opinions—by aggregating the tweets during each minute of
the performance. The rationale for the two distinct time series is that positive and negative
opinions may have asymmetric effects on the price of Dogecoin. Asymmetric effects in
time series regressions have proven significant in many cases, e.g., [21–24]. In our case,
risk averse investors/users of Dogecoin may sell their holdings if they fear that a poor
performance by Musk is actively lowering the price of Dogecoin, but positive opinions of
Musk’s performance may have a more muted positive effect. Furthermore, by aggregating
all tweets during each minute of the event, we allow for a weighted time series where
larger magnitudes for the positive and negative sentiment analysis scores indicate a greater
degree of public consensus regarding the performance. The granularity of one minute
intervals was chosen because this matches the frequency of the price data for Dogecoin
obtained from CoinDesk.com. Summary statistics of the three time series are presented in
Table 1 and the three time series are plotted together in Figure 1.
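As a rough illustration of this aggregation step, assuming the scored tweets live in a pandas DataFrame with hypothetical `timestamp` and `score` columns (the paper's actual data layout is not given), the two minute-level series could be built as follows:

```python
# Hypothetical aggregation into the two minute-level series; the DataFrame
# layout and column names are assumptions, not the paper's code.
import pandas as pd

df = pd.DataFrame({
    "timestamp": pd.to_datetime(["2021-05-08 23:31:10",
                                 "2021-05-08 23:31:40",
                                 "2021-05-08 23:32:05"]),
    "score": [0.62, -0.41, -0.80],
})
df["minute"] = df["timestamp"].dt.floor("min")

# Sum positive and (absolute) negative scores within each minute, so larger
# magnitudes reflect greater public consensus during that minute.
pos = df.loc[df["score"] > 0].groupby("minute")["score"].sum()
neg = df.loc[df["score"] < 0].groupby("minute")["score"].sum().abs()

panel = pd.concat({"pos": pos, "neg": neg}, axis=1).fillna(0.0)
# The one-minute Dogecoin price series (e.g., from CoinDesk) would then be
# joined onto `panel` on the same minute index before estimation.
```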
**Table 1. Summary statistics of the three time series. The negative sentiment scores are in absolute**
value for convenience of use/interpretation.
| Time Series | Mean | SD | Min | Max |
|---|---|---|---|---|
| Price of Dogecoin in USD | 0.584 | 0.054 | 0.471 | 0.700 |
| Total Positive Sentiment | 29.67 | 37.58 | 4.79 | 188.35 |
| Total Negative Sentiment | 25.23 | 30.75 | 2.73 | 99.53 |
As can be seen in Figure 1, the general trends of positive and negative sentiment were
similar. Twitter activity pertaining to Musk’s performance spiked just as the episode began
to air, reaching a peak around 15 min into the episode. From there, a steady decline in both
positive and negative sentiment, driven by a decrease in the volume of tweets pertaining to
Musk’s performance, was observed. A small spike in both positive and negative sentiment
occurs shortly after the conclusion of the episode, likely driven by summary reviews of the
episode, but once the episode had finished airing, the volume of tweets steadily declined
to pre-episode levels. The sharp, early decline in the price of Dogecoin, a loss which was
never recouped, coincides with the outburst of opinions on Twitter pertaining to Musk’s
performance on SNL.
[Figure 1 appears here. Panel title: “A comparison of the price of Dogecoin and public perception of Musk’s performance”; plotted series: price of Dogecoin, positive sentiment, and negative sentiment; x-axis: time (minutes relative to start of episode).]
**Figure 1. The three time series. The price of Dogecoin is measured in USD on the left hand axis while**
the positive and negative sentiment scores are measured in aggregate values on the right hand axis.
The x axis represents time relative to the start of the episode, and the two vertical black lines denote
the start and end of the episode.
Finally, to run the VAR, we first-difference the data to transform the time series and
ensure that they are stationary. As can be seen in Figure 1, the original time series are clearly
non-stationary. Augmented Dickey-Fuller tests confirmed that the three first-differenced
time series are indeed stationary. Once the time series were first-differenced, the optimal
lag length for VAR was determined to be 15 periods (minutes). Once the optimal lag length
was determined, VAR was performed. The VAR model takes the standard specification in
vector notation found in Equation (1) where p = 15 is the optimal lag length.
$$Y_t = a + \sum_{k=1}^{p} \Phi_k Y_{t-k} + \epsilon_t \tag{1}$$
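A sketch of this estimation pipeline with statsmodels, which the paper reports using; the `panel` DataFrame with columns `price`, `pos`, and `neg` carries over from the earlier sketch and remains an assumption:

```python
# First-differencing, stationarity checks, and VAR estimation; `panel`
# holds the three aligned one-minute series (assumed column names).
from statsmodels.tsa.api import VAR
from statsmodels.tsa.stattools import adfuller

diffed = panel.diff().dropna()  # first-difference to obtain stationarity

for col in diffed.columns:  # Augmented Dickey-Fuller test on each series
    pvalue = adfuller(diffed[col])[1]
    print(f"{col}: ADF p-value = {pvalue:.4f}")  # small p => reject unit root

model = VAR(diffed)
print(model.select_order(maxlags=20).summary())  # lag order by info criteria
results = model.fit(15)  # p = 15 minutes, the optimal lag length in the paper
print(results.summary())
```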
**4. Results and Discussion**
Predicting the price of cryptocurrencies, even established ones such as Bitcoin, is no
easy feat. From a pure predictive standpoint, myriad machine learning techniques have
been applied to this problem with only limited success [25]. Data from social media have
been used to aid in this endeavor in various forms including search trends data [26] and
sentiment analysis performed on developers’ comments [27]. However, these types of
studies have historically relied on discrete events such as tweets by a crypto-tastemaker,
or have used more continuous data from groups of people rather than from individual
crypto-tastemakers. This rule extends to causal inference settings as well, e.g., [28].
The VAR results indicate that increases in the magnitude of negative public perception
of Musk’s performance had a negative effect on the price of Dogecoin. This can be seen
in the upper-rightmost subplot in the cumulative effects plot from our VAR in Figure 2.
Changes in the positive public perception of Musk’s performance had no significant long
run effect on the price of Dogecoin. Full VAR results can be found in Tables A1–A3,
along with the corresponding impulse response function plots and autocorrelation plots in
Figures A1 and A2, respectively.
**Figure 2. Cumulative effects plots from VAR.**
Looking at the impulse response functions, we see that negative sentiment had a
delayed but significant effect on the price of Dogecoin. There was a steady decline in the
impulse response function from 5 min to 12 min, and this effect can also be seen in the
point estimates from the VAR model for the price of Dogecoin, where the lagged values of
negative sentiment for 11 and 12 lags (L11 Negative Sentiment and L12 Negative Sentiment)
were negative and statistically significant (Table A1). This is evidence that
increases in negative sentiment led to Dogecoin users selling their holdings. Trades began
to be finalized in earnest approximately five minutes after an event occurred that led to
an increase in negative sentiment, and this behavior continued until the cumulative effect
of these sales led to a statistically significant decrease in the price of Dogecoin, occurring
approximately at the 12 min mark.
These results indicate that investors/users of cryptocurrencies who are interested in
the popularity of the cryptocurrency are influenced by the actions of crypto-tastemakers,
but that crypto-tastemakers, once thoroughly affixed to a specific cryptocurrency, may only
be able to harm the popularity of the coin. Given the fact that this is the first such study,
it is possible that a “better performance” (perhaps, e.g., a humanitarian action involving
a crypto-tastemaker or a more convincing performance on SNL) could have a positive
effect on the price of that cryptocurrency. However, it is entirely possible that when a
crypto-tastemaker affixes themselves to a cryptocurrency, that cryptocurrency enters a high
risk, low reward state.
This differs from some previous, related results on cryptocurrencies, such as [29],
who found that Bitcoin responded positively to unscheduled news, whether that news was
positive or negative. However, our results do align with [30], who found that certain news
from authorities led to declines and increased volatility in cryptocurrency markets in the
largest cryptocurrency exchange in China.
Granger causality testing confirms that changes in the level of aggregate negative
sentiment Granger-cause changes in the level of the price of Dogecoin, but no other
instances of Granger causality exist in this study.
Finally, a stability analysis shows that the results are indeed stable. The roots of the
characteristic polynomial of the VAR are presented in Figure A3 and are clearly all within
the unit circle, a sufficient condition for stability.
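These post-estimation diagnostics map onto standard statsmodels calls; a sketch continuing from the estimation code above, with the same assumed column names:

```python
# Post-estimation diagnostics for the fitted VAR (`results` from above).
irf = results.irf(periods=15)
irf.plot()              # impulse response functions (cf. Figure A1)
irf.plot_cum_effects()  # cumulative effects (cf. Figure 2)

# Does aggregate negative sentiment Granger-cause the price of Dogecoin?
print(results.test_causality("price", ["neg"], kind="f").summary())

# Stability: all roots of the characteristic polynomial in the unit circle
print("stable VAR:", results.is_stable())
```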
**5. Conclusions**
Cryptocurrencies are used in part based on their popularity; this much is an observed
reality of cryptocurrencies. Consequently, cryptocurrencies are being endorsed by crypto-tastemakers. This analysis has shown for the first time that cryptocurrency price dynamics
are subject to the real time behaviors of a crypto-tastemaker. Since less mature cryptocurrencies are more likely to be influenced by a crypto-tastemaker, this suggests that less mature
cryptocurrencies may have a more complex nature to their price variance. Future research
on the relationship between cryptocurrencies and crypto-tastemakers should investigate
the direct impact of crypto-tastemakers on the volatility of cryptocurrencies, and if there
are spillover effects across cryptocurrencies due to the action of crypto-tastemakers.
**Funding: This research received no external funding.**
**Data Availability Statement: Final data for the econometric analyses and code for this project is**
[available at: https://github.com/cat-astrophic/dogefather accessed on 13 August 2021. The raw](https://github.com/cat-astrophic/dogefather)
Twitter data set is not stored in the repository due to its size, but it is available from the author
upon request.
**Conflicts of Interest: The authors declare no conflict of interest.**
**Appendix A. VAR Results**
**Figure A1. The impulse response function plots from the VAR.**
**Table A1. VAR results for the regressions on the price of Dogecoin. Optimal lag length was selected**
using a built-in function in the VAR submodule of the statsmodels module in Python. Lx before a
variable name denotes that a variable was lagged x times.
**Variable** **Coefficient** **Std. Err.** **t-Stat** **_p_**
Constant _−0.0003_ 0.0007 _−0.4481_ 0.6541
L1 Price _−0.0565_ 0.0811 _−0.6970_ 0.4858
L1 Positive Sentiment _−0.0444_ 0.1131 _−0.3924_ 0.6947
L1 Negative Sentiment _−0.1338_ 0.1489 _−0.8985_ 0.3689
L2 Price _−0.0426_ 0.0767 _−0.5554_ 0.5787
L2 Positive Sentiment 0.2365 0.1108 2.1341 0.0328
L2 Negative Sentiment _−0.1537_ 0.1534 _−1.0018_ 0.3165
L3 Price _−0.0364_ 0.0783 _−0.4652_ 0.6418
L3 Positive Sentiment _−0.0358_ 0.1132 _−0.3164_ 0.7517
L3 Negative Sentiment 0.0680 0.1548 0.4395 0.6603
L4 Price _−0.1887_ 0.0786 _−2.3993_ 0.0164
L4 Positive Sentiment 0.2558 0.1109 2.3064 0.0211
L4 Negative Sentiment _−0.1971_ 0.1614 _−1.2208_ 0.2221
L5 Price _−0.0250_ 0.080 _−0.3131_ 0.7542
L5 Positive Sentiment _−0.0253_ 0.1160 _−0.2181_ 0.8273
L5 Negative Sentiment 0.0061 0.1637 0.0375 0.9701
L6 Price _−0.0542_ 0.0753 _−0.7191_ 0.4721
L6 Positive Sentiment _−0.1438_ 0.1207 _−1.1918_ 0.2334
L6 Negative Sentiment 0.1467 0.1614 0.9093 0.3632
L7 Price _−0.0498_ 0.0778 _−0.6396_ 0.5224
L7 Positive Sentiment _−0.0709_ 0.1213 _−0.5844_ 0.5590
L7 Negative Sentiment _−0.0612_ 0.1615 _−0.3789_ 0.7047
L8 Price _−0.1269_ 0.0761 _−1.6678_ 0.0954
L8 Positive Sentiment _−0.1935_ 0.1207 _−1.6028_ 0.1090
L8 Negative Sentiment _−0.0623_ 0.1615 _−0.3859_ 0.6996
L9 Price 0.0476 0.0766 0.6217 0.5341
L9 Positive Sentiment 0.0915 0.1214 0.7537 0.4510
L9 Negative Sentiment _−0.0143_ 0.1597 _−0.0897_ 0.9285
L10 Price 0.0742 0.0785 0.9449 0.3447
L10 Positive Sentiment _−0.1283_ 0.1166 _−1.1002_ 0.2712
L10 Negative Sentiment _−0.2174_ 0.1576 _−1.3795_ 0.1678
L11 Price 0.0761 0.0786 0.9681 0.3330
L11 Positive Sentiment 0.0118 0.1153 0.1025 0.9183
L11 Negative Sentiment _−0.2561_ 0.1550 _−1.6521_ 0.0985
L12 Price _−0.0401_ 0.0779 _−0.5144_ 0.6069
L12 Positive Sentiment 0.0812 0.1134 0.7165 0.4737
L12 Negative Sentiment _−0.2535_ 0.1523 _−1.6641_ 0.0961
L13 Price 0.0825 0.0797 1.0350 0.3007
L13 Positive Sentiment _−0.1346_ 0.1113 _−1.2099_ 0.2263
L13 Negative Sentiment _−0.1352_ 0.1476 _−0.9157_ 0.3598
L14 Price 0.0305 0.0785 0.3886 0.6976
L14 Positive Sentiment _−0.3995_ 0.1120 _−3.5670_ 0.0004
L14 Negative Sentiment _−0.0198_ 0.1447 _−0.1369_ 0.8911
L15 Price 0.1195 0.0793 1.5071 0.1318
L15 Positive Sentiment 0.1268 0.1178 1.0766 0.2817
L15 Negative Sentiment _−0.2441_ 0.1429 _−1.7083_ 0.0876
**Table A2. VAR results for the regressions on aggregate positive sentiment. Optimal lag length**
was selected using a built-in function in the VAR submodule of the statsmodels module in Python.
Lx before a variable name denotes that a variable was lagged x times.
**Variable** **Coefficient** **Std. Err.** **t-Stat** **_p_**
Constant 0.0002 0.0006 0.4140 0.6789
L1 Price _−0.0462_ 0.0667 _−0.6918_ 0.4891
L1 Positive Sentiment _−0.1333_ 0.0930 _−1.4332_ 0.1518
L1 Negative Sentiment 0.3832 0.1226 3.1268 0.0018
L2 Price 0.1605 0.0631 2.5431 0.0110
L2 Positive Sentiment _−0.1773_ 0.0912 _−1.9441_ 0.0519
L2 Negative Sentiment 0.2909 0.1262 2.3052 0.0212
L3 Price _−0.0991_ 0.0644 _−1.5386_ 0.1239
L3 Positive Sentiment _−0.1636_ 0.0932 _−1.7553_ 0.0792
L3 Negative Sentiment 0.3428 0.1274 2.6913 0.0071
L4 Price 0.1309 0.0647 2.0233 0.0430
L4 Positive Sentiment 0.2863 0.0913 3.1380 0.0017
L4 Negative Sentiment 0.0617 0.1328 0.4646 0.6422
L5 Price 0.0697 0.0658 1.0597 0.2893
L5 Positive Sentiment _−0.2438_ 0.0954 _−2.5543_ 0.0106
L5 Negative Sentiment 0.1894 0.1347 1.4055 0.1599
L6 Price 0.1146 0.0620 1.8488 0.0645
L6 Positive Sentiment _−0.1225_ 0.0993 _−1.2332_ 0.2175
L6 Negative Sentiment 0.2241 0.1328 1.6881 0.0914
L7 Price _−0.0369_ 0.0640 _−0.5758_ 0.5648
L7 Positive Sentiment _−0.1004_ 0.0998 _−1.0062_ 0.3143
L7 Negative Sentiment 0.1467 0.1329 1.1035 0.2698
L8 Price _−0.0459_ 0.0626 _−0.7336_ 0.4632
L8 Positive Sentiment _−0.1302_ 0.0993 _−1.3104_ 0.1900
L8 Negative Sentiment _−0.0109_ 0.1329 _−0.0818_ 0.9348
L9 Price _−0.0835_ 0.0630 _−1.3252_ 0.1851
L9 Positive Sentiment _−0.0479_ 0.0999 _−0.4797_ 0.6314
L9 Negative Sentiment _−0.2419_ 0.1314 _−1.8405_ 0.0657
L10 Price 0.0016 0.0646 0.0243 0.9806
L10 Positive Sentiment _−0.1976_ 0.0960 _−2.0589_ 0.0395
L10 Negative Sentiment _−0.0600_ 0.1297 _−0.4626_ 0.6437
L11 Price _−0.0624_ 0.0647 _−0.9645_ 0.3348
L11 Positive Sentiment _−0.0680_ 0.0949 _−0.7167_ 0.4735
L11 Negative Sentiment 0.0541 0.1275 0.4241 0.6715
L12 Price 0.1659 0.0641 2.5862 0.0097
L12 Positive Sentiment 0.1125 0.0933 1.2060 0.2278
L12 Negative Sentiment _−0.0633_ 0.1253 _−0.5050_ 0.6136
L13 Price _−0.0338_ 0.0656 _−0.5156_ 0.6061
L13 Positive Sentiment _−0.0902_ 0.0916 _−0.9855_ 0.3244
L13 Negative Sentiment 0.2246 0.1215 1.8485 0.0645
L14 Price 0.1669 0.0646 2.5841 0.0098
L14 Positive Sentiment 0.1857 0.0921 2.0157 0.0438
L14 Negative Sentiment _−0.1116_ 0.1191 _−0.9369_ 0.3488
L15 Price 0.1164 0.0653 1.7831 0.0746
L15 Positive Sentiment 0.0173 0.0969 0.1783 0.8585
L15 Negative Sentiment _−0.0698_ 0.1176 _−0.5938_ 0.5527
**Table A3. VAR results for the regressions on aggregate negative sentiment. Optimal lag length**
was selected using a built-in function in the VAR submodule of the statsmodels module in Python.
Lx before a variable name denotes that a variable was lagged x times.
**Variable** **Coefficient** **Std. Err.** **t-Stat** **_p_**
Constant 0.0001 0.0004 0.2079 0.8353
L1 Price _−0.1288_ 0.0499 _−2.5803_ 0.0099
L1 Positive Sentiment _−0.1070_ 0.0696 _−1.5359_ 0.1246
L1 Negative Sentiment 0.0033 0.0917 0.0365 0.9709
L2 Price 0.0980 0.0472 2.0762 0.0379
L2 Positive Sentiment _−0.1651_ 0.0682 _−2.4197_ 0.0155
L2 Negative Sentiment 0.1167 0.0945 1.2353 0.2167
L3 Price _−0.0589_ 0.0482 _−1.2207_ 0.2222
L3 Positive Sentiment 0.0219 0.0697 0.3143 0.7533
L3 Negative Sentiment _−0.1159_ 0.0953 _−1.2163_ 0.2239
L4 Price 0.0611 0.0484 1.2624 0.2068
L4 Positive Sentiment 0.2521 0.0683 3.6914 0.0002
L4 Negative Sentiment _−0.1634_ 0.0994 _−1.6432_ 0.1003
L5 Price _−0.0091_ 0.0493 _−0.1853_ 0.8530
L5 Positive Sentiment 0.0732 0.0714 1.0247 0.3055
L5 Negative Sentiment 0.0968 0.1008 0.9604 0.3368
L6 Price 0.1543 0.0464 3.3251 0.0009
L6 Positive Sentiment 0.0828 0.0743 1.1142 0.2652
L6 Negative Sentiment _−0.0941_ 0.0994 _−0.9472_ 0.3435
L7 Price _−0.0023_ 0.0479 _−0.0489_ 0.9610
L7 Positive Sentiment 0.0496 0.0747 0.6640 0.5067
L7 Negative Sentiment 0.0919 0.0995 0.9237 0.3557
L8 Price _−0.0279_ 0.0469 _−0.5960_ 0.5512
L8 Positive Sentiment _−0.0365_ 0.0743 _−0.4914_ 0.6232
L8 Negative Sentiment _−0.0170_ 0.0995 _−0.1709_ 0.8643
L9 Price _−0.1617_ 0.0472 _−3.4295_ 0.0006
L9 Positive Sentiment 0.0526 0.0748 0.7029 0.4821
L9 Negative Sentiment _−0.2434_ 0.0984 _−2.4742_ 0.0134
L10 Price _−0.0501_ 0.0484 _−1.0358_ 0.3003
L10 Positive Sentiment _−0.1080_ 0.0718 _−1.5032_ 0.1328
L10 Negative Sentiment _−0.1346_ 0.0971 _−1.3868_ 0.1655
L11 Price 0.0459 0.0484 0.9478 0.3432
L11 Positive Sentiment 0.0101 0.0710 0.1421 0.8870
L11 Negative Sentiment 0.0189 0.0955 0.1985 0.8427
L12 Price 0.1147 0.0480 2.3908 0.0168
L12 Positive Sentiment 0.0729 0.0698 1.0439 0.2965
L12 Negative Sentiment _−0.0868_ 0.0938 _−0.9253_ 0.3548
L13 Price _−0.0609_ 0.0491 _−1.2404_ 0.2148
L13 Positive Sentiment _−0.0424_ 0.0685 _−0.6182_ 0.5365
L13 Negative Sentiment 0.0477 0.0909 0.5241 0.6002
L14 Price 0.0462 0.0483 0.9568 0.3387
L14 Positive Sentiment 0.1419 0.0690 2.0582 0.0396
L14 Negative Sentiment 0.0179 0.0891 0.2011 0.8406
L15 Price 0.0871 0.0488 1.7832 0.0745
L15 Positive Sentiment _−0.1290_ 0.0725 _−1.7775_ 0.0755
L15 Negative Sentiment _−0.0307_ 0.0880 _−0.3485_ 0.7275
**Figure A2. The autocorrelation plots from the VAR.**
[Figure A3 appears here: “Roots of the VAR Characteristic Polynomial”, plotted in the complex plane with all roots lying within the unit circle.]
**Figure A3. This plot shows the unit roots from the VAR characteristic polynomial. Since all unit roots**
lie inside the unit circle (in red), the VAR process is stable.
**References**
1. Al Shehhi, A.; Oudah, M.; Aung, Z. Investigating factors behind choosing a cryptocurrency. In Proceedings of the 2014 IEEE
International Conference on Industrial Engineering and Engineering Management, Selangor, Malaysia, 9–12 December 2014;
pp. 1443–1447.
2. [Bouri, E.; Gupta, R.; Roubaud, D. Herding behaviour in cryptocurrencies. Financ. Res. Lett. 2019, 29, 216–221. [CrossRef]](http://doi.org/10.1016/j.frl.2018.07.008)
3. [Ahn, Y.; Kim, D. Emotional trading in the cryptocurrency market. Financ. Res. Lett. 2020, 101912. [CrossRef]](http://dx.doi.org/10.1016/j.frl.2020.101912)
4. Da Gama Silva, P.V.J.; Klotzle, M.C.; Pinto, A.C.F.; Gomes, L.L. Herding behavior and contagion in the cryptocurrency market.
_[J. Behav. Exp. Financ. 2019, 22, 41–50. [CrossRef]](http://dx.doi.org/10.1016/j.jbef.2019.01.006)_
5. Vidal-Tomás, D.; Ibáñez, A.M.; Farinós, J.E. Herding in the cryptocurrency market: CSSD and CSAD approaches. Financ. Res.
_[Lett. 2019, 30, 181–186. [CrossRef]](http://dx.doi.org/10.1016/j.frl.2018.09.008)_
6. [Kallinterakis, V.; Wang, Y. Do investors herd in cryptocurrencies–and why? Res. Int. Bus. Financ. 2019, 50, 240–245. [CrossRef]](http://dx.doi.org/10.1016/j.ribaf.2019.05.005)
7. Vidal-Tomás, D. Transitions in the cryptocurrency market during the COVID-19 pandemic: A network analysis. Financ. Res. Lett.
**[2021, 101981. [CrossRef]](http://dx.doi.org/10.1016/j.frl.2021.101981)**
8. Stavroyiannis, S.; Babalos, V. Herding behavior in cryptocurrencies revisited: novel evidence from a TVP model. J. Behav. Exp.
_[Financ. 2019, 22, 57–63. [CrossRef]](http://dx.doi.org/10.1016/j.jbef.2019.02.007)_
9. Aggarwal, G.; Patel, V.; Varshney, G.; Oostman, K. Understanding the social factors affecting the cryptocurrency market. arXiv
**2019, arXiv:1901.06245.**
10. [Huynh, T.L.D. Does Bitcoin React to Trump’s Tweets? J. Behav. Exp. Financ. 2021, 31, 100546. [CrossRef]](http://dx.doi.org/10.1016/j.jbef.2021.100546)
11. Lamon, C.; Nielsen, E.; Redondo, E. Cryptocurrency price prediction using news and social media sentiment. SMU Data Sci. Rev.
**2017, 1, 1–22.**
12. [Philippas, D.; Rjiba, H.; Guesmi, K.; Goutte, S. Media attention and Bitcoin prices. Financ. Res. Lett. 2019, 30, 37–43. [CrossRef]](http://dx.doi.org/10.1016/j.frl.2019.03.031)
13. Phillips, R.C.; Gorse, D. Predicting cryptocurrency price bubbles using social media data and epidemic modelling. In Proceedings
of the 2017 IEEE Symposium Series on Computational Intelligence (SSCI), Honolulu, HI, USA, 27 November–1 December 2017;
pp. 1–7.
14. [Chohan, U.W. A History of Dogecoin. Discussion Series: Notes on the 21st Century. 2017. Available online: https://ssrn.com/](https://ssrn.com/abstract=3091219)
[abstract=3091219 (accessed on 13 August 2021).](https://ssrn.com/abstract=3091219)
15. [Young, I. Dogecoin: A Brief Overview & Survey. 2018. Available online: https://ssrn.com/abstract=3306060 (accessed on 13](https://ssrn.com/abstract=3306060)
August 2021).
16. [Ante, L. How Elon Musk’s Twitter Activity Moves Cryptocurrency Markets. 2021. Available online: https://ssrn.com/abstract=](https://ssrn.com/abstract=3778844)
[3778844 (accessed on 13 August 2021).](https://ssrn.com/abstract=3778844)
17. López, M.; Sicilia, M.; Moyeda-Carabaza, A.A. Creating identification with brand communities on Twitter: The balance between
[need for affiliation and need for uniqueness. Internet Res. 2017, 27, 21–51. [CrossRef]](http://dx.doi.org/10.1108/IntR-12-2013-0258)
18. Saura, J.R.; Reyes-Menéndez, A.; deMatos, N.; Correia, M.B. Identifying Startups Business Opportunities from UGC on Twitter
[Chatting: An Exploratory Analysis. J. Theor. Appl. Electron. Commer. Res. 2021, 16, 1929–1944. [CrossRef]](http://dx.doi.org/10.3390/jtaer16060108)
19. Mohammadi, A.; Hashemi Golpayegani, S.A. SenseTrust: A Sentiment Based Trust Model in Social Network. J. Theor. Appl.
_[Electron. Commer. Res. 2021, 16, 2031–2050. [CrossRef]](http://dx.doi.org/10.3390/jtaer16060114)_
20. Liu, B. Sentiment analysis and subjectivity. Handb. Nat. Lang. Process. 2010, 2, 627–666.
21. Maiti, M.; Vyklyuk, Y.; Vukovi´c, D. Cryptocurrencies chaotic co-movement forecasting with neural networks. Internet Technol.
_[Lett. 2020, 3, e157. [CrossRef]](http://dx.doi.org/10.1002/itl2.157)_
22. Maiti, M.; Grubisic, Z.; Vukovic, D.B. Dissecting Tether’s Nonlinear Dynamics during Covid-19. J. Open Innov. Technol. Mark.
_[Complex. 2020, 6, 161. [CrossRef]](http://dx.doi.org/10.3390/joitmc6040161)_
23. Vukovic, D.; Maiti, M.; Grubisic, Z.; Grigorieva, E.M.; Frömmel, M. COVID-19 Pandemic: Is the Crypto Market a Safe Haven?
[The Impact of the First Wave. Sustainability 2021, 13, 8578. [CrossRef]](http://dx.doi.org/10.3390/su13158578)
24. Yue, W.; Zhang, S.; Zhang, Q. Asymmetric news effects on cryptocurrency liquidity: An Event study perspective. Financ. Res.
_[Lett. 2021, 41, 101799. [CrossRef]](http://dx.doi.org/10.1016/j.frl.2020.101799)_
25. Ortu, M.; Uras, N.; Conversano, C.; Destefanis, G.; Bartolucci, S. On Technical Trading and Social Media Indicators in
Cryptocurrencies’ Price Classification Through Deep Learning. arXiv 2021, arXiv:2102.08189.
26. Matta, M.; Lunesu, I.; Marchesi, M. Bitcoin Spread Prediction Using Social and Web Search Media. In Proceedings of the UMAP
2015—23rd Conference on User Modeling, Adaptation and Personalization, Dublin, Ireland, 29 June 2015–3 July 2015; pp. 1–10.
27. Bartolucci, S.; Destefanis, G.; Ortu, M.; Uras, N.; Marchesi, M.; Tonelli, R. The Butterfly “Affect”: Impact of development practices
[on cryptocurrency prices. EPJ Data Sci. 2020, 9, 21. [CrossRef]](http://dx.doi.org/10.1140/epjds/s13688-020-00239-6)
28. Mai, F.; Shan, Z.; Bai, Q.; Wang, X.; Chiang, R.H. How does social media impact Bitcoin value? A test of the silent majority
[hypothesis. J. Manag. Inf. Syst. 2018, 35, 19–52. [CrossRef]](http://dx.doi.org/10.1080/07421222.2018.1440774)
29. Rognone, L.; Hyde, S.; Zhang, S.S. News sentiment in the cryptocurrency market: An empirical comparison with Forex. Int. Rev.
_[Financ. Anal. 2020, 69, 101462. [CrossRef]](http://dx.doi.org/10.1016/j.irfa.2020.101462)_
30. Zhang, S.; Zhou, X.; Pan, H.; Jia, J. Cryptocurrency, confirmatory bias and news readability–evidence from the largest Chinese
[cryptocurrency exchange. Account. Financ. 2019, 58, 1445–1468. [CrossRef]](http://dx.doi.org/10.1111/acfi.12454)
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.3390/jtaer16060123?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.3390/jtaer16060123, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "GOLD",
"url": "https://www.mdpi.com/0718-1876/16/6/123/pdf?version=1630647783"
}
| 2,021
|
[
"JournalArticle"
] | true
| 2021-09-03T00:00:00
|
[
{
"paperId": "c0d2f057b8a1080c85ea56ccd6f37abcdde172ab",
"title": "Does Bitcoin React to Trump’s Tweets?"
},
{
"paperId": "fe9399330887ccadd3b7556656545deb0c842fb2",
"title": "COVID-19 Pandemic: Is the Crypto Market a Safe Haven? The Impact of the First Wave"
},
{
"paperId": "128238e97e16c20b610deef9f5ff6a12e893edb3",
"title": "SenseTrust: A Sentiment Based Trust Model in Social Network"
},
{
"paperId": "59d6741b709b907f019ae5ebe41947d74a614c68",
"title": "Identifying Startups Business Opportunities from UGC on Twitter Chatting: An Exploratory Analysis"
},
{
"paperId": "5d7b6f303ebee8c1bff3fc69f9c16e0a176404b9",
"title": "On Technical Trading and Social Media Indicators in Cryptocurrencies' Price Classification Through Deep Learning"
},
{
"paperId": "5e1c8359169dd62e5fbd16f46612034dbeb561f4",
"title": "How Elon Musk's Twitter Activity Moves Cryptocurrency Markets"
},
{
"paperId": "903b914604e1d28bd130e7837871352134acf24e",
"title": "Transitions in the cryptocurrency market during the COVID-19 pandemic: A network analysis"
},
{
"paperId": "c0871a4de73861e7ea55e16bec1c0ea5c8faa264",
"title": "Emotional trading in the cryptocurrency market"
},
{
"paperId": "27d2b03b3021128d0a4363dc456076b9352d27f5",
"title": "Dissecting Tether’s Nonlinear Dynamics during Covid-19"
},
{
"paperId": "03327a2ef5a32ce7a71e69259774fb920811168b",
"title": "Asymmetric News Effects on Cryptocurrency Liquidity: an Event Study Perspective"
},
{
"paperId": "adeb0a2c26f6cc9c0f953d3e9d372682449962f3",
"title": "The Butterfly “Affect”: impact of development practices on cryptocurrency prices"
},
{
"paperId": "50f316ef0b1c3ac87b117c94f7286712cf61ec94",
"title": "News sentiment in the cryptocurrency market: An empirical comparison with Forex"
},
{
"paperId": "4683a10d803eded36443d45dcce394b2a0fca0f5",
"title": "Cryptocurrencies chaotic co‐movement forecasting with neural networks"
},
{
"paperId": "a75d70b1f007319ccee4807ef76170a7a38058b6",
"title": "Do investors herd in cryptocurrencies – and why?"
},
{
"paperId": "70927b9d6f3e4fd45409e4f75d7cc6bc04f7a73c",
"title": "Herding in the cryptocurrency market: CSSD and CSAD approaches"
},
{
"paperId": "bc7ae10f0bcc1c11d5a2dcf3d09b9453f21a8706",
"title": "Herding behavior in cryptocurrencies revisited: Novel evidence from a TVP model"
},
{
"paperId": "c13adb42f983a1c9553cd0ea0826f896936faa4f",
"title": "Herding behaviour in cryptocurrencies"
},
{
"paperId": "29810afb6fa758911760eca141344d70ef199695",
"title": "Herding behavior and contagion in the cryptocurrency market"
},
{
"paperId": "8fab40b779469a4b7497c5aebbb59d34c6ec1d22",
"title": "Cryptocurrency, Confirmatory Bias and News Readability – Evidence from the Largest Chinese Cryptocurrency Exchange"
},
{
"paperId": "b0f2741f846fe08040a77c6bd59f0f9e8d6eff34",
"title": "Understanding the Social Factors Affecting the Cryptocurrency Market"
},
{
"paperId": "8c0a5bb73f6e03c0d7abebebe37efcef2fac940b",
"title": "Media Attention and Bitcoin Prices"
},
{
"paperId": "7a5ce021e8e2231a70e1c4a990b34a32a2e0308b",
"title": "Dogecoin: A Brief Overview & Survey"
},
{
"paperId": "7677a0070f604e97bc7544c168f84d20ebb26791",
"title": "How Does Social Media Impact Bitcoin Value? A Test of the Silent Majority Hypothesis"
},
{
"paperId": "097fa70b534568d46f665219f101e94a79f21dee",
"title": "Predicting cryptocurrency price bubbles using social media data and epidemic modelling"
},
{
"paperId": "4a610c43cb02610535facfb87f139f6ca6cb51c4",
"title": "Creating identification with brand communities on Twitter: The balance between need for affiliation and need for uniqueness"
},
{
"paperId": "5d21f76ff461fce03ec0e539eb1b9edd21a407c2",
"title": "Investigating factors behind choosing a cryptocurrency"
},
{
"paperId": "c3b80de058596cee95beb20a2d087dbcf8be01ea",
"title": "Cryptocurrency Price Prediction Using News and Social Media Sentiment"
},
{
"paperId": "1345a50edee28418900e2c1a4292ccc51138e1eb",
"title": "Bitcoin Spread Prediction Using Social and Web Search Media"
},
{
"paperId": "29366f39dc4237a380f20e9ae9fbeffa645200c9",
"title": "Sentiment Analysis and Subjectivity"
},
{
"paperId": null,
"title": "A Brief Overview & Survey"
},
{
"paperId": null,
"title": "Emotional trading in the cryptocurrency"
},
{
"paperId": null,
"title": "A History of Dogecoin. Discussion Series: Notes on the 21st Century"
},
{
"paperId": null,
"title": "Notes on the 21st Century"
},
{
"paperId": null,
"title": "A History of Dogecoin . Discussion Series : Notes on the 21 st Century . 2017"
}
] | 10,960
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/00cb943e73fce88d768f3891ad67fa48c6b70ed2
|
[
"Computer Science"
] | 0.885892
|
Name enhanced SDN framework for service function chaining of elastic Network functions
|
00cb943e73fce88d768f3891ad67fa48c6b70ed2
|
Conference on Computer Communications Workshops
|
[
{
"authorId": "153673738",
"name": "Sameer G. Kulkarni"
},
{
"authorId": "2913151",
"name": "M. Arumaithurai"
},
{
"authorId": "40593388",
"name": "Argyrious G. Tasiopoulos"
},
{
"authorId": "3456310",
"name": "Yiaonis Psaras"
},
{
"authorId": "145922660",
"name": "K. Ramakrishnan"
},
{
"authorId": "1799074",
"name": "Xiaoming Fu"
},
{
"authorId": "1680313",
"name": "G. Pavlou"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"INFOCOM WKSHPS",
"Conf Comput Commun Work"
],
"alternate_urls": null,
"id": "be267cb9-6411-4126-8b64-4847025171ee",
"issn": null,
"name": "Conference on Computer Communications Workshops",
"type": "conference",
"url": null
}
| null |
# Name Enhanced SDN Framework for Service Function Chaining of Elastic Network Functions
## Sameer G Kulkarni[∗], Mayutan Arumaithurai[∗], Argyrious Tasiopoulos[‡], Yiaonis Psaras[‡], K.K. Ramakrishnan[†], Xiaoming Fu[∗], George Pavlou[‡]
_∗University of G¨ottingen, Germany, ‡University College London, †University of California, Riverside._
**_Abstract_—Middleboxes have become an integral part of Internet infrastructure, providing additional flow processing for policy control, security, and performance optimization. Network Function Virtualisation (NFV) proposes the deployment of software-based middleboxes on top of commercial off-the-shelf (COTS) hardware, enabling the dynamic adjustment of Virtual Network Functions (VNFs), both in terms of instance numbers and computational power. The performance of data center and enterprise networks depends strongly on efficient scaling of VNFs and the traffic load balance across VNF instances. To this end, we present a Name enhanced SDN framework for service function chaining of elastic Network functions (NSN) that extends Function-Centric Service Chaining (FCSC) with load balancing functionalities to achieve efficient network utilization while reducing the switch flow rules by 2-4x compared to traditional SDN approaches.**
I. INTRODUCTION
Software Defined Networking (SDN) enables policy enforcement by providing greater flexibility and
control in steering packets through a desired function chain.
With a logically centralized controller, it is easier to enforce
heterogeneous policies for different flows and to steer
traffic across the network. In addition, with a global view
of the network topology, it is easier to monitor resource
utilization. Network Function Virtualization (NFV) has caused
a paradigm shift towards deploying soft middleboxes that
provide flexible realization of network services with greater
cost optimization [1]. SDN and NFV complement each other to provide a flexible and dynamic software-based network environment. On the other hand, Information-Centric Networks (ICN)
and Named Data Networking (NDN) architectures introduce
a naming layer to the network architecture that decouples the
content/name from the location. This offers greater flexibility
in routing flows based on service types, without actually
knowing the exact location in the network.
Traffic dynamics often trigger reallocation and reconfiguration of network resources. In case of high demand, some
resources end up being over-utilized, resulting in higher
latency and SLA degradation, while on other occasions they end
up being underutilized. In such circumstances, in order to
meet the performance and energy objectives, the NF instances
(NFIs) need to be dynamically instantiated or decommissioned
or even relocated/migrated. However, to make it happen,
several key decisions need to be made in terms of knowing
when to instantiate, decommission or migrate the instance,
which network instances need to be scaled, where in the
network to place the instances and how to redistribute the
load among the available instances.
[Fig. 1: NSN Architecture (figure appears here).]
Several recent works [1],
[2], [3] have tried to address these aspects within
traditional or SDN frameworks.
FCSC [4] exploits the benefits of NDN in combination
with SDN to provide a more flexible, scalable and reliable
framework to realize service function chaining. However, it
falls short of incorporating a reliable mechanism for applying
load balancing over NFIs. Load balancing is fundamental
to ensure efficient utilization of resources and to meet the
SLA requirements. Herein, we present Name enhanced SDN
framework for service function chaining of elastic Network
functions(NSN) that exploits named service instances and
compliments the SDN framework by providing the capabilities of efficient load balancing and elastic scaling of VNF’s
via service instantiation, consolidation, while supporting flow
redirections for achieving higher VNF utilization.
II. RELATED WORK
Slick [1] provides a programming model abstraction, where
the SDN controller employs heuristic based approaches for
estimating the dynamic placement, steering and consolidation
of VNFs. However, load balancing is not explicitly addressed
and the routing does not take into consideration the network
load upon the load steering decisions. E2 [2] presents a
NFV scheduling framework that supports affinity based NF
placement while trying to minimize the traffic across switches
as well as deploying dynamic scaling of NF instances. SIMPLE [3] primarily addresses the SDN based traffic steering
approach that tries to optimize on the total rules. It relies on
the ILP solver to provide online load balancing.
III. NSN ENHANCED ARCHITECTURE
We present the high level architecture and design of NSN,
that incorporates name based network function instances and
enhances SDN’s capability to handle placement, routing and
flow redirections.
_A. Name based Routing_
NSN enhances FCSC’s name based routing mechanism to
perform NFI based routing, wherein all the NFIs are uniquely
identified by a name. Policy enforcement on the flows is
performed by the controller by encoding the names of the
sequence of network functions that are required for the flow
to pass through. This aspect is essentially the concept of
Information-Centric Networking, wherein the function that a
flow requests is decoupled from the location where this function/service is going to be executed. That is, packets indicate
through their headers the service function they require and
the network is responsible for routing those packets towards
the right location. This notion of location-independence can
support real-time flow steering and redirection to dynamically
instantiated/re-located services out of the box. Once a packet
goes through a NF and the corresponding service is executed,
the header is modified to remove this service from the chain
of required services.
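A schematic sketch of this pop-after-service label processing, with made-up instance names and a plain dict standing in for the per-instance forwarding table (the paper's actual wire format is not specified here):

```python
# Toy model of name-based chain steering: packets carry the remaining chain
# of named service instances, and each NF pops its own name once executed.
def forward(packet: dict, fib: dict) -> int:
    """fib maps an instance name to an output port (one rule per instance)."""
    if packet["chain"]:
        return fib[packet["chain"][0]]  # route toward the next named NFI
    return fib["egress"]                # chain complete: toward destination

def after_service(packet: dict) -> None:
    packet["chain"].pop(0)              # remove the executed service

pkt = {"chain": ["fw/1", "ids/2", "nat/1"], "payload": b"..."}
port = forward(pkt, {"fw/1": 3, "ids/2": 1, "nat/1": 2, "egress": 4})
```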
A key difference compared to current IP based SDN
solutions is that the intermediate switches do not need to
maintain per-flow forwarding information or similar fine-grained forwarding rules, but only need to store forwarding
information to reach the named instances. The switches that
only route packets towards specific service instances have to keep
a single forwarding rule for each service instance. Thus, the
state maintained at intermediate routers is proportional to the
number of instances and not to the number of flows. Moreover,
these rules can be set in a proactive manner as soon as an
NFI is instantiated, removed or re-located. Only the ingress
switches and the edge switches connected to NFI that are
servicing the flow, keep a per flow state forwarding table
to ensure that the right labels (i.e., the next hops service
instances) are placed on the flow’s header.
Another advantage compared to existing IP based solutions
is that when an NFI is removed or re-located, in the case
of NSN, only the forwarding entries to these instances need
to be changed, whereas in the case of current solutions, all
entries pertaining to flows that are being serviced by this NFI
need to be modified. Similarly, in case of flow redirection,
the proposed scheme provides a notion of an atomic rule update,
as it needs only one rule update: the NFI node just needs to change the
NFI tag to another instance.
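As a toy back-of-the-envelope illustration (not from the paper) of this state-scaling argument, contrasting per-flow state with per-instance state under made-up path and chain lengths:

```python
# Rough rule-count comparison; path length, chain length, and instance
# counts are illustrative assumptions, not the paper's evaluation setup.
def s_sdn_rules(num_flows: int, avg_path_len: int) -> int:
    # standard SDN: one 5-tuple rule per flow at every switch on its path
    return num_flows * avg_path_len

def nsn_rules(num_flows: int, num_instances: int, chain_len: int) -> int:
    # NSN: one rule per named instance at intermediate switches, plus
    # per-flow label state only at the ingress and NFI edge switches
    return num_instances + num_flows * (1 + chain_len)

print(s_sdn_rules(100, 12))  # e.g., 1200 rules under standard SDN
print(nsn_rules(100, 9, 3))  # e.g., 409 rules under NSN
```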
[Figure 2 appears here. (a) Total switch rules for flows, for 5 to 100 flows, comparing FCSC, NSN and S-SDN (e.g., roughly 1,264 rules for S-SDN versus 407 for NSN at 100 flows). (b) Normalized load across different network functions Svc-A, Svc-B and Svc-C.]
Fig. 2: Evaluation results indicating flow rule optimization and load balancing characteristics.
_B. VNF Placement and Elastic Scaling_
NSN supports placement, instantiation, removal and relocation of instances to better support the dynamic requirements
of flows. The NSN architecture can accommodate different
heuristic-based placement mechanisms. NSN enables the SDN
framework to make quicker heuristic-based placement decisions, and allows for finer and quicker course corrections
to redistribute the load by either redirecting flows via other
instances and/or by instantiating, removing or relocating the
NFIs, thereby overcoming the disadvantages
of making such heuristic-based decisions relative to time-consuming
and complex ILP [3] decisions. The SDN controller can
periodically monitor the utilization of the NFIs and network
link utilization, which can assist in identifying the optimal
placement of NFIs that need to be dynamically instantiated.
One such approach is to compute the optimal location by
accounting for available resources with greater affinity for the
flows. Along the same lines, under-utilized NF instances
can be decommissioned. NSN estimates the load on each
instance based on the gathered link statistics, flows in the
system, and the explicit notifications from instances. Based
on the load thresholds, it can then dynamically instantiate
and decommission specific network instances to ensure the
instance utilization rates are kept within the optimal levels.
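A minimal sketch of such threshold-driven scaling logic; the thresholds, the load-estimation inputs, and the instance names are illustrative assumptions, not values from the paper:

```python
# Hypothetical controller-side scaling decision based on estimated load.
SCALE_UP, SCALE_DOWN = 0.8, 0.2  # assumed utilization thresholds

def rebalance(instances: dict) -> None:
    """instances: mapping from NFI name to estimated utilization in [0, 1]."""
    for name, load in instances.items():
        if load > SCALE_UP:
            print(f"instantiate a new instance of {name}'s NF type")
        elif load < SCALE_DOWN and len(instances) > 1:
            print(f"decommission {name} and redirect its flows")

rebalance({"fw/1": 0.91, "fw/2": 0.12, "ids/1": 0.55})
```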
IV. IMPLEMENTATION AND EVALUATION
We implemented NSN as a set of modules on top of the POX
SDN controller in Python (around 2500 lines of code). We use
Open vSwitch (2.4.0) for the SDN switches and implemented
custom network modules in Linux using the Python Scapy utility.
We evaluate NSN using the Mininet network emulator and
use the data center tree topology. As in [3], each flow has a
policy chain of 3 distinct NFs and originates and terminates at
random nodes. The key focus of our evaluation is to measure and
quantify the benefits in terms of overall number of flow rules
required for steering and load across active network instances.
We compare NSN with Standard SDN (S-SDN) that employs
IP 5-tuple based rule setting and with FCSC that employs rule
setting purely based on the named network functions.
Figure 2(a) depicts the total number of flow rules installed
across all switches in the network for different number of
flows. Figure 2(b) indicates the normalized average load
observed across all the active instances of services A, B and
C. We can see that NSN provides significant reduction in the
total number of rules stored at switches compared to S-SDN.
Moreover, NSN ensures load balancing capability identical to
that of the S-SDN solution.
ACKNOWLEDGEMENT
This work was supported by EU FP7 Marie Curie Actions
CleanSky ITN project Grant No. 607584, and the NICT EU-JAPAN GreenICN project Grant No. 608518.
REFERENCES
[1] B. Anwer, T. Benson, N. Feamster et al., “Programming slick network
functions,” in ACM SOSR. ACM, 2015.
[2] S. Palkar, C. Lan, S. Han et al., “E2: A framework for nfv applications,”
in ACM SOSP, 2015.
[3] Z. A. Qazi, C.-C. Tu, L. Chiang et al., “Simple-fying middlebox policy
enforcement using sdn,” in ACM SIGCOMM, 2013.
[4] M. Arumaithurai, J. Chen, E. Monticelli et al., “Exploiting icn for flexible
management of software-defined networks,” in ACM ICN, 2014.
|
{
"disclaimer": "Notice: The following paper fields have been elided by the publisher: {'abstract', 'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1109/INFCOMW.2016.7562043?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1109/INFCOMW.2016.7562043, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "other-oa",
"status": "GREEN",
"url": "https://discovery.ucl.ac.uk/1536699/1/Pavlou_Kulkarni-2016-infocom-workshop.pdf"
}
| 2,016
|
[
"JournalArticle",
"Conference"
] | true
| 2016-04-10T00:00:00
|
[] | 2,649
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/00cf3d116538b3fbc71c113df890d54299369a61
|
[
"Computer Science"
] | 0.891446
|
Universal Randomized Guessing With Application to Asynchronous Decentralized Brute–Force Attacks
|
00cf3d116538b3fbc71c113df890d54299369a61
|
IEEE Transactions on Information Theory
|
[
{
"authorId": "1734871",
"name": "N. Merhav"
},
{
"authorId": "1796627",
"name": "A. Cohen"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"IEEE Trans Inf Theory"
],
"alternate_urls": [
"https://ieeexplore.ieee.org/xpl/RecentIssue.jsp?puNumber=18",
"http://ieeexplore.ieee.org/servlet/opac?punumber=18"
],
"id": "748e730b-add9-47ee-819d-8ae54e504ef9",
"issn": "0018-9448",
"name": "IEEE Transactions on Information Theory",
"type": "journal",
"url": "http://www.comm.utoronto.ca/trans-it/"
}
|
Consider the problem of guessing the realization of a random vector $X$ by repeatedly submitting queries (guesses) of the form “Is $X$ equal to $x$?” until an affirmative answer is obtained. In this setup, a key figure of merit is the number of queries required until the right vector is identified, a number that is termed the *guesswork*. Typically, one wishes to devise a guessing strategy which minimizes a certain guesswork moment. In this work, we study a universal, decentralized scenario where the guesser does not know the distribution of $X$, and is not allowed to use a strategy which prepares a list of words to be guessed in advance, or even remember which words were already used. Such a scenario is useful, for example, if bots within a Botnet carry out a brute–force attack in order to guess a password or decrypt a message, yet cannot coordinate the guesses between them or even know how many bots actually participate in the attack. We devise universal decentralized guessing strategies, first, for memoryless sources, and then generalize them for finite–state sources. In each case, we derive the guessing exponent, and then prove its asymptotic optimality by deriving a compatible converse bound. The strategies are based on randomized guessing using a universal distribution. We also extend the results to guessing with side information. Finally, for all above scenarios, we design efficient algorithms in order to sample from the universal distributions, resulting in strategies which do not depend on the source distribution, are efficient to implement, and can be used asynchronously by multiple agents.
|
## Universal Randomized Guessing with Application to Asynchronous Decentralized Brute–Force Attacks
#### Neri Merhav∗ and Asaf Cohen†
#### November 13, 2018
Abstract
Consider the problem of guessing the realization of a random vector X by repeatedly
submitting queries (guesses) of the form “Is X equal to x?” until an affirmative answer
is obtained. In this setup, a key figure of merit is the number of queries required until
the right vector is identified, a number that is termed the guesswork. Typically, one
wishes to devise a guessing strategy which minimizes a certain guesswork moment.
In this work, we study a universal, decentralized scenario where the guesser does not
know the distribution of X, and is not allowed to use a strategy which prepares a list
of words to be guessed in advance, or even remember which words were already used.
Such a scenario is useful, for example, if bots within a Botnet carry out a brute–force
attack in order to guess a password or decrypt a message, yet cannot coordinate the
guesses between them or even know how many bots actually participate in the attack.
We devise universal decentralized guessing strategies, first, for memoryless sources,
and then generalize them for finite–state sources. In each case, we derive the guessing
exponent, and then prove its asymptotic optimality by deriving a compatible converse
bound. The strategies are based on randomized guessing using a universal distribution.
We also extend the results to guessing with side information. Finally, for all above scenarios, we design efficient algorithms in order to sample from the universal distributions,
resulting in strategies which do not depend on the source distribution, are efficient to
implement, and can be used asynchronously by multiple agents.
Index Terms: guesswork; universal guessing strategy; randomized guessing; decentralized guessing; guessing with side information; Lempel–Ziv algorithm; efficient sampling
from a distribution.
∗N. Merhav is with the Andrew and Erna Viterbi Faculty of Electrical Engineering, Technion – Israel
Institute of Technology, Haifa 32000, Israel. E-mail: merhav@technion.ac.il
†A. Cohen is with the Department of Communication System Engineering, Ben Gurion University of the
Negev, Beer Sheva 84105, Israel. E-mail: coasaf@bgu.ac.il
### 1 Introduction
Consider the problem of guessing the realization of a random n–vector X using a sequence
of yes/no queries of the form: “Is X = x1?”, “Is X = x2?” and so on, until an affirmative
answer is obtained. Given a distribution on X, a basic figure of merit in such a guessing
game is the guesswork, defined as the number of trials required until guessing the right
vector.
Devising guessing strategies to minimize the guesswork, and obtaining a handle on key
analytic aspects of it, such as its moments or its large deviations rate function, has numerous
applications in information theory and beyond. For example, sequential decoding [1, 2] or
guessing a codeword which satisfies certain constraints [3]. In fact, since the ordering of all sequences of length n in descending order of probabilities is, as expected, the optimal strategy under many optimality criteria¹, the guessing problem is intimately related to fixed-to-variable source coding without the prefix constraint, or one-shot coding, where it is clear that one wishes to order the possible sequences in descending probability of appearance before assigning them codewords [4, 5, 6].

¹Specifically, if $G(X)$ is the guesswork, this order minimizes $E\{F[G(X)]\}$ for any monotone nondecreasing function $F$.
Contemporary applications of guesswork focus on information security, that is, guessing passwords or decrypting messages protected by random keys. E.g., one may use guessing strategies and their guesswork exponents while proactively trying to crack passwords, as a means of assessing password security within an organization [7, 8]. Indeed, it is increasingly important to be able to assess password strength [9], especially under complex (e.g., non-i.i.d.) password composition requirements. While the literature includes several studies assessing strength by measuring how hard it is for common cracking methods to break a certain set of passwords [8, 9], or by estimating the entropy of passwords created under certain rules [10], the guesswork remains a key analytic tool in assessing password strength for a given sequence length and distribution. As stated in [11], “we are yet to see compelling evidence that motivated users can choose passwords which resist guessing by a capable attacker”. Thus, analyzing the guesswork is useful in assessing how strong a key–generation system is, how hard it will be for a malicious party to break it, or, from the malicious party's point of view, how much better one guessing strategy is compared to another.
Arguably, human–created passwords may be of a finite, relatively small length, rather
than long sequences which justify asymptotic analysis of the guesswork. Yet, as mentioned
above, the guesswork, as a key figure of merit, may be used to aid in assessing computer-generated keys [12] or passwords as well. For example, a random key might be of tens or
even hundreds of bits long, and passwords saved on servers are often salted before being
hashed, resulting in increased length [13]. Moreover, experiments done on finite block
lengths agree with the insights gained from the asymptotic analysis [14]. As a result,
large deviations and asymptotic analysis remain key analytic tools in assessing password strength [15, 16, 14, 17, 18]. Such asymptotic analysis provides us, via tractable expressions, with the means to understand the guesswork behaviour, the effect various problem parameters have on its value, and the fundamental information measures which govern it. E.g., while the entropy is indeed a relevant measure of “randomness” in passwords [10], via asymptotic analysis of the guesswork [2] we now know that the Rényi entropy is the right measure when guessing, or even under a distributed brute–force attack [18, 19]. Non–asymptotic results, such as the converse result in [20], then give us a finer understanding of the dependence on the sequence length.
Keeping the above applications in mind, it is clear that the vanilla model of a single, all-capable attacker, guessing a password X drawn from an i.i.d. source of a known distribution, is rarely the case of interest. In practical scenarios, several intricacies complicate the problem. While optimal passwords should have maximum entropy, namely, be memoryless and uniformly distributed over the alphabet, human-created passwords are hardly ever such. They tend to have memory and a non–uniform distribution [21], due to the need to remember them, as well as many other practical considerations (e.g., keyboard structure or the native language of the user) [22, 23]. Thus, the ability to efficiently guess non-memoryless passwords, and to analyze the performance of such guessing strategies, is crucial.
Moreover, the underlying true distribution is also rarely known. In [21], the authors investigated the distribution of passwords from four known databases, and tried to fit a Zipf distribution². While there was no clear match, it was clear that a small parameter s is required, to account for a heavy tail. Naturally, [21] also stated that “If the right distribution of passwords can be identified, the cost of guessing a password can be reduced”.

²In this model, the probability of the password with rank $i$ is $P_i = K/i^s$, where $s$ is a parameter and $K$ is a normalizing constant.
Last but not least, from the attacker’s side, there might be additional information which
facilitates the guessing procedure on the one hand, yet there might be restrictions that
prevent him/her from carrying out the optimal guessing strategy. That is, on the one
hand, the attacker might have side information, e.g., passwords for other services which are
correlated with the one currently attacked, and thereby significantly decrease the guesswork
[2, 24, 15, 18]. On the other hand, most modern systems will limit the ability of an attacker to submit too many queries from a single IP address; hence, to still submit a large number of queries, these must be submitted from different machines. Such machines may not be synchronized³, namely, one may not know which queries were already submitted by the others. Moreover, storing a large (usually, exponentially large) list of queries to be guessed might be too heavy a burden, especially for small bots in the botnet (e.g., IoT devices). The attacker is thus restricted to distributed brute-force attacks, where numerous devices send their queries simultaneously, yet without the ability to synchronize, without knowing which queries were already sent, or which bots are currently active and which ones have failed [19].

³When bots or processes working in parallel are able to be completely synchronized, they may use a pre-compiled list of usernames and passwords – “hard-coding” the guessing strategy [25, 26].
#### Main Contributions
In this paper, we devise universal, randomized (hence, decentralized) guessing strategies
for a wide family of information sources, and assess their performance by analyzing their
guessing moments, as well as exponentially matching converse bounds, thereby proving their
asymptotic optimality.
Specifically, we begin from the class of memoryless sources, and propose a guessing
strategy. The strategy is universal both in the underlying source distribution and in the
guesswork moment to be optimized. It is based on a randomized approach to guessing,
as opposed to an ordered list of guesses, and thus it can be used by asynchronous agents
that submit their guesses concurrently. We prove that it achieves the optimal guesswork
exponent, we provide an efficient implementation for the random selection of guesses, and
finally, extend the results to guessing with side information.
Next, we broaden the scope to a wider family of non–unifilar finite–state sources, namely,
hidden Markov sources. We begin with a general converse theorem and then provide a simple
matching direct theorem, based on deterministic guessing. We then provide an alternative direct theorem that employs a randomized strategy building on the Lempel–Ziv (LZ) algorithm [27]. Once again, both results are tight in terms of the guesswork exponent, and
are universal in the source distribution and the moment.
A critical factor in guessing strategies is their implementation. Deterministic approaches
require hard-coding long lists (exponentially large in the block length), and hence are mem
ory consuming, while in a randomized approach, one needs to sample from a specific dis
tribution, which might require computing an exponential sum. In this paper, we give
two efficient algorithms to sample from the universal distribution we propose. The first
algorithm is based on (a repeated) random walk on a growing tree, thus randomly and
independently generating new LZ phrases, to be used as guesses. The second algorithm is
based on feeding a (slightly modified) LZ decoder with purely random bits. Finally, the
results and algorithms are extended to the case with side information.
The rest of this paper is organized as follows. In Section 2, we review the current
literature with references both to information–theoretic results and to key findings regarding
brute force attacks on passwords. In Section 3, we formally define the problem and our
objectives. Section 4 describes the results for memoryless sources, while Section 5 describes
the results for sources with memory. Section 6 concludes the paper.
### 2 Related Work
The first information–theoretic study on guesswork was carried out by Massey [28]. Arikan [2] showed, among other things, that the exponential rate of the number of guesses required for memoryless sources is given by the Rényi entropy of order 1/2. Guesswork under a distortion constraint was studied by Arikan and Merhav [29], who also derived a guessing strategy for discrete memoryless sources (DMSs), which is universally asymptotically optimal, both in the unknown memoryless source and in the moment order of the guesswork being analyzed. Guesswork for Markov processes was studied by Malone and Sullivan [30], and extended to a large class of stationary measures by Pfister and Sullivan [3]. In [31], Hanawal and Sundaresan proposed a large deviations approach. They derived the guesswork exponent for sources satisfying a large deviations principle (LDP), and thereby generalized the results in [2] and [30]. In [16], again via large deviations, Christiansen et al. proposed an approximation to the distribution of the guesswork. In [32], Sundaresan considered guessing under source uncertainty. The redundancy as a function of the radius of the family of possible distributions was defined and quantified in a few cases. For the special class of discrete memoryless sources, as already suggested in [29], this redundancy tends to zero as
the length of the sequence grows without bound.
In [33], Christiansen et al. considered a multi-user case, where an adversary (inquisitor) has to guess $U$ out of $V$ strings, chosen at random from some string-source $\mu^n$. Later, Beirami et al. [34] further defined the inscrutability $S^n(U, V, \mu^n)$ of a string-source, the inscrutability rate as the exponential rate of $S^n(U, V, \mu^n)$, and gave upper and lower bounds on this rate by identifying the appropriate string-source distributions. They also showed that ordering strings by their type-size in ascending order is a universal guessing strategy. Note, however, that both [33, 34] considered a single attacker, with the ability to create a list of strings and guess one string after the other.
Following Weinberger et al. [35], ordering strings by the size of their type-class before
assigning them codewords in a fixed-to-variable source coding scheme was also found useful
by Kosut and Sankar in [6] to minimize the third order term of the minimal number of
bits required in lossless source coding (the first being the entropy, while the second is the
dispersion). A geometric approach to guesswork was proposed by Beirami et al. in [36],
showing that indeed the dominating type in guesswork (the position of a given string in the
list) is the largest among all types whose elements are more likely than the given string.
Here we show that a similar ordering is also beneficial for universal, decentralized guessing,
though the sequences are not ordered in practice, and merely assigned probabilities to be
guessed based on their type or LZ complexity.
Guesswork over a binary erasure channel was studied by Christiansen et al. in [15].
While the underlying sequence to be guessed was assumed i.i.d., the results therein apply
to channels with memory as well (yet satisfying an LDP). Interestingly, it was shown that
the guesswork exponent is higher than the noiseless exponent times the average fraction
of erased symbols, and one pays a non-negligible toll for the randomness in the erasures
pattern.
In [18], Salamatian et al. considered multi-agent guesswork with side information. The effect of synchronizing the side information among the agents was discussed, and its impact on the exponent was quantified. Multi-agent guesswork was then also studied in [19], this
time devising a randomized guessing strategy, which can be used by asynchronous agents.
The strategy in [19], however, is hard to implement in practice, as it depends on both the
source distribution and the moment of the guesswork considered, and requires computing
an exponential sum.⁴
Note that besides the standard application of guessing a password for a certain service,
while knowing a password of the same user to another service, guessing with side information
may also be applicable when breaking lists of hashed honeywords [38, 39]. In this scenario,
an attacker is faced with a list of hashed sweatwords, where one is the hash of the true
password while the rest are hashes of decoy honeywords, created with strong correlation to
the real password. If one is broken, using it as side information can significantly reduce
the time required to break the others. Furthermore, guessing with side information is also related to the problem of guessing using hints [40]. In this scenario, a legitimate decoder
should be able to guess a password (alternatively, a task to be carried out) using several
hints, in the sense of having a low expected conditional guesswork, yet an eavesdropper
knowing only a subset of the hints, should need a large number of guesses. In that case,
the expected conditional guesswork generalizes secret sharing schemes by quantifying the
amount of work Bob and Eve have to do.
From a more practical viewpoint, trying to create passwords based on real data, Weir et al. [41] suggested a context-free grammar to create passwords in descending order of probabilities, where the grammar rules, as well as the probabilities of the generalized letters (sequences of English letters, sequences of digits, or sequences of special characters), were learned based on a given training set. In [8], Dell'Amico et al. experimentally evaluated the probability of guessing passwords using dictionary-based, grammar-free and Markov chain strategies, using existing data sets of passwords for validation. Not only was it clear that complex guessing strategies, which take the memory into account, perform better, but moreover, the authors stress the need to fine-tune memory parameters (e.g., the length of sub-strings tested), strengthening the necessity for a universal, parameter-free guessing strategy. In [11], Bonneau also implicitly mentions the problems of coping with passwords from an unknown distribution, or an unknown mixture of several known distributions.
⁴An algorithm which produces a Markov chain whose distribution converges to this sum was suggested in the context of randomized guessing tools in [37], yet only asymptotically, and only for fixed, known distributions.
### 3 Notation Conventions, Problem Statement and Objectives
#### 3.1 Notation Conventions
Throughout the paper, random variables will be denoted by capital letters, specific values they may take will be denoted by the corresponding lower case letters, and their alphabets will be denoted by calligraphic letters. Random vectors and their realizations will be denoted, respectively, by capital letters and the corresponding lower case letters, both in the bold face font. Their alphabets will be superscripted by their dimensions. For example, the random vector $X = (X_1, \ldots, X_n)$ ($n$ – a positive integer) may take a specific vector value $x = (x_1, \ldots, x_n)$ in $\mathcal{X}^n$, the $n$–th order Cartesian power of $\mathcal{X}$, which is the alphabet of each component of this vector. Sources and channels will be denoted by the letter $P$ or $Q$ with or without some subscripts. When there is no room for ambiguity, these subscripts will be omitted. The expectation operator will be denoted by $E\{\cdot\}$. The entropy of a generic distribution $Q$ on $\mathcal{X}$ will be denoted by $H_Q(X)$, where $X$ designates a random variable drawn by $Q$. For two positive sequences $a_n$ and $b_n$, the notation $a_n \doteq b_n$ will stand for equality in the exponential scale, that is, $\lim_{n\to\infty} \frac{1}{n}\log\frac{a_n}{b_n} = 0$. Similarly, $a_n \mathrel{\dot\leq} b_n$ means that $\limsup_{n\to\infty} \frac{1}{n}\log\frac{a_n}{b_n} \leq 0$, and so on. When both sequences depend on a vector $x \in \mathcal{X}^n$, namely, $a_n = a_n(x)$ and $b_n = b_n(x)$, the notation $a_n(x) \doteq b_n(x)$ means that the asymptotic convergence is uniform, namely, $\lim_{n\to\infty} \max_{x\in\mathcal{X}^n} \big|\frac{1}{n}\log\frac{a_n(x)}{b_n(x)}\big| = 0$. Likewise, $a_n(x) \mathrel{\dot\leq} b_n(x)$ means $\limsup_{n\to\infty} \max_{x\in\mathcal{X}^n} \frac{1}{n}\log\frac{a_n(x)}{b_n(x)} \leq 0$, and so on.

The empirical distribution of a sequence $x \in \mathcal{X}^n$, which will be denoted by $\hat{P}_x$, is the vector of relative frequencies $\hat{P}_x(x)$ of each symbol $x \in \mathcal{X}$ in $x$. The type class of $x \in \mathcal{X}^n$, denoted $\mathcal{T}(x)$, is the set of all vectors $x'$ with $\hat{P}_{x'} = \hat{P}_x$. Information measures associated with empirical distributions will be denoted with ‘hats’ and will be subscripted by the sequences from which they are induced. For example, the entropy associated with $\hat{P}_x$, which is the empirical entropy of $x$, will be denoted by $\hat{H}_x(X)$. Similar conventions will apply to the joint empirical distribution, the joint type class, the conditional empirical distributions, and the conditional type classes associated with pairs of sequences of length $n$. Accordingly, $\hat{P}_{xy}$ will be the joint empirical distribution of $(x, y) = \{(x_i, y_i)\}_{i=1}^{n}$, $\mathcal{T}(x, y)$ or $\mathcal{T}(\hat{P}_{xy})$ will denote the joint type class of $(x, y)$, $\mathcal{T}(x|y)$ will stand for the conditional type class of $x$ given $y$, $\hat{H}_{xy}(X|Y)$ will be the empirical conditional entropy, and so on.

In Section 5, the broader notion of a type class, which applies beyond the memoryless case, will be adopted: the type class of $x$ w.r.t. a given class $\mathcal{P}$ of sources will be defined as

$$\mathcal{T}(x) = \bigcap_{P\in\mathcal{P}} \{x' : P(x') = P(x)\}. \qquad (1)$$

Obviously, the various type classes, $\{\mathcal{T}(x)\}_{x\in\mathcal{X}^n}$, are equivalence classes, and therefore form a partition of $\mathcal{X}^n$. Of course, when $\mathcal{P}$ is the class of memoryless sources over $\mathcal{X}$, this definition of $\mathcal{T}(x)$ is equivalent to the earlier one, provided in the previous paragraph.
#### 3.2 Problem Statement and Objectives
In this paper, we focus on the guessing problem that is defined as follows. Alice selects a secret random $n$–vector $X$, drawn from a finite alphabet source $P$. Bob, who is unaware of the realization of $X$, submits a sequence of guesses in the form of yes/no queries: “Is $X = x_1$?”, “Is $X = x_2$?”, and so on, until receiving an affirmative answer. A guessing list, $\mathcal{G}_n$, is an ordered list of all members of $\mathcal{X}^n$, that is, $\mathcal{G}_n = \{x_1, x_2, \ldots, x_M\}$, $M = |\mathcal{X}|^n$, and it is associated with a guessing function, $G(x)$, which is the function that maps $\mathcal{X}^n$ onto $\{1, 2, \ldots, M\}$ by assigning to each $x \in \mathcal{X}^n$ the integer $k$ for which $x_k = x$, namely, the $k$–th element of $\mathcal{G}_n$. In other words, $G(x)$ is the number of guesses required until success, using $\mathcal{G}_n$, when $X = x$.

The guessing problem is about devising a guessing list $\mathcal{G}_n$ that minimizes a certain moment of $G(X)$, namely, $E\{G^\rho(X)\}$, where $\rho > 0$ is a given positive real (not necessarily a natural number). Clearly, when the source $P$ is known and $\rho$ is arbitrary, the optimal guessing list orders the members of $\mathcal{X}^n$ in the order of non–increasing probabilities. When $P$ is unknown, but known to belong to a given parametric class $\mathcal{P}$, like the class of memoryless sources, or the class of finite–state sources with a given number of states, we are interested in a universal guessing list, which is asymptotically optimal in the sense of minimizing the guessing exponent, namely, achieving

$$E(\rho) = \limsup_{n\to\infty} \min_{\mathcal{G}_n} \frac{\log E\{G^\rho(X)\}}{n}, \qquad (2)$$

uniformly for all sources in $\mathcal{P}$ and all positive real values of $\rho$.

Motivated by applications of distributed, asynchronous guessing by several agents (see the Introduction), we will also be interested in randomized guessing schemes, which have the advantages of: (i) relaxing the need to consume large volumes of memory (compared to deterministic guessing, which needs the storage of the guessing list $\mathcal{G}_n$), and (ii) dropping the need for synchronization among the various guessing agents (see [19]). In randomized guessing, the guesser sequentially submits a sequence of random guesses, each one distributed independently according to a certain probability distribution $\tilde{P}(x)$. We would like the distribution $\tilde{P}$ to be universally asymptotically optimal in the sense of achieving (on the average) the optimal guessing exponent, while being independent of the unknown source $P$ and independent of $\rho$. Another desirable feature of the random guessing distribution $\tilde{P}$ is that it would be easy to implement in practice. This is especially important when $n$ is large, as it is not trivial to implement a general distribution over $\mathcal{X}^n$ in the absence of any structure to this distribution.

We begin our discussion from the case where the class of sources, $\mathcal{P}$, is the class of memoryless sources over a finite alphabet $\mathcal{X}$ of size $\alpha$. In this case, some of the results we will mention are already known, but it will be helpful, as a preparatory step, before we address the more interesting and challenging case, where $\mathcal{P}$ is the class of all non–unifilar, finite–state sources, a.k.a. hidden Markov sources (over the same finite alphabet $\mathcal{X}$), where even the number of states is unknown to the guesser, let alone the parameters of the source for a given number of states. In both cases, we also extend the study to the case where the guesser is equipped with side information (SI) $Y$, correlated to the vector $X$ to be guessed.
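To make the objective concrete, here is a minimal Python sketch (ours, not from the paper) that evaluates the guesswork moment $E\{G^\rho(X)\}$ of the optimal deterministic list by brute-force enumeration for a toy memoryless source; all function and variable names are illustrative.

```python
# A minimal sketch, assuming a small i.i.d. source: the optimal guessing list
# orders all of X^n by non-increasing probability, and the resulting moment
# E{G^rho(X)} can then be evaluated exactly by enumeration.
import itertools

def optimal_guessing_moment(p, n, rho):
    """Exact E{G^rho(X)} for an i.i.d. source with letter probabilities p."""
    alphabet = range(len(p))
    probs = []
    for x in itertools.product(alphabet, repeat=n):
        px = 1.0
        for symbol in x:
            px *= p[symbol]
        probs.append(px)
    probs.sort(reverse=True)  # optimal deterministic list: most likely first
    return sum((k + 1) ** rho * px for k, px in enumerate(probs))

# Example: a binary source with P(0)=0.8, blocks of length 8, rho=1.
print(optimal_guessing_moment([0.8, 0.2], n=8, rho=1.0))
```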
### 4 Guessing for Memoryless Sources
#### 4.1 Background
Following [28], Arikan [2] has established some important bounds associated with guessing moments in relation to the Rényi entropy, with and without side information, where the main application he had in mind was sequential decoding. Some of Arikan's results set the stage for guessing $n$–vectors emitted from memoryless sources. Some of these results were later extended to the case of lossy guessing⁵ [29], with a certain emphasis on universality issues. In particular, narrowing down the main result of [29] to the case of lossless guessing considered here, it was shown that the best achievable guessing exponent, $E(\rho)$, is given by the following single–letter expression for a given memoryless source $P$:

$$E(\rho) = \max_Q\, [\rho H_Q(X) - D(Q\|P)] = \rho H_{1/(1+\rho)}(X), \qquad (3)$$

where $Q$ is an auxiliary distribution over $\mathcal{X}$ to be optimized, and $H_\alpha(X)$ designates the Rényi entropy of order $\alpha$,

$$H_\alpha(X) = \frac{1}{1-\alpha}\,\ln\Bigg(\sum_{x\in\mathcal{X}} P^\alpha(x)\Bigg), \qquad (4)$$

which is asymptotically achieved using a universal deterministic guessing list, $\mathcal{G}_n$, that orders the members of $\mathcal{X}^n$ according to a non–decreasing order of their empirical entropies, namely,

$$\hat{H}_{x_1}(X) \leq \hat{H}_{x_2}(X) \leq \ldots \leq \hat{H}_{x_M}(X). \qquad (5)$$

In the presence of correlated side information $Y$, generated from $X$ by a discrete memoryless channel (DMC), the above findings continue to hold, with the modifications that: (i) $H_Q(X)$ is replaced by $H_Q(X|Y)$, (ii) $D(Q\|P)$ is understood to be the divergence between the two joint distributions of the pair $(X, Y)$ (which in turn implies that $H_{1/(1+\rho)}(X)$ is replaced by the corresponding conditional Rényi entropy of $X$ given $Y$), and (iii) $\hat{H}_{x_k}(X)$ is replaced by $\hat{H}_{x_k y}(X|Y)$, $k = 1, 2, \ldots, M$.

⁵Here, by “lossy guessing” we mean that a guess is considered successful if its distance (in the sense of a given distortion function) from the underlying source vector does not exceed a given distortion level.
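As an illustration of the universal ordering in (5), the following sketch (our own, with illustrative names) builds the empirical-entropy-sorted guessing list for a tiny alphabet and block length:

```python
# A hedged illustration of the universal deterministic list of (5): order all
# x in X^n by non-decreasing empirical entropy.
import itertools
from collections import Counter
from math import log2

def empirical_entropy(x):
    """Empirical entropy (bits/symbol) of a tuple x."""
    n = len(x)
    return -sum(c / n * log2(c / n) for c in Counter(x).values())

def universal_list(alphabet_size, n):
    """All n-vectors, sorted by non-decreasing empirical entropy."""
    vectors = itertools.product(range(alphabet_size), repeat=n)
    return sorted(vectors, key=empirical_entropy)

# The first guesses are the constant sequences (empirical entropy 0).
print(universal_list(2, 4)[:4])
```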
#### 4.2 Randomized Guessing and its Efficient Implementation
For universal randomized guessing, we consider the following guessing distribution:

$$\tilde{P}(x) = \frac{2^{-n\hat{H}_x(X)}}{\sum_{x'} 2^{-n\hat{H}_{x'}(X)}}. \qquad (6)$$

We then have the following result:

**Theorem 1** Randomized guessing according to eq. (6) achieves the optimal guessing exponent (3).

Proof. We begin from the following lemma, whose proof is deferred to the appendix.

**Lemma 1** For given $a \geq 0$ and $\rho > 0$,

$$\sum_{k=1}^{\infty} k^\rho (1 - e^{-na})^{k-1} \;\mathrel{\dot\leq}\; e^{(1+\rho)na}. \qquad (7)$$

Denoting by $\mathcal{P}_n$ the set of probability distributions over $\mathcal{X}$ with rational letter probabilities of denominator $n$ (empirical distributions), we observe that since

$$1 \leq \sum_x 2^{-n\hat{H}_x(X)} = \sum_x \max_{Q\in\mathcal{P}} Q(x) = \sum_x \max_{Q\in\mathcal{P}_n} Q(x) \leq \sum_x \sum_{Q\in\mathcal{P}_n} Q(x) = \sum_{Q\in\mathcal{P}_n} \sum_x Q(x) = |\mathcal{P}_n| \leq (n+1)^{|\mathcal{X}|-1}, \qquad (8)$$

it follows that

$$\tilde{P}(x) \doteq 2^{-n\hat{H}_x(X)}. \qquad (9)$$

Given that $X = x$, the $\rho$–th moment of the number of guesses under $\tilde{P}$ is given by

$$\sum_{k=1}^{\infty} k^\rho \big[1 - \tilde{P}(x)\big]^{k-1} \tilde{P}(x) = \tilde{P}(x) \sum_{k=1}^{\infty} k^\rho \big[1 - \tilde{P}(x)\big]^{k-1} \doteq 2^{-n\hat{H}_x(X)} \sum_{k=1}^{\infty} k^\rho \big[1 - 2^{-n\hat{H}_x(X)}\big]^{k-1} \mathrel{\dot\leq} 2^{-n\hat{H}_x(X)} \cdot 2^{n(1+\rho)\hat{H}_x(X)} = 2^{n\rho\hat{H}_x(X)}, \qquad (10)$$

where in the inequality we have used Lemma 1 with the assignment $a = \hat{H}_x(X)\ln 2$. Taking the expectation of $2^{n\rho\hat{H}_x(X)}$ w.r.t. $P(x)$, using the method of types [42], one easily obtains (see also [29]) the exponential order of $2^{nE(\rho)}$, with $E(\rho)$ as defined in (3). This completes the proof of Theorem 1. ✷

Remark: It is easy to see that the random guessing scheme has an additional important feature: not only does the expectation of $G^\rho(x)$ (w.r.t. the randomness of the guesses) have the optimal exponential order of $2^{n\rho\hat{H}_x(X)}$ for each and every $x$, but moreover, the probability that $G(x)$ would exceed $2^{n[\hat{H}_x(X)+\epsilon]}$ decays double–exponentially rapidly for every $\epsilon > 0$. This follows from the following simple chain of inequalities:

$$\Pr\big\{G(x) \geq 2^{n[\hat{H}_x(X)+\epsilon]}\big\} \doteq \big(1 - 2^{-n\hat{H}_x(X)}\big)^{2^{n[\hat{H}_x(X)+\epsilon]}} = \exp\Big\{2^{n[\hat{H}_x(X)+\epsilon]} \ln\big(1 - 2^{-n\hat{H}_x(X)}\big)\Big\} \leq \exp\Big\{-2^{n[\hat{H}_x(X)+\epsilon]} \cdot 2^{-n\hat{H}_x(X)}\Big\} = \exp\{-2^{n\epsilon}\}. \qquad (11)$$

A similar comment will apply also to the random guessing scheme of Section 5.
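For very small $n$, one can sample from (6) directly by enumerating $\mathcal{X}^n$ and weighting each vector by $2^{-n\hat{H}_x(X)} = \prod_a (n_a/n)^{n_a}$; the sketch below (illustrative, exponential-time; names are ours) does exactly that, which also motivates the efficient mixture implementation discussed next.

```python
# A brute-force sketch of sampling from the universal distribution (6):
# enumerate X^n and weight each x by 2^{-n Hhat_x(X)} = prod (n_a/n)^{n_a}.
import itertools, random
from collections import Counter

def type_weight(x):
    """Unnormalized weight 2^{-n Hhat_x(X)} of a tuple x."""
    n = len(x)
    w = 1.0
    for count in Counter(x).values():
        w *= (count / n) ** count
    return w

def draw_guess(alphabet_size, n, rng=random):
    xs = list(itertools.product(range(alphabet_size), repeat=n))
    weights = [type_weight(x) for x in xs]
    return rng.choices(xs, weights=weights, k=1)[0]

print(draw_guess(2, 6))
```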
The random guessing distribution (6) is asymptotically equivalent (in the exponential scale) to a class of mixtures of all memoryless sources over $\mathcal{X}$, having the form

$$M(x) = \int_{\mathcal{S}} \mu(Q)\, Q(x)\, \mathrm{d}Q, \qquad (12)$$

where $\mu(\cdot)$ is a density defined on the simplex $\mathcal{S}$ of all distributions on $\mathcal{X}$, and where it is assumed that $\mu(\cdot)$ is bounded away from zero and from infinity, and that it is independent of $n$. As mentioned in [43], one of the popular choices of $\mu(\cdot)$ is the Dirichlet distribution, parametrized by $\lambda > 0$,

$$\mu(Q) = \frac{\Gamma(\lambda|\mathcal{X}|)}{\Gamma^{|\mathcal{X}|}(\lambda)} \cdot \prod_{x\in\mathcal{X}} Q(x)^{\lambda-1}, \qquad (13)$$

where

$$\Gamma(s) = \int_0^\infty x^{s-1} e^{-x}\, \mathrm{d}x, \qquad (14)$$

and we remind that for a positive integer $n$,

$$\Gamma(n) = (n-1)! \qquad (15)$$

$$\Gamma\left(\tfrac{1}{2}\right) = \sqrt{\pi} \qquad (16)$$

$$\Gamma\left(n + \tfrac{1}{2}\right) = \sqrt{\pi} \cdot \tfrac{1}{2} \cdot \tfrac{3}{2} \cdots \left(n - \tfrac{1}{2}\right), \quad n \geq 1. \qquad (17)$$

For example, with the choice $\lambda = 1/2$, the mixture becomes

$$M(x) = \Gamma\left(\frac{|\mathcal{X}|}{2}\right) \cdot \frac{\prod_{x\in\mathcal{X}} \Gamma\big(n\hat{P}_x(x) + 1/2\big)}{\pi^{|\mathcal{X}|/2}\, \Gamma(n + |\mathcal{X}|/2)}. \qquad (18)$$

This mixture distribution can be implemented sequentially, as

$$M(x) = \prod_{t=0}^{n-1} M(x_{t+1}|x^t), \qquad (19)$$

where

$$M(x_{t+1}|x^t) = \frac{M(x^{t+1})}{M(x^t)} = \frac{t\hat{P}_{x^t}(x_{t+1}) + 1/2}{t + |\mathcal{X}|/2}, \qquad (20)$$

and where $\hat{P}_{x^t}(x)$ is the relative frequency of $x \in \mathcal{X}$ in $x^t = (x_1, \ldots, x_t)$. So the sequential implementation is rather simple: draw the first symbol, $X_1$, according to the uniform distribution. Then, for $t = 1, 2, \ldots, n-1$, draw the next symbol, $X_{t+1}$, according to the last equation, taking into account the relative frequencies of the various letters drawn so far.
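The sequential rule (20) is straightforward to code; the following sketch (ours) draws one random guess symbol by symbol, with the uniform draw at $t = 0$ arising automatically from the add-1/2 counts:

```python
# A direct transcription of the sequential rule (20) (a Dirichlet(1/2),
# Krichevsky-Trofimov-style mixture); a sketch, variable names are ours.
import random

def sample_mixture_guess(alphabet_size, n, rng=random):
    """Draw one random guess x_1..x_n via M(x_{t+1}|x^t) in (20)."""
    counts = [0] * alphabet_size
    x = []
    for t in range(n):
        # (count of letter a so far + 1/2) / (t + alphabet_size/2)
        weights = [(counts[a] + 0.5) / (t + alphabet_size / 2.0)
                   for a in range(alphabet_size)]
        symbol = rng.choices(range(alphabet_size), weights=weights, k=1)[0]
        counts[symbol] += 1
        x.append(symbol)
    return tuple(x)

print(sample_mixture_guess(3, 10))
```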
#### 4.3 Side Information
All the above findings extend straightforwardly to the case of a guesser that is equipped with SI $Y$, correlated to the random vector $X$ to be guessed, where it is assumed that $(X, Y)$ is a sequence of $n$ independent copies of a pair of random variables $(X, Y)$, jointly distributed according to $P_{XY}$.

The only modification required is that the universal randomized guessing distribution will now be proportional (and exponentially equivalent) to $2^{-n\hat{H}_{xy}(X|Y)}$ instead of $2^{-n\hat{H}_x(X)}$, and in the sequential implementation, the mixture, and hence also the relative frequency counts, will be applied to each SI letter $y \in \mathcal{Y}$ separately. Consequently, the conditional distribution $M(x_{t+1}|x^t)$ above would be replaced by

$$M(x_{t+1}|x^t, y^t) = \frac{t\hat{P}_{x^t y^t}(x_{t+1}, y_{t+1}) + 1/2}{t\hat{P}_{y^t}(y_{t+1}) + |\mathcal{X}|/2}. \qquad (21)$$
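A hedged sketch of the side-information variant (21), keeping joint counts per SI letter (names are ours; the SI sequence y is assumed available to the guesser):

```python
# Side-information variant of the sequential sampler, per (21).
import random
from collections import defaultdict

def sample_guess_with_si(alphabet_size, y, rng=random):
    """Draw x_1..x_n given SI sequence y, via M(x_{t+1}|x^t, y^t) in (21)."""
    joint = defaultdict(int)    # counts of (x_i, y_i) pairs seen so far
    y_count = defaultdict(int)  # counts of SI letters seen so far
    x = []
    for yt in y:
        weights = [(joint[(a, yt)] + 0.5) /
                   (y_count[yt] + alphabet_size / 2.0)
                   for a in range(alphabet_size)]
        a = rng.choices(range(alphabet_size), weights=weights, k=1)[0]
        joint[(a, yt)] += 1
        y_count[yt] += 1
        x.append(a)
    return tuple(x)

print(sample_guess_with_si(2, y=(0, 1, 1, 0, 1)))
```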
### 5 Guessing for Finite–State Sources
We now extend the scope to a much more general class of sources – the class of non–unifilar finite–state sources, namely, hidden Markov sources [44]. Specifically, we assume that $X$ is drawn by a distribution $P$ given by

$$P(x) = \sum_{z} \prod_{i=1}^{n} P(x_i, z_{i+1}|z_i), \qquad (22)$$

where $\{x_i\}$ is the source sequence as before, whose elements take on values in a finite alphabet $\mathcal{X}$ of size $\alpha$, and where $\{z_i\}$ is the underlying state sequence, whose elements take on values in a finite set of states, $\mathcal{Z}$, of size $s$, and where the initial state, $z_1$, is assumed to be a fixed member of $\mathcal{Z}$. The parameter set $\{P(x, z'|z),\ x\in\mathcal{X},\ z, z'\in\mathcal{Z}\}$ is unknown to the guesser. In fact, even the number of states, $s$, is not known, and we seek a universal guessing strategy.
#### 5.1 Converse Theorem
Let us parse $x$ into $c = c(x)$ distinct phrases, by using, for example, the incremental parsing procedure⁶ of the Lempel–Ziv (LZ) algorithm [27] (see also [45, Subsection 13.4.2]). The following is a converse theorem concerning the best achievable guessing performance.

⁶The incremental parsing procedure is a sequential procedure of parsing a sequence, such that each new parsed phrase is the shortest string that has not been obtained before as a phrase.

**Theorem 2** Given a finite–state source (22), any guessing function satisfies the following inequality:

$$E\{G^\rho(X)\} \geq 2^{-n\Delta_n}\, E\left[\exp_2\{\rho\, c(X)\log c(X)\}\right], \qquad (23)$$

where $\Delta_n$ is a function of $s$, $\alpha$ and $n$ that tends to zero as $n\to\infty$ for fixed $s$ and $\alpha$.

Proof. Without essential loss of generality, let $\ell$ divide $n$ and consider the segmentation of $x = (x_1, \ldots, x_n)$ into $n/\ell$ non–overlapping sub–blocks, $x_i = (x_{i\ell+1}, x_{i\ell+2}, \ldots, x_{(i+1)\ell})$, $i = 0, 1, \ldots, n/\ell - 1$. Let $z^\ell = (z_1, z_{\ell+1}, z_{2\ell+1}, \ldots, z_{n+1})$ be the (diluted) state sequence pertaining to the boundaries between neighboring sub–blocks. Then,

$$P(x, z^\ell) = \prod_{i=0}^{n/\ell - 1} P\big(x_i, z_{(i+1)\ell+1}\,\big|\,z_{i\ell+1}\big). \qquad (24)$$

For a given $z^\ell$, let $\mathcal{T}(x|z^\ell)$ be the set of all sequences $\{x'\}$ that are obtained by permuting different sub–blocks that both begin at the same state and end at the same state. Owing to the product form of $P(x, z^\ell)$, it is clear that $P(x', z^\ell) = P(x, z^\ell)$ whenever $x' \in \mathcal{T}(x|z^\ell)$. It was shown in [46, Eq. (47) and Appendix A] that

$$|\mathcal{T}(x|z^\ell)| \geq \exp_2\{c(x)\log c(x) - n\delta(n, \ell)\}, \qquad (25)$$

independently of $z^\ell$, where $\delta(n, \ell)$ tends to $C/\ell$ ($C > 0$ – a constant) as $n \to \infty$ for fixed $\ell$. Furthermore, by choosing $\ell = \ell_n = \sqrt{\log n}$, we have that $\delta(n, \ell_n) = O(1/\sqrt{\log n})$. We then have the following chain of inequalities:

$$
\begin{aligned}
E\{G^\rho(X)\} &= \sum_{z^{\ell_n}} \sum_{x} P(x, z^{\ell_n})\, G^\rho(x) \\
&= \sum_{z^{\ell_n}} \sum_{\{\mathcal{T}(x|z^{\ell_n})\}} \sum_{x'\in\mathcal{T}(x|z^{\ell_n})} P(x', z^{\ell_n})\, G^\rho(x') \\
&= \sum_{z^{\ell_n}} \sum_{\{\mathcal{T}(x|z^{\ell_n})\}} P(x, z^{\ell_n}) \cdot |\mathcal{T}(x|z^{\ell_n})| \cdot \sum_{x'\in\mathcal{T}(x|z^{\ell_n})} \frac{G^\rho(x')}{|\mathcal{T}(x|z^{\ell_n})|} \\
&\geq \sum_{z^{\ell_n}} \sum_{\{\mathcal{T}(x|z^{\ell_n})\}} P(x, z^{\ell_n}) \cdot |\mathcal{T}(x|z^{\ell_n})| \cdot \frac{|\mathcal{T}(x|z^{\ell_n})|^\rho}{1+\rho} \\
&\geq \frac{1}{1+\rho} \sum_{z^{\ell_n}} \sum_{\{\mathcal{T}(x|z^{\ell_n})\}} P(x, z^{\ell_n}) \cdot |\mathcal{T}(x|z^{\ell_n})| \cdot \exp_2\{\rho[c(x)\log c(x) - n\delta(n,\ell_n)]\} \\
&= \frac{2^{-\rho n\delta(n,\ell_n)}}{1+\rho}\, E\left[\exp_2\{\rho\, c(X)\log c(X)\}\right] \\
&= 2^{-n\Delta_n}\, E\left[\exp_2\{\rho\, c(X)\log c(X)\}\right], \qquad (26)
\end{aligned}
$$

where the first inequality follows from the following genie–aided argument: the inner–most summation in the third line of the above chain can be viewed as the normalized guessing moment of a guesser that is informed that $X$ falls within a given $\mathcal{T}(x|z^{\ell_n})$. Since the distribution within $\mathcal{T}(x|z^{\ell_n})$ is uniform, no matter what the guessing strategy may be,

$$\sum_{x'\in\mathcal{T}(x|z^{\ell_n})} \frac{G^\rho(x')}{|\mathcal{T}(x|z^{\ell_n})|} \geq \frac{1}{|\mathcal{T}(x|z^{\ell_n})|}\sum_{k=1}^{|\mathcal{T}(x|z^{\ell_n})|} k^\rho = |\mathcal{T}(x|z^{\ell_n})|^\rho \cdot \frac{1}{|\mathcal{T}(x|z^{\ell_n})|}\sum_{k=1}^{|\mathcal{T}(x|z^{\ell_n})|} \left(\frac{k}{|\mathcal{T}(x|z^{\ell_n})|}\right)^{\!\rho} \geq |\mathcal{T}(x|z^{\ell_n})|^\rho \int_0^1 u^\rho\, \mathrm{d}u = \frac{|\mathcal{T}(x|z^{\ell_n})|^\rho}{1+\rho}. \qquad (27)$$

This completes the proof of Theorem 2. ✷
#### 5.2 Direct Theorem
We now present a matching direct theorem, which asymptotically achieves the converse bound in the exponential scale.

**Theorem 3** Given a finite–state source (22), there exists a universal guessing list that satisfies the following inequality:

$$E\{G^\rho(X)\} \leq 2^{n\Delta'_n}\, E\left[\exp_2\{\rho\, c(X)\log c(X)\}\right], \qquad (28)$$

where $\Delta'_n$ is a function of $s$, $\alpha$ and $n$ that tends to zero as $n\to\infty$ for fixed $s$ and $\alpha$.

Proof. The proposed deterministic guessing list orders all members of $\mathcal{X}^n$ in non–decreasing order of their Lempel–Ziv code–lengths [27, Theorem 2]. Denoting the LZ code–length of $x$ by $LZ(x)$, we then have

$$
\begin{aligned}
G(x) &\leq |\{x' : LZ(x') \leq LZ(x)\}| \\
&= \sum_{i=1}^{LZ(x)} |\{x' : LZ(x') = i\}| \\
&\leq \sum_{i=1}^{LZ(x)} 2^i \\
&< 2^{LZ(x)+1} \\
&\leq \exp_2\{[c(x)+1]\log(2\alpha[c(x)+1]) + 1\} \\
&\doteq \exp_2\{c(x)\log c(x)\}, \qquad (29)
\end{aligned}
$$

where the inequality $|\{x' : LZ(x') = i\}| \leq 2^i$ is due to the fact that the LZ code is uniquely decipherable (UD), and the last inequality is from Theorem 2 of [27]. By raising this inequality to the power of $\rho$ and taking the expectation of both sides, Theorem 3 is readily proved.

An alternative, randomized guessing strategy pertains to independent random guesses according to the following universal distribution,

$$\tilde{P}(x) = \frac{2^{-LZ(x)}}{\sum_{x'} 2^{-LZ(x')}}. \qquad (30)$$

Since the LZ code is UD, it satisfies the Kraft inequality, and so the denominator cannot be larger than 1, which means that

$$\tilde{P}(x) \geq 2^{-LZ(x)} \geq \exp_2\{-[c(x)+1]\log(2\alpha[c(x)+1])\}. \qquad (31)$$

Similarly as in (10), applying Lemma 1 to (30) (or to (31)), this time with $a = \frac{\ln 2}{n}[c(x)+1]\log(2\alpha[c(x)+1])$, we obtain that the $\rho$–th moment of $G(X)$, given that $X = x$, is upper bounded by an expression of the exponential order of $\exp_2\{\rho[c(x)+1]\log(2\alpha[c(x)+1])\} \doteq \exp_2\{\rho\, c(x)\log c(x)\}$, and then, upon taking the expectation w.r.t. the randomness of $X$, we readily obtain the achievability result once again. ✷
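Both the deterministic list and the randomized distribution above are driven by the LZ phrase count $c(x)$; the following sketch (illustrative; names are ours) computes $c(x)$ by incremental parsing, together with the exponent $c(x)\log c(x)$ of the unnormalized guessing weight:

```python
# A sketch of the quantity the universal list/distribution of Section 5 is
# built from: the LZ78 incremental-parsing phrase count c(x).
from math import log2

def lz_phrase_count(x):
    """Number of phrases in the incremental parsing of sequence x."""
    phrases = set()
    current = ()
    for symbol in x:
        current += (symbol,)
        if current not in phrases:   # shortest string not seen before
            phrases.add(current)
            current = ()
    return len(phrases) + (1 if current else 0)  # count a trailing partial phrase

def lz_log_weight(x):
    """-log2 of the (unnormalized) guessing weight 2^{-c(x) log c(x)}."""
    c = lz_phrase_count(x)
    return c * log2(c) if c > 1 else 0.0

print(lz_phrase_count((0, 1, 0, 1, 1, 0, 1, 1, 1)))  # parses into 6 phrases
```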
#### 5.3 Algorithms for Sampling From the Universal Guessing Distribution
Similarly as in Section 4, we are interested in efficient algorithms for sampling from the universal distribution (30). In fact, it is enough to have an efficient implementation of an algorithm that samples from a distribution $\tilde{P}$ that satisfies $\tilde{P}(x) \geq 2^{-c(x)\log c(x)}$ in the exponential scale. We propose two different algorithms; the first is inspired by the predictive point of view associated with LZ parsing [47], [48], and the second one is based on the simple idea of feeding the LZ decoder with purely random bits. The latter algorithm turns out to lend itself more easily to generalization for the case of guessing in the presence of SI. Both algorithms are described in terms of walks on a growing tree, but the difference is that in the first algorithm, the tree is constructed in the domain of the source sequences, whereas in the second algorithm, the tree is in the domain of the compressed bit-stream.

First algorithm. As said, the idea is in the spirit of the predictive probability assignment mechanism proposed in [47] and [48, Sect. V], but here, instead of using the incremental parsing mechanism for prediction, we use it for random selection.

As mentioned before, the algorithm is described as a process generated by a repeated walk on a growing tree, beginning, each time, from the root and ending at one of the leaves. Consider a tree which is initially composed of a root connected to $\alpha$ leaves, each one corresponding to one alphabet letter, $x \in \mathcal{X}$. We always assign to each leaf a weight of 1, and to each internal node – the sum of the weights of its immediate off-springs, so the initial weight of the root is $\alpha$. We begin by drawing the first symbol, $X_1$, such that the probability of $X_1 = x$ is given by the weight of $x$ (which is 1) divided by the weight of the current node, which is the root (i.e., a weight of $\alpha$, as said). In other words, we randomly select $X_1$ according to the uniform distribution over $\mathcal{X}$. The leaf corresponding to the outcome of $X_1$, call it $x_1$, will now become an internal node by adding to the tree its $\alpha$ off-springs, thus growing the tree to have $2\alpha - 1$ leaves. Each one of the leaves of the extended tree now has weight 1, and the weight of their common ancestor (formerly, the leaf of $x_1$) becomes the sum of their weights, namely $\alpha$; similarly, the weights of all ancestors of $x_1$, all the way up to the root, are now sequentially updated to become the sum of the weights of their immediate off-springs.

We now start again from the root of the tree to randomly draw the next symbol, $X_2$, such that the probability that $X_2 = x$ is given by the weight of the node $x$ divided by the weight of the current node, which is again the root, and then we move from the root to its off-spring pertaining to the $X_2$ that was just randomly drawn. If we have reached a leaf, then again this leaf gives birth to $\alpha$ new off-springs, each assigned weight 1, then all corresponding weights are updated as described before, and finally, we move back to the root, etc. If we are still at an internal node, then again, we draw the next symbol according to the ratio between the weight of the node pertaining to the next symbol and the weight of the current node, and so on. The process continues until $n$ symbols, $X_1, X_2, \ldots, X_n$, have been generated.

Note that every time we restart from the root and move along the tree until we reach a leaf, we generate a new LZ phrase that has not been obtained before. Let $c(x)$ be the number of phrases generated. Along each path from the root to a leaf, we implement a telescopic product of conditional probabilities, where the numerator pertaining to the last conditional probability is the weight of the leaf, which is 1, and the denominator of the first probability is the total number of leaves after $i$ rounds, which is $\alpha + i(\alpha-1)$ (because after every birth of a new generation of leaves, the total number of leaves is increased by $\alpha-1$). All other numerators and denominators of the conditional probabilities along the path cancel each other telescopically. The result is that the induced probability distribution over the various leaves is uniform. Precisely, after $i$ phrases have been generated, the probability of each leaf is exactly $1/[\alpha + i(\alpha-1)]$. Therefore,

$$
\tilde{P}(x) = \prod_{i=0}^{c(x)-1} \frac{1}{\alpha + i(\alpha-1)} \geq \prod_{i=0}^{c(x)-1} \frac{1}{\alpha + [c(x)-1](\alpha-1)} = \big\{\alpha + [c(x)-1](\alpha-1)\big\}^{-c(x)} = 2^{-c(x)\log\{[c(x)-1](\alpha-1)+\alpha\}}, \qquad (32)
$$

which is of the exponential order of $2^{-c(x)\log c(x)}$.
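A compact sketch of the first algorithm (ours, not the authors' code): a repeated random walk on a growing tree, where each completed walk emits a fresh LZ phrase and the reached leaf then sprouts $\alpha$ children:

```python
# Random walk on a growing tree, per the first algorithm (illustrative).
import random

def grow(children, weight, leaf, alpha):
    """Turn a leaf into an internal node with alpha fresh unit-weight leaves,
    then propagate the weight increase (alpha - 1) up to the root."""
    children[leaf] = [leaf + (a,) for a in range(alpha)]
    for kid in children[leaf]:
        weight[kid] = 1
    node = leaf
    while True:
        weight[node] += alpha - 1
        if node == ():
            break
        node = node[:-1]

def tree_walk_sample(alpha, n, rng=random):
    """Generate n symbols by repeated root-to-leaf walks on the growing tree."""
    children = {(): [(a,) for a in range(alpha)]}
    weight = {(): alpha}
    for a in range(alpha):
        weight[(a,)] = 1
    out = []
    node = ()
    while len(out) < n:
        kids = children[node]
        node = rng.choices(kids, weights=[weight[k] for k in kids], k=1)[0]
        out.append(node[-1])
        if node not in children:      # reached a leaf: a new LZ phrase ends
            grow(children, weight, node, alpha)
            node = ()                 # restart the walk from the root
    return tuple(out[:n])

print(tree_walk_sample(alpha=2, n=12))
```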
Second algorithm. As said, the second method for efficiently generating random guesses according to the LZ distribution is based on the simple idea of feeding purely random bits into the LZ decoder until a decoded sequence of length $n$ is obtained. To describe it, we refer to the coding scheme proposed in [27, Theorem 2], but with a slight modification. Recall that according to this coding scheme, for the $j$–th parsed phrase, $x_{n_{j-1}+1}^{n_j}$, one encodes two integers: the index $0 \leq \pi(j) \leq j-1$ of the matching past phrase, and the index of the additional source symbol, $0 \leq I_A(x_{n_j}) \leq \alpha-1$. These two integers are mapped together bijectively into one integer, $I(x_{n_{j-1}+1}^{n_j}) = \pi(j)\cdot\alpha + I_A(x_{n_j})$, which takes on values in the set $\{0, 1, 2, \ldots, j\alpha-1\}$, and so, according to [27], it can be encoded using $L_j = \lceil\log(j\alpha)\rceil$ bits. Here, instead, we will encode $I(x_{n_{j-1}+1}^{n_j})$ with a tiny modification that will make the encoding equivalent to a walk on a complete binary tree⁷ from the root to a leaf. Considering the fact that (by definition of $L_j$) $2^{L_j-1} < j\alpha \leq 2^{L_j}$, we first construct a full binary tree with $2^{L_j-1}$ leaves at depth $L_j-1$, and then convert $j\alpha - 2^{L_j-1}$ of these leaves into internal nodes by generating their off-springs. The resulting complete binary tree will then have exactly $j\alpha$ leaves, some of them at depth $L_j-1$ and some at depth $L_j$. Each leaf of this tree will now correspond to one value of $I(x_{n_{j-1}+1}^{n_j})$, and hence to a certain decoded phrase. Let $\hat{L}_j$ denote the length of the codeword for $I(x_{n_{j-1}+1}^{n_j})$. Obviously, either $\hat{L}_j = L_j - 1$ or $\hat{L}_j = L_j$. Consider now what happens if we feed the decoder of this encoder with a sequence of purely random bits (generated by a binary symmetric source): every leaf at depth $\hat{L}_j$ will be obtained with probability $2^{-\hat{L}_j}$, and since the tree is complete, these probabilities sum up to unity. The probability of obtaining $x$ at the decoder output is, therefore, equal to the probability of the sequence of bits pertaining to its compressed form, namely,

$$
\tilde{P}(x) = \prod_{j=1}^{c(x)+1} 2^{-\hat{L}_j} = \exp_2\Bigg\{-\sum_{j=1}^{c(x)+1} \hat{L}_j\Bigg\} \geq \exp_2\Bigg\{-\sum_{j=1}^{c(x)+1} L_j\Bigg\} \geq \exp_2\Bigg\{-\sum_{j=1}^{c(x)+1} \log(2j\alpha)\Bigg\} \geq \exp_2\big\{-[c(x)+1]\log[2\alpha(c(x)+1)]\big\} = \exp_2\big\{-[c(x)+1]\log(c(x)+1) - [c(x)+1]\log(2\alpha)\big\}, \qquad (33)
$$

which is again of the exponential order of $2^{-c(x)\log c(x)}$.

⁷By “complete binary tree”, we mean a binary tree where each node is either a leaf or has two off-springs. The reason for the need of a complete binary tree is that, for the algorithm to be valid, every possible sequence of randomly chosen bits must be a legitimate compressed bit-stream, so that it would be decodable by the LZ decoder.
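The second algorithm can be simulated at the level of phrase indices: feeding fair bits through the complete binary tree with $j\alpha$ leaves makes the depth-$(L_j-1)$ leaves twice as likely as the depth-$L_j$ ones. The sketch below (illustrative names; an index-level equivalent of the bit-feeding description) generates guesses this way:

```python
# Index-level simulation of feeding fair random bits to the modified LZ decoder.
import random

def draw_index(m, rng=random):
    """Sample I in {0,...,m-1} as if walking a complete binary tree with m
    leaves on fair bits: 2^L - m leaves sit at depth L-1 (probability
    2^{-(L-1)}), the rest at depth L, where L = ceil(log2 m)."""
    L = (m - 1).bit_length()          # ceil(log2(m)) for m >= 1
    shallow = 2 ** L - m
    weights = [2.0] * shallow + [1.0] * (m - shallow)
    return rng.choices(range(m), weights=weights, k=1)[0]

def lz_decoder_driven_sample(alpha, n, rng=random):
    """Generate x by 'decoding random bits': phrase j picks a past phrase
    pi(j) and a fresh symbol via I = pi(j)*alpha + symbol in {0,...,j*alpha-1}."""
    phrases = [()]                    # phrase 0 is the empty phrase
    out = []
    while len(out) < n:
        j = len(phrases)              # current phrase number
        index = draw_index(j * alpha, rng)
        past, symbol = divmod(index, alpha)
        phrases.append(phrases[past] + (symbol,))
        out.extend(phrases[-1])
    return tuple(out[:n])

print(lz_decoder_driven_sample(alpha=2, n=16))
```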
#### 5.4 Side Information
As we have done at the end of Section 4, here too, we describe how our results extend to the case where the guesser is equipped with SI. The parts that extend straightforwardly will be described briefly, whereas the parts whose extension is non–trivial will be more detailed.
Consider the pair process $\{(X_t, Y_t)\}$, jointly distributed according to a hidden Markov model,

$$P(x, y) = \sum_{z} \prod_{t=1}^{n} P(x_t, y_t, z_{t+1}|z_t), \qquad (34)$$

where, as before, $z_t$ is the state at time $t$, taking on values in a finite set of states $\mathcal{Z}$ of cardinality $s$.

Here, our objective is to guess $x$ when $y$ is available to the guesser as SI. Most of our earlier results extend quite easily to this case. Basically, the only modification needed is to replace the LZ complexity of $x$ by the conditional LZ complexity of $x$ given $y$, which is defined as in [49] and [50]. In particular, consider the joint parsing of the sequence of pairs $\{(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)\}$; let $c(x, y)$ denote the number of phrases, $c(y)$ – the number of distinct $y$-phrases, $y(\ell)$ – the $\ell$-th distinct $y$-phrase, $1 \leq \ell \leq c(y)$, and finally, let $c_\ell(x|y)$ denote the number of times $y(\ell)$ appears as a phrase, or, equivalently, the number of distinct $x$-phrases that appear jointly with $y(\ell)$, so that $\sum_{\ell=1}^{c(y)} c_\ell(x|y) = c(x, y)$. Then, we define

$$u(x|y) = \sum_{\ell=1}^{c(y)} c_\ell(x|y) \log c_\ell(x|y). \qquad (35)$$
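A sketch (ours) of how $u(x|y)$ in (35) is computed from the joint incremental parsing of the pair sequence:

```python
# Computing the conditional LZ quantity u(x|y) of (35); names are ours.
from collections import Counter
from math import log2

def conditional_lz_u(x, y):
    """u(x|y) = sum_l c_l(x|y) log c_l(x|y), from the joint parsing of (x, y)."""
    assert len(x) == len(y)
    phrases = set()
    cur = ()
    y_phrase_counts = Counter()        # c_l(x|y): joint phrases per y-phrase
    for pair in zip(x, y):
        cur += (pair,)
        if cur not in phrases:         # a new joint phrase ends here
            phrases.add(cur)
            y_phrase_counts[tuple(p[1] for p in cur)] += 1
            cur = ()
    return sum(c * log2(c) for c in y_phrase_counts.values())

print(conditional_lz_u((0, 1, 1, 0, 1, 0, 1, 1), (1, 1, 0, 0, 1, 0, 1, 0)))
```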
For the converse theorem (lower bound), the proof is the same as the proof of Theorem 2, except that here we need a lower bound on the size of a “conditional type” of $x$ given $y$. This lower bound turns out to be of the exponential order of $2^{u(x|y)}$, as can be seen in [51, Lemma 1]. Thus, the lower bound on the guessing moment is of the exponential order of $E[\exp_2\{\rho\, u(X|Y)\}]$.

For the direct theorem (upper bound), we can either create a deterministic guessing list by ordering the members of $\mathcal{X}^n$ according to increasing order of their conditional LZ code–length function values, $LZ(x|y) \approx u(x|y)$ [49, p. 2617], [50, page 460, proof of Lemma 2], or randomly draw guesses according to

$$\tilde{P}(x|y) = \frac{2^{-LZ(x|y)}}{\sum_{x'} 2^{-LZ(x'|y)}}. \qquad (36)$$

Following Subsection 5.3, we wish to have an efficient algorithm for sampling from the distribution (36), or, more generally, for implementing a conditional distribution that satisfies $\tilde{P}(x|y) \geq 2^{-LZ(x|y)} \doteq 2^{-u(x|y)}$.

While we have not been able to find an extension of the first algorithm of Subsection 5.3 to the case of SI, the second algorithm therein turns out to lend itself fairly easily to such an
extension. Once again, generally speaking, the idea is to feed a sequence of purely random bits as inputs to the decoder pertaining to the conditional LZ code, equipped with $y$ as SI, and wait until exactly $n$ symbols, $x_1, \ldots, x_n$, have been obtained at the output of the decoder. We need, however, a few slight modifications in the conditional LZ code, in order to ensure that any sequence of randomly drawn bits would be legitimate as the output of the encoder, and hence be decodable by the decoder. Once again, to this end, we must use complete binary trees for the prefix codes of the various components of the conditional LZ code.

As can be seen in [49], [50], the conditional LZ compression algorithm sequentially encodes $x$ phrase by phrase, where the code for each phrase consists of three parts:

1. A code for the length of the phrase, $L[y(\ell)]$.
2. A code for the location of the matching $x$–phrase among all previous phrases with the same $y$–phrase.
3. A code for the index of the last symbol of the $x$–phrase among all members of $\mathcal{X}$.

Parts 2 and 3 are similar to those of the ordinary LZ algorithm, and they can in fact even be united, as described before, into a single code for both indices (although this is not necessary). Part 1 requires a code for the integers, which can be implemented by the Elias code, as described in [49]. However, for the sake of conceptual simplicity of describing the required complete binary tree, consider the following alternative option. Define the following distribution on the natural numbers,

$$Q(i) = \frac{6}{\pi^2 i^2}, \qquad i = 1, 2, 3, \ldots \qquad (37)$$

and construct a prefix tree for the corresponding Shannon code, whose length function is given by

$$\mathcal{L}(i) = \lceil -\log Q(i) \rceil. \qquad (38)$$

Next, prune the tree by eliminating all leaves that correspond to values of $i = L[y(\ell)]$ that cannot be obtained at the current phrase: the length $L[y(\ell)]$ cannot be larger than the maximum possible phrase length, and cannot correspond to a string that has not been obtained as a $y$–phrase before.⁸

⁸This is doable since both the encoder and the decoder have this information at the beginning of the current phrase.

Finally, shorten the tree by eliminating branches that
emanate from any node that has one off-spring only. At the end of this process, we have a complete binary tree where the resulting code length for every possible value of $L[y(\ell)]$ cannot be larger than its original value (38).

The probability of obtaining a given $x$ at the output of the above–described conditional LZ decoder is equal to the probability of randomly selecting the bit-stream that generates $x$ (in the presence of $y$ as SI) as the response to this bit-stream. Thus,

$$
\begin{aligned}
\tilde{P}(x|y) &\geq \prod_{\ell=1}^{c(y)} \prod_{j=1}^{c_\ell(x|y)} \exp_2\big\{-\lceil\log(j\alpha)\rceil - \mathcal{L}(L[y(\ell)])\big\} \\
&\geq \exp_2\Bigg\{-\sum_{\ell=1}^{c(y)} \sum_{j=1}^{c_\ell(x|y)} \Big[\log(2j\alpha) + 2\log L[y(\ell)] + \log\frac{\pi^2}{6} + 1\Big]\Bigg\} \\
&\geq \exp_2\Bigg\{-\sum_{\ell=1}^{c(y)} c_\ell(x|y)\log[2\alpha\, c_\ell(x|y)] - 2\sum_{\ell=1}^{c(y)} c_\ell(x|y)\log L[y(\ell)] - c(x, y)\Big[\log\frac{\pi^2}{6} + 1\Big]\Bigg\} \\
&\doteq \exp_2\{-u(x|y)\}, \qquad (39)
\end{aligned}
$$

where the last step follows from the observation [50, p. 460] that

$$\sum_{\ell=1}^{c(y)} c_\ell(x|y)\log L[y(\ell)] = c(x, y) \sum_{\ell=1}^{c(y)} \frac{c_\ell(x|y)}{c(x, y)}\log L[y(\ell)] \leq c(x, y)\log\Bigg(\frac{\sum_{\ell=1}^{c(y)} c_\ell(x|y)\, L[y(\ell)]}{c(x, y)}\Bigg) = c(x, y)\log\frac{n}{c(x, y)}, \qquad (40)$$

and the fact that $c(x, y)$ cannot be larger than $O(n/\log n)$ [27].
### 6 Conclusion
In this work, we studied the guesswork problem under a very general setup of unknown source distribution and decentralized operation. Specifically, we designed and analyzed guessing strategies which do not require the source distribution, the exact guesswork moment to be optimized, or any synchronization between the guesses, yet achieve the optimal guesswork exponent as if all this information were known and full synchronization were possible. Furthermore, we designed efficient algorithms in order to sample guesses from the suggested universal distributions. We believe such sampling methods may be interesting in their own right, and may find applications outside the guesswork regime.
### Appendix
Proof of Lemma 1. We denote

$$S = \sum_{k=1}^{\infty} k^\rho (1 - e^{-na})^{k-1}. \qquad (A.1)$$

For a given, arbitrarily small $\epsilon > 0$, we first decompose $S$ as follows:

$$S = \sum_{k=1}^{e^{n(a+\epsilon)}} k^\rho (1 - e^{-na})^{k-1} + \sum_{k=e^{n(a+\epsilon)}+1}^{\infty} k^\rho (1 - e^{-na})^{k-1} \;\triangleq\; A + B. \qquad (A.2)$$

Now,

$$
\begin{aligned}
A &\leq \sum_{k=1}^{e^{n(a+\epsilon)}} e^{n(a+\epsilon)\rho} (1 - e^{-na})^{k-1} \\
&= e^{n(a+\epsilon)\rho} \sum_{k=1}^{e^{n(a+\epsilon)}} (1 - e^{-na})^{k-1} \\
&\leq e^{n(a+\epsilon)\rho} \sum_{k=1}^{\infty} (1 - e^{-na})^{k-1} \\
&= e^{n(a+\epsilon)\rho} \cdot \frac{1}{1 - (1 - e^{-na})} \\
&= e^{na}\, e^{n(a+\epsilon)\rho} \\
&\leq e^{n(1+\rho)(a+\epsilon)}. \qquad (A.3)
\end{aligned}
$$

It remains to show that $B$ has a negligible contribution for large enough $n$. Indeed, we next show that $B$ decays double–exponentially rapidly in $n$ for every $\epsilon > 0$:

$$
\begin{aligned}
B &= \sum_{k=e^{n(a+\epsilon)}+1}^{\infty} k^\rho (1 - e^{-na})^{k-1} \\
&= \sum_{k=e^{n(a+\epsilon)}+1}^{\infty} \exp\{(k-1)\ln(1 - e^{-na}) + \rho\ln k\} \\
&= \sum_{k=e^{n(a+\epsilon)}}^{\infty} \exp\{k\ln(1 - e^{-na}) + \rho\ln(k+1)\} \\
&\leq \sum_{k=e^{n(a+\epsilon)}}^{\infty} \exp\{-k\, e^{-na} + \rho\ln(k+1)\} \\
&= \sum_{k=e^{n(a+\epsilon)}}^{\infty} \exp\left\{-k\left[e^{-na} - \frac{\rho\ln(k+1)}{k}\right]\right\}. \qquad (A.4)
\end{aligned}
$$

Since $\{[\ln(k+1)]/k\}_{k\geq 1}$ is a monotonically decreasing sequence, then for all $k \geq e^{n(a+\epsilon)}$,

$$\frac{\rho\ln(k+1)}{k} \leq \frac{\rho\ln[e^{n(a+\epsilon)}+1]}{e^{n(a+\epsilon)}} = \rho e^{-n(a+\epsilon)}\ln[e^{n(a+\epsilon)}+1].$$

Thus,

$$
\begin{aligned}
B &\leq \sum_{k=e^{n(a+\epsilon)}}^{\infty} \exp\big\{-k\big(e^{-na} - \rho e^{-n(a+\epsilon)}\ln[e^{n(a+\epsilon)}+1]\big)\big\} \\
&= \frac{\exp\big\{-e^{n(a+\epsilon)}\big(e^{-na} - \rho e^{-n(a+\epsilon)}\ln[e^{n(a+\epsilon)}+1]\big)\big\}}{1 - \exp\big\{-\big(e^{-na} - \rho e^{-n(a+\epsilon)}\ln[e^{n(a+\epsilon)}+1]\big)\big\}} \\
&= \frac{\exp\big\{-\big(e^{n\epsilon} - \rho\ln[e^{n(a+\epsilon)}+1]\big)\big\}}{1 - \exp\big\{-\big(e^{-na} - \rho e^{-n(a+\epsilon)}\ln[e^{n(a+\epsilon)}+1]\big)\big\}} \\
&= \frac{[e^{n(a+\epsilon)}+1]^\rho \exp\{-e^{n\epsilon}\}}{1 - \exp\big\{-\big(e^{-na} - \rho e^{-n(a+\epsilon)}\ln[e^{n(a+\epsilon)}+1]\big)\big\}}. \qquad (A.5)
\end{aligned}
$$

Now, for small $x$, we have $1 - e^{-x} = x + O(x^2)$, and so the factor $\big[1 - \exp\{-(e^{-na} - \rho e^{-n(a+\epsilon)}\ln[e^{n(a+\epsilon)}+1])\}\big]^{-1}$ is of the exponential order of $e^{na}$, which does not affect the double–exponential decay due to the term $e^{-e^{n\epsilon}}$. The proof of the lemma is completed by taking into account the arbitrariness of $\epsilon > 0$ (in particular, one may let $\epsilon$ decay sufficiently slowly with $n$). ✷
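As a quick numerical sanity check of Lemma 1 (illustrative; the parameters below are arbitrary), one can verify that the normalized exponent $\frac{1}{n}\ln S$ approaches $(1+\rho)a$ as $n$ grows:

```python
# Numerically estimating (1/n) ln S for S in (A.1); it should approach
# (1+rho)*a, matching the exponential-order bound of Lemma 1.
from math import exp, log

def lemma1_exponent(a, n, rho, kmax=500_000):
    q = 1.0 - exp(-n * a)
    s, term = 0.0, 1.0
    for k in range(1, kmax + 1):
        s += (k ** rho) * term
        term *= q
    return log(s) / n

for n in (5, 10, 20):
    print(n, lemma1_exponent(a=0.5, n=n, rho=2.0), (1 + 2.0) * 0.5)
```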
|
{
"disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1109/TIT.2019.2920538?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1109/TIT.2019.2920538, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "GREEN",
"url": "http://arxiv.org/pdf/1811.04363"
}
| 2,020
|
[
"JournalArticle"
] | true
| 2020-01-01T00:00:00
|
[] | 20,643
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Medicine",
"source": "external"
},
{
"category": "Physics",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/00d0783da6191568a814381a0dc4db49262736e9
|
[
"Computer Science",
"Medicine"
] | 0.853884
|
On Global Quantum Communication Networking
|
00d0783da6191568a814381a0dc4db49262736e9
|
Entropy
|
[
{
"authorId": "143857714",
"name": "I. Djordjevic"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": [
"http://www.mdpi.com/journal/entropy/",
"http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:ch:bel-155606",
"https://www.mdpi.com/journal/entropy"
],
"id": "8270cfe1-3713-4325-a7bd-c6a87eed889e",
"issn": "1099-4300",
"name": "Entropy",
"type": "journal",
"url": "http://www.e-helvetica.nb.admin.ch/directAccess?callnumber=bel-155606"
}
|
Research in quantum communications networks (QCNs), where multiple users desire to generate or transmit common quantum-secured information, is still in its beginning stage. To solve for the problems of both discrete variable- and continuous variable-quantum key distribution (QKD) schemes in a simultaneous manner as well as to enable the next generation of quantum communication networking, in this Special Issue paper we describe a scenario where disconnected terrestrial QCNs are coupled through low Earth orbit (LEO) satellite quantum network forming heterogeneous satellite–terrestrial QCN. The proposed heterogeneous QCN is based on the cluster state approach and can be used for numerous applications, including: (i) to teleport arbitrary quantum states between any two nodes in the QCN; (ii) to enable the next generation of cyber security systems; (iii) to enable distributed quantum computing; and (iv) to enable the next generation of quantum sensing networks. The proposed QCNs will be robust against various channel impairments over heterogeneous links. Moreover, the proposed QCNs will provide an unprecedented security level for 5G+/6G wireless networks, Internet of Things (IoT), optical networks, and autonomous vehicles, to mention a few.
|
# entropy
_Perspective_
### On Global Quantum Communication Networking
**Ivan B. Djordjevic**
Department of Electrical and Computer Engineering, University of Arizona, Tucson, AZ 85721, USA;
ivan@email.arizona.edu; Tel.: +1-520-626-5119
Received: 29 June 2020; Accepted: 28 July 2020; Published: 29 July 2020
**Abstract: Research in quantum communications networks (QCNs), where multiple users desire to**
generate or transmit common quantum-secured information, is still in its beginning stage. To solve
for the problems of both discrete variable- and continuous variable-quantum key distribution (QKD)
schemes in a simultaneous manner as well as to enable the next generation of quantum communication
networking, in this Special Issue paper we describe a scenario where disconnected terrestrial QCNs
are coupled through low Earth orbit (LEO) satellite quantum network forming heterogeneous
satellite–terrestrial QCN. The proposed heterogeneous QCN is based on the cluster state approach
and can be used for numerous applications, including: (i) to teleport arbitrary quantum states between
any two nodes in the QCN; (ii) to enable the next generation of cyber security systems; (iii) to enable
distributed quantum computing; and (iv) to enable the next generation of quantum sensing networks.
The proposed QCNs will be robust against various channel impairments over heterogeneous links.
Moreover, the proposed QCNs will provide an unprecedented security level for 5G+/6G wireless
networks, Internet of Things (IoT), optical networks, and autonomous vehicles, to mention a few.
**Keywords: quantum key distribution (QKD); discrete variable (DV)-QKD; continuous variable**
(CV)-QKD; postquantum cryptography (PQC); quantum communications networks (QCNs)
**1. Introduction**
Quantum communication (QuCom) employs quantum information theory concepts, in particular
the no-cloning theorem and the theorem of indistinguishability of arbitrary quantum states, to
implement the distribution of keys with verifiable security, commonly referred to as quantum key
distribution (QKD), where security is guaranteed by the fundamental laws of physics as opposed to
unproven mathematical assumptions employed in computational security-based cryptography [1–3].
Despite the appealing features of QuComs, there are some fundamental and technical challenges
that need to be addressed prior to its widespread application. For instance, both the rate and
distance of QuCom are fundamentally limited by channel loss, which is specified by the rate-loss
tradeoff. To overcome the rate-distance limit of discrete variable (DV)-QKD protocols, two predominant
approaches have been pursued recently: (i) the development of quantum relays and (ii) the employment
of trusted relays. Quantum relays require the use of long-duration quantum memories and high-fidelity
entanglement distillation [4], which are not yet widely available. On the other hand, the trusted-relay
methodology assumes that the relay between two users can be trusted [5]; unfortunately, this
assumption is difficult to verify in practice. The measurement device independent (MDI)-QKD
approach [6] was able to close the detection loopholes; however, its secret-key rate (SKR) is still
bounded by O(T)-dependence (with T standing for transmissivity). Recently, twin-field (TF) QKD
has been proposed to overcome the rate-distance limit [7], whose SKR scales with the square-root of
transmittance, which represents a promising approach to extend the transmission distance. Another
key limitation of DV-QKD is the deadtime of single-photon detectors (SPDs), which limits the baud
rate and consequently the SKRs. To solve this problem, a continuous variable (CV)-QKD can be
used instead [1,8–10], which employs homodyne/heterodyne detection and thus does not
exhibit the SPDs' deadtime limitation problem. In particular, the discrete modulation (DM)-based
CV-QKD protocols offer much better reconciliation efficiency compared to that of Gaussian modulation
(GM)-based CV-QKD protocols. Unfortunately, the security proofs of DM-based CV-QKD schemes for
collective and coherent attacks are still incomplete. To overcome key challenges for DV-QKD, such as
low SKR values and limited distance, as well as for DM-based CV-QKD, such as incompleteness of
security proofs, the following approaches have been proposed in our recent papers: (1) discretized GM
(DGM)-CV-QKD [11], (2) optimized CV-QKD [12], and (3) hybrid DV-CV QKD [13]. An alternative
approach to QKD is post-quantum cryptography (PQC) [14]. PQC is typically referred to by various
cryptographic algorithms that are thought to be secure against any quantum computer-based attack.
Unfortunately, PQC is also based on unproven assumptions and some of the PQC algorithms will be
broken in the future by developing more sophisticated quantum algorithms.
Modern classical communication networks consist of multiple nodes connected by various types
of channels, including free-space optical (FSO) links, optical fibers, ground–satellite links, wireless
RF, and coaxial cables. Such a heterogeneous architecture would be equally important for QCNs, as
quantum nodes may access a QCN via different kinds of channels. Indeed, quantum communications
have been individually validated in free-space, optical fibers, and between a satellite and a ground
station, but a combined heterogeneous QCN employing multiple types of channels remains elusive.
Unlike in the point-to-point communication case, the fundamental quantum communication rate limits
are not well known. Several QKD testbeds have been reported so far, including the DARPA QKD
network [15], Tokyo QKD network [16], and secure communication based on quantum cryptography
(SECOQC) network [17]. The QKD can also be used to establish QKD-based campus-to-campus virtual
private networks employing the IPsec protocol [18] as well as to establish the network setup for using
transport-layer security (TLS) based on QKD [19]. However, all of these networks employ the dark
fiber infrastructure. Quantum communication over satellite links has already been demonstrated; see
for example [20,21].
In this Special Issue paper, we propose to implement the multipartite QCN by employing the
cluster state-based concept [22]. The proposed quantum network can be used to: (i) perform distributed
quantum computing, (ii) teleport quantum states between any two nodes in the network, and (iii)
enable the next generation of cyber security systems. The cluster states can be described by using
the stabilizer formalism and as such they can easily be certified by simple syndrome measurements.
In this formalism, the cluster states can be interpreted as codewords of a corresponding quantum
error correction code, while corresponding errors can be corrected for by simple syndrome decoding,
among others. By performing simple Y and Z measurements on properly selected nodes we can
straightforwardly establish the Einstein–Podolsky–Rosen (EPR) pair between any two nodes in the
network. Moreover, multiple EPR pairs can be established simultaneously. We further propose a cluster
state-based quantum network of satellites that enables global coverage. The quantum satellite network
would be composed of quantum subnetworks comprised of low Earth orbit (LEO) satellites. Some of
these LEO satellite-based quantum subnetworks can be connected to a subnetwork of medium Earth
orbit (MEO)/geostationary orbit (GEO) satellites. The LEO satellites should be used to interconnect
terrestrial cluster state-based quantum networks. This quantum global network can also be used to
distribute the entangled states for quantum sensing applications and to enable distributed quantum
computing on a global scale. Software-defined networking (SDN) concepts should be used to reconfigure the proposed QCN.
The paper is organized as follows. In Section 2, we describe the proposed cluster states-based
QCN concept. In Section 3, we describe potential approaches to extend the transmission distance
between QCN nodes. In Section 4, we describe the QCN that is currently under development at the
University of Arizona. Finally, in Section 5, we provide some relevant concluding remarks.
**2. Proposed Cluster States-Based Quantum Communications Networks**

To enable the next generation of quantum communication networking, we envision a scenario
in which disconnected terrestrial cluster states-based QCNs are coupled through the LEO satellite
(cluster state) quantum network, thus providing global coverage. The proposed quantum network
will be highly robust against turbulence encountered by FSO links, as the envisioned quantum
satellite network will communicate to ground nodes only through the LEO satellite-to-ground links,
exhibiting a vertical downlink profile through vacuum followed by a turbulence layer with strength
that is altitude-dependent.

The cluster states belong to the class of graph states, which also include Bell states,
Greenberger–Horne–Zeilinger (GHZ) states, W-states, and various entangled states used in quantum
error correction [22]. When the cluster C is defined as a connected subset of a d-dimensional lattice,
it obeys the set of eigenvalue equations

$$S_a \left|\phi\right\rangle_C = \left|\phi\right\rangle_C, \qquad S_a = X_a \bigotimes_{b \in N(a)} Z_b,$$

where the S_a are stabilizer operators, with N(a) denoting the neighborhood of a ∈ C. To create a
2-D cluster state, the approach proposed by Gilbert et al. [23] is applicable; it employs linear states,
generated by spontaneous parametric down-conversion (SPDC), local unitaries, and type I fusion to
create the desired 2-D cluster state. The type I fusion is illustrated in Figure 1, based on [23]. The
vertical photon is reflected by the polarization beam splitter (PBS), while the horizontal photon is
transmitted through the PBS. Given the probabilistic nature of the PBS, with the photons present at
both the left and right input ports, there are four possible outcomes, each occurring with probability
0.25. Two outcomes correspond to the desired fusion operators, and the success probability of the
fusion is 0.5. When a single photon is detected by the detector, a successful fusion is declared. The
procedure to create the T-shape cluster state is described in Figure 2. To create the box-cluster state,
we start with a four-qubit linear cluster state, re-label qubits 2 and 3, and apply Hadamard gates to
qubits 2 and 3, which effectively establishes the bond between qubits 1 and 4. Namely, relabeling the
qubits is equivalent to the SWAP gate action. To create the box-on-chain cluster state, we start with a
longer linear chain of qubits and apply the same approach as in box-state creation. Two T-shape
cluster states can be fused together to get the H-shape cluster state, etc.

[Figure 1: photons enter the left and right ports of a PBS, followed by a polarization-discriminating detector.]

**Figure 1. Illustrating the type I fusion process. PBS: polarization beam splitter.**

[Figure 2: a linear cluster, SWAP^(2,3), and a Z measurement on qubit 3 produce the T-shape cluster state.]

**Figure 2. Gilbert's approach to create the T-shape cluster state.**
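The stabilizer conditions above are easy to verify numerically for a small instance. The following sketch (not part of the original paper) builds a three-qubit linear cluster state as CZ_{12} CZ_{23} |+⟩⊗|+⟩⊗|+⟩ and checks that each stabilizer S_a = X_a ⊗_{b∈N(a)} Z_b leaves the state invariant; all function names here are illustrative.

```python
# Minimal numerical check of the cluster-state eigenvalue equations
# S_a |phi>_C = |phi>_C for a 3-qubit linear cluster (chain 0-1-2).
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)

def kron(*ops):
    """Kronecker product; the first argument is the most significant qubit."""
    out = ops[0]
    for op in ops[1:]:
        out = np.kron(out, op)
    return out

def cz(n, i, j):
    """Controlled-Z between qubits i and j of an n-qubit register."""
    dim = 2 ** n
    U = np.eye(dim, dtype=complex)
    for b in range(dim):
        if (b >> (n - 1 - i)) & 1 and (b >> (n - 1 - j)) & 1:
            U[b, b] = -1
    return U

n = 3
phi = kron(plus, plus, plus)            # |+>|+>|+>
phi = cz(n, 0, 1) @ cz(n, 1, 2) @ phi   # apply CZ on the chain edges

# Stabilizers of the linear cluster: S_a = X_a prod_{b in N(a)} Z_b.
stabilizers = [kron(X, Z, I), kron(Z, X, Z), kron(I, Z, X)]
for a, S in enumerate(stabilizers, start=1):
    assert np.allclose(S @ phi, phi), f"S_{a} is not a stabilizer"
print("All stabilizer eigenvalue equations S_a|phi> = |phi> hold.")
```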
Once the 2-D cluster state of nodes is created, we can use properly selected Y and Z measurements
to create the EPR pair between any two arbitrary nodes in the quantum network. As a reminder, the
role of the Z measurement is to remove the particular node (qubit) from the cluster, whereas the role
of the Y measurement is to remove a given node and link neighboring nodes. As an illustration, the
2-D cluster state with nine nodes is shown in Figure 3. Let us assume that we are interested in
establishing EPR pairs between nodes 3 and 7 as well as nodes 1 and 9. We first perform Y
measurements in the following order: Y8, Y5, and Y6 to get the intermediate stage. We then perform a
Z measurement on node 2 and a Y measurement on node 4 to get the two desired EPR pairs. Given
that the 2-D cluster state is universal, it is possible to use the same network architecture for both QCN
and distributed quantum computing. We also imagine the scenario in which each node is equipped
with multiple qubits, wherein several layers of 2-D cluster states are active at the same time, which
will allow us to simultaneously perform QCN and distributed quantum computing. Moreover, when
several 2-D cluster states are run in parallel on the same set of network nodes, we will be able to
reconfigure the QCN as needed. This can be done with the help of the SDN concept. The SDN has
been introduced to separate the control plane and data plane, manage network services through the
abstraction of higher-level functionality, and implement new applications and algorithms efficiently.
It has already been studied to enable the coexistence of classical and quantum communication
channels. Our SDN-based QCN architecture is composed of three layers, namely an application layer,
a control layer, and a QCN layer. Users send their requests from the application layer with the help
of the northbound interface to the SDN controller. The SDN controller allocates the QCN resources
with the help of its global map through the southbound interface. The QCN layer would be composed
of dense wavelength-division multiplexing (DWDM) FSO/single-mode fiber (SMF)/few-mode fiber
(FMF) links and QCN nodes. Any two nodes in the QCN can communicate either through a dedicated
SMF/FSO/FMF link or through a wavelength channel. The SDN control should also determine the
sequence of measurements to be performed in order to establish the desired EPR pairs. To deal with
time-varying channel conditions over heterogeneous links, we should adapt the system configuration
based on both application requirements and link conditions.

[Figure 3: a 3 × 3 cluster; Y8, Y5, and Y6 are measured first, followed by Z2 and Y4.]

**Figure 3. Establishing EPR pairs between nodes 1 and 9 as well as between nodes 3 and 7.**
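To illustrate how these measurement rules act on the underlying graph, the following sketch (not from the paper) applies the standard graph-state rules while ignoring the local Clifford corrections: a Z measurement deletes a vertex, while a Y measurement applies a local complementation on the vertex's neighborhood and then deletes the vertex, i.e., it links (or unlinks) the neighboring nodes. Y-measuring the interior nodes of a linear cluster therefore leaves an EPR pair between its endpoints.

```python
# Toy graph-rule sketch: Z removes a vertex; Y toggles every edge among the
# vertex's neighbors (local complementation) and then removes the vertex.
from itertools import combinations

def measure_Z(edges, v):
    """Remove vertex v and all edges incident to it."""
    return {e for e in edges if v not in e}

def measure_Y(edges, v):
    """Toggle every edge among the neighbors of v, then remove v."""
    nbrs = {w for e in edges if v in e for w in e if w != v}
    edges = set(edges)
    for a, b in combinations(sorted(nbrs), 2):
        edges.symmetric_difference_update({frozenset((a, b))})
    return measure_Z(edges, v)

# Example: a 4-node linear cluster 1-2-3-4. Y-measuring the interior
# nodes leaves a single edge, i.e., an EPR pair between nodes 1 and 4.
chain = {frozenset(e) for e in [(1, 2), (2, 3), (3, 4)]}
g = measure_Y(chain, 2)   # {1-3, 3-4}
g = measure_Y(g, 3)       # {1-4}
print(sorted(tuple(sorted(e)) for e in g))  # [(1, 4)]
```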
**3. Extending the Distance between Nodes in QCN**
The DV-QKD can be used to build QKD networks, as discussed in the introduction. Unfortunately,
the DV-QKD is affected by the deadtime of SPDs. Moreover, even if Eve cannot get the key because
DV-QKD is used, she can prevent the parties from creating secure keys, which is similar to a Denial of
Service (DoS) attack. Further, since SKRs for DV-QKD are low, the quantum key pool, storing the
secure keys, will often be empty, hampering the operation of QKD networks. To solve this problem
we propose to use hybrid QKD-PQC protocols, in which QKD is used for raw key transmission and
PQC in information reconciliation to reduce the leakage during the error reconciliation stage, as
illustrated in Figure 4. As mentioned in the introduction, PQC typically refers to various
cryptographic algorithms that are thought to be secure against any quantum computer-based attack.
Unfortunately, PQC is also based on unproven assumptions, and some of the PQC algorithms might
be broken in the future by the development of more advanced quantum algorithms. For this reason
we propose to use the PQC algorithms only in the information reconciliation phase so as to limit the
leakage due to the transmission of parity bits over an authenticated classical channel (in conventional
QKD). The quantum algorithms to be developed (not yet known), which will be capable of breaking
the PQC algorithms, will have a certain complexity expressed in terms of the number of operations L.
By ensuring that the number of parity bits N–K is shorter than the number of secure PQC bits log2 L,
the proposed cryptographic scheme will be secure. Evidently, the proposed cryptographic scheme
exploits the complexity of the corresponding quantum algorithms used to break the PQC protocols.
Given that the McEliece cryptosystem based on quasi-cyclic (QC)-low-density parity-check (LDPC)
coding is straightforward to implement, as shown in [24], whereas the corresponding LDPC encoders
and decoders have already been implemented in field-programmable gate arrays (FPGAs) [25], it
represents an excellent candidate to be used for the transmission of parity bits in the TF-QKD scheme.
As an illustration, the secret fraction that can be achieved with the BB84 protocol is lower bounded
by [1]:

$$r = q^{(Z)}\left[1 - h_2\!\left(e^{(X)}\right)\right] - q^{(Z)} f_e\, h_2\!\left(e^{(Z)}\right), \tag{1}$$

where q^(Z) denotes the probability of declaring a successful result when Alice sent a single photon
and Bob detected it in the Z-basis, f_e denotes the error correction inefficiency (f_e ≥ 1), e^(X) (e^(Z))
denotes the QBER in the X-basis (Z-basis), and h_2(x) is the binary entropy function
h_2(x) = −x log_2(x) − (1 − x) log_2(1 − x). The second term q^(Z) h_2(e^(X)) denotes the amount of
information Eve was able to learn during the raw key transmission, and this information can be
removed from the final key during the privacy amplification phase. The third term q^(Z) f_e h_2(e^(Z))
represents the amount of information revealed during the error correction stage. By sending the parity
bits over the PQC channel this term can be effectively eliminated and the SKR can be increased.
[Figure 4: Bob's raw key passes through sifting to the sifted key x; the syndrome p = xH^T is
PQC-encrypted, sent over the public channel, PQC-decrypted, and LDPC-decoded at Alice to obtain
the corrected key.]

**Figure 4. Illustration of post-quantum cryptography-based information reconciliation.**
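The reconciliation flow of Figure 4 can be sketched end to end in a few lines. This toy example (not the paper's implementation) uses the [7,4] Hamming code as a stand-in for the QC-LDPC code of [24,25] so that a single-bit error is always uniquely decodable, and treats the PQC encryption of the syndrome as an opaque protected channel.

```python
# Toy syndrome-based reconciliation: Bob sends p = x H^T (mod 2) of his
# sifted key x over the PQC-protected channel; Alice corrects her noisy
# copy y by syndrome decoding.
import numpy as np

# Parity-check matrix of the [7,4] Hamming code: column j is the binary
# representation of j + 1, so each single-bit error has a unique syndrome.
H = np.array([[int(b) for b in f"{j:03b}"] for j in range(1, 8)]).T

x = np.array([1, 0, 1, 1, 0, 0, 1])  # Bob's sifted (toy) key
y = x.copy(); y[4] ^= 1              # Alice's copy with one flipped bit

p_bob = H @ x % 2                    # syndrome sent over the PQC channel
s = (p_bob + H @ y) % 2              # syndrome of the error pattern e = x XOR y

e_hat = np.zeros_like(y)
if s.any():
    # The syndrome equals the H-column of the flipped position.
    pos = int("".join(map(str, s)), 2) - 1
    e_hat[pos] = 1

x_hat = (y + e_hat) % 2
assert np.array_equal(x_hat, x)
print("Alice recovered Bob's key from the PQC-protected syndrome.")
```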
By using this approach, as illustrated in Figure 5, the transmission distance between two nodes
in a QCN can be significantly extended. Here we provide comparisons of the joint TF-QKD-McEliece
encryption scheme against the phase-matching (PM) TF-QKD protocol introduced in [26], the
MDI-QKD protocol [6], and the decoy-state-based BB84 protocol [27]. The system parameters are
selected as follows: the detector efficiency η_d = 0.25, reconciliation inefficiency f_e = 1.15, the dark
count rate p_d = 8 × 10^−8, the misalignment error e_d = 1.5%, and the number of phase slices for
PM TF-QKD is set to M = 16. Regarding the transmission medium, it is assumed that the recently
reported ultra-low-loss fiber of attenuation 0.1419 dB/km (at 1560 nm) is employed [28]. In the same
figure, the Pirandola–Laurenza–Ottaviani–Banchi (PLOB) bound on a linear key rate is provided as
well. Both PM TF-QKD and joint TF-QKD-McEliece encryption schemes outperform the decoy-state
BB84 protocol for distances larger than 162 km, while simultaneously outperforming the MDI-QKD
protocol for all distances, and exceed the PLOB bound at a distance of 322 km. The PM TF-QKD
protocol can achieve a maximum distance of 623 km. The proposed joint TF-QKD-McEliece encryption
scheme is able to achieve a distance of even 1127 km, thus significantly outperforming all other
schemes. Even though the operating wavelength was 1560 nm, other suitable wavelengths such as
2 µm and 3.9 µm can be used as well.
[Figure 5: secret-key rate (logarithmic scale, 10^−1 down to 10^−13) vs. distance L (km).]

**Figure 5. Proposed hybrid QKD-PQC scheme against MDI-QKD and TF-QKD in terms of secret-key rate vs. distance, assuming that ultra-low loss fiber is used.**
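To make the scalings in Figure 5 concrete, the sketch below (not from the paper) evaluates the channel transmissivity for the 0.1419 dB/km fiber, the repeaterless PLOB bound −log2(1 − T), and the O(T) versus O(√T) rate scalings. Prefactors are set to 1, so only the distance dependence is meaningful.

```python
# Compare the distance scaling of an O(T) protocol, an O(sqrt(T))
# twin-field-type protocol, and the PLOB bound over ultra-low-loss fiber.
import math

ALPHA_DB_PER_KM = 0.1419  # fiber attenuation at 1560 nm [28]

def transmissivity(L_km):
    """Channel transmissivity T = 10^(-alpha * L / 10)."""
    return 10 ** (-ALPHA_DB_PER_KM * L_km / 10)

def plob_bound(T):
    """Repeaterless secret-key capacity bound: -log2(1 - T) bits/use."""
    return -math.log2(1 - T)

for L in (100, 300, 600, 1000):
    T = transmissivity(L)
    print(f"L = {L:4d} km: T = {T:.2e}, PLOB = {plob_bound(T):.2e}, "
          f"O(T) = {T:.2e}, O(sqrt T) = {math.sqrt(T):.2e}")
```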
Now, by connecting the base stations to the nodes in the proposed QCNs, we can provide
unconditional security to 5G+/6G wireless networks. By organizing the base stations in a quantum
optical mesh network and employing the proposed hybrid QKD-PQC concept, we can provide
unconditional security to a large number of users. The Internet of Things (IoT) architecture will
comprise widely distributed nodes connected via different types of channels to enable new
functionalities in communication, sensing, and computing. Communication security in such a giant
network is of paramount importance. Our proposed QCNs will underpin the unconditional
physical-layer security of the IoT, given that they will allow any two arbitrary nodes to securely
transmit data at a high rate via an optical link. Critically, the security of such a network will not rest
upon the trusted-node assumption, and a compromised node will not affect the security of other
nodes. As such, the proposed QCNs will lead to a substantially stronger security level for the IoT. To
enable security for future 6G wireless networks at a reasonable cost, the proposed joint
satellite–terrestrial QCN can be based on CubeSat satellites.

For satellite-to-satellite quantum communications, in addition to the proposed hybrid QKD-PQC
concept, it is also possible to employ our recent restricted eavesdropping concept [29], which offers a
significant increase in SKRs. This concept was presented in the ICTON 2020 paper [30]. Alternatively,
the hybrid QKD can also be applied [13].
**4. QCN under Development**

The terrestrial QCN to be developed at the University of Arizona is shown in Figure 6; it will
exploit the existing NSF MRI INQUIRE quantum network, representing the quantum hub (QuHub)
to share entangled photons and SPDs among different labs across the campus. The outdoor FSO
bidirectional link, connecting the Electrical and Computer Engineering and Optical Sciences buildings,
has already been established, with the FSO transceiver shown in Figure 7. We will also create the mesh
network as well as the hybrid network composed of mesh, optical star, and ring network segments.
The deployed heterogeneous QCNs will allow us to test novel quantum-networking theories and
develop experimental tools for counteracting various channel impairments. To deal with atmospheric
turbulence effects, the adaptive optics (AO) subsystem, composed of a wavefront sensor (WFS) and a
deformable mirror, will be used. The AO will be combined with adaptive LDPC coding.
[Figure 6: campus network diagram showing the QuHub (ECE 111) with the Alice server, connected
via SMF, FMF/MMF, and FSO links to the QuCom Lab (ECE 549), OCSL (ECE 441), and user nodes in
the Meinel, MSE, Physics, and Keating buildings (NSF MRI INQUIRE quantum network).]

**Figure 6. Terrestrial quantum communication network to be developed at the University of Arizona.**
**Figure 7. Free-space optical transceiver used in outdoor FSO link.**
To provide global coverage, we envision a scenario in which disconnected terrestrial QCNs, such
as the one shown in Figure 6, are coupled through the LEO satellite quantum network. We have
recently shown that a Bessel–Gaussian (BG) beam, carrying an orbital angular momentum mode,
exhibits better tolerance to atmospheric turbulence effects compared to Gaussian beams for distances
up to a few kilometers [31]. However, for LEO satellite-to-ground QuCom links, BG beams diffract
much faster than Gaussian beams for such long-distance applications. Hence, we need to use pure
Bessel beams to overcome this problem, as we have shown in our recent paper [32]. To enable
robustness against turbulence encountered by FSO links, the envisioned quantum satellite QCN
should communicate to ground nodes only through the LEO satellite-to-ground links, exhibiting a
vertical downlink profile through vacuum followed by a turbulence layer with altitude-dependent
strength. In principle, MEO/GEO satellite QCNs can be created above LEO QCNs to provide
planetary coverage.

**5. Concluding Remarks**

To enable the next generation of quantum-enabled cyber security systems, we proposed a
quantum network of satellites that will provide global coverage. The quantum satellite network will
be composed of quantum subnetworks comprised of LEO satellites. Some of these LEO satellite-based
quantum subnetworks will be connected to a subnetwork of MEO satellites. The MEO satellite
subnetworks will then be interconnected to the global network of GEO satellites. The LEO/MEO
satellites will also be used to interconnect terrestrial quantum networks. Each quantum
communication subnetwork will be based on the cluster state concept. This quantum global network
will allow us to establish EPR pairs between any two nodes in the global network. It can also be used
to distribute the entangled states for quantum-sensing applications and to enable distributed
quantum computing on a global scale.
**Funding: This research received no external funding.**
**Conflicts of Interest: The author declares no conflict of interest.**
**References**
1. Djordjevic, I.B. Physical-Layer Security and Quantum Key Distribution; Springer Nature Switzerland: Cham,
Switzerland, 2019.
2. Pljonkin, A.P. Features of the Photon Pulse Detection Algorithm in the Quantum Key Distribution System. In
Proceedings of the 2017 International Conference on Cryptography, Security and Privacy (ICCSP ’17), Wuhan,
China, 17–19 March 2017; Association for Computing Machinery: New York, NY, USA, 2017; pp. 81–84.
[[CrossRef]](http://dx.doi.org/10.1145/3058060.3058078)
3. Pljonkin, A.P. Vulnerability of the synchronization process in the quantum key distribution system. Int. J.
_[Cloud Appl. Comput. 2019, 9, 4. [CrossRef]](http://dx.doi.org/10.4018/IJCAC.2019010104)_
4. Duan, L.-M.; Lukin, M.; Cirac, J.I.; Zoller, P. Long-distance quantum communication with atomic ensembles
[and linear optics. Nature 2001, 414, 413–418. [CrossRef]](http://dx.doi.org/10.1038/35106500)
5. [Qiu, J. Quantum communications leap out of the lab. Nature 2014, 508, 441–442. [CrossRef]](http://dx.doi.org/10.1038/508441a)
6. Lo, H.-K.; Curty, M.; Qi, B. Measurement-device-independent quantum key distribution. Phys. Rev. Lett.
**[2012, 108, 130503. [CrossRef]](http://dx.doi.org/10.1103/PhysRevLett.108.130503)**
7. Lucamarini, M.; Yuan, Z.L.; Dynes, J.F.; Shields, A.J. Overcoming the rate–distance limit of quantum key
[distribution without quantum repeaters. Nature 2018, 557, 400–403. [CrossRef]](http://dx.doi.org/10.1038/s41586-018-0066-6)
8. Fossier, S.; Diamanti, E.; Debuisschert, T.; Tualle-Brouri, R.; Grangier, P. Improvement of continuous-variable
[quantum key distribution systems by using optical preamplifiers. J. Phys. B 2009, 42, 114014. [CrossRef]](http://dx.doi.org/10.1088/0953-4075/42/11/114014)
9. Qu, Z.; Djordjevic, I.B. Four-dimensionally multiplexed eight-state continuous-variable quantum key
[distribution over turbulent channels. IEEE Photonics J. 2017, 9, 7600408. [CrossRef]](http://dx.doi.org/10.1109/JPHOT.2017.2777261)
10. [Ralph, T.C. Continuous variable quantum cryptography. Phys. Rev. A 1999, 61, 010303. [CrossRef]](http://dx.doi.org/10.1103/PhysRevA.61.010303)
11. Djordjevic, I.B. On the Discretized Gaussian Modulation (DGM)-based Continuous Variable-QKD. IEEE Access
**[2019, 7, 65342–65346. [CrossRef]](http://dx.doi.org/10.1109/ACCESS.2019.2917587)**
12. Djordjevic, I.B. Optimized-Eight-State CV-QKD Protocol Outperforming Gaussian Modulation Based
[Protocols. IEEE Photonics J. 2019, 11, 4500610. [CrossRef]](http://dx.doi.org/10.1109/JPHOT.2019.2921521)
13. Djordjevic, I.B. Hybrid QKD Protocol Outperforming both DV- and CV-QKD Protocols. IEEE Photonics J.
**[2020, 12, 7600108. [CrossRef]](http://dx.doi.org/10.1109/JPHOT.2019.2946910)**
14. Bernstein, D.J.; Buchmann, J.; Dahmen, E. Post-Quantum Cryptography; Springer: Berlin, Germany, 2009.
15. Elliott, C.; Colvin, A.; Pearson, D.; Pikalo, O.; Schlafer, J.; Yeh, H. Current status of the DARPA quantum
network (Invited Paper). In Proceedings of the SPIE 5815, Quantum Information and Computation III,
Defense and Security, Orlando, FL, USA, 25 May 2005.
16. Sasaki, M.; Fujiwara, M.; Ishizuka, H.; Klaus, W.; Wakui, K.; Takeoka, M.; Miki, S.; Yamashita, T.; Wang, Z.;
Tanaka, A.; et al. Field test of quantum key distribution in the Tokyo QKD Network. Opt. Express 2011, 19,
[10387–10409. [CrossRef] [PubMed]](http://dx.doi.org/10.1364/OE.19.010387)
17. Alléaume, R.; Branciard, C.; Bouda, J.; Debuisschert, T.; Dianati, M.; Gisin, N.; Godfrey, M.; Grangier, P.;
Länger, T.; Lütkenhaus, N.; et al. Using quantum key distribution for cryptographic purposes. J. Theor.
_[Comput. Sci. 2014, 560, 62–81. [CrossRef]](http://dx.doi.org/10.1016/j.tcs.2014.09.018)_
18. [Nagayama, S.; Van Meter, R. Internet-Draft: IKE for IPsec with QKD. 2009. Available online: https:](https://tools.ietf.org/html/draft-nagayama-ipsecme-ipsec-with-qkd-01)
[//tools.ietf.org/html/draft-nagayama-ipsecme-ipsec-with-qkd-01 (accessed on 28 July 2020).](https://tools.ietf.org/html/draft-nagayama-ipsecme-ipsec-with-qkd-01)
19. Mink, A.; Frankel, S.; Perlner, R. Quantum Key Distribution (QKD) and Commodity Security Protocols:
Introduction and Integration. Intern. J. Netw. Secur. Appl. 2009, 1, 101–112.
20. Yin, J.; Cao, Y.; Li, Y.H.; Liao, S.K.; Zhang, L.; Ren, J.G.; Cai, W.Q.; Liu, W.Y.; Li, B.; Dai, H.; et al. Satellite-based
[entanglement distribution over 1200 kilometers. Science 2017, 356, 1140–1144. [CrossRef] [PubMed]](http://dx.doi.org/10.1126/science.aan3211)
21. Dequal, D.; Vallone, G.; Bacco, D.; Gaiarin, S.; Luceri, V.; Bianco, G.; Villoresi, P. Experimental single-photon
[exchange along a space link of 7000 km. Phys. Rev. A 2016, 93, 010301. [CrossRef]](http://dx.doi.org/10.1103/PhysRevA.93.010301)
22. Briegel, H.J. Cluster States. In Compendium of Quantum Physics; Greenberger, D., Hentschel, K., Weinert, F.,
Eds.; Springer: Berlin/Heidelberg, Germany, 2009; pp. 96–105.
23. Gilbert, G.; Hamrick, M.; Weinstein, Y.S. Efficient construction of photonic quantum-computational clusters.
_[Phys. Rev. A 2006, 73, 064303. [CrossRef]](http://dx.doi.org/10.1103/PhysRevA.73.064303)_
24. Baldi, M.; Bianchi, M.; Chiaraluce, F. Security and complexity of the McEliece cryptosystem based on QC
[LDPC codes. IET Inf. Secur. 2013, 7, 212–220. [CrossRef]](http://dx.doi.org/10.1049/iet-ifs.2012.0127)
25. Sun, X.; Zou, D.; Qu, Z.; Djordjevic, I.B. Run-time reconfigurable adaptive LDPC coding for optical channels.
_[Opt. Express 2018, 26, 29319–29329. [CrossRef]](http://dx.doi.org/10.1364/OE.26.029319)_
26. [Ma, X.; Zeng, P.; Zhou, H. Phase-matching quantum key distribution. Phys. Rev. X 2018, 8, 031043. [CrossRef]](http://dx.doi.org/10.1103/PhysRevX.8.031043)
27. Lo, H.-K.; Ma, X.; Chen, K. Decoy state quantum key distribution. Phys. Rev. Lett. 2005, 94, 230504.
[[CrossRef] [PubMed]](http://dx.doi.org/10.1103/PhysRevLett.94.230504)
28. Tamura, Y.; Sakuma, H.; Morita, K.; Suzuki, M.; Yamamoto, Y.; Shimada, K.; Honma, Y.; Sohma, K.; Fujii, T.;
Hasegawa, T.; et al. The First 0.14-dB/km loss optical fiber and its impact on submarine transmission. J.
_[Lightw. Technol. 2018, 36, 44–49. [CrossRef]](http://dx.doi.org/10.1109/JLT.2018.2796647)_
29. Pan, Z.; Seshadreesan, K.P.; Clark, W.; Adcock, M.R.; Djordjevic, I.B.; Shapiro, J.H.; Guha, S. Secret Key
Distillation over a Pure Loss Quantum Wiretap Channel under Restricted Eavesdropping. In Proceedings of
the 2019 IEEE International Symposium on Information Theory (ISIT 2019), Paris, France, 7–12 July 2019;
pp. 3032–3036.
30. Pan, Z.; Djordjevic, I.B. Security of Satellite-Based CV-QKD under Realistic Assumptions. In Proceedings of
the 22nd International Conference on Transparent Optical Networks ICTON 2020, Bari, Italy, 19–23 July 2020.
31. Wang, T.-L.; Gariano, J.; Djordjevic, I.B. Employing Bessel-Gaussian Beams to Improve Physical-Layer
[Security in Free-Space Optical Communications. IEEE Photonics J. 2018, 10, 7907113. [CrossRef]](http://dx.doi.org/10.1109/JPHOT.2018.2867173)
32. Wang, T.-L.; Djordjevic, I.B.; Nagel, J. Laser Beam Propagation Effects on Secure Key Rates for
[Satellite-to-Ground Discrete Modulation CV-QKD. Appl. Opt. 2019, 58, 8061–8068. [CrossRef]](http://dx.doi.org/10.1364/AO.58.008061)
© 2020 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
[(CC BY) license (http://creativecommons.org/licenses/by/4.0/).](http://creativecommons.org/licenses/by/4.0/.)
|
{
"disclaimer": "Notice: Paper or abstract available at https://pmc.ncbi.nlm.nih.gov/articles/PMC7517431, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "GOLD",
"url": "https://www.mdpi.com/1099-4300/22/8/831/pdf?version=1596101395"
}
| 2,020
|
[
"JournalArticle"
] | true
| 2020-07-29T00:00:00
|
[
{
"paperId": "a35d44be73494a09fee9e465968b9f4b1c5ef876",
"title": "Security of Satellite-Based CV-QKD under Realistic Assumptions"
},
{
"paperId": "da74a376a0e5a8c775d3fcebbe6ffd377a2ce6bf",
"title": "Hybrid QKD Protocol Outperforming Both DV- and CV-QKD Protocols"
},
{
"paperId": "4a51b17ed9d09a4d33b466d5140a2cc387185e7a",
"title": "Laser beam propagation effects on secure key rates for satellite-to-ground discrete modulation CV-QKD."
},
{
"paperId": "747fe4e7823e19ab712cc0b625fd3b4da2fa0d8a",
"title": "Physical-Layer Security and Quantum Key Distribution"
},
{
"paperId": "057a4b21f3b3ce17e1c0071651f44a0efbbeea39",
"title": "Secret key distillation over a pure loss quantum wiretap channel under restricted eavesdropping"
},
{
"paperId": "46b300add8a462e9b5a0826dcbdeed0dacb297b9",
"title": "Optimized-Eight-State CV-QKD Protocol Outperforming Gaussian Modulation Based Protocols"
},
{
"paperId": "2c6f87bd06f8b19e9b65fa98cc7a907d9bbb5201",
"title": "On the Discretized Gaussian Modulation (DGM)- Based Continuous Variable-QKD"
},
{
"paperId": "229a3ae4136eb6f61d80c476eb57f9af5601151c",
"title": "Vulnerability of the Synchronization Process in the Quantum Key Distribution System"
},
{
"paperId": "5f9b0ed05a386a007aa5866388db5f5a2a68a918",
"title": "Run-time reconfigurable adaptive LDPC coding for optical channels."
},
{
"paperId": "c83b9da59c4aeae6eb578a4ad377caa73b9a119e",
"title": "Employing Bessel-Gaussian Beams to Improve Physical-Layer Security in Free-Space Optical Communications"
},
{
"paperId": "85670868b210bb28591ad775d815ef9f961c8566",
"title": "Phase-Matching Quantum Key Distribution"
},
{
"paperId": "0da7787e573e7e91dbbea1cb77c7b8c13e5a775e",
"title": "Overcoming the rate–distance limit of quantum key distribution without quantum repeaters"
},
{
"paperId": "3ef53e090fedfbef898561d18fe07960021ca08c",
"title": "Four-Dimensionally Multiplexed Eight-State Continuous-Variable Quantum Key Distribution Over Turbulent Channels"
},
{
"paperId": "7865f3132d4053c25f04ed148b480799932a825a",
"title": "Satellite-based entanglement distribution over 1200 kilometers"
},
{
"paperId": "21fb93ed12f3bb985eca64296d84b21a806722af",
"title": "Features of the Photon Pulse Detection Algorithm in the Quantum Key Distribution System"
},
{
"paperId": "172835329357c307cb296ce50c622374c0069eb1",
"title": "Experimental single photon exchange along a space link of 7000 km"
},
{
"paperId": "9f595e47ba244c2eda6f47fc98f42e3ae4db61b1",
"title": "IKE for IPsec with QKD"
},
{
"paperId": "5626ae9ea6b010871b0eb4a6237d1dae064f5c14",
"title": "Quantum communications leap out of the lab"
},
{
"paperId": "e0087bee26edc29886beb428767d26d5783097cf",
"title": "Security and complexity of the McEliece cryptosystem based on quasi-cyclic low-density parity-check codes"
},
{
"paperId": "0014862203b30180743559fe378fc4dc007107eb",
"title": "Field test of quantum key distribution in the Tokyo QKD Network."
},
{
"paperId": "46b77116268b9fe992a3d4763a73fbf2d3400157",
"title": "Quantum Key Distribution (QKD) and Commodity Security Protocols: Introduction and Integration"
},
{
"paperId": "bc1bcd12998be8437579cf7a95a8390576f65b5b",
"title": "Improvement of continuous-variable quantum key distribution systems by using optical preamplifiers"
},
{
"paperId": "99291ce0b97a31c786560241fea62604332afbf5",
"title": "Post-quantum cryptography"
},
{
"paperId": "f5480ff6e99948c9d38bafdb77924befe3ad7d87",
"title": "Using quantum key distribution for cryptographic purposes: A survey"
},
{
"paperId": "76682df264c4406b213410837035a2f3696311f1",
"title": "Efficient construction of photonic quantum-computational clusters"
},
{
"paperId": "f3dac0d820a7961310bbdb3966db68a4a3d2706b",
"title": "Current status of the DARPA quantum network (Invited Paper)"
},
{
"paperId": "c45f3b85a2f7efccf834de721e4183992312f859",
"title": "Decoy state quantum key distribution."
},
{
"paperId": "a435c4730520628f0fe48e289782925a4979c251",
"title": "Long-distance quantum communication with atomic ensembles and linear optics"
},
{
"paperId": "7cc963de3b2d36b4f9d5b6de50e6fb5508651556",
"title": "Continuous variable quantum cryptography"
},
{
"paperId": "d356042183a64e44251fe5b624c10800f6984176",
"title": "The First 0.14-dB/km Loss Optical Fiber and its Impact on Submarine Transmission"
},
{
"paperId": "a8d3024031c3248fc812edfbce864af7a11ece12",
"title": "Cluster States"
},
{
"paperId": null,
"title": "This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license"
}
] | 11,683
|
en
|
[
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/00d1e468ebce3a3c6197b876c32330c2f54777a3
|
[] | 0.813042
|
BioShare: An Open Framework for Trusted Biometric Authentication under User Control
|
00d1e468ebce3a3c6197b876c32330c2f54777a3
|
Applied Sciences
|
[
{
"authorId": "2027667427",
"name": "Quan Sun"
},
{
"authorId": "2145185836",
"name": "Jie Wu"
},
{
"authorId": "5493242",
"name": "Wenhai Yu"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"Appl Sci"
],
"alternate_urls": [
"http://www.mathem.pub.ro/apps/",
"https://www.mdpi.com/journal/applsci",
"http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:ch:bel-217814"
],
"id": "136edf8d-0f88-4c2c-830f-461c6a9b842e",
"issn": "2076-3417",
"name": "Applied Sciences",
"type": "journal",
"url": "http://www.e-helvetica.nb.admin.ch/directAccess?callnumber=bel-217814"
}
|
Generally, biometric authentication is conducted either by mobile terminals in local-processing mode or by public servers in centralized-processing mode. In the former mode, each user has full control of his/her biometric data, but the authentication service is restricted to local mobile apps. In the latter mode, the authentication service can be opened up to network applications, but the owners have no control of their private data. It has become a difficult problem for biometric applications to provide open and trusted authentication services under user control. Existing approaches address these concerns in ad-hoc ways. In this work, we propose BioShare, a framework that provides trusted biometric authentication services to network applications while giving users full control of their biometric data. Our framework is designed around three key principles: each user has full control of his/her biometric data; biometric data is stored and processed in trusted environments to prevent privacy leaks; and the open biometric-authentication service is efficiently provided to network applications. We describe our current design and sample implementation, and illustrate how it provides an open face-recognition service with standard interfaces, combines terminal trusted environments with server enclaves, and enables each user to control his/her biometric data efficiently. Finally, we analyze the security of the framework and measure the performance of the implementation.
|
# applied sciences
_Article_
## BioShare: An Open Framework for Trusted Biometric Authentication under User Control
**Quan Sun 1,2,*, Jie Wu 1 and Wenhai Yu 2**
1 School of Computer Science and Technology, Fudan University, No. 220 Handan Rd., Shanghai 200433, China
2 China UnionPay Co., Ltd., No. 1699 Gutang Rd., Shanghai 201201, China
* Correspondence: quansun@unionpay.com
**Abstract: Generally, biometric authentication is conducted either by mobile terminals in local-**
processing mode or by public servers in centralized-processing mode. In the former mode, each user
has full control of his/her biometric data, but the authentication service is restricted to local mobile
apps. In the latter mode, the authentication service can be opened up to network applications, but
the owners have no control of their private data. It has become a difficult problem for biometric
applications to provide open and trusted authentication services under user control. Existing approaches address these concerns in ad-hoc ways. In this work, we propose BioShare, a framework
that provides trusted biometric authentication services to network applications while giving users
full control of their biometric data. Our framework is designed around three key principles: each
user has full control of his/her biometric data; biometric data is stored and processed in trusted
environments to prevent privacy leaks; and the open biometric-authentication service is efficiently
provided to network applications. We describe our current design and sample implementation, and
illustrate how it provides an open face-recognition service with standard interfaces, combines terminal trusted environments with server enclaves, and enables each user to control his/her biometric
data efficiently. Finally, we analyze the security of the framework and measure the performance of
the implementation.
**Keywords: biometric authentication; face recognition; trusted executive environment; enclave**
**1. Introduction**
With the prevalence of smartphones and e-commerce, biometric identification technologies are widely used by service providers and are favorably accepted by most consumers.
Fingerprint identification and face recognition have become standard features on most
Android and iOS devices, and payment platforms based on facial recognition are increasingly popular in countries such as China. Meanwhile, privacy protection has become a
serious topic and is attracting comprehensive attention from scholars and governments.
The European Union released the General Data Protection Regulation (GDPR) [1–3] in
2018, the United States issued the California Consumer Privacy Act (CCPA) [4,5] in 2018,
and China published the Personal Information Protection Law (PIPL) [6] in 2021. Google,
Facebook, British Airways, and many other companies were fined for data leakage, unauthorized usage, and other data issues. We are currently faced with the dilemma between
data usage and privacy protection, and we need a technical solution to achieve both goals
within the same framework.
Currently, there are two types of solutions available: local processing solutions and
centralized-processing solutions. The former method is to process and store personal
biometric data in local trusted environments integrated in personal mobile terminals,
and the representative implementations include Apple Touch ID/Face ID and Windows
Hello. Through local trusted computation, each user has full control of his/her biometric
data without the risk of privacy leakage, but authentication services are restricted only
to local mobile apps. As such, the services are secure but not open. The latter method
uses a centralized system to collect and process biometric data in centralized-processing
mode on public servers, and typical application scenarios include many payment platforms
based on facial recognition, as described in Section 2.3. With the help of powerful servers,
service providers can provide open biometric authentication services to various network
applications efficiently. However, when raw biometric data or extracted feature data
are stored by service providers in their database servers, users are put at risk of privacy
leakages due to improper maintenance or malicious attacks. While user biometric data are
fully controlled by service providers without strict regulation and independent audit, users
have no control of their private data and are exposed to the abuse of personal data. In sum,
the first method is secure but not open, while the second method is open but
not secure. We need a framework that provides both secure and open biometric services.
In this paper, we propose BioShare, a framework that provides both secure and open
biometric services. To achieve this goal, we store and process biometric data in secure
environments (terminal TEEs and server enclaves) to ensure security, and we coordinate
the terminals and the servers to provide open services for network applications. Our
framework is designed around three key principles:
**a. Each user has full control of his/her biometric data. Biometric services are pro-**
vided in either forwarding mode or authorized mode. In the forwarding mode, personal
biometric data are stored in the user’s terminals. In the authorized mode, biometric data are
temporarily stored in the server enclave under user authorization, and users can cancel the
authorization at any time. Both modes ensure users have full control of their private data.
**b. Open bioauthentication services are efficiently provided to network applica-**
**tions. We design standard interfaces for biometric services, develop service request handler**
modules running on the server side, and provide open services to network applications.
**c. Biometric data are stored and processed in trusted environments to prevent pri-**
**vacy leaks. On the terminal side, we use Trusted Executive Environments to protect**
biometric data. On the server side, we use enclave technologies to ensure data security. We
build secure channels between the terminal side and the server side using cryptographic
technologies to ensure data security during transmission.
The BioShare framework is designed to provide open and secure biometric services
with secure data storage on both terminal TEEs and server enclaves, and through standard
interfaces for terminal–server interactions. We implemented BioShare with facial
recognition as an example, and measured the performance of the implementation. Based
on the test results, the end-to-end response time is 91 ms to 1011 ms, which meets business
requirements in most human interactive scenarios, and the security strength is higher than
128 bits, which fulfils the recommended standard for business systems.
The rest of the paper is organized as follows: We enumerate related works in Section 2,
describe key parts of our current design in Section 3, illustrate key challenges with a sample
implementation in Section 4, and then measure and analyze the performance in Section 5.
Finally, we discuss several open issues in Section 6 and conclude in Section 7.
**2. Related Works**
_2.1. Biometric Identification on Personal Mobile Devices_
Currently most biometric services on mobile terminals are implemented as local
processing solutions, which is to say that they are secure but restricted to local mobile
apps. Since fingerprint identification was introduced into iPhone 5s smartphones by Apple
in 2013, biometric recognition technologies have undergone exponential development
in the smartphone market. Apple provides biometric services Touch ID [7] and Face
ID [8] on iOS platforms, and Microsoft offers Windows Hello [9] on personal Windows
devices. Many proven biometric technologies are widely integrated into mobile devices:
fingerprint [10,11], face [12–14], iris [15–17], vein [18,19], etc., and many other biometric modalities are under study: gait [20], keystroke [21], handwriting [22], etc. At the
same time, auxiliary methods are used to prevent spoofing and attacks, such as liveness
detection [23–25], 3D recognition [26,27], micro-expression recognition [28,29], and movement detection such as head-shaking or blinking [30,31]. In general, biometric services are
conducted in terminal trusted environments and are provided for device unlocking, online
payment, and bank account verification, etc. However, these services are limited to local
apps and are not available for remote network applications.
_2.2. Trusted Executive Environment_
In local processing solutions, biometric data are processed and stored in local secured
environments to ensure data privacy and service security. Currently, operating systems
for both terminals and servers have become huge and complex because they must support
multiple devices and rich applications. As a result, they have become more and more vulnerable, because system software vulnerabilities depend on code size and complexity [32].
Therefore, current CPUs offer special execution environments isolated from the OS, such
as ARM TrustZone [33–35], Intel SGX [36,37], and RISC-V Keystone [38].
On the terminal market, based on hardware capabilities, terminal manufacturers design and implement native TEE OS software, such as Qualcomm QSEE [39], Google Trusty
TEE [40], and Huawei iTrustee, etc. GlobalPlatform defines GPTEE specifications [41,42]
to resolve compatibility issues between different TEE OSes, and UnionPay releases TEEI
infrastructure [43] to support the coexistence and intercommunication of multiple TEEs
in the same CPU. On the server market, CPU manufacturers release original Software Development Kits (SDKs) for different CPU models, such as Intel SGX SDK [44] and RISC-V
Keystone SDK [45], and some other companies provide unified open-source solutions to
resolve the compatibility issues of different SDKs, such as Microsoft Open Enclave SDK [46],
Google Asylo [47], and Huawei secGear [48], etc.
Currently, TEE technology is used on either the terminal side or the server side to
provide trusted computation environments for local applications; however, no solution
is available to combine the terminal TEEs and server enclaves to form an integrated
trusted environment.
_2.3. Payment Platforms Based on Facial Recognition_
As a typical implementation of centralized-processing solutions, most facial recognition payment systems are open to network applications, but they process user biometric
data in unsecure environments on public servers. Since the first facial recognition payment
system was launched by Uniqul in Finland in 2013 [49], the paying-with-your-face service
is becoming more and more popular in China. Alipay launched smile-to-pay products
named Dragonfly [50], Tencent issued face-payment devices named Frog [51], and UnionPay cooperated with 60+ banks to provide facial payment solutions to merchants [52]. All
of these cases have one thing in common: Users must upload private biometric data to
service providers, and service providers store and process user biometric data in centralized mode on background servers. However, when biometric data are stored by service
providers in their database servers, users are put at risk of privacy leakages due to improper
maintenance or malicious attacks. When biometric data are processed by service providers
without strict regulation and independent audit, users will lose control of their biometric
data and face the abuse of personal data.
**3. BioShare Design**
The goal of BioShare is to provide trusted biometric authentication services to network
applications under users’ full control of personal biometric data. It is designed to achieve:
**a. Self-determination: As the owner of personal biometric data, each user is entitled**
to grant or cancel the authorization to the server at any time, and can specify or change authorization conditions such as a specified number of times, specified time period, whitelist,
or blacklist.
**b. Open Service: Biometric authentication services can be opened up to network appli-**
cations rather than being restricted to local applications, and therefore, standard interfaces
should be defined for the external invocation and internal processing of service requests.
**c. Privacy Protection: All biometric data must be stored and processed in trusted**
environments to prevent privacy leaks, and all critical operations such as encryption and
decryption must be conducted in trusted environments to ensure security.
The following subsections describe how our BioShare design achieves the aforementioned goals. First, we provide an overview of our framework, and then we describe the key
components of our framework: the terminal-side modules, including User Application and
Trusted Application, the server-side modules, including Service Application and Enclave
Module, and the secured channels between both sides.
_3.1. Overview_
We designed a unified and scalable infrastructure for BioShare that combines the capabilities of both the terminals and the server to conduct trusted biometric authentication and
provide open services to various network applications under user authorization. Figure 1
shows a high-level view of the BioShare framework. It consists of five main components:
**a. User Application (UA). BioShare UA is an app deployed on user terminals to**
conduct user-interactive actions, such as user signup, service registration, and authorizing
the server to perform biometric authentication.
**b. Service Application (SA). The SA is designed to handle service requests from**
network applications and calls Trusted Application (TA) or Enclave Module (EM) to conduct
biometric authentication.
**c. Trusted Application (TA). The TA provides trusted biometric services, such as**
biometric recognition and comparison, in trusted executive environments (TEE) on the
terminal side.
**d. Enclave Module (EM). The EM provides trusted biometric services on the server**
side under user authorization.
**e. Secured Communication Module (SCM). The SCM provides secure channels**
between user terminals and the server.
**Figure 1. BioShare Architecture.**
In the design of BioShare, standard interfaces are defined as API libraries for the
external invocation and internal processing of service requests, as shown in Table 1. This
facilitates the development and deployment of new biometric authentication services for
open access under user control. For example, a new iris authentication service can be implemented by replacing the biometric recognition algorithm and making minor changes to the
User Application. Current terminal manufacturers can easily extend their biometric services
to network application scenarios with unified infrastructure and standard interfaces.
**Table 1. Standard Interface for BioShare.**

| Module | API Function | Description |
|---|---|---|
| User Application (UA) | `bool authenticate(int biometricType, byte[] biometricData)` | (Internal Interface) Handle authentication requests from SA by calling the TA function; parameters include biometric type and biometric data |
| Trusted Application (TA) | `bool acquireFeatureData(int biometricType)` | (Internal Interface) Acquire biometric raw data of the current user with hardware sensors, conduct biometric recognition on the raw data, and store feature data in secure storage; parameters include biometric type (such as facial, fingerprint, etc.) |
| Trusted Application (TA) | `bool authenticate(int biometricType, byte[] biometricData)` | (Internal Interface) Conduct biometric comparison between the biometric data input and the feature data of the current user; parameters include biometric type and biometric data |
| Trusted Application (TA) | `byte[] exportFeatureData(int biometricType)` | (Internal Interface) Export the biometric feature data of the current user from the TEE in an encrypted format; parameters include biometric type |
| Service Application (SA) | `bool authenticate(String userId, int biometricType, byte[] biometricData)` | (Open Service Interface) Handle biometric authentication requests from network applications; parameters include user ID, biometric type, and biometric data |
| Service Application (SA) | `bool registerService(String userId, String terminalId, Map<String,String> params)` | (Internal Interface) Handle register-service requests from the User Application; inputs include user ID, terminal ID, and additional parameters |
| Service Application (SA) | `bool authorizeServer(String userId, String terminalId, int biometricType, byte[] featureData, Map<int,object> options)` | (Internal Interface) Handle authorize-server requests from the User Application; parameters include user ID, terminal ID, biometric type, encrypted biometric feature data, and authorization options |
| Enclave Module (EM) | `bool authenticate(String userId, int biometricType, byte[] biometricData)` | (Internal Interface) Conduct biometric comparison between the biometric data input and the stored feature data of the specified user; parameters include user ID, biometric type, and the biometric data input |
| Enclave Module (EM) | `bool importFeatureData(String userId, int biometricType, byte[] encryptedFeatureData)` | (Internal Interface) Import the biometric feature data of a user into the server enclave; parameters include user ID, biometric type, and feature data in an encrypted format |
| Secured Communication Module (SCM) | `byte[] encryptMessage(byte[] plainMsg)` | (Internal Interface) Encrypt a message with the symmetric encryption algorithm; parameters include the plaintext message |
| Secured Communication Module (SCM) | `byte[] decryptMessage(byte[] encryptedMsg)` | (Internal Interface) Decrypt a message with the symmetric encryption algorithm; parameters include the encrypted message |
| Secured Communication Module (SCM) | `byte[] signMessage(byte[] msg)` | (Internal Interface) Sign a message; parameters include the message |
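To make the interface layout concrete, the following is a minimal sketch of the Table 1 signatures written as plain Java interfaces. The method signatures follow the table; the interface names, and the substitution of `Map<Integer, Object>` for the table's `Map<int,object>` (Java generics cannot hold primitives), are illustrative choices rather than BioShare identifiers.

```java
import java.util.Map;

// One interface per BioShare module, mirroring the grouping in Table 1.
interface UserApplication {
    // Handle authentication requests forwarded from SA by calling TA.
    boolean authenticate(int biometricType, byte[] biometricData);
}

interface TrustedApplication {
    // Capture raw data with sensors, run recognition, store feature data.
    boolean acquireFeatureData(int biometricType);
    // Compare input data against the current user's stored feature data.
    boolean authenticate(int biometricType, byte[] biometricData);
    // Export the current user's feature data from the TEE, encrypted.
    byte[] exportFeatureData(int biometricType);
}

interface ServiceApplication {
    // Open service interface exposed to network applications.
    boolean authenticate(String userId, int biometricType, byte[] biometricData);
    boolean registerService(String userId, String terminalId, Map<String, String> params);
    boolean authorizeServer(String userId, String terminalId, int biometricType,
                            byte[] featureData, Map<Integer, Object> options);
}

interface EnclaveModule {
    boolean authenticate(String userId, int biometricType, byte[] biometricData);
    boolean importFeatureData(String userId, int biometricType, byte[] encryptedFeatureData);
}

interface SecuredCommunicationModule {
    byte[] encryptMessage(byte[] plainMsg);
    byte[] decryptMessage(byte[] encryptedMsg);
    byte[] signMessage(byte[] msg);
}
```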
_3.2. User Application (UA)_
A key design principle for BioShare is that the framework should enable each user to
control his/her biometric data handily and to authorize the server efficiently on personal
mobile terminals. We develop a user application and deploy it on user terminals. The UA
handles user commands as illustrated in Algorithm 1. The major functions include:
**User Signup: Each user inputs his/her basic information, including user name, mobile**
phone number, identity card number, etc., and UA verifies the authenticity of data input
and creates an account for the user. Note that each user can log in on multiple
terminals simultaneously with the same account.
**Service Registration: Currently, most mobile terminals are capable of providing bio-**
metric services to local apps. We extend the service object from local applications to
network applications through a service-registration mechanism: a. The user registers terminal biometric capabilities to the server in UA and the server maintains a user-registration
database. b. When the server receives biometric authentication requests for the user from
network applications, it will forward the requests to the user’s terminals through standard
interfaces of UA, listed in Table 1. c. On the terminal side UA coordinates with Trusted
Application (TA) to conduct biometric authentication, and return the result to the server.
The service-registration mechanism successfully expands the biometric capabilities of user
terminals to empower network applications, but the service may sometimes be unstable
due to network issues or terminal power-off.
**Authorizing The Server: BioShare is designed as a general framework for various**
biometric services, including facial authentication, fingerprint authentication, and other
biometric services, so users can choose a specific biometric type and authorize the server to
conduct the biometric authentication. To provide stable services, each user can authorize
the server to conduct biometric authentication with the following process: a. In UA, the
user authorizes the server to conduct specified biometric authentication (such as facial
authentication, fingerprint authentication and so on) with specified options, such as a
specified number of times, specified time period, whitelist or blacklist, etc., and the server
updates authentication information of the user in the user-registration database. b. UA
exports the user’s private biometric data from the terminal TEE and imports it into the
server enclave, and the data will be stored temporarily in the server enclave according to
authorization options. c. When the server receives biometric authentication requests from
network applications, it will call the Enclave Module to conduct biometric authentication
with stored biometric data and return the result. Each user may cancel his/her authorization
anytime and anywhere, and accordingly, his/her private data will be purged from the
server enclave.
**Algorithm 1 Request-Handling Process in User Application**
**Input: The ID Value of the Current User, userId;**
Request Action, requestAction;
Request Data, requestData
**Output: Result (true or false), returnResult**
1: Initialize returnResult with 0
2: if requestAction = UserSignUp then
3: Verify requestData
4: Call TA to acquire biometric data with hardware
5: Create a new account for userId
6: **return true**
7: else if requestAction = ServiceRegistration then
8: Call SA to create new UserRecord into database set
UserRecord.UserId=userId and
UserRecord.TerminalId=requestData.TerminalId
9: **return true**
10: else if requestAction = AuthorizeServer then
11: Call SA to update database set
UserRecord.ServerAuthorized=true and
UserRecord.AuthorizeOptions=requestData.AuthorizeOptions
where UserRecord.UserId=userId and
UserRecord.TerminalId=requestData.TerminalId
12: Call SCM to build a secure channel between TA and EM
13: Transfer Biometric Data of the current user from TA to EM
14: **return true**
15: else if requestAction = AuthenticateUser then
16: Validate the service request
17: Call TA to conduct biometric authentication
18: **return authentication result from TA**
19: end if
20: return returnResult
_3.3. Service Application (SA)_
BioShare achieves the goal of providing open services via the Service Application
running on the server, and Algorithm 2 shows the process flow in detail. SA listens for
biometric authentication requests from remote network applications, verifies the request
data, and checks the user in the user-registration database. If the user has registered the
biometric service, SA will process the request with either of the following modes:
**Forwarding Mode:** If the user does not authorize the server to conduct biometric
authentication, then SA will forward the request to the user terminals, and on the terminal
side, UA coordinates with TA to conduct biometric authentication and returns the result to
the server, as described in step “Service registration” of Section 3.2.
**Authorized Mode: If the user has authorized the server, SA will call Enclave Module**
to conduct biometric authentication and return the result, as described in step “Authorizing
the server” of Section 3.2.
The BioShare Service Application is also designed to handle command requests from
the users, including service-registration requests and authorizing-the-server requests,
which are described in Section 3.2. The server maintains a user-registration database
that records the registration and authorization information for each user.
**Algorithm 2 Process for Handling Service Requests from Network Applications**
**Input: The ID of The Target User, userId**
Biometric Modality Value, biometricType
Biometric Feature/Raw Data, biometricData
**Output: Authentication Result, returnResult**
(0-false/1-true/-1-failed)
1: Initialize returnResult with -1
2: Receive service requests from Network Applications
3: Verify input data
4: Search database for UserRecord where
UserRecord.UserId=userId
5: if UserRecord IS NOT NULL then // user registered
6: **if UserRecord.ServerAuthorized = false then**
//forwarding mode
7: Connect to the terminal with UserRecord.TerminalId
8: Call SCM to build a secure channel to TA through UA
9: Call TA to conduct biometric authentication
10: _returnResult = authentication result of TA_
11: **else// authorized mode**
12: Call EM to conduct biometric authentication
13: _returnResult = authentication result of EM_
14: **end if**
15: end if
16: return returnResult
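A minimal, self-contained Java sketch of this two-mode dispatch follows. The in-memory registry and the two stubbed mode handlers stand in for the real user-registration database, terminal connection, and Enclave Module; all names here are illustrative, not actual BioShare identifiers.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of Algorithm 2: route each request to forwarding or authorized mode.
final class ServiceDispatcher {
    static final class UserRecord {
        final String terminalId;
        boolean serverAuthorized;
        UserRecord(String terminalId) { this.terminalId = terminalId; }
    }

    private final Map<String, UserRecord> registry = new HashMap<>();

    // Stand-in for the service-registration step of Section 3.2.
    void register(String userId, String terminalId, boolean serverAuthorized) {
        UserRecord record = new UserRecord(terminalId);
        record.serverAuthorized = serverAuthorized;
        registry.put(userId, record);
    }

    /** Returns 1 (true), 0 (false), or -1 (failed), as in Algorithm 2. */
    int authenticate(String userId, int biometricType, byte[] biometricData) {
        UserRecord record = registry.get(userId);
        if (record == null) return -1; // user not registered
        if (!record.serverAuthorized) {
            // Forwarding mode: relay to the user's registered terminal,
            // where UA and TA authenticate inside the terminal TEE.
            return forwardToTerminal(record.terminalId, biometricType, biometricData) ? 1 : 0;
        }
        // Authorized mode: the Enclave Module authenticates against feature
        // data temporarily stored in the server enclave.
        return enclaveAuthenticate(userId, biometricType, biometricData) ? 1 : 0;
    }

    private boolean forwardToTerminal(String terminalId, int type, byte[] data) { return false; } // stub
    private boolean enclaveAuthenticate(String userId, int type, byte[] data) { return false; }   // stub
}
```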
_3.4. Trusted Application (TA)_
The trusted application is a module running in terminal TEE that receives calls from
the UA and conducts biometric operations through the standard interfaces listed in Table 1.
Because all actions are performed in the terminal TEE and all private data are stored in
tamper-resistant secure storage, biometric services from TA are secure and trusted.
**Biometric Data Acquisition: TA captures biometric raw data through biometric hard-**
ware integrated into mobile terminals, such as fingerprint devices, cameras, and other
devices. TA calls the hardware driver to control the biometric sensor, the sensor captures raw data and writes it to a memory buffer, and finally, the raw data are copied to
secure memory.
**Biometric Recognition: Biometric recognition transforms biometric raw data into**
feature data in the terminal TEE. A typical process of biometric recognition consists of
three steps: a. Data processing. TA processes the raw data with specified algorithms,
such as biometric detection algorithms. b. Liveness detection. As an option, TA performs
liveness detection with infrared images, RGB images, or depth images, etc. c. Feature
extraction. TA extracts feature data from raw data using a specified model. Taking face
recognition as an example, the typical process includes face detection, liveness detection,
and facial feature extraction.
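As an illustration, the following is a minimal Java sketch of this three-step pipeline; the stage bodies are placeholders for the model-specific algorithms that run inside the TEE, and the method names are illustrative.

```java
// Sketch of the recognition pipeline: detection, liveness check, extraction.
final class RecognitionPipeline {
    byte[] recognize(byte[] rawImage) {
        byte[] processed = detect(rawImage);   // a. data processing / face detection
        if (!isLive(processed)) {              // b. optional liveness detection
            throw new IllegalStateException("liveness check failed");
        }
        return extractFeatures(processed);     // c. feature extraction
    }

    private byte[] detect(byte[] raw) { return raw; }                     // stub
    private boolean isLive(byte[] image) { return true; }                 // stub
    private byte[] extractFeatures(byte[] image) { return new byte[512]; } // stub
}
```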
**Secure Data Storage: Biometric feature data are stored in secure storage in terminal**
TEEs and are protected by hardware isolation to ensure privacy protection, and only trusted
applications (TAs) can obtain permissions to access biometric data in TEE mode. When
biometric data are moved from secure storage to insecure environments, they are encrypted
by the SCM module to ensure data security, as in Section 3.6.
**Biometric Comparison: TA conducts a biometric comparison by calculating the simi-**
larity between source feature values and the target feature values. If the similarity score
exceeds the threshold value, TA returns a success, otherwise it returns a failure. To ensure security, private biometric data are transmitted to TA in encrypted formats, and are
decrypted in the terminal TEE before biometric comparison.
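For illustration, a minimal Java sketch of the comparison step is given below, assuming float feature vectors and cosine similarity; the actual metric and threshold are determined by the deployed recognition model, not by BioShare itself.

```java
// Sketch of feature comparison: cosine similarity against a fixed threshold.
final class FeatureComparator {
    static boolean compare(float[] source, float[] target, float threshold) {
        double dot = 0.0, normSource = 0.0, normTarget = 0.0;
        for (int i = 0; i < source.length; i++) {
            dot += source[i] * target[i];
            normSource += source[i] * source[i];
            normTarget += target[i] * target[i];
        }
        double similarity = dot / (Math.sqrt(normSource) * Math.sqrt(normTarget));
        return similarity >= threshold; // success only above the threshold
    }
}
```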
**Constraints: Terminal TEE is a restricted zone in mobile CPU platforms with limited**
capabilities, and the limitations include CPU speed, maximum RAM, and maximum secure
storage, etc. Table 2 lists the TEE capabilities of typical CPUs. We therefore chose algorithms
and models with low computing and memory costs in the implementation of biometric
recognition and comparison.
**Table 2. TEE Capabilities of Typical CPUs.**

| CPU Model | TEE | CPU Cores | Max RAM | Max Secure Storage |
|---|---|---|---|---|
| Samsung Exynos 1080 | Trustonic | 8 | 100 MB | 5 MB |
| Qualcomm Snapdragon 888 | QSEE | 8 | 100 MB | 16 MB |
| HiSilicon Kirin 9000 | iTrustee | 8 | 48 MB | 12 MB |
_3.5. Enclave Module (EM)_
The Enclave Module is a module running on the server side that conducts biometric
recognition and comparison using the standard interfaces listed in Table 1. During the
user-authorizing process, EM exports private biometric data from Terminal TEEs and stores
it temporarily in the server enclave, as described in Section 3.2. Based on the data, EM can
conduct biometric authentication without the intervention of user terminals. Because all
actions are performed in the server enclave, biometric services provided by EM are secure
and trusted.
**Biometric recognition:** Biometric recognition transforms biometric raw data into
feature data. EM conducts biometric recognition with the same algorithm and the same
steps as TA on the terminal side: data processing, liveness detection, and feature
extraction, as described in Section 3.4.
**Biometric comparison: EM on the server side conducts a biometric comparison with**
the same algorithm as TA on the terminal side, which is described in Section 3.4.
**Privacy Protection: Within BioShare, the security mechanisms for server enclaves are**
used to protect biometric data during both storage and processing. Specifically, the EPC
(Enclave Page Cache) protects all EM code and data in encrypted format during runtime
state, the Sealing/Unsealing mechanism secures EM data in encrypted format during
persistent storage, and the Attestation technology ensures that genuine code runs in secure
enclave environments to avoid data interception and code tampering.
**Constraints: Server enclaves are protected execution environments with restricted**
capabilities, and the restrictions include available CPU cores, maximum RAM, and memory
access latency, etc. Meanwhile, in EM, we have to choose the same algorithm as that of TA
on the terminal side due to the incompatibility of different algorithms.
_3.6. Secured Communication Module (SCM)_
The Secured Communication Module ensures secure and undeniable communication
between user terminals and the server through cryptographic techniques such as symmetric
encryption, asymmetric encryption, and hash and digital signatures. To ensure security,
all encryption/decryption actions are conducted in the trusted environment of either the
server or the terminal. Secured communication channels combine terminal TEEs and the
server enclave, forming an integrated trusted environment for trusted computing.
**Secured Channels:** Before the server and the terminal communicate with each
other, for example, the UA module on the terminal authorizes the server in Section 3.2
and Algorithm 1 and the SA module on the server forwards service requests to the user
terminals in Section 3.3, we use the asymmetric encryption algorithm RSA (with a key length
of 3072 bits) to build secure communication channels between the server and the terminal.
RSA with a key length of 7680 bits provides higher security, and alternative algorithms
include ECC with key lengths of 256/384 bits. First, random numbers are generated in the
server enclave and the terminal TEEs; then, public/private key pairs are generated on both
sides from the random numbers with an asymmetric encryption algorithm (i.e., RSA or ECC).
Finally, private keys are stored in the trusted environments and public keys are exchanged
with each other. Note that asymmetric encryption applies only to small data sets due to its
high computing cost.
**Data Encryption: In BioShare, user biometric data are only processed in secure en-**
vironments, including terminal TEEs and server enclaves; data are transferred in encrypted format outside secure environments and decrypted inside secure environments to
ensure data security and privacy protection. We use the symmetric encryption algorithm
AES-128 to efficiently encrypt/decrypt the biometric data during terminal–server communications. The following are the main steps: a. A random number is generated for each
user by the hardware device in the server enclave. b. A working key is generated for the
user based on the random numbers using the symmetric encryption algorithm (i.e., AES),
and saved in the server enclave. c. The working key is encrypted with the terminal public
key and sent to the user’s terminal. d. The working key is decrypted and stored in the
terminal TEE, so that both sides share the same working key.
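For illustration, the following is a minimal Java sketch of this working-key exchange using standard JCA primitives. Running both sides in one process and choosing OAEP padding are simplifications made for the example; in BioShare, key generation and unwrapping happen inside the server enclave and the terminal TEE, respectively.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.security.KeyPair;
import java.security.KeyPairGenerator;

public final class WorkingKeyExchange {
    public static void main(String[] args) throws Exception {
        // Terminal side: RSA-3072 key pair; the public key is shared with the server.
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(3072);
        KeyPair terminalKeys = kpg.generateKeyPair();

        // Server enclave: generate a random AES-128 working key for the user.
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey workingKey = kg.generateKey();

        // Server enclave: wrap the working key with the terminal's public key.
        Cipher wrap = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
        wrap.init(Cipher.WRAP_MODE, terminalKeys.getPublic());
        byte[] wrappedKey = wrap.wrap(workingKey);

        // Terminal TEE: unwrap with the private key; both sides now share the key.
        Cipher unwrap = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
        unwrap.init(Cipher.UNWRAP_MODE, terminalKeys.getPrivate());
        SecretKey shared = (SecretKey) unwrap.unwrap(wrappedKey, "AES", Cipher.SECRET_KEY);
        System.out.println("Shared key length: " + shared.getEncoded().length * 8 + " bits");
    }
}
```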
**Digital Signatures: To ensure nonrepudiation, we use digital signatures on both the**
server and terminal sides. In the terminal TEE, a message can be signed with the following
steps: a. Calculate the hash value for the target message with the hash algorithm SHA-256.
b. Encrypt the hash value with the user’s private key. c. Send the message along with
the ciphertext to the server. In the server enclave, the message can be verified via the
following steps: a. Decrypt the ciphertext with the user's public key to recover the original
hash value. b. Calculate the hash value for the message with the SHA-256 algorithm.
c. Check whether the two hash values are consistent. Digital signatures are used in the
return values of the SA module as a
response to service requests from network applications as in Section 3.3 and Algorithm 2.
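A minimal Java sketch of this sign/verify flow follows; the JCA primitive SHA256withRSA bundles the hash-then-encrypt steps described above into one operation, and the sample message is purely illustrative.

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public final class SignatureDemo {
    public static void main(String[] args) throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(3072);
        KeyPair userKeys = kpg.generateKeyPair(); // private key stays in the TEE

        byte[] message = "authentication result: true".getBytes(StandardCharsets.UTF_8);

        // Terminal TEE: SHA-256 hash of the message, encrypted with the private key.
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(userKeys.getPrivate());
        signer.update(message);
        byte[] signature = signer.sign();

        // Server enclave: verify the signature with the user's public key.
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(userKeys.getPublic());
        verifier.update(message);
        System.out.println("Signature valid: " + verifier.verify(signature));
    }
}
```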
**4. Sample Implementation**
BioShare is currently implemented with facial recognition as an example, and deployed on Android mobile terminals with TEE communicating with a server with enclaves.
The following sections describe several implementation details and challenges specific to
our current implementation.
_4.1. General Information_
In our implementation, we chose mainstream CPU architectures with dominant TEE
platforms, specifically ARM Cortex-A architecture with ARM TrustZone on the terminal
side, and Intel Xeon E3 CPU with Intel SGX support on the server side. Table 3 lists the
configurations of the hardware and software on each side.
**Table 3. The Configurations of Mobile Terminals and the Server.**

| | Terminal | Server |
|---|---|---|
| CPU | MediaTek MT8788 (ARM Cortex-A) | Intel Xeon E3 |
| RAM | 1.3 GB | 64 GB |
| TEE | ARM TrustZone | Intel SGX |
| OS | Android 9.0 | CentOS Linux 7.4 |
| TEE OS | Nebula TEEI 1.1.0 | - |
_4.2. User Application (UA)_
The BioShare User Application is implemented as an Android app that is deployed on
mobile terminals. The key challenge in the implementation of UA is to maintain a reliable
connection between the terminals and the server in a changing environment.
**TCP Connections: The user application either creates a short TCP/IP connection to**
the server for each call, or keeps a long connection with the server for a certain period.
The former requires lower resource costs, while the latter shows better performance.
**Notification Mechanism: Because Android or iOS kills the application running in the**
background automatically at irregular intervals, the User Application may be unreachable
to the server from time to time. We need a notification mechanism through which the
service application can notify the terminal to relaunch the user application at any time.
Apple Push Notification Service [53], Google Firebase Cloud Messaging [54], and other
third-party products provide reliable notification services for iOS and Android devices.
_4.3. Service Application (SA)_
The BioShare Service Application is implemented as two system service processes
running in the background. One service process is responsible for handling the command
requests from user applications on the terminal side, while the other is responsible for
handling the service requests from network applications. The two processes coordinate
with each other by sharing the same user-registration database.
We have developed two different experiments in which network applications call
the biometric service in either forwarding mode or authorized mode. Experimenters can
currently register terminal biometric capabilities to the server in the User Application
on the terminal side, then the biometric service on the server will run in forwarding
mode, and all service requests from network applications will be forwarded to the user's
registered terminals. Furthermore, experimenters can also authorize the server in the User
Application; accordingly, the biometric service on the server will run in authorized mode
and all service requests from network applications will be handled locally by the Enclave
Module on the server.
_4.4. Trusted Application (TA)_
The BioShare Trusted Application is implemented as a trusted module deployed in
the Trusted Executive Environment on the terminal side. The key challenge is the choice
of the appropriate biometric algorithms and implementations for facial recognition and
comparison, due to the limitations of TEE capabilities. We attempt to choose open-source
implementations with complete functionality, good performance, low computing and memory overheads, embedded-environment support, and appropriate licensing agreements.
**Facial Recognition and Comparison: Currently, many open-source projects for facial**
recognition are available from GitHub, and Table 4 lists some popular projects. In our
implementation, we chose SeetaFace2 because it is built with C++, is independent of
third-party libraries, and is compatible with X86, iOS, and Android systems. We used
pretrained models from the SeetaFace2 project. According to the SeetaFace2 document,
Cascaded-CNN is used as the face-detection algorithm, achieving 92% on the FDDB public
dataset. FEC-CNN is used as the face landmark detection algorithm, achieving a 0.069 average
positioning error on the 300-W Challenge public dataset. ResNet50 is used for facial feature
extraction/comparison with 25 million parameters, and the model has been pretrained
with 33 million photos with an accuracy rate of more than 98% in the general 1:N scenario
on a 1000-person dataset when the error acceptance rate is 1%. Figure 2 shows the flow
chart of the processes.
**Figure 2. The flow chart of face recognition.**
**Table 4. Typical Open-Source Projects for Facial Recognition.**

| Open-Source Project | Recognition | Comparison | X86 | Embedded | Language | License |
|---|---|---|---|---|---|---|
| SeetaFace Engine | Yes | Yes | Yes | Yes | C++ | BSD |
| SeetaFace Engine2 | Yes | Yes | Yes | Yes | C++ | BSD |
| SeetaFace2 | Yes | Yes | Yes | Yes | C++ | BSD |
| FaceBoxes | No | No | Yes | Yes | C++ | BSD |
| libfacedetection | No | No | Yes | Yes | C | BSD |
| OpenCV 4 | Yes | Yes | Yes | No | C++ | BSD |
| RetinaFace | Yes | Yes | Yes | No | Python | MIT |
| deepinsight/insightface | Yes | Yes | Yes | No | Python | MIT |
**Liveness Detection: We chose the open-source project FeatherNets for liveness detec-**
tion in the implementation, because several models provided by the project achieve the
goal of low computing cost (about 80 M FLOPs), small model size (0.35 M parameters),
and complete functionality (applicable to infrared images and depth images, etc.). Table 5
shows the test results of the models within the FeatherNets project.
**Table 5. Test Results of Liveness Detection Models.**

| Model | ACER | TPR@FPR = 1% | TPR@FPR = 0.1% | FLOPS |
|---|---|---|---|---|
| FishNet150 | 0.00144 | 0.999668 | 0.998330 | 6452.72M |
| FishNet150 | 0.00181 | 1.0 | 0.9996 | 6452.72M |
| FishNet150 | 0.00496 | 0.998664 | 0.990648 | 6452.72M |
| MobileNet v2 | 0.00228 | 0.9996 | 0.9993 | 306.17M |
| MobileNet v2 | 0.00387 | 0.999433 | 0.997662 | 306.17M |
| MobileNet v2 | 0.00402 | 0.9996 | 0.992623 | 306.17M |
| MobileLiteNet54 | 0.00242 | 1.0 | 0.99846 | 270.91M |
| MobileLiteNet54-se | 0.00242 | 1.0 | 0.996994 | 270.91M |
| FeatherNetA | 0.00261 | 1.00 | 0.961590 | 79.99M |
| FeatherNetB | 0.00168 | 1.0 | 0.997662 | 83.05M |
| **Ensembled all** | **0** | **1** | **1** | **-** |
_4.5. Enclave Module (EM)_
The BioShare Enclave Module is implemented as trusted code that runs in the Intel SGX
enclave on the server side. For other TEEs, such as ARM TrustZone and RISC-V Keystone,
we would have to develop new code due to incompatibilities between them. To achieve
the goal of "develop once and deploy anywhere", we use the Microsoft Open Enclave SDK [46]
to resolve compatibility issues between different TEEs; other open-source projects,
including Google Asylo [47] and Huawei secGear [48], provide similar functionality.
We have developed invocation interfaces to authenticate both facial image data and
facial feature data, and experimenters can specify either raw data or feature data in service
requests from network applications. We use the open-source code SeetaFace2 to conduct
facial recognition and comparison. If the input is facial feature data, the service request
will be processed with high performance due to the low computing overhead of the feature
comparison algorithm, but this is achieved with low compatibility because the input feature
data must be extracted with the same algorithm as that of the Enclave Module. If the input
is facial image data, the service request will be processed with higher computing costs but
with better compatibility. The experimental results are described in Section 5.2.
_4.6. Secured Communication Module (SCM)_
The BioShare Secured Communication Module is implemented as trusted functions
running in trusted environments on both the terminal and server sides to provide cryptographic services. In our implementation, we choose Advanced Encryption Standard (AES)
as the symmetric encryption algorithm and specify 128 or 192 bits as the key length to
ensure sufficient encryption strength. We select RSA or ECC as the asymmetric encryption
algorithm, specify 3072 or 7680 bits as the key length for RSA, and 256 or 384 bits as the
key length for ECC; and we specify the SHA-256 algorithm in the hash value calculation.
During the entire lifecycle of each transaction, all biometric data are transmitted in
encrypted format in untrusted environments, and decrypted and processed only in trusted
environments; and the result is returned with a digital signature to ensure non-repudiation.
**5. Evaluation and Analysis**
We now evaluate and analyze BioShare in terms of terminal-side performance, server-side performance, communication performance, and holistic security.
_5.1. Overview_
BioShare processes service requests from network applications in either forwarding
mode or authorized mode, and we measure the time overhead of processing a service
request in both modes.
In forwarding mode, the server forwards the service request to the user terminal, as
shown in Figure 3. The total overhead of processing a service request varies from 91 ms
to 1011.5 ms, and the major parts are the request-processing cost and the server-client
communication cost, which are evaluated in Sections 5.2 and 5.4.
In authorized mode, the server processes service requests in the server enclave under
user authorization, as shown in Figure 4. The total overhead of processing a service request
varies from 16.1 ms to 168.6 ms, and the major part is the request-processing cost, which is
evaluated in Section 5.3.
**Figure 3. Overhead Imposed by BioShare in Processing a Service Request with Forwarding Mode.**
**Figure 4. Overhead Imposed by BioShare in Processing a Service Request with Authorized Mode.**
_5.2. Performance Evaluation of Terminal TEE_
We measure the performance of the terminal TEE in conducting facial authentication
upon different data inputs. The overhead of face recognition is 507.1 ms upon facial
raw image input, while the overhead of face comparison reduces to 1.6 ms upon facial
feature data input, as shown in Table 6. Figure 5 shows the cumulative distribution of time
consumption in face recognition, and we can see that the performance of the terminal TEE
is quite stable. We perform further research on each step of face recognition and find that
feature extraction and liveness detection are the most time-consuming steps, accounting
for 70.9% and 22.9%, respectively, as shown in Figure 6.
**Table 6. Overhead of TA in Processing Facial Recognition.**

| Input | Action | Time (ms) |
|---|---|---|
| Facial Image Data | Face Recognition | 507.1 |
| Facial Feature Data | Face Comparison | 1.6 |
**Figure 5. CDF of Time Consumption of Face Recognition.**
We measure the prediction accuracy rate of facial recognition with an actual business
dataset of 329 pre-captured photos with a resolution of 640 × 480 from 110 employees.
In the test, we compare each photo in the dataset with other photos, and the test results
are shown in Table 7. The accuracy rate is 99.937%, the false acceptance rate (FAR) is
0.054%, and the false rejection rate (FRR) is 1.524%, which meets business requirements in
most scenarios.
**Figure 6. Overhead Proportion of Each Step in Facial Recognition.**
**Table 7. Test Results of Facial Recognition and Comparison.**

| Actual Result | Predicted TRUE | Predicted FALSE |
|---|---|---|
| TRUE | 323 (TP) | 5 (FN) |
| FALSE | 29 (FP) | 53599 (TN) |
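For reference, the reported rates follow directly from the confusion matrix above:

```latex
\text{Accuracy} = \frac{TP+TN}{TP+FN+FP+TN} = \frac{323+53599}{53956} \approx 99.937\%,
\qquad
\text{FAR} = \frac{FP}{FP+TN} = \frac{29}{53628} \approx 0.054\%,
\qquad
\text{FRR} = \frac{FN}{TP+FN} = \frac{5}{328} \approx 1.524\%.
```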
We measure the performance difference between the terminal TEE and the server
enclave during facial authentication. To ensure the comparability of experimental results,
we specify that the same algorithmic code runs in a single thread on a single CPU core
in both environments. The evaluation results show that the face-recognition cost of the
terminal TEE is more than three times that of the server enclave, the face-comparison
overhead on both sides is as low as 1–2 ms, and the face-recognition accuracy is equal on
both sides. Please refer to Figure 7 for more details.
**Figure 7. Performance Difference between Terminal TEE and Server Enclave.**
_5.3. Performance Evaluation of Server Enclave_
The key challenge with server-side facial authentication is the performance of the
server enclave, which differs across CPU platforms. Our demo implementation works fine
on the Pentium Silver CPU enclave with four physical cores and 128 MB RAM at most for
low or moderate load scenarios, but more powerful enclave environments with more CPU
cores and higher RAM support are required in high load scenarios. Table 8 lists some key
indicators of enclaves on specific Intel CPU platforms.
**Table 8. Key Indicators of Enclaves on Different CPU Platforms.**

| Key Indicators | Pentium Silver | Xeon E3 | Xeon SP Ice Lake |
|---|---|---|---|
| Maximum Physical Cores | 4 | 6 | 80 |
| Maximum Enclave RAM | 128 MB | 128 MB | 1 TB |
| Enclave Dynamic Memory Management | Yes | No | Yes |
Generally, the performance overhead of server enclaves consists of three major parts:
enclave memory access latency, enclave switching cost, and enclave dynamic-memory-management
cost. Figure 8 lists the evaluation results for the three parts from professional tests [55].
Specifically, on the Intel Xeon Ice Lake platform, the overhead caused by enclave memory
access latency is almost negligible, the enclave switching cost is less than 10,000 CPU cycles,
and the enclave dynamic-memory-management cost is quite high. This performance analysis is
instructive for further optimization of the current implementation.
**Figure 8. Performance Overhead of Enclaves on Intel SGX Platforms: (a) enclave memory access latency; (b) enclave switching cost; (c) cost of allocating enclave memory pages; (d) cost of deallocating enclave memory pages.**
_5.4. Performance Analysis of Mobile Communication_
Most user terminals communicate with the server through short or long TCP/IP
connections over a 4G/5G network, and we measure the end-to-end delays of different
communication methods. Based on the experiment, the average delay is 39.7 ms for
one-way trips between the terminal and the server, long TCP/IP connections have lower
latency than short connections (average 21.1 ms vs. 58.3 ms), and 5G networks show better
performance than 4G (average 27.3 ms vs. 52.1 ms). Figure 9 shows the experimental results
in detail.
**Figure 9. CDF of End-to-End Communication Delays.**
We have performed further research on the performance of the notification mechanism
used in BioShare. We kill the user application on the user terminal, relaunch it with the
notification services [56], and measure the communication delay from the server to the
terminal. Based on the experiment, it takes 415 ms for Aurora Mobile JPush to notify
the terminal from the server, which is close to the reported time delay for Apple Push
Notification Service and Google Firebase Cloud Messaging, as shown in Table 9.
**Table 9. Normal Delay of Notification Services.**

| Provider | Platform | Notification Service | Delay (ms) |
|---|---|---|---|
| Apple | iOS | Apple Push Notification Service | 400 |
| Google | Android | Firebase Cloud Messaging | 500 |
| Aurora Mobile | Android/iOS | Aurora Mobile JPush | 415 |
_5.5. Security Analysis of BioShare_
We evaluate the security level of secure communication channels between the terminal
TEE and the server enclave by analyzing the cryptographic algorithms used in BioShare,
as shown in Table 10. In the implementation, we specify key lengths of 128/192 bits for the
AES algorithm; thus, the security strength of symmetric encryption is 128/192 bits. We specify
key lengths of 3072/7680 bits for the RSA algorithm and key lengths of 256/384 bits for the ECC
algorithm, and therefore, the security strength of asymmetric encryption is 128/192 bits. We use
SHA-256 as the hash algorithm, and the security strength is 256. In conclusion, the security
strength of secure communication channels is 128/192, which fulfils and even exceeds the
values recommended by the National Institute of Standards and Technology (NIST) [57].
The terminal TEE empowers the Trusted Application with secure hardware, secure
OS, secure storage, and secure provisioning, etc. The server enclave empowers the Enclave
Module with isolated execution, encrypted RAM, secure storage, sealing and attestation
mechanisms, etc. The TEEs on both terminal and server sides provide trusted environments
to process and store biometric data at a high security level, and Table 11 lists some security
properties of trusted environments.
**Table 10. Security Strength of Cryptographic Algorithms of BioShare.**

| Category | NIST Recommendations (Key Length) | BioShare Algorithm | Key Length | Security Strength |
|---|---|---|---|---|
| Symmetric encryption | AES 128+ | AES-128 | 128 | 128 |
| Symmetric encryption | AES 128+ | AES-192 | 192 | 192 |
| Asymmetric encryption | RSA 2048+ | RSA | 3072 | 128 |
| Asymmetric encryption | RSA 2048+ | RSA | 7680 | 192 |
| Asymmetric encryption | ECC 224+ | ECC | 256 | 128 |
| Asymmetric encryption | ECC 224+ | ECC | 384 | 192 |
| Hash | SHA 224+ | SHA-256 | 256 | 256 |
**Table 11. Security Properties of Terminal TEE and Server Enclave.**

| Security Properties | Terminal TEE (ARM TrustZone) | Server Enclave (Intel SGX) |
|---|---|---|
| Isolated Execution | Yes | Yes |
| Encrypted RAM | No | Yes |
| Secure Storage | Yes | Yes |
| Remote Attestation | Yes | Yes |
| Secure Provisioning | Yes | Yes |
| Privileged Software Attack Defense | Yes | Yes |
| Trusted Path | Yes | No |
**6. Discussion and Future Work**
This paper makes the case for biometric authentication as a trusted service under
users’ full control, and demonstrates how we addressed several challenges in making
this service secure, practical, and efficient. Our solution reconciles the demand for open
services and the requirement for privacy security, and shows comparative advantages
over the currently available solutions. Table 12 lists the details of the comparisons. When
compared to terminal local-processing solutions, such as Apple Touch ID and Windows
Hello, as shown in Section 2.1, our solution extends open biometric authentication services
from local mobile apps to remote network applications with high security levels. When
compared to centralized-processing solutions such as facial recognition payment systems,
as listed in Section 2.3, BioShare can avoid both unintentional leakages and the intentional
abuse of biometric data by keeping private data in an encrypted format within secure
storage, and running trusted code in isolated secure environments. Based on our test
results, the BioShare implementation meets the performance requirements of most human-interactive businesses and fulfills the security requirements of recommended standards for
business systems.
We now discuss important challenges that are outside the focus of this paper.
**TEE Fragmentation: BioShare relies on terminal TEEs and server enclaves to handle**
service requests; however, the TEE market as a whole is currently fragmented. In both terminal and server markets, different manufacturers offer different TEE solutions that are incompatible with each other, such as QSEE/iTrustee on the terminal side and SGX/TrustZone
on the server side, and therefore, we have to develop native code for each manufacturer,
and even for each system model. This tremendously increases the development costs
and implementation complexity for large-scale applications of BioShare, and we need a
universal standard and interface protocol for terminal TEEs as well as server enclaves.
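As a rough illustration of the kind of unification called for here, a thin vendor-neutral facade would let application code remain unchanged across TEEs. The interface below is hypothetical: `TeeBackend`, `seal`, `unseal`, and `attest` are illustrative names, not an existing standard or vendor API.

```python
from abc import ABC, abstractmethod

class TeeBackend(ABC):
    """Hypothetical vendor-neutral facade over terminal TEEs (TrustZone,
    QSEE, iTrustee) and server enclaves (SGX, Keystone)."""

    @abstractmethod
    def seal(self, data: bytes) -> bytes:
        """Encrypt data so that only this TEE instance can recover it."""

    @abstractmethod
    def unseal(self, blob: bytes) -> bytes:
        """Recover previously sealed data inside the TEE."""

    @abstractmethod
    def attest(self, challenge: bytes) -> bytes:
        """Produce an attestation report binding the challenge to the TEE."""

# Application code would depend only on TeeBackend; supporting a new
# manufacturer would mean writing one adapter, not forking the application.
```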
**Table 12. Comparison of BioShare with Local-Processing and Centralized-Processing Solutions.**

| Objectives | Local-Processing Solutions | Centralized-Processing Solutions | BioShare |
| --- | --- | --- | --- |
| Each user has full control of his/her biometric data | Yes (biometric data are stored in the local secure storage of private mobile terminals) | No | Yes (each user can authorize the server to conduct biometric authentication or cancel the authorization at any time) |
| Open biometric services | No (the service is restricted to local mobile apps) | Yes (the service is open to network applications) | Yes (the service is open to network applications) |
| Biometric data storage | Secure storage (terminal TEEs) | Nonsecure storage (public databases) | Secure storage (terminal TEEs and server enclaves) |
| Biometric data processing environment | Trusted environment (terminal TEEs) | Nontrusted environment (public servers) | Trusted environment (terminal TEEs and server enclaves) |
| Biometric data transmission | Not applicable (local processing) | Secure/unsecure | Secure channels; the data are transmitted in encrypted format |
| Prevent data leakage | Yes | No | Yes |
| Resisting data interception and code tampering attacks | Yes (attestation mechanism of terminal TEE) | No | Yes (attestation mechanisms of terminal TEEs and server enclaves) |
| Resisting memory dump attacks | No | No | Yes (enclave memory protection mechanism) |
| Resisting privileged code attacks | Yes (isolated execution mechanism of terminal TEEs) | No | Yes (isolated execution mechanisms of terminal TEEs and server enclaves) |
**Network Attacks: BioShare opens up terminal capabilities to network applications,**
meaning that the terminals are faced with various network attacks, such as replay attacks
and man-in-the-middle attacks. In addition, the server connects to numerous terminals
through TCP/IP connections, and therefore is put at risk of cyber-attacks such as impersonation attacks and distributed denial of service attacks. We need to design defense
technology and systems against various network attacks on user terminals, as well as the
BioShare server.
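One standard building block of such a defense is replay rejection. The sketch below (illustrative, not BioShare code; the nonce cache and freshness window are assumptions) shows the idea:

```python
import time

SEEN_NONCES: dict = {}   # nonce -> time first seen
FRESH_WINDOW = 30.0      # seconds a request timestamp is considered fresh

def accept_request(nonce: str, timestamp: float) -> bool:
    """Reject stale or previously seen requests (replay defense)."""
    now = time.time()
    # Drop expired entries so the cache stays bounded.
    for n in [n for n, t in SEEN_NONCES.items() if now - t > FRESH_WINDOW]:
        del SEEN_NONCES[n]
    if abs(now - timestamp) > FRESH_WINDOW:  # outside the freshness window
        return False
    if nonce in SEEN_NONCES:                 # replayed request
        return False
    SEEN_NONCES[nonce] = now
    return True
```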
**Business Model: BioShare relies on users to install the User Application on individual**
terminals, raising the question of whether there are proper incentives for users to do so. We
need a business model for BioShare to ensure that all participants can benefit from service
provision. One possible solution is to charge a fee for each service request from network
applications, and reward the user with a certain proportion of the income.
**7. Conclusions**
In this paper, we make the case for a unified framework that provides trusted biometric
authentication services to network applications under the user’s full control of personal
biometric data. Our demo implementation of BioShare provides: (a) authorized and
forward service modes to ensure users’ full control of personal data, (b) standard interfaces
to ensure service openness to network applications, (c) trusted computation to prevent
privacy leaks on both terminal and server sides, (d) secure channels to ensure data security
during transmission, and (e) a unified framework to ensure effective collaboration between
user terminals and the server. We showed that our system is efficient, based on standard
interfaces, and provides a unified terminal–server collaboration mode to support new
biometric services under user control. As part of our future work, we are designing
standard TEE solutions to facilitate the development of new biometric service experiments,
developing defense technology and systems against various network attacks, and exploring
business models to ensure that each user benefits from participation.
**Author Contributions: Conceptualization, Q.S. and J.W.; methodology, Q.S.; investigation, Q.S. and**
W.Y.; formal analysis, Q.S. and W.Y.; resources, Q.S. and J.W.; validation, Q.S. and W.Y.; visualization,
Q.S.; writing—original draft preparation, Q.S. and W.Y.; writing—review and editing, Q.S. and J.W.
All authors have read and agreed to the published version of the manuscript.
**Funding: This work was supported in part by the Program of Shanghai Academic/Technology**
Research Leader under Grant 19XD1433700.
**Institutional Review Board Statement: Not applicable.**
**Informed Consent Statement: Not applicable.**
**Data Availability Statement: Not applicable.**
**Acknowledgments: The National Engineering Laboratory for Electronic Commerce and Electronic**
Payment provided the experimental environment for the model validation in this article, and we are
thankful for suggestions from academician Chai Hongfeng and manager Chen Chengqian for model
improvements.
**Conflicts of Interest: The authors declare no conflict of interest.**
**References**
1. [General Data Protection Regulation. Available online: https://gdpr-info.eu/ (accessed on 24 February 2022).](https://gdpr-info.eu/)
2. Gobeo, A.; Fowler, C.; Buchanan, W.J. GDPR and Cyber Security for Business Information Systems. In GDPR and Cyber Security
_[for Business Information Systems ; River Publishers: Gistrup, Denmark, 2020; pp. i–xix. [CrossRef]](http://doi.org/10.1201/9781003338253)_
3. Layton, R.; Elaluf-Calderwood, S. A Social Economic Analysis of the Impact of GDPR on Security and Privacy Practices. In
Proceedings of the 2019 12th CMI Conference on Cybersecurity and Privacy (CMI), Copenhagen, Denmark, 28–29 November
[2019; pp. 1–6. [CrossRef]](http://dx.doi.org/10.1109/CMI48017.2019.8962288)
4. [Bonta, R. California Consumer Privacy Act (CCPA). Available online: https://oag.ca.gov/privacy/ccpa (accessed on 24 February 2022).](https://oag.ca.gov/privacy/ccpa)
5. Stallings, W. Handling of Personal Information and Deidentified, Aggregated, and Pseudonymized Information under the
[California Consumer Privacy Act. IEEE Secur. Priv. 2020, 18, 61–64. [CrossRef]](http://dx.doi.org/10.1109/MSEC.2019.2953324)
6. [Personal Information Protection Law. Available online: http://www.npc.gov.cn/npc/c30834/202010/569490b5b76a49c292e64c4](http://www.npc.gov.cn/npc/c30834/202010/569490b5b76a49c292e64c416da8c994.shtml)
[16da8c994.shtml (accessed on 24 February 2022).](http://www.npc.gov.cn/npc/c30834/202010/569490b5b76a49c292e64c416da8c994.shtml)
7. [About Touch ID Advanced Security Technology. Available online: https://support.apple.com/en-us/HT204587 (accessed on 24](https://support.apple.com/en-us/HT204587)
February 2022).
8. [About Face ID Advanced Technology. Available online: https://support.apple.com/en-us/HT208108 (accessed on 24 February 2022).](https://support.apple.com/en-us/HT208108)
9. [Learn about Windows Hello and Set It up. Available online: https://support.microsoft.com/en-us/windows/learn-about-](https://support.microsoft.com/en-us/windows/learn-about-windows-hello-and-set-it-up-dae28983-8242-bb2a-d3d1-87c9d265a5f0)
[windows-hello-and-set-it-up-dae28983-8242-bb2a-d3d1-87c9d265a5f0 (accessed on 24 February 2022).](https://support.microsoft.com/en-us/windows/learn-about-windows-hello-and-set-it-up-dae28983-8242-bb2a-d3d1-87c9d265a5f0)
10. Xu, Y.; Lu, G.; Lu, Y.; Zhang, D. High resolution fingerprint recognition using pore and edge descriptors. Pattern Recognit. Lett.
**[2019, 125, 773–779. [CrossRef]](http://dx.doi.org/10.1016/j.patrec.2019.08.006)**
11. Jo, Y.H.; Jeon, S.Y.; Im, J.H.; Lee, M.K. Security analysis and improvement of fingerprint authentication for smartphones. Mob. Inf.
_[Syst. 2016, 2016, 8973828. [CrossRef]](http://dx.doi.org/10.1155/2016/8973828)_
12. Cavazos, J.G.; Phillips, P.J.; Castillo, C.D.; O’Toole, A.J. Accuracy comparison across face recognition algorithms: Where are we
on measuring race bias? arXiv 2019. arXiv:1912.07398.
13. Rana, A.; Ciardulli, A. Identity verification through face recognition, Android smartphones and NFC. In Proceedings of the
[World Congress on Internet Security (WorldCIS-2013), London, UK, 9–12 December 2013; pp. 162–163. [CrossRef]](http://dx.doi.org/10.1109/WorldCIS.2013.6751039)
14. Baqeel, H.; Saeed, S. Face detection authentication on Smartphones: End Users Usability Assessment Experiences. In Proceedings
of the 2019 International Conference on Computer and Information Sciences (ICCIS), Aljouf, Saudi Arabia, 3–4 April 2019; pp. 1–6.
[[CrossRef]](http://dx.doi.org/10.1109/ICCISci.2019.8716452)
15. Hongo, K.; Takano, H. Personal Authentication with an Iris Image Captured Under Visible-Light Condition. In Proceedings of the
2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Miyazaki, Japan, 7–10 October 2018; pp. 2266–2270.
[[CrossRef]](http://dx.doi.org/10.1109/SMC.2018.00389)
16. Ali, S.A.; Shah, M.A.; Javed, T.A.; Abdullah, S.M.; Zafar, M. Iris recognition system in smartphones using light version (LV)
recognition algorithm. In Proceedings of the 2017 23rd International Conference on Automation and Computing (ICAC),
[Huddersfield, UK, 7–8 September 2017; pp. 1–6. [CrossRef]](http://dx.doi.org/10.23919/IConAC.2017.8082011)
17. Raja, K.B.; Raghavendra, R.; Busch, C. Smartphone based robust iris recognition in visible spectrum using clustered K-means
features. In Proceedings of the 2014 IEEE Workshop on Biometric Measurements and Systems for Security and Medical
[Applications (BIOMS) Proceedings, Rome, Italy, 17 October 2014; pp. 15–21. [CrossRef]](http://dx.doi.org/10.1109/BIOMS.2014.6951530)
18. [Garcia-Martin, R.; Sanchez-Reillo, R. Vein Biometric Recognition on a Smartphone. IEEE Access 2020, 8, 104801–104813. [CrossRef]](http://dx.doi.org/10.1109/ACCESS.2020.3000044)
19. Garcia-Martin, R.; Sanchez-Reillo, R. Deep Learning for Vein Biometric Recognition on a Smartphone. IEEE Access 2021, 9,
[98812–98832. [CrossRef]](http://dx.doi.org/10.1109/ACCESS.2021.3095666)
20. Zou, Q.; Wang, Y.; Wang, Q.; Zhao, Y.; Li, Q. Deep Learning-Based Gait Recognition Using Smartphones in the Wild. IEEE Trans.
_[Inf. Forensics Secur. 2020, 15, 3197–3212. [CrossRef]](http://dx.doi.org/10.1109/TIFS.2020.2985628)_
21. Coakley, M.J.; Monaco, J.V.; Tappert, C.C. Keystroke biometric studies with short numeric input on smartphones. In Proceedings
of the 2016 IEEE 8th International Conference on Biometrics Theory, Applications and Systems (BTAS), Niagara Falls, NY, USA,
[6–9 September 2016; pp. 1–6. [CrossRef]](http://dx.doi.org/10.1109/BTAS.2016.7791181)
22. Kutzner, T.; Ye, F.; Bönninger, I.; Travieso, C.; Dutta, M.K.; Singh, A. User verification using safe handwritten passwords on
smartphones. In Proceedings of the 2015 Eighth International Conference on Contemporary Computing (IC3), Washington, DC,
[USA, 20–22 August 2015; pp. 48–53. [CrossRef]](http://dx.doi.org/10.1109/IC3.2015.7346651)
23. Daniel, N.; Anitha, A. A Study on Recent Trends in Face Spoofing Detection Techniques. In Proceedings of the 2018 3rd
International Conference on Inventive Computation Technologies (ICICT), Coimbatore, India, 15–16 November 2018; pp. 583–586.
[[CrossRef]](http://dx.doi.org/10.1109/ICICT43934.2018.9034361)
24. Chen, H.; Chen, Y.; Tian, X.; Jiang, R. A Cascade Face Spoofing Detector Based on Face Anti-Spoofing R-CNN and Improved
[Retinex LBP. IEEE Access 2019, 7, 170116–170133. [CrossRef]](http://dx.doi.org/10.1109/ACCESS.2019.2955383)
25. Simanjuntak, G.D.; Nur Ramadhani, K.; Arifianto, A. Face Spoofing Detection using Color Distortion Features and Principal
Component Analysis. In Proceedings of the 2019 7th International Conference on Information and Communication Technology
[(ICoICT), Kuala Lumpur, Malaysia, 24–26 July 2019; pp. 1–5. [CrossRef]](http://dx.doi.org/10.1109/ICoICT.2019.8835343)
26. Luo, J.; Hu, F.; Wang, R. 3D Face Recognition Based on Deep Learning. In Proceedings of the 2019 IEEE International Conference
[on Mechatronics and Automation (ICMA), Tianjin, China, 4–7 August 2019; pp. 1576–1581. [CrossRef]](http://dx.doi.org/10.1109/ICMA.2019.8816269)
27. Xu, K.; Wang, X.; Hu, Z.; Zhang, Z. 3D Face Recognition Based on Twin Neural Network Combining Deep Map and Texture. In
Proceedings of the 2019 IEEE 19th International Conference on Communication Technology (ICCT), Xi’an, China, 16–19 October
[2019; pp. 1665–1668. [CrossRef]](http://dx.doi.org/10.1109/ICCT46805.2019.8947113)
28. Li, Y.; Huang, X.; Zhao, G. Joint Local and Global Information Learning With Single Apex Frame Detection for Micro-Expression
[Recognition. IEEE Trans. Image Process. 2021, 30, 249–263. [CrossRef] [PubMed]](http://dx.doi.org/10.1109/TIP.2020.3035042)
29. Yao, L.; Xiao, X.; Cao, R.; Chen, F.; Chen, T. Three Stream 3D CNN with SE Block for Micro-Expression Recognition. In
Proceedings of the 2020 International Conference on Computer Engineering and Application (ICCEA), Guangzhou, China, 27–29
[March 2020; pp. 439–443. [CrossRef]](http://dx.doi.org/10.1109/ICCEA50009.2020.00101)
30. Suzaki, K.; Shimizu, K.; Oguchi, K. Feasible Personal Identification by Eye Blinking Using Wearable Device. In Proceedings of
the 2019 IEEE 15th International Colloquium on Signal Processing Its Applications (CSPA), Batu Feringghi, Malaysia, 8–9 March
[2019; pp. 266–269. [CrossRef]](http://dx.doi.org/10.1109/CSPA.2019.8696045)
31. Anjos, A.; Chakka, M.M.; Marcel, S. Motion-based counter-measures to photo attacks in face recognition. IET Biom. 2013,
_[3, 147–158. [CrossRef]](http://dx.doi.org/10.1049/iet-bmt.2012.0071)_
32. Alhazmi, O.; Malaiya, Y. Quantitative vulnerability assessment of systems software. In Proceedings of the Annual Reliability and
[Maintainability Symposium, 2005. Proceedings, Alexandria, VA, USA, 24–27 January 2005; pp. 615–620. [CrossRef]](http://dx.doi.org/10.1109/RAMS.2005.1408432)
33. [Arm TrustZone Technology. Available online: https://developer.arm.com/ip-products/security-ip/trustzone (accessed on 24](https://developer.arm.com/ip-products/security-ip/trustzone)
February 2022).
34. Cerdeira, D.; Santos, N.; Fonseca, P.; Pinto, S. SoK: Understanding the Prevailing Security Vulnerabilities in TrustZone-assisted
TEE Systems. In Proceedings of the 2020 IEEE Symposium on Security and Privacy (SP), Francisco, CA, USA, 17–21 May 2020;
[pp. 1416–1432. [CrossRef]](http://dx.doi.org/10.1109/SP40000.2020.00061)
35. [Pinto, S.; Santos, N. Demystifying Arm TrustZone: A Comprehensive Survey. ACM Comput. Surv. 2019, 51, 1–36. [CrossRef]](http://dx.doi.org/10.1145/3291047)
36. Costan, V.; Lebedev, I.; Devadas, S. Secure Processors Part I: Background, Taxonomy for Secure Enclaves and Intel SGX Architecture;
Now Publishers Inc.: Delft, The Netherlands, 2017.
37. Costan, V.; Lebedev, I.; Devadas, S. Secure Processors Part II: Intel SGX Security Analysis and MIT Sanctum Architecture; Now Publishers Inc.: Delft, The Netherlands, 2017.
38. Lee, D.; Kohlbrenner, D.; Shinde, S.; Asanovi´c, K.; Song, D. Keystone: An Open Framework for Architecting Trusted Execution
Environments. In Proceedings of the Fifteenth European Conference on Computer Systems (EuroSys’20), Heraklion, Greece,
[27–30 April 2020; Association for Computing Machinery: New York, NY, USA, 2020. [CrossRef]](http://dx.doi.org/10.1145/3342195.3387532)
39. [Mobile Security Solutions Secure Mobile Technology | Qualcomm. Available online: https://www.qualcomm.com/products/](https://www.qualcomm.com/products/features/mobile-security-solutions)
[features/mobile-security-solutions (accessed on 24 February 2022).](https://www.qualcomm.com/products/features/mobile-security-solutions)
40. Trusty TEE Android Open Source Project. [Available online: https://source.android.com/security/trusty (accessed on 24](https://source.android.com/security/trusty)
February 2022).
41. [GlobalPlatform Specifications Archive—GlobalPlatform. Available online: https://globalplatform.org/specs-library/ (accessed](https://globalplatform.org/specs-library/)
on 24 February 2022).
42. Suzaki, K.; Nakajima, K.; Oi, T.; Tsukamoto, A. Library Implementation and Performance Analysis of GlobalPlatform TEE Internal
API for Intel SGX and RISC-V Keystone. In Proceedings of the 2020 IEEE 19th International Conference on Trust, Security and
[Privacy in Computing and Communications (TrustCom), Guangzhou, China, 10–13 November 2020; pp. 1200–1208. [CrossRef]](http://dx.doi.org/10.1109/TrustCom50675.2020.00161)
43. Chai, H.; Lu, Z.; Meng, Q.; Wang, J.; Zhang, X.; Zhang, Z. TEEI—A Mobile Security Infrastructure for TEE Integration. In
Proceedings of the 2014 IEEE 13th International Conference on Trust, Security and Privacy in Computing and Communications,
[Beijing, China, 24–26 September 2014; pp. 914–920. [CrossRef]](http://dx.doi.org/10.1109/TrustCom.2014.121)
44. Intel[®] Software Guard Extensions (Intel[®] [SGX). Available online: https://www.intel.com/content/www/us/en/architecture-](https://www.intel.com/content/www/us/en/architecture-and-technology/software-guard-extensions.html)
[and-technology/software-guard-extensions.html (accessed on 24 February 2022).](https://www.intel.com/content/www/us/en/architecture-and-technology/software-guard-extensions.html)
45. [Keystone. An Open Framework for Architecting TEEs. Available online: https://keystone-enclave.org/ (accessed on 24 February 2022).](https://keystone-enclave.org/)
46. [Open Enclave SDK. Available online: https://openenclave.io/sdk/ (accessed on 24 February 2022).](https://openenclave.io/sdk/)
47. [Asylo: An Open, Flexible Framework for Enclave Applications. Available online: https://asylo.dev/ (accessed on 24 February 2022).](https://asylo.dev/)
48. [secGear. Available online: https://gitee.com/src-openeuler/secGear (accessed on 24 February 2022).](https://gitee.com/src-openeuler/secGear)
49. [Vrankulj, A. Uniqul Launches Facial Recognition Payment System. Available online: https://www.biometricupdate.com/201307](https://www.biometricupdate.com/201307/uniqul-launches-facial-recognition-payment-system)
[/uniqul-launches-facial-recognition-payment-system (accessed on 24 February 2022).](https://www.biometricupdate.com/201307/uniqul-launches-facial-recognition-payment-system)
50. [About AliPay Facial Payment. Available online: https://opendocs.alipay.com/open/20180402104715814204/intro (accessed on](https://opendocs.alipay.com/open/20180402104715814204/intro)
24 February 2022).
51. [About WeChat Facial Payment. Available online: https://pay.weixin.qq.com/wiki/doc/wxfacepay/ (accessed on 24 February 2022).](https://pay.weixin.qq.com/wiki/doc/wxfacepay/)
52. [About UnionPay Facial Payment. Available online: https://cn.unionpay.com/upowhtml/cn/templates/newInfo-nosub/9ed2b7](https://cn.unionpay.com/upowhtml/cn/templates/newInfo-nosub/9ed2b7ea4873410186ae96112fccfc7d/20191211194221.html)
[ea4873410186ae96112fccfc7d/20191211194221.html (accessed on 24 February 2022).](https://cn.unionpay.com/upowhtml/cn/templates/newInfo-nosub/9ed2b7ea4873410186ae96112fccfc7d/20191211194221.html)
53. Notifications Overview—Apple Developer. [Available online: https://developer.apple.com/notifications/ (accessed on 24](https://developer.apple.com/notifications/)
February 2022).
54. [Firebase Cloud Messaging. Available online: https://firebase.google.com/docs/cloud-messaging/ (accessed on 24 February 2022).](https://firebase.google.com/docs/cloud-messaging/)
55. Ant-Techfin. [Performance Test Results of Intel SGX. Available online: https://my.oschina.net/u/4587334/blog/5014463](https://my.oschina.net/u/4587334/blog/5014463)
(accessed on 24 February 2022).
56. [About JPush. Available online: https://docs.jiguang.cn/jpush/guideline/intro/ (accessed on 24 February 2022).](https://docs.jiguang.cn/jpush/guideline/intro/)
57. [Giry, D. Keylength—NIST Report on Cryptographic Key Length and Cryptoperiod (2020). Available online: https://www.](https://www.keylength.com/en/4/)
[keylength.com/en/4/ (accessed on 24 February 2022).](https://www.keylength.com/en/4/)
-----
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.3390/app122110782?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.3390/app122110782, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "GOLD",
"url": "https://www.mdpi.com/2076-3417/12/21/10782/pdf?version=1667369371"
}
| 2,022
|
[] | true
| 2022-10-25T00:00:00
|
[] | 18,439
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/00d4ca5c46ab55cd82aff4fffd52b182c9a4ca34
|
[
"Computer Science"
] | 0.887592
|
Trust-based hexagonal clustering for efficient certificate management scheme in mobile ad hoc networks
|
00d4ca5c46ab55cd82aff4fffd52b182c9a4ca34
|
Sādhanā
|
[
{
"authorId": "40066145",
"name": "V. Janani"
},
{
"authorId": "143894255",
"name": "M. Manikandan"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": null,
"id": null,
"issn": null,
"name": null,
"type": null,
"url": null
}
| null |
DOI 10.1007/s12046-016-0545-0
# Trust-based hexagonal clustering for efficient certificate management scheme in mobile ad hoc networks
## V S JANANI[*] and M S K MANIKANDAN
Department of Electronics and Communication Engineering, Thiagarajar College of Engineering,
Madurai 625015, India
e-mail: jananivs@tce.edu (*for correspondence)
MS received 17 December 2015; revised 18 March 2016; accepted 20 April 2016
Abstract. The wireless and dynamic nature of mobile ad hoc networks (MANETs) renders them vulnerable to security attacks, yet providing an implicit security mechanism in such an ad hoc environment has been a major challenge. Certificate management plays an important role in securing an ad hoc network, and the certificate assignment, verification, and revocation complexity associated with the Public Key Infrastructure (PKI) framework is significantly large. The smaller the network region, the lower the certificate management complexity; however, smaller regions raise the overall infrastructural cost and increase the number of redundant certificates due to multiple certificate assignment at boundary regions, which in turn hampers prompt and accurate certificate revocation. Taking these conflicting requirements into consideration, we propose the trust-based hexagonal clustering for efficient certificate management (THCM) scheme to secure a MANET. In contrast to existing clustering techniques, we present a hexagonal geographic clustering model based on the Voronoi technique, in which trust is accomplished. In particular, to compete against attackers, we initiate a certificate management strategy in which certificate assignment, verification, and revocation are carried out efficiently. The performance of THCM is evaluated by both simulation and empirical analysis in terms of the effectiveness of the revocation scheme (with respect to revocation rate and time), security, and communication cost. Besides, we conduct a mathematical analysis of the parameters obtained from the two platforms over multiple runs. The results demonstrate that our design efficiently guarantees a secured mobile ad hoc network.
Keywords. Clustering; certificate management; MANET; security; trust; Voronoi.
## 1. Introduction

Ensuring an efficient security mechanism in a dynamic communication system is quite a challenging operation. In a MANET, no distinct part is dedicated to supporting any specific functionality individually, with reliable routing being an eminent example. For decades, many routing protocols have been introduced for the mobile ad hoc environment to achieve efficient secure routing, especially in a multicast geocast region. These protocols differ in their approaches to finding routes between nodes in the network. A location-based multicast (LBM) protocol for secured route discovery was introduced by Ko and Vaidya [1]. In the LBM protocol, the route is discovered by utilizing location information from the Global Positioning System (GPS). This protocol reduces the overhead of route discovery by limiting the search for a new route and confines the attackers within the ad hoc network, thus securing the communication.

Moreover, to manage issues other than secured routing, such as authentication, privacy, integrity, and other security services, the Public Key Infrastructure (PKI) was introduced. Over the past several years, the PKI framework has become well established for securing applications on MANETs, owing to its effectiveness in providing security in the form of digital signatures and certificate management. In traditional PKI-based approaches, a centralized trusted certificate authority (CA) provides certificates for the nodes in a network; that is, certificates for each node are signed by CAs and managed by the PKI system, by which the nodes are authenticated during communication. Researchers have identified various security concerns when a PKI system is used for security applications. These concerns include: (a) computational complexity, which affects computational cost, and (b) PKI management, including certificate management. Moreover, such a certificate-based PKI strategy is difficult to apply in MANETs because of their self-organizing and infrastructure-less property. Therefore, deploying such a PKI-based communication system where geographical or terrestrial constraints demand it (such as battlefields, emergencies, and disaster areas) is difficult.
An ad hoc network is exposed to many kinds of attacks, and so it is difficult to ensure secure communication. To protect the legitimate nodes from these attacks, the vulnerability of the system should be addressed in ad hoc networks. This can be achieved through the use of an efficient certificate management scheme that conveys trust in the PKI. Certificate Management (CM) is considered a crucial task that promises trust in the PKI. An efficient security solution for CM should comprise two main factors: assignment and revocation. A large body of research has addressed these areas to provide a promising solution for security issues in MANETs. Certificate Revocation (CR) is an integral mechanism in certificate management, which enlists and removes the certificate of a node that has been identified as launching an attack. If a node is found to be compromised or misbehaving, it should be denied all activities and removed from the network. Certificate Revocation Lists (CRLs) are the mechanism through which revocation information is propagated in a PKI framework. A CRL is a list, signed by the CA, of all the certificates that have been revoked. It is therefore considered a main challenge of certificate revocation to revoke the certificates of malicious nodes promptly and accurately. In addition, the size of CRLs is important in a PKI system.
However, there are several drawbacks in extending a PKI communication system to ad hoc communications. Some of them are:

In a traditional flat PKI system, a CA maintains the certificate authorization and a complete CRL for the cluster. A single CA issues all of the certificates within a cluster. This list is passed on to the cluster head (CH) to dispatch the certificates to the nodes. Such a structure can be prone to delay, and maintaining such an infrastructure may add to the infrastructural cost to a large extent. Issuing network-wide certificates to the nodes may lead to resource underutilization; we may need to restrict a node's usage of communication resources to a certain region only, for example, the region where it has been registered.

Revocation checking can be problematic in this structure: since all of the revoked certificates in the network are listed in a single CRL, the number of entries on that CRL can become quite large. Further, there are cases where malicious nodes and their certificates can no longer be revoked in a timely manner.

A large CRL takes significant bandwidth to download and consumes significant computational resources on the CA to check the revocation status of a particular node. Also, the amount of revocation information that can be stored at a CA is limited by the memory available at the CA.
Therefore, it is clear that the complexity of the PKI system and the size of the CRL have to be minimized with prompt and accurate certificate management in order to make PKI-based security viable for MANET security deployment. In this pursuit, we make the following contributions in this paper:

(1) A certificate assignment strategy is introduced for MANETs in order to reduce the complexity of managing the PKI-based security framework. A cluster region-based certification approach is established, where the entire network is partitioned into several geographical clusters provided by the LBM protocol. (2) To avoid dynamic communication droppings, the nodes in the boundary region of any particular geographical cluster are assigned multiple certificates corresponding to the current region as well as several other regions in its vicinity, which in turn reduces the size of CRLs. (3) Inspired by Bellur's certificate assignment strategies in [2], to reduce the size of CRLs further, the CA tailors the expiry time of each certificate to the distance of the node from the cluster region. (4) Using Voronoi Diagrams (VDs), the optimal strategy for multiple certificate assignment is resolved. We assume the clusters to be hexagonal for geometric simplification, as presented by Chang and Wang [3] and Zhuang et al [4].

This paper is structured as follows. Section 2 describes the work related to certificate management in MANETs. Section 3 describes the proposed region-based certificate method. Section 4 presents the operations of nodes with region-specific certificates. An analytic model for the size of the CRL and the communication cost is derived in section 5, followed by the performance evaluation and simulations in section 6, and empirical analysis in section 7. The concluding remarks appear in section 8.
## 2. Related works
In recent years, researchers have focused on MANET
security issues as done by Fan et al [5].
It is difficult to provide a complete security solution to
mobile networks due to the wireless connectivity, dynamic
topology, and infrastructure-less features. To cope with
uncertain nodes, clustering techniques have been widely
applied in MANET by Cao and Hadjicostic [6], Chau et al
[7], Cheng et al [8], Kao et al [9], and Mohamed and
Abdelfettah [10]. Many clustering algorithms in the ad hoc
network were investigated by Abdelhak et al [11], Khalid
et al [12], and Mohd. Junedul Haque [13]. With the
objective to reduce the distance calculation complexities of
uncertain nodes, an important structure in computational
geometry named Voronoi diagrams are applied for wireless
application as proposed by Fan et al [14] and Kao et al [9].
Stojmenovic et al [15] introduced a distributed algorithm to
compute the Voronoi region of each node. A general
algorithm to reduce flooding ratio in routing within a
Voronoi network was presented by Kao et al [9]. The
topology control and routing in a wireless ad hoc network
was done by Ngai et al [16] using Voronoi design and
delaunary triangulation. To increase the spatial reuse, the
network areas are clustered into congruent polygons with
Voronoi geometric features. A hexagonal spatial geometric
distribution of nodes was introduced by Zhuang et al [4].
This partitioning technique has been shown to increase the network capacity and throughput of the network. It was proven
that the regular hexagons have flexibility to be partitioned
into smaller hexagonal shapes and grouped together to form
larger ones.
Most of the previous studies on MANETs have simply assumed that nodes are cooperative. As an effective mechanism to address issues in node cooperation, trust has been highly recommended in recent research, as done by Renu et al [17] and Manju and Yudhvir Singh [18]. Jingwei
and David [19] quantified trust relationships with the risk in
a PKI system. A fully trust-based PKI approach for ad hoc
networks was presented by the authors Liu et al [20],
Ferdous et al [21], Cho et al [22], and Wei et al [23]. This
approach proved to eliminate security vulnerabilities to a
large extent with maximized performance characteristics.
The performance issues in trust management protocols
were addressed by Ing-Ray Chen et al [24] with minimized
trust bias and maximized application performance.
To provide a trade-off between cryptographic security and vulnerability, MANET applications necessitated protocols for multicast conditions. In wireless networks, numerous studies have been carried out on multicast routing and routing protocols by Deering et al [25] and Mohammad M Qabajeh et al [26]. Kanchan and Asutkar [27] applied clustering, encryption, and cryptography techniques to improve the performance of these routing protocols. The dynamic movement of nodes in such an environment made these existing protocols inept, which drives the need for an improved multicast flooding approach. To reduce route-establishment overhead and to improve the performance of routing protocols, a location information-based approach was introduced. Here, a flooding algorithm was used to handle the duplication mechanism so that each destination receives at least a single copy of the original message. A location-based approach (LBM) proposed by Ko and Vaidya [1] described the flooding algorithm in a wireless topology, which used physical location information obtained from the GPS.
To provide security for the legitimate nodes against attackers, many certificate revocation schemes have been proposed for PKI networks and military ad hoc environments. Jormakka and Jormakka [28] presented a certificate revocation scheme designed for a semi-ad hoc military and civilian network to prevent fake certificate revocations.
A survey on certificates in distributed systems from the year 2000 onward was done by Yki and Mikko [29]. Wei Liu et al [30] and Mohammad and Javad [31] studied a cluster-based certificate revocation scheme that quickly revokes malicious certificates and recovers falsely accused certificates in distributed networks. Mohamed M E A Mahmoud et al [32] carried out a study on revoking certificates in a pseudonymous PKI system in which certified key pairs were assigned to maintain privacy in each node.
To ensure the validity of certificates in a PKI system, a validation technique was proposed by Mohammad Masdari et al [33, 34], in which the trust level of the CA for each node was considered. The certificate revocation method CCRVC,
presented by Liu et al [35], handles attacker nodes. CCRVC
revoked malicious nodes to solve false accusation. URSA
proposed by Luo et al [36] implemented a novel ticket
certification process that used tickets to recognize and to
grant access to well-behaved nodes. This scheme maximized the service availability with a distributed and localized mechanism. Later, Taisuke et al [37] considered the
complexities of Certificate Dispersal Problem in a tree
structure where the problem was solved in polynomial time.
Certificate management with trust in a PKI framework
has been used as a security mechanism for attack handling.
CR scheme was presented by Park et al [38] and Raya et al
[39] to identify and remove certificate of those nodes that
were detected as attacker node. This scheme provided
security of the network by revoking the compromised or
misbehaved nodes. The revocation scheme by Park et al
supported a cluster-based network. The cluster head performed the necessary revocation action of removing the
nodes in black list in this scheme. Mawloud Omar et al [40]
addressed the constraints in node mobility while designing
a reliable certificate system. The authors proposed a
recovery protocol based on web-of-trust where the nodes
themselves issue and manage the public key certificates. A
short and safe certificate chain was selected in order to
reduce the communication overhead and resist attacks.
Nevertheless, there are certain flaws in the existing certificate management mechanisms when utilizing a PKI-based communication system in a mobile environment. Inefficient deployment of a revocation scheme adds to the resource utilization as well as the communication cost. Owing to the absence of a fixed topology, providing promising security to the mobile nodes in a MANET is difficult to achieve. We propose an efficient trust-based hexagonal clustering for certificate management (THCM) strategy for use in mobile networks, to secure MANETs and to reduce the complexities of the PKI-based security system. To partition the uncertain nodes of a MANET, Voronoi-based clustering is performed on hexagon-structured polygons to reduce the region-overlapping drawbacks that occur with traditional clustering shapes. Trust-based hexagonal clustering is incorporated in our scheme, where CH selection is performed with a high trust degree. Considering the communication cost and certificate management complexities, optimal sizes of regions are calculated.
## 3. Proposed system design
This section provides a detailed description of our proposed
certificate management scheme that significantly reduces
the complexity of the PKI system. We begin with
partitioning of the network into different geographical
regions with a trust-based clustering approach. The proposed certificate assignment and revocation mechanism is
implemented in each such geographic cluster that provides
secure intra-clustering and inter-clustering communication.
## 3.1 Proposed clustering technique
There have been several clustering strategies proposed in
literature. In an uncertain clustering (UC) model, it has
been assumed that a node or a point ‘ni’ should be located
inside a (closed) region with a probability density function
(PDF) to describe the distribution of nodes within a region.
The uncertain point clustering has been performed with
different methods such as K-means, UK-means, pruning,
Min-Max BB, partial ED, and so on. To compute the
closeness of the node and the cluster representative, different methods based on mean, Euclidean distance, and
probability have been in practice. However, these traditional clustering techniques of uncertain nodes increase the
computational complexities and communication cost in
mobile environment, especially in mobile ad hoc networks.
To construct a highly desirable uncertain clustering cell in
MANET, we propose to use VD-based clustering in which
the clustering issues are managed considering the drawbacks of existing UC methods.
In a MANET, a VD is used to partition the network into clusters based on Euclidean distances to nodes in a specific subset of the plane. A Voronoi diagram represents the region of influence around each of a given set of nodes. This geometric structure partitions the entire plane into polygonal cells, called Voronoi polygons, formed with respect to $n$ nodes in the plane. It is widely used since it offers an efficient solution for point location. In recent years this structuring concept has been widely used for exploring location- and routing-based issues. The Voronoi partition or cluster for a given set of nodes is unique and produces polygons that are route connected. A Voronoi polygon is traditionally constructed as follows:

$$V(x_i) = \{\, y \mid d(x_i, y) \le d(x_j, y),\ i \ne j \,\} \qquad (1)$$

where $V(x_i)$ is the Voronoi polygon of $x_i$, $x_i$ is a node, $y$ ranges over the set of points closer to $x_i$, $d(x_i, y)$ is the distance between point $y$ and $x_i$, and $d(x_j, y)$ is the distance between point $y$ and $x_j$.
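For illustration, a minimal Python sketch of the assignment rule in (1) follows; the coordinates and node positions are invented for the example. Each point $y$ is assigned to the node $x_i$ at minimum Euclidean distance:

```python
import math

# Voronoi cell test following (1): a point y belongs to the polygon of the
# node x_i that minimizes the Euclidean distance d(x_i, y).
def voronoi_cell_of(y, nodes):
    """Return the index i of the node whose Voronoi polygon contains y."""
    return min(range(len(nodes)), key=lambda i: math.dist(nodes[i], y))

nodes = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.0)]  # illustrative node positions
print(voronoi_cell_of((1.0, 0.5), nodes))      # -> 0 (closest to the first node)
```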
Our clustering technique consists of two steps: 1. Cluster
construction 2. Cluster head selection.
3.1a Cluster construction: In the first step, Voronoi clusters (VCs) are constructed on a set of nodes $N = \{n_1, n_2, \ldots, n_k\}$ with a distance function $d : S^m \times S^m \to S$ ($m$-dimensional space) giving the distance $d(x, y) \ge 0$ between any nodes $x, y \in S^m$. The VD partitions the space $S^m$ into $k$ cells with cluster representatives $C = \{c_1, c_2, \ldots, c_k\}$ with the property mentioned by Cao and Hadjicostis [6] as

$$d(x, c_i) < d(x, c_j) \quad \forall x \in V(c_i),\ c_i \ne c_j. \qquad (2)$$

Figure 1. Voronoi-hexagonal clustering in MANET.

In the second step, the distance between the nodes and a cluster representative (a node) is calculated. The Voronoi partitioning of a network can be of any polygonal shape and, for its beneficial geometrical characteristics, we assume that the uncertainty region of $N_i$ is a regular hexagon with nodes whose centers are equidistant from each other with distance $d$ and radius $r$, where $r > 0$. The hexagonal clustering partitions a larger area into adjacent, nonoverlapping areas and can be subdivided into smaller hexagons. Nodes join to form hexagonal clusters, and each cluster consists of a CH and cluster members, as shown in figure 1. The distance $d(a, b)$ between nodes in a MANET plays an important role in determining the network performance. We shall assume that the nodes of the ad hoc network are independent and randomly distributed in the hexagonal structure. The edges of the hexagonal polygon are perpendicular to the line joining a node with another in $N$. Considering the radii, for any query point $p$, (2) can be written as

$$d(p, c_i) - d(p, c_j) = r_i + r_j. \qquad (3)$$

If two nodes overlap, then the distance $d(n_i, n_j) < r_i + r_j$ and (3) becomes unreal, which means the edges cannot be found and we consider the cluster as empty.

The hexagonal cluster construction in the MANET is illustrated in Algorithm 1. The expected region of each node $n_i$ is initialized as the whole space (step 2). The VC edges and the corresponding neighboring regions of $n_i$ are then computed for each node $n_j$ (steps 4 and 5), where the neighboring region $X_{n(m)}$ is the region on one side of the cluster cell edge $E_{n(m)}$ and $|E|$ is the empty set. The VD for cluster construction considers an expected region of node $n_i$ and the neighboring region of VC edge $E_{n(m)}$. The expected region of $n_i$, denoted by $Er_i$, is the intersection of all the internal regions; that is,

$$Er_i = \bigcap_{j = 1, \ldots, |V|;\ j \ne i} X_{n(m)}. \qquad (4)$$
The clustering polygon can be generated by excluding all the neighboring regions from the domain space. The overlapped regions are reduced to generate the expected region $Er_i$ (step 6). For each node $n_j$, we verify that the expected region lies inside a Minimum and Maximum Region Bounding (MinMax-RB) of the domain space; MinMax-RB is the minimum or maximum region with sides perpendicular to the principal axes of $S^m$ that encloses a finite region. If so, the node $n_j$ is then assigned to a cluster.

Algorithm 1. Hexagonal Voronoi cluster construction.

Let us consider the six equilateral triangles of a regular hexagon. For the calculation we take a single equilateral triangle $\Delta OAF$. A circle with center $c_n$ and radius $r_n$ is assumed to intersect $\Delta OAF$, as in figure 2. On spatial decomposition, the region that does not contain the hexagonal region is considered as the neighboring region $N_{n(m)}$, and the region where the area of the circle and the neighboring region overlap is the overlap region $O_i$ (i.e., $O_i(x, y) = O_1 + O_2 + O_3$).

Figure 2. Equilateral triangle $\Delta OAF$ of the hexagonal cluster intersected by a circle (overlap regions $O_1$, $O_2$, $O_3$).

The probability of the expected region $Er_i$ in a hexagonal cluster with area $A$ and $(x, y)$ as the coordinates of any random node is given as

$$P_{Er_i} = \frac{1}{A^2} \iint \left[\, \pi r_n^2 - \sum_{i=1}^{6} O_i(x, y) \,\right] dx\, dy. \qquad (5)$$
Considering the six identical triangular regions, (5) simplifies to

$$P_{Er_i} = \frac{\pi r_n^2}{A} - \frac{6}{A^2} \iint O_i(x, y)\, dx\, dy. \qquad (6)$$

3.1b Cluster head selection: In a MANET, the nodes join or leave the cluster dynamically and thus CH selection is difficult. We consider a distributed cluster head selection procedure with $n$ nodes, which are within $h$ hops distance in a cluster. It is much easier to select an efficient mechanism to establish security if a trust relationship among the nodes is obtainable for every cooperating node. Hence, to provide secured communication among cooperative nodes, it is important to calculate the trust and distrust degrees of the nodes in the network. The trust of a node can be defined as the probability of belief of a trustor ($t$) in a trustee ($s$), varying from 0 (complete distrust) to 1 (complete trust). The probability of trust and distrust of the trustor on information ($i$) sent by the trustee with context to belief ($b$) is given in (7) and (8), as presented by Jingwei and David [19]:

$$\mathrm{TD}(t, s, i, b) = P\big[\mathrm{belief}(t, i) \mid \mathrm{madeBy}(i, s, b) \wedge \mathrm{beTrue}(b)\big] \qquad (7)$$

$$\mathrm{DTD}(t, s, i, b) = P\big[\mathrm{belief}(t, \neg i) \mid \mathrm{madeBy}(i, s, b) \wedge \mathrm{beTrue}(b)\big]. \qquad (8)$$

To measure the trust degree explicitly in an ad hoc environment, we present a trust calculation method with an uncertainty degree; with this, a high level of trust can be achieved for secured communication. The certainty of nodes in a MANET is considered as the summation of the trust and distrust degrees. Consequently, the uncertainty degree (UD) by Jingwei and David [19] is defined as

$$\mathrm{UD}(t, s, i, b) = 1 - \text{certainty of nodes}. \qquad (9)$$

An important factor that affects the trust level of a node is the Encounter History (EH), which specifies the number of successive interactions between the trustor and the trustee in a network. Initially, we assume EH is greater than or equal to 0. The degree of a successive encounter $x$ made by the trustee on the trustor may be either positive (represented as $e_p(x)$) or negative (represented as $e_n(x)$). The trust and the distrust level of any node can be measured with the relations shown in (10) and (11):

$$\mathrm{TD}(t, s, i, b) = \frac{\sum_{x=1}^{n} e_p(x)}{\mathrm{EH}} \qquad (10)$$

$$\mathrm{DTD}(t, s, i, b) = \frac{\sum_{x=1}^{n} e_n(x)}{\mathrm{EH}}. \qquad (11)$$

Therefore, (9) becomes

$$\mathrm{UD}(t, s, i, b) = 1 - \left( \frac{\sum_{x=1}^{n} e_p(x)}{\mathrm{EH}} + \frac{\sum_{x=1}^{n} e_n(x)}{\mathrm{EH}} \right). \qquad (12)$$

Here, to evaluate the trust, we consider three cases of the uncertainty degree, i.e., $\mathrm{UD} = 0$, $0 < \mathrm{UD} < 1$, and $\mathrm{UD} = 1$, as shown in figure 3.

Figure 3. Trustability.

When the uncertainty degree is low ($\mathrm{UD} = 0$), the nodes are highly trustable. This highly certain case shows that the trustor is very confident in the trustee. If the uncertainty degree varies from low to high ($0 < \mathrm{UD} < 1$), the trustor may not have sufficient confidence in the trustee. On the other hand, a highly uncertain case occurs when the uncertainty degree $\mathrm{UD} = 1$; at this state the trustor may be completely unfamiliar with the trustee.

The node with the highest trust degree, that is, $\mathrm{UD} = 0$ and $\mathrm{TD} = 1$, is considered as the CH, initially at time $T_1$. As time progresses, the topology changes frequently in a MANET, which varies the cluster nodes and the cluster heads. Hence, the cluster head selection procedure is adaptable to changes in topology. The trust value of each node is recomputed and the CH is selected by comparing the current CH ($\mathrm{CH}_{curr}$) with the previous CH ($\mathrm{CH}_{pre}$) and location ($\mathrm{LOC}_{pre}$).

The nodes with trust degree between 0 and 1 (that is, $0 < \mathrm{UD} < 1$) undergo a distrust test to reduce the rate of risk. On comparison of the trust degree and the distrust degree of such nodes, they are either revoked or considered as cluster members; that is, the nodes with the highest distrust degree ($\mathrm{DTD} = 1$, or $\mathrm{DTD} > \mathrm{TD}$ and $\mathrm{UD} = 1$) are revoked, and the remaining nodes are retained as cluster members. This trust-based cluster head selection eliminates a certain amount of risk in communication within the network. The detailed cluster head selection process is shown in flow chart 1.
Flow chart 1. Trust-based cluster head selection process.

To perceive the exact location information of any node, each node in the network is enabled with a position identification system. Our proposed scheme makes intensive use of the clusters as well as the geographic location information.
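To make the selection rule concrete, the following sketch (illustrative only; it assumes every encounter is recorded simply as positive or negative) computes TD, DTD, and UD per (10)-(12) and applies the decision logic of flow chart 1:

```python
# Trust bookkeeping per (10)-(12) and the decision logic of flow chart 1.
# Encounters are recorded as True (positive, e_p) or False (negative, e_n).
def trust_metrics(encounters):
    eh = max(len(encounters), 1)                      # encounter history EH
    td = sum(1 for e in encounters if e) / eh         # trust degree, eq. (10)
    dtd = sum(1 for e in encounters if not e) / eh    # distrust degree, eq. (11)
    return td, dtd, 1.0 - (td + dtd)                  # uncertainty, eq. (12)

def classify(encounters):
    td, dtd, ud = trust_metrics(encounters)
    if ud == 0 and td == 1:          # fully trusted node: CH candidate
        return "cluster head"
    if dtd == 1 or dtd > td:         # fails the distrust test: revoke
        return "revoked"
    return "cluster member"

print(classify([True, True, True]))   # -> cluster head
print(classify([True, False, True]))  # -> cluster member
print(classify([False, False]))       # -> revoked
```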
3.1c Geocast clusters: We use the geographical position-based routing scheme of Ko and Vaidya [1] to improve the efficiency of routing in a multicast environment. LBM assumes the availability of GPS for obtaining the location information essential for routing in the hexagonal clusters. Limiting the search area for finding a path reduces control overhead and increases bandwidth utilization in LBM, making it suitable for mobile networks. LBM uses two approaches for flooding control packets in a geographic cluster, namely, multicast tree and multicast flooding.
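As a rough sketch of the zone-limiting idea (the rectangular zone and the margin below are illustrative simplifications, not the exact LBM forwarding zone):

```python
# Forwarding-zone filtering in the spirit of LBM: a node re-floods a packet
# only if its GPS position lies inside the zone spanned by source and
# destination, expanded by a small margin.
def in_forwarding_zone(pos, src, dst, margin=1.0):
    x_lo, x_hi = sorted((src[0], dst[0]))
    y_lo, y_hi = sorted((src[1], dst[1]))
    return (x_lo - margin <= pos[0] <= x_hi + margin and
            y_lo - margin <= pos[1] <= y_hi + margin)

print(in_forwarding_zone((3, 2), (0, 0), (5, 5)))  # -> True: forward
print(in_forwarding_zone((9, 9), (0, 0), (5, 5)))  # -> False: drop
```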
## 3.2 Certificate authority
In certificate management, the nodes have to obtain valid digital certificates from the CA before taking part in the communication. A trusted third party, the CA, is deployed in the cluster-based scheme to enable nodes to preload certificates. The CA distributes and manages certificates for all nodes within the cluster. The validity of a certificate can be verified by ensuring that the certificate is neither expired nor revoked by the CA. In the proposed cluster-based CM scheme, certificates are assigned depending on the location of each node. Each node is constrained to use only the certificate corresponding to its current geographic location and to discard messages that are not appended with a certificate assigned to that particular node. In addition to this, the nodes in the boundary regions are assigned multiple certificates corresponding to several clusters in their vicinity, in advance, making roaming between adjacent regions flexible. The multiple certificates assigned to a node can be derived from the same key pair (public-private keys) for simplicity.
In the mobile environment, the CRL used by the CA is concatenated with a time stamp as an indication of its updates. This list enumerates the status of the digital certificate of every node, that is, the date the certificate was issued, the entity that issued it, and the reason for revocation of the certificate. When a node attempts to access the cluster, the CA allows or denies access based on the CRL entry for that particular node.
3.2a Region-based CRL concept: To reduce the potential network and computational overhead raised by large CRLs and to improve the revocation efficiency, a scheme for partitioning the CRL into several smaller lists has been in practice. The partitioning of the CRL is transparent to all the nodes, and for each certificate, the available information indicates the segment that should be consulted. In our CRL model, the network is segmented based on the geographic information. The certificates assigned to all the nodes in a particular geographic cluster A are mapped to a CRL Register Head represented by CRLRegNo(A). The proposed system constrains the nodes to append signed messages with the certificate corresponding to their present geographically partitioned cluster. All nodes in a given cluster, therefore, append signed messages using the certificate that belongs to the CRL Register Head of that particular cluster. For example, the nodes in cluster A append signed messages with certificates that have the same CRL Identity (ID), CRLRegNo(A). During verification of a received message, a node in cluster A obtains the CRL corresponding to its current cluster, represented by CRLRegNo(A), from the CA. In addition, the nodes will discard signed messages that are appended with certificates other than those of the current location. When a node moves closer to the boundary of a neighboring cluster B, it accepts signed messages appended using certificates corresponding to cluster B in addition to its own cluster A. Such nodes receive the CRL Identity of B, represented by CRLRegNo(B), issued by the CA. This proposed strategy of CRL partitioning minimizes the CRL size to a great extent and hence the communication costs enormously.
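A minimal sketch of the per-region lookup follows; the cluster names and certificate IDs are invented for the example:

```python
# Region-partitioned revocation lookup: each cluster has its own small CRL
# segment, identified by its CRL Register Head.
crl_registers = {
    "CRLRegNo(A)": {"cert-17", "cert-42"},  # revoked certificates of cluster A
    "CRLRegNo(B)": {"cert-90"},             # revoked certificates of cluster B
}

def is_revoked(cert_id, cluster):
    """Consult only the per-cluster segment, never a network-wide CRL."""
    return cert_id in crl_registers.get(f"CRLRegNo({cluster})", set())

print(is_revoked("cert-42", "A"))  # -> True
print(is_revoked("cert-42", "B"))  # -> False (checked against B's segment only)
```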
To reduce the size of CRLs further, we consider the expiry time of certificates. It is proposed that the CA set the expiry time of certificates assigned to nodes of different geographic clusters to be inversely proportional to the distance between the current cluster region and the home cluster region of the node; that is, the certificates expire as the node moves away from its home cluster region. This eliminates direct revocation of such expired certificates, which reduces the size of the CRL. Let the distance between positions $p$ and $q$ be $\mathrm{Dist}(p, q)$ and the boundary of A be $\mathrm{Bound}(A)$. Then,

$$\mathrm{Dist}(N_i, A) = \min_{p \in \mathrm{Bound}(A)} \big[\mathrm{Dist}(\mathrm{GPS}(N_i), p)\big]. \qquad (13)$$

If the node moves closer to the boundary of cluster A, then $\mathrm{Dist}(N_i, A) < \mathrm{MaxiRange}$, where MaxiRange is the maximum range of a cluster. Likewise, if a node is said to be in the center of a cluster, then $\mathrm{Dist}(N_i, A) > \mathrm{MaxiRange}$. It is assumed that a geographic cluster B is a neighbor of A if there exist positions $p$ and $q$ with $\mathrm{Dist}(p, q) < \mathrm{MaxiRange}$.
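The following sketch illustrates (13) and the distance-tailored lifetime. The sampled boundary points, the base lifetime, and the specific inverse law $1/(1+d)$ are illustrative assumptions; the scheme only states inverse proportionality:

```python
import math

def dist_to_cluster(node_pos, boundary_points):
    """Eq. (13): minimum distance from the node's GPS fix to points sampled
    on Bound(A)."""
    return min(math.dist(node_pos, p) for p in boundary_points)

def expiry_lifetime(node_pos, boundary_points, base_lifetime=3600.0):
    """Certificate lifetime shrinking with distance from the home cluster."""
    d = dist_to_cluster(node_pos, boundary_points)
    return base_lifetime / (1.0 + d)   # farther from home -> expires sooner

boundary = [(0, 0), (10, 0), (10, 10), (0, 10)]  # sampled boundary of cluster A
print(expiry_lifetime((5, 12), boundary))  # shorter lifetime than a node near A
```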
## 4. Certificate management strategy
The MANET environment can be organized into clusters of several shapes, such as circles, rectangles, and hexagons. To gain an advantage in faster searching speed and to have successive search patterns overlap, we consider the clusters to be regular hexagons. We assume that the nodes and the CA in the network are knowledgeable about the clustering as well as the CRL partitioning. To track the physical location of each node in the cluster, the LBM protocol used in THCM updates the geographic information of each node whenever required. Moreover, nodes can be identified even before they are about to migrate from their current cluster location to a neighboring cluster in their vicinity.
## 4.1 Functionalities of certificate management
When a hexagonal geographic cluster is organized, the nodes in a particular cluster place a request to the CA to assign certificates for authenticated participation in the communication. After verification, the CA responds with multiple certificates corresponding to the current location as well as the neighboring locations of the nodes, as shown in figure 4.

Our proposed THCM operates in different phases.

Initialization Phase: During this phase, each node sends a request ($\mathrm{CER}_{Req}$) for certificate assignment to the CA. The requests sent by the nodes are signed using their private keys.

Verification Phase: Upon receiving the request, the CA first verifies the message using the public key attached to the request. From the GPS information received with the request, the CA determines the cluster in which the node is currently located and its neighboring clusters.

Assignment Phase: The CA responds to the node with multiple certificates ($\mathrm{CER}_{Res}$) corresponding to its current as well as neighboring locations. Besides, the CA responds with the CRLs corresponding to the different geographic locations of the nodes.
The functionalities of the proposed certificate management scheme in each hexagonal cluster are described as follows.

Figure 4. Certificate request and assignment.

To send a message: To begin a secure communication, each node in a hexagonal geographic cluster should send signed messages that are appended with the corresponding certificates from the CRL. A node signs the hash of the message with its private key. This signed message is then appended with the certificate corresponding to the geographic location.
To receive a message: This is an important functionality of certificate management, where messages are verified and processed. It includes the following operations (see figure 1).

Verification: A node that receives messages verifies three main elements, namely the certificate, its validity, and the signature.

Sender's certificate: The certificate of the sender is verified first to analyze whether it belongs to the current geographic cluster (A) or its neighboring region (B). If the sender's certificate belongs to a cluster region other than A or B, the message is discarded.

Verify validity: If the sender's certificate corresponds to either A or B, it is further verified to check its validity, that is, that it has not expired and has not been revoked. The certificate is discarded if it is expired. Further, the revoked status of the certificate is determined from the appropriate CRL; that is, if the sender's certificate corresponds to cluster A, it is specified by CRLRegNo(A), and if it corresponds to cluster B, it is specified by CRLRegNo(B). The certificate is discarded if it is proved revoked by the above verification.

Signature: The signature of the message is verified, whether it is received from the current or a neighboring geographic cluster.

Accept message: If a message passes all the above verification procedures, it is accepted.
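The receive-side checks can be summarized as follows; this is a schematic sketch, and the certificate fields and the `verify_signature` helper are assumptions:

```python
from dataclasses import dataclass

@dataclass
class Certificate:
    region: str      # geographic cluster the certificate was issued for
    expired: bool
    serial: int

def verify_signature(message, cert) -> bool:
    return True  # placeholder for the real signature check

def receive(message, cert: Certificate, home: str, neighbor: str, crl: dict) -> bool:
    # 1. The sender's certificate must belong to the current cluster A
    #    or the neighboring cluster B.
    if cert.region not in (home, neighbor):
        return False
    # 2. Validity: the certificate must be neither expired nor listed in the
    #    CRL of its region (CRLRegNo(A) or CRLRegNo(B)).
    if cert.expired or cert.serial in crl.get(cert.region, set()):
        return False
    # 3. Signature check, for both current and neighboring clusters.
    return verify_signature(message, cert)

crl = {"A": {101, 102}, "B": {205}}
print(receive("hello", Certificate("B", False, 206), "A", "B", crl))  # True
print(receive("hello", Certificate("B", False, 205), "A", "B", crl))  # False (revoked)
```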
Re-organize: When a node in cluster A is identified as moving closer to the boundary of a neighboring cluster B, the CRL register number is reorganized. In addition to the certificate of its current cluster location, the node accepts new signed messages that are appended with the certificate corresponding to the new cluster region. It also acquires the CRL of cluster B, represented as CRLRegNo(B), issued by the CA. For example, when a node N1 of cluster A moves closer to the boundary of cluster B, N1 accepts the CRL of B (CRLRegNo(B)) in addition to the CRLRegNo(A) of cluster A. This re-organize functionality of the proposed system maximizes the availability of services to each node and the resilience against attacks.

Request for new messages: When a node identifies that the certificate corresponding to the neighboring cluster it will probably visit in the near future is about to expire, the node sends a request to the CA for new certificates. The request for a new set of certificates and the assignment reply from the CA are performed in the three phases described above: the initialization phase, the verification phase, and the assignment phase.
## 4.2 Certificate assignment

In a random network, wireless nodes are distributed randomly over an area. The random distance between nodes in a MANET plays an important role in the performance of the system. We propose a random probability distribution of the distance between nodes distributed in a regular hexagon. To reduce the complexity, we use the spatial decomposition method of Bettstetter and Wagner [41] for certificate assignment. Figure 5 shows the hexagonal clustering of networks in a MANET environment. For reference we take a single hexagonal cluster ABCDEF with center O, intersected by a circle of center c_n and radius r_n. The side of each hexagon is taken as s. The equilateral triangle OAF represents one of the six equilateral triangular regions in ABCDEF. When a node sends a request for a certificate, the certificate authority assigns one or more certificates, after the verification process, depending on the random distance between the CA and the requesting node. We consider three different cases of the certificate assignment strategy in a mobile ad hoc network.
Single certificate assignment: The requestor node (R) and the CA lie completely inside the circle, within the hexagonal region, as shown in figure 5(a); that is, the CA and the node R correspond to the same geographic cluster ABCDEF. In this case the CA assigns a single certificate to R corresponding to its current geographic distance.

Multiple certificate assignment: As in figure 5(b), suppose the requestor node (R) moves closer to the boundary of cluster ABCDEF so that the circle cuts the edges of the hexagonal cluster ABCDEF. Multiple certificates are assigned to R in this case, corresponding to its current geographic location (hexagon ABCDEF) and the neighboring cluster region.

Null certificate: Suppose the requestor node R belongs neither to the geographic location of the CA (ABCDEF) nor
Figure 5. Certificate assignment schemes. (a) Single certificate assignment. (b) Multiple certificate assignment. (c) Null certificate assignment.
to its boundary; then the request is discarded and no certificate is assigned to R, as shown in figure 5(c).
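Using the distance ranges derived later in section 5.2 for a hexagon of side s (the circle lies inside the hexagon for 0 ≤ D ≤ (√3/2)s and cuts the edges for (√3/2)s ≤ D ≤ s), the three cases can be sketched as below; the function name and the handling of boundary ties are assumptions:

```python
import math

def assignment_case(D: float, s: float) -> str:
    """Classify a request by the random distance D between the CA and node R."""
    if D <= (math.sqrt(3) / 2) * s:
        return "single"     # circle completely inside the hexagon, figure 5(a)
    if D <= s:
        return "multiple"   # circle cuts the hexagon edges, figure 5(b)
    return "null"           # outside the CA's cluster, figure 5(c)

for D in (0.3, 0.95, 1.4):
    print(D, "->", assignment_case(D, s=1.0))
```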
## 4.3 Attack model

The proposed certificate management scheme aims to achieve resistance against the following security attacks:

Forging attacks: The revocation information generated in a cluster should be unforgeable, so that no node in the cluster is able to generate duplicate revocation information, even if it holds revocation information generated earlier.

Collusion attack: Revoked nodes should not be able to collude to revoke a trustworthy node.

Revocation denial attack: Neither a trustworthy node nor a distrusted node should be able to purposely fail the revocation process of a misbehaving node, internally or externally.
## 5. Analytic model

## 5.1 Size of CRLs

The proposed system benefits from a reduction in the size of the CRLs. Generally, the size of the CRL in a network depends on the number of nodes to which certificates are to be assigned (N_T), the rate of revocation (R), the validity of a node's certificate (V), and the order of CRL entries (m). The revocation rate (R) is an integral part of evaluating the CRL size as well as the performance of the revocation system. It can be stated as the rate at which an attacker node launches attacks before its certificates get revoked. Owing to the dynamic movement of nodes in a MANET environment, the order of CRL entries varies frequently, which affects the validity of certificates. The CRL size is given as

$$\text{Size of CRL} = N_T \times R \times V \times m. \quad (14)$$
When cluster-based certificate management is applied, the size of the CRL also varies with the number of geographically partitioned clusters. Let the average number of valid certificates assigned to each node in a cluster be V_CER. The average number of nodes in a cluster is N_avg = N_T / R_T, where R_T is the total number of clusters in the network. Thus, the size of the CRL for the proposed system is given by

$$\text{Size of CRL (THCM scheme)} = N_{avg} \times V_{CER} \times R \times V \times m. \quad (15)$$

Hence, the size of the CRL can be reduced depending on the degree of cluster partitioning. When the size of the cluster is smaller, the cost per certificate is reduced; at the same time, however, the complexity of the PKI system increases, because the cost of certificates, especially at the boundaries of the cluster, increases, and the installation cost of the CA in the different clusters adds to the framework cost. It is therefore necessary to determine an optimal size for the cluster to reduce the cost of communication.
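As a quick numerical illustration of eqs. (14) and (15) (the parameter values below are assumptions, not taken from the paper):

```python
def crl_size_flat(N_T, R, V, m):
    return N_T * R * V * m                      # eq. (14)

def crl_size_thcm(N_T, R_T, V_CER, R, V, m):
    N_avg = N_T / R_T                           # average nodes per cluster
    return N_avg * V_CER * R * V * m            # eq. (15)

# Assumed values: 100 nodes, 7 clusters, 3 valid certificates per node.
print(crl_size_flat(100, R=0.1, V=2, m=1))          # network-wide CRL
print(crl_size_thcm(100, 7, 3, R=0.1, V=2, m=1))    # per-cluster CRL under THCM
```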
## 5.2 Communication cost

One of the important issues in a MANET communication system is the rise in communication cost due to the certificate assignment and revocation processes within each cluster. In the THCM scheme, in order to reduce the cost of communication, certificates are assigned based on the random distance between the CA and the requestor node, and the efficient revocation scheme in THCM reduces the communication cost due to revocation. We assume an optimum size for each cluster for CA installation and certificate management. The overall communication cost in a particular geographic cluster includes the cost of sending a request for a certificate by a node, the cost of issuing the certificate by the CA, and the revocation cost:
$$Comm_C = C_{req} + C_{assign} + C_{revoke}. \quad (16)$$

Let the average number of nodes in ABCDEF be

$$N_{ABCDEF} = N_T \, \frac{3\sqrt{3}\, s^2}{2A}, \quad (17)$$

where A is the area of all the clusters. The cost of sending a request to the CA by any node depends on the average number of nodes in each region and on the length of the request (Req_l), which covers the distance between the node and the CA:

$$C_{req} = N_T \, \frac{3\sqrt{3}\, s^2}{2A} \times Req_l. \quad (18)$$

The certificate assignment by the CA contributes a vital amount to the overall communication cost. It depends on the verification cost, the cost of issuing the certificates (single or multiple certificates), and the length of the certificates issued, that is, the response length (Res_l). The verification process incorporates the revocation of certificates, the expiry check, and the length of the revoked message (Rev_l), which may change frequently in a dynamic infrastructure like a MANET:

$$C_{assign} = C_{verify} \times C_{issuing} \times Res_l. \quad (19)$$
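For concreteness, eqs. (16)-(19) can be evaluated as below; all parameter values are assumptions for illustration, and the verification and issuing costs of eq. (19) are taken as given constants:

```python
import math

def hex_area(s):
    return 3 * math.sqrt(3) / 2 * s**2              # area of a hexagon of side s

def c_req(N_T, s, A, req_len):
    return N_T * hex_area(s) / A * req_len          # eq. (18)

def c_assign(c_verify, c_issuing, res_len):
    return c_verify * c_issuing * res_len           # eq. (19)

def comm_cost(N_T, s, A, req_len, c_verify, c_issuing, res_len, c_revoke):
    # eq. (16): request cost + assignment cost + revocation cost
    return c_req(N_T, s, A, req_len) + c_assign(c_verify, c_issuing, res_len) + c_revoke

# Assumed values: 100 nodes, clusters of side 200 m over a 1 km x 1 km field.
print(comm_cost(N_T=100, s=200.0, A=1e6, req_len=150.0,
                c_verify=1.2, c_issuing=2.0, res_len=64.0, c_revoke=30.0))
```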
Usually, the CA verifies the request for a certificate from any node in a cluster and issues multiple certificates. This increases the communication overhead as well as the communication cost to a large extent. To reduce the cost of communication and the overhead, probability density functions (pdf) of the random distance discussed in section 4 (with reference to figure 5) are derived. For reference we take hexagonal clusters ABCDEF with side s.

It is assumed that the nodes within the circle and the hexagonal region ABCDEF are assigned one certificate corresponding to the current home cluster region ABCDEF (figure 5(a)). The nodes at the boundary of the region ABCDEF are assumed to be assigned multiple certificates, corresponding to the certificates of the home cluster and the neighboring cluster region (figure 5(b)). It is also assumed that requests from nodes belonging to other adjacent clusters are discarded (figure 5(c)). The efficient certificate management scheme within the cluster is formulated with the probability of certificate assignment and management.

In wireless networks, the random distance between nodes is considered a critical factor that affects system performance. The closed-form distribution of the random distance can be applied to calculate path loss, link capacity, near-far neighbors, transmission power, and other performance metrics in a MANET. Here we use a modified random distance calculation concept of Bettstetter and Wagner [41] for the probability density function calculations. The random distance formulates the stochastic activities within the mobile network. The random distance of a node is considered as a location-based discrete-time process, where each node moves randomly with the same step duration (Δt) and length (Δx). When a node moves to the boundary of a cluster, the probability varies.

Let the coordinates of c_n and c_m be (x_n, y_n) and (x_m, y_m), with f_n = (x_n + x_m)/2 and f_m = (y_n + y_m)/2, and let cos θ = (x_m − x_n)/d(c_i, c_j) and sin θ = (y_m − y_n)/d(c_i, c_j). The probability of the random distance between nodes in the MANET is calculated with an area-ratio approach. Suppose the side of the equilateral triangle is a = 1 and the distance is R_D; then the probability P(R_D ≤ D) is taken as the ratio between the areas of the circle and the hexagon. In figure 5 the distance is calculated from the center of the hexagon in two different cases, depending on the value of the distribution function D.
(i) The circle x² + y² = D² is completely inside the hexagon; that is, 0 ≤ D ≤ √3/2. Then the random distribution function is given as
$$F_{R_D}(D) = P(R_D \le D) = \frac{\text{area of circle}}{\text{area of hexagon}} = \frac{2\pi D^2}{3\sqrt{3}}. \quad (20)$$

(ii) The circle x² + y² = D² cuts the edges of the hexagon; that is, √3/2 ≤ D ≤ 1. Then the random distribution function is given as

$$F_{R_D}(D) = \frac{2}{\sqrt{3}}\left[\frac{\pi D^2}{3} - 2D^2 \cos^{-1}\!\left(\frac{\sqrt{3}}{2D}\right) + \sqrt{3}\,\sqrt{D^2 - \frac{3}{4}}\right]. \quad (21)$$

(iii) The circle x² + y² = D² lies completely outside the hexagon; that is, 1 ≤ D ≤ 2. Then the random distribution function is given as

$$F_{R_D}(D) = 0. \quad (22)$$

The probability of the distance between any two nodes in a hexagonal cluster can therefore be written as (from (20), (21), and (22))

$$P_{R_D}(D) = \begin{cases} \dfrac{4\pi D}{3\sqrt{3}}, & 0 \le D \le \dfrac{\sqrt{3}}{2} \\[2mm] \dfrac{4D}{\sqrt{3}}\left(\dfrac{\pi}{3} - 2\cos^{-1}\!\left(\dfrac{\sqrt{3}}{2D}\right)\right), & \dfrac{\sqrt{3}}{2} \le D \le 1 \\[2mm] 0, & \text{else.} \end{cases} \quad (23)$$

C_assign and C_revoke also depend on the number of certificates assigned (k) and on the average number of certificates revoked per time slot (C̄_T). Therefore, the overall communication cost (16) becomes

$$\begin{aligned} Comm_C = {} & N_T \frac{3\sqrt{3}\, s^2}{2A}\,(Req_l + Res_l) \\ & + \frac{4D}{\sqrt{3}}\left(\frac{\pi}{3} - 2\cos^{-1}\!\frac{\sqrt{3}}{2D}\right) \times 2\, N_T \frac{3\sqrt{3}\, s^2}{2A} \times Res_l \\ & + \bar{C}_T\, \frac{3\sqrt{3}\, s^2}{2A} \times Rev_l + \bar{C}_T\, \frac{4D}{\sqrt{3}}\left(\frac{\pi}{3} - 2\cos^{-1}\!\frac{\sqrt{3}}{2D}\right) \frac{3\sqrt{3}\, s^2}{2A} \times Rev_l. \end{aligned} \quad (24)$$

By differentiating the cost function with respect to s and equating it to 0, we can minimize the communication cost in the proposed THCM scheme. For simplification we assume D = s·h, where 0 ≤ h ≤ 1, which gives

$$\hat{s}^4 = \frac{4A}{\pi}\left( N_T \frac{3\sqrt{3}}{A}(Req_l + Res_l) + \left(1 - \frac{4\pi h}{3\sqrt{3}}\right)\frac{3\sqrt{3}}{2}\, N_T \frac{3\sqrt{3}}{A}\, Res_l + \bar{C}_T \frac{3\sqrt{3}}{A}\, Rev_l + \bar{C}_T \left(1 - \frac{4\pi h}{3\sqrt{3}}\right)\frac{3\sqrt{3}}{A}\, Rev_l \right). \quad (25)$$
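The closed-form distribution in eqs. (20)-(23) can be sanity-checked numerically. The following sketch is our own verification aid (not part of the scheme): it samples points uniformly in a unit-side hexagon and compares the empirical CDF of the distance to the center with the formulas:

```python
import math, random

def inside_unit_hexagon(x, y):
    """Hexagon with side 1, vertices at (±1, 0) and (±1/2, ±√3/2)."""
    s3 = math.sqrt(3)
    return abs(y) <= s3 / 2 and abs(s3 * x + y) <= s3 and abs(s3 * x - y) <= s3

def cdf(D):
    s3 = math.sqrt(3)
    if D <= s3 / 2:                      # eq. (20): circle inside the hexagon
        return 2 * math.pi * D**2 / (3 * s3)
    if D <= 1.0:                         # eq. (21): circle cuts the edges
        return (2 / s3) * (math.pi * D**2 / 3
                           - 2 * D**2 * math.acos(s3 / (2 * D))
                           + s3 * math.sqrt(D**2 - 0.75))
    return 1.0

random.seed(0)
pts = []
while len(pts) < 100_000:                # rejection sampling in the bounding box
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    if inside_unit_hexagon(x, y):
        pts.append(math.hypot(x, y))

for D in (0.4, 0.8, 0.95):
    emp = sum(r <= D for r in pts) / len(pts)
    print(f"D={D}: empirical {emp:.4f}  vs  closed form {cdf(D):.4f}")
```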
## 6. Performance evaluation and simulation results

In this section, we evaluate the performance of the proposed certificate management scheme in terms of the effectiveness and reliability of the revocation scheme and the communication cost. To verify the performance in terms of cost of communication and effectiveness of revocation, we compare THCM with two existing schemes: CCRVC by Liu et al [35] and a voting-based scheme proposed by Luo et al [36].

## 6.1 Simulation environment

The MANET simulation setup is performed in the QualNet 4.5 environment with IDE Visual Studio 2013, programming language VC++, and SDK NSC_XE-NETSIMCAP (Network Simulation and Capture). The nodes follow the random waypoint (RWP) approach presented by Bettstetter and Wagner [41], Bai and Helmy [42], and Aschenbruck et al [43], where the speed and direction of each node are chosen randomly and independently. When the simulation starts, each node randomly chooses one location as its destination within the simulation field. The node then moves with a constant velocity chosen uniformly at random in the range [0, V_m], where V_m is the maximum velocity at which a node can travel. When the node reaches its destination, it halts for a time period referred to as the halt time (T_halt). If T_halt = 0, continuous mobility is experienced. When T_halt expires, the node again moves randomly in the simulation field. We evaluate the performance of the proposed THCM by varying the two parameters V_m and T_halt for topology alterations: if V_m is low and T_halt is high, a relatively stable topology is obtained; on the other hand, a highly dynamic topology is obtained if V_m is high and T_halt is low.
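A minimal random waypoint simulation loop, for illustration (the field size and parameter values are assumptions):

```python
import math, random

def random_waypoint(steps, v_max, t_halt, field=100.0, dt=1.0):
    """Yield node positions under the RWP mobility model described above."""
    x, y = random.uniform(0, field), random.uniform(0, field)
    while steps > 0:
        dest = (random.uniform(0, field), random.uniform(0, field))
        speed = random.uniform(0, v_max)        # chosen uniformly in [0, V_m]
        d = math.hypot(dest[0] - x, dest[1] - y)
        while steps > 0 and d > speed * dt:     # travel towards the waypoint
            x += speed * dt * (dest[0] - x) / d
            y += speed * dt * (dest[1] - y) / d
            d = math.hypot(dest[0] - x, dest[1] - y)
            yield (x, y)
            steps -= 1
        x, y = dest                             # arrive, then halt for T_halt
        for _ in range(int(t_halt / dt)):
            if steps <= 0:
                break
            yield (x, y)
            steps -= 1

random.seed(1)
print(list(random_waypoint(steps=5, v_max=5.0, t_halt=2.0))[:3])
```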
## 6.2 Effectiveness of revocation scheme

The revocation rate and the revocation time are the two core factors that evaluate the effectiveness of any revocation scheme. The potency of the proposed THCM scheme in terms of revocation rate and revocation time is shown in figures 6 and 7. The revocation time is defined as the time period during which an attacker can launch attacks before its certificate gets revoked, whereas the revocation rate represents the rate of attacker nodes revoked before launching their attacks. To analyze the impact of attacker nodes on revocation, we deploy 100 nodes in the network, with the attacker nodes ranging up to 30-35%. As shown in figures 6 and 7, the effectiveness of revocation is evaluated by comparing the proposed revocation scheme with the existing CCRVC scheme and the voting scheme.

Figure 6. Revocation time.

Figure 7. Revocation rate.

Figure 6 shows the change in the revocation time with the increase in attacker nodes for the proposed THCM scheme and the existing non-trust-based schemes of Liu et al [35] and Luo et al [36]. On comparison, the voting scheme takes a higher revocation time than the other two schemes, owing to the waiting period for the votes from different nodes needed to make a revocation decision. THCM maintains a favorable revocation time even with a higher number of attackers: a maximum revocation time of 60 s is noted in THCM for the highest percentage of attackers. The revocation times of the three schemes for an increasing number of attackers are given in table 1.

Table 1. Revocation time (s) of different key management schemes for an increasing number of attackers.

| Schemes | 5 | 10 | 15 | 20 | 25 | 30 |
|---|---|---|---|---|---|---|
| Voting scheme | 110 | 120 | 128 | 130 | 132 | 134 |
| CCRVC | 20 | 32 | 55 | 68 | 70 | 78 |
| THCM | 17 | 23 | 44 | 57 | 59 | 60 |
Figure 7 demonstrates the revocation rate for different attacker node levels. It is noted that the revocation rate improves with the increase in attackers for the proposed trust-based revocation scheme. The proposed THCM revocation scheme works well against attackers by calculating the trustability of each node. Even though the rate drops slightly in between, it gradually increases for larger numbers of attackers; that is, a revocation rate of 98% is achieved for 35% attackers in THCM. The simulation results of the three schemes for various percentages of attackers are given in table 2.

Table 2. Revoked attackers (in %) of different key management schemes for various percentages of attackers.

| Schemes | 5 | 10 | 15 | 20 | 25 | 30 | 35 |
|---|---|---|---|---|---|---|---|
| Voting scheme | 92 | 91 | 90 | 88 | 85 | 82 | 78 |
| CCRVC | 95 | 96 | 97 | 96 | 95 | 92 | 93 |
| THCM | 97 | 98 | 97 | 96 | 96 | 98 | 98 |
## 6.3 Reliability of revocation

The reliability of our scheme can be determined from the proposed algorithm by calculating the probability of successful revocation, following Wasef and Shen [44]:

$$P_{success} = 1 - \left( \frac{\left(\frac{p}{n} - 1\right)^{N}}{\left(\frac{p}{n}\right)^{N}} \right)^{x}. \quad (26)$$
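Under this reading of eq. (26), which simplifies to P_success = 1 − (1 − n/p)^{Nx}, the reported trends (P_success grows with N and n and falls with p) can be reproduced numerically; the parameter values below are assumptions:

```python
def p_success(p, n, N, x):
    """Eq. (26): probability of successful revocation."""
    return 1 - (((p / n - 1) ** N) / ((p / n) ** N)) ** x

# P_success increases with the number of nodes N (constant p, n, x) ...
print([round(p_success(p=10, n=2, N=N, x=2), 4) for N in (1, 3, 5, 10)])
# ... increases with the negative encounters n, and decreases with p.
print(round(p_success(10, 1, 5, 2), 4), round(p_success(10, 4, 5, 2), 4))
print(round(p_success(8, 2, 5, 2), 4), round(p_success(20, 2, 5, 2), 4))
```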
Figure 8. Successful revocation probability.

Figure 9. Cost of communication.

Figure 8 shows the probability of successful revocation (P_success) for different values of the positive encounters (p), negative encounters (n), and secret key (x), varying the average number of nodes (N) within the communication range of a node in the cluster.

It is observed that P_success increases with N for constant n and x. It can also be noted that P_success increases with an increase in n and with a decrease in p. This indicates the vulnerability strength of the system against attackers; that is, if the negative encounters (n) rise, the network is subjected to a larger number of attackers, against which the desired security level should be provided. The above discussion establishes the reliability of our proposed THCM scheme at the desired security level.
## 6.4 Communication cost

In the proposed certificate management scheme, the main factors that have a high impact on the communication cost are the certificate revocation and certificate issuing processes. Figures 9 and 10 represent the efficiency of our scheme on the cost factor; the communication cost can be conserved successfully with this scheme. The comparison of the schemes is run in a simulation environment of 100 nodes that follow the random walk mobility model of Bettstetter and Wagner [41] (a specific RWP mobility model with T_halt = 0), in which each node changes its mobility rate at different time intervals. The proposed THCM scheme is compared in terms of the cost of communication with CCRVC and the voting-based scheme for different numbers of attackers, as shown in the figures.

We plot the cost of certificates for each scheme in figure 9, where we can see that CCRVC is costlier than the other two schemes. Although THCM is costlier than the voting-based scheme for small numbers of attackers, the cost of the voting scheme increases abruptly with the number of attackers. At most, the cost is limited to 128 in THCM, whereas it reaches 198 for the voting scheme and 235 for CCRVC.

Our cost-conservative certificate management scheme is also analyzed for different numbers of certificates revoked, as shown in figure 10. It is noticed that the communication cost increases to 180 for the maximum number of certificates revoked, which is lower than for the other two schemes (the voting scheme attains a cost of 240 and CCRVC reaches 212). It is noted that the proposed THCM scheme outperforms the voting-based method in terms of communication cost for different numbers of attackers as well as certificates revoked.

The communication cost in issuing certificates and the communication cost in sending the revocation information for the existing schemes and the proposed THCM scheme are given in tables 3 and 4.

Figure 10. Communication cost with revocation.
Table 3. Cost of communication (certificates/node) of different key management schemes for various percentages of attackers.

| Schemes | 5 | 10 | 15 | 20 | 25 | 30 |
|---|---|---|---|---|---|---|
| Voting scheme | 30 | 62 | 100 | 118 | 144 | 198 |
| CCRVC | 78 | 146 | 170 | 196 | 224 | 230 |
| THCM | 56 | 74 | 88 | 102 | 120 | 124 |

Table 4. Communication cost with revocation for different key management schemes, by average number of certificates revoked.

| Schemes | 5 | 10 | 15 | 20 | 25 | 30 |
|---|---|---|---|---|---|---|
| Voting scheme | 52 | 86 | 112 | 160 | 200 | 236 |
| CCRVC | 64 | 86 | 124 | 146 | 188 | 212 |
| THCM | 50 | 74 | 100 | 128 | 152 | 180 |

## 6.5 Security analysis

This section analyzes the proposed THCM scheme against the security attacks discussed in section 4.3.

Resilience against forging attacks: To forge the revocation information, an attacker would have to determine the trust degree and distrust degree of a node. The attacking node would have to be aware of the positive and negative encounters used for calculating the trust or distrust degree. Further, the attacker node would have to collect the information regarding successive interactions as well as the location information of that node in order to forge the total revocation information. Furthermore, the CA signs the revocation message sent to all the nodes in the cluster, which cannot be forged. From the above discussion, our THCM scheme is resistant to forging attacks.

Resilience against collusion attacks: In THCM, when a certificate corresponding to a cluster that a node is likely to visit in the near future is about to expire, a request for a fresh set of certificates is sent in advance. Hence, it is assured that a revoked node can never hold the entire revocation certificate, and so revoked nodes cannot collude to revoke any other node. Therefore, the proposed THCM is resilient against collusion attacks in the network.

Resilience against revocation denial attacks: THCM conducts the verification phase of section 4 each time, which includes the sender certificate check, the validity check, and the signature check. By this the CA detects and discards fallacious processes. In addition, since the proposed certificate management scheme adopts a probabilistic certificate assignment technique, the same revocation information may be found at more than one node; consequently, the CA identifies multiple copies and excludes the duplicates. Hence, the THCM scheme exhibits robustness against revocation denial attacks.
## 7. Empirical analysis

## 7.1 Emulator platform

A real-world certificate management system is developed with the T-Engine emulator (the T-Engine Forum was renamed the TRON Forum in April 2015) to analyze the performance of the proposed THCM scheme. The emulator brings the QualNet simulation into a real network. T-Engine enables users to rapidly build a ubiquitous computing solution utilizing off-the-shelf components with complete mobility, as presented by Krikke [45] and Noboru and Ken [46]. The middleware library available for T-Engine supports network protocols, a GUI, and specific security tools (as presented by Khan and Sakamura [47]), along with many other features, in order to emulate real smart mobile nodes. The platform also supports the resource distribution of software and tamper-resistant network security. Figure 11 shows the emulator platform run for the proposed THCM scheme with 50 mobile nodes, represented as TM(i), 0 ≤ i ≤ 50.
Our study facilitates understanding of the certificate revocation time, the rate of revoked nodes, and the communication cost. It also provides solid evidence on the optimal certificate management of the three schemes for different numbers of attacker nodes. Figure 12(a) and (b) shows the emulator output for the effectiveness of the revocation scheme in terms of the revocation time and the revocation rate of the voting scheme, CCRVC, and THCM. The number of attackers varies from 5% to 50% in all the cases. The results in the emulator evidently show that there is no significant change in the time and rate of revocation compared with those of QualNet.
Figure 13(a) and (b) represents the emulator output for the cost factor. The results are plotted for the cost of communication with respect to the certificate revocation and certificate issuing processes. As with the revocation scheme, the cost-conservative feature of THCM shows no significant variation compared with the simulation results.
Figure 11. Emulator execution.
Figure 12. Emulator output for effectiveness of revocation scheme. (a) Revocation time and (b) Revocation rate.
Figure 13. Emulator output for efficiency on cost factor. (a) Cost of communication and (b) communication cost with revocation.
## 7.2 Simulation and emulation: a comparison

The QualNet and T-Engine outputs are compared and plotted in figure 14(a), (b), (c), and (d). The graphs show the performance effectiveness of the proposed THCM scheme compared with the existing certificate management methods.

The results show that THCM has no significant variation in its values when the simulated scheme is implemented in a live environment. The values obtained by emulation are very close to those of the simulation, which demonstrates the optimal management of the THCM scheme. To obtain an efficient and accurate output, multiple trials were performed with the simulation and emulation parameters using the T-test methodology. The T-test is conducted with 10 trials in order to establish the accuracy of the output statistically. Various hypotheses were stated to support the T-testing, which is summarized in table 5. To compare the performance of THCM, the simulation as well as the emulation results were processed through statistical tests and calculations. The mean and standard deviation are calculated from the data acquired through the 10 rounds, with a limit of significance (LoS) set at 2. T-test values within the specified LoS indicate no significant difference, and those above the LoS indicate a significant difference. The proposed THCM scheme statistically demonstrated that there is no significant difference between the simulation and emulation values. The mean for each
parameter is calculated using the following formula:

$$\bar{y} = \frac{1}{N} \sum_{i=1}^{N} y_i, \quad (27)$$

where N is the total number of data trials and y_i is the observed value. The standard deviation of the difference between the simulated and emulated means is calculated as

$$\sigma_{M-M} = \sqrt{\frac{\sigma_{source}^2}{N_a} + \frac{\sigma_{source}^2}{N_b}}, \quad (28)$$

where σ²_source is the variance of the source population, and N_a and N_b are the sizes of the two samples.

Figure 14. Simulation vs emulation. (a) Revocation time. (b) Revocation rate. (c) Cost of communication. (d) Communication cost with revocation.
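The T-test values in table 5 can be recomputed directly from the trial data. For example, the following sketch reproduces the statistic for the revocation-time trials of table 5(i) using eqs. (27) and (28), with population standard deviations as in the table:

```python
import math

qualnet  = [0, 19, 25, 47, 57, 60, 62, 66, 68, 69]   # table 5(i), QualNet x(i)
emulator = [0, 22, 28, 45, 58, 63, 64, 65, 72, 71]   # table 5(i), emulator x(i)

def mean(xs):                                  # eq. (27)
    return sum(xs) / len(xs)

def pop_std(xs):                               # population standard deviation
    m = mean(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

sigma = math.sqrt(pop_std(qualnet) ** 2 / len(qualnet)
                  + pop_std(emulator) ** 2 / len(emulator))   # eq. (28)
t = abs(mean(emulator) - mean(qualnet)) / sigma
print(round(mean(qualnet), 1), round(mean(emulator), 1), round(t, 4))
# 47.3 48.8 0.1456  -> below the LoS of 2: no significant difference
```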
## 8. Conclusion

In this paper, we have addressed a complete security measure against attackers in mobile ad hoc networks. In contrast to the existing techniques, we have proposed THCM to efficiently partition the network into non-overlapping clusters and to manage certificates. Our approach enables each node to establish a trust value with other interacting nodes in each Voronoi hexagonal cluster, with minimal communication cost and maximal utilization of the certificate management scheme. Simulation results show that our scheme achieves a revocation rate of 98% in a maximum of 60 s for a high percentage of attackers, and lowers the cost of certificate assignment and revocation. We also developed an analytic-statistical approach to study the impact of certificate management on attacker nodes and cost in a real-time MANET emulator, and provided a simple mathematical analysis to justify the results. We believe that the proposed scheme works efficiently and makes remarkable contributions to the modeling, design, and analysis of an effective certificate management scheme for MANETs. Therefore, our scheme, THCM, can be adequately adopted for wireless ad hoc networks.
Table 5. Statistical analysis of key management schemes.

(i) Revocation time

| Trial | QualNet x(i) | Mean | (Mean − x(i))² | Emulator x(i) | Mean | (Mean − x(i))² |
|---|---|---|---|---|---|---|
| 1 | 0 | 47.3 | 2237.29 | 0 | 48.8 | 2381.44 |
| 2 | 19 | 47.3 | 800.89 | 22 | 48.8 | 718.24 |
| 3 | 25 | 47.3 | 497.29 | 28 | 48.8 | 432.64 |
| 4 | 47 | 47.3 | 0.09 | 45 | 48.8 | 14.44 |
| 5 | 57 | 47.3 | 94.09 | 58 | 48.8 | 84.64 |
| 6 | 60 | 47.3 | 161.29 | 63 | 48.8 | 201.64 |
| 7 | 62 | 47.3 | 216.09 | 64 | 48.8 | 231.04 |
| 8 | 66 | 47.3 | 349.69 | 65 | 48.8 | 262.44 |
| 9 | 68 | 47.3 | 428.49 | 72 | 48.8 | 538.24 |
| 10 | 69 | 47.3 | 470.89 | 71 | 48.8 | 492.84 |

Average mean: 47.3 (QualNet), 48.8 (emulator). Standard deviation: 22.9261859 (QualNet), 23.14649 (emulator). T-test result: 0.1456. Since the T-test result is less than 2, there is no significant difference between the simulated and emulated results.

(ii) Revocation rate

| Trial | QualNet x(i) | Mean | (Mean − x(i))² | Emulator x(i) | Mean | (Mean − x(i))² |
|---|---|---|---|---|---|---|
| 1 | 98 | 95.4 | 6.76 | 96 | 93.9 | 4.41 |
| 2 | 96 | 95.4 | 0.36 | 97 | 93.9 | 9.61 |
| 3 | 100 | 95.4 | 21.16 | 96 | 93.9 | 4.41 |
| 4 | 96 | 95.4 | 0.36 | 95 | 93.9 | 1.21 |
| 5 | 95 | 95.4 | 0.16 | 94 | 93.9 | 0.01 |
| 6 | 95 | 95.4 | 0.16 | 92 | 93.9 | 3.61 |
| 7 | 93 | 95.4 | 5.76 | 94 | 93.9 | 0.01 |
| 8 | 94 | 95.4 | 1.96 | 92 | 93.9 | 3.61 |
| 9 | 92 | 95.4 | 11.56 | 92 | 93.9 | 3.61 |
| 10 | 95 | 95.4 | 0.16 | 91 | 93.9 | 8.41 |

Average mean: 95.4 (QualNet), 93.9 (emulator). Standard deviation: 2.2 (QualNet), 1.9723083 (emulator). T-test result: 1.605403. Since the T-test result is less than 2, there is no significant difference between the simulated and emulated results.

(iii) Communication cost

| Trial | QualNet x(i) | Mean | (Mean − x(i))² | Emulator x(i) | Mean | (Mean − x(i))² |
|---|---|---|---|---|---|---|
| 1 | 0 | 102.3 | 10465.3 | 0 | 103.7 | 10753.69 |
| 2 | 57 | 102.3 | 2052.09 | 56 | 103.7 | 2275.29 |
| 3 | 78 | 102.3 | 590.49 | 77 | 103.7 | 712.89 |
| 4 | 92 | 102.3 | 106.09 | 93 | 103.7 | 114.49 |
| 5 | 107 | 102.3 | 22.09 | 109 | 103.7 | 28.09 |
| 6 | 118 | 102.3 | 246.49 | 120 | 103.7 | 265.69 |
| 7 | 130 | 102.3 | 767.29 | 133 | 103.7 | 858.49 |
| 8 | 136 | 102.3 | 1135.69 | 137 | 103.7 | 1108.89 |
| 9 | 143 | 102.3 | 1656.49 | 147 | 103.7 | 1874.89 |
| 10 | 162 | 102.3 | 3564.09 | 165 | 103.7 | 3757.69 |

Average mean: 102.3 (QualNet), 103.7 (emulator). Standard deviation: 45.39394233 (QualNet), 46.637002 (emulator). T-test result: 0.06803. Since the T-test result is less than 2, there is no significant difference between the simulated and emulated results.

(iv) Communication cost per average number of certificates revoked

| Trial | QualNet x(i) | Mean | (Mean − x(i))² | Emulator x(i) | Mean | (Mean − x(i))² |
|---|---|---|---|---|---|---|
| 1 | 0 | 128.7 | 16563.7 | 0 | 129.2 | 16692.64 |
| 2 | 54 | 128.7 | 5580.09 | 53 | 129.2 | 5806.44 |
| 3 | 79 | 128.7 | 2470.09 | 84 | 129.2 | 2043.04 |
| 4 | 105 | 128.7 | 561.69 | 103 | 129.2 | 686.44 |
| 5 | 128 | 128.7 | 0.49 | 128 | 129.2 | 1.44 |
| 6 | 155 | 128.7 | 691.69 | 153 | 129.2 | 566.44 |
| 7 | 181 | 128.7 | 2735.29 | 185 | 129.2 | 3113.64 |
| 8 | 189 | 128.7 | 3636.09 | 187 | 129.2 | 3340.84 |
| 9 | 194 | 128.7 | 4264.09 | 198 | 129.2 | 4733.44 |
| 10 | 202 | 128.7 | 5372.89 | 201 | 129.2 | 5155.24 |

Average mean: 128.7 (QualNet), 129.2 (emulator). Standard deviation: 64.71174546 (QualNet), 64.915021 (emulator). T-test result: 0.01725. Since the T-test result is less than 2, there is no significant difference between the simulated and emulated results.
Acknowledgment
This research is supported by All India Council for
Technical Education (AICTE), Government of India.
## References
[1] Ko Y B and Vaidya N H 1999 Geocasting in mobile ad hoc
networks: Location-based multicast algorithms. In: Proceedings of IEEE WMCSA, pp. 101–110
[2] Bellur B 2008 Certificate assignment strategies for a pkibased security architecture in a vehicular network. Proceedings IEEE GLOBECOM, pp 1–6
[3] Chang R S and Wang S H 2008 Hexagonal collaboration
groups in sensor networks. Proc. IEEE CCNC pp 358–359
[4] Zhuang Y, Gulliver T A and Coady Y 2013 On planar tessellations and interference estimation in wireless ad-hoc
networks. IEEE Wireless Commun. Lett. 2(3): 331–334
[5] Fan Y, Yulan Z and Ping X 2015 An overview of ad hoc
network security. communications in computer and information science, Springer, vol. 557, pp 129–137
[6] Cao M and Hadjicostis C N 2003 Distributed algorithms for
Voronoi diagrams and applications in ad-hoc networks.
Technical Report UILUENG-03-2222
[7] Chau M, Cheng R, B Kao B and Ng J 2006 Uncertain data
mining: An example in clustering location data. In: Proceedings of PAKDD, pp 199–204
[8] Cheng R, Xie X, Yiu M L, Chen J and Sun L 2010 Uvdiagram: A Voronoi diagram for uncertain data. In: Proceedings of 26th IEEE International Conference on Data
Engineering, pp 796–807
[9] Kao B, Lee S D, Lee F, Cheung D and Ho W S 2010
Clustering uncertain data using Voronoi diagrams and R-tree
index. IEEE Trans. Knowledge and Data Eng. 22(9):
1219–1233
[10] Mohamed Aissa and Abdelfettah Belghith 2014 A node
quality based clustering algorithm in wireless mobile ad hoc
networks. In: Proceedings of the 5th International Conference on Ambient Systems, Networks and Technologies,
Elsevier, vol. 32, pp. 174–181
[11] Abdelhak B, Abdelhak B and Saad H 2013 Survey of clustering schemes in mobile ad hoc networks, Commun. Netw.
pp 8–14
[12] Khalid H, Abdul H Abdullah, Khalid M Awan, Faraz Ahsan,
Akhtab Hussain and Johor Bahru 2013 Cluster head election
schemes for WSN and MANET: A survey. World Appl. Sci.
J. 23(5): 611–620
[13] Mohd Junedul Haque, Mohd Muntjir and Hussain A S 2015
A comparative survey for computation of cluster-head in
MANET. Int. J. Comput. Appl. 118 (3): 6–9
[14] Fan P, Li G, Kai Cai and Letaief K B 2007 On the geometrical characteristic of wireless ad-hoc networks and its
application in network performance analysis. IEEE Trans.
Wireless Commun. 6(4): 1256–1265
[15] Stojmenovic I, Ruhil A P and Lobiyal D K 2006 Voronoi
diagram and convex hull based geocasting and routing in
wireless networks. Wireless Commun. Mobile Comput. 6:
247–258
[16] Ngai W K, Kao B, Chui C K, Cheng R, Chau M and Yip K Y
2006 Efficient clustering of uncertain data. Proceedings of
ICDM pp. 436–445
[17] Renu D, Manju Khari and Yudhvir Singh 2012 Survey of
trust schemes on ad-hoc network. Int. J. AdHoc Netw. Syst.
springer, 2 pp. 170–180
[18] Manju Khari and Yudhvir Singh 2012 Different ways to
achieve trust in MANET. Int. J. AdHoc Netw. Syst. 2(2):
1–10
[19] Jingwei H and David N 2009 A calculus of trust and its
application to pki and identity management. In: Proceedings
of 8th Symposium on Identity and Trust on the Internet,
pp. 23–37
[20] Liu K, Abu-Ghazaleh N and Kang K 2007 Location verification and trust management for resilient geographic routing.
J. Parallel Distributed Comput. 67: 215–228
[21] Ferdous R, Muthukkumarasamy V and Sithirasenan E 2011
Trust-based cluster head selection algorithm for mobile ad
hoc networks. In: Proceedings of International Joint Conference IEEE TrustCom pp. 589–596
[22] Cho J H, Chan K S, Chen I R 2013 Composite trust-based
public key management in mobile ad hoc networks. ACM
28th Symposium on Applied Computing, Coimbra, Portugal,
pp 1949–1956
[23] Wei Z, Tang H, Richard Yu, Wang M and Mason P 2014
Security enhancements for mobile ad hoc networks with trust
management using uncertain reasoning. IEEE Trans. Vehicular Technol. 63(9): 4647–4658
-----
1154 V S Janani and M S K Manikandan
[24] Ing-Ray Chen, Jia Guo, Fenye Bao and Jin-Hee Cho 2014
Trust management in mobile ad hoc networks for bias minimization and application performance maximization ad hoc
networks. Ad hoc networks. Elsevier, vol. 19, pp. 59–74
[25] Deering S, Estrin D, Farinacci D, Jacobson V, Helmy A and Wei L 1997 Protocol Independent Multicast Version 2, dense mode specification. Internet Draft, ftp://ietf.org/internetdrafts/draft-ietf-idmr-pim-dm-spec-05.txt
[26] Mohammad M Qabajeh, Aisha H Abdalla, Othman O Khalifa and Liana K Qabajeh 2015 A survey on scalable multicasting in mobile ad hoc networks. Wireless Personal Commun. 80(1): 369–393
[27] Kanchan D and Asutkar G M 2016 Enhancement in the
performance of routing protocols for wireless communication using clustering, encryption, and cryptography. Artificial
intelligence and evolutionary computations in engineering
systems, advances in intelligent systems and computing, vol.
394, pp 547–558.
[28] Jormakka J and Jormakka H 2014 Revocation of user certificates in a military ad hoc network. Brazilian J. Inform.
Security Cryptogr. 1(1): 1–3
[29] Yki K and Mikko S 2014 Survey of certificate usage in
distributed access control. Computers & security, Elsevier
vol 44, pp 16–32
[30] Wei Liu, Hiroki Nishiyama, Nirwan Ansari and Nei Kato
2011 A study on certificate revocation in mobile ad hoc
networks. IEEE International Conference on Communications (ICC), pp 1–5
[31] Mohammad Masdari and Javad P B 2012 Distributed certificate management in mobile ad hoc networks. Int. J. Appl.
Inform. Syst. 4(6): 33–40
[32] Mohamed M E A Mahmoud, Jelena Misic, Kemal Akkaya
and Xuemin Shen 2015 Investigating public-key certificate
revocation in smart grid. IEEE Internet Things J. 2:
490–503
[33] Mohammad Masdari, Sam J and Jamshid B 2015a Improving
OCSP-based certificate validations in wireless ad hoc networks. Wireless Personal Commun., 82 (1): 377–400
[34] Mohammad Masdari, Sam J, Jamshid B and Ahmad Khadem-Zadeh 2015b Towards efficient certificate status validations with E-ADOPT in mobile ad hoc networks.
Computers & security, Elsevier, vol 49, pp. 17–27
[35] Liu W, Nishiyama H, Ansari N, Yang J and Kato N 2013
Cluster-based certificate revocation with vindication
capability for mobile ad hoc networks. IEEE Trans. Parallel
Distributed Syst. 24(2): 239–249
[36] Luo H, Kong J, Zerfos P, Lu S and Zhang L 2004 URSA:
Ubiquitous and robust access control for mobile ad hoc
networks. IEEE/ACM Trans. Netw. 12(6): 1049–1063
[37] Taisuke Izumi, Tomoko Izumi, Hirotaka Ono and Koichi
Wada 2015 Approximability of minimum certificate dispersal with tree structures. Theoretical computer science, Elsevier, vol. 591, pp 5–14
[38] Park K, Nishiyama H, Ansari N and Kato N 2010 Certificate
revocation to cope with false accusations in mobile ad hoc
networks. Proceedings of IEEE 71st Vehicular Technology
Conference pp 1–5
[39] Raya M, Manshaei M H, Felegyhazi M and Hubaux J P 2008
Revocation games in ephemeral networks. Proceedings of
ACM CCS
[40] Mawloud Omar, Hamida B, Lydia Mammeri, Amel Taalba
and Abdelkamel T 2016 Secure and reliable certificate chains
recovery protocol for mobile ad hoc networks. J. Netw.
Comput. Appl., Elsevier, 62: 153–162
[41] Bettstetter C and Wagner C 2002 The spatial node distribution of the random waypoint mobility model. Proceedings
German Workshop on Mobile Ad Hoc Networks (WMAN)
[42] Bai F and Helmy A 2004 A survey of mobility modeling and
analysis in wireless ad hoc networks. Wireless ad hoc and
sensor networks. Kluwer academic publishers
[43] Aschenbruck N, Ernst R, Gerhards-Padilla E and Schwamborn M 2010 BonnMotion – A mobility scenario generation and analysis tool. In: Proceedings of the 3rd International Conference on Simulation Tools and Techniques
[44] Wasef A and Shen X 2009 EDR: Efficient decentralized
revocation protocol for vehicular ad hoc networks. IEEE
Trans. Veh. Tech. 58(9): 5214–5224
[45] Krikke J 2005 T-Engine: Japan's ubiquitous computing architecture is ready for prime time. IEEE Pervasive Comput. 4(2): 4–9
[46] Noboru K and Ken S 2010 Ubiquitous ID: Standards for
ubiquitous computing and the internet of things. IEEE Pervasive Comput. 9(4): 98–101
[47] Khan M F F and Sakamura K 2015 Tamper-resistant security
for cyber-physical systems with eTRON architecture. IEEE
International Conference on Data Science and Data Intensive Systems, Sydney, NSW, pp 196–203
## Single-View and Multi-View Depth Fusion

### José M. Fácil¹, Alejo Concha¹, Luis Montesano¹,² and Javier Civera¹
**_Abstract_— Dense and accurate 3D mapping from a monocular sequence is a key technology for several applications and still an open research area. This paper leverages recent results on single-view CNN-based depth estimation and fuses them with multi-view depth estimation. Both approaches present complementary strengths. Multi-view depth is highly accurate but only in high-texture areas and high-parallax cases. Single-view depth captures the local structure of mid-level regions, including texture-less areas, but the estimated depth lacks global coherence. The single and multi-view fusion we propose is challenging in several aspects. First, both depths are related by a deformation that depends on the image content. Second, the selection of multi-view points of high accuracy might be difficult for low-parallax configurations. We present contributions for both problems. Our results on the public datasets NYUv2 and TUM show that our algorithm outperforms the individual single and multi-view approaches. A video showing the key aspects of mapping in our single and multi-view depth proposal is available at https://youtu.be/ipc5HukTb4k.**

**_Index Terms_— Deep Learning in Robotics and Automation, Mapping, SLAM**
I. INTRODUCTION
Estimating an online, accurate and dense 3D scene reconstruction from a general monocular sequence is one of
the fundamental research problems in computer vision. The
problem has nowadays a high relevance, as it is a key
technology in several emerging application markets (augmented and virtual reality, autonomous cars and robotics in
general). The state of the art is the so-called direct mapping methods [1], which estimate an image depth by minimizing
a regularized cost function based on the photometric error
between corresponding pixels in several views. The accuracy of the multi-view depth estimation depends mainly on
three factors: 1) The geometric configuration, with lower
accuracies for low-parallax configurations; 2) the quality of
the correspondences among views, that can only be reliably
estimated for high-gradient pixels; and 3) the regularization
function, typically the Total Variation norm, that is inaccurate
for large texture-less areas. Due to this poor performance
on large low-gradient areas, semi-dense maps are sometimes
estimated only in high-gradient image pixels for visual direct
SLAM (e.g., [2]). Such semi-dense maps are accurate in high-parallax configurations but not a complete model of the viewed scene. Low-parallax configurations are mostly ignored in the visual SLAM literature.
We gratefully acknowledge the support of NVIDIA Corporation for the donation of a Titan X GPU, the Spanish government (projects DPI2012-32168 and DPI2015-67275), the Aragon regional government (Grupo DGA T04-FSE) and the University of Zaragoza (JIUZ-2015-TEC-03).

¹The authors are with the I3A, University of Zaragoza, Spain. {jmfacil, montesano, jcivera}@unizar.es, aconchabelenguer@gmail.com

²Luis Montesano is also with Bit&Brain Technologies SL.
Fig. 1: Overview of our proposal. The input is a set of overlapping monocular views. The learning-based single-view and geometry-based multi-view depths are fused, outperforming both of them. All the depth images are color-normalized for better comparison. This figure is best viewed in color.
An alternative method is single-view depth estimation,
which has recently experienced a qualitative improvement in
its accuracy thanks to the use of deep convolutional networks
[3]. Their accuracy is still lower than that of multi-view
methods for high-texture and high-parallax points. But, as
we will argue in this paper, they improve the accuracy of
multi-view methods in low-texture areas due to the high-level feature extraction done by the deep networks – opposed
to the low-level high-gradient pixels used by the multi-view
methods. Interestingly, the errors in the estimated depth seem
to be locally and not globally correlated since they come
from the deep learning features.
The main idea of this paper is to exploit the information
of single and multi-view depth maps to obtain an improved
depth even in low-parallax sequences and in low-gradient
areas. Our contribution is an algorithm that fuses these complementary depth estimations. There are two main challenges
in this task. First, the error distribution of the single-view estimation has several local modes, as it depends on the image
content and not on the geometric configuration. Single and
multi-view depth are hence related by a content-dependent
deformation. Secondly, modeling the multi-view accuracy is
not trivial when addressing general cases, including high and
low-parallax configurations.
We propose a method based on a weighted interpolation
of the single-view local structure based on the quality and
influence area of the multi-view semi-dense depth, and evaluate its performance in two public datasets – NYUv2 and TUM.
The results show that our fusion algorithm improves over
both individual single and multi-view approaches.
The rest of the paper is organized as follows. Section
II describes the most relevant related work. Section III
motivates and details the proposed algorithm for single and
multi-view fusion. Section IV presents our experimental
results and, finally, Section V contains the conclusions of
this work.
II. RELATED WORK
We classify the related work for dense depth estimation
into two categories: methods based in multiple views of the
scene and those which predict depth from one single image.
_A. Multi-View Depth_
In the multi-view depth estimation, [1], [4], [5] are the first
works that achieved dense and real-time reconstructions from
monocular sequences. Some of the most relevant aspects are
the direct minimization of the photometric error –instead of
the traditional geometric error of sparse reconstructions– and
the regularization of the multi-view estimation by adding the
total variation (TV) norm to the cost function.
TV regularization has low accuracy for large textureless
areas, as shown recently in [6], [7], [8] among others.
In order to overcome this, [6] proposes a piecewise-planar regularization, the plane parameters coming from multi-view superpixel triangulation [9] or layout estimation [10].
[7] proposes higher-order regularization terms that enforce
piecewise affine constraints even in separated pixels. [8]
selects the best regularization function among a set using
sparse laser data. Building on [6], [11] adds the sparse
data-driven 3D primitives of [12] as a regularization prior.
Compared to these works, our fusion is the first one where
the information added to the multi-view depth is fully dense,
data-driven and single-view; and hence it does not rely on
additional sensors, parallax or Manhattan and piecewise-planar assumptions. It only relies on the network capabilities
for the current domain, assuming that the test data follows
the same distribution as the data used for training.
Due to the difficulty of estimating an accurate and fully
dense map from monocular views there are several approaches that estimate only the depth for the highest-gradient
pixels (e.g., [2]). While this approach produces maps of
higher density than the more traditional feature-based ones
(e.g., [13]), they are still incomplete models of the scene and
hence their applicability might be more limited.
_B. Single-View Depth_
Depth can be estimated from a single view using different
image cues, for example focus (e.g., [14]) or perspective
(e.g., [15]). Learning-based approaches, as the one we use,
basically discover RGB patterns that are relevant for accurate
depth regression.
| | High-Gradient | Low-Gradient |
|---|---|---|
| Multi-View | 0.18 | 1.02 |
| Single-View | 0.36 | 0.42 |

TABLE I: Median depth error [m] for single and multi-view depth estimation, for high and low-gradient pixels. This evaluation has been done on the sequence living room 0030a from the NYUv2 dataset (one of the sequences with higher parallax). The normalized threshold between high and low-gradient pixels is 0.35 (gray scale).
The pioneering work of Saxena et al. [16] trained an MRF to model depth from a set of global and local image features. Before that, [17] presented an early approach to depth prediction from monocular and stereo cues. Eigen et al. [18] presented two stacked deep convolutional neural networks (CNNs), one to predict global depth and a second one that refines it locally. Building upon this method, [3] recently presented a three-scale convolutional network to estimate
depth, surface normals and semantic labeling. Liu et al.
[19] use a unified continuous CRF-and-CNN framework to
estimate depth. The CNN is used to learn the unary and
pairwise potentials that the CRF uses for depth prediction.
Based on [3], [20] incorporates mid-level features in its
prediction using skip-layers. It shows competitive results
and a small batch-size training strategy that makes their
network faster to train. [21] introduces a different method to
predict depth from single-view using deep neural networks,
showing that training the network with a much richer output
improves the accuracy. [22] formulates the depth prediction
as a classification problem and the net output is a pixel-wise distribution over a discrete depth range. Finally, [23]
presents an unsupervised network for depth prediction using
stereo images.
III. SINGLE AND MULTI-VIEW DEPTH FUSION
State-of-the-art multi-view techniques have a strong dependency on high-parallax motion and heterogeneous-texture
scenes. Only a reduced set of salient pixels that hold both
constraints has a small error, and the error for the majority
of the points is large and uncorrelated. In contrast, single-view methods based on CNNs achieve reasonable errors over the whole image, but these errors are locally correlated. Our
proposal exploits the best properties of these two methods.
Specifically, it uses a deep convolutional network (CNN) to
produce rough depth maps and fuses their structure with the
results of a semi-dense multi-view depth method (Fig. 1).
Before delving into the technical aspects, we will motivate
our proposal with some illustrative results. Table I shows the
median depth error of the high-gradient and low-gradient
pixels for a multi-view and single view reconstruction using
a medium/high-parallax sequence of the NYUv2 dataset. For
the multi-view reconstruction, the error for the low-gradient
pixels increases by a factor of 2. Notice that the opposite
happens for the single-view reconstruction: the error of high-gradient pixels is the one increasing by a factor of 2. For
this experiment, the threshold used to distinguish between
high and low-gradient pixels is 0.35 in gray scale (where the maximum gradient would be 1).

Fig. 2: Histogram of single-view depth error [m] for three sample sequences. Notice the multiple modes, each one corresponding to a local image structure; this can be seen in the error images in the top row of the figure.
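One plausible way to reproduce this split (the exact gradient operator is not specified in the paper, so the central-difference gradient below is an assumption, as is the toy data) is:

```python
import numpy as np

def gradient_masks(gray, threshold=0.35):
    """Split a normalized grayscale image (values in [0, 1]) into
    high- and low-gradient pixel masks, as used for Table I."""
    gy, gx = np.gradient(gray)
    magnitude = np.hypot(gx, gy)
    high = magnitude > threshold
    return high, ~high

def median_error(depth_est, depth_gt, mask):
    return np.median(np.abs(depth_est[mask] - depth_gt[mask]))

# Toy example with random data standing in for real depth maps.
rng = np.random.default_rng(0)
gray = rng.random((240, 320))
high, low = gradient_masks(gray)
d_est, d_gt = rng.random((240, 320)), rng.random((240, 320))
print(median_error(d_est, d_gt, high), median_error(d_est, d_gt, low))
```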
Furthermore, the single-view depth error usually has a
structure that indicates the presence of local correlations.
For instance, Fig. 2 shows the histogram of the single-view
depth estimation error for three different sequences (two of
the NYUv2 dataset and one of the TUM dataset). Notice that
the error distribution is grouped in different modes, each one
corresponding to an image segment.
This effect is caused by the use of the high-level image
features of the latest layers of the CNN network, that extend
over dozens of pixels in the original image and hence over
homogeneous texture areas. The different nature of the errors
can be exploited to outperform both individual estimations.
This fusion, however, cannot be naïvely implemented with a simple global model, as it requires content-based deformations.
In the next subsections we detail the specific multi and
single-view methods that we use in this work and our fusion
algorithm.
_A. Multi-view Depth_
For the estimation of the multi-view depth we adopt a direct approach [2], which allows us to estimate a dense or semi-dense map, in contrast to the sparser maps of the feature-based approaches. In order to estimate the depth of a keyframe I_k we first select a set of n overlapping frames {I_1, . . ., I_o, . . ., I_n} from the monocular sequence. After that, every pixel x_l^k of the reference image I_k is backprojected at an inverse depth ρ and projected again into every overlapping image I_o:
$$x_l^o = T_{ko}(x_l^k, \rho_l) = K R_{ko}^{\top}\left( \frac{K^{-1} x_l^k}{\left\| K^{-1} x_l^k \right\| \rho_l} - t_{ko} \right), \quad (1)$$

where T_ko, R_ko and t_ko are respectively the relative transformation, rotation and translation between the keyframe I_k and every overlapping frame I_o, and K is the camera internal calibration matrix.

We define the total photometric error C(ρ) as the summation of the photometric errors ϵ_l between every pixel (or every high-gradient pixel if we want a semi-dense map) x_l^k in the reference image I_k and its corresponding one x_l^o in every other overlapping image I_o at a hypothesized inverse depth ρ_l,

$$C(\rho) = \frac{1}{n} \sum_{o=1,\, o \neq k}^{n} \sum_{l=1}^{t} \epsilon_l(I_k, I_o, x_l^k, \rho_l). \quad (2)$$

The error ϵ_l(I_k, I_o, x_l^k, ρ_l) for each individual pixel x_l^k is the difference between the photometric values of the pixel and its correspondence,

$$\epsilon_l(I_k, I_o, x_l^k, \rho_l) = I_k(x_l^k) - I_o(x_l^o). \quad (3)$$

The estimated depth for every pixel, ρ̂ = (ρ̂_1 . . . ρ̂_l . . . ρ̂_t)^⊤, is obtained by the minimization of the total photometric error C(ρ):

$$\hat{\rho} = \arg\min_{\rho}\, C(\rho). \quad (4)$$
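To illustrate eqs. (1)-(4), the following toy sketch (synthetic images and a purely lateral camera translation, all assumed for the example) warps a pixel for a grid of inverse-depth hypotheses and picks the one with the smallest photometric error:

```python
import numpy as np

def warp(x_k, rho, K, R_ko, t_ko):
    """Eq. (1): back-project pixel x_k at inverse depth rho, project into view o."""
    ray = np.linalg.inv(K) @ np.array([x_k[0], x_k[1], 1.0])
    p = ray / (np.linalg.norm(ray) * rho)          # 3D point in keyframe coordinates
    x_o = K @ (R_ko.T @ (p - t_ko))
    return x_o[:2] / x_o[2]

def photometric_error(I_k, I_o, x_k, rho, K, R_ko, t_ko):
    """Eq. (3) with nearest-neighbour sampling of the correspondence."""
    u, v = np.round(warp(x_k, rho, K, R_ko, t_ko)).astype(int)
    return I_k[x_k[1], x_k[0]] - I_o[v, u]

# Synthetic setup: I_o is I_k shifted 5 px, consistent with rho_true = 0.5
# under focal length 100 and baseline t = (0.1, 0, 0).
rng = np.random.default_rng(0)
I_k = rng.random((64, 64)).cumsum(axis=1)          # smooth horizontal ramp image
I_o = np.roll(I_k, -5, axis=1)                     # shift = f * |t| * rho_true
K = np.array([[100.0, 0, 32], [0, 100.0, 32], [0, 0, 1]])
R, t = np.eye(3), np.array([0.1, 0.0, 0.0])

x_k = (32, 32)                                     # (u, v) pixel in the keyframe
rhos = np.linspace(0.1, 1.0, 19)
costs = [abs(photometric_error(I_k, I_o, x_k, r, K, R, t)) for r in rhos]
print("estimated inverse depth:", rhos[int(np.argmin(costs))])  # ~0.5
```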
_B. Single-view Depth_
For single-view depth estimation we use the deep convolutional neural network presented by Eigen et al. [3].
This network uses three stacked CNNs to process the images
at three different scales. The input to the network is the
RGB keyframe I_k. As we use the network structure and
parameters released by the authors without further training,
our input image size is 320 × 240. The output of the network
is the predicted depth, which we will denote as s. The size of
the output is 147 × 109, which we upsample in our pipeline in
order to fuse it with the multi-view depth.
The first-scale CNN extracts high-level features tuned for
depth estimation. This CNN produces 64 feature maps of
size 19 × 14 that are the input, along with the RGB image,
to the second-scale CNN. This second stacked CNN refines
the output of the first one with mid-level features to produce
a first coarse depth map of size 74 × 55. This depth map is
upsampled and feeds a third stacked CNN that performs a local
refinement of the depth. This final step is necessary, as the
convolution and pooling steps of the previous layers filter
out the high-frequency details.
The first scale was initialized with two different pre-trained
networks: AlexNet [24] and the Oxford VGG [25]. We
use the VGG version, the more accurate one as reported by
the authors. This network has been trained on indoor scenes
with the NYUDepth v2 dataset [26]. As they used the official
train/test splits of the dataset, so do we. We chose this neural
network because it was the best-performing dense
single-view method at the time we started this work, and it
still offers the best trade-off between quality and efficiency.
We refer the reader to the original work [3]
for more details on this part of our pipeline.
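As an illustration of the resolution bookkeeping around this network, the sketch below upsamples the 147 × 109 prediction to the 320 × 240 keyframe size used in our pipeline; `predict_depth` is a hypothetical wrapper around the released model, not part of it.

```python
import numpy as np
from scipy.ndimage import zoom

def upsample_to_keyframe(depth_pred, target_hw=(240, 320)):
    """Bilinear upsampling of the network output (109 rows x 147 columns)
    to the keyframe resolution, so it can be fused with the multi-view depth."""
    h, w = depth_pred.shape
    return zoom(depth_pred, (target_hw[0] / h, target_hw[1] / w), order=1)

# s = upsample_to_keyframe(predict_depth(rgb_keyframe))   # dense single-view prior s
```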
_C. Depth Fusion_

As we mentioned before, the objective is to fuse the outputs
of the two previous methods while keeping the best properties of
each of them: the reliable local structure of the single-view
estimation and the accurate, but semi-dense, multi-view depth
estimation. Let
s and m denote the single-view depth and the multi-view
semi-dense depth estimation, respectively. s is predicted as
detailed in Section III-B, and $m = 1/\hat{\rho}$ is the inverse of the
inverse depth estimated in Section III-A.
The fused depth estimation $f_{ij}$ for each pixel (i, j) of
a keyframe I_k is computed as a weighted interpolation of
depths over the set of pixels in the multi-view depth image:

$$f_{ij} = \sum_{(u,v) \in \Omega} W_{s_{ij}}^{m_{uv}} \left( m_{uv} + (s_{ij} - s_{uv}) \right), \qquad (5)$$

where Ω is the semi-dense set of pixels estimated by the
multi-view algorithm (e.g., in a high-parallax sequence they
usually correspond to the high-gradient pixels). The interpolation weights $W_{s_{ij}}^{m_{uv}}$ model the likelihood of each pixel
(u, v) ∈ Ω belonging to the same local structure as pixel
(i, j). The interpolation can be interpreted in two ways. First,
the depth gradient $(s_{ij} - s_{uv})$ is added to each multi-view
depth $m_{uv}$, i.e., we create a depth map for each $m_{uv}$ with the
structure of s and then weigh them with pixel-based weights.
Second, each depth $s_{ij}$ is modified according to the
weighted discrepancy $(m_{uv} - s_{uv})$.
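A minimal sketch of Eq. (5), assuming the normalized weight map of each multi-view anchor (u, v) has already been computed (see the weights below), could look as follows.

```python
import numpy as np

def fuse_depth(s, m, omega, weights):
    """Eq. (5): each anchor depth m[u, v], shifted by the single-view structure
    (s - s[u, v]), contributes to every pixel with its normalized weight map."""
    f = np.zeros_like(s)
    for (u, v) in omega:                 # omega: semi-dense multi-view anchors
        f += weights[(u, v)] * (m[u, v] + (s - s[u, v]))
    return f
```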
The key ingredients of this interpolation are the weights
$W_{s_{ij}}^{m_{uv}}$, which model a deformation based on the local image
structures. Each weight is computed as the product of four
different factors. The first factor,

$$\tilde{W}_{1,s_{ij}}^{m_{uv}} = e^{-\sqrt{(i-u)^2 + (j-v)^2}/\sigma_1}, \qquad (6)$$

simply measures proximity based on the distance between the
pixels (i, j) and (u, v). The parameter σ1 controls the radius
of proximity for each point. The remaining three factors
depend on the structure of the single-view prediction s. The
second factor,

$$\tilde{W}_{2,s_{ij}}^{m_{uv}} = \frac{1}{|\nabla_x s_{uv} - \nabla_x s_{ij}| + \sigma_2} \cdot \frac{1}{|\nabla_y s_{uv} - \nabla_y s_{ij}| + \sigma_2}, \qquad (7)$$

measures the similarity of the depth gradients and assigns larger
weights to similar ones. $\nabla_x s_{ij}$ and $\nabla_y s_{ij}$ represent the depth
gradient in the x and y directions, respectively, at the pixel
(i, j). σ2 limits the influence of a point to avoid extremely
high weights for very similar or identical gradients; we set
it to 0.1 in the experiments.

Finally, the factors $\tilde{W}_{3,s_{ij}}^{m_{uv}}$ and $\tilde{W}_{4,s_{ij}}^{m_{uv}}$ strengthen the
influence between points lying in the same plane and
are defined as

$$\tilde{W}_{3,s_{ij}}^{m_{uv}} = e^{-|(s_{ij} + \nabla_x s_{ij} \cdot (u-i)) - s_{uv}|} + \sigma_3 \qquad (8)$$

and

$$\tilde{W}_{4,s_{ij}}^{m_{uv}} = e^{-|(s_{ij} + \nabla_y s_{ij} \cdot (v-j)) - s_{uv}|} + \sigma_3, \qquad (9)$$

where σ3 sets a minimum weight for any point in Ω. This is
required to avoid vanishing weights when they are combined
with $\tilde{W}_{1,s_{ij}}^{m_{uv}}$ and $\tilde{W}_{2,s_{ij}}^{m_{uv}}$.

The product of these four factors makes a non-normalized
weight for each pixel in Ω,

$$\tilde{W}_{s_{ij}}^{m_{uv}} = \prod_{n=1}^{4} \tilde{W}_{n,s_{ij}}^{m_{uv}}, \qquad (10)$$

and represents its area of influence. The parameters σ1, σ2
and σ3 shape the area of influence and have to be selected
to balance proximity, gradient and planarity, and to avoid
discontinuities in the result of the fusion. This was done
empirically on a small set of three images. The values of the
parameters are 15, 0.1 and 1e−3, respectively, and we kept
them fixed for all our experiments.

Fig. 3 shows this area for a point in an image and how it
is computed. Notice how the influence expands around the
point but is kept inside the same local structure (the table).

Fig. 3: Non-normalized influence of the highlighted red
point in the image. First column: RGB input image with
a red point over the table; this point represents one pixel
estimated by the multi-view algorithm. Second column: each
one of the weights calculated separately; the third and fourth
weights are shown as a product for a more intuitive view.
Third column: non-normalized influence of the highlighted
point in the RGB image. Notice how its influence is cut at
the edge of the table. Figure best viewed in electronic format.

Once all the factors have been computed, since all the pixels
(i, j) are influenced by all the pixels in Ω (see Eq. 5), we
normalize the weights for each single-view pixel so that all the
weights over a pixel (i, j) sum to 1:

$$W_{s_{ij}}^{m_{uv}} = \frac{\tilde{W}_{s_{ij}}^{m_{uv}} - \min_{(g,h)\in\Omega} \tilde{W}_{s_{ij}}^{m_{gh}}}{\sum_{(p,k)\in\Omega} \left( \tilde{W}_{s_{ij}}^{m_{pk}} - \min_{(g,h)\in\Omega} \tilde{W}_{s_{ij}}^{m_{gh}} \right)}. \qquad (11)$$

The normalized weights expand the local influence to the
whole image (see Fig. 4 and Fig. 5 for a more detailed
view). Notice how the influence expands along planes even
if the points in Ω do not reach the end of the plane, and
is sharply reduced when the local structure changes. Once
these influence weights have been calculated and normalized,
the fused depth estimation f for each point (i, j) is a
combination of all the selected points in Ω, as presented
in Eq. 5.
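The following NumPy sketch assembles the four factors of Eqs. (6)-(10) and the normalization of Eq. (11) for a set of anchors; the (i, j) index convention and the use of np.gradient for the depth gradients are our assumptions, not the authors' implementation.

```python
import numpy as np

def raw_weights(s, omega, sigma1=15.0, sigma2=0.1, sigma3=1e-3):
    """Non-normalized weight maps of Eq. (10) for each anchor (u, v) in omega."""
    gi, gj = np.gradient(s)                      # depth gradients along the i and j axes
    H, W = s.shape
    ii, jj = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    tilde = {}
    for (u, v) in omega:
        w1 = np.exp(-np.sqrt((ii - u) ** 2 + (jj - v) ** 2) / sigma1)      # Eq. (6)
        w2 = 1.0 / ((np.abs(gi[u, v] - gi) + sigma2) *
                    (np.abs(gj[u, v] - gj) + sigma2))                      # Eq. (7)
        w3 = np.exp(-np.abs((s + gi * (u - ii)) - s[u, v])) + sigma3       # Eq. (8)
        w4 = np.exp(-np.abs((s + gj * (v - jj)) - s[u, v])) + sigma3       # Eq. (9)
        tilde[(u, v)] = w1 * w2 * w3 * w4                                  # Eq. (10)
    return tilde

def normalize_weights(tilde):
    """Eq. (11): shift by the per-pixel minimum over anchors and normalize to sum to 1."""
    stack = np.stack(list(tilde.values()))
    shifted = stack - stack.min(axis=0)
    total = shifted.sum(axis=0) + 1e-12          # guard against division by zero
    return {k: shifted[n] / total for n, k in enumerate(tilde)}
```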
Fig. 4: Normalized influence area of the points. Notice
how it expands around local structure areas given a set of
points in Ω. First column: RGB image with the points of
Ω labeled with different colors. Second column: influence
areas computed by our method. Notice how this influence
expands in areas with the same local structure but can be
misled in areas where there is a lack of points or where the
estimation from the neural net is not accurate enough. Figure
best viewed in color.
Fig. 5: Detail of the influence area. Notice how it expands
mainly in the areas with same local structure. Figure best
viewed in color.
_D. Multi-view Low-Error Point Selection_

Up to now we have assumed that all the points in the
multi-view semi-dense depth map Ω have low error. This is
easily achievable in high-parallax sequences by using robust
estimators (robust cost functions or RANSAC). However, it
is problematic for the degenerate or quasi-degenerate low-parallax geometries that we also target in this paper. In this
case, multi-view depths may contain large errors that will
propagate to the fused depth map, and it is necessary to filter
them out. Unexpectedly, selecting high-gradient pixels was
not robust enough to remove points with large depth errors,
so we have developed a two-step algorithm that takes into
account photometric and geometric information in the first
step and the single-view depth map in the second one.
The first step selects a fixed percentage of the best correspondence candidates (the best 25% in our experiments)
based on the product of a photometric and a geometric
score. On one hand, the photometric criterion focuses on
the quality of the correspondences using image information.
We apply a modified version of the second-best ratio. We
first extract the two closest matches for a pixel (smallest
photometric errors according to Eq. 3). We then compute the
score as a function of the ratio between the distance of the
two descriptors (a high ratio suggesting a good match) and
the gradient of the distance function along the epipolar line
(i.e., the error function presenting a distinct V-shape around
this match and suggesting spatial accuracy). On the other
hand, the geometric score simply backpropagates the image
correspondence error to the depth estimation, resulting in low
scores for low-parallax correspondences.
In a second stage we also use the structure of the
single-view reconstruction and apply RANSAC to estimate
a spurious-free linear transformation between the multi- and
single-view points, using only the points pre-filtered in the
first stage. We apply this linear model along the entire image,
since a consensus that includes outliers may be found if small
patches are used. This further reduces the number of spurious
depth values from the multi-view algorithm. The result is a
small set of low-error points that we use for the interpolation
of the previous section. As mentioned before, in our experiments this
algorithm behaves better than a geometric-only compatibility
test, especially in the low-parallax sequences of the NYUv2
dataset.
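A compact sketch of the two stages, with illustrative stand-ins for the photometric and geometric scores described above, might look as follows; the candidate fields and the inlier tolerance are assumptions.

```python
import numpy as np

def stage1_select(candidates, keep_frac=0.25):
    """Keep the best fraction of candidates by the product of the photometric
    and geometric scores (field names are illustrative)."""
    ranked = sorted(candidates, key=lambda c: c["photo_score"] * c["geo_score"], reverse=True)
    return ranked[: int(len(ranked) * keep_frac)]

def stage2_ransac(points, iters=200, tol=0.1):
    """Fit a spurious-free global linear map m ~ a*s + b between single-view (s)
    and multi-view (m) depths with a simple 1D RANSAC, keeping the inliers."""
    s = np.array([p["s_depth"] for p in points])
    m = np.array([p["m_depth"] for p in points])
    best = np.zeros(len(points), dtype=bool)
    rng = np.random.default_rng(0)
    for _ in range(iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        if s[i] == s[j]:
            continue
        a = (m[i] - m[j]) / (s[i] - s[j])
        b = m[i] - a * s[i]
        inliers = np.abs(m - (a * s + b)) < tol
        if inliers.sum() > best.sum():
            best = inliers
    return [p for p, ok in zip(points, best) if ok]
```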
IV. EXPERIMENTAL RESULTS
In this section we evaluate the algorithm and compare
its performance against two state-of-the-art methods: multi-view direct mapping using TV regularization (implemented
following [1], [28]) and the single-view depth estimation
using the network of [3]. We have selected two datasets
with different properties. The first one is the NYUv2 Depth
Dataset [26], a general dataset aimed at image segmentation
evaluation and hence likely to contain low-parallax and low-texture sequences. We analyze results on six sequences from
the test set (i.e., the single-view net had not been trained on
these sequences), selected just to include different types of
rooms. The second one is the TUM RGB-D SLAM Dataset
[27], a dataset oriented to visual SLAM and then likely to
present a bias benefiting multi-view depth. In this case, we
evaluated two sequences selected randomly.
We run our algorithm on a 320 × 240 subsampled version
of the images, as this is the input size of the single-view neural
network given by the authors. We also run our multi-view
depth estimation at this image size, and upsample the fused
depth to 640 × 480 in order to compare it against the ground-truth D channel from the Kinect camera.
As our aim is to evaluate the accuracy of the depth
estimation, we will assume that camera poses are known
for the multi-view estimation. In the TUM RGB-D SLAM
Dataset [27] we use the ground truth camera poses. In the
NYUv2 Depth Dataset sequences we estimate them using
the RGB-D Dense Visual Odometry by Gutiérrez-Gómez
et al. [29]. These camera poses will remain fixed and used
to create the multi-view depth maps. As mentioned before,
the parameters of the fusion algorithm were experimentally
set prior to the evaluation on a small separate set of images.
| Dataset | Sequence | RMSE: TV | RMSE: Eigen [3] | RMSE: Ours (auto) | Scale inv.: TV | Scale inv.: Eigen [3] | Scale inv.: Ours (auto) | Mean err. (m): TV | Mean err. (m): Eigen [3] | Mean err. (m): Ours (auto) | Mean err. (m): Ours (man) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| NYUDepth v2 | bathroom 0018 | 1.458 | 0.852 | **0.793** | 0.405 | 0.150 | **0.145** | 1.174 | 0.692 | **0.612** | 0.263 |
| NYUDepth v2 | bedroom 0013 | 1.004 | 0.550 | **0.482** | 0.212 | 0.139 | **0.136** | 0.690 | 0.441 | **0.344** | 0.163 |
| NYUDepth v2 | dining room 0032 | 2.212 | 0.710 | **0.694** | 0.416 | 0.209 | **0.204** | 1.797 | 0.581 | **0.554** | 0.318 |
| NYUDepth v2 | kitchen 0032 | 3.599 | 1.621 | **1.572** | 0.812 | 0.592 | **0.583** | 2.920 | 1.222 | **1.183** | 0.805 |
| NYUDepth v2 | living room 0025 | 1.073 | 0.620 | **0.597** | 0.289 | 0.236 | **0.219** | 0.798 | 0.471 | **0.435** | 0.289 |
| NYUDepth v2 | living room 0030a | 1.031 | 0.818 | **0.792** | 0.411 | 0.228 | **0.219** | 0.849 | 0.532 | **0.440** | 0.329 |
| TUM | fr1 desk | 1.581 | 0.433 | **0.410** | 0.255 | 0.121 | **0.103** | 1.211 | 0.317 | **0.294** | 0.154 |
| TUM | fr1 room | 1.467 | 0.323 | **0.301** | 0.167 | 0.092 | **0.081** | 1.163 | 0.231 | **0.207** | 0.102 |

TABLE II: Error metrics for the NYUv2 and TUM datasets. For each sequence and metric we compare the
TV-regularized multi-view depth (TV), the single-view depth of Eigen et al. [3] and our fused depth with automatic
point selection (Ours auto); the last column gives the mean error for the fused depth with manual multi-view point
selection (Ours man). Bold marks the best of the three automatic methods. The evaluation has been performed on
the first 100 frames of each sequence.
Fig. 6: The first six rows are depth images for the NYUDepth v2 dataset [26] and the last two rows are for the TUM
dataset [27]. Color ranges are row-normalized to facilitate the comparison between different methods. First column:
RGB keyframe; second column: TV-regularized multi-view depth; third column: single-view depth; fourth column: our
depth fusion with automatic multi-view point selection; fifth column: our depth fusion with manual multi-view point
selection; sixth column: ground truth. Figure best viewed in electronic format.
To evaluate the methods, we computed three different metrics: the RMSE, the mean absolute error in meters, and the
scale-invariant error proposed in [18], $\frac{1}{n}\sum_i d_i^2 - \frac{1}{n^2}\big(\sum_i d_i\big)^2$,
where $d = \log(y) - \log(y^*)$, y and y* being the ground
truth depth and the estimated depth, respectively. The results
are summarized in Table II. Our method outperforms the
TV regularization on both datasets, obtaining an average
improvement of over 50% with respect to the mean of the error
in meters. As expected, the TV regularization performs better
in the TUM sequences and achieves lower errors, but in
terms of relative improvement there do not seem to be big differences
between the two datasets. Our fusion of depths also outperforms
the single-view depth reconstruction, the improvement being
10% on average. Both methods perform similarly in both
datasets but, except in one sequence, our method is always
better than or as good as the deep single-view reconstruction.
Notice that the improvement does not come exclusively from
scale correction; the scale-invariant error shows that our
method improves the structure estimation in both the single-
and multi-view cases.
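For reference, the three metrics of Table II can be computed as in the following sketch (our own formulation of the standard definitions; the validity mask is an assumption).

```python
import numpy as np

def depth_metrics(y_est, y_gt):
    """RMSE, mean absolute error (m), and the scale-invariant error of [18]."""
    valid = (y_gt > 0) & (y_est > 0)       # assumed mask for valid depth readings
    e = y_est[valid] - y_gt[valid]
    rmse = np.sqrt(np.mean(e ** 2))
    mae = np.mean(np.abs(e))
    d = np.log(y_gt[valid]) - np.log(y_est[valid])
    scale_inv = np.mean(d ** 2) - np.mean(d) ** 2
    return rmse, mae, scale_inv
```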
The right-most column of Table II shows the depth errors
when the set of multi-view points does not contain outliers.
We selected them using the ground-truth data from the D
channel, keeping only those points whose depth error
was lower than 10 cm. The results are better for all sequences
than for any method, attaining improvements of around 70% and
38% with respect to TV and [3], respectively. Although
expected, this result highlights the impact of multi-view
outliers and the need for good point selection. It also provides
an upper bound and shows that there is still room for
improvement in this last part of our algorithm. In Table III
we show an experiment to better understand the contribution
of each weight of our algorithm. For this evaluation we have
considered the spurious-free set of multi-view points in order
to avoid the influence of noise. It can be seen that using all
the weights yields an average improvement of 9.8% in mean
absolute error with respect to using just W1, and an improvement of
6.5% with respect to using W1 and W2.

| Dataset | Scale inv.: W1 | Scale inv.: W1 · W2 | Scale inv.: ∏ Wi | Mean err. (m): W1 | Mean err. (m): W1 · W2 | Mean err. (m): ∏ Wi |
|---|---|---|---|---|---|---|
| NYUv2 | 0.224 | 0.216 | 0.208 | 0.390 | 0.376 | 0.353 |
| TUM | 0.098 | 0.088 | 0.064 | 0.145 | 0.142 | 0.128 |

TABLE III: Mean error metrics for the NYUv2 and TUM datasets, comparing the fusion using only the weight W1,
using W1 · W2, and using all four weights together.
Finally, we present the results of some randomly picked
images for each sequence of each dataset. Fig. 6 shows the
obtained depth images for the NYUDepth v2 and the TUM
datasets. The improvement with respect to the regularized
multi-view approach is clear visually since the depth structure is much more consistent. Improvements with respect
to single-view images are more subtle and are best viewed
by looking at the corresponding depth error images of Fig.
7. Usually, the improvement comes from a better relative
placement of some local structure. For instance, the walls are
darker in the error images (see the bathroom 18, bedroom 13
or fr1 desk in Fig. 7). The effect is more evident when the
multi-view points were selected based on the ground truth.
This better alignment of local structures reduces the error, as
can be seen in the per-sequence error boxplots of Fig. 8.
Fig. 7: The first six rows are error images (predicted depth
− ground truth) for the NYUDepth v2 dataset [26] and the
last two rows are for the TUM dataset [27]. Color ranges
are row-normalized to facilitate the comparison between
different methods; darker blue is better. First column: RGB
keyframe; second column: single-view depth; third column:
our depth fusion with automatic multi-view point selection;
fourth column: our depth fusion with manual multi-view point
selection. In the third column, the areas where the improvement
of our method over the single-view error can be easily
appreciated are highlighted in yellow. Figure best
viewed in electronic format.
V. CONCLUSIONS
In this paper we have presented an algorithm for dense
depth estimation by fusing 1) the multi-view depth estimation
from a direct mapping method, and 2) the single-view depth
that comes from a deep convolutional network trained on
RGB-D images. Our approach selects a set of the most
accurate points from the multi-view reconstruction and fuses
them with the dense single-view estimation. It is worth
remarking that the single-view depth errors do not depend
on the geometric configuration but on the image content
and hence the transformation is not geometrically rigid and
varies locally. The estimation of this alignment is our main
contribution and the most challenging aspect of this research.
Fig. 8: Box-and-whiskers plots of the pixel error distribution for four of our test scenes (bathroom_0018,
bedroom_0013, living_room_0025 and living_room_0030a). From left to right within each plot: our method with
manual point selection, our method with automatic point selection, single-view depth from Eigen et al. [3] and
TV-regularized multi-view depth.
Our experiments show that our proposal improves over
the state of the art (Eigen et al. [3] for single-view depth
and direct mapping plus TV regularization for multi-view
depth). Contrary to other approaches, the single-view depth
we use is entirely data-driven and hence does not rely on
any scene assumption. As mentioned, we take the network
of [3] as our single-view baseline, because of its availability
and its excellent accuracy-cost ratio. However, our fusion
algorithm is independent of the specific network and could
be used with any of the single-view approaches mentioned
in Section II. Future work will, as suggested by the results,
try to improve the multi-view points selection and the fusion
of both images using, for instance, iterative procedures or
segmentation-based fusion.
REFERENCES
[1] R. A. Newcombe, S. J. Lovegrove, and A. J. Davison, “DTAM: Dense
tracking and mapping in real-time,” in Computer Vision (ICCV), 2011
_IEEE International Conference on, pp. 2320–2327, IEEE, 2011._
[2] J. Engel, T. Schöps, and D. Cremers, “LSD-SLAM: Large-scale direct
monocular SLAM,” in European Conference on Computer Vision,
pp. 834–849, Springer, 2014.
[3] D. Eigen and R. Fergus, “Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture,”
in Proceedings of the IEEE International Conference on Computer
_Vision, pp. 2650–2658, 2015._
[4] G. Graber, T. Pock, and H. Bischof, “Online 3D reconstruction using
convex optimization,” in 2011 IEEE International Conference on
_Computer Vision Workshops, pp. 708–711, IEEE, 2011._
[5] J. Stühmer, S. Gumhold, and D. Cremers, “Real-time dense geometry
from a handheld camera,” in Joint Pattern Recognition Symposium,
pp. 11–20, Springer, 2010.
[6] A. Concha, W. Hussain, L. Montano, and J. Civera, “Manhattan
and Piecewise-Planar Constraints for Dense Monocular Mapping,” in
_Robotics: Science and Systems, 2014._
[7] P. Pinies, L. M. Paz, and P. Newman, “Dense mono reconstruction:
Living with the pain of the plain plane,” in 2015 IEEE International
_Conference on Robotics and Automation, pp. 5226–5231, 2015._
[8] P. Piniés, L. M. Paz, and P. Newman, “Too much TV is bad: Dense
reconstruction from sparse laser with non-convex regularisation,” in
_2015 IEEE International Conference on Robotics and Automation_
_(ICRA), pp. 135–142, IEEE, 2015._
[9] A. Concha and J. Civera, “Using superpixels in monocular SLAM,” in
_Robotics and Automation (ICRA), 2014 IEEE International Conference_
_on, pp. 365–372, IEEE, 2014._
[10] V. Hedau, D. Hoiem, and D. Forsyth, “Recovering the spatial layout
of cluttered rooms,” in 2009 IEEE 12th international conference on
_computer vision, pp. 1849–1856, IEEE, 2009._
[11] A. Concha, W. Hussain, L. Montano, and J. Civera, “Incorporating
scene priors to dense monocular mapping,” Autonomous Robots,
vol. 39, no. 3, pp. 279–292, 2015.
[12] D. F. Fouhey, A. Gupta, and M. Hebert, “Data-driven 3d primitives for
single image understanding,” in Proceedings of the IEEE International
_Conference on Computer Vision, pp. 3392–3399, 2013._
[13] R. Mur-Artal, J. Montiel, and J. D. Tardos, “ORB-SLAM: a versatile
and accurate monocular SLAM system,” Robotics, IEEE Transactions
_on, vol. 31, no. 5, pp. 1147–1163, 2015._
[14] J. Ens and P. Lawrence, “An investigation of methods for determining
depth from focus,” IEEE Transactions on pattern analysis and machine
_intelligence, vol. 15, no. 2, pp. 97–108, 1993._
[15] P. Sturm and S. Maybank, “A method for interactive 3d reconstruction
of piecewise planar objects from single images,” in The 10th British
_machine vision conference (BMVC’99), pp. 265–274, 1999._
[16] A. Saxena, M. Sun, and A. Y. Ng, “Make3D: Learning 3D scene
structure from a single still image,” IEEE transactions on pattern
_analysis and machine intelligence, vol. 31, no. 5, pp. 824–840, 2009._
[17] A. Saxena, J. Schulte, and A. Y. Ng, “Depth estimation using monocular and stereo cues.,” in IJCAI, vol. 7, 2007.
[18] D. Eigen, C. Puhrsch, and R. Fergus, “Depth map prediction from a
single image using a multi-scale deep network,” in Advances in Neural
_Information Processing Systems, pp. 2366–2374, 2014._
[19] F. Liu, C. Shen, and G. Lin, “Deep convolutional neural fields for depth
estimation from a single image,” in IEEE Conference on Computer
_Vision and Pattern Recognition, pp. 5162–5170, 2015._
[20] J. Li, R. Klein, and A. Yao, “Learning fine-scaled depth maps from
single rgb images,” arXiv preprint arXiv:1607.00730, 2016.
[21] A. Chakrabarti, J. Shao, and G. Shakhnarovich, “Depth from a single
image by harmonizing overcomplete local network predictions,” in
_Advances in Neural Information Processing Systems, pp. 2658–2666,_
2016.
[22] Y. Cao, Z. Wu, and C. Shen, “Estimating depth from monocular images
as classification using deep fully convolutional residual networks,”
_arXiv preprint arXiv:1605.02305, 2016._
[23] C. Godard, O. Mac Aodha, and G. J. Brostow, “Unsupervised
monocular depth estimation with left-right consistency,” arXiv preprint
_arXiv:1609.03677, 2016._
[24] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification
with deep convolutional neural networks,” in Advances in neural
_information processing systems, pp. 1097–1105, 2012._
[25] K. Simonyan and A. Zisserman, “Very deep convolutional networks
for large-scale image recognition,” arXiv preprint arXiv:1409.1556,
2014.
[26] N. Silberman, D. Hoiem, P. Kohli, and R. Fergus, “Indoor segmentation and support inference from RGBD images,” in ECCV, 2012.
[27] J. Sturm, N. Engelhard, F. Endres, W. Burgard, and D. Cremers, “A
benchmark for the evaluation of RGB-D SLAM systems,” in Intelligent
_Robots and Systems (IROS), 2012 IEEE/RSJ International Conference_
_on, pp. 573–580, IEEE, 2012._
[28] A. Handa, R. A. Newcombe, A. Angeli, and A. J. Davison, “Applications of legendre-fenchel transformation to computer vision problems,”
_Department of Computing at Imperial College London. DTR11-7,_
vol. 45, 2011.
[29] D. Gutiérrez-Gómez, W. Mayol-Cuevas, and J. Guerrero, “Inverse
Depth for Accurate Photometric and Geometric Error Minimisation in
RGB-D Dense Visual Odometry,” in Robotics and Automation (ICRA),
_2015 IEEE International Conference on, pp. 83–89, IEEE, 2015._
|
{
"disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/1611.07245, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "GREEN",
"url": "https://arxiv.org/pdf/1611.07245"
}
| 2,016
|
[
"JournalArticle"
] | true
| 2016-11-22T00:00:00
|
[
{
"paperId": "4463dc4a32b948f0230f3b782cbfecaf1c9e5b1d",
"title": "Unsupervised Monocular Depth Estimation with Left-Right Consistency"
},
{
"paperId": "846b35459dd57bc3aee85eb209e97fc180c0cbec",
"title": "A Two-Streamed Network for Estimating Fine-Scaled Depth Maps from Single RGB Images"
},
{
"paperId": "e41812aa74d7d2d8d7089d74900adbda51c6b131",
"title": "Learning Fine-Scaled Depth Maps from Single RGB Images"
},
{
"paperId": "d27c6800894441054685ffdf896fab15cebf5fd6",
"title": "Estimating Depth From Monocular Images as Classification Using Deep Fully Convolutional Residual Networks"
},
{
"paperId": "13f85dae967a9c80f408f599086013ef9dbd1ad8",
"title": "Depth from a Single Image by Harmonizing Overcomplete Local Network Predictions"
},
{
"paperId": "3c52cd2b6f7a375b881f97a6f2c2925c79c9747d",
"title": "Incorporating scene priors to dense monocular mapping"
},
{
"paperId": "4bceb95e022bcea85178c29a8be807042502533c",
"title": "Too much TV is bad: Dense reconstruction from sparse laser with non-convex regularisation"
},
{
"paperId": "057b05c3ac3fc34e11994503cbbfdb01ad16d7f7",
"title": "Dense mono reconstruction: Living with the pain of the plain plane"
},
{
"paperId": "40c9d47bb0fdb39df6afb5fbdfda876f71bb4ca5",
"title": "Inverse depth for accurate photometric and geometric error minimisation in RGB-D dense visual odometry"
},
{
"paperId": "6933c70c747e6a8103f68f1a1db80185401d537b",
"title": "ORB-SLAM: A Versatile and Accurate Monocular SLAM System"
},
{
"paperId": "3f25b3ddef8626ace7aa0865a1a9e3dad1f23fb6",
"title": "Deep convolutional neural fields for depth estimation from a single image"
},
{
"paperId": "cb3a2ddcf305e2ec0f6b94af13d1e631ed261bdc",
"title": "Predicting Depth, Surface Normals and Semantic Labels with a Common Multi-scale Convolutional Architecture"
},
{
"paperId": "c13cb6dfd26a1b545d50d05b52c99eb87b1c82b2",
"title": "LSD-SLAM: Large-Scale Direct Monocular SLAM"
},
{
"paperId": "eb42cf88027de515750f230b23b1a057dc782108",
"title": "Very Deep Convolutional Networks for Large-Scale Image Recognition"
},
{
"paperId": "d7c8ebb5ace5163e303a22e83308846fc167dad9",
"title": "Manhattan and Piecewise-Planar Constraints for Dense Monocular Mapping"
},
{
"paperId": "dd2cf76ae78a3262a094ac865aa9f60c55472c5d",
"title": "Depth Map Prediction from a Single Image using a Multi-Scale Deep Network"
},
{
"paperId": "9fd9cf375033e38262353363b593befbd66426c7",
"title": "Using superpixels in monocular SLAM"
},
{
"paperId": "c855d8b75090e4d4aadd6ce936046327774470c9",
"title": "Data-Driven 3D Primitives for Single Image Understanding"
},
{
"paperId": "5eb4c55740165defacf08329beaae5314d7fbfe6",
"title": "A benchmark for the evaluation of RGB-D SLAM systems"
},
{
"paperId": "abd1c342495432171beb7ca8fd9551ef13cbd0ff",
"title": "ImageNet classification with deep convolutional neural networks"
},
{
"paperId": "c1994ba5946456fc70948c549daf62363f13fa2d",
"title": "Indoor Segmentation and Support Inference from RGBD Images"
},
{
"paperId": "7633c7470819061477433fdae15c64c8b49a758b",
"title": "DTAM: Dense tracking and mapping in real-time"
},
{
"paperId": "88b4037bd43133dcc39329dec1866e5071e6d95b",
"title": "Online 3D reconstruction using convex optimization"
},
{
"paperId": "e577d7f4577e904d41d22bc1edc29d8264b23f66",
"title": "Real-Time Dense Geometry from a Handheld Camera"
},
{
"paperId": "451a06626afe8dd70099c7dfec86de7af909a062",
"title": "Recovering the spatial layout of cluttered rooms"
},
{
"paperId": "41bcea1bec0f0b0e9e2cb4894bf6bfda091a4eae",
"title": "Make3D: Learning 3D Scene Structure from a Single Still Image"
},
{
"paperId": "49531103099c8d17ea34eb09433688e84de4f35f",
"title": "Depth Estimation Using Monocular and Stereo Cues"
},
{
"paperId": "62bdf4743c5ad00a32790a18dd593be0813e571e",
"title": "A Method for Interactive 3D Reconstruction of Piecewise Planar Objects from Single Images"
},
{
"paperId": "7c0f08f0d00d3697f40f461902dff2906fa25e57",
"title": "An Investigation of Methods for Determining Depth from Focus"
},
{
"paperId": null,
"title": "Applications of Legendre-Fenchel transformation to computer vision problems"
}
] | 11,339
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Engineering",
"source": "s2-fos-model"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/00de0b220e561f58bda5643968bf05822b7f896d
|
[
"Computer Science"
] | 0.914832
|
Experimental Evaluation of Transmitted Signal Distortion Caused by Power Allocation in Inter-Cell Interference Coordination Techniques for LTE/LTE-A and 5G Systems
|
00de0b220e561f58bda5643968bf05822b7f896d
|
IEEE Access
|
[
{
"authorId": "1397187109",
"name": "Á. Hernández-Solana"
},
{
"authorId": "1405613007",
"name": "Paloma García-Dúcar"
},
{
"authorId": "2136979",
"name": "A. Valdovinos"
},
{
"authorId": "2164974275",
"name": "Juan Ernesto García"
},
{
"authorId": "134151207",
"name": "J. de Mingo"
},
{
"authorId": "2231623",
"name": "P. L. Carro"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": [
"http://ieeexplore.ieee.org/servlet/opac?punumber=6287639"
],
"id": "2633f5b2-c15c-49fe-80f5-07523e770c26",
"issn": "2169-3536",
"name": "IEEE Access",
"type": "journal",
"url": "http://www.ieee.org/publications_standards/publications/ieee_access.html"
}
|
Error vector magnitude (EVM) and out-of-band emissions are key metrics for evaluating in-band and out-band distortions introduced by all potential non-idealities in the transmitters of wireless systems. As EVM is a measure of the quality of the modulated signal/symbols, LTE/LTE-A and 5G systems specify mandatory EVM requirements in transmission for each modulation scheme. This paper analyzes the influence of the mandatory satisfaction of EVM requirements on the design of radio resource management strategies (RRM) (link adaptation, inter-cell interference coordination), specifically in the downlink (DL). EVM depends on the non-idealities of the transmitter implementations, on the allocated power variations between the subcarriers and on the selected modulations. In the DL of LTE, link adaptation is usually executed by adaptive modulation and coding (AMC) instead of power control, but some flexibility in power allocation remains being used. LTE specifies some limits in the power dynamic ranges depending on the allocated modulation, which ensures the satisfaction of EVM requirements. However, the required recommendations concerning the allowed power dynamic range when inter-cell interference coordination (ICIC) and enhanced ICIC (eICIC) mechanisms (through power coordination) are out of specification, even though the EVM performance should be known to obtain the maximum benefit of these strategies. We perform an experimental characterization of the EVM in the DL under real and widely known ICIC implementation schemes. These studies demonstrate that an accurate analysis of EVM is required. It allows a better adjustment of the design parameters of these strategies, and also allows the redefinition of the main criteria to be considered in the implementation of the scheduler/link adaptation concerning the allocable modulation coding scheme (MCS) in each resource block.
|
Received April 9, 2022, accepted April 24, 2022, date of publication April 28, 2022, date of current version May 9, 2022.
_Digital Object Identifier 10.1109/ACCESS.2022.3170910_
# Experimental Evaluation of Transmitted Signal Distortion Caused by Power Allocation in Inter-Cell Interference Coordination Techniques for LTE/LTE-A and 5G Systems
ÁNGELA HERNÁNDEZ-SOLANA, PALOMA GARCÍA-DÚCAR, ANTONIO VALDOVINOS,
JUAN ERNESTO GARCÍA, JESÚS DE MINGO, AND PEDRO LUIS CARRO
Aragon Institute for Engineering Research (I3A), University of Zaragoza, 50018 Zaragoza, Spain
Corresponding author: Ángela Hernández-Solana (anhersol@unizar.es)
This work was supported in part by the Spanish Ministry of Science with European Regional Development Funds (ERDF) under the
projects RTI2018-099063-B-I00 and RTI2018-095684-B-I00, and in part by the Government of Aragon (Reference Group T31_20R).
**ABSTRACT Error vector magnitude (EVM) and out-of-band emissions are key metrics for evaluating**
in-band and out-band distortions introduced by all potential non-idealities in the transmitters of wireless
systems. As EVM is a measure of the quality of the modulated signal/symbols, LTE/LTE-A and 5G systems
specify mandatory EVM requirements in transmission for each modulation scheme. This paper analyzes the
influence of the mandatory satisfaction of EVM requirements on the design of radio resource management
strategies (RRM) (link adaptation, inter-cell interference coordination), specifically in the downlink (DL).
EVM depends on the non-idealities of the transmitter implementations, on the allocated power variations
between the subcarriers and on the selected modulations. In the DL of LTE, link adaptation is usually
executed by adaptive modulation and coding (AMC) instead of power control, but some flexibility in
power allocation remains being used. LTE specifies some limits in the power dynamic ranges depending
on the allocated modulation, which ensures the satisfaction of EVM requirements. However, the required
recommendations concerning the allowed power dynamic range when inter-cell interference coordination (ICIC) and enhanced ICIC (eICIC) mechanisms (through power coordination) are out of specification,
even though the EVM performance should be known to obtain the maximum benefit of these strategies.
We perform an experimental characterization of the EVM in the DL under real and widely known ICIC
implementation schemes. These studies demonstrate that an accurate analysis of EVM is required. It allows
a better adjustment of the design parameters of these strategies, and also allows the redefinition of the main
criteria to be considered in the implementation of the scheduler/link adaptation concerning the allocable
modulation coding scheme (MCS) in each resource block.
**INDEX TERMS EVM, inter-cell interference coordination, LTE, LTE-A, 5G.**
**I. INTRODUCTION**
The error vector magnitude (EVM) and the out-of-band emissions resulting from the modulation process are the habitual figures of merit adopted by the 4G/5G (i.e., long term
evolution –LTE–) standards for evaluating the in-band and out-of-band distortions introduced in the transmitter of a communication system and, thus, the signal accuracy of orthogonal
frequency division multiple access (OFDMA) transmissions.
These distortions limit the signal-to-noise ratio (SNR) in
transmission. EVM is the measure of the difference between
the ideal modulated symbols and the measured symbols after
the equalization (this difference is called the error vector).
In order to exploit the full benefit of the modulation, when
base stations (named evolved Node B – eNB– in 4G) perform
radio resource management (RRM) strategies (i.e., scheduling, link adaptation, and inter-cell interference management),
it is important that eNBs take into account not only the
target block error rate (BLER), linked to the expected signal-to-interference-plus-noise ratio (SINR) in reception (derived
from channel state information –CSI– reported by user
equipment –UE–), but also the influence of EVM on the SNR
in transmission, in order to guarantee that the SNR does not
degrade too much at the transmitter. In this way, from release
8 to the last specification of 4G/5G standards, specifications
have set mandatory and specific EVM requirements for each
modulation scheme (QPSK, 16QAM, etc.). Because EVM
depends, in addition to several other factors, on the difference in power allocated per subcarrier, the satisfaction of EVM
requirements must be considered when selecting the modulation and coding scheme (MCS) and the transmission power
per subcarrier as part of interference management strategies.
In fact, this may severely impact the definition of this type
of RRM strategies. Specifications set limits linking power
and modulation allocation in order to meet mandatory EVM
requirements. These limits are defined as the dynamic power
range. However, although the required EVM must be fulfilled for all transmit configurations, it is a key aspect that is
absent in almost all studies concerning RRM management,
which are generally decoupled from radio frequency (RF)
transmission analysis [1]–[14]. There are only a very limited number of contributions in which EVM requirements
are considered [15]–[18], showing that the performance of
the ideal implementations of RRM schemes is severely
reduced. However, they do not explicitly analyze EVM. They
assume that the dynamic power range defined in the specifications is a mandatory requirement in any scenario to meet
the EVM. Nevertheless, we will see that the dynamic power
range defined in LTE unnecessarily limits the flexibility of
link adaptation in the inter-cell interference (ICI) coordination (ICIC) design, when these ICIC schemes are based on
power coordination. The purpose of this study is to analyze
the influence of mandatory satisfaction of EVM requirements
on RRM design (related to link adaptation and power allocation constraints), specifically when ICIC mechanisms are
applied in the downlink (DL). To the best of our knowledge,
this aspect has not been previously analyzed in the literature.
QoS in LTE/LTE-A and 5G evolutions depends on RRM
strategies, including ICIC and resource and power allocation,
operating in an interrelated fashion. By applying rules and
restrictions on resource assignments in a coordinated manner between cells concerning the allocable time/frequency
resources and the power constraints attached to them, LTE
reduces ICI and ensures QoS, particularly at the cell edge.
These schemes are required in both the DL and
uplink (UL), but they present differences in the rate
(MCS selection) and/or power adaptation according to the
link channel conditions and data user requirements.
Power management takes place in both the DL and UL,
although the approaches are clearly differentiated. Both conventional and fractional power controls (FPCs) are applied
in the UL [19]. The first case is a subcase of the second.
Used to limit ICI and to reduce UE power consumption, the
aim of UL power control is to fully or partially compensate
(when FPC is applied) the path loss to satisfy the SINR
requirements of a selected MCS. As a UE should adopt
the same MCS and power at all allocated subcarriers, the
satisfaction of EVM requirements in transmission (which are
the same as in DL [20]) does not translate into restrictions for
RRM implementations. EVM depends only on the nonlinearities of the real transmitter chain.
Contrary to the UL, in the DL of
LTE/LTE-A systems, to better control interference variations
at the UEs in inter-cell scenarios, the first approach and
overall goal of downlink power allocation is to budget a
constant power spectral density (PSD) for all occupied frequency subcarriers or resource elements (REs) over large
time periods. Thus, link adaptation is executed by adaptive
modulation and coding (AMC) selection instead of path-loss compensation through power adaptation [19]. A priori,
this approach does not preclude the use of ICIC schemes,
where frequency coordination aims to reduce ICI by defining
different frequency allocation patterns for UEs located in
different areas of the cell (for instance, the inner zone and the
cell edge) and (in some implementations) by defining different power levels (PSD) on each frequency partition (power
coordination). Fig. 1.b1 and Fig. 1.b2 in conjunction with
Fig. 1.a illustrate examples of the well-known soft frequency
reuse (SFR) and fractional frequency reuse (FFR) schemes,
which will be described later. In fact, ICIC derived from
FFR and SFR schemes continues to be an important issue to
facilitate spatial reuse in both DL [10]–[13] and UL [14] of
5G networks.
Nevertheless, the link quality is not limited only by noise
and interference at the receiver, which are the effects considered in almost all the RRM studies. Because of the imperfections of the real transmission chains and because the
base station (i.e., evolved Node B –eNB- in LTE) transmits
simultaneously to several UEs with different MCS and power
levels according to the selected frequency partitions, there
are distortion effects that limit the SNR in transmission and,
as a result, the maximum SINR achievable in reception. The
analysis of these effects, characterized by the EVM measure,
must be considered. As mentioned above, according to specifications, a maximum EVM per each modulation level must
be guaranteed at the transmitter output. With this aim, from
release 8, specifications have set and maintained some limits
on the difference between the power of an RE and the average
RE power for an eNB at the maximum output power (defined
as the dynamic power range [21]) to achieve specific EVM
requirements for each modulation scheme (QPSK, 16QAM,
64 QAM, 256 QAM) [21]. However, there are two drawbacks
to overcome.
First, almost all the RRM proposals in the state of the
art exclude EVM effects and show the benefits of higher
power ranges for each modulation order [15], [6], [7]. Their results
are obtained under idealized conditions that do not match the
actual operation of RF transmitters. Meeting the dynamic power
range constraints limits the flexibility of using modulations in
some power ranges, and drastically degrades the performance
**FIGURE 1. Inter-cell interference coordination schemes.**
**FIGURE 2. Resource allocation and downlink power allocation in LTE/LTE-A.**
of ICIC schemes [15]–[18] compared with the ideal implementation. Second, these dynamic power range
restrictions should be interpreted with caution, because they
were defined and suggested under specific conditions
and simplified assumptions: ICIC effects were not
included in the studies conducted for the specification.
The EVM depends on many factors related to the implementation of real transmitters. More flexibility in power allocation is possible while meeting EVM requirements, which
are considered mandatory. The actual EVM values may be
significantly different from those assumed when the standard
limits are stated. Contrary to the simplicity of the assumptions
used to define the specification, when different power levels are defined in the transmission spectrum mask, different
EVM levels can be obtained depending on the location of
the RE and not only on the difference of each power level
with respect to the average power. These results are useful
for improving the resource allocation.
Thus, the objectives of this work are:
1) To emphasize that the proposal and evaluation of RRM
strategies, specifically ICIC strategies, need to include
mandatory EVM requirements. RRM evaluations that
are agnostic of EVM requirements do not properly estimate the actual performance of the proposed schemes.
In this context, it is important to note that ICIC and
enhanced ICIC (eICIC) based on power coordination
remain important ways to facilitate spatial reuse in both
downlink and, even, uplink, not only in 4G but also
in 5G.
2) To characterize the EVM in the DL transmitter of real
RF subsystems, depending on the distribution of modulation and power among subcarriers linked to ICIC
and eICIC implementations. The aim is to derive some
general performance patterns that allow improving the
implementation of these strategies. The objective is to
obtain information to be used in the redefinition of the
restrictions that must be applied to achieve a better use
of resources while meeting the QoS.
First, we concisely review the LTE resource allocation
basis and constraints in terms of power allocation defined
in the specification while reviewing the expected impact
of EVM. Then, the motivation for using some well-known
ICIC and eICIC mechanisms for homogeneous and heterogeneous (HetNet) deployment scenarios is discussed,
and the conditioning factors that arise in terms of satisfaction
with the EVM requirements are analyzed. Finally, we evaluate the effect of power allocation on the EVM measured
over a standard-compliant LTE downlink signal in a real RF
subsystem.
**II. RELATED WORK**
_A. DL POWER ALLOCATION ACCORDING_
_TO SPECIFICATIONS_
As stated above, conventional power control does not apply
to DL, which considers a constant power spectral density for
all occupied REs over large time periods and link adaptation
through MCS selection.
However, in accordance with this goal, owing to their particular requirements, cell-specific reference signals (CRSs),
which are embedded into the overall system bandwidth at
certain REs, are transmitted with constant power through
all DL system bandwidth and across all subframes. CRSs
are involved in several of the most important procedures at
the air interface: cell search and initial acquisition, downlink
channel quality, reference signal received power (RSRP),
reference signal received quality (RSRQ) measurements, and
cell (re)selection and measurements for handover support.
Therefore, their power level must be constant and known by
the UEs, being broadcast in mandatory system information
block 2 (SIB2).
DL power management determines the energy (power) per
resource element (EPRE). The reference signal (RS) EPRE
(RS-EPRE) is easily obtained by dividing the maximum
allowed output power $P_{max}^{(p)}$ per antenna port $p$ in the carrier
frequency by the number of REs in the entire bandwidth
(see Fig. 2). That is, with the physical resource block (PRB)
being the smallest resource unit that can be scheduled for
a UE (composed of $N_{SC}^{RB} = 12$ subcarriers in the frequency
domain with $\Delta f = 15$ kHz subcarrier spacing), the nominal
EPRE is obtained by (1):

$$EPRE = E_{max,\,nom}^{(p)} = \frac{P_{max}}{N_{RB}^{DL} \cdot N_{SC}^{RB}}, \qquad (1)$$

where $N_{RB}^{DL}$ is the number of PRBs in the downlink bandwidth
configuration.
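As a worked example of (1) in the logarithmic domain, the sketch below computes the nominal EPRE for an illustrative 10 MHz carrier (46 dBm, 50 PRBs); these numbers are examples, not values taken from the specification text above.

```python
import math

def nominal_epre_dbm(p_max_dbm=46.0, n_rb_dl=50, n_sc_rb=12):
    """Eq. (1) in dB: spread the maximum output power evenly over all REs."""
    return p_max_dbm - 10.0 * math.log10(n_rb_dl * n_sc_rb)

# nominal_epre_dbm() -> about 18.2 dBm per RE for 46 dBm over 50 x 12 subcarriers
```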
When the RS-EPRE is defined, this parameter is used as
a reference to determine the DL EPRE of other DL physical signal components or channels (synchronization signals,
broadcast channel –PBCH–, DL control channel –PDCCH–,
DL shared channel–PDSCH–, control format indicator channel –PCFICH– and physical hybrid automatic repeat-request
indicator channel –PHICH–), whose EPRE (i.e., PDSCH
EPRE) is set relative to this value.
Thus, the specification of LTE allows DL power management to allocate different PDSCH EPRE levels. Nevertheless,
the ratio of the PDSCH EPRE to cell-specific RS-EPRE
among the PDSCH REs (not applicable to PDSCH REs with
zero EPRE) should be maintained for a specific condition.
This ratio, which depends on the OFDM symbol, is denoted
by ρB if the PDSCH RE is on a symbol that carries an RS
(symbol indices 0 and 4 of each slot) or by ρA otherwise
(symbol indices 1, 2, 3, 5 and 6) (see Fig. 2a). In our
analysis, ρA/ρB is set to 1.
In addition, the RE power control dynamic range, which
is defined as the difference between the power of an RE and
the average RE power for an eNB at the maximum output
power for a specified reference condition (i.e., the threshold of
ρA and ρB), is limited for each modulation scheme used in the
PDSCH, according to Table 1 defined in the specification [21]
(see Fig. 2.b). In fact, under some specific UE configuration
conditions, the allowed ratio ρA in OFDM symbols that do
not carry an RS is limited to eight values (ρA is equal to the PA
parameter [22]) ranging from −6 dB to +3 dB: {−6, −4.77,
−3, −1.77, 0, 1, 2, 3}. Note that in all cases the output
power per carrier should always be less than or equal to the
maximum output power of the eNB. This could be considered an additional limitation, but it refers only to signaling.
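As a sketch of how a scheduler could gate modulation selection on these limits, the check below uses the per-modulation lower offsets discussed in this paper (−6 dB for QPSK, −3 dB for 16QAM, no reduction for 64QAM); the table is illustrative rather than the normative one in [21].

```python
# Illustrative lower limits (dB, relative to the nominal EPRE), not the normative Table 1.
MIN_OFFSET_DB = {"QPSK": -6.0, "16QAM": -3.0, "64QAM": 0.0}

def modulation_allowed(modulation: str, offset_db: float) -> bool:
    """True if the per-RE power offset keeps the modulation inside its dynamic range."""
    return offset_db >= MIN_OFFSET_DB[modulation]

# modulation_allowed("64QAM", -3.0) -> False: 64QAM cannot be used in reduced-power partitions
```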
In fact, in release 10 [19], [23], the relative narrowband
TX power (RNTP) indicator was introduced to be exchanged between
eNBs through the X2 interface to support dynamic ICIC. The
RNTP bit map provides an accurate indication of the power
allocation status of each PRB ($RNTP(n_{PRB})$ with $n_{PRB} =
0, \ldots, N_{RB}^{DL} - 1$), taking one of the following values: $\{-\infty,
-11, -10, -9, -8, -7, -6, -5, -4, -3, -2, -1, 0, 1,
2, 3\}$. This power status is defined as the ratio between the
maximum intended EPRE of the UE-specific PDSCH REs in
OFDM symbols that do not contain an RS and the nominal
EPRE. Therefore, greater flexibility is considered.
**TABLE 1. E-UTRA BS RE power control dynamic range (Table 6.3.1.1-1**
in [21]).
**TABLE 2. EVM requirements for E-UTRA carrier (Table 6.5.2-1 in [21]).**
In any case, as stated above, power thresholds are set
because the different modulations for the DL (QPSK,
16QAM, 64QAM, and 256QAM) require different limits of
EVM to exploit the full benefit of the modulation, and the
power control range affects EVM. According to the specifications, the EVM for each modulation of the PDSCH is
better than the values listed in Table 2. The EVM is defined
according to (2) as the square root of the ratio of the mean
error vector power (difference between the ideal modulated
symbols and the measured symbols after equalization) to the
mean reference power expressed as a percentage. The EVM
measurement shall be performed over all allocated resource
blocks and DL subframes within at least 10ms measurement
periods. The basic unit of EVM measurement is defined
over one subframe (1ms) in the time domain and NBW[RB] [=]
_NSC[RB]_ [=][ 12 subcarriers (180KHz) as defined in (][2][) (annex E]
in [21])):
these requirements make the application of many ICIC and
eICIC mechanisms difficult (for instance, 64QAM is not
allowed in reduced power partitions), a more precise analysis
of EVM for ICIC (out of specification) is required to specify
enhanced requirements for joint power allocation and modulation selection.
First, as stated above, EVM depends on a number of
factors, including thermal noise in various parts of the
transmitter chain, precoding, PA linearity, and predistortion characteristics. They are difficult to quantify theoretically, because they depend on vendor implementation.
However, apart from the absolute values, general patterns are
identified.
1) PA imperfections are the main contributors to EVM,
causing a certain loss of signal orthogonality and, thus,
a type of in-band interference. This means that even
if the total output power does not significantly change
by reducing the power of some PRBs while other
PRBs are power boosted (as in ICI schemes described
next), more degradation is expected to occur when
the power is reduced on the selected PRBs. This is
because they are more affected by the in-band interference caused by power-boosted PRBs. Therefore, the
interference, and thus, this degradation, could not be
uniform in all PRBs. This depends on the distance to
the power-boosted PRB and the ratio between their
respective power levels. The aim of this study is to
analyze the EVM degradation depending on the PRB
position to improve radio resource allocation.
2) It is a straightforward conclusion that further power
reduction beyond the maximum specified power
dynamic range can be considered based on the vendor
implementation. The EVM could be better than 7.5%
for the working point of PDSCH-EPRE RS-EPRE,
=
and as a result, a power dynamic range could also be
defined as the 64QAM meeting EVM requirements.
However, in any case, the EVM impact analysis of
setting different PDSCH EPRE levels on several PRB
partitions on the same OFDM symbol is still necessary.
3) When eICIC mechanisms are applied (reviewed in
Section B), normal and low-power (LP) subframes are
distributed within a frame (10ms) as shown in Fig. 3.
In this case, when a power reduction is applied to all
PRBs in an LP subframe, although the channel powers
of the cell RS (CRS) and PBCH are maintained to avoid
time-variant CRS transmission power fluctuations, the
total output power is reduced, and low EVM degradation is expected in these LP subframes compared with
subframes when normal operation occurs. It is expected
that the PA operates within the saturation limits for normal subframes and applies a power back-off at the PA.
In this case, because EVM measurements should be
performed within at least 10ms measurements, EVM
variations between normal and low-power subframes
must be considered in order to ensure EVM requirements in all subframes.
� |Z [′](t, f ) − _I_ (t, f )|[2]
_f ∈F(t)_ _,_ (2)
� � |I (t, f )|[2]
_t∈T_ _f ∈F(t)_
_EVM_
=
�
�
�
�
�
�
�
_t∈T_
where T is the set of symbols with the considered modulation
scheme being active within the subframe, F(t) is the set of
subcarriers within the NBW[RB] [subcarriers with the considered]
modulation scheme being active in symbol t, I (t, f ) is the
ideal signal reconstructed by the measurement equipment
in accordance with relevant transmission (TX) models, and
_Z_ [′](t, f ) is the modified signal under test.
The method for measuring EVM is quite involved (annex E in [21]), but a simple approximation assumes that the error vector resembles white noise. In this case, EVM can be converted to SNR using the following formula: SNR = 10 × log10(1/EVM²). Considering this, the limits in Table 1 were set to meet the EVM requirements defined in the specifications (Table 2) [21]. These values were obtained through simulations to ensure that the system performance was not significantly degraded. Specifically, the dynamic power range was defined for these minimum performance requirements. A range of 7.5%–8% EVM was proposed as a working assumption for 64QAM modulated PRBs when PDSCH-EPRE = RS-EPRE. Thus, the best SNR achievable under this condition is 22 dB. To define the power range limits, they consider that although there are many causes of EVM, power amplifier (PA) nonlinearities, specifically clipping noise, are the major contributors to EVM. To make the PA implementation efficient, the peak-to-average power ratio (PAPR) of the signal was reduced by clipping the highest peaks. Thus, the signals were slightly modified, making this an additional noise source. The power range was estimated by assuming that if the output power does not change significantly, the clipping noise remains nearly constant and at similar levels for all the PRBs [24]−[26]. Under this assumption, an RE power reduction will lead to a reduced SNR in transmission and a higher EVM, quantified as up to 12.5% (SNR = 18 dB) for 16QAM if the power is reduced by 4 dB and up to 17.5% (SNR = 15 dB) for QPSK with 7 dB of power reduction. By applying a margin, they arrived at −3 dB and −6 dB, as defined in Table 1.
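Under this white-noise approximation, the EVM/SNR working points quoted above can be checked directly (a short sketch; the helper name is ours):

```python
import numpy as np

def evm_to_snr_db(evm):
    """White-noise approximation: SNR [dB] = 10*log10(1/EVM^2)."""
    return 10 * np.log10(1.0 / evm ** 2)

for mod, evm in [("64QAM", 0.08), ("16QAM", 0.125), ("QPSK", 0.175)]:
    print(f"{mod}: EVM {evm:.1%} -> SNR = {evm_to_snr_db(evm):.1f} dB")
# 64QAM 8.0% -> ~21.9 dB; 16QAM 12.5% -> ~18.1 dB; QPSK 17.5% -> ~15.1 dB
```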
However, these assumptions are not very close to the performance of actual implementations. In addition, because EVM depends on the vendor implementation, ICIC scenarios were not covered by the studies that set the dynamic power range in the specifications.
**FIGURE 3. Enhanced Inter-Cell Interference Coordination (eICIC).**
_B. DL POWER MANAGEMENT AND INTER-CELL_
_INTERFERENCE COORDINATION IMPLEMENTATION_
A deeper EVM analysis for ICIC and eICIC may seem unnecessary given the evolution of 4G / 5G systems. However,
nothing could be further from the truth.
There are diverse mechanisms to combat inter-cell interference in LTE, including ICI cancellation (IC), ICIC, eICIC
mechanisms for the HetNet, coordinated multipoint (CoMP),
and coordinated beamforming. A good survey of ICIC techniques can be found in [1]–[4], as well as related radio
resource strategies [5] in both standard and heterogeneous
LTE/LTE-A networks, along with different performance
assessments. To illustrate the impact of EVM on ICIC, it is
sufficient to consider two main and well-known categories
of ICIC: fractional frequency reuse (FFR) and soft frequency
reuse (SFR). FFR and SFR are static ICIC techniques that
require interventions from mobile network operators to adjust
the PRB and power distribution between cell zones according
to UE distribution and quality of service demands. Although
simple, these approaches are preferred by many operators,
including public safety operators, because of their compatibility with the standard, their inherent ease of implementation, and the fact that they require little or no inter-cell
communication. More dynamic implementations are possible by applying dynamic coordination between eNBs, for instance, by exchanging the relative narrowband transmit power (RNTP) indicator through the X2 interface [2], [3]. In any case, from the early stages of the 4G
definition to the present 5G context, a large number of works
have studied ICI management based on these low-complexity
schemes, resulting in the proposal of many derived variants
for 4G and 5G [1]–[5], [8], [10]–[14]. First deployments of
5G networks have been made using OFDMA and proposals for new multiple access techniques are also based on
OFDMA, thus, ICIC techniques derived from FFR and SFR
remain an important issue to facilitate spatial reuse in both
DL [10]–[13] and UL [14].
The basic principle behind these schemes is the division of the available PRBs in the carrier spectrum into two
partitions/sub-bands: one intended for mobile users (UE)
found in the inner part of the cell and the other that is
reserved for users found in the outer or cell-edge area (cell-edge users). Subsequently, several degrees of reuse factors
for the inner and outer partitions are applied in a multicell
system. In FFR-based approaches, as illustrated in Fig. 1.a
with Fig. 1.b2, cell-inner users use the same sub-band in every
cell, but the outer sub-band is usually divided into several
sub-bands (usually three), and each cell is given a sub-band
that is orthogonal to the outer sub-bands in neighboring cells.
In this case, the inner region does not share any spectrum with
the cell-edge or adjacent cell-edge regions. This strict policy
reduces interference on both inner-and cell-edge users but
may underutilize the available frequency resources. On the
other hand, in SFR-based approaches, the entire bandwidth
can be utilized in every cell, and the effective reuse can be
adjusted by power coordination between the PRBs used in
the inner and outer sub-bands, as illustrated in Fig. 1.a with
Fig. 1.b1. Cell-inner users can have access to the cell-edge
sub-bands selected by the neighboring cells, but with a lower
power level to reduce interference to the neighboring cells.
Thus, the SFR achieves higher spectrum efficiency, but there
is a higher ICI.
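To make the two schemes concrete, the sketch below builds per-cell PRB power masks for FFR and SFR on a 25-PRB carrier; the partition sizes, sector labels, and the Δ value are illustrative assumptions rather than values from the standard:

```python
import numpy as np

N_PRB, DELTA_DB = 25, 6.0
inner = np.arange(0, 13)                                  # shared inner sub-band
edge = {"A": np.arange(13, 17), "B": np.arange(17, 21), "C": np.arange(21, 25)}

def power_mask_db(cell, scheme):
    """Per-PRB PDSCH power offsets (dB) for one cell; -inf marks unused PRBs."""
    mask = np.full(N_PRB, -np.inf)
    if scheme == "FFR":          # inner sub-band plus the cell's own edge band
        mask[inner] = 0.0
        mask[edge[cell]] = 0.0   # optionally power-boosted at the cell edge
    elif scheme == "SFR":        # whole band used; own edge band boosted by Δ
        mask[:] = 0.0
        mask[edge[cell]] = DELTA_DB
    return mask

print(power_mask_db("A", "SFR"))
```

In FFR a cell simply leaves the neighbors' edge PRBs unused, whereas in SFR it keeps them at the lower (inner) power level, which is what produces the Δ power jump analyzed later.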
Despite these differences, in all ICIC schemes, there
is a common set of basic parameters that must be specified, whose adjustment and optimization have a severe
impact on the performance of these schemes [1], [2],
[6]–[9]:
1) The set and size of the frequency partitions defined in the PDSCH (e.g., we can identify them as $N_{RB}^{inner}$ and $N_{RB}^{edge}$).
2) The power level per RE for each frequency partition (e.g., $EPRE_{RB}^{inner}$ and $EPRE_{RB}^{outer}$) and, thus, the power range between them (e.g., $\Delta = EPRE_{RB}^{outer} - EPRE_{RB}^{inner}$). Note that in partial FFR, different power levels are not strictly needed, although at the edge, sub-band resources can be power-boosted to reach the cell coverage limit.
3) The spatial region where the partitions are used
(e.g., cell center or cell edge), and thus, the number of
user groups or classes.
4) Threshold criterion for classifying users into groups.
Note that FFR and SFR are static/semi-static ICIC techniques; however, they always require interventions
in the network to adjust the PRB and power distribution
between partitions according to the UE distribution and
QoS demands.
Concerning the power level settings, the power in the
cell-edge sub-band(s) should be boosted, while the power
in the cell-inner sub-band should accordingly be de-boosted
to maintain a constant nominal power and maximum output
power. Equation (3) gives the expression for computing the power level settings depending on the Δ values that will be used in the evaluation.
$$P_{max}^{(p)}\,[\mathrm{mW}] = N_{RB}^{inner} \cdot 12 \cdot 10^{EPRE_{RB}^{inner}[\mathrm{dBm}]/10} + N_{RB}^{outer} \cdot 12 \cdot 10^{\left(EPRE_{RB}^{outer}[\mathrm{dBm}] + \Delta[\mathrm{dB}]\right)/10} \qquad (3)$$
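Reading (3) as the sum of the per-PRB powers of both partitions (12 REs per PRB, with the outer partition boosted by Δ), a quick numerical check can be sketched as follows; the EPRE and Δ inputs are arbitrary examples:

```python
def total_power_mw(n_inner, n_outer, epre_inner_dbm, delta_db):
    """Total PDSCH power per our reading of eq. (3): 12 REs per PRB, with the
    outer partition boosted by delta_db over the inner EPRE."""
    p_inner = n_inner * 12 * 10 ** (epre_inner_dbm / 10)
    p_outer = n_outer * 12 * 10 ** ((epre_inner_dbm + delta_db) / 10)
    return p_inner + p_outer

# 12 inner + 13 outer PRBs, inner EPRE = 15 dBm, delta = 6 dB:
print(f"P_max = {total_power_mw(12, 13, 15.0, 6.0) / 1e3:.2f} W")
```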
As anticipated, the problem is that if power control
dynamic range constraints are required to be satisfied to meet
EVM requirements, it is difficult to use or to obtain the
maximum benefit of the power control coordination schemes
in practical systems. According to Table 1, for instance,
if Δ > 0 dB, 64QAM cannot be applied to the SFR or FFR when the outer-cell power boost is also considered, which occurs in most cases. However, almost all state-of-the-art studies to date have evaluated the performance of their SFR-based or FFR-based proposals without considering these constraints [1]−[13], even though the ratio of the outer and inner power densities (Δ) is one of the most important parameters in the analysis [1], [2], [6]−[9], [11], [12]. In these studies, 64QAM is frequently used, particularly in the inner sub-bands.
As SFR-based approaches improve spectral utilization, this
technique could be particularly effective in some cases; for
instance, public safety operators are more affected by limitations on available bands and spectrum bandwidth (3–5 MHz).
They often provide support to scenarios with few UE but
high resource occupancy. However, the SFR causes more
interference to all the center and edge users when compared
with the FFR and FFR power-boosted cases. By properly
adjusting Δ (for instance, from 0 dB to 10 dB), the operator can control the tradeoff between improving the average cell throughput (low Δ values) and the cell-edge throughput (high Δ values). Recently, in [10], the authors propose a flexible
soft frequency reuse (F-SFR) that enables a self-organization
of a common SFR in the networks with an unpredictable and
dynamic topology of flying base stations. The authors propose a graph-theory-based algorithm for bandwidth allocation and transmission power setting in the context of SFR. They use a deep neural network (DNN) to significantly reduce the computational complexity. However, as in [8], where a
multi-layer SFR is proposed, or [6], [7], [9], where the ratio
between the power density in the outer cell region and in
the inner cell region is evaluated (from 0 dB to 10 dB or
12 dB), the performance is obtained without considering the
effects linked to a real transmitter implementation. Under this
assumption, a priori, the dynamic power range is not limited,
and the selection of the optimal values, in order to lead to
high performance, only depends on the interference caused by
co-channel neighbor cells, in addition to RRM (link planning
and adaptation) implementations. The same assumptions are
applied in the studies conducted in [11], where FFR and SFR
with K edge sub-bands are considered and the power ratio
ranges from 0 to 20 dB. The same occurs in [12], where
authors propose a generalized model of FFR for ultra-dense
networks. Knowing that, according to [19], the transmission power in DL should not dynamically change, an FFR
scheme extended to N (from 2 to 4) power/frequency sub-bands/groups is proposed. The power levels of each frequency group are appropriately selected to optimize the system operation while the total power consumption remains unchanged. The power ratio between groups varies from 3 to 13 dB, but the
optimization does not consider the mandatory requirements
of the specifications in order to limit the dynamic power
range (linked to power allocation and link adaptation) to
meet the error vector magnitude (EVM) requirements at the
transmitters. In fact, this is a key aspect that is absent in all
the referred studies and in almost all the studies carried out
concerning ICI management in DL. All of these studies, and
many others available in the literature, are of great interest
(i.e., some interesting reviews are available in [1]−[5]). However, a practical limitation of all the proposals is that they do
not satisfy the EVM requirements, which has a significant
impact on the optimal power allocation, dynamic scheduling,
and link adaptation. Considering the interest in ICIC based on
power coordination, it is clear that the power control dynamic
range must be re-evaluated for ICIC, considering the effects
of real transmitter implementations for several values of Δ
and MCS distributions among the bandwidth partitions.
To our knowledge, there is only a very limited number
of contributions in which satisfaction of EVM requirements
is considered (not explicitly but in some way) [15]−[18].
In these contributions, ICIC schemes are proposed and studied in HetNet scenarios, but the problem is similar.
The power control dynamic range defined in Table 1 is imposed
in eICIC for HetNet deployments, as illustrated in Fig. 3,
where low-power nodes (LPN) are deployed under macrocell coverage. In this case, cell range expansion (CRE) is
used to extend the coverage of the LPN, whereas the low
power almost blank subframe (LP-ABS) technique is used to
decrease the interference caused by the macrocell to the LPN
in the extension area (that is, to the cell-edge user in the LPN).
LP-ABS is a time-domain ICIC. Contrary to the traditional
ABS mechanism, where the macrocell stops its PDSCH
transmissions in predefined black subframes intended only
for LPN transmissions, in LP-ABS, the macro eNB maintains its data transmissions on the ABS subframes, but the
PDSCH EPRE is reduced by a factor α (where 0 ≤ α ≤ 1). Fig. 3 illustrates the LP-ABS concept with α set to, e.g., −3 dB. Similar to Δ in ICIC schemes, α is the key design parameter.
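A minimal sketch of the LP-ABS idea, assuming a hypothetical ABS bitmap over the 10 subframes of a frame and α expressed as a −3 dB reduction (only PDSCH REs are scaled; CRS and PBCH keep their nominal power):

```python
import numpy as np

alpha_db = -3.0                                        # LP-ABS PDSCH reduction
abs_bitmap = np.array([0, 1, 0, 1, 0, 0, 1, 0, 1, 0])  # hypothetical pattern
# Amplitude scaling applied to the macrocell PDSCH REs of each subframe:
pdsch_scale = np.where(abs_bitmap == 1, 10 ** (alpha_db / 20), 1.0)
print(np.round(pdsch_scale, 3))  # 0.708 on LP-ABS subframes, 1.0 otherwise
```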
The studies in [15]−[18] are conducted from the point of view of RRM design. Thus, the effects
of RF implementation are not explicitly considered, but the
authors emphasize that the maximum value of allowed power
boosting relative to the nominal value must be properly
designed to limit the dynamic range and the EVM requirements [21]. In [15], the authors perform a good analysis of
the impact of −6 dB, −9 dB, and −12 dB power reductions,
concluding that although small values are sufficient to reach
the maximum performance in macrocells applying eICIC,
larger reductions (i.e., −12 dB) make possible the application of larger cell range expansion offsets and the consequent
improvement of macrocell performance owing to higher
picocell offloading ratios. Under ideal conditions (without
considering EVM requirements), the same study shows that
although it is normal that the modulation order decreases
when the transmission power is reduced, a large percentage
of 16QAM and 64QAM transmissions are often used in low
power subframes. This is because many of these transmissions are directed to the inner UEs, are little affected by interference, and have good channel conditions. However, in [15],
when LTE specification constraints (Table 1) are considered,
the authors remark that LP-ABS subframes could only be de-boosted to −6 dB from RS-EPRE without significant specification changes, and only if the modulation is constrained
to QPSK during these de-boosted subframes. If modulations
are limited to QPSK in low-power subframes, as specified,
perceptible degradation in the macrocell performance occurs.
Increasing the dynamic range (i.e., up to −9 dB) for all the modulations yields a degradation in the EVM. The same
consideration is applied in [16], [17], limiting the power
de-boost to 6 dB. They conclude that the support of high
power reductions will only be possible at the expense of better
EVM requirements for 0 dB to meet the EVM requirements
for large power reductions. In a similar way, in [18], where a
coordinated multi-point transmission (CoMP) scheduling is
applied in combination with ICIC techniques with different
power reduction levels, the authors compare the achieved user
data rate and system throughput performance without (ideal
case) and with LTE constraints (in this case, a dynamic power
range is applied and a lower modulation order should be
used to preserve modulation accuracy). They show that when
the LTE constraints are employed (the modulation order is constrained based on the power offset level used), the user data rate and system throughput performance of the ideal case are drastically degraded regardless of the ICIC technique
used. This shows that the LTE constraints should be explicitly
considered in any practical RRM proposal. The limitation
of all these studies is that they are based on the theoretical
power control dynamic range defined in the specifications
without an explicit EVM evaluation. However, EVM depends on the vendor implementation, and ICIC was outside the scope of the studies conducted for the specifications to set the dynamic power range.
The only actual limitation is that the EVM for the different
modulation schemes in the PDSCH should be better than
the limits defined in Table 2. Some power back-offs can
be applied to the PA in LP-ABS subframes compared with
normal subframes (which impacts the CRS TX power) and
need to be evaluated [27]. Thus, the requirements listed in
Table 1 must be applied with caution in the LP-ABS. Further
power reductions compared with those considered in the
dynamic power range can be applied. However, as defined
above, the EVM measurements were performed for each
PRB within measurement periods of at least 10 ms. This implies
that the EVMs from the normal and LP-ABS are averaged.
Thus, to ensure a good system performance, the differences
between EVMs in PRBs that are not affected by power reductions must be explicitly considered. Concerning the SFR and
FFR-based schemes, the most relevant issue is to evaluate the
distribution of the EVM along the PRBs in the entire carrier
bandwidth according to Δ. Owing to the loss of orthogonality
in the transmitted signal caused by many imperfections in
the transmitter chain, EVM is expected to vary on the PRBs
of the same sub-band depending on their position relative to
the boundary between the inner and outer sub-bands. This
information can be used in resource allocation, allowing a
more precise MCS selection according to the expected EVM
on the PRB.
In summary, RRM and RF transmission studies have generally been decoupled in the literature. However, power coordination (linked to ICIC and eICIC) and link adaptation cannot be agnostic to the RF implementation. Since EVM is an essential indicator for quantifying the transmission performance of a wireless communication system, the aim of this work is to
quantify the effects in terms of EVM degradation in a real
RF subsystem depending on the modulation when power
allocation schemes linked to ICIC and eICIC are considered.
The kernel of this contribution is that there are no similar
studies in the literature. The goal is to avoid unnecessary
performance limitations when applying ICIC and eICIC in
LTE/LTE-A and 5G cells by restricting the variation range
of Δ and α and the allocable MCSs. Absolute values depend on
vendor implementation, but some generalizable results can be
obtained from a detailed study of power coordination linked
to ICIC variants derived from the SFR and/or FFR schemes.
**III. EXPERIMENTAL RESULTS**
In this work, we have carried out an experimental characterization of EVM degradation in a real RF subsystem for several
MCS allocations when different power levels are applied as
part of the ICIC and eICIC schemes proposed in 4G/5G
networks.
To evaluate the effect of power allocation on the EVM
measured over the transmitted signal, we have generated
a standard-compliant LTE downlink signal (OFDM modulation) with QPSK, 16QAM, and 64QAM modulated subcarriers and a bandwidth of BW = 5 MHz. Thus, a total of 25 PRBs are available. The test signal, generated with
MATLAB, which is used in the experiments, follows the LTE
frame structure, consisting of different physical signals and
channels, including PDSCH, PDCCH, RS, and synchronization data. However, ICIC only applies to the PDSCH; thus, the
power and modulation variation in each PRB is only carried
out in the PDSCH.
The power level and MCS can be independently selected
for each PRB, and the EVM is obtained per PRB. We evaluated different distributions for the inner and outer sub-bands
according to the patterns defined for the SFR scheme in
sectors A, B, and C (see Fig. 1.a with Fig. 1.b1). The conclusions derived from the results obtained for all the patterns are
similar; therefore, without loss of generality, we will include
those obtained for pattern C. That is, the outer sub-band is
allocated to the first PRBs. The two most relevant parameters that affect the EVM are the power ratio between the outer and inner power densities (Δ), defined in Fig. 1, and the distance from the PRB where the EVM is evaluated to the jump point between the inner and outer sub-bands. Taking this into account and without loss of generality, the results shown here correspond to a scenario in which the sizes of the outer and inner sub-bands are adjusted to be almost equal. That is, $N_{RB}^{inner} = 12$ and $N_{RB}^{outer} = 13$. Power levels are set for different Δ ratios according to (3). For example, Fig. 4 shows
an LTE frame with 25 PRBs (5 MHz bandwidth), where the
first 13 PRBs have a power level 9 dB higher than that of the
last 12 PRBs, and a 64-QAM modulation scheme is used for
all PRBs.
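The per-PRB power profile of this test signal can be expressed as a simple amplitude mask applied to the PDSCH before OFDM modulation; the sketch below mirrors the MATLAB generation step (variable names are ours):

```python
import numpy as np

N_OUTER, N_INNER, DELTA_DB = 13, 12, 9.0   # pattern C, as in Fig. 4
mask_db = np.concatenate([np.full(N_OUTER, DELTA_DB), np.zeros(N_INNER)])
prb_gain = 10 ** (mask_db / 20)            # amplitude factor per PRB
# Each PRB's PDSCH subcarriers are scaled by prb_gain[prb] (64QAM on all PRBs).
```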
**FIGURE 4. Test signal: LTE frame with 25 PRBs where the first 13 PRBs**
have a power level 9 dB higher than the last 12 PRBs (64QAM modulation).
_A. EXPERIMENTAL SETUP_
The complete experimental test bench is shown in Fig. 5 using
an equivalent block diagram. The experimental setup used in
this study is shown in Fig. 6.
The digital development platform used for the implementation of digital signal processes and the digital I/Q modulator
and demodulator consists of an FPGA Zynq-7000 AP SoC
connected to a PC that controls a high-speed analog module
with an integrated RF agile transceiver, the Analog Devices
AD9361 software defined radio (SDR). It comprises an
RF 2×2 transceiver with integrated 12-bit digital-to-analog converters (DACs) and analog-to-digital converters (ADCs), and
**FIGURE 5. Block diagram scheme stands for the experimental setup.**
**FIGURE 6. Laboratory experimental test setup.**
has a tunable channel bandwidth (from 200 kHz to 56 MHz)
and receiver (RX) gain control. It is used as a generator and
receptor for the LTE signal, as described above. The RF
carrier frequency is set at 1.815 GHz within band 3 of the
LTE standard [21], which is called DCS.
Because the signal power at the output of the board is low,
it is amplified using a low-noise amplifier (LNA) (Minicircuits ZX60-P33ULN+). The signal is then amplified with a PA (Minicircuits ZHL-4240), which has a 1-dB compression
point of 26 dBm and an approximate gain of 41.7 dB at the
test frequency. As previously stated, the most important cause
of the increased EVM level in the transmitted signal is the
nonlinear distortion caused mainly by the RF power amplifier (PA), which depends on the operating point of the RF
PA. For this reason, in this work, several tests are conducted
by varying the operating point of the RF power amplifier, and
consequently its RF output power, to evaluate the impact of
the nonlinearities of the PA on the EVM level. Fig. 7 shows
the amplitude-to-amplitude modulation (AM/AM) characteristics of the RF PA used in the experimental setup at a
linear (red dots) and nonlinear (blue dots) operating points,
corresponding to an averaged RF output power of 18.4 dBm
and 22 dBm, respectively.
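In simulation, this kind of AM/AM behavior is commonly reproduced with a soft-clipping envelope model such as Rapp's; the sketch below is purely illustrative and is not a measured model of the ZHL-4240:

```python
import numpy as np

def rapp_pa(x, gain=1.0, v_sat=1.0, p=2.0):
    """Rapp AM/AM model: nearly linear for small |x|, saturating at v_sat.
    Larger p gives a sharper knee; the parameters here are placeholders."""
    a = gain * np.abs(x)
    return gain * x / (1.0 + (a / v_sat) ** (2 * p)) ** (1.0 / (2 * p))
```

Driving such a model closer to v_sat mimics moving the operating point from the linear (red) to the nonlinear (blue) curve of Fig. 7.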
The output signal is shown on an oscilloscope (Agilent
Infiniium DSO90804A), which measures the signal power.
A splitter (Minicircuits ZAPD-2-21-3W-S+) has been added to the setup to measure the signal power before amplification. A second splitter (Minicircuits ZN2PD2-50-S+) is used to
capture the amplified output signal and send it to the feedback
**FIGURE 7. AM/AM characteristics of the PA measured in the**
experimental test setup in a linear and nonlinear scenario.
loop to be demodulated on the board. The demodulation
process is carried out on a digital platform and analyzed on
a PC with MATLAB. An attenuator of 30 dB is used at the
output of the PA to avoid damaging the oscilloscope.
Starting from this testbed, it is known, as we refer above,
that the nonlinear distortion caused mainly by the RF power
amplifier is the most important cause of increased EVM level
in the transmitted signal. Therefore, depending on the operating point of the RF PA, some type of linearization technique
may be necessary in a real implementation to reduce the nonlinear distortion produced by the RF power amplifier and thus
decrease the EVM level. Therefore, we performed an analysis
using a digital predistorter (DPD) included in the system
when the PA works in a nonlinear region. DPD processing
is performed in the FPGA Zynq-7000 AP SoC, as explained
above. In this study, a classical polynomial model based on
a truncated Volterra series is chosen for the amplifier model
and is defined as (4):
$$y(n) = \sum_{k=1}^{N} \sum_{m=0}^{M} b_{km}\, x(n-m)\, |x(n-m)|^{k-1}, \qquad (4)$$

where N is the nonlinear order, M is the memory depth, x(n) and y(n) are the baseband input and output signals, respectively, and $b_{km}$ are the model coefficients. This model allows us to obtain the DPD characteristics using an indirect learning structure, as explained in [28] and shown in Fig. 8. The predistorted output signal, u(n), is obtained from the baseband input x(n) using (5):

$$u(n) = \sum_{p=1}^{N} \sum_{m=0}^{M} a_{pm}\, x(n-m)\, |x(n-m)|^{p-1}, \qquad (5)$$
where M is the memory depth, N is the nonlinear order,
m is the memory tap delay, and $a_{pm}$ are the predistorter model coefficients. They are calculated in the first stage of the feedback path (post-distorter), whose input is v(n) and is defined as (6):

$$v(n) = \frac{y(n)}{G_{norm}}, \quad \text{where } G_{norm} = G_{linRF} = \beta G_{RF}, \qquad (6)$$

where $G_{linRF}$ is the linearized RF complex gain, β is the gain factor, and $G_{RF}$ is the complex gain without linearization, defined as $G_{RF} = \max|y(n)| / \max|x(n)|$. This factor β
is used to compensate for the gain reduction owing to the
linearization process.
DPD performance can be improved by carefully adjusting
this factor as long as the DPD model remains stable [29].
A more detailed description of this well-known method and
how to obtain the input signal matrix expression as well as
the coefficient vector can be found in [30]. This model can fit
the nonlinearity and memory effects of a power amplifier.
In this study, the DPD parameters, nonlinearity order, and
memory depth are fixed (N = 7 and M = 0) for all the
downlink RF input signal powers. This corresponds to a basic
model without memory, but the aim of this work is not to
optimize the DPD but to evaluate the improvement of the
EVM with the use of a DPD.
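A minimal sketch of the memory-polynomial fit behind (4)–(6), using an ordinary least-squares solve for the post-distorter coefficients in the indirect-learning loop; the function names are ours, and N = 7, M = 0 match the configuration above:

```python
import numpy as np

def mp_matrix(x, N, M):
    """Regression matrix with columns x(n-m)*|x(n-m)|^(p-1) (memory polynomial)."""
    x = np.asarray(x, dtype=complex)
    cols = []
    for m in range(M + 1):
        xm = np.concatenate([np.zeros(m, complex), x[:len(x) - m]])
        for p in range(1, N + 1):
            cols.append(xm * np.abs(xm) ** (p - 1))
    return np.column_stack(cols)

def fit_postdistorter(y, u, g_norm, N=7, M=0):
    """Indirect learning: fit a_pm so the post-distorter maps v = y/g_norm
    (normalized PA output) back to the predistorted signal u, eqs. (5)-(6)."""
    V = mp_matrix(y / g_norm, N, M)
    a_pm, *_ = np.linalg.lstsq(V, u, rcond=None)
    return a_pm

def predistort(x, a_pm, N=7, M=0):
    """Apply u(n) per eq. (5) with the coefficients copied from the post-distorter."""
    return mp_matrix(x, N, M) @ a_pm
```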
_B. RESULTS_
A set of experiments has been conducted to evaluate the real
effects in terms of EVM degradation in a real RF subsystem
considering the MCS allocation, the operating point of the
RF PA output power, and the power ratio between the outer
and inner power densities (�). The final objective is to obtain
information to set real restrictions concerning EVM requirements that affect ICIC and eICIC implementations in order to
improve resource allocation strategies, including scheduling
and link adaptation through MCS selection. As mentioned
above, the results presented here correspond to a pattern
where the first 13 PRBs correspond to the outer sub-band
of an SFR scheme and the last 12 PRBs correspond to the
inner sub-band. Similar analyses and equivalent conclusions
have been obtained for other patterns of the inner and outer
sub-bands and sizes of the sub-bands.
First, Fig. 9 shows the EVM measured in each PRB in
various situations depending on the operating point of the
RF PA, each of which corresponds to the respective average
output power of the PA, with no difference in the power
level (Δ = 0 dB) between the inner and outer sub-bands and considering 64QAM. This allows us to observe the
influence of the operating point of the PA on the measured
EVM. As expected, the higher the RF output power, the more
**FIGURE 8. Predistorter Scheme using an indirect learning structure.**
**FIGURE 9. EVM measured in each PRB varying the operating point of the**
RF power amplifier. (Modulation 64QAM and no difference in power level
between PRBs, Δ = 0 dB).
**FIGURE 10. EVM measured at an intermediate operating point of the RF**
PA, corresponding to Pout = 20.3 dBm, varying the power level Δ between
the first 13 and the last 12 PRBs from 0 dB to 9 dB. (Modulation 64QAM).
nonlinearities in the transmitter, and the higher the EVM
value along the whole carrier band.
In addition, Fig. 9 shows how the possible nonlinearities in the transmitter lead to a loss of orthogonality between the subcarriers, which generates inter-subcarrier interference (in-band interference), affecting PRBs differently across the band. Because of the decreasing spectral power of individual subcarriers in the side lobes, PRBs located at the edges of the carrier spectrum are affected by fewer interfering subcarriers able to add a
significant interference power. Fig. 9 shows this effect: PRBs
in the middle of the system band (i.e., PRB#8 to PRB#16) are
more affected by the in-band interference and present higher
EVM values. Then, EVM decreases slightly at the ends of
the band (i.e., PRB#0, PRB#1 or PRB#23, PRB#24). This
effect is more significant as the RF PA works in a nonlinear
operating point. It should be noted that the EVM requirements
of the standard (see Table 2) are not met for higher output
power levels. In these cases, it is necessary to include a DPD
in the transmitter to meet the specifications for 64QAM (8%).
**FIGURE 11. EVM measured at a nonlinear operating point of the RF PA,**
corresponding to Pout = 22 dBm, varying the power level Δ between the
first 13 and the last 12 PRBs from 0 dB to 9 dB. (Modulation 64QAM).
It is expected that the effect of in-band interference will
be more evident when power coordination, linked to SFR,
is applied. That is, a power ratio Δ is applied between the outer sub-band (used by the users located at the cell edge) and the inner sub-band (used by the users located in the cell center). Having seen in Fig. 9 the performance for different operating points, in Fig. 10 and Fig. 11 we analyze the impact of the Δ values at two different operating points, always considering the most demanding modulation (64QAM) from the EVM point of view. Fig. 10 shows the EVM measured at an intermediate operating point of the RF PA, corresponding to Pout = 20.3 dBm, and Fig. 11 shows the EVM measured at a nonlinear operating point corresponding to Pout = 22 dBm. The difference between the power levels of the outer (first 13 PRBs) and inner (last 12 PRBs) sub-bands (Δ) varied from 0 dB to 9 dB.
As expected, the PRBs of the inner sub-band (powered
down) will lead to a reduced SNR in transmission and a
higher EVM than the PRBs of the outer sub-band. This is
why the RE power control dynamic ranges (dB) suggested in
the specification depend on the MCS. In addition, we observe
that the EVM strongly depends on the distance from the PRB
(where the EVM is evaluated) to the jump point between the
inner and outer sub-bands. It should be noted that the PRB in
the transition zone between the two sub-bands is considerably
affected. However, the EVM degradation diminishes as we move away from the transition zone. The effect is more noticeable for a larger value of Δ and for the nonlinear operating point of the RF PA. This is because the in-band interference affects to a greater extent the subcarriers
that are transmitted with less power and are close to others
that are transmitted with greater power, because the side
lobes of the latter will have a higher relative power with
respect to the main lobe of the subcarriers transmitted with
less power. This occurs in the transition zone between the
inner and outer sub-bands. In this area, the PRBs of the inner
sub-band (i.e., PRB#13) suffer more in-band interference
level coming from the nearest PRBs of the outer sub-band
(which are power boosted with respect to those of the inner
sub-band) than coming from PRBs of the own inner sub-band.
This results in an increase of EVM in PRBs of the transition
zone (i.e., PRB#13), which decreases in PRBs located farther from it. For this reason, as PRBs move away from the transition zone in the inner sub-band, the high-power subcarriers are further away and have less effect, so the PRBs at
the band edge (i.e., PRB#23 and PRB#24) will present lower
EVM values.
On the contrary, in the outer sub-band, PRBs are affected
by the subcarriers of the inner sub-band, which have less
power, and by the subcarriers of their own sub-band with
similar power. This results in a lower EVM, which produces the corresponding jump in EVM between the inner and outer sub-bands. Compared with the inner sub-band, the EVM appears to remain almost unchanged. However, we see that in the outer sub-band, the EVM increases as we move away from the edge of
the carrier band (i.e., PRB#0) to the center because PRBs are
affected by more subcarriers adding significant interference
on each side. This increase stabilizes when we approach
the transition zone (i.e., PRB#12) because subcarriers of the
inner sub-band become part of the group of most significant
interfering ones and they have less power.
A good characterization of the EVM performance in the
inner sub-band will allow us to make suitable decisions at
the scheduler concerning the allowed allocable MCS in each
PRB. For instance, in Fig. 10, the EVM requirements (8%)
are satisfied for Δ = 3 dB and Δ = 6 dB in all PRBs. However, when Δ = 9 dB is budgeted, 64QAM selection is still allowed for PRB#18 to PRB#24. In fact, a general
indication from the transmitter SNR point of view is to
allocate the highest MCS as far as possible from the jump
point. As anticipated in Section II, when ICIC strategies are
applied, detailed and individualized analyses are required,
which are not considered in the standard. Concerning EVM
degradation in the low-power sub-band (in this case, the
inner sub-band), it is clear that it depends on the Δ factor. However, the specific relationship between EVM and Δ must be analyzed by considering the transmitter implementation, particularly the actual PA, its operating point, and the use of any linearization technique. The type of analysis that makes it possible to obtain the dynamic power ranges defined in Table 1 cannot be ignored. However, the specific values should not be misunderstood: Δ > 0 does not prevent the use of 64QAM when ICIC and eICIC are applied.
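Such a characterization maps naturally onto a per-PRB MCS cap at the scheduler. A sketch, taking the specification EVM limits as thresholds and a hypothetical vector of per-PRB EVM measurements (as would be obtained from a characterization like Fig. 10):

```python
# EVM limits (TS 36.104): 64QAM 8%, 16QAM 12.5%, QPSK 17.5%.
EVM_LIMITS = [("64QAM", 0.08), ("16QAM", 0.125), ("QPSK", 0.175)]

def max_modulation_per_prb(evm_per_prb):
    """Highest-order modulation whose EVM limit each PRB still satisfies."""
    return [next((mod for mod, lim in EVM_LIMITS if evm <= lim), None)
            for evm in evm_per_prb]

print(max_modulation_per_prb([0.05, 0.09, 0.16, 0.20]))
# -> ['64QAM', '16QAM', 'QPSK', None]
```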
Comparing Fig. 10 and Fig. 11, we can see that the greater
the nonlinearity in the transmitter, the greater the difference
between the EVM values for PRB#13 and PRB#24.
The effect of nonlinearity can be clearly observed in Fig. 12, where the EVM is measured by varying the operating point of the RF PA when the difference in power level Δ between both sub-bands is set to 6 dB. As shown in
Fig. 10 and Fig. 11, there is an increase in the EVM level in the
transition zone between the two sub-bands, which decreases
as we move away from the transition zone. In this case,
it is observed that this effect is more significant when more
**FIGURE 12. EVM measured varying the operating point of the RF power**
amplifier with a difference in power level Δ = 6 dB between the first
13 and the last 12 PRBs. (Modulation 64QAM).
nonlinearities exist in the transmitter, as already observed
when we compare Fig. 10 and Fig. 11. In Fig. 11, because
the baseline EVM is approximately 8% for Δ = 0 dB, the standard requirements are not satisfied when Δ > 0. In this case, a digital predistorter (DPD) must be included in the transmitter to reduce the measured EVM. When the transmitter works in a more linear zone (i.e., Pout = 16.4 dBm), the non-idealities persist but are less significant, which allows a better preservation of orthogonality. Because the inner sub-band is power de-boosted relative to the outer sub-band, a lower SNR is achieved in the inner sub-band, resulting
in a higher EVM. However, the decreasing effect that occurs
when we move away from the transition zone to the edge of
the carrier band is almost negligible.
The results of Figs. 9–12 also allow us to infer some relevant conclusions regarding the management of eICIC strategies to combat the interference in HetNets, regardless of the
ICIC scheme used in the macrocell to combat inter-cell interference from other macrocells. As mentioned in Section II,
it is expected that the PA operates within its saturation limits for normal subframes, while a power back-off is applied at the PA in LP subframes. In these subframes, the total output power is also reduced, and a lower EVM degradation is expected than in subframes where normal operation occurs.
If the EVM measurements are performed over a period of at
least 10 ms, the EVM values are averaged. This means that the EVM requirements can be satisfied in the LP subframes, whereas in normal subframes this cannot be guaranteed. Thus,
the measurements must consider both types of subframes
separately. The EVM variations between the normal and lowpower subframes should be considered to satisfy the EVM
requirements in all subframes.
Taking the working point of the PA that corresponds to a
nonlinear zone (Pout = 22 dBm), we want to evaluate the
influence of changing the modulation scheme between the
inner and outer sub-bands. In Fig. 12, the tests have been
performed with 64QAM modulation in all PRBs, whereas
**FIGURE 13. EVM measured with different modulations between inner**
and outer sub-bands (difference in power level Δ = 6 dB, Pout = 22 dBm).
**FIGURE 14. EVM measured in a linear (Pout = 18.4 dBm) and nonlinear**
operating point (Pout = 22 dBm) of the RF PA with and without DPD and
with different modulations between inner and outer sub-bands
(difference in power level Δ = 6 dB).
Fig. 13 shows the results with different modulation schemes.
In all cases, an increase in the EVM level appears in the
transition zone due to the jump in the power level (Δ = 6 dB), but it is more significant as the order of the modulation used in the inner sub-band decreases. For instance, in this specific implementation, when 64QAM is considered in the outer sub-band, the EVM in the inner sub-band increases from 12.9% when 64QAM is used to 15.2% and 16.2% if 16QAM
and QPSK are selected, respectively. For a given SNR, the
EVM is lower as the modulation order increases. In addition,
as SNR increases, the slope of the EVM improvement is
larger for the lower modulation orders. In Fig. 13 we see how
EVM decreases faster for QPSK as we move away from the
transition zone.
Concerning the impact of the MCS used in the outer sub-band, the results are not conclusive; however, in general, the EVM in the inner sub-band decreases when a lower-order modulation is used in the outer sub-band. Regarding the EVM in the outer sub-band, slightly lower EVM values are obtained as the EVM grows in the inner band.
Finally, as shown in Fig. 11, at a nonlinear operating point of the RF PA, the EVM requirements are
**FIGURE 15. EVM measured in a linear (Pout = 18.4 dBm) and nonlinear**
operating point (Pout = 22 dBm) of the RF PA with and without DPD and
with different modulations between inner and outer sub-bands
(difference in power level Δ = 3 dB).
not satisfied; therefore, a DPD must be included in the
transmitter to reduce the measured EVM. To evaluate the
effects of the inclusion of a digital predistorter (DPD) at
the transmitter, Fig. 14 and 15 show the EVM measured
in each PRB at two different operating points of the RF
PA: one linear (Pout = 18.4 dBm) and the other nonlinear (Pout = 22 dBm). In both cases, the results have been
obtained in two situations: when a QPSK modulation scheme
was used in the first 13 PRBs (outer sub-band) and 64QAM
in the last 12 PRBs (inner sub-band), and with the same
64QAM modulation scheme in all PRBs. To observe the
influence of Δ on the measured EVM, Fig. 14 shows the case where the first 13 PRBs have a power level Δ = 6 dB higher than the last 12 PRBs, while in Fig. 15, Δ = 3 dB. As in the previous
figures, Fig. 14 and Fig. 15 show how a significant increase
in the EVM level appears in PRBs 12 and 13 (transition
zone), and a decreasing effect as moving away from the
transition zone to the edge of the carrier band. This effect
is relevant in the nonlinear scenario and is almost negligible
in the linear PA and when DPD is applied. In fact, when a
DPD is applied, the EVM decreases in all PRBs. For instance,
in Fig. 14, EVM reaches values higher than 10% in all the last
12 PRBs when Pout = 22 dBm. When a DPD is applied, the EVM decreases below 7% in all PRBs, reaching a value of 3% in
the PRBs with a higher power level. The most relevant aspect
is that the differences among the PRBs are negligible. Similar
conclusions can be obtained from Fig. 15, which shows that
the power level difference (Δ) between the inner and outer
sub-bands only affects the specific expected EVM values.
It can also be observed that in these situations, the influence
of the modulation scheme in the EVM is not significant.
**IV. CONCLUSION**
In this study, we analyzed the influence of mandatory satisfaction of EVM requirements at the transmitter in the design
of radio resource management (RRM) strategies for DL in
4G/5G mobile systems. Specifically, we experimentally analyzed the real effects of the power allocation schemes linked
to ICIC and eICIC in terms of EVM degradation in transmissions. This aspect has not been addressed in studies on
ICIC or eICIC, which usually overlook these EVM requirements, resulting in ideal evaluations of RRM proposals and
overestimations of the user data transmission and system
throughput performance. Only a few works have considered
LTE constraints related to the dynamic power range for each
modulation order to ensure EVM requirements. However,
constraints for ICIC were outside the scope of the studies conducted for the specifications. Therefore, the analysis in this work avoids the unnecessary performance limitations that arise
when applying ICIC and eICIC in LTE/LTE-A and 5G cells
by unnecessarily restricting the range of variation of the
allocable power masks and MCSs.
As is known, the particular numerical results obtained
depend on the specific transmitter implementation. Thus, the
contribution does not lie in providing a precise numerical
quantification of the effects, but in the analysis and verification of some EVM behavior patterns that should be considered to maximize the performance of the ICIC and eICIC
schemes while ensuring QoS. We can conclude that the most relevant parameters that affect the EVM are the power ratio between the outer and inner sub-bands (Δ), the PA operating point, and the distance from the PRB where the EVM is evaluated to the jump point between the inner and outer sub-bands. It has been shown that the PRBs of low-power sub-bands in the transition zone between the two sub-bands are considerably affected by the power jump. However, the EVM degradation diminishes as we move away from the transition zone. Future research work could address the design of
RRM strategies based on the type of analysis performed in
this work. This will allow us to make more suitable decisions
at the scheduler concerning the allowed allocable MCS in
each PRB.
**REFERENCES**
[1] A. S. Hamza, S. S. Khalifa, H. S. Hamza, and K. Elsayed, ‘‘A survey on
inter-cell interference coordination techniques in OFDMA-based cellular
networks,’’ IEEE Commun. Surveys Tuts., vol. 15, no. 4, pp. 1642–1670,
4th Quart., 2013.
[2] D. G. González, M. García-Lozano, S. Ruiz, and J. Olmos, ‘‘On the need
for dynamic downlink intercell interference coordination for realistic long
term evolution deployments,’’ Wireless Commun. Mobile Comput., vol. 14,
no. 4, pp. 409–434, Mar. 2014.
[3] C. Kosta, B. Hunt, A. U. Quddus, and R. Tafazolli, ‘‘On interference
avoidance through inter-cell interference coordination (ICIC) based on
OFDMA mobile systems,’’ IEEE Commun. Surveys Tuts., vol. 15, no. 3,
pp. 973–995, 3rd Quart., 2013.
[4] E. Pateromichelakis, M. Shariat, A. U. Quddus, and R. Tafazolli, ‘‘On the
evolution of multi-cell scheduling in 3GPP LTE/LTE-A,’’ IEEE Commun.
_Surveys Tuts., vol. 15, no. 2, pp. 701–717, 2nd Quart., 2013._
[5] Y. L. Lee, T. C. Chuah, J. Loo, and A. Vinel, ‘‘Recent advances in radio
resource management for heterogeneous LTE/LTE-A networks,’’ IEEE
_Commun. Surveys Tuts., vol. 16, no. 4, pp. 2142–2180, 4th Quart., 2014._
[6] T. Novlan, R. Ganti, A. Ghosh, and J. Andrews, ‘‘Analytical evaluation of fractional frequency reuse for OFDMA cellular networks,’’
_IEEE Trans. Wireless Commun., vol. 10, no. 12, pp. 4294–4305,_
Dec. 2011.
[7] B. M. Hambebo, M. M. Carvalho, and F. M. Ham, ‘‘Performance evaluation of static frequency reuse techniques for OFDMA cellular networks,’’
in Proc. 11th IEEE Int. Conf. Netw., Sens. Control, Apr. 2014, pp. 355–360.
[8] M. S. Hossain, F. Tariq, G. A. Safdar, N. H. Mahmood, and
M. R. A. Khandaker, ‘‘Multi-layer soft frequency reuse scheme for 5G
heterogeneous cellular networks,’’ in Proc. IEEE Globecom Workshops
_(GC Wkshps), Dec. 2017, pp. 1–6._
[9] F. Hamdani, A. Maurizka, M. M. Ulfah, and Iskandar, ‘‘Power ratio
evaluation for soft frequency reuse technique in LTE-A heterogeneous
networks,’’ in Proc. 11th Int. Conf. Telecommun. Syst. Services Appl.
_(TSSA), Oct. 2017, pp. 1–5._
[10] M. S. Hossain and Z. Becvar, ‘‘Soft frequency reuse with allocation of
resource plans based on machine learning in the networks with flying base
stations,’’ IEEE Access, vol. 9, pp. 104887–104903, 2021.
[11] A. D. Firouzabadi, A. M. Rabiei, and M. Vehkaperä, ‘‘Fractional frequency
reuse in random hybrid FD/HD small cell networks with fractional power
control,’’ IEEE Trans. Wireless Commun., vol. 20, no. 10, pp. 6691–6705,
Oct. 2021.
[12] S. C. Lam and X. N. Tran, ‘‘Fractional frequency reuse in ultra dense
networks,’’ Phys. Commun., vol. 48, Oct. 2021, Art. no. 101433.
[13] Z. H. Abbas, M. S. Haroon, F. Muhammad, G. Abbas, and F. Y. Li,
‘‘Enabling soft frequency reuse and Stienen’s cell partition in
two-tier heterogeneous networks: Cell deployment and coverage
analysis,’’ IEEE Trans. Veh. Technol., vol. 70, no. 1, pp. 613–626,
Jan. 2021.
[14] H. Carvajal, N. Orozco, D. Altamirano, and C. De Almeida, ‘‘Performance
analysis of non-ideal sectorized SFR cellular systems in rician fading channels with unbalanced diversity,’’ IEEE Access, vol. 8, pp. 133654–133672,
2020.
[15] B. Soret and K. I. Pedersen, ‘‘Macro transmission power reduction for
HetNet co-channel deployments,’’ in Proc. IEEE Global Commun. Conf.
_(GLOBECOM), Dec. 2012, pp. 4126–4130._
[16] B. Soret, A. D. Domenico, S. Bazzi, N. H. Mahmood, and K. I. Pedersen,
‘‘Interference coordination for 5G new radio,’’ IEEE Wireless Commun.,
vol. 25, no. 3, pp. 131–137, Jun. 2018.
[17] B. Soret and K. I. Pedersen, ‘‘On-demand power boost and cell muting for
high reliability and low latency in 5G,’’ in Proc. IEEE 85th Veh. Technol.
_Conf. (VTC Spring), Jun. 2017, pp. 1–5._
[18] T. Cogalan, S. Videv, and H. Haas, ‘‘Coordinated scheduling for aircraft
in-cabin LTE deployment under practical constraints,’’ in Proc. IEEE 87th
_Veh. Technol. Conf. (VTC Spring), Jun. 2018, pp. 1–6._
[19] Evolved Universal Terrestrial Radio Access (E-UTRA); Physical Layer
_Procedures, document TS-36.213 V14.16.0, 3GPP, Release 14, Sep. 2020._
[20] Evolved Universal Terrestrial Radio Access (E-UTRA); User Equipment
_(UE) Radio Transmission and Reception, document TS 36.101 V14.22.0,_
Release 14, 3GPP, Mar. 2022.
[21] Evolved Universal Terrestrial Radio Access (E-UTRA); Base Station (BS)
_Radio Transmission and Reception, document TS 36.104 V14.10.10,_
Release 14, 3GPP, Mar. 2021.
[22] Evolved Universal Terrestrial Radio Access (E-UTRA); Radio Resource
_Control (RRC) Protocol Specification, document TS 36.331 V14.16.0,_
Release 14, 3GPP, Jan. 2021.
[23] Y. Wang, W. Zhang, F. Peng, and Y. Yuan, ‘‘RNTP-based resource block
allocation in LTE downlink indoor scenarios,’’ in Proc. IEEE Wireless
_Commun. Netw. Conf. (WCNC), Apr. 2013, pp. 334–3341._
[24] Proposal for eNB TX Dynamic Range Requirements, document R4-080113, Nokia Siemens, 3GPP TSG-RAN Working Group 4 Meeting #46,
Sorrento, Italy, Feb. 2008.
[25] BS TX Dynamic Range, document R4-080038, TP for 36.104, NTT
DoCoMo, NXP, 3GPP TSG-RAN Working Group 4 Meeting #46,
Sorrento, Italy, Feb. 2008.
[26] BS TX Dynamic Range, Panasonic, document R4-080084, 3GPP TSG
RAN WG4 (Radio) Meeting #46, Sorrento, Italy, Feb. 2008.
[27] LS on BS Implications Due to LP-ABS for feICIC, document R4122088, Huawei, HiSilicon, 3GPP TSG-RAN WG4 Meeting #62, Jeju,
South Korea, Mar. 2012.
[28] C. Eun and E. J. Powers, ‘‘A new Volterra predistorter based on the
indirect learning architecture,’’ IEEE Trans. Signal Process., vol. 45, no. 1,
pp. 223–227, Jan. 1997.
[29] A. Zhu, P. J. Draxler, J. J. Yan, T. J. Brazil, D. F. Kimball, and P. M. Asbeck,
‘‘Open-loop digital predistorter for RF power amplifiers using dynamic
deviation reduction-based Volterra series,’’ IEEE Trans. Microw. Theory
_Techn., vol. 56, no. 7, pp. 1524–1534, Jul. 2008._
[30] L. Ding, G. T. Zhou, D. T. Morgan, Z. Ma, J. S. Kenney, J. Kim, and
C. R. Giardina, ‘‘A robust digital baseband predistorter constructed using
memory polynomials,’’ IEEE Trans. Commun., vol. 52, no. 1, pp. 159–165,
Jan. 2004.
ÁNGELA HERNÁNDEZ-SOLANA received the
degree in telecommunications engineering and the
Ph.D. degree from the Universitat Politècnica de
Catalunya (UPC), Spain, in 1997 and 2005, respectively. She has been working at UPC and the
University of Zaragoza, where she has been an
Associate Professor, since 2010. She is a member
of the Aragón Institute of Engineering Research
(I3A). Her research interests include 5G/4G technologies, heterogeneous communication networks
and mission-critical communication networks, with emphasis on transmission techniques, radio resource management and quality of service, mobility
management and planning, and dimensioning of mobile networks.
PALOMA GARCÍA-DÚCAR was born in
Zaragoza, Spain, in 1972. She received the
degree in telecommunications engineering and the
Ph.D. degree from the University of Zaragoza,
in 1996 and 2005, respectively. In 1995, she was
employed at Teltronic, S.A.U., where she worked
with the Research and Development Department,
involved in the design of radio communication
systems (mobile equipment and base station),
until 2002. From 1997 to 2001, she has collaborated in several projects with the Communication Technologies Group,
Electronics Engineering and Communications Department, University of
Zaragoza. In 2002, she joined the Centro Politécnico Superior, University of
Zaragoza, where she is currently an Assistant Professor. She is also involved
as a Researcher with the Aragon Institute of Engineering Research (I3A). Her
research interests include the area of linearization techniques of power amplifiers and signal processing techniques for radio communication systems.
ANTONIO VALDOVINOS received the degree
in telecommunications engineering and the
Ph.D. degree from the Universitat Politècnica
de Catalunya (UPC), Spain, in 1990 and 1994,
respectively. He was with UPC and the University of Zaragoza, where he has been a Full
Professor, since 2003. He is a member of the
Aragón Institute of Engineering Research (I3A).
His research interests include 5G/4G technologies, heterogeneous communication networks and
mission-critical communication networks, with emphasis on transmission
techniques, radio resource management and quality of service, mobility
management, and planning and dimensioning of mobile networks.
JUAN ERNESTO GARCÍA was born in Zaragoza,
Spain, in 1997. He received the bachelor’s and
master’s degrees in telecommunications engineering from the University of Zaragoza, in 2019
and 2021, respectively. In 2020, he was employed
with the Communication Technologies Group,
Department of Electronics Engineering and Communications, University of Zaragoza, after collaborating with them during the final bachelor’s
degree thesis, where he worked in the research of
several linearization techniques for critical mobile communication systems.
In 2021, he joined Indra Sistemas S. A., where he is currently working in
the Solution and Product Area as a System Engineer. He is still collaborating
as a Researcher with the Aragon Institute of Engineering Research (I3A).
His research interests include the area of radio-frequency design and signal
processing techniques for critical radio communication systems.
JESÚS DE MINGO was born in Barcelona, Spain,
in 1965. He received the Ingeniero de Telecomunicación degree from the Universidad Politécnica
de Cataluña (UPC), Barcelona, in 1991, and the
Doctor Ingeniero de Telecomunicación degree
from the Universidad de Zaragoza, in 1997.
In 1991, he joined the Antenas Microondas
y Radar Group, Departamento de Teoría de la
Señal y Comunicaciones, until 1992. In 1992,
he was employed at Mier Comunicaciones, S.A.,
where he worked in the solid state power amplifier design, until 1993.
Since 1993, he has been an Assistant Professor, since 2001, an Associate
Professor, and since 2017, a full Professor with the Departamento de Ingeniería Electrónica y Comunicaciones, Universidad de Zaragoza. He is a
member of the Aragon Institute of Engineering Research (I3A). His research
interests include the area of linearization techniques of power amplifiers,
power amplifier design, and mobile antenna systems.
PEDRO LUIS CARRO was born in Zaragoza,
Spain, in 1979. He received the M.S. degree
in telecommunication engineering and the
Ph.D. degree from the University of Zaragoza,
in 2003 and 2009, respectively. In 2002, he carried out his master thesis on antennas for mobile
communications at Ericsson Microwave Systems,
A.B., Göteborg, Sweden, with the Department of
GSM and Antenna Products. From 2002 to 2004,
he was employed at RYMSA S.A., where he
worked with the Space and Defense Department as an Electrical Engineer,
involved in the design of antennas and passive microwave devices for satellite
communication systems. From 2004 to 2005, he worked with the Research
and Development Department, Telnet Redes Inteligentes, as a RF Engineer,
involved in radio over fiber systems. In 2005, he joined the University of
Zaragoza, as an Assistant Professor with the Electronics Engineering and
Communications Department. His research interests include the area of
mobile antenna systems, passive microwave devices, and power amplifiers.
|
Received May 29, 2020, accepted June 15, 2020, date of publication June 29, 2020, date of current version July 20, 2020.
_Digital Object Identifier 10.1109/ACCESS.2020.3005663_
# Consortium Blockchain-Based Decentralized Stock Exchange Platform
HAMED AL-SHAIBANI, NOUREDDINE LASLA, (Member, IEEE),
AND MOHAMED ABDALLAH, (Senior Member, IEEE)
Division of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University (HBKU), Doha, Qatar
Corresponding author: Hamed Al-Shaibani (halshaibani@mail.hbku.edu.qa)
**ABSTRACT The global implementation architecture of the traditional stock market distributes responsi-**
bilities and data across different intermediaries, including financial and governmental organizations. Each
organization manages its system and collaborates with the others to facilitate trading on the stock exchange
platform, and typically buy-sell orders go through different parties before settlement. This design architecture
that involves a complex chain of intermediaries has several limitations and shortcomings, such as a single
point of failure, a longer time for financial settlements, and weak transparency. Blockchain technology
consists of a network of computer nodes that securely share a common ledger without the need of having
any kind of intermediaries. In this paper, we present a novel blockchain-based architecture for a fully
decentralized stock market. Our architecture is based on a private Ethereum blockchain to create a consortium
network leveraging organizations that are already involved in the traditional stock exchange to act as
validating nodes. In our architecture, the stock exchange trading logic is completely implemented on a smart
contract, while considering the existing governmental market regulations. Since the new platform does not
introduce significant changes to the stock exchange trading logic and does not eliminate any of the traditional
parties from the system, our proposal promotes efficient adoption and deployment of decentralized stock
exchange platforms. In addition, we present a proof of concept implementation of the new architecture,
including the smart contract for trade exchange, as well as a virtualization-based test network to assess the
platform performance. The test network consists of virtual nodes that run the developed stock exchange
smart contract where we measure the buy-sell orders throughput and latency under different network sizes
and trading workload scenarios. The obtained results have shown that the proposed trading platform can
reach a throughput of 311.8 tx/sec, which is equivalent to 89% of the optimal throughput when the sending
rate is 350 tx/sec. This throughput is largely sufficient to meet the requirement of major stock exchanges,
such as Singapore stock market.
**INDEX TERMS Blockchain, smart contract, stock exchange, trading.**
**I. INTRODUCTION**
The stock market is a platform composed of financial and
governmental organizations that participate in exchanging
shares, bonds, or other securities in a transaction known
as a trade. The growth of this market has a positive and
direct impact on the financial growth of a country’s economy since it offers opportunities that attract investors to
trade and exchange shares. Studies conducted in the USA [1]
and Pakistan [2] examine this relationship by comparing the
stock market performance with the Gross Domestic Product (GDP). The authors conclude that the performance of the
stock exchange is directly proportional to the performance of
a country’s economy. For instance, Pakistan achieved growth
in both the GDP and stock market of about 30% and 6.08%,
respectively, between 2003 and 2008. Therefore, the stability and security of the stock platform are vital in increasing the confidence to invest and trade, and they eventually result in better economic growth for the country.
Despite the wide popularity and adoption of the conventional stock exchange platform architecture, it suffers from several limitations and shortcomings [3]:
1) Due to the centralized architecture of the stock exchange platform, each participating system, such as the brokers and the stock exchange itself, is a single point of failure.
2) The data managed by each system are inconsistent, resulting in errors and extended recovery time when failures happen.
3) The daily availability of the system is limited, which can impact exercises such as auditing the data and providing transparent access to users throughout the day.
4) The system takes a long time to perform financial settlements; typically, settlement is achieved three days after trading happens.
There have been many attempts by major vendors who build trade execution and matching engines in the financial sector to address the above limitations. For instance, Nasdaq, one of the leading trading and matching technology vendors, offers services and products such as hosting, managing, and supporting the complete end-to-end trading process. It also provides surveillance systems integrated with other systems that allow its clients to monitor and regulate the process of trading and settlement. However, this approach still suffers from limitations such as the long settlement time, the limited level of data transparency, and the fixed trading hours. Most importantly, clients might face specific hardware requirements or the need to follow an enforced architecture that distributes the systems involved in the trading process across multiple organizations, each managing its system separately. For instance, some stock market regulators enforce specific cash settlement and clearance solutions, which are different systems than what Nasdaq provides [4]. This results in maintaining multiple systems, which increases the chance of having a single point of failure among the participating systems and increases the complexity of the overall trading platform architecture.
According to [5], blockchain can solve many of the identified limitations affecting the traditional stock exchange platform, such as the lack of transparency, the long settlement time between brokers and the central bank, and the high transaction fees paid to brokers for each generated trade. In [3], the centralized architecture of the Bucharest stock exchange market is analyzed, with the main objective of addressing the high fees the investor pays to the broker for each successfully executed trade. The authors define the new stock market in a smart contract and deploy it on the public Ethereum network. This implementation requires a form of payment, in Ethereum's cryptocurrency (Ether), for each performed transaction. Their conclusion shows that decentralizing the Bucharest stock exchange platform can help reduce total transaction fees.
The research objective and implementation approach taken to decentralize the Bucharest stock exchange platform differ from ours in several key areas. First, we use a consortium blockchain network in which all participants are known and trusted, and no cryptocurrency fees are paid to miners in the network. Second, our main objective is to optimize the performance of the decentralized system rather than to reduce fees: we measure throughput and latency to ensure that our implementation meets the required level of the stock market platform. Also, our consensus algorithm is based on Proof of Authority (PoA), which provides better performance in terms of execution time and power efficiency in comparison with public network consensus algorithms such as the Proof of Work (PoW) used by the decentralized Bucharest stock exchange. The consensus algorithms are discussed in more detail later in this paper. TABLE 1 provides definitions of the acronyms used in the paper.
**TABLE 1. List of acronyms.**
In this paper, we propose a consortium blockchain-based stock exchange platform that meets the performance requirements of the stock exchange platform while also addressing the limitations of the traditional stock exchange. Our proposal is based on Ethereum blockchain technology, in which all necessary business regulations and rules are defined in a smart contract shared across a permissioned blockchain network with the participating financial and governmental institutions. We perform experimental tests by deploying the smart contract on virtual nodes and measuring the network performance under different workloads, increasing the number of generated trades and the number of validating nodes. Our results show that this architecture meets the required performance of the stock exchange platform in terms of latency and throughput under different test scenarios.
The remainder of this paper is organized as follows. Section II provides an overview of the traditional stock exchange platform. Section III presents an overview of blockchain technology and discusses its implementation types and consensus algorithms for public and private networks. Section IV discusses the related work, focusing on blockchain-based designs for a stock market and an e-auction system. Our proposed blockchain-based stock exchange framework is presented in Section V, where we discuss the system architecture and the smart contract defining the functionalities and business logic of the stock exchange. In Section VI, we evaluate the performance of the proposed architecture in terms of transaction throughput and latency. Finally, Section VII concludes the paper.
**TABLE 2. Major entities of a traditional stock market.**
**II. TRADITIONAL STOCK EXCHANGE OVERVIEW**
The stock market can be defined as ‘‘an aggregation of buying
and selling offers corresponding to an asset’’ [6]. The asset
has a form of bonds, shares, or other securities that the market
offers. A person who trades in the stock market is called an investor or trader and first needs to open a trading account with the Central Securities Depository (CSD), which takes responsibility for managing investors' trading accounts and personal data. Due to market regulations, investors cannot place an order into the system directly and need to go through a third party, namely a broker. In the case of international traders, a special financial entity, named a Custodian,
is employed to place orders in the local market. The matching
engine of the entered buy and sell orders is hosted by the
Stock Exchange (SE) entity. The Central Bank (CB) manages
the financial settlement between brokers and custodians. All
the participating governmental and financial organizations
need to follow the rules and regulations defined by the Financial Market Authority (FMA). FMA is also responsible for
continuously monitoring the stock exchange platform and
reviewing the data.
A summary of the different entities involved in the traditional stock market, with their respective descriptions, is given in TABLE 2.
_A. TRADING OVERVIEW_
The traditional stock market is a centralized platform as
shown in FIGURE 1, which presents this architecture and
the flow of events that take place when a new investor participates in the platform. First, the investor needs to open
an investor account with CSD and obtain his/her National
Investor Number (NIN). The investor then needs to open a
trading account with a broker by providing the mandatory
NIN account. Once the investor information is validated,
he/she can place orders of buying or selling shares through
the associated broker services such as the website or the
mobile application. The broker takes the responsibility of using its Order Management System (OMS), which acts as an interface with the SE to submit the investor's order. Once
a successful trade is generated for that order, SE sends the
**FIGURE 1. Centralized stock exchange platform flow of events.**
acknowledgment message to the broker, who then notifies
the user via the different services provided by the broker.
The shares owned by the investor are updated in the broker
account as well as in the investor account held in the CSD system.
The market regulator FMA has access to both SE and CSD
systems to monitor the market and validate the trades during
and after trading hours.
_B. TRADING HOURS AND PHASES_
There are usually four different phases that a stock market
goes into in most implementations, as shown in FIGURE 2.
The market starts with a 30-minute pre-open phase in which investors can enter their orders, but no trades are generated. Based on the bids and offers entered, opening prices for the listed securities are calculated, so that when the market opens, those calculated prices will be the buy/sell prices used by investors. The next phase is the market opening for trading, in which the listed securities can be traded and orders entered in the pre-open phase get executed. This is the main phase of the stock exchange, where orders keep entering and trades get generated. The duration of this phase is approximately 3 hours and 30 minutes, and this time can vary from one stock exchange to another as per the regulations of the hosting country. The market then prepares for closing and enters the pre-close phase, which is estimated to last 10 minutes.
**FIGURE 2. Stock exchange market trading hours and market phases.**
In this phase, algorithms run to generate the closing prices for
listed securities, and investors can still enter their orders, but
no trades are generated. Finally, the market enters the closing phase, which is estimated to last 5 minutes and in which the entered orders get executed. The market then closes for the day.
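To make the phase logic concrete, the sketch below encodes these four phases as a Solidity state enum. This is an illustrative fragment, not code from the paper's StockExchange contract; the contract and function names are hypothetical.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hypothetical sketch: the four market phases described above.
contract MarketPhaseSketch {
    enum Phase { PreOpen, Open, PreClose, Closed }

    Phase public phase = Phase.PreOpen;

    // Orders may be queued in every phase before Closed, but matching
    // (trade generation) only runs while the market is open.
    function matchingEnabled() public view returns (bool) {
        return phase == Phase.Open;
    }

    // In a real deployment, phase transitions would be restricted to an
    // authorized entity and/or driven by timestamps.
    function advancePhase() public {
        require(phase != Phase.Closed, "market already closed");
        phase = Phase(uint8(phase) + 1);
    }
}
```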
_C. ORDER TYPES_
Investors can place different types of orders when buying or selling shares [3]. These types are listed below to explain how a successful match between buy and sell transactions happens; some orders allow investors to specify conditions that, when triggered, cause the order to be executed and hence generate trades (a sketch encoding these types follows the list):
- Market Order: The buyer would like to buy the shares at the current market price of the share; the same applies to the seller. This order type does not guarantee the highest financial gain from the transaction, but it ensures the order is executed immediately when entered.
- Limit Order: The buyer sets a maximum limit he/she is willing to pay to buy a particular share; the order gets executed against any sell offer that is equal to or less than this limit. In the case of the seller, the limit is the minimum price he/she is willing to sell at, and the order gets executed against a buy order with a price higher than or equal to this minimum limit.
- Validity Defined Order: This order can be associated with either a limit or a market order. The entered order remains valid for a single day (Day Order) or until a certain date (Good Till Date Order). In most stock exchange markets, the system cancels the order after approximately 62 days if no validity date is provided (Open Order).
- Fill or Kill (FOK) Order: Either execute the full order (sell or buy all indicated shares) or cancel it.
- Immediate or Cancel (IOC) Order: The order is
immediately executed, and the remaining quantity that
has not been fulfilled will be canceled.
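As a rough illustration of how these order types could be represented on-chain, the following Solidity fragment sketches one possible encoding. The names are hypothetical and are not taken from the paper's contract.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hypothetical sketch of the order types described above.
contract OrderTypesSketch {
    enum Kind { Market, Limit }
    enum Validity { Day, GoodTillDate, Open, FillOrKill, ImmediateOrCancel }

    struct Order {
        bytes32 symbol;     // listed company symbol
        uint256 quantity;   // number of shares to buy or sell
        uint256 limitPrice; // price bound; unused for market orders
        Kind kind;
        Validity validity;
        uint256 expiry;     // timestamp used by Good-Till-Date orders
        bool isBuy;
    }
}
```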
**III. BLOCKCHAIN OVERVIEW**
To address the limitations and shortcomings of the traditional
stock exchange platform, we opt for Blockchain technology.
Blockchain can be defined as a ‘‘network of computers, all of which must approve a transaction has taken place before it is recorded, in a ‘chain’ of computer code. The details of the transfer are recorded on a public ledger where anyone on the network can see the information’’ [7]. It consists of blocks, each containing a pointer in the form of a hash of the previous block and verified transaction data protected with hash signatures [8]–[10]. Transactions in blockchain are
broadcasted in the network and are validated by a process
known as mining that is performed by special nodes in the
network known as miner nodes [7]. Miner nodes are specific
nodes that append a new block to the chain once the block
becomes full. It is extremely difficult to change a block in
the chain, as doing so requires all subsequent blocks to be recreated; hence, this mechanism prevents modification and maintains a high level of security. FIGURE 3 shows the
content of the first three blocks.
As shown, each block consists of the list of transactions, the hash of the previous block (except for the first block), and a nonce value, which is a number that can be used only once. In some blockchain implementations, such as Bitcoin, the nonce is altered by the miner such that the hash of the block is equal to or less than a certain numerical target value provided by the network as a challenge. The block also contains its own hash.
**FIGURE 3. The first three blocks in a Blockchain.**
In order to keep track of all transactions, a blockchain ledger is used in which all participants in the network have access to the same ledger, with transactions replicated among all peer nodes. This replication ensures that a system built on blockchain can continue operating even if multiple participating nodes fail to connect to the network. Nodes in the network are distinguished by addresses or identifiers known as public keys; hence, defined roles, privacy, and anonymity can be efficiently maintained [11]. Miner nodes rely on the fact that all transactions in the network are duplicated across all nodes involved. Therefore, ‘‘distributed consensus’’ needs to be achieved, meaning that all nodes involved agree on the validity of the blockchain and share the same version of it [12].
Some implementations of blockchain, such as Ethereum, use protocols to express the logic that needs to be followed; this is known as a smart contract. According to [13], a smart contract can be defined as ‘‘the computer protocols that digitally facilitate, verify, and enforce the contracts made between two or more parties on blockchain’’. The smart contract ensures that the defined logic is validated and followed. No centralized entity is needed to validate the conditions defined in the smart contract since, once it is deployed, all participating nodes in the network must follow the logic defined in it.
_A. BLOCKCHAIN IMPLEMENTATION TYPES_
In blockchain, all nodes need to reach a state of agreement, or consensus, on the next block to be added to the chain, especially since, in a peer-to-peer network such as blockchain, the nodes do not trust each other. Many consensus mechanisms can be used, depending on the implementation type and the blockchain technology used. There are mainly two types of implementation for a blockchain network: permissionless and permissioned.
1) PERMISSIONLESS BLOCKCHAIN (PUBLIC)
A permissionless, or public, implementation of blockchain allows any user to become a node and connect to the network through the Internet. The implementation utilizes the concept of a peer-to-peer (P2P) network with a distributed architecture in which no client takes the form of an administrator. All clients on the network are connected in a flat topology where each peer shares the same rights and privileges and has access to the same resources as the other peers [7], [14], [15].
2) PERMISSIONED BLOCKCHAIN (PRIVATE)
Unlike in a permissionless blockchain, nodes in a permissioned blockchain are identified and authenticated. In some implementations, a single entity takes responsibility for managing the roles and responsibilities of the nodes and granting them permission to the data accordingly.
_B. CONSENSUS ALGORITHMS_
There are several consensus algorithms for the implementation of a public blockchain network, such as Proof of Work
(PoW), Proof of Stake (PoS), Proof of Burn (PoB), Delegated
Proof of Stake (DPoS), and Proof of Importance (PoI). PoW
and PoS are the two most famous and commonly used consensus algorithms [16]. PoW is an intensive hashing mechanism
that provides a difficult mathematical challenge for the block
miners to solve, and whoever manages to solve the challenge first will become the block miner [17]. This protocol
ensures integrity among all nodes but suffers greatly in terms of execution time and the processing and energy power required [17]. On the other hand, the PoS consensus protocol is
more power-efficient and reduces mining costs. This protocol
takes less time than PoW to validate a transaction as it relies
on validators taking part in voting for the next block, and the
weight of each validator’s vote is dependent on how much it
deposited in that system. Nodes that are allowed to create a
block act as validators who need to deposit some cryptocurrency as a stake in the network that will be locked to have the
chance of being selected as the next block miners. The more
stake a validator has in the network, the higher the chance
of it being selected to validate the new block. Such a protocol encourages correct behavior, since any validator who violates the network rules or acts maliciously loses the stake deposited in the network [18]. PoS has several advantages, such as lower power and energy consumption and better execution time; moreover, the mechanism of having a stake that can be lost for malicious behavior is expected to pressure validators to act honestly more than in PoW.
In the case of permissioned blockchain networks, PoA and
RAFT are popular consensus algorithms where the participants are known and trusted in a private network. According
to [17], PoA is an algorithm that attracted a lot of attention due
to its offered performance resulting from lighter exchanged
messages. It operates in rounds where several nodes are
elected, with one of them acting as a mining leader charged
with the task of proposing the new block and eventually
reaching consensus. These elected nodes are called ‘‘authorities’’, and each has a unique ID; if there are N authorities, at least N/2 + 1 of them are assumed to be honest. This algorithm follows a ‘‘mining rotation schema’’ to distribute block creation among the authorities in a fair manner, and for each round step, a mining leader from among the authorities is elected to mine the new block [17].
**TABLE 3. Comparison between the consensus algorithms.**
In [19], the author argues that RAFT consensus is easy to
understand and implement, which makes it efficient to use
when building applications and systems. It works by having
a set timer for all authorized nodes, which can validate new
blocks in ‘‘terms’’ that can be seen as rounds that get repeated
over time. For a given term, as the timer runs out for the first
authority, it enters what is called ‘‘candidate state’’, in which
it votes for itself to become a leader and broadcasts requests to
other authorities to vote for it. If the majority positively voted
for the candidate node, then it becomes the leader of that term.
Once a leader is elected, its role is to replicate the transaction
logs across all other nodes. The logs reach finality and get committed by the leader if and only if they have reached the majority of the nodes; once this happens, the leader commits the log and asks the rest to do the same via a broadcast message. If the majority of the nodes are offline, the leader will not be able to commit the logs, and there is a high risk of losing the log if the leader and the remaining nodes go offline [19].
TABLE 3 provides a comparison between the permissionless and permissioned consensus algorithms presented in this paper. For the proposed solution, all network participants should be known and trusted. The selected consensus algorithm should allow an authorized participant to act as an administrator for the overall platform, since the FMA regulates the stock market and its role must be preserved. Moreover, the network should be Byzantine fault tolerant in case some of the network validators act maliciously. The PoA consensus algorithm satisfies these requirements. For our proof-of-concept implementation, the Geth implementation of PoA, named Clique, is adopted. Clique has a rotation schema for leader election, such that in each round, the leader of the round announces the block and it gets added to the blockchain by the receiving nodes [17].
**IV. RELATED WORK**
According to [20], implementations of blockchain in the
financial sector focus on four main areas, which are improving the transaction processing time, having sustainability
for banking and financial transactions, improving financial
data privacy and security, and automating financial contracts.
For transaction time improvement, the authors highlight that
the current banking systems rely on centralized databases
that require several days to achieve financial settlements for
the executed transactions [21]. The solution that blockchain
offers to solve this problem, according to the authors, is to
automate financial transaction settlement by setting up a
single account structure that will be used by financial institutions, as well as speeding up international fund transfers [21].
Sustainability is another problem that banks and financial institutions suffer from, especially since the bankruptcy of one bank can have a strong impact on the overall financial sector. The authors in [24] argue that implementing blockchain can lead the financial sector to achieve stability, especially when the decentralized ledger of money is independent of the financial regulations of countries and regions. Financial data security and privacy currently face many challenges due to the nature of the centralized data storage that banking and financial institutions rely on [22]. This can lead to data breaches that reveal not only financial data but also the personal and demographic data stored in the same centralized storage. In addition, banking transactions do not provide sufficient anonymity or extend the freedom of privacy that clients would like to have. Blockchain addresses these two issues by decentralizing the data and ensuring they are securely stored in the participating nodes, which adds high complexity to unauthorized attempts to alter or access the stored data. Each participant is authorized to perform changes according to the assigned role while maintaining anonymity for the transactions performed [23]. Finally, the authors
in [20] highlight that blockchain automates the execution of financial contracts by eliminating the need for a third party in the middle and allowing a financial transaction to be triggered directly between the two involved parties. To illustrate this feature, consider a money transfer, which usually takes a couple of days, especially in developing countries, as some controls and regulations need to be verified. When such a transaction is implemented using a financial contract on blockchain, it no longer requires third-party intervention as long as both parties perform their roles as defined in the contract. The financial transaction will be securely executed, and the money will be transferred within minutes [24].
We have analyzed two particular implementations that bear a close similarity to our idea. The first paper discusses the concept of decentralizing the stock market platform by using blockchain technology, while the second utilizes the concept of a smart contract on blockchain to build a bidding platform.
_A. DECENTRALIZING BUCHAREST STOCK MARKET PLATFORM_
In [3], the authors discuss the limitations of the traditional stock market and propose a solution that implements the trading platform on blockchain. Their research objective is to showcase how transaction fees can be reduced if blockchain technology is used as a trading platform instead of the traditional stock exchange platform that the Bucharest stock market uses. To test this, the authors implemented two systems, the first being modeled according to the Bucharest centralized stock exchange platform, shown in FIGURE 4, in which all orders entered by different brokers are gathered in a single system. The second system implemented is a decentralized blockchain-based solution that uses a smart contract to simulate the stock exchange platform. This design does not require brokers to enter orders; instead, investors can interact directly with the system and enter the orders themselves. By doing so, the fees paid to brokers are eliminated, and the fees the investors pay per transaction in the proposed blockchain-based trading system are overall less than the fees paid in the traditional stock exchange platform. The authors conclude that the fees in the decentralized system increase as the number of orders in the order book increases, since the transaction complexity becomes higher. Therefore, the decentralized system gives a better transaction fee than the centralized system only when the order book is partially full.
**FIGURE 4. Bucharest centralized stock exchange platform [3].**
_B. BIDDING SYSTEM BASED ON BLOCKCHAIN SMART CONTRACT_
An e-auction system has several elements that are in common
with the stock exchange platform. It consists of bidders, auctioneers, and third-party intermediaries who provide the platform that connects bidders to auctioneers and allows posting
products, checking the highest bidding price, and declaring
the winner with the highest bidding price. The authors in [10] suggest building an e-auction system without intermediaries between the sellers and buyers by using an Ethereum-based smart contract. Their objective is to solve two main problems of current e-auction systems: the limited level of security offered by the online platform and the high transaction fees users have to pay. The authors claim that their blockchain-based solution addresses the first problem by ensuring that the security of data shared among the different users of the system is appropriately managed and preserved. The second problem is addressed by reducing the transaction cost through removing any intermediary from the system. FIGURE 5 shows a flowchart representing the bidding process from start to finish. First, the seller posts the bidding information and the starting price. The bidders submit their prices in sealed envelopes, and when these are received by the auctioneer, the highest sealed-envelope price is announced as the current highest price. If no price higher than the current highest price is received, or the ending time is due, it is announced as the winning price, and the auctioneer can send the product and receive the money from the winning bidder [7]. By applying the proposed blockchain-based e-auction platform as an experiment, the authors conclude that the smart contract can enforce confidentiality, non-repudiation, and prevention of unauthorized alteration of entered bidding orders.
**FIGURE 5.** Flowchart showcasing the bidding process [10].
TABLE 4 showcases the main differences between the proposed platform and the already discussed related work. For
instance, our main research objective is to improve the performance of the system in terms of availability, security, and
transparency by adopting a consortium blockchain based on
Ethereum with the PoA consensus algorithm. We keep all the key participants of the traditional stock exchange platform as part of the proposed platform. We do not introduce
major changes that conflict with the roles and regulations
imposed by the government. For the case of the decentralized
Bucharest stock exchange, the research objective is to reduce
the transaction fees paid to the brokers by making significant
changes to the existing architecture and eliminating the broker completely from the platform. That proposed architecture is based on a permissionless Ethereum network that uses the PoW consensus algorithm. The platform introduces new fees that are less than the fees paid to the brokers in the traditional stock exchange in cases with a partially full order book. The objective of the second related work is to build a secure e-auction system without intermediaries. The authors used a permissionless Ethereum network with a PoW consensus algorithm and made changes to the traditional bidding platform by removing the intermediaries managing it. Both related works rely on a public blockchain network, which has poor performance and cannot handle the throughput and latency required by the current stock exchange.
**TABLE 4. Comparing our paper with related works.**
**V. PROPOSED BLOCKCHAIN-BASED STOCK EXCHANGE FRAMEWORK**
In this section, we describe our proposed decentralized stock
exchange platform that is based on a consortium blockchain
between financial and organizational entities that are already
part of the traditional stock market. We first give an overview
of the system architecture, define the roles and responsibilities of the participating entities, and finally present the smart
contract holding and managing the stock exchange trading
logic.
_A. SYSTEM ARCHITECTURE_
As shown in FIGURE 6, our system is composed of a consortium blockchain network, a smart contract, and financial and organizational entities. The consortium blockchain facilitates transactions between the different participating entities and manages the StockExchange smart contract that handles the stock trading logic. We select a permissioned blockchain because the entities are all known and because the private version of blockchain is more effective in terms of transaction throughput and latency. The consortium network is composed of a set of authorized participants (validators), which are the CSD, FMA, Broker, Government, and SE. Each of them has specific roles and responsibilities as in the traditional stock exchange platform. The StockExchange smart contract defines all the trading logic as well as the different functions that can be performed by the participating entities, such as creating a broker, creating a new investor, assigning shares to an investor, etc. Each participating entity has a private key along with the associated address and public key, which are used for authentication. Therefore, the smart contract ensures that each entity is only allowed to trigger functions according to its associated privileges. TABLE 5 summarizes the StockExchange smart contract functionalities and the entities authorized to execute each of them. The detailed description of the role of each participating entity is given in the following (an illustrative authorization sketch follows the list):
- FMA: it is responsible for creating and maintaining the
smart contract as well as defining all the trading logic
and functionalities. It also monitors the trading process
and ensures that all defined rules and regulations are
properly maintained. It interacts with the smart contract
to create and maintain companies with shares and to
create and maintain brokers.
**FIGURE 6.** System architecture.
**TABLE 5. StockExchange smart contract authorizations.**
- CSD: is responsible for creating and maintaining
investor accounts. It interacts with the smart contract to
create investor accounts and assign shares to them.
- Broker: it takes the role of trading on behalf of
investors. It interacts with the smart contract by associating investors to it and entering buy and sell orders
for the associated investors. Brokers are also authorized
to assign shares from CSD investor accounts to the
investor trading account managed by the broker. Each
investor can have multiple trading accounts managed
by a different broker for each, while each investor must
have a single unique investor account (NIN).
- Government: it validates the investor data sent by CSD.
- SE: it is responsible for matching orders queued in the order book and generating trades.
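As an illustration of how the per-entity authorizations in TABLE 5 might be enforced on-chain, the sketch below uses Solidity modifiers, assuming each entity is identified by a single Ethereum address. This is a simplified stand-in for the paper's authorization logic, not its actual code.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Minimal sketch of entity-based access control for the StockExchange logic.
contract RoleSketch {
    address public immutable fma; // Financial Market Authority (deployer)
    address public immutable csd; // Central Securities Depository
    mapping(address => bool) public isBroker;

    constructor(address csdAddress) {
        fma = msg.sender;
        csd = csdAddress;
    }

    modifier onlyFMA()    { require(msg.sender == fma, "not FMA"); _; }
    modifier onlyCSD()    { require(msg.sender == csd, "not CSD"); _; }
    modifier onlyBroker() { require(isBroker[msg.sender], "not a broker"); _; }

    // Mirrors the addBroker authorization: only FMA may register brokers.
    function addBroker(address broker) external onlyFMA {
        isBroker[broker] = true;
    }
}
```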
_B. StockExchange SMART CONTRACT_
We define a smart contract called ‘‘StockExchange’’ to
include the business logic and the authorization roles of each
participating entity. The smart contract manages the buy and
sell orders and generates respective trades whenever a buy
order offers a price equal to or greater than the sell order's price. The different steps of trading, as implemented in the smart contract, are detailed below (a sketch follows the list):
1) Let Q_B be a queue of all buy orders sorted in ascending order of price, such that i is the index of the maximum element in the queue, denoted B^P. Let B^Q denote the quantity of shares in B^P.
2) Let Q_A be a queue of all sell orders sorted in descending order of price, such that y is the index of the minimum element in the queue, denoted A^P. Let A^Q denote the quantity of shares in A^P.
3) We assume that all entered orders are of the limit or market type and that partial matches are possible whenever B^Q ≠ A^Q.
4) If B^P ≥ A^P and B^Q = A^Q, both orders are fully matched and a trade is generated. Both indices i and y are decremented by 1.
5) If B^P ≥ A^P and B^Q > A^Q, B^P is partially matched with A^P and a trade is generated. B^Q is updated such that B^Q = B^Q − A^Q, and the index y is decremented by 1.
6) If B^P ≥ A^P and B^Q < A^Q, A^P is partially matched with B^P and a trade is generated. A^Q is updated such that A^Q = A^Q − B^Q, and the index i is decremented by 1.
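A minimal Solidity sketch of this matching rule is shown below. It keeps the best bid and best ask at the tails of two sorted arrays, mirroring the i and y indices above; the contract and names are illustrative, not the authors' implementation.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Illustrative sketch of steps 4-6: match the best bid (price bP, quantity
// bQ) against the best ask (price aP, quantity aQ) until no orders cross.
contract MatchSketch {
    struct Entry { uint256 price; uint256 qty; }

    Entry[] public buys;  // ascending by price: best bid at the tail (index i)
    Entry[] public sells; // descending by price: best ask at the tail (index y)

    event Trade(uint256 price, uint256 qty);

    function doMatch() public {
        while (buys.length > 0 && sells.length > 0) {
            Entry storage b = buys[buys.length - 1];
            Entry storage a = sells[sells.length - 1];
            if (b.price < a.price) break;    // no crossing orders remain

            if (b.qty == a.qty) {            // step 4: full match
                emit Trade(a.price, a.qty);
                buys.pop();
                sells.pop();
            } else if (b.qty > a.qty) {      // step 5: ask fully filled
                emit Trade(a.price, a.qty);
                b.qty -= a.qty;
                sells.pop();
            } else {                         // step 6: bid fully filled
                emit Trade(a.price, b.qty);
                a.qty -= b.qty;
                buys.pop();
            }
        }
    }
}
```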
FIGURE 7 shows the sequence diagram between the participating entities and the smart contract, including all the steps required before generating trades and matching buy/sell orders. A detailed description of each step in the diagram is given in the following (the call order is summarized after the list):
1) FMA defines the list of all brokers that the stock market
consists of by calling the ‘‘addBroker’’ function. The
system replies with a message showing the successful
creation of the broker.
2) FMA defines the companies that are listed in the stock
market along with their details such as number of shares
they consist of and their prices. The function ‘‘addCompany’’ is used for this purpose.
3) CSD validates the investor's data integrity by sending it
to the government. The government replies to the smart
contract to update the investor validation status.
4) CSD assigns to each validated investor a new
investor account number ‘‘NIN’’ by using the function
‘‘addNin’’.
5) The broker associates an investor to its account
using the function ‘‘AssociateBrokerToInvestor’’. The
smart contract then validates by checking the NIN
Account subsystem to ensure that the NIN exists. If it
does, the NIN gets associated to the broker account
successfully.
6) The broker assigns shares that are stored in the
investor’s NIN account maintained by CSD to the trading account maintained by the broker, by calling the
function ‘‘AssignShareToNin’’.
7) Buy orders are entered by the broker into the smart
contract. Once these orders are entered, the ‘‘StockExchange’’ subsystem logs and stores each order in a sorted queue and tries to match it against the existing sell orders pending in the sell queue. If a successful match is generated, the system replies to the broker that successful trades have been generated for the entered orders. If no match can be generated, the broker is informed that the orders have successfully entered the system.
8) This step is similar to step 7, with brokers entering sell orders into the smart contract. If a successful match is generated, the broker is informed about it; otherwise, the broker is informed that the orders have successfully entered the system.
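Putting the steps together, the sequence diagram corresponds to the following call order against the StockExchange contract. The parameter lists here are illustrative reconstructions from the function descriptions in Section VI-A, not the deployed signatures.

```solidity
// Illustrative call order for the sequence above (Solidity-style pseudocode):
//
// 1. FMA:    addBroker(name, symbol, maxSpend);
// 2. FMA:    addCompany(name, symbol, totalShares, pricePerShare);
// 3. CSD:    ValidateNIN(investorData);     // answered by the Government
// 4. CSD:    addNin(investorData);          // creates the investor account
// 5. Broker: AssociateBrokerToInvestor(brokerName, brokerSymbol, investor, nin);
// 6. Broker: AssignShareToNin(nin, shares); // funds the trading account
// 7. Broker: buyShares(symbol, qty, price, nin);  // triggers DoMatch
// 8. Broker: sellShares(symbol, qty, price, nin); // triggers DoMatch
```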
_C. SECURITY AND SYSTEM EFFICIENCY ANALYSIS_
The proposed blockchain-based stock exchange architecture ensures the following security and system efficiency properties:
1) Transparency: the level of transparency provided by using blockchain guarantees that all transactions and data maintained by the system are visible to the authorized participants and cannot be manipulated; any change requires consensus and commitment from all network participants before it gets validated. In contrast, the traditional stock exchange suffers from an insufficient level of transparency, as each party has its own system and can hide or manipulate the data before sharing it with the other participants.
2) High availability: the proposed architecture addresses
the single point of failure by ensuring high availability
through decentralizing the data across multiple participants. The smart contract can still be executed even
if some nodes were disconnected from the network.
Contrary to the traditional stock market, if any of the
system participants is unavailable, the whole market is
affected.
3) Network efficiency: in the stock exchange, the quality of network connectivity has a critical impact on
investors’ profits. For instance, an order sent by an
investor through his/her associated broker can be
delayed by the network if the broker has connectivity
issues, or it is physically located far from the SE. Orders
that were entered later by other brokers, with better
network connectivity or located physically closer to the
SE, will be executed first. This results in a financial loss
to the investor despite entering the order first and can
cause a lack of fairness and trust in the overall platform.
The blockchain network provides better connection utilization between the different participants since nodes
are distributed in different physical locations.
**FIGURE 7.** Sequence diagram between participating entities and the StockExchange smart contract.
The node physically located closest to the users interacting with
the smart contract will receive the transactions and
broadcast them to the remaining nodes in the network.
4) Consistency: since the ledger is shared across different
participants, they all have the same version of the data,
and any change that happens in one node will be immediately reflected in the ledgers of the other nodes. This
solves the issue of having conflicting data that are not
synchronized across the participating systems as it is in
the traditional stock exchange platform. For instance,
if an investor updates his/her personal data directly with CSD without updating it in the broker system too, a delay in authenticating transactions occurs, thus impacting the investor's profit.
5) Cost efficiency: in our proposed architecture, contrary to the traditional stock exchange, all the participating entities use the same common software and platform, which consists of an Ethereum smart contract. This solution architecture is much simpler and more cost-effective, as it considerably decreases the overall system complexity and the cost of maintenance and technical support. In addition, the proposed architecture is highly available and does not require a separate disaster recovery environment. This saves a high cost compared to the traditional stock market, where each participating entity needs to have a specific disaster recovery site.
6) Flexible configuration: the proposed architecture provides more flexibility and scalability than the traditional stock exchange platform when it comes to adjusting functionalities and introducing changes to the trading logic. Since the proposed architecture centers on the StockExchange smart contract, new and existing functionalities, as well as authorization, can all be managed in one place. The smart contract can then be shared in the network without requiring participants to make changes to hardware and storage, which makes it much easier to adopt.
7) Smart contract security: in order to design secure smart contracts, the authors in [24] and [25] recommend a set of analysis tools to identify security issues and vulnerabilities in smart contract code. Among the best-known analysis tools, we selected SmartCheck [24] to assess the proposed ‘‘StockExchange’’ smart contract. SmartCheck allowed us to identify multiple security-related issues and optimize some functions in our initial design. Such issues include extra gas consumption due to the use of multiple loops and bad array manipulation, which, if not appropriately addressed, can lead to a storage overlap attack in which data collide with other data in storage. Moreover, the tool provided multiple recommendations, such as upgrading the Solidity code to the latest version and being explicit in the declaration of public and private modifiers.
**VI. PERFORMANCE EVALUATION**
In this section, we evaluate the performance of our proposed blockchain-based stock exchange platform in terms of transaction throughput and latency to showcase its capability to handle the transaction load of a current stock market. We validate our results against the Singapore Exchange, which is one of the emerging markets offering a diversity of listed securities for trading. For this purpose, we developed a testing framework that consists of three main modules: a network module, a transaction generation and listening module, and a performance evaluation module. The description of each module is given below:
1) Network module: This module is used to create the
consortium blockchain test network that hosts our
defined ‘‘StockExchange’’ smart contract. It consists
of entities that represent the stock exchange participants, which are SE, FMA, Brokers, government,
and CSD. These entities are represented as Ethereum
nodes by using Docker container technology, where
each node runs the Geth Ethereum client. The selected
consensus algorithm is PoA (see Section III for more details).
2) Transaction generation and listening module: This
module is implemented using a JavaScript API that
serves as a generator of transaction workload and listens to the blockchain for block confirmation events.
By continuously listening to the network, this module
records information such as block number, validation
time, and the number of transactions per block. The
transaction workload consists of buy and sell orders,
and the total number of generated transactions at each
round of testing is configurable. To ensure that each
pair of buy and sell orders generates a trade, we generate for each buy order a corresponding sell order. The
workload generator and data listener module interact
with a special gateway node in the network that receives
the transactions and broadcasts them to the network.
3) Performance evaluation module: This module is used
to analyze the information stored in the data listener
module and measure the performance of each experiment by calculating the throughput and latency for
entered orders and generated trades. The throughput or
number of transactions per second (TPS) is calculated
as the total number of transactions (N ) divided by
the time it takes to validate them, which is the time
difference between the block with the first transaction
and the block containing the last transaction:

TPS = N / B_time,

where B_time is the difference in validation time between the last and the first blocks.
The latency measures the difference between the time a transaction is sent and the time it is validated in a block. It is calculated as the total time it takes to process X transactions divided by X.
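In symbols, writing $t_i^{\mathrm{send}}$ and $t_i^{\mathrm{val}}$ for the times at which transaction $i$ is sent and validated in a block, the two metrics reduce to the following (a restatement of the definitions above, not an additional model):

```latex
\[
\mathrm{TPS} = \frac{N}{B_{\mathrm{time}}}, \qquad
\mathrm{Latency} = \frac{1}{X} \sum_{i=1}^{X} \left( t_i^{\mathrm{val}} - t_i^{\mathrm{send}} \right),
\]
where $B_{\mathrm{time}}$ is the validation-time difference between the block
holding the last transaction and the block holding the first one.
```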
_A. EXPERIMENT_
We have implemented our proposed stock exchange platform,
which has been built on top of a consortium blockchain
network, using Solidity, the de-facto scripting language to
write smart contracts in Ethereum. The created smart contract
consists of the following main functions:
1) addBroker: adds a new broker to the system by entering its name, its symbol, and the maximum amount of
money it is allowed to spend buying shares in a single
trading session.
2) addCompany: a new company is added after entering
its name, symbol, its total number of shares, and the
price per share.
3) ValidateNIN: investor’s data received by CSD is sent
to the government for validation. This data consists of
the investor name, age, nationality, and ID number. The
government responds in the form of true or false value,
which CSD uses as a condition to either proceed or
cancel the creation of the new NIN account.
4) addNin: for each validated investor, a unique investor
number is assigned. This investor number is associated
with the investor’s personal data, including the total
number of shares owned by the investor.
5) AssociateBrokerToInvestor: it assigns a broker to
an investor by entering the broker’s name, symbol,
investor name, and NIN.
6) AssignShareToNin: shares are assigned to a given
NIN and the total number of shares in the NIN is
updated.
7) buyShares: a buy order that has the company’s symbol,
number of shares, price, and NIN enters a queue of
buy orders. For each new buy order, the queue is sorted
such that the order with the highest price is placed first,
followed by the rest in descending order.
8) sellShares: a sell order that has the company’s symbol,
number of shares, price, and NIN enters a queue of sell
orders. For each sell order, the queue gets sorted such that the order with the lowest price is placed first, followed by the rest in ascending order.
9) DoMatch: this function is called as part of each
‘‘buyShares’’ and ‘‘sellShares’’ functions. It takes the
first item in the buy orders queue and compares it with
the first item in the sell orders queue. If the price of
the buy order is greater than or equal to the price of the sell order, a trade is generated. The matched orders are removed from the queues, the queues are sorted again, and the NIN accounts of the buyer and seller are updated accordingly.
**FIGURE 8. Throughput vs sending rate (tx/sec).**
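For illustration, one simple way buyShares could keep its queue ordered is an insertion pass from the tail, sketched below; the direction of the sort is a representational choice, and this is not the authors' code (whose loop and array handling SmartCheck flagged for gas costs, as noted in Section V-C).

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Sketch: insert a new buy order while keeping the queue sorted by price,
// with the highest bid at the tail so it is matched first.
contract QueueSketch {
    struct Entry { uint256 price; uint256 qty; }

    Entry[] public buys;

    function insertBuy(uint256 price, uint256 qty) public {
        buys.push(Entry(price, qty));
        uint256 j = buys.length - 1;
        // One insertion-sort pass: swap the new order toward the front
        // until ascending price order is restored.
        while (j > 0 && buys[j - 1].price > buys[j].price) {
            Entry memory tmp = buys[j - 1];
            buys[j - 1] = buys[j];
            buys[j] = tmp;
            j--;
        }
    }
}
```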
Several experiments have been conducted to measure the
performance of our stock exchange trading platform in terms of throughput and latency, in which we adjusted the workload
and network size for every round of testing. Six different
workloads have been used in the form of the sending rate of
transactions per second, which are: 100, 200, 300, 350, 400,
and 450 tx/sec. The network size has also been adjusted such
that the blockchain consisted of 1, 5, 10, and 20 validators
for each test scenario. The time to construct two consecutive
blocks has been fixed to 2 seconds, and the total number of
transactions has also been fixed to 10,000 transactions where
5,000 represent buy orders, and the remaining 5,000 transactions are sell orders.
We categorized our test cases into the following workload scenarios:
1) ‘‘With Trades’’: in this scenario, orders are entered
such that each pair of buy and sell orders generates a trade. It requires high computational power as
the entered orders trigger the doMatch function that
requires removing the matched orders from the queue
and sorting the queues again as well as updating the
buyer and seller NIN accounts accordingly.
2) ‘‘Without Trades’’: in this scenario, the buy and sell orders are not matched, and hence, no trades are generated. In terms of computational needs, this scenario yields the best throughput as it skips the doMatch work of sorting the queues and updating investor accounts.
The experiments are conducted on a workstation machine with a 64-core Intel(R) Xeon(R) Gold 6130 CPU at 2.10 GHz and 256 GB of RAM, running Ubuntu 18.04.2.
FIGURE 8 illustrates the measured throughput under different sending rates and numbers of validators. In the case of a
single validator node shown in FIGURE 8a, the throughput
is very close to the sending rate up to 350 tx/sec. This is
also valid in scenarios with 5 and 10 validators as shown
in FIGURE 8b and FIGURE 8c, respectively. However,
when the number of validators increases, the throughput
gets considerably affected, as shown in FIGURE 8d with
20 validating nodes. This is due to the limited available
computation power, as all the nodes in different scenarios
share the same workstation machine. To emphasize the effect of computation power on the throughput, TABLE 6 shows the average throughput for transactions with and without trades
**TABLE 6. Average throughput (with and without trades) for different network sizes.**
**TABLE 7. Trading data for Singapore exchange for the month of**
April 2020.
**FIGURE 9. Throughput at 350 tx/sec Vs. number of validators.**
**FIGURE 10. Latency Vs number of validators.**
for different network sizes. Our findings show that our system can support networks of up to 10 validators and transaction rates of up to 350 tx/sec. We plot in FIGURE 9 the result of the experiment when considering the two previously defined workload scenarios and their average value at a transaction rate of 350 tx/sec. The worst throughput is observed for the first workload scenario (with trades), as it requires more computation resources to complete the trades. For a network with up to 10 validating nodes, the average throughput is about 311.8 tx/sec, which is equivalent to 89% of the optimal throughput, i.e., the ratio of the average throughput value (311.8 tx/sec) to the optimal throughput value (350 tx/sec).
FIGURE 10 illustrates the effect of the different sending rates and numbers of validators on the average transaction latency. The results show that the latency is inversely proportional to the throughput. For a network with up to 10 validating nodes and a sending rate up to 350 tx/sec, the average latency is about 5.5 seconds. Considering the block time of 2 seconds and the time needed to propagate each block through the network, this can be considered a reasonable delay. However, the latency increases significantly for large network sizes and high sending rates, where it can reach 40 seconds. This can be attributed to the following two reasons:
1) The higher the sending rate, the larger the block size,
and hence, it will require more time to propagate the
block to all the nodes in the network.
2) The computational resources play a major role in the network’s ability to handle high sending rates. Each transaction needs to go through several steps, such as validation, propagation to the network, execution, and inclusion in a new block that is then propagated again to the network to be executed by the other nodes. These steps require sufficient computational power to handle high sending rates.
The obtained results show that our proposed trading platform can reach a transaction throughput of about 311.8 tx/sec. Analyzing the trading data obtained from the Singapore Stock Exchange for the month of April 2020 [26], TABLE 7 shows a total of 10,285,596 trades over 21 trading days with 7 trading hours each, i.e., about 489,790 trades per day. Even if all of a day’s trades had to be processed by the platform within a two-hour window, this would amount to about 244,895 trades per hour, or roughly 68.03 trades per second. Since each generated trade consists of a buy order and a sell order matched together, plus the resulting trade record, the estimated total number of generated transactions per second is 3 times the number of trades per second, which is equivalent to about 204.08 tx/sec. Our proposed platform can therefore easily meet the requirements of this market, even with only the computational resources available during the experiment. We believe that higher performance could be achieved if more computational resources were available during the experimental evaluation.
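The back-of-the-envelope arithmetic above can be reproduced in a few lines; the module and function names below are ours, not part of the platform.

```erlang
%% Capacity sketch (ours): required transaction rate to absorb one month of
%% SGX trading (TABLE 7) if each day's trades were compressed into 2 hours.
-module(capacity_sketch).
-export([required_tx_rate/0]).

required_tx_rate() ->
    TotalTrades   = 10285596,              % trades on SGX in April 2020
    TradesPerDay  = TotalTrades / 21,      % 21 trading days -> ~489,790
    TradesPerHour = TradesPerDay / 2,      % 2-hour processing window
    TradesPerSec  = TradesPerHour / 3600,  % ~68.03 trades/sec
    3 * TradesPerSec.                      % 3 tx per trade -> ~204.08 tx/sec
```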
**VII. CONCLUSION**
In this paper, we have presented a new blockchain-based
architecture for a fully decentralized stock market platform.
Our architecture is based on Ethereum smart contract that
is implemented on a consortium and permissioned network.
To be aligned with the regulations of the stock market,
we chose the validating nodes to be the financial and governmental organizations that are already involved in the
traditional stock exchange platform. This new architecture addresses the limitations of the traditional stock exchange platform: the single point of failure in the participating systems, by replicating the data and the smart contract across all participating nodes; the complexity and inefficiency of data management, by providing a shared ledger that can easily be updated and maintained; the limited level of transparency, since all transactions are now visible; the limited daily window for accessing the platform’s data, since the blockchain can now be monitored and accessed throughout the day; and the slow settlement, by offering financial and cash settlement much faster than the three days needed after the trading session. To evaluate the performance of our system, we conducted several experiments measuring throughput and latency under different workloads and network sizes, and found that the achieved performance can meet the requirements of a stock market platform for network sizes of up to 10 validators and sending rates of up to 350 tx/sec. For larger workloads or network sizes, however, the performance declines significantly due to the limited computational resources used in the experiment. Since the proposed solution will run on a consortium permissioned network, we believe that the participating entities will be capable of providing the computational resources needed to meet the latency and throughput requirements of the stock exchange. We plan to conduct further studies to address privacy-related concerns and to incorporate encryption into the shared ledger such that only authorized participants can see their relevant transaction data. Our future work will also cover further enhancements to the proposed smart contract; for instance, the possibility of introducing changes to an already deployed smart contract without disturbing the overall stock exchange platform.
**REFERENCES**
[1] B. Comincioli, ‘‘The stock market as a leading indicator: An application
of Granger causality,’’ Univ. Avenue Undergraduate J. Econ., vol. 1, no. 1,
pp. 1–14, 1996.
[2] M. S. Nazir, M. Nawaz, and U. Gilani, ‘‘Relationship between economic
growth and stock market development,’’ Afr. J. Bus. Manage., vol. 4,
pp. 3473–3479, Dec. 2010.
[3] C. Pop, C. Pop, A. Marcel, A. Vesa, T. Petrican, T. Cioara, I. Anghel,
and I. Salomie, ‘‘Decentralizing the stock exchange using blockchain an
ethereum-based implementation of the bucharest stock exchange,’’ in Proc.
_IEEE 14th Int. Conf. Intell. Comput. Commun. Process. (ICCP), Sep. 2018,_
pp. 459–466.
[4] Nasdaq, Inc. (2019). Trading and Matching Technology Provides Flexible, Multi-Asset Trading Capabilities for Marketplaces of all Sizes. [Online]. Available: https://www.nasdaq.com/solutions/tradingand-matching-technology
[5] L. Lee, ‘‘New kids on the blockchain: How Bitcoin’s technology could
reinvent the stock market,’’ SSRN Electron. J., vol. 12, no. 2, pp. 81–132,
2016.
[6] V. V. Bhandarkar, A. A. Bhandarkar, and A. Shiva, ‘‘Digital stocks using
blockchain technology the possible future of stocks?’’ Int. J. Manage.,
vol. 10, no. 3, pp. 44–49, Jun. 2019.
[7] T. Ahram, A. Sargolzaei, S. Sargolzaei, J. Daniels, and B. Amaba,
‘‘Blockchain technology innovations,’’ in Proc. IEEE Technol. Eng. Man_age. Conf. (TEMSCON), Jun. 2017, pp. 137–141._
[8] M. Samaniego, U. Jamsrandorj, and R. Deters, ‘‘Blockchain as a
service for IoT,’’ in Proc. IEEE Int. Conf. Internet Things (iThings)
_IEEE_ _Green_ _Comput._ _Commun._ _(GreenCom)_ _IEEE_ _Cyber,_ _Phys._
_Social Comput. (CPSCom) IEEE Smart Data (SmartData), Dec. 2016,_
pp. 433–436.
[9] T. Lundqvist, A. de Blanche, and H. R. H. Andersson, ‘‘Thing-to-thing
electricity micro payments using blockchain technology,’’ in Proc. Global
_Internet Things Summit (GIoTS), Jun. 2017, pp. 1–6._
[10] Y.-H. Chen, S.-H. Chen, and I.-C. Lin, ‘‘Blockchain based smart contract
for bidding system,’’ in Proc. IEEE Int. Conf. Appl. Syst. Invention (ICASI),
Apr. 2018, pp. 208–211.
[11] A. Dorri, S. S. Kanhere, and R. Jurdak, ‘‘Towards an optimized BlockChain
for IoT,’’ in Proc. 2nd Int. Conf. Internet-of-Things Design Implement.,
Apr. 2017, pp. 173–178.
[12] M. Conoscenti, A. Vetro, and J. C. De Martin, ‘‘Blockchain for
the Internet of Things: A systematic literature review,’’ in Proc.
_IEEE/ACS 13th Int. Conf. Comput. Syst. Appl. (AICCSA), Nov. 2016,_
pp. 1–6.
[13] S. Wang, L. Ouyang, Y. Yuan, X. Ni, X. Han, and F.-Y. Wang, ‘‘Blockchain-enabled smart contracts: Architecture, applications, and future trends,’’ IEEE Trans. Syst., Man, Cybern. Syst., vol. 49, no. 11, pp. 2266–2277, Nov. 2019.
[14] A. Kaushik, A. Choudhary, C. Ektare, D. Thomas, and S. Akram,
‘‘Blockchain—Literature survey,’’ in _Proc._ _2nd_ _IEEE_ _Int._ _Conf._
_Recent Trends Electron., Inf. Commun. Technol. (RTEICT), May 2017,_
pp. 2145–2148.
[15] H. Kuzuno and C. Karam, ‘‘Blockchain explorer: An analytical process and
investigation environment for bitcoin,’’ in Proc. APWG Symp. Electron.
_Crime Res. (eCrime), Apr. 2017, pp. 9–16._
[16] M. Salimitari and M. Chatterjee, ‘‘A survey on consensus protocols in
blockchain for IoT networks,’’ Sep. 2018, arXiv:1809.05613. [Online].
Available: https://arxiv.org/abs/1809.05613
[17] S. D. Angelis, L. Aniello, R. Baldoni, F. Lombardi, A. Margheri, and
V. Sassone, ‘‘Pbft vs proof-of-authority: Applying the cap theorem to
permissioned blockchain,’’ in Proc. Italian Conf. Cyber Secur., Jan. 2018,
p. 11. [Online]. Available: https://eprints.soton.ac.uk/415083/
[18] T. T. A. Dinh, R. Liu, M. Zhang, G. Chen, B. C. Ooi, and J. Wang,
‘‘Untangling blockchain: A data processing view of blockchain systems,’’ IEEE Trans. Knowl. Data Eng., vol. 30, no. 7, pp. 1366–1385,
Jul. 2018.
[19] D. Ongaro and J. Ousterhout, ‘‘In search of an understandable consensus
algorithm,’’ in Proc. USENIX Conf. USENIX Annu. Tech. Conf. (USENIX
_ATC). Berkeley, CA, USA: USENIX Association, 2014, pp. 305–320._
[Online]. Available: http://dl.acm.org/citation.cfm?id=2643634.2643666
[20] J. Jaoude and R. Saade, ‘‘Blockchain applications—Usage in different domains,’’ IEEE Access, vol. 7, pp. 45372–45373, 2019, doi:
[10.1109/ACCESS.2019.2902501.](http://dx.doi.org/10.1109/ACCESS.2019.2902501)
[21] G. William Peters and E. Panayi, ‘‘Understanding modern banking ledgers
through blockchain technologies: Future of transaction processing and
smart contracts on the Internet of money,’’ 2015, arXiv:1511.05740.
[Online]. Available: http://arxiv.org/abs/1511.05740
[22] Q. K. Nguyen, ‘‘Blockchain–A financial technology for future sustainable
development,’’ in Proc. 3rd Int. Conf. Green Technol. Sustain. Develop.
_(GTSD), Nov. 2016, pp. 51–54._
[23] S. Singh and N. Singh, ‘‘Blockchain: Future of financial and cyber
security,’’ in Proc. 2nd Int. Conf. Contemp. Comput. Informat. (IC3I),
Dec. 2016, pp. 463–467.
[24] S. Rouhani and R. Deters, ‘‘Security, performance, and applications of smart contracts: A systematic survey,’’ IEEE Access, vol. 7,
pp. 50759–50779, 2019.
[25] M. Demir, M. Alalfi, O. Turetken, and A. Ferworn, ‘‘Security smells in
smart contracts,’’ in Proc. IEEE 19th Int. Conf. Softw. Qual., Rel. Secur.
_Companion (QRS-C), Jul. 2019, pp. 442–449._
[26] SGX. (Apr. 2020). Market Statistics Report. [Online]. Available:
https://www2.sgx.com/research-education/historical-data/marketstatistics
HAMED AL-SHAIBANI received the B.Sc.
degree (Hons.) in computer science from Qatar
University, Doha, Qatar, in 2010, and the M.Sc.
degree in strategic business unit management from
HEC Paris, Doha, in 2016. He is currently pursuing
the Ph.D. degree in computer science and engineering with Hamad Bin Khalifa University, Doha.
His main research interests include blockchain,
cybersecurity, and networking.
NOUREDDINE LASLA (Member, IEEE) received
the B.Sc. degree from the University of Science
and Technology Houari Boumediene (USTHB),
in 2005, the M.Sc. degree from the Superior
Computing National School (ESI), in 2008, and
the Ph.D. degree from USTHB, in 2015, all in
computer science. He is currently a Postdoctoral
Research Fellow with the Division of Information
and Computing Technology, Hamad Bin Khalifa
University, Qatar, with expertise in distributed systems, network communication, and cybersecurity.
MOHAMED ABDALLAH (Senior Member,
IEEE) received the B.Sc. degree from Cairo University, in 1996, and the M.Sc. and Ph.D. degrees
from the University of Maryland at College Park,
in 2001 and 2006, respectively. From 2006 to
2016, he held academic and research positions at
Cairo University and Texas A&M University at
Qatar. He is currently a Founding Faculty Member
with the rank of Associate Professor with the
College of Science and Engineering, Hamad Bin
Khalifa University (HBKU). His current research interests include wireless
networks, wireless security, smart grids, optical wireless communication,
and blockchain applications for emerging networks. He has published more
than 150 journals and conferences and four book chapters, and co-invented
four patents. He was a recipient of the Research Fellow Excellence Award at
Texas A&M University at Qatar, in 2016, the Best Paper Award in multiple
IEEE conferences including the IEEE BlackSeaCom 2019, the IEEE First
Workshop on Smart Grid and Renewable Energy in 2015, and the Nortel
Networks Industrial Fellowship for five consecutive years, from 1999 to
2003. His professional activities include an Associate Editor of the IEEE
TRANSACTIONS ON COMMUNICATIONS and the IEEE OPEN ACCESS JOURNAL OF
COMMUNICATIONS, a Track Co-Chair of the IEEE VTC Fall 2019 conference,
a Technical Program Chair of the 10th International Conference on Cognitive
Radio Oriented Wireless Networks, and a Technical Program Committee
Member of several major IEEE conferences.
|
# Achlys: Towards a framework for distributed storage and generic computing applications for wireless IoT edge networks with Lasp on GRiSP
### Igor Kopestenski
ICTEAM Institute
Université catholique de Louvain
igor.kopestenski@uclouvain.be

### Peter Van Roy
ICTEAM Institute
Université catholique de Louvain
peter.vanroy@uclouvain.be
**_Abstract—Internet of Things (IoT) continues to grow exponentially, in number of devices and the amount of data they generate. Processing this data requires an exponential increase in computing power. For example, aggregation can be done directly at the edge. However, aggregation is very limited; ideally we would like to do more general computations at the edge. In this paper we propose a framework for doing general-purpose edge computing directly on sensor networks themselves, without requiring external connections to gateways or cloud. This is challenging because sensor networks have unreliable communication, unreliable nodes, and limited (if any) computing power and storage. How can we implement production-quality components directly on these networks? We need to bridge the gap between the unreliable, limited infrastructure and the stringent requirements of the components. To solve this problem we present Achlys, an edge computing framework that provides reliable storage, computation, and communication capabilities directly on wireless networks of IoT sensor nodes. Using Achlys, the sensor network is able to configure and manage itself directly, without external connectivity. Achlys combines the Lasp key/value store and the Partisan communication library. Lasp provides efficient decentralized storage based on the properties of CRDTs (Conflict-Free Replicated Data Types). Partisan provides efficient connectivity and broadcast based on hybrid gossip. Both Lasp and Partisan are specifically designed to be extremely resilient. They are able to continue working despite high node churn, frequent network partitions, and unreliable communication. Our first implementation of Achlys is on a network of GRiSP embedded system boards. We choose GRiSP as our first implementation platform because it implements high-level functionality, namely Erlang, directly on the bare hardware and because it directly supports Pmod sensors and wireless connectivity. We give some first results on using Achlys for building edge systems and we explain how we plan to evolve Achlys in the future. Achlys is a work in progress that is being done in the context of the LightKone European H2020 research project, and we are in the process of implementing and evaluating a proof-of-concept application in the area of precision agriculture._**
I. INTRODUCTION
The edge computing paradigm has been widely accepted as
an important concept for sustainability of future Cloud Service
Providers (CSPs) and Mobile Network Operators (MNOs) [1],
[2]. It is well acknowledged by both enterprise and academia
as a valid approach and is actively under research [3], [4], [5].
Newer and more performant infrastructures are continuously
elaborated both by CSPs and MNOs [6]. Concurrently, IoT devices are getting closer to being truly ubiquitous, i.e., closer to Mark Weiser’s idea of hundreds of wireless computing devices per person. This is already true in some scenarios; e.g., airplanes generate around 10 TB of data every 30 minutes. Such cases require very responsive and robust systems for sensor data processing and cannot rely on remote hosts for this, even if those hosts are close to the edge. Since the Internet
of Things is rapidly expanding and devices are becoming
more powerful, IoT applications are putting severe strain on
cloud providers. The edge computing paradigm is one way
to solve this problem by distributing the workload in a more
sustainable way across the whole network. With this paradigm,
computational and storage resources move closer to the edge
and IoT applications are able to preserve their QoS. Tasks that were previously done in the cloud are now delegated to intermediaries between datacenters and IoT edge networks.
Existing designs are enabling this by bridging edge networks
with intermediate gateways[7]. However, it is generally considered that the sensor and actuator networks themselves, such as
traditional Wireless Sensor Networks (WSNs), are too limited
and unreliable to do their own management. Thus even in
recent distributed sensor and IoT networks, a gateway node or
a cloud connection is necessary, which adds a single point of
failure and increases infrastructure complexity. If such a point
is unable to provide its service, network management becomes
impossible and sensor data cannot be retrieved anymore. And
if gateways need to be permanently available, even short
intervals of downtime can disrupt the entire system. A recent
survey has portrayed the full landscape of emerging paradigms that strive towards the edge and fog principles[8]. It describes a few designs that share common goals with Achlys, such as mist computing, and suggests that such architectures are best suited for systems that require autonomous behavior, distributed processing directly on IoT devices, little or no Internet connectivity, and privacy preservation.
_A. The Achlys framework_
This paper presents Achlys, a framework that directly
addresses the problem of general-purpose edge computing.
Achlys increases the resilience of sensor/actuator edge networks so that they are able to reliably execute application tasks
directly on the edge nodes themselves. Achlys provides reliable decentralized communication, storage, and computation
abilities, by leveraging CRDTs (Conflict-Free Replicated Data
Types) and hybrid gossip algorithms. This lowers cost, reduces
dependencies, and simplifies maintenance. Our system has no
single point of failure. Achlys consists of three parts: GRiSP
embedded system boards[1], a Lasp CRDT-based key/value
store, and the Partisan hybrid gossip-based communication
component. Experimental releases and source code are available at ikopest.me, and can be deployed on GRiSP boards or
run in an Erlang test shell.
Achlys adds a task model to Lasp, which allows applications
to be written by storing both their code and results directly
inside Lasp. In this way, applications are as resilient as Lasp
itself. The task model was first developed as part of a master’s
thesis [9]. Application code is replicated automatically by
Lasp on all nodes. From the developer’s viewpoint, Achlys
applications are written in a similar way as applications written
for transactional databases. The developer mindset is that the
Lasp database always contains correct data. Application tasks
can be executed at any time on any node by the task model.
On every node, the task model periodically reads tasks from
Lasp and executes them. An executing task reads data from
Lasp, computes the updated data using node-specific sensor
information, actuates node-specific actuators, and stores both
the updated data and an updated task in Lasp. Because of
the convergence properties of CRDTs, the same task can be
executed more than once on different nodes without affecting
correctness. This is a necessary condition for resilience, to
keep running despite node failures and node turnover.
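For instance, a task body might look like the following sketch; read_sensor/0 is a hypothetical helper and the ORSet-style {add, ...} update is our assumption about the Lasp API, but it shows why re-execution on several nodes is harmless:

```erlang
%% Idempotent task sketch (ours): fold a node-local sensor reading into a
%% replicated CRDT set. Executing this on several nodes, or more than once,
%% only adds elements that converge to the same set everywhere.
run_task(SetId) ->
    {ok, Reading} = read_sensor(),                       % hypothetical helper
    lasp:update(SetId, {add, {node(), Reading}}, self()).
```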
We choose GRiSP because it directly implements Erlang on
the bare hardware, which simplifies system development, and
because it directly supports Pmod sensors and actuators and
has built-in wireless connectivity. Computation and storage
abilities of GRiSP are limited, but adequate for many management tasks (in fact, with the current GRiSP infrastructure,
our system is comparable to state-of-the-art laptops released
just before the year 2000). Our current system is a prototype
that is able to run applications on networks of GRiSP boards.
With this system we are in the process of implementing
and evaluating a proof-of-concept application of Achlys to
precision agriculture, in collaboration with Gluk Advice BV,
as part of the LightKone European H2020 research project.
_B. Motivating example_
We motivate this work with an example related to the
precision agriculture use case we are developing. We envision
a scenario where a farmer has decided to equip his farms
with a Subsurface Drip Irrigation system. Although it is one of the most efficient precision irrigation technologies, it remains difficult to obtain sufficiently precise information about moisture levels so as to make optimal use of the very expensive irrigation infrastructure. Inadequate irrigation can be extremely detrimental to production levels, as well as to water supplies.

1grisp.org
We propose a Self-Sufficient Precision Agriculture Management System for Irrigation in order to allow the farmer to
efficiently irrigate his farms. In addition to productivity gains,
we intend to offer a solution that is near zeroconfig, i.e., it
is able to configure itself with no intervention needed by the
farmer. Finally, we want to provide a system that manages
itself and requires little to no maintenance. It is possible
to connect to the system, but this is only used for setting
policy and is not needed for day-to-day running. The system’s
management mechanisms are autonomous, independent of any
third-party providers.
Currently, the farmer’s irrigation system distributes water
across the entire zone that is equipped with tubes delivering
water to plantations when the farmer activates a pump. Since
moisture levels can vary significantly inside a single farm,
uniform irrigation can be detrimental as some parts will not
receive enough water while in other parts water is wasted and
irrigation is above the optimal levels. Our system is made of
a set of distributed edge nodes that sense moisture levels and
actuate on activation or deactivation of underground irrigation
tubes. The farm is divided into sectors whose moisture levels
are measured by an edge node, and irrigation is adjusted
accordingly. Our solution activates the main irrigation pump
and valves when necessary, and controls the water flow such
that once sufficient moisture is achieved, actuators shut down
the water flow of that sector while irrigation continues in
other sectors. Irrigation decisions are made by an online optimization algorithm that runs continuously on the edge nodes
themselves. The system thus provides completely autonomous
basic management, without the need for any kind of Internet
connection or computer. The system continuously optimizes
its operation to provide adequate irrigation with minimal
cost, and reconfigures itself whenever it detects a change in
configuration.
It is possible to change the irrigation policy by connecting
a PC node to the edge network. This way, we extend the
basic autonomous system with additional features that allow
the farmer to use cloud infrastructures such as storage or high
computational power when it is desired. For example, the edge
nodes could be asked to measure how much water their sector
has consumed based on temperatures during that period, and
compute some metrics locally that could be extracted from the
edge cluster and stored in the cloud at the end of each month.
Learning processes applied to that data could again allow
farmers to adjust the system behavior with their computer to
gain in efficiency.
_C. Common challenges_
In contrast to core cloud datacenters, edge networks are
composed of large numbers of heterogeneous devices. Due to
the highly dynamic and unpredictable nature of edge network
topologies, nodes can also be temporarily isolated from the
network. For these reasons, implementing desirable features
-----
such as reliable computation and storage directly on edge
IoT networks is particularly complex. Possible solutions are
proposed by industry actors such as 5G operators and CSPs,
and are generally based on Points of Presence (POPs) located
near client nodes and available through gateways. In addition,
edge applications must be implemented to manage the limited
resources of IoT nodes. Therefore efficient deployments at
the edge require an adequate load balancing mechanism that
ensures that there is no overload on any node.
An important goal of edge computing is to efficiently offload
the core of the network. Since there are several intermediary
entities between cloud datacenters and IoT devices such as
servers or smaller datacenters, optimal offloading would be
achieved if components of each layer are able to process some
requests autonomously and only rely on higher level nodes
when necessary. Therefore, edge nodes should also strive for
maximum independence, and take advantage of computational
and storage resources of IoT devices to complement the edge
POP solutions.
Moreover, if edge computing extends the traditional cloud
computing paradigm only up to peripheral POPs, it makes IoT
nodes highly dependent on connectivity and exposes single
points of failure. We suggest that the edge paradigm can be
implemented in a way that allows offloading of the core at any
level in the global network, even in the most peripheral parts.
If IoT devices are able to provide some basic functionality,
then higher level devices can rely on them to reduce their own
workload. The edge paradigm could therefore maximize the global offloading, since it would distribute the load over all the parts of the edge that are able to perform tasks such as computation or data storage reliably, according to their hardware resources.
However, despite being standardized to some extent[10], [11], a global production-ready end-to-end solution has not yet been deployed at scales coming close to those of traditional
cloud architectures. There are still many engineering and
practical considerations that must be addressed. In this regard,
the LightKone H2020 European Project aims at providing a
novel approach for general purpose computations at the edge.
LightKone directly addresses the added complexity due to
heterogeneity of IoT devices, which makes a general purpose
computation model at the edge very attractive.
_D. Structure of the article_
The remainder of this article is structured as follows. Section
II gives a brief overview of current edge computing state of
the art and some key enabling technologies for Achlys. Section
III gives a structural overview of Achlys followed by use case
examples in Section IV. Finally, Section V gives conclusions
about the current state and future evolution of Achlys.
II. CONTRIBUTIONS
In this section, we briefly discuss the contribution of the Achlys application framework in relation to the global edge computing paradigm.
_A. Fault tolerance_
Ensuring fault tolerance is an essential part for generic edge
computing[12]. In order to fit the vision of the LightKone
project, Achlys strives to guarantee this property. This implies
that Achlys must be able to continue functioning even in case
of system failure. These failures include, but are not limited to:
_• Network partition or intermittent communication: a node or a set of nodes that are isolated from the rest of the network must be able to keep running and to restore interoperability with other nodes once the network is repaired._
_• Hardware failure or offline operation: if a hardware component becomes dysfunctional or goes offline (to save power), the failure should be contained so that the application preserves as many features as possible._
_B. Task model_
Achlys provides a general purpose task model solution using
Erlang higher-order functions. Since Erlang functions are just
values (i.e., constants), they can be copied over a network
like any other constant. Using this ability, Achlys is able
to provide programmers with an API that allows them to
easily disseminate generic tasks in a cluster and be able to
retrieve the results if desired. As handling heterogeneity is a
highly complex task for smart services at the edge[13], [14],
[15], the Achlys prototype can also use this ability to make
the task model homogeneous, despite the heterogeneity of the infrastructure. This is compatible with a larger vision of the future Internet, in which physical components will be virtualized[16], [17], [18].
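As a tiny illustration of why this works, the snippet below (ours, not the Achlys API) sends an anonymous Erlang fun to a registered process on another node, which can then apply it like any locally defined function; the process and node names are made up:

```erlang
%% Erlang funs are terms, so they can be shipped to other nodes (assuming the
%% receiving node runs compatible code). Names below are hypothetical.
F = fun(Reading) -> Reading * 1.8 + 32 end,     % e.g., convert °C to °F
{task_runner, 'achlys@node2'} ! {run, F}.

%% On 'achlys@node2', a process registered as task_runner could do:
%% receive {run, Task} -> Task(SensorValue) end
```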
_C. Data consistency at the edge_
Conflict resolution is one of the central problems of distributed and decentralized applications, and is the subject of
extensive research[19], [20], [21], [22]. For example, when
multiple actors modify the same data entity across a network
partition, what should be done when the partition is repaired?
CRDTs (Conflict-Free Replicated Data Types) provide a solution to this problem. They are mathematically designed to
provide consistent replication with very weak synchronization
between replicas: only eventual replica-to-replica communication is needed. The Lasp library uses a wide range of CRDT
types for its data storage. To the developer, Lasp looks like a
replicated distributed key/value store that runs directly on the
IoT network. Section III-C gives more information on Lasp
and CRDTs.
III. OVERVIEW OF THE SYSTEM DESIGN
We now present a more detailed description of the Achlys
system, an Erlang[2] implementation of a framework that combines the power of δ-CRDTs[22] in the Lasp store[23], [24],
the Partisan communication component[25], and the GRiSP
Runtime software. It provides application developers a way to build resilient distributed edge IoT applications. We leverage hybrid gossiping and the use of CRDTs in order to propose a platform that is able to provide reliable services directly on edge nodes, which are able to function autonomously even when no gateway or Internet access is available.

2erlang.org
_A. GRiSP base_
The GRiSP base board is the embedded system used to
deploy Achlys networks in the current experimental phase. Its
main advantage over other hardware is that it has sufficient
resources[26] to run relevant Erlang applications, that is[3]:
_• Microcontroller: Atmel SAM V71, including:_
**– ARM Cortex M7 CPU clocked at 300 MHz**
**– 64 MB of SDRAM**
**– A MicroSD socket**
_• 802.11b/g/n wireless antenna_
_• SPI, GPIO, 1-Wire and UART interfaces_
_B. GRiSP_
Figure 1 depicts how the GRiSP architecture is designed.
The RTEMS[4] (RTOS-like set of libraries) component is embedded inside the Erlang VM and makes it truly run on bare metal. Achlys greatly benefits from this unique design, since
it allows a much more direct interaction with the GRiSP base
hardware. The GRiSP board directly supports Digilent Pmod[5]
modules. The latter offer a very wide range of sensing and
actuating features that can be accessed at application level in
Erlang. It is not necessary to write drivers in C in order to add
new hardware features to extend the range of functionalities.
Fig. 1. The GRiSP software stack compared to traditional designs. The
hardware layer for GRiSP refers to the GRiSP base board that is currently
available. Reprinted from GRiSP presentation by Adam Lindberg.
3 For full specifications please refer to grisp.org.
4rtems.org
5digilentinc.com
_C. Lasp_
Lasp is a key part of the Achlys framework for both storage
and computation. It provides both replicated data and computation, and guarantees that values will eventually converge on all
nodes[23], [24]. Since our GRiSP boards run Erlang directly
on bare metal, Lasp is a suitable option for consistency as
it runs directly in Erlang[24]. The Lasp port to GRiSP was
initiated in a master’s thesis at UCL[9]. Lasp supports various
CRDTs including sets, counters, and registers, all of which perform consistent conflict resolution. For instance, a GCounter is a counter type that can only be incremented; when all the operations performed on individual nodes converge, the entire cluster sees the same value, i.e., the sum of all increments of the counter across all nodes. This is a very simple example; more complex types such as sets and dictionaries are also part of Lasp, and allow elements to be both added and removed with the same convergence property. Achlys is thus
able to handle concurrent modifications and guarantee that
all nodes are eventually consistent, just as on cloud storage
services. This works even on nodes with limited resources
and intermittent connectivity. The only effect of node failures
and intermittent connectivity is to slow down the convergence.
CRDTs in Lasp are implemented using additional metadata
that allows each operation at each node to be taken into
consideration. In fact, the Lasp library uses an efficient implementation of CRDTs called delta-based dissemination mode,
which propagates only delta-mutators[27], [22], i.e., update
operations, instead of the full state, to achieve consistency.
This uses significantly less traffic between nodes than a naive
implementation that propagates the full state.
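To make the convergence argument concrete, here is a minimal state-based GCounter sketch (our own illustration, not Lasp's delta-based implementation): each node increments only its own entry, merging takes the pointwise maximum, and the counter's value is the sum of all entries, so merges are commutative, associative, and idempotent.

```erlang
%% GCounter sketch (ours, not Lasp's implementation). Requires OTP 24+ for
%% maps:merge_with/3. State: a map from node name to local increment count.
-module(gcounter_sketch).
-export([new/0, increment/2, value/1, merge/2]).

new() -> #{}.

%% Each node only ever increments its own entry.
increment(Node, Counter) ->
    maps:update_with(Node, fun(V) -> V + 1 end, 1, Counter).

%% The observed value is the sum of all per-node entries.
value(Counter) -> lists:sum(maps:values(Counter)).

%% Merging two replicas takes the pointwise maximum, so applying the same
%% merge twice (or in any order) yields the same result.
merge(C1, C2) ->
    maps:merge_with(fun(_Node, V1, V2) -> max(V1, V2) end, C1, C2).
```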
_D. Partisan_
Partisan[25] is the communication component used by Lasp
to disseminate information between nodes. It provides a highly
resilient alternative communication layer used instead of the
default distributed Erlang communication. This layer combines
the HyParView[28] membership and Plumtree[29] broadcast
algorithms, to ensure both connectivity and communication,
even in extremely dynamic and unreliable environments. Both
algorithms use the hybrid gossip approach. Hybrid gossip is a
sweet spot that combines the efficiency of standard distributed
algorithms (e.g., spanning tree broadcast in Plumtree) with the
resilience of gossip. For example, in the case of Plumtree, the
gossip algorithm is used to repair the spanning tree.
Partisan comes with a set of configurable peer service modules, each suited to different types of networks. Since the HyParView manager ensures reliable communication in networks with high attrition rates, such as edge clusters, it is used by default in our configuration. It can, however, easily be adjusted to match other types of topologies, and it enables hosts that are members of multiple clusters to use the optimal peer service for each. This makes it suitable for clusters of IoT nodes that communicate with each other over unreliable networks, but that must also communicate with other cluster types, such as star or mesh topologies, where reliable communication does not require the same amount of
effort. Partisan is therefore flexible as well as resilient, and we are still able to configure edge nodes to communicate with servers, reliable clients, or gateways without the overhead that HyParView would generate in stable networks.

Fig. 2. An example of an Achlys network measuring temperature and pressure.
_E. Example Achlys network_
Figure 2 shows a conceptual overview of a WSN setup of
Achlys nodes that highlights several elements :
_• The bottom layer consists of functions temp1,2(∆, m), p(∆, m) that represent input streams of data based on environment variables measured by the nodes, in this example temperatures and pressure._
_• The physical topology reflects the real world configura-_
tion of the nodes where each edge implies that the two
vertices are able to establish radio communication.
_• The virtual overlay that we are able to build using_
Partisan. Achlys provides functions to clusterize GRiSP
nodes through Partisan and therefore partitions such as
shown by dotted blue lines are abstracted away by the
eventual consistency and partition tolerance properties.
Physically isolated parts of the network keep functioning,
and seamlessly recover once the links are reestablished.
_F. Local aggregation_
The vast majority of raw IoT sensor data is very short-lived inside systems and ultimately leads to unnecessary storage. Hence, in Achlys we introduce configurable parameters for the aggregation of sensor data. This way, programmers can still benefit from distributed storage but also take advantage of local memory or MicroSD cards to aggregate raw measurements and propagate mean values. The network load and global storage volume are thus decreased, and overall scalability is improved.
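A minimal sketch of this pattern is shown below; the function name, the fixed-size window, and the mean as the only aggregate are our assumptions, since the actual parameters are configurable:

```erlang
%% Local aggregation sketch (ours): buffer raw readings on the node and only
%% hand a mean value to the distributed store once the window is full.
aggregate(Reading, Buffer, WindowSize) ->
    NewBuffer = [Reading | Buffer],
    case length(NewBuffer) >= WindowSize of
        true ->
            Mean = lists:sum(NewBuffer) / length(NewBuffer),
            {propagate, Mean, []};   % caller stores Mean in Lasp, resets buffer
        false ->
            {buffer, NewBuffer}      % keep the raw reading local for now
    end.
```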
_G. Generic task model_
Achlys provides developers the ability to embed Erlang
higher-order functions through a simple API as shown in Table
I. This allows building applications using replicated higherorder functions. Each node receives tasks and can locally
decide based on load-balancing information and destination
targeting information if it needs to execute it.
We used this model to generate replicated meteorological
sensor data aggregations via generic functions supplied with
specific tasks. A live dashboard of the currently converged
view of the data was built and run on a laptop connected to
the GRiSP sensor network. As long as that web client host
was able to reach any node in the network, it could output its
live view of the distributed storage.
IV. ADDITIONAL USE CASES
_A. Live IoT sensor dashboard_
Since our framework is implemented in Erlang, it is also
possible to integrate it in Elixir[6] applications. Elixir is a
programming language that is built on top of the Erlang runtime system and adds several very popular web development
features. This makes it possible to implement a web server
that runs only at the edge and that can interact with an entire
Achlys cluster as soon as a single node is reachable. Our
previous work has already allowed us to display a minimal
version of this use case implementation in the context of
the LightKone project. Figure 3 gives a screenshot of a live
display of recorded magnetic field data recorded with Digilent
Pmod NAV modules attached to GRiSP boards in the cluster.
Achlys allows the live monitoring to be accessed from any
edge node and it is guaranteed that the distributed database
will be consistent regardless of the physical location as long
as there is eventually a connection. This use case is a good
display of the modular designs that can be implemented with
Achlys. We can run isolated applications on IoT networks, add or remove nodes, and easily implement other types of custom nodes that will automatically work with the existing cluster. For live monitoring of environmental variables with IoT actuators, the self-configuring and autonomous execution can be a starting point, and users are free to perform predictive analysis and machine learning by bridging the cluster with cloud computing services. Once the learning process yields a new set of rules, users can propagate them to all the network nodes and the system becomes autonomous again.
_B. Internet of Battlefield Things_
The U.S. Army Research Laboratory has announced an entirely new research program dedicated uniquely to IoT devices[30]. It is focused on deriving the new theoretical models and systems that will bring key advantages in the military conflicts of the coming decades.
that are not considered in edge computing IoT architectures
designed for commercial use. The authors describe the IoBT
network as dynamic and ubiquitous with a high degree of
pervasiveness, and self-awareness and self-configuration of
networks are explicitly stated as requirements. Based on the
6elixir-lang.org
|Function|Arguments|Description|
|---|---|---|
|add_task|{Name, Targets, Fun}|Adds the task tuple {Name, Targets, Fun} to the tasks CRDT|
|remove_task|Name|Removes the task named Name from the tasks CRDT|
|start_task|Name|Starts the task named Name|
|find_and_start_task|nil|Fetches any available task from the tasks CRDT and executes it|
|start_all_tasks|nil|Starts all tasks in the tasks CRDT|
TABLE I
GENERIC TASK MODEL API FUNCTIONS.
Fig. 3. A web client running on the edge and monitoring magnetic field
sensor data from the distributed database.
%% Periodic calls to find available
%% tasks on each node
erlang:send_after(Cycle, self(), trigger)

%% Handle the periodic message
handle_info(trigger, State) ->
    find_and_start_task(),
    ...

%% From a single node: create and
%% propagate a new task
F = Sense(DeltaInterval, Threshold),
add_task({senseTask, Destinations, F})
Fig. 4. Example usage of the task model API for listening on the tasks CRDT
and running available functions.
described areas of interest in IoBT, we imagine the use case of Achlys nodes in forward deployments of ground infantry squads. These deployments can leave groups isolated in remote areas where no means of communication with remote operators is possible. At night, the surroundings must be constantly watched, and soldiers must spread across areas where they are exposed to higher risks once alone. If soldiers were equipped with a set of Achlys nodes, they would be able to maintain their cluster membership even with all crew members in the field, and since the sensors can immediately produce data that can indicate distress, such as heat or trembling, real-time threat analysis can be propagated immediately to all members in the area. In urban combat scenarios this is particularly desirable, as individuals are often in buildings or confined spaces where they cannot see each other, while Achlys would still be able to propagate alarms to all reachable members. Throughout missions, Achlys nodes would record success metrics, environmental variables, and network topology changes over entire operations. Once groups return to base, they would provide this data to nodes with high computing capacities that can perform predictive analysis to be used in future deployments, and the results could easily be passed between units within communication range. We have observed that the key features of Achlys, reliable communication and storage, correspond to numerous requirements of IoBT, since such systems must remain operational regardless of their environment.
V. CONCLUSION
We introduce Achlys, a framework for general purpose
edge computing that runs directly on sensor/actuator networks
with unreliable nodes, intermittent communication, and limited computation and storage resources. Our current Achlys
prototype is written in Erlang and runs on a wireless ad
hoc network of GRiSP sensor/actuator boards. Achlys uses
the Lasp and Partisan libraries to provide reliable storage,
computation, and communication. Lasp is a distributed key/value store based on CRDTs, which is used both for storage (with efficient δ-CRDTs) and as a dynamic management tool to allow dissemination of general-purpose computing functions
inside the network. Partisan is a communication component
that provides highly resilient broadcast and connectivity for
dynamic networks with intermittent connectivity, by using the
Plumtree and HyParView hybrid gossip algorithms.
Because it is written in Erlang, Achlys can also be used on
any infrastructure based on the Erlang runtime. For example, it
can be used on scalable web servers written in Elixir, because
Elixir interoperates seamlessly with Erlang. Our experiments
show encouraging results and validate the feasibility of our IoT
edge computing model. This allows us to focus on improving
efficiency and usability aspects of Achlys. In particular, further
engineering work will be dedicated to minimize the resource
usage of the framework and its dependencies. We intend to
provide a framework that supports applications on embedded
systems in actual deployments, and thus storage, computation
and memory requirements of Achlys need to be carefully
managed. Techniques for compact storage will be investigated
such that we increase the amount of information that is passed
through CRDTs while keeping the size identical. Furthermore,
other optimizations in terms of networking and self-adaptation will be done in order to elaborate more intelligent clustering mechanisms; this will be reported in future work. Measurements of efficiency and resilience with fine-grained adjustments of Partisan’s parameters will help us implement context-aware networking behavior for Achlys, reducing unnecessary bandwidth usage and connections.
to reduce application size, we will study if unused modules
can be excluded from the releases deployed on the embedded
systems, and if compression features that are available through
Erlang compiler flags can preserve the features of Achlys
while leaving more space for applications developed on top.
While we will keep improving Achlys, it will also be used
for proof-of-concept implementations of use case scenarios in
edge computing, and in particular for precision agriculture.
ACKNOWLEDGMENT
This work is partially funded by the LightKone European
H2020 project under Grant Agreement 732505. The authors
would like to thank Giorgos Kostopoulos of Gluk Advice BV
for information on precision agriculture.
REFERENCES
[1] I.-P. Belikaidis, A. Georgakopoulos, P. Demestichas, U. Herzog,
K. Moessner, S. Vahid, M. Fitch, K. Briggs, B. Miscopein, B. Okyere,
and V. Frascolla, “Trends and challenges for autonomic RRM and MAC
functionality for QoS provision and capacity expansions in the context
of 5G beyond 6GHz,” in 2017 European Conference on Networks
_and Communications, EuCNC 2017, Oulu, Finland, June 12-15, 2017._
IEEE, pp. 1–5.
[2] V. Frascolla, F. Miatton, G. K. Tran, K. Takinami, A. D. Domenico,
E. C. Strinati, K. Koslowski, T. Haustein, K. Sakaguchi, S. Barbarossa,
and S. Barberis, “5G-MiEdge: Design, standardization and deployment
of 5G phase II technologies: MEC and mmWaves joint development
for Tokyo 2020 Olympic games,” in IEEE Conference on Standards
_for Communications and Networking, CSCN 2017, Helsinki, Finland,_
_September 18-20, 2017._ IEEE, pp. 54–59.
[3] Smart World of IoT– The Edge is Getting Smarter, Smaller,
and Moving Further Out! - Part 1. [Online]. Available: [http:](http://www.embedded-computing.com/iot)
[//www.embedded-computing.com/iot](http://www.embedded-computing.com/iot)
[4] A. Kumar, M. Zhao, K. Wong, Y. L. Guan, and P. H. J. Chong,
“A Comprehensive Study of IoT and WSN MAC Protocols: Research
Issues, Challenges and Opportunities,” pp. 1–1.
[5] J. Lin, W. Yu, N. Zhang, X. Yang, H. Zhang, and W. Zhao, “A Survey
on Internet of Things: Architecture, Enabling Technologies, Security and
Privacy, and Applications,” vol. 4, no. 5, pp. 1125–1142.
[6] S. Ziegler and A. Brékine, “D6.3 – Second year report on standardization, dissemination and exploitation achievements,” p. 59.
[7] R. Morabito, R. Petrolo, V. Loscr`ı, and N. Mitton, “LEGIoT: A
Lightweight Edge Gateway for the Internet of Things,” vol. 81, pp.
[1–15. [Online]. Available: https://linkinghub.elsevier.com/retrieve/pii/](https://linkinghub.elsevier.com/retrieve/pii/S0167739X17306593)
[S0167739X17306593](https://linkinghub.elsevier.com/retrieve/pii/S0167739X17306593)
[8] A. Yousefpour, C. Fung, T. Nguyen, K. Kadiyala, F. Jalali,
A. Niakanlahiji, J. Kong, and J. P. Jue, “All One Needs to Know about
Fog Computing and Related Edge Computing Paradigms: A Complete
[Survey.” [Online]. Available: http://arxiv.org/abs/1808.05283](http://arxiv.org/abs/1808.05283)
[9] A. Carlier, I. Kopestenski, and D. Martens, “Lasp on Grisp: Implementation and evaluation of a general purpose edge computing system for Internet of Things.”
[[10] I. ETSI, “ETSI MEC.” [Online]. Available: https://www.etsi.org/images/](https://www.etsi.org/images/files/ETSIWhitePapers/etsi_wp30_MEC_Enterprise_FINAL.pdf)
[files/ETSIWhitePapers/etsi wp30 MEC Enterprise FINAL.pdf](https://www.etsi.org/images/files/ETSIWhitePapers/etsi_wp30_MEC_Enterprise_FINAL.pdf)
[11] N. ETSI, “ETSI NGP.”
[12] K. Karlsson, W. Jiang, S. Wicker, D. Adams, E. Ma, R. van Renesse,
and H. Weatherspoon, “Vegvisir: A Partition-Tolerant Blockchain for
the Internet-of-Things,” in 2018 IEEE 38th International Conference
_on Distributed Computing Systems (ICDCS)._ IEEE, pp. 1150–1158.
[[Online]. Available: https://ieeexplore.ieee.org/document/8416377/](https://ieeexplore.ieee.org/document/8416377/)
[13] M. Du, K. Wang, Y. Chen, X. Wang, and Y. Sun, “Big Data Privacy
Preserving in Multi-Access Edge Computing for Heterogeneous Internet
of Things,” vol. 56, no. 8, pp. 62–67.
[14] J. Jin, J. Gubbi, S. Marusic, and M. Palaniswami, “An Information
Framework for Creating a Smart City Through Internet of Things,”
vol. 1, no. 2, pp. 112–121.
[15] A. Zanella, N. Bui, A. Castellani, L. Vangelista, and M. Zorzi, “Internet
of Things for Smart Cities,” vol. 1, no. 1, pp. 22–32. [Online].
[Available: http://ieeexplore.ieee.org/document/6740844/](http://ieeexplore.ieee.org/document/6740844/)
[16] F. Esposito, A. Cvetkovski, T. Dargahi, and J. Pan, “Complete edge
function onloading for effective backend-driven cyber foraging,” in 2017
_IEEE 13th International Conference on Wireless and Mobile Computing,_
_Networking and Communications (WiMob), pp. 1–8._
[17] V. Ivanov, Y. Jin, S. Choi, G. Destino, M. Mueck, and V. Frascolla,
“Highly efficient representation of reconfigurable code based on a radio
virtual machine: Optimization to any target platform,” in 25th European
_Signal Processing Conference, EUSIPCO 2017, Kos, Greece, August 28_
_- September 2, 2017._ IEEE, pp. 893–897.
[18] C. S. Meiklejohn and P. Van Roy, “Loquat: A framework for large-scale
actor communication on edge networks.” IEEE, pp. 563–568. [Online].
[Available: http://ieeexplore.ieee.org/document/7917624/](http://ieeexplore.ieee.org/document/7917624/)
[19] C. Baquero, P. S. Almeida, and A. Shoker, “Making Operation-based
CRDTs Operation-based,” p. 15.
[20] A. Barker. Implementing a Garbage-Collected Graph CRDT (Part 1 of
[2). [Online]. Available: http://composition.al/CMPS290S-2018-09/2018/](http://composition.al/CMPS290S-2018-09/2018/11/12/implementing-a-garbage-collected-graph-crdt-part-1-of-2.html)
[11/12/implementing-a-garbage-collected-graph-crdt-part-1-of-2.html](http://composition.al/CMPS290S-2018-09/2018/11/12/implementing-a-garbage-collected-graph-crdt-part-1-of-2.html)
[21] R. Brown, Z. Lakhani, and P. Place, “Big(ger) Sets: Decomposed
[delta CRDT Sets in Riak,” pp. 1–5. [Online]. Available: http:](http://arxiv.org/abs/1605.06424)
[//arxiv.org/abs/1605.06424](http://arxiv.org/abs/1605.06424)
[22] P. S. Almeida, A. Shoker, and C. Baquero, “Delta State Replicated
Data Types,” vol. 111, pp. 162–173. [Online]. Available: [http:](http://arxiv.org/abs/1603.01529)
[//arxiv.org/abs/1603.01529](http://arxiv.org/abs/1603.01529)
[23] C. S. Meiklejohn, V. Enes, J. Yoo, C. Baquero, P. Van Roy, and
A. Bieniusa, “Practical Evaluation of the Lasp Programming Model at
Large Scale - An Experience Report,” pp. 109–114. [Online]. Available:
[http://arxiv.org/abs/1708.06423](http://arxiv.org/abs/1708.06423)
[24] C. Meiklejohn and P. Van Roy, “Lasp: A Language for Distributed,
Coordination-free Programming,” in _Proceedings_ _of_ _the_ _17th_
_International Symposium on Principles and Practice of Declarative_
_Programming, ser. PPDP ’15._ ACM, pp. 184–195. [Online]. Available:
[http://doi.acm.org/10.1145/2790449.2790525](http://doi.acm.org/10.1145/2790449.2790525)
[25] C. Meiklejohn and H. Miller, “Partisan: Enabling Cloud-Scale Erlang
[Applications.” [Online]. Available: http://arxiv.org/abs/1802.02652](http://arxiv.org/abs/1802.02652)
[26] L. Adam. Wireless Small Embedded Erlang Applications with Grisp
[Hardware Boards. [Online]. Available: http://www.erlang-factory.com/](http://www.erlang-factory.com/berlin2016/adam-lindberg)
[berlin2016/adam-lindberg](http://www.erlang-factory.com/berlin2016/adam-lindberg)
[27] P. S. Almeida, A. Shoker, and C. Baquero, “Efficient State-based CRDTs
[by Delta-Mutation.” [Online]. Available: http://arxiv.org/abs/1410.2803](http://arxiv.org/abs/1410.2803)
[28] J. Leitao, J. Pereira, and L. Rodrigues, “HyParView: A Membership
Protocol for Reliable Gossip-Based Broadcast,” in Proceedings of
_the 37th Annual IEEE/IFIP International Conference on Dependable_
_Systems and Networks, ser. DSN ’07._ IEEE Computer Society, pp.
[419–429. [Online]. Available: https://doi.org/10.1109/DSN.2007.56](https://doi.org/10.1109/DSN.2007.56)
[29] ——, “Epidemic Broadcast Trees,” in Proceedings of the 26th IEEE
_International Symposium on Reliable Distributed Systems, ser. SRDS_
’07. IEEE Computer Society, pp. 301–310. [Online]. Available:
[http://dl.acm.org/citation.cfm?id=1308172.1308243](http://dl.acm.org/citation.cfm?id=1308172.1308243)
[30] Internet of Battlefield Things (IOBT) — U.S. Army Research
[Laboratory. [Online]. Available: https://www.arl.army.mil/www/default.](https://www.arl.army.mil/www/default.cfm?page=3050)
[cfm?page=3050](https://www.arl.army.mil/www/default.cfm?page=3050)
-----
|
{
"disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/1901.05030, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "GREEN",
"url": "https://arxiv.org/pdf/1901.05030"
}
| 2,019
|
[
"JournalArticle",
"Conference"
] | true
| 2019-01-15T00:00:00
|
[
{
"paperId": "54f2aef120d2e79b15bd296cc72fa38dc810c485",
"title": "SmartEdge'19 - The Third International Workshop on Smart Edge Computing and Networking - Welcome and Committeees"
},
{
"paperId": "974a421f5883779b305db6f90a23be2fa2875e27",
"title": "A Comprehensive Study of IoT and WSN MAC Protocols: Research Issues, Challenges and Opportunities"
},
{
"paperId": "3e4388586eb2538657a24ca1ace3c6877ec2235b",
"title": "All One Needs to Know about Fog Computing and Related Edge Computing Paradigms: A Complete Survey"
},
{
"paperId": "fb11bc82b66e6980d8611b7f73cc2700e3c117d5",
"title": "Big Data Privacy Preserving in Multi-Access Edge Computing for Heterogeneous Internet of Things"
},
{
"paperId": "3cb4ae87b7ed895a6563ad459329c7f8b8a19d47",
"title": "Vegvisir: A Partition-Tolerant Blockchain for the Internet-of-Things"
},
{
"paperId": "d0895a0a9a6793ee2c477309160b6c7817466e8f",
"title": "LEGIoT: A Lightweight Edge Gateway for the Internet of Things"
},
{
"paperId": "2f6996ddf2f23a9b708416014211f8f3f629f33d",
"title": "Partisan: Enabling Cloud-Scale Erlang Applications"
},
{
"paperId": "0f56a4dcb0a20cfd6b57a2300deb36330982b1c0",
"title": "Complete edge function onloading for effective backend-driven cyber foraging"
},
{
"paperId": "2d0887ae74333a09466c8c9d2a00ac77569412f6",
"title": "5G-MiEdge: Design, standardization and deployment of 5G phase II technologies: MEC and mmWaves joint development for Tokyo 2020 Olympic games"
},
{
"paperId": "b811fc7b786b82ac05cde9d87679d368d5a86e69",
"title": "Practical evaluation of the Lasp programming model at large scale: an experience report"
},
{
"paperId": "50ce4eee3e22f9b432792042ffc6136503fed37e",
"title": "Highly efficient representation of reconfigurable code based on a radio virtual machine: Optimization to any target platform"
},
{
"paperId": "0e7f869040c80bec4ecfee61f081b22ac3babb13",
"title": "Trends and challenges for autonomic RRM and MAC functionality for QoS provision and capacity expansions in the context of 5G beyond 6GHz"
},
{
"paperId": "cd3d4459ff0ff590a3e3258b0f774d6963cd4c90",
"title": "A Survey on Internet of Things: Architecture, Enabling Technologies, Security and Privacy, and Applications"
},
{
"paperId": "1edf49dfd5f09aea94e753e8594e95666ebe131a",
"title": "Loquat: A framework for large-scale actor communication on edge networks"
},
{
"paperId": "57adad122502face49b8dea49246b512539c1452",
"title": "Big(ger) sets: decomposed delta CRDT sets in Riak"
},
{
"paperId": "05009141302f022fcf7387b37c8d9a2c415ead7b",
"title": "Delta state replicated data types"
},
{
"paperId": "bd068826b712bc5a320af8ebef71b1cc9c88a836",
"title": "Lasp: a language for distributed, coordination-free programming"
},
{
"paperId": "9a17a57fdd7bbeebdd0c5fe4a50d786034d49182",
"title": "Efficient State-Based CRDTs by Delta-Mutation"
},
{
"paperId": "3ae7d736576b01665ba7f8a5f60430973e3f7b6b",
"title": "Making operation-based CRDTs operation-based"
},
{
"paperId": "52f168c6c4f42294c4c9f9305bc88b6d25ffec9a",
"title": "Internet of Things for Smart Cities"
},
{
"paperId": "967a2b6638fc0a0374e76ecc1768ef10b7c216cd",
"title": "An Information Framework for Creating a Smart City Through Internet of Things"
},
{
"paperId": "e2d0cb2ecbf930048ae3329aa47a814d8ebace5d",
"title": "Delta"
},
{
"paperId": "528a0945acb242d4f36c361e10f9e612c0b631b9",
"title": "Epidemic Broadcast Trees"
},
{
"paperId": "06e04a7a24100dbc0f72f22bc8e6dde4b2a27d8b",
"title": "HyParView: A Membership Protocol for Reliable Gossip-Based Broadcast"
},
{
"paperId": null,
"title": "Implementing a Garbage-Collected Graph CRDT (Part 1 of 2)"
},
{
"paperId": null,
"title": "Wireless Small Embedded Erlang Applications with Grisp Hardware Boards"
},
{
"paperId": null,
"title": "Internet of Battlefield Things (IOBT) — U.S. Army Research Laboratory"
},
{
"paperId": null,
"title": "D6.3 – Second year report on standardization, dissemination and exploitation achievements"
},
{
"paperId": null,
"title": "ETSI MEC"
},
{
"paperId": null,
"title": "Lasp on Grisp : Implementation and evaluation of a general purpose edge computing system for Internet of Things . ” [ 10 ] I . ETSI , “ ETSI MEC . ” [ Online ]"
},
{
"paperId": null,
"title": "Smart World of IoT – The Edge is Getting Smarter , Smaller , and Moving Further Out ! - Part 1 . [ Online ]"
}
] | 10,173
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Medicine",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/00e730f1a001c200cd93883e1cbeb0337c11faee
|
[
"Computer Science",
"Medicine"
] | 0.82044
|
Multiagent Continual Coordination via Progressive Task Contextualization
|
00e730f1a001c200cd93883e1cbeb0337c11faee
|
IEEE Transactions on Neural Networks and Learning Systems
|
[
{
"authorId": "49785134",
"name": "Lei Yuan"
},
{
"authorId": "2216719801",
"name": "Lihe Li"
},
{
"authorId": "2188107173",
"name": "Ziqian Zhang"
},
{
"authorId": "2166590799",
"name": "Fuxiang Zhang"
},
{
"authorId": "2174870366",
"name": "Cong Guan"
},
{
"authorId": "2152850415",
"name": "Yang Yu"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"IEEE Trans Neural Netw Learn Syst"
],
"alternate_urls": [
"https://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=5962385"
],
"id": "79c5a18d-0295-432c-aaa5-961d73de6d88",
"issn": "2162-237X",
"name": "IEEE Transactions on Neural Networks and Learning Systems",
"type": null,
"url": "http://ieeexplore.ieee.org/servlet/opac?punumber=5962385"
}
|
Cooperative multiagent reinforcement learning (MARL) has attracted significant attention and has the potential for many real-world applications. Previous arts mainly focus on facilitating the coordination ability from different aspects (e.g., nonstationarity and credit assignment) in single-task or multitask scenarios, ignoring the stream of tasks that appear in a continual manner. This ignorance makes the continual coordination an unexplored territory, neither in problem formulation nor efficient algorithms designed. Toward tackling the mentioned issue, this article proposes an approach, multiagent continual coordination via progressive task contextualization (MACPro). The key point lies in obtaining a factorized policy, using shared feature extraction layers but separated independent task heads, each specializing in a specific class of tasks. The task heads can be progressively expanded based on the learned task contextualization. Moreover, to cater to the popular centralized training with decentralized execution (CTDE) paradigm in MARL, each agent learns to predict and adopt the most relevant policy head based on local information in a decentralized manner. We show in multiple multiagent benchmarks that existing continual learning methods fail, while MACPro is able to achieve close-to-optimal performance. More results also disclose the effectiveness of MACPro from multiple aspects, such as high generalization ability.
|
## MULTI-AGENT CONTINUAL COORDINATION VIA PROGRESSIVE TASK CONTEXTUALIZATION
A PREPRINT
**Lei Yuan[1,2], Lihe Li[1], Ziqian Zhang[1], Fuxiang Zhang[1,2], Cong Guan[1], Yang Yu[1,2],∗**
1 National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
2 Polixir.ai
{yuanl, lilh, zhangzq, zhangfx, guanc}@lamda.nju.edu.cn, yuy@nju.edu.cn
#### ABSTRACT
Cooperative Multi-agent Reinforcement Learning (MARL) has attracted significant attention and has shown the potential for many real-world applications. Previous arts mainly focus on facilitating the coordination ability from different aspects (e.g., non-stationarity, credit assignment) in single-task or multi-task scenarios, ignoring streams of tasks that appear in a continual manner. This ignorance makes continual coordination an unexplored territory, neither in problem formulation nor in algorithm design. Towards tackling the mentioned issue, this paper proposes an approach, Multi-Agent Continual Coordination via Progressive Task Contextualization, dubbed **MACPro**. The key point lies in obtaining a factorized policy, using shared feature extraction layers but separate independent task heads, each specializing in a specific class of tasks. The task heads can be progressively expanded based on the learned task contextualization. Moreover, to cater to the popular CTDE (centralized training with decentralized execution) paradigm in MARL, each agent learns to predict and adopt the most relevant policy head based on local information in a decentralized manner. We show in multiple multi-agent benchmarks that existing continual learning methods fail, while MACPro is able to achieve close-to-optimal performance. More results also disclose the effectiveness of MACPro from multiple aspects, such as its high generalization ability.
#### 1 Introduction
Cooperative Multi-agent Reinforcement Learning (MARL) has attracted prominent attention in recent years [1] and achieved great progress in multiple aspects, like path finding [2], active voltage control [3], and dynamic algorithm configuration [4]. Among the multitudinous methods, researchers, on the one hand, focus on facilitating coordination ability via solving specific challenges, including non-stationarity [5], credit assignment [6], and scalability [7]. Other works, on the other hand, investigate cooperative MARL from multiple aspects, like efficient communication [8], zero-shot coordination (ZSC) [9], policy robustness [10], etc. Many methods have emerged as promising solutions for different scenarios, including policy-based ones [11, 12], value-based series [13, 14], and many other variants, showing remarkable coordination ability in a wide range of tasks like SMAC [15]. Despite the great success, mainstream cooperative MARL methods are still restricted to being trained on one single task or on multiple tasks simultaneously, assuming that the agents have access to data from all tasks at all times, which is unrealistic for physical agents in the real world that can only attend to one task at a time.
Continual Reinforcement Learning plays a promising role in the mentioned problem [16], where the agent aims to avoid catastrophic forgetting as well as enable knowledge transfer to new tasks (a.k.a. the stability-plasticity dilemma [17]), while remaining scalable to a large number of tasks. Multiple approaches have been proposed to address one or more of these challenges, including regularization-based ones [18-20], experience-maintenance techniques [21, 22], and task-structure-sharing categories [23-25]. However, the multi-agent setting is much more complex than the single-agent one, as the interaction among agents introduces additional considerations [26]. Also, coordinating with multiple teammates has proved intrinsically difficult [27]. Previous works model this problem as multi-task [9] or just uni-modal coordination among teammates [28]. In light of the significance and ubiquity of cooperative MARL, it is thus imperative to consider continual coordination in both the problem formulation and the algorithm design to tackle this issue.

∗Corresponding Author
In this work, we develop such a continual coordination framework for cooperative MARL, where tasks appear sequentially. Concretely, we first develop a multi-agent task context extraction module, where the information of each state in a specific task is extracted and integrated by a product-of-experts (POE) mechanism into a latent space to capture the task dynamics, and a contrastive regularizer is further applied to optimize the learned representation, with which the representations of similar tasks are pulled together while those of dissimilar ones are pushed apart. Afterward, we design an expandable multi-head policy architecture, whose separate independent heads are synchronously expanded with newly instantiated contexts, along with a carefully designed shared feature extraction module. Finally, considering the popular CTDE (Centralized Training with Decentralized Execution) paradigm in mainstream cooperative MARL, we leverage the local information of each agent to approximate the policy-head selection process via policy distillation during centralized training, with which agents can select the optimal heads to coordinate with their teammates in a decentralized manner.
For the evaluation of the proposed approach, MACPro, we conduct extensive experiments on various cooperative
multi-agent benchmarks in the continual setting, including level-based foraging (LBF) [29], predator-prey (PP) [11],
and the StarCraft Multi-Agent Challenge benchmark (SMAC) [30], and compare MACPro against previous approaches, strong baselines, and ablations. Experimental results show that MACPro considerably improves upon existing methods. More results demonstrate its high generalization ability and its potential to be integrated with different
value-based methods to enhance their continual learning ability. Visualization experiments provide additional insight
into how MACPro works.
#### 2 Related Work
**Cooperative Multi-agent Reinforcement Learning** Many real-world problems are made up of multiple interactive
agents, which could usually be modeled as a Multi-Agent Reinforcement Learning (MARL) problem [26,31]. Further,
when the agents hold a shared goal, this problem refers to cooperative MARL [32], showing great progress in diverse
domains like path finding [2], active voltage control [3], and dynamic algorithm configuration [4], etc. Many methods
are proposed to facilitate coordination among agents, including policy-based ones (e.g., MADDPG [11], MAPPO [12], FD-MARL [33]), value-based series like VDN [13], QMIX [14], and Linda [34], and other techniques like the transformer [35]. These approaches have demonstrated remarkable coordination ability in a wide range of tasks (e.g., SMAC [30], Hanabi [12], GRF [35]). Besides the mentioned approaches and their variants, many other methods
are also proposed to investigate the cooperative MARL, including efficient communication [8] to relieve the partial
observability caused by decentralized policy execution, policy deployment in an offline manner [36], model learning
in MARL [37], policy robustness when some perturbations exist [10], and training paradigm like CTDE (centralized
training with decentralized execution) [38], ad hoc teamwork [27], etc.
Despite the mentioned progress, the vast majority of current approaches focus either on training the MARL policy on a single task, or on the multi-task setting where all tasks appear simultaneously, lacking attention to the continual coordination problem. Among these methods, MRA [39] focuses on creating agents that generalize across population-varying Markov games, proposing meta representations for agents that explicitly model the game-common and game-specific strategic knowledge. MATTAR [40] assumes there are some basic tasks, training on which can accelerate the training process on other similar tasks, and develops a multi-agent multi-task training framework. TrajeDi [9] and some variants (or improved versions) like MAZE [41] concentrate on coordinating with different teammates, or even unseen ones like a human; these methods also assume that all training tasks are accessible at all times. [28] introduces a multi-agent learning testbed based on Hanabi that supports both zero-shot and few-shot settings, but it only considers uni-modal coordination among tasks, and the experimental results demonstrate that methods like VDN [13] trained in the proposed testbed can coordinate well with unseen agents, without the additional assumptions made by previous works.
**Continual Reinforcement Learning** Continual learning is conceptually related to incremental learning and lifelong learning, as they all assume that tasks or samples are presented in a sequential manner [17, 42, 43]. For continual
reinforcement learning [16], EWC [18] learns new Q-functions by regularizing the l2 distance between the optimal
weights of the new task and previous ones. It requires additional supervision information like task changes to update
its objective, and then selects a specific Q-function head and a task-specific exploration schedule for different tasks.
CLEAR [44] is a task-agnostic method that does not require task information during the continual learning process,
and leverages big experience replay buffers to prevent forgetting. Coreset [45] prevents catastrophic forgetting by
choosing and storing a significantly smaller subset of the previous task’s data, which is used to rehearse the model
Figure 1: An example of multi-agent continual coordination, where tasks (e.g., the position of the food in Level-Based Foraging (LBF) [29]) change along with the timeline. We thus need to train a policy $\boldsymbol{\pi}_m$ that solves the current task while maintaining the knowledge of previous tasks (i.e., avoiding catastrophic forgetting).
during or after finetuning. Some other works, like HyperCRL [46] and [47], utilize a learned world model to promote continual learning efficiency. Considering the scalability issue with respect to the number of tasks, CN-DPM [48] and LLIRL [49] decompose the whole task space into several subsets of the data (tasks), and then utilize techniques like the Dirichlet Process Mixture or the Chinese Restaurant Process to expand the neural network for efficient continual supervised learning and reinforcement learning, respectively. OWL [24] is a recently proposed approach that learns a multi-head architecture and achieves high learning efficiency, and CSP [25] incrementally builds a subspace of policies for training a reinforcement learning agent on a sequence of tasks. Other researchers also design benchmarks like Continual World [50], or baselines [51], to verify the effectiveness of different methods in single-agent reinforcement learning. [28] investigates whether agents can coordinate with unseen agents by introducing a multi-agent learning testbed based on Hanabi; still, it only considers uni-modal coordination among tasks. Our work takes a further step in this direction in both problem formulation and algorithm design.
#### 3 Problem Formulation
This work considers a cooperative multi-agent reinforcement learning problem under partial observation, which can be formalized as a Dec-POMDP [52] with tuple $\mathcal{M} = \langle N, \mathcal{S}, \mathcal{A}, \Omega, P, O, R, \gamma \rangle$, where $N = \{1, \cdots, n\}$, $\mathcal{S}$, $\mathcal{A} = \mathcal{A}^1 \times \cdots \times \mathcal{A}^n$, and $\Omega$ are the sets of agents, states, joint actions, and local observations, respectively. $P: \mathcal{S} \times \mathcal{A} \to \Delta(\mathcal{S})$ stands for the transition probability function, $O: \mathcal{S} \times N \to \Omega$ and $R: \mathcal{S} \times \mathcal{A} \to \mathbb{R}$ are the corresponding observation and reward functions, and $\gamma \in [0, 1)$ is the discount factor. The multiple interactive agents in a Dec-POMDP coordinate with teammates to complete a task under a shared reward $R$; at each time step, agent $i$ receives the local observation $o^i = O(s, i)$ and outputs an action $a^i \in \mathcal{A}^i$. The formal objective of the agents is to maximize the expected cumulative discounted reward $\mathbb{E}[\sum_{t=0}^{\infty} \gamma^t R(s_t, \boldsymbol{a}_t)]$ by learning an optimal joint policy.
In this work, we focus on a continual coordination problem where agents in a team are exposed to a sequence of (possibly infinite) tasks $Y = (\mathcal{M}_1, \cdots, \mathcal{M}_m, \cdots)$. Each task involves a sequential decision-making problem and can be formulated as a Dec-POMDP $\mathcal{M}_m = \langle N_m, \mathcal{S}_m, \mathcal{A}_m, \Omega_m, P_m, O_m, R_m, \gamma \rangle$, as shown in Fig. 1. The agents are continually evaluated on all previous tasks (but cannot be trained on them) and the present task. Therefore, the agents' policy needs to transfer to new tasks while maintaining the ability to perform previous tasks. Concretely, agents that have learned $M$ tasks are expected to maximize the MARL objective for each task in $Y_M = \{\mathcal{M}_1, \cdots, \mathcal{M}_M\}$. We consider the setting where task boundaries are known during the centralized training phase. During the decentralized execution phase, agents cannot access global but only local information to finish the tasks sampled from $Y_M$.
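To make the evaluation protocol concrete, the following is a minimal sketch of the continual setting formalized above, not the authors' code: `train_on` and `evaluate_on` are hypothetical placeholders. The learner trains on one task at a time but is scored by its average performance over all tasks seen so far, the quantity plotted in the experiments below.

```python
# A sketch of the continual protocol: train on one task at a time, evaluate on
# all tasks seen so far. `train_on`/`evaluate_on` are hypothetical placeholders.
from typing import Callable, List

def continual_run(tasks: List[str],
                  train_on: Callable[[str], None],
                  evaluate_on: Callable[[str], float]) -> List[float]:
    curve = []
    for m, task in enumerate(tasks, start=1):
        train_on(task)        # only the current task M_m is available for training
        curve.append(sum(evaluate_on(t) for t in tasks[:m]) / m)
    return curve

# A learner that forgets everything but the current task sees its average
# score over the seen tasks decay: [1.0, 0.5, 0.33...].
last = {"task": None}
print(continual_run(["M1", "M2", "M3"],
                    train_on=lambda t: last.update(task=t),
                    evaluate_on=lambda t: 1.0 if t == last["task"] else 0.0))
```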
#### 4 Method
Figure 2: The overall framework of MACPro. (a) Task contextualization learning: an efficient multi-agent module that captures the uniqueness of each emerging task. (b) Dynamic network expansion: the training paradigm, including a shared feature extraction part and an adaptive policy-head expansion module based on the learned contexts. (c) Decentralized task approximation and execution: each agent utilizes its local information to approximate the actual task head in a decentralized way.

In this section, we describe the detailed design of our proposed method, MACPro. First, we design an efficient multi-agent task contextualization learning module to capture the uniqueness of each emerging task (Fig. 2(a)). Next, we propose a novel training paradigm, including a shared feature extraction part and an adaptive policy-head expansion module based on the learned contexts (Fig. 2(b)). Finally, considering the CTDE property of mainstream cooperative MARL, we train each agent to utilize its local information to approximate the actual task head (Fig. 2(c)).
**4.1** **Multi-agent Task Contextualization Learning**
In continual reinforcement learning, where tasks keep altering sequentially, it is crucial to capture the unique context of each emerging new task. However, the behavioral descriptor of a multi-agent task is much more complex than in the single-agent setting due to the interactions among agents [1]. This subsection thus aims to tackle this issue by developing an efficient multi-agent task contextualization learning module.
Specifically, consider a trajectory $\tau = (s_0, \cdots, s_T)$ with horizon $T$ rolled out by any policies; we utilize a global trajectory encoder $g_\theta$ parameterized by $\theta$ to encode $\tau$ into a latent space. Concretely, the trajectory representation is a multivariate Gaussian distribution $\mathcal{N}(\mu_\theta(\tau), \sigma_\theta^2(\tau))$ whose parameters are computed by $g_\theta(\tau)$. As the trajectory horizon $T$ may differ across tasks (e.g., 3m and 5m in SMAC [30]), we apply a transformer [53] architecture (see App. H) to extract features from each trajectory; thus the latent context of a whole trajectory can be represented as $T$ Gaussian distributions $\mathcal{N}(\mu_0, \sigma_0^2), \cdots, \mathcal{N}(\mu_T, \sigma_T^2)$, where $\mathcal{N}(\mu_i, \sigma_i^2)$ stands for the $i$-th essential part of the trajectory. Next, considering the importance of different states in a trajectory, we apply the product-of-experts (POE) technique [54] to acquire the joint representation of a trajectory, which is also a Gaussian distribution $\mathcal{N}(\mu_\theta(\tau), \sigma_\theta^2(\tau))$, where:
$$\mu_\theta(\tau) = \left( \sum_{t=0}^{T} \mu_t (\sigma_t^2)^{-1} \right) \left( \sum_{t=0}^{T} (\sigma_t^2)^{-1} \right)^{-1}, \qquad \sigma_\theta^2(\tau) = \left( \sum_{t=0}^{T} (\sigma_t^2)^{-1} \right)^{-1}. \qquad (1)$$
The detailed derivation relating the joint distribution to each single one can be seen in App. J.
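As a concrete illustration of Eq. (1), here is a small numpy sketch of fusing the per-timestep Gaussian experts into the joint trajectory representation; the shapes and dimensions are illustrative assumptions, not the authors' implementation.

```python
# A sketch of the product-of-experts fusion in Eq. (1); shapes are illustrative.
import numpy as np

def poe(mu: np.ndarray, sigma2: np.ndarray):
    """mu, sigma2: (T+1, d) per-timestep means and (positive) variances."""
    precision = 1.0 / sigma2                                 # (sigma_t^2)^{-1}
    joint_sigma2 = 1.0 / precision.sum(axis=0)               # second part of Eq. (1)
    joint_mu = (mu * precision).sum(axis=0) * joint_sigma2   # first part of Eq. (1)
    return joint_mu, joint_sigma2

mu = np.random.randn(17, 8)                  # T = 16 timesteps, d = 8 latent dims
sigma2 = np.exp(np.random.randn(17, 8))      # strictly positive variances
joint_mu, joint_sigma2 = poe(mu, sigma2)
print(joint_mu.shape, joint_sigma2.shape)    # (8,) (8,)
```

Note how timesteps with low variance (high confidence) dominate the joint mean, which is what motivates weighting states by their importance.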
The previous part obtains a representation for each trajectory. Nevertheless, the learned representation lacks any dynamic information about the specific multi-agent task. As the difference between any two dynamics models lies in the transition and reward functions [55], we apply a loss function to force the learned trajectory representation to capture the dynamic information of each task. Specifically, we learn a context-aware forward model $h$ including three predictors, $h_s, h_o, h_r$, which are responsible for predicting the next state, local observations, and reward given the current state, local observations, actions, and task contextualization, respectively:
$$\mathcal{L}_{model} = \mathbb{E}_{\tau \in \mathcal{D}'} \Big[ \sum_{t=0}^{T} \| h_s[s_t, \boldsymbol{o}_t, \boldsymbol{a}_t, z] - s_{t+1} \|_2^2 + \| h_o[s_t, \boldsymbol{o}_t, \boldsymbol{a}_t, z] - \boldsymbol{o}_{t+1} \|_2^2 + ( h_r[s_t, \boldsymbol{o}_t, \boldsymbol{a}_t, z] - r_t )^2 \Big], \qquad (2)$$
where $z$ is the task contextualization sampled from the joint task distribution, and $\mathcal{D}'$ is the replay buffer for task contextualization learning, which stores a small number of trajectories for each task. However, as there are tasks with different correlations, the mentioned optimization objective $\mathcal{L}_{model}$ might be insufficient for differentiable context acquisition. Therefore, we apply another auxiliary contrastive loss [56] that pulls together semantically similar data points (positive data pairs) while pushing apart dissimilar ones (negative data pairs):
$$\mathcal{L}_{cont_g} = \mathbb{E}_{\tau_j, \tau_k \in \mathcal{D}'} \Big[ \mathbb{1}\{y_j = y_k\} D_J \big( g_\theta(\tau_j) \| g_\theta(\tau_k) \big) + \mathbb{1}\{y_j \neq y_k\} \frac{1}{D_J \big( g_\theta(\tau_j) \| g_\theta(\tau_k) \big) + \varepsilon} \Big], \qquad (3)$$
where $\mathbb{1}\{\cdot\}$ is the indicator function, $y_j$ and $y_k$ are the labels of the tasks from which $\tau_j$ and $\tau_k$ are sampled, respectively, and $\varepsilon$ is a small positive constant added to avoid division by zero. $D_J(P \| Q) = D_{KL}(P \| Q) + D_{KL}(Q \| P)$ is the Jeffrey's divergence [57] used to measure the distance between two distributions, where $D_{KL}$ denotes the Kullback-Leibler divergence. Thus the overall loss term is:

$$\mathcal{L}_{context} = \mathcal{L}_{model} + \alpha_{cont_g} \mathcal{L}_{cont_g}, \qquad (4)$$

where $\alpha_{cont_g}$ is the coefficient balancing the loss terms.
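For intuition, the sketch below computes the Jeffrey's divergence in closed form and the pull-together/push-apart objective of Eq. (3). It is a hedged example assuming diagonal-Gaussian contexts; batch sizes and names are illustrative, not the authors' code.

```python
# A sketch of the contrastive objective in Eq. (3) for diagonal Gaussians.
import torch

def kl_diag(mu1, var1, mu2, var2):
    # Closed-form KL(N(mu1, var1) || N(mu2, var2)) for diagonal Gaussians.
    return 0.5 * (torch.log(var2 / var1) + (var1 + (mu1 - mu2) ** 2) / var2 - 1).sum(-1)

def jeffrey(mu1, var1, mu2, var2):
    # D_J(P || Q) = D_KL(P || Q) + D_KL(Q || P).
    return kl_diag(mu1, var1, mu2, var2) + kl_diag(mu2, var2, mu1, var1)

def contrastive_loss(mu, var, labels, eps=1e-6):
    """mu, var: (B, d) context means/variances from g_theta; labels: (B,) task ids."""
    loss, pairs = 0.0, 0
    for j in range(mu.shape[0]):
        for k in range(j + 1, mu.shape[0]):
            d = jeffrey(mu[j], var[j], mu[k], var[k])
            # Pull same-task pairs together; push different-task pairs apart.
            loss = loss + (d if labels[j] == labels[k] else 1.0 / (d + eps))
            pairs += 1
    return loss / max(pairs, 1)

mu = torch.randn(6, 8, requires_grad=True)
var = torch.rand(6, 8) + 0.1
print(contrastive_loss(mu, var, torch.tensor([0, 0, 1, 1, 2, 2])))
```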
**4.2** **Adaptive Dynamic Network Expansion**
With the previously learned global trajectory encoder $g_\theta$, we can obtain a unique contextualization for each task. This subsection now turns to the design of a context-based continual learning mechanism, which incrementally clusters a stream of stationary tasks in the dynamic environment into a series of contexts and selects the optimal policy head from an expandable multi-head neural network.
Formally, for multiple tasks that appear sequentially, we design a policy network consisting of a shared feature extractor $\phi$ with multiple neural network layers (the agent index is omitted in this part for simplicity), which promotes knowledge sharing among different tasks. Furthermore, as there may be multimodal tasks, a single head for all tasks could make the policy overfit to some specific tasks. One way to solve this problem is to learn a customized head for each task, as in OWL [24]. However, this solution scales poorly, as the number of heads increases linearly with the number of tasks, which could be infinitely many. Thus, we develop an adaptive network expansion paradigm based on the similarity between task contextualizations. Specifically, assume that the agents have already experienced $M$ tasks and have $K$ policy heads $\{\psi^k\}_{k=1}^{K}$ so far ($K \le M$). For each head, we store $bs$ trajectories in the buffer $\mathcal{D}'$, and we use $g_\theta$ to obtain the corresponding task contextualizations with mean values $\{\{\mu_k^j\}_{j=1}^{bs}\}_{k=1}^{K}$. When encountering a new task $(M+1)$, we first utilize the feature extractor $\phi$ and all the existing heads $\{\psi^k\}_{k=1}^{K}$ to derive a set of behavior policies $\{\boldsymbol{\pi}_k\}_{k=1}^{K}$ and collect $bs$ trajectories each on task $(M+1)$, denoted as $\{\{\tau_k^j\}_{j=1}^{bs}\}_{k=1}^{K}$. Next, we use $g_\theta$ to derive the mean values $\{\{\mu_k'^j\}_{j=1}^{bs}\}_{k=1}^{K}$ of their contextualizations and calculate the similarities with the existing mean values $\{\{\mu_k^j\}_{j=1}^{bs}\}_{k=1}^{K}$ as follows:
$$l = (l_1, \cdots, l_K), \qquad l' = (l_1', \cdots, l_K'),$$

$$l_k = \frac{1}{bs} \sum_{j=1}^{bs} \Big\| \mu_k^j - \frac{1}{bs} \sum_{i=1}^{bs} \mu_k^i \Big\|_2, \qquad l_k' = \frac{1}{bs} \sum_{j=1}^{bs} \Big\| \mu_k'^j - \frac{1}{bs} \sum_{i=1}^{bs} \mu_k^i \Big\|_2, \qquad k = 1, \cdots, K. \qquad (5)$$
Here $l$ is the vector describing the dispersion of the $K$ existing contextualizations, and $l'$ is the vector describing the distance between the $K$ new contextualizations and the existing ones. Let $k_* = \arg\min_{1 \le k \le K} l_k'$, such that the $k_*$-th pair of existing and new contextualizations is the closest among all $K$ pairs. With an adjustable threshold $\lambda_{new}$, if $l_{k_*}' \le \lambda_{new} l_{k_*}$, indicating that task $(M+1)$ is similar to the task(s) that head $\psi^{k_*}$ takes charge of, we merge it with the learned task(s) and use the unified head $\psi^{k_*}$ for them. Otherwise, none of the learned tasks are similar to the new one, and a new head $\psi^{K+1}$ is created. This phase proceeds along with the task sequence, enjoying high scalability and learning efficiency.
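The following is a minimal sketch of this expansion rule built on Eq. (5); the shapes, the toy data, and the default $\lambda_{new}$ value are illustrative assumptions, not the paper's settings.

```python
# A sketch of the adaptive expansion rule around Eq. (5); data and the
# threshold lambda_new are illustrative assumptions.
import numpy as np

def expand_or_merge(ctx_old, ctx_new, lam_new=1.5):
    """ctx_old[k], ctx_new[k]: (bs, d) context means stored for head k and
    collected on the new task with head k's behavior policy, respectively."""
    l, l_new = [], []
    for old, new in zip(ctx_old, ctx_new):
        center = old.mean(axis=0)                                  # (1/bs) sum_i mu_k^i
        l.append(np.linalg.norm(old - center, axis=1).mean())      # dispersion l_k
        l_new.append(np.linalg.norm(new - center, axis=1).mean())  # distance l'_k
    k_star = int(np.argmin(l_new))
    if l_new[k_star] <= lam_new * l[k_star]:
        return "merge", k_star            # reuse head psi^{k_*} for the new task
    return "expand", len(ctx_old)         # create a fresh head psi^{K+1}

rng = np.random.default_rng(0)
ctx_old = [rng.normal(c, 0.1, size=(8, 4)) for c in (0.0, 3.0)]  # K = 2 heads
ctx_new = [rng.normal(3.0, 0.1, size=(8, 4)) for _ in range(2)]  # new task near head 1
print(expand_or_merge(ctx_old, ctx_new))   # under these assumptions, merges into head 1
```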
The previous part solves the head expansion issue, but a single shared feature extractor may inevitably cause forgetting. We apply an $l_2$-regularizer to relieve this issue by constraining the parameters of the shared part so that they do not change too drastically when learning task $(M+1)$:

$$\mathcal{L}_{reg} = \sum_{i=1}^{n} \| \phi_i - \phi_i^M \|_2, \qquad (6)$$
where $\phi_i^M$ is the saved snapshot of agent $i$'s feature extractor $\phi_i$ after training on task $M$. As MACPro can be applied to any value-based method, we obtain the temporal-difference error $\mathcal{L}_{TD} = [r + \gamma \max_{a'} Q^{tot}(s', a'; \theta^-) - Q^{tot}(s, a; \theta)]^2$, where $\theta^-$ are the parameters of a periodically updated target network. The overall loss term for training the agents' policies is defined as follows:

$$\mathcal{L}_{RL} = \mathcal{L}_{TD} + \alpha_{reg} \mathcal{L}_{reg}, \qquad (7)$$

where $\alpha_{reg}$ is the coefficient balancing the two loss terms.
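A small PyTorch sketch of Eqs. (6)-(7) is given below; the stand-in TD error and the linear extractors are illustrative assumptions, while $\alpha_{reg} = 500$ follows the grid-search result reported in Sec. 5.4.

```python
# A sketch of Eqs. (6)-(7): the TD loss of a value-based method plus the l2
# drift penalty keeping each agent's shared extractor phi_i near snapshot phi_i^M.
import torch

def l2_drift(extractors, snapshots):
    total = 0.0
    for net, snap in zip(extractors, snapshots):
        sq = sum((p - p_old).pow(2).sum()
                 for p, p_old in zip(net.parameters(), snap))
        total = total + torch.sqrt(sq + 1e-12)   # ||phi_i - phi_i^M||_2
    return total

extractors = [torch.nn.Linear(4, 8) for _ in range(3)]  # stand-ins for each phi_i
snapshots = [[p.detach().clone() for p in net.parameters()] for net in extractors]

td_error = torch.randn(32)   # stand-in for r + gamma * max_a' Q_tot' - Q_tot
loss_rl = td_error.pow(2).mean() + 500.0 * l2_drift(extractors, snapshots)
loss_rl.backward()           # alpha_reg = 500, the grid-search value of Sec. 5.4
```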
Figure 3: Experimental environments used in this paper. (a) Level-based foraging (LBF), where the position of the food changes across tasks, as indicated by the number on the food. (b) Predator-prey (PP), where, across tasks, the position of the landmarks, the agents' acceleration, maximum speed, and positions, and the fixed heuristic policies the prey uses are different. (c) & (d) Marines and SZ from the StarCraft Multi-Agent Challenge (SMAC), involving various numbers and types of battle agents.
**4.3** **Decentralized Task Approximation**
Although we have obtained an efficient continual learning approach for tasks that appear sequentially, it is still far from the MARL setting, as it requires the trajectory of global states to obtain the task representation, while agents in a MARL system can only acquire their local information.

Towards tackling the mentioned issue, we develop a distillation solution. Concretely, for agent $i$ with its local trajectory history $\tau^i = (o_0^i, \cdots, o_T^i)$, we design a local trajectory encoder $f_{\theta_i'}$ that is similar to the global trajectory encoder $g_\theta$. $f_{\theta_i'}$ takes $\tau^i$ as input and outputs $\mathcal{N}(\mu_{\theta_i'}(\tau^i), \sigma_{\theta_i'}^2(\tau^i))$. We optimize $f_{\theta_i'}$ by minimizing the Jeffrey's divergence between the distributions:
$$\mathcal{L}_{oracle} = \mathbb{E}_{(\tau, \tau^i) \in \mathcal{D}'} \Big[ D_J \big( \overline{g_\theta(\tau)} \, \| \, f_{\theta_i'}(\tau^i) \big) \Big], \qquad (8)$$

where $\overline{\,\cdot\,}$ denotes gradient stopping, and $\tau$ and $\tau^i$ stand for the global and local trajectories of the same task, respectively. To accelerate this learning process and make it consistent with task contextualization learning, we design a local auxiliary contrastive loss:

$$\mathcal{L}_{cont_l} = \mathbb{E}_{\tau_j, \tau_k \in \mathcal{D}'} \Big[ \mathbb{1}\{y_j = y_k\} D_J \big( f_{\theta_i'}(\tau_j^i) \, \| \, f_{\theta_i'}(\tau_k^i) \big) + \mathbb{1}\{y_j \neq y_k\} \frac{1}{D_J \big( f_{\theta_i'}(\tau_j^i) \, \| \, f_{\theta_i'}(\tau_k^i) \big) + \epsilon} \Big]. \qquad (9)$$
Figure 4: Performance comparison with baselines (MACPro, Oracle, Random, OWL, Coreset, EWC, Finetuning) on (a) LBF, (b) PP, (c) Marines, and (d) SZ. Each task is trained for 400k steps in LBF and 500k steps in the other benchmarks, and each plot indicates the average performance across all tasks seen so far.
The overall loss term of this part is:

$$\mathcal{L}_{approx} = \mathcal{L}_{oracle} + \alpha_{cont_l} \mathcal{L}_{cont_l}, \qquad (10)$$

where $\alpha_{cont_l}$ is the coefficient balancing the loss terms.
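As an illustration of the distillation objective of Eq. (8), the sketch below (assuming diagonal-Gaussian encoder outputs; shapes and names are illustrative) stops gradients through the global context so that only the local encoder $f_{\theta_i'}$ is updated.

```python
# A sketch of the distillation loss in Eq. (8) with diagonal Gaussians.
import torch

def kl_diag(mu1, var1, mu2, var2):
    return 0.5 * (torch.log(var2 / var1) + (var1 + (mu1 - mu2) ** 2) / var2 - 1).sum(-1)

def oracle_loss(g_mu, g_var, f_mu, f_var):
    # Gradient-stop the global target so that only f_{theta'_i} is trained.
    g_mu, g_var = g_mu.detach(), g_var.detach()
    return (kl_diag(g_mu, g_var, f_mu, f_var) + kl_diag(f_mu, f_var, g_mu, g_var)).mean()

B, d = 32, 8
g_mu, g_var = torch.randn(B, d), torch.rand(B, d) + 0.1   # from g_theta(tau)
f_mu = torch.randn(B, d, requires_grad=True)              # from f_theta'_i(tau^i)
f_var = (torch.rand(B, d) + 0.1).requires_grad_()
oracle_loss(g_mu, g_var, f_mu, f_var).backward()
```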
During the decentralized execution phase, agents first roll out $P$ episodes to probe the environment. Concretely, for each probing episode $p$ $(p = 1, \cdots, P)$, agents randomly choose one policy head to interact with the evaluation task, collect the trajectory $\tau_p^i$, and calculate the mean value $\mu_{\theta_i'}(\tau_p^i)$ of the trajectory representation $f_{\theta_i'}(\tau_p^i)$. Finally, each agent $i$ selects the optimal task head by comparing the distance to the $K$ existing task contextualizations as follows:
$$k^{\star i} = \arg\min_{1 \le k \le K} \ \min_{1 \le p \le P} \Big\| \mu_{\theta_i'}(\tau_p^i) - \frac{1}{bs} \sum_{j=1}^{bs} \mu_k^j \Big\|_2, \qquad (11)$$
and uses head $\psi^{k^{\star i}}$ with the feature extractor $\phi$ for testing.
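A minimal numpy sketch of this probe-then-select rule of Eq. (11) follows; the head centroids and probe contexts are synthetic stand-ins for the stored means and the locally encoded probing trajectories.

```python
# A sketch of the probe-then-select rule of Eq. (11); data is synthetic.
import numpy as np

def select_head(probe_mu, head_centroids):
    """probe_mu: (P, d) local contexts mu_{theta'_i}(tau^i_p);
    head_centroids: (K, d) stored per-head means (1/bs) sum_j mu_k^j."""
    dists = np.linalg.norm(probe_mu[:, None, :] - head_centroids[None, :, :], axis=-1)
    return int(dists.min(axis=0).argmin())   # argmin_k min_p ||.||_2

rng = np.random.default_rng(1)
head_centroids = np.stack([np.zeros(4), np.full(4, 3.0)])   # K = 2 heads
probe_mu = rng.normal(3.0, 0.2, size=(5, 4))                # P = 5 probes near head 1
print(select_head(probe_mu, head_centroids))                # -> 1
```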
#### 5 Experimental Evaluation
In this section, we design extensive experiments to answer the following questions: 1) Can our approach MACPro achieve high continual learning ability compared to other baselines in different scenarios, and how does each component influence its performance (Sec. 5.2)? 2) What task representation is learned by our approach, and how does it influence the continual learning ability (Sec. 5.3)? 3) Can MACPro be integrated into multiple cooperative MARL methods, and how does each hyperparameter influence its performance (Sec. 5.4)?
**5.1** **Environments and Baselines**
For the evaluation benchmarks, we select four multi-agent environments (see Fig. 3). Level-Based Foraging (LBF) [29] is a cooperative grid-world game in which agents are rewarded if they concurrently navigate to the food and collect it; the position of the food changes across tasks, as indicated by the number on the food. Predator-Prey (PP) [11] is another popular benchmark where agents (predators) need to chase and encounter the adversary agent (prey) to win the game; across tasks, the position of the landmarks, the agents' acceleration, maximum speed, and positions, and the fixed heuristic policies the prey uses are different. Marines and SZ are from the StarCraft Multi-Agent Challenge (SMAC) [30] and involve various numbers and types of agents.
To evaluate whether MACPro achieves good performance on these benchmarks when different tasks appear continually, we apply it to the popular value-based method QMIX [14]. Compared baselines include Finetuning, which directly tunes the learned policy on the current task; EWC [18], a regularization-based method that constrains the whole agent network from changing dramatically; and Coreset [45], which uses a shared replay buffer over all tasks so that data from old tasks can rehearse the agents during finetuning on the new task. OWL [24] is also included, as it is similar to our work but applies a bandit algorithm for head selection. To further study the head selection process, we design Random,
Figure 5: Learning results on each single task in PP.
Table 1: Ablation studies on the benchmark LBF.

| Method | Task 1 | Task 2 | Task 3 | Task 4 | Task 5 | Average |
|---|---|---|---|---|---|---|
| Ours | **1.00 ± 0.00** | **1.00 ± 0.00** | 0.80 ± 0.40 | 0.71 ± 0.38 | **1.00 ± 0.00** | **0.90 ± 0.09** |
| W/o model | 0.93 ± 0.10 | 0.68 ± 0.46 | 0.67 ± 0.47 | 0.85 ± 0.21 | 0.93 ± 0.10 | 0.81 ± 0.18 |
| W/o cont_g | **1.00 ± 0.00** | 0.67 ± 0.47 | **1.00 ± 0.00** | **0.97 ± 0.03** | 0.55 ± 0.41 | 0.84 ± 0.18 |
| W/o POE | 0.99 ± 0.01 | 0.86 ± 0.19 | 0.97 ± 0.03 | 0.74 ± 0.17 | 0.69 ± 0.44 | 0.85 ± 0.05 |
| W/o oracle | 0.21 ± 0.39 | 0.60 ± 0.49 | 0.06 ± 0.12 | 0.24 ± 0.28 | 0.24 ± 0.38 | 0.27 ± 0.07 |
| W/o cont_l | 0.98 ± 0.03 | 0.86 ± 0.19 | 0.76 ± 0.17 | 0.75 ± 0.20 | 0.97 ± 0.04 | 0.86 ± 0.06 |
| W/o cont_g,l | 0.79 ± 0.40 | 0.99 ± 0.02 | **1.00 ± 0.00** | 0.34 ± 0.42 | 0.54 ± 0.38 | 0.73 ± 0.12 |
with MACPro selecting a head randomly during testing, and Oracle, where MACPro's head selection is based on the ground-truth head information. More details about benchmarks and baselines can be seen in App. G.

**5.2** **Competitive Results and Ablations**

**Continual Learning Ability Comparison** First, we compare MACPro against the mentioned baselines to investigate the continual learning ability, as shown in Fig. 4. We can find that Finetuning achieves the most inferior performance across benchmarks, showing that a conventional reinforcement learning training paradigm is improper for continual learning scenarios. Other successful approaches for single-agent continual learning, like Coreset, EWC, and OWL, also suffer from performance degradation on the involved benchmarks, demonstrating the necessity of specific consideration for MARL settings. The Oracle baseline, which is given the ground-truth task identification when testing, can be seen as an upper bound of performance on the related benchmarks; it acquires superiority over all baselines in all benchmarks, demonstrating that a multi-head architecture can solve multimodal tasks where conventional approaches fail. Our approach, MACPro, obtains performance comparable to Oracle, indicating the efficiency of all the designed modules. Random, which selects a head randomly when testing, suffers from severe performance degradation compared with MACPro and Oracle, showing that the success of MACPro is owing to the appropriate head selection mechanism rather than to a larger network with multiple heads.

Furthermore, we display the performance on every single task in PP in Fig. 5. We can find that the baselines Finetuning, EWC, and Coreset all suffer from performance degradation on a task after training on it, i.e., catastrophic forgetting, demonstrating the necessity of specific consideration for MARL continual learning. The other baselines, OWL and Random, fail to choose the appropriate head for testing and do not perform well on all tasks. Learning the new task as quickly as Finetuning without forgetting the old ones, our method MACPro obtains excellent performance. The comparable average performance to Oracle also indicates that MACPro can accurately choose the optimal head for testing. More results can be seen in App. I.
**Ablation Studies** As MACPro is composed of multiple components, we design ablation studies on the benchmark LBF to investigate their impacts. First, for task contextualization learning, we derive W/o model by removing the forward model $h$ and its corresponding loss term $\mathcal{L}_{model}$, using only the contrastive loss to optimize the global trajectory encoder $g_\theta$. Next, instead of extracting the representation of trajectories with POE, we use the average of the Gaussian distributions generated by the transformer network as the representation, which we call W/o POE. Further, we introduce W/o oracle, which has a similar number of parameters as MACPro, to investigate whether the superiority of MACPro over QMIX is due to the increased number of parameters. Finally, we remove both the global contrastive loss $\mathcal{L}_{cont_g}$ and the local contrastive loss $\mathcal{L}_{cont_l}$ to derive W/o cont_g,l. As shown in Tab. 1, when the model loss is removed, W/o model suffers from performance degradation on most tasks, indicating the necessity of task representation learning. Furthermore, the POE mechanism also slightly influences the learning performance, demonstrating that this special integration of multiple trajectory representations can facilitate representation learning. Consequently, when the oracle loss function is removed, W/o oracle sustains great performance degradation and even fails on task 3, indicating that a simply larger network cannot fundamentally improve the performance. We also find that the contrastive learning loss has a positive effect on performance: we further design W/o cont_g and W/o cont_l by setting $\alpha_{cont_g} = 0$ and $\alpha_{cont_l} = 0$ to study the impact of the contrastive loss, and both variants suffer from performance degradation, indicating the necessity of contrastive learning.
Figure 6: Generalization results (zero-shot test mean return on the unseen tasks) of MACPro, Random, OWL, EWC, Finetuning, and Coreset on (a) LBF, (b) PP, (c) Marines, and (d) SZ.
**The Generalization Results** As we focus on training each emerging task sequentially, there is a significant risk of overfitting. What's more, the ultimate goal of continual learning agents is not only to perform well on seen tasks, but also to utilize the learned experience to complete future unseen tasks. Here, we design experiments to test the generalization ability of MACPro compared with multiple baselines. Concretely, for each benchmark, we design 20 additional tasks (details can be seen in App. G) that the agents have not encountered before and conduct zero-shot experiments on them. As shown in Fig. 6, MACPro demonstrates the most superior performance among all compared baselines, indicating its strong generalization ability thanks to the multi-agent task contextualization learning module and the decentralized task approximation procedure. Note that the baseline Oracle is not tested here, because there is no ground-truth head selection on unseen tasks.
Figure 7: Task contextualization analysis, comparing MACPro (ours), MACPro w/o adaptive expansion, and Finetuning, with panels "Merge task 3 to task 1", "Expand a new head for task 6", and "Projection at the end". Similar tasks share the same background color; e.g., task 1 and task 3 correspond to the green background. When encountering a new task, we sample latent variables generated by $g_\theta(\tau)$ and apply dimensionality reduction to them by principal component analysis (PCA) [58], denoted as △.
**5.3** **Task Contextualization Analysis**
Then, we visualize the development of continual learning performance, along with changes in task representations and factored heads, to demonstrate how our method works. Concretely, we build a task sequence with 10 tasks of the benchmark PP. As shown in Fig. 7, when $t = 1.0$M, the incoming task 3 is similar to task 1, and their latent variables are distributed in the same area (the green ellipse). Task 3 shares the same head as task 1, leading to an unchanged number of task heads. When a dissimilar task is encountered at $t = 2.5$M, none of the learned tasks are similar to the incoming task 6. The latent variables of task 6 are distributed in a new area (the red ellipse), and MACPro accordingly expands a new head. This process proceeds continuously until the learning procedure ends at $t = 5.0$M, when the latent variables of all ten tasks are distributed in four separate clusters and MACPro has four heads, respectively. The latent variables of the representations $f_{\theta_i'}(\tau^i)$ encoded by the individual trajectory encoders, denoted as ◦, are also displayed (we omit them in the first two 3D figures for simplicity). They show that the representations learned by $f_{\theta_i'}(\tau^i)$ are close to $g_\theta(\tau)$, enabling accurate decentralized task approximation and good performance.

Consequently, the top row of Fig. 7 shows the learning curve, along with the number of separate heads, which changes according to the corresponding task representations. We also compare two extreme-case methods: Finetuning holds a single head for all tasks, enjoying high scalability but severe catastrophic forgetting; on the contrary, MACPro w/o adaptive expansion maintains one head for each task and can achieve high learning efficiency, but the heads' storage cost may impede it when facing a large number of tasks. Our method, MACPro, achieves comparable or even better learning ability while consuming fewer heads, showing high learning efficiency and scalability.
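For reference, here is a tiny sketch of the projection step used in Fig. 7. The latents are synthetic stand-ins for the context means, and PCA via scikit-learn is an assumption consistent with the caption's reference to PCA [58].

```python
# A sketch of the projection used in Fig. 7; latents are synthetic stand-ins.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Four synthetic context clusters, one per learned head.
latents = np.concatenate([rng.normal(c, 0.3, size=(25, 16)) for c in range(4)])
proj = PCA(n_components=2).fit_transform(latents)   # 2-D points to scatter-plot
print(proj.shape)                                    # (100, 2)
```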
**5.4** **Integrative Abilities and Sensitivity Studies**
MACPro is agnostic to the specific value-based cooperative MARL method. Thus we can use it as a plug-in module and integrate it with existing MARL methods like VDN [13], QMIX [14], and QPLEX [59]. As shown in Tab. 2, when integrated with MACPro, the performance of the baselines vastly improves, indicating that MACPro generalizes across different methods to facilitate their continual learning ability.
As MACPro includes multiple hyperparameters, we conduct experiments on the benchmark PP to investigate how each one influences the continual learning ability. First, $\alpha_{reg}$ controls the extent of the restriction on changing the parameters of the shared feature extractor $\phi_i$. If it is too small, the dramatic change of $\phi_i$'s parameters may induce severe forgetting; if it is too large, agents remember the old tasks at the expense of not learning the new one. We thus tune each hyperparameter via grid search. As shown in Fig. 8(a), $\alpha_{reg} = 500$ is the best choice on this benchmark. Furthermore, another adjustable hyperparameter, $\alpha_{cont_g}$, influences the training of the global trajectory encoder $g_\theta$ in multi-agent task contextualization learning; Fig. 8(b) shows that $\alpha_{cont_g} = 0.1$ performs the best. In decentralized task approximation, $\alpha_{cont_l}$ balances the learning of the local trajectory encoder $f_{\theta_i'}$; we find in Fig. 8(c) that $\alpha_{cont_l} = 0.1$ performs the best. During decentralized execution, agents first probe $P$ episodes before evaluation to derive the task contextualization and select the optimal head.
Table 2: Integrative abilities.

| Envs | Method | VDN | QMIX | QPLEX |
|---|---|---|---|---|
| LBF | W/ MACPro | **0.92 ± 0.02** | **0.90 ± 0.09** | **0.97 ± 0.03** |
| LBF | W/o MACPro | 0.21 ± 0.01 | 0.20 ± 0.00 | 0.21 ± 0.01 |
| PP | W/ MACPro | **0.62 ± 0.06** | **0.80 ± 0.02** | **0.63 ± 0.05** |
| PP | W/o MACPro | 0.29 ± 0.05 | 0.27 ± 0.04 | 0.30 ± 0.03 |
The more episodes agents can probe, the more information about the evaluation task they gain. However, setting $P$ to a very large value is not practical; we find in Fig. 8(d) that $P = 20$ is enough for accurate task approximation.
Figure 8: Test results of parameter sensitivity studies: (a) sensitivity of $\alpha_{reg}$; (b) sensitivity of $\alpha_{cont_g}$; (c) sensitivity of $\alpha_{cont_l}$; (d) number of probe episodes $P$.
#### 6 Final Remarks
Observing the great significance and practicability of continual learning, this work takes a further step towards continual coordination in cooperative MARL. We first formulate this problem, where agents are trained in a centralized manner with access to global information; then, an efficient task contextualization learning module is designed to obtain effective task representations, and an adaptive dynamic network expansion technique is applied; finally, we design a local continual coordination mechanism to approximate the globally optimal task head selection. Extensive experiments demonstrate the effectiveness of our approach. To the best of our knowledge, the proposed MACPro is the first multi-agent continual
learning algorithm for cooperative multi-agent scenarios, though it still requires a heuristically designed environment process. Future work on more reasonable and efficient approaches, such as automatic environment generation, or on applying MACPro to real-world applications, would be of great value.
#### References
[1] A. Oroojlooy and D. Hajinezhad, “A review of cooperative multi-agent deep reinforcement learning,” Applied
_Intelligence, pp. 1–46, 2022._
[2] G. Sartoretti, J. Kerr, Y. Shi, G. Wagner, T. S. Kumar, S. Koenig, and H. Choset, “Primal: Pathfinding via
reinforcement and imitation multi-agent learning,” IEEE Robotics and Automation Letters, vol. 4, no. 3, pp.
2378–2385, 2019.
[3] J. Wang, W. Xu, Y. Gu, W. Song, and T. C. Green, “Multi-agent reinforcement learning for active voltage control
on power distribution networks,” in NeurIPS, 2021, pp. 3271–3284.
[4] K. Xue, J. Xu, L. Yuan, M. Li, C. Qian, Z. Zhang, and Y. Yu, “Multi-agent dynamic algorithm configuration,” in
_NeurIPS, 2022._
[5] G. Papoudakis, F. Christianos, A. Rahman, and S. V. Albrecht, “Dealing with non-stationarity in multi-agent
deep reinforcement learning,” preprint arXiv:1906.04737, 2019.
[6] J. Wang, Z. Ren, B. Han, J. Ye, and C. Zhang, “Towards understanding cooperative multi-agent q-learning with
value factorization,” in NeurIPS, 2021, pp. 29 142–29 155.
[7] F. Christianos, G. Papoudakis, M. A. Rahman, and S. V. Albrecht, “Scaling multi-agent reinforcement learning
with selective parameter sharing,” in ICML, 2021, pp. 1989–1998.
[8] C. Zhu, M. Dastani, and S. Wang, “A survey of multi-agent reinforcement learning with communication,”
_preprint arXiv:2203.08975, 2022._
[9] H. Hu, A. Lerer, A. Peysakhovich, and J. N. Foerster, "'Other-play' for zero-shot coordination," in ICML, 2020, pp. 4399–4410.
[10] J. Guo, Y. Chen, Y. Hao, Z. Yin, Y. Yu, and S. Li, "Towards comprehensive testing on the robustness of cooperative multi-agent reinforcement learning," preprint arXiv:2204.07932, 2022.
[11] R. Lowe, Y. Wu, A. Tamar, J. Harb, P. Abbeel, and I. Mordatch, "Multi-agent actor-critic for mixed cooperative-competitive environments," in NIPS, 2017, pp. 6379–6390.
[12] C. Yu, A. Velu, E. Vinitsky, J. Gao, Y. Wang, A. Bayen, and Y. Wu, “The surprising effectiveness of PPO in
cooperative multi-agent games,” in NeurIPS, 2022.
[13] P. Sunehag, G. Lever, A. Gruslys, W. M. Czarnecki, V. F. Zambaldi, M. Jaderberg, M. Lanctot, N. Sonnerat, J. Z.
Leibo, K. Tuyls, and T. Graepel, “Value-decomposition networks for cooperative multi-agent learning based on
team reward,” in AAMAS, 2018, pp. 2085–2087.
[14] T. Rashid, M. Samvelyan, C. Schroeder, G. Farquhar, J. Foerster, and S. Whiteson, “Qmix: Monotonic value
function factorisation for deep multi-agent reinforcement learning,” in ICML, 2018, pp. 4295–4304.
[15] R. Gorsane, O. Mahjoub, R. J. de Kock, R. Dubb, S. Singh, and A. Pretorius, “Towards a standardised performance evaluation protocol for cooperative MARL,” in NeurIPS, 2022.
[16] K. Khetarpal, M. Riemer, I. Rish, and D. Precup, “Towards continual reinforcement learning: A review and
perspectives,” Journal of Artificial Intelligence Research, vol. 75, pp. 1401–1476, 2022.
[17] G. I. Parisi, R. Kemker, J. L. Part, C. Kanan, and S. Wermter, “Continual lifelong learning with neural networks:
A review,” Neural Networks, vol. 113, pp. 54–71, 2019.
[18] J. Kirkpatrick, R. Pascanu, N. Rabinowitz, J. Veness, G. Desjardins, A. A. Rusu, K. Milan, J. Quan, T. Ramalho, A. Grabska-Barwinska et al., “Overcoming catastrophic forgetting in neural networks,” Proceedings of
_the national academy of sciences, vol. 114, no. 13, pp. 3521–3526, 2017._
[19] C. Kaplanis, M. Shanahan, and C. Clopath, “Policy consolidation for continual reinforcement learning,” in ICML,
2019, pp. 3242–3251.
[20] E. Lecarpentier, D. Abel, K. Asadi, Y. Jinnai, E. Rachelson, and M. L. Littman, “Lipschitz lifelong reinforcement
learning,” in AAAI, 2021, pp. 8270–8278.
[21] D. Lopez-Paz and M. Ranzato, “Gradient episodic memory for continual learning,” in NIPS, 2017, pp. 6467–
6476.
[22] L. Caccia, E. Belilovsky, M. Caccia, and J. Pineau, “Online learned continual compression with adaptive quantization modules,” in ICML, 2020, pp. 1240–1250.
[23] S. Sodhani, F. Meier, J. Pineau, and A. Zhang, “Block contextual mdps for continual learning,” in L4DC, 2022,
pp. 608–623.
[24] S. Kessler, J. Parker-Holder, P. J. Ball, S. Zohren, and S. J. Roberts, “Same state, different task: Continual
reinforcement learning without interference,” in AAAI, 2022, pp. 7143–7151.
[25] J.-B. Gaya, T. Doan, L. Caccia, L. Soulier, L. Denoyer, and R. Raileanu, “Building a subspace of policies for
scalable continual learning,” preprint arXiv:2211.10445, 2022.
[26] K. Zhang, Z. Yang, and T. Başar, “Multi-agent reinforcement learning: A selective overview of theories and algorithms,” _Handbook of Reinforcement Learning and Control_, pp. 321–384, 2021.
[27] R. Mirsky, I. Carlucho, A. Rahman, E. Fosong, W. Macke, M. Sridharan, P. Stone, and S. V. Albrecht, “A survey
of ad hoc teamwork: Definitions, methods, and open problems,” preprint arXiv:2202.10450, 2022.
[28] H. Nekoei, A. Badrinaaraayanan, A. C. Courville, and S. Chandar, “Continuous coordination as a realistic scenario for lifelong learning,” in ICML, 2021, pp. 8016–8024.
[29] G. Papoudakis, F. Christianos, L. Schäfer, and S. V. Albrecht, “Benchmarking multi-agent deep reinforcement learning algorithms in cooperative tasks,” in NeurIPS, 2021.
[30] M. Samvelyan, T. Rashid, C. S. de Witt, G. Farquhar, N. Nardelli, T. G. J. Rudner, C. Hung, P. H. S. Torr, J. N.
Foerster, and S. Whiteson, “The Starcraft multi-agent challenge,” in AAMAS, 2019, pp. 2186–2188.
[31] L. Busoniu, R. Babuska, and B. De Schutter, “A comprehensive survey of multiagent reinforcement learning,” _IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews)_, vol. 38, no. 2, pp. 156–172, 2008.
[32] A. OroojlooyJadid and D. Hajinezhad, “A review of cooperative multi-agent deep reinforcement learning,”
_preprint arXiv:1908.03963, 2019._
[33] H. Wang, Y. Yu, and Y. Jiang, “Fully decentralized multiagent communication via causal inference,” _IEEE Transactions on Neural Networks and Learning Systems_, 2022.
[34] J. Cao, L. Yuan, J. Wang, S. Zhang, C. Zhang, Y. Yu, and D.-C. Zhan, “Linda: Multi-agent local information
decomposition for awareness of teammates,” preprint arXiv:2109.12508, 2021.
[35] M. Wen, J. G. Kuba, R. Lin, W. Zhang, Y. Wen, J. Wang, and Y. Yang, “Multi-agent reinforcement learning is a
sequence modeling problem,” in NeurIPS, 2022.
[36] F. Zhang, C. Jia, Y.-C. Li, L. Yuan, Y. Yu, and Z. Zhang, “Discovering generalizable multi-agent coordination
skills from multi-task offline data,” in ICLR, 2023.
[37] X. Wang, Z. Zhang, and W. Zhang, “Model-based multi-agent reinforcement learning: Recent progress and
prospects,” preprint arXiv:2203.10603, 2022.
[38] X. Lyu, Y. Xiao, B. Daley, and C. Amato, “Contrasting centralized and decentralized critics in multi-agent
reinforcement learning,” in AAMAS, 2021, pp. 844–852.
[39] S. Zhang, L. Shen, and L. Han, “Learning meta representations for agents in multi-agent reinforcement learning,”
_preprint arXiv:2108.12988, 2021._
[40] R. Qin, F. Chen, T. Wang, L. Yuan, X. Wu, Z. Zhang, C. Zhang, and Y. Yu, “Multi-agent policy transfer via task
relationship modeling,” preprint arXiv:2203.04482, 2022.
[41] K. Xue, Y. Wang, L. Yuan, C. Guan, C. Qian, and Y. Yu, “Heterogeneous multi-agent zero-shot coordination by
coevolution,” preprint arXiv:2208.04957, 2022.
[42] M. Masana, X. Liu, B. Twardowski, M. Menta, A. D. Bagdanov, and J. van de Weijer, “Class-incremental
learning: survey and performance evaluation on image classification,” preprint arXiv:2010.15277, 2020.
[43] D. Kudithipudi, M. Aguilar-Simon, J. Babb, M. Bazhenov, D. Blackiston, J. Bongard, A. P. Brna,
S. Chakravarthi Raja, N. Cheney, J. Clune et al., “Biological underpinnings for lifelong learning machines,”
_Nature Machine Intelligence, vol. 4, no. 3, pp. 196–210, 2022._
[44] D. Rolnick, A. Ahuja, J. Schwarz, T. P. Lillicrap, and G. Wayne, “Experience replay for continual learning,” in
_NeurIPS, 2019, pp. 348–358._
[45] A. Chaudhry, M. Rohrbach, M. Elhoseiny, T. Ajanthan, P. K. Dokania, P. H. Torr, and M. Ranzato, “On tiny
episodic memories in continual learning,” preprint arXiv:1902.10486, 2019.
[46] Y. Huang, K. Xie, H. Bharadhwaj, and F. Shkurti, “Continual model-based reinforcement learning with hypernetworks,” in ICRA, 2021, pp. 799–805.
[47] S. Kessler, P. Miłoś, J. Parker-Holder, and S. J. Roberts, “The surprising effectiveness of latent world models for continual reinforcement learning,” preprint arXiv:2211.15944, 2022.
[48] S. Lee, J. Ha, D. Zhang, and G. Kim, “A neural dirichlet process mixture model for task-free continual learning,”
in ICLR, 2020.
[49] Z. Wang, C. Chen, and D. Dong, “Lifelong incremental reinforcement learning with online bayesian inference,”
_IEEE Transactions on Neural Networks and Learning Systems, vol. 33, no. 8, pp. 4003–4016, 2022._
[50] M. Wolczyk, M. Zajac, R. Pascanu, L. Kucinski, and P. Milos, “Continual world: A robotic benchmark for
continual reinforcement learning,” in NeurIPS, 2021, pp. 28 496–28 510.
[51] S. Powers, E. Xing, E. Kolve, R. Mottaghi, and A. Gupta, “Cora: Benchmarks, baselines, and metrics as a
platform for continual reinforcement learning agents,” in CoLLAs, 2022, pp. 705–743.
[52] F. A. Oliehoek and C. Amato, A Concise Introduction to Decentralized POMDPs. Springer, 2016.
[53] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, “Attention
is all you need,” in NIPS, 2017, pp. 5998–6008.
[54] G. E. Hinton, “Training products of experts by minimizing contrastive divergence,” Neural computation, vol. 14,
no. 8, pp. 1771–1800, 2002.
[55] F.-M. Luo, T. Xu, H. Lai, X.-H. Chen, W. Zhang, and Y. Yu, “A survey on model-based reinforcement learning,”
_preprint arXiv:2206.09328, 2022._
[56] S. Chopra, R. Hadsell, and Y. LeCun, “Learning a similarity metric discriminatively, with application to face
verification,” in CVPR, 2005, pp. 539–546.
[57] H. Jeffreys, The Theory of Probability. OUP Oxford, 1998.
[58] S. Wold, K. Esbensen, and P. Geladi, “Principal component analysis,” Chemometrics and Intelligent Laboratory
_Systems, vol. 2, no. 1-3, pp. 37–52, 1987._
[59] J. Wang, Z. Ren, T. Liu, Y. Yu, and C. Zhang, “QPLEX: duplex dueling multi-agent q-learning,” in ICLR, 2021.
[60] K. Son, D. Kim, W. J. Kang, D. E. Hostallero, and Y. Yi, “QTRAN: Learning to factorize with transformation
for cooperative multi-agent reinforcement learning,” in ICML, 2019, pp. 5887–5896.
[61] K. Cho, B. Van Merriënboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio, “Learning phrase representations using RNN encoder-decoder for statistical machine translation,” preprint arXiv:1406.1078, 2014.
[62] S. Hu, F. Zhu, X. Chang, and X. Liang, “Updet: Universal multi-agent reinforcement learning via policy decoupling with transformers,” in ICLR, 2021.
[63] S. Iqbal, C. A. S. De Witt, B. Peng, W. Böhmer, S. Whiteson, and F. Sha, “Randomized entity-wise factorization for multi-agent reinforcement learning,” in ICML, 2021, pp. 4596–4606.
### Appendix
#### G Details About Baselines and Benchmarks
This part gives a detailed description of the relevant baselines, the benchmarks, and the three related value-based cooperative MARL methods.
**G.1** **Baselines**
**Finetuning is a simple method based on a single feature extraction model and policy head to learn a sequence of tasks,**
ignoring the changes in tasks and directly tuning the learned policy on the current task. However, if the current task
is different from the previous ones, the parameters of the policy network would change dramatically to acquire good
performance on the current task, thus inducing the phenomenon of catastrophic forgetting.
(a) 5x5 grid (b) 6x6 grid

Figure 9: Benchmark LBF used in this paper.

Figure 10: Benchmark PP used in this paper, where the prey's policy is to run in the opposite direction of the nearest predator.

**EWC** [18] is one of the regularization-based approaches to address the catastrophic forgetting problem. Concretely, it tries to maintain expertise on old tasks by selectively slowing down learning on the weights that are important for them. Specifically, the loss function for learning the current task M is

$$\mathcal{L}(\theta) = \mathcal{L}_M(\theta) + \frac{\lambda}{2} \sum_j F_j \left(\theta_j - \theta_{M-1,j}\right)^2, \tag{12}$$

where L_M(θ) is the loss for task M only, and F_j is the j-th diagonal element of the Fisher information matrix F. θ_{M−1} is the saved snapshot of θ after training task M−1, and j labels each parameter. λ is an adjustable coefficient that controls the trade-off between the current task and previous ones. In this paper, we set θ to the parameters of the agents' Q network and calculate the Fisher information matrix F with the temporal-difference error.
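As a concrete illustration, the penalty of Eq. 12 can be written in a few lines of PyTorch. This is a minimal sketch rather than the original EWC implementation; `fisher_diag` and `theta_old` are hypothetical names for the stored Fisher diagonal and parameter snapshot:

```python
import torch

def ewc_loss(task_loss, model, fisher_diag, theta_old, lam):
    """Eq. 12: L(theta) = L_M(theta) + (lam / 2) * sum_j F_j * (theta_j - theta_{M-1,j})^2.

    fisher_diag / theta_old: dicts mapping parameter names to tensors saved
    after training the previous task (hypothetical storage format).
    """
    penalty = 0.0
    for name, param in model.named_parameters():
        if name in fisher_diag:  # only penalize parameters seen in the old task
            penalty = penalty + (fisher_diag[name] * (param - theta_old[name]) ** 2).sum()
    return task_loss + 0.5 * lam * penalty
```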
Unlike EWC, which constrains the change of network parameters when learning a new task, **Coreset** [45], one of the replay-based methods, prevents catastrophic forgetting by choosing and storing a significantly smaller subset of data from previous tasks. When learning the current task, the stored data is also utilized for training the policy, which is thereby expected to remember the previous tasks. In this paper, we set the replay buffer to uniformly store trajectories of all seen tasks, including the current one; a small batch of trajectories from one randomly chosen task is sampled from the buffer to train the agents' network.
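For concreteness, a minimal sketch of such a per-task replay buffer is given below; the reservoir-sampling detail and the class interface are illustrative assumptions, not the codebase's actual buffer:

```python
import random
from collections import defaultdict

class CoresetBuffer:
    """Stores a small, uniformly sampled subset of trajectories per seen task."""

    def __init__(self, per_task_capacity=500):
        self.per_task_capacity = per_task_capacity
        self.storage = defaultdict(list)  # task_id -> list of trajectories
        self.seen = defaultdict(int)      # task_id -> number of pushes so far

    def push(self, task_id, trajectory):
        self.seen[task_id] += 1
        buf = self.storage[task_id]
        if len(buf) < self.per_task_capacity:
            buf.append(trajectory)
        else:  # reservoir sampling keeps a uniform subset of all pushes
            j = random.randrange(self.seen[task_id])
            if j < self.per_task_capacity:
                buf[j] = trajectory

    def sample(self, batch_size):
        task_id = random.choice(list(self.storage))  # one randomly chosen task
        buf = self.storage[task_id]
        return random.sample(buf, min(batch_size, len(buf)))
```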
**OWL [24] is a recent approach that learns a multi-head architecture and achieves high learning efficiency when the**
tasks in a sequence have conflicting goals. Specifically, it learns a factorized policy with a shared feature extractor but
separate heads, each specializing in only one task. With a similar architecture to our method MACPro, we can apply
it to learn task sequences in a continual manner. During testing, OWL uses bandit algorithms to find the policy that
achieves the highest test-task reward. However, this strategy could bring performance degradation, since agents choose actions uniformly at random at the beginning of the episodes.
**G.2** **Benchmarks**
(a) task 5m_vs_6m in Marines series (b) task 2s3z in SZ series

Figure 11: Benchmark Marines and SZ used in this paper.

We select four multi-agent environments as the evaluation benchmarks. **Level Based Foraging (LBF)** [29] is a cooperative grid-world game (see Fig. 9), where the positions of two agents and one food item are represented by discrete states, and agents are randomly spawned at cells (0, 0), (0, 1), (1, 0), (1, 1). Each agent observes the relative position of other agents and the food, moves a single cell in one of the four directions (up, left, down, right), and gains reward 1 if and only if both agents navigate to the food and are at a distance of one cell from it. In the continual learning ability comparison, we design 5 tasks in a 5x5 grid, with the food at cells (0, 4), (2, 4), (4, 4), (4, 2), (4, 0) (green food in Fig. 9 (a)), respectively. To test the generalization ability of different methods, we further design 20 tasks in both the 5x5 and 6x6 grids, where the food positions are changed as well (red food in Fig. 9 (a)(b)).
**Predator Prey (PP)** [11] is another popular benchmark in which three agents (predators) need to chase the adversary agent (prey) and collide with it to win the game (see Fig. 10). Here agents and landmarks are represented by circles of different sizes, and colliding means the circles intersect. The positions of the two fixed landmarks, together with the positions and speeds of the predators and the prey, are encoded into continuous states. The predators and the prey can accelerate in one of the four directions (up, left, down, right). Across tasks, the positions of the landmarks, the predators' and prey's acceleration, maximum speed, and spawn areas, and the fixed heuristic policy the prey uses are different. Specifically, the prey (1) runs in the opposite direction of the nearest predator, (2) stays still at a position far away from the predators, (3) runs towards the nearest predator, or (4) runs in a random direction at great speed. Predators gain reward 1 if n of them collide with the prey at the same time (n = 1 in cases (1)(2)(4) and n = 2 in case (3)). In the generalization test, for each original task, we create one corresponding additional task by adding a constant ξ to the original value of different task parameters, including the landmark's size, x-coordinate, and y-coordinate, and the predators' and prey's size, acceleration, and maximum speed. We set ξ = ±0.01, ±0.02, ±0.03 on the four original tasks to derive 20 additional tasks for testing the generalization ability.
The other benchmarks are two task series, named Marines and SZ (Fig. 11), from the StarCraft Multi-Agent Challenge (SMAC) [30], involving various numbers of Marines and Stalkers/Zealots in two camps, respectively. The goal of the multi-agent algorithm is to control one of the camps to defeat the other. Agents receive a positive reward signal by causing damage to enemies, killing enemies, and winning the battle; conversely, agents receive a negative reward signal when they take damage from enemies, get killed, or lose the battle. Each agent observes information about the map within a circular area around it and takes actions, including moving and firing, while it is alive. In the continual learning ability comparison, the Marines series consists of (1) 5m_vs_6m, (2) 13m, (3) 4m, and (4) 8m_vs_9m, and the SZ series consists of (1) 2s1z_vs_3z, (2) 2s3z, (3) 3s5z, and (4) 2s2z_vs_4s, where m stands for marine, which can attack an enemy unit from a long distance; s stands for stalker, which attacks like a marine and has a self-regenerating shield; and z stands for zealot, which also has a self-regenerating shield but can only attack an enemy unit from a short distance. For the generalization test, we first decrease the default sight range and shoot range by 1 to create four additional tasks for both Marines and SZ. Then, we design scenarios {3m, 5m, 6m, 7m, 8m, 9m, 10m, 11m, 12m, 4m_vs_5m, 6m_vs_7m, 7m_vs_8m, 9m_vs_10m, 10m_vs_11m, 11m_vs_12m, 12m_vs_13m} for Marines, and scenarios {1s1z, 1s2z, 1s3z, 2s1z, 2s2z, 2s4z, 3s1z, 3s2z, 3s3z, 3s4z, 4s2z, 4s3z, 4s4z, 2s2z_vs_4z, 3s3z_vs_6s, 4s4z_vs_8z} for SZ. If "vs" appears in a task name, the two camps are asymmetric, e.g., in 5m_vs_6m there are 5 marines in our camp and 6 enemy marines; otherwise the two camps are symmetric, e.g., in 2s3z there are 2 stalkers and 3 zealots in both camps.
**G.3** **Value Function factorization MARL methods**
As we investigate the integrative abilities of MACPro in the manuscript, here we introduce the value-based methods
used in this paper, including VDN [13], QMIX [14], and QPLEX [59]. The difference among the three methods lies
in the mixing networks, with increasing representational complexity. Our proposed framework MACPro follows the
_Centralized Training with Decentralized Execution (CTDE) paradigm used in value-based MARL methods._
These three methods all follow the Individual-Global-Max (IGM) [60] principle, which asserts the consistency between joint and local greedy action selections by the joint value function Q_tot(τ, a) and the individual value functions {Q_i(τ^i, a^i)}_{i=1}^n:

$$\forall \boldsymbol{\tau} \in \mathbf{T}, \quad \arg\max_{\mathbf{a} \in \mathbf{A}} Q_{tot}(\boldsymbol{\tau}, \mathbf{a}) = \left(\arg\max_{a^1 \in A} Q_1\left(\tau^1, a^1\right), \ldots, \arg\max_{a^n \in A} Q_n\left(\tau^n, a^n\right)\right). \tag{13}$$

(a) Mixing Network (b) Overall Structure (c) Agent Network

Figure 12: The overall structure of QMIX. (a) The detailed structure of the mixing network, whose weights and biases are generated from a hyper-net (red) which takes the global state as the input. (b) QMIX is composed of a mixing network and several agent networks. (c) The detailed structure of the individual agent network.
**VDN** [13] factorizes the global value function Q_tot^VDN(τ, a) as the sum of all the agents' local value functions {Q_i(τ^i, a^i)}_{i=1}^n:

$$Q_{tot}^{\mathrm{VDN}}(\boldsymbol{\tau}, \mathbf{a}) = \sum_{i=1}^{n} Q_i\left(\tau^i, a^i\right). \tag{14}$$
**QMIX** [14] extends VDN by factorizing the global value function Q_tot^QMIX(τ, a) as a monotonic combination of the agents' local value functions {Q_i(τ^i, a^i)}_{i=1}^n:

$$\forall i \in \mathcal{N}, \quad \frac{\partial Q_{tot}^{\mathrm{QMIX}}(\boldsymbol{\tau}, \mathbf{a})}{\partial Q_i\left(\tau^i, a^i\right)} > 0. \tag{15}$$
(a) Mixing Network (b) Individual Q Network

Figure 13: Network architecture used in Marines and SZ.

We mainly implement MACPro on QMIX for its proven performance in various papers; its overall structure is shown in Fig. 12. QMIX uses a hyper-net conditioned on the global state to generate the weights and biases applied to the local Q-values, and uses an absolute-value operation to keep the weights positive so as to guarantee monotonicity.
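To make Eqs. 14 and 15 concrete, the sketch below contrasts the VDN sum with a single-layer QMIX-style monotonic mixer. This simplified mixer (one layer, no hidden mixing layer) is an illustrative reduction of the architecture in Fig. 12, not the exact implementation:

```python
import torch
import torch.nn as nn

def vdn_mix(local_qs):
    """Eq. 14: Q_tot is the plain sum of local Q-values, (batch, n) -> (batch, 1)."""
    return local_qs.sum(dim=-1, keepdim=True)

class QmixMixer(nn.Module):
    """One-layer monotonic mixer: abs() on hyper-net weights enforces dQ_tot/dQ_i > 0 (Eq. 15)."""

    def __init__(self, n_agents, state_dim):
        super().__init__()
        self.hyper_w = nn.Linear(state_dim, n_agents)  # hyper-net generates mixing weights
        self.hyper_b = nn.Linear(state_dim, 1)         # hyper-net generates the bias

    def forward(self, local_qs, state):
        w = torch.abs(self.hyper_w(state))             # positive weights -> monotonicity
        b = self.hyper_b(state)
        return (w * local_qs).sum(dim=-1, keepdim=True) + b
```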
The above two structures propose two sufficient conditions of the IGM principle for factorizing the global value function, but these conditions are not necessary. To achieve a complete IGM function class, QPLEX [59] uses a duplex dueling network architecture that decomposes the global value function as:

$$Q_{tot}^{\mathrm{QPLEX}}(\boldsymbol{\tau}, \mathbf{a}) = V_{tot}(\boldsymbol{\tau}) + A_{tot}(\boldsymbol{\tau}, \mathbf{a}) = \sum_{i=1}^{n} Q_i\left(\boldsymbol{\tau}, a^i\right) + \sum_{i=1}^{n} \left(\lambda^i(\boldsymbol{\tau}, \mathbf{a}) - 1\right) A_i\left(\boldsymbol{\tau}, a^i\right), \tag{16}$$

where λ^i(τ, a) is a weight depending on the joint history and action, and A_i(τ, a^i) is the advantage function conditioned on the history information of each agent. QPLEX aims to find the monotonic property between the individual Q function and the individual advantage function.
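The final combination in Eq. 16 is itself a one-liner once the positive weights λ^i are given; the sketch below abstracts away how QPLEX's attention module produces them:

```python
import torch

def qplex_combine(local_qs, local_advs, lambdas):
    """Eq. 16: Q_tot = sum_i Q_i + sum_i (lambda_i - 1) * A_i.

    local_qs, local_advs, lambdas: tensors of shape (batch, n_agents);
    lambdas are assumed positive (produced by an attention module in QPLEX).
    """
    return (local_qs + (lambdas - 1.0) * local_advs).sum(dim=-1, keepdim=True)
```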
#### H The Architecture, Infrastructure, and Hyperparameter Choices of MACPro
We give a detailed description of the network architecture, the overall flow, and the hyperparameters of MACPro here.
**H.1** **Network Architecture**
We here give details about multiple neural networks in (1) agent networks, (2) task contextualization learning, and (3)
decentralized task approximation.
In benchmark LBF and PP, the number of agents, the dimension of state, observation, and action remains unchanged
in different tasks. Specifically, for (1) agent networks, we apply the technique of parameter sharing and design the
feature extractor φ as a 5-layer MLP and a GRU [61]. The hidden dimension is 128 for the MLP and 64 for the GRU.
Then, each separated head is a linear layer which takes the output of the feature extractor as input and outputs the
Q-value of all actions. For (2) task contextualization learning, we design a global trajectory encoder g_θ and a context-aware forward model h. g_θ consists of a transformer encoder, an MLP, and a POE module. The 6-layer transformer encoder takes a trajectory τ = (s_1, · · ·, s_T) as input and outputs T 32-dimensional embeddings. Then, the 3-layer MLP transforms these embeddings into the means and standard deviations of T Gaussians. Finally, the POE module acquires the joint representation of the trajectory, which is also a Gaussian distribution N(µ_θ(τ), σ_θ²(τ)). The context-aware forward model h is a 3-layer MLP that takes as input the concatenation of the current state, local observations, actions, and a task contextualization sampled from the joint task distribution, and outputs the next state, next local observations, and reward. The hidden dimension is 64, and the reconstruction loss is calculated by mean squared error. For (3) decentralized task approximation, the local trajectory encoders f_{θ′_i} (i = 1, · · ·, n) have the same structure as the global trajectory encoder g_θ.
In benchmarks Marines and SZ, a new difficulty arises since the number of agents and the dimensions of state, observation, and action can vary from task to task, making the networks used in LBF and PP fail to work. Inspired by the popularly used population-invariant network (PIN) technique in MARL [40, 62, 63], we design a different feature extractor and head, and a monotonic mixing network [14] that learns the global Q-value as a combination of local Q-values. For the feature extractor (see Fig. 13), we decompose the observation o_i into different parts, including agent i's own information o_i^{own}, ally information o_i^{al}, and enemy information o_i^{en}. Then we feed them into attention networks to derive a fixed-dimension embedding e:
$$
\begin{aligned}
q &= \mathrm{MLP}_q\left(o_i^{own}\right), \\
\mathbf{K}_{al} &= \mathrm{MLP}_{K_{al}}\left(\left[o_i^{al_1}, \ldots, o_i^{al_j}, \ldots\right]\right), \\
\mathbf{V}_{al} &= \mathrm{MLP}_{V_{al}}\left(\left[o_i^{al_1}, \ldots, o_i^{al_j}, \ldots\right]\right), \\
e_{al} &= \mathrm{softmax}\left(q\mathbf{K}_{al}^{\mathsf{T}} / \sqrt{d_k}\right)\mathbf{V}_{al}, \\
\mathbf{K}_{en} &= \mathrm{MLP}_{K_{en}}\left(\left[o_i^{en_1}, \ldots, o_i^{en_j}, \ldots\right]\right), \\
\mathbf{V}_{en} &= \mathrm{MLP}_{V_{en}}\left(\left[o_i^{en_1}, \ldots, o_i^{en_j}, \ldots\right]\right), \\
e_{en} &= \mathrm{softmax}\left(q\mathbf{K}_{en}^{\mathsf{T}} / \sqrt{d_k}\right)\mathbf{V}_{en}, \\
e &= \left[\mathrm{MLP}\left(o_i^{own}\right), e_{al}, e_{en}\right],
\end{aligned} \tag{17}
$$
where [·, ·] is the vector concatenation operation, d_k is the dimension of the query vector, and bold symbols denote matrices. The embedding e is then fed into an MLP and a GRU to derive the output of the feature extractor φ_i. Finally, the output is fed into the policy head, a 3-layer MLP, to derive the Q-value. Furthermore, the dimension of the state can also vary in Marines and SZ. In the same way we deal with observations, the state s is decomposed into ally information s_i^{al} and enemy information s_j^{en}; their embeddings are fed into an attention network to derive a fixed-dimension embedding e_s. Finally, we feed e_s into the original mixing network whose structure is used in benchmarks LBF and PP.
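A minimal PyTorch sketch of the attention pooling in Eq. 17, written for the ally branch (the enemy branch is identical); module and dimension names are illustrative rather than taken from the released code:

```python
import math
import torch
import torch.nn as nn

class EntityAttentionPool(nn.Module):
    """Pools a variable-size set of entity features into one fixed-size embedding (Eq. 17)."""

    def __init__(self, own_dim, entity_dim, d_k=8, d_v=64):
        super().__init__()
        self.q_net = nn.Linear(own_dim, d_k)      # query from the agent's own info
        self.k_net = nn.Linear(entity_dim, d_k)   # keys from each ally/enemy entity
        self.v_net = nn.Linear(entity_dim, d_v)   # values from each entity
        self.d_k = d_k

    def forward(self, own, entities):
        # own: (batch, own_dim); entities: (batch, n_entities, entity_dim), n_entities may vary
        q = self.q_net(own).unsqueeze(1)                         # (batch, 1, d_k)
        k = self.k_net(entities)                                 # (batch, n, d_k)
        v = self.v_net(entities)                                 # (batch, n, d_v)
        attn = torch.softmax(q @ k.transpose(1, 2) / math.sqrt(self.d_k), dim=-1)
        return (attn @ v).squeeze(1)                             # (batch, d_v), fixed size
```

Because the softmax pools over however many entities are present, the same module handles tasks with different agent counts.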
Besides the networks mentioned above, the global trajectory encoder g_θ, the forward model h, and the local trajectory encoders f_{θ′_i} are also affected by this issue. For g_θ and f_{θ′_i}, we first apply the same technique to derive fixed-dimension embeddings of states and observations, and then feed them into the transformer encoders. For the forward model h, we treat each agent's action as part of its own observation and feed their concatenation into the attention network to derive an embedding, which is then fed into h together with the embedding of the state and the task contextualization. Then, h outputs a fixed-dimension embedding, which we decode into the next state, local observations, and reward with task-specific MLP decoders to calculate the reconstruction loss L_model.
**Algorithm 1 MACPro: Training**
**Input:** Task sequence Y = {task 1, · · ·, task M}
**Initialize:** trajectory encoder g_θ, forward model h, individual trajectory encoders f_{θ′_{1:n}}, agents' feature extractors φ_{1:n}
1: for m = 1, · · ·, M do
2:   Set up task m
3:   if m = 1 then
4:     ψ¹_{1:n} ← Initialized new head
5:   else
6:     // Dynamic Network Expansion
7:     Calculate l, l′ according to Equation 5
8:     Find k* = arg min_{1≤k≤K} l′_k
9:     if l′_{k*} ≤ λ_new · l_{k*} then
10:      Merge task m into the task(s) that the heads ψ^{k*}_{1:n} take charge of, ψ^m_{1:n} ← ψ^{k*}_{1:n}
11:    else
12:      ψ^m_{1:n} ← Initialized new head
13:    end if
14:  end if
15:  (Optional) Reset the ϵ-greedy schedule
16:  for t = 1, · · ·, T_{task m} do
17:    Collect trajectories with {φ_{1:n}, ψ^m_{1:n}}, store them in buffers D, D′
18:    Update {φ_{1:n}, ψ^m_{1:n}} according to L_RL
19:    if t mod κ₁ = 0 then
20:      // Task Contextualization Learning
21:      Train g_θ, h according to L_context
22:      Train f_{θ′_{1:n}} according to L_approx
23:    end if
24:    if t mod κ₂ = 0 then
25:      Save head ψ^m_{1:n}
26:    end if
27:  end for
28:  Evaluate tasks 1, · · ·, m according to Alg. 2
29:  Empty the experience replay buffer D = ∅
30: end for
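As a sketch of the expansion test in Lines 6∼13, the routine below assumes the per-head losses l and l′ from Equation 5 (not reproduced in this appendix) are computed elsewhere and passed in; the factory `make_new_head` stands for either head-initialization strategy described in Sec. H.2:

```python
def decide_head(l, l_prime, heads, lam_new=1.5, make_new_head=None):
    """Dynamic network expansion (Alg. 1, Lines 7-13).

    l, l_prime: lists of per-head losses from Equation 5 (computed elsewhere).
    heads: existing task heads; lam_new: merge threshold from Table 3.
    make_new_head: factory for a fresh head (copy-last or random re-init).
    """
    k_star = min(range(len(heads)), key=lambda k: l_prime[k])
    if l_prime[k_star] <= lam_new * l[k_star]:
        return heads[k_star]          # merge: reuse the closest existing head
    return make_new_head()            # expand: initialize a new head
```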
**H.2** **The Overall Flow of MACPro**
To illustrate the training process, the overall training flow of MACPro is shown in Alg. 1. Lines 3∼14 express the process of dynamic agent network expansion, where we use the task contextualization to decide whether to initialize a new head for the current task. To initialize a new head (Line 12), we propose two strategies: one is to copy the parameters of the head learned from the last task, and the other is to construct an entirely new head by resetting the parameters randomly. We use the first strategy in LBF, PP, and Marines, and the second one in SZ; both strategies work well in the experiments. Then, we start training on the new task. In Line 15, we can choose to reset the ϵ-greedy schedule to enhance exploration (adopted in SZ) or not (adopted in LBF, PP, Marines), where the schedule decays ϵ from 1 to 0.05 over 50K timesteps. Next, we iteratively update the parameters of each component in Lines 16∼27, where we also save the current head. Finally, we test all seen tasks, empty the replay buffer for the current task, and switch to the next task.

Besides, the execution flow of MACPro is shown in Alg. 2. In the execution phase, for each testing task, agents first roll out P episodes to probe the environment and derive the context (Lines 3∼7). With the gathered local information, each agent independently selects an optimal head to perform on this task (Lines 8, 9).
**Algorithm 2 MACPro: Execution**
**Input:** Task sequence {task 1, · · ·, task M}, feature extractors φ_{1:n}, heads {ψ^k_{1:n}}_{k=1}^K
**Parameter:** Number of probing episodes P
1: for m = 1, · · ·, M do
2:   Set up task m
3:   for p = 1, · · ·, P do
4:     Randomly choose an integer k from {1, · · ·, K}
5:     Agents collect one trajectory τ_p with {φ_{1:n}, ψ^k_{1:n}}
6:     Each agent i calculates the mean value µ_{θ′_i}(τ^i_p) of the trajectory representation f_{θ′_i}(τ^i_p)
7:   end for
8:   Each agent i selects the optimal head ψ^{k*_i}_i, where k*_i = arg min_{1≤k≤K} min_{1≤p≤P} ||µ_{θ′_i}(τ^i_p) − (1/bs) Σ^{bs}_{j=1} µ^j_k||₂
9:   Agents test with {φ_i, ψ^{k*_i}_i}^n_{i=1}
10: end for
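For concreteness, Line 8 of Alg. 2 can be sketched as a small NumPy routine; the array names are illustrative, with `mu_probe` holding the P probing-trajectory means of one agent and `head_centroids` holding each head's stored centroid (1/bs) Σ_j µ_k^j:

```python
import numpy as np

def select_head(mu_probe, head_centroids):
    """Alg. 2, Line 8: pick k minimizing min_p ||mu(tau_p) - centroid_k||_2.

    mu_probe:       (P, z_dim) means of the P probing trajectories.
    head_centroids: (K, z_dim) per-head centroids over bs stored trajectories.
    """
    # distance of every probe to every centroid: shape (P, K)
    dists = np.linalg.norm(mu_probe[:, None, :] - head_centroids[None, :, :], axis=-1)
    return int(dists.min(axis=0).argmin())  # best head over all probes
```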
Table 3: Hyperparameters in experiment.
Hyperparameter Value
Number of testing episodes 32
_P (Number of probing episodes)_ 20
Learning rate for updating networks 0.0005
Number of heads in transformer encoders 3
Number of layers in transformer encoders 6
_bs (Batch size of the sampled trajectories)_ 32
_λnew (Threshold for merging similar tasks)_ 1.5
Hidden Dim (Dimension of hidden layers) 64
Attn Dim (Dimension of Key in attention) 8
Entity Dim (Dimension of Value in attention) 64
Z Dim (Dimension of the encoded Gaussians) 32
_κ1 (Interval (steps) of updating both encoders)_ 1000
_κ2 (Interval (steps) of saving the learning head)_ 10000
_αcontl (Coefficient of local contrastive loss Lcontl_ ) 0.1
_αcontg (Coefficient of global contrastive loss Lcontg_ ) 0.1
_αreg (Coefficient of the l2- regularization Lreg on φ)_ 500
Buffer Size of D (Maximum number of trajectories in D) 5000
Buffer Size of D′ (Maximum number of trajectories in D′) 5000
**H.3** **Hyperparameters Choices**
Our implementation of MACPro is based on the PyMARL[2] [30] codebase with StarCraft 2.4.6.2.69232 and uses its
default hyper-parameter settings. For example, the discount factor used to calculate the temporal-difference error is set to the default value of 0.99. The selection of the additional hyperparameters introduced in our approach, e.g., the time
interval of saving the heads, is listed in Tab. 3. We use this set of parameters in all experiments shown in this paper
except for the ablations.
#### I The Complete Continual Learning Results
In this part, we compare MACPro against the multiple baselines and ablations mentioned above to investigate the continual learning ability, and display the performance on every single task seen so far in Fig. 14∼16.
[2] https://github.com/oxwhirl/pymarl
Figure 15 and Figure 16: per-task performance on every task seen so far (legend: MACPro, Oracle).
#### J Product of a Finite Number of Gaussians
Suppose we have N Gaussian experts with means µ_{i1}, µ_{i2}, · · ·, µ_{iN} and variances σ²_{i1}, σ²_{i2}, · · ·, σ²_{iN}, respectively. Then the product distribution is still Gaussian, with mean µ_i and variance σ_i²:

$$\mu_i = \left(\frac{\mu_{i1}}{\sigma_{i1}^2} + \frac{\mu_{i2}}{\sigma_{i2}^2} + \cdots + \frac{\mu_{iN}}{\sigma_{iN}^2}\right)\sigma_i^2, \qquad \frac{1}{\sigma_i^2} = \frac{1}{\sigma_{i1}^2} + \frac{1}{\sigma_{i2}^2} + \cdots + \frac{1}{\sigma_{iN}^2}. \tag{18}$$
It can be proved by induction.
_Proof._ We want to prove that Eqn. 18 holds for all N ≥ 2.
- Base case: Suppose N = 2 and p₁(x) = N(x | µ₁, σ₁²), p₂(x) = N(x | µ₂, σ₂²); then

$$
\begin{aligned}
p_1(x)p_2(x) &= \frac{1}{\sqrt{2\pi}\sigma_1}\exp\left(-\frac{(x-\mu_1)^2}{2\sigma_1^2}\right)\cdot\frac{1}{\sqrt{2\pi}\sigma_2}\exp\left(-\frac{(x-\mu_2)^2}{2\sigma_2^2}\right) \\
&= \frac{1}{2\pi\sigma_1\sigma_2}\exp\left(-\left(\frac{(x-\mu_1)^2}{2\sigma_1^2}+\frac{(x-\mu_2)^2}{2\sigma_2^2}\right)\right) \\
&= \frac{1}{2\pi\sigma_1\sigma_2}\exp\left(-\frac{x^2 - 2\frac{\mu_1\sigma_2^2+\mu_2\sigma_1^2}{\sigma_1^2+\sigma_2^2}x + \frac{\mu_1^2\sigma_2^2+\mu_2^2\sigma_1^2}{\sigma_1^2+\sigma_2^2}}{2\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2}}\right) \\
&= \frac{1}{2\pi\sigma_1\sigma_2}\exp\left(-\frac{\left(x - \frac{\mu_1\sigma_2^2+\mu_2\sigma_1^2}{\sigma_1^2+\sigma_2^2}\right)^2}{2\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2}} - \frac{(\mu_1-\mu_2)^2}{2(\sigma_1^2+\sigma_2^2)}\right) \\
&= \frac{1}{\sqrt{2\pi(\sigma_1^2+\sigma_2^2)}}\exp\left(-\frac{(\mu_1-\mu_2)^2}{2(\sigma_1^2+\sigma_2^2)}\right)\cdot\frac{1}{\sqrt{2\pi}\,\frac{\sigma_1\sigma_2}{\sqrt{\sigma_1^2+\sigma_2^2}}}\exp\left(-\frac{\left(x - \frac{\mu_1\sigma_2^2+\mu_2\sigma_1^2}{\sigma_1^2+\sigma_2^2}\right)^2}{2\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2}}\right) \\
&= A\cdot\frac{1}{\sqrt{2\pi}\,\frac{\sigma_1\sigma_2}{\sqrt{\sigma_1^2+\sigma_2^2}}}\exp\left(-\frac{\left(x - \frac{\mu_1\sigma_2^2+\mu_2\sigma_1^2}{\sigma_1^2+\sigma_2^2}\right)^2}{2\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2}}\right).
\end{aligned} \tag{19}
$$

Eqn. 19 can be seen as the PDF of N(µ, σ²) times a constant A, where µ = (µ₁/σ₁² + µ₂/σ₂²)σ² and 1/σ² = 1/σ₁² + 1/σ₂².
- Induction step: Suppose the claim holds for N = n, so that the product of n Gaussian experts has mean µ̃ = (µ₁/σ₁² + · · · + µₙ/σₙ²)σ̃² and variance 1/σ̃² = 1/σ₁² + · · · + 1/σₙ². Then, for n + 1 Gaussian experts:

$$
\frac{1}{\sigma^2} = \frac{1}{\tilde\sigma^2} + \frac{1}{\sigma_{n+1}^2} = \frac{1}{\sigma_1^2} + \cdots + \frac{1}{\sigma_n^2} + \frac{1}{\sigma_{n+1}^2}, \qquad
\mu = \left(\frac{\tilde\mu}{\tilde\sigma^2} + \frac{\mu_{n+1}}{\sigma_{n+1}^2}\right)\sigma^2 = \left(\frac{\mu_1}{\sigma_1^2} + \cdots + \frac{\mu_n}{\sigma_n^2} + \frac{\mu_{n+1}}{\sigma_{n+1}^2}\right)\sigma^2. \tag{20}
$$
- Eqn. 18 has been proved by the above derivation.
If we write T_{ij} = (σ²_{ij})^{−1}, Eqn. 18 can be written as:

$$
\mu_i = \left(\sum_{j=1}^{N} \mu_{ij} T_{ij}\right)\left(\sum_{j=1}^{N} T_{ij}\right)^{-1}, \qquad
\sigma_i^2 = \left(\sum_{j=1}^{N} T_{ij}\right)^{-1}, \tag{21}
$$

which is exactly what we set out to prove.
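For reference, Eq. 21 translates directly into the precision-weighted averaging computed by the POE module; a minimal NumPy sketch under that reading:

```python
import numpy as np

def product_of_gaussians(mus, sigmas_sq):
    """Product of N Gaussian experts (Eqs. 18/21) via precisions T_j = 1/sigma_j^2."""
    precisions = 1.0 / np.asarray(sigmas_sq)
    sigma_sq = 1.0 / precisions.sum()                      # Eq. 21, second line
    mu = (np.asarray(mus) * precisions).sum() * sigma_sq   # Eq. 21, first line
    return mu, sigma_sq

# e.g. two experts N(0, 1) and N(2, 1) multiply to N(1, 0.5)
print(product_of_gaussians([0.0, 2.0], [1.0, 1.0]))
```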
|
{
"disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/2305.13937, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": null,
"url": ""
}
| 2023
|
[
"JournalArticle"
] | false
| 2023-05-07T00:00:00
|
[
{
"paperId": "3bfff39e9db142b56de1ab87b051fa68d04881e0",
"title": "A communication-based identification of critical drones in malicious drone swarm networks"
},
{
"paperId": "24d67d0f938b74393b05315be2f63422a64a5773",
"title": "Robust Multi-agent Communication via Multi-view Message Certification"
},
{
"paperId": "e1f2f782b85af692db2cd6121cc4277c65843563",
"title": "Multilayer Network Community Detection: A Novel Multi-Objective Evolutionary Algorithm Based on Consensus Prior Information [Feature]"
},
{
"paperId": "4251052867fb5a14b616a3ed0856a968bf5d5d4e",
"title": "Expert System-Based Multiagent Deep Deterministic Policy Gradient for Swarm Robot Decision Making"
},
{
"paperId": "6895c7e5143d64d23cd1a1bd721edc07be8b6f02",
"title": "The Effectiveness of World Models for Continual Reinforcement Learning"
},
{
"paperId": "bdcd8dd1c2051f063b651e3e94b47596c9827a3b",
"title": "Building a Subspace of Policies for Scalable Continual Learning"
},
{
"paperId": "497f580ebfe562149b4aa886c48c466aa1313d40",
"title": "Multiagent Reinforcement Learning With Heterogeneous Graph Attention Network"
},
{
"paperId": "85b2dd15c3f1024c3651eb258eafa4d1358c187e",
"title": "Multi-agent Dynamic Algorithm Configuration"
},
{
"paperId": "94310aa68c694b8b87d59f3d3f8b1b3fd5a4c901",
"title": "Towards a Standardised Performance Evaluation Protocol for Cooperative MARL"
},
{
"paperId": "3fe351b19fb9d35f9d947cd93b5a378631ca3388",
"title": "Dynamics-Adaptive Continual Reinforcement Learning via Progressive Contextualization"
},
{
"paperId": "59714a5c3af2778fdf888760a20ca87b5e7f6358",
"title": "Heterogeneous Multi-agent Zero-Shot Coordination by Coevolution"
},
{
"paperId": "b6b6bc529e665ebf97326d084a71159634ae10a7",
"title": "A Survey on Model-based Reinforcement Learning"
},
{
"paperId": "43ca5ca1f840dd7eaade725789868e8998aacfc8",
"title": "Game-Based Backstepping Design for Strict-Feedback Nonlinear Multi-Agent Systems Based on Reinforcement Learning"
},
{
"paperId": "fe26607ca95c0d1d9005810b4ad12845ee69e9cf",
"title": "Multi-Agent Reinforcement Learning is a Sequence Modeling Problem"
},
{
"paperId": "bd01cc1a04a6fc8e9e5948ded5a1ad05b8dde3d7",
"title": "Fully Decentralized Multiagent Communication via Causal Inference"
},
{
"paperId": "3992a87e7c53faaf1ac357006edac9b833ffb0bb",
"title": "Towards Comprehensive Testing on the Robustness of Cooperative Multi-agent Reinforcement Learning"
},
{
"paperId": "fc9e25150df546bf7a6745e3a8b547ad96b3ded1",
"title": "A Novel Representation Learning for Dynamic Graphs Based on Graph Convolutional Networks"
},
{
"paperId": "24ed28d44c33de145b05217814d678e3486acbc5",
"title": "Model-based Multi-agent Reinforcement Learning: Recent Progress and Prospects"
},
{
"paperId": "d60742f6c1df48bc820a26564ae51cf527732ae2",
"title": "A survey of multi-agent deep reinforcement learning with communication"
},
{
"paperId": "f4ae07a57e1cfe39d45f982ce32056b5ebe53ce8",
"title": "Multi-Agent Policy Transfer via Task Relationship Modeling"
},
{
"paperId": "f295b3e251f4cc2bb5e866330ba74b582b978aa5",
"title": "Biological underpinnings for lifelong learning machines"
},
{
"paperId": "b73e775c8d4da08bc33deee07e0f9759a9e22dfa",
"title": "Game of Drones: Multi-UAV Pursuit-Evasion Game With Online Motion Planning by Deep Reinforcement Learning"
},
{
"paperId": "e14be0c78e9576ac11d09c55189770f203872248",
"title": "Event-Triggered Communication Network With Limited-Bandwidth Constraint for Multi-Agent Reinforcement Learning"
},
{
"paperId": "22108c16ca1941c3cdf3a62b232269516fa5f237",
"title": "Multi-Agent Reinforcement Learning for Active Voltage Control on Power Distribution Networks"
},
{
"paperId": "7a9846fbb9a580f522ff93f201a6bf15f80d112b",
"title": "CORA: Benchmarks, Baselines, and Metrics as a Platform for Continual Reinforcement Learning Agents"
},
{
"paperId": "99b9595ac6a13bf986db40a301e3f11442eec36b",
"title": "Block Contextual MDPs for Continual Learning"
},
{
"paperId": "38ff8a98fce54c61362eec2f0362774f647e4bf7",
"title": "LINDA: multi-agent local information decomposition for awareness of teammates"
},
{
"paperId": "61e6674bcdf297dda6744c8fe69cbc0d8ec6006c",
"title": "Learning Meta Representations for Agents in Multi-Agent Reinforcement Learning"
},
{
"paperId": "fa583b02b83e81982da3bea64fcb6899d9c5cd20",
"title": "UNMAS: Multiagent Reinforcement Learning for Unshaped Cooperative Scenarios"
},
{
"paperId": "b5cf7d647d0910180a543cf6ff80e0ee08d6ba46",
"title": "Hierarchical and Stable Multiagent Reinforcement Learning for Cooperative Navigation Control"
},
{
"paperId": "94cefa04e0f834272d85cc425e0adfb27fd17e08",
"title": "Same State, Different Task: Continual Reinforcement Learning without Interference"
},
{
"paperId": "090273ad6b3720027e34f9183576dd2812bb4454",
"title": "Continual World: A Robotic Benchmark For Continual Reinforcement Learning"
},
{
"paperId": "beb09ed0e6272d2ff96f1d5966a8bfea8d5fb7e8",
"title": "Multiagent Meta-Reinforcement Learning for Adaptive Multipath Routing Optimization"
},
{
"paperId": "16bc9c08dbbca2f4f0567b2f5e1de8ee2fc1b72a",
"title": "Continuous Coordination As a Realistic Scenario for Lifelong Learning"
},
{
"paperId": "3a315c81a98851f0614c09fef6a14c30d6a1e63c",
"title": "The Surprising Effectiveness of PPO in Cooperative Multi-Agent Games"
},
{
"paperId": "742e45f9ccf43fdfb3bc565e3bd0f7252902c0d3",
"title": "Scaling Multi-Agent Reinforcement Learning with Selective Parameter Sharing"
},
{
"paperId": "58e4959fbaf98547507755c3de68ebd2e117ac89",
"title": "Contrasting Centralized and Decentralized Critics in Multi-Agent Reinforcement Learning"
},
{
"paperId": "d5250c59351e5cc7ef842a3c4c89c1a62bd45180",
"title": "UPDeT: Universal Multi-agent Reinforcement Learning via Policy Decoupling with Transformers"
},
{
"paperId": "9faecf3e18a833f2d49b030d591cc2ded0b54336",
"title": "Towards Continual Reinforcement Learning: A Review and Perspectives"
},
{
"paperId": "0d67345f2f682e6d8149c66b35a4713b8cfec775",
"title": "Class-Incremental Learning: Survey and Performance Evaluation on Image Classification"
},
{
"paperId": "e10682f8999bd5f05d995732cb418df7ae45d8a7",
"title": "Continual Model-Based Reinforcement Learning with Hypernetworks"
},
{
"paperId": "052c100d45f949c06e8419b504e319b442cb3f0a",
"title": "QPLEX: Duplex Dueling Multi-Agent Q-Learning"
},
{
"paperId": "a365d7819697984d8e7e2ae65ec5a1f15f4e3e25",
"title": "Lifelong Incremental Reinforcement Learning With Online Bayesian Inference"
},
{
"paperId": "4ec9c22bb689dd136726cfe4159344066f681cfb",
"title": "Benchmarking Multi-Agent Deep Reinforcement Learning Algorithms in Cooperative Tasks"
},
{
"paperId": "08f1140524e075c3de22c508277df2a7e598d5f4",
"title": "Randomized Entity-wise Factorization for Multi-Agent Reinforcement Learning"
},
{
"paperId": "0c0df99ac6c0091172ba424cc024da417408fbaa",
"title": "Towards Understanding Cooperative Multi-Agent Q-Learning with Value Factorization"
},
{
"paperId": "1b6ed62002979ae366854deb475b3413667e3f2b",
"title": "\"Other-Play\" for Zero-Shot Coordination"
},
{
"paperId": "207c7b8ea8f94463383a089e4f7f24b64503f9c0",
"title": "Lipschitz Lifelong Reinforcement Learning"
},
{
"paperId": "8904e9986ea256f37da535f282c7cd727736db4a",
"title": "A Neural Dirichlet Process Mixture Model for Task-Free Continual Learning"
},
{
"paperId": "54d4a221db5a91a2487b1610374843fafff5a23d",
"title": "Multi-Agent Reinforcement Learning: A Selective Overview of Theories and Algorithms"
},
{
"paperId": "386620b2572266baad688b288807f61e5a84719e",
"title": "Online Learned Continual Compression with Adaptive Quantization Modules"
},
{
"paperId": "060571527cdf3036adb78911c5b1c065b92c4714",
"title": "SMIX(λ): Enhancing Centralized Value Functions for Cooperative Multiagent Reinforcement Learning"
},
{
"paperId": "f7c15e9ac6653330b7dd18a89301a3b333927db3",
"title": "A review of cooperative multi-agent deep reinforcement learning"
},
{
"paperId": "615e443f15778e9fdde27fecebd5c6d028816e27",
"title": "Dealing with Non-Stationarity in Multi-Agent Deep Reinforcement Learning"
},
{
"paperId": "56ca9b304e0476feaa6dd4fd1cccc8c0a1a9d8eb",
"title": "QTRAN: Learning to Factorize with Transformation for Cooperative Multi-Agent Reinforcement Learning"
},
{
"paperId": "9c54962b0fd011d5fe3f5b5275cc6ba091a2c7ae",
"title": "On Tiny Episodic Memories in Continual Learning"
},
{
"paperId": "82055eed2ba0d7156a54c586249742c848e5d565",
"title": "The StarCraft Multi-Agent Challenge"
},
{
"paperId": "c5765a6c1844743337cee4fec985c748af3b1070",
"title": "Policy Consolidation for Continual Reinforcement Learning"
},
{
"paperId": "d9ff7a9344dd5d6653bd7a02bfd704422bb29951",
"title": "Experience Replay for Continual Learning"
},
{
"paperId": "4fe9142a47b35638edcc59013ad5e9e87bd93ea7",
"title": "PRIMAL: Pathfinding via Reinforcement and Imitation Multi-Agent Learning"
},
{
"paperId": "c4e824a574d396803cf4677b7d0ad4e28ad54804",
"title": "Value-Decomposition Networks For Cooperative Multi-Agent Learning Based On Team Reward"
},
{
"paperId": "ffc211476f2e40e79466ffc198c919a97da3bb76",
"title": "QMIX: Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning"
},
{
"paperId": "9ea50b3408f993853f1c5e374690e5fbe73c2a3c",
"title": "Continual Lifelong Learning with Neural Networks: A Review"
},
{
"paperId": "204e3073870fae3d05bcbc2f6a8e263d9b72e776",
"title": "Attention is All you Need"
},
{
"paperId": "7c3ece1ba41c415d7e81cfa5ca33a8de66efd434",
"title": "Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments"
},
{
"paperId": "118fae4b4d07453561f1eded88654f812c7c61ec",
"title": "Gradient Episodic Memory for Continual Learning"
},
{
"paperId": "2e55ba6c97ce5eb55abd959909403fe8da7e9fe9",
"title": "Overcoming catastrophic forgetting in neural networks"
},
{
"paperId": "836ad0c693bc5ef171ee2b07b3f4d1bd2a0ae24c",
"title": "A Concise Introduction to Decentralized POMDPs"
},
{
"paperId": "0b544dfe355a5070b60986319a3f51fb45d1348e",
"title": "Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation"
},
{
"paperId": "f61efe204165b40b5fd942f110cf3656e09aa1b0",
"title": "Principal Component Analysis (PCA)"
},
{
"paperId": "4aece8df7bd59e2fbfedbf5729bba41abc56d870",
"title": "A Comprehensive Survey of Multiagent Reinforcement Learning"
},
{
"paperId": "cfaae9b6857b834043606df3342d8dc97524aa9d",
"title": "Learning a similarity metric discriminatively, with application to face verification"
},
{
"paperId": "52070af952474cf13ecd015d42979373ff7c1c00",
"title": "Training Products of Experts by Minimizing Contrastive Divergence"
},
{
"paperId": "cc95b268a5f0ac69af25194dc0c7b052ec8d2303",
"title": "Discovering Generalizable Multi-agent Coordination Skills from Multi-task Offline Data"
},
{
"paperId": "e4faee8a8b0db2ce29d5eb7290e63980d1336149",
"title": "A Survey of Ad Hoc Teamwork: Definitions, Methods, and Open Problems"
},
{
"paperId": "2e1ca97d8f7604e3b334ff4903bfa67267379317",
"title": "The Surprising Effectiveness of Latent World Models for Continual Reinforcement Learning"
},
{
"paperId": "c3e151f71168a5f348bdebfde11752ca603fa6d0",
"title": "Theory of Probability"
},
{
"paperId": null,
"title": "received the B.Sc. and M.Sc. degrees from the School of Mechanical Engineering and Automation, Northeastern University, Shenyang, China, in 2012 and 2015, respectively. He is"
},
{
"paperId": null,
"title": "His research interests include multiagent reinforcement learning and multiagent systems"
},
{
"paperId": null,
"title": "COORDINATION VIA PROGRESSIVE TASK"
},
{
"paperId": null,
"title": "received the B.Sc. degree in from the School of Artificial Intelligence, University, Nanjing, China"
}
] | 24,650
|
en
|
[
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/00ea729f4029076d418123a919825ae23de9932a
|
[] | 0.854739
|
EVOLUTION AND ANALYSIS OF SECURED HASH ALGORITHM (SHA) FAMILY
|
00ea729f4029076d418123a919825ae23de9932a
|
Malaysian Journal of Computer Science
|
[
{
"authorId": "144611259",
"name": "B. Khan"
},
{
"authorId": "9255543",
"name": "R. F. Olanrewaju"
},
{
"authorId": "2139481280",
"name": "Malik Arman Morshidi"
},
{
"authorId": "2833169",
"name": "R. N. Mir"
},
{
"authorId": "122232289",
"name": "M. L. Mat Kiah"
},
{
"authorId": "2179526146",
"name": "Abdul Mobeen Khan"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"Malays J Comput Sci"
],
"alternate_urls": null,
"id": "7ed515a5-673b-43d5-8f58-9f0e1363926e",
"issn": "0127-9084",
"name": "Malaysian Journal of Computer Science",
"type": "journal",
"url": "https://ejournal.um.edu.my/index.php/MJCS/"
}
|
With the rapid advancement of technologies and proliferation of intelligent devices, connecting to the internet challenges have grown manifold, such as ensuring communication security and keeping user credentials secret. Data integrity and user privacy have become crucial concerns in any ecosystem of advanced and interconnected communications. Cryptographic hash functions have been extensively employed to ensure data integrity in insecure environments. Hash functions are also combined with digital signatures to offer identity verification mechanisms and non-repudiation services. The federal organization National Institute of Standards and Technology (NIST) established the SHA to provide security and optimal performance over some time. The most well-known hashing standards are SHA-1, SHA-2, and SHA-3. This paper discusses the background of hashing, followed by elaborating on the evolution of the SHA family. The main goal is to present a comparative analysis of these hashing standards and focus on their security strength, performance and limitations against common attacks. The complete assessment was carried out using statistical analysis, performance analysis and extensive fault analysis over a defined test environment. The study outcome showcases the issues of SHA-1 besides exploring the security benefits of all the dominant variants of SHA-2 and SHA-3. The study also concludes that SHA-3 is the best option to mitigate novice intruders while allowing better performance cost-effectively.
|
**EVOLUTION AND ANALYSIS OF SECURE HASH ALGORITHM (SHA) FAMILY**
**_Burhan Ul Islam Khan[1*], Rashidah Funke Olanrewaju[2], Malik Arman Morshidi[3], Roohie Naaz Mir[4], Miss Laiha_**
**_Binti Mat Kiah[5] and Abdul Mobeen Khan[6 ]_**
1,2,3Department of ECE, KOE, International Islamic University Malaysia (IIUM), Kuala Lumpur, 50728, Malaysia
4Department of Computer Science and Engineering, National Institute of Technology (NIT), Srinagar, 190006, India
5Department of Computer System & Technology, Universiti Malaya (UM), Kuala Lumpur, 50603, Malaysia
6 ALEM Solutions and Technologies, Aspen Commercial Tower, Dubai, 11562, United Arab Emirates
Email: burhan.iium@gmail.com[1*] (corresponding author), frashidah@iium.edu.my[2], mmalik@iium.edu.my[3],
naaz310@nitsri.net[4], misslaiha@um.edu.my[5], consultmobeen@gmail.com[6]
DOI: https://doi.org/10.22452/mjcs.vol35no3.1
**ABSTRACT**
_With the rapid advancement of technologies and the proliferation of intelligent devices connecting to the internet,_
_challenges have grown manifold, such as ensuring communication security and keeping user credentials secret._
_Data integrity and user privacy have become crucial concerns in any ecosystem of advanced and interconnected_
_communications. Cryptographic hash functions have been extensively employed to ensure data integrity in insecure_
_environments. Hash functions are also combined with digital signatures to offer identity verification mechanisms_
_and non-repudiation services. The federal organization National Institute of Standards and Technology (NIST)_
_established the SHA to provide security and optimal performance over some time. The most well-known hashing_
_standards are SHA-1, SHA-2, and SHA-3. This paper discusses the background of hashing, followed by elaborating_
_on the evolution of the SHA family. The main goal is to present a comparative analysis of these hashing standards_
_and focus on their security strength, performance and limitations against common attacks. The complete assessment_
_was carried out using statistical analysis, performance analysis and extensive fault analysis over a defined test_
_environment. The study outcome showcases the issues of SHA-1 besides exploring the security benefits of all the_
_dominant variants of SHA-2 and SHA-3. The study also concludes that SHA-3 is the best option to mitigate novice_
_intruders while allowing better performance cost-effectively._
**_Keywords:_** **_Secured Hash Algorithms, Message Digest, Cryptographic Hashing, Statistical Analysis, Fault_**
**_Analysis_**
**1.0** **INTRODUCTION**
The Internet is more critical in the current era than ever before, and almost everyone uses Internet services for
different purposes. In most cases, the data traversing over the Internet comprises private or concealed information
that everybody wants to protect [1], [2]. The rapid advancement in technologies has raised significant concerns
about protecting our data from unauthorized access. With advanced software programs, malicious users can
intercept data in transit or breach the confidentiality of data stored on distributed storage systems such as the cloud
[3]. Therefore, protecting information is pivotal for any organization or individual in safeguarding company and client assets. Security usually means using the best preventive and defensive measures to protect data from unauthorized access [4]. Cryptography is the science of security applications that offers a mechanism for keeping users' information secure with privacy and confidentiality [5]. As an imperative sub-area of fast communication, cryptography allows a protected communication process between different users so that malicious users cannot access the contents inside a file [6]. However, this area of secure communication has a long history of achievements and failures: several encoding schemes have appeared over the eras and have continually been broken after a while [7]. A cryptographic hash algorithm aims to support secure communication by producing a message digest, i.e., a hash value of the data, which can be used to detect unauthorized attempts to tamper with the data. Hash algorithms are widely used in many applications, such as data integrity and corruption verification, ownership protection, authentication, and many more [8]. The system of cryptocurrency also depends on hashing algorithms and functions.
Hashing algorithms are vital cryptographic primitives that accept an input, process it, and generate a digest [9]. The digest is a fixed-size alphanumeric string that humans cannot readily interpret. Several older and newer cryptographic hash functions are available, such as the Message-Digest Algorithm 5 (MD5), MD4, the SHA family, and many other hashing functions [10]. A hash is a computationally secure form of compression concerning security aspects [11]. Different variants of SHA have been witnessed to date, e.g., SHA-1, SHA-2, SHA-256, SHA-384, and SHA-512. With the discussions carried out by various research works and legal authorities, e.g., NIST, secured encryption techniques have constantly evolved [12], [13]. The research in this direction resulted in the formation of a new version, SHA-3. However, owing to the novel nature of this encryption technique's structure, the degree of strength and weakness of this algorithm is still largely unknown.
At present, there are various methods where SHA variants have been reportedly used for catering to different
security demands, e.g., authentication of the electronic document [14], improving security system using chaotic map
[15], secured message hiding [16], image security [17], improving anonymity [18], improving security by
combining hashing with genetic science [19], strengthening security via blockchain [4], [20], authentication over the
mobile network [21]. Apart from this, there is also literature [22], [23] on evaluating the strength of SHA. However,
there is no explicit discussion to assess the strength of the three most essential SHA family members.
The prime objective of this paper is to present a comparative assessment of SHA algorithms besides emphasizing
the adoption of SHA-3. The specific goals of the paper are:
- Assess the effectiveness of all the SHA versions over a similar test environment to chalk out a certain level
of inference, and
- Perform statistical analysis, performance analysis and extensive fault analysis to assess the strength of SHA
approaches.
The paper adopts a simple experimental design methodology considering the three variants as well as sub-classes of SHA, i.e., SHA-1, SHA-2, and SHA-3. SHA-3 is shown to perform well in contrast to its counterpart SHA-1 and SHA-2 versions with respect to minimal randomness of hashes, increased clocks per byte, and maximal recovery of bits. Unlike existing investigations, the proposed study contributes a generalized mathematical approach with a standardized evaluation method to show that SHA-3 is a robust and well-performing algorithm in the contemporary state. This outcome encourages a higher adoption rate for any form of application that demands optimal security measures.
The remaining sections of the paper are organized as follows: Section 2 provides a brief discussion of the hashing operation, while Section 3 elaborates on its all-around aspects with respect to the significant SHA variants. The proposed manuscript evaluates the multiple variants of SHA via the statistical analysis discussed in Section 4, the performance analysis addressed in Section 5, and the extensive fault analysis discussed in Section 6. The study outcome is outlined in Section 7, while Section 8 provides concluding remarks. References are listed at the end.
**2.0** **STUDY BACKGROUND**
The adoption of a hashing mechanism is an integral part of the majority of security application designs. Technically, a hash function can be defined as a mathematical function that converts a numerical input value into another compressed numerical value. The input of a hash function may be of arbitrary length, while its output always has a fixed length. Fig. 1 highlights how messages of random length result in a fixed-length hash value after being subjected to a hash function.
[Figure: messages of arbitrary length (e.g., "John Smith", "Sandra Dee") are mapped by a hash function (H) to hash values of fixed length (h), indexed 00–15.]
Fig. 1: Conventional Hashing Mechanism
To serve as an efficient cryptographic tool, a standard hash function must exhibit certain essential properties. The first important property is preimage resistance, which makes it computationally infeasible to reverse the hash function operation [24]. This property is essential as it safeguards against an intruder who possesses only a hash value and attempts to recover the input data. The second important property is second preimage resistance, which ensures that, given an input and its hash, it is highly complex to find a different input producing the same hash [25]. This security property protects against an intruder who has both an input value and its respective hash value and is interested in substituting an alternative value in place of the legitimate source input. The third property of hashing is collision resistance, which ensures that it is complex to obtain two different input values of any length yielding the same hash [26]. This property guarantees that collisions are highly difficult to produce, making it hard for an intruder to find two input values possessing the same hash. With these security properties, it is feasible to construct multiple security applications [27], [28]. One essential application of hashing is password storage, while another frequently adopted application is data integrity; because these two uses are so common, hashing can serve most authentication systems [29].
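To make the password-storage and data-integrity uses concrete, the minimal sketch below uses Java's standard MessageDigest API (the hex formatting assumes Java 17+ for HexFormat; the input strings are illustrative). A stored digest is safe to keep precisely because preimage resistance prevents recovering the input from it, and an integrity check simply recomputes and compares digests.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HexFormat;

public class HashDemo {
    public static void main(String[] args) throws Exception {
        // One-way hashing: easy to compute, computationally hard to invert
        // (preimage resistance).
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] digest = md.digest("John Smith".getBytes(StandardCharsets.UTF_8));
        System.out.println(HexFormat.of().formatHex(digest));

        // Data-integrity check: recompute the hash of the received data and
        // compare the two digests in constant time.
        byte[] received = md.digest("John Smith".getBytes(StandardCharsets.UTF_8));
        System.out.println("intact: " + MessageDigest.isEqual(digest, received));
    }
}
```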
The construction mechanism of hashing is also quite simple as well as unique. Referring to Fig. 1, a mathematical function is essential to generate hashes. Sample messages of arbitrary length, e.g., "John Smith," "Lisa Smith," "Sam Doe," and "Sandra Dee," are formed into blocks of data that then yield hash codes as an outcome. Usually, the size of such a data block resides in a range of 128-512 bits. Hash functions of different types are combined to form a hashing algorithm. The hashing operation consists of multiple rounds, similar to a block cipher, and the rounds continue until the complete message has been subjected to hashing.
Fig. 2: Construction Process of Hash (a seed value and message blocks 1 through n are fed successively through the hash function)
The process flow shown in Fig. 2 is also referred to as the avalanche effect of hashing [30]. Because the hash value associated with the first block of the message acts as an input for the second hash operation and thereby influences every consecutive step, this construction generates unique hash values for messages that bear no mapping relation to each other; as a result, two hash values differ by at least one bit of data. The construction process also highlights the distinction between hash algorithms and hash functions. A hash algorithm is defined as the complete process that breaks the message into blocks and establishes how the result obtained from a previous message block is connected with the subsequently generated blocks.
The hash function itself generates a hash code from the blocks of fixed-length binary data it receives. A hash function therefore needs to map its anticipated inputs over its range of outcomes as evenly as feasible. The robustness of hashing is assessed using both theoretical and practical approaches. Theoretical methods mainly compute the probability of mapping all keys into a single slot, whereas the practical approach consists of assessing the longest probe sequence. Usually, the practical method assumes uniform hashing, meaning that any key value is equally likely to map to any specific slot. The probability of a corresponding real-time attack is significantly small, and hence hashing is one of the most cost-effective mechanisms adopted in network security.
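The block-chaining idea of Fig. 2 can be sketched as follows. This is an illustrative approximation only: it chains complete digests through Java's MessageDigest, whereas real SHA constructions chain the internal state of their compression function.

```java
import java.security.MessageDigest;

public class ChainedHash {
    // Sketch of Fig. 2: the value produced for each block is fed into the
    // processing of the next block, so a change in any block propagates to
    // the final result.
    static byte[] chainHash(byte[][] blocks, byte[] seed) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] state = seed;                  // seed value (initialization)
        for (byte[] block : blocks) {
            md.update(state);                 // previous state feeds forward
            state = md.digest(block);         // absorb the current block
        }
        return state;                         // final fixed-length value
    }
}
```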
**3.0** **SHA FAMILY**
The adoption of hashing is witnessed in improving security over different application variants [31], [32], [33], [34], [35]. A hashing algorithm effectively verifies that the data received by a user was forwarded by a legitimate party and that the received information is authentic and unaltered [36]. Many algorithms have been introduced to generate hash values; some were rejected, and some became standards. Message Digest (MD) and SHA (SHA-1 and SHA-2) are the popularly accepted standards for generating hash values [37], [38]. With technological developments, it is always necessary to upgrade to a new technique or replace an existing one to meet the latest requirements. A hash algorithm is an explicit cryptographic operation that transforms an arbitrary-length input message into a fixed, compressed numerical value called a hash value. The fundamental attribute of a hash algorithm is that it is difficult to find two different messages with the same hash value [39]. A typical hash algorithm process is exhibited in Fig. 3.
Fig. 3: A Typical Process of Hash Algorithm (a plain message is passed through the hash function f(x) of the hashing algorithm to produce the hashed message value)
A hash function is a mathematical operation that takes in plain message data of variable length and produces a fixed-size hash value. Numerically, the process of a hash algorithm can be expressed as follows:

H: {0,1}^* → {0,1}^n (1)

In expression (1), {0,1}^* indicates the set of arbitrary-length binary strings, including the empty string, and {0,1}^n refers to strings of length n. Therefore, a hash function maps arbitrary-length elements to fixed-length elements. The length of the output hash value depends on the type of hash algorithm used; generally, it ranges between 160 bits and 1088 bits. To obtain a preset fixed-length hash value, the input message must be divided into fixed-size data blocks, because the hash function receives data of a fixed length, as shown in Fig. 4.
Fig. 4: A Typical Process of Fixed-Size Data Blocks (the message is split into data blocks, which the hash function processes to produce the hashed message value)
The data block sizes vary depending on the algorithm used. For instance, consider SHA-1, which takes plain text messages in a block size of 512 bits. If the plain text message's length is 512 bits, the hash function H(f_x) executes its 80 rounds once. If the plain text message's length is 1024 bits, the message is divided into two fixed-length blocks of 512 bits and the hash function executes twice. However, the size of a plain text message is rarely an exact multiple of 512 bits; in all other cases, a technique called padding is used to divide the input message into fixed-length data blocks. The processing of the fixed-length blocks is demonstrated in Fig. 4: the plain text message is first divided into multiple fixed-length blocks, and the output of each data block is fed, together with the next block, into the hash function. The final output is therefore a value shared by all blocks. If even a single bit anywhere in the text message is altered, the overall hash value changes; this is called the avalanche effect [40], [41].
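The avalanche effect can be observed directly. The minimal sketch below (using SHA-256 as an example digest) flips a single input bit and counts how many output bits change; for a well-behaved hash, roughly half of them should.

```java
import java.security.MessageDigest;

public class AvalancheDemo {
    public static void main(String[] args) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] m1 = "The quick brown fox".getBytes();
        byte[] m2 = m1.clone();
        m2[0] ^= 0x01;                        // flip a single input bit

        byte[] h1 = md.digest(m1);
        byte[] h2 = md.digest(m2);

        int diff = 0;                         // count differing output bits
        for (int i = 0; i < h1.length; i++)
            diff += Integer.bitCount((h1[i] ^ h2[i]) & 0xFF);
        System.out.println(diff + " of " + (h1.length * 8) + " bits differ");
    }
}
```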
**3.1** **Properties of Hash Function**
A hash function should satisfy the following properties [42], [43].
- Resistance to Collision Attacks: This attribute indicates that two different messages should not have the same hash value. An adversary must not be able to find two messages (m_1, m_2) with the same hash value H(m). A collision attack occurs when a pair of different messages with the same hash H(m) is found. The hash function must therefore never construct the same hash value for two different messages.
- Resistance to Preimage Attacks: This property requires the hash algorithm to be strong enough that it is challenging to generate a corresponding message for a given hash value; the hash function should thus be preimage resistant. A preimage is a message that hashes to a given value, and this attack generally assumes that at least one message hashing to the given value exists. With this property, an adversary cannot recover the original data from a given hash value.
- Resistance to Second Preimage Attacks: Given a message, it must not be possible to identify another message that produces the same hash value as the first. The hash function must therefore resist second preimage attacks, where a second preimage is a message that hashes to the same value as a given message.
**3.2** **Secure Hash Algorithms**
SHA is a cluster of mathematical hash functions released by NIST as a US Federal Information Processing Standard. NIST first proposed the SHA-1 hash algorithm in 1995. The design principle of this algorithm is based on the Merkle-Damgård structure, as in MD5 (Fig. 5). The algorithm takes a string of any length and produces a 160-bit compressed message digest (MD) for variable-length input messages. In this scheme, the message is first padded with a 1 bit, after which the required number of 0s is added to make the message length 64 bits short of a multiple of 512. Sixty-four bits representing the length of the original message are then appended to the end of the padded message, which is processed in 512-bit data blocks. The functional steps of SHA-1 are as follows: the first step pads bits at the end of the message; the second step appends the message length; the third step divides the input message into 512-bit data blocks. Next, the chaining variables, i.e., the internal state, are initialized; these are the bits carried over to the next block. Fig. 5 shows one round of SHA-1.
In SHA-1, five 32-bit chaining variables are used, for a total of 160 bits. The final step is block processing: the chaining variables are copied, and each 512-bit data block is divided into 16 sub-blocks processed over 80 rounds per execution. SHA-1 has been used in a wide range of security applications and protocols, including Transport Layer Security and SSL, PGP, SSH, S/MIME, and IPsec. However, this hash algorithm is vulnerable to collision attacks due to cryptographic flaws, and in recent years confirmed collision attacks against it have been reported. As a result, since around 2010 most encryption users no longer recognize this standard.
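For illustration, the padding rule just described can be sketched as a standalone helper (not part of any SHA library API): append the single 1 bit as the byte 0x80, zero-fill until the length is 64 bits short of a multiple of 512, then append the original bit length as a 64-bit big-endian integer.

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;

public class Sha1Padding {
    static byte[] pad(byte[] message) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.writeBytes(message);                     // Java 11+ convenience
        out.write(0x80);                             // the single appended 1 bit
        while ((out.size() % 64) != 56)              // 56 bytes = 448 bits
            out.write(0x00);                         // zero padding
        long bitLen = (long) message.length * 8;     // original length in bits
        out.writeBytes(ByteBuffer.allocate(8).putLong(bitLen).array());
        return out.toByteArray();                    // multiple of 512 bits
    }
}
```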
NIST introduced the SHA-2 hashing algorithm in 2002 with hash code lengths of 224, 256, 384, and 512 bits. The design principle of this algorithm follows the same approach as MD5 and SHA-1. However, it is more robust than SHA-1 due to the inclusion of a nonlinear function in the compression module [31]. Fig. 6 shows one round of SHA-2.
Fig. 5: One SHA-1 Round [37] (working variables A-E updated with the message schedule word W_t and round constant K_t)

Fig. 6: One SHA-2 Round [22] (working variables A-H updated with the message schedule word W_t and round constant K_t)
In SHA-2, the message is likewise padded with a 1 bit, and the required number of 0s is added to make its length 64 bits short of a multiple of 512; sixty-four bits representing the length of the original message are appended to the padded message, which is processed in 512-bit data blocks. For SHA-384 and SHA-512, each data block instead contains 1024 bits, processed as a sequence of 64-bit words. SHA-2 is, however, sometimes not preferred for integrity checking, being less time-efficient than SHA-1 due to the absence of multithreading. SHA-256, via hashcash, is used extensively in cryptocurrency to secure the transactions among peers in the Bitcoin network. SHA-512 and SHA-256 are distinct functions that use different constants and shift amounts and differ in the number of rounds. Research studies have reported that SHA-2 is vulnerable to attacks. The following is an overview of the computing steps involved in SHA-2 (for the 512-bit variant):
- Padded length: the total message length to be hashed is brought to a multiple of 1024 bits.
- Padding is initiated by appending a 1 bit followed by the required number of 0s, with a 128-bit length field, to the message {m_1, m_2, ..., m_n}.
- Starting from the initial hash value H^(0), the chaining values are computed as H^(i) = H^(i-1) + C_{M^(i)}(H^(i-1)),
- where C indicates the compression function and H^(N) is the final hash value of the message m.
- The compression function operates over working variables (a, b, c, ..., h), each a 64-bit word in SHA-512.
After cryptanalytic weaknesses in SHA-1 emerged, NIST announced an open competition to develop SHA-3, a new hash algorithm. A few years later, on October 2, 2012, Keccak was announced as the competition winner, and in 2015 NIST recognized SHA-3 as a standard hash function [32], [33]. Its operation is shown in Fig. 7.
Fig. 7: One SHA-3 Round [36] (sponge construction: message blocks p_0, p_1, ... are absorbed into the rate portion r of the state through the permutation f, with capacity c, and output blocks z_0, z_1, ... are squeezed out)
SHA-3 utilizes a sponge structure that consists of two phases: the absorbing phase and the squeezing phase. In the absorbing phase, each message block is XORed into the rate portion of the state, which is then transformed by the permutation. In the squeezing phase, output blocks are read from the same portion of the state, interleaved with further state transformations. The fixed-length variants SHA-3-224, SHA-3-256, SHA-3-384, and SHA-3-512 generate 224-, 256-, 384-, and 512-bit message digests, respectively, using block (rate) sizes of 1152, 1088, 832, and 576 bits.
This algorithm shows relatively low software performance compared with other hash functions. A comparative analysis of the different SHA variants and their essential parameters is presented in Table 1 and Table 2. Table 1 compares the secure hashing algorithms with respect to their functional parameters, while Table 2 compares them with respect to security and computational performance.
Table 1: Comparative Analysis of SHA concerning their parameters
| Secure Hashing Algorithm | Variant | Output Size (Bits) | Internal State Size (Bits) | Block Size (Bits) | Rounds | Operations |
|---|---|---|---|---|---|---|
| SHA-1 | | 160 | 160 (5×32) | 512 | 80 | AND, XOR, OR, Add (mod 2^32), Rotate with no carry |
| SHA-2 | SHA-224 | 224 | 256 (8×32) | 512 | 64 | AND, XOR, OR, Add (mod 2^32), Rotate with no carry, Right Logical Shift |
| | SHA-256 | 256 | 256 (8×32) | 512 | 64 | |
| | SHA-384 | 384 | 512 (8×64) | 1024 | 80 | AND, XOR, OR, Add (mod 2^64), Rotate with no carry |
| | SHA-512 | 512 | 512 (8×64) | 1024 | 80 | |
| | SHA-512/224 | 224 | 512 (8×64) | 1024 | 80 | |
| | SHA-512/256 | 256 | 512 (8×64) | 1024 | 80 | |
| SHA-3 | SHA-3-224 | 224 | 1600 (5×5×64) | 1152 | 24 | AND, XOR, NOT, Rotate with no carry |
| | SHA-3-256 | 256 | 1600 (5×5×64) | 1088 | 24 | |
| | SHA-3-384 | 384 | 1600 (5×5×64) | 832 | 24 | |
| | SHA-3-512 | 512 | 1600 (5×5×64) | 576 | 24 | |
| | SHAKE128 | d (arbitrary) | 1600 (5×5×64) | 1344 | 24 | |
| | SHAKE256 | d (arbitrary) | 1600 (5×5×64) | 1088 | 24 | |
Table 2: Comparative Analysis of SHA on Security and Computational Performance
| Secure Hashing Algorithm | Variant | Security (Bits Against Collision Attack) | Capacity Against Length Extension Attacks | Skylake Performance, Long Message (cycles/byte) | Skylake Performance, 8 Bytes (cycles/byte) | Year of Release |
|---|---|---|---|---|---|---|
| SHA-1 | | <63 (collision found) | 0 | 3.47 | 52 | 1995 |
| SHA-2 | SHA-224 | 112 | 32 | 7.62 | 84.50 | 2004 |
| | SHA-256 | 128 | 0 | 7.63 | 85.25 | 2001 |
| | SHA-384 | 192 | 128 (≤384) | 5.12 | 135.75 | 2001 |
| | SHA-512 | 256 | 0 | 5.06 | 135.50 | 2001 |
| | SHA-512/224 | 112 | 288 | 5.12 | 135.75 | 2012 |
| | SHA-512/256 | 128 | 256 | 5.12 | 135.75 | 2012 |
| SHA-3 | SHA-3-224 | 112 | 448 | 8.12 | 154.25 | 2015 |
| | SHA-3-256 | 128 | 512 | 8.59 | 155.50 | 2015 |
| | SHA-3-384 | 192 | 768 | 11.06 | 164.00 | 2015 |
| | SHA-3-512 | 256 | 1024 | 15.88 | 164.00 | 2015 |
| | SHAKE128 | min(d/2, 128) | 256 | 7.08 | 155.25 | 2015 |
| | SHAKE256 | min(d/2, 256) | 512 | 8.59 | 155.50 | 2015 |
**3.3** **Issues in SHA-1 and SHA-2**
SHA-1 and SHA-2 were both designed by the NSA and released for public use. Although SHA-1 and SHA-2 are not identical, they are designed on the same mathematical concept and therefore share some of the same flaws. What makes SHA-2 more secure than SHA-1 is that SHA-2 uses larger inputs and outputs and generates hash values of increased length. Since SHA-1 and SHA-2 share similar algorithmic logic, certain hash lengths may eventually be vulnerable to similar collision attacks. Since 2008 there have been public attacks on SHA-2, as on SHA-1, and attacks against SHA-2 have grown more severe over time; some recent threats to SHA-2 were publicly disclosed in 2016. Likewise, it is expected that existing hashes will continue to be attacked and weaken over time. NIST therefore selected SHA-3 as an improved hash standard that is structurally different from the earlier SHA family and can be used when required. Keccak was chosen as a finalist in 2010 and later announced as the winner of the competition; NIST released the standard in 2015, making SHA-3 a certified standard. SHA-1 nevertheless drove development worldwide, and due to its cryptographic flaws a significant shift towards SHA-2 took place in late 2016 and 2017; the cut-off deadline for obtaining SHA-1 certificates was December 31, 2017. SHA-3 adoption, however, is still in its infancy, for the reasons listed below:
- The prime reason is that implementation of SHA-3 is still largely limited to the research domain, and most existing software and hardware are not yet fully ready to support it; it also requires customized code for each device.
- If the recommended guidelines had been to shift from SHA-1 directly to SHA-3, hash vendors would have done so rather than moving to SHA-2, since the effort and cost are similar.
- Most importantly, SHA-3 is a relatively new standardized hashing technique that was released while SHA-2 migration schemes were still being worked out.
- Another reason is that SHA-2 is an improved version of SHA-1 and is not nearly as vulnerable to collision attacks; implementing any version of the SHA-2 hashing algorithm is sufficient in the current scenario of today's digital world.

Many research studies have also found that SHA-3 is much slower than SHA-2 in software. So why shift to SHA-3 if it is slower?
**3.4** **Reason for Shifting Towards SHA-3**
The advancement of technology has created requirements that drive the upgrade to a new hashing technique able to cope with the demands of futuristic computing. SHA-3 is a robust technique compared to the existing hashing algorithms, and in the coming years many systems will likely shift to it, depending on how actual threats and security attacks against SHA-2 develop. In software, the existing hashing schemes on Windows machines are faster than SHA-3; in hardware, however, SHA-3 easily outperforms the existing SHA-1 and SHA-2 schemes. Cryptographic procedures will increasingly be handled by hardware components in future technologies such as the ecosystem of the Internet of Cyber-Physical Things (IoCPT). Moreover, with advancing technology, CPUs keep getting faster, so in most scenarios the time required to hash will not impose much burden. Furthermore, a few researchers [44], [45] have explored various ways to improve the speed of SHA-3 in software. Regarding the security strength of SHA-3, a few significant points are highlighted as follows:
- SHA-3 provides a secure one-way hash function. In particular, it is not susceptible to length extension attacks. The input cannot be reconstructed using only the hash output, nor can the input data be altered by merely changing the hash value. Although SHA-3 offers the updated secure hash algorithm, the existing SHA-2 algorithm remains feasible for some applications.
- NIST still considers the SHA-2 hash functions (e.g., SHA-256 and SHA-512) to be secure.
- The newly released SHA-3 algorithm therefore complements the existing SHA-2 while offering greater variety.

SHA-3 offers functions with different digest bit lengths, including SHA-3-224, SHA-3-256, SHA-3-384, and SHA-3-512. It also offers two flexible, extendable-output functions, SHAKE128 and SHAKE256, where the 128 and 256 suffixes are security strength factors; these can be utilized for generalized and randomized hashing, hash-based message authentication codes, and even stream encryption.
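The extendable-output behaviour of SHAKE can be sketched as below. This is an assumption-laden example: the standard JDK MessageDigest API does not expose arbitrary-length output, so the Bouncy Castle library and its SHAKEDigest class are used here.

```java
import org.bouncycastle.crypto.digests.SHAKEDigest;

public class ShakeDemo {
    public static void main(String[] args) {
        byte[] msg = "hello".getBytes();
        SHAKEDigest shake = new SHAKEDigest(128);   // SHAKE128
        shake.update(msg, 0, msg.length);
        byte[] out = new byte[64];                  // any desired length d
        shake.doFinal(out, 0, out.length);          // squeeze 512 output bits
        System.out.println(out.length * 8 + "-bit output produced");
    }
}
```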
**4.0** **STATISTICAL ANALYSIS**
This section discusses the statistical analysis carried out over the different variants of the SHA family. The open-source Java platform is used for the experiments on a standard 64-bit Windows machine. The analysis uses the Java Cryptography API to generate the hashes [46] over a sample of ten thousand random binary messages. The digest sizes are kept fixed for input and output to simplify the analysis: SHA-1 is evaluated with 160 bits, and SHA-2 and SHA-3 with 512 bits (although various SHA-2 and SHA-3 digest sizes exist, the 512-bit variant of each is chosen). The statistical analysis comprises the series test, the bits probability test, and the Hamming distance test.
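A minimal sketch of such a test harness is given below; the 64-byte message size and the particular algorithm strings are illustrative assumptions, since the paper does not list them.

```java
import java.security.MessageDigest;
import java.security.SecureRandom;

public class HashSamples {
    public static void main(String[] args) throws Exception {
        SecureRandom rnd = new SecureRandom();
        // 160-bit SHA-1 and the 512-bit variants of SHA-2 and SHA-3.
        String[] algs = {"SHA-1", "SHA-512", "SHA3-512"};
        byte[][][] digests = new byte[algs.length][10_000][];
        for (int a = 0; a < algs.length; a++) {
            MessageDigest md = MessageDigest.getInstance(algs[a]);
            for (int j = 0; j < 10_000; j++) {
                byte[] msg = new byte[64];        // a random input message
                rnd.nextBytes(msg);
                digests[a][j] = md.digest(msg);   // store for the tests below
            }
        }
        System.out.println("generated " + algs.length + " x 10000 digests");
    }
}
```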
**4.1** **Series Test**
The proposed system carries out a non-parametric test that considers random samples from multiple populations associated with the cumulative distribution of continuous data. The prime idea of this test is to measure the degree of randomness of the hashes, with clear insight into the dependencies of each hash function. The test statistic ∅ is assessed as:

∅ = Δψ / σ (2)

In expression (2), Δψ is the difference between ψ and ψ₁, which represent the series cardinality observed in one hash and the anticipated series cardinality, respectively; σ represents the standard deviation. The computation of ψ₁ depends on two cardinality parameters of the hashes, the favourable cardinality c_f and the total cardinality c_t, via the following empirical relationship:
ψ₁ = (2 c_f / c_t) + 1 (3)

In expression (3), c_f and c_t are computed as (c₀ · c₁) and (c₀ + c₁), where c₀ and c₁ represent the cardinalities of the sub-sequences consisting of all 0s and all 1s, respectively. The series test is carried out at a 5% significance level: the hypothesis that the hash was created randomly is rejected if the absolute value of the test statistic ∅ exceeds the critical value corresponding to 0.975, which is 1.96.
Table 3: Accomplished Numerical Outcome of Series Test
| Item | SHA-1 | SHA-2 | SHA-3 |
|---|---|---|---|
| Maximum | 5.34 | 5.15 | 5.41 |
| Minimum | 0 | 0 | 0 |
| Average | 0.81 | 0.91 | 0.81 |
| Standard Deviation | ±0.71 | ±0.72 | ±0.61 |
From the numerical outcomes in Table 3, SHA-3 shows the lowest standard deviation, which is statistically significant and exhibits the best test outcome compared to the SHA-1 and SHA-2 families of hashes.
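A per-digest computation of the series test statistic of expressions (2)-(3) can be sketched as follows; the variance formula used for σ is the textbook runs-test one, an assumption since the paper does not state it explicitly.

```java
public class SeriesTest {
    // Returns |phi| for one digest: the run count is compared with the
    // expected count psi1 = 2*c0*c1/(c0 + c1) + 1 and normalized.
    static double testStatistic(byte[] digest) {
        int n = digest.length * 8, ones = 0, runs = 1;
        int prev = bit(digest, 0);
        ones += prev;
        for (int i = 1; i < n; i++) {
            int b = bit(digest, i);
            ones += b;
            if (b != prev) runs++;               // a new run starts here
            prev = b;
        }
        double c1 = ones, c0 = n - ones, ct = c0 + c1;
        double psi1 = 2 * c0 * c1 / ct + 1;      // expected number of runs
        double var = 2 * c0 * c1 * (2 * c0 * c1 - ct) / (ct * ct * (ct - 1));
        return Math.abs(runs - psi1) / Math.sqrt(var);  // pass if < 1.96
    }
    static int bit(byte[] d, int i) { return (d[i / 8] >> (7 - i % 8)) & 1; }
}
```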
**4.2** **Bits Probability Test**
This is the second statistical test carried out in the proposed system, assessing the predictability of the bits present in the MD. The study computes, for every bit position of the message digest, the probability of that bit being a 1. In the ideal condition, each position holds a 0 with 50% probability and a 1 with the other 50%. Mathematically, this can be represented as follows:
Prob(i) = (Σ_{j=1}^{max} h[j][i]) / max (4)

In expression (4), the probability Prob(i) at bit position i is computed over the max hashes generated, where h[j][i] denotes the i-th bit of the j-th hash value in the table of yielded MDs; the assessment takes max = 10000. The analysis again uses the test statistic |∅|: with an anticipated outcome of 50% probability, the assessment is regarded as a pass if |∅| is less than 1.96. The numerical outcome is shown in Table 4.
Table 4: Accomplished Numerical Outcome of Bit Prediction
| Item | SHA-1 (%) | SHA-2 (%) | SHA-3 (%) |
|---|---|---|---|
| Maximum | 52.46 | 52.57 | 52.85 |
| Minimum | 49.31 | 49.51 | 49.87 |
| Average | 51.15 | 49.01 | 50.01 |
| Standard Deviation | ±0.53 | ±0.54 | ±0.52 |
A closer look at the tabulated values shows averages in the proximity of 50% with reduced standard deviations. The performance of SHA-1 and SHA-2 is nearly equivalent: the analysis found 510 bits with probability values differing from 50%, and the absolute test statistics |∅| for SHA-1 and SHA-2 are 1.05 and 0.82, respectively, representing the higher predictability of both hash functions. The outcome for SHA-3 is only slightly different: 507 of its 512 bits fail the exact 50% probability condition, but only by a small margin, and its test statistic |∅| of 0.328 demonstrates a lower prediction probability than SHA-1 and SHA-2. Based on this test, the proposed outcome exhibits lower predictability for SHA-3 than for SHA-1 and SHA-2.
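For reference, the per-position probabilities of expression (4) can be computed over the stored digests as in the sketch below.

```java
public class BitProbabilityTest {
    // Estimates Prob(i) in percent for every bit position i of the digest,
    // i.e. the fraction of sampled digests in which bit i equals 1.
    static double[] bitProbabilities(byte[][] digests) {
        int bits = digests[0].length * 8;
        double[] prob = new double[bits];
        for (byte[] d : digests)
            for (int i = 0; i < bits; i++)
                prob[i] += (d[i / 8] >> (7 - i % 8)) & 1;   // h[j][i]
        for (int i = 0; i < bits; i++)
            prob[i] = 100.0 * prob[i] / digests.length;     // ideal: 50%
        return prob;
    }
}
```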
**4.3** **Hamming Distance Test**
This is the third statistical analysis in the proposed evaluation, identifying the impact of minor alterations in the input data on the resulting hashes. This assessment applies a Student's t-test to compute the test statistic |∅|. The test is constructed so that the hash function passes if the numerical score of the test statistic lies between 0 and the 1.96 confidence bound. The anticipated outcome is a distance equal to half the hash size, at a 5% significance level. The test statistic |∅| can be formulated as shown in expression (5):

|∅| = |(v_av − v_ex) / σ| · √(s_size) (5)

where v_av is the observed average Hamming distance, v_ex the expected value, σ the standard deviation, and s_size the sample size. The prime purpose of this part of the analysis is to compute the Hamming distance between hashes. The operation is as follows: if X₁ and X₂ are the primary and secondary bit strings, with the length of X₁ equal to that of X₂, the Hamming distance is computed via X₃ = X₁ ⊕ X₂ and equals the cardinality of positions at which X₁ and X₂ hold different values. The numerical outcomes of the Hamming distance test are shown in Table 5.
Table 5: Accomplished Numerical Outcome of Hamming Distance Test
| Item | SHA-1 | SHA-2 | SHA-3 |
|---|---|---|---|
| Maximum | 5.34 | 5.15 | 5.41 |
| Minimum | 0 | 0 | 0 |
| Average | 0.81 | 0.91 | 0.81 |
| Standard Deviation | ±0.71 | ±0.72 | ±0.61 |
From the numerical outcomes of the Hamming distance in Table 5, the absolute test statistic |∅| for SHA-1 is about 1.28, meaning that a minor alteration in the input data affects roughly 50% of the bits of a SHA-1 hash. For SHA-2, the hash variation is close to, and sometimes below, 50%, with an average test statistic |∅| of about 0.159; this shows SHA-2 performing better than SHA-1. For SHA-3, in the majority of cases the test statistic is found to be 0.44, so SHA-2 still exhibits the best outcome on this test, with SHA-3 and SHA-1 occupying the second and third positions, respectively.
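The underlying computation, the XOR-based Hamming distance and the t-statistic of expression (5), can be sketched as below; the mapping of v_av, v_ex, and s_size onto the code follows the reconstruction given above.

```java
public class HammingTest {
    // Weight of X3 = X1 XOR X2: the number of differing bit positions.
    static int hamming(byte[] x1, byte[] x2) {
        int d = 0;
        for (int i = 0; i < x1.length; i++)
            d += Integer.bitCount((x1[i] ^ x2[i]) & 0xFF);
        return d;
    }

    // |phi| = |(v_av - v_ex) / sigma| * sqrt(s_size), with v_ex set to half
    // the hash size; the hash passes if the result lies below 1.96.
    static double testStatistic(double[] distances, int hashBits) {
        double mean = 0, var = 0, n = distances.length;
        for (double d : distances) mean += d;
        mean /= n;
        for (double d : distances) var += (d - mean) * (d - mean);
        double sigma = Math.sqrt(var / (n - 1));
        return Math.abs((mean - hashBits / 2.0) / sigma) * Math.sqrt(n);
    }
}
```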
**5.0** **PERFORMANCE ANALYSIS**
The performance analysis of the SHA family is carried out considering the number of processor cycles utilized per byte of data processed. Unlike conventional analysis based on data transmitted per unit time, the proposed analysis considers the processor's clock cycles consumed during data processing. The number of cycles utilized to process one unit of data through the hash function is obtained by dividing the cycles consumed by the amount of data processed. This performance parameter can differ significantly from one cryptographic option to another. The prime justification for adopting it is that it allows the total duration to be computed, which means that a processor with a higher frequency will exhibit better wall-clock performance; furthermore, this parameter directly indicates the effective processing capability of all kinds of cryptographic operations.
The computation of the cycles per data is carried out as follows:
cycles_per_data = (D_h · λ) / len (6)
Expression (6) shows that cycles per data unit depend on the duration of the hashing operation D_h, the CPU frequency λ, and the length of the input message len. For this performance analysis, the proposed study takes cycles per data unit (byte) as the prime observational value for input message sizes of 1 kB, 1 MB, and 64 MB in a typical 64-bit Windows environment. This evaluation method gives a complete picture of the scalability of the SHA approaches over a common test environment.
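A minimal timing sketch of expression (6) follows; the CPU frequency λ is an assumed constant that must be set to the actual clock rate of the test machine, and a warm-up loop is included so JIT compilation does not distort the measurement.

```java
import java.security.MessageDigest;

public class CyclesPerByte {
    static double measure(String alg, int lenBytes) throws Exception {
        final double lambda = 3.0e9;                  // assumed CPU frequency (Hz)
        MessageDigest md = MessageDigest.getInstance(alg);
        byte[] msg = new byte[lenBytes];
        for (int i = 0; i < 100; i++) md.digest(msg); // warm-up pass
        long t0 = System.nanoTime();
        md.digest(msg);
        double dh = (System.nanoTime() - t0) / 1e9;   // duration D_h in seconds
        return dh * lambda / lenBytes;                // cycles per byte
    }
}
```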
Fig. 8: Performance Analysis for (a) 1 kB, (b) 1 MB, and (c) 64 MB input data
The outcome exhibited in Fig. 8 covers three test cases assessing the clock cycles per byte for the three variants of the hash function. Fig. 8(a) shows that SHA-1 has the lowest occupation, around 200 cycles per byte, the best result compared to the other two variants, SHA-2 and SHA-3. A closer look at SHA-2 shows no significant increase, with cycles ranging between 700 and 800 per byte, while SHA-3 and its three variants range between 890 and 1100 cycles per byte. A similar trend is observed in Fig. 8(b) and Fig. 8(c). Another observation across the three graphs is that cycles per byte increase with the input message size. Despite its reduced cycles per byte, SHA-1 cannot be considered adequate with respect to its security operation: SHA-1 is supposed to guarantee that no two messages processed through its internal rounds yield the same hash, yet a SHA-1 hash is only 160 bits long (a string of 160 zeroes and ones), giving 2^160, roughly 1.4 quindecillion, possible hashes, significantly fewer than for the SHA-2 variants. Another significant observation is that, as theory suggests, SHA-2 offers better performance than the SHA-3 variants, especially for SHA-2-224 and SHA-2-512. The SHA-2 variants use a Davies-Meyer structure with a block cipher constructed from the MD4 lineage, whereas the SHA-3 variants use a sponge structure built around the Keccak permutation, making the internal structures of SHA-2 and SHA-3 completely different.
There are no concrete criteria to confirm from this outcome alone that any one algorithm has more potential. SHA-1 possesses structural weaknesses that make it vulnerable to attacks such as brute force; SHA-2 and SHA-3, in contrast, are not currently known to possess structural weaknesses. On the other hand, SHA-3 is slower than SHA-2, as exhibited by the higher cycles per byte in Fig. 8(b) and Fig. 8(c). Hence, from the performance analysis based on cycles per byte, SHA-2 is confirmed to offer better performance. From a security viewpoint, however, SHA-3 has some potential benefits over SHA-2. Its prime contribution is the Keccak sponge, which can be utilized both as a Message Authentication Code (MAC) and as a hash, unlike SHA-2; this also contributes to the increased cycles per byte in the outcomes of Fig. 8. It can serve as a function for deriving secret keys cost-effectively. Although SHA-3 incurs more cycles per byte and demands slightly more resources, it remains a cost-effective security solution. This outcome shows the high applicability of SHA-3 to Internet of Things (IoT) systems, which require low-powered devices and cost-effectiveness. As the construction of SHA-3 differs completely from the SHA-2 variants, a new break against SHA-2 is unlikely to apply to SHA-3, and vice versa. Therefore, both SHA-2 and SHA-3 should be emphasized equally, easing the transition to SHA-3 until SHA-3 is proven fully effective from both performance and security viewpoints.
**6.0** **EXTENSIVE FAULT ANALYSIS**
An extensive fault analysis using differential and Algebraic Fault Analysis (AFA) strategies was carried out to evaluate the strength of all three variants of the SHA algorithm. This part of the analysis considered the message sizes for
SHA-2 and SHA-3 limited to 224 and 256 bits only, as no significant differences were observed between the 256- and 512-bit outcomes for these two SHA variants. The prime objective of Differential Fault Analysis (DFA) is to retrieve information about an inner state by using the differences in correlation between the faulty results and all intermediate parameters, in contrast to the correct result. DFA was initially used to analyze the strength of block ciphers, the DES algorithm, hash functions, and stream ciphers [47]. A different variant, Algebraic Fault Analysis (AFA), integrates algebraic cryptanalysis with fault injection; algebraic expressions over a finite field are used to model the faults and the cryptographic function, and satisfiability (SAT) or Satisfiability Modulo Theories (SMT) solvers are typically used to recover the message or secret key in such a mechanism [48].
It has been noted that the analysis is more effective when carried out with AFA than with DFA, as the solver's complete propagation of the fault reduces the dependency on human intervention when introducing the intrusion. Moreover, AFA has been found to automate the DFA mechanism for hash functions, stream ciphers, and block ciphers [47]. The internal-state recovery problem is investigated here for SHA-1, SHA-2 (224, 256), and SHA-3 (224, 256). This is done by expressing the operations as Boolean formulas and then using a SAT solver to find the values of all parameters connected with the confidential information, constructing an equation set for each hash function over both faulty and correct executions. The expression for a correct hash H, considering the input L_i^22, is:
H = A · B (7)

In expression (7), A = f(ψ^23, B, S, R, L) and B = f(ψ^22, B, S, R, L(L_i^22)), where ψ^23/ψ^22 represent binary XOR operations, B represents a binary operation over rows of state bits, S represents an in-slice permutation over state bits, R represents a rotation over state bits, and L represents a linear operation taking all input bits to a single output bit. The expression is slightly modified to include a fault of the form ΔL_i^22, generating the faulty hash:
H₁ = A · B₁ (8)

In expression (8), B₁ = f(ψ^22, B, S, R, L(L_i^22 ⊕ ΔL_i^22)). Depending on the individual hash function variant, the values of H and H₁ in expressions (7) and (8) may be x bits long. A closer look at this strategy shows that the value of ΔB_i^23 can be extracted from the differential of H and H₁, and the attacker can then use B_i^23 to launch an attack:
B_o(x, y, z) = B_i(x, y, z) ⊕ B_i(x+2, y, z) ⊕ (B_i(x+1, y, z) · B_i(x+2, y, z)) (9)
Expression (9) differs in detail across the three hash functions, but it is processed faster under AFA, recovering B_i^22 bits of data. It should be carefully noted that there is a significant level of interdependency among all internal states through the reversible functions, which means that a compromising attempt against the hash algorithm can be carried out by extracting only one internal state. The solution of the proposed fault analysis scheme yields just one uniquely recovered bit of information at a time; therefore, B_i^22 bits of information must be explored under the influence of fault injection. The SAT solver is executed to explore the initial rounds of the solution and then reduce them to the non-repeating bits.
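For concreteness, expression (9) can be transcribed directly over a three-dimensional bit state as in the sketch below; the modular wrap-around of the x index is an assumption, since the paper leaves the boundary behaviour implicit.

```java
public class StateUpdate {
    // Direct transcription of expression (9): the nonlinear row operation
    // whose input/output differentials the fault analysis exploits.
    static boolean[][][] apply(boolean[][][] bi) {
        int X = bi.length, Y = bi[0].length, Z = bi[0][0].length;
        boolean[][][] bo = new boolean[X][Y][Z];
        for (int x = 0; x < X; x++)
            for (int y = 0; y < Y; y++)
                for (int z = 0; z < Z; z++)
                    bo[x][y][z] = bi[x][y][z]
                            ^ bi[(x + 2) % X][y][z]
                            ^ (bi[(x + 1) % X][y][z] & bi[(x + 2) % X][y][z]);
        return bo;
    }
}
```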
Fig. 9: Comparative Analysis of Fault Ratio (a) For 8 Bits and (b) For 16 Bits (DFA vs. AFA across the SHA-1, SHA-224, SHA-256, SHA3-224, and SHA3-256 hash digests)
The outcome shown in Fig. 9 highlights that both variants of SHA-3 offer a higher fault ratio than the two variants of SHA-2 and SHA-1 when assessed with 8- and 16-bit values, respectively. An intruder can recognize the injected fault associated with a particular injection; the effective fault ratio is computed as the cumulative number of feasible faults over a specific number of positions with all possible fault values. This outcome shows that adopting the SHA-3 variants at higher message sizes results in a higher recovery of bits (Fig. 10), irrespective of whether AFA or DFA is considered. It ultimately infers that the SHA-3 variants are always practical to adopt when dealing with a new generation of attacks, while the SHA-2 variants remain capable of dealing with the previously reported attacks in the literature.
Fig. 10: Comparative Analysis of Recovered Bits: (a) For AFA and (b) For DFA (recovered bits vs. injected faults for SHA-1, SHA-224, SHA-256, SHA3-224, and SHA3-256)
Fig. 11: Comparative Analysis of Processing Time (a) For AFA and (b) For DFA (processing time vs. injected faults)
The next part of the analysis investigates the recovered bits (Fig. 10). It is feasible for attackers to inject different numbers of faults for the same input and execute all hash variants, targeting the retrieval of the internal state B_i^22. The results are presented for the recovery process of SHA-1, SHA-224, SHA-256, SHA-3-224, and SHA-3-256, showing the higher capability of AFA compared to DFA in recovering more B_i^22 bits. The outcome shows that SHA-3 requires fewer fault injections than the other SHAs to recover the full bit state. At the same time, an analysis of the time required to perform the cryptanalysis (Fig. 11) shows no significant difference between AFA and DFA performance for SHA-3 and SHA-2, but SHA-3 requires more time than the SHA-2 variants, because the SAT solver needs more time to recognize the faults and then recover the internal state bits. Hence, the proposed investigation shows that SHA-3 offers better security features than the other SHA variants.
**7.0** **STUDY OUTCOME**
After performing an extensive assessment of the SHA family, the following learning outcomes have been drawn:
- The statistical analysis exhibits that SHA-3 shows lower randomness of hashes than SHA-2 in the series test, yet SHA-3 also obtained a lower predictability score in the bit probability test. This signifies that even with a lesser state of randomness, SHA-3 is hard to predict for a given number of bits in the MD, making it more robust and resilient against unknown attackers. The Hamming distance test shows better results for SHA-2 than for SHA-3; however, the sponge construction of SHA-3 as an internal structure offers greater security strength against novel forms of attack. Hence, SHA-3 is the best complementary solution on top of the SHA-2 approach.
- The performance analysis exhibits that all the variants of SHA-3 incur more clock cycles per byte than SHA-1 and SHA-2. This outcome is explained by the internal structure of SHA-3, which is more secure because it supports both MAC and hash usage, unlike SHA-2. Irrespective of the higher cycles per byte, SHA-3 remains the best option for offering authentication and data integrity; owing to its flexible structure, SHA-3 provides comparable guarantees of anonymity, non-repudiation, and privacy against most forms of attacker.
- The extensive fault analysis exhibits a higher recovery of bits for SHA-3, while the processing time for SHA-3 is nearly the same across its respective variants. Although its processing time grows slightly faster with increasing injected faults than that of SHA-2 and SHA-1, SHA-3 is still the preferred algorithm for resisting higher-end intruders with few faults in the cumulative outcome.
Based on the above outcomes, SHA-3 should be used in environments where the attacker's strategy is poorly known. If the attacker's strategy is well defined, SHA-2 will be quite sufficient to mitigate such attacks. SHA-1 is not recommended due to the limitations observed in the analysis of the prior sections. Hence, the outcome suggests the judicious use of SHA-3 in complex attack environments, whereas SHA-2 can still be used for normal to medium vulnerability.
**8.0** **CONCLUSION**
Hash functions play a vital role in network and communication security. The study and its contributions are manifold:
- The analysis offers a justified judgment on the usage of the different variants of SHA, which has not been reported in existing research publications,
- The implementation subjects the SHA family to a uniform test environment using three different analysis mechanisms to offer a validated outcome,
- The outcome dispels the myth that one variant of SHA is simply more effective than its counterparts; instead, it concludes that
o SHA-1 is less efficient in the majority of performance attributes, while
o SHA-2 and SHA-3 bear the potential to resist the majority of threats,
- The study outcome offers evidence that SHA-3 is better positioned to mitigate newly defined attackers, while SHA-2 remains suitable for dealing with existing, known attackers,
- The study also shows that although SHA-3 has a slightly higher processing time, it is still worth considering for its potential in other performance metrics.
The central task of any hashing algorithm is to check the integrity and authenticity of the data transmitted between two communicating nodes. This paper discussed the main highlights of cryptographic hash functions, the widespread use of various well-known hash functions, and their associated attacks. The study also carried out a comparative analysis of different hash algorithms and analyzed the trend of progress in this field. Studies on hash functions performed by other researchers were discussed as well; however, most of them were limited to theoretical implementations and were not tested against collision attacks. Based on the analytical findings, the current status and trend toward adopting SHA-3 were shown to be efficient, safe, and capable of meeting the requirements of future network applications. Certain pitfalls were observed in the usage of the SHA-2 and SHA-3 families, associated with partial breakdowns connected to bitwise operators. The beneficial points of SHA-3 lie in its faster hardware response time and its ability to handle large data sizes. Future work should address the remaining issues by reducing clock cycles and achieving quicker response times.
**9.0** **ACKNOWLEDGEMENT**
This research was partially funded by IIUM-UMP-UiTM Sustainable Research Collaboration Grant 2020 (SRCG)
under Grant ID: SRCG20-003-0003 and Fundamental Research Grant Scheme (FRGS) under Grant ID FRGS19068-0676. The authors express their personal appreciation for the effort of Ms Gousia Nissar and Ms Manasha Saqib
in proofreading, editing and formatting the paper.
**REFERENCES**
[1] T. Wang, M. Bhuiyan, G. Wang, L. Qi, J. Wu and T. Hayajneh, “Preserving Balance Between Privacy and
Data Integrity in Edge-Assisted Internet of Things”, IEEE Internet of Things Journal, Vol. 7, No. 4, 2020,
pp. 2679-2689, doi: 10.1109/jiot.2019.2951687.
[2] D. Chen, P. Bovornkeeratiroj, D. Irwin and P. Shenoy, “Private Memoirs of IoT Devices: Safeguarding User
Privacy in the IoT Era”, in 2018 _IEEE 38th International Conference on Distributed Computing Systems_
_(ICDCS), Vienna, Austria, 2018, pp. 1327-1336, doi: 10.1109/ICDCS.2018.00133._
[3] Z. A. Solangi, Y. A. Solangi, S. Chandio, M. bt. S. Abd. Aziz, M. S. bin Hamzah and A. Shah, “The Future
of Data Privacy and Security Concerns in Internet of Things”, in _2018 IEEE International Conference on_
_Innovative_ _Research_ _and_ _Development_ _(ICIRD),_ Bangkok, Thailand, 2018, pp. 1-4, doi:
10.1109/ICIRD.2018.8376320.
[4] I. Ochôa, L. Calbusch, K. Viecelli, J. de Paz, V. Leithardt and C. Zeferino, “Privacy in the Internet of Things:
A Study to Protect User's Data in LPR Systems Using Blockchain”, in 2019 17th International Conference
_on_ _Privacy,_ _Security_ _and_ _Trust_ _(PST),_ _Fredericton,_ NB, Canada, 2019, pp. 1-5, doi:
10.1109/PST47121.2019.8949076.
[5] M. Khari, A. K. Garg, A. H. Gandomi, R. Gupta, R. Patan and B. Balusamy, “Securing Data in Internet of
Things (IoT) Using Cryptography and Steganography Techniques”, _IEEE Transactions on Systems, Man,_
_and Cybernetics: Systems, Vol. 50, No. 1, 2020, pp. 73-80, doi: 10.1109/TSMC.2019.2903785._
[6] H. Al Hamid, S. Rahman, M. Hossain, A. Almogren and A. Alamri, “A Security Model for Preserving the
Privacy of Medical Big Data in a Healthcare Cloud Using a Fog Computing Facility with Pairing-Based
Cryptography”, IEEE Access, Vol. 5, 2017, pp. 22313-22328, doi: 10.1109/access.2017.2757844.
[7] N. Sharma, H. Parveen Sultana, R. Singh and S. Patil, “Secure Hash Authentication in IoT based
Applications”, Procedia Computer Science, Vol. 165, 2019, pp. 328-335, doi: 10.1016/j.procs.2020.01.042.
[8] S. Suhail, R. Hussain, A. Khan and C. S. Hong, “On the Role of Hash-Based Signatures in Quantum-Safe
Internet of Things: Current Solutions and Future Directions”, IEEE Internet of Things Journal, Vol. 8, No. 1,
2021, pp. 1-17, doi: 10.1109/JIOT.2020.3013019.
[9] K. Saravanan and A. Senthilkumar, “Theoretical Survey on Secure Hash Functions and Issues”, International
_Journal of Engineering Research & Technology (IJERT), Vol. 2, No. 10, 2013, pp. 1150-1153._
[10] A. Jurcut, T. Niculcea, P. Ranaweera and N. Le-Khac, “Security Considerations for Internet of Things: A
Survey”, SN Computer Science, Vol. 1, No. 4, 2020, pp. 1-19, doi: 10.1007/s42979-020-00201-3.
[11] H. Huang, Q. Huang, F. Xiao, W. Wang, Q. Li and T. Dai, “An Improved Broadcast Authentication Protocol
for Wireless Sensor Networks Based on the Self-Reinitializable Hash Chains”, Security and Communication
_Networks, Vol. 2020, 2020, pp. 1-17, doi: 10.1155/2020/8897282._
[12] M. Stevens, E. Bursztein, P. Karpman, A. Albertini and Y. Markov, “The First Collision for Full SHA-1”, in
_Annual International Cryptology Conference, Santa Barbara, USA, 2017, pp. 570-596, doi: 10.1007/978-3-_
319-63688-7_19.
[13] R. Martino and A. Cilardo, “SHA2 Acceleration Meeting the Needs of Emerging Applications: A
Comparative Survey”, IEEE Access, Vol. 8, 2020, pp. 28415-28436, doi: 10.1109/access.2019.2920089.
[14] A. Mohammed Ali and A. Kadhim Farhan, “A Novel Improvement with an Effective Expansion to Enhance the MD5 Hash Function for Verification of a Secure E-Document”, IEEE Access, Vol. 8, 2020, pp. 80290-80304, doi: 10.1109/ACCESS.2020.2989050.
[15] H. Liu, A. Kadir and J. Liu, “Keyed Hash Function Using Hyper Chaotic System with Time-Varying
Parameters Perturbation”, _IEEE_ _Access,_ Vol. 7, 2019, pp. 37211-37219, doi:
10.1109/ACCESS.2019.2896661.
[16] M. Rathor and A. Sengupta, “IP Core Steganography Using Switch Based Key-Driven Hash-Chaining and
Encoding for Securing DSP Kernels Used in CE Systems”, _IEEE Transactions on Consumer Electronics,_
Vol. 66, No. 3, 2020, pp. 251-260, doi: 10.1109/TCE.2020.3006050.
[17] J. Ouyang, X. Zhang and X. Wen, “Robust Hashing Based on Quaternion Gyrator Transform for Image
Authentication”, IEEE Access, Vol. 8, 2020, pp. 220585-220594, doi: 10.1109/ACCESS.2020.3043111.
[18] Y. Zhou, B. Yang, Z. Xia, Y. Mu and T. Wang, “Anonymous and Updatable Identity-Based Hash Proof
System”, IEEE Systems Journal, Vol. 13, No. 3, 2019, pp. 2818-2829, doi: 10.1109/JSYST.2018.2878215.
[19] D. I. Nassr, “Secure Hash Algorithm-2 formed on DNA”, _Journal of the Egyptian Mathematical Society,_
Article No. 24, 2019, pp. 1-20, doi: 10.1186/s42787-019-0037-6.
[20] Y. Lee, S. Rathore, J. H. Park, and J. H. Park, “A Blockchain-Based Smart Home Gateway Architecture for
Preventing Data Forgery”, Human-centric Computing and Information Science, Vol. 10, No. 1, 2020, pp. 1-14, doi: 10.1186/s13673-020-0214-5.
[21] M. A. Rezazadeh Baee, L. Simpson, X. Boyen, E. Foo, and J. Pieprzyk, “Authentication Strategies in
Vehicular Communications: A Taxonomy and Framework”, EURASIP Journal on Wireless Communication
_and Networking, Vol. 2021, No. 1, 2021, pp. 1-50, doi: 10.1186/s13638-021-01968-6._
[22] R. Martino and A. Cilardo, “SHA2 Acceleration Meeting the Needs of Emerging Applications: A
Comparative Survey”, IEEE Access, Vol. 8, 2020, pp. 28415-28436, doi: 10.1109/ACCESS.2020.2972265.
[23] I. E. Salem, A. M. Salman and M. M. Mijwil, “A Survey: Cryptographic Hash Functions for Digital
Stamping”, _Journal of Southwest Jiaotong University, Vol. 54, No. 6, 2019, pp. 1-11, doi:_
10.35741/issn.0258-2724.54.6.2.
[24] Jianhua Mo, Xiawen Xiao, Meixia Tao and Nanrun Zhou, “Hash Function Mapping Design Utilizing
Probability Distribution for Preimage Resistance”, in _2012 IEEE Global Communications Conference_
_(GLOBECOM), Anaheim, CA, 2012, pp. 862-867, doi: 10.1109/GLOCOM.2012.6503221._
[25] B. Preneel, “Second Preimage Resistance”, _Encyclopedia of Cryptography and Security, Boston, MA,_
Springer, 2005, doi: 10.1007/978-3-540-25937-4_24.
[26] A. Maetouq and S. M. Daud, “HMNT: Hash Function Based on New Mersenne Number Transform”, _IEEE_
_Access, Vol. 8, 2020, pp. 80395-80407, doi: 10.1109/ACCESS.2020.2989820._
[27] W. Jing, D. Zhang and H. Song, “An Application of Ternary Hash Retrieval Method for Remote Sensing
Images in Panoramic Video”, _IEEE_ _Access,_ Vol. 8, 2020, pp. 140822-140830, doi:
10.1109/ACCESS.2020.3006103.
[28] H. Cui, L. Zhu, J. Li, Y. Yang and L. Nie, “Scalable Deep Hashing for Large-Scale Social Image Retrieval”,
_IEEE Transactions on Image Processing, Vol. 29, 2020, pp. 1271-1284, doi: 10.1109/TIP.2019.2940693._
[29] Y. Zheng, Y. Cao and C. H. Chang, “UDhashing: Physical Unclonable Function-Based User-Device Hash for Endpoint Authentication”, IEEE Transactions on Industrial Electronics, Vol. 66, No. 12, 2019, pp. 9559-9570, doi: 10.1109/TIE.2019.2893831.
[30] A. Biswas, A. Majumdar, S. Nath, A. Dutta and K. L. Baishnab, “LRBC: A Lightweight Block Cipher
Design for Resource Constrained IoT Devices”, Journal of Ambient Intelligence and Humanized Computing,
2020, pp. 1-15, doi: 10.1007/s12652-020-01694-9.
[31] P. Chanal and M. Kakkasageri, “Security and Privacy in IoT: A Survey”, _Wireless Personal_
_Communications, Vol. 115, No. 2, 2020, pp. 1667-1693, doi: 10.1007/s11277-020-07649-9._
[32] E. Molina and E. Jacob, “Software-Defined Networking in Cyber-Physical System: A Survey”, Computers &
_Electrical Engineering, Vol. 66, 2018, pp. 407-419, doi: 10.1016/j.compeleceng.2017.05.013._
[33] I. Graja, S. Kallel, N. Guermouche, S. Cheikhrouhou and A. Hadj Kacem, “A Comprehensive Survey on
Modeling of Cyber‐Physical Systems”, Concurrency and Computation: Practice and Experience, Vol. 32,
No. 5, 2018, pp. 1-18, doi: 10.1002/cpe.4850.
[34] B. U. I. Khan, R. F. Olanrewaju, F. Anwar and M. Yaacob, “Offline OTP Based Solution for Secure Internet
Banking Access”, in 2018 IEEE Conference on e-Learning, e-Management and e-Services (IC3e), Langkawi,
Malaysia, 2018, pp. 167-172, doi: 10.1109/IC3e.2018.8632643.
[35] B. U. I. Khan, R. F. Olanrewaju, F. Anwar, R. N. Mir and A. Najeeb, “A Critical Insight into the
Effectiveness of Research Methods Evolved to Secure IoT Ecosystem”, International Journal of Information
_and Computer Security, Vol. 11, No. 45, 2019, pp. 332-354, doi: 10.1504/ijics.2019.10023470._
[36] C. Jin, “Cryptographic Solutions for Cyber-Physical System Security”, Doctoral Dissertation, University of
Connecticut, Storrs, 2019.
[37] A. Tawalbeh and H. Tawalbeh, “Lightweight Crypto and Security”, Security and Privacy in Cyber-Physical
_Systems, John Wiley & Sons, 2017, pp. 243-261, doi: 10.1002/9781119226079.ch12._
[38] G. Sabaliauskaite and A. Mathur, “Aligning Cyber-Physical System Safety and Security”, Complex System
_Design & Management, Cham, Springer, 2021, pp. 41-53, doi: 10.1007/978-3-319-12544-2_4._
[39] J. Wang, T. Zhang, J. Song, N. Sebe and H. Shen, “A Survey on Learning to Hash”, IEEE Transactions on
_Pattern_ _Analysis_ _and_ _Machine_ _Intelligence,_ Vol. 40, No. 4, 2018, pp. 769-790, doi:
10.1109/tpami.2017.2699960.
[40] M. Ebrahim, S. Khan and U. B. Khalid, “Symmetric Algorithm Survey: A Comparative Analysis”,
_International Journal of Computer Applications, Vol. 61, No. 20, 2013, pp. 12-19, doi: 10.5120/10195-4887._
[41] M. J. Moayed, A. A. A. Ghani and R. Mahmod, “A Survey on Cryptography Algorithms in Security of
Voting System Approaches”, in 2008 International Conference on Computational Sciences and its
_Applications, Perugia, Italy, 2008, pp. 190-200, doi: 10.1109/ICCSA.2008.42._
[42] D. Lee, “Hash Function Vulnerability Index and Hash Chain Attacks”, in 2007 3rd IEEE Workshop on
_Secure Network Protocols, Beijing, China, 2007, pp. 1-6, doi: 10.1109/NPSEC.2007.4371616._
[43] F. Breitinger and H. Baier, “Properties of a Similarity Preserving Hash Function and Their Realization in
SDhash”, in 2012 Information Security for South Africa, Johannesberg, South Africa, 2012, pp. 1-8, doi:
10.1109/ISSA.2012.6320445.
[44] S. El Moumni, M. Fettach and A. Tragha, “High Throughput Implementation of SHA3 Hash Algorithm on
Field Programmable Gate Array (FPGA)”, _Microelectronics Journal, Vol. 93, 2019, pp. 1-8, doi:_
10.1016/j.mejo.2019.104615.
[45] H. Choi and S. Seo, “Fast Implementation of SHA-3 in GPU Environment”, _IEEE Access, Vol. 9, pp._
144574-144586, 2021, doi: 10.1109/access.2021.3122466.
[46] S. Rahaman, N. Meng and D. Yao, “Tutorial: Principles and Practices of Secure Crypto Coding in Java”, in
_2018 IEEE Cybersecurity Development (SecDev), Cambridge, MA, 2018, pp. 122-123, doi:_
10.1109/SecDev.2018.00024.
[47] A. Baksi, S. Bhasin, J. Breier, D. Jap and D. Saha, “Fault Attacks in Symmetric Key Cryptosystems”, IACR
_Cryptology ePrint Archive, 2020, pp. 1-24._
[48] P. K. Gundaram, A. Naidu Tentu and N. B. Muppalaneni, “Performance of Various SMT Solvers in
Cryptanalysis”, in _2021 International Conference on Computing, Communication, and Intelligent Systems_
_(ICCCIS), Greater Noida, India, 2021, pp. 298-303, doi: 10.1109/ICCCIS51004.2021.9397110._
|
{
"disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.22452/mjcs.vol35no3.1?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.22452/mjcs.vol35no3.1, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "BRONZE",
"url": "https://ejournal.um.edu.my/index.php/MJCS/article/download/35355/14869"
}
| 2,022
|
[] | true
| 2022-07-27T00:00:00
|
[] | 18,628
|
en
|
[
{
"category": "Engineering",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/00eb6f6ebb68f199f12507a6bd585718f464d75d
|
[
"Engineering"
] | 0.82397
|
Revision and Enhancement of Two Three Party Key Agreement Protocols Vulnerable to KCI Attacks
|
00eb6f6ebb68f199f12507a6bd585718f464d75d
|
[
{
"authorId": "3111122",
"name": "M. A. Strangio"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": null,
"id": null,
"issn": null,
"name": null,
"type": null,
"url": null
}
|
In two recent papers, Zuowen Tan (Secu- rity and Communication Networks) and Chih-ho Chou et al. (Computers and Electronics) published three party key agreement protocols especially convenient for protecting communications in mobile-centric environ- ments such as e-payments, vehicular mobile networks (VMN), RFID applications, etc. For his protocol, Tan provides a formal security proof developed in a model of distributed computing based on the seminal work of Bellare and Rogaway. In this paper, we show that both protocols are vulnerable to KCI attacks. We suggest modifications to both protocols that fix the vulnerability at the expense of a small decrease in their computational efficiency.
|
# Revision and Enhancement of Two Three Party Key Agreement Protocols Vulnerable to KCI Attacks
### Maurizio Adriano Strangio
Department of Mathematics, University of Rome “Roma Tre”, Rome, Italy
_∗Corresponding Author: strangio@mat.uniroma3.it_
Copyright © 2014 Horizon Research Publishing. All rights reserved.
### Abstract In two recent papers, Zuowen Tan (Security and Communication Networks) and Chih-ho Chou
_et al._ (Computers and Electronics) published three
party key agreement protocols especially convenient for
protecting communications in mobile-centric environments such as e-payments, vehicular mobile networks
(VMN), RFID applications, etc.
For his protocol, Tan provides a formal security proof
developed in a model of distributed computing based on
the seminal work of Bellare and Rogaway.
In this paper, we show that both protocols are
vulnerable to KCI attacks. We suggest modifications to
both protocols that fix the vulnerability at the expense
of a small decrease in their computational efficiency.
### Keywords Three Party Key Agreement, Key-Compromise Impersonation, Mobile Network Communications
## 1 Introduction
In two recent papers, Tan [2] and Chou et al. [5]
present three party key agreement protocols especially
convenient for protecting communications in mobile-centric environments such as e-payments, vehicular mobile networks (VMN), RFID applications, etc.
For his protocol, Tan provides a formal security proof
in a model of distributed computing based on the work
of Bellare and Rogaway [6, 20] and Abdalla et al. [1].
Unfortunately, both protocols are vulnerable to a particular man-in-the-middle attack known as Key Compromise Impersonation (KCI) [12, 15]. In such attacks, an adversary that has obtained the private key of party A can impersonate a legitimate peer B; if the attack is successful, A will share a session key with the adversary (instead of B). This is a subtle attack that can have drastic consequences, since the adversary may obtain sensitive keying material (e.g. passwords) or private data (e.g. credit card numbers). With three party protocols the adversary may impersonate a peer (A or B) or the trusted third party (S).
In this paper, we describe successful KCI attacks
against the aforementioned protocols and also suggest
modifications to fix the vulnerabilities at the expense of
a small decrease in their computational efficiency.
The rest of the paper is organized as follows. We describe KCI attacks against the protocols of Tan and Chou et al. in Sections 2 and 3, respectively. In Section 4, we suggest modifications to the above protocols in order to prevent those attacks. Finally, Section 5 contains our concluding remarks.
## 2 A KCI attack against Tan’s protocol
In this section we review Tan’s three-party key agreement protocol and describe how a successful KCI attack
can be conducted by an adversary that has compromised
the private keying material of an honest party.
### 2.1 Review of the protocol specification
The protocol consists of an initialization phase (wherein A and B register with the trusted server S and obtain dA = h(IDA∥x) and dB = h(IDB∥x) respectively, where x is the master key held by S) and the subsequent authenticated key agreement phase (Figure 1).
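As a concrete aside, the registration step amounts to a single keyed hash per party. The following is a minimal Python sketch (ours, not the paper's code), assuming SHA-256 as a stand-in for h and placeholder identity strings:

```python
# Toy sketch of the initialization phase: S derives each party's long-term
# key by hashing its identity together with the master key x.
# SHA-256 stands in for h; "Alice"/"Bob" are placeholder identities.
import hashlib
import secrets

x = secrets.token_bytes(32)                  # master key held by S
dA = hashlib.sha256(b"Alice" + x).digest()   # dA = h(IDA || x)
dB = hashlib.sha256(b"Bob" + x).digest()     # dB = h(IDB || x)
```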
System parameters are defined by the following tuple:

Φ_TAN = (p, n, E, G, P, Ek(·), Dk(·), h(·))

where

(i) p, n are prime numbers;
(ii) E(Fp) is an elliptic curve defined by the equation y² = x³ + ax + b over the finite field Fp, where 4a³ + 27b² ≢ 0 (mod p);
(iii) the master key x ∈R Zn*, the latter being the set of residues modulo n;
(iv) G is the group generated by the point P of order n over E(Fp);
(v) Ek(·), Dk(·) are symmetric encryption/decryption algorithms;
(vi) h(·) is a hash function.
A[dA], B[dB], S[dA, dB, x]

A → B:  {IDA, request}
A:      a ←R Zn*
        R1 ← aP,  e1 ← h(R1∥IDA∥IDB)
        e2 ← EdA(R1∥IDA∥IDB∥e1)
A → S:  {IDA, IDB, e2}
B:      b ←R Zn*
        R2 ← bP,  e3 ← h(R2∥IDB∥IDA)
        e4 ← EdB(R2∥IDB∥IDA∥e3)
B → S:  {IDB, IDA, e4}
S:      dA ← h(IDA∥x),  dB ← h(IDB∥x)
        R′1∥ID′A∥ID′B∥e′1 = DdA(e2)
        R′2∥ID′B∥ID′A∥e′3 = DdB(e4)
        if e′1 ≠ h(R′1∥IDA∥IDB) abort
        if e′3 ≠ h(R′2∥IDB∥IDA) abort
        e ← h(R′1∥R′2∥IDA∥IDB)
        Q1 ← EdA(R′1∥R′2∥e)
        Q2 ← EdB(R′2∥R′1∥e)
S → A:  {Q1}
S → B:  {Q2}
A:      R″1∥R″2∥e″ = DdA(Q1)
        if R″1 ≠ R1 abort
        if e″ ≠ h(R″1∥R″2∥IDA∥IDB) abort
        skA ← h(aR″2∥IDA∥IDB)
B:      R″2∥R″1∥e″ = DdB(Q2)
        if R″2 ≠ R2 abort
        if e″ ≠ h(R″1∥R″2∥IDA∥IDB) abort
        skB ← h(bR″1∥IDA∥IDB)

**Figure 1. Tan's Protocol**
### 2.2 Description of the KCI attack scenario
Below we provide a detailed description of a successful
KCI attack against Tan’s protocol:
1. The adversary E obtains A's private key dA = h(IDA∥x);
2. A generates a random nonce a ∈ Zn*, computes R1 = aP, e1 = h(R1∥IDA∥IDB), e2 = EdA(R1∥IDA∥IDB∥e1) and sends {IDA, request} and {IDA, IDB, e2} to B and S respectively to initiate a protocol instance with B;
3. E intercepts the messages {IDA, request} and {IDA, IDB, e2}, decrypts the ciphertext DdA(e2) = R′1∥ID′A∥ID′B∥e′1, chooses a random nonce c ∈ Zn*, computes R′2 ← cP, e ← h(R′1∥R′2∥IDA∥IDB) and Q1 ← EdA(R′1∥R′2∥e), and sends {Q1} to A while impersonating S;
4. on receipt of Q1, A computes DdA(Q1) = R″1∥R″2∥e″. The equations R1 = R″1 and e″ = h(R″1∥R″2∥IDA∥IDB) are verified (indeed they are, since the transcripts are indistinguishable from those exchanged by honest parties), so A terminates with the session key skA = h(aR″2∥IDA∥IDB);
5. E computes sk′ = h(cR′1∥IDA∥IDB) and will be able to establish a communication session with A since sk′ = skA.
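The algebra behind steps 4 and 5 (aR″2 = acP = cR1) can be checked mechanically. Below is a minimal Python sketch (ours, not the paper's code), assuming a toy multiplicative group modulo a prime in place of the elliptic curve, so that aP becomes pow(g, a, p); the transport ciphertexts e2 and Q1 are omitted, since possession of dA is precisely what lets the adversary read R1 and forge Q1:

```python
# Toy demonstration of the key algebra in the KCI attack on Tan's protocol.
# Assumptions: pow(g, a, p) stands in for the scalar multiplication aP and
# SHA-256 stands in for h; no security is claimed for these parameters.
import hashlib
import secrets

p, g = 2**255 - 19, 2
IDA, IDB = b"Alice", b"Bob"

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"||".join(parts)).digest()

def to_bytes(n: int) -> bytes:
    return n.to_bytes(32, "big")

a = secrets.randbelow(p - 2) + 1      # A's ephemeral nonce (never revealed)
R1 = pow(g, a, p)                     # R1 = aP, read by the adversary via dA

c = secrets.randbelow(p - 2) + 1      # adversary's nonce, impersonating B and S
R2 = pow(g, c, p)                     # R2 = cP, delivered to A inside a forged Q1

skA = h(to_bytes(pow(R2, a, p)), IDA, IDB)   # A:  skA = h(aR2 || IDA || IDB)
skE = h(to_bytes(pow(R1, c, p)), IDA, IDB)   # E:  sk' = h(cR1 || IDA || IDB)
assert skA == skE                            # E now shares A's session key
```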
## 3 A KCI attack against Chou et al.’s protocol
In this section we review Chou et al.’s three-party key
agreement protocol and describe how a successful KCI
attack can be conducted by an adversary.
### 3.1 Review of the protocol specification
This protocol also comprises an initialization phase (wherein users A and B register with the trusted server S and obtain YA = skA·PKS and YB = skB·PKS) and an authenticated key agreement phase (Figure 2).

System parameters are defined by the following tuple:

Φ_CHOU = (p, n, E, G, P, Ek(·), Dk(·), h(·))

where each parameter is defined as in Section 2.
A[skA, PKA], B[skB, PKB], S[skS, PKS]

A:      rA ←R Zp*
        RA ← rA·PKA + YA
        KA ← rA·YA = (KA.x, KA.y)
        CA ← EKA.x(RA, IDA, IDB, TA)
A → B:  {IDA, request}
A → S:  {IDA, RA, CA, TA}
B:      rB ←R Zp*
        RB ← rB·PKB + YB
        KB ← rB·YB = (KB.x, KB.y)
        CB ← EKB.x(RB, IDB, IDA, TB)
B → A:  {IDB, response}
B → S:  {IDB, RB, CB, TB}
S:      KA ← skS(RA − skS·PKA) = (KA.x, KA.y)
        KB ← skS(RB − skS·PKB) = (KB.x, KB.y)
        RA, IDA, IDB, TA = DKA.x(CA)
        RB, IDB, IDA, TB = DKB.x(CB)
        check TA, TB, RA, RB
        CSA ← EKA.x(RA, KB, IDA, IDS, TS)
        CSB ← EKB.x(RB, KA, IDB, IDS, TS)
S → A:  {CSA, TS}
S → B:  {CSB, TS}
A:      RA, KB, IDA, IDS, TS = DKA.x(CSA)
        check RA, TS
        sk ← rA·skA·KB
B:      RB, KA, IDB, IDS, TS = DKB.x(CSB)
        check RB, TS
        sk ← rB·skB·KA

**Figure 2. Chou et al.'s Protocol**
-----
### 3.2 Description of the KCI attack scenario
Below we provide a detailed description of a successful KCI attack against Chou et al.'s protocol:
1. The adversary E obtains A's private key skA;
2. A executes all operations according to the protocol specification and sends {IDA, request} and {IDA, RA, CA, TA} to B and S respectively to initiate a protocol instance with B;
3. E intercepts the message {IDA, RA, CA, TA}, chooses a random nonce rE ∈ Zp* and sends CSA = EKA.x(RA, KB, IDB, IDS, TS), where KB = rE·P, to A;
4. on receipt of CSA, A follows the protocol specification (RA, TS will pass the verification step) and terminates with the session key sk = rA·skA·KB = rA·skA·rE·P = rE·rA·PKA = rE·(RA − YA);
5. E can establish a communication session with A by computing sk = rE·(RA − YA), where YA = skA·PKS and thus is easily computed by the adversary.
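The key equalities in steps 4 and 5 can likewise be verified with a minimal Python sketch (ours, not the paper's code), assuming the same toy multiplicative stand-in: curve points become powers of g modulo a prime, point addition becomes multiplication and point subtraction becomes division; all transport ciphertexts are omitted, and only the session-key algebra is checked:

```python
# Toy demonstration of the key algebra in the KCI attack on Chou et al.'s
# protocol. Assumptions: pow(g, a, p) stands in for scalar multiplication,
# modular multiplication/division stand in for point addition/subtraction.
import secrets

p, g = 2**255 - 19, 2

sk_S = secrets.randbelow(p - 2) + 1
sk_A = secrets.randbelow(p - 2) + 1          # compromised by the adversary
PK_S = pow(g, sk_S, p)
PK_A = pow(g, sk_A, p)
Y_A = pow(PK_S, sk_A, p)                     # Y_A = sk_A * PK_S

# Honest A's round-one value.
r_A = secrets.randbelow(p - 2) + 1
R_A = (pow(PK_A, r_A, p) * Y_A) % p          # R_A = r_A*PK_A + Y_A

# Adversary, impersonating S, injects K_B = r_E * P via a forged CSA.
r_E = secrets.randbelow(p - 2) + 1
K_B = pow(g, r_E, p)

sk_session_A = pow(K_B, r_A * sk_A, p)                  # A: sk = r_A*sk_A*K_B
sk_adversary = pow(R_A * pow(Y_A, -1, p) % p, r_E, p)   # E: sk = r_E*(R_A - Y_A)
assert sk_session_A == sk_adversary          # the adversary shares A's key
```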
## 4 Revisiting the protocols to eliminate the vulnerability to KCI attacks
In this section we illustrate modifications to the protocols of Tan and Chou et al. that do not allow a malicious party to perform successful KCI attacks.
### 4.1 A KCI-resilient version of Tan's protocol
It is interesting to notice that Tan’s protocol cannot
withstand KCI attacks despite the fact that a formal
security proof was provided by the author (see Theorems 1, 2 in [2]). The arguments used by the author to support the proof of KCI-resilience (Theorem
2) require that the adversary must be able to forge the
transcript e4; however, this assumption misses the point
since the adversary does not need to faithfully reproduce
the protocol actions of B but must simply generate message transcripts (in this particular case, a Diffie-Hellman
ephemeral key R2 = cP) that are indistinguishable (for A) from the real ones. In general, for the sake of protocol
security analysis one assumes that a man-in-the-middle
attacker has total control over the network (i.e. she can
insert, delete, modify messages flowing across the network) and is allowed to achieve her goals by arbitrarily
diverging from the protocol specification.
To prevent the attack, Tan's protocol can be modified as follows: the trusted third party S shall generate public keys DA = dAP and DB = dBP for each peer (possibly during the initialization stage) and compute e = h(R′1∥R′2∥IDA∥IDB∥DA∥DB), Q1 = EdA(R′1∥R′2∥DB∥e) and Q2 = EdB(R′2∥R′1∥DA∥e); finally, A shall compute its session key as skA = h(aDB + dAR″2∥IDA∥IDB) (and similarly B computes skB = h(bDA + dBR″1∥IDA∥IDB)).
With the preceding modifications, Tan’s key exchange
can now withstand KCI attacks. Indeed, the session
key is computed using the method introduced in the
MTI/A0 [9] protocol, which is immune from KCI attacks; the adversary must now obtain both the private
key dA and the random nonce a to compute the session
key of A. A significant difference with the MTI protocol
is due to the fact that the keys DA, DB do not necessarily have to be pre-distributed to A, B respectively and
therefore do not require public key certificates.
The revised protocol enjoys KCI attack resilience
at the expense of an increased computational workload with respect to the original version. In particular, the trusted third party S must now compute two
additional scalar multiplications to generate the keys
_DA, DB._ However, for reasons of efficiency the computation can be performed prior to online executions of
the protocol. Each peer A, B is required to compute an
additional scalar multiplication (aDB, bDA) at runtime
to compute the session key.
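As a sanity check in the same toy model, the following minimal sketch verifies that the MTI/A0-style combination agrees on both sides; the exact key-derivation formula used here, h(aDB + dAR″2∥…), is an assumption for illustration, and only the group algebra is checked:

```python
# Toy check of the MTI/A0-style key combination (assumed formula).
# pow(g, a, p) stands in for aP; modular multiplication stands in for
# point addition, so a*D_B + d_A*R2 becomes D_B**a * R2**d_A (mod p).
import hashlib
import secrets

p, g = 2**255 - 19, 2
ib = lambda n: n.to_bytes(32, "big")

d_A = secrets.randbelow(p - 2) + 1           # long-term keys issued by S
d_B = secrets.randbelow(p - 2) + 1
D_A, D_B = pow(g, d_A, p), pow(g, d_B, p)    # D_A = d_A*P, D_B = d_B*P

a = secrets.randbelow(p - 2) + 1             # ephemeral nonces
b = secrets.randbelow(p - 2) + 1
R1, R2 = pow(g, a, p), pow(g, b, p)

# Each peer combines its ephemeral with the other's static key (MTI/A0 [9]):
Z_A = pow(D_B, a, p) * pow(R2, d_A, p) % p   # a*D_B + d_A*R2
Z_B = pow(D_A, b, p) * pow(R1, d_B, p) % p   # b*D_A + d_B*R1
assert Z_A == Z_B                            # both equal (a*d_B + b*d_A)P

# A KCI adversary holding only d_A (and choosing R2 = cP) would still need
# a or d_B to obtain the term a*D_B, hence cannot compute the session key.
sk = hashlib.sha256(ib(Z_A) + b"Alice" + b"Bob").hexdigest()
print("shared key:", sk[:16], "...")
```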
### 4.2 A KCI-resilient version of Chou et al.’s protocol
It is worth mentioning that the protocol of Chou et al. is actually a revised version of a flawed protocol previously proposed by Zuowen Tan [3], which was yet another version of the protocol originally published by Yang and Chang [4] (the latter also contained a vulnerability to impersonation attacks).
Chou et al.'s protocol can withstand KCI attacks with the following modifications. The trusted third party computes CSA = EKA.x(RA, KB, YB, IDA, IDS, TS) and CSB = EKB.x(RB, KA, YA, IDB, IDS, TS) and sends these transcripts to A and B respectively; with respect to the original specification, the terms YB, YA are included in CSA, CSB respectively. On receipt of CSA, A computes the session key sk = h(rAYB∥skAKB) (similarly for B), where h is an appropriate hash function.

The revised protocol (similarly to Tan's protocol) enjoys KCI attack resilience at the expense of a slightly increased computational workload with respect to the original version. In this case, each peer A, B needs an additional scalar multiplication (rAYB and rBYA, respectively) at runtime to compute the session key.
## 5 Concluding remarks
We have shown that the three party key agreement
protocols recently published in the literature by Tan [2]
and Chou et al. [5] are not resilient to KCI attacks although their authors claim the contrary.
Designing secure key agreement protocols is far from being a simple task; such protocols involve so many details and complicated interactions between different cryptographic primitives that it is nearly impossible to establish beyond doubt that they are infallible. Indeed, in the history of this subject there is an abundance of protocols that were more or less trivially broken, regardless of whether formal or heuristic security arguments were provided. A further problem with formal security proofs is that few people are capable of verifying them, because the maths can be quite sophisticated and may therefore be difficult to understand for the average practitioner.
In the last two decades many formal security models
have been proposed (which often differ by a few details)
but it still is not quite clear how to ”transfer” a security
property proved in one model to a different model and
to the real world (Choo et al. [14] have examined this
issue). Furthermore, provable security of cryptographic primitives is largely based on the use of computational assumptions; these have proliferated in the last decade [16], making it difficult to differentiate between their relative strengths (an interesting classification was proposed by Naor [17]). It is also important to analyse the security of cryptographic protocols not only from a theoretical point of view (e.g. an adversary interacting with parties in a black-box mode) but in more realistic models wherein an opponent may physically attack an honest peer (e.g. by tampering with devices [18]).
In any case, the publication of protocol specifications in specialised literature and conferences is of fundamental importance, since the peer review process and the subsequent period of public scrutiny can increase the confidence in the security of a protocol: either vulnerabilities are not discovered, or, if they are, proposals to fix them are immediately published.
A topic for future research is the development of a concrete implementation of the protocols discussed in this paper to evaluate real world security issues and efficiency. In particular, we plan to develop an implementation with the Java Cryptography Architecture framework [19], which uses a "provider"-based approach and contains a set of APIs for various purposes (e.g. encryption, key generation and management, secure random number generation, certificate validation, etc.).
## REFERENCES

[1] Abdalla M., Fouque P.A., Pointcheval D. 2005. Password-based authenticated key exchange in the three-party setting. Proceedings of PKC'05, LNCS 3386, pp. 65-84.
[2] Tan Z. 2012. A communication and computation-efficient three party authenticated key agreement protocol. Security Comm. Networks, DOI: 10.1002/sec.622.
[3] Tan Z. 2010. An enhanced three-party authentication key exchange protocol for mobile commerce environments. J. Commun., DOI: 10.4304/jcm.5.5.436-443.
[4] Yang J.H., Chang C.C. 2009. An efficient three-party authenticated key exchange protocol using elliptic curve cryptography for mobile-commerce environments. J. Syst. Software, DOI: 10.1016/j.jss.2009.03.075.
[5] Chou C.H., Tsai K.Y., Wu T.C., Yeh K.H. 2013. Efficient and secure three-party authenticated key exchange protocol for mobile environments, 14(5):347-355.
[6] Bellare M., Rogaway P. 1993. Entity authentication and key distribution. Advances in Cryptology - CRYPTO '93, Springer-Verlag, pp. 110-125.
[7] Canetti R., Krawczyk H. 2001. Analysis of key exchange protocols and their use for building secure channels. Proc. of Eurocrypt'01, LNCS 2045, pp. 453-474.
[8] Hankerson D., Menezes A.J., Vanstone S.A. 2004. Guide to Elliptic Curve Cryptography. Springer Professional Computing, New York.
[9] Matsumoto T., Takashima Y., Imai H. 1986. On seeking smart public-key distribution systems. Transactions of IEICE, Vol. E69, pp. 99-106.
[10] LaMacchia B., Lauter K., Mityagin A. 2007. Stronger security of authenticated key exchange. LNCS 4784, pp. 1-16.
[11] Mohammad Z., Lo C.C. 2009. Vulnerability of an improved elliptic curve Diffie-Hellman key agreement and its enhancement. Proc. of EBISS'09, pp. 1-5.
[12] Strangio M.A. 2006. On the resilience of key agreement protocols to key compromise impersonation. Cryptology ePrint Archive, http://eprint.iacr.org/2006/252.pdf.
[13] Wang S., Cao Z., Strangio M.A., Wang B. 2009. Cryptanalysis and improvement of an elliptic curve Diffie-Hellman key agreement protocol. IEEE Communication Letters, Vol. 12, No. 2, pp. 149-151.
[14] Choo K.R., Boyd C., Hitchcock Y. 2005. Examining indistinguishability-based proof models for key establishment protocols. Advances in Cryptology - ASIACRYPT 2005, Springer Berlin/Heidelberg, pp. 585-604.
[15] Boyd C., Mathuria A. 2003. Protocols for Authentication and Key Establishment. Springer Berlin/Heidelberg.
[16] ECRYPT 2013. Final Report on Main Computational Assumptions in Cryptography. ECRYPT II, http://www.ecrypt.eu.org/documents/D.MAYA.6.pdf.
[17] Naor M. 2003. On cryptographic assumptions and challenges. Advances in Cryptology, LNCS 2729, pp. 96-109.
[18] Anderson R. 2001. Security Engineering: A Guide to Building Dependable Distributed Systems. Wiley.
[19] Oracle 2013. Java Cryptography Architecture. http://docs.oracle.com/javase/7/docs/technotes/guides/security/crypto/CryptoSpec.html.
[20] Bellare M., Rogaway P. 1995. Provably secure session key distribution: the three party case. Proceedings of the ACM Symposium on the Theory of Computing (STOC '95), pp. 57-66.
-----
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.13189/WJCAT.2014.020204?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.13189/WJCAT.2014.020204, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "GOLD",
"url": "http://www.hrpub.org/download/20140105/WJCAT4-13701794.pdf"
}
| 2,014
|
[] | true
| null |
[
{
"paperId": "b99219fa436dcc8ec75ced6830b11c555439bf09",
"title": "A communication and computation-efficient three-party authenticated key agreement protocol"
},
{
"paperId": "2784a03c76d2597eab0a3c75a526b8a68ec8b347",
"title": "Efficient and secure three-party authenticated key exchange protocol for mobile environments"
},
{
"paperId": "f3c4fccc31a335d1d5b931331177015f4cd8246f",
"title": "Efficient and secure three-party authenticated key exchange protocol for mobile environments"
},
{
"paperId": "c8daa77f4e613cbf07378ae6614a51d0e2554325",
"title": "An Enhanced Three-Party Authentication Key Exchange Protocol for Mobile Commerce Environments"
},
{
"paperId": "31a99b99dbd97f5304a3a4329cb6f8f5aed00224",
"title": "An efficient three-party authenticated key exchange protocol using elliptic curve cryptography for mobile-commerce environments"
},
{
"paperId": "eec113a2cd5c92f4596c0c55646e184b80fc59b8",
"title": "Vulnerability of an Improved Elliptic Curve Diffie-Hellman Key Agreement and its Enhancement"
},
{
"paperId": "2d349ea5ae75426cac8a6928fe46248b2201ef7a",
"title": "Cryptanalysis and improvement of an elliptic curve Diffie-Hellman key agreement protocol"
},
{
"paperId": "5ace5930cd7f2bf8be646ba3fc950b52727fe838",
"title": "On the Resilience of Key Agreement Protocols to Key Compromise Impersonation"
},
{
"paperId": "a5b87921abef3f4c65b1c7f1f9d152bddb833aa2",
"title": "Stronger Security of Authenticated Key Exchange"
},
{
"paperId": "1f4f1f0cf6649a26cf046b16a7ec1aac72d0d58f",
"title": "Examining Indistinguishability-Based Proof Models for Key Establishment Protocols"
},
{
"paperId": "9859517d9c9a539227506e3f576ce40d1ed123ca",
"title": "Password-Based Authenticated Key Exchange in the Three-Party Setting"
},
{
"paperId": "a4013e651c0927d70f3c95474bb00869231a9557",
"title": "Protocols for Authentication and Key Establishment"
},
{
"paperId": "9c9ecf582e5b66e8e2cb171f95be9daf64fe784b",
"title": "On Cryptographic Assumptions and Challenges"
},
{
"paperId": "c54359ab93e998efdf4b7b4ae931a58a8955fb72",
"title": "Protocols for Key Establishment and Authentication"
},
{
"paperId": "e56f9dc2ef53c9479d3d94f593b66a40e6dba52b",
"title": "Analysis of Key-Exchange Protocols and Their Use for Building Secure Channels"
},
{
"paperId": "ca7ecb6e0625bb175bda5eb0655ca609b6b5a6f9",
"title": "Provably secure session key distribution: the three party case"
},
{
"paperId": "d4d2b30a290228436c570c5d274e8202f45a6988",
"title": "Entity Authentication and Key Distribution"
},
{
"paperId": "aa0c3b62c6f1e0d990bb58f52e488af986f065ee",
"title": "ON SEEKING SMART PUBLIC-KEY-DISTRIBUTION SYSTEMS."
},
{
"paperId": "7936ae14d2d0d4e68d5e68f522349f25e1fc79c9",
"title": "A Key Compromise Impersonation attack against Wang's Provably Secure Identity-based Key Agreement Protocol"
},
{
"paperId": null,
"title": "Efficient and secure three-party authenticated"
},
{
"paperId": null,
"title": "Java Cryptography Architecture"
},
{
"paperId": "27eb6c1f53e041c80801d24f59485c40cb2304ad",
"title": "Final Report on"
},
{
"paperId": "3bfdca5ffd79bdfb8b9110c84a23470d88d774a6",
"title": "Guide to Elliptic Curve Cryptography"
},
{
"paperId": "f0773786c41ae59cfd316fa0203bbfb6442c2ffc",
"title": "Security Engineering: A Guide to Building Dependable Distributed Systems"
},
{
"paperId": null,
"title": "work-load with respect to the original version"
},
{
"paperId": null,
"title": "Adversary A obtains A ’s private key d A = h ( ID A ∥ x )"
},
{
"paperId": null,
"title": "master key x"
},
{
"paperId": null,
"title": "A generates a random nonce a ∈ Z ∗ n"
},
{
"paperId": null,
"title": "A intercepts the messages"
},
{
"paperId": null,
"title": "(ii) E( F p ) is an elliptic curve defined by the equation y 2 = x 3 + ax + b over the finite field F p , where 4"
},
{
"paperId": null,
"title": "on receipt of C SA , A follows the protocol specifi-cation ( R A ; T S will pass the verification step) and terminates with the session key"
},
{
"paperId": null,
"title": "on receipt of Q 1 ,"
},
{
"paperId": null,
"title": "A executes all operations"
}
] | 5,858
|
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Mathematics",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/00eb7acc1610d210d538de47fd9ca0cb75faf055
|
[
"Computer Science"
] | 0.880526
|
Many-to-One Trapdoor Functions and Their Ralation to Public-Key Cryptosystems
|
00eb7acc1610d210d538de47fd9ca0cb75faf055
|
Annual International Cryptology Conference
|
[
{
"authorId": "1703441",
"name": "M. Bellare"
},
{
"authorId": "1808458",
"name": "S. Halevi"
},
{
"authorId": "1695851",
"name": "A. Sahai"
},
{
"authorId": "1723744",
"name": "S. Vadhan"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"Int Cryptol Conf",
"Annu Int Cryptol Conf",
"CRYPTO",
"International Cryptology Conference"
],
"alternate_urls": null,
"id": "212b6868-c374-4ba2-ad32-19fde8004623",
"issn": null,
"name": "Annual International Cryptology Conference",
"type": "conference",
"url": "http://www.iacr.org/"
}
| null |
## Many-to-One Trapdoor Functions
and Their Relation to Public-Key Cryptosystems
Mihir Bellare¹, Shai Halevi², Amit Sahai³, and Salil Vadhan³

¹ Dept. of Computer Science & Engineering, University of California at San Diego, 9500 Gilman Drive, La Jolla, CA 92093, USA. E-Mail: mihir@cs.ucsd.edu. URL: http://www-cse.ucsd.edu/users/mihir.
² IBM T. J. Watson Research Center, P.O. Box 704, Yorktown Heights, NY 10598, USA. E-Mail: shaih@watson.ibm.com.
³ MIT Laboratory for Computer Science, 545 Technology Square, Cambridge, MA 02139, USA. E-Mail: amits@theory.lcs.mit.edu, salil@math.mit.edu. URL: http://www-math.mit.edu/~salil.
**Abstract.** The heart of the task of building public key cryptosystems is viewed as that of "making trapdoors"; in fact, public key cryptosystems and trapdoor functions are often discussed as synonymous. How accurate is this view? In this paper we endeavor to get a better understanding of the nature of "trapdoorness" and its relation to public key cryptosystems, by broadening the scope of the investigation: we look at general trapdoor functions, that is, functions that are not necessarily injective (ie., one-to-one). Our first result is somewhat surprising: we show that non-injective trapdoor functions (with super-polynomial pre-image size) can be constructed from any one-way function (and hence it is unlikely that they suffice for public key encryption). On the other hand, we show that trapdoor functions with polynomial pre-image size are sufficient for public key encryption. Together, these two results indicate that the pre-image size is a fundamental parameter of trapdoor functions. We then turn our attention to the converse, asking what kinds of trapdoor functions can be constructed from public key cryptosystems. We take a first step by showing that in the random-oracle model one can construct injective trapdoor functions from any public key cryptosystem.
1 Introduction
A major dividing line in the realm of cryptographic primitives is that between
"one-way" and "trapdoor" primitives. The former effectively means the primi-
tives of private key cryptography, while the latter are typically viewed as tied
to public key cryptosystems. Indeed, the understanding is that the problem of
building public key cryptosystems is the problem of "making trapdoors."
Is it really? It is well known that injective (ie. one-to-one) trapdoor functions
suffice for public key cryptography [Ya,GoMi]. We ask: is the converse true as well? We take a closer look at the notion of a trapdoor, in particular from the point of view
of how it relates to semantically secure encryption schemes, and discover some
curious things. Amongst these are that "trapdoor one-way functions" are not
necessarily hard to build, and their relation to public key encryption is more
subtle than it might seem.
1.1 Background
The main notions discussed and related in this paper are one-way functions
[DiHe], trapdoor (one-way) functions [DiHe], semantically secure encryption
schemes [GoMi], and unapproximable trapdoor predicates [GoMi].
Roughly, a "one-way function" means a family of functions where each partic-
ular function is easy to compute, but most are hard to invert; trapdoor functions
are the same with the additional feature that associated to each particular func-
tion is some "trapdoor" information, possession of which permits easy inversion.
(See Section 2 for formal definitions.)
In the study of one-way functions, it is well appreciated that the functions
need not be injective: careful distinctions are made between "(general) one-
way functions", "injective one-way functions," or "one-way permutations." In
principle, the distinction applies equally well to trapdoor one-way functions. (In
the non-injective case, knowledge of the trapdoor permits recovery of some pre-
image of any given range point [DiHe].) However, all attention in the literature
has focused on injective trapdoor functions, perhaps out of the sense that this
is what is necessary for constructing encryption schemes: the injectivity of the
trapdoor function guarantees the unique decryptability of the encryption scheme.
This paper investigates general (ie. not necessarily injective) trapdoor one-
way functions and how they relate to other primitives. Our goal is to understand
exactly what kinds of trapdoor one-way functions are necessary and sufficient
for building semantically secure public key encryption schemes; in particular, is
injectivity actually necessary?
Among non-injective trapdoor functions, we make a further distinction based on "the amount of non-injectivity", measured by pre-image size. A (trapdoor, one-way) function is said to have pre-image size Q(k) (where k is the security parameter) if the number of pre-images of any range point is at most Q(k). We
key cryptosystems out of a trapdoor function.
Rather than directly working with public-key cryptosystems, it will be more
convenient to work with a more basic primitive called an unapproximable trap-
door predicate. Unapproximable trapdoor predicates are equivalent to semanti-
cally secure public key schemes for encrypting a single bit, and these in turn are
equivalent to general semantically secure cryptosystems [GoMi].
**1.2** **Results**
We have three main results. They are displayed in Fig. 1 together with known implications.

[Figure 1: a diagram relating semantically secure public-key cryptosystems, injective trapdoor functions, unapproximable trapdoor predicates, trapdoor functions with polynomially bounded pre-image size, one-way functions, and trapdoor functions with super-polynomial pre-image size, connected by Theorems 1-3 and the known implications of [GoMi], [Ya] and [ImLu].]

**Fig. 1.** Illustrating our results: _solid lines are standard implications; the dotted line is an implication in the random oracle model._
_One-way functions imply trapdoor functions._ Our first result, given in
Theorem 1, may seem surprising at first glance: we show that one-way functions
imply trapdoor functions. We present a general construction which, given an
arbitrary one-way function, yields a trapdoor (non-injective) one-way function.
Put in other words, we show that trapdoor functions are not necessarily hard
to build; it is the combination of trapdoorness with "structural" properties like
injectivity that may be hard to achieve. Thus the "curtain" between one-way
and trapdoor primitives is not quite as opaque as it may seem.
What does this mean for public key cryptography? Impagliazzo and Rudich
[ImRu] show that it would be very hard, or unlikely, to get a proof that one-way
functions (even if injective) imply public key cryptosystems. Hence, our result
shows that it is unlikely that any known technique can be used to construct
public key encryption schemes from generic, non-injective, trapdoor functions.
As one might guess given [ImRu], our construction does not preserve injectivity,
so even if the starting one-way function is injective, the resulting trapdoor one-
way function is not.
_Trapdoor functions with poly pre-image size yield cryptosystems._ In light of the above, one might still imagine that injectivity of the trapdoor functions is required to obtain public key encryption. Still, we ask whether the injectivity condition can be relaxed somewhat. Specifically, the trapdoor one-way functions which we construct from one-way functions have super-polynomial pre-image size. This leads us to ask about trapdoor functions with polynomially bounded pre-image size.

Our second result, Theorem 2, shows that trapdoor functions with polynomially bounded pre-image size suffice to construct unapproximable trapdoor predicates, and hence yield public key cryptosystems. This belies the impression that injectivity is essential for building a public key cryptosystem from a trapdoor function, and also suggests that the super-polynomial pre-image size in the construction of Theorem 1 is necessary.
#### From trapdoor predicates to trapdoor functions. We then turn to the
other side of the coin and ask what kinds of trapdoor functions must necessarily
exist to have a public key cryptosystem. Since unapproximable trapdoor pred-
icates and semantically secure public key cryptosystems are equivalent [GoMi]
we consider the question of whether unapproximable trapdoor predicates imply
injective trapdoor functions.
In fact whether or not semantically secure public key cryptosystems imply
injective trapdoor functions is not only an open question, but seems a hard one.
(In particular, a positive answer would imply injective trapdoor functions based
on the Diffie-Hellman assumption, a long standing open problem.) In order to
get some insight and possible approaches to it, we consider it in a random oracle
model (cf. [ImRu,BeRo]). Theorem 3 says that here the answer is affirmative:
given an arbitrary secure public key cryptosystem, we present a function that
has access to an oracle H, and prove the function is injective, trapdoor, and
one-way when H is random.
The construction of Theorem 3 is quite simple, and the natural next question
is whether the random oracle H can be replaced by some constructible crypto-
graphic primitive. In the full version of the paper [BHSV], we show that this
may be difficult, by showing that a cryptographically strong pseudorandom bit
generator [BlMi,Ya], which seems like a natural choice for this construction, does
not suffice. The next step may be to follow the approach initiated by Canetti
[Ca]: find an appropriate cryptographic notion which, if satisfied by H, would
suffice for the correctness of the construction, and then try to implement H via
a small family of functions. However, one should keep in mind that replacement
of a random oracle by a suitable constructible function is not always possible
[CGH]. Thus, our last result should be interpreted with care.
1.3 Discussion and implications
Theorems 1 and 2 indicate that pre-image size is a crucial parameter when con-
sidering the power of trapdoor functions, particularly with respect to construct-
ing public-key cryptosystems. The significance and interpretation of Theorem 3,
however, requires a bit more discussion.
At first glance, it may seem that public key cryptosystems "obviously im-
ply" injective trapdoor functions. After all, a public key cryptosystem permits
unique decryptability; doesn't this mean the encryption algorithm is injective?
No, because, as per [GoMi], it is a _probabilistic algorithm, and thus not a func-_
tion. To make it a function, you must consider it a function of two arguments,
the message and the coins, and then it may no longer be injective, because two
coin sequences could give rise to the same ciphertext for a given message. More-
over, it may no longer have a (full) trapdoor, since it may not be possible to
recover the randomness from the ciphertext. (Public key cryptosystems in the sense of [DiHe] do yield injective trapdoor functions, as the authors remark, but that's because encryption there is deterministic. It is
now understood that secure encryption must be probabilistic [GoMi].)
Theorem 3 has several corollaries. (Caveat: All in the random oracle model).
First, by applying a transformation of [BeRo], it follows that we can construct non-malleable and chosen-ciphertext secure encryption schemes based on the Ajtai-Dwork cryptosystem [AjDw]. Second, combining Theorems 3 and 2, the existence of trapdoor functions with polynomially bounded pre-image size implies the existence of injective trapdoor functions. (With high probability over the choice of oracle. See Remark 5.) Third, if the Decisional Diffie-Hellman problem is hard (this means the El Gamal [ElG] cryptosystem is semantically secure) then there exists an injective trapdoor function.
Note that in the random oracle model, it is trivial to construct (almost)
injective one-way functions: a random oracle mapping, say, n bits to 3n bits, is itself an injective one-way function except with probability 2^{-n} over the choice
of the oracle. However, random oracles do not directly or naturally give rise
to trapdoors [ImRu]. Thus, it is interesting to note that our construction in
Theorem 3 uses the oracle to "amplify" a trapdoor property: we convert the
weak trapdoor property of a cryptosystem (in which one can only recover the
message) to a strong one (in which one can recover both the message and the
randomness used).
Another interpretation of Theorem 3 is as a demonstration that there ex-
ists a model in which semantically secure encryption implies injective trapdoor
functions, and hence it may be hard to prove a separation result, in the style
of [ImRu], between injective trapdoor functions and probabilistic encryption
schemes.
#### 2 Definitions
We present definitions for one-way functions, trapdoor functions, and unapprox-
imable trapdoor predicates.
PRELIMINARIES. If S is any probability distribution then x ← S denotes the operation of selecting an element uniformly at random according to S, and [S] is the support of S, namely the set of all points having non-zero probability under S. If S is a set we view it as imbued with the uniform distribution and write x ← S. If A is a probabilistic algorithm or function then A(x, y, ... ; R) denotes the output of A on inputs x, y, ... and coins R, while A(x, y, ...) is the probability distribution assigning to each string the probability, over R, that it is output. For deterministic algorithms or functions A, we write z := A(x, y, ...) to mean that the output of A(x, y, ...) is assigned to z. The notation Pr[E : R1 ; R2 ; ... ; Rk] refers to the probability of event E after the random processes R1, ..., Rk are performed in order. If x and y are strings we write their concatenation as x∥y or just xy. "Polynomial time" means time polynomial in the security parameter k, PPT stands for "probabilistic, polynomial time", and "efficient" means computable in polynomial time or PPT.
-----
**2.1** **One-way and trapdoor function families**
We first define families of functions, then say what it means for them to be
one-way or trapdoor.
FAMILIES OF FUNCTIONS. A _family of functions_ is a collection F = {Fk}k∈N where each Fk is a probability distribution over a set of functions. Each f ∈ [Fk] has an associated domain Dom(f) and range Range(f). We require three properties of the family:

• _Can generate:_ The operation f ← Fk can be efficiently implemented, meaning there is a PPT _generation_ algorithm F-Gen that on input 1^k outputs a "description" of a function f distributed according to Fk. This algorithm might also output some auxiliary information aux associated to this function (this is in order to later model trapdoors).
• _Can sample:_ Dom(f) is efficiently samplable, meaning there is a PPT algorithm F-Samp that given f ∈ [Fk] returns a uniformly distributed element of Dom(f).
• _Can evaluate:_ f is efficiently computable, meaning there is a polynomial time evaluation algorithm F-Eval that given f ∈ [Fk] and x ∈ Dom(f) returns f(x).

For an element y ∈ Range(f) we denote the set of pre-images of y under f by f⁻¹(y) = { x ∈ Dom(f) : f(x) = y }. We say that F is _injective_ if f is injective (ie. one-to-one) for every f ∈ [Fk]. If in addition Dom(f) = Range(f) then we say that F is a family of permutations. We measure the amount of "non-injectivity" by looking at the maximum pre-image size. Specifically, we say that F has _pre-image size_ bounded by Q(k) if |f⁻¹(y)| ≤ Q(k) for all f ∈ [Fk], all y ∈ Range(f) and all k ∈ N. We say that F has polynomially bounded pre-image size if there is a polynomial Q(k) which bounds the pre-image size of F.
ONE-WAYNESS. Let F be a family of functions as above. The _inverting probability_ of an algorithm I(·,·) with respect to F is a function of the security parameter k, defined as

InvProb_F(I, k) := Pr[ x′ ∈ f⁻¹(y) : f ← Fk ; x ← Dom(f) ; y := f(x) ; x′ ← I(f, y) ].

F is _one-way_ if InvProb_F(I, k) is negligible for any PPT algorithm I.
TRAPDOORNESS. A family of functions is said to be trapdoor if it is possible, while generating an instance f, to simultaneously generate as auxiliary output "trapdoor information" tp, knowledge of which permits inversion of f. Formally, a family of functions F is _trapdoor_ if F-Gen outputs pairs (f, tp) where f is the "description" of a function as in any family of functions and tp is auxiliary _trapdoor information_. We require that there exists a probabilistic polynomial time algorithm F-Inv such that for all k, all (f, tp) ∈ [F-Gen(1^k)], and all points y ∈ Range(f), the algorithm F-Inv(f, tp, y) outputs an element of f⁻¹(y) with probability 1. A family of trapdoor functions is said to be one-way if it is also a family of one-way functions.
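As a reading aid, the four algorithms can be summarized as an abstract interface; the following Python sketch is our own framing of the definitions above, not something taken from the paper:

```python
# Minimal interface sketch for a trapdoor function family: the family
# algorithms F-Gen, F-Samp, F-Eval and the trapdoor inverter F-Inv.
from abc import ABC, abstractmethod
from typing import Any, Tuple

class TrapdoorFamily(ABC):
    @abstractmethod
    def gen(self, k: int) -> Tuple[Any, Any]:
        """F-Gen: on input 1^k, output (f, tp), a function description
        drawn from F_k together with its trapdoor information tp."""

    @abstractmethod
    def samp(self, f: Any) -> Any:
        """F-Samp: return a uniformly distributed element of Dom(f)."""

    @abstractmethod
    def eval(self, f: Any, x: Any) -> Any:
        """F-Eval: compute f(x) in polynomial time."""

    @abstractmethod
    def inv(self, f: Any, tp: Any, y: Any) -> Any:
        """F-Inv: given the trapdoor tp and y in Range(f), return some
        element of the pre-image set f^{-1}(y)."""
```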
-----
A good (candidate) example of a trapdoor, one-way function family which is non-injective is the _Rabin family_ [Rab]: here each function in Fk is four-to-one. (Traditionally, this function is used as the basis of a public key cryptosystem by first modifying it to be injective.)
_Remark 1._ It is well known that one can define one-way functions either in terms
of function families (as above), or in terms of a single function, and the two
are equivalent. However, for trapdoor functions, one must talk of families. To
maintain consistency, we use the family view of one-way functions as well.
2.2 Trapdoor Predicate Families
We define unapproximable trapdoor predicate families [GoMi]. Recall that such
a family is equivalent to a semantically secure public-key encryption scheme for
a single bit [GoMi].
_A predicate_ in our context means a probabilistic function with domain {0, 1}, meaning a predicate p takes a bit b and flips coins r to generate some output y = p(b; r). In a _trapdoor predicate family_ P = {Pk}k∈N, each Pk is a probability distribution over a set of predicates, meaning each p ∈ [Pk] is a predicate as above. We require:

• _Can generate:_ There is a generation algorithm P-Gen which on input 1^k outputs (p, tp) where p is distributed randomly according to Pk and tp is trapdoor information associated to p. In particular the operation p ← Pk can be efficiently implemented.
• _Can evaluate:_ There is a PPT algorithm P-Eval that given p and b ∈ {0,1} flips coins to output y distributed according to p(b).
We say P has _decryption error_ δ(k) if there is a PPT algorithm P-Inv who, with knowledge of the trapdoor, fails to decrypt only with this probability, namely

DecErr_P(P-Inv, k) := Pr[ b′ ≠ b : p ← Pk ; b ← {0, 1} ; y ← p(b) ; b′ ← P-Inv(p, tp, y) ]   (1)

is at most δ(k). If we say nothing it is to be assumed that the decryption error is zero, but sometimes we want to discuss families with non-zero (and even large) decryption error.
UNAPPROXIMABILITY. Let P be a family of trapdoor predicates as above. The _predicting advantage_ of an algorithm I(·,·) with respect to P is a function of the security parameter k, defined as

PredAdv_P(I, k) := Pr[ b′ = b : p ← Pk ; b ← {0, 1} ; y ← p(b) ; b′ ← I(p, y) ] − 1/2.

We say that P is _unapproximable_ if PredAdv_P(I, k) is negligible for any PPT algorithm I.
3 From one-way functions to trapdoor functions

**Theorem 1.** _Suppose there exists a family of one-way functions. Then there exists a family of trapdoor, one-way functions._

This is proved by taking an arbitrary family F of one-way functions and "embedding" a trapdoor to get a family G of trapdoor functions. The rest of this section is devoted to the proof.
**3.1** **Proof sketch of Theorem 1**
Given a family F = {Fk}k∈N of one-way functions we show how to construct a family G = {Gk}k∈N of trapdoor one-way functions.
Let us first sketch the idea. Given f ∈ Fk we want to construct g which "mimics" f but somehow embeds a trapdoor. The idea is that the trapdoor is a particular point α in the domain of f. Function g will usually just evaluate f, except if it detects that its input contains the trapdoor; in that case it will do something trivial, making g easy to invert given knowledge of the trapdoor. (This will not happen often in normal execution because it is unlikely that a randomly chosen input contains the trapdoor.) But how exactly can g "detect" the trapdoor? The first idea would be to include α in the description of g so that it can check whether its input contains the trapdoor, but then g would no longer be one-way. So instead the description of g will include β = f(α), an image of the trapdoor under the original function f, and g will run f on a candidate trapdoor to see whether the result matches β. (Note that we do not in fact necessarily detect the real trapdoor α; the trivial action is taken whenever some pre-image of β under f is detected. But that turns out to be OK.)
In the actual construction, g has three inputs, y, x, v, where v plays the role of the "normal" input to f; x plays the role of the candidate trapdoor; and y is the "trivial" answer returned in case the trapdoor is detected. We now formally specify the construction and sketch a proof that it is correct.

A particular function g ∈ [Gk] will be described by a pair (f, β) where f ∈ [Fk] and β ∈ Range(f). It is defined on inputs y, x, v by

g(y, x, v) = y if f(x) = β, and f(v) otherwise.   (2)

Here x, v ∈ Dom(f), and we draw y from some samplable superset S_f of Range(f). (To be specific, we set S_f to the set of all strings of length at most p(k) where p(k) is a polynomial that bounds the lengths of all strings in Range(f).) So the domain of g is Dom(g) = S_f × Dom(f) × Dom(f).
We now give an intuitive explanation of why G is one-way and trapdoor. First note that for any z it is the case that (z, α, α) is a pre-image of z under g, so knowing α enables one to invert in a trivial manner; hence G is trapdoor. For one-wayness, notice that if g(y, x, v) = z then either f(v) = z or f(x) = β. Thus, producing an element of g⁻¹(z) requires inverting f at either z or β, both of which are hard by the one-wayness of F. A formal proof that G satisfies the definition of a family of one-way trapdoor functions can be found in the full version of this paper [BHSV].
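The construction and its trivial trapdoor inversion are simple enough to run. The sketch below is a toy instantiation we add for illustration, assuming truncated SHA-256 as a stand-in for the one-way function f; no security is claimed for these parameters:

```python
# Toy instantiation of the Theorem 1 embedding:
#   g(y, x, v) = y  if f(x) = beta,  and f(v) otherwise,
# where the trapdoor is alpha and beta = f(alpha) is published with g.
import hashlib
import os

def f(x: bytes) -> bytes:                    # stand-in one-way function
    return hashlib.sha256(x).digest()[:8]

def gen():
    alpha = os.urandom(16)                   # the trapdoor: a random domain point
    beta = f(alpha)                          # its image, part of g's description
    return beta, alpha                       # (function description, trapdoor)

def g(beta: bytes, y: bytes, x: bytes, v: bytes) -> bytes:
    return y if f(x) == beta else f(v)       # trapdoor branch vs normal branch

def invert_with_trapdoor(alpha: bytes, z: bytes):
    return (z, alpha, alpha)                 # (z, alpha, alpha) is a pre-image of z

beta, alpha = gen()
z = f(os.urandom(16))                        # any target image
y, x, v = invert_with_trapdoor(alpha, z)
assert g(beta, y, x, v) == z                 # trivial inversion via the trapdoor
```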
_Remark 2._ One can verify that the trapdoor functions g produced in the above construction are regular if the original one-way functions f are regular. Thus, adding regularity as a requirement is _not_ likely to suffice for making public-key cryptosystems.
4 From trapdoor functions to cryptosystems
Theorem 1 coupled with [ImRu] says that it is unlikely that general trapdoor
functions will yield semantically secure public-key cryptosystems. However, in
our construction of Section 3.1 the resulting trapdoor function was "very non-
injective" in the sense that the pre-image size was exponential in the security
parameter. So, we next ask, what is the power of trapdoor function families with
polynomially bounded pre-image size? We show a positive result:
**Theorem** 2. _If there exist trapdoor one-way function families with polynomially_
_bounded pre-image size, then there exists a family of unapproximable trapdoor_
_predicates with exponentially small decryption error._
Theorem 2 extends the well-known result of [Ya,GoMi] that injective trapdoor
functions yield semantically secure public-key cryptosystems, by showing that
the injectivity requirement can be relaxed. Coupled with [ImRu] this also implies
that it is unlikely that the analogue of Theorem 1 can be shown for trapdoor
functions with polynomially bounded pre-image sizes.
4.1 Proof of Theorem 2
Let F = {Fk}keN be a family of trapdoor one-way functions with pre-image
size bounded by a polynomial Q. The construction is in two steps. We first
build an unapproximable family of trapdoor predicates P with decryption error
1/2 - 1/poly(k), and then reduce the decryption error by repetition to get the
family claimed in the theorem.
The first step uses the Goldreich-Levin inner-product construction [GoLe]. This construction says that if f is a one-way function, one can securely encrypt a bit b via (f(x), r, σ) where σ = b ⊕ (x ⊙ r), with r a random string, x ∈ Dom(f), and ⊙ denoting the inner product mod 2. Now, if f is an _injective trapdoor function_, then with the trapdoor information one can recover b from f(x), r, and σ by finding x and computing b = σ ⊕ (x ⊙ r). If instead f has polynomial-size pre-images, the "correct" x will only be recovered with an inverse polynomial probability. However, we will show that the rest of the time, the success probability is exactly 50%. This gives a noticeable (1/2 + 1/poly(k)) bias towards the right value of b. Now, this slight bias needs to be amplified, which is done by repeating the construction many times in parallel and having the decryptor take the majority of its guesses to the bit in the different coordinates. A full description and proof follow.
We may assume wlog that there is a polynomial l(k) such that Range(f) ⊆ {0,1}^{l(k)} for all f ∈ [Fk] and all k ∈ N. We now describe how to use the Goldreich-Levin inner-product construction [GoLe] to build P = {Pk}k∈N. We associate to f ∈ [Fk] the following predicate:

Predicate p(b):            // takes as input a bit b
    x ← Dom(f)             // choose x at random from the domain of f
    r ← {0,1}^{l(k)}       // choose a random l(k)-bit string
    σ := b ⊕ (x ⊙ r)       // XOR b with the Goldreich-Levin bit
    Output (f(x), r, σ)
Here ⊕ denotes XOR (ie. addition mod 2) and ⊙ denotes the inner product mod 2. The generator algorithm for P will choose (f, tp) ← F-Gen(1^k) and then output (p, tp) with p defined as above. Notice that p is computable in PPT if f is.

The inversion algorithm P-Inv is given p, the trapdoor tp, and a triple (y, r, σ). It first runs the inversion algorithm F-Inv of F on inputs f, tp, y to obtain x′, and then outputs the bit b′ = σ ⊕ (x′ ⊙ r). It is clear that the inversion algorithm is not always successful, but in the next claim we prove that it is successful appreciably more often than random guessing.
_Claim._ P is an unapproximable trapdoor predicate family, with decryption error at most 1/2 − 1/[2Q(k)].

_Proof._ We know that F is one-way. Thus, the inner product is a hardcore bit for F [GoLe]. This implies that P is unapproximable. It is left to show that the decryption error of P is as claimed, namely that DecErr_P(P-Inv, k) (as defined in Equation (1)) is at most 1/2 − 1/[2Q(k)].

Fix f, tp, b; let x, r be chosen at random as by p(b), let y = f(x), let σ = b ⊕ (x ⊙ r), let x′ ← F-Inv(f, tp, y), and let b′ = σ ⊕ (x′ ⊙ r). Notice that if x′ = x then b′ = b, but if x′ ≠ x then the random choice of r guarantees that b′ = b with probability at most 1/2. (Because F-Inv, who generates x′, gets no information about r.) The chance that x = x′ is at least 1/Q(k) (because F-Inv gets no information about x other than that f(x) = y), so

DecErr_P(P-Inv, k) ≤ (1 − 1/Q(k)) · 1/2 = 1/2 − 1/[2Q(k)]

as desired. □
Now, we can iterate the construction q(k) := O(kQ(k)²) times independently and decrypt via a majority vote to reduce the decryption error to e^{−k}. In more detail, our final predicate family P^q = {P^q_k}k∈N is like this. An instance p^q ∈ [P^q_k] is still described by a function f ∈ [Fk] and defined as p^q(b) = p(b)∥...∥p(b), meaning it consists of q(k) repetitions of the original algorithm p on independent coins. The inversion algorithm P^q-Inv is given the trapdoor tp and a sequence of triples

(y1, r1, σ1)∥...∥(yq(k), rq(k), σq(k)).

For i = 1, ..., q(k) it lets b′i = P-Inv(p, tp, (yi, ri, σi)). It outputs b′ which is 1 if the majority of the values b′1, ..., b′q(k) are 1, and 0 otherwise. Chernoff bounds show that DecErr_{P^q}(P^q-Inv, k) ≤ e^{−k}. Furthermore, standard "hybrid" arguments [GoMi,Ya] show that P^q inherits the unapproximability of P.
-----
#### Remark 3. Notice that Theorem 2 holds even if the family F only satisfies a very weak trapdoor property -- namely, that F-Inv produces an element of f⁻¹(y) with probability at least 1/p(k) for some polynomial p. Essentially the same proof will show that P-Inv can guess b correctly with probability at least 1/2 + 1/[2Q(k)p(k)].
5 From cryptosystems to trapdoor functions

In this section we investigate the relation between semantically secure public key cryptosystems and injective trapdoor functions. It is known that the existence of unapproximable trapdoor predicates is equivalent to the existence of semantically secure public-key encryption [GoMi]. It is also known that injective trapdoor one-way functions can be used to construct unapproximable trapdoor predicates [Ya] (see also [GoLe]). In this section, we ask whether the converse is true:

Question 1. Can unapproximable trapdoor predicates be used to construct injective trapdoor one-way functions?
Note the importance of the injectiveness condition in Question 1. We already
know that non-injective trapdoor functions can be constructed from trapdoor
predicates (whether the latter are injective or not) because trapdoor predicates
imply one-way functions [ImLu] which in turn imply trapdoor functions by
Theorem 1.
We suggest a construction which requires an additional "random looking"
function G and prove that the scheme is secure when G is implemented as a
random oracle (to which the adversary also has access). Hence, IF it is possible
to implement using one-way functions a function G with "sufficiently strong
randomness properties" to maintain the security of this scheme, then Question 1
would have a positive answer (as one-way functions can be constructed from
unapproximable trapdoor predicates [ImLu]).
The key difference between trapdoor functions and trapdoor predicates is
that predicates are probabilistic, in that their evaluation is a probabilistic process.
Hence, our construction is essentially a de-randomization process.
Suppose we have a family P of unapproximable trapdoor predicates, and we want to construct a family F of injective one-way trapdoor functions from P. A first approach would be to take an instance p of P and construct an instance f of F as

f(b1b2...bk∥r1∥...∥rk) = p(b1; r1)∥...∥p(bk; rk),

where k is the security parameter. Standard direct product arguments [Ya] imply that F constructed in this manner is one-way. However, F may fail to be trapdoor; the trapdoor information associated with p only allows one to recover b1, ..., bk, but not r1, ..., rk.

Our approach to fixing this construction is to instead have r1, ..., rk determined by applying some "random-looking" function G to b1, ..., bk:

f(b1b2...bk) = p(b1; r1)∥...∥p(bk; rk), where r1∥...∥rk = G(b1...bk).
Since G must be length-increasing, an obvious choice for G is a pseudo-random
generator. A somewhat circular intuitive argument can be made for the secu-
rity of this construction: If one does not know bl,... ,bk, then rl,... ,rk "look
random," and if rl,..., rk "look random," then it should be hard to recover
_bl,..., bk by the unapproximability of P. In the full version of the paper [BHSV],_
we show that this argument is in fact false, in that there is a choice of an un-
approximable trapdoor predicate P and a pseudorandom generator G for which
the resulting scheme is insecure.
However, it is still possible that there are choices of functions G that make the
above secure. Below we show that the scheme is secure when G is implemented
as a truly random function, ie. a random oracle (to which the adversary also
has access). Intuitively, having access to the oracle does not help the adversary
recover b1...bk for the following reason: the values of the oracle are irrelevant except at b1...bk, as they are just random strings that have nothing to do with b1...bk or f(b1...bk). The adversary's behavior is independent of the value of the oracle at b1...bk unless the adversary queries the oracle at b1...bk. On the other hand, if the adversary queries the oracle at b1...bk, it must already "know" b1...bk. Specifically, if the adversary queries the oracle at b1...bk with non-negligible probability then it can invert f with non-negligible probability without making the oracle call, by outputting the query. We now proceed with a more formal description of the random oracle model and our result.
THE RANDOM ORACLE MODEL. In any cryptographic scheme which operates in the random oracle model, all parties are given (in addition to their usual resources) the ability to make oracle queries [BeRo]. It is postulated that all oracle queries, independent of the party which makes them, are answered by a single function, denoted $\mathcal{O}$, which is uniformly selected among all possible functions (where the set of possible functions is determined by the security parameter). The definitions of families of functions and predicates are adapted to the random oracle model in a straightforward manner: we associate some fixed polynomial Q with each family of functions or predicates, such that on security parameter k all the algorithms in the above definitions are given oracle access to a function $\mathcal{O}: \{0,1\}^* \to \{0,1\}^{Q(k)}$. The probabilities in these definitions are then taken over the randomness of these algorithms and also over the choice of $\mathcal{O}$ uniformly at random among all such functions.
**Theorem 3.** _If there exists a family of unapproximable trapdoor predicates, then there exists a family of injective trapdoor one-way functions in the random oracle model._
_Remark 4._ Theorem 3 still holds even if the hypothesis is weakened to only require the existence of a family of unapproximable trapdoor predicates _in the random oracle model_. To see that this hypothesis is weaker, note that a family of unapproximable trapdoor predicates (in the standard, non-oracle model) remains unapproximable in the random oracle model, as the oracle only provides randomness which the adversary can generate on its own.
See Sections 1.2 and 1.3 for a discussion of the interpretation of such a result.
We now proceed to the proof.
**5.1** **Proof of Theorem 3**
Let $P = \{P_k\}_{k \in \mathbb{N}}$ be a family of unapproximable trapdoor predicates. Let $q(k)$ be a polynomial upper bound on the number of random bits used by any $p \in P_k$. When used with security parameter k, we view the oracle as a function $\mathcal{O}: \{0,1\}^* \to \{0,1\}^{kq(k)}$.

We define a family $F = \{F_k\}_{k \in \mathbb{N}}$ of trapdoor functions in the random oracle model as follows: we associate to any $p \in [P_k]$ the function f defined on input $b_1 \cdots b_k \in \{0,1\}^k$ by

$$f(b_1 \cdots b_k) = p(b_1; r_1) \| \cdots \| p(b_k; r_k),$$

where

$$r_1 \| \cdots \| r_k = \mathcal{O}(b_1 \cdots b_k), \quad r_i \in \{0,1\}^{q(k)}.$$
The generator _F-Gen_ takes input $1^k$, runs $(p, t_p) \leftarrow \textit{P-Gen}(1^k)$ and outputs $(f, t_p)$, where f is as defined above. It is clear that f can be evaluated in polynomial time using the evaluator _P-Eval_ for p.

Notice that f can be inverted given the trapdoor information. Given $f$, $t_p$, and $y_1 \| \cdots \| y_k = f(b_1 \cdots b_k)$, inverter _F-Inv_ computes $b_i = \textit{P-Inv}(p, t_p, y_i)$ for $i = 1, \ldots, k$, and outputs $b_1 \cdots b_k$. Furthermore, f is injective because P has zero decryption error: in this inversion process, _P-Inv_ correctly returns $b_i$, so we correctly recover the full input. It remains to show that F is one-way.
_Claim. F_ is one-way.
We prove this claim by describing several probabilistic experiments, modifying the role of the oracle with each experiment. The first arises from the definition of a family of one-way functions in the random oracle model. Let A be any PPT, let k be any positive integer, and let $q = q(k)$.

_Experiment 1._
(1) Choose a random oracle $\mathcal{O}: \{0,1\}^* \to \{0,1\}^{kq(k)}$.
(2) Choose $p \leftarrow P_k$.
(3) Select $b_1, \ldots, b_k$ uniformly and independently from $\{0,1\}$.
(4) Let $r_1 \| \cdots \| r_k = \mathcal{O}(b_1 \cdots b_k)$, where $|r_i| = q(k)$ for each i.
(5) Let $x = p(b_1; r_1) \| \cdots \| p(b_k; r_k)$.
(6) Compute $z \leftarrow A^{\mathcal{O}}(1^k, p, x)$.

We need to prove the following:

_Claim._ For every PPT A, the probability that $z = b_1 \cdots b_k$ in Experiment 1 is a negligible function of k.
To prove this claim, we first analyze what happens when the $r_i$'s are chosen independently of the oracle, as in the following experiment: Let A be any PPT, let k be any positive integer, and let $q = q(k)$.

_Experiment 2._
(1)–(3) As in Experiment 1.
(4) Select $r_1, \ldots, r_k$ uniformly and independently from $\{0,1\}^q$.
(5)–(6) As in Experiment 1.

_Claim._ For every PPT A, the probability that $z = b_1 \cdots b_k$ in Experiment 2 is a negligible function of k.
The above claim follows from standard direct product arguments [Ya,GNW]. Specifically, it is a special case of the uniform complexity version of the Concatenation Lemma in [GNW, Lemma 10].

_Claim._ For every PPT A, the probability that $\mathcal{O}$ is queried at point $b_1 \cdots b_k$ during the execution of $A^{\mathcal{O}}(1^k, p, x)$ in Step 6 of Experiment 2 is a negligible function of k.

_Proof._ Suppose that the probability that $\mathcal{O}$ is queried at point $b_1 \cdots b_k$ were greater than $1/s(k)$ for infinitely many k, where s is a polynomial. Then we could obtain a PPT $A'$ that violates the claim about inversion in Experiment 2 as follows. Let $t(k)$ be a polynomial bound on the running time of A. $A'$ does the following on input $(1^k, p, x)$:

(1) Select i uniformly from $\{1, \ldots, t(k)\}$.
(2) Simulate A on input $(1^k, p, x)$, with the following changes:
    (a) Replace the oracle responses with strings randomly selected on-line, with the condition that multiple queries at the same point give the same answer.
    (b) Halt the simulation at the i-th oracle query and let w be this query.
(3) Output w.

Then $A'$, when used in Experiment 2, outputs $b_1 \cdots b_k$ with probability greater than $1/(s(k)t(k))$ for infinitely many k, which contradicts the claim about inversion in Experiment 2. □
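To illustrate the mechanics of this reduction (an illustrative skeleton, not code from the paper), one can simulate A while answering its oracle queries with on-line sampled random strings and output the i-th query for a uniformly guessed index i. Modeling the adversary as a Python generator that yields queries and receives answers is a hypothetical convention of this sketch.

```python
import os, random

def reduction_A_prime(A, k, p, x, t_k):
    """Skeleton of A': guess an index i <= t(k), run A while answering
    its oracle queries with fresh random strings (consistently across
    repeated queries), and output A's i-th oracle query as the guess
    for b_1...b_k. A is assumed to be a generator that yields oracle
    queries and receives the corresponding answers via send()."""
    i = random.randint(1, t_k)      # guess which query hits b_1...b_k
    table, count, answer = {}, 0, None
    run = A(k, p, x)                # hypothetical adversary interface
    try:
        while True:
            query = run.send(answer)   # A makes its next oracle query
            count += 1
            if count == i:
                return query           # output the i-th query
            if query not in table:
                table[query] = os.urandom(32)  # on-line oracle simulation
            answer = table[query]
    except StopIteration:
        return None                    # A made fewer than i queries
```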
In order to deduce the claim about Experiment 1 from the two claims about Experiment 2, we give an equivalent reformulation of Experiment 1: Let A be any PPT, let k be any positive integer, and let $q = q(k)$.

_Experiment 3._
(1)–(3) As in Experiment 1.
(4) Select $r_1, \ldots, r_k$ uniformly and independently from $\{0,1\}^q$.
(5) Let $x = p(b_1; r_1) \| \cdots \| p(b_k; r_k)$.
(6) Modify $\mathcal{O}$ at location $b_1 \cdots b_k$ to have value $r_1 \| \cdots \| r_k$.
(7) Compute $z \leftarrow A^{\mathcal{O}}(1^k, p, x)$.

We now argue that Experiment 3 is equivalent to Experiment 1. In Experiment 1, $r_1, \ldots, r_k$ are uniformly and independently distributed in $\{0,1\}^q$, and after Step 5 of Experiment 1 the only information about the oracle that has been used is that $r_1 \| \cdots \| r_k = \mathcal{O}(b_1 \cdots b_k)$. Thus, the final distributions on all random variables are identical in the two experiments, and it suffices to prove the claim about Experiment 1 for Experiment 3.
_Proof._ Let E be the event that $z = b_1 \cdots b_k$ in Experiment 3. Let F be the event that $\mathcal{O}$ is queried at point $b_1 \cdots b_k$ during the execution of $A^{\mathcal{O}}(1^k, p, x)$ in Step 7 of Experiment 3. To show that E occurs with negligible probability, it suffices to argue that both F and $E \wedge \neg F$ occur with negligible probability.

First we show that F occurs with negligible probability. Notice that whether or not $A^{\mathcal{O}}$ queries $\mathcal{O}$ at $b_1 \cdots b_k$ in Experiment 3 will not change if Step 6 is removed. This is because its behavior cannot be affected by the change in $\mathcal{O}(b_1 \cdots b_k)$ until it has already queried that position of the oracle. If Step 6 is removed from Experiment 3, we obtain Experiment 2. Hence, the probability of F is negligible by the oracle-query claim about Experiment 2.

Similarly, the probability that [$z = b_1 \cdots b_k$ and $A^{\mathcal{O}}$ never queries the oracle at $b_1 \cdots b_k$] will not change if Step 6 is removed. Thus, the probability of $E \wedge \neg F$ is bounded above by the probability that $z = b_1 \cdots b_k$ in Experiment 2, which is negligible by the inversion claim about Experiment 2. □
_Remark 5._ If the family of unapproximable trapdoor predicates we start with has negligible decryption error, then the family of trapdoor functions we construct will in general also have negligible decryption error and may fail to be injective with some small probability.

By first reducing the decryption error of the predicate family to $\exp(-\Omega(k^3))$ as in the proof of Theorem 2 and then using the oracle to derandomize the inversion algorithm, one can produce an _injective_ family that has _zero_ decryption error with probability $1 - 2^{-k}$ (where the probability is taken just over the choice of the oracle).
Acknowledgments
The first author was supported by a 1996 Packard Foundation Fellowship in
Science and Engineering, and by NSF CAREER Award CCR-9624439. The third
and fourth authors were supported by DOD/NDSEG Graduate Fellowships and
partially by DARPA grant DABT-96-C-0018.
The starting point of this research was a question posed to us by Shafi Goldwasser, namely whether trapdoor permutations could be built from the assumptions underlying the Ajtai-Dwork cryptosystem.
Thanks to Oded Goldreich and the members of the Crypto 98 program com-
mittee for their comments on the paper.
**References**

[AjDw] M. Ajtai and C. Dwork. A public-key cryptosystem with worst-case/average-case equivalence. _Proceedings of the 29th Annual Symposium on the Theory of Computing_, ACM, 1997.
[AMM] L. Adleman, K. Manders and G. Miller. On taking roots in finite fields. _Proceedings of the 18th Symposium on Foundations of Computer Science_, IEEE, 1977.
[BHSV] M. Bellare, S. Halevi, A. Sahai, and S. Vadhan. Many-to-one trapdoor functions and their relation to public-key cryptosystems. Full version of this paper.
[BeRo] M. Bellare and P. Rogaway. Random oracles are practical: a paradigm for designing efficient protocols. _Proceedings of the First Annual Conference on Computer and Communications Security_, ACM, 1993.
[Be] E. Berlekamp. Factoring polynomials over large finite fields. _Mathematics of Computation_, Vol. 24, 1970, pp. 713-735.
[BlMi] M. Blum and S. Micali. How to generate cryptographically strong sequences of pseudo-random bits. _SIAM Journal on Computing_, Vol. 13, No. 4, pp. 850-864, November 1984.
[Ca] R. Canetti. Towards realizing random oracles: Hash functions that hide all partial information. _Advances in Cryptology - Crypto 97 Proceedings_, Lecture Notes in Computer Science Vol. 1294, B. Kaliski ed., Springer-Verlag, 1997.
[CGH] R. Canetti, O. Goldreich and S. Halevi. The random oracle model, revisited. _Proceedings of the 30th Annual Symposium on the Theory of Computing_, ACM, 1998.
[DiHe] W. Diffie and M. Hellman. New directions in cryptography. _IEEE Trans. Info. Theory_, Vol. IT-22, No. 6, November 1976, pp. 644-654.
[DDN] D. Dolev, C. Dwork, and M. Naor. Non-malleable cryptography. _Proceedings of the 23rd Annual Symposium on the Theory of Computing_, ACM, 1991.
[ElG] T. El Gamal. A public key cryptosystem and a signature scheme based on discrete logarithms. _IEEE Trans. Inform. Theory_, Vol. 31, 1985, pp. 469-472.
[GoLe] O. Goldreich and L. Levin. A hard predicate for all one-way functions. _Proceedings of the 21st Annual Symposium on the Theory of Computing_, ACM, 1989.
[GoMi] S. Goldwasser and S. Micali. Probabilistic encryption. _Journal of Computer and System Sciences_, Vol. 28, April 1984, pp. 270-299.
[GNW] O. Goldreich, N. Nisan, and A. Wigderson. On Yao's XOR Lemma. _Electronic Colloquium on Computational Complexity_, TR95-050, March 1995. http://www.eccc.uni-trier.de/eccc/
[HILL] J. Håstad, R. Impagliazzo, L. Levin and M. Luby. Construction of a pseudo-random generator from any one-way function. Manuscript. Earlier versions in STOC 89 and STOC 90.
[ImLu] R. Impagliazzo and M. Luby. One-way functions are essential for complexity-based cryptography. _Proceedings of the 30th Symposium on Foundations of Computer Science_, IEEE, 1989.
[ImRu] R. Impagliazzo and S. Rudich. Limits on the provable consequences of one-way permutations. _Proceedings of the 21st Annual Symposium on the Theory of Computing_, ACM, 1989.
[NaYu] M. Naor and M. Yung. Public-key cryptosystems provably secure against chosen ciphertext attacks. _Proceedings of the 22nd Annual Symposium on the Theory of Computing_, ACM, 1990.
[Rab] M. Rabin. Digitalized signatures and public key functions as intractable as factoring. MIT/LCS/TR-212, 1979.
[Ya] A. Yao. Theory and applications of trapdoor functions. _Proceedings of the 23rd Symposium on Foundations of Computer Science_, IEEE, 1982.
# sensors
_Article_
## Adaptive Indoor Area Localization for Perpetual Crowdsourced Data Collection
**Marius Laska** **[1,]*** **, Jörg Blankenbach** **[1]** **and Ralf Klamma** **[2]**
1 Geodetic Institute and Chair for Computing in Civil Engineering & Geo Information Systems,
RWTH Aachen University, Mies-van-der-Rohe-Str. 1, 52074 Aachen, Germany;
blankenbach@gia.rwth-aachen.de
2 Advanced Community Information Systems Group (ACIS), RWTH Aachen University,
Lehrstuhl Informatik 5, Ahornstr. 55, 52074 Aachen, Germany; klamma@dbis.rwth-aachen.de
***** Correspondence: marius.laska@gia.rwth-aachen.de
Received: 15 January 2020; Accepted: 3 March 2020; Published: 6 March 2020
**Abstract:** The accuracy of fingerprinting-based indoor localization correlates with the quality
and up-to-dateness of collected training data. Perpetual crowdsourced data collection reduces
manual labeling effort and provides a fresh data base. However, the decentralized collection
comes with the cost of heterogeneous data that causes performance degradation. In settings with
imperfect data, area localization can provide higher positioning guarantees than exact position
estimation. Existing area localization solutions employ a static segmentation into areas that is
independent of the available training data. This approach is not applicable for crowdsourced data
collection, which features an unbalanced spatial training data distribution that evolves over time.
A segmentation is required that utilizes the existing training data distribution and adapts once
new data is accumulated. We propose an algorithm for data-aware floor plan segmentation and a
selection metric that balances expressiveness (information gain) and performance (correctly classified
examples) of area classifiers. We utilize supervised machine learning, in particular, deep learning,
to train the area classifiers. We demonstrate how to regularly provide an area localization model that
adapts its prediction space to the accumulating training data. The resulting models are shown to
provide higher reliability compared to models that pinpoint the exact position.
**Keywords: indoor localization; area localization; crowdsourcing; fingerprinting; deep learning**
**1. Introduction**
In recent years, the usage of location-based services (LBS) has experienced substantial growth.
This is mostly caused by the wide adoption of smartphones with the ability to reliably track a
user’s location. Global Navigation Satellite Systems (GNSS), such as the Global Positioning System
(GPS), are the dominant technology to enable LBS, since they offer accurate and reliable localization
performance. However, GNSS do not provide sufficient availability and reliability inside buildings,
since the satellite signals are attenuated and scattered by building features. This drawback has led
to the development of various alternative indoor localization systems [1], which utilize a spectrum
of techniques and technologies. Until today, there is not any gold standard for indoor localization,
which can be stated as the main issue that has prevented indoor LBS from developing their full
potential [2].
Indoor localization systems can serve different purposes. In monitor-based systems, the location
of a user or entity is passively obtained relative to some anchor node [1]. This can be utilized,
for example, to enhance the energy efficiency of buildings by automatically switching-off lighting
and heating/cooling in empty rooms [3,4]. In contrast, in device-based systems, the location
information is obtained from a user-centric perspective [1], which can be utilized, for example, to enable
navigation [5,6]. A variety of technologies and approaches are present in the field of indoor localization.
Comprehensive overviews are given in [1,7–10]. In general, indoor localization systems can be grouped
into (1) autonomous, (2) infrastructure-based and (3) hybrid systems. Autonomous systems apply
inertial navigation [7]. In infrastructure-based systems, it can be differentiated between (2.1) analysis
of signal propagation to dedicated transmitting stations and (2.2) scene analysis (fingerprinting) [11].
The former utilizes proximity, lateration or angulation measurements to estimate the user’s location.
This requires line-of-sight and knowledge about the location of the stations. In contrast, fingerprinting
does not rely on either. Instead, in an offline phase, the scene is scanned at certain reference points
with a sensing device (e.g., smartphone). The observed sensor values at each reference point form
so-called fingerprints. Using supervised machine learning (ML), a mapping from fingerprints to
locations is learned, which is utilized to estimate the location for unseen fingerprints during online
localization. Fingerprinting leverages existing infrastructure, which reduces upfront deployment cost.
However, the accuracy of the system strongly depends on the quality of the offline site survey and the
up-to-dateness of the fingerprint database.
A crowdsourced site survey has been proposed to partition the collection among several
participants and thus reduces the manual labeling effort [12–14]. Users either explicitly tag a fingerprint
with a location, or the label is implicitly inferred by the system. The decentralized collection comes
with the cost of heterogeneous data, which include among others device heterogeneity, labeling noise
and an unequal spatial training data distribution [15].
Area localization can be applied in settings with imperfect data to achieve reliable positioning
guarantees [16]. The problem is simplified such that the goal becomes to predict the right area instead
of pinpointing the exact location. Existing area localization solutions employ a static segmentation into
areas that is independent of the available training data [17–20]. This approach is not applicable for
crowdsourced data collection, since it features an unbalanced spatial training data distribution that
changes over time. A segmentation is required that utilizes the existing training data distribution and
adapts when new data is accumulated. The amount and shape of the areas, in particular, the richness
of training data per area, affect the accuracy of classification models, which we subsequently call
model performance. In addition, the expressive power is determined by the segmentation. If a model
predicts one of few but large classes, the information gain of the user is lower compared to models that
predict one of many smaller areas. We call the expressive power of the model that is determined by
the segmentation expressiveness. Since crowdsourced data is expected to be generated continuously,
the segmentation into areas as well as the successive classification model can be continuously improved.
The challenge is, therefore, to continuously find a model with the right balance between expressiveness
and performance given the most recent crowdsourced map coverage.
The main contributions of this paper are summarized as follows:
- We introduce the concept of adaptive area localization to enable area classification for crowdsourced data that are continuously generated.
- We propose the idea of data-aware floor plan segmentation to compute segmentations that benefit subsequent classification. We present a clustering-based algorithm that determines such a segmentation with adjustable granularity.
- We formulate a metric to compare various area classifiers, such that the model providing the optimal balance between expressiveness and performance can be selected. This allows for automatic model building and selection in the setting of continuous crowdsourced data collection.
- We provide a comprehensive experimental study to validate the concepts on a self-generated and a publicly available crowdsourced data set.
The rest of the paper is organized as follows: we introduce related work in Section 2 focusing on
crowdsourced data collection, area classification and deep learning. Subsequently, Section 3 introduces
the proposed concepts of adaptive area classification in detail. In Section 4 we present the locally
_dense cluster expansion (LDCE) algorithm for computing floor plan segmentations with adjustable_
granularities that are based on the available training data. Section 5 covers details regarding machine
learning model building for area classification. In Section 6 the proposed concepts are evaluated on a
self-generated as well as a publicly available crowdsourced data set. Finally, we discuss our findings
in Section 7 and draw conclusion in Section 8.
**2. Related Work**
Fingerprinting-based indoor localization commonly utilizes a two stage approach. In the offline
phase, radio frequency (RF) signals are collected at certain reference points and tagged with the
position of collection. An algorithm is used to find a mapping from unknown fingerprints to locations.
This algorithm is then applied during the online phase to localize an RF device [21]. The RF technology
of choice for fingerprinting is commonly WLAN, however, solutions have been proposed that utilize
alternative RF technologies such as LTE [22]. The most common approach for constructing a WLAN
fingerprint is the received signal strength (RSS), which can be used either directly [23,24] or after feature
extraction [25–28]. Recent studies on fingerprinting also incorporate channel state information (CSI) as
input data in order to obtain more accurate prediction results [29–31]. The underlying assumption
states that RSS values do not exploit the subcarriers in an orthogonal frequency-division multiplexing
(OFDM). Therefore, CSI contains richer multipath information [32], which is beneficial for training
complex models. However, obtaining CSI data is only achievable with certain Wi-Fi network interface
cards (NIC) and thus is currently not suitable for smartphone based data collection like crowdsourcing.
In this work, we focus on classical WLAN fingerprinting and utilize the RSS of scanned access points
to construct the radio frequency map.
_2.1. Crowdsourcing_
Several approaches have been proposed to reduce the manual labeling effort during crowdsourced
data collection for Wi-Fi fingerprinting [12–14]. Rai et al. [12] were among the first to present a
probabilistic model to infer the position of implicitly collected fingerprints. They periodically collected
the RSS together with the timestamps of collection. Simultaneously, the system tracks the user utilizing
a particle filter. After convergence, the path information is used to annotate the RSS measurements with
a location. Radu and Marina [13] additionally integrated activity recognition and Wi-Fi fingerprinting
via a particle filter to detect certain anchor points, such as elevator or stairs. He and Chan [33]
utilized proximity information to Internet-of-things (IoT) sensing devices and the initially sparse RSS
radio map to label fingerprints during implicit crowdsourcing. The IoT devices can be fixed, such as
installed beacon transmitters or moving (smartphones of other participants). Santos et al. [14] utilized
pedestrian dead reckoning (PDR) techniques to reconstruct the movements of users and classified
the resulting trajectories using Wi-Fi measurements. Similar segments have been identified using
an adaptive approach based on geomagnetic field distance. Finally, floor plans were reconstructed
through a data fusion process and the collected Wi-Fi fingerprints were aligned to physical locations.
Zhou et al. [34] abstracted the indoor maps as semantics graph. Crowdsourcing trajectories were
mapped to the floor plan by applying activity detection and PDR. The annotated trajectories have
been utilized to construct the radio map. Based on unfixed data collection, Jiang et al. [35] proposed
the construction of a probabilistic radio map, where each cell was assigned a probability density
function (PDF) instead of a mean value as in classical site survey approaches. Wei et al. [18] utilized
the knowledge of location during the payment process inside the shops of a mall. They utilized this
to annotate collected fingerprints with the current shop to build a hierarchical classification model
that provides shop-level localization. In contrast to probabilistic fingerprint annotation, unsupervised
learning can be utilized to obtain labeled Wi-Fi fingerprints [36,37]. Jung and Han [37] utilized
unsupervised learning to infer the location of access points together with a path loss model and
optimization algorithm, which they presented in [36]. They investigated how to adaptively recalibrate
the resulting map to avoid performance degradation of downstream localization models.
Besides the reduction of labeling effort when collecting data via crowdsourcing, there are several
additional challenges that have to be considered. Ye and Wang [15] identified four major problems, which are:
which are:
- Inaccurate position tags for crowdsourced fingerprints that might occur during manual labeling of non-experts or are caused by automatic labeling via probabilistic models.
- The fluctuating dimensionality of RSS signals caused by varying numbers of hearable access points for various locations.
- The device heterogeneity that causes RSS to differ across various devices for the same measurement position.
- The nonuniform spatial data distribution, meaning that some areas feature a larger amount of data, while for others no data was collected.
They constructed device-specific grid fingerprints utilizing clustering-based algorithms.
For sparse areas, fingerprints are interpolated and, finally, the samples from several devices are fused
to obtain device independent grid fingerprints. Yang et al. [25] additionally identified the short
measurement time of crowdsourced sample collection as a typical problem. They utilized the fact
that the most-recorded RSS does not differ much, irrespective of the length of measuring, to extract a
characteristic fingerprint. In a follow up work, Kim et al. [26] evaluated the system in a case study
and demonstrated its effectiveness. Pipelidis et al. [38] proposed an architecture for cross-device
radio map construction via crowdsourcing. They utilized data labeled via a simultaneous localization
and mapping (SLAM)-like algorithm. The RSS values between devices were calibrated via reference
measurements at several landmarks. The data was clustered and subsequently used for classification
of areas.
_2.2. Area Localization_
In contrast to localization systems that aim at pinpointing the exact position of a user, the concept
of area classification only focuses on estimating the current area of the user, such as the office room or
the shop inside a mall. This is particularly suitable for large scale deployments or in situations where
the data quality does not allow for accurate localization.
Lopez Pastor et al. [17] evaluated a Wi-Fi fingerprinting-based indoor localization system inside a
medium sized shopping mall. The system is meant for providing shop-level accuracy, while minimizing
the deployment cost and effort. Data is collected by randomly walking in predefined areas, such
that all data can be labeled with the corresponding shop. The authors claim that the achieved
system performance is sufficiently independent of the device and does not deteriorate over time.
Wei et al. [18] adopted a similar approach. They utilized the fact that during payment inside a shop,
the location of the user is known. This can be used to annotate Wi-Fi fingerprints collected while paying.
The obtained fingerprints can be utilized for shop-level position estimation. Rezgui et al. [19] proposed
a variation of a support vector machine (SVM) (normalized rank based SVM) to address the problem
of hardware variance and signal fluctuation of Wi-Fi based localization systems. The system achieves
room level prediction accuracies. He et al. [16] compared the performance of various classification
models, such as SVM, artificial neural network (ANN) and deep belief network (DBN) for various
test sites. They addressed the identification of floors, indoor/outdoor and buildings. In a recent
follow up work [39], they also tackled the inside/outside region decision problem and propose
solutions for missing AP detection and fingerprint preprocessing. Liu et al. [20] proposed an algorithm
for probability estimation over possible areas. By adopting the user’s trajectory and existing map
information, they eliminate unreasonable results. The partitioning of the map into areas is done
manually based on the different rooms and offices.
_2.3. Deep Learning for Fingerprinting_
Fingerprinting-based indoor localization can be formulated as standard supervised learning
problem. It can be modeled as regression problem with the goal to predict the exact position, or as a
classification task on predetermined areas. Due to the recent success of deep learning in areas such
as image processing or speech recognition, the application of deep models for fingerprinting-based
indoor localization has gained attention recently. Nowicki and Wietrzykowski [40] applied stacked
autoencoders combined with a feed forward neural network for building and floor prediction.
Xiao et al. [23] compared SVM and a deep neural network (DNN) on various publicly available
data sets and propose a data augmentation schema as well as an approach for transfer learning.
Adege et al. [28] applied regression analysis to fill missing RSS values and utilize linear discriminant
analysis for dimensionality reduction. Finally, feed forward neural networks are applied to tackle
the regression and classification problem. Kim et al. [41] formulated the problem as multi-label
classification problem to predict the building, floor and position with a single network with minimal
performance degradation. Mai et al. [42] utilized a convolutional neural network (CNN) on raw RSS
data by applying the convolution on time-series data. The data is artificially constructed by combining
measurements within a certain cell size that have been captured in temporal intervals not exceeding
a certain threshold. By constructing an image of the RSS vector, CNNs that are predominantly used
for image classification can be applied. Mittal et al. [27] filtered access point signals that have a
low Pearson Correlation Coefficient (PCC) between the access point values and the location vector.
The remaining RSS vector is transformed into an image matrix by multiplying each access point vector
with the obtained correlation values and arranging as matrix with zero padding. Sinha et al. [24]
simply arranged the RSS vector as a matrix to train a standard CNN image classifier. They proposed
a data augmentation scheme where single values of the RSS vector are replaced by random values
sampled from the interval of the difference of the actual value and the access point mean value.
**3. Adaptive Area Classification for Crowdsourced Data**
In the following section, we introduce our approach to adaptive area classification. We describe
the concept overview and introduce relevant notations. Subsequently, a floor plan segmentation is
formally defined and classification models for indoor localization are described. Finally, we propose a
novel metric called ACS, which is utilized to select area classifiers with respect to the optimal balance
between expressiveness and performance.
_3.1. Concept Overview_
The performance of Wi-Fi fingerprinting-based indoor localization systems heavily relies on
thorough and up-to-date site survey data. Crowdsourced training data collection continuously
provides fresh data, but suffers from poor data quality. Several approaches suggest to maintain
an up-to-date radio map, which stores a representative fingerprint or a probabilistic distribution for
predefined locations, regions, or grid cells [33,35,43]. Missing data for certain locations prohibits
equal radio map quality at all areas. This is solved by either enlarging the areas of the radio map
or by interpolating fingerprints for sparsely covered areas [15]. The update of such a radio map is
a complicated process, since its granularity is static. However, the spatial distribution of available
training data is expected to shift over time. Therefore, instead of maintaining a radio map with
characteristic fingerprints for predefined areas, we store the entire training data with the noisy position
tags. At regular intervals, we dynamically subdivide the floor plan into areas based on the richness
of available training data. The training data, which are originally annotated with noisy position tags,
are labeled with the corresponding areas based on the computed floor plan segmentation. This enables
training of standard supervised machine learning classifiers that predict the correct area. In order to
quantify the gain of such an area classifier, two metrics can be utilized.
- The expressiveness measures the information gain of the user, which is mainly influenced by the extent of each individual area and the total coverage of the model.
- The performance indicates how reliably the model predicts a certain area.
The two metrics are inversely proportional. That means a fine segmentation (high expressiveness)
negatively affects the performance of the model and vice versa. We assume that fresh crowdsourced
training data is accumulated over time. This enables updates of the floor plan segmentation and the
successive area classifier. The workflow for continuously providing area localization models, where the
prediction space adapts to the new training data, is illustrated in Figure 1. Over time, the map gets
covered with an increasing amount of training data, which is illustrated in the top row of Figure 1.
At regular intervals, the goal is to provide an optimal indoor area classification model based on the
current map coverage. This process includes the automatic floor plan segmentation into areas and
the training of an ML model. Several floor plan segmentations can be determined that influence the
expressiveness of the ML model and for each of these segmentations several ML models can be learned.
For each epoch, the best combination of segmentation and model is selected. This is done with respect
to a metric, called area classification score (ACS). The ACS balances expressiveness and performance
and is introduced in Section 3.5.
[Figure 1, panel labels: training data; # training samples; floor plan segmentation; expressiveness; model training & model selection; ACS = 0.4; ACS = 0.5.]
**Figure 1. Concept of adaptive area classification for crowdsourced map coverage.**
_3.2. Data Notations_
In the following, we introduce the formal notations that are subsequently used. We assume that at a certain point in time, a set of N labeled training data tuples (fingerprints) $FP = \{fp_n = (\mathbf{x}_n, \mathbf{p}_n, t_n)\}$ for $n = 1, \ldots, N$ has been collected for a given indoor map. Each fingerprint $fp$ consists of an M-dimensional feature vector $\mathbf{x} = (x_1, \ldots, x_M)^T$ and is tagged with a position $\mathbf{p}_n = (p_x, p_y)^T$ in two dimensions and the corresponding timestamp $t_n$ of collection. In the following we focus on Wi-Fi fingerprinting, such that each entry of the vector is the RSS value of the corresponding access point and M is equal to the total amount of access points that are observable for the map. Since not all access points are hearable at all locations, $\mathbf{x}$ contains missing entries, which have to be considered during further processing of the data.
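As a minimal illustration of this notation, a fingerprint tuple might be represented as follows; the `Fingerprint` container and its field names are our own illustrative choices rather than anything prescribed by the paper.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Fingerprint:
    """One crowdsourced training tuple fp_n = (x_n, p_n, t_n).

    rss[i] holds the RSS of access point i, or None where the AP was
    not observed in this scan (missing entries are dealt with later,
    during feature preprocessing)."""
    rss: List[Optional[float]]      # M-dimensional feature vector x_n
    position: Tuple[float, float]   # noisy position tag p_n = (p_x, p_y)
    timestamp: float                # collection time t_n

fp = Fingerprint(rss=[-58.0, None, -83.5], position=(12.4, 3.1), timestamp=1583500000.0)
```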
_3.3. Floor Plan Segmentation for Area Classification_
In order to train a classification model, we have to find a floor plan segmentation that assigns each fingerprint tuple $(\mathbf{x}, \mathbf{p}, t)$ to one of the K areas or classes, $C_k$ for $k = 1, \ldots, K$. A floor plan segmentation determines a mapping $SEG: C_k \to A_k$, where $A_k$ might be any two-dimensional shape, such as a rectangle. Given such a mapping $SEG$, we can label each fingerprint with the class label of the area it is located in. For a given segmentation $SEG$, we obtain the transformed set $FP_{SEG} = \{(\mathbf{x}_n, c_n)\}$, where $c_n \in \{1, \ldots, K\}$ and $c_n = k \Leftrightarrow \mathbf{p}_n$ lies within $A_k$. The goal is now to find a classifier $C: \mathbf{x} \to c_k$
that determines the correct area of the floor plan segmentation for an unknown RSS fingerprint.
We have now arrived at the standard formulation of a supervised learning problem, in particular,
a classification problem.
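A minimal sketch of this transformation, assuming rectangular areas $A_k$ and the hypothetical `Fingerprint` container from the previous sketch, could look as follows; dropping fingerprints whose position tag falls outside every area is one possible convention, not mandated by the paper.

```python
from typing import Dict, Tuple

# One possible encoding of SEG: class label k -> axis-aligned rectangle
# (x_min, y_min, x_max, y_max). The paper also allows convex/concave hulls.
Segmentation = Dict[int, Tuple[float, float, float, float]]

def label_fingerprints(fps, seg: Segmentation):
    """Transforms FP into FP_SEG: each fingerprint is labeled with the
    class whose area contains its position tag."""
    labeled = []
    for fp in fps:
        px, py = fp.position
        for k, (x0, y0, x1, y1) in seg.items():
            if x0 <= px <= x1 and y0 <= py <= y1:
                labeled.append((fp.rss, k))  # (feature vector x_n, class c_n)
                break
    return labeled
```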
_3.4. ML Models for Area Classification_
Given the transformed set of fingerprints $FP_{SEG} = \{(\mathbf{x}_n, c_n)\}$ for a segmentation $SEG$, we can utilize any standard ML classification model that learns to predict the unknown class $c_k$ for a fingerprint $\mathbf{x}$. We can either construct a discriminant function that directly assigns a class to an unknown fingerprint, or we model the conditional probability distribution $p(c_k|\mathbf{x})$ [44]. SVMs depict a typical discriminant model used in the domain of indoor localization, while with DNNs, it is possible to model $p(c_k|\mathbf{x})$. Both models are utilized in the experimental study (Section 6) as classifiers for the transformed fingerprint sets $FP_{SEG}$.
_3.5. Area Classification Score_
In order to properly quantify the quality of the learned area localization model (combination of
segmentation and trained classifier), we have to simultaneously investigate the model’s expressiveness
as well as its performance. The expressiveness is influenced by the total extent of covered area as well
as the size of each individual area. We state that the expressiveness of a model is higher if it predicts
classes associated with smaller areas. However, the benefit of a narrow prediction area vanishes if the
performance for that specific class, for example the accuracy, is poor. To capture this interplay, we
have to look at each predicted class of the classifier individually. We define $area_k$ as the surface area of the area $A_k$ that belongs to class $C_k$. On an individual class level, we define the expressiveness of class $C_k$ as:

$$exp_\lambda(C_k) = \frac{area_{min}}{area_k^\lambda}, \quad (1)$$

where $area_{min}$ is the minimal extent that an area might have by definition (set to 1 m² in the following) and λ is a parameter to adjust the slope of the function. Additionally, a performance metric is required, which measures the accuracy of the model on a class level. We choose the F1 score, since we are equally interested in precision and recall. Let $F_1(C_k)$ be the class-based F1 score for class $C_k$, evaluated on a separate test set. The chosen metrics for expressiveness and performance reside in the interval [0, 1], such that we can multiply them to obtain a value in [0, 1], which would be optimal if the predicted class has the minimal extent of 1 m² and an F1-score of 1 on the test set. In order to account for the total covered area, we take the weighted mean of the product of expressiveness and performance using the area of each class. We finally arrive at:

$$ACS = \frac{1}{area_{tot}} \sum_{k=1}^{K} F_1(C_k)^\mu \cdot exp_\lambda(C_k) \cdot area_k, \quad (2)$$
which we call area classification score (ACS) in the following. The expressiveness term (1) regulates
how much the class score adds to the weighted mean. For λ = 0, the regularization term vanishes,
such that the area size of the specific class has no influence on the amount that is added to the mean.
This means that two localization models with constant class-wise classification performance $F_1$ achieve the same score if they cover the same area $area_{cov}$, independent of the amount of classes and their individual size:

$$ACS_{\lambda=0} = \frac{area_{cov}}{area_{tot}} \cdot F_1. \quad (3)$$

It follows that if λ approaches 0, the metric becomes less sensitive to the individual area sizes. With respect to models covering a similar extent of the map, those that provide a higher performance will be rated higher, independent of the number and individual size of their areas. The closer λ gets to 1, the higher is the influence of individual area sizes. High performance on broad areas will not add much to the weighted mean, since they are downscaled by the expressiveness factor. As a consequence, models with finer segmentations score higher, since the influence of area regularization outweighs the performance factor. For λ = 1, the score is only sensitive to the amount of total segments. The ACS becomes

$$ACS_{\lambda=1} = \frac{1}{area_{tot}} \sum_{k=1}^{K} F_1(C_k)^\mu, \quad (4)$$

which will be higher for finer segmentations given that the same total extent of the map is covered. The parameter µ can be utilized for fine tuning. By setting it larger than 1, models with overall low performance are penalized. We found that λ has a greater impact on the model selection and suffices for our use-cases. Therefore, µ is set to 1 during subsequent application of the ACS.
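A direct translation of Equations (1) and (2) into Python might look as follows; the per-class F1 scores and area sizes in the usage example are made-up values purely for illustration.

```python
def area_classification_score(f1, areas, area_tot, lam=0.5, mu=1.0, area_min=1.0):
    """ACS = (1/area_tot) * sum_k F1(C_k)^mu * (area_min / area_k^lam) * area_k.

    f1 and areas are per-class lists (F1 on a held-out test set, surface
    area in m^2); area_tot is the total map area."""
    score = 0.0
    for f1_k, area_k in zip(f1, areas):
        expressiveness = area_min / (area_k ** lam)      # Eq. (1)
        score += (f1_k ** mu) * expressiveness * area_k  # summand of Eq. (2)
    return score / area_tot

# Example: select the best model from a pool for a fixed lambda.
models = {
    "broad": ([0.93, 0.95], [60.0, 40.0]),
    "fine":  ([0.80, 0.76, 0.72, 0.85], [25.0, 25.0, 25.0, 25.0]),
}
best = max(models, key=lambda m: area_classification_score(*models[m], area_tot=120.0, lam=0.5))
```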
Figure 2 emphasizes how the parameter choice of λ affects the ACS for three artificial segmentations (a–c). The rectangular boxes represent the prediction areas of the classifier and the numbers show the class-wise F1 scores on a separate test set. We stated that λ influences the expressiveness. In particular, the closer the value gets to 1, the more each individual class score is downscaled by the size of its area. As a consequence, a low λ value targets highly performant models with lower expressiveness, and a high λ value selects models with high expressiveness and lower performance. Given the three segmentations (a–c), we plot the ACS for all possible choices of λ in Figure 2d to investigate which model achieves the highest score (illustrated by the color below the curve). As expected, the broad segmentation (a) is selected for low lambda values (0–0.13), the medium segmentation (b) is chosen for values (0.13–0.37) and the fine segmentation (c) is chosen for higher values (0.37–1). In practice, a pool of models is trained such as (a–c). The λ parameter is fixed, such that the best scoring model is determined. If the model does not adhere to the required use case requirements, λ can be adjusted accordingly to obtain a different model.
[Figure 2, panels: (a) broad segmentation; (b) medium segmentation; (c) fine segmentation; (d) ACS for different λ values.]
**Figure 2. Illustration of the impact of λ on the ACS for pool of example models.**
**4. Floor Plan Segmentation Algorithms**
In order to train an area classification model, we have to determine a mapping from areas
to classes that we defined as floor plan segmentation. If we neglect the underlying training data
distribution, we end up with segmentations where certain classes feature few to zero fingerprint
samples. This results in unsatisfying classification performance. The goal should be to leverage
the knowledge about available training data to compute a segmentation that benefits subsequent
classification but still provides the best possible expressiveness. We call such a segmentation data-aware
_floor plan segmentation and present an algorithm for this purpose in the following._
_4.1. Locally Dense Cluster Expansion (LDCE)_
In the following, we introduce the LDCE algorithm that computes a floor plan segmentation, in particular, a mapping $SEG: C_k \to A_k$ that assigns each class $C_k$ a shape $A_k$. Given $SEG$, we can label fingerprints $(\mathbf{x}, \mathbf{p}, t)$ with the class that belongs to the area $A_k$ in which $\mathbf{p}$ is located. Let $FP = \{fp_n = (\mathbf{x}_n, \mathbf{p}_n)\}$ for $n = 1, \ldots, N$ be a set of training data; we cluster the observations and determine the shapes $A_k$ based on the position labels of the resulting cluster members.
Initially, we detect a set of locally dense base clusters. This serves two purposes: (1) observations
that are densely connected to a certain degree should not be separated and (2) fingerprints that are not
part of any initially dense cluster should be considered as noise. Both conditions are fulfilled when
applying a standard density based clustering algorithm such as the density-based spatial clustering of
applications with noise (DBSCAN) algorithm.
The resulting base clusters are subsequently expanded. Each round the closest clusters are
determined and merged. Resulting clusters that contain the required amount of stop_size members
are deleted from the expansion set and added to the set of final clusters. This process is continued
until either no clusters are present in the expansion set, or the smallest minimal distance exceeds the
maximal allowed merging distance max_eps. Remaining clusters with fewer than stop_size members
are postprocessed. By setting the minMembers parameters lower than stop_size, those clusters having
at least minMember members are added to the set of final clusters. All other remaining clusters are
added to the closest final cluster.
This routine yields clusters with definable bounds for the amount of members. Since clusters with
more than stop_size members are excluded from the merging phase, any merged cluster might have
at most 2 · stop_size members. However, besides the amount of available training data per segment,
we require a reasonable segmentation that adheres to the physical floor plan structure. In particular,
segmentations should minimize spreads across multiple walls if possible. Furthermore, since the
feature vector of subsequent classification consists of the RSS vector, the similarity in RSS signal space
should be considered during the segmentation phase. The approach we propose achieves this by
constructing a particular distance function between fingerprints and clusters of fingerprints that is
used in the previously described algorithm. Given two fingerprints $fp_u = (\mathbf{x}_u, \mathbf{p}_u)$ and $fp_v = (\mathbf{x}_v, \mathbf{p}_v)$, we define their distance as:

$$dist(fp_u, fp_v) = \|\mathbf{p}_u - \mathbf{p}_v\|_2 + \theta \cdot |W_{\mathbf{p}_u,\mathbf{p}_v}| + \zeta \cdot \|\mathbf{x}_u - \mathbf{x}_v\|_2, \quad (5)$$

where $W_{\mathbf{p}_u,\mathbf{p}_v}$ is the set of walls between $\mathbf{p}_u$ and $\mathbf{p}_v$. Note that the main distance factor is the Euclidean distance between the position labels, while the difference between RSS vectors and the number of conflicting walls are used to penalize this base distance. The distance between clusters is based on centroid distance. We add another penalty term to account for final clusters that might lie between merging clusters. Let $C_i$ and $C_j$ be two clusters, $\mathbf{p}_i, \mathbf{p}_j$ the average position labels and $\mathbf{x}_i, \mathbf{x}_j$ the average RSS vectors; the distance is then given by:

$$dist(C_i, C_j) = \|\mathbf{p}_i - \mathbf{p}_j\|_2 + \theta \cdot |W_{\mathbf{p}_i,\mathbf{p}_j}| + \zeta \cdot \|\mathbf{x}_i - \mathbf{x}_j\|_2 + \eta \cdot |C_{i,j}|, \quad (6)$$
where $C_{i,j}$ is the subset of final clusters such that $C_f \in C_{i,j} \Leftrightarrow \exists fp_f \in C_f$ with $\mathbf{p}_f$ within $bounds(\mathbf{p}_i, \mathbf{p}_j)$.
In order to prevent merging of far distant clusters with respect to the penalized distance function, we set a threshold max_eps on the maximal allowed merging distance of two clusters. Note that the choice of max_eps determines the maximal amount of allowed walls between two merging clusters. If we choose max_eps = θ · x + δ, it holds that for any δ < θ, there will be at most x − 1 separating walls between any two merging clusters.
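A possible implementation of the pairwise distance of Equation (5) is sketched below. It assumes missing RSS entries have already been imputed, and it realizes the wall count $|W_{\mathbf{p}_u,\mathbf{p}_v}|$ with a standard orientation-based segment-intersection test, which is our own choice since the paper does not prescribe one.

```python
import numpy as np

def count_crossed_walls(p, q, walls):
    """Number of wall segments (x_s, y_s, x_e, y_e) intersecting the
    line segment p-q (standard orientation/CCW intersection test)."""
    def ccw(a, b, c):
        return (c[1] - a[1]) * (b[0] - a[0]) > (b[1] - a[1]) * (c[0] - a[0])
    def intersects(a, b, c, d):
        return ccw(a, c, d) != ccw(b, c, d) and ccw(a, b, c) != ccw(a, b, d)
    return sum(intersects(p, q, (w[0], w[1]), (w[2], w[3])) for w in walls)

def penalized_distance(fp_u, fp_v, walls, theta=10.0, zeta=2.0):
    """Fingerprint distance of Eq. (5): Euclidean distance of the position
    tags plus wall and RSS-space penalties. Assumes fp.rss contains no
    missing entries (imputed beforehand)."""
    pos = np.linalg.norm(np.asarray(fp_u.position) - np.asarray(fp_v.position))
    wall_pen = theta * count_crossed_walls(fp_u.position, fp_v.position, walls)
    rss_pen = zeta * np.linalg.norm(np.asarray(fp_u.rss) - np.asarray(fp_v.rss))
    return pos + wall_pen + rss_pen
```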
After we have determined the clustering, we have to construct the two-dimensional shapes
that represent the floor plan segmentation. Those are obtained by using the position labels of the
respective cluster members. We can construct the shape by taking the bounding box around the
labels, or computing the convex or concave hull. Figure 3 shows stages of an example run of LDCE.
The clusters merge over time (a–c) until all clusters have at least stop_size members (d). In the example,
the final segments are obtained from the bounding boxes around the labels of the class members.
The pseudo code of the algorithm can be found in Algorithm 1.
**Figure 3. Illustration of LDCE segmentation. Clusters expand over time (a–c) until all clusters have**
reached a size greater than the stop_size threshold (d).
**Algorithm 1 LDCE floor plan segmentation**

1: **Inputs:**
   Fingerprints: $FP = \{fp_n = (\mathbf{x}_n, \mathbf{p}_n)\}$, $n = 1, \ldots, N$
   Walls: $W = \{(x_{s_w}, y_{s_w}, x_{e_w}, y_{e_w})\}$, $w = 1, \ldots, W$
   Main parameters: stop_size, max_eps
   Distance penalties: θ, η, ζ
   DBSCAN parameters: eps, minPts
   Postprocessing: minMembers
2: **Initialize:**
   $dist[fp_u, fp_v] \leftarrow \|\mathbf{p}_u - \mathbf{p}_v\|_2 + \theta \cdot |W_{\mathbf{p}_u,\mathbf{p}_v}| + \zeta \cdot \|\mathbf{x}_u - \mathbf{x}_v\|_2$ for $1 \le u, v \le N$
   $C_{final} \leftarrow \{\}$
   $C_{exp} \leftarrow DBSCAN(dist, eps, minPts)$
   ▷ Main routine
3: **while** $|C_{exp}| > 1$ and $min\_dist < max\_eps$ **do**
4:   $C\_dist[C_i, C_j] \leftarrow \|\mathbf{p}_i - \mathbf{p}_j\|_2 + \theta \cdot |W_{\mathbf{p}_i,\mathbf{p}_j}| + \zeta \cdot \|\mathbf{x}_i - \mathbf{x}_j\|_2 + \eta \cdot |C_{i,j}|$ for $1 \le i, j \le |C_{exp}|$
5:   $min\_dist \leftarrow min(C\_dist)$
6:   $C_m, C_n \leftarrow argmin(C\_dist)$
7:   $C_{merged} \leftarrow C_m \cup C_n$
8:   $C_{exp} \leftarrow C_{exp} \setminus \{C_m, C_n\}$
9:   **if** $|C_{merged}| > stop\_size$ **then**
10:    $C_{final} \leftarrow C_{final} \cup \{C_{merged}\}$
11:  **else**
12:    $C_{exp} \leftarrow C_{exp} \cup \{C_{merged}\}$
13:  **end if**
14: **end while**
   ▷ Postprocessing
15: **for all** C in $C_{exp}$ **do**
16:  **if** $|C| > minMembers$ **then**
17:    $C_{final} \leftarrow C_{final} \cup \{C\}$
18:    $C_{exp} \leftarrow C_{exp} \setminus \{C\}$
19:  **end if**
20: **end for**
21: **for all** C in $C_{exp}$ **do**
22:  Add C to closest $C_f \in C_{final}$ if closer than $2 \cdot max\_eps$
23: **end for**
   ▷ Determine final shapes
24: $P_k = \{\mathbf{p}_i \mid (\mathbf{p}_i, \mathbf{x}_i) \in C_{final_k}\}$ for $k = 1, \ldots, |C_{final}|$
25: $A_k = convex\_hull(P_k)$
26: **return** A
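The following condensed Python sketch mirrors the main routine of Algorithm 1 using scikit-learn's DBSCAN on a precomputed penalized distance matrix. For brevity it merges clusters by plain centroid distance (omitting the wall, RSS and in-between-cluster penalties of Equation (6)) and skips the minMembers post-processing, so it approximates rather than faithfully implements the algorithm.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def ldce_sketch(positions, dist_matrix, stop_size=80, max_eps=30.0,
                eps=2.0, min_pts=3):
    """positions: (N, 2) array of position tags; dist_matrix: (N, N)
    penalized pairwise fingerprint distances. Returns bounding boxes
    (one per final cluster) as the segmentation shapes A_k."""
    labels = DBSCAN(eps=eps, min_samples=min_pts,
                    metric="precomputed").fit_predict(dist_matrix)
    clusters = [np.flatnonzero(labels == c) for c in set(labels) if c != -1]
    final = []
    while len(clusters) > 1:
        cents = [positions[idx].mean(axis=0) for idx in clusters]
        d = np.full((len(cents), len(cents)), np.inf)
        for i in range(len(cents)):
            for j in range(i + 1, len(cents)):
                d[i, j] = np.linalg.norm(cents[i] - cents[j])
        i, j = np.unravel_index(np.argmin(d), d.shape)
        if d[i, j] > max_eps:          # no sufficiently close pair left
            break
        merged = np.concatenate([clusters[i], clusters[j]])
        clusters = [c for t, c in enumerate(clusters) if t not in (i, j)]
        (final if len(merged) > stop_size else clusters).append(merged)
    final.extend(clusters)             # minMembers filtering omitted here
    return [(positions[idx].min(axis=0), positions[idx].max(axis=0)) for idx in final]
```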
**5. Machine Learning Model Building**
The complete pipeline of ML model building comprises (1) preprocessing of the data, (2) model
training and (3) model selection and evaluation. Each step is explained in the following.
_5.1. Preprocessing_
5.1.1. Feature Preprocessing
The applied machine learning models require inputs of fixed dimensions. Each access point that
is observed during data collection represents one dimension of the input vector. Having observed
a total amount of M access points, we can construct a feature vector xn = (x1, ..., xM)[T], where xi for
_i = 1, ..., M and n = 1, ..., N represents the RSS value of the i-th access point of the n-th measurement._
Given a collected training sample, there is not a RSS value for each access point. This can have two
reasons: (1) the access point cannot be observed at the measuring position because it is out of range,
or (2) the access point is in general observable for the given location, however, its RSS value could
not be recorded in that specific sample. The second reason is caused by the response rate of an access
point, which is correlated with the average observable RSS value for a location [45]. For both causes
of unobservable access points, an artificial value has to be chosen as entry for the feature vector.
A common practice, which neglects the response rate of access points, is to simply set all missing values to a low RSS value, such as −110 dBm. This approach is adopted in our experiments.
For gradient-based learning algorithms such as DNNs and distance-based algorithms such as k-nearest neighbor (k-NN), it is crucial to normalize or standardize each feature column [46]. This speeds up the learning phase and prevents features with a larger range from outweighing other features. We distinguish between feature scaling/normalization and feature standardization (z-score normalization). Scaling linearly transforms the data into the interval [0, 1], while standardization transforms the data to have zero mean and a standard deviation of one. Standardization is especially useful if the range of the features is unknown or a feature contains many outliers. For choosing the right normalization technique, we have to investigate the influence of the given map coverage. Let AP_a and AP_b be two access points that are so far apart that there is no location where both can be observed simultaneously. Let area_a and area_b be the areas where signals of AP_a or AP_b, respectively, are received. A map coverage that contains many more samples from area_a has only few samples with a signal from AP_b. When standardizing the data of this map coverage, we encode a strong bias into the preprocessed data, since the feature column of AP_b is dominated by the vast amount of fill-value entries. Such a bias might be tolerable if the distribution of the training data matches the test data distribution. However, during online localization, users might request their position mostly within area_b, which would result in worse performance. In order to prevent this bias towards the given map coverage, we simply apply column-wise feature scaling. For each AP, it is likely that a sample exists which could not register any signal strength for that AP. Consequently, the minimum RSS value of each column equals the supplementary value for missing data.
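A minimal sketch of this column-wise scaling, using scikit-learn's MinMaxScaler on a small placeholder matrix:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Placeholder training matrix; -110 dB is the fill value for missing APs.
X_train = np.array([[-60.0, -110.0],
                    [-80.0,  -70.0],
                    [-110.0, -110.0]])

scaler = MinMaxScaler()                    # fit on the training map coverage only
X_train_scaled = scaler.fit_transform(X_train)
# Reuse the identical transform for online queries, so the fill value
# (the per-column minimum) always maps to 0:
X_query_scaled = scaler.transform(np.array([[-70.0, -90.0]]))
```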
5.1.2. Floor Plan Segmentation (Parameter Choice)
To obtain the class-labeled set FP_SEG, we partition the floor plan with the introduced LDCE algorithm. The choice of certain parameters of the LDCE algorithm depends on the given floor plan and the spatial distribution of the available training data. The parameters eps and minPts determine the starting clusters that result from the initial DBSCAN execution. They should be chosen empirically, such that the sizes of the starting clusters do not exceed the stop_size member threshold and not too many observations are considered noise. The values of max_eps and the wall penalty should also be chosen empirically, based on the given floor plan dimensions and the number of walls that should be allowed within segments. The penalty term η is set to 2, since higher values might yield overlapping clusters during the initial DBSCAN execution. ζ is set to the highest penalty value of 20 to avoid intersecting final clusters. After those parameters are fixed, we can vary the stop_size and minMembers parameters to obtain multiple segmentations with various granularities. An overview of the parameters can be found in Table 1. Those parameters that depend on the given test site are revisited in the corresponding Sections 6.2 and 6.3.
5.1.3. Label Preprocessing
For the training of regression models, the labels consist of the set of positions {p_n}, n = 1, ..., N, where each label is a two-dimensional vector representing the position tag. In the case of area classification, the labels {y_n} with y_n = (y_1, ..., y_K)^T, n = 1, ..., N, for the set FP_SEG are the one-hot encoded areas of the floor plan segmentation, where y_i = 1 ⇔ i = c_n and y_i = 0 at all other positions. K denotes the number of segments of the given floor plan segmentation FP_SEG.
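A short sketch of this one-hot encoding (area indices c_n are assumed to be zero-based here):

```python
import numpy as np

def one_hot(area_indices, K):
    """One-hot encode area indices c_n in {0, ..., K-1} (cf. Section 5.1.3)."""
    Y = np.zeros((len(area_indices), K))
    Y[np.arange(len(area_indices)), area_indices] = 1.0
    return Y

# one_hot([0, 2, 1], K=3) -> [[1,0,0], [0,0,1], [0,1,0]]
```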
**Table 1. Parameter choice of LDCE for the experiments. Columns are grouped as main parameters (stop_size, max_eps), postprocessing (minMembers), distance penalties (θ, ζ, η) and DBSCAN parameters (eps, minPts).**

| Data Set | stop_size | max_eps | minMembers | θ | ζ | η | eps | minPts |
|---|---|---|---|---|---|---|---|---|
| RWTH Aachen | {80, 50} | 30 | {40, 20} | 10 | 2 | 20 | 2 | 3 |
| Tampere, Finland | {100, 60} | 50 | {60, 40} | 5 | 2 | 20 | 5 | 3 |
_5.2. Model Training_
In the upcoming case study in Section 6, we focus on three types of supervised machine learning models that are suitable for predicting the area of unknown fingerprints. After hyperparameter tuning, we end up with a DNN model that has 3 hidden layers (HL) with 512 hidden units (HU) per layer and uses the rectified linear unit (ReLU) as activation function between layers. In order to learn the conditional probability distribution p(y|x), we apply a softmax activation function in the output layer together with the multiclass cross-entropy loss. This choice can be derived by following a maximum likelihood approach [47]. The Adam optimizer, a variant of stochastic gradient descent (SGD), is utilized for iterative learning of the weights. To prevent overfitting, we apply early stopping, which aborts the training phase if the performance on a separate validation data set does not increase for a specified number of epochs. Furthermore, weight regularization within the loss function and dropout are applied. The complete parameterization of the tuned DNN is given in Table 2. In addition, we train a CNN with hyperparameters similar to those suggested by [24], which consists of two convolutional layers of size (16 × 16), a Maxpool layer of size (8 × 8), a convolutional layer of size (8 × 8) and a Maxpool layer of size (8 × 8). In between layers, we add dropout layers with a dropping probability of 0.25 and use ReLU as activation function. Finally, a fully connected dense layer of size 128 is used with a softmax output activation function. We found that rearranging the RSS vector as a matrix with zero padding outperforms the preprocessing method proposed by [27], which uses the PCC to reduce the dimensionality and scales the data per access point. Furthermore, we fit an SVM with RBF kernel, which we use as a discriminative model to directly predict y.
Additionally, we select two regression models (k-NN and DNN(reg)). The DNN regression model has the same configuration as the DNN classifier but uses a linear output activation function and the mean squared error as loss function. The k-NN models apply the weighted version of the algorithm and are evaluated for three values of k, namely 2, 3 and 5. To validate whether explicitly training a classifier provides valuable results, we label the regression outputs with the closest area of the floor plan segmentation during postprocessing (see the sketch below) and compare them to the output of the area classifiers.
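A minimal sketch of this reg->class postprocessing step. Assigning by distance to area centroids is a simplification assumed for the example; an exact variant would use point-to-polygon distances against the segment shapes.

```python
import numpy as np

def to_closest_area(pred_positions, area_centroids):
    """Assign each regression output to the nearest area of the floor plan
    segmentation; area_centroids is a (K, 2) array of segment centers."""
    d = np.linalg.norm(pred_positions[:, None, :] - area_centroids[None, :, :],
                       axis=-1)                 # (n_samples, K) distance matrix
    return d.argmin(axis=1)                     # index of the closest area
```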
**Table 2. DNN model hyperparameter configuration.**

| HU | HL | Dropout | Reg. Penalty | lr | Batch | Epochs | Loss | Activation | Optimizer |
|---|---|---|---|---|---|---|---|---|---|
| 512 | 3 | 0.2 | 0.06 | 0.0007 | 32 | 200 | Cat. cross-entropy | ReLU | Adam |
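The following Keras sketch reconstructs the tuned DNN classifier from Table 2 for illustration. The framework, the interpretation of "Reg. Penalty" as an L2 weight penalty, and the early-stopping patience are assumptions, since they are not stated explicitly.

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

def build_dnn_classifier(input_dim, n_areas):
    """DNN area classifier following Table 2: 3 hidden layers with 512 ReLU
    units each, dropout 0.2, weight regularization, softmax output,
    Adam(lr = 7e-4) and categorical cross-entropy loss."""
    model = tf.keras.Sequential()
    model.add(layers.Input(shape=(input_dim,)))
    for _ in range(3):
        model.add(layers.Dense(512, activation="relu",
                               kernel_regularizer=regularizers.l2(0.06)))
        model.add(layers.Dropout(0.2))
    model.add(layers.Dense(n_areas, activation="softmax"))
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=7e-4),
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Early stopping on a held-out validation split, as described above
# (the patience value is an assumption):
# stop = tf.keras.callbacks.EarlyStopping(patience=10, restore_best_weights=True)
# model.fit(X_tr, Y_tr, validation_split=0.1, epochs=200, batch_size=32,
#           callbacks=[stop])
```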
_5.3. Model Evaluation_
For model evaluation, we require a splitting strategy into training and test data as well as a metric
that indicates how well a model performs. Those are introduced for the different model types in the
following.
_Splitting strategy:_

- Area classifiers: The training data is labeled according to the computed floor plan segmentations. We apply k-fold cross validation with k = 5, such that we arrive at 20% test data per fold. We use the stratified version to obtain a good representation of the whole data set in each split.
- Regression models: We choose a subset of testing positions by applying DBSCAN on the position labels only. Based on the resulting clusters, we apply 5-fold cross validation, such that 20% of the clusters are used as testing data in each fold (a sketch of both strategies follows below).
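A minimal sketch of the two splitting strategies with scikit-learn; the synthetic placeholder data is only there to make the example self-contained, and holding out whole DBSCAN clusters is expressed here via GroupKFold.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.model_selection import StratifiedKFold, GroupKFold

rng = np.random.default_rng(0)
X = rng.uniform(-110, -40, size=(200, 30))          # placeholder feature matrix
area_labels = rng.integers(0, 5, size=200)          # placeholder area labels
centers = rng.uniform(0, 80, size=(10, 2))          # 10 tight position clusters
positions = np.vstack([c + rng.normal(0, 0.5, size=(20, 2)) for c in centers])

# Area classifiers: stratified 5-fold CV on the area labels (20% test per fold).
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
folds_cls = list(skf.split(X, area_labels))

# Regression models: cluster the position labels with DBSCAN and hold out
# whole clusters per fold, so test positions are spatially separated from
# training positions. (Noise points, labeled -1, end up in one shared group.)
groups = DBSCAN(eps=2.0, min_samples=3).fit_predict(positions)
folds_reg = list(GroupKFold(n_splits=5).split(X, positions, groups=groups))
```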
_Metric:_

As metrics, we compute error vectors from the vectors of predictions and ground truth labels. These error vectors can be visualized via an empirical cumulative distribution function, which we will refer to as CDF in the following.

- Area classifiers: The error vector consists of the pairwise distances between the centers of the predicted areas and the ground truth areas, which is zero in case of a correct prediction. The y-intercept of the CDF corresponds to the machine learning accuracy metric (ACC). The curve yields additional knowledge about the severity of misclassifications. Furthermore, we report the F1 score (F1).
- Regression models: In case of exact position estimation, the error vector consists of the pairwise distances between predictions and ground truth positions.
- Selection via ACS: During model selection, we use the ACS as metric. This requires computing the class-wise F1 scores of the predicted and ground truth areas.
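A short sketch of how such an error CDF can be computed and plotted:

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_error_cdf(errors, label):
    """Empirical CDF of an error vector (centroid distances for area
    classifiers, position errors for regression models)."""
    e = np.sort(np.asarray(errors))
    y = np.arange(1, len(e) + 1) / len(e)
    plt.step(e, y, where="post", label=label)

# For area classifiers, the per-sample error is the distance between the
# centroids of the predicted and the true area (0 m if correct), so the
# CDF value at distance 0 equals the accuracy (ACC):
# errors = np.linalg.norm(centroids[y_pred] - centroids[y_true], axis=1)
```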
**6. Experimental Evaluation**
The subsequent experimental case study targets two separate questions:

1. Does adaptive area localization based on a data-aware floor plan segmentation provide more robust results than the standard regression approach for exact position estimation? In particular, is it suited for arbitrarily collected training data, e.g., via crowdsourcing?
2. When crowdsourced training data is generated continuously, the area classifier has to adapt to the current data basis. This is accomplished by recomputing the underlying floor plan segmentation and retraining a classification model on the data labeled with the corresponding areas. In this setting, is the proposed ACS suited for automatic model selection among a pool of models with varying performance and expressiveness?
_6.1. Study Design_
In order to answer these questions, we conduct two experiments.

- _Static performance analysis (Sections 6.2.1 and 6.3.1)_: We compute two floor plan segmentations with varying granularities for a snapshot of collected training data. For each segmentation, we train and evaluate various classification models. In addition, the performance of the proposed area classifiers is compared to standard regression models that aim at pinpointing the exact location.
- _Model selection via ACS for continuous data collection (Sections 6.2.2 and 6.3.2)_: We subdivide all available training data into 5 epochs that contain roughly the same amount of additional data to simulate continuous data collection. For each epoch, we compute a pool of floor plan segmentations, where we choose the parameters stop_size and minMembers empirically to obtain segmentations with various granularities. Subsequently, we optimize a classifier on the data labeled with the areas. The parameter λ has to be chosen according to the use case requirements. We exemplarily choose the outer bounds (0 and 1), where 0 favors high performance and low expressiveness and 1 targets models with higher expressiveness. Furthermore, λ = 0.5 is chosen to select a balanced model. We demonstrate how to use the ACS to automatically select the optimal model for the given use case requirements.
Both experiments are conducted on two different data sets. The first one has been collected in
our university building. The second one utilizes the publicly available benchmark dataset for indoor
localization using crowdsourced data [48], which was captured in Tampere, Finland. In the following
we report the results grouped by the different test sites.
_6.2. Case Study: RWTH Aachen University Building_
The test environment for the data we collected ourselves is the 4th floor of the civil engineering building of RWTH Aachen University, Germany. The floor contains several offices and a long hall. The total area is roughly 1500 m². Two smartphones (OnePlus and LG) were used to collect labeled fingerprints with continuous position tags. Over a period of 9 months (from December 2018 to August 2019), more than 1000 fingerprints were collected. The initial performance analysis uses the entire training data as a static data set.
6.2.1. Static Performance Analysis
By applying the LDCE algorithm with two different parameterizations, we obtain two floor plan segmentations that differ in granularity. The segmentations are shown in Figure 4, where the segments are represented by the shapes with black boundaries. The grey points represent fingerprint locations. We sum the amount of data per 2 × 2 m square and plot a heatmap to visualize the training data distribution. The initial DBSCAN is performed with eps = 2 and minPts = 3, which yields reasonably sized start clusters. We choose a wall penalty of 10, such that given max_eps = 30, there will be at most 2 separating walls between merging clusters. The first segmentation (Figure 4a) sets stop_size equal to 80, such that clusters are excluded from the expansion set when they reach more than 80 members. The second segmentation (Figure 4b) is obtained by setting stop_size to 50.
[Figure 4 appears here as two panels: (a) broad and (b) fine segmentation, each showing segment outlines, class-wise F1 annotations and a heatmap of the training data.]

**Figure 4. Floor plan segmentations of the RWTH Aachen university building. The black-lined shapes represent the areas of the classifier. The green numbers represent the class-wise F1 score of the best model. The grey dots are the fingerprint locations. The amount of training data per 2 × 2 m grid cell is illustrated via the heatmap color.**
We label the data set according to both segmentations and train the models described in Section 5.2 to predict the correct area. The resulting CDFs are illustrated in Figure 5.
[Figure 5 appears here as two panels, SEG = broad and SEG = fine, with distance in meters on the x-axis; compared models: CNN, DNN, SVM, KNN(2), KNN(3), KNN(5), DNN(reg).]

**Figure 5. CDF of the classification error. The error vector is built from the distances between the centroids of the true and predicted areas.**
The CNN and the DNN achieve the best classification performance, with an accuracy above 97% on the broad segmentation and almost 95% on the finer segmentation. While the SVM achieves acceptable results for the broad segmentation, its performance decreases significantly when using the finer segmentation. All regression model results are mapped to the closest class. They achieve lower performance than the CNN and DNN classifiers. A comprehensive overview of the model comparison can be found in Table 3. The lowest mean error is achieved by the DNN classifier, with values of 0.32 m and 0.54 m, respectively. For illustration purposes, we plot the class-wise F1 score of the best performing model as green numbers for each segment in Figure 4.
**Table 3. Performance of the classification models on both segmentations. The upper three models per segmentation are explicitly trained to predict one of the underlying areas, while the other models (reg->class) are regression models where the closest area to the regression prediction is assigned during postprocessing. Mean, Std, Min and Max refer to the area center error in meters.**

| Segmentation | Model | Parameter | Mean | Std | Min | Max | ACC | F1 |
|---|---|---|---|---|---|---|---|---|
| broad | CNN | – | 0.43 | 3.28 | 0.0 | 47.42 | 0.97 | 0.97 |
| broad | DNN | – | 0.32 | 2.17 | 0.0 | 47.42 | 0.97 | 0.97 |
| broad | SVM | – | 0.54 | 3.46 | 0.0 | 45.25 | 0.96 | 0.95 |
| broad | k-NN (reg->class) | k = 2 | 0.85 | 4.36 | 0.0 | 49.95 | 0.94 | 0.93 |
| broad | k-NN (reg->class) | k = 3 | 0.82 | 4.30 | 0.0 | 49.95 | 0.94 | 0.93 |
| broad | k-NN (reg->class) | k = 5 | 0.87 | 3.94 | 0.0 | 49.95 | 0.93 | 0.92 |
| broad | DNN (reg->class) | – | 0.56 | 2.88 | 0.0 | 25.09 | 0.95 | 0.95 |
| fine | CNN | – | 0.66 | 4.18 | 0.0 | 55.12 | 0.95 | 0.91 |
| fine | DNN | – | 0.54 | 3.74 | 0.0 | 59.90 | 0.95 | 0.91 |
| fine | SVM | – | 1.12 | 4.80 | 0.0 | 59.90 | 0.88 | 0.79 |
| fine | k-NN (reg->class) | k = 2 | 1.15 | 5.47 | 0.0 | 59.90 | 0.91 | 0.84 |
| fine | k-NN (reg->class) | k = 3 | 0.99 | 4.94 | 0.0 | 59.90 | 0.91 | 0.87 |
| fine | k-NN (reg->class) | k = 5 | 1.00 | 4.54 | 0.0 | 48.34 | 0.91 | 0.86 |
| fine | DNN (reg->class) | – | 0.71 | 3.07 | 0.0 | 42.50 | 0.92 | 0.87 |
In addition, we evaluate the performance of training a standard regression model for exact position estimation. The results are presented in Figure 6. The best regression model (DNN) guarantees that in 95% of the cases, the estimated position does not differ by more than 10 m. In comparison, the area classification models guarantee a correct area prediction in 95% of the cases and thus achieve more robust results. This is achieved by lowering the expressiveness and exploiting the knowledge about the available training data.
[Figure 6 appears here: (a) CDF of the regression errors (distance in meters) for the models DNN, KNN(2), KNN(3), KNN(5); (b) error statistics table.]

**Figure 6. Performance of the regression models. (a) shows the CDF of the prediction errors and (b) holds the mean, standard deviation, minimal and maximal error (in meters):**

| Model | Mean | Std | Min | Max |
|---|---|---|---|---|
| DNN | 4.76 | 4.09 | 0.01 | 49.89 |
| k-NN (k = 2) | 6.91 | 5.53 | 0.01 | 61.06 |
| k-NN (k = 3) | 6.50 | 4.24 | 0.31 | 40.71 |
| k-NN (k = 5) | 6.56 | 4.25 | 0.45 | 41.76 |
6.2.2. Model Selection via ACS
In the following, we present the results of applying the ACS for model selection as described in Section 6.1. Figure 7 shows the ACS of the trained models on the pool of segmentations for the three choices of λ. The figure is interpreted by fixing a choice of λ depending on the use case. At each epoch, we can then deliver the model with the highest ACS, since it provides the best balance between expressiveness and performance. Note that for the first two epochs, the segmentations obtained from stop_size = {60, 80} result in a single cluster, since too little data is available, and are thus discarded. When inspecting the score for λ = 0.5, we see that at the second and third epoch, we would use the segmentation obtained by LDCE (5:20), while in epoch four the highest score is achieved on LDCE (10:40). Finally, for the last epoch, the classifier that was optimized on LDCE (40:80) is selected.
[Figure 7 appears here as three panels, (a) λ = 0, (b) λ = 0.5, (c) λ = 1, showing the ACS over epochs 1–5 for LDCE (5:20), LDCE (10:40), LDCE (20:60) and LDCE (40:80).]

**Figure 7. Area classification score (ACS) for three choices of λ. Per epoch, the model with the highest score is chosen. The legend shows the (minMembers : stop_size) parameters used during segmentation. The other parameters of LDCE are chosen as presented in Table 1.**
The changes in ACS are discussed epoch-wise in the following. While epoch 1 contains only training data of the lower left offices, in epoch 2 additional training data along the hall has been collected. This allows for additional areas. LDCE (5:20) yields many more new segments along the hall, which causes the strong increase for λ = 1. For λ = 0, those small segments do not affect the score; however, the achieved class-wise F1 score does, which is slightly lower for LDCE (10:40). Between epochs 2 and 3, only a few new areas are covered, but the lower left offices feature additional data. LDCE (20:60) and LDCE (40:80) are equal, which can also be observed from their similar ACS values. In LDCE (10:40), the lower offices had already been split in epoch 2, which resulted in poor performance. The additional data allows for improved model performance, which explains the increased ACS. Between epochs 3 and 4, only data in previously uncovered areas is added. This causes an increased ACS value for all segmentations and λ values. For the broadest segmentation LDCE (40:80), the previous areas remain the same, while the other segmentations adopt a finer granularity. Therefore, the highest relative increase for λ = 0 is observed for LDCE (40:80). Between epochs 4 and 5, no additional areas are covered with training data. However, segmentation LDCE (40:80) rearranges its area shapes, such that the total covered area increases. While the class-wise F1 scores remain roughly the same, this causes the jump in the ACS value for λ = 0. The other segmentations remain mainly unchanged, since only the F1 scores of the models change slightly.
_6.3. Case Study: Tampere, Finland_
In addition to the data collected by ourselves, we evaluate our approach on a publicly available fingerprinting dataset that was generated via crowdsourcing [48]. The original dataset consists of 4648 fingerprints collected by 21 devices in a university building in Tampere, Finland. The fingerprints are distributed over five floors, with the 1st floor containing the highest sample density. Therefore, we select the data of the 1st floor as the subset for our experiments.
6.3.1. Static Performance Analysis
Using the entire data collected on the 1st floor, we construct two floor plan segmentations based on the LDCE algorithm, which can be found in Figure 8. The initial DBSCAN is performed with eps = 5 and minPts = 3. Note that in contrast to the other site, we slightly increase the eps parameter to obtain reasonably sized start clusters. This is justified because the overall training data distribution is sparser and the map has more than 5 times the extent of the other test site. Following the same logic, we increase the max_eps parameter to 50. We use the same penalties as before but lower the wall penalty to 5, since we want to allow clusters to span several office rooms. The remaining parameters can be found in Table 1. The broad segmentation was obtained by choosing a stop_size of 100, and for the fine segmentation we set stop_size equal to 60.
The dataset is published with a predetermined train–test split, which consists of 20% training data and 80% testing data. When plotting the training data of the 1st floor, we noted that only a single region contains training samples, which makes the proposed split impractical. Therefore, we apply the splitting strategy described in Section 5.3. The CDFs of the class-wise error vectors are presented in Figure 9. Similar to the other dataset, the DNN classification models achieve the best results independent of the segmentation. On the broad segmentation, an accuracy of 89% is reached, and in 97% of the cases the predicted centroid of the area is less than 30 m away from the centroid of the true area. A comprehensive overview of the individual model performance can be found in Table 4. The DNN achieves the lowest mean centroid error and the lowest standard deviation. The prediction performance with respect to individual areas is illustrated in Figure 8. The green numbers represent the class-wise F1 scores achieved by the best model.
For comparison with exact position estimation, we evaluate the performance of training standard
regression models. The results are presented in Figure 10. While the DNN regression model achieves
an error below 10 m with 90% probability, we achieve a correct area prediction in ~90% of the cases on
the broad floor plan segmentation. Thus, for the goal of coarse localization the area classifiers provide
higher guarantees.
6.3.2. Model Selection via ACS
In the following, we present the results of applying the ACS for model selection as described in Section 6.1. Figure 11 illustrates the obtained ACS of the trained models on the pool of segmentations for the three choices of λ. Using the ACS as selection criterion, we can state the following observations. For λ = 0 (high performance), the model trained on LDCE (15:40) is chosen for the first epoch and LDCE (40:80) is selected for the second and third epoch. For the entire training data, the classifier trained on LDCE (60:100) is chosen. For λ = 0.5 (balance between expressiveness and performance), LDCE (15:40) provides the selected segmentation for the first four epochs and is replaced by the slightly broader segmentation LDCE (25:60) in the last epoch. The highest expressiveness is given for λ = 1, which selects the model trained on the finest segmentation LDCE (5:20) for all epochs.
[Figure 8 appears here as two panels: (a) broad and (b) fine segmentation, each showing segment outlines, class-wise F1 annotations and a heatmap of the training data.]

**Figure 8. Floor plan segmentations of the 1st floor of the public dataset [48]. The black-lined shapes represent the areas of the classifier. The green numbers represent the class-wise F1 score of the best model. The grey dots are the fingerprint locations. The amount of training data per 4 × 4 m grid cell is illustrated via the heatmap color.**
[Figure 9 appears here as two panels, SEG = broad and SEG = fine, with distance in meters on the x-axis; compared models: CNN, DNN, SVM, KNN(2), KNN(3), KNN(5), DNN(reg).]

**Figure 9. CDF of the classification error. The error vector is built from the distances between the centroids of the true and predicted areas.**
**Table 4. Performance of the classification models on both segmentations. The upper three models per segmentation are explicitly trained to predict one of the underlying areas, while the other models (reg->class) are regression models where the closest area to the regression prediction is assigned during postprocessing. Mean, Std, Min and Max refer to the area center error in meters.**

| Segmentation | Model | Parameter | Mean | Std | Min | Max | ACC | F1 |
|---|---|---|---|---|---|---|---|---|
| broad | CNN | – | 3.70 | 10.15 | 0.0 | 69.26 | 0.87 | 0.86 |
| broad | DNN | – | 3.21 | 9.60 | 0.0 | 69.26 | 0.89 | 0.88 |
| broad | SVM | – | 4.30 | 10.92 | 0.0 | 65.84 | 0.85 | 0.83 |
| broad | k-NN (reg->class) | k = 2 | 3.55 | 10.02 | 0.0 | 69.26 | 0.87 | 0.86 |
| broad | k-NN (reg->class) | k = 3 | 3.97 | 10.48 | 0.0 | 69.26 | 0.86 | 0.85 |
| broad | k-NN (reg->class) | k = 5 | 4.34 | 10.84 | 0.0 | 65.84 | 0.85 | 0.83 |
| broad | DNN (reg->class) | – | 4.62 | 11.17 | 0.0 | 65.84 | 0.83 | 0.81 |
| fine | CNN | – | 3.65 | 9.11 | 0.0 | 90.47 | 0.83 | 0.81 |
| fine | DNN | – | 3.53 | 9.00 | 0.0 | 91.75 | 0.84 | 0.81 |
| fine | SVM | – | 7.00 | 12.36 | 0.0 | 100.44 | 0.71 | 0.56 |
| fine | k-NN (reg->class) | k = 2 | 3.72 | 9.12 | 0.0 | 90.47 | 0.82 | 0.79 |
| fine | k-NN (reg->class) | k = 3 | 3.95 | 9.30 | 0.0 | 90.47 | 0.81 | 0.77 |
| fine | k-NN (reg->class) | k = 5 | 4.25 | 9.55 | 0.0 | 69.79 | 0.80 | 0.76 |
| fine | DNN (reg->class) | – | 5.00 | 10.07 | 0.0 | 69.79 | 0.76 | 0.72 |
[Figure 10 appears here: (a) CDF of the regression errors (distance in meters) for the models DNN, KNN(2), KNN(3), KNN(5); (b) error statistics table.]

**Figure 10. Performance of the regression models. (a) shows the CDF of the prediction errors and (b) holds the mean, standard deviation, minimal and maximal error (in meters):**

| Model | Mean | Std | Min | Max |
|---|---|---|---|---|
| DNN | 5.34 | 4.00 | 0.05 | 45.72 |
| k-NN (k = 2) | 5.89 | 4.66 | 0.05 | 38.79 |
| k-NN (k = 3) | 5.79 | 4.76 | 0.15 | 38.30 |
| k-NN (k = 5) | 5.90 | 4.72 | 0.04 | 36.87 |
In the following, the ACS graphs are analyzed epoch-wise. In the first epoch, LDCE (25:60) and LDCE (15:40) consist of only two broad segments, on which the models achieve the same class-wise F1 scores. This can be observed since both have the same scores for a fixed λ value. Since they cover a larger total area than the finer LDCE (5:20), they score higher for low and medium λ values. However, the larger number of segments of LDCE (5:20) causes the higher ACS value for λ = 1. In epoch 2, LDCE (5:20) adds the most additional segments, while the number of added segments is the same for LDCE (15:40) and LDCE (25:60). This explains the scores observed for λ = 1. While the number of segments increases for all three segmentations, high class-wise F1 scores can be maintained for LDCE (15:40) and LDCE (25:60). However, the finest segmentation LDCE (5:20) sacrifices performance for expressiveness and thus scores lower for λ = 0.5. For λ = 0, the score does not change much, since the total covered area remains mostly constant. However, LDCE (40:80), which is present for the first time in epoch 2, covers a much wider total area, since it consists of only a few large segments, and therefore scores considerably higher for λ = 0. Between epochs 2 and 3, data is collected in previously uncovered areas, which allows for finer segmentations independent of the chosen parameters. This can be observed by the significant increase in ACS for λ = {0.5, 1}. On the contrary, between epochs 3 and 4, mostly data within previously covered areas is collected, which allows for slightly higher performance. Finally, in the last epoch, the segmentations change again, and especially LDCE (60:100) computes a segmentation that covers a much larger total extent than the other segmentations. This explains the strong increase in the ACS value for λ = 0.
[Figure 11 appears here as three panels, (a) λ = 0, (b) λ = 0.5, (c) λ = 1, showing the ACS over epochs 1–5 for LDCE (5:20), LDCE (15:40), LDCE (25:60), LDCE (40:80) and LDCE (60:100).]

**Figure 11. Area classification score (ACS) for three choices of λ. Per epoch, the model with the highest score is chosen. The legend shows the (minMembers : stop_size) parameters used during segmentation. The other parameters of LDCE are chosen as presented in Table 1.**
**7. Discussion**
In the following, the findings of our work are discussed. The results of the case study are analyzed with emphasis on the proposed concepts. Subsequently, the benefits of adaptive area localization are highlighted in comparison to existing solutions. Finally, potential applications of the proposed concept are described.
_7.1. Case Study Results_
_Model performance:_
Independent of the test site, the DNN area classifiers outperformed all other models with respect to standard classification metrics such as accuracy and F1 score. The F1 metric indicates that the model provides high precision and recall, which means that each individual area is detected properly and, in case it is selected, the prediction is trustworthy. CNN models are especially useful for learning tasks where inputs are locally connected, such as adjacent pixels in images [49]. When randomly arranging the access point vector as a matrix, it cannot be claimed that a comparable relation between adjacent matrix entries exists. Therefore, the additional feature extraction should not provide any benefit, which is empirically demonstrated by the results. The SVM model can only be used as a multi-class classifier by training several individual classifiers and following a certain voting scheme. We applied the one-vs-one strategy, which results in K(K − 1)/2 classifiers if we want to detect K areas. Besides the high computational effort, the results are worse than those of a simple k-NN classifier, which is also observed in [50].
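A small illustration of this overhead: scikit-learn's SVC follows exactly this one-vs-one scheme internally, so the number of trained binary RBF-SVMs grows quadratically with the number of areas.

```python
from sklearn.svm import SVC

# One-vs-one multi-class SVM: one binary classifier per pair of classes,
# i.e., K * (K - 1) / 2 classifiers for K areas.
K = 15
print(K * (K - 1) // 2)   # 105 binary RBF-SVMs for 15 areas

svm = SVC(kernel="rbf", decision_function_shape="ovo")
```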
_LDCE floor plan segmentation algorithm:_
During the second experiment, it was demonstrated that the proposed LDCE algorithm is capable
of providing a pool of segmentations with various granularities. Those can be utilized in combination
with the proposed ACS to select the best area classifier with respect to the right balance between
expressiveness and performance. The algorithm requires certain parameters to be chosen empirically
based on the given site.
_ACS model selection metric:_
The effect of λ on the ACS was theoretically evaluated and demonstrated for three values in the experiments. However, explicit values cannot yet be associated with qualitative terms. In particular, it cannot be stated which exact value is optimal for a certain use case. However, the ACS is lazily computed. Once an area classifier has been trained, its ACS can be computed for several choices of λ by utilizing the stored prediction and ground truth vectors. This means that it is computationally inexpensive to compute the ACS for a pool of trained models and a large set of λ values. An initial λ value is guessed; when the retrieved model does not meet the requirements, λ can be adjusted to match the right balance between expressiveness and performance.
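The following sketch illustrates this lazy evaluation over a grid of λ values. Since the exact ACS formula is defined earlier in the paper, the convex combination of a performance term and an expressiveness term used here is only an assumed stand-in.

```python
import numpy as np

def select_models(perf, expr, lambdas):
    """Evaluate a pool of trained area classifiers for many lambda values.
    perf and expr are per-model performance and expressiveness terms,
    precomputed once from the stored prediction and ground-truth vectors.
    NOTE: the convex combination below is a stand-in for the actual ACS
    definition given earlier in the paper."""
    lam = np.asarray(lambdas)[:, None]                      # shape (L, 1)
    scores = (lam * np.asarray(expr)[None, :]
              + (1 - lam) * np.asarray(perf)[None, :])      # shape (L, n_models)
    return scores.argmax(axis=1)                            # best model per lambda
```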
_7.2. Adaptive Area Localization_
Area localization has been proposed for large-scale deployments of fingerprinting-based solutions or when the data quality does not allow for exact position estimation. The objective is to provide higher positioning guarantees by lowering the expressiveness of the model. In related work, the segmentation used during area classification features two characteristics [17–20]:

- It is determined independently of the available training data.
- It is statically determined, mostly prior to data collection.
Both features are unfavorable when working with crowdsourced data that is continuously collected, and solutions for applying area localization in such settings are currently missing in the literature. Crowdsourced data collection results in a spatially non-uniform data distribution [15]. Training a classifier on data where certain areas (classes) feature only few or no samples results in poor performance. A segmentation that is determined independently of the training data might result in such sparsely covered areas. Therefore, we introduce the concept of data-aware floor plan segmentation and propose the LDCE algorithm to compute such a segmentation. A data-aware floor plan segmentation introduces a trade-off between expressiveness and performance, which has not yet been quantified in the literature. However, such a quantification is required to measure how well an area classifier performs given that the underlying segmentation is not static. Therefore, we propose the ACS, which captures this trade-off. Furthermore, during crowdsourcing, data is accumulated over time. The segmentation determined for a given snapshot of data might become unfavorable once additional data has been collected. It is therefore crucial to regularly recompute the segmentation into areas. In summary, our proposed concepts enable area localization for crowdsourced data, and we empirically demonstrate that this achieves higher reliability than exact position estimation. The model adapts to the accumulating training data and finds the right balance between expressiveness and performance.
_7.3. Potential Applications_
Depending on the use case, localization systems might have distinct requirements. A system with the objective of providing proximity-based services (e.g., inside a shopping mall [17]) requires a coarse-grained position estimation with high guarantees. In contrast, a localization system used for the navigation of people with visual impairments might benefit from a more fine-grained position estimation. Given a base of crowdsourced training data, our approach allows us to automatically construct area localization models for any required trade-off between expressiveness and performance. Furthermore, it adapts to the accumulating training data that results from continuous crowdsourced data collection. To the best of our knowledge, generating such adaptive localization models based on fingerprinting has not been proposed in the literature before.

In addition, absolute location information can be merged with systems that iteratively determine the position of a user, such as PDR. WLAN fingerprinting is already employed in sensor fusion solutions [51–53]. The granularity and level of guarantee of the fingerprinting model might impact the initialization and convergence time of the fused model. With our approach, the fingerprinting-based localization model with the optimal granularity in that regard can be trained and deployed in the fused model.
**8. Conclusions**
In this work, we propose the concept of adaptive area localization to achieve reliable position estimations using crowdsourced data that is accumulated over time. Existing area localization solutions employ a static segmentation into areas that is independent of the available training data. This approach is not applicable to crowdsourced data collection, since the latter features an unbalanced spatial training data distribution that changes over time. To solve this, we propose the LDCE algorithm, which computes data-aware floor plan segmentations with various granularities. The underlying segmentation influences the model performance as well as its expressiveness. We introduce the ACS to select the area classifier that provides the best trade-off between them. With these concepts, we can regularly compute a pool of segmentations and train classifiers on the data labeled with the corresponding areas. We select the best model via the ACS and deploy it for localization.

The proposed concepts are validated on a self-collected as well as on a publicly available crowdsourced data set. We demonstrate that the proposed area classifiers provide higher positioning guarantees than models for exact position estimation. Furthermore, we show that they adapt to the accumulating data basis. In future work, we want to utilize PDR techniques and sensor fusion to automate the data collection process and to enhance the positioning quality during localization. In addition, our approach is not limited to WLAN RSS fingerprinting, but can be extended to support magnetic and light sensors [54,55] or Bluetooth [56], which we want to demonstrate in future work.
**Author Contributions: M.L., J.B. and R.K. designed the methodology; M.L. conceived and conducted the**
experiments; J.B. and R.K. administrated and supervised the research project; M.L. wrote the paper, J.B. and R.K.
reviewed the text and offered valuable suggestions for improving the manuscript.
**Funding: This research received no external funding.**
**Conflicts of Interest: The authors declare no conflict of interest.**
**References**
1. Zafari, F.; Gkelias, A.; Leung, K.K. A Survey of Indoor Localization Systems and Technologies. Commun. Surv.
_[Tutorials IEEE 2019, 21, 2568–2599. [CrossRef]](http://dx.doi.org/10.1109/COMST.2019.2911558)_
2. Basiri, A.; Lohan, E.S.; Moore, T.; Winstanley, A.; Peltola, P.; Hill, C.; Amirian, P.; Figueiredo e Silva, P. Indoor
location based services challenges, requirements and usability of current solutions. Comput. Sci. Rev. 2017,
_[24, 1–12. [CrossRef]](http://dx.doi.org/10.1016/j.cosrev.2017.03.002)_
3. Wang, Y.; Shao, L. Understanding occupancy pattern and improving building energy efficiency through
[Wi-Fi based indoor positioning. Build. Environ. 2017, 114, 106–117. [CrossRef]](http://dx.doi.org/10.1016/j.buildenv.2016.12.015)
4. D’Aloia, M.; Cortone, F.; Cice, G.; Russo, R.; Rizzi, M.; Longo, A. Improving energy efficiency in building
system using a novel people localization system. In Proceedings of the 2016 IEEE Workshop on
Environmental, Energy, and Structural Monitoring Systems (EESMS), Bari, Italy, 13–14 June 2016; pp. 1–6.
[[CrossRef]](http://dx.doi.org/10.1109/EESMS.2016.7504811)
5. Ahmetovic, D.; Murata, M.; Gleason, C.; Brady, E.; Takagi, H.; Kitani, K.; Asakawa, C. Achieving Practical
and Accurate Indoor Navigation for People with Visual Impairments. In Proceedings of the 14th Web for
_All Conference on The Future of Accessible Work—W4A ’17; ACM Press: New York, NY, USA, 2017; pp. 1–10._
[[CrossRef]](http://dx.doi.org/10.1145/3058555.3058560)
6. Ho, T.W.; Tsai, C.J.; Hsu, C.C.; Chang, Y.T.; Lai, F. Indoor navigation and physician-patient communication
in emergency department. In Proceedings of the 3rd International Conference on Communication and Information
_Processing—ICCIP ’17; Ben-Othman, J., Gang, F., Liu, J.S., Arai, M., Eds.; ACM Press: New York, NY, USA,_
[2017; pp. 92–98. [CrossRef]](http://dx.doi.org/10.1145/3162957.3162971)
7. Kárník, J.; Streit, J. Summary of available indoor location techniques. IFAC-PapersOnLine 2016, 49, 311–317.
[[CrossRef]](http://dx.doi.org/10.1016/j.ifacol.2016.12.055)
8. He, S.; Chan, S.H.G. Wi-Fi Fingerprint-Based Indoor Positioning: Recent Advances and Comparisons.
_[Commun. Surv. Tutorials IEEE 2016, 18, 466–490. [CrossRef]](http://dx.doi.org/10.1109/COMST.2015.2464084)_
9. Xia, S.; Liu, Y.; Yuan, G.; Zhu, M.; Wang, Z. Indoor Fingerprint Positioning Based on Wi-Fi: An Overview.
_[ISPRS Int. J. Geo-Inf. 2017, 6, 135. [CrossRef]](http://dx.doi.org/10.3390/ijgi6050135)_
10. Batistic, L.; Tomic, M. Overview of indoor positioning system technologies. In Proceedings of the 2018 41st
International Convention on Information and Communication Technology, Electronics and Microelectronics
[(MIPRO), Opatija, Croatia, 21–25 May 2018; pp. 473–478.[CrossRef]](http://dx.doi.org/10.23919/MIPRO.2018.8400090)
11. Yassin, A.; Nasser, Y.; Awad, M.; Al-Dubai, A.; Liu, R.; Yuen, C.; Raulefs, R.; Aboutanios, E. Recent Advances
in Indoor Localization: A Survey on Theoretical Approaches and Applications. Commun. Surv. Tutorials IEEE
**[2017, 19, 1327–1346. [CrossRef]](http://dx.doi.org/10.1109/COMST.2016.2632427)**
12. Rai, A.; Chintalapudi, K.K.; Padmanabhan, V.N.; Sen, R. Zee: Zero-Effort Crowdsourcing for Indoor
Localization. In Proceedings of the 18th annual international conference on Mobile computing and
networking (Mobicom ’12), Istanbul, Turkey, 22–26 August 2012; pp. 293—304.
13. Radu, V.; Marina, M.K. HiMLoc: Indoor smartphone localization via activity aware Pedestrian Dead
Reckoning with selective crowdsourced WiFi fingerprinting. In Proceedings of the International Conference
on Indoor Positioning and Indoor Navigation, Montbeliard-Belfort, France, 28–31 October 2013; pp. 1–10.
[[CrossRef]](http://dx.doi.org/10.1109/IPIN.2013.6817916)
14. Santos, R.; Barandas, M.; Leonardo, R.; Gamboa, H. Fingerprints and Floor Plans Construction for Indoor
[Localisation Based on Crowdsourcing. Sensors 2019, 19, 919. [CrossRef]](http://dx.doi.org/10.3390/s19040919)
15. Ye, Y.; Wang, B. RMapCS: Radio Map Construction From Crowdsourced Samples for Indoor Localization.
_[IEEE Access 2018, 6, 24224–24238. [CrossRef]](http://dx.doi.org/10.1109/ACCESS.2018.2830415)_
16. He, S.; Tan, J.; Chan, S.H.G. Towards area classification for large-scale fingerprint-based system. In Proceedings
_of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing - UbiComp ’16; Lukowicz,_
P.; Krüger, A.; Bulling, A.; Lim, Y.K.; Patel, S.N., Eds.; ACM Press: New York, NY, USA, 2016; pp. 232–243.
[[CrossRef]](http://dx.doi.org/10.1145/2971648.2971689)
17. Lopez-Pastor, J.A.; Ruiz-Ruiz, A.J.; Martinez-Sala, A.S.; Luis Gomez-Tornero, J. Evaluation of an indoor
positioning system for added-value services in a mall. In Proceedings of the 2019 International Conference
on Indoor Positioning and Indoor Navigation (IPIN), Pisa, Italy, 30 September 2019–3 October 2019; pp. 1–8.
[[CrossRef]](http://dx.doi.org/10.1109/IPIN.2019.8911822)
18. Wei, J.; Zhou, X.; Zhao, F.; Luo, H.; Ye, L. Zero-cost and map-free shop-level localization algorithm based on
crowdsourcing fingerprints. In Proceedings of the 2018 Ubiquitous Positioning, Indoor Navigation and
[Location-Based Services (UPINLBS), Wuhan, China, 22–23 March 2018; pp. 1–10.[CrossRef]](http://dx.doi.org/10.1109/UPINLBS.2018.8559708)
19. Rezgui, Y.; Pei, L.; Chen, X.; Wen, F.; Han, C. An Efficient Normalized Rank Based SVM for Room Level
[Indoor WiFi Localization with Diverse Devices. Mob. Inf. Syst. 2017, 2017, 1–19. [CrossRef]](http://dx.doi.org/10.1155/2017/6268797)
20. Liu, H.X.; Chen, B.A.; Tseng, P.H.; Feng, K.T.; Wang, T.S. Map-Aware Indoor Area Estimation with Shortest
Path Based on RSS Fingerprinting. In Proceedings of the 2015 IEEE 81st Vehicular Technology Conference
[(VTC Spring), Glasgow, UK, 11–14 May 2015; pp. 1–5.[CrossRef]](http://dx.doi.org/10.1109/VTCSpring.2015.7145926)
21. Torres-Solis, J.; Falk., T.; Chau, T. A Review of Indoor Localization Technologies: Towards Navigational
Assistance for Topographical Disorientation. In Ambient Intelligence; Villanueva Molina, F.J., Ed.; IntechOpen:
[Rijeka, Croatia, 2010. [CrossRef]](http://dx.doi.org/10.5772/8678)
22. Pecoraro, G.; Di Domenico, S.; Cianca, E.; de Sanctis, M. LTE signal fingerprinting localization based on
CSI. In Proceedings of the 2017 IEEE 13th International Conference on Wireless and Mobile Computing,
[Networking and Communications (WiMob), Rome, Italy, 9–11 October 2017; pp. 1–8.[CrossRef]](http://dx.doi.org/10.1109/WiMOB.2017.8115803)
23. Xiao, L.; Behboodi, A.; Mathar, R. A deep learning approach to fingerprinting indoor localization solutions.
In Proceedings of the 2017 27th International Telecommunication Networks and Applications Conference
[(ITNAC), Melbourne, VIC, Australia, 22–24 November 2017; pp. 1–7.[CrossRef]](http://dx.doi.org/10.1109/ATNAC.2017.8215428)
24. Sinha, R.S.; Lee, S.M.; Rim, M.; Hwang, S.H. Data Augmentation Schemes for Deep Learning in an Indoor
[Positioning Application. Electronics 2019, 8, 554. [CrossRef]](http://dx.doi.org/10.3390/electronics8050554)
25. Yang, S.; Dessai, P.; Verma, M.; Gerla, M. FreeLoc: Calibration-free crowdsourced indoor localization.
In Proceedings of the 2013 Proceedings IEEE INFOCOM, Turin, Italy, 14–19 April 2013; pp. 2481–2489.
[[CrossRef]](http://dx.doi.org/10.1109/INFCOM.2013.6567054)
26. Kim, W.; Yang, S.; Gerla, M.; Lee, E.K. Crowdsource Based Indoor Localization by Uncalibrated
[Heterogeneous Wi-Fi Devices. Mob. Inf. Syst. 2016, 2016, 1–18. [CrossRef]](http://dx.doi.org/10.1155/2016/4916563)
27. Mittal, A.; Tiku, S.; Pasricha, S. Adapting Convolutional Neural Networks for Indoor Localization with
Smart Mobile Devices. In Proceedings of the 2018 on Great Lakes Symposium on VLSI—GLSVLSI ’18; Chen, D.;
[Homayoun, H.; Taskin, B., Eds.; ACM Press: New York, NY, USA, 2018; pp. 117–122. [CrossRef]](http://dx.doi.org/10.1145/3194554.3194594)
28. Adege, A.; Lin, H.P.; Tarekegn, G.; Jeng, S.S. Applying Deep Neural Network (DNN) for Robust Indoor
[Localization in Multi-Building Environment. Appl. Sci. 2018, 8, 1062. [CrossRef]](http://dx.doi.org/10.3390/app8071062)
29. Wang, X.; Gao, L.; Mao, S.; Pandey, S. DeepFi: Deep learning for indoor fingerprinting using channel state
information. In Proceedings of the 2015 IEEE Wireless Communications and Networking Conference
[(WCNC), New Orleans, LA, USA, 9–12 March 2015; pp. 1666–1671. [CrossRef]](http://dx.doi.org/10.1109/WCNC.2015.7127718)
30. Wang, X.; Gao, L.; Mao, S.; Pandey, S. CSI-based Fingerprinting for Indoor Localization: A Deep Learning
[Approach. IEEE Trans. Veh. Technol. 2016, 66, 763–776. [CrossRef]](http://dx.doi.org/10.1109/TVT.2016.2545523)
31. Chen, H.; Zhang, Y.; Li, W.; Tao, X.; Zhang, P. ConFi: Convolutional Neural Networks Based Indoor Wi-Fi
[Localization Using Channel State Information. IEEE Access 2017, 5, 18066–18074. [CrossRef]](http://dx.doi.org/10.1109/ACCESS.2017.2749516)
32. [Yang, Z.; Zhou, Z.; Liu, Y. From RSSI to CSI. ACM Comput. Surv. (CSUR) 2013, 46, 1–32. [CrossRef]](http://dx.doi.org/10.1145/2543581.2543592)
33. He, S.; Chan, S.H.G. Towards Crowdsourced Signal Map Construction via Implicit Interaction of IoT Devices.
In Proceedings of the 2017 14th Annual IEEE International Conference on Sensing, Communication, and
[Networking (SECON), San Diego, CA, USA, 12–14 June 2017; pp. 1–9.[CrossRef]](http://dx.doi.org/10.1109/SAHCN.2017.7964901)
34. Zhou, B.; Li, Q.; Mao, Q.; Tu, W. A Robust Crowdsourcing-Based Indoor Localization System. Sensors
**[2017, 17, 864. [CrossRef]](http://dx.doi.org/10.3390/s17040864)**
35. Jiang, Q.; Ma, Y.; Liu, K.; Dou, Z. A Probabilistic Radio Map Construction Scheme for Crowdsourcing-Based
[Fingerprinting Localization. IEEE Sensors J. 2016, 16, 3764–3774. [CrossRef]](http://dx.doi.org/10.1109/JSEN.2016.2535250)
36. Jung, S.H.; Moon, B.C.; Han, D. Unsupervised Learning for Crowdsourced Indoor Localization in Wireless
[Networks. IEEE Trans. Mob. Comput. 2016, 15, 2892–2906. [CrossRef]](http://dx.doi.org/10.1109/TMC.2015.2506585)
37. Jung, S.H.; Han, D. Automated Construction and Maintenance of Wi-Fi Radio Maps for Crowdsourcing-Based
[Indoor Positioning Systems. IEEE Access 2018, 6, 1764–1777. [CrossRef]](http://dx.doi.org/10.1109/ACCESS.2017.2780243)
38. Pipelidis, G.; Tsiamitros, N.; Ustaoglu, E.; Kienzler, R.; Nurmi, P.; Flores, H.; Prehofer, C. Cross-Device
Radio Map Generation via Crowdsourcing. In Proceedings of the 2019 International Conference on Indoor
[Positioning and Indoor Navigation (IPIN), Pisa, Italy, 30 September 2019–3 October 2019; pp. 1–8.[CrossRef]](http://dx.doi.org/10.1109/IPIN.2019.8911766)
39. Chow, K.H.; He, S.; Tan, J.; Chan, S.H.G. Efficient Locality Classification for Indoor Fingerprint-Based
[Systems. IEEE Trans. Mob. Comput. 2019, 18, 290–304. [CrossRef]](http://dx.doi.org/10.1109/TMC.2018.2839112)
40. Nowicki, M.; Wietrzykowski, J. Low-Effort Place Recognition with WiFi Fingerprints Using Deep Learning.
In Automation 2017, Advances in Intelligent Systems and Computing, Warsaw, Poland, 15–17 March 2017;
Szewczyk, R.; Zieliński, C.; Kaliczyńska, M., Eds.; Springer International Publishing: Cham, Switzerland,
[2017; Volume 550, pp. 575–584.[CrossRef]](http://dx.doi.org/10.1007/978-3-319-54042-9_57)
41. Kim, K.S.; Lee, S.; Huang, K. A scalable deep neural network architecture for multi-building and multi-floor
[indoor localization based on Wi-Fi fingerprinting. Big Data Anal. 2018, 3, 466. [CrossRef]](http://dx.doi.org/10.1186/s41044-018-0031-2)
42. Ibrahim, M.; Torki, M.; ElNainay, M. CNN based Indoor Localization using RSS Time-Series. In Proceedings
of the 2018 IEEE Symposium on Computers and Communications (ISCC), Natal, Brazil, 25–28 June 2018;
[pp. 01044–01049. [CrossRef]](http://dx.doi.org/10.1109/ISCC.2018.8538530)
43. Song, C.; Wang, J. WLAN Fingerprint Indoor Positioning Strategy Based on Implicit Crowdsourcing and
[Semi-Supervised Learning. ISPRS Int. J. Geo-Inf. 2017, 6, 356. [CrossRef]](http://dx.doi.org/10.3390/ijgi6110356)
44. Bishop, C.M. Pattern Recognition and Machine Learning (Information Science and Statistics); Springer: Secaucus,
NJ, USA, 2006.
45. Dong, K.; Ling, Z.; Xia, X.; Ye, H.; Wu, W.; Yang, M. Dealing with Insufficient Location Fingerprints in Wi-Fi
[Based Indoor Location Fingerprinting. Wirel. Commun. Mob. Comput. 2017, 2017, 1–11. [CrossRef]](http://dx.doi.org/10.1155/2017/1268515)
46. Han, J.; Kamber, M.; Pei, J. Data Mining: Concepts and Techniques, online-ausg ed.; Morgan Kaufmann Series
in Data Management Systems; Elsevier Science: Burlington, Vermont, 2011.
47. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016.
48. Lohan, E.; Torres-Sospedra, J.; Leppäkoski, H.; Richter, P.; Peng, Z.; Huerta, J. Wi-Fi Crowdsourced
[Fingerprinting Dataset for Indoor Positioning. Data 2017, 2, 32. [CrossRef]](http://dx.doi.org/10.3390/data2040032)
49. Rawat, W.; Wang, Z. Deep Convolutional Neural Networks for Image Classification: A Comprehensive
[Review. Neural Comput. 2017, 29, 2352–2449. [CrossRef]](http://dx.doi.org/10.1162/NECO_a_00990)
50. Rana, S.P.; Prieto, J.; Dey, M.; Dudley, S.; Corchado, J.M. A Self Regulating and Crowdsourced Indoor
[Positioning System through Wi-Fi Fingerprinting for Multi Storey Building. Sensors 2018, 18. [CrossRef]](http://dx.doi.org/10.3390/s18113766)
51. Chang, Q.; van de Velde, S.; Wang, W.; Li, Q.; Hou, H.; Heidi, S. Wi-Fi Fingerprint Positioning Updated by
Pedestrian Dead Reckoning for Mobile Phone Indoor Localization. In China Satellite Navigation Conference
_(CSNC) 2015 Proceedings: Volume III; Lecture Notes in Electrical Engineering; Sun, J., Liu, J., Fan, S., Lu, X.,_
[Eds.; Springer: Berlin/Heidelberg, Germany, 2015; Volume 342, pp. 729–739. [CrossRef]](http://dx.doi.org/10.1007/978-3-662-46632-2_63)
52. Zou, H.; Chen, Z.; Jiang, H.; Xie, L.; Spanos, C. Accurate indoor localization and tracking using mobile
phone inertial sensors, WiFi and iBeacon. In Proceedings of the 2017 IEEE International Symposium on
[Inertial Sensors and Systems (INERTIAL), Kauai, HI, USA, 27–30 March 2017; pp. 1–4.[CrossRef]](http://dx.doi.org/10.1109/ISISS.2017.7935650)
53. Jin, F.; Liu, K.; Zhang, H.; Feng, L.; Chen, C.; Wu, W. Towards Scalable Indoor Localization with Particle Filter
and Wi-Fi Fingerprint. In Proceedings of the 2018 15th Annual IEEE International Conference on Sensing,
[Communication, and Networking (SECON), Hong Kong, China, 11–13 June 2018; pp. 1–2.[CrossRef]](http://dx.doi.org/10.1109/SAHCN.2018.8397155)
54. Wang, X.; Yu, Z.; Mao, S. DeepML: Deep LSTM for Indoor Localization with Smartphone Magnetic and
Light Sensors. In Proceedings of the 2018 IEEE International Conference on Communications (ICC), Kansas
[City, MO, USA, 20–24 May 2018; pp. 1–6.[CrossRef]](http://dx.doi.org/10.1109/ICC.2018.8422562)
55. Zhang, W.; Sengupta, R.; Fodero, J.; Li, X. DeepPositioning: Intelligent Fusion of Pervasive Magnetic Field
and WiFi Fingerprinting for Smartphone Indoor Localization via Deep Learning. In Proceedings of the
2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), Cancun, Mexico,
[18–21 December 2017; pp. 7–13.[CrossRef]](http://dx.doi.org/10.1109/ICMLA.2017.0-185)
56. Kanaris, L.; Kokkinis, A.; Liotta, A.; Stavrou, S. Fusing Bluetooth Beacon Data with Wi-Fi Radiomaps for
[Improved Indoor Localization. Sensors 2017, 17, 812. [CrossRef]](http://dx.doi.org/10.3390/s17040812)
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
},
{
"paperId": "5b2c81e10064fc1cb148532e42d7b8db6096f186",
"title": "Zee: zero-effort crowdsourcing for indoor localization"
},
{
"paperId": "cc7a8d51584e7ceb68962daca77837a309692876",
"title": "A Review of Indoor Localization Technologies: towards Navigational Assistance for Topographical Disorientation"
},
{
"paperId": "e8a706b253be9692dacb4fc39c4b30342cd81c06",
"title": "RMapCS: Radio Map Construction From Crowdsourced Samples for Indoor Localization"
},
{
"paperId": "9f584dbebbef64eeebd9d6542abaf1c23ce84b02",
"title": "Overview of indoor positioning system technologies"
},
{
"paperId": "4b57637b58a125acf964397b83e57ff622835163",
"title": "Automated Construction and Maintenance of Wi-Fi Radio Maps for Crowdsourcing-Based Indoor Positioning Systems"
},
{
"paperId": "4a35dc81e88d320194482313af7c30431a21e600",
"title": "Dealing with Insufficient Location Fingerprints in Wi-Fi Based Indoor Location Fingerprinting"
},
{
"paperId": "6f6358bb586a9a253d12c59aeaf1277da7cabc01",
"title": "Summary of available indoor location techniques"
},
{
"paperId": "8e3c872076750bcce868808f9d4d7a038f950040",
"title": "Pattern Recognition And Machine Learning"
},
{
"paperId": "2913c2bf3f92b5ae369400a42b2d27cc5bc05ecb",
"title": "Deep Learning"
},
{
"paperId": "c48200c7c0a2239cb5839da44a2a8dc0230a4bb1",
"title": "Wi-Fi Fingerprint Positioning Updated by Pedestrian Dead Reckoning for Mobile Phone Indoor Localization"
},
{
"paperId": null,
"title": "Data Mining: Concepts and Techniques, online-ausg ed.; Morgan Kaufmann Series in Data Management Systems"
}
] | 26,308
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/00ed8311df8cd0e843ca8912bc76e6d365443859
|
[
"Computer Science"
] | 0.853701
|
A comparison of TCP automatic tuning techniques for distributed computing
|
00ed8311df8cd0e843ca8912bc76e6d365443859
|
Proceedings 11th IEEE International Symposium on High Performance Distributed Computing
|
[
{
"authorId": "2242631",
"name": "E. Weigle"
},
{
"authorId": "145476815",
"name": "Wu-chun Feng"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": null,
"id": null,
"issn": null,
"name": null,
"type": null,
"url": null
}
| null |
Approved for public release;
distribution is unlimited.
_Title:_ A Comparison of TCP Automatic Tuning Techniques for Distributed Computing
_Author(s):_ Eric Weigle and Wu-Chun Feng
_Submitted to:_ 11th IEEE International Symposium on High-Performance Distributed Computing

Los Alamos National Laboratory

Los Alamos National Laboratory, an affirmative action/equal opportunity employer, is operated by the University of California for the U.S. Department of Energy under contract W-7405-ENG-36. By acceptance of this article, the publisher recognizes that the U.S. Government retains a nonexclusive, royalty-free license to publish or reproduce the published form of this contribution, or to allow others to do so, for U.S. Government purposes. Los Alamos National Laboratory requests that the publisher identify this article as work performed under the auspices of the U.S. Department of Energy. Los Alamos National Laboratory strongly supports academic freedom and a researcher's right to publish; as an institution, however, the Laboratory does not endorse the viewpoint of a publication or guarantee its technical correctness.
-----
###### A Comparison of TCP Automatic-Tuning Techniques for Distributed Computing
Eric Weigle and Wu-chun Feng
Research and Development in Advanced Network Technology, Computer and Computational Sciences Division, Los Alamos National Laboratory, Los Alamos, NM 87545
{ehw, feng}@lanl.gov

_Abstract_

_Rather than painful, manual, static, per-connection optimization of TCP buffer sizes simply to achieve acceptable performance for distributed applications [8, 10], many researchers have proposed techniques to perform this tuning automatically [4, 7, 9, 11, 12, 14]. This paper first discusses the relative merits of the various approaches in theory, and then provides substantial experimental data concerning two competing implementations: the buffer autotuning already present in Linux 2.4.x and "Dynamic Right-Sizing." This paper reveals heretofore unknown aspects of the problem and current solutions, provides insight into the proper approach for various circumstances, and points toward ways to further improve performance._

**Keywords:** dynamic right-sizing, autotuning, high-performance networking, TCP, flow control, wide-area network.

###### 1. Introduction

TCP, for good or ill, is the only protocol widely available for reliable end-to-end congestion-controlled network communication, and thus it is the one used for almost all distributed computing.

Unfortunately, TCP was not designed with high-performance computing in mind; its original design decisions focused on long-term fairness first, with performance a distant second. Thus users must often perform tortuous manual optimizations simply to achieve acceptable behavior. The most important and often most difficult task is determining and setting appropriate buffer sizes. Because of this, at least six ways of automatically setting these sizes have been proposed.

In this paper, we compare and contrast these tuning methods. First we explain each method, followed by an in-depth discussion of their features. Next we discuss the experiments to fully characterize two particularly interesting methods (Linux 2.4 autotuning and Dynamic Right-Sizing). We conclude with results and possible improvements.

###### 1.1. TCP Tuning and Distributed Computing

Computational grids such as the Information Power Grid [5], Particle Physics Data Grid [1], and Earth System Grid [3] all depend on TCP. This implies several things.

First, bandwidth is often the bottleneck. Performance for distributed codes is crippled by using TCP over a WAN. An appropriately selected buffer tuning technique is one solution to this problem.

Second, bandwidth and time are money. An OC-3 at 155Mbps can cost upwards of $50,000 a month and higher speeds cost even more. If an application can only utilize a few megabits per second, that money is being wasted. Time spent by people waiting for data, time spent hand-tuning network parameters, time with under-utilized compute resources: also wasted money. Automatically tuned TCP buffers more effectively utilize network resources and save that money, but an application designer must still choose from the many approaches.

Third, tuning is a pain. Ideally, network and protocol designers produce work so complete that those doing distributed or grid computing are not unduly pestered with the "grungy" details. In the real world, application developers must still make decisions in order to attain peak performance. The results in this paper show the importance of paying attention to the network and show one way to achieve maximal performance with minimal effort.
-----
###### 2. Buffer Tuning Techniques

TCP buffer-tuning techniques balance memory demand with the reality of limited resources: maximal TCP buffer space is useless if applications have no memory. Each technique discussed below uses different information and makes different trade-offs. All techniques are most useful for large data transfers (at least several times the bandwidth x delay product of the network). Short, small transmissions are dominated by latency, and window size is practically irrelevant.

###### 2.1. Current Tuning Techniques

1. Manual tuning [8, 10]
2. PSC's Automatic TCP Buffer Tuning [9]
3. Dynamic Right-Sizing (DRS) [4, 14]
4. Linux 2.4 Auto-tuning [12]
5. Enable tuning [11]
6. NLANR's Auto-tuned FTP (in ncFTP) [7]
7. LANL's DRS FTP (in WU-FTP)

Manual tuning is the baseline by which we measure autotuning methods. To perform manual tuning, a human uses tools such as ping and pathchar or pipechar to determine network latency and bandwidth. The results are multiplied to get the bandwidth x delay product, and buffers are generally set to twice that value.

PSC's tuning is a mostly sender-based approach. Here the sender uses TCP packet header information and timestamps to estimate the bandwidth x delay product of the network, which it uses to resize its send window. The receiver simply advertises the maximal possible window. PSC's paper [9] presents results for a NetBSD 1.2 implementation, showing improvement over stock by factors of 10-20 for small numbers of connections.

DRS is a mostly receiver-based buffer tuning approach where the receiver tries to estimate the bandwidth x delay product of the network and the congestion-control state of the sender, again using TCP packet header information and timestamps. The receiver then advertises a window large enough that the sender is not flow-window limited.

Linux autotuning refers to a memory management technique used in the stable Linux kernel, version 2.4. This technique does not attempt any estimates of the bandwidth x delay product of a connection. Instead, it simply increases and decreases buffer sizes depending on available system memory and available socket buffer space. By increasing buffer sizes when they are full of data, TCP connections can increase their window size; performance improvements are an intentional side-effect.

Enable uses a daemon to perform the same tasks as a human performing manual tuning. It gathers information about every pair of hosts between which connections are to be tuned and saves it in a database. Hosts then look up this information when opening a connection and use it to set their buffer sizes. Enable [11] reports performance improvements over untuned connections by a factor of 10-20 and above 2.4 autotuning by a factor of 2-3.

Auto-ncFTP also mimics the same sequence of events as a human manually tuning a connection. Here, it is performed once just before starting a data connection in FTP so the client can set buffer sizes appropriately.

DRS FTP uses a new command added to the FTP control language to gain network information, which is used to tune buffers during the life of a connection. Tests of this method show performance improvements over stock FTP by a factor of 6 with 100ms delay, with optimally tuned buffers giving an improvement by a factor of 8.

###### 2.2. Comparison of Tuning Techniques

|Tuning|Level|Changes|Band|Visibility|
|Manual|Both|Static|Out|Visible|
|PSC|Kernel|Dynamic|In|Transparent|
|Linux 2.4|Kernel|Dynamic|In|Transparent|
|DRS|Kernel|Dynamic|In|Transparent|
|Enable|User|Static|Out|Visible|
|NLANR FTP|User|Static|Out|Opaque|
|DRS FTP|User|Dynamic|Both|Opaque|

**Table 1. Comparison of Tuning Techniques**

**User-level versus Kernel-level** refers to whether the buffer tuning is accomplished as an application-level solution or as a change to the kernel (Linux, *BSD, etc.). Manual tuning tediously requires both types of changes. An 'ideal' solution would require only one type of change: kernel-level for situations where many TCP-based programs require high performance, user-level where only a single TCP-based program (such as FTP) requires high performance.

Kernel-level implementations will always be more efficient, as more network and high-resolution timing information is available, but they are complicated and non-portable. Whether this is worth the 20-100% performance improvement is open to debate.

**Static versus Dynamic** refers to whether the buffer tuning is set to a constant at the start of a connection, or if it can change with network "weather" during the lifetime of a connection.
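For reference, the static rule of thumb from the manual-tuning description above is easy to make concrete. The following minimal C sketch (ours, not the paper's; the rate and RTT values are placeholders) turns a measured round-trip time and link rate into a suggested buffer size:

```
/* Illustrative sketch of the manual-tuning rule of thumb: compute the
 * bandwidth*delay product from a measured RTT and link rate, then
 * double it for the socket buffers. Values below are examples only. */
#include <stdio.h>

int main(void)
{
    double bandwidth_bps = 1e9;   /* e.g., Gigabit Ethernet           */
    double rtt_sec       = 0.050; /* e.g., 50 ms measured with ping   */

    /* One bandwidth*delay product, in bytes. */
    double bdp_bytes = bandwidth_bps * rtt_sec / 8.0;

    /* Buffers are generally set to twice the BDP. */
    printf("BDP = %.0f bytes, suggested buffer = %.0f bytes\n",
           bdp_bytes, 2.0 * bdp_bytes);
    return 0;
}
```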
-----
Generally a dynamic solution is preferable: it adapts itself to changes in network state, which some work has shown to have multi-fractal congestion characteristics [6, 13]. Static buffer sizes are always too large or small given "live" networks. Yet, static connections often have smoother application-level performance than dynamic connections, which is desirable.

Unfortunately, both static and dynamic solutions have problems. Dynamic changes in buffer sizes imply changes in the advertised window, which if improperly implemented can break TCP semantics (data legally sent for a given window is in-flight when the window is reduced, thus causing the data to be dropped at the end host). Current dynamic tuning methods monotonically increase window sizes to avoid this, possibly wasting memory.

**In-Band versus Out-of-Band** refers to whether bandwidth x delay information is obtained from the connection itself or is gathered separately from the data transmission to be tuned. An ideal solution would be in-band to minimize user inconvenience and ensure the correct time-dependent and path-dependent information is being gathered.

DRS FTP is 'both' because data is gathered over the control channel; usually this channel uses the same path as the data channel, but in some 'third-party' cases the two channels are between different hosts entirely. In the first case data collection is 'in-band', while in the second not only is it out of band, it measures characteristics of the wrong connection! Auto-ncFTP suffers from the same 'third-party' problem.

**Transparent versus Visible** refers to user inconvenience: how easily can a user tell if they are using a tuning method, how many changes are required, etc. An ideal solution would be transparent after the initial install and configuration required by all techniques.

The kernel approaches are transparent; other than improved performance they are essentially invisible to average users. The FTP programs are 'opaque' because they can generate detectable out-of-band data, and require some start-up time to effectively tune buffer sizes. Enable is completely visible. It requires a daemon and database separate from any network program to be tuned, generates frequent detectable network benchmarking traffic, and requires changes to each network program that wishes to utilize its functionality.

###### 3. Experiments

These experiments shift our focus to the methods of direct interest: manual tuning, Linux 2.4 autotuning, and Dynamic Right-Sizing under Linux. The remaining approaches are not discussed further because such analysis is available in the referenced papers.

###### 3.1. Varied Experimental Parameters

Our experiments consider the following parameters:

**Tuning** (None, 2.4-auto, DRS): We compare a Linux 2.2.20 kernel which has no autotuning, a 2.4.17 kernel which has Linux autotuning, and a 2.4.17 kernel which also has Dynamic Right-Sizing. We will refer to these three as 2.2.20-None, 2.4.17-Auto, and 2.4.17-DRS.

**Buffer Sizes** (32KB to 32MB): Initial buffer size configuration is required even for autotuning implementations. There are three cases:

- No user or kernel tuning; buffer sizes at defaults. Gives baseline for comparison with tuned results.
- Kernel-only tuning; configure maximal buffer sizes only. Gives results for kernel autotuning implementations.
- User and kernel tuning; use setsockopt() to configure buffer sizes manually (see the sketch at the end of this section). Gives results for manually tuned connections.

**Network Delay** (~0.5ms, 25ms, 50ms, 100ms): We vary the delay from 0.5ms to 100ms to show the performance differences between LAN and WAN environments. We use TICKET [15] to perform WAN emulation. This emulator can route at line rate (up to 1Gbps in our case) introducing a delay between 200 microseconds and 200 milliseconds.

**Parallel Streams** (1, 2, 4, 8): We use up to 8 parallel streams to test the effectiveness of this commonly-used technique with autotuning techniques. This also shows how well a given tuning technique scales with increasing numbers of flows. When measuring performance, we time from the start of the first process to the finish of the last process.

###### 3.2. Constant Experimental Parameters

**Topology:** Figure 1 shows the generic topology we use in our tests. We have some number of network source (S) processes sending data to another set of destination (D) processes through a pair of bottleneck routers (R) connected via some WAN cloud. The "WAN cloud" may be a direct long-haul connection or through some arbitrarily complex network. (In the simplest case, both routers and the "WAN cloud" could be a single very high-bandwidth LAN switch.)
-----
**Figure 1. Generic Topology** [diagram: S processes, bottleneck routers R, a WAN cloud, and D processes]

Our experiments place all processes (parallel streams) on a single host. The results of more complicated one-to-many or many-to-one experiments (common in scatter-gather computation, or for web servers) can be inferred by observing memory and CPU utilization on the hosts. This information shows the scalability of the sender and receiver tuning and whether one end's behavior characterizes the performance of the connection. This distinction is critical for one-to-many relationships, as the "one" machine must split its resources among many flows while each of the "many" machines can dedicate more resources to the one flow.

**Unidirectional Transfers:** Although TCP is inherently a full duplex protocol, the majority of traffic generally flows in one direction. TCP protocol dynamics do not significantly differ between one flow with bidirectional traffic and two unidirectional flows sending in opposite directions [16].

**Loss:** Our WAN emulator is configured to emulate no loss (although loss may still occur due to sender/receiver buffer over-runs). All experiments are intended to be the best-case scenario. The artificial inclusion of loss adds nothing to the discussion, as congestion control for TCP Reno/SACK under Linux is a constant for all experiments.

**Data Transfer:** Rather than using some of the available benchmarking programs we chose to write a simple TCP-based program to mimic message-passing traffic. This program tries to send large (1MB) messages between hosts as fast as possible. A total of 128 messages are sent, a number chosen because:

- 128MB transfers are large enough to allow the congestion window to fully open.
- 128MB transfers are small enough to occur commonly in practice. ("In the long run we are all dead." - John Maynard Keynes)
- Longer transfers do not help differentiate among tuning techniques (tested, but results omitted).
- It is evenly divisible among all numbers of parallel streams.

**Hardware:** Tests are run between two machines with dual 933MHz Pentium III processors, an Alteon Tigon II Gigabit Ethernet card on a 64-bit 66-MHz PCI bus, and 512MB of memory.

###### 4. Results and Analysis

We present data in order of increasing delay. With constant bandwidth (Gigabit Ethernet), this will show how well each approach scales as pipes get "fatter."

###### 4.1. First Case, ~0.5ms Delay

With delays on the order of half a millisecond, we expect that even very high bandwidth links can be saturated with small windows; the default 64KB buffers should be sufficient.

Figure 2 shows the performance using neither user nor kernel tuning. With a completely default configuration, the Linux 2.4 stack with autotuning outperforms the Linux 2.2 stack without autotuning by 100Mbps or more (as well as showing more stable behavior). Similarly, 2.4.17-DRS outperforms 2.4.17-Auto by a smaller margin of 30-50Mbps. This is due to more appropriate use of TCP's advertised window field and faster growth to the best buffer size possible.

Unexpectedly for such a low-delay case, all kernels benefit from the use of parallel streams, with improvements in performance from 55-70%. When a single data flow is striped among multiple TCP streams, it effectively obtains a super-exponential slow-start phase and additive increase by N (the number of TCP streams). In this case, that behavior improves performance.

Note that limitations in the firmware of our Gigabit Ethernet NICs limit performance to 800Mbps or below, so we simply consider 800Mbps ideal. (Custom firmware solutions can improve throughput, but such results are neither portable nor relevant to this study.)
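For illustration, a stripped-down version of the data-transfer loop described in Section 3.2 might look like the following in C; fd is assumed to be an already-connected socket, and error handling is trimmed:

```
/* Sketch of the message-passing-style benchmark: 128 one-megabyte
 * messages pushed through a connected TCP socket as fast as possible. */
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

#define MSG_SIZE  (1024 * 1024)  /* 1MB per message */
#define MSG_COUNT 128            /* 128MB in total  */

void send_messages(int fd)
{
    static char msg[MSG_SIZE];
    memset(msg, 'x', sizeof(msg));

    for (int i = 0; i < MSG_COUNT; i++) {
        size_t sent = 0;
        while (sent < MSG_SIZE) {            /* write() may be partial */
            ssize_t n = write(fd, msg + sent, MSG_SIZE - sent);
            if (n <= 0)
                return;                      /* error handling trimmed */
            sent += (size_t)n;
        }
    }
}
```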
-----
**Figure 2. No Tuning, 0.5ms** [graph: bandwidth vs. number of parallel processes for 2.2.20-None, 2.4.17-Auto, and 2.4.17-DRS]

**Figure 3. Kernel-Only Tuning, 0.5ms**

Figure 3 shows the performance with kernel tuning only; that is, increasing the maximum amount of memory that the kernel can allocate to a connection.

As expected, results for 2.2.20-None (which does no autotuning) mirror the results from the prior test.

2.4.17-Auto connections perform 30-50Mbps better than in the untuned case, showing that the default 64KB buffers were insufficient.

2.4.17-DRS connections also perform better with one or two processes, but as the number of processes increases, DRS actually performs worse! DRS is more aggressive in allocating buffer space; with such low delay, it overallocates memory, and performance suffers (see Figure 5's discussion below).

Furthermore, performance is measured at the termination of the entire transfer (when the final parallel process completes). Large numbers of parallel streams can lead to the starvation of one or more processes due to TCP congestion control, so the parallelized transfer suffers. Yet this can be a good thing: parallel flows can induce chaotic network behavior and be unfair in some cases; by penalizing users of heavily parallel flows, DRS could induce more network fairness while still providing good performance.

**Figure 4. User/Kernel Tuning with Ideal Sizes, 0.5ms**

Figure 4 shows the results for hand-tuned connections. DRS obeys the user when buffers are set by setsockopt(), so 2.4.17-Auto and 2.4.17-DRS use the same buffer sizes and perform essentially the same. The performance difference between 2.2.20 and 2.4.17 is due to stack improvements in Linux 2.4.

The "ideal" buffer sizes in the prior graph (Figure 4) are larger than one might expect; Figure 5 shows the performance of 2.4.17-Auto with buffer sizes per process between 8KB and 64MB. We achieve peak performance with sizes on the order of 1MB, much larger than the calculated ideal of 64KB, the bandwidth x delay of the network. The difference is due to the interaction and feedback between several factors, the most important of which are TCP congestion control and process scheduling.
-----
To keep the pipe full, one must buffer enough data to avoid transmission "bubbles." However, with multi-fractal burstiness caused by TCP congestion control [6, 13], occasionally the network is so overloaded that very large buffers are needed to accommodate it. Also, these buffers themselves can increase the effective delay (and therefore increase the buffering required) in a feedback loop only terminated by a lull later in the traffic stream. This buffering can occur either in the network routers or in the end hosts.

Because of process scheduling, it is incorrect to divide the predicted "ideal" buffer size (the bandwidth x delay) by the number of processes to determine the buffer size per process when using parallel streams. Because only one process can run on a given CPU at a given time, the kernel must buffer packets for the remaining processes until they can get a time slice. Thus, as the number of processes grows, the effective delay experienced by those processes increases, and the amount of required buffering also grows. Beyond a certain point, this feedback is great enough that the addition of additional parallelism is actually detrimental. This is what we see with DRS in Figure 5.

**Figure 5. Effect of Buffer Size on Performance, 0.5ms**

###### 4.2. Second Case, ~25ms Delay

This case increases delay to values more in line with a network of moderate size, giving a bandwidth x delay product of over 3MB. In this case, the default configuration is insufficient for high performance, giving less than 20Mbps for a single process with all kernels (Figure 6). As the number of processes increases, our effective flow window increases, and we achieve a linear speed-up. In this case, simple autotuning outperforms DRS, as the memory-management technique is more effective with small windows (it was designed for heavily loaded web servers).

**Figure 6. No Tuning, 25ms**

Figure 7 shows results with Kernel-Only Tuning. The performance of DRS improves dramatically while the performance of simple autotuning and untuned connections is constant. As we increase the number of processes we again see the performance of DRS fall.

This graph actually reveals a bug in the Linux 2.4 kernel series that our DRS patch fixes; the window scaling advertised in SYN packets is based on the initial (default) buffer size, not the maximal buffer size up to which Linux can tune. Thus with untuned default buffers, no window scaling is advertised, so even if the kernel is allowed to allocate multi-megabyte buffers, the size of those buffers cannot be represented in TCP packet headers.

**Figure 7. Kernel-Only Tuning, 25ms**

With both user and kernel tuning, maximal performance increases for all kernels. However, as shown in Figure 8, performance does fall for DRS in the two and four process case; here we see that second-guessing the kernel can cause problems, and larger buffer sizes are not always a performance improvement.
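The "kernel-only tuning" runs above correspond to raising the kernel's allowed buffer ranges. On Linux 2.4 these ranges are exposed in /proc as "min default max" triples; the following hedged sketch uses example values only and requires root:

```
/* Sketch of setting Linux 2.4 autotuning ranges via /proc.
 * The particular byte values are illustrative, not recommendations. */
#include <stdio.h>

static void set_range(const char *path, const char *triple)
{
    FILE *f = fopen(path, "w");
    if (!f) { perror(path); return; }
    fprintf(f, "%s\n", triple);   /* "min default max" in bytes */
    fclose(f);
}

int main(void)
{
    /* min 4KB, default 64KB, max 4MB -- example values */
    set_range("/proc/sys/net/ipv4/tcp_rmem", "4096 65536 4194304");
    set_range("/proc/sys/net/ipv4/tcp_wmem", "4096 65536 4194304");
    return 0;
}
```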
-----
**Figure 8. User/Kernel Tuning with Ideal Sizes, 25ms**

###### 4.3. Third and Fourth Cases, 50-100ms Delay

The patterns observed in results for the 50ms and 100ms cases do not significantly differ (other than adjustments in scale) from those in the 25ms case; the factors dominating behavior are the same. That is, at low delays (below 20ms), one can find very interesting behavior as TCP interacts with the operating system, NIC, and so forth. At higher delays (25ms and above), the time scales are large enough that TCP slow-start, additive increase, and multiplicative decrease behaviors are most important; interactions with the operating system and so forth become insignificant.

In fact, the completely untuned cases differ so little that the following three equations (generated experimentally) suffice to calculate the bandwidth in Mbps with error uniformly below 20%, given only the number of processes and the delay in milliseconds:

- 2.2.20-None: (processes x 214) / delay
- 2.4.17-Auto: (processes x 467) / delay
- 2.4.17-DRS: (processes x 355) / delay

As in Figure 7, the kernel-only tuning case in Figure 9 shows 2.4.17-DRS significantly outperforming 2.4.17-Auto (by a factor of 5 to 15). DRS at 50ms delay with 8 processes achieves 310Mbps (graph omitted); the 100ms results with 8 processes are shown in Figure 9.

**Figure 9. Kernel-Only Tuning, 100ms**

Similar to Figure 8, the hand-tuned case in Figure 10 shows 2.4.17-DRS and 2.4.17-Auto performing identically with 2.2.20-None performing slightly worse. Interestingly, at this high delay, the performance difference between DRS and autotuning is insignificant; the factors dominating performance are not buffer sizes but rather standard TCP slow-start, additive increase, and multiplicative decrease behaviors, and the 128MB transfer size is insufficient to differentiate the flows. With latencies this high, very large (multi-gigabyte, minimum) transfer sizes would be required to more fully utilize the network. It would also help to use a modified version of TCP such as Vegas [2] or one of the plethora of other versions, because a multiplicative decrease can take a ridiculous amount of time to recover on high-delay links.

**Figure 10. User/Kernel Tuning with Ideal Sizes, 100ms**
-----
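The empirical throughput equations from Section 4.3 translate directly into code; a trivial C sketch (valid, per the results above, only for untuned connections at 25ms delay and above):

```
/* Experimentally derived throughput approximations, in Mbps.
 * 'delay_ms' is the one-way delay in milliseconds. */
double untuned_mbps_2_2_20(int processes, double delay_ms)
{ return processes * 214.0 / delay_ms; }

double untuned_mbps_2_4_17_auto(int processes, double delay_ms)
{ return processes * 467.0 / delay_ms; }

double untuned_mbps_2_4_17_drs(int processes, double delay_ms)
{ return processes * 355.0 / delay_ms; }
```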
###### 5. Guidelines on Selecting an Auto-Tuned TCP

This section gives a few practical guidelines for a prospective user of an automatically tuned TCP.

1. You have kernel-modification privileges to the machine. So, you may use a kernel-level solution, which will generally provide the best performance. Currently, only NetBSD and Linux implementations exist, so for other operating systems, you must either wait or use a user-level solution.
   - If you want to use NetBSD, you must use PSC's tuning.
   - Linux 2.4 autotuning is appropriate for large numbers of small connections, such as web/media servers, or machines where users are willing to tune parallel streams.
   - DRS is appropriate for smaller numbers of large connections, such as FTP or bulk data transfers, or machines where users are not willing to tune parallel streams.

2. You do not have kernel-modification privileges to the machine or are unwilling to make changes, forcing a user-level solution. All user-level solutions perform comparably, so the choice between them is based on features.
   - If all you need is FTP, LANL's DRS FTP or NLANR's Auto-tuned FTP will be the easiest plug-in solutions. Obviously, we are biased in favor of LANL's implementation, which dynamically adjusts the window, over NLANR's implementation, which does not.
   - If you require multiple applications, then the Enable [11] service may fit your needs. This will, however, require source-code level changes to each program you wish to use.

In all cases, initial tuning should be performed to:

- Ensure TCP window scaling, timestamps, and SACK options are enabled.
- Set the maximum memory available to allocate per connection or for user-level tuning.
- Set ranges for Linux 2.4 autotuning.
- (Optional) Flush caches in between runs so inappropriately set slow-start thresholds are not re-used.

###### 6. Conclusion

We have presented a detailed discussion on the various techniques for automatic TCP buffer tuning, showing the benefits and problems with each approach. We have presented experimental evidence showing the superiority of Dynamic Right-Sizing over simple autotuning as found in Linux 2.4. We have also uncovered several unexpected aspects of the problem (such as the calculated "ideal" buffers performing more poorly than somewhat larger buffers). Finally, the discussion has provided insight into which solutions are appropriate for which circumstances, and why.

###### References

[1] ANL, CalTech, LBL, SLAC, JF, U. Wisconsin, BNL, FNL, and SDSC. The Particle Physics Data Grid. http://www.cacr.caltech.edu/ppdg/.
[2] L. Brakmo and L. Peterson. TCP Vegas: End to End Congestion Avoidance on a Global Internet. IEEE Journal on Selected Areas in Communication, 13(8):1465-1480, October 1995.
[3] W. Feng, I. Foster, S. Hammond, B. Hibbard, C. Kesselman, A. Shoshani, B. Tierney, and D. Williams. Prototyping an Earth System Grid. http://www.scd.ucar.edu/css/esg/.
[4] M. Fisk and W. Feng. Dynamic Adjustment of TCP Window Sizes. Technical Report LA-UR 00-3221, Los Alamos National Laboratory, July 2000. See http://www.lanl.gov/radiant/website/pubs/hptcp/tcpwindow.pdf.
[5] W. E. Johnston, D. Gannon, and B. Nitzberg. Grids as Production Computing Environments: The Engineering Aspects of NASA's Information Power Grid. In Proceedings of 8th IEEE International Symposium on High-Performance Distributed Computing, August 1999.
[6] W. Leland, M. Taqqu, W. Willinger, and D. Wilson. On the Self-similar Nature of Ethernet Traffic (Extended Version). IEEE/ACM Transactions on Networking, 2(1):1-15, February 1994.
[7] G. Navlakha and J. Ferguson. Automatic TCP Window Tuning and Applications. http://dast.nlanr.net/Projects/Autobuf/autotcp.html, April 2001.
[8] Pittsburgh Supercomputing Center. Enabling High-Performance Data Transfers on Hosts. http://www.psc.edu/networking/perf-tune.html.
[9] J. Semke, J. Mahdavi, and M. Mathis. Automatic TCP Buffer Tuning. ACM SIGCOMM 1998, 28(4), October 1998.
[10] B. Tierney. TCP Tuning Guide for Distributed Applications on Wide-Area Networks. In USENIX & SAGE Login, http://www-didc.lbl.gov/tcp-wan.html, February 2001.
[11] B. L. Tierney, D. Gunter, J. Lee, and M. Stoufer. Enabling Network-Aware Applications. In Proceedings of IEEE International Symposium on High Performance Distributed Computing, August 2001.
[12] L. Torvalds and The Free Software Community. The Linux Kernel, September 1991. http://www.kernel.org/.
[13] A. Veres and M. Boda. The Chaotic Nature of TCP Congestion Control. In Proceedings of IEEE Infocom 2000, March 2000.
[14] E. Weigle and W. Feng. Dynamic Right-Sizing: A Simulation Study. In Proceedings of IEEE International Conference on Computer Communications and Networks, 2001. http://public.lanl.gov/ehw/papers/ICCCN-2001-DRS.ps.
[15] E. Weigle and W. Feng. TICKETing High-Speed Traffic with Commodity Hardware and Software. In Proceedings of the Third Annual Passive and Active Measurement Workshop (PAM2002), March 2002.
[16] L. Zhang, S. Shenker, and D. D. Clark. Observations on the Dynamics of a Congestion Control Algorithm: The Effects of Two-way Traffic. In Proceedings of ACM SigComm 1991, September 1991.
-----
|
{
"disclaimer": "Notice: The following paper fields have been elided by the publisher: {'abstract', 'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1109/HPDC.2002.1029926?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1109/HPDC.2002.1029926, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "other-oa",
"status": "GREEN",
"url": "https://digital.library.unt.edu/ark:/67531/metadc927872/m2/1/high_res_d/976162.pdf"
}
| 2,002
|
[
"JournalArticle"
] | true
| 2002-07-24T00:00:00
|
[] | 8,926
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/00ee11fd044876642ee1440432a33d7faa6d3292
|
[
"Computer Science"
] | 0.862692
|
SCI networking for shared-memory computing in UPC: blueprints of the GASNet SCI conduit
|
00ee11fd044876642ee1440432a33d7faa6d3292
|
29th Annual IEEE International Conference on Local Computer Networks
|
[
{
"authorId": "3094819",
"name": "H. Su"
},
{
"authorId": "36900480",
"name": "B. Gordon"
},
{
"authorId": "1770398",
"name": "S. Oral"
},
{
"authorId": "48081786",
"name": "A. George"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": null,
"id": null,
"issn": null,
"name": null,
"type": null,
"url": null
}
| null |
# SCI Networking for Shared-Memory Computing in UPC: Blueprints of the GASNet SCI Conduit
## H. Su, B. Gordon, S. Oral, A. George {su, gordon, oral, george}@hcs.ufl.edu
_High-performance Computing and Simulation (HCS) Research Lab, Dept. of Electrical and Computer Engineering,_
_University of Florida, Gainesville, Florida 32611-6200_
### Abstract

_Unified Parallel C (UPC) is a programming model for shared-memory parallel computing on shared- and distributed-memory systems. The Berkeley UPC software, which operates on top of their Global Addressing Space Networking (GASNet) communication system, is a portable, high-performance implementation of UPC for large-scale clusters. The Scalable Coherent Interface (SCI), a torus-based system-area network (SAN), is known for its ability to provide very low latency transfers as well as its direct support for both shared-memory and message-passing communications. High-speed clusters constructed around SCI promise to be a potent platform for large-scale UPC applications. This paper introduces the design of the Core API for the new SCI conduit for GASNet and UPC, which is based on Active Messages (AM). Latency and bandwidth data were collected and are compared with raw SCI results and with other existing GASNet conduits. The outcome shows that the new GASNet SCI conduit is able to provide promising performance in support of UPC applications._

### Keywords: Scalable Coherent Interface, Global Address Space Networking, Unified Parallel C, Active Messages.

### 1 Introduction

Many scientific as well as commercial endeavors rely on the ability to solve complex problems in a quick and efficient manner. One of the dominant solutions to this problem has been the advent of parallel computing. To supplement the architectural improvements in this area, parallel programming models have emerged to provide programmers alternate ways of solving complex and computationally intensive problems. Such models include message passing, shared memory, and distributed shared memory.

While message passing and shared memory are the two most popular ways to implement parallel programs, distributed shared memory is quickly gaining momentum. One of the reasons for this development is the growing acceptance of Unified Parallel C (UPC) [1-2] and other models like it. UPC is a parallel extension to the ANSI C standard that gives programmers the ability to create parallel programs that can target a variety of parallel architecture platforms while maintaining a familiar C-style structure. This approach allows a smaller learning curve for people with C experience to begin creating parallel programs and often results in tighter and more efficient code.

One recent development in UPC is the interest in providing a means for executing UPC over commercial-off-the-shelf (COTS) clusters. The Berkeley UPC runtime system [3], developed by U.C. Berkeley and LBNL, is a promising tool now available to support this endeavor. An underlying key to this system is the Global Addressing Space Networking (GASNet) communication system [4-5]. GASNet defines a standard application interface that can be implemented over a wide variety of standard and high-performance networks such as Ethernet, InfiniBand, Myrinet, and Quadrics.

In this study, we present the design of a new GASNet conduit operating over the Scalable Coherent Interface (SCI) network [6]. Benchmarks were executed on the newly developed conduit and compared against the raw performance of SCI, the GASNet Myrinet conduit, and the GASNet MPI conduit on SCI using Scali's ScaMPI [7] to evaluate various strengths and weaknesses.

The next section of this paper briefly describes the architecture of SCI and GASNet. In Section 3, we discuss related research. Section 4 describes the design overview of the GASNet/SCI conduit. Section 5 presents the performance results and analyses. Finally, Section 6 presents conclusions and directions for future research.

**2 Background**

In the following subsections we present an overview of the SCI high-performance network. Also included is a brief introduction to the GASNet communication system.

### 2.1 SCI

SCI is an ANSI/ISO/IEEE standard (1596-1992) that describes a packet-based protocol [8] for system-area networking. SCI was initially developed as an attempt to address the problems associated with buses for use with many processors. It has evolved to become a high-performance interconnect for SANs and embedded systems. SCI uses point-to-point links, maintaining low latency while achieving high data rates between nodes. It features a shared-memory mentality so that memory on each node can be addressable by every other node on the network. SCI uses 64 bits in its addressing. The
-----
most-significant 16 bits are used to specify the node in the
network, and the remaining 48 bits are used for addresses
within each node. With this scheme, the SCI environment
can support up to 64K nodes with 256TB of addressable
space.
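As an illustration of this 16/48-bit split (ours; actual SCI hardware encodings may differ in detail), a 64-bit SCI address can be packed and unpacked like so:

```
/* Illustrative packing of a 16-bit node id and a 48-bit node-local
 * offset into one 64-bit SCI-style address, per the split above. */
#include <stdint.h>
#include <stdio.h>

static uint64_t sci_pack(uint16_t node, uint64_t offset)
{
    return ((uint64_t)node << 48) | (offset & 0xFFFFFFFFFFFFULL);
}

int main(void)
{
    uint64_t addr = sci_pack(0x0042, 0x1000);
    printf("node=%u offset=0x%llx\n",
           (unsigned)(addr >> 48),
           (unsigned long long)(addr & 0xFFFFFFFFFFFFULL));
    return 0;
}
```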
SCI offers many advantages for the unique nature of
parallel computing demands. Perhaps the most
significant of these advantages is its low-latency
performance, typically (based on current commercial
products from Dolphin) on the order of single-digit
microseconds for remote-write operations and tens of
microseconds for remote-read operations. Based on the
latest technology, SCI offers a link data rate of 5.3 Gb/s
with topologies including 1D (ring), 2D, or 3D torus.
The Dolphin SISCI API [9] is a standard set of API
calls allowing users to access and control SCI hardware
behavior directly based on a shared-memory paradigm.
To enable inter-node communication, the receiver must set
aside a portion of its physical memory (global memory
region) for use by the SCI network. The sender then
imports the memory region into its virtual address space
and is thus able to read and write the receiver’s memory
region by way of either PIO (shared-memory operation)
or DMA (zero-copy operation) transfer modes. The SCI
hardware automatically converts accesses in SCI-mapped
virtual address space to network transfers.
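The export/import model just described is loosely analogous to POSIX shared memory mapped between processes. The following sketch is that analogy only, not SISCI code; the segment name is hypothetical and error handling is trimmed:

```
/* POSIX shared-memory analogy for SCI export/import: the "receiver"
 * creates (exports) a region under a known name; the "sender" maps
 * (imports) it and writes into it directly, as with SCI PIO. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define REGION "/sci_like_segment"   /* hypothetical segment name */
#define SIZE   4096

int main(void)
{
    /* Receiver side: create ("export") the region. */
    int fd = shm_open(REGION, O_CREAT | O_RDWR, 0600);
    ftruncate(fd, SIZE);

    /* Sender side: map ("import") the region and write into it. */
    char *p = mmap(NULL, SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    strcpy(p, "remote write");

    printf("%s\n", p);
    munmap(p, SIZE);
    close(fd);
    shm_unlink(REGION);
    return 0;
}
```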
### 2.2 GASNet
Global Addressing Space Networking (GASNet),
developed at UCB/LBNL, is a language-independent,
low-level communications layer that provides
network-independent, high-performance communication
primitives aimed at supporting parallel shared-memory
programming languages such as UPC and Titanium, a
parallel dialect of Java. The system is divided into two
layers, the GASNet Core API and the GASNet Extended
API (Figure 1). The Core API is a narrow interface
based on Active Messages (AM) [10] and the
network-specific Firehose memory registration algorithm
[11]. The Extended API is a network-independent
interface that provides medium- and high-level operations
on remote memory and collective operations.
The GASNet segment is the location where most of
the GASNet operations target. There are three ways the
segment can be configured, as _fast,_ _large, or_ _everything._
Under the fast configuration, the size of the segment may
be limited to provide faster transfers of GASNet
operations. The _large configuration makes a large_
portion of memory available to the segment. The size
may include all of the physical memory or more. The
_everything configuration makes the whole virtual address_
space on every node available for GASNet operations.
Currently, GASNet supports execution on UDP, MPI,
Myrinet, Quadrics, InfiniBand and IBM LAPI. GASNet
was first released on 1/29/2003 with the latest release as
of this writing being Version 1.3.
### 3 Related research
Since our GASNet Core API must provide for AM
over SCI, Ibel’s paper [12] is useful as it discusses several
possible ways to execute AM over SCI. However, his
simple remote-queue implementation poses several
limitations. First, with 1 buffer space for all AM replies,
each node is restricted to having only 1 outstanding AM
request to the whole network at any given time.
Furthermore, the need for the receiver to copy bulk data
(long AM payload) from the 4KB buffer to its appropriate
memory location, and the cost of message polling (O(N),
where _N_ denotes the system size), introduce additional
overhead that significantly impacts system performance.
With applications that exhibit frequent inter-node
communication, system performance is degraded to a
degree that the benefit of parallelization is no longer
observed.
Ibel briefly described a split remote-queue scheme
that uses circular queues of _N−k (where_ _k is a constant)_
messages (one queue for each node able to hold _k_
messages) to allow parallel sending and receiving of
messages. Unfortunately, this approach is not
deadlock-free and the overhead for copying bulk data and
message polling still remains.
Additional research that was instrumental to this
work consists of other existing GASNet conduits. The
design documents and source code available on the
GASNet website [4] were used as a guide in the design of
the Core API for the new SCI conduit.
### 4 Core API design
SCI hardware is designed such that remote writes are
~10 times faster than remote reads. This disparity is due
to the inability to streamline reads through the memory
PCI bridges. As a result, to obtain the best performance,
only remote writes are used in our conduit design, as in
Ibel’s approach. Additionally, due to driver limitations,
only the GASNet fast segment configuration is supported
by the SCI conduit. Future improvements will allow
support of the other configurations.
Figure 1 - GASNet layers overview [diagram: a distributed shared-memory parallel programming language sits on the GASNet Extended API, which sits on the GASNet Core API and direct network access; the upper interfaces are network-independent and the lower layers language-independent]

### 4.1 Basic communication regions

Instead of only 1 buffer space for both AM requests
-----
and replies as in Ibel’s split-queue scheme, we divide the
buffer (command region) into a request and a reply queue
of equal size making the system deadlock free. Each
request/reply buffer space is set to be the size of the
longest AM header plus the maximum size of a medium
payload. The request and reply are paired so that a node
with an outstanding request to another node is guaranteed
to have space to hold the reply for that particular request.
Each node has a message queue reserved for it on every
other node. This scheme allows each node to locally
manage outgoing messages and guarantee no conflicts
with other nodes (Figure 2).
Figure 2a - Conceptual diagram of the segment exportation mechanism [diagram: node X's physical address space exporting control segments (N total), command segments (N*N total), and payload segments (N total) into SCI space]

Figure 2b - Conceptual diagram of the segment importation mechanism [diagram: node X importing its peers' control segments (N total) and command segments (N*N total) from SCI space into its virtual address space]
Similar to Ibel’s approach, a message-ready flag is
used to indicate if a particular message exists in a queue
or not. However, rather than attaching the flag to the
end of the AM message, these flags are separately placed
in an array (control region) that is accessible by all other
nodes. This method provides better data locality when
checking for new messages, as all the message-ready
flags now reside in one contiguous memory region. In
addition, a single global message-exist flag is used to
indicate the existence of any new messages.
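A simplified data-structure sketch of this control region (our illustration, not the conduit source; NODES and QUEUE_LEN are assumed constants):

```
/* Sketch of the control region: one global message-exist flag plus a
 * contiguous array of per-buffer message-ready flags, giving good
 * data locality when checking for new messages. */
#include <stdbool.h>

#define NODES      16   /* assumed system size           */
#define QUEUE_LEN   8   /* request+reply slots per peer  */

struct control_region {
    volatile int  message_exist;             /* any new message at all? */
    volatile bool ready[NODES][QUEUE_LEN];   /* per-slot ready flags    */
};

/* Cheap check done on every poll: O(1) when nothing has arrived. */
static bool has_new_messages(const struct control_region *cr)
{
    return cr->message_exist != 0;
}
```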
Finally, the size of the long AM payload region is
significantly bigger and it corresponds to the range of
remotely accessible memory as specified by the GASNet
_fast segment configuration which the user defines, thus_
minimizing unnecessary data copying. Since the
importing of regions occupies local virtual address space
equal to the size of the segment, the large payload
segments (payload regions) are not imported at
initialization time so as to improve scalability.
Fortunately, DMA transfer mode allows communication
to take place without having to import the region into
virtual memory space, but with added overhead.
### 4.2 AM communication
The message sending and handling process is
illustrated in Figure 3. In order to send a message from
a sender node to a receiver node, the sender first prepares
the AM header, which contains information such as the
handler to be called, message type, payload size, etc.
Once prepared, the header is then written to the receiver’s
command region using a PIO transfer. For a medium
AM message, another remote PIO write operation is used
to transfer the medium payload to the same command
region. The same sequence of operations is used for
long AM transfers to handle the unaligned portion of the
long payload (see Section 5.2.2 for further explanation).
Otherwise, the data payload is sent directly to the payload
region via a DMA transfer.
Figure 3 - High-level flowchart for inter-node communication [flowchart: node X prepares the AM header and any medium/long payload, writes them to node Y's command or payload region, sets the message flags, and waits for completion; node Y's polling process checks the message-exist flag, scans for new messages, extracts the message information, processes all new messages, and issues an AM reply or ack]
Upon completion of these transfers, the sender writes
the two message flags to the receiver’s control segment.
The message-exist flag is used to tell the receiver that
there is at least one new message available and the
message-ready flag indicates that a particular message
buffer contains a message. When the receiver calls the
polling process, it checks the message-exist flag to see if
there are any new messages that need to be handled. If
there are, the receiver scans message-ready flags and
handles the appropriate newly arrived messages. Using
-----
this approach, the cost of an unsuccessful poll is O(1) and
O(N) for a successful poll, leading to amortized costs for
polling of only O(1).
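The polling cost argument can be made concrete with a short sketch reusing the control_region type from the 4.1 sketch above; handle_message() is a hypothetical stand-in for the AM handler dispatch:

```
/* Polling discipline sketch: an O(1) check of the message-exist flag
 * gates the O(N) scan of message-ready flags, so unsuccessful polls
 * stay cheap and the amortized cost remains O(1). */
void handle_message(int src, int slot);   /* hypothetical AM dispatch */

static void poll_messages(struct control_region *cr)
{
    if (!has_new_messages(cr))            /* O(1): nothing to do */
        return;

    cr->message_exist = 0;
    for (int src = 0; src < NODES; src++)             /* O(N) scan */
        for (int slot = 0; slot < QUEUE_LEN; slot++)
            if (cr->ready[src][slot]) {
                cr->ready[src][slot] = false;
                handle_message(src, slot);            /* run handler */
            }
}
```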
### 5 Results and analysis
In this section we present the latency and bandwidth
results of the first full design and implementation of our
Core API. These results are compared against Dolphin
SISCI raw performance and two other existing GASNet
conduits, namely the GM conduit for Myrinet and the
MPI conduit, a core only implementation, on SCI using
Scali’s ScaMPI. ScaMPI is a commercial MPI
implementation for SCI, and it is considered the most
efficient communication layer implemented to date for
SCI. This comparison of results is used to evaluate the
performance of our design.
The GASNet system provides a reference-extended
API implementation that is based on Core API functions.
Consequently, a complete and fully functional GASNet
conduit is created with the successful completion of the
Core API. To complete the analysis of our design, we
compared the results of the basic Extended API
operations put and get for our native SCI conduit against
the MPI conduit executing on top of ScaMPI.
### 5.1 Experimental setup
Here we describe the environment and testing
procedures used in obtaining performance measurements
from each of the software environments.
### 5.1.1 Testbed
Two sets of machines were used in this study. The
first set consists of 16 server nodes, each with dual
2.4GHz Intel P4 Xeon CPUs with 256KB L2 cache, 1GB
of DDR PC2100 (DDR266) RAM, and a 533MHz system
bus. Each node is equipped with a Dolphin D339 3D
SCI card and uses Linux Red Hat 9.0 with kernel
2.4.20-8smp and gcc version 3.3.2. These SCI nodes are
wired and configured as two 4×2 2D torus networks.
One torus uses the free open-source driver with SISCI
API V2.2 provided by Dolphin, and the other uses the
commercial Scali V4.0 driver with ScaMPI.
Michigan Tech graciously provided access to their
Myrinet 2000E cluster for this work. Their cluster
consists of 16 server nodes, each with dual 2.2GHz Intel
P4 Xeon CPUs with 256KB L2 cache, 2GB of DDR
PC2100 (DDR266) RAM, and a 533MHz system bus. A
16-port Myrinet 2000 switch is used to connect these
nodes. The Myrinet NIC in each node features an
onboard 133MHz LANai 9.0 CPU with 2MB of on-card
memory using GM V1.6.3.
### 5.1.2 Experiments
Performance results for SCI Raw are obtained using scipp (a PIO ping-pong benchmark) and dma_bench (a one-way DMA benchmark), latency and bandwidth benchmarks provided by Dolphin for the SISCI API. Conduit results are obtained by executing a slightly modified version of the testam benchmark from the GASNet test suite; testam was changed only to output the bandwidth measurements for long AM transfers.
To test the latency of small-message put/get operations in GASNet, we use the testsmall benchmark from the GASNet test suite. It uses the gasnet_put() and gasnet_get() functions to send data back and forth between nodes, obtaining the round-trip latency for these requests. Bandwidth is measured using the testlarge benchmark available in the GASNet test suite. It uses the various bulk-data transfer functions available in the Extended API to send one-way data between two nodes.
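As a point of reference for these benchmarks, the following sketch shows the general shape of such a round-trip measurement. It is an illustrative reconstruction, not the testsmall source; the blocking gasnet_put()/gasnet_get() signatures follow the GASNet specification.

```c
/* Illustrative sketch of a put/get round-trip timing loop in the style
 * of testsmall; not the actual benchmark code. */
#include <gasnet.h>
#include <sys/time.h>

/* Time 'iters' blocking put+get round trips and return the mean in usec.
 * 'remote' must point into the peer's GASNet segment. */
double roundtrip_usec(gasnet_node_t peer, void *remote, void *local,
                      size_t nbytes, int iters) {
    struct timeval t0, t1;
    gettimeofday(&t0, NULL);
    for (int i = 0; i < iters; i++) {
        gasnet_put(peer, remote, local, nbytes);   /* blocking put */
        gasnet_get(local, peer, remote, nbytes);   /* blocking get */
    }
    gettimeofday(&t1, NULL);
    return ((t1.tv_sec - t0.tv_sec) * 1e6 +
            (t1.tv_usec - t0.tv_usec)) / iters;
}
```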
### 5.2 Core API AM results and analysis
Short, medium, and long AM latency, as well as long
AM bandwidth results, are shown in this section. As
short and medium AM transfers are typically small in size
and do not transfer large amounts of data, bandwidth
numbers for them are not included. Comparison and
analysis of our SCI conduit’s performance versus the SCI
Raw, the MPI/ScaMPI Conduit, and the Myrinet Conduit
are also discussed. Unfortunately, direct comparisons
between our results and those from Ibel’s work cannot be
made due to vastly different hardware/software testbeds.
### 5.2.1 Short/Medium AM
[Figure 4 residue removed: latency versus payload size (0-1024 bytes; a 0-byte payload is a short AM) for SCI Raw, SCI Conduit, MPI/ScaMPI Conduit, and Myrinet Conduit.]
Figure 4 - Short/Medium AM ping-pong latency results
Compared to SCI raw performance, our SCI conduit adds ~12µs of overhead (Figure 4). The main causes are the overhead of packaging and unpackaging the AM header, obtaining free buffer space, and system sanity checks. Our results are comparable to the Myrinet conduit, but lag somewhat behind the MPI/ScaMPI conduit. Other possible causes of the overhead, and the reasons why MPI/ScaMPI performs better, are still under investigation.
The transmission of medium AM messages can be performed in two ways. The header and payload can be copied into one contiguous memory location and then transmitted to the receiver in one transfer, or the header and payload can be transferred separately to the receiver (Figure 5). One would expect the first approach to perform better than the second, given that network communication cost is generally much higher than local processing cost. However, our testing indicates that the two-network-transaction mechanism is slightly more efficient (Figure 6). One reason may be the need to perform a memcpy(), which can be an expensive operation. Another may be that SCI allows up to 16 outstanding transactions to be posted at once; the overhead of the second SCI transaction is thus partially hidden from the user by the first (i.e., the two transactions overlap). A sketch of the two send paths follows.
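The sketch below contrasts the two mechanisms; pio_write() and the header layout are assumed stand-ins for the conduit's actual PIO machinery, with only the 16-outstanding-transaction rationale taken from the text.

```c
/* Illustrative sketch of the two medium-AM delivery mechanisms;
 * pio_write() and am_header_t are assumed, not the conduit's real API. */
#include <string.h>

typedef struct { int handler; int nargs; } am_header_t;  /* assumed layout */
extern void pio_write(void *remote_dst, const void *src, size_t n);

static char staging[4096];

void send_medium_1tx(void *remote_buf, const am_header_t *hdr,
                     const void *payload, size_t plen) {
    /* One transaction: memcpy header and payload into a contiguous
     * staging buffer, then push everything in a single transfer. */
    memcpy(staging, hdr, sizeof *hdr);
    memcpy(staging + sizeof *hdr, payload, plen);
    pio_write(remote_buf, staging, sizeof *hdr + plen);
}

void send_medium_2tx(void *remote_buf, const am_header_t *hdr,
                     const void *payload, size_t plen) {
    /* Two transactions: no local memcpy, and since SCI allows up to 16
     * outstanding transactions, the second transfer's overhead is
     * partially hidden behind the first. */
    pio_write(remote_buf, hdr, sizeof *hdr);
    pio_write((char *)remote_buf + sizeof *hdr, payload, plen);
}
```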
[Figure 5 residue removed: the diagram contrasted the two delivery mechanisms between Node X and Node Y.]
Figure 5 - Conceptual diagram of "1 network transaction" and "2 network transactions" message delivery mechanisms
[Figure 6 residue removed: latency versus payload size (0-1024 bytes) for the two mechanisms.]
Figure 6 - Performance comparison of "1 network transaction" and "2 network transactions" message delivery mechanisms

### 5.2.2 Long AM

The SISCI API requires any DMA transfer to have 8-byte alignment between the source and the target segment (both starting address and transfer size). Sending unaligned data is thus a problem, as costly dynamic mapping (~200µs overhead) and unmapping of the target segment is needed. To overcome this shortcoming, the request/reply buffer region reserved for medium payloads is used as a bounce buffer for the unaligned portion of the long payload, which is later copied to the appropriate payload address when handled by the receiver. Furthermore, because of the high DMA engine setup overhead (~30µs), any long payload smaller than 2048 bytes is treated as unaligned data and written to the command segment using PIO mode instead. In doing so, our conduit achieves better performance for small long AM payloads and suffers lower overhead for unaligned data transfers (~13µs). Future implementations of the SCI conduit might switch back to using the DMA engine directly, since Dolphin is currently working on improving their driver to reduce the mapping overhead, the DMA engine start-up overhead, and the alignment requirement.

Our long AM latency (Figure 7) and bandwidth results (Figure 8) follow the same growth trend as SCI Raw and are comparable to the Myrinet conduit. Although MPI/ScaMPI has better performance for smaller payload sizes, its maximum bandwidth is about 190 MB/s, mainly because it uses PIO exclusively, whereas our conduit rises to 213 MB/s at a payload size of 128KB.

[Figure 7 residue removed: the SCI Raw curve was obtained by doubling the one-way result from dma_bench.]
Figure 7 - Long AM ping-pong latency results
Figure 8 - Long AM bandwidth results
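The long-AM path selection described in this subsection can be summarized in a short sketch. The helper names and the exact head/body/tail split are our assumptions; only the 2048-byte threshold and the 8-byte alignment requirement come from the text above.

```c
/* Illustrative sketch of long-AM payload path selection; helper names
 * are assumed. Thresholds follow the text (2048 bytes, 8-byte DMA
 * alignment). */
#include <stddef.h>
#include <stdint.h>

#define DMA_MIN_SIZE 2048   /* below this, ~30us DMA setup dominates */
#define DMA_ALIGN    8      /* SISCI DMA alignment requirement       */

extern void pio_write_bounce(char *remote, const char *src, size_t n);
extern void dma_transfer(char *remote, const char *src, size_t n);

void send_long_payload(char *remote, const char *src, size_t n) {
    if (n < DMA_MIN_SIZE) {     /* small payload: PIO via command segment */
        pio_write_bounce(remote, src, n);
        return;
    }
    /* Bounce the unaligned head and tail through the medium-AM buffer
     * region, and DMA the aligned bulk of the payload directly. */
    size_t mis  = (uintptr_t)src % DMA_ALIGN;
    size_t head = mis ? DMA_ALIGN - mis : 0;
    size_t body = (n - head) & ~(size_t)(DMA_ALIGN - 1);
    size_t tail = n - head - body;
    if (head) pio_write_bounce(remote, src, head);
    dma_transfer(remote + head, src + head, body);
    if (tail) pio_write_bounce(remote + head + body, src + head + body, tail);
}
```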
### 5.3 Put/Get
There are two modes of testsmall: transfers to addresses within, and outside of, the main GASNet segment. Since all small and medium AM transactions take place through buffers, the results for both modes are the same, and only the graph for transfers within the segment is shown. Figure 9
shows the results of testsmall for our SCI conduit and the
MPI conduit on ScaMPI. Since the Extended API
implementation of these two conduits is based on AM
transactions in their Core APIs, the results correspond
almost exactly to the latency gathered for the small and
medium AM transfers in the Core API.
[Figure 9 residue removed: latency versus payload size (1-1024 bytes) for Conduit Put/Get (in-segment) and MPI/ScaMPI Put/Get (in-segment).]
Figure 9 - Put/Get latency results
The results for all blocking and non-blocking functions were the same, so only the results for gasnet_put_bulk() and gasnet_get_bulk() are shown here. Similar to testsmall, there are two modes of transfer in testlarge. Because our Core API currently supports only the fast segment configuration, it is optimized for transfers to within the main GASNet segment. Therefore, only the results for one-way, in-segment transfers are shown in Figure 10.
[Figure 10 residue removed: bandwidth (0-250 MB/s) versus payload size for Conduit Put/Get (in-segment) and MPI/ScaMPI Put/Get (in-segment).]
Figure 10 - Put/Get bandwidth results
As with long AM transfers, the MPI conduit
using ScaMPI achieves slightly better bandwidth for
smaller transfer sizes. However, for transfers of 32KB
and more, our SCI conduit shows better performance.
### 6 Conclusions
GASNet is an important part of the push to expand
UPC shared-memory computing capabilities to
network-based systems like clusters. The GASNet
conduits available on many networks allow UPC to be
executed on a wide variety of platforms. SCI is a
high-performance network that has many features that can
be used to efficiently execute GASNet and UPC. By
extending GASNet to SCI through the creation of an SCI
conduit, the availability of UPC to parallel programmers
increases. The creation of the GASNet Core API is an
essential step in accomplishing this goal, as a complete
Core API implementation is sufficient for a GASNet
conduit.
The tests conducted show that we have designed and implemented a complete and capable GASNet conduit for SCI. The performance of our SCI conduit is shown to be comparable to the Myrinet conduit and slightly behind the MPI/ScaMPI conduit, which uses a proprietary SCI driver and MPI software. This outcome strengthens our belief that our SCI conduit is a promising extension to the GASNet system, as the driver used in the creation of the SCI conduit is free and open-source.
Several ideas are under investigation which will
further improve the performance of our conduit. Care is
needed in balancing the many different aspects of network
performance so that the SCI conduit can fully exploit the
unique features available in the SCI network.
Furthermore, currently the SCI conduit only supports
GASNet global segment sizes up to 2MB, under Linux,
without applying a large physical area patch. This
requirement limits the usage of our conduit to those
clusters whose system administrators are willing to patch
the kernel on each SCI node. This patch requirement is
primarily due to a limitation of the current SISCI driver, whereby each segment must be physically contiguous and the driver relies on the underlying operating system to ensure contiguity. We are currently working with
Dolphin to resolve this issue and increase the ease of use
of this conduit.
Initial testing at the GASNet put/get level with our
Core API again indicates that our conduit is comparable
to other conduits. We are currently completing the
implementation of an Extended API in order to improve
the performance of our SCI conduit. Once complete,
benchmarks at the UPC application level will be used to
obtain a better assessment of the effectiveness of our SCI
conduit from the communication to the application layer.
### Acknowledgements
This work was supported in part by the U.S.
Department of Defense and by equipment support of
Dolphin Interconnect Solutions Inc. Also, we would like
to express our thanks for the helpful suggestions and
cooperation of Dan Bonachea and the UPC group
members at UCB and LBNL, and to Hugo Kohmann and
the support team at Dolphin for technical assistance.
### References
1. W. Carlson, J. Draper, D. Culler, K. Yelick, E. Brooks,
K. Warren, “Introduction to UPC and Language
Specification,” May 1999
http://www.gwu.edu/~upc/pubs.html
2. Official Unified Parallel C website
http://www.upc.gwu.edu/
3. Official Berkeley UPC website http://upc.nersc.gov/
4. Official GASNet website
http://www.cs.berkeley.edu/~bonachea/gasnet
5. D. Bonachea, “GASNet Specification Version 1.3,”
April 2003
http://www.cs.berkeley.edu/~bonachea/gasnet/dist/do
cs/gasnet.pdf
6. D. Gustavson and Q. Li, “The Scalable Coherent
Interface (SCI),” IEEE Communications, Vol. 34, No.
8, August 1996, pp. 52-63.
7. Scali, “ScaMPI – Design and Implementation,”
http://www.scali.com/whitepaper/other/scampidesign.
pdf
8. IEEE Service Center, “Scalable Coherent Interface,
ANSI/IEEE Standard 1596-1992,” Piscataway, New
Jersey, 1993.
9. Dolphin Inc., “SISCI API User Guide,” May 2001,
http://www.dolphinics.com/support/documentation.ht
ml
10. A. Mainwaring and D. Culler, “Active Messages:
Organization and Applications Programming
Interface,” Technical Document, 1995.
11. C. Bell and D. Bonachea, “A New DMA Registration
Strategy for Pinning-Based High Performance
Networks,” Workshop on Communication
Architecture for Clusters (CAC'03), 2003.
12. M. Ibel, K.E. Schauser, C.J. Scheiman, and M. Weis, “Implementing Active Messages and Split-C for SCI Clusters and Some Architectural Implications,” Sixth International Workshop on SCI-based Low-cost/High-performance Computing (SCIzzL-6), Santa Clara, CA, September 1996.
|
{
"disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references', 'abstract'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1109/LCN.2004.107?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1109/LCN.2004.107, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "GREEN",
"url": "http://gasnet.cs.berkeley.edu/SuGordon-HSLN.pdf"
}
| 2,004
|
[
"Conference"
] | true
| 2004-11-16T00:00:00
|
[] | 7,470
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Physics",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/00ef77f1162f6eed2595e569d716f963c181de21
|
[
"Computer Science",
"Physics"
] | 0.860724
|
Classical zero-knowledge arguments for quantum computations
|
00ef77f1162f6eed2595e569d716f963c181de21
|
IACR Cryptology ePrint Archive
|
[
{
"authorId": "2794473",
"name": "Thomas Vidick"
},
{
"authorId": "2004707436",
"name": "Tina Zhang"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"IACR Cryptol eprint Arch"
],
"alternate_urls": null,
"id": "166fd2b5-a928-4a98-a449-3b90935cc101",
"issn": null,
"name": "IACR Cryptology ePrint Archive",
"type": "journal",
"url": "http://eprint.iacr.org/"
}
|
We show that every language in QMA admits a classical-verifier, quantum-prover zero-knowledge argument system which is sound against quantum polynomial-time provers and zero-knowledge for classical (and quantum) polynomial-time verifiers. The protocol builds upon two recent results: a computational zero-knowledge proof system for languages in QMA, with a quantum verifier, introduced by Broadbent et al. (FOCS 2016), and an argument system for languages in QMA, with a classical verifier, introduced by Mahadev (FOCS 2018).
|
# Classical zero-knowledge arguments for quantum computations
#### Thomas Vidick[1] and Tina Zhang[2]
1Department of Computing and Mathematical Sciences, California Institute of Technology, USA
2Division of Physics, Mathematics and Astronomy, California Institute of Technology, USA
We show that every language in QMA admits a classical-verifier, quantum-prover
zero-knowledge argument system which is sound against quantum polynomial-time
provers and zero-knowledge for classical (and quantum) polynomial-time verifiers. The
protocol builds upon two recent results: a computational zero-knowledge proof system for languages in QMA, with a quantum verifier, introduced by Broadbent et al.
(FOCS 2016), and an argument system for languages in QMA, with a classical verifier,
introduced by Mahadev (FOCS 2018).
### 1 Introduction
The paradigm of the interactive proof system is a versatile tool in complexity theory. Although
traditional complexity classes are usually defined in terms of a single Turing machine—NP, for
example, can be defined as the class of languages which a non-deterministic Turing machine is able
to decide—many have reformulations in the language of interactive proofs, and such reformulations
often inspire natural and fruitful variants on the traditional classes upon which they are based. (The
class MA, for example, can be considered a natural extension of NP under the interactive-proof
paradigm.)
Intuitively speaking, an interactive proof system is a model of computation involving two entities,
a verifier and a prover, the former of whom is computationally efficient, and the latter of whom
is unbounded and untrusted. The verifier and the prover exchange messages, and the prover
attempts to ‘convince’ the verifier that a certain problem instance is a yes-instance. We can define
some particular complexity class as the set of languages for which there exists an interactive proof
system that 1) is complete, 2) is sound, and 3) has certain other properties which vary depending
on the class in question. Completeness means, in this case, that for any problem instance in the
language, there is an interactive proof involving r messages in total that the prover can offer the
verifier which will cause it to accept with at least some probability p; and soundness means that, for
[Thomas Vidick: vidick@caltech.edu](mailto:vidick@caltech.edu)
[Tina Zhang: tinazhang@caltech.edu](mailto:tinazhang@caltech.edu)
any problem instance not in the language, no prover can cause the verifier to accept, except with
some small probability q. For instance, if we require that the verifier is a deterministic polynomialtime Turing machine, and set r = 1, p = 1, and q = 0, the class that we obtain is of course the class
NP. If we allow the verifier to be a probabilistic polynomial-time machine, and set r = 1, p = 2/3, and q = 1/3, we have MA. Furthermore, if we allow the verifier to be an efficient quantum machine, and
we allow the prover to communicate with it quantumly, but we retain the parameter settings from
MA, we obtain the class QMA. Finally, if we allow r to be any polynomial in n, where n is the
size of the problem instance, but otherwise preserve the parameter settings from MA, we obtain
the class IP.
For every complexity class thus defined, there are two natural subclasses which consist of the
languages that admit, respectively, a statistical and a computational zero-knowledge interactive
proof system with otherwise the same properties. The notion of a zero-knowledge proof system was
first considered by Goldwasser, Micali and Rackoff in [GMR89], and formalises the surprising but
powerful idea that the prover may be able to prove statements to the verifier in such a way that the
verifier learns nothing except that the statements are true. Informally, an interactive proof system
is statistical zero-knowledge if an arbitrary malicious verifier is able to learn from an honest prover
that a problem instance is a yes-instance, but can extract only negligible amounts of information
from it otherwise; and the computational variant provides the same guarantee only for malicious
polynomial-time verifiers. For IP in particular, the subclass of languages which admit a statistical
zero-knowledge proof system that otherwise shares the same properties had by proof systems for
languages in IP is known as SZK. Its computational sibling, meanwhile, is known as CZK. It is well-known that, contingent upon the existence of one-way functions, NP ⊆ CZK: computational zero-
knowledge proof systems have been known to exist for every language in NP since the early 1990s
([GMW91]). However, because these proof systems often relied upon intractability assumptions or
techniques (e.g. ‘rewinding’) that failed in quantum settings, it was not obvious until recently how
to obtain an analogous result for QMA. One design for a zero-knowledge proof system for promise
problems in QMA was introduced by Broadbent, Ji, Song and Watrous in [BJSW16]. Their work
establishes that, provided that a quantum computationally concealing, unconditionally binding
commitment scheme exists, QMA ⊆ QCZK.
There are, of course, a myriad more variations on the theme of interactive proofs in the quantum
setting, each of which defines another complexity class. For example, motivated partly by practical
applications, one might also consider the class of languages which can be decided by an interactive
proof system involving a classical verifier and a quantum prover communicating classically, in
which the soundness condition still holds against arbitrary provers, but the honest prover can be
implemented in quantum polynomial time. (For simplicity, we denote this class by IPBQP.) The
motivation for this specific set of criteria is as follows: large-scale quantum devices are no longer
so distant a dream as they seemed only a decade ago. If and when we have such devices, how
will we verify, using our current generation of classical devices, that our new quantum computers
can indeed decide problems in BQP? This problem—namely, the problem of showing that BQP ⊆ IPBQP—is known informally as the problem of quantum verification.
The problem of quantum verification has not yet seen a solution, but in recent years a number of
strides have been made toward producing one. As of the time of writing, protocols are known for
the following three variants on the problem:
1. It was shown in [ABE10, ABOEM17] that a classical verifier holding a quantum register
consisting only of a constant number of qubits can decide languages in BQP by communicating
quantumly with a single BQP prover. In [BFK09, FK17], this result was extended to classical
verifiers with single-qubit quantum registers. All of these protocols are sound against arbitrary
provers.
2. It was shown in [RUV13] that an entirely classical verifier can decide languages in BQP by
interacting classically with two entangled, non-communicating QPT provers. This protocol
is likewise sound against arbitrary provers.
3. It was shown in [Mah18] that an entirely classical verifier can decide languages in BQP by executing an argument system ([BCC88]) with a single BQP prover. An argument system differs
from a proof system in that 1) its honest prover must be efficient, and 2) an argument system
need not be sound against arbitrary provers, but only efficient ones. In this case, the argument
system in [Mah18] is sound against quantum polynomial-time provers. (The class of languages
for which there exists an argument system involving a classical probabilistic polynomial-time
verifier and a quantum polynomial-time prover is referred to throughout [Mah18] as QPIP0.)
The argument system introduced in [Mah18] is reliant upon cryptographic assumptions about
the quantum intractability of Learning With Errors (LWE; see [Reg09]) for its soundness. For
practical purposes, if this assumption holds true, the problem of verification can be considered
solved.
The last of these three results establishes that BQP ⊆ QPIP0, contingent upon the intractability
of LWE. (As a matter of fact, the same result also establishes that QMA ⊆ QPIP0, provided the
efficient quantum prover is given access to polynomially many copies of a quantum witness for
the language to be verified, in the form of ground states of an associated local Hamiltonian.) In
this work, we show that the protocol which [Mah18] introduces for this purpose can be combined
with the zero-knowledge proof system for QMA presented in [BJSW16] in order to obtain a zero-knowledge argument system for QMA. It follows naturally that, if the LWE assumption holds, and quantum computationally hiding, unconditionally binding commitment schemes exist,¹ QMA ⊆ CZK-QPIP0, where the latter refers to the class of languages for which there exists a computational zero-knowledge interactive argument system involving a classical verifier and a quantum polynomial-time prover. Zero-knowledge protocols for languages in NP are an essential component of many
cryptographic constructions, such as identification schemes, and are often used in general protocol
design (for example, one can force a party to follow a prescribed protocol by requiring it to produce
a zero-knowledge proof that it did so). Our result opens the door for the use of zero-knowledge
proofs in protocols involving classical and quantum parties which interact classically in order to
decide languages defined in terms of quantum information (for instance, to verify that one of the
parties possesses a quantum state having certain properties).
We now briefly describe our approach to the problem. The proof system for promise problems in
QMA presented in [BJSW16] is almost classical, in the sense that the only quantum action which
the honest verifier performs is to measure a quantum state after applying Clifford gates to it. The
key contribution which [Mah18] makes to the problem of verification is to introduce a measurement
_protocol which, intuitively, allows a classical verifier to obtain honest measurements of its prover’s_
¹It is known that quantum computationally hiding, unconditionally binding commitment schemes fitting our
requirements can be constructed from LWE. See, for example, Section 2.4.2 in [CVZ19].
quantum state. The combining of the proof system from [BJSW16] and the measurement protocol
from [Mah18] is therefore a fairly natural action.
That the proof system of [BJSW16] is complete for problems in QMA follows from the QMA-completeness of a problem which the authors term the 5-local Clifford Hamiltonian problem. However, the argument system which [Mah18] presents relies upon the QMA-completeness of the well-known 2-local XZ Hamiltonian problem (see Definition 2.3). For this reason, the two results cannot
be composed directly. Our first step is to make some modifications to the protocol introduced
in [BJSW16] so it can be used to verify that an XZ Hamiltonian is satisfied, instead of verifying
that a Clifford Hamiltonian is satisfied. We then introduce a composite protocol which replaces
the quantum measurement in the protocol from [BJSW16] with an execution of the measurement
protocol from [Mah18]. With the eventual object in mind of proving that the result is sound
and zero-knowledge, we introduce a trapdoor check step into our composite protocol, and split the
_coin-flipping protocol used in the proof system from [BJSW16] into two stages. We explain these_
decisions briefly here, after we present a summary of our protocol and its properties, and refer the
reader to Sections 3, 5 and 6 for fuller expositions.
**Protocol 1.1. Zero-knowledge, classical-verifier argument system for QMA (informal summary).**
_Parties._
The protocol involves
1. A verifier, which runs in classical probabilistic polynomial time;
2. A prover, which runs in quantum polynomial time.
_Inputs. The protocol requires the following primitives:_
• A perfectly binding, quantum computationally concealing commitment protocol.
• A zero-knowledge proof system for NP.
• An extended trapdoor claw-free function family (ETCFF family), as defined in [Mah18].
Apart from the above cryptographic primitives, we assume that the verifier and the prover also
receive the following inputs.
1. Input to the verifier: a 2-local XZ Hamiltonian H (see Definition 2.3), along with two numbers,
_a and b, which define a promise about the ground energy of H._ Because the 2-local XZ
Hamiltonian promise problem is complete for QMA, any input to any decision problem in
QMA can be reduced to an instance of the 2-local XZ Hamiltonian problem.
2. Input to the prover: the Hamiltonian H, the numbers a and b, and the quantum state
ρ = σ^{⊗m}, where σ is a ground state of the Hamiltonian H.
_Protocol._
1. The prover applies an encoding process to ρ. Informally, the encoding can be thought of as a
combination of an encryption scheme and an authentication scheme: it both hides the witness
state ρ and ensures that the verifier cannot meaningfully tamper with the measurement
results that it reports in step 5. Like most encryption and authentication schemes, this
encoding scheme is keyed. For convenience, we refer to the encoding procedure determined
by a particular encoding key K as EK.²
2. The prover commits to the encoding key K from the previous step using a classical commitment protocol, and sends the resulting commitment string z to the verifier.
3. The verifier and the prover jointly decide which random terms from the Hamiltonian H
the verifier will check by executing a coin-flipping protocol. (‘Checking terms of H’ means
that the verifier obtains measurements of the state EK(ρ) and checks that the outcomes are
distributed a particular way—or, alternatively, asks the prover to prove to it that they are.)
However, because it is important that the prover does not know which terms will be checked
before the verifier can check them, the two parties only execute the first half of the coin-flipping protocol at this stage. The verifier commits to its part of the random string, rv, and sends the resulting commitment string to the prover; the prover sends the verifier rp, its own part of the random string; and the verifier keeps the result of the protocol r = rv ⊕ rp secret
for the time being. The random terms in the Hamiltonian which the verifier will check are
determined by r.
4. The verifier and the prover execute the measurement protocol from [Mah18]. Informally,
this allows the verifier to obtain honest measurements of the qubits of the prover’s encoded
witness state, so that it can check the Hamiltonian term determined by r. The soundness
guarantee of the measurement protocol prevents the prover from cheating, even though the
prover, rather than the verifier, is physically performing the measurements. This soundness
guarantee relies on the security properties of a family of trapdoor one-way functions termed
an ETCFF family in [Mah18]. Throughout the measurement protocol, the verifier holds
trapdoors for these one-way functions, but the prover does not, and this asymmetry is what
allows the (intrinsically weaker) verifier to ensure that the prover does not cheat.
5. The verifier opens its commitment to rv, and also sends the prover its measurement outcomes
_u and function trapdoors from the previous step._
6. The prover checks, firstly, that the verifier’s trapdoors are valid, and that it did not tamper with the measurement outcomes u. (It can determine the latter by making use of the
authentication-scheme-like properties of EK from step 1.) If both tests pass, it then proves
the following statement to the verifier, using a zero-knowledge proof system for NP:
There exists a string sp and an encoding key K such that z = commit(K, sp) and
_Q(K, r, u) = 1._
The function Q is a predicate which, intuitively, takes the value 1 if and only if both the
verifier and the prover were honest. In more specific (but still informal) terms, Q(K, r, u)
takes the value 1 if u contains the outcomes of honest measurements of the state EK(ρ),
where ρ is a state that passes the set of Hamiltonian energy tests determined by r.
**Lemma 1.2 (soundness; informal). Assume that LWE is intractable for quantum computers. Then,**
_in a no-instance execution of Protocol 1.1, the probability that the verifier accepts is at most a_
_function that is negligibly close to 3/4._
²The notation used here for the encoding key is not consistent with that which is used later on; it is simplified for
the purposes of exposition.
**Lemma 1.3 (zero-knowledge; informal). Assume that LWE is intractable for quantum computers.**
_In a yes-instance execution of Protocol 1.1, and for any classical probabilistic (resp. quantum)_
_polynomial-time verifier interacting with the honest prover, there exists a classical probabilistic_
_polynomial-time (resp._ _quantum polynomial-time) simulator such that the simulator’s output is_
_classical (resp. quantum) computationally indistinguishable from that of the verifier._
The reason we delay the verifier’s reveal of rv (rather than completing the coin-flipping in one step,
as is done in the protocol in [BJSW16]) is fairly easily explained. In our classical-verifier protocol,
the prover cannot physically send the quantum state EK(ρ) to its verifier before the random string
_r is decided, as the prover of the protocol in [BJSW16] does. If we allow our prover to know r at_
the time when it performs measurements on the witness ρ, it will trivially be able to cheat.
The trapdoor check, meanwhile, is an addition which we make because we wish to construct a
_classical simulator for our protocol when we prove that it is zero-knowledge. Since our verifier is_
classical, we need to achieve a classical simulation of the protocol in order to prove that its execution
(in yes-instances) does not impart to the verifier any knowledge it could not have generated itself.
During the measurement protocol, however, the prover is required to perform quantum actions
which no classical polynomial-time algorithm could simulate unless it had access to the verifier’s
function trapdoors. Naturally, we cannot ask the verifier to reveal its trapdoors before the measurement protocol takes place. As such, we ask the verifier to reveal them immediately afterwards
instead, and show in Section 6 that this (combined with the encryption-scheme properties of the
prover’s encoding EK) allows us to construct a classical simulator for Protocol 1.1 in yes-instances.
The organisation of the paper is as follows.
1. Section 2 (‘Ingredients’) outlines the other protocols which we use as building blocks.
2. Section 3 (‘The protocol’) introduces our argument system for QMA.
3. Section 4 (‘Completeness of protocol’) gives a completeness lemma for the argument system
introduced in section 3.
4. Section 5 (‘Soundness of protocol’) proves that the argument system introduced in section 3
is sound against quantum polynomial-time provers.
5. Section 6 (‘Zero-knowledge property of protocol’) proves that the argument system is zero-knowledge (that yes-instance executions can be simulated classically).
_Remark 1.4. As Broadbent et al. note in [BJSW16, Section 1.3], argument systems can often be_
made zero-knowledge by employing techniques from secure two-party computation (2PC). The
essential idea of such an approach, applied to our particular problem, is as follows: the prover and
the verifier would jointly simulate the classical verifier of the [Mah18] measurement protocol using a
(classical) secure two-party computation protocol, and zero-knowledge would follow naturally from
simulation security. (This technique is similar in spirit to those which are used in [BOGG+88] to
show that any classical-verifier interactive proof system can be made zero-knowledge.) We think
that the 2PC approach applied to our problem would have many advantages, including that it is
more generally applicable than our approach; however, we also believe that our approach is a more
direct and transparent solution to the particular problem at hand, and that it provides an early
example of how two important results might be fruitfully combined. As such, we expect that our
approach may more easily lead to extensions and improvements.
**Related work.** Subsequent to the completion of this work, there have been several papers which
explore other extensions and applications of the argument system from [Mah18], and also papers
which propose zero-knowledge protocols (with different properties from ours) for QMA. Many
of these works focus on decreasing the amount of interaction required to implement a proof or
argument system for QMA. Although none of these works directly builds on or supersedes ours,
we review them briefly for the reader’s convenience. In the category of extensions on the work
of [Mah18], we mention [ACGH19], which proposes a non-interactive zero-knowledge variant of
the Mahadev protocol and proves its security in the quantum random oracle model. (Of course,
our protocol is interactive, and our analysis holds in the standard model.) In the category of
‘short’ proof and argument systems for QMA, we mention three independent works. In [BS19],
the authors present a constant-round computationally zero-knowledge argument system for QMA.
In [BG19] and [CVZ19] the authors present non-interactive zero-knowledge proof and argument
systems, respectively, for QMA, with different types of setup phases. The main difference between
all three of these new protocols and our protocol is that the three protocols mentioned all involve
the exchange of quantum messages (although, in [CVZ19], only the setup phase requires quantum
communication).
_Acknowledgments. We thank Zvika Brakerski, Andru Gheorghiu, and Zhengfeng Ji for useful dis-_
cussions. We thank an anonymous referee for suggesting the approach based on secure 2PC sketched
in Remark 1.4. Thomas Vidick is supported by NSF CAREER Grant CCF-1553477, AFOSR YIP
award number FA9550-16-1-0495, MURI Grant FA9550-18-1-0161, a CIFAR Azrieli Global Scholar
award, and the IQIM, an NSF Physics Frontiers Center (NSF Grant PHY-1125565) with support
of the Gordon and Betty Moore Foundation (GBMF-12500028). Tina Zhang acknowledges support
from the Richard G. Brewer Prize and Caltech’s Ph11 program.
### 2 Ingredients
The protocol we present in section 3 combines techniques which were introduced in prior works
for the design of protocols to solve related problems. In this section, we outline these protocols in
order to introduce notation and groundwork which will prove useful in the remainder of the paper.
We also provide formal definitions of QMA and of zero-knowledge.
#### 2.1 Definitions
**Definition 2.1 (QMA). The following definition is taken from [BJSW16].**
A promise problem A = (Ayes, Ano) is contained in the complexity class QMAα,β if there exists a
polynomial-time generated collection
{Vx : x ∈ Ayes ∪ Ano}    (1)
of quantum circuits and a polynomially bounded function p possessing the following properties:
1. For every string x ∈ _Ayes ∪_ _Ano, one has that Vx is a measurement circuit taking p(|x|) input_
qubits and outputting a single bit.
2. Completeness. For all x ∈ _Ayes, there exists a p(|x|)-qubit state ρ such that Pr(Vx(ρ) = 1) ≥_ _α._
3. Soundness. For all x ∈ _Ano, and every p(|x|)-qubit state ρ, it holds that Pr(Vx(ρ) = 1) ≤_ _β._
In this definition, α, β ∈ [0, 1] may be constant values or functions of the length of the input string x. When they are omitted, it is to be assumed that they are α = 2/3 and β = 1/3. Known error reduction methods [KSVV02, MW05] imply that a wide range of selections of α and β give rise to the same complexity class. In particular, QMA coincides with QMAα,β for α = 1 − 2^(−q(|x|)) and β = 2^(−q(|x|)), for any polynomially bounded function q.
**Definition 2.2 (Zero-knowledge). Let (P, V ) be an interactive proof system (with a classical verifier**
_V ) for a promise problem A = (Ayes, Ano). Assume that (possibly among other arguments) P and_
V both take a problem instance x ∈ {0, 1}* as input. (P, V) is computational zero-knowledge if, for
every probabilistic polynomial-time (PPT) V _[∗], there exists a polynomial-time generated simulator_
_S such that, when x ∈_ _Ayes, the distribution of V_ _[∗]’s final output after its interaction with the_
honest prover P is computationally indistinguishable from S’s output distribution. More precisely,
let λ be a security parameter, let n be the length of x in bits, and let {Dn,λ}n,λ and {Sn,λ}n,λ
be the two distribution ensembles representing, respectively, the verifier V _[∗]’s output distribution_
after an interaction with the honest prover P on input x, and the simulator’s output distribution
on input x. If (P, V ) is computationally zero-knowledge, we require that, for all PPT algorithms
_A, the following holds:_
| Pr_{y←Dn,λ}[A(y) = 1] − Pr_{y←Sn,λ}[A(y) = 1] | ≤ µ(n)ν(λ),
where µ(·) and ν(·) are negligible functions.
#### 2.2 Single-qubit-verifier proof system for QMA ([MF16])
Morimae and Fitzsimons ([MF16]) present a proof system for languages (or promise problems)
in QMA whose verifier is classical except for a single-qubit quantum register, and which is sound
against arbitrary quantum provers. The proof system relies on the QMA-completeness of the 2-local
XZ Hamiltonian problem, which is defined as follows.
**Definition 2.3 (2-local XZ Hamiltonian (promise) problem).**
_Input. An input to the problem consists of a tuple x = (H, a, b), where_
1. H = Σ_{s=1}^{S} ds Hs is a Hamiltonian acting on n qubits, each term Hs of which
(a) has a weight ds which is a polynomially bounded rational number,
(b) satisfies 0 ≤ _Hs ≤_ _I,_
(c) acts as the identity on all but a maximum of two qubits,
(d) acts as the tensor product of Pauli observables in {σX _, σZ} on the qubits on which it_
acts nontrivially.
2. a and b are two real numbers such that
(a) a < b, and
(b) b − a = Ω(1/poly(|x|)).
Yes: There exists an n-qubit state σ such that ⟨σ, H⟩ ≤ a.³
No: For every n-qubit state σ, it holds that ⟨σ, H⟩ ≥ b.
Remark 2.4. Given a Hamiltonian H, we call any state σ* which causes ⟨σ*, H⟩ to take its minimum possible value a ground state of H, and we refer to the value ⟨σ*, H⟩ as the ground energy of H.
The following theorem is proven by Biamonte and Love in [BL08, Theorem 2].
**Theorem 2.5. The 2-local XZ Hamiltonian problem is complete for QMA.**
We now describe an amplified version of the protocol presented in [MF16], and give a statement
about its completeness and soundness which we will use. (See [MF16] for a more detailed presentation of the unamplified version of this protocol.)
**Protocol 2.6 (Amplified variant of the single-qubit-verifier proof system for QMA from [MF16]).**
Notation. Let L = (Lyes, Lno) be any promise problem in QMA; let x ∈ {0, 1}* be an input; and
let (H, a, b) be the instance of the 2-local XZ Hamiltonian problem to which x reduces.
1. If x ∈ _Lyes, the ground energy of H is at most a._
2. if x ∈ _Lno, the ground energy of H is at least b._
3. b − a ≥ 1/poly(|x|).
Let H = Σ_{s=1}^{S} ds Hs, as in Definition 2.3. Define
πs = |ds| / Σs |ds| .
_Parties. The proof system involves_
1. A verifier, who implements a classical probabilistic polynomial-time procedure with access to
a one-qubit quantum register; and
2. A prover, who is potentially unbounded, but whose honest behaviour in yes-instances can be
implemented in quantum polynomial time.
The verifier and the prover communicate quantumly.
_Inputs._
³The angle brackets ⟨·, ·⟩ denote an inner product between two operators which is defined as follows: ⟨A, B⟩ = Tr(A*B) for any A, B ∈ L(X, Y), where the latter denotes the space of linear maps from a Hilbert space X to a Hilbert space Y.
1. Input to the verifier: the Hamiltonian H and the numbers a and b.
2. Input to the prover: the Hamiltonian H, the numbers a and b, and the quantum state
ρ = σ^{⊗m}, where σ is a ground state of the Hamiltonian H.
_Protocol._
1. The verifier selects uniformly random coins r = (r1, . . ., rm).
2. For each j ∈{1, . . ., m}, the verifier uses rj to select a random sj ∈{1, . . ., S} according to
the distribution D specified as follows:
_D(s) = πs,_ for s ∈{1, . . ., S} .
3. The prover sends a state ρ to the verifier one qubit at a time. (The honest prover sends the
state σ[⊗][m] that consists of m copies of the ground state of H.)
4. The verifier measures Hsj for j = 1, . . ., m, taking advantage of the fact that—if the prover is
honest—it is given m copies of σ. (‘Measuring Hsj ’, in this case, entails performing at most
two single-qubit measurements, in either the standard or the Hadamard basis, on qubits in
_ρ, and then computing the product of the two measurement outcomes.)_
5. The verifier initialises a variable Count to 0. For each j ∈ {1, . . ., m}, if the jth product that it obtained in the previous step was equal to −sign(dj), the verifier adds one to Count.
6. If Count/m is closer to 1/2 − a/(2 Σs |ds|) than to 1/2 − b/(2 Σs |ds|), the verifier accepts. Otherwise, it rejects.
**Claim 2.7.** _Given an instance x = (H, a, b) of the 2-local XZ Hamiltonian problem, there is a polynomial P (depending only on a and b) such that, for any m = Ω(P(|x|)), the following holds. In a yes-instance, the procedure of Protocol 2.6 accepts the state ρ = σ^{⊗m} with probability exponentially close (in |x|) to 1. In a no-instance, the probability that it accepts any state is exponentially small in |x|._
_Proof. Consider the probability (over the choice of rj and the randomness arising from measure-_
ment) that the jth measurement from step 4 of Protocol 2.6, conditioned on previous measurement
outcomes, yields −sign(dj). Denote this probability by qj.
As shown in [MNS16, Section IV], it is not hard to verify that
1. when x ∈ L, if the prover sends the honest witness σ^{⊗m}, then qj ≥ 1/2 − a/(2 Σs |ds|), and
2. when x ∉ L, for any witness that the prover sends, qj ≤ 1/2 − b/(2 Σs |ds|).
The difference between the two cases is inverse polynomial in the size of the input to the 2-local
XZ Hamiltonian problem. It is straightforward to show that, for an appropriate choice of m, this
inverse polynomial gap can be amplified to an exponential one: see Appendix B.
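The gap amplification deferred to Appendix B is the standard concentration argument; the following is our sketch of that step, using the Azuma–Hoeffding inequality, and is not the paper's appendix verbatim.

```latex
% Our reconstruction of the amplification step (the paper's own argument
% is in Appendix B). Let X_j \in \{0,1\} indicate that the j-th
% measurement yields -\mathrm{sign}(d_j), so \mathrm{Count} = \sum_j X_j,
% and let \delta = (b-a)/(2\sum_s |d_s|) be the gap between the
% conditional means q_j in the two cases. Applying Azuma--Hoeffding to
% the martingale \sum_{j \le k} (X_j - q_j) gives
\Pr\left[\,\left|\frac{\mathrm{Count}}{m} - \frac{1}{m}\sum_{j=1}^{m} q_j\right| \ge \frac{\delta}{2}\,\right]
\le 2\exp\!\left(-\frac{m\,\delta^2}{8}\right),
% which is exponentially small in |x| once m = \Omega(|x|/\delta^2);
% since \delta = \Omega(1/\mathrm{poly}(|x|)), a polynomial m suffices,
% and the threshold in step 6 of Protocol 2.6 separates the two cases.
```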
_Remark 2.8. It will be useful later to establish at this point that, if the string r from step 1 of_
Protocol 2.6 is fixed, it is simple to construct a state ρr which will pass the challenge determined
by r with probability 1. One possible procedure is as follows.
1. For each j ∈ {1, . . ., m}:
Suppose that Hsj = djP1P2, and that P1, P2 ∈{σX _, σZ} act on qubits ℓ1 and ℓ2, respectively._
(a) If −sign(dj) = 1, initialise the ((j − 1)n + ℓ1)th qubit to the +1 eigenstate of P1, and
likewise, initialise the ((j − 1)n + ℓ2)th qubit to the +1 eigenstate of P2.
(b) If −sign(dj) = −1, initialise the ((j − 1)n + ℓ1)th qubit to the +1 eigenstate of P1, and
initialise the ((j − 1)n + ℓ2)th qubit to the −1 eigenstate of P2.
2. Initialise all remaining qubits to |0⟩.
It is clear that the ρr produced by this procedure is a tensor product of |0⟩, |1⟩, |+⟩ and |−⟩ qubits.
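As a concrete illustration of this construction (our example, not drawn from the paper), consider a single two-local term:

```latex
% Our illustrative example. Suppose H_{s_j} = d_j\,\sigma_X \otimes \sigma_Z
% with d_j > 0, acting on qubits \ell_1 and \ell_2, so that
% -\mathrm{sign}(d_j) = -1. Step 1(b) then sets
\lvert \psi_{\ell_1} \rangle = \lvert + \rangle \quad (+1 \text{ eigenstate of } \sigma_X),
\qquad
\lvert \psi_{\ell_2} \rangle = \lvert 1 \rangle \quad (-1 \text{ eigenstate of } \sigma_Z),
% so measuring \sigma_X on \ell_1 and \sigma_Z on \ell_2 yields the product
% (+1)\cdot(-1) = -1 = -\mathrm{sign}(d_j) with certainty: this block passes
% the j-th check with probability 1.
```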
#### 2.3 Measurement protocol ([Mah18])
In [Mah18], Mahadev presents a measurement protocol between a quantum prover and a classical
verifier which, intuitively, allows the verifier to obtain trustworthy standard and Hadamard basis
measurements of the prover’s quantum state from purely classical interactions with it. The soundness of the measurement protocol relies upon the security properties of functions that [Mah18]
terms noisy trapdoor claw-free functions and trapdoor injective functions, of which Mahadev provides explicit constructions presuming upon the hardness of LWE. (A high-level summary of these
constructions can be found in Appendix A.) Here, we summarise the steps of the protocol, and
state the soundness property that it has which we will use.
**Protocol 2.9 (Classical-verifier, quantum-prover measurement protocol from [Mah18]).**
_Parties. The proof system involves_
1. A verifier, which implements a classical probabilistic polynomial-time procedure; and
2. A prover, which implements a quantum polynomial-time procedure.
The verifier and the prover communicate classically.
_Inputs._
1. Input to the prover: an n-qubit quantum state ρ, whose qubits the verifier will attempt to
derive honest measurements of in the standard and Hadamard bases.
2. Input to the verifier:
(a) A string h ∈ {0, 1}^n, which represents the bases (standard or Hadamard) in which it
will endeavour to measure the qubits of ρ. hi = 0 signifies that the verifier will attempt
to obtain measurement outcomes of the ith qubit of ρ in the standard basis, and hi = 1
means that the verifier will attempt to obtain measurement outcomes of the ith qubit
of ρ in the Hadamard basis.
(b) An extended trapdoor claw-free function family (ETCFF family), as defined in Section 4
of [Mah18]. The description of an ETCFF family specifies a large number of algorithms,
and we do not attempt to enumerate them. Instead, we proceed to describe the verifier’s
prescribed actions at a level of detail which we believe to be sufficient for our purposes,
and refer the reader to [Mah18] for a finer exposition.
_Protocol._
1. For each i ∈ {1, . . ., n} (see ‘Inputs’ above for the definition of n), the verifier generates
an ETCFF function key κi using algorithms provided by the ETCFF family, along with a
trapdoor τκi for each function, and sends all of the keys κ to the prover. It keeps the trapdoors
_τ to itself. If hi = 0, the ith key κi is a key for an injective function g, and if hi = 1, it is a key_
for a two-to-one function f known as a ‘noisy trapdoor claw-free function’. Intuitively, the
_g functions are one-to-one trapdoor one-way functions, and the f functions are two-to-one_
trapdoor collision-resistant hash functions. The keys for f functions and those for g functions
are computationally indistinguishable. (For convenience, we will from now on refer to the
function specified by κi either as fκi or as gκi. Alternatively, we may refer to it as ηκi if we
do not wish to designate its type.⁴) A brief outline of how these properties are achieved using
LWE is given in Appendix A.
We make two remarks about the functions ηκi which will become relevant later.
(a) The functions ηκi always have domains of the form {0, 1} × X, where X ⊆ {0, 1}^w for
some length parameter w.
(b) The outputs of both the f and the g functions should be thought of not as strings but
as probability distributions. The trapdoor τκi inverts the function specified by κi in the
sense that, given a sample y from the distribution Y = ηκi(b∥x), along with the trapdoor
_τκi, it is possible to recover b∥x, as well as any other b[′]∥x[′]_ which also maps to Y under
_ηκi (should it exist)._
**Definition 2.10.** Suppose that ηκi is the function specified by κi, whose output on each input b∥x in its domain {0, 1} × X is a probability distribution Y. Define a (deterministic) function η*κi(b∥x, e) which takes as input a b∥x ∈ {0, 1} × X and a randomness e ∈ E, for
some well-defined finite set E, and returns a sample ye from the distribution Y = ηκi(b∥x).
**Definition 2.11. Let ηκi be the function specified by κi, with domain {0, 1} × X** . Let y be
a sample from one of the distributions Y ∈Y, where Y is the range of ηκi. It is guaranteed
that the distributions in the range of ηκi have compact support, and we call b∥x ∈{0, 1} × X
a preimage of y if y is in the support of the probability distribution ηκi(b∥x).
2. The prover uses the function keys κ1, . . ., κn to ‘commit’ to the quantum state of each of the
_n qubits of ρ, and sends n commitment strings y1, . . ., yn to the verifier. We direct the reader_
to Sections 2.2 and 5.1 of [Mah18] for a high-level, and then a more detailed, description of
the commitment procedure, and explanations of how such a procedure will allow the verifier
to extract trustworthy standard and Hadamard basis measurements of the qubits of ρ. For
convenience, we summarise the procedure without justification here.
For each i, the prover concatenates to the qubit indexed by i (which we call, following [Mah18],
the ith ‘committed qubit’, and whose state we denote by⁵ |ψi⟩ = γi|0⟩ + δi|1⟩) a register (the ‘preimage register’) containing a uniform superposition over all x ∈ X (where {0, 1} × X is the
domain of the function ηκi). It also concatenates to it a register containing a superposition
⁴The letter η has been chosen because it bears some resemblance to the Latin letter h.
⁵Strictly speaking, the state of the ith committed qubit may not be a pure state, but we ignore this fact for simplicity of presentation.
over all e ∈ E, with E defined as it is in Definition 2.10. It then applies the function η*κi (see Definition 2.10) in superposition to Σ_x |ψi⟩|x⟩|e⟩, and collects the outputs in a new register (the ‘output register’), obtaining the quantum state
Σ_{x∈{0,1}^w} ( γi |0⟩|x⟩|e⟩|η*κi(0∥x, e)⟩ + δi |1⟩|x⟩|e⟩|η*κi(1∥x, e)⟩ ).
Finally, the prover measures the output register, obtains a sample yi, and sends yi to the
verifier as its ith commitment string.
3. The verifier then chooses at random to run either a test round or a Hadamard round. Each
option is chosen with probability 1/2.
(a) If a test round is chosen, the verifier requests standard basis measurements of each
committed qubit |ψi⟩ and its associated preimage register (recall that, in the previous
step, only the output registers were measured), and receives measurement results βi, xi
for i ∈ {1, . . ., n} from the prover. It then checks, for each i ∈ {1, . . ., n}, whether
the ith measurement result is a preimage of yi. (See Definition 2.11 for a definition of
the phrase ‘preimage of y’.) More precisely: if hi = 0, the verifier accepts iff yi is in
the support of gκi(βi∥xi) (recalling that gκi(βi∥xi) is a probability distribution); and if
_hi = 1, the verifier accepts iff yi is in the support of fκi(βi∥xi)._
(b) If a Hadamard round is chosen, the verifier requests Hadamard basis measurements
of each committed qubit and its associated preimage register, and receives βi, di for
i ∈ {1, . . ., n} from the prover. (We note, for future reference, that the distribution of
the strings di sent by the honest prover is the uniform distribution over {0, 1}[w].) It then
obtains the measurement results it desires using the procedure outlined in Section 2.2
of [Mah18]:
i. If hi = 0, the verifier ignores the measurement results βi, di, and uses the trapdoor
_τκi to invert yi. If the inverse of yi exists, the inversion procedure outputs a preimage_
(mi, xmi,yi), and the verifier stores mi as the standard-basis measurement outcome.
If the inverse does not exist, the verifier stores a random bit as mi and rejects. The
specification that the verifier stores a random bit is made so that the n-bit string of
measurement results, m, is well-defined even when the verifier rejects.
ii. If hi = 1, the verifier computes the two preimages x0,i and x1,i to yi under fκi
using the trapdoor τκi. It then stores βi ⊕ di · (x0,i ⊕ x1,i) as the Hadamard basis measurement outcome (see the short decoding sketch after this protocol). Once again, if either inverse does not exist, the verifier stores
a random bit as mi and rejects.
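To make the decoding in step 3(b)ii concrete, here is a small C sketch of the verifier's post-processing. It is our illustration only; the bit strings are assumed to be packed into 64-bit words.

```c
/* Illustrative sketch of the verifier's Hadamard-round decoding for one
 * qubit (step 3(b)ii); bit strings assumed packed into 64-bit words. */
#include <stdint.h>

static int parity(uint64_t v) {          /* XOR of all bits of v */
    v ^= v >> 32; v ^= v >> 16; v ^= v >> 8;
    v ^= v >> 4;  v ^= v >> 2;  v ^= v >> 1;
    return (int)(v & 1);
}

/* beta: the prover's reported bit; d, x0, x1: w-bit strings (the measured
 * string and the two preimages of y_i), each packed into 'words' words.
 * Returns beta XOR the GF(2) inner product d . (x0 XOR x1). */
int decode_hadamard(int beta, const uint64_t *d, const uint64_t *x0,
                    const uint64_t *x1, int words) {
    int acc = 0;
    for (int i = 0; i < words; i++)
        acc ^= parity(d[i] & (x0[i] ^ x1[i]));
    return beta ^ acc;
}
```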
The soundness property of this measurement protocol (Claim 7.1 in [Mah18]) is stated below. Before
we state it, we briefly introduce the notation used in [Mah18]. (For a more detailed exposition, we
refer the reader to Section 5.3 of [Mah18].) DP,h refers to the distribution over measurement results
m ∈ {0, 1}^n that the verifier obtains when it executes a Hadamard round with the prover labelled P on the basis choice h. D^C_{P,h} is the same distribution, but conditioned on the verifier accepting (in a Hadamard round). Dξ,h is the distribution over measurement outcomes in {0, 1}^n that would
result from directly measuring the quantum state ξ in the bases determined by h. ph,T and ph,H are
defined so that the verifier’s probability of accepting (on basis choice h) in a test and a Hadamard
round, respectively, are 1 − ph,T and 1 − ph,H. ∥·∥TV denotes the total variation norm, and A ≈c B
indicates that two distributions A and B are (quantum) computationally indistinguishable.
**Claim 2.12. Assume that the Learning With Errors problem (with the same choices of parameters**
_as those made in [Mah18, Section 9]) is quantum computationally intractable._ _Then, for any_
_arbitrary quantum polynomial-time prover P who executes the measurement protocol (Protocol 2.9)_
_with the honest verifier V, there exists a quantum state ξ, a prover P[′]_ _and a negligible function µ_
_such that_
∥D^C_{P,h} − DP′,h∥TV ≤ √ph,T + ph,H + µ, and
DP′,h ≈c Dξ,h.
#### 2.4 Zero-knowledge proof system for QMA ([BJSW16])
In [BJSW16], Broadbent, Ji, Song and Watrous describe a protocol involving a quantum polynomial-time verifier and an unbounded prover, interacting quantumly, which constitutes a zero-knowledge
proof system for promise problems in QMA. (Although it is sound against arbitrary provers, the
system in fact only requires an honest prover to perform quantum polynomial-time computations.)
We summarise the steps of their protocol below. For details and fuller explanations, we refer the
reader to [BJSW16, Section 3].
**Protocol 2.13 (Zero-knowledge proof system for QMA from [BJSW16]).**
_Notation. Let L be any promise problem in QMA. For a definition of the k-local Clifford Hamilto-_
_nian problem, see [BJSW16, Section 2]. The k-local Clifford Hamiltonian problem is QMA-complete_
for k = 5; therefore, for all possible inputs x, there exists a 5-local Clifford Hamiltonian H (which
can be computed efficiently from x) whose terms are all operators of the form C* |0^k⟩⟨0^k| C for some Clifford operator C, and such that
1. if x ∈ Lyes, the ground energy of H is ≤ 2^{−p},
2. if x ∈ Lno, the ground energy of H is ≥ 1/q,
for some positive integers p and q.
_Parties. The proof system involves_
1. A verifier, who implements a quantum polynomial-time procedure;
2. A prover, who is unbounded, but who is only required by the protocol to implement a quantum
polynomial-time procedure.
The verifier and the prover communicate quantumly.
_Inputs._
1. Input to the verifier:
(a) The Hamiltonian H.
(b) A quantum computationally concealing, perfectly binding (classical) commitment protocol.
(c) A proof system for NP sound against arbitrary quantum provers.
2. Input to the prover:
(a) The Hamiltonian H.
(b) The n-qubit quantum state ρ, where ρ is a ground state of the Hamiltonian H.
(c) A quantum computationally concealing, perfectly binding (classical) commitment protocol.
(d) A zero-knowledge proof system for NP sound against arbitrary quantum provers.
_Protocol._
1. The prover’s encoding step. The prover firstly applies a concatenated Steane code (which maps
every one qubit to N qubits) to each qubit in ρ. (For details on the concatenated Steane
code, see [BJSW16, Appendix A.6]. It will be important to Broadbent et al.’s purposes—and
ours, in Section 3—that this code admits transversal applications of Clifford operations.) It
then executes the following steps:
(a) Concatenate N trap qubits to the end of each logical qubit (alternatively, to the end of
each N -qubit block) in the result of applying the concatenated Steane code to ρ. Each
trap qubit is initialised uniformly at random to one of |0⟩ _, |+⟩_ _, |+y⟩.[6]_ Denote the string
that records the choices of trap qubits for all n logical qubits by t = t1, . . ., tn.
(b) Permute each 2N -tuple of qubits in the result of (a) according to a uniformly random
permutation π ∈ _S2N_ . (Note that the same permutation π is applied to every 2N -tuple.)
(c) Apply a Pauli one-time pad X^a Z^b, for uniformly random a, b ∈ {0, 1}^2nN, to the entire
2nN -qubit state.
_Remark 2.14. The number N functions as a security parameter, both for this protocol and_
for the protocol that we construct in section 3. Accordingly, we assume that N is polynomial
in the size of the verifier’s input.
The prover’s encoding applied to ρ is denoted by E(ρ), and the procedure E is fully determined by the encoding key (t, π, a, b) which the prover chose to use. At this point, the prover
sends the state E(ρ) to the verifier, along with a commitment (using some perfectly binding,
computationally concealing classical commitment protocol) to the tuple (π, a, b). (A commitment to the sequence of trap qubits t is unnecessary because, in a sense, the trap qubits exist
only to check the verifier.) Let the prover’s commitment string be denoted z.
2. Coin-flipping protocol. The prover and the verifier execute a coin-flipping protocol, choosing
a string r of fixed length uniformly at random. This random string r determines a local
Hamiltonian term Hr = Cr*|0^k⟩⟨0^k|Cr that is to be tested. (This step can be implemented,
of course, using the same classical commitment protocol that the prover employed in the
previous step.)
3. Verifier’s challenge. The verifier applies the Clifford Cr transversally to the qubits on which
the k-local Hamiltonian term Hr acts nontrivially, and measures them in the standard basis.
It then sends the measurement results ui1, . . ., uik which it obtained to the prover. (Each ui
is a 2N -bit string, and i1, . . ., ik are the indices of the logical qubits on which the term Hr
acts nontrivially.)
6 |+y⟩ here refers to the state (1/√2)(|0⟩ + i |1⟩).
4. Prover’s response. The prover receives the verifier’s measurement results u, and firstly checks
whether they cause a predicate Q(t, π, a, b, r, u) to be satisfied. (We will explain the predicate
_Q in more detail shortly. Intuitively, Q is satisfied if and only if both verifier and prover_
behaved honestly.) If Q is not satisfied, the prover aborts, causing the verifier to reject. If Q
is satisfied, then the prover proves to the verifier, using an NP zero-knowledge protocol, that
there exists randomness sp and an encoding key (t, π, a, b) such that z = commit((π, a, b), sp)
and Q(t, π, a, b, r, u) = 1.
We now describe the predicate Q in precise terms. For convenience, Broadbent et al. define
a predicate Rr, which represents the prover’s check after it reverses the effects on u of the
one-time pad X _[a]Z[b], and then proceed to define Q in terms of Rr. Since we will shortly_
have cause to alter the definition of Rr, we quote the definition of Rr used in [BJSW16] for
comparison. (Note that we have altered their notation slightly: the strings that they call yi
we have called qi, to avoid a conflict.)
**Definition 2.15 (Definition of Rr). Let ui1, . . ., uik be the measurement results that the**
verifier sent to the prover in step 3. For each i ∈ {i1, . . ., ik}, define strings qi, zi in {0, 1}^N
such that π(qi∥zi) = ui (alternatively: π^−1(ui) = qi∥zi). The predicate Rr(t, π, u) takes the
value 1 if and only if the following two conditions are met:
(a) qi ∈ DN for every i ∈ {i1, . . ., ik}, and qi ∈ D^1_N for at least one index i ∈ {i1, . . ., ik}.
(DN = D^0_N ∪ D^1_N is the set of all valid classical N -bit codewords of the concatenated
Steane code. We note, as an aside, that D^0_N and D^1_N are both sets of size 8^t, where t is
an even positive integer such that 7^t = N ; as such, DN is polynomially sized.)
(b) ⟨zi1 · · · zik| Cr^⊗N |ti1 · · · tik⟩ ≠ 0.
Now we define the predicate Q(t, π, a, b, r, u) in terms of Rr:
**Definition 2.16 (Definition of Q). Let c1, . . ., cn, d1, . . ., dn ∈ {0, 1}^2N** be the unique strings
such that

    Cr^⊗2N (X^a1 Z^b1 ⊗ · · · ⊗ X^an Z^bn) = α (X^c1 Z^d1 ⊗ · · · ⊗ X^cn Z^dn) Cr^⊗2N

for some α ∈ {1, i, −1, −i}. (It is possible to efficiently compute c = c1, . . ., cn and d =
d1, . . ., dn given a, b and Cr.) The predicate Q is then defined by

    Q(t, π, a, b, r, u) = Rr(t, π, u ⊕ ci1 · · · cik) .
#### 2.5 Replacing Clifford verification with XZ verification in Protocol 2.13
The authors of [BJSW16] introduce a zero-knowledge proof system which allows the verifier to
determine whether the prover holds a state that has sufficiently low energy with respect to a k-local Clifford Hamiltonian (see Section 2 of [BJSW16]). In this section, we modify their proof
system so that it applies to an input encoded as an instance of the XZ local Hamiltonian problem
(Definition 2.3) rather than as an instance of the Clifford Hamiltonian problem.
Before we introduce our modifications, we explain why it is necessary in the first place to alter the
proof system presented in [BJSW16]. Modulo the encoding E which the prover applies to its state
in Protocol 2.13, the quantum verifier from the same protocol is required to perform a projective
measurement of the form {Π = C*|0^k⟩⟨0^k|C, Id − Π} of the state that the prover sends it (where
_C is a Clifford unitary acting on k qubits) and reject if it obtains the first of the two possible_
outcomes. Due to the properties of Clifford unitaries, this action is equivalent to measuring k
commuting k-qubit Pauli observables C*ZiC for i ∈ {1, . . ., k} (where Zi is a Pauli σZ observable
acting on the ith qubit), and rejecting if all of said measurements result in the outcome +1.
Our goal is to replace the quantum component of the verifier’s actions in Protocol 2.13—a component which, fortunately, consists entirely of performing the projective measurement just described—with the measurement protocol introduced in [Mah18] (summarized as Protocol 2.9).
Unfortunately, the latter protocol 1. only allows for standard and Hadamard basis measurements,
and 2. does not accommodate a verifier who wishes to perform multiple successive measurements
on the same qubit: for each qubit that the verifier wants to measure, it must decide on a measurement basis (standard or Hadamard) prior to the execution of the protocol, and once made its
choices are fixed for the duration of its interaction with the prover. This allows the verifier to, for
example, obtain the outcome of a measurement of the observable C[∗]ZiC for some particular i, by
requesting measurement outcomes of all k qubits in the appropriate basis and taking the product of
the outcomes obtained. However, it is not obvious how the same verifier could request the outcome
of measuring a k-tuple of commuting Pauli observables which all act on the same k qubits.
To circumvent this technical issue, we replace the Clifford Hamiltonian problem used in [BJSW16]
with the QMA-complete XZ Hamiltonian problem. The advantage of this modification is that it
becomes straightforward to implement the required energy measurements using the measurement
protocol from [Mah18]. In order to make the change, we require that the verifier’s measurements
act on a linear, rather than a constant, number of qubits with respect to the size of the problem
input.
A different potentially viable modification to the proof system of [BJSW16] is as follows. Instead of
replacing Clifford Hamiltonian verification with XZ Hamiltonian verification, we could also repeat
the original Clifford-Hamiltonian-based protocol a polynomial number of times. In such a scheme,
the honest prover would hold m copies of the witness state (as it does in Protocol 2.6). The verifier,
meanwhile, would firstly choose a random term Cr*|0^k⟩⟨0^k|Cr from the Clifford Hamiltonian, and
then select m random Pauli observables of the form Cr*ZiCr—where Cr is the particular Cr which
it picked—to measure. (For each repetition, i would be chosen independently and uniformly at
random from the set {1, . . ., k}.) The verifier would accept if and only if the number of times
it obtains −1 from said Pauli measurements is at least m/(2k). This approach is very similar to the
approach we take for XZ Hamiltonians (which we explain below), and in particular also fails to
preserve the perfect completeness of the original protocol in [BJSW16]. For simplicity, we choose
the XZ approach. We now introduce the alterations which are necessary in order to make it viable.
Firstly, we require that the honest prover possesses polynomially many copies of the witness state
_σ, instead of one. We do this because we want the honest verifier to accept the honest prover_
with probability exponentially close to 1, which is not naturally true in the verification procedure
for 2-local XZ Hamiltonians presented by Morimae and Fitzsimons in [MF16], but which is true
in our amplified variant, Protocol 2.6. Secondly, we need to modify the verifier’s conditions for
acceptance. In [BJSW16], as we have mentioned, these conditions are represented by a predicate
_Q (that in turn evaluates a predicate Rr; see Definitions 2.15 and 2.16)._
We now describe our alternative proof system for QMA, and claim that it is zero-knowledge.
Because the protocol is very similar to the protocol from [BJSW16], this can be seen by following
the proof of zero-knowledge in [BJSW16], and noting where our deviations require modifications to
the reasoning. On the other hand, we do not argue that the proof system is complete and sound,
as we do not need to make explicit use of these properties. (Intuitively, however, the completeness
and the soundness of the proof system follow from those of Protocol 2.6, and the soundness of the
latter is a property which we will use.)
**Protocol 2.17 (Alternative proof system for QMA).**
_Notation. Refer to notation section of Protocol 2.6._
_Parties. The proof system involves_
1. A verifier, who implements a quantum polynomial-time procedure;
2. A prover, who is unbounded, but who is only required by the protocol to implement a quantum
polynomial-time procedure.
The verifier and the prover communicate quantumly.
_Inputs._
1. Input to the verifier:
(a) The Hamiltonian H, and the numbers a and b.
(b) A quantum computationally concealing, perfectly binding (classical) commitment protocol.
(c) A proof system for NP sound against arbitrary quantum provers.
2. Input to the prover:
(a) The Hamiltonian H, and the numbers a and b.
(b) The n-qubit quantum state ρ = σ^⊗n, where σ is the ground state of the Hamiltonian H.
(c) A quantum computationally concealing, perfectly binding (classical) commitment protocol.
(d) A zero-knowledge proof system for NP sound against arbitrary quantum provers.
_Protocol._
1. Prover’s encoding step: The same as the prover’s encoding step in Protocol 2.13, except
that t ∈ {0, +}^N rather than {0, +, +y}^N. (This change will be justified in the proof of
Lemma 2.20.)
2. Coin flipping protocol: Unmodified from Protocol 2.13, except that r = (r1, . . ., rm) represents
the choice of m terms from the 2-local XZ Hamiltonian H (with the choices being made as
described in step 2 of Protocol 2.6) instead of a random term from a Clifford Hamiltonian.
Note that r determines the indices of the 2m logical qubits which the verifier will measure in
step 3.
3. Verifier’s challenge: The same as the verifier’s challenge in Protocol 2.13, except that the
verifier now applies Ur transversally instead of Cr. (See item 2(c) in Definition 2.18 below for
the definition of Ur.)
4. Prover’s response: The same as Protocol 2.13 (but note that the predicate Q, which the
prover checks and then proves is satisfied, is the Q described in Definition 2.19 below).
**Definition 2.18 (Redefinition of Rr). Let i1, . . ., i2m be the indices of the logical qubits which**
were chosen for measurement in step 2 of Protocol 2.17, ordered by their corresponding js (so that
_i1 and i2 are the qubits that were measured in order to determine whether Hs1 was satisfied, and_
so on). Let ui1, . . ., ui2m be the 2N -bit strings which the verifier claims are the classical states
that remained after said measurements were performed, and for each i ∈ {i1, . . ., i2m}, define N -bit strings qi, zi such that π(qi∥zi) = ui (alternatively: π^−1(ui) = qi∥zi). In Protocol 2.17, the
predicate Rr(t, π, u) takes the value 1 if and only if the following conditions are met:
1. qi ∈ DN for every i ∈ {i1, . . ., i2m}.
2. The number Count/m (where Count is obtained by executing the following procedure) is closer
to 1/2 − a/(2 Σs |ds|) than to 1/2 − b/(2 Σs |ds|).
(a) Initialise Count to 0.
(b) For each j ∈ {1, . . ., m}: Suppose that Hsj = dj P1 P2, for some P1, P2 ∈ {σX, σZ}. The
tuple (P1, u2j−1, P2, u2j) determines a 'logical' measurement result that could equally
have been obtained by measuring Hsj on σ, where σ is the unencoded witness state. We
denote this measurement result by λ. If λ = −sign(dj), add one to Count.
(c) Let Ur be the circuit obtained from the following procedure:
i. For each j ∈ {1, . . ., m}, replace any σX's in the term Hsj with H (Hadamard) gates,
and replace any σZ's in Hsj with I. (For example, if Hsj = πj σX,ℓ1 σZ,ℓ2, where the
second subscript denotes the index of the qubit on which the observable in question
acts, then Uj = Hℓ1 Iℓ2, where the subscripts ℓ1 and ℓ2 once again denote the
indices of the qubits on which the gates H and I act.)
ii. Apply Uj to the qubits indexed (j − 1)n + 1 through jn.
It must then be the case that ⟨zi1 · · · zi2m| Ur^⊗N |ti1 · · · ti2m⟩ ≠ 0 (where each ti is an
_N_ -bit string that represents the pattern of trap qubits which was concatenated to the
_ith logical qubit during step 1 of Protocol 2.17)._
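To make the acceptance condition concrete, the following sketch implements condition 2 directly; the helper `logical_outcome`, which returns the decoded λ ∈ {+1, −1} for the jth sampled term, is a hypothetical stand-in for the decoding described in step (b).

```python
import math

def condition_two_holds(d_sampled, logical_outcome, a, b, total_weight):
    """d_sampled: coefficients d_{s_j} of the m sampled terms; total_weight = Σ_s |d_s|
    over all terms of H. Implements condition 2: Count/m must be closer to
    1/2 - a/(2 Σ_s |d_s|) than to 1/2 - b/(2 Σ_s |d_s|)."""
    m = len(d_sampled)
    count = sum(1 for j, d in enumerate(d_sampled)
                if logical_outcome(j) == -math.copysign(1, d))
    a_target = 0.5 - a / (2 * total_weight)
    b_target = 0.5 - b / (2 * total_weight)
    return abs(count / m - a_target) < abs(count / m - b_target)
```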
**Definition 2.19 (Redefinition of Q). Let c1, . . ., cn, d1, . . ., dn ∈ {0, 1}^2N** be the unique strings
such that

    Ur^⊗2N (X^a1 Z^b1 ⊗ · · · ⊗ X^an Z^bn) = α (X^c1 Z^d1 ⊗ · · · ⊗ X^cn Z^dn) Ur^⊗2N

for some α ∈ {1, i, −1, −i}. (It is possible to efficiently compute c = c1, . . ., cn and d = d1, . . ., dn
given a, b and Ur. In particular, recalling that Ur is a tensor product of H and I gates, we have
that ci = ai and di = bi for all i such that the ith gate in Ur^⊗2N is I, and ci = bi, di = ai for all i
such that the ith gate in Ur^⊗2N is H.) The predicate Q is then defined by

    Q(t, π, a, b, r, u) = Rr(t, π, u ⊕ ci1 · · · cik),
where Rr is as in Definition 2.18.
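The parenthetical key-update rule is mechanical enough to state as code; a short sketch (names illustrative):

```python
def propagate_pad_keys(a, b, gates):
    """Conjugating X^{a_i} Z^{b_i} through the ith gate of U_r^{⊗2N}: an I gate
    fixes the key pair, while an H gate swaps it (HXH = Z, HZH = X)."""
    c = [b_i if g == 'H' else a_i for g, a_i, b_i in zip(gates, a, b)]
    d = [a_i if g == 'H' else b_i for g, a_i, b_i in zip(gates, a, b)]
    return c, d
```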
**Lemma 2.20. The modified proof system for QMA in Protocol 2.17 is computationally zero-**
_knowledge for quantum polynomial-time verifiers._
_Proof. We follow the argument from [BJSW16, Section 5]. Steps 1 to 3 only make use of the security_
of the coin-flipping protocol, the security of the commitment scheme, and the zero-knowledge
properties of the NP proof system, none of which we have modified. Step 4 replaces the real
witness state ρ with a simulated witness ρr that is guaranteed to pass the challenge indexed by
_r; this we can do also (see Remark 2.8). Step 5 uses the Pauli one-time-pad to twirl the cheating_
verifier, presuming that the honest verifier would have applied a Clifford term indexed by r before
measuring. We note that, since Ur is a Clifford, the same reasoning applies to our modified proof
system.
Finally, using the fact that the Pauli twirl of step 5 restricts the cheating verifier to XOR attacks,
step 6 from [BJSW16, Section 5] proves the following statement: if the difference |p0 − p1| is
negligible (where p0 and p1 are the probabilities that ρ and ρr respectively pass the verifier’s test in
an honest prover-verifier interaction indexed by r), then the channels Ψ0 and Ψ1 implemented by the
cheating verifier in each case are also quantum computationally indistinguishable. It follows from
this statement that the protocol is zero-knowledge, since, in an honest verifier-prover interaction
indexed by r, ρr would pass with probability 1, and ρ would pass with probability 1 − negl(N ).
(This latter statement is true both in their original and in our modified protocol.) The argument
presented in [BJSW16] considers two exclusive cases: the case when |v|1 < K, where v is the string
that the cheating verifier XORs to the measurement results, |v|1 is the Hamming weight of that
string, and K is the minimum Hamming weight of a nonzero codeword in DN ; and the case when
_|v|1 ≥_ _K. The analysis in the former case translates to Protocol 2.17 without modification, but in_
the latter case it needs slight adjustment.
In order to address the case when |v|1 ≥ _K, Broadbent et al. use a lemma which—informally—_
states that the action of a Clifford on k qubits, each of which is initialised uniformly at random
to one of |0⟩, |+⟩, or |+y⟩, has at least a 3^−k chance of leaving at least one out of k qubits in
a standard basis state. We may hesitate to replicate their reasoning directly, because our k (the
number of qubits on which our Hamiltonian acts) is not a constant. While it is possible that a
mild modification suffices to overcome this problem, we note that in our case there is a simpler
argument for an analogous conclusion: since Ur is a tensor product of only H gates and I gates, it
is straightforward to see that, if each of the 2m qubits on which it acts is initialised either to 0
_|_ _⟩_
or to +, then 1) each of the 2m qubits has exactly a 50% chance of being left in a standard basis
_|_ _⟩_
state, and 2) the states of these 2m qubits are independent.
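The four cases can be checked exhaustively; a throwaway sketch (membership in '01' encodes "is a standard basis state"):

```python
# H maps |0> -> |+> and |+> -> |0>; I fixes both.
OUT = {('I', '0'): '0', ('I', '+'): '+', ('H', '0'): '+', ('H', '+'): '0'}
for gate in 'IH':
    standard = [OUT[(gate, init)] in '01' for init in '0+']
    assert sum(standard) == 1  # exactly one of the two initialisations survives
```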
Now we consider the situation where a string v = v1 v2 · · · v2m, of length 4mN and of Hamming
weight at least K, is permuted (‘permuted’, here, means that π ∈ _S2N is applied to each vi_
individually) and then XORed to the result of measuring 4mN qubits (2m blocks of 2N qubits each)
in the standard basis after Ur has been transversally applied to those qubits. It is straightforward
to see, by an application of the pigeonhole principle, that there must be at least one vi whose
Hamming weight is ≥ K/2m. Consider the result of XORing this vi to its corresponding block of
measured qubits. Half of the 2N qubits in that block would originally have been encoding qubits,
and half would have been trap qubits; half again of the latter, then, would have been trap qubits
left in a standard basis state by the transversal action of Ur. As such, the probability that none
of the 1-bits of vi are permuted into positions which are occupied by the latter kind of qubit is
(3/4)^{K/2m}, which is negligibly small as long as K is made to be a higher-order polynomial in N than
2m is. The remainder of the argument in [BJSW16, Section 5] follows directly.
### 3 The protocol
In this section, we present our construction of a zero-knowledge argument system for QMA. Our argument system allows a classical probabilistic polynomial-time verifier and a quantum polynomial-time prover to verify that any problem instance x belongs to any particular language L ∈ QMA,
provided that the prover has access to polynomially many copies of a valid quantum witness for an
instance of the 2-local XZ local Hamiltonian problem to which x is mapped by the reduction implicit in Theorem 2.5. The argument system is sound (against quantum polynomial-time provers)
under the following assumptions:
**Assumptions 3.1.**
1. The Learning With Errors problem (LWE) [Reg09] is quantum computationally intractable.
(Specifically, we make the same assumption about the hardness of LWE that is made
in [Mah18, Section 9] in order to prove the soundness of the measurement protocol.)
2. There exists a commitment scheme (gen, initiate, commit, reveal, verify) of the form described in
Appendix C that is unconditionally binding and quantum computationally concealing. (This
assumption is necessary to the soundness of the proof system presented in [BJSW16].) It is
known that a commitment scheme with the properties required can be constructed assuming
the quantum computational hardness of LWE [CVZ19], although the parameters may be
somewhat different from those required for soundness.
The following exposition of our protocol relies on definitions from Section 2, and we encourage the
reader to read that section prior to approaching this one. We also direct the reader to Figures 1
and 2 for diagrams that chart the protocol’s structure.
**Protocol 3.2. Zero-knowledge, classical-verifier argument system for QMA.**
Notation. Let L be any promise problem in QMA, and let (H = Σ_{s=1}^S ds Hs, a, b) be an instance of
the 2-local XZ Hamiltonian problem to which L can be reduced (see Definition 2.3 and Theorem 2.5).
Define

    πs = |ds| / Σs |ds| .
Following [BJSW16], we take the security parameter for this protocol to be N, the number of
qubits in which the concatenated Steane code used during the encoding step of the protocol (step
1) encodes each logical qubit. We assume, accordingly, that N is polynomial in the size of the
problem instance x.
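For concreteness, the sampling of m Hamiltonian terms with the probabilities πs (mirroring step 2 of Protocol 2.6) might look as follows; the use of Python's random module is purely illustrative, since in the protocol the randomness comes from the coin-flipping step.

```python
import random

def sample_terms(d, m, rng=random):
    """Draw m term indices i.i.d., choosing term s with probability
    pi_s = |d_s| / sum_s |d_s|."""
    return rng.choices(range(len(d)), weights=[abs(ds) for ds in d], k=m)
```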
_Parties._
The protocol involves
1. A verifier, which runs in classical probabilistic polynomial time;
2. A prover, which runs in quantum polynomial time.
_Inputs. The protocol requires the following primitives:_
• A perfectly binding, quantum computationally concealing commitment protocol (gen, initiate,
commit, reveal, verify) (which will be used twice: once for the prover's commitment in step 2,
and then again for the coin-flipping protocol in step 3). We assume that this commitment
protocol is of the form described in Appendix C.
• A zero-knowledge proof system for NP.
• An extended trapdoor claw-free function family (ETCFF family), as defined in [Mah18].
(Note that we fall short of using the ETCFF family as a black box: for the trapdoor check of
step 8, we rely on the specific properties of the LWE-based construction of an ETCFF family
that [Mah18] provides. See Appendix A for details.)
Apart from the above cryptographic primitives, we assume that the verifier and the prover also
receive the following inputs.
1. Input to the verifier: the Hamiltonian H and the numbers a and b.
2. Input to the prover: the Hamiltonian H, the numbers a and b, and the quantum state
ρ = σ^⊗m, where σ is a ground state of the Hamiltonian H.
_Protocol._
1. The prover encodes the witness. The prover encodes the quantum witness ρ by applying the
following steps:
(a) Apply the concatenated Steane code
(b) Concatenate trap qubits |t⟩
(c) Apply a random permutation π
(d) Apply a Pauli one-time-pad X^a Z^b
The encoding process here is the same as that from step 1 of Protocol 2.17; we direct the reader
to Protocol 2.17, and the Protocol 2.13 to which it refers, for a more detailed explanation of
the steps. Denote the application of the prover’s encoding to the state ρ by E(ρ).
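The key material for E is entirely classical; a sketch of its sampling (illustrative names, with trap qubits restricted to {0, +} as required by Protocol 2.17):

```python
import random

def sample_encoding_key(n, N, rng=random):
    """Sample (t, pi, a, b): a trap pattern in {0,+}^N for each of the n logical
    qubits, one permutation pi in S_{2N} shared by all 2N-qubit blocks, and
    2nN-bit X- and Z-keys for the Pauli one-time pad."""
    t = [[rng.choice('0+') for _ in range(N)] for _ in range(n)]
    pi = list(range(2 * N))
    rng.shuffle(pi)
    a = [rng.randint(0, 1) for _ in range(2 * n * N)]
    b = [rng.randint(0, 1) for _ in range(2 * n * N)]
    return t, pi, a, b
```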
2. The prover commits to its encoding keys. The prover commits to the strings (π, a, b) from
the previous step, using randomness sp. Call the prover’s commitment string z, so that
_z = commit((π, a, b), sp)._
3. The verifier and the prover execute the first half of a two-stage coin-flipping protocol.[7] The
verifier commits to rv, its part of the random string that will be used to determine which
random terms in the Hamiltonian H it will check in subsequent stages of the protocol. Let
_c = commit(rv, sv). The prover sends the verifier rp, which is its own part of the random_
string. The random terms will be determined by r = rv ⊕ rp. (r is used to determine these
terms in the same way that r is used in Protocol 2.6.)
7 We need to execute the coin-flipping protocol in two stages because, in our (classical-verifier) protocol, the prover cannot physically send the quantum state E(ρ) to its verifier before the random string r is decided, as the prover of Protocol 2.13 does. If we allow our prover to know r at the time when it performs measurements on the witness ρ, it will trivially be able to cheat.
4. The verifier initiates the measurement protocol. _(Refer to Protocol 2.9 for an outline of_
_the steps in said measurement protocol.) The verifier chooses the measurement bases h =_
h1 · · · h2nN in which it wishes to measure the state E(ρ). 2kN out of the 2nN bits of h—
corresponding to k logical qubits—are chosen so that the verifier can determine whether σ
satisfies the Hamiltonian terms specified by r = rv ⊕ _rp. In our particular case, k = 2m,_
where m is the number of Hamiltonian terms that the verifier will check are satisfied. For
the remaining qubits i, the verifier sets hi to 0. The verifier sends the function keys κ =
_κ1, . . ., κ2nN to the prover._
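A sketch of how h might be assembled. The convention that a σX observable is read out in the Hadamard basis (hi = 1) and a σZ observable in the standard basis (hi = 0) is our assumption here, chosen to match the H/I dictionary of Definition 2.18; the layout of logical qubits into contiguous 2N-bit blocks is likewise illustrative.

```python
def build_basis_string(n, N, measured):
    """measured: dict mapping a logical-qubit index to 'X' or 'Z'. Returns the
    2nN-bit string h; all bits of unmeasured qubits are set to 0."""
    h = [0] * (2 * n * N)
    for q, pauli in measured.items():
        bit = 1 if pauli == 'X' else 0   # Hadamard basis for X, standard for Z
        for j in range(2 * N):
            h[q * 2 * N + j] = bit
    return h
```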
5. The prover commits to its encoded witness state, as per the measurement protocol. The prover
commits to the quantum state E(ρ) by concatenating a preimage register to each qubit in
_E(ρ), applying the functions specified by κ1, . . ., κ2nN in superposition as Protocol 2.9 de-_
scribes, measuring the resulting output superpositions, and sending the outcomes y1, . . ., y2nN
to the verifier.
6. The verifier chooses at random to run either a test round or a Hadamard round. Each option
is chosen with probability 1/2.
(a) If a test round is chosen, the verifier requests standard basis measurements of each
committed qubit |ψi⟩ in E(ρ) and its associated preimage register, and receives measurement results βi, xi for i ∈{1, . . ., 2nN _} from the prover. It then checks, for each_
_i ∈{1, . . ., 2nN_ _}, whether the ith measurement result is a preimage of yi. (See Def-_
inition 2.11 for a definition of the phrase ‘preimage of y’.) More precisely: if hi = 0,
the verifier accepts iff yi is in the support of gκi(βi∥xi) (recalling that gκi(βi∥xi) is a
probability distribution); and if hi = 1, the verifier accepts iff yi is in the support of
_fκi(βi∥xi)._
(b) If a Hadamard round is chosen, the verifier requests Hadamard basis measurements
of each committed qubit and its associated preimage register, and receives βi, di for
_i_ 1, . . ., 2nN from the prover. It then obtains the measurement results it desires
_∈{_ _}_
using the procedure outlined in Section 2.2 of [Mah18]:
i. If hi = 0, the verifier ignores the measurement results βi, di, and uses the trapdoor
_τκi to invert yi. The inversion procedure outputs a preimage (mi, xmi,yi); the verifier_
stores mi as the standard-basis measurement outcome. (If the inverse does not exist,
the verifier stores a random bit as mi and rejects.)
ii. If hi = 1, the verifier computes the two preimages x0,i and x1,i to yi under fκi
using the trapdoor τκi. It then stores βi ⊕ di · (x0,i ⊕ x1,i) as the Hadamard basis
measurement outcome. (Once again, if either inverse does not exist, the verifier
stores a random bit as mi and rejects.)
7. The verifier opens its commitment, and sends the prover its measurement outcomes and function trapdoors. If a test round was chosen in the previous step, the verifier now accepts the
prover directly, provided that the preimage test was satisfied for all i. (If not all the preimage tests passed, the verifier directly rejects the prover.) If a Hadamard round was chosen,
the verifier instead opens its commitment to rv and sends its measurement outcomes
m = m1, . . ., m2nN to the prover, along with its trapdoors τκ1, . . ., τκ2nN.
8. The prover checks that the verifier’s measurement outcomes and trapdoors were consistent; it
_then proves that the verifier’s measurement outcomes were consistent. The prover checks that_
_τκ1, . . ., τκ2nN are indeed trapdoors for the functions specified by κ1, . . ., κ2nN_, and also that
_κ1, . . ., κ2nN are valid ETCFF keys, using the procedure described in Protocol A.2. It also_
defines u = ui1 · · · ui2m (each ui is 2N bits long) = mℓ1 · · · mℓ4mN, where ℓ1, . . ., ℓ4mN are the
indices of the qubits on which Ur^⊗2N acts nontrivially, and checks that u causes the predicate
_Q(t, π, a, b, r, u) to be satisfied. (The Q we refer to here is the Q of Definition 2.19. We define_
_Ur in the same way that Ur was defined in Definition 2.18.) If either of these tests fails, the_
prover aborts. If both tests pass, then the prover proves, using an NP zero-knowledge proof
system,[8] that the verifier’s outcomes are consistent in the following sense:
The verifier’s outcomes u are consistent if there exists a string sp and an encoding key
(t, π, a, b) such that z = commit((π, a, b), sp) and Q(t, π, a, b, r, u) = 1.
Figure 1: Diagrammatic representation of an honest execution of Protocol 3.2. We omit communication between
the different parts of the prover for neatness, and we also omit the initial messages i (see Appendix C) from
executions of the perfectly binding, quantum computationally concealing commitment protocol which we refer to
in Assumptions 3.1. The blue parts of the diagram indicate what occurs in the case of a test round, and the red
parts indicate what occurs in the case of a Hadamard round.
8It was shown in [Wat09] that the second item in Assumptions 3.1 suffices to guarantee the existence of a proof
system for languages in NP that is zero-knowledge against quantum polynomial-time verifiers. Our proof that our
protocol is zero-knowledge for classical verifiers only requires that the NP proof system used here is (likewise) zero-knowledge against classical verifiers; however, it becomes necessary to require post-quantum security of this proof
system if we want our protocol also to be zero-knowledge for potentially quantum malicious verifiers.
Figure 2: Diagrammatic representation of Protocol 3.2 with a cheating verifier. The cheating verifier V* may
take some (classical) auxiliary input Z0, store auxiliary information (represented by Z1 and Z2), and produce a
final output Z3 that deviates from that specified by the protocol.
### 4 Completeness of protocol
**Lemma 4.1. Suppose that the instance x = (H, a, b) of the 2-local XZ Hamiltonian problem that is**
_provided as input to the verifier and prover in Protocol 3.2 is a yes-instance, i.e. the ground energy_
_of H is smaller than a. Then, the probability that the honest verifier accepts after an interaction_
with the honest prover in Protocol 3.2 is 1 − µ(|x|), for some negligible function µ.
_Proof. The measurement protocol outlined in section 2.3 has the properties that_
1. for any n-qubit quantum state ρ and for any choice of measurement bases h, the honest prover
is accepted by the honest verifier with probability 1 − negl(n), and
2. the distribution of measurement outcomes obtained by the verifier from an execution of the
measurement protocol (the measurement outcomes mi in step 6(b) of Protocol 3.2) is negligibly close in total variation distance to the distribution that would have been obtained by
performing the appropriate measurements directly on ρ.
These properties are stated in Claim 5.3 of [Mah18]. It is evident (assuming the NP zero-knowledge
proof system has perfect completeness) that if the verifier of Protocol 3.2 had obtained the outcomes m through direct measurement of ρ, it would accept with exactly the same probability with
which the verifier of Protocol 2.6 would accept ρ = σ^⊗n. By Claim 2.7, this latter probability is
exponentially close to 1. Lemma 4.1 follows.
### 5 Soundness of protocol
Let the honest verifier of the argument system in Protocol 3.2 be denoted V, and let an arbitrary
quantum polynomial-time prover with which V interacts be denoted P. For this section, we will
require notation from Section 5.3 of [Mah18], the proof of Theorem 8.6 of the same paper, and
Section 4 of [BJSW16]. We will by and large introduce this notation as we proceed (and some
of it has been introduced already in Sections 2.3 and 2.4, the sections containing outlines of the
measurement protocol from [Mah18] and the zero-knowledge proof system from [BJSW16]), but
the reader should refer to the above works if clarification is necessary.
We begin by making some preliminary definitions and proving a claim, from which the soundness
of Protocol 3.2 (Lemma 5.6) will naturally follow. Firstly, we introduce some notation from Section
4 of [BJSW16]:
**Definition 5.1 (Projection operators Π0 and Π1). Define N as it is defined in Protocol 3.2. Let**
D^0_N be the set of bitstrings x such that the encoding of |0⟩ under the concatenated Steane code of
Protocol 2.13 (or of Protocol 3.2) is Σ_{x∈D^0_N} |x⟩, and let D^1_N likewise be the set of bitstrings x such
that the encoding of |1⟩ under the concatenated Steane code is Σ_{x∈D^1_N} |x⟩. (See Definition 2.15,
and Section A.6 of [BJSW16], for details about the concatenated Steane code. The first condition
in Definition 2.15 will provide some motivation for the following definitions of Π0 and Π1.) Define

    Π0 = Σ_{x∈D^0_N} |x⟩⟨x| ,    Π1 = Σ_{x∈D^1_N} |x⟩⟨x| .
**Definition 5.2 (Projection operators ∆0 and ∆1). Define N as it is defined in Protocol 3.2. Let**
∆0 and ∆1 be the following projection operators:
    ∆0 = (I^⊗N + Z^⊗N)/2 ,    ∆1 = (I^⊗N − Z^⊗N)/2 .

∆0 is the projection onto the space spanned by all even-parity computational basis states, and ∆1
is its equivalent for odd-parity basis states. Note that, since all the codewords in D^0_N have even
parity, and all the codewords in D^1_N have odd parity, it holds that Π0 ≤ ∆0 and that Π1 ≤ ∆1.
**Definition 5.3 (The quantum channel Ξ). Define a quantum channel mapping N qubits to one**
qubit as follows:
    ΞN(σ) = ( ⟨I^⊗N, σ⟩ I + ⟨X^⊗N, σ⟩ X + ⟨Y^⊗N, σ⟩ Y + ⟨Z^⊗N, σ⟩ Z ) / 2 .
Loosely, ΞN can be thought of as a simplification of the decoding operator to the concatenated
Steane code that the honest prover applies to its quantum witness in Protocol 2.13 (or in Protocol
3.2). Its adjoint is specified by
    Ξ*N(σ) = ( ⟨I, σ⟩ I^⊗N + ⟨X, σ⟩ X^⊗N + ⟨Y, σ⟩ Y^⊗N + ⟨Z, σ⟩ Z^⊗N ) / 2 ,
and has the property that
    Ξ*N(|0⟩⟨0|) = ∆0 ,    Ξ*N(|1⟩⟨1|) = ∆1 ,
a property which we will shortly use.
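To see the property, note that ⟨X, |0⟩⟨0|⟩ = ⟨Y, |0⟩⟨0|⟩ = 0 while ⟨I, |0⟩⟨0|⟩ = ⟨Z, |0⟩⟨0|⟩ = 1, so the formula for Ξ*N gives

    Ξ*N(|0⟩⟨0|) = (I^⊗N + Z^⊗N)/2 = ∆0 ;

the |1⟩⟨1| case is identical except that ⟨Z, |1⟩⟨1|⟩ = −1, which flips the sign of the Z^⊗N term and yields ∆1.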
Let z be prover P’s commitment string from step 2 of Protocol 3.2. Because the commitment
protocol is perfectly binding, there exists a unique, well-defined tuple (π, a, b) and a string sp such
that z = commit((π, a, b), sp).
**Definition 5.4. For notational convenience, we define a quantum procedure M on a 2nN** -qubit
state ρ as follows:
1. Apply X^a Z^b to ρ, to obtain a state ρ′.
2. Apply π^−1 to each 2N -qubit block in the state ρ′, to obtain a state ρ′′.
3. Discard the last N qubits of each 2N -qubit block in ρ′′, to obtain a state ρ′′′.
4. To each N -qubit block in ρ′′′, apply the map ΞN.
We also define the procedure M̃ as the application of the first three steps in M, again for notational
convenience.
Intuitively, we think of M as an inverse to the prover’s encoding procedure E. M may not actually
invert the prover’s encoding procedure, if the prover lied about the encoding key that it used when
it sent the verifier z = commit((π, a, b), sp); however, this is immaterial.
We now prove a claim from which the soundness of Protocol 3.2 will follow. Before we do so,
however, we make a remark about notation for clarity. When we write ‘V accepts the distribution
_Dξ,h with probability p’ (or similar phrases), we mean that, in [Mah18]’s notation from section 8.2,_
    Σ_{h∈{0,1}^2nN} vh (1 − p̃h(Dξ,h)) = p .

Here, h represents the verifier's choice of measurement bases, as before; vh is the probability that
the honest verifier will select the basis choice h, and 1 − p̃h(D) is defined, for any distribution D
over measurement outcomes m ∈ {0, 1}^2nN, as the probability that the honest verifier will accept
a string drawn from D on basis choice h. (When we refer to the latter probability, we assume,
following [BJSW16, Section 4], that the prover behaves optimally—in terms of maximising the
verifier’s eventual probability of acceptance—after the verifier sends it measurement outcomes at
the end of step 6 in Protocol 3.2. For the purposes of the present soundness analysis, therefore,
we can imagine that the verifier checks the predicate Q itself after step 6, instead of relying on the
prover to prove to it during step 8 that Q is satisfied.)
**Claim 5.5. Suppose there exists a quantum state ξ such that the honest verifier V accepts the**
_distribution Dξ,h with probability p. Then the state M_ (ξ) is accepted by the verifier of Protocol 2.6
_with probability at least p._
_Proof. Fix a choice of r (see step 3 of Protocol 3.2 for a definition of r). Let Zr be the subset of_
{0, 1}^n such that the verifier of Protocol 2.6 accepts if and only if the n-bit string that results from
concatenating the measurement results it obtains in step 4 of said protocol is a member of Zr. It
is unimportant to the analysis what Zr actually is; it matters only that it is well-defined.
For this choice of r, we can express the probability that the verifier of Protocol 2.6 accepts a state
_τ as_
    Σ_{z∈Zr} ⟨ Ur* |z1, . . ., zn⟩⟨z1, . . ., zn| Ur , τ ⟩ .
(Though only 2m of the n qubits in τ are relevant to Ur, we assume here for notational simplicity
that Ur is a gate on n qubits, and that the verifier measures all n qubits of Urτ and ignores those
measurement results which are irrelevant.)
For the same choice of r, we can express the probability that the verifier V from Protocol 3.2 will
eventually accept the distribution Dξ,h as
    pr = Σ_{z∈Zr} ⟨ (Ur*)^⊗N (Πz1 ⊗ · · · ⊗ Πzn) (Ur)^⊗N , M̃(ξ) ⟩ .
Following [BJSW16], we note that

    Σ_{z∈Zr} ⟨ (Ur*)^⊗N (Πz1 ⊗ · · · ⊗ Πzn) (Ur)^⊗N , M̃(ξ) ⟩
      ≤ Σ_{z∈Zr} ⟨ (Ur*)^⊗N (∆z1 ⊗ · · · ⊗ ∆zn) (Ur)^⊗N , M̃(ξ) ⟩
      = Σ_{z∈Zr} ⟨ (Ur*)^⊗N ( Ξ*N(|z1⟩⟨z1|) ⊗ · · · ⊗ Ξ*N(|zn⟩⟨zn|) ) (Ur)^⊗N , M̃(ξ) ⟩
      = Σ_{z∈Zr} ⟨ (Ξ^⊗n_N)* Ur* |z1, . . ., zn⟩⟨z1, . . ., zn| Ur , M̃(ξ) ⟩
      = Σ_{z∈Zr} ⟨ Ur* |z1, . . ., zn⟩⟨z1, . . ., zn| Ur , M(ξ) ⟩ .
For the second-to-last equality above, we have used the observation that, for any n-qubit Clifford
operation C, and every nN -qubit state σ,
    Ξ^⊗n_N (C^⊗N σ (C^⊗N)*) = C Ξ^⊗n_N(σ) C* .
This is equation (35) in [BJSW16], and can be verified directly by considering the definition of ΞN .
We conclude that, if the distribution Dξ,h is accepted by V with probability p = Σr vr pr =
Σh vh(1 − p̃h(Dξ,h)) (where vr is the probability that a given r will be chosen, and the second
expression is simply a formulation in alternative notation of the first), the state M (ξ) is accepted
by the verifier of Protocol 2.6 with probability at least p.
Now we turn to arguing that Protocol 3.2 has a soundness parameter s which is negligibly close to 3/4.
**Lemma 5.6. Suppose that the instance x = (H, a, b) of the 2-local XZ Hamiltonian problem that is**
_provided as input to the verifier and prover in Protocol 3.2 is a no-instance, i.e. the ground energy_
_of H is larger than b. Then, provided that Assumptions 3.1 hold, the probability that the honest_
_verifier V accepts in Protocol 3.2 after an interaction with any quantum polynomial-time prover P_
is at most 3/4 + negl(|x|).
_Proof. Claim 2.12 guarantees that, for any arbitrary quantum polynomial-time prover P who exe-_
cutes the measurement protocol with V, there exists a state ξ, a prover P′ and a negligible function
_µ such that_
    ∥D^C_{P,h} − D_{P′,h}∥_TV ≤ √(ph,T + ph,H) + µ ,    and
    D_{P′,h} ≈c Dξ,h .    (2)
(See the paragraph immediately above Claim 2.12 for relevant notation.)
It follows from (2) that, if V accepts the distribution DP′,h with probability p, it must accept the distribution Dξ,h with probability p − negl(N ), because the two are computationally indistinguishable
and the verifier V is efficient. Therefore (using Claim 5.5), if V accepts DP′,h with probability p, the
verifier of Protocol 2.6 accepts the state M(ξ) with probability at least p − negl(N).
By the soundness of Protocol 2.6 (Claim 2.7), we conclude that p = negl(N ) when the problem
Hamiltonian is a no-instance.
We now apply a similar argument to that which is used in Section 8.2 of [Mah18] in order to
establish an upper bound on the probability φ that V accepts P in a no-instance. Let E^H_{P,h} denote
the event that the verifier V does not reject the prover labelled P in a Hadamard round indexed by
h during the measurement protocol phase of Protocol 3.2. Let E^T_{P,h} denote the analogous event in a
test round. Furthermore, let EP,h denote the event that the verifier accepts the prover P in the last
step of Protocol 3.2. The total probability that V accepts P is the average, over all possible basis
choices h, of the probability that V accepts P after a test round indexed by h, plus the probability
that V accepts P after a Hadamard round indexed by h. As such,
    φ = Σ_{h∈{0,1}^2nN} vh ( (1/2) Pr[E^T_{P,h}] + (1/2) Pr[E^H_{P,h} ∩ E_{P,h}] )
      = Σ_{h∈{0,1}^2nN} vh ( (1/2) Pr[E^T_{P,h}] + (1/2) Pr[E^H_{P,h}] Pr[E_{P,h} | E^H_{P,h}] )
      = Σ_{h∈{0,1}^2nN} vh ( (1/2)(1 − ph,T) + (1/2)(1 − ph,H)(1 − p̃h(D^C_{P,h})) ) .
Since Lemma 3.1 of [Mah18] and Claim 2.12 taken together yield the inequality
    p̃h(D_{P′,h}) − p̃h(D^C_{P,h}) ≤ ∥D^C_{P,h} − D_{P′,h}∥_TV ≤ √(ph,T + ph,H) + µ ,
it follows that

    φ ≤ Σ_{h∈{0,1}^2nN} vh ( (1/2)(1 − ph,T) + (1/2)(1 − ph,H)(1 − p̃h(D_{P′,h}) + √(ph,T + ph,H) + µ) )
      ≤ (1/2)µ + (1/2) Σ_{h∈{0,1}^2nN} vh (1 − ph,T + (1 − ph,H)(ph,H + √ph,T)) + (1/2) Σ_{h∈{0,1}^2nN} vh (1 − p̃h(D_{P′,h}))
      ≤ (1/2)µ + 3/4 + (1/2)p .

The upper bound of 3/4 in the last line can be obtained by straightforward calculation.[9] We conclude
that Protocol 3.2 has a soundness parameter s which is negligibly close to 3/4.
9 For example, one can obtain this bound by maximising the quantity f(ph,T, ph,H) = (1/2)(1 − ph,T + (1 − ph,H)(ph,H + √ph,T)) under the assumption that ph,T and ph,H lie in [0, 1]. The function f has one stationary point (ph,T = 1/9, ph,H = 1/3) in [0, 1]^2; checking f at this point, in addition to its maxima on each of the boundaries of [0, 1]^2, reveals that the choice of (ph,T, ph,H) ∈ [0, 1]^2 which yields the maximum value of f is (1/9, 1/3), giving f = 2/3. Of course, 2/3 < 3/4; we use the bound of 3/4 for consistency with [Mah18].
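The stationary-point analysis in footnote 9 is easy to double-check numerically; an illustrative grid search:

```python
import math

f = lambda t, h: 0.5 * (1 - t + (1 - h) * (h + math.sqrt(t)))
grid = [i / 1000 for i in range(1001)]
best = max((f(t, h), t, h) for t in grid for h in grid)
print(best)  # approximately (2/3, 1/9, 1/3)
```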
### 6 Zero-knowledge property of protocol
In this section, we establish that Protocol 3.2 is zero-knowledge against arbitrary classical probabilistic polynomial time (PPT) verifiers. Specifically, we show the following:
**Lemma 6.1. Suppose that the instance x = (H, a, b) of the 2-local XZ Hamiltonian problem that is**
_provided as input to the verifier and prover in Protocol 3.2 is a yes-instance, i.e. the ground energy_
_of H is smaller than a._ _Then (provided that Assumptions 3.1 hold) there exists a polynomial-_
_time generated PPT simulator S such that, for any arbitrary PPT verifier V_ _[∗], the distribution_
_of V_ _[∗]’s final output after its interaction with the honest prover P in Protocol 3.2 is (classical)_
_computationally indistinguishable from S’s output distribution._
_Remark 6.2. Lemma 6.1 formulates the zero-knowledge property in terms of classical verifiers and_
computational indistinguishability against classical distinguishers, because this is the most natural
setting for a protocol in which verifier and interaction are classical. However, the same proof can
be adapted to show that, for any quantum polynomial-time verifier executing Protocol 3.2, there
exists a quantum polynomial-time generated simulator whose output is QPT indistinguishable in
yes-instances from that of the verifier. (In particular, the latter follows from the fact that the second
item in Assumptions 3.1 implies an NP proof system which is zero-knowledge against quantum
polynomial-time verifiers, an implication shown to be true in [Wat09].)
We show that Protocol 3.2 is zero-knowledge by replacing the components of the honest prover with
components of a simulator one at a time, and demonstrating that, when the input is a yes-instance,
the dishonest verifier’s output after each replacement is made is at the least computationally indistinguishable from its output before. The argument proceeds in two stages. In the first, we show
that the honest prover can be replaced by a quantum polynomial-time simulator that does not
have access to the witness ρ. In the second, we de-quantise the simulator to show that the entire
execution can be simulated by a classical simulator who likewise does not have access to ρ. (The
latter is desirable because the verifier is a classical entity.)
We begin with the protocol execution between the honest prover P and an arbitrary cheating
verifier V _[∗], the latter of whom may take some (classical) auxiliary input Z0, store information_
(represented by Z1 and Z2), and produce an arbitrary final output Z3. A diagram representing the
interaction between V _[∗]_ and P can be found in Figure 2.
#### 6.1 Eliminating the coin-flipping protocol
Our first step in constructing a simulator is to eliminate the coin-flipping protocol, which is designed
to produce a trusted random string r, and replace it with the generation of a truly random string.
(This step is entirely analogous to step 1 of Section 5 in [BJSW16], and we omit the analysis.) The
new diagram is shown below. In this diagram, coins represents a trusted procedure that samples a
uniformly random string r of the appropriate length.
#### 6.2 Introducing an intermediary
Our next step is to introduce an intermediary, denoted by I, which pretends—to the cheating verifier
of Protocol 3.2—to be its prover P, while simultaneously playing the role of verifier to the prover
from the zero-knowledge proof system of Protocol 2.17 [10]. (We denote the honest prover and honest
verifier for the proof system of Protocol 2.17 by 𝒫 and 𝒱, respectively, to distinguish them from
the prover(s) P and verifier(s) V of the classical-verifier protocol currently under consideration.)
We remark, for clarity, that I is a quantum polynomial-time procedure. The essential idea of this
section is that I will behave so it is impossible for the classical verifier V to tell whether it is
interacting with the intermediary or with its honest prover. (We achieve this simply by making I
output exactly the same things that P would.) Given that this is so, the map that V implements
from its input to its output, including its auxiliary registers, cannot possibly be different in the
previous section as compared to this section.
10 Protocol 2.17 is identical in structure to the protocol presented in [BJSW16]. We refer the reader to Figure 4 in that paper for a diagram representing the appropriate interactions.
Figure 3: The intermediary interacting with the honest prover from the proof system of Protocol 2.17, denoted
by 𝒫, and also with the cheating classical verifier V*. I1 receives the encoded quantum witness, which we have
denoted by Y, from 𝒫, in addition to 𝒫's commitment z. It then sends z to V1*, along with Z1, the auxiliary
input that V1* is supposed to receive, and r, the random string generated by coins. I2 passes on any output V1*
produces to V2*, performs itself the procedure for committing to a quantum state from [Mah18], and executes
the measurement protocol with V2*. I3 receives the measurement outcomes u and the trapdoors τ from V2*, and
checks whether the trapdoors are valid. If they are invalid, it aborts directly; if they are valid, it sends u on to
𝒫3 and passes Z2 to V3*, so that 𝒫3 and V3* can execute the NP zero-knowledge proof protocol. (Each part of I
should also send everything it knows to its successor, but we have omitted these communications for the sake of
cleanliness, as we omitted the communication between parts of the prover in previous diagrams.)
#### 6.3 Simulating the protocol with a quantum simulator
We now note that Figure 3 looks exactly like Figure 4 from [BJSW16], if we consider the intermediary I and the cheating classical verifier V* taken together to be a cheating verifier V′ for the
proof system of Protocol 2.17.
Figure 4: Compare to Figure 4 of [BJSW16]. Note that S1 includes the behaviour of an arbitrary V1′; the reason
it is called S1 and not V1′ is because V1′ obtains r from a coin-flipping protocol, while S1 generates r using coins.
In all other respects, S1 is the same as V1′.
Using similar reasoning as in [BJSW16] (and recalling that, by Lemma 2.20, it still works when the
Hamiltonian being verified is an XZ Hamiltonian), therefore, we conclude that we can replace ρ in
Figure 4 with ρr—where ρr is a quantum state specifically designed to pass the challenge indexed
by r—without affecting the verifier’s output distribution (to within computational indistinguishability). See Remark 2.8 for a procedure that explicitly constructs ρr. Note that, if our objective
was to achieve a quantum simulation without knowing the witness state ρ, our task would already
be finished at this step. However, our verifier is classical; therefore, in order to prove that our
classical verifier’s interaction with its prover does not impart to it any knowledge (apart from the
fact that the problem instance is a yes-instance) that it could not have generated itself, we need to
achieve a classical simulation of the argument system.
#### 6.4 Simulating the protocol with a classical simulator
6.4.1 Replacing P0 and I1
If we want to simulate the situation in Figure 4 classically, then we need to de-quantise P0, I1 and
_I2. (I3 and P3 are already classical.) Our first step is to replace P0 and I1 with a single classical_
entity, I1′.
I1′ simply chooses encoding keys (t, π, a, b) and generates z, a commitment to the encoding keys
(π, a, b). It then sends z, r and Z1 to V1*, as I1 would have. Because I1′ has exactly the same output
as I1, the verifier’s output in Figure 5 is the same as its output in Figure 4. (We assume that the
still-quantum I2 now generates ρr for itself.)
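A sketch of I1′ as a classical procedure. The hash-based commit below is only a placeholder for the perfectly binding, quantum computationally concealing scheme that Assumptions 3.1 actually requires, and every name here is illustrative.

```python
import os
import hashlib
import random

def commit(message: bytes, randomness: bytes) -> bytes:
    # placeholder commitment, NOT the scheme assumed in Assumptions 3.1
    return hashlib.sha256(randomness + message).digest()

def I1_prime(n, N, r, Z1, rng=random):
    # choose an encoding key (t, pi, a, b) exactly as the honest prover would
    t = [[rng.choice('0+') for _ in range(N)] for _ in range(n)]
    pi = list(range(2 * N))
    rng.shuffle(pi)
    a = [rng.randint(0, 1) for _ in range(2 * n * N)]
    b = [rng.randint(0, 1) for _ in range(2 * n * N)]
    s_p = os.urandom(32)
    z = commit(repr((pi, a, b)).encode(), s_p)
    # send (z, r, Z1) to V1*; keep (t, pi, a, b) for evaluating Q later
    return (z, r, Z1), (t, pi, a, b), s_p
```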
Figure 5: P0 and I1 have been replaced by I1′.
6.4.2 Some simplifications (which make it possible to de-quantise I2)
Following [BJSW16], we make some alterations to Figure 5 that will allow us to eventually de-quantise I2. The alterations are as follows:
1. Replace V3* and P3 with an efficient simulation S3. (An efficient simulation of the NP proof
protocol execution between V3* and P3 is guaranteed to exist because the NP proof protocol
is zero-knowledge.) Recall that the statement P3 is meant to prove to V3* in a zero-knowledge
way is as follows: 'There exists a string sp and an encoding key (t, π, a, b) such that z =
commit((π, a, b), sp) and Q(t, π, a, b, r, u) = 1.' The zero-knowledge property of the NP proof
system guarantees that, for yes-instances, the output of S3 is indistinguishable from the
output of the protocol execution between V3* and P3. In our case, I1′ always holds sp and
(π, a, b) such that z = commit((π, a, b), sp), and the honest prover will abort the protocol if
Q(t, π, a, b, r, u) = 0. Therefore, whenever the prover does not abort, the output of S3 is
computationally indistinguishable from that of V3* and P3. We assume, following [BJSW16],
that S3 also behaves as V3* would when the prover aborts. If it does, then Figure 6 is
computationally indistinguishable from Figure 5.
Figure 6: V3* and P3 have been replaced by S3. Note that S3 does not require access to the witness (sp, t, π, a, b),
and so sp can be discarded immediately after I1′ is run.
2. Replace the generation of the genuine commitment z with the generation of a commitment
z′ = commit((π0, a0, b0), sp), where π0, a0 and b0 are fixed strings independent of the encoding
key (t, π, a, b) that I1′ chooses. Because the commitment protocol is (computationally) concealing, and the commitment is never opened (recall that sp is discarded after I1′ is run), V1*
should not be able to tell (computationally speaking) that z has been replaced by z′.
The genuine encoding key is still used to evaluate the predicate Q. Note that, because z
has been replaced with z′, the statement for which S3 must simulate the execution of a
zero-knowledge proof between V3* and P3 is now as follows: 'There exists a string sp and
an encoding key (t, π, a, b) such that z′ = commit((π, a, b), sp) and Q(t, π, a, b, r, u) = 1.'
This statement is, in general, no longer true, because the commitment protocol is perfectly
binding. However, if the predicate Q is still satisfied for the encoding key (t, π, a, b) that I3
sent, then S3 will proceed to generate a transcript for the no-instance that is computationally
indistinguishable from a transcript for a yes-instance. If Q is no longer satisfied, then S3
will abort, as before. In effect, therefore, the cheating verifier V* will not be able to tell
(up to computational indistinguishability) that z has been replaced by z′, and that the NP
statement being 'proven' to it is no longer true.
6.4.3 De-quantising I2
We now replace I2 with a classical entity I2′. In the process, we require modifications to the
behaviour of I3.
Knowing r, I2′ can calculate for itself what ρr should be, though it cannot physically produce this
state. As we noted in Remark 2.8, ρr is a simple state: it is merely the tensor product of |0⟩, |1⟩, |+⟩
and |−⟩ qubits. Applying the concatenated Steane code to ρr will then result in a tensor product
of N -qubit states that look like

    Σ_{x∈D^0_N} |x⟩ ,    Σ_{x∈D^1_N} |x⟩ ,    Σ_{x∈D^0_N} |x⟩ + Σ_{x∈D^1_N} |x⟩ ,    and    Σ_{x∈D^0_N} |x⟩ − Σ_{x∈D^1_N} |x⟩ ,    (3)
after appropriate normalisation.
A brief argument will suffice to establish that it is possible to classically simulate standard or
Hadamard basis measurements on the qubits in E(ρr). Each qubit of E(ρr) is either an encoding
qubit or a trap qubit, up to the application of a random single-qubit Pauli operator. Simulating
standard-basis measurements of encoding qubits is classically feasible, because D^0_N and D^1_N are
polynomially sized, and the expressions in (3) only involve superpositions over those sets with
equal-magnitude coefficients. Simulating standard-basis measurements of trap qubits, which are
always initialised either to |0⟩ or |+⟩, is trivially feasible.
To simulate a Hadamard basis measurement, we can take advantage of the transversal properties of the encoding scheme, and apply $H$ before we apply the concatenated Steane code. Denote the application of the concatenated Steane code to $\rho_r$ by $S(\rho_r)$. We have that

$$S(H^{\otimes n} \rho_r H^{\otimes n}) = H^{\otimes nN} S(\rho_r) H^{\otimes nN}$$

by transversality. To simulate a Hadamard basis measurement of $E(\rho_r)$, we then:
1. Apply $H^{\otimes n}$ to $\rho_r$. This is easy to classically simulate, because $\rho_r$ is a tensor product of $|0\rangle, |1\rangle, |+\rangle$ and $|-\rangle$ qubits.

2. Apply the concatenated Steane code to $H^{\otimes n}\rho_r H^{\otimes n}$. Simulating this is classically feasible, by the same argument that we used for standard basis measurements, because $H^{\otimes n}\rho_r H^{\otimes n}$ is still a tensor product of $|0\rangle, |1\rangle, |+\rangle$ and $|-\rangle$ qubits.

3. Concatenate trap qubits to each $N$-qubit block in $S(H^{\otimes n}\rho_r H^{\otimes n}) = H^{\otimes nN} S(\rho_r) H^{\otimes nN}$. Simulate the application of $H$ to each trap qubit (which is, once again, classically easy to do because each trap qubit is initialised either to $|0\rangle$ or to $|+\rangle$).

4. Apply the permutation $\pi$ to each $2N$-tuple.

5. Simulate a standard basis measurement of the result (a small sketch of this sampling step is given after this list).

6. XOR the string $b$ into the measurement outcome ($b$ was previously the $Z$-key for the Pauli one-time pad).
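Both the standard-basis argument above and step 5 of this procedure reduce to sampling the measurement outcome of a code block. The following is a minimal Python sketch of that sampling step, with small hypothetical sets standing in for the polynomially sized codeword sets $D_N^0$ and $D_N^1$ (in the real scheme these come from the concatenated Steane code):

```python
import random

# Placeholder codeword sets; in the real scheme these are the
# polynomially sized codeword sets of the concatenated Steane code.
D0 = {"0000000", "1010101"}
D1 = {"1111111", "0101010"}

def measure_encoded_standard(logical):
    """Classically simulate a standard-basis measurement of one code block.

    For logical |0> (resp. |1>) the state is a uniform superposition over
    D0 (resp. D1); for |+> and |-> the coefficients over D0 u D1 all have
    equal magnitude, so the outcome is uniform over the union.
    """
    if logical == "0":
        return random.choice(sorted(D0))
    if logical == "1":
        return random.choice(sorted(D1))
    return random.choice(sorted(D0 | D1))  # logical "+" or "-"
```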
Having established that it is possible to classically simulate standard and Hadamard basis measurements of the qubits in $E(\rho_r)$, we now describe the procedure that the classical $I_2'$ should follow for each qubit $i$ in the state $E(\rho_r)$.
1. During the commitment phase, $I_2'$ simulates a standard basis measurement on the $i$th qubit, obtains a simulated measurement result $\beta_i$, and then chooses a uniformly random preimage $x_i$ from the domain of the function specified by $\kappa_i$. It applies the function specified by $\kappa_i$ to $\beta_i \| x_i$ and sets $y_i = \eta_{\kappa_i}(\beta_i \| x_i)$.

2. If the verifier requests a test round, $I_2'$ sends $\beta_i \| x_i$ to the verifier. This is exactly what the quantum prover $I_2$ would send in the case of a test round, so the verifier cannot tell that it is interacting with $I_2'$ instead of $I_2$.

3. If the verifier requests a Hadamard round, $I_2'$ sends a uniformly random string $s_i \in \{0,1\}^{w+1}$ to the verifier, where $w$ is the length of the preimages. In the same situation, the quantum $I_2$ would have sent Hadamard basis measurements of the $w+1$ qubits in the $i$th committed qubit in $E(\rho)$ and its associated preimage register.
   (a) If $h_i = 0$, the outcomes of these measurements are uniformly distributed and thus indistinguishable from the distribution of strings $s_i$ reported by $I_2'$.

   (b) Let $|\psi_i\rangle$ be the state of the $i$th qubit of $E(\rho)$, let $x_{0,i}$ and $x_{1,i}$ be the two preimages of $y_i$ under the function $f_{\kappa_i}$, and let $b_i$ be the $i$th bit of the one-time-pad $Z$-key $b$ from $I_1'$'s encoding key $(t, \pi, a, b)$. If $h_i = 1$, the outcomes of $I_2$'s Hadamard basis measurements can be represented as a tuple $(\beta_i, d_i)$, where $d_i$ is uniformly random, and

$$\beta_i = d_i \cdot (x_{0,i} \oplus x_{1,i}) \oplus b_i \oplus \mathrm{Meas}(H|\psi_i\rangle).$$

   ($\mathrm{Meas}$ here denotes a standard basis measurement.)

   Note that the distribution over $(b_i, \beta_i, d_i)$ which one would obtain by measuring $|\psi_i\rangle$ in the Hadamard basis, choosing $d_i$ and $b_i$ uniformly at random, and letting

$$\beta_i = d_i \cdot (x_{0,i} \oplus x_{1,i}) \oplus b_i \oplus \mathrm{Meas}(H|\psi_i\rangle)$$

   is equivalent to the one that one would obtain by choosing a uniformly random $s_i$, measuring $|\psi_i\rangle$ in the Hadamard basis, calculating

$$b_i = s_{i,1} \oplus d_i \cdot (x_{0,i} \oplus x_{1,i}) \oplus \mathrm{Meas}(H|\psi_i\rangle),$$

   and finally setting $\beta_i = s_{i,1}$, $d_i = s_{i,2} \cdots s_{i,w+1}$.

   The former set of actions is equivalent to the set of actions that $I_2$ performs. The latter set of actions is (as we will shortly show) classically feasible provided that we have the verifier's trapdoors. Note that $I_2'$ only needs to send the verifier $s_i$, and can rely on its successor $I_3$, who will have access to the verifier's trapdoors, to calculate the bits $b_i$ retroactively. It follows that, given that $I_3$ can produce correct bits $b_i$ (we will shortly show that it can), the distribution of strings reported by $I_2'$ is identical to the distribution of outcomes reported by $I_2$.
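The equivalence of the two sampling orders in (b) can be sanity-checked numerically. Below is a minimal Monte Carlo sketch; the string standing in for $x_{0,i} \oplus x_{1,i}$ and the bias standing in for the distribution of $\mathrm{Meas}(H|\psi_i\rangle)$ are arbitrary placeholder assumptions for illustration only:

```python
import random
from collections import Counter

random.seed(0)
w = 3                       # preimage length (toy)
x_xor = (1, 0, 1)           # placeholder for x_{0,i} XOR x_{1,i}
p_meas = 0.3                # placeholder for Pr[Meas(H|psi_i>) = 1]

def inner(d):               # d . (x_{0,i} XOR x_{1,i}) mod 2
    return sum(di & xi for di, xi in zip(d, x_xor)) % 2

def quantum_I2():
    # quantum I2: pick b and d uniformly, measure, then derive beta
    b = random.randint(0, 1)
    d = tuple(random.randint(0, 1) for _ in range(w))
    m = int(random.random() < p_meas)
    return (b, inner(d) ^ b ^ m, d)

def classical_I2prime():
    # classical I2': pick s = (beta, d) uniformly; b is derived
    # retroactively (by I3', using the trapdoors)
    beta = random.randint(0, 1)
    d = tuple(random.randint(0, 1) for _ in range(w))
    m = int(random.random() < p_meas)
    return (beta ^ inner(d) ^ m, beta, d)

N = 100_000
dist1 = Counter(quantum_I2() for _ in range(N))
dist2 = Counter(classical_I2prime() for _ in range(N))
# the two empirical distributions over (b, beta, d) agree up to noise
```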
Having established that $I_2'$ and $I_2$ are the same from $V_2^*$'s perspective (meaning that it must have the same behaviour that it did in Figure 5 after $I_2$ is replaced with $I_2'$), it remains to ensure that the choice of the one-time-pad $Z$-key $b$ is consistent with the $s_i$ that $I_2'$ picked. We relegate the task of making this choice to $I_3'$, our new version of $I_3$, because it has access to the verifier's trapdoors $\tau$. If any of the trapdoors that it receives from the verifier are invalid, or if any of the ETCFF keys $\kappa$ which the verifier chose are invalid, $I_3'$ aborts, as specified in Protocol 3.2. ('Validity' here means the following: 1) all the $\kappa$s which the verifier sent earlier are well-formed, and 2) for each $y_i$, the trapdoor $\tau_{\kappa_i}$ correctly inverts the function specified by $\kappa_i$. We expand on this notion of 'validity' in Appendix A.) Given valid keys and valid trapdoors, $I_3'$ then deduces the verifier's choices of measurement basis, $h$, from $\tau$. Given that the trapdoors are valid and that the keys are well-formed, $I_3'$ can be confident that its deductions in this regard will lead it to behave in the same way that the honest prover would, because (given valid keys and trapdoors) $I_3'$ will know exactly which superpositions the honest prover would have obtained during the measurement protocol after following the verifier's instructions.
For notational convenience, let $|\psi^*\rangle$ denote the state obtained by applying the first three steps of $E$, but not the last step, to $\rho_r$. $I_3'$ subsequently executes the following procedure for all $i$ such that $h_i = 1$:
1. Set $d_i$ to be the last $w$ bits of $s_i$, and compute $d_i \cdot (x_{0,i} \oplus x_{1,i})$ using the trapdoor $\tau_{\kappa_i}$.

2. Simulate a standard basis measurement of $HX^{a_i}|\psi_i^*\rangle$. Denote the result by $\beta_i$. (Here, $a_i$ refers to the $i$th bit of $a$, where $a$ is taken from $I_1'$'s initial choice of one-time-pad keys, and $|\psi_i^*\rangle$ denotes the $i$th qubit of $|\psi^*\rangle$.)

3. Set $b_i'$ (the $i$th bit of $b'$, the new $Z$-key for the one-time pad) to be equal to $\beta_i \oplus s_{i,1} \oplus d_i \cdot (x_{0,i} \oplus x_{1,i})$ (where $s_{i,1}$ refers to the first bit of $s_i$). This will cause the equation $\mathrm{Meas}(H|\psi_i\rangle) \oplus d_i \cdot (x_{0,i} \oplus x_{1,i}) = s_{i,1}$ to be satisfied:

$$\begin{aligned}
&\mathrm{Meas}(H|\psi_i\rangle) \oplus d_i \cdot (x_{0,i} \oplus x_{1,i}) = s_{i,1} \\
\iff\ &\mathrm{Meas}(HZ^{b_i}X^{a_i}|\psi_i^*\rangle) \oplus d_i \cdot (x_{0,i} \oplus x_{1,i}) = s_{i,1} \\
\iff\ &b_i \oplus \mathrm{Meas}(HX^{a_i}|\psi_i^*\rangle) \oplus d_i \cdot (x_{0,i} \oplus x_{1,i}) = s_{i,1} \\
\iff\ &b_i \oplus \beta_i \oplus d_i \cdot (x_{0,i} \oplus x_{1,i}) = s_{i,1} \\
\iff\ &b_i = \beta_i \oplus s_{i,1} \oplus d_i \cdot (x_{0,i} \oplus x_{1,i}).
\end{aligned}$$

Having done this, $I_3'$ then feeds $(t, \pi, a, b')$ into $Q$. (Note that replacing $b$ with $b'$ cannot create any conflict with the commitment string $z'$ that the verifier will notice, because $z'$ was already independent of the one-time-pad keys $(a, b)$.) In all other respects $I_3'$ behaves the same way that $I_3$ did.
The final simulation will be as follows:
Since all the entities in this simulation are classical and efficient, and none have access to information
about the witness state ρ, it follows that the protocol is zero-knowledge.
### A LWE-based ETCFF family and efficient trapdoor check
In order to explain the trapdoor check that the honest prover implements during step 8 of Protocol 3.2, we briefly outline, at a level of detail appropriate for us, how the LWE-based ETCFF family that is used in [Mah18] is constructed.

We begin by introducing the instantiations of the keys $\kappa$ and the trapdoors $\tau$ for noisy trapdoor claw-free ($f$) and trapdoor injective ($g$) functions, whose properties we have relied upon in a black-box way for the rest of this work. For details, we refer the reader to Section 9 of [Mah18].
The key $(\kappa_1, \kappa_2)$ for a noisy two-to-one function $f$ is $(A, As + e)$, where $A$ is a matrix in $\mathbb{Z}_q^{n \times m}$ and $e \in \mathbb{Z}_q^n$ is an error vector such that $|e| < B_f$ for some small upper bound $B_f$. (The specific properties that $B_f$ should satisfy will be described later.) Here, $n, m$ are integers, and $q$ is a prime power modulus that should be chosen as explained in [Mah18]. In addition, in order to implement the trapdoor, we assume that the matrix $A$ is generated using the efficient algorithm GenTrap which is described in Algorithm 1 of [MP11]. (For convenience, we use the 'statistical instantiation' of the procedure described in Section 5.2 of that paper.) GenTrap produces a matrix $A$ that has the form $A = [\bar{A} \mid HG - \bar{A}R]$, for some publicly known $G \in \mathbb{Z}_q^{n \times w}$, $n \le w \le m$, some $\bar{A} \in \mathbb{Z}_q^{n \times (m-w)}$, some invertible matrix $H \in \mathbb{Z}_q^{n \times n}$, and some $R \in \mathbb{Z}^{(m-w) \times w}$, where $R = \tau_A$ is the trapdoor to $A$. As shown in [MP11, Theorem 5.4], it is straightforward, given the matrix $R$, to verify that $R$ is a 'valid' trapdoor, in the sense that it allows a secret vector $s$ to be recovered with certainty from a tuple of the form $(A, b = As + e)$ when $e$ has magnitude smaller than some bound $B_{\mathrm{Invert}}$. Checking that $R$ is a valid trapdoor involves computing the largest singular value of $R$ and checking that $A$ is indeed of the form $A = [\bar{A} \mid HG - \bar{A}R]$ for some invertible $H$ and for the publicly known $G$. Using any valid trapdoor, recovery can be performed via an algorithm Invert described in [MP11].
The key for an injective function $g$, meanwhile, is $(A, u)$, where $u$ is a random vector not of the form $As + e$ for any $e$ of small enough magnitude. (Again, 'small enough' here refers to a specific upper bound, and what the bound is precisely will be described later. The distribution of $u$ is uniform over all vectors that satisfy this latter requirement.) The trapdoor $\tau_A$ is still the $R$ corresponding to the matrix $A$ which is described in the preceding paragraph.
The functions $f_\kappa$ and $g_\kappa$ both take as input a bit $b$ and a vector $x$ and output a probability distribution (to be more precise, a truncated Gaussian distribution of the kind defined in Section 2.3, equation 4 of [Mah18]). We clarify that, when we say that the functions output a probability distribution, we mean that they should be thought of as maps from the space of strings to the set of probability distributions, not that their outputs are randomised. Given a sample $y$ from one such probability distribution $Y$, the trapdoor $\tau_A$ can be used to recover the tuple(s) $(b, x)$ which are preimages of $y$ under the function specified by $\kappa$. (See Definition 2.11 for a definition of the phrase 'preimage of $y$'.) The functions $f_\kappa$ and $g_\kappa$ can be defined (using notation explained in the paragraph below the definition) as follows:
**Definition A.1 (Definition of trapdoor claw-free and trapdoor injective functions).**

**(a)** $f_\kappa(b, x) = Ax + e_0 + b \cdot (As + e)$, where $e_0$ is distributed as a truncated Gaussian with bounded magnitude $|e_0|_{\max}$;

**(b)** $g_\kappa(b, x) = Ax + e_0 + b \cdot u$.
What the above notation means is that one samples from the distribution determined by the input $(b, x)$ and the function key $\kappa = (\kappa_1, \kappa_2)$ by sampling $e_0$ from a truncated Gaussian centred at the origin and then computing $\kappa_1 x + e_0 + b \cdot \kappa_2$. A key feature of the $f$ functions is that the output distributions given by $f_\kappa(0, x)$ and $f_\kappa(1, x - s)$ are truncated Gaussians which overlap to a high degree (so that the statistical distance between the distributions $f_\kappa(0, x)$ and $f_\kappa(1, x - s)$ is negligible). The $g$ functions, meanwhile, are truly injective in the sense that $g(b, x)$ and $g(b', x')$ never overlap for $(b, x) \ne (b', x')$. In order that these two things hold, we require that the $e$ in Definition A.1(a) is very small ($B_f \ll |e_0|_{\max}$), and that the $u$ in Definition A.1(b) is such that $u \ne As + e$ for any $|e| < B_g$, where $B_g > |e_0|_{\max}$. It follows from the hardness of the (decisional) LWE assumption that the keys for the $f$ functions and the keys for the $g$ functions are computationally indistinguishable.
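As an illustration only, here is a toy numerical sketch of how samples from $f_\kappa$ and $g_\kappa$ are formed. The parameters are far too small to be secure, and $e_0$ is drawn uniformly as a stand-in for the truncated Gaussian:

```python
import numpy as np

rng = np.random.default_rng(0)
q, n, m = 7681, 8, 64          # toy parameters, far too small for security
B_f, B_e0 = 2, 40              # |e| < B_f << |e0|_max

A = rng.integers(0, q, size=(n, m))
s = rng.integers(0, q, size=m)
e = rng.integers(-B_f + 1, B_f, size=n)
kappa_f = (A, (A @ s + e) % q)             # claw-free key: (A, As + e)
kappa_g = (A, rng.integers(0, q, size=n))  # injective key: (A, u), u random

def eta(kappa, b, x):
    """Sample from eta_kappa(b, x) = Ax + e0 + b*kappa2 (mod q)."""
    A, k2 = kappa
    e0 = rng.integers(-B_e0, B_e0 + 1, size=n)  # stand-in for trunc. Gaussian
    return (A @ x + e0 + b * k2) % q

x = rng.integers(0, q, size=m)
y0 = eta(kappa_f, 0, x)
y1 = eta(kappa_f, 1, (x - s) % q)
# y0 and y1 are samples from heavily overlapping distributions: both
# equal Ax + (noise of magnitude < B_e0 + B_f) mod q.
```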
The trapdoor check that the prover of Protocol 3.2 executes in step 8 is as follows:
**Protocol A.2 (Trapdoor and key check).**

Let $\kappa_i = (A_i, A_i s_i + e_i)$. (Note that $|e_i|$ need not be smaller than any particular bound in this definition of $\kappa_i$.) For all $i \in \{1, \ldots, 2nN\}$:

1. Check that $\tau_{\kappa_i}$ is a 'valid' trapdoor for $A_i$, in the sense that was explained in the third paragraph of this appendix. If it is not, abort.

2. For a choice of $B_f$, $B_g$ and $|e_0|_{\max}$ such that $B_f \ll |e_0|_{\max} < B_g \le B_{\mathrm{Invert}}$ and $B_g - |e_0|_{\max} > |e_0|_{\max}$, check that one of the following three conditions holds:

   (a) Invert applied to $\kappa_{2,i}$ succeeds, and recovers an $e$ such that $|e| < B_f$, or

   (b) Invert applied to $\kappa_{2,i}$ succeeds, and recovers an $e$ such that $B_g < |e|$, or

   (c) Invert fails.
Figure 7: Diagram illustrating one possible choice of parameters that satisfies the conditions in step 2 above. When a circle is labelled with a number (such as $B_f$ or $B_g$), the radius of the circle represents the size of that number.
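A minimal sketch of the classification in step 2 might look as follows. It assumes a wrapper `invert` around the Invert algorithm of [MP11] (returning `(s, e)` on success or `None` on failure), and uses an illustrative centred norm:

```python
def centred_magnitude(e, q):
    """Illustrative norm: largest centred residue of the entries of e mod q."""
    return max(min(int(c) % q, q - (int(c) % q)) for c in e)

def key_check(invert, kappa2, B_f, B_g, q):
    """Classify kappa_i = (A_i, kappa2) as in step 2 of Protocol A.2.

    `invert` is assumed to wrap Invert from [MP11] with an
    already-validated trapdoor. Returns True iff one of conditions
    (a)-(c) holds, i.e. the key is well-formed as an f-key or a g-key.
    """
    result = invert(kappa2)
    if result is None:                        # (c) Invert fails: g-key
        return True
    _, e = result
    if centred_magnitude(e, q) < B_f:         # (a) small error: f-key
        return True
    if centred_magnitude(e, q) > B_g:         # (b) large error: g-key
        return True
    return False                              # ill-formed: B_f <= |e| <= B_g
```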
The conditions in step 2 above are intended to ensure that, for all $i$, $\kappa_i$ is a key either for an $f$ or for a $g$ function, and is therefore well-formed. An ill-formed key would be of the form $\kappa_{\mathrm{bad}} = As + e$ for $B_f < |e| < B_g$; for some choices of $B_f$ and $B_g$, a subset of the $\kappa_{\mathrm{bad}}$s just defined would behave neither like keys for $f$ functions nor like keys for $g$ functions, because the distributions $\eta_{\kappa_{\mathrm{bad}}}(0, x)$ and $\eta_{\kappa_{\mathrm{bad}}}(1, x - s)$ would overlap, but not to a sufficient degree. The specifications on the parameters that are made in step 2 above, and the tests prescribed for the prover, are designed to ensure that $B_f$ and $B_g$ are properly chosen and that the prover can check efficiently that the verifier's $\kappa$s are well-formed according to these appropriate choices of $B_f$ and $B_g$.
The following claim shows that, given a valid trapdoor (i.e. a matrix $R$ that satisfies the efficiently verifiable conditions described in the third paragraph of this appendix) and a well-formed key $\kappa$, the trapdoor can be used to successfully recover all the preimages under a function $\eta_\kappa$ of any sample $y$ from a distribution $Y$ in the range of the function $\eta_\kappa$. This claim is needed to justify the correctness of the 'de-quantised' simulator $I_2'$ considered in Section 6.4.3: if $I_2'$ can be sure that it has recovered all the preimages of $y$, and no others, then it can successfully simulate the honest prover.
**Claim A.3.** _Let $A$ be a matrix in $\mathbb{Z}_q^{n \times m}$, let $\kappa = (A, \kappa_2)$, and let the function $\eta_\kappa$ be defined by $\eta_\kappa(b, x) = Ax + e_0 + b \cdot \kappa_2$. (The output of $\eta_\kappa$ is, as in Definition A.1, a probability distribution.) Let $\tau_A$ be a purported trapdoor for $A$. Suppose that $\kappa$ passes the test in step 2 of Protocol A.2, and suppose that the trapdoor $\tau_A$ inverts the matrix $A$, in the sense that, given $r = As + e$ for some $e$ of sufficiently small magnitude, $\tau_A$ can be used to recover the unique $(s, e)$ such that $As + e = r$. Then one can use $\tau_A$ to efficiently recover all the preimages of any $y$ sampled from any distribution $Y$ in the range of $\eta_\kappa$._
_Proof._ By hypothesis, $\kappa_2$ is either of the form $As + e$ for some $(s, e)$ (with $|e| < B_f$), or it is not of the form $As + e$ for any $e$ such that $|e| < B_g$. We do not know a priori which of these is the case, but the procedure that we perform in order to recover the preimage(s) of $y$ is the same in both cases:

1. Use the trapdoor $\tau_A$ to attempt to find $(x_1, e_1)$ such that $Ax_1 + e_1 = y$. If such an $(x_1, e_1)$ exists, and $|e_1| < |e_0|_{\max}$, record $0\|x_1$ as the first preimage.

2. Use the trapdoor $\tau_A$ to attempt to find $(x_2, e_2)$ such that $Ax_2 + e_2 = y - \kappa_2$. If such an $(x_2, e_2)$ exists, and $|e_2| < |e_0|_{\max}$, record $1\|x_2$ as the second preimage.
If $\kappa_2 = As + e$ for some $s$ and $e$ such that $|e| \ll B_f < |e_0|_{\max}$, then this procedure will return two preimages (except with negligible probability, which happens when $y$ comes from the negligibly-sized part of the support of a distribution $f_\kappa(b, x)$ which is not in the support of the distribution $f_\kappa(\neg b, x + (-1)^b s)$; this can occur if $y$ is a sample such that $|e_0| + |e| > |e_0|_{\max}$, using notation from Definition A.1). Assuming that the latter is not the case, in step 1 the algorithm above will recover $x$ such that $y = Ax + e_0$ for some $|e_0| < |e_0|_{\max}$, because (under our assumption, and by linearity) $y$ is always of the form $Ax + e_0$. In step 2, it will recover $x' = x - s$, because $x' = x - s$ will satisfy the equation $y - (As + e) = Ax' + e'$ for $e' = e_0 - e$, and $|e_0 - e| \le |e_0| + |e| \le |e_0|_{\max}$. We know that $y$ has two preimages under our assumption, so we conclude that, when our assumption holds, the algorithm returns all of the preimages of $y$ under $\eta_\kappa$ and no others. In the negligible fraction of cases when $y$ has only one preimage even though $\kappa_2 = As + e$, the algorithm returns one preimage, which is also the correct number.
It can be seen by similar reasoning that, when $\kappa_2 = u$ for $u \ne As + e$ for any $e$ such that $|e| < B_g$, this procedure will return exactly one preimage, which is what we expect when $\kappa_2 = u$.
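Under the same assumptions as in the earlier sketches (a wrapper `invert` around trapdoor inversion, vectors over $\mathbb{Z}_q$, and an illustrative centred norm), the two-step recovery procedure from the proof can be sketched as follows:

```python
def _mag(e, q):
    """Centred infinity-norm over Z_q (illustrative norm)."""
    return max(min(int(c) % q, q - (int(c) % q)) for c in e)

def recover_preimages(invert, kappa2, y, B_e0, q):
    """Recover the preimages of y under eta_kappa (proof of Claim A.3).

    `invert` is assumed to wrap trapdoor inversion: given a target
    vector t over Z_q, it returns (x, e) with Ax + e = t mod q, or
    None when no small-error solution exists. B_e0 plays the role of
    |e0|_max.
    """
    preimages = []
    r1 = invert(y)                                   # step 1: y = A x1 + e1 ?
    if r1 is not None and _mag(r1[1], q) < B_e0:
        preimages.append((0, r1[0]))
    target = [(yi - ki) % q for yi, ki in zip(y, kappa2)]
    r2 = invert(target)                              # step 2: y - kappa2 = A x2 + e2 ?
    if r2 is not None and _mag(r2[1], q) < B_e0:
        preimages.append((1, r2[0]))
    return preimages        # two entries for an f-key, one for a g-key
```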
In the context of Protocol 3.2, the honest prover knows that $\eta_{\kappa_i}$ has been evaluated correctly for all $i$, because the prover evaluated these functions for itself. Therefore, given Claim A.3, if our goal is to show that the honest prover can efficiently determine whether or not a purported trapdoor $\tau'_{A_i}$ can be used to recover all the preimages of $y_i$ under $\eta_{\kappa_i}$, with $\kappa_i = (A_i, \kappa_{2,i})$, it is sufficient to show that a procedure exists to efficiently determine whether or not $\tau'_{A_i}$ truly 'inverts $A_i$', i.e. recovers $(s, e)$ correctly from all possible $r = A_i s + e$ with $e$ having sufficiently small magnitude. This procedure exists in the form of Invert from [MP11].
### B Completeness and soundness of Protocol 2.6
For notational convenience, define

$$\alpha = \frac{a}{\sum_s 2|d_s|} \qquad \text{and} \qquad \beta = \frac{b}{\sum_s 2|d_s|}.$$

Fix an arbitrary state $\rho$ sent by the prover. For $j = 1, \ldots, m$ let $X_j$ be a Bernoulli random variable that is 1 if the $j$-th measurement from step 4 of Protocol 2.6 yields $-\mathrm{sign}(d_j)$ and 0 otherwise. Let $X = \sum_{j=1}^m X_j$ and $B_j = \mathrm{E}[X \mid X_j, \ldots, X_1]$. Then $(B_1, \ldots, B_m)$ is a martingale. Applying Azuma's inequality, for any $t \ge 0$,

$$\Pr\big[\,|X - \mathrm{E}[X]| \ge t\,\big] \le 2e^{-\frac{t^2}{2m}}.$$

In the case of an instance $x \notin L$, as mentioned in the main text, $\mathrm{E}[X_j] \le \frac{1}{2} - \beta$. Choosing $t = \frac{1}{2}m(\beta - \alpha)$, it follows that in this case

$$\Pr\Big[X \ge \tfrac{1}{2}m(1 - \beta - \alpha)\Big] \le 2e^{-m(\beta - \alpha)^2/8}.$$

Since $\beta - \alpha$ is inverse polynomial, by [MNS16], the right-hand side can be made exponentially small by choosing $m$ to be a sufficiently large constant times $\frac{|x|}{(\beta - \alpha)^2}$. The soundness of Protocol 2.6 follows.

Completeness follows immediately from a similar computation using the Chernoff bound, since in this case we can assume that the witness provided by the prover is in tensor product form.
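As a quick numerical illustration of the soundness bound, the following sketch computes the number of repetitions $m$ needed to push the right-hand side below a target error:

```python
import math

def rounds_needed(gap, target_error):
    """Smallest m with 2*exp(-m*gap^2/8) <= target_error,
    where gap = beta - alpha, per the Azuma bound above."""
    return math.ceil(8 * math.log(2 / target_error) / gap**2)

# e.g. an inverse-polynomial gap of 0.01 and soundness error 2^-40:
print(rounds_needed(0.01, 2**-40))   # about 2.3 million repetitions
```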
### C Commitment scheme
We provide an informal description of a generic form for a particular (and commonly seen) kind
of commitment scheme. The protocol for making a commitment under this scheme requires three
messages in total between the party making the commitment, whom we refer to as the committer,
and the party receiving the commitment, whom we call the recipient. The first message is an
initial message i from the recipient to the committer; the second is the commitment which the
committer sends to the recipient; and the third message is a reveal message from the committer
to the recipient. The scheme consists of a tuple of algorithms (gen, initiate, commit, reveal, verify) defined as follows:

- gen(1^ℓ) takes as input a security parameter, and generates a public key pk.
- initiate(pk) takes as input a public key and generates an initial message i (which the recipient should send to the committer).
- commit(pk, i, m, s) takes as input a public key pk, an initial message i, a message m to which to commit, and a random string s, and produces a commitment string z.
- reveal(pk, i, z, m, s) outputs the inputs it is given.
- verify(pk, i, z, m, s) takes as arguments an initial message i, along with a purported public key, commitment string, committed message and random string, evaluates commit(pk, i, m, s), and outputs 1 if and only if z = commit(pk, i, m, s).
For brevity, we sometimes omit the public key pk and the initial message i as arguments in the body of the paper. The commitment schemes which we assume to exist in the paper have the following security properties:

- _Perfectly binding:_ if commit(pk, i, m, s) = commit(pk, i, m′, s′), then (m, s) = (m′, s′).
- _(Quantum) computationally concealing:_ for any public key pk ← gen(1^ℓ), fixed initial message i, and any two messages m, m′, the distributions over s of commit(pk, i, m, s) and commit(pk, i, m′, s) are quantum computationally indistinguishable.
It is known that a commitment scheme with the above form and security properties exists assuming
the quantum hardness of LWE: see Section 2.4.2 of [CVZ19]. The commitment scheme outlined in
that work is analysed in the common reference string (CRS) model, but the analysis can easily be
adapted to the standard model when an initial message i is allowed to pass from the recipient to
the committer.
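For illustration only, here is a toy Python sketch of the (gen, initiate, commit, reveal, verify) interface. It uses a hash in place of the LWE-based construction of [CVZ19], so it is only computationally (not perfectly) binding; it shows nothing more than the message flow:

```python
import hashlib
import os

def gen(security_parameter: int) -> bytes:
    # toy public key: random bytes (a real scheme derives structured keys)
    return os.urandom(security_parameter // 8)

def initiate(pk: bytes) -> bytes:
    # recipient's initial message i
    return os.urandom(len(pk))

def commit(pk: bytes, i: bytes, m: bytes, s: bytes) -> bytes:
    # toy commitment: hash of all inputs (NOT perfectly binding)
    return hashlib.sha256(pk + i + m + s).digest()

def reveal(pk, i, z, m, s):
    # the reveal message simply discloses all inputs
    return (pk, i, z, m, s)

def verify(pk, i, z, m, s) -> bool:
    return z == commit(pk, i, m, s)

# usage: commit to a message and verify the opening
pk = gen(256)
i = initiate(pk)
s = os.urandom(32)
z = commit(pk, i, b"message", s)
assert verify(*reveal(pk, i, z, b"message", s))
```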
### References
[ABE10] Dorit Aharonov, Michael Ben-Or, and Elad Eban. Interactive proofs for quantum computations. In Andrew Chi-Chih Yao, editor, Innovations in Computer Science (ICS 2010), pages 453–469. Tsinghua University Press, 2010.

[ABOEM17] Dorit Aharonov, Michael Ben-Or, Elad Eban, and Urmila Mahadev. Interactive proofs for quantum computations. arXiv preprint arXiv:1704.04487, 2017.

[ACGH19] Gorjan Alagic, Andrew M. Childs, Alex B. Grilo, and Shih-Han Hung. Non-interactive classical verification of quantum computation. arXiv preprint arXiv:1911.08101, 2019.

[BCC88] Gilles Brassard, David Chaum, and Claude Crépeau. Minimum disclosure proofs of knowledge. Journal of Computer and System Sciences, 37(2):156–189, 1988. doi:10.1016/0022-0000(88)90005-0.

[BFK09] Anne Broadbent, Joseph Fitzsimons, and Elham Kashefi. Universal blind quantum computation. In 50th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2009), pages 517–526. IEEE, 2009. doi:10.1109/focs.2009.36.

[BG19] Anne Broadbent and Alex B. Grilo. Zero-knowledge for QMA from locally simulatable proofs. arXiv preprint arXiv:1911.07782, 2019.

[BJSW16] Anne Broadbent, Zhengfeng Ji, Fang Song, and John Watrous. Zero-knowledge proof systems for QMA. In 57th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2016), pages 31–40. IEEE, 2016. doi:10.1109/focs.2016.13.

[BL08] Jacob D. Biamonte and Peter J. Love. Realizable Hamiltonians for universal adiabatic quantum computers. Physical Review A, 78:012352, 2008. doi:10.1103/physreva.78.012352.

[BOGG+88] Michael Ben-Or, Oded Goldreich, Shafi Goldwasser, Johan Håstad, Joe Kilian, Silvio Micali, and Phillip Rogaway. Everything provable is provable in zero-knowledge. In Advances in Cryptology (CRYPTO '88), volume 403 of LNCS, pages 37–56, 1988. doi:10.1007/0-387-34799-2_4.

[BS19] Nir Bitansky and Omri Shmueli. Post-quantum zero knowledge in constant rounds. arXiv preprint arXiv:1912.04769, 2019.

[CVZ19] Andrea Coladangelo, Thomas Vidick, and Tina Zhang. Non-interactive zero-knowledge arguments for QMA, with preprocessing. arXiv preprint arXiv:1911.07546, 2019.

[FK17] Joseph F. Fitzsimons and Elham Kashefi. Unconditionally verifiable blind quantum computation. Physical Review A, 96(1):012303, 2017. doi:10.1103/physreva.96.012303.

[GMR89] Shafi Goldwasser, Silvio Micali, and Charles Rackoff. The knowledge complexity of interactive proof systems. SIAM Journal on Computing, 18(1):186–208, 1989. doi:10.1137/0218012.

[GMW91] Oded Goldreich, Silvio Micali, and Avi Wigderson. Proofs that yield nothing but their validity or all languages in NP have zero-knowledge proof systems. Journal of the ACM, 38(3):690–728, 1991. doi:10.1145/116825.116852.

[KSVV02] Alexei Yu. Kitaev, Alexander Shen, and Mikhail N. Vyalyi. Classical and Quantum Computation. Graduate Studies in Mathematics 47. American Mathematical Society, 2002. doi:10.1090/gsm/047.

[Mah18] Urmila Mahadev. Classical verification of quantum computations. In 59th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2018), pages 259–267, 2018. doi:10.1109/focs.2018.00033.

[MF16] Tomoyuki Morimae and Joseph F. Fitzsimons. Post hoc verification with a single prover. arXiv preprint arXiv:1603.06046, 2016.

[MNS16] Tomoyuki Morimae, Daniel Nagaj, and Norbert Schuch. Quantum proofs can be verified using only single-qubit measurements. Physical Review A, 93:022326, 2016. doi:10.1103/physreva.93.022326.

[MP11] Daniele Micciancio and Chris Peikert. Trapdoors for lattices: simpler, tighter, faster, smaller. Cryptology ePrint Archive, Report 2011/501, 2011. doi:10.1007/978-3-642-29011-4_41.

[MW05] Chris Marriott and John Watrous. Quantum Arthur–Merlin games. Computational Complexity, 14(2):122–152, 2005.

[Reg09] Oded Regev. On lattices, learning with errors, random linear codes, and cryptography. Journal of the ACM, 56(6):34, 2009. doi:10.1145/1568318.1568324.

[RUV13] Ben W. Reichardt, Falk Unger, and Umesh Vazirani. Classical command of quantum systems. Nature, 496(7446):456, 2013. doi:10.1038/nature12035.

[Wat09] John Watrous. Zero-knowledge against quantum attacks. SIAM Journal on Computing, 39(1):25–58, 2009. doi:10.1137/060670997.
## Conservative Linear Unbiased Estimation Under Partially Known Covariances
##### Robin Forsling, Anders Hansson, Fredrik Gustafsson, Zoran Sjanic, Johan Löfberg, and Gustaf Hendeby
##### Dept. of Electrical Engineering, Linköping University, Linköping, Sweden
**_Abstract_—Mean square error optimal estimation requires the full correlation structure to be available. Unfortunately, it is not always possible to maintain full knowledge about the correlations. One example is decentralized data fusion where the cross-correlations between estimates are unknown, partly due to information sharing. To avoid underestimating the covariance of an estimate in such situations, conservative estimation is one option. In this paper the conservative linear unbiased estimator is formalized including optimality criteria. Fundamental bounds of the optimal conservative linear unbiased estimator are derived. A main contribution is a general approach for computing the proposed estimator based on robust optimization. Furthermore, it is shown that several existing estimation algorithms are special cases of the optimal conservative linear unbiased estimator. An evaluation verifies the theoretical considerations and shows that the optimization based approach performs better than existing conservative estimation methods in certain cases.**

**_Index Terms_—Conservative estimation, robust optimization, unknown cross-correlations, covariance intersection, decentralized estimation.**
I. INTRODUCTION

Optimal estimation of parameters in a linear regression is a well studied subject. Minimum variance unbiased estimators such as the best linear unbiased estimator (BLUE) require full knowledge about the measurement covariance [1]. If the covariance structure is only partially known, one
solution is to use a conservative estimator that does not
provide a too optimistic uncertainty. That is, a conservative
estimator guarantees that the covariance of an estimate is not
underestimated. In the literature this property is often denoted
_consistency or covariance consistency [2]. Estimation in a_
regression with unknown or partially known covariances goes
at least as far back as [3]. Real-world examples of when the
covariance structure is only partially known are found in [4, 5].
A comprehensive survey of estimation under unknown cross-correlations is provided in [6].
The conservative estimation problem has earlier been studied in a fusion context only, see, e.g., [7–12]. Fusion denotes
the estimation problem where multiple estimates of the same
parameters are merged into an improved estimate [13]. This
can be seen as a special case of a general linear regression
This work has been supported by the Industry Excellence Center LINK-SIC funded by The Swedish Governmental Agency for Innovation Systems (VINNOVA) and Saab AB, and by the project Scalable Kalman Filters funded by the Swedish Research Council (VR). G. Hendeby has received funding from the Center for Industrial Information Technology at Linköping University (CENIIT).
Fig. 1. Merging of two estimates with covariances $R_1$ and $R_2$. Multiple BLUEs for different cross-correlations (zero, maximum, and several nonzero values) are shown. A conservative bound is also shown.
where two direct observations of the same unknown parameter
vector are available. Optimal fusion of two estimates with
covariances $R_1$ and $R_2$ requires the cross-correlation $R_{12}$ to
be known. If so, the BLUE yields optimal fusion. Under
unknown R12 a conservative estimator has to consider all
possible instances of R12. A geometrical interpretation of this
is given in Fig. 1, where R1 and R2 are represented as ellipses,
and the task is to compute a new ellipse that summarizes
the total information without overestimating it, i.e., we seek a
conservative bound. The BLUE covariance ellipses for zero,
maximum and several nonzero correlations are illustrated in
Fig. 1. Since a conservative bound must take into account
all possible values of R12 it must be simultaneously larger
than all the individual BLUEs and hence, for instance, cannot
be the BLUE assuming either zero correlations or maximum
correlations. The literature describes basically three different
methods corresponding to slightly different assumptions on
_R12. These are covariance intersection (CI, [7]), the largest_
_ellipsoid (LE, [9]) method and inverse covariance intersection_
(ICI, [12]).
In this paper we generalize the existing theory to the general
linear regression framework based on the work initiated in
[14]. The BLUE approach is formulated as an optimization
problem. This enables us to formulate the conservative linear
_unbiased estimator (CLUE) as a similar optimization problem._
A major point is that standard optimization software can
be applied to compute a CLUE in general cases, which is
something that is currently not possible [6]. To evaluate both
the theory and optimization algorithms, we study a number of
fusion problems, which means that we can compare with the
existing methods from the literature. For the purpose of that comparison, we also provide a review of these fusion methods in the same notational framework and show that these are special cases of
the CLUE. The following are the main contributions:

- A framework which unifies existing conservative estimation methods, facilitates the development of new methods, and can serve as a tool in the analysis of conservative estimation problems.
- A number of derived properties of the general linear regression CLUE.
- A methodology for computing a CLUE in general cases using standard optimization software.
- A theorem that states under which conditions the LE method is an optimal CLUE.
II. BACKGROUND
Necessary mathematical notation and theory of linear estimation are introduced below. This is followed by a literature
review of decentralized and distributed estimation.
_A. Notation_
Let $\mathbb{R}$, $\mathbb{R}^n$ and $\mathbb{R}^{m \times n}$ denote the set of real numbers, the set of real-valued $n$-dimensional vectors and the set of real-valued $m \times n$ matrices, respectively. Let $\mathbb{S}_+^n$ and $\mathbb{S}_{++}^n$ denote the set of $n \times n$ symmetric positive semidefinite (PSD) matrices and the set of $n \times n$ symmetric positive definite (PD) matrices, respectively. For $A, B \in \mathbb{S}_+^n$, the inequalities $A \succeq B$ and $A \succ B$ are equivalent to $(A - B) \in \mathbb{S}_+^n$ and $(A - B) \in \mathbb{S}_{++}^n$, respectively. The ellipsoid of a matrix $A \in \mathbb{S}_{++}^n$ is given by the set of points $\mathcal{E}(A) = \{x \in \mathbb{R}^n \mid x^T A^{-1} x \le 1\}$. If $A$ is a covariance matrix then $\mathcal{E}(A)$ describes an uncertainty. A larger ellipsoid means a larger uncertainty, and for $A, B \in \mathbb{S}_{++}^n$ it is also true that

$$A \succ B \iff \mathcal{E}(A) \supset \mathcal{E}(B), \tag{1a}$$

$$A \succeq B \iff \mathcal{E}(A) \supseteq \mathcal{E}(B). \tag{1b}$$

Appendix A provides a summary of matrix properties used in this paper.
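As a small numerical illustration (not from the paper), the matrix ordering in (1) can be checked via the eigenvalues of $A - B$:

```python
import numpy as np

def psd_geq(A, B, tol=1e-9):
    """Check A >= B in the PSD (Loewner) order, i.e. A - B is PSD."""
    return bool(np.all(np.linalg.eigvalsh(A - B) >= -tol))

A = np.array([[4.0, 1.0], [1.0, 3.0]])
B = np.array([[2.0, 0.5], [0.5, 1.0]])
print(psd_geq(A, B))   # True, so E(B) is contained in E(A), cf. (1b)
```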
_B. Preliminaries_

The Fisherian estimation philosophy is adopted, which means that the state $x^0$ to be estimated is deterministic. The overall problem is to derive an estimate $\hat{x}$ of $x^0 \in \mathbb{R}^n$ from measurements $y \in \mathbb{R}^m$ given as a linear regression

$$y = Hx^0 + v, \tag{2}$$

where $H \in \mathbb{R}^{m \times n}$ and $v$ is zero-mean random noise. It is assumed that $m \ge n$ and $\operatorname{rank}(H) = n$. If $\operatorname{rank}(H) < n$ then $y$ is insufficient for the considered problem in the sense that it is not possible to estimate all components of $x^0$. In terms of a least squares estimator $\hat{x}$ there would be infinitely many solutions $\hat{x}$ in the rank deficient case [15]. The true covariance of $y$ is given by $R^0 = \operatorname{cov}(y) = \mathrm{E}(y - \mathrm{E}y)(y - \mathrm{E}y)^T$, where $\mathrm{E}$ is the expectation operator. The covariance of $\hat{x}$ is denoted by $P$.

In linear estimation $\hat{x} = Ky$, where $K \in \mathbb{R}^{n \times m}$ is the estimation gain. (Since $\mathrm{E}\,v = 0$, only linear estimators $\hat{x} = Ky$ are considered; the more general affine estimator $\hat{x} = Ky + b$ is not needed in this setting.) An estimator is unbiased if $\mathrm{E}\,\hat{x} = x^0$. For a linear estimator with $y$ as in (2) this implies $KH = I$, where $I$ is the identity matrix of appropriate dimension. The true covariance of $\hat{x} = Ky$ is given by $\operatorname{cov}(\hat{x}) = \mathrm{E}(\hat{x} - x^0)(\hat{x} - x^0)^T = KR^0K^T$. A linear estimator is completely defined by $(K, P)$.

Fusion is a subclass of estimation problems where the components of $y$ are estimates $y_1, y_2, \ldots$ to be merged into a more accurate estimate. Here it is assumed that $y_i = H_i x^0 + v_i$ and $v_i$ is zero-mean noise. In linear fusion an estimate is computed from $N$ estimates $y_1, \ldots, y_N$ according to

$$\hat{x} = \begin{bmatrix} K_1 & \cdots & K_N \end{bmatrix} \begin{bmatrix} y_1 \\ \vdots \\ y_N \end{bmatrix} = Ky, \tag{3a}$$

$$P = \begin{bmatrix} K_1 & \cdots & K_N \end{bmatrix} \begin{bmatrix} R_1^0 & \cdots & R_{1N}^0 \\ \vdots & \ddots & \vdots \\ R_{N1}^0 & \cdots & R_N^0 \end{bmatrix} \begin{bmatrix} K_1^T \\ \vdots \\ K_N^T \end{bmatrix} = KR^0K^T, \tag{3b}$$

where $R_i^0 = \operatorname{cov}(y_i)$ and $R_{ij}^0 = \operatorname{cov}(y_i, y_j)$ is the cross-covariance between $y_i$ and $y_j$. When speaking of cross-correlations we mean $R_{ij}^0$. The linear fusion problem of (3) is structurally equivalent to the linear estimation problem.
_C. Related Research_
Two estimates $y_1$ and $y_2$ are optimally fused using the Bar-Shalom–Campo [16] formulas if the cross-correlation $R_{12}^0$ is available. Cross-correlations are in general unknown [7, 17],
but nevertheless need to be handled carefully. Otherwise there
is an immediate risk of double counting information. In
[18] the cross-correlations are compensated for by subtracting
previously accounted information. However, this requires some
sort of bookkeeping mechanism which is not always possible
in practice. A related concept is the channel filter [19] which
allows for compensation of cross-correlations in certain sensor network topologies. In [20–23] several consensus-based
methods are proposed. These approaches also make specific
assumptions about the sensor network topology, and then use
averaging to drive the estimates towards consensus. Another
class of methods are distributed Kalman filtering algorithms—
see, e.g., [24–26]—which restrict the estimates to be merged
to follow specific filtering schemes.
The methods described above are useful in a vast amount
of applications given that their conditions are met. A problem
is that these conditions are not always satisfied. For instance,
there are situations where: (i) there is no knowledge about
the sensor network topology; (ii) the history of exchanged
data is unavailable; (iii) the filtering schemes, if any, deployed
by the other nodes in the sensor network are unknown. If
some or all of (i)–(iii) hold then the problem is more or
less structureless and we are basically forced to rely upon
_conservative estimation [27]. Detailed descriptions of the three_
main methods of conservative linear estimation are provided in
Sec. VI. Theoretical aspects of conservative linear estimation
have been studied to some extent. See, e.g., [8, 17, 28, 29] for
theoretical work on CI. ICI is further examined in [30, 31].
A main aspect of conservative estimation is partial knowledge
about cross-correlations. Exploiting partial knowledge is studied in [10–12, 32–34].
This paper considers estimation in a linear regression with an uncertainty in the model. Here we use $P \succeq \operatorname{cov}(\hat{x})$ as the necessary condition for an estimator given by $\hat{x}$ and $P$ to be called conservative. In [35] a closely related problem is studied where the authors instead use $\operatorname{tr}(P) \ge \operatorname{tr}(\operatorname{cov}(\hat{x}))$ as the necessary condition for a conservative estimator. The resulting algorithm is a minimax optimization method that computes a worst-case estimate given the uncertainty in the model. Other minimax formulations for estimation under model uncertainties are derived in [36–39]. To be able to apply a worst-case approach, a worst-case element must exist. Unfortunately, for the general problem considered in this paper a worst-case element is not unambiguously defined. To ensure $P \succeq \operatorname{cov}(\hat{x})$, a conservative estimator must instead simultaneously consider multiple elements which may all be worst-case in different senses. We elaborate further on this topic in Sec. III-D to illustrate why a minimax formulation is not possible for the general problem studied in this paper.
III. LINEAR UNBIASED ESTIMATION AS OPTIMIZATION
In this section the problem is formulated. First, the CLUE is
defined. The BLUE is then introduced using an optimization
formulation. We finally generalize the BLUE concept for the
optimal CLUE problem.
_A. Conservative Linear Unbiased Estimation_
In conservative estimation $R^0$ is generally not fully known. Instead $R^0$ belongs to a set $\mathcal{A}$, where $\mathcal{A} \subset \mathbb{S}_{++}^m$ is known. As a result, $\operatorname{cov}(\hat{x})$ cannot be computed. The approach is then to bound $\operatorname{cov}(\hat{x})$ from above, i.e., to find $K$ and $P$ such that $P \succeq \operatorname{cov}(\hat{x})$ with $\hat{x} = Ky$. An estimator which is able to guarantee $P \succeq \operatorname{cov}(\hat{x})$, or equivalently $P \succeq KRK^T,\ \forall R \in \mathcal{A}$, is conservative. It is assumed that the elements of $\mathcal{A}$ have only finite eigenvalues, which means that $R^0$ has only finite eigenvalues.

A CLUE which computes an estimate $\hat{x}$ and covariance $P$ is characterized by the following properties:

$$\hat{x} = Ky \ \wedge\ KH = I \ \wedge\ P \succeq KRK^T,\ \forall R \in \mathcal{A}, \tag{4}$$

where $y$ is defined according to (2). The problem studied in this paper can now be formulated as: For $y$ given according to (2) and a given set $\mathcal{A}$, derive a CLUE $(K, P)$ where $P$ is as small as possible.

As an example of a conservative estimator and a nonconservative estimator, consider a case where $x^0 \in \mathbb{R}^2$ and $\mathcal{A} = \{R^a, R^b, R^c, R^d, R^e\} \subset \mathbb{S}_{++}^4$. Let $(K, P)$ and $(K', P')$ be two estimators, where $P$ and $P'$ are given by their ellipses in Fig. 2. Since $\mathcal{E}(P) \supseteq \mathcal{E}(KRK^T),\ \forall R \in \mathcal{A}$, and hence $P \succeq KRK^T,\ \forall R \in \mathcal{A}$, we conclude that $(K, P)$ is conservative given $\mathcal{A}$. By a similar geometrical reasoning we see that $P' \succeq K'R(K')^T$ cannot hold for all $R \in \mathcal{A}$, and we therefore conclude that $(K', P')$ is not a conservative estimator.

_Remark 1._ The authors of [8, 12] use the notion of admissibility, where admissible matrices are those $R$ that are permitted given the problem formulation. Here $\mathcal{A}$ is used to represent the set of all admissible matrices.
Fig. 2. An example of a conservative estimator $(K, P)$ and a nonconservative estimator $(K', P')$. The ellipse $\mathcal{E}(P)$ contains every $\mathcal{E}(KRK^T)$, $R \in \mathcal{A}$, whereas $\mathcal{E}(P')$ fails to contain every $\mathcal{E}(K'R(K')^T)$, $R \in \mathcal{A}$.

_B. Best Linear Unbiased Estimation_

In the classical setting, which is a special case of a CLUE with $\mathcal{A} = \{R^0\}$, it is well-known that a linear unbiased estimator with the smallest mean square error is given by the BLUE [1]. The BLUE is defined in Definition 1, where also a loss function $J: \mathbb{R}^{n \times n} \to \mathbb{R}$ is introduced. Throughout this work it is assumed that $J$ is matrix increasing [40], see Appendix A.

**Definition 1 (Best Linear Unbiased Estimator).** Let $y = Hx^0 + v$ where $\operatorname{cov}(y) = R^0$. An estimator $\hat{x}^\star = K^\star y$ where $P^\star = K^\star R^0 (K^\star)^T$ is called the best linear unbiased estimator if $K^\star$ is the solution to

$$\begin{aligned} \underset{K,P}{\operatorname{minimize}} \quad & J(P) \\ \text{subject to} \quad & KH = I, \\ & P = KR^0K^T, \end{aligned} \tag{5}$$

for a given matrix increasing function $J$.

The optimization problem in (5) is easily solved and a closed-form solution exists, and therefore this optimization formulation is seldom used. The reason we write the BLUE in this way is made clear shortly. If $R^0$ is invertible, then the BLUE is given by $K^\star = \big(H^T(R^0)^{-1}H\big)^{-1}H^T(R^0)^{-1}$ and hence

$$\hat{x}^\star = \big(H^T(R^0)^{-1}H\big)^{-1}H^T(R^0)^{-1}y, \tag{6a}$$

$$P^\star = \big(H^T(R^0)^{-1}H\big)^{-1}. \tag{6b}$$

According to the Gauss–Markov theorem [15], $KR^0K^T \succeq P^\star$ for any $K$ such that $KH = I$. Hence, the same solution $P^\star$ is obtained irrespective of the choice of matrix increasing function $J$.
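As a numerical illustration of (6), here is a minimal sketch with randomly generated (hypothetical) $H$ and $R^0$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 2, 6
H = rng.standard_normal((m, n))
R0 = np.diag(rng.uniform(0.5, 2.0, size=m))   # known measurement covariance

Ri = np.linalg.inv(R0)
P_star = np.linalg.inv(H.T @ Ri @ H)          # (6b)
K_star = P_star @ H.T @ Ri                    # gain in (6a)

x0 = np.array([1.0, -2.0])
y = H @ x0 + rng.multivariate_normal(np.zeros(m), R0)
x_hat = K_star @ y                            # (6a)
assert np.allclose(K_star @ H, np.eye(n))     # unbiasedness: KH = I
```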
_C. Best Conservative Linear Unbiased Estimation_

In the general case where $\mathcal{A}$ is not a singleton the BLUE is not well-defined. The typical reason for this is that the cross-correlations, e.g., $R_{12}^0$, are unknown. Still it is desirable to design an estimator similar to the BLUE in Definition 1. A best CLUE is defined in Definition 2. A best CLUE reduces to the BLUE if $\mathcal{A} = \{R^0\}$. Similar formulations have been proposed in [17, 28, 29].

**Definition 2 (Best Conservative Linear Unbiased Estimator).** Let $y = Hx^0 + v$. Assume $\operatorname{cov}(y) = R^0 \in \mathcal{A} \subset \mathbb{S}_{++}^m$. An estimator reporting $\hat{x}^\star = K^\star y$ and $P^\star$ is called a best conservative linear unbiased estimator if $(K^\star, P^\star)$ is the solution to

$$\begin{aligned} \underset{K,P}{\operatorname{minimize}} \quad & J(P) \\ \text{subject to} \quad & KH = I, \\ & P \succeq KRK^T, \ \forall R \in \mathcal{A}, \end{aligned} \tag{7}$$

for a given matrix increasing function $J$.
Fig. 3. Overview of the conservative linear unbiased estimation problem. The special case with $\mathcal{A} = \{R^0\}$ is illustrated to the left, where the optimal estimator is the BLUE with the exact and unique closed-form solution $K^\star = P^\star H^T(R^0)^{-1}$, $P^\star = \big(H^T(R^0)^{-1}H\big)^{-1}$. The general case $\mathcal{A} = \{R^a, R^b, \ldots, R^0, \ldots\}$ is illustrated to the right, where the optimal estimator is the best CLUE; fundamental bounds, a general CLUE using robust optimization, and special cases constitute the scope of this paper.
A solution to (7) is given by a pair $(K^\star, P^\star)$, which is one example of a feasible point [40] as this pair satisfies all constraints of the problem. The set of all feasible points is called the feasible set. In particular, the feasible set of the problem in (7) equals the set of all CLUEs.

The BLUE problem has a unique solution; the same is not true for the best CLUE problem, for which the choice of loss function $J$ is crucial. While the BLUE finds a minimum element $P^\star$ of the feasible set of the problem in Definition 1, the best CLUE $P^\star$ is a minimal element of the feasible set of the problem in Definition 2. See Appendix A for the definitions of minimum and minimal elements, including procedures for how they can be found.
Since the best CLUE problem boils down to finding a minimal element, the natural loss function is $\operatorname{tr}(W\cdot)$, where $W \in \mathbb{S}_{++}^n$, see Appendix A. The reason for using a more general matrix increasing $J$ is that it includes, e.g., the determinant, which is a common loss function in the literature and is not obviously related to $\operatorname{tr}(W\cdot)$. However, it is shown in Appendix A that any matrix increasing function can be used to find minimal elements. The literature suggests that the trace and the determinant are the most commonly used loss functions. Minimizing the trace is related to minimizing the variance, and minimizing the determinant is related to minimizing the entropy [41].
_D. Relation to Minimax Optimization_
In Sec. II-C it was discussed that related problems are solved
in [35–39] using minimax formulations. These papers consider
scalar loss functions, cf. J(P ), but they do not impose the
PSD constraint P _KRK_ [T], _R_ which is a necessary
_⪰_ _∀_ _∈A_
condition for a CLUE. Using a relaxed constraint J(P )
_≥_
_J(KRK_ [T]), _R_, e.g., as is used in [35] with J( ) = tr( ),
_∀_ _∈A_ _·_ _·_
it is possible to derive a minimax optimization problem where
_R[a]_ =
The two BLUEs for each of the elements in are given by
_A_
_Ka = PaH_ [T](R[a])[−][1], _Pa =_ �H [T](R[a])[−][1]H�−1,
_Kb = PbH_ [T](R[b])[−][1], _Pb =_ �H [T](R[b])[−][1]H�−1 .
For a solution (K, P) to satisfy tr(P) ≥ tr(KR^aK^T) and
tr(P) ≥ tr(KR^bK^T) it must necessarily satisfy

    tr(P) ≥ max(tr(Pa), tr(Pb)),

as a consequence of the Gauss-Markov theorem [15]. In this
case we have

    tr(Pa) = tr(Ka R^a Ka^T) = 4,     tr(Ka R^b Ka^T) = 4,
    tr(Pb) = tr(Kb R^b Kb^T) = 2.63,  tr(Kb R^a Kb^T) = 4.41,

i.e., tr(Pa) = tr(Ka R^b Ka^T) = 4, which is as small as it
possibly can be. We hence have that (Pa, Ka) is a solution to
the minimax problem suggested by relaxing P ⪰ KRK^T, ∀R ∈ A,
into tr(P) ≥ tr(KRK^T), ∀R ∈ A. Meanwhile

    Pa − Ka R^b Ka^T = [0 −1; −1 0] ⋡ 0,

which violates P ⪰ KRK^T, ∀R ∈ A, and (Pa, Ka) is
therefore not a feasible solution w.r.t. (2). This counterexample
shows that a minimax formulation does not in general apply
for the best CLUE problem in (2). The minimax formulation
is therefore not pursued further.
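The numbers above are easy to reproduce. The following is a minimal MATLAB sketch (plain MATLAB, no toolboxes; all variable names are ours, not from the paper's code) that recomputes the traces and exhibits the indefiniteness of Pa − Ka R^b Ka^T:

H  = [eye(2); eye(2)];
Ra = [2 0 2 0; 0 4 0 2; 2 0 4 0; 0 2 0 2];
Rb = [2 0 0 1; 0 4 1 0; 0 1 4 0; 1 0 0 2];
Pa = inv(H'/Ra*H); Ka = Pa*(H'/Ra);   % BLUE w.r.t. R^a
Pb = inv(H'/Rb*H); Kb = Pb*(H'/Rb);   % BLUE w.r.t. R^b
[trace(Ka*Ra*Ka') trace(Ka*Rb*Ka')]   % 4 and 4, as stated in the text
[trace(Kb*Rb*Kb') trace(Kb*Ra*Kb')]   % 2.63 and 4.41, as stated in the text
eig(Pa - Ka*Rb*Ka')                   % eigenvalues -1 and 1: not PSD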
_E. Proposed Framework_
Definition 2 is the backbone of the proposed framework for
conservative linear unbiased estimation. A similar optimization
formulation has been proposed in, e.g., [29] for the CI case. In
this paper we further develop this concept to reach a general
framework for conservative linear unbiased estimation. The
rest of this paper is devoted to the CLUE concept: Problem
properties are analyzed in Sec. IV. A major motivation for
the optimization formulation of a CLUE is that standard
optimization software can be applied to compute a CLUE and
in some cases guarantee a best CLUE. This is the topic of
Sec. V. In Sec. VI some of the existing conservative estimation
methods are shown to be CLUE and sometimes even best
CLUE under certain assumptions on A. To evaluate both
theory and optimization algorithms, we study a number of
fusion problems in Sec. VII. Fig. 3 illustrates conservative
linear unbiased estimation and the special case of linear
unbiased estimation.
IV. PROBLEM PROPERTIES
The optimization problems to find the BLUE and best
CLUE are very similar. However, while a closed form solution
is available for the BLUE, the additional uncertainty in the best
CLUE formulation makes the problem much harder to solve
and no general solution procedure is available. This section
highlights differences between the two optimization problems,
and derives a simplified optimization problem providing a
lower bound Pl of the obtainable covariance of the CLUE.
Also an upper bound Pu is provided implying that P _[⋆]_ lies in
an interval 0 ≺ Pl ⪯ P⋆ ⪯ Pu, where Pl and Pu depend on A.
_A. Lower Bound on Best CLUE_
The CLUE cannot be better than the BLUE for any R ∈ A.
As a consequence, a lower bound of the CLUE covariance can
therefore be computed as the smallest covariance larger than
all BLUE covariances, which is an easier optimization problem
than the best CLUE problem. In analogy to the Cramér-Rao
_lower bound [42] this lower bound can be used as a guideline_
for system design, e.g., to trade off communication bandwidth
against performance. It should be noted that this
formulation relaxes the constraints, and hence there is no
guarantee that a gain K achieving this covariance exists in the
general case. Below we derive a lower bound Pl for the best
CLUE covariance P _[⋆], where subscript l refers to quantities_
related to the lower bound. It is shown that J(P _[⋆]) ≥_ _J(Pl)._
If a CLUE (K, P ) satisfies J(P ) = J(Pl) then this CLUE is
also a best CLUE.
Instead of the best CLUE, consider the problem

    minimize_P  J(P)
    subject to  P ⪰ (H^T R^{−1} H)^{−1},  ∀R ∈ A.          (8)
For a given J a solution Pl to (8) is a lower bound[2] on P _[⋆]._
**Theorem 1 (Best CLUE Lower Bound). Let (K** _[⋆], P_ _[⋆]) be_
_given by (7) and let Pl be given by (8). Then J(P_ _[⋆]) ≥_ _J(Pl)._
²Sec. VII-D provides an example where the lower bound is strict.
Proof. By assumption the same matrix increasing J and the
same A are used in both (7) and (8). Since (K⋆, P⋆) solves
(7) we have for each R ∈ A that

    P⋆ ⪰ K⋆ R (K⋆)^T ⪰ (H^T R^{−1} H)^{−1},
where the last inequality follows from the Gauss-Markov
theorem [15] as we have K _[⋆]H = I. Hence, P_ _[⋆]_ satisfies the
constraints in (8) and therefore J(P _[⋆]) ≥_ _J(Pl)._
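For a finite A, the lower-bound problem (8) is an ordinary SDP. A YALMIP sketch is given below, reusing H, Ra, Rb from the example in Sec. III-D; sdpvar, optimize and value are YALMIP functions, the rest is our illustrative choice:

Pl = sdpvar(2);                                % symmetric by default
F  = [Pl >= inv(H'/Ra*H), Pl >= inv(H'/Rb*H)]; % P >= (H'R^{-1}H)^{-1}, R in A
optimize(F, trace(Pl));                        % J(P) = tr(P)
value(Pl)                                      % the lower bound Pl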
An implication of the theorem above is that if Pl given by
(8) is obtained by a CLUE, then this estimator is a best CLUE.
We now derive an estimator with the gain Kl computed from
_Pl, and give sufficient conditions for when (Kl, Pl) is a best_
CLUE. Start by solving for Rl in
    Pl = (H^T Rl^{−1} H)^{−1},                             (9)

which has a solution since Pl ∈ S^n_{++} and since H has full
column rank. Then define

    Kl = (H^T Rl^{−1} H)^{−1} H^T Rl^{−1},                 (10)

which yields an unbiased estimator since Kl H = I.
For (Kl, Pl) to be a CLUE, cf. (4), it must hold that Pl ⪰
Kl R Kl^T, ∀R ∈ A. As it follows from (10) that Kl Rl Kl^T = Pl
with Rl according to (9), a sufficient condition for (Kl, Pl) to
be a CLUE is that Rl ⪰ R, ∀R ∈ A, since then

    Pl = Kl Rl Kl^T ⪰ Kl R Kl^T,  ∀R ∈ A.
The results above are summarized in the following theorem.
**Theorem 2 (Best CLUE From Lower Bound). Assume Pl**
_solves (8) and that Kl is according to (10) with Rl implicitly_
_given by (9) . If Rl ⪰_ _R, ∀R ∈A, then (Kl, Pl) is a best_
_CLUE._
As will be seen in Sec. VII-B3 it is possible to not satisfy
Rl ⪰ R, ∀R ∈ A while still satisfying Pl ⪰ Kl R Kl^T, ∀R ∈ A.
_B. Upper Bound on Best CLUE_
Assume C is given, where C ⪰ R, ∀R ∈ A. Then it is
possible to construct a CLUE as (K, P) with P = KCK^T
for any K subject to KH = I. In particular a CLUE can
be derived by first finding a C ⪰ R, ∀R ∈ A, and then
compute the BLUE w.r.t. this C. Finding a smallest covariance
larger than all possible R ∈ A is a simpler problem than
the best CLUE problem. However, approaching the problem
in this way restricts the feasible set, and therefore a CLUE,
but not necessarily a best CLUE, is obtained. Nevertheless,
it is sometimes useful to bound A tightly using C, see, e.g.,
[35, 43–45]. In such cases the closed-form expression of (12)
below can be used to compute a CLUE. Next we derive an
upper bound Pu on P _[⋆]_ of (7). Subscript u refers to quantities
related to the upper bound.
Introduce the set

    B = { C ∈ S^m | C ⪰ R, ∀R ∈ A },                       (11)

which contains all matrices C ∈ S^m_{++} that are larger than all
elements R ∈ A. A CLUE (Ku, Pu) is then given by

    Ku = (H^T C^{−1} H)^{−1} H^T C^{−1},                   (12a)
    Pu = (H^T C^{−1} H)^{−1},                              (12b)

where C ∈ B and Ku H = I. By a similar reasoning that leads
up to Theorem 2, (Ku, Pu) according to (12) is a CLUE for
any C ∈ B.
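Given any bound C ∈ B, (12) is a closed-form computation. A two-line MATLAB sketch (H and C assumed available from the surrounding problem setup):

Ku = (H'/C*H)\(H'/C);   % (12a), satisfies Ku*H = I
Pu = inv(H'/C*H);       % (12b)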
If C is a minimal element of B in (11) it is also called
a minimal bound on A since there exists no element R′ ⪯
C, R′ ≠ C for which R′ ⪰ R, ∀R ∈ A. In general (12) is too
conservative to be a best CLUE, even if C is a minimal bound³
on A. This fact is also discussed in [29]. We summarize the
results in the following theorem.
**Theorem 3 (Best CLUE Upper Bound). Let (K** _[⋆], P_ _[⋆]) be_
_given by (7) and let (Ku, Pu) be given by (12), where C ∈B_
_with B as in (11). Then J(Pu) ≥_ _J(P_ _[⋆])._
_C. Summary_
If the approach of Sec. IV-A yields a CLUE then (Kl, Pl)
is also a best CLUE; otherwise (Kl, Pl) is too optimistic an
estimator. On the other hand, the estimator (Ku, Pu) given by
(12) is generally too pessimistic to be a best CLUE. If the
lower and upper bounds coincide, then, as a consequence of
Theorem 2, a best CLUE is trivially found.
V. GENERAL CONSERVATIVE LINEAR UNBIASED
ESTIMATION USING ROBUST OPTIMIZATION
In this section it is shown how robust optimization (RO,
[46]) can be used to solve general CLUE problems. Other
cases where RO is used to solve estimation problems are
studied in, for instance, [46, 47].
We begin by showing that our problem fits into the robust semidefinite programming optimization framework [48].
Tractability and optimality are then discussed. With tractability
we mean that a solution can be found within a reasonable
amount of time. Finally, an implementation of conservative
estimation using RO in YALMIP [49] is provided.
_A. Robust Semidefinite Optimization_
Since we deal with optimization problems having semidefinite constraints our focus is on a class of problems called
semidefinite programs (SDP). Let ∆ ∈ D be an uncertain
optimization parameter only known to reside in an uncertainty
set D ⊂ R^d. In RO none of the constraints are allowed to be
violated for any value ∆ ∈ D [50]. A generic SDP with an
inequality constraint uncertainty can be stated as [48]

    minimize_z  f(z)
    subject to  F(z, ∆) ⪰ 0,  ∆ ∈ D,                       (13)
for a loss function f(·). In (13) z is an optimization variable
and F(z, ∆) is a matrix-valued function that depends on z and
³An example of this is provided in Sec. VII-B3.
∆. The constraint in (13) is a linear matrix inequality (LMI)
for any fixed ∆.
The best CLUE problem of Definition 2 is aligned with the
formulation in (13). To see this, replace z by (K, P), D by A,
∆ by R, and f(z) by J(P). Then rewrite the matrix inequality
of (7) using the Schur complement [51] as

    F(K, P, R) = [P  KR; RK^T  R] ⪰ 0.                     (14)

Since (14) is equivalent to P ⪰ 0 ∧ P − KRK^T ⪰ 0 we have
retrieved the problem in Definition 2.
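The equivalence in (14) can be checked numerically. A small MATLAB sketch (all names ours) drawing a random R ≻ 0 and an unbiased gain K:

n = 2; m = 4;
H = [eye(n); eye(n)];
Arnd = randn(m); R = Arnd*Arnd' + eye(m);  % a random R > 0
K = (H'/R*H)\(H'/R);                       % BLUE gain, satisfies K*H = I
P = K*R*K' + 0.1*eye(n);                   % some P >= K*R*K'
min(eig([P K*R; R*K' R]))   % nonnegative, i.e., (14) holds
min(eig(P - K*R*K'))        % nonnegative as well, by the Schur complement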
_B. Tractability And Optimality_
There are a few cases where the computed solution to the
problem in (13), with D = A, is tractable and optimal in the
sense that a minimal element is found when the RO problem is
solved [52]. If A is a finite set then tractability follows trivially
since the uncertainty is replaced by a finite set of LMIs. The
problem is also tractable if A is the convex hull [40] of a finite
set, see below for an example on convex hulls.
The convex hull of a set V is another set which contains all
convex combinations of the elements of V [40]. For example,
consider a set of covariances V = {A, B} ⊂ S^2_+ where
A = [4 −2; −2 1] and B = [4 2; 2 1]. The convex hull of V is

    {θA + (1−θ)B | θ ∈ [0, 1]}
    = { [4θ+4(1−θ)  −2θ+2(1−θ); −2θ+2(1−θ)  θ+(1−θ)] | θ ∈ [0, 1] }
    = { [4  2−4θ; 2−4θ  1] | θ ∈ [0, 1] },

which is equivalent to {[4 c; c 1] | c ∈ [−2, 2]}. In case the
unknown cross-covariance is not a scalar it is generally not
possible to express A as the convex hull of a finite set.
For general uncertainty sets, i.e., general A, there are only a
few constructive results on robust counterparts for the problem
in (13). The case treated here where both the RO problem and
the uncertainty set is defined by semidefinite constraints is
largely untreated in the literature. Not only are exact solutions
absent in contrast to the simple example above, but also
general tractable conservative approximations are missing.
_C. Robust Estimation Using YALMIP_
YALMIP is a MATLAB[®] toolbox developed to model and
solve optimization problems [49], and it has the ability to
derive RO problems [53]. The strategies in [53] focus on cases
where exact robust counterparts can be derived which rules out
problems according to the model in (13). However, theory has
recently been developed and added to the YALMIP toolbox to
support problems according to (13), i.e., uncertainty structures
involving arbitrary intersections of conic-representable sets.
These additions[4] are described in the forthcoming [54].
The key feature for us is realized by a function called
uncertain(), which enables the uncertainty imposed by
R^0 ∈ A to be handled. Next, we illustrate conservative
estimation using RO in YALMIP with an example. Consider
⁴It should be noted that these additions are already implemented in YALMIP
but the documentation describing them is currently unpublished.
the task of computing a conservative estimate x̂ = Ky and P
where tr(P) is minimized. Let y = [y1; y2] = [H1; H2] x^0 + v,
where x^0 ∈ R^2, H1 = H2 = [1 0; 0 1], R1^0 = [1 0; 0 4] and
R2^0 = [4 0; 0 1]. Assume R12^0 is completely unknown and
that R^0 ⪰ 0.

The problem is translated into MATLAB® syntax using
YALMIP in Listing 1. YALMIP functions are highlighted in
orange. The result is K = [0.8 0 0.2 0; 0 0.2 0 0.8] and
P = 1.6 [1 0; 0 1].
Listing 1: A simple YALMIP example.
H = [eye(2) ; eye(2)];
R1 = diag([1 4]);
R2 = diag([4 1]);
K = sdpvar(2,4); % Declare SDP variable
P = sdpvar(2);
R12 = sdpvar(2,2,’full’);
R = [R1 R12 ; R12’ R2];
F = [uncertain(R12),
K*H == eye(2),
[P K*R ; R*K’ R] >= 0,
R >= 0]; % Constraints
J = trace(P);
optimize(F, J) % Solve problem
In this example YALMIP finds a best CLUE. However, in
general the solution is approximative and the only guarantee
is that the solution is a CLUE.
VI. SPECIAL CASES OF CONSERVATIVE LINEAR
UNBIASED ESTIMATION
In this section it is shown that several existing conservative
estimation methods are best CLUE under different assumptions
on A. Common for all methods is that the diagonal
blocks of R^0 are known while the off-diagonal blocks, e.g.,
R12^0, are unknown. What differs between the methods is the
assumptions on the off-diagonal blocks. For instance, it could
be that we have some extra knowledge that R12^0 is diagonal
or that the eigenvalues of R12^0 (R12^0)^T are smaller than a > 0.
Benefits of exploiting any extra structure on the otherwise
unknown cross-correlations are illustrated using an example.
Assume R^0 = [4 c; c 1] where c is unknown. If it is only known
that R^0 ⪰ 0 then c ∈ [−2, 2], and hence R^0 could be represented
by any ellipse enclosed in the rectangle of Fig. 4(a).
A minimal bound on A is in this case given by C. If on the
other hand it is known that c ∈ [−1/2, 1/2], then it is possible
to find an even smaller minimal bound C′, see Fig. 4(b). In
Fig. 4 we have also illustrated A = {[4 c; c 1] | c ∈ [−2, 2]} and
A′ = {[4 c; c 1] | c ∈ [−1/2, 1/2]} ⊂ A.

Fig. 4. Illustration of the benefits of utilizing any extra structure on
the unknown parts of R^0. (a) C is a minimal bound on A. (b) A
smaller minimal bound C′ ≺ C can be found for A′ ⊂ A.
Below we consider CI, ICI and LE. Recalling the assumptions
made in Sec. II-B, with N denoting the number of
estimates to be merged, it is now assumed that (yi, Ri^0) are
available, where i = 1, ..., N, and that Rij^0, with i ≠ j,
are unknown. It should be emphasized that CI, ICI and LE
give different solutions to a problem since they are related
to different assumptions on A. Among the described methods
LE makes the most restrictive assumptions on A while CI
makes the least restrictive assumptions on A. Hence LE in
general is less conservative than ICI, while ICI in general is
less conservative than CI.
_A. Covariance Intersection_
CI was originally proposed in [7] for the fusion of two
correlated estimates. CI is based on completely unknown
cross-correlations, for which the only condition on the cross-correlation
is that R^0 ≻ 0, and therefore

    A = { [R1 R12; R12^T R2] ∈ S^m_{++} | R1 = R1^0, R2 = R2^0 }.   (15)
Let y = [y1^T ... yN^T]^T, H = [H1^T ... HN^T]^T, and
C = diag(R1^0/ω1, ..., RN^0/ωN). CI is given by⁵

    P^{−1} = H^T C^{−1} H = Σ_{i=1}^{N} ωi Hi^T (Ri^0)^{−1} Hi,   (16a)
    P^{−1} x̂ = H^T C^{−1} y = Σ_{i=1}^{N} ωi Hi^T (Ri^0)^{−1} yi,  (16b)

where ωi ∈ [0, 1] and Σ_{i=1}^{N} ωi = 1. The free parameters
ω1, ..., ωN are found by minimizing J(P). The CI gain K is
given by

    K = P [ω1 H1^T (R1^0)^{−1}  ...  ωN HN^T (RN^0)^{−1}].        (17)
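A minimal MATLAB sketch of CI for N = 2 and H1 = H2 = I follows (16); the function name and the use of fminbnd for the weight search are our illustrative choices, not from the paper's code:

function [xhat, P] = ci_fuse(y1, R1, y2, R2)
% CI fusion of two estimates with unknown cross-correlation, cf. (16).
f = @(w) trace(inv(w*inv(R1) + (1-w)*inv(R2)));
w = fminbnd(f, 0, 1);                   % weight minimizing J(P) = tr(P)
P = inv(w*inv(R1) + (1-w)*inv(R2));
xhat = P*(w*(R1\y1) + (1-w)*(R2\y2));   % information-form combination
end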
Concerning optimality of CI, let N = 2 and H1 = H2 = I.
In this case we only have one free parameter ω since it is
possible to define ω1 = ω and ω2 = 1 − _ω. Let the optimal_
value of ω given J(P ) be denoted by ω[⋆]. Further, let an
arbitrary CLUE be given by (K _[′], P_ _[′]). In [29] it is shown that_
if ω[⋆] is obtained by minimizing J(P ) w.r.t. ω with P given
by (16a), then for (K _[⋆], P_ _[⋆]) given by_
    K⋆ = [ω⋆ P⋆ (R1^0)^{−1}  (1−ω⋆) P⋆ (R2^0)^{−1}],              (18a)
    P⋆ = (ω⋆ (R1^0)^{−1} + (1−ω⋆) (R2^0)^{−1})^{−1},              (18b)
it holds that
_P_ _[′]_ _⪯_ _P_ _[⋆]_ =⇒ _P_ _[′]_ = P _[⋆]._
This means P _[⋆]_ is a minimal element of the feasible set. Hence,
(K _[⋆], P_ _[⋆]) according to (18) constitute a best CLUE, provided_
that N = 2 and that the cross-correlations are completely
unknown.
_B. Inverse Covariance Intersection_
ICI is derived in [12] for the case where N = 2 and H1 =
_H2 = I. ICI is less conservative than CI since it utilizes a_
certain structure on R12[0] [called][ common information][ [12].]
We introduce Γ^{−1} ∈ S^n_{++} to denote common information
included in both (R1^0)^{−1} and (R2^0)^{−1}, and γ̂ to denote the
⁵This notation is also known as the information form.
corresponding estimate for which Γ = cov(ˆγ). The common
information structure is then defined as
    (Ri^0)^{−1} = (Ri^e)^{−1} + Γ^{−1},                           (19a)
    (Ri^0)^{−1} yi = (Ri^e)^{−1} yi^e + Γ^{−1} γ̂,                 (19b)

for i = 1, 2, where (Ri^e)^{−1} and yi^e are the exclusive information
and the exclusive estimate of the ith estimate, respectively.
The resulting cross-covariance becomes [12]

    R12^0 = R1^0 Γ^{−1} R2^0.                                     (20)
An implication of (19a) is that (R1^0)^{−1}, (R2^0)^{−1} ⪰ Γ^{−1}. The
set A is now given by

    A = { [R1 R12; R12^T R2] ∈ S^m_{++} | R1 = R1^0, R2 = R2^0,
          R12 = R1 Γ^{−1} R2, R1^{−1}, R2^{−1} ⪰ Γ^{−1} }.        (21)
An estimate is computed using ICI according to

    P^{−1} = (R1^0)^{−1} + (R2^0)^{−1} − (ω R1^0 + (1−ω) R2^0)^{−1},          (22a)
    P^{−1} x̂ = ((R1^0)^{−1} − ω (ω R1^0 + (1−ω) R2^0)^{−1}) y1
             + ((R2^0)^{−1} − (1−ω) (ω R1^0 + (1−ω) R2^0)^{−1}) y2,           (22b)

where ω ∈ [0, 1] is found by minimizing J(P) [12]. The ICI
gain is given by K = [K1 K2], where

    K1 = P ((R1^0)^{−1} − ω (ω R1^0 + (1−ω) R2^0)^{−1}),                      (23a)
    K2 = P ((R2^0)^{−1} − (1−ω) (ω R1^0 + (1−ω) R2^0)^{−1}),                  (23b)

with P according to (22a).
Given that the common information structure holds, it is
shown in [12] that if (K _[′], P_ _[′]) is any arbitrary CLUE and P_ _[⋆]_
is computed according to (22a), then
_P_ _[′]_ _⪯_ _P_ _[⋆]_ =⇒ _P_ _[′]_ = P _[⋆]._
Hence, ICI is a best CLUE under common information.
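A corresponding MATLAB sketch of ICI for N = 2 and H1 = H2 = I, directly transcribing (22)-(23) (all names are ours):

function [xhat, P] = ici_fuse(y1, R1, y2, R2)
% ICI fusion under the common information structure, cf. (22)-(23).
Pinv = @(w) inv(R1) + inv(R2) - inv(w*R1 + (1-w)*R2);   % (22a)
w = fminbnd(@(w) trace(inv(Pinv(w))), 0, 1);            % minimize tr(P)
P = inv(Pinv(w));
K1 = P*(inv(R1) - w*inv(w*R1 + (1-w)*R2));              % (23a)
K2 = P*(inv(R2) - (1-w)*inv(w*R1 + (1-w)*R2));          % (23b)
xhat = K1*y1 + K2*y2;
end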
_C. Largest Ellipsoid Method_
Following its first appearance in [9] the LE method has been
derived from multiple principles and therefore has been given
multiple names: in [55] it is called safe fusion, the authors of
[10, 56] suggest the name ellipsoidal intersection, and in [57]
it is named internal ellipsoid approximation. It should be noted
that there are minor differences between how the estimate is
calculated. In this work we use the algorithm proposed in [55].
In the derivations of LE no explicit assumptions on A are
made. Below we propose componentwise aligned correlations,
which is an assumed structure on R12^0 that is satisfied if there
exists a joint transformation TJ = diag(T, T) such that

    D = TJ R^0 TJ^T = [I  D12; D12  D2],                          (24)

where D2 and D12 are diagonal. The LE method is outlined
in Algorithm 1. The condition in (24) is equivalent to

    A = { [R1 R12; R12^T R2] ∈ S^m_{++} | R1 = R1^0, R2 = R2^0,
          ∃T: T R1 T^T = I ∧ [T R2 T^T]_{ij} = 0, i ≠ j
          ∧ [T R12 T^T]_{ij} = 0, i ≠ j }.                        (25)

Algorithm 1 Largest Ellipsoid Method
Input: (y1, R1^0) and (y2, R2^0)
1: Factorize R1^0 = U1 D1 U1^T and let T1 = D1^{−1/2} U1^T. Factorize
   T1 R2^0 T1^T = U2 D2 U2^T and let T2 = U2^T.
2: Transform using T = T2 T1 according to
   y1′ = T y1,  D1′ = T R1^0 T^T = I,
   y2′ = T y2,  D2′ = T R2^0 T^T.
3: For each i = 1, ..., n of x̂′ and the diagonal P′, compute
   ([x̂′]i, [P′]ii) = ([y1′]i, 1)           if 1 ≤ [D2′]ii,
   ([x̂′]i, [P′]ii) = ([y2′]i, [D2′]ii)     if 1 > [D2′]ii.
Output: T^{−1} x̂′, T^{−1} P′ T^{−T}
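A direct MATLAB transcription of Algorithm 1 follows (a sketch; function and variable names are ours):

function [xhat, P] = le_fuse(y1, R1, y2, R2)
% Largest ellipsoid (safe fusion) method of Algorithm 1.
[U1, D1] = eig(R1); T1 = diag(1./sqrt(diag(D1)))*U1';  % T1*R1*T1' = I
[U2, ~] = eig(T1*R2*T1'); T2 = U2';
T = T2*T1;
y1t = T*y1; y2t = T*y2;
d2 = diag(T*R2*T');                   % diagonal of D2'
n = numel(y1t); xt = zeros(n,1); pt = zeros(n,1);
for i = 1:n                           % componentwise selection (step 3)
    if 1 <= d2(i), xt(i) = y1t(i); pt(i) = 1;
    else,          xt(i) = y2t(i); pt(i) = d2(i);
    end
end
xhat = T\xt;                          % T^{-1} x'
P = T\diag(pt)/(T');                  % T^{-1} P' T^{-T}
end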
Consider the quantities of Algorithm 1. The resulting gain
of the LE method is given by
_K =_ �K1 _K2�_ = T _[−][1][ �]K1[′]_ _K2[′]_ � _T,_ (26)
where K1[′] [and][ K]2[′] [are the gains in the transformed domain, i.e.,]
after transformation using T . The matrix K1[′] [is diagonal where]
[K1[′] []][ii] [= 1][ if][ [][D]2[′] []][ii] _[≥]_ [1][ and otherwise zero, and][ K]2[′] [=][ I][ −][K]1[′]
[14].
**Theorem 4 (Largest Ellipsoid Method—Optimal). If in (24)**
_D12 = TR12T_ [T] _is diagonal for T as given in Algorithm 1,_
_then the LE method of Algorithm 1 is a best CLUE._
Proof. By assumption TJ R TJ^T = [I  D12; D12  D2], where D2 and
D12 are diagonal. The ith component of I is only correlated
with the ith component of D2. Hence, we only need to
consider pairwise correlated scalars. It is then possible to use
CI for the merging of scalars correlated to an unknown degree.
If P _[′]_ is the covariance in the transformed domain, then
[P _[′]]ii = ω[I]ii + (1 −_ _ω)[D2]ii = ω + (1 −_ _ω)[D2]ii,_
which, as a property of CI, is conservative for all ω ∈
[0, 1]. Minimizing [P′]ii w.r.t. ω is equivalent to [P′]ii =
min(1, [D2]ii), which in particular is the LE solution. In
[14, Theorem 4.7] it is shown that LE is a linear unbiased
estimator. Hence, LE is a best CLUE under componentwise
aligned correlations.
VII. THEORY EVALUATION
In this section five estimation examples are solved. The
covariances P^CI, P^ICI and P^LE corresponding to CI, ICI
and LE, respectively, the lower bound Pl, and the upper
bound Pu are computed wherever applicable. Each example
is also solved using the previously proposed RO approach,
where the resulting covariance is denoted by P^RO. YALMIP
implementations in MATLAB® for the RO parts are available
from https://gitlab.com/robinforsling/clue.

Fig. 5. Summary of E2, where P^CI ≻ P^ICI ≻ P^LE. If it is possible to
exploit more structure in the problem, then it is possible to compute
a CLUE having a smaller covariance. The hashed areas in the left
part illustrate (H^T R^{−1} H)^{−1}, ∀R ∈ A for the different A.

In all examples except the last one it is assumed that
Hi = I ∈ R^{2×2} for i = 1, ..., N and H = [H1^T ... HN^T]^T. The loss function
J(P) = tr(P) is chosen, which means that the estimator
variance is minimized. Example 1 is denoted by E1 and the
remaining examples are identified analogously.

A. E1: A Is A Finite Set

Assume that N = 2 and R^0 = [R1^0 R12^0; (R12^0)^T R2^0], where
R1^0 = [1 0; 0 4] and R2^0 = [4 0; 0 1], and where R12^0 ∈ R^{2×2}
is either I or −I. Then A = {Q, S} ⊂ S^4_{++}, where
Q = [R1^0 I; I R2^0] and S = [R1^0 −I; −I R2^0].

The BLUE for R^0 = Q is given by

    KQ = (H^T Q^{−1} H)^{−1} H^T Q^{−1} = [1 0 0 0; 0 0 0 1],     (27)

which yields KQ Q KQ^T = KQ S KQ^T = I. From K Q K^T ⪰
KQ Q KQ^T, ∀K subject to KH = I, and P⋆ ⪰ KQ S KQ^T it
follows that a best CLUE in this case is given by K⋆ = KQ
and P⋆ = I.

Since (H^T S^{−1} H)^{−1} = 0.43I ≺ (H^T Q^{−1} H)^{−1}
we have Pl = I. Using a minimal bound C =
diag(R1^0 + I, R2^0 + I) = diag(2, 5, 5, 2) a strictly upper
bound Pu = 1.43I can be computed. RO yields P^RO = I,
which is guaranteed to be optimal as A is finite.

B. E2: A Is An Infinite Set

Assume that N = 2 and R^0 = [R1^0 R12^0; (R12^0)^T R2^0], with R1^0 and
R2^0 defined as in E1. Assume that R12^0 is unknown and that
A now is an infinite set. A best CLUE depends on A. We
will solve this problem for three different assumptions on A,
namely: a) completely unknown cross-correlations, b) common
information, and c) componentwise aligned correlations.
Since R1^0 and R2^0 are fixed, CI, ICI and LE yield P^CI = 1.60I,
P^ICI = 1.18I and P^LE = I, respectively, in all subcases below.
E2 is summarized in Fig. 5.

1) E2a: In this case A is given by (15). We first look at
two elements, Q = [R1^0 G; G R2^0] ∈ A and S = [R1^0 −G; −G R2^0] ∈ A,
where G = [ 3.9990 0.9990 ]. Solving (8), but replacing A by A′ =
{S, Q}, yields approximately 1.60I. Since A′ ⊂ A we then
know that Pl ⪰ 1.60I.

The matrix C = 2 diag(R1^0, R2^0) satisfies C ⪰ R, ∀R ∈
A. This is true since C − R^0 = [R1^0 −R12^0; −R21^0 R2^0] ≻ 0 as a
consequence of the assumption R^0 ≻ 0. Using (12b) yields
Pu = 1.60I, which is equivalent to the solution of (8). Using
Theorem 2 we can hence conclude that a best CLUE is given
by (12) with C = 2 diag(R1^0, R2^0).

2) E2b: In this case A is given by (21). Solving the problem
using YALMIP yields P^RO = 1.18I, which is equivalent to
the best CLUE solution P⋆ = P^ICI = 1.18I computed using
ICI.

3) E2c: In this case A is given by (25). KQ according to
(27) yields KQ R KQ^T = I, ∀R ∈ A. Hence, K⋆ = KQ and
P⋆ = I constitute a best CLUE, where P⋆ = P^LE since LE is
a best CLUE in case of componentwise aligned correlations.

The matrix Rl = [R1^0 I; I R2^0] does not satisfy Rl ⪰ R, ∀R ∈
A; e.g., for B = diag(R1^0, R2^0) ∈ A the difference Rl − B is
indefinite. However, the matrix C = 2 diag(R1^0, R2^0) satisfies
C ⪰ R, ∀R ∈ A, and is also a minimal bound on A. This C
yields (H^T C^{−1} H)^{−1} = 1.60I ≻ P⋆.

C. E3: Completely Unknown Cross-Correlations, N = 3

Let N = 3 and assume that

    R1^0 = [16 0; 0 1],  R2^0 = [4.75 6.50; 6.50 12.25],
    R3^0 = [4.75 −6.50; −6.50 12.25],

with R2^0 and R3^0 being generated from rotation of R1^0 by 60°
and −60°, respectively. Assume that the off-diagonal blocks
of R^0 are completely unknown.

CI yields P^CI = 1.88I. In this case YALMIP gives us
P^RO = 1.76I ≺ P^CI. Hence, CI is not a best CLUE under
completely unknown cross-correlations if N > 2. We have
also computed Pl = 1.31I as the smallest ellipse which
contains the intersection of the ellipses of R1^0, R2^0 and R3^0,
but we cannot draw any conclusions about whether this Pl is
a strictly lower bound on a best CLUE or not.

Fig. 6. Summary of E3, where P^CI ≻ P^RO ≻ Pl, and hence CI is
not a best CLUE when N > 2. The gray ellipses in the intersection
represent (H^T R^{−1} H)^{−1} for different R ∈ A.
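The rotation claim in E3 is easy to verify with a MATLAB one-liner sketch (cosd and sind are built-ins; Grot is our name):

Grot = @(a) [cosd(a) -sind(a); sind(a) cosd(a)];
Grot(60)*diag([16 1])*Grot(60)'    % ≈ [4.75 6.50; 6.50 12.25] = R2^0
Grot(-60)*diag([16 1])*Grot(-60)'  % ≈ R3^0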
Fig. 7. Summary of E4, where Pl is a strictly lower bound on P _[⋆]._
Gray ellipses are given by (H [T]R[−][1]H)[−][1] where R ∈A.
_D. E4: Lower Bound Is Strict_
In this example it is shown that P⋆ ≠ Pl, P⋆ ⪰ Pl. Assume
that N = 2 and R^0 = [R1^0 R12^0; (R12^0)^T R2^0], where R12^0 ∈ R^{2×2}
can either be that of Q or that of S, which corresponds to
A = {Q, S}.
Using RO in YALMIP we compute Pl = [0.45 0.40; 0.40 0.93] and
P^RO = P⋆ = [0.56 0.40; 0.40 0.95]. The results are visualized in
Fig. 7, where also K⋆Q(K⋆)^T and K⋆S(K⋆)^T are plotted,
with K⋆ being the best CLUE gain. The reason for having
P⋆ ≠ Pl, P⋆ ⪰ Pl is that no K exists such that

    P ⪰ KQK^T ∧ P ⪰ KSK^T
    ∧ P′ ⪰ (H^T Q^{−1} H)^{−1} ∧ P′ ⪰ (H^T S^{−1} H)^{−1}

and J(P) = J(P′) hold simultaneously.
E. E5: Eigenvalue Constrained R12^0

In the final example we have N = 2 and assume that the
eigenvalues of R12^0 are constrained. Let x^0 ∈ R^2 and

    H1 = [1/√2  1/√2],                   R1^0 = 1,
    H2 = [1 0; 0 1; 1/√2 −1/√2],         R2^0 = [4 0 0; 0 4 0; 0 0 4].

In this case R12^0 ∈ R^{1×3} and is assumed to be constrained
as R12^0 (R12^0)^T ≤ ρ², or equivalently ‖R12^0‖ ≤ ρ, where ρ ≥ 0.
To have R^0 ⪰ 0 we require ρ ≤ 2 such that ρ ∈ [0, 2]. We
now vary ρ ∈ [0, 2] and compute P^RO. Also P^CI and P′ are
computed, where

    P′ = (H1^T (R1^0)^{−1} H1 + H2^T (R2^0)^{−1} H2)^{−1},

such that P′ is equivalent to the covariance of the BLUE given
‖R12^0‖ = 0. In Fig. 8 the traces of P^RO, P^CI and P′ are plotted.
As ρ increases from 0 to 2, tr(P^RO) increases from tr(P′) to
tr(P^CI). Note that in this example tr(P^RO) is almost linear in ρ,
but this is generally not the case.
Fig. 8. Results for E5, where ‖R12^0‖ ≤ ρ. The green solid line
represents tr(P^RO).
_F. Discussion_
The results for E1-E4 are summarized in Table I, where
E5 has been excluded since the results of E5 are given in a
different format, see Fig. 8. We see that each of CI, ICI and
LE yields the same answer for E1 and E2 since R1^0 and R2^0
are fixed throughout these cases. In E1 and E2 the YALMIP
solution is equivalent to a best CLUE, and we further see the
benefits of utilizing any extra structure encoded in A.
E3 is a counterexample of CI being a best CLUE under
completely unknown cross-correlations when N > 2. We do
not know if P [RO] is equivalent to P _[⋆]_ since P _[⋆]_ _≻_ _Pl is possible_
even if P [RO] = P _[⋆], cf. Theorem 1. The upper bound is strict_
in this case.
In E5 neither H1 nor H2 is the identity or even a square
matrix, and the eigenvalues of R12^0 (R12^0)^T are constrained to be
smaller than ρ². As ρ is varied from zero to its maximum
value, tr(P [RO]) increases from that of the BLUE given R12 = 0
to that of CI. This result is quite specific but nevertheless
verifies the generality of the RO methodology.
The examples also demonstrate the generality of the CLUE
framework and in particular the usability of A: (i) it can be
used to select estimation method, e.g., ICI if (21) holds, (ii) it
is the basis for deriving and solving general problems using
robust optimization, (iii) and it is used to compute lower and
upper bounds on a best CLUE.
VIII. CONCLUSIONS
A framework for conservative linear unbiased estimation
was proposed. The backbone of the framework is Definition 2
where a best conservative linear unbiased estimator (best
CLUE) is defined. Lower and upper bounds of a best CLUE
were derived.
TABLE I
SUMMARY OF EXAMPLES

|     | Pu    | Pl                      | P^CI  | P^ICI | P^LE | P^RO                    | P⋆                      |
|-----|-------|-------------------------|-------|-------|------|-------------------------|-------------------------|
| E1  | 1.43I | I                       | 1.60I | 1.18I | I    | I                       | I                       |
| E2a | 1.60I | 1.60I                   | 1.60I | 1.18I | I    | 1.60I                   | 1.60I                   |
| E2b | -     | -                       | 1.60I | 1.18I | I    | 1.18I                   | 1.18I                   |
| E2c | 1.60I | I                       | 1.60I | 1.18I | I    | I                       | I                       |
| E3  | 1.88I | 1.31I                   | 1.88I | -     | -    | 1.76I                   | -                       |
| E4  | -     | [0.45 0.40; 0.40 0.93]  | -     | -     | -    | [0.56 0.40; 0.40 0.95]  | [0.56 0.40; 0.40 0.95]  |

black = CLUE, not best CLUE; green = best CLUE; cyan = lower
bound; red = not CLUE; yellow = CLUE, might be best CLUE.
Quantities not computed are marked "-".
Fig. 9. A summary of the main contributions (green boxes) and
suggested future directions to take (orange dashed boxes). Current
progress (gray hashed boxes) has been included for clarity.
The strength of the proposed framework was further demonstrated as best CLUEs were found in more general settings
with robust optimization (RO). Using an example we have
illustrated that the RO based approach has the potential to
perform better than CI if N > 2. Moreover, it was shown
that three existing conservative linear estimation methods in
fact are a best CLUE under different assumptions about the
cross-correlations.
This paper suggests two main directions to take for future
work. Special cases of a best CLUE: New methods can be
derived and connected to a best CLUE by exploiting structures
in the set A. Conservative estimation using RO: Synthesizing
new theory on RO, particularly in robust semidefinite programming
(SDP), means it is possible to prove tractability
and optimality for even more general cases than those already
stated, and to describe properties of the solution from the RO.
We summarize the contributions of this work and suggested
future work in Fig. 9.
APPENDIX A
MATRIX RELATIONS
Let V ⊆ S^n_+ and A, B ∈ S^n_+. The inequalities ⪰ and ≻ are
defined as

    A ⪰ B ⟺ A − B ⪰ 0 ⟺ (A − B) ∈ S^n_+,
    A ≻ B ⟺ A − B ≻ 0 ⟺ (A − B) ∈ S^n_{++}.
A function J : R^{n×n} → R is matrix nondecreasing if

    A ⪯ B ⟹ J(A) ≤ J(B),                                   (28)

and matrix increasing if

    A ⪯ B, A ≠ B ⟹ J(A) < J(B).                            (29)
The function tr(WA) is matrix nondecreasing if W ∈ S^n_+ and
matrix increasing if W ∈ S^n_{++}. In particular tr(A) is matrix
increasing on S^n_+. The function det(A) is matrix increasing
on S^n_+ [40].
An element A ∈ V is the minimum element of V if

    B ⪰ A,  ∀B ∈ V.                                        (30)

An element A ∈ V is a minimal element of V if B ∈ V and

    B ⪯ A ⟹ B = A.                                         (31)
Minimal elements of V ⊆ S^n_{++} are given by

    minimize_{B ∈ V}  tr(WB),                              (32)

where W ∈ S^n_{++}. If all W ∈ S^n_{++} yield the same unique
solution A, then A is a minimum element of V. If two or
more minimal elements of a set V exist, then V does not have
a minimum element [40].
Assuming J is matrix increasing, the solution to

    minimize_{B ∈ V}  J(B),                                (33)

yields a minimal element of V. This can be shown by contradiction.
Assume that A solves (33) but that A is not a minimal
element of V. The latter assumption implies that there exists
another element A′ ∈ V for which A′ ⪯ A, A′ ≠ A. It follows
from (29) that J(A′) < J(A), which leads to a contradiction
since A then cannot be a solution to (33). Hence, the solution
to (33) is a minimal element of V.
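The procedure in (32) can be run with YALMIP. Below is a sketch with a toy feasible set V = {B | B ⪰ A1, B ⪰ A2}; the set and all names are our illustrative assumptions:

A1 = [2 1; 1 1]; A2 = [1 -1; -1 2];
B = sdpvar(2);
F = [B >= A1, B >= A2];
optimize(F, trace(B));                % W = I in (32)
B1 = value(B);
optimize(F, trace(diag([10 1])*B));   % a different W in S_{++}
B2 = value(B);
% If B1 and B2 differ, V has minimal elements but no minimum element [40].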
REFERENCES
[1] S. M. Kay, Fundamentals of Statistical Signal Processing: Estimation
_Theory._ Upper Saddle River, NJ, USA: Prentice Hall, 1993.
[2] D. Hall, C.-Y. Chong, J. Llinas, and M. Liggins, Distributed Data Fusion
_for Network-Centric Operations._ Boca Raton, FL, USA: CRC Press,
2012.
[3] L. J. Gleser, Estimation For A Regression Model With An Unknown
_Covariance Matrix._ University of California Press, 1972, pp. 541–568.
[4] J. Nygårds, V. Deleskog, and G. Hendeby, “Safe fusion compared to
established distributed fusion methods,” in Proceedings of the IEEE In_ternational Conference on Multisensor Fusion, Baden-Baden, Germany,_
Sep. 2016.
[5] ——, “Decentralized tracking in sensor networks with varying coverage,” in Proceedings of the 21st IEEE International Conference on
_Information Fusion, Cambridge, UK, Jul. 2018._
[6] M. A. Bakr and S. Lee, “Distributed multisensor data fusion under
unknown correlation and data inconsistency,” Sensors, vol. 17, no. 11,
p. 2472, Oct. 2017.
[7] S. J. Julier and J. K. Uhlmann, “A non-divergent estimation algorithm
in the presence of unknown correlations,” in Proceedings of the 1997
_American Control Conference, Albuquerque, NM, USA, Jun. 1997, pp._
2369–2373.
[8] J. K. Uhlmann, “Covariance consistency methods for fault-tolerant
distributed data fusion,” Information Fusion, vol. 4, no. 3, pp. 201–215,
Sep. 2003.
[9] A. R. Benaskeur, “Consistent fusion of correlated data sources,” in
_Proceedings of the 28th Annual Conference of the IEEE Industrial_
_Electronics Society, Sevilla, Spain, Nov. 2002, pp. 2652–2656._
[10] J. Sijs, M. Lazar, and P. P. J. v. d. Bosch, “State fusion with unknown correlation: Ellipsoidal intersection,” in Proceedings of the 2010 American
_Control Conference, Baltimore, MD, USA, Jun. 2010, pp. 3992–3997._
[11] B. Noack, J. Sijs, and U. D. Hanebeck, “Algebraic analysis of data fusion
with ellipsoidal intersection,” in Proceedings of the IEEE International
_Conference on Multisensor Fusion, Baden-Baden, Germany, Sep. 2016,_
pp. 365–370.
[12] B. Noack, J. Sijs, M. Reinhardt, and U. D. Hanebeck, “Decentralized
data fusion with inverse covariance intersection,” Automatica, vol. 79,
pp. 35–41, May 2017.
[13] S. S. Blackman and R. Popoli, Design and Analysis of Modern Tracking
_Systems._ Norwood, MA, USA: Artech House, 1999.
[14] R. Forsling, “Decentralized estimation using conservative information
extraction,” Licentiate Thesis, Linköping University, Linköping, Sweden, Dec. 2020.
[15] T. Kailath, A. H. Sayed, and B. Hassibi, Linear Estimation. Upper
Saddle River, NJ, USA: Prentice Hall, 2000.
[16] Y. Bar-Shalom and L. Campo, “The effect of the common process
noise on the two-sensor fused-track covariance,” IEEE Transactions on
_Aerospace and Electronic Systems, vol. 22, no. 6, pp. 803–805, Nov._
1986.
[17] L. Chen, P. Arambel, and R. Mehra, “Estimation under unknown
correlation: Covariance intersection revisited,” IEEE Transactions on
Automatic Control, vol. 47, pp. 1879–1882, Nov. 2002.
[18] X. Tian and Y. Bar-Shalom, “Exact algorithms for four track-to-track
fusion configurations: All you wanted to know but were afraid to ask,” in
_Proceedings of the 12th IEEE International Conference on Information_
_Fusion, Seattle, WA, USA, Jul. 2009._
[19] S. H. Grime and H. F. Durrant-Whyte, “Data fusion in decentralized
sensor networks,” Control Engineering Practice, vol. 2, no. 5, pp. 849
– 863, Oct. 1994.
[20] R. Olfati-Saber and R. Murray, “Consensus problems in networks of
agents with switching topology and time-delays,” IEEE Transactions on
_Automatic Control, vol. 49, no. 9, pp. 1520–1533, Sep. 2004._
[21] R. Olfati-Saber, “Distributed Kalman filter with embedded consensus
filters,” in Proceedings of the 44th IEEE Conference Decision and
_Control, Sevilla, Spain, Dec. 2005._
[22] L. Xiao, S. Boyd, and S. Lall, “A scheme for robust distributed
sensor fusion based on average consensus,” in Proceedings of the 4th
_International Symposium on Information Processing in Sensor Networks_
_(IPSN), Apr. 2005, pp. 63–70._
[23] G. Battistelli, L. Chisci, G. Mugnai, A. Farina, and A. Graziano,
“Consensus-based linear and nonlinear filtering,” IEEE Transactions on
_Automatic Control, vol. 60, no. 5, pp. 1410–1415, Sep. 2015._
[24] U. A. Khan and J. M. F. Moura, “Distributing the Kalman filter for
large-scale systems,” IEEE Transactions on Signal Processing, vol. 56,
no. 10, pp. 4919–4935, Oct. 2008.
[25] F. Govaers and W. Koch, “Distributed Kalman filter fusion at arbitrary
instants of time,” in Proceedings of the 13th IEEE International Con_ference on Information Fusion, Edinburgh, Scotland, Jul. 2010, pp. 1–8._
[26] M. Reinhardt, B. Noack, and U. D. Hanebeck, “The hypothesizing
distributed Kalman filter,” in Proceedings of the IEEE International
_Conference on Multisensor Fusion, Hamburg, Germany, Sep. 2012._
[27] S. J. Julier and J. K. Uhlmann, “General decentralized data fusion
with covariance intersection,” in Handbook of Multisensor Data Fusion:
_Theory and Practice, M. Liggins, D. Hall, and J. Llinas, Eds._ Boca
Raton, FL, USA: CRC Press, 2009, ch. 14.
[28] L. Chen, P. Arambel, and R. Mehra, “Fusion under unknown correlation:
Covariance intersection as a special case,” in Proceedings of the 5th
_IEEE International Conference on Information Fusion, Annapolis, MD,_
USA, Jul. 2002.
[29] M. Reinhardt, B. Noack, P. O. Arambel, and U. D. Hanebeck, “Minimum
covariance bounds for the fusion under unknown correlations,” IEEE
_Signal Processing Letters, vol. 22, no. 9, pp. 1210–1214, Sep. 2015._
[30] B. Noack, J. Sijs, and U. D. Hanebeck, “Inverse covariance intersection:
New insights and properties,” in Proceedings of the 20th IEEE Interna_tional Conference on Information Fusion, Xi’an, China, Jul. 2017._
[31] J. Ajgl and O. Straka, “Inverse covariance intersection fusion of multiple
estimates,” in Proceedings of the 23rd IEEE International Conference
_on Information Fusion, Virtual Conference, Jul. 2020._
[32] Z. Wu, Q. Cai, and M. Fu, “Covariance intersection for partially
correlated random vectors,” IEEE Transactions on Automatic Control,
vol. 63, no. 3, pp. 619–629, Mar. 2018.
[33] J. Ajgl and O. Straka, “Analysis of partial knowledge of correlations
in an estimation fusion problem,” in Proceedings of the 21st IEEE
_International Conference on Information Fusion, Cambridge, UK, Jul._
2018.
[34] ——, “Comparison of fusions under unknown and partially known
correlations,” IFAC-PapersOnLine, vol. 51, no. 23, pp. 295–300, 2018.
[35] Y. Gao, X. R. Li, and E. Song, “Robust linear estimation fusion with
allowable unknown cross-covariance,” IEEE Transactions on Systems,
_Man, and Cybernetics: Systems, vol. 46, no. 9, pp. 1314–1325, 2016._
[36] A. Sayed, “A framework for state-space estimation with uncertain
models,” IEEE Transactions on Automatic Control, vol. 46, no. 7, pp.
998–1013, 2001.
[37] E. Delage and Y. Ye, “Distributionally robust optimization under moment uncertainty with application to data-driven problems,” Operations
_Research, vol. 58, no. 3, pp. 595–612, 2010._
[38] S. Shafieezadeh-Abadeh, V. A. Nguyen, D. Kuhn, and P. M. Esfahani,
“Wasserstein distributionally robust Kalman filtering,” in NeurIPS, Sep.
2018, pp. 8483–8492.
[39] S. Wang and Z.-S. Ye, “Distributionally robust state estimation for linear
systems subject to uncertainty and outlier,” IEEE Transactions on Signal
_Processing, vol. 70, pp. 452–467, Dec. 2022._
[40] S. Boyd and L. Vandenberghe, Convex Optimization. New York, NY,
USA: Cambridge University Press, 2004.
[41] U. Orguner, “Approximate analytical solutions for the weight optimization problems of CI and ICI,” in 2017 Sensor Data Fusion: Trends,
_Solutions, Applications (SDF), 2017, pp. 1–6._
[42] J. H. Taylor, “The Cramér-Rao estimation error lower bound computation for deterministic nonlinear systems,” IEEE Transactions on
_Automatic Control, vol. 24, no. 2, pp. 343–344, April 1979._
[43] U. Hanebeck, K. Briechle, and J. Horn, “A tight bound for the joint
covariance of two random vectors with unknown but constrained crosscorrelation,” in Proceedings of the IEEE International Conference on
_Multisensor Fusion, Baden-Baden, Germany, Aug. 2001, pp. 85–90._
[44] R. Forsling, Z. Sjanic, F. Gustafsson, and G. Hendeby, “Consistent
distributed track fusion under communication constraints,” in Proceed_ings of the 22nd IEEE International Conference on Information Fusion,_
Ottawa, Canada, Jul. 2019.
[45] J. Ajgl and O. Straka, “Lower bounds in estimation fusion with partial
knowledge of correlations,” in Proceedings of the IEEE International
_Conference on Multisensor Fusion, Karlsruhe, Germany, Sep. 2021._
[46] A. Ben-Tal, L. E. Ghaoui, and A. Nemirovski, Robust Optimization.
Princeton, NJ, USA: Princeton University Press, 2009.
[47] G. O. Corrêa and A. Talavera, “Competitive robust estimation for uncertain linear dynamic models,” IEEE Transactions on Signal Processing,
vol. 65, no. 18, pp. 4847–4861, Sep. 2017.
[48] L. E. Ghaoui, F. Oustry, and H. Lebgret, “Robust solutions to uncertain
semidefinite programs,” SIAM J. on Optimization, vol. 9, no. 1, pp. 33–
52, May 1998.
[49] J. Löfberg, “YALMIP: A toolbox for modeling and optimization in
MATLAB,” in Proceedings of the CACSD Conference, vol. 3, Taipei,
Taiwan, Sep. 2004.
[50] A. Ben-Tal and A. Nemirovski, “Robust optimization - Methodology
and applications,” Math. Program., vol. 92, pp. 453–480, May 2002.
[51] G. H. Golub and C. F. van Loan, Matrix Computations, 4th ed.
Baltimore, MD, USA: The Johns Hopkins University Press, 2013.
[52] A. Ben-Tal and A. Nemirovski, “Robust convex optimization,” Mathe_matics of Operations Research, vol. 23, no. 4, pp. 769–805, Nov. 1998._
[53] J. Löfberg, “Automatic robust convex programming,” Optimization
_Methods and Software, vol. 27, no. 1, pp. 115–129, Feb. 2012._
[54] J. Löfberg, “Support for robust conic-conic optimization in YALMIP,”
unpublished.
[55] F. Gustafsson, Statistical Sensor Fusion. Lund, Sweden: Studentlitteratur, 2018.
[56] J. Sijs and M. Lazar, “Empirical case-studies of state fusion via
ellipsoidal intersection,” in Proceedings of the 17th IEEE International
_Conference on Information Fusion, Salamanca, Spain, Jul. 2014, pp._
1–8.
[57] Y. Zhou and J. Li, “Data fusion of unknown correlations using internal
ellipsoidal approximation,” in Proceedings of the 17th Triennial IFAC
_World Congress, Seoul, Korea, Jul. 2008, pp. 2856 – 2860._
**Robin Forsling received the M.Sc. degree in en-**
gineering physics in 2016 from Umeå University,
Umeå, Sweden, and the Lic.Eng. degree in automatic control in 2021 from Linköping University,
Linköping, Sweden. He is currently working toward
the Ph.D. degree with the Division of Automatic
Control, Linköping University.
His main research interest is decentralized estimation with a particular focus on conservative
estimation methods and communication reducing
techniques. Since 2016 he is employed at Saab
Aeronautics in Linköping, Sweden, where he has been working as a Systems
Engineer with target acquisition systems, decision support systems and sensor
fusion.
**Anders Hansson was born in Trelleborg, Sweden,**
in 1964. He received the Master of Science degree
in Electrical Engineering in 1989, the Degree of
Licentiate of Engineering in Automatic Control in
1991, and the PhD in Automatic Control in 1995,
all from Lund University, Lund, Sweden. During the
academic year 1992-1993 he spent six months at
Imperial College in London, UK. From 1995 until
1997 he was a postdoctoral student, and from 1997
until 1998 a research associate at the Information
Systems Lab, Department of Electrical Engineering,
Stanford University. In 1998 he was appointed assistant professor and in 2000
associate professor at S3-Automatic Control, Royal Institute of Technology,
Stockholm, Sweden. In 2001 he was appointed associate professor at the
Division of Automatic Control, Linköping University. From 2006 he is full
professor at the same department. Anders Hansson is a senior member of the
IEEE. During 2006-2007 he was an associate editor of the IEEE Transactions
on Automatic Control. He was a member of the EUCA council 2009–2015. Currently he is a member of the EUCA General Assembly and of
the Technical Committee on Systems with Uncertainty of the IEEE Control
Systems Society. His research interests are within the fields of optimal control,
stochastic control, linear systems, signal processing, applications of control
and telecommunications. He got the SAAB-Scania Research Award in 1992.
**Fredrik Gustafsson is professor in Sensor In-**
formatics at Department of Electrical Engineering,
Linköping University, since 2005. He received the
M.Sc. degree in electrical engineering 1988 and the
Ph.D. degree in Automatic Control, 1992, both from
Linköping University.
He was an associate editor for IEEE Transactions
of Signal Processing 2000-2006, IEEE Transactions
on Aerospace and Electronic Systems 2010-2012,
and EURASIP Journal on Applied Signal Processing
2007-2012. He was awarded the Arnberg prize by
the Royal Swedish Academy of Science (KVA) 2004, elected member of the
Royal Academy of Engineering Sciences (IVA) 2007, and elevated to IEEE
Fellow 2011. In 2014, he was awarded a Distinguished Professor grant from
the Swedish Research Council. He was an adjunct entrepreneurial professor
at Twente University 2012-2013.
He was awarded the Harry Rowe Mimno Award 2011 for the tutorial
"Particle Filter Theory and Practice with Positioning Applications", which
was published in the AESS Magazine in July 2010. He was a co-author
of "Smoothed state estimates under abrupt changes using sum-of-norms
regularization" that received the Automatica paper prize in 2014.
He is a co-founder of the companies NIRA Dynamics (automotive safety),
Softube (digital music production tools), and Senion (indoor navigation).
**Zoran Sjanic received the M.Sc. degree in Com-**
puter Science and Engineering in 2002 and the
Ph.D. degree in automatic control in 2013, both from
Linköping University, Linköping, Sweden.
He has been employed by Saab Aeronautics in
Linköping, Sweden, since 2001, where he works
in the Decision Support Department as a Principal
Systems Engineer and Technical Manager for the
Image processing and Analysis technical area. Since
2020 he is also Adjunct Associate Professor in
the division of Automatic Control, Department of
Electrical Engineering, Linköping University. His main research interests are
sensor fusion for navigation of manned and unmanned aircraft, radar systems,
simultaneous localisation and mapping, distributed estimation and nonlinear
estimation methods.
**Johan Löfberg received the M.Sc. degree in me-**
chanical engineering in 1998 and the Ph.D. degree
in automatic control in 2003, both from Linköping
University, Linköping, Sweden. After a postdoctoral
stay at ETH Zürich from 2003–2006, he now serves
as Associate Professor and Docent in the division
of Automatic Control, Department of Electrical Engineering, Linköping University. His main research
interest is aspects of optimization in control and
systems theory, with a particular interest in model
predictive control. Driven by applications in control,
he is also more generally interested in robust optimization and optimization
modelling. He is the author of the MATLAB toolbox YALMIP which has
become an important tool for researchers and engineers in many domains.
**Gustaf Hendeby (S’04-M’09-SM’17) received the**
M.Sc. degree in applied physics and electrical engineering in 2002 and the Ph.D. degree in automatic
control in 2008, both from Linköping University,
Linköping, Sweden.
He is Associate Professor and Docent in the division of Automatic Control, Department of Electrical
Engineering, Linköping University. He worked as
Senior Researcher at the German Research Center for Artificial Intelligence (DFKI) 2009–2011,
and Senior Scientist at Swedish Defense Research
Agency (FOI) and held an adjunct Associate Professor position at Linköping
University 2011–2015. His main research interests are stochastic signal
processing and sensor fusion with applications to nonlinear problems, target
tracking, and simultaneous localization and mapping (SLAM), and is the
author of several published articles and conference papers in the area. He has
experience of both theoretical analysis as well as implementation aspects.
Dr. Hendeby is since 2018 an Associate Editor for IEEE Transactions
on Aerospace and Electronic Systems in the area of target tracking and
multisensor systems. In 2022 he served as general chair for the 25th IEEE
International Conference on Information Fusion (FUSION) in Linköping,
Sweden.
Received January 29, 2021, accepted February 11, 2021, date of publication February 15, 2021, date of current version February 25, 2021.
_Digital Object Identifier 10.1109/ACCESS.2021.3059473_
# Channel Load Aware AP / Extender Selection in Home WiFi Networks Using IEEE 802.11k/v
TONI ADAME 1, MARC CARRASCOSA1, BORIS BELLALTA 1, IVÁN PRETEL2,
AND IÑAKI ETXEBARRIA[2]
1Department of Information and Communication Technologies, Universitat Pompeu Fabra, 08018 Barcelona, Spain
2FON Labs, 48009 Bilbao, Spain
Corresponding author: Toni Adame (toni.adame@upf.edu)
This work was supported in part by the Spanish government under Project CDTI IDI-20180274, Project WINDMAL
PGC2018-099959-B-100 (MCIU/AEI/FEDER,UE), and Project TEC2016-79510-P; and in part by the Catalan government under Project
SGR-2017-1188 and Project SGR-2017-1739.
**ABSTRACT Next-generation Home WiFi networks have to step forward in terms of performance. New**
applications such as on-line games, virtual reality or high quality video contents will further demand higher
throughput levels, as well as low latency. Beyond physical (PHY) and medium access control (MAC)
improvements, deploying multiple access points (APs) in a given area may significantly contribute to
achieve those performance goals by simply improving average coverage and data rates. However, it opens a
new challenge: to determine the best AP for each given station (STA). This article studies the achievable
performance gains of using secondary APs, also called Extenders, in Home WiFi networks in terms of
throughput and delay. To do that, we introduce a centralized, easily implementable channel load aware
selection mechanism for WiFi networks that takes full advantage of IEEE 802.11k/v capabilities to collect
data from STAs, and distribute association decisions accordingly. These decisions are completely computed
in the AP (or, alternatively, in an external network controller) based on an AP selection decision metric
that, in addition to RSSI, also takes into account the load of both access and backhaul wireless links for
each potential STA-AP/Extender connection. Performance evaluation of the proposed channel load aware
AP and Extender selection mechanism has been first conducted in a purpose-built simulator, resulting in an
overall improvement of the main analyzed metrics (throughput and delay) and the ability to serve, at least,
35% more traffic while keeping the network uncongested when compared to the traditional RSSI-based
WiFi association. This trend was confirmed when the channel load aware mechanism was tested in a real
deployment, where STAs were associated to the indicated AP/Extender and total throughput was increased
by 77.12%.
**INDEX TERMS Home WiFi, AP selection, extender, load balancing, IEEE 802.11k, IEEE 802.11v.**
**I. INTRODUCTION**
Since their appearance more than 20 years ago, IEEE 802.11
wireless local area networks (WLANs) have become the
worldwide preferred option to provide wireless Internet
access to heterogeneous clients in homes, businesses, and
public spaces due to their low cost and mobility support.
The simplest WLAN contains only a basic service set (BSS),
consisting of an access point (AP) connected to a wired
infrastructure, and some wireless stations (STAs) associated
to the AP.
The increase in devices aiming to use WLAN technology to access the Internet has been accompanied by more demanding user requirements, especially for entertainment content: on-line games, virtual reality, and high quality video. In consequence, traditional single-AP WLANs deployed in apartments, i.e., Home WiFi networks, may fail to deliver a satisfactory experience due to the existence of areas where the received power from the AP is low, and so is the achievable performance [1].
Although IEEE 802.11ac (WiFi 5) [2], IEEE 802.11ax
(WiFi 6) [3], [4], and IEEE 802.11be (WiFi 7) [5], [6]
amendments provide enhancements on physical (PHY) and
medium access control (MAC) protocols that may increase
the WLAN efficiency, and also increase the coverage by using
beamforming, the best solution is still to deploy more APs to
improve the coverage in those areas.
In multi-AP deployments, normally only one AP (the main
AP) has Internet access, and so the other APs (from now
on simply called Extenders) must relay the data to it using
a wired or wireless backhaul network. Since the existence of a wired network cannot always be presumed, Extenders
communicate with the main AP wirelessly. In this case, both
the main AP and Extenders are equipped with at least two
radios, usually operating at different bands.
In the presence of multiple AP/Extenders, a new challenge
appears: how to determine the best AP/Extender for each
given STA. According to the default WiFi AP selection
mechanism, an STA that receives beacons from several
AP/Extenders will initiate the association process with the
AP/Extender with the highest received signal strength indicator (RSSI) value. Though simple and easy to implement,
this mechanism omits any influence of traffic load and, consequently, can lead to network congestion and low throughput
in scenarios with a high number of STAs [7].
Many research activities have already widely tackled the
AP selection process in an area commonly referred to as load
_balancing, whose goal is to distribute STAs more efficiently_
among the available AP/Extenders in a WLAN. Although
multiple effective strategies have been proposed in the literature, most of them lack the prospect of real implementation,
as they require changes in the existing IEEE 802.11 standards
and/or in STAs’ wireless cards.
The channel load aware AP/Extender selection mechanism presented in this article sets out to enhance the overall
WLAN performance by including the effect of the channel
load into the STA association process. To do so, only already
developed IEEE 802.11 amendments are considered: IEEE
802.11k to gather information from AP/Extenders in the
WLAN, and IEEE 802.11v to notify each STA of its own
prioritized list of AP/Extenders.
Particularly, the main contributions of the current work can
be summarized as follows:
- Review and classification of multiple existing
AP/Extender selection mechanisms, and some background information on the use of IEEE 802.11k/v.
- Design of a feasible, practical, and flexible channel load
_aware AP/Extender selection mechanism supported by_
IEEE 802.11k/v amendments.
- Evaluation of the channel load aware AP/Extender
selection mechanism by simulation, studying the performance gains of using Extenders along with the
proposed solution. We focus on understanding how
the number of Extenders and their position, the fraction of STAs supporting IEEE 802.11k/v, and the
load of the access and backhaul links, impact
the system performance in terms of throughput and
delay.
- Validation of the presented solution in a real testbed,
showing the same trends in terms of performance
improvements that those obtained by simulation.
Lastly, the main lessons that can be learned from this article
are listed below:
1) Placement of Extenders: We observe that Extenders
must be located at a distance (in RSSI terms) large
enough to stimulate the association of farther STAs
while maintaining high data rate in its backhaul connection to the AP. Also, we confirm that connecting
Extenders through other Extenders not only increases
the network coverage, but also the network’s operational range in terms of admitted traffic load.
2) Load of access vs. backhaul links: The relative weight
of the load of the access and backhaul(s) link(s) should
be generally balanced, without dismissing a proper
tuning according to the characteristics of the deploying
scenario.
3) STAs supporting IEEE 802.11k/v: We observe that,
even for a low fraction of STAs supporting IEEE
802.11k/v, the gains of using the channel load aware
AP/Extender selection mechanism are beneficial for
the overall network.
4) Throughput and delay improvements: The use of
Extenders allows balancing the load of the network,
which results in significant gains in throughput and
delay for much higher traffic loads. Therefore, the use
of Extenders is recommended for high throughput multimedia and delay-constrained applications.
The remainder of this article is organized as follows:
Section II offers an overview on AP selection in WiFi
networks. Section III elaborates on IEEE 802.11k and
IEEE 802.11v amendments, paying special attention to the
features considered in the proposed channel load aware
AP/Extender selection mechanism, which is in turn described
in Section IV. Performance results obtained from simulations and real deployments are compiled in Section V and
Section VI, respectively. Lastly, Section VII discusses open
challenges in future Home WiFi networks and Section VIII
presents the obtained conclusions.
**II. USE OF EXTENDERS AND AP/EXTENDER SELECTION**
**MECHANISMS IN WiFi NETWORKS**
The current section reviews the main aspects of the technical
framework involving the use of Extenders in next-generation
Home WiFi networks, such as the main challenges related
to their deployment, the existing options to integrate them
into the STA association procedure, and their management
through an external platform.
_A. MULTI-HOP COMMUNICATION IN WLANs_
The need to expand WLAN coverage to every corner of a
targeted area can be satisfied by increasing the AP transmission power or by deploying wired/wireless Extenders. Putting
aside the wired option, which is not in scope of the current
article, wireless extension of a WLAN can be achieved by
means of a wireless mesh network (WMN).
In a WMN, multiple deployed APs communicate among themselves in a multi-hop scheme to relay data from/to STAs. The
most representative initiative in this field is IEEE 802.11s,
which integrates mesh networking services and protocols
with IEEE 802.11 at the MAC layer [8]. Wireless frame forwarding and routing capabilities are managed by the hybrid
wireless mesh protocol (HWMP), which combines the flexibility of on-demand route discovery with efficient proactive
routing to a mesh portal [9].
As traffic streams in a WMN are mainly oriented
towards/from the main AP, they tend to form a tree-based
wireless architecture [10]. This architecture strongly relies
on the optimal number and position of deployed Extenders,
which is determined in [11] as a function of PHY layer parameters with the goal of minimizing latency and maximizing
data rate. This analysis is extended in [12], where a model
based on PHY and MAC parameters returns those Extender locations that maximize multi-hop throughput. Other
approaches such as [13] go even further and propose the use of Artificial Intelligence to enable autonomous self-deployment
of wireless Extenders.
Relaying capabilities of Extenders are also a matter of
study, as in [14], where an algorithm is proposed to determine
the optimal coding rate and modulation scheme to dynamically control the best band and channel selection, or in [15],
where a low latency relay transmission scheme for WLAN is
proposed to simultaneously use multiple frequency bands.
All in all, once the number, location and relaying capabilities of Extenders operating in a WLAN are selected, the
way in which STAs determine their own parent (i.e., the best
AP/Extender located within their coverage area) can impact
the overall performance of the network. We discuss this issue in the following lines.
_B. AP/EXTENDER SELECTION MECHANISMS_
A review of the currently existing AP/Extender selection
mechanisms along with the description of the WiFi scanning
modes that enable them is offered in the following lines.
1) WiFi SCANNING MODES
IEEE 802.11 standard defines two different scanning modes:
_passive and active [16]. In passive scanning, for each avail-_
able radio channel, the STA listens to beacons sent by APs
for a dwell time. As beacons are usually broadcast by the AP
every 100 ms, channel dwell time is typically set to 100-200
ms to guarantee beacon reception [17], [18].
In active scanning, the STA starts broadcasting a probe
_request frame on one channel and sets a probe timer. If no_
_probe response is received before the probe timer reaches_
_MinChannelTime, the STA assumes that no AP is working in_
that channel and scans another channel alternatively. Otherwise, if the STA does receive a probe response, it will further
wait for responses from other working APs until MaxChannelTime is reached by the probe timer. MinChannelTime and MaxChannelTime values are vendor-specific, as they are not specified by the IEEE 802.11 standard. Indeed, obtaining optimum values to minimize the active scanning phase has attracted research attention. In [19], for instance, the
author sets these values as low as 6-7 ms and 10-15 ms,
respectively.
Since passive scanning always has longer latency than
_active scanning, wireless cards tend to use the latter to rapidly_
find nearby APs [20]. However, active scanning has three
disadvantages: 1) it consumes significantly more energy than
_passive scanning, 2) it is unable to discover networks that do_
not broadcast their SSID, and 3) it may result in shorter scan
ranges because of the lower power level of STAs.
It is also usual that mobile STAs periodically perform
_active background scanning to discover available APs, and_
then accelerate an eventual roaming operation [21]. In this
case, the STA (already associated to an AP and exchanging
data) goes periodically off-channel and sends probe requests
across other channels. On the other hand, the active on-roam
_scanning only occurs after the STA determines a roam is_
necessary.
2) DEFAULT WiFi AP SELECTION MECHANISM
Regardless of the scanning mode used by an STA to complete
its own list of available APs, and the final purpose of this
scanning (i.e., the initial association after the STA startup or
a roaming operation), the STA executes the default WiFi AP
selection mechanism (from now on also named RSSI-based)
by choosing the AP of the previous list with the strongest
RSSI.
This is the approach followed by common APs and available multi-AP commercial solutions, like Google WiFi [22]
or Linksys Velop [23], which are especially indicated for
homes with coverage problems and few users. In addition,
these two solutions also integrate the IEEE 802.11k/v amendments (analyzed later on in Section III), but only to provide
faster and seamless roaming.
The strongest RSSI might indicate the best channel condition between the STA and the AP. However, only relying
on this criterion is not always the best choice, as it can lead
to imbalanced loads between APs, inefficient rate selection,
and selection of APs with poor throughput, delay, and other
performance metrics [24].
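For illustration purposes, the default rule can be sketched in a few lines of Python (the ScanResult container and function names below are ours, not part of the IEEE 802.11 standard):

```python
from dataclasses import dataclass

# Hypothetical container for what an STA learns from beacons or
# probe responses during scanning; not part of the standard.
@dataclass
class ScanResult:
    bssid: str
    channel: int
    rssi_dbm: float

def default_ap_selection(scan_results: list[ScanResult]) -> ScanResult:
    """Default WiFi rule: pick the AP with the strongest RSSI,
    ignoring any load information."""
    return max(scan_results, key=lambda ap: ap.rssi_dbm)

# Example: the Extender wins simply because its RSSI is stronger.
candidates = [ScanResult("AP", 1, -62.0), ScanResult("Extender-1", 6, -48.0)]
print(default_ap_selection(candidates).bssid)  # -> Extender-1
```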
3) ALTERNATIVE AP/EXTENDER SELECTION MECHANISMS
The inefficiency of the RSSI-based AP selection mechanism
has motivated the emergence of alternative methods that take
into account other metrics than solely the RSSI. The most representative examples are compiled in Table 1 and classified
according to three different criteria: the AP selection mode,
the architecture employed, and the selected decision metric:
- AP selection mode: In the active AP selection, the
STA considers all potential APs and gathers information
regarding one or more performance metrics to make a
decision. In [25], the STA scans for all available APs,
quickly associates to each, and even runs a set of tests
to estimate Internet connection quality. On the contrary,
**TABLE 1. Classification of alternative AP/Extender selection mechanisms.**
Whereas mechanisms employing to some extent IEEE 802.11k are marked
with †, no mechanism employs IEEE 802.11v. (By default, parameters from
the decision metric column refer to the STA’s value).
the passive AP selection is based on the information that
the STA directly extracts from beacon frames or deduces
from their physical features, such as the experienced
delay in [34].
Lastly, in the hybrid AP selection, the network makes
use of the information shared by the STA to give advice
on the best possible potential AP. In [39], for instance,
clients automatically submit reports on the APs that they
use with regard to estimated backhaul capacity, ports
blocked, and connectivity failures.
- Architecture: This category splits the different mechanisms into decentralized and centralized. Decentralized mechanisms are those in which the STA selects its
AP based on its available information (even combining
cross-layer information, as in [38]). On the other hand,
_centralized mechanisms imply a certain degree of coor-_
dination between different APs thanks to a central entity
(that may well be an SDN controller, as in [30]) intended
to balance overall network load.
- Decision metric: The AP selection metric can be determined by a single parameter (e.g., AP load in [28]) or a
weighted combination of some of them (e.g., throughput
and channel occupancy rate in [33]). Apart from RSSI,
there exists a vast quantity of available magnitudes for
this purpose; however, the most common ones in the
reviewed literature are throughput, load, and delay.
Furthermore, there exist some novel approaches that have
introduced machine learning (ML) techniques into the AP
selection process. For instance, in [40] a decentralized cognitive engine based on a neural network trained on past link
conditions and throughput performance drives the AP selection process.
Likewise, a decentralized approach based on the
exploration-exploitation trade-off from Reinforcement
Learning algorithms is used in [41], [42]. Under that system,
STAs learn the network conditions and associate to the AP
that maximizes their throughput. In consequence, STAs stop their exploration, which is only resumed when there is a change in the network’s topology.
Another decentralized ML-based approach is proposed in
[43], where the AP selection mechanism is formulated as
a non-cooperative game in which each STA tries to maximize its throughput. Then, an adaptive algorithm based on
no-regret learning makes the system converge to an equilibrium state.
_C. COMMERCIAL WLAN MANAGEMENT PLATFORMS_
Centralized network management platforms are commonly
used in commercial solutions, as they give full control of the
network to the operator. These management platforms focus
not only on the AP selection, but also cover several network
performance enhancements such as channel and band selection, and transmit power adjustment.
Nighthawk Mesh WiFi 6 System [44] intelligently selects
the fastest WiFi band for every connected STA, and Insight
Management Solution [45] recalculates the optimum channel
and transmit power for all the APs every 24 hours. Based
on signal strength and channel utilization metrics, ArubaOS
network operating system has components (i.e. AirMatch
[46] and ClientMatch [47]) which dynamically balance STAs
across channels and encourage dual-band capable STAs to
stay on the 5GHz band on dual-band APs. Lastly, Cognitive
Hotspot Technology (CHT) [48] is a multi-platform software
that can be installed on a wide range of APs. It brings
distributed intelligence to any WiFi network to control the
radio resources including AP automatic channel selection,
load balancing, as well as client and band steering for STAs.
The channel load aware AP/Extender selection mechanism presented in this work could be easily integrated in
these centralized platforms and even be further enhanced by
exploiting the know-how gathered from different Home WiFi
networks.
**III. IEEE 802.11k/v AMENDMENTS**
The constant evolution of the IEEE 802.11 standard has
been fostered by the incremental incorporation of technical
amendments addressing different challenges in the context of
WLANs. In particular, the optimization of the AP selection
process and the minimization of the roaming interruption
time are tackled in two different amendments: IEEE 802.11k
and IEEE 802.11v [49].
_A. IEEE 802.11k: RADIO RESOURCE MEASUREMENT_
The IEEE 802.11k amendment on radio resource measurement [50] defines methods for information exchange about
the radio environment between APs and STAs. This information may be thus used for radio resource management
strategies, making devices more likely to properly adapt to
the dynamic radio environment.
Radio environment information exchange between two
devices running IEEE 802.11k occurs through a two-part
frame request/report exchange carried within radio measurement report frames (i.e., a purpose-specific category of action
frames). Despite the wide set of possible measurements, the
AP/Extender selection mechanism presented in this work will
only consider beacon and channel load reports.
The beacon request/report pair enables an AP to ask an
STA for the list of APs it is able to listen effectively to on
a specified channel or channels. The request also includes
the measurement mode that should be performed by the
targeted STA: active scanning (i.e., information comes from
_probe responses), passive scanning (i.e., information comes_
from beacons), or beacon table (i.e., use of previously stored
beacon information).
Whenever an STA receives a beacon request, it creates
a new beacon report containing the BSSID, operating frequency, channel number, and RSSI (among other parameters)
of each detected AP within its range during the measurement
duration specified in the beacon request. At the end of the
measurement duration, the STA will send a beacon report
with all the aforementioned gathered information.
Similarly, the channel load request/report exchange allows
an AP to receive information on the channel condition of a
targeted network device. The channel load report contains the
channel number, actual measurement start time, measurement
duration, and channel busy fraction [51].
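As a rough illustration of the information carried by these two report types, the following Python containers mirror the fields discussed above (field names are descriptive and ours; the amendment defines the exact frame formats and encodings):

```python
from dataclasses import dataclass

@dataclass
class BeaconReportEntry:
    """One detected AP/Extender, as reported by an STA in a beacon report."""
    bssid: str
    frequency_mhz: int   # operating frequency
    channel: int         # channel number
    rssi_dbm: float      # signal strength observed by the STA

@dataclass
class ChannelLoadReport:
    """Channel condition, as reported in a channel load report."""
    channel: int
    measurement_start: float       # actual measurement start time
    measurement_duration_ms: int
    channel_busy_fraction: float   # fraction of time sensed busy, in [0, 1]
```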
_B. IEEE 802.11v: WIRELESS NETWORK MANAGEMENT_
The IEEE 802.11v amendment [52] on wireless network
management uses network information to influence client
roaming decisions. Whereas IEEE 802.11k only targets the
radio environment, IEEE 802.11v includes broader operational data regarding network conditions, thus allowing STAs
to acquire better knowledge on the topology and state of the
network.
In fact, there are a multitude of new services powered by
IEEE 802.11v, including power saving mechanisms, interference avoidance mechanisms, fast roaming, or an improved
location system, among others. In all cases, the exchange
of data among network devices takes place through several
action frame formats defined for wireless network management purposes.
The BSS transition management service is of special interest to our current work, as it enables suggesting a set of preferred candidate APs to an STA according to a pre-established
policy. IEEE 802.11v defines 3 types of BSS transition man_agement frames: query, request, and response._
- A query is sent by an STA asking for a BSS transition
_candidate list to its corresponding AP._
- An AP responds to a query frame with the BSS transition
_candidate list; that is, a request frame containing a pri-_
oritized list of preferred APs, their operating frequency,
and their channel number, among other information.
In fact, the AP may also send a BSS transition candidate
_list to a compatible IEEE 802.11v STA at any time to_
accelerate any eventual roaming process.
- A response frame is sent by the STA back to the AP,
informing whether it accepts or denies the transition.
Once it has received a BSS transition candidate list and accepted its proposed transition, the STA will follow the provided AP candidate list in order of priority, trying to reassociate to such a network. As the operating frequency and channel number of each candidate AP are also provided, the total scan process time in the reassociation operation can be minimized.
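As an illustration, the way an STA consumes a BSS transition candidate list can be sketched as follows (the try_associate callback is a stand-in of ours for the real (re)association procedure, not a standardized primitive):

```python
from typing import Callable, Optional

def follow_candidate_list(candidates: list[str],
                          try_associate: Callable[[str], bool]) -> Optional[str]:
    """Attempt reassociation with each candidate BSSID in order of priority,
    as an STA does after accepting a BSS transition candidate list.

    Returns the BSSID of the first successful association, or None."""
    for bssid in candidates:  # the list is already prioritized by the AP
        if try_associate(bssid):
            return bssid
    return None
```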
**IV. CHANNEL LOAD AWARE AP/EXTENDER SELECTION**
We introduce in this section the proposed channel load aware
AP/Extender selection mechanism. We aim to define a general approach that allows us to study the trade-off between
received power and channel load-based metrics to make the
AP/Extender selection decision.
The proposed AP/Extender selection mechanism is
intended to be applied on a WLAN topology like the one from
Figure 1, consisting of an AP, several Extenders wirelessly
connected to the AP, and multiple STAs willing to associate to
the network.[1] It is fully based on the existing IEEE 802.11k/v
amendments, which enables its real implementation, and can
be executed as part of the association process of an STA in
any of the following circumstances:
- An STA has just associated to the network through the
AP/Extender selected by using the default RSSI-based
criteria.
- An STA is performing a roaming procedure between
different AP/Extenders from the same WLAN.
- The AP initiates an operation to reassociate all previously associated STAs in case network topology has
changed (e.g., a new Extender is connected), or an overall load balance operation is executed (e.g., as a consequence of new traffic demands coming from STAs).
In a real implementation, all computation associated with
this mechanism would be executed in the AP, as it is the
single, centralized element in the architecture with a global
vision of the network. Alternatively, computation tasks could
be assumed by an external network controller run into a
server, either directly connected to the AP or placed in a
remote, cloud-based location.
_A. OPERATION OF THE AP/EXTENDER SELECTION_
_MECHANISM_
The channel load aware AP/Extender selection mechanism
splits the selection process into four differentiated stages.
Figure 2 shows the sequence of their main tasks, which are
described in the following lines:
1) Initial association (IEEE 802.11)
- After an active or passive scanning, the STA sends an association request to the AP/Extender with the best observed RSSI value.
- The AP/Extender registers the new STA and confirms its association. Moreover, it checks whether the STA supports the IEEE 802.11k and IEEE 802.11v modes, which are indispensable to properly perform the next steps of the mechanism.
- The AP/Extender notifies the AP (or the network controller) of the new associated STA and its capabilities.
1If Extenders were connected to the AP by means of wired links, the
proposed channel load aware mechanism would be likewise applicable.
**FIGURE 1. WLAN topology with Extenders. Note that M(ai** **_,j ) corresponds to the access link metric from STA_**
_i to AP/Extender j. As for M(bj ), it corresponds to the backhaul link metric of Extender j_ .
2) Collection and exchange of information (IEEE
802.11k)
- The AP (or the network controller) initiates a new
information collection stage by sending (directly
or through the corresponding Extenders) a beacon
_request to the STA._
- Depending on the type of the beacon request
received, the STA initiates an active scanning, a
_passive scanning, or simply consults its own bea-_
_con table._
- The surrounding AP/Extenders respond to an
_active scanning with a probe response or simply_
emit their own beacon frames.
- The STA transmits the resulting beacon report to
its corresponding AP/Extender.
- The AP/Extender, in turn, retransmits this beacon
_report to the AP (or the network controller)._
- Lastly, the AP emits channel load requests to the
network Extenders.
- The Extenders measure the observed channel occupation and send a channel load report to the AP.
**FIGURE 2. Sequence diagram of the channel load aware AP/Extender selection mechanism, using active scanning as measurement mode and explicit channel load request/report exchange.**

3) Computation and transmission of decision (IEEE 802.11v)
- The AP (or the network controller) computes the
_Yi,j decision metric (defined in the next subsection)_
for each AP/Extender detected by the STA.
- The AP sends the STA the resulting BSS transition
_candidate list._
4) Reassociation (IEEE 802.11)
- The STA starts a new association process with
the first AP/Extender recommended in the BSS
_transition candidate list. If it fails, the STA tries_
to associate to the next AP/Extender in the list.
- The new AP/Extender registers the new STA and
confirms its association.
- The new AP/Extender notifies the AP of the new
associated STA.
- Every reassociation to a new AP/Extender within the WLAN would require a complete authentication process, unless the fast BSS transition feature from IEEE 802.11r is employed [53].
According to the classification criteria from Table 1, the AP selection mode in this new AP/Extender selection mechanism is hybrid, because STAs share with the AP information about the network state; the architecture is centralized, as the AP (or the network controller) computes the best AP/Extender for each STA; and the parameters of the decision metric are the RSSI observed by the STA and the channel load observed by the different AP/Extenders. As a matter of example, Figure 3 offers a graphical view of a complete WLAN before and after applying the channel load aware AP/Extender selection mechanism.

**FIGURE 3. Example of a WLAN before and after applying the channel load aware AP/Extender selection mechanism.**
_B. AP/EXTENDER SELECTION METRIC_
The decision metric used in the proposed approach combines
parameters observed both in the access link M (ai,j) (i.e., from
STA i to AP/Extender j) and in the backhaul link(s) M (bj)
(i.e., those in the route from Extender j to the AP) [36].
When using the RSSI-based AP selection mechanism, STAs simply choose the AP/Extender with the strongest RSSI value in the access link. Differently, our AP/Extender selection mechanism takes advantage of the capabilities offered by IEEE 802.11k and IEEE 802.11v to create a new decision metric by combining parameters from both access and backhaul links.

More specifically, Yi,j is the decision metric employed in our proposal for each pair formed by STA i and AP/Extender j. Then, the best AP/Extender for STA i will be the one with the minimum Yi,j value according to

Yi,j = α · M(ai,j) + (1 − α) · M(bj) = α · (RSSI∗i,j + Cai,j) + (1 − α) · Σk∈Nj Cbj(k), (1)
where α is a configurable factor that weights the influence of
access and backhaul links (0 ≤ _α ≤_ 1) and Cai,j is the channel
load of the access link observed by AP/Extender j. Considering Nj as the set of backhaul links in the path between
Extender j and the AP, Cbj(k) is the channel load of backhaul
link k. Note that when j corresponds to the AP, there are no
backhaul links (i.e., Nj = ∅).
Channel load C is here considered as the fraction of time during which the wireless channel is sensed busy, as indicated by either the physical or virtual carrier sense mechanism, with 0 ≤ C ≤ 1 [54]. The AP (or the network controller) can
obtain this information explicitly (by means of the channel
_load request/report exchange) or implicitly (from the BSS_
_load element contained in both beacon frames and probe_
_responses emitted by AP/Extenders) [50]._
In fact, unlike other parameters employed in alternative
decision metrics, the channel load is able to provide information not only from the targeted WLAN, but also from the
influence of other external networks. In consequence, the
WLAN is better able to balance the traffic load of newly
associated STAs to the less congested AP/Extenders, thus
increasing the adaptability degree to the state of the frequency
channel.
For its part, RSSI∗i,j corresponds to an inverse weighting of the signal strength received by STA i from AP/Extender j, which is computed as

RSSI∗i,j = (RSSIi,j − Ptj) / (Si − Ptj), (2)

where RSSIi,j is the signal strength received by STA i from AP/Extender j in dBm, Ptj is the transmission power level of AP/Extender j in dBm, and Si is the carrier sense threshold (i.e., sensitivity level) of STA i in dBm.
**FIGURE 4. RSSI weighting applied in the channel load aware AP/Extender selection mechanism.**

As shown in Figure 4, the weighting of possible input values RSSIi,j ∈ [Si, Ptj] in (2) results in output values RSSI∗i,j ∈ [0, 1]. Consequently, low RSSI values (i.e., those close to the sensitivity level Si) are highly penalized.
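Putting (1) and (2) together, the following Python sketch illustrates how the AP (or the network controller) could rank the candidates for a given STA; data structures, helper names, and the numeric values are ours and purely illustrative:

```python
# Sketch of the channel load aware decision metric from (1) and (2).
# All names and values are illustrative; inputs mirror the quantities
# defined in the text.

def rssi_weight(rssi_dbm: float, pt_dbm: float, sensitivity_dbm: float) -> float:
    """Inverse RSSI weighting of (2): maps RSSI in [S_i, Pt_j] to [0, 1],
    so that values close to the sensitivity level are highly penalized."""
    return (rssi_dbm - pt_dbm) / (sensitivity_dbm - pt_dbm)

def decision_metric(alpha: float, rssi_dbm: float, pt_dbm: float,
                    sensitivity_dbm: float, access_load: float,
                    backhaul_loads: list[float]) -> float:
    """Y_{i,j} from (1): weighted combination of access and backhaul metrics.
    backhaul_loads is empty when candidate j is the AP itself (N_j is empty)."""
    access_metric = rssi_weight(rssi_dbm, pt_dbm, sensitivity_dbm) + access_load
    backhaul_metric = sum(backhaul_loads)
    return alpha * access_metric + (1.0 - alpha) * backhaul_metric

# The best AP/Extender for an STA is the candidate minimizing Y_{i,j}.
candidates = {
    "AP":         decision_metric(0.5, -70.0, 20.0, -90.0, 0.60, []),
    "Extender-1": decision_metric(0.5, -55.0, 20.0, -90.0, 0.30, [0.25]),
}
best = min(candidates, key=candidates.get)  # -> "Extender-1" in this example
```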
**V. PERFORMANCE EVALUATION**
This section is first intended to understand the benefits of
adding Extenders to a WLAN, and determine their optimal
number and location for a given area. Then, the very concept
of a WLAN with Extenders is applied to a typical Home
WiFi scenario aiming to evaluate the impact of the main
parameters involved in the AP/Extender selection mechanism
on network’s performance.
_A. SIMULATION FRAMEWORK_
MATLAB was the selected tool to develop a simulator that
enables the deployment, setting, testing, and performance
evaluation of a WLAN. Specifically, our simulator focused
on the AP/Extender selection mechanism contained in the
STA association process, the transmission of uplink (UL) data
packets (i.e., those from STAs to the AP), and the computation of metrics in the AP with respect to the received traffic.
As for the PHY layer, it was assumed that, once the network topology was established, all devices adjusted their data
rate according to the link condition. Specifically, simulations
used the ITU-R indoor site-general path loss model according
to
PLITU(di,j) = 20 · log10(fc) + N · log10(di,j) + Lf − 28, (3)
where PLITU is the path loss value (in dB), di,j is the distance
between transmitter i and receiver j (in m), fc is the employed
frequency (in MHz), N is the distance power loss coefficient
(in our particular case and according to the model guidelines,
_N = 31), and Lf is the floor penetration loss factor (which_
was removed as a single floor was always considered) [55].
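For reference, (3) can be transcribed directly into code; our simulator was implemented in MATLAB, so the following Python version is only an equivalent sketch:

```python
import math

def itu_indoor_path_loss(distance_m: float, freq_mhz: float,
                         n: float = 31.0, lf_db: float = 0.0) -> float:
    """ITU-R indoor site-general path loss of (3), in dB.
    n is the distance power loss coefficient (31 in our setup) and
    lf_db the floor penetration loss (0 for a single floor)."""
    return 20.0 * math.log10(freq_mhz) + n * math.log10(distance_m) + lf_db - 28.0

def rssi_at(distance_m: float, freq_mhz: float, tx_power_dbm: float) -> float:
    """Received power (in dBm) at a given distance, ignoring fading."""
    return tx_power_dbm - itu_indoor_path_loss(distance_m, freq_mhz)

# Example: received power 10 m away from a 20 dBm transmitter at 2.4 GHz.
print(round(rssi_at(10.0, 2400.0, 20.0), 1))  # approximately -50.6 dBm
```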
The distributed coordination function (DCF) was used
by all AP/Extenders and STAs. We assumed that all
AP/Extenders and STAs were within the coverage area of the
others, given they operated in the same channel. Therefore,
an STA was able to associate to any AP/Extender in the area
of interest.
Only UL transmissions were considered in simulations,
as they represent the worst case in a WLAN; that is, when
multiple non-coordinated devices compete for the same wireless spectrum. Though excluded from the current study,
downlink (DL) communications could either follow the same
topology resulting from the STA association process or, as it
is already conceived by designers of future WiFi 7, establish
their own paths by means of the multi-link operation capability (in our particular case, according to an alternative decision
metric) [6].
WLAN performance metrics (throughput, delay, and congestion) were obtained using the IEEE 802.11 DCF model
presented and validated in [56], which supports heterogeneous finite-load traffic flows as required in this work. Details
from two different wireless standards were implemented in
the simulator: IEEE 802.11n and IEEE 802.11ac. Due to the
**TABLE 2. List of common simulation parameters.**
higher penetration of 2.4 GHz compatible devices in real
deployments, all tests employed IEEE 802.11n at 2.4 GHz
in access links (with up to 3 available orthogonal channels)
and IEEE 802.11ac at 5 GHz in backhaul links (with a single
channel).[2] Nonetheless, the simulator supports any combination of standards over the aforementioned network links.
A wide set of tests was conducted on several predefined
scenarios to evaluate the impact of different WLAN topologies, configurations, and AP/Extender selection mechanisms
on the network’s performance. The definition of the scenarios together with their corresponding tests is provided in
the following subsections. Lastly, a comprehensive list of
common simulation parameters is offered in Table 2, whose
values were applied to all subsequent tests, if not otherwise
specified. As for test-specific simulation parameters, we refer
the reader to Table 3.
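Since several of the following tests deploy STAs uniformly at random over a circular area, a small self-contained sketch of such a deployment is given below (names are ours; the simulator itself was written in MATLAB):

```python
import math
import random

# Sketch of the uniform random STA deployment used across tests:
# n_sta points uniformly distributed over a circle of radius d_max.
def deploy_stas(n_sta: int, d_max: float,
                rng: random.Random) -> list[tuple[float, float]]:
    positions = []
    for _ in range(n_sta):
        r = d_max * math.sqrt(rng.random())  # sqrt gives uniform density in area
        theta = 2.0 * math.pi * rng.random()
        positions.append((r * math.cos(theta), r * math.sin(theta)))
    return positions

print(deploy_stas(10, 100.0, random.Random(42))[:2])
```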
_B. SCENARIO #1: CIRCULAR AREA_
A circular area was defined by the maximum coverage range
of the AP at 2.4 GHz (Dmax); i.e., the distance in which
an STA would receive a signal with the same strength as
its sensitivity level. Three different network topologies were considered: only a single AP, an AP and 2 Extenders,
and an AP and 4 Extenders forming a cross (see Figure 5).
Position of Extenders was in turn limited by the maximum
coverage range of the AP at 5 GHz (dmax).
2Data rates were computed from the observed RSSI and according to the
corresponding modulation and coding scheme (MCS) table.
**TABLE 3. List of test-specific simulation parameters.**
**FIGURE 5. Network topologies of Scenario #1.**
1) TEST 1.1: AP-EXTENDER DISTANCE
The goal of this test was to evaluate the effect of the distance
between the AP and any Extender (dAP,E) on network’s performance. To keep symmetry, the topology from Figure 5c
was used, moving all Extenders far from the AP, with RSSI
values at any Extender (RSSIAP,E) ranging from −50 dBm to
−90 dBm (i.e., the latter being the RSSIAP,E value at dmax),
in intervals of 1 dB. The case without Extenders was also
included for comparative purposes.
A number of NSTA = 10 STAs with a common traffic load
of BSTA = 2.4 Mbps were uniformly and randomly deployed
k = 1000 times on the AP coverage area. Both the RSSI-based and the channel load aware AP/Extender selection
mechanisms were used in each deployment. In the latter case,
_α was set to 0.5 to give the same importance to access and_
backhaul links when selecting an AP/Extender.
As shown in Figure 6, the use of Extenders almost always
improved the network’s performance in terms of throughput, delay, and congestion regardless of RSSIAP,E. In general, the best range to place Extenders was RSSIAP,E ∈
[−50, −72] dBm, as throughput was maintained over 99%
in multi-channel cases when using any of the analyzed
AP/Extender selection mechanisms.[3]
More specifically, the channel load aware mechanism was
able to ensure 100% of throughput and keep delay below
10 ms regardless of RSSIAP,E. This was not the case when
using a single communication channel, because almost all
STAs were directly connected to the AP (thus resembling
the case without Extenders, where the furthest STAs hindered the
operation of the rest due to their higher channel occupancy),
unless they were really close to an alternative Extender.
As for the RSSI-based mechanism, it always behaved
worse than the _channel_ _load_ _aware_ mechanism in
multi-channel cases, but provided better performance in
single channel ones. In fact, although the number of STAs
connected to Extenders decayed as we moved Extenders farther
away from the AP, that value was still much higher than in the
_channel load aware mechanism. However, the adoption by_
Extenders of MCS 1 from RSSIAP,E = −77 dBm onwards severely impacted the network’s performance, as they were not able to
appropriately transmit all packets gathered from STAs.
As a result of this test, dAP,E was set in the following tests to
the value that made RSSIAP,E = −70 dBm.
2) TEST 1.2: NETWORK’s RANGE EXTENSION
To prove the benefit of using Extenders to increase the network coverage, the same topologies of Scenario #1 were
3In this test, but also as generalized practice in the rest of tests from this
article, results of each network configuration were obtained as the mean of
values from all k deployments, whether the network got congested or not.
**FIGURE 6. Test 1.1. AP-Extender distance.**
**FIGURE 7. Test 1.3. Number of Extenders.**
used. However, in this case, STAs were placed uniformly at
random over a circular area of radius 1.2 · Dmax. Again, RSSI-based and channel load aware (with α = 0.5) AP/Extender
selection mechanisms were employed.
A number of NSTA = 10 STAs were randomly deployed
k = 10000 times on the predefined area, with the resulting average rate of successful associations from Table 4.
As expected, the higher the number of Extenders, the higher
the total percentage of STAs that found an AP/Extender
within their coverage area and got associated. In fact, both
AP/Extender selection mechanisms achieved the same STA
association rates, because they only depended on whether
there were available AP/Extenders within each STA coverage
area.
**TABLE 4. Test 1.2. Network’s range extension.**

3) TEST 1.3: NUMBER OF EXTENDERS
A number of NSTA = 10 STAs, each one with the same traffic load ranging from BSTA = 12 kbps to BSTA = 3.6 Mbps (i.e., a total network traffic, BT = NSTA · BSTA, from BT = 0.12 Mbps to BT = 36 Mbps), were placed in all three topologies from Scenario #1. STA deployments were randomly selected k = 1000 times and the whole network operated under both the RSSI-based and the channel load aware (with α = 0.5) AP/Extender selection mechanisms. In this test, only the multi-channel case was considered.
Results from Figure 7 justify the use of Extenders to
increase the range in which the network operates without
congestion, going up to BT ≈ 13 Mbps without Extenders,
up to BT ≈ 16 Mbps in the RSSI-based mechanism, and
up to BT ≈ 25 Mbps in the channel load aware one.
Furthermore, the channel load aware mechanism guaranteed the minimum observed delay for any considered value
of BT > 5 Mbps.
**TABLE 5. Test 1.3. Number of Extenders (network’s operational range expressed in terms of BT ).**
The influence of the number of Extenders on performance differed depending on the AP/Extender selection
mechanism. Whereas it was barely relevant in the channel
_load aware mechanism due to the effective load balancing_
among Extenders and AP, it provided heterogeneous results
when using the RSSI-based mechanism. Particularly, the use
of 4 Extenders left the AP with a very low number of
directly connected STAs, thus overloading backhaul links
with respect to the case with only 2 Extenders. Lastly, further
details on the network’s operational range are provided in Table 5
according to three different metrics based on throughput,
delay, and congestion.
_C. SCENARIO #2: HOME WiFi_
In this case, STAs were deployed within a rectangular area
emulating a typical Home WiFi scenario defined according to
a set of RSSI values (see Figure 8). Three network topologies
were considered: only a single AP, an AP connected
to a single Extender, and an AP connected to two linked
Extenders.
1) TEST 2.1: USE OF LINKED EXTENDERS
To evaluate the effect of linking two Extenders in the backhaul, a set of NSTA = 10 STAs were randomly placed k =
1000 times on all topologies from Figure 8, with BSTA ranging
from 12 kbps to 6 Mbps (i.e., BT took values from 0.12 Mbps
to 60 Mbps). Both the RSSI-based and the channel load
_aware AP/Extender selection mechanisms were considered_
(the latter with α = 0.5 to balance access and backhaul
links).
This test was first performed in a multi-channel case, where
the channel load aware mechanism was able to avoid network
congestion until almost BT = 40 Mbps and improve the
performance offered by the RSSI-based mechanism, as seen
in Figure 9. Furthermore, the use of a second Extender linked
to the first one was justified to increase the network’s operational range, as shown in Table 6.
As for the single channel case, the use of a second Extender
(whether under the RSSI-based or the channel load aware
mechanism) here did not result in a significant improvement
of any analyzed performance metric. The fact that all STAs
(even some of them with low transmission rates) ended up
competing for the same channel resources increased the overall occupation and led to congestion for BT < 25 Mbps regardless of the number of Extenders.
**FIGURE 8. Network topologies of Scenario #2.**
2) TEST 2.2: IMPACT OF ACCESS AND BACKHAUL LINKS
Assuming the network topology from Figure 8c with 2 linked
Extenders, the effect of α parameter on the channel load
_aware AP/Extender selection mechanism was studied for α =_
{0, 0.25, 0.5, 0.75, 1} and BSTA = {1.8, 3, 4.2, 5.4} Mbps
(i.e., a total network traffic of BT = {18, 30, 42, 54} Mbps,
respectively).
As shown in Figure 10, values of α ∈ [0.5, 0.75] in the
multi-channel case were able to guarantee the best network
performance in terms of throughput (> 95%) and delay
(< 50 ms) for any considered BT value. In fact, giving all
the weight in (1) either to the access link (α = 1) or to the
backhaul links (α = 0) never resulted in the best exploitation
of network resources.
On the other hand, the best performance in the single
channel case was achieved when α = 1; that is, when the
**FIGURE 9. Test 2.1. Use of linked Extenders (multi-channel case).**
**FIGURE 10. Test 2.2. Impact of access and backhaul links.**
_channel load aware mechanism behaved as the RSSI-based_
one and therefore only the RSSI value was taken into account
to compute the best AP/Extender for each STA.[4]
3) TEST 2.3: SHARE OF IEEE 802.11k/V CAPABLE STAs
The channel load aware AP/Extender selection mechanism
can be executed by IEEE 802.11k/v capable STAs without
detriment to the rest of STAs, which would continue using
the RSSI-based mechanism as usual. This test intended to
evaluate this effect on overall network’s performance.
Assuming again the network topology from Figure 8c
with 2 linked Extenders, the effect of the share of IEEE
802.11k/v capable STAs (here noted as β) on the channel
_load aware mechanism was studied for α_ = 0.5, β =
{0, 25, 50, 75, 100} %, and BSTA = {1.8, 3, 4.2, 5.4} Mbps
(i.e., a total network traffic of BT = {18, 30, 42, 54} Mbps,
respectively).
As shown in Figure 11, there was a clear trend in the
multi-channel case that made the network’s performance grow
4In the single channel case, the Cai,j element in (1) is the same for any access
link. Then, if α = 1 (i.e., all the weight is given to the access link), the
decisive factor is RSSIi,j.
together with the share of IEEE 802.11k/v capable STAs,
even ensuring more than 95% of throughput for any considered BT value when half or more of STAs were IEEE
802.11k/v capable.
On the contrary, in the single channel case the best results
were achieved when β = 0 or, in other words, when no STA
had IEEE 802.11k/v capabilities and therefore all of them
applied the traditional RSSI-based mechanism.
4) TEST 2.4: INTERFERENCE FROM EXTERNAL NETWORKS
We aimed to evaluate the potential negative effect that the
presence of neighboring WLANs could have on the channel
_load aware AP/Extender selection mechanism, and verify if_
that mechanism continued outperforming the RSSI-based one
in terms of total throughput and average delay.
A particular scenario with an AP, an Extender and 10 STAs
was considered following the deployment shown in
Figure 12, where the Extender shared its access link channel
at 2.4 GHz band with an external network. Whereas the traffic
load of each STA was set to BSTA = 4.32 Mbps, the load
of the external network ranged from BEXT = 0 Mbps to
_BEXT = 12 Mbps._
**TABLE 6. Test 2.1. Use of linked Extenders (network’s operational range expressed in terms of BT ).**
**FIGURE 11. Test 2.3. Share of IEEE 802.11k/v capable STAs.**
Figure 13a shows that, for any considered α value, the
_channel load aware mechanism was able to deliver 100%_
of throughput for higher BEXT values than the RSSI-based
configuration, with the highest α values giving the best performance. The topology without Extenders, here maintained as a
reference, again demonstrates the utility of Extenders in such
Home WiFi scenarios.
The average delay of STAs followed the same trend (see
Figure 13b), with the channel load aware mechanism again performing best, keeping the delay below 5 ms in any
configuration given BEXT < 5 Mbps. Observing the delay,
it is worth noting the difference between the gradual delay
increase in the RSSI-based mechanism (due to the progressive
saturation of the access link to the Extender when BEXT ∈
[1.5, 3.5] Mbps) in comparison with its abrupt change in
the channel load aware one. This was because, from a given BEXT value onwards, one or more STAs selected a different AP/Extender.
**VI. PERFORMANCE OF THE AP/EXTENDER SELECTION**
**MECHANISM IN A REAL DEPLOYMENT**
A testbed was deployed at Universitat Pompeu Fabra (UPF) to
emulate a Home WiFi network and, therefore, further study
the benefits of using Extenders and the performance of the
_channel load aware AP/Extender selection mechanism._
The hardware employed consisted of an AP, an Extender,
and 5 laptops acting as traffic generation STAs. A sixth
**FIGURE 12. Network topology and STA deployment of Test 2.4.**
laptop was connected to the AP through Ethernet to act as
the traffic sink. The AP and the Extender were placed at a
distance that guaranteed RSSIAP,E = −70 dBm at 5 GHz,
as in the previous simulated scenarios. As for the 2.4 GHz
band, non-overlapping communications were ensured by
using orthogonal channels.
STAs were deployed in 2 different sets of positions (see
Figure 14). Then, using the RSSI and load parameters from
each STA, all network links were obtained according to the
**FIGURE 13. Test 2.4. Interference from external networks.**
appropriate AP/Extender selection mechanism. These links
were then set in the real deployment to get the performance
results.
Tests were performed using iPerf[5] version 2.09 or higher,
which allowed the use of enhanced reports that included both
the average throughput and the delay of the different network
links. The clocks of the STAs needed to be synchronized for
the delay calculation, and this was achieved using the network
time protocol (NTP).[6]
UDP traffic was used in all iPerf tests. Several traffic loads were used in each test, and 5 trials were performed for each traffic load. Each trial lasted 60 seconds.
Clocks were re-synchronized before every new load was
tested (i.e., every 5 trials), leading to an average clock
offset of +/ − 0.154 ms. All trials were performed during
non-working hours, and there were no other WiFi users at UPF
during the tests.
_A. EXPERIMENT 1: ON THE BENEFITS OF USING_
_EXTENDERS_
Testbed #1 was designed to analyze the performance of a
network that consisted of one AP and one Extender, considering only the RSSI-based association mechanism. The device
placement for this experiment can be found in Figure 14a.
Two cases were considered: the first one was the deployment
without the Extender, meaning that all STAs were forced
to associate to the AP. The second case did consider the
Extender, allowing STAs to associate to either the AP or
the Extender. The association for each case can be found in
Table 7, as well as the RSSI of each STA for both the AP and
the Extender.
In the first case, where all STAs associated to the AP,
we can observe that the RSSI was very low for STAs #4
and #5, as expected. Once we added the Extender in the
second case, STAs #4 and #5 were associated to it, and so they
improved their RSSI. Specifically, STA #4 got an increase
of 30.51%, and STA #5 experienced an increase of 46.15%, respectively.
[5iPerf main website: https://iperf.fr/](https://iperf.fr/)
[6NTP main website: http://www.ntp.org/](http://www.ntp.org/)
**FIGURE 14. Plan map of testbeds performed at UPF and placement of**
network devices.
The average RSSI of the different links was
also increased, going from -47.20 dBm to -37.60 dBm (i.e.,
20.34% higher).
Three different total network traffic loads (BT ), as a result
of the corresponding traffic load per STA (BSTA), were tested
in each case, starting with BSTA = 1 Mbps (i.e., BT =
5 Mbps), then BSTA = 3 Mbps (i.e., BT = 15 Mbps), and
lastly BSTA = 7.5 Mbps (i.e., BT = 37.5 Mbps).
Figure 15 shows the throughput achieved for each load,
as well as the average delay for the network. Regardless
of the presence of the Extender, 100% of throughput was
achieved for BT = 5 Mbps. Higher differences appeared
for BT = 15 Mbps and BT = 37.5 Mbps, as without the Extender the network was saturated, whereas 100%
of the desired throughput was achieved when using the
Extender.
**TABLE 7. RSSI values received by STAs from AP/Extender and selected next hop in Testbed #1 and #2.**
**FIGURE 15. Throughput and delay achieved in Testbed #1.**
**FIGURE 16. Average delay by STA in Testbed #1.**
The use of an Extender is also beneficial for the average
delay, as even in the worst case, when BT = 37.5 Mbps, this
value was reduced from 6633.84 ms to 4.10 ms. The reason for such huge delays when not using Extenders can be observed in Figure 16, where the delay breakdown per STA shows how STA #4 and STA #5 influenced the overall average values.

In this experiment we have shown that the use of Extenders in a Home WiFi network can be beneficial beyond the extension of the coverage area, increasing both the minimum and the average RSSI for the whole network, as well as achieving higher throughput capacity and lower delays. These results therefore support our previous simulations, whose results are compiled in Table 4, Table 5, and Figure 7.

_B. EXPERIMENT 2: VALIDATION OF THE CHANNEL LOAD AWARE AP/EXTENDER SELECTION MECHANISM_

Testbed #2 was deployed following Figure 14b to evaluate the performance of the channel load aware AP/Extender selection mechanism and compare it to the RSSI-based mechanism. The AP and the Extender were always active and in non-overlapping channels. All STAs were inside the office that contained the AP, and we applied both selection mechanisms to every STA. For the channel load aware mechanism, the α used was 0.5; i.e., the influence of the access and the backhaul links was the same when selecting an AP/Extender.

Five increasing loads were used to compare the performance of the RSSI-based and the channel load aware selection mechanisms. The resulting association for all STAs, as well as their traffic loads, can be found in Table 7, where we can observe that at least one STA was always associated to the Extender when using the channel load aware mechanism, thus resulting in better use of network resources.

**FIGURE 17. Throughput and delay achieved in Testbed #2.**

Figure 17 shows the results obtained for each AP/Extender selection mechanism. For BT = 5 Mbps, BT = 37.5 Mbps, and BT = 50 Mbps, both the RSSI-based and the channel load aware mechanisms achieved 100% of the desired throughput. However, only the channel load aware mechanism was capable of reaching 100% for BT = 75 Mbps, with the RSSI-based mechanism reaching only 66.9 Mbps. Finally, although the network was always congested for BT = 100 Mbps, the channel load aware mechanism managed to boost the throughput from 49.22 Mbps to 87.18 Mbps.

In terms of delay, the channel load aware mechanism always had the minimum values. As a matter of example, in the worst case, with BT = 100 Mbps, the delay was equal to 130.24 ms and 37.34 ms for the RSSI-based and the channel load aware mechanisms, respectively.

In this experiment, we have shown that the channel load aware AP/Extender selection mechanism outperforms the RSSI-based one in Home WiFi scenarios in terms of throughput and delay. Furthermore, results also corroborate those obtained in previous
simulations (compiled in Table 6), in which the channel
_load aware mechanism is shown to keep more deployments_
uncongested.
**VII. THE FUTURE OF HOME WiFi NETWORKS WITH**
**MULTIPLE AP/EXTENDERS**
In recent years, the emergence of a plethora of new applications and services, in addition to the necessity of ubiquitous communication, has made Home WiFi networks more densely populated with wireless devices. Consequently, the traditional WiFi spectrum at the 2.4 GHz band has become scarce, and it has been necessary to extend the WiFi paradigm into new bands operating at 5 GHz and 6 GHz, with much higher resource availability.
Next generation WiFi amendments such as IEEE 802.11ax and IEEE 802.11be are taking advantage of these new bands of license-exempt spectrum to develop PHY/MAC enhancements that provide Home networks with higher capacity, lower delay, and higher reliability, thus expanding WiFi into next-generation applications from the audiovisual, health care, industrial, transport, and financial sectors, among others.
Nonetheless, regardless of the operating band, the increasing demand for wireless resources in terms of throughput, bandwidth, and longer connection periods makes it crucial to take into consideration the interplay not only with other devices from the same Home WiFi network, but also with overlapping networks when accessing the shared medium, including other AP/Extenders belonging to the same WLAN.
In this last case, the proliferation of WLAN management platforms, as discussed in Section II, may facilitate the coordination of the network, together with the help of some new features coming in the IEEE 802.11ax and IEEE 802.11be amendments, such as spatial reuse, OFDMA, and target wake time (TWT) solutions [57], including their cooperative multi-AP/Extender counterparts.
For WLANs with multiple AP/Extenders, there are still
many open challenges to properly design and implement
real-time load balancing schemes among AP/Extenders
when considering STA (and AP) mobility and traffic heterogeneity, including UL and DL traffic. Particularly, to create a potentially effective AP/Extender selection mechanism
adapted to the aforementioned conditions, its decision metric(s) should be enriched with new parameters describing
the instantaneous state of available AP/Extenders such as the
number of hops to the AP, the packet latency, the available
rate(s), the bit error rate (BER), or even the distance to the
targeted STA.
In this last regard, the IEEE 802.11az Task Group (TGaz)
aims at providing improved absolute and relative location,
tracking, and positioning of STAs by using fine timing measurement (FTM) instead of signal-strength techniques [58].
Specifically, FTM protocol enables a pair of WiFi cards
to estimate distance between them from round-trip timing
measurement of a given transmitted signal.
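For intuition, the round-trip computation underlying FTM-based ranging can be sketched as follows (the timestamp convention t1-t4 is the usual one for a single measurement exchange; this is an illustration of the idea, not the IEEE 802.11az frame exchange itself):

```python
# Sketch of FTM-style ranging: estimating distance from round-trip timing.
# t1: initiator transmits, t2: responder receives, t3: responder replies,
# t4: initiator receives the reply (all timestamps in seconds).
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def ftm_distance(t1: float, t2: float, t3: float, t4: float) -> float:
    """Estimate the distance (in m) from one measurement exchange."""
    rtt = (t4 - t1) - (t3 - t2)  # subtract the responder's processing time
    return SPEED_OF_LIGHT * rtt / 2.0
```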
Lastly, and in line with what was stated in Section II, there
is wide scope for the introduction of ML techniques into the
AP/Extender selection mechanism. Particularly, the weight(s)
of the decision metric(s) could be determined through ML,
either dynamically according to a real-time observation and
feedback process on the network state, or by applying the
values corresponding to the most similar case from a set of
predetermined patterns and scenarios.
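As one possible instantiation of this idea, the following epsilon-greedy sketch adapts the weight α of (1) from observed throughput feedback; it is entirely our own illustration, as the concrete ML technique, reward signal, and candidate set are left open by the discussion above:

```python
import random

# Epsilon-greedy adaptation of the alpha weight in (1) from observed
# throughput feedback. Purely illustrative; all names are ours.
ALPHAS = [0.0, 0.25, 0.5, 0.75, 1.0]
EPSILON = 0.1  # exploration probability

reward_sum = {a: 0.0 for a in ALPHAS}
reward_count = {a: 0 for a in ALPHAS}

def pick_alpha() -> float:
    """Explore a random alpha with probability EPSILON, else exploit the
    alpha with the best average observed throughput so far."""
    if random.random() < EPSILON or all(c == 0 for c in reward_count.values()):
        return random.choice(ALPHAS)
    return max(ALPHAS, key=lambda a: reward_sum[a] / max(reward_count[a], 1))

def update(alpha: float, observed_throughput_mbps: float) -> None:
    """Record the throughput obtained while operating with this alpha."""
    reward_sum[alpha] += observed_throughput_mbps
    reward_count[alpha] += 1
```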
**VIII. CONCLUSION**
The RSSI-based AP selection mechanism, used by default
in IEEE 802.11 WLANs, only relies on the signal strength
received from available APs. Therefore, in spite of its simplicity, it may result in an unbalanced load distribution between
AP/Extenders and, consequently, in a degradation of the overall WLAN performance.
Though several alternatives can be found in the literature
addressing this issue, the channel load aware AP/Extender
selection mechanism presented in this article stands out by
its feasibility, as it is fully based on the already existing
IEEE 802.11k/v amendments and does not require modifying the firmware of end devices, which facilitates real implementation.
The potential of the channel load aware mechanism is
shown through simulations and real testbed results. It is
able to outperform the traditional RSSI-based mechanism in
multi-channel scenarios consisting of multiple AP/Extenders
in terms of throughput, delay, and number of situations that
are satisfactorily solved, thus extending the WLAN operational range by, at least, 35%.
Furthermore, results from a real testbed show that the throughput is boosted by up to 77.12% with respect to the traditional RSSI-based mechanism in the considered setup. As for the measured delay, it is consistently lower with the channel load aware mechanism, with differences ranging from 1.398 to 92.895 ms.
**REFERENCES**
[1] T. Høiland-Jørgensen, M. Kazior, D. Täht, P. Hurtig, and A. Brunstrom,
‘‘Ending the anomaly: Achieving low latency and airtime fairness in wifi,’’
in Proc. USENIX Annu. Tech. Conf. (USENIX ATC), 2017, pp. 139–151.
[2] IEEE Standard for Information Technology—Telecommunications and
_Information Exchange Between Systems Local and Metropolitan Area_
_Network—Specific Requirements—Part 11: Wireless LAN Medium Access_
_Control (MAC) and Physical Layer (PHY) Specifications—Amendment 4:_
_Enhancements for Very High Throughput for Operation in Bands Below 6_
_ghz, document 802.11ac-2013 (Amendment to IEEE Std 802.11-2012, as_
amended by IEEE Std 802.11ae-2012, IEEE Std 802.11aa-2012, and IEEE
Std 802.11ad-2012), Dec. 2013, pp. 1–425.
[3] B. Bellalta, ‘‘IEEE 802.11ax: High-efficiency WLANS,’’ IEEE Wireless
_Commun., vol. 23, no. 1, pp. 38–46, Feb. 2016._
[4] E. Khorov, A. Kiryanov, A. Lyakhov, and G. Bianchi, ‘‘A tutorial on IEEE
802.11ax high efficiency WLANs,’’ IEEE Commun. Surveys Tuts., vol. 21,
no. 1, pp. 197–216, 1st Quart., 2019.
[5] D. Lopez-Perez, A. Garcia-Rodriguez, L. Galati-Giordano, M. Kasslin,
and K. Doppler, ‘‘IEEE 802.11be extremely high throughput: The next
generation of Wi-Fi technology beyond 802.11ax,’’ IEEE Commun. Mag.,
vol. 57, no. 9, pp. 113–119, Sep. 2019.
[6] T. Adame, M. Carrascosa, and B. Bellalta, ‘‘Time-sensitive networking in IEEE 802.11be: On the way to low-latency WiFi 7,’’ 2019,
_arXiv:1912.06086. [Online]. Available: http://arxiv.org/abs/1912.06086_
[7] L.-H. Yen, T.-T. Yeh, and K.-H. Chi, ‘‘Load balancing in IEEE 802.11
networks,’’ IEEE Internet Comput., vol. 13, no. 1, pp. 56–64, Jan. 2009.
[8] G. R. Hiertz, D. Denteneer, S. Max, R. Taori, J. Cardona, L. Berlemann,
and B. Walke, ‘‘IEEE 802.11s: The WLAN mesh standard,’’ IEEE Wireless
_Commun., vol. 17, no. 1, pp. 104–111, 2010._
[9] S. M. S. Bari, F. Anwar, and M. H. Masud, ‘‘Performance study of
hybrid wireless mesh protocol (HWMP) for IEEE 802.11s WLAN mesh
networks,’’ in Proc. Int. Conf. Comput. Commun. Eng. (ICCCE), Jul. 2012,
pp. 712–716.
[10] S. Waharte and R. Boutaba, ‘‘Tree-based wireless mesh network architecture: Topology analysis,’’ in Proc. 1st Int. Workshop Wireless Mesh Netw.
_(MeshNets), Budapest, Hungary, 2005, pp. 1–11._
[11] M. Herlich and S. Yamada, ‘‘Optimal distance of multi-hop 802.11 WiFi
relays,’’ in Proc. IEICE Soc. Conf., Sep. 2014, pp. 1–2.
[12] S. A. Hassan, ‘‘Optimal throughput analysis for indoor multi-hop wireless
networks in IEEE 802.11n,’’ in Proc. WAMICON, Apr. 2013, pp. 1–5.
[13] R. Atawia and H. Gacanin, ‘‘Self-deployment of future indoor Wi-Fi networks: An artificial intelligence approach,’’ in Proc. GLOBECOM IEEE
_Global Commun. Conf., Dec. 2017, pp. 1–6._
[14] Z. M. Fadlullah, Y. Kawamoto, H. Nishiyama, N. Kato, N. Egashira,
K. Yano, and T. Kumagai, ‘‘Multi-hop wireless transmission in multiband WLAN systems: Proposal and future perspective,’’ IEEE Wireless
_Commun., vol. 26, no. 1, pp. 108–113, Feb. 2019._
[15] N. Egashira, K. Yano, S. Tsukamoto, J. Webber, and T. Kumagai, ‘‘Low
latency relay processing scheme for WLAN systems employing multiband
simultaneous transmission,’’ in Proc. IEEE Wireless Commun. Netw. Conf.
_(WCNC), Mar. 2017, pp. 1–6._
[16] IEEE Standard for Information Technology—Telecommunications and
_Information Exchange Between Systems Local and Metropolitan Area_
_Networks—Specific Requirements—Part 11: Wireless LAN Medium Access_
_Control (MAC) and Physical Layer (PHY) Specifications, Standard 802.11-_
2016 (Revision of IEEE Std 802.11-2012), Dec. 2016, pp. 1–3534.
[17] X. Chen and D. Qiao, ‘‘HaND: Fast handoff with null dwell time for IEEE
802.11 networks,’’ in Proc. IEEE INFOCOM, Mar. 2010, pp. 1–9.
[18] T. Choi, Y. Chon, and H. Cha, ‘‘Energy-efficient WiFi scanning for localization,’’ Pervas. Mobile Comput., vol. 37, pp. 124–138, Jun. 2017.
[19] A. Mishra, M. Shin, and W. Arbaugh, ‘‘An empirical analysis of the IEEE
802.11 MAC layer handoff process,’’ ACM SIGCOMM Comput. Commun.
_Rev., vol. 33, no. 2, pp. 93–102, Apr. 2003._
[20] H. Velayos and G. Karlsson, ‘‘Techniques to reduce the IEEE
802.11b handoff time,’’ in Proc. IEEE Int. Conf. Commun., Jun. 2004,
pp. 3844–3848.
[21] S. Lee, M. Kim, S. Kang, K. Lee, and I. Jung, ‘‘Smart scanning for mobile
devices in WLANs,’’ in Proc. IEEE Int. Conf. Commun. (ICC), Jun. 2012,
pp. 4960–4964.
[22] Google LLC. (2019). Google WiFi Main Website. Accessed: Jan. 29, 2021.
[Online]. Available: https://store.google.com/product/google_wifi
[23] Linksys. (2019). Linksys Velop Main Website. Accessed: Jan. 29, 2019.
[Online]. Available: https://www.linksys.com/us/velop/
[24] J. B. Ernst, S. Kremer, and J. J. P. C. Rodrigues, ‘‘A utility based
access point selection method for IEEE 802.11 wireless networks with
enhanced quality of experience,’’ in Proc. IEEE Int. Conf. Commun. (ICC),
Jun. 2014, pp. 2363–2368.
[25] A. J. Nicholson, Y. Chawathe, M. Y. Chen, B. D. Noble, and D. Wetherall,
‘‘Improved access point selection,’’ in Proc. 4th Int. Conf. Mobile Syst.,
_Appl. Services - MobiSys, 2006, pp. 233–245._
[26] L.-H. Yen, J.-J. Li, and C.-M. Lin, ‘‘Stability and fairness of AP selection
games in IEEE 802.11 access networks,’’ IEEE Trans. Veh. Technol.,
vol. 60, no. 3, pp. 1150–1160, Mar. 2011.
[27] Y. Fukuda and Y. Oie, ‘‘Decentralized access point selection architecture for wireless LANs,’’ IEICE Trans. Commun., vol. E90-B, no. 9,
pp. 2513–2523, Sep. 2007.
[28] H. Gong, K. Nahm, and J. Kim, ‘‘Distributed fair access point selection for
multi-rate IEEE 802.11 WLANs,’’ in Proc. 5th IEEE Consum. Commun.
_Netw. Conf., Jan. 2008, pp. 528–532._
[29] F. Xu, C. C. Tan, Q. Li, G. Yan, and J. Wu, ‘‘Designing a practical access
point association protocol,’’ in Proc. IEEE INFOCOM, Mar. 2010, pp. 1–9.
[30] A. Raschella, F. Bouhafs, M. Seyedebrahimi, M. Mackay, and Q. Shi,
‘‘A centralized framework for smart access point selection based on the
fittingness factor,’’ in Proc. 23rd Int. Conf. Telecommun. (ICT), May 2016,
pp. 1–5.
[31] M. Abusubaih and A. Wolisz, ‘‘An optimal station association policy for
multi-rate ieee 802.11 wireless lans,’’ in Proc. 10th ACM Symp. Modeling,
_Anal., Simulation Wireless Mobile Syst. - MSWiM, 2007, pp. 117–123._
[32] L. Du, Y. Bai, and L. Chen, ‘‘Access point selection strategy for large-scale
wireless local area networks,’’ in Proc. IEEE Wireless Commun. Netw.
_Conf., Mar. 2007, pp. 2161–2166._
[33] M. Abusubaih, J. Gross, S. Wiethoelter, and A. Wolisz, ‘‘On access point
selection in IEEE 802.11 wireless local area networks,’’ in Proc. 31st IEEE
_Conf. Local Comput. Netw., Nov. 2006, pp. 879–886._
[34] S. Vasudevan, K. Papagiannaki, C. Diot, J. Kurose, and D. Towsley,
‘‘Facilitating access point selection in IEEE 802.11 wireless networks,’’ in Proc. 5th ACM SIGCOMM Conf. Internet Meas. IMC, 2005,
p. 26.
[35] K. Mittal, E. M. Belding, and S. Suri, ‘‘A game-theoretic analysis of wireless access point selection by mobile users,’’ Comput. Commun., vol. 31,
no. 10, pp. 2049–2062, Jun. 2008.
[36] L. Luo, D. Raychaudhuri, H. Liu, M. Wu, and D. Li, ‘‘Improving
end-to-end performance of wireless mesh networks through smart association,’’ in Proc. IEEE Wireless Commun. Netw. Conf., Mar. 2008,
pp. 2087–2092.
[37] M. Abusubaih and A. Wolisz, ‘‘Interference-aware decentralized access
point selection policy for multi-rate IEEE 802.11 wireless LANs,’’ in Proc.
_IEEE 19th Int. Symp. Pers., Indoor Mobile Radio Commun., Sep. 2008,_
pp. 1–6.
[38] K. Sundaresan and K. Papagiannaki, ‘‘The need for cross-layer information in access point selection algorithms,’’ in Proc. 6th ACM SIGCOMM
_Internet Meas. - IMC, 2006, pp. 257–262._
[39] J. Pang, B. Greenstein, M. Kaminsky, D. McCoy, and S. Seshan, ‘‘Wifi-reports: Improving wireless network selection with collaboration,’’ IEEE
_Trans. Mobile Comput., vol. 9, no. 12, pp. 1713–1731, Dec. 2010._
[40] B. Bojovic, N. Baldo, and P. Dini, ‘‘A neural network based cognitive
engine for IEEE 802.11 WLAN access point selection,’’ in Proc. IEEE
_Consum. Commun. Netw. Conf. (CCNC), Jan. 2012, pp. 864–868._
[41] M. Carrascosa and B. Bellalta, ‘‘Decentralized AP selection using multi-armed bandits: Opportunistic ε-Greedy with stickiness,’’ in Proc. IEEE
_Symp. Comput. Commun. (ISCC), Jun. 2019, pp. 1–7._
[42] M. Carrascosa and B. Bellalta, ‘‘Multi-armed bandits for decentralized
AP selection in enterprise WLANs,’’ 2020, arXiv:2001.00392. [Online].
Available: http://arxiv.org/abs/2001.00392
[43] L. Chen, ‘‘A distributed access point selection algorithm based on no-regret
learning for wireless access networks,’’ in Proc. IEEE 71st Veh. Technol.
_Conf., May 2010, pp. 1–5._
[44] Nighthawk Mesh WiFi 6 System, Netgear, Inc, San Jose, CA, USA, 2019.
[45] Netgear Support. (2018). How do I use Smart WiFi or Auto-Radio Resource
_Management (RRM) in Insight. Accessed: Jan. 29, 2021. [Online]. Avail-_
able: https://kb.netgear.com/000053883
[46] Aruba AirMatch Technology, Aruba Networks, Inc., Santa Clara, CA,
USA, 2018.
[47] What Is Aruba ClientMatch?, Aruba Networks, Inc., Santa Clara, CA, USA, 2018.
[48] J. A. G. Garrido and J. D. Alonso, ‘‘System and method for decentralized
control of wireless networks,’’ U.S. Patent 10 397 932, Aug. 27, 2019.
[49] M. I. Sanchez and A. Boukerche, ‘‘On IEEE 802.11K/R/V amendments:
Do they have a real impact?’’ IEEE Wireless Commun., vol. 23, no. 1,
pp. 48–55, Feb. 2016.
[50] IEEE Standard for Information technology—Local and metropolitan area
_networks—Specific requirements—Part 11: Wireless LAN Medium Access_
_Control (MAC) and Physical Layer (PHY) Specifications Amendment 1:_
_Radio Resource Measurement of Wireless LANs, Standard 802.11k-2008,_
(Amendment to IEEE Std 802.11-2007), Jun. 2008, pp. 1–244.
[51] E. A. Panaousis, P. A. Frangoudis, C. N. Ververidis, and G. C. Polyzos, ‘‘Optimizing the channel load reporting process in IEEE 802.11k-enabled WLANs,’’ in Proc. 16th IEEE Workshop Local Metrop. Area
_Netw., Sep. 2008, pp. 37–42._
[52] IEEE Standard for Information technology—Local and metropolitan
_area networks—Specific requirements—Part 11: Wireless LAN Medium_
_Access Control (MAC) and Physical Layer (PHY) Specifications Amend-_
_ment 8: IEEE 802.11 Wireless Network Management, Standard 802.11v-_
2011, (Amendment to IEEE Std 802.11-2007 as amended by IEEE Std
802.11k-2008, IEEE Std 802.11r-2008, IEEE Std 802.11y-2008, IEEE
Std 802.11w-2009, IEEE Std 802.11n-2009, IEEE Std 802.11p-2010, and
IEEE Std 802.11z-2010), Feb. 2011, pp. 1–433.
[53] S. Bangolae, C. Bell, and E. Qi, ‘‘Performance study of fast BSS transition using IEEE 802.11r,’’ in Proc. Int. Conf. Commun. Mobile Comput.
_IWCMC, Jul. 2006, pp. 737–742._
[54] C. Thorpe, S. Murphy, and L. Murphy, ‘‘Analysis of variation in IEEE
802.11k channel load measurements for neighbouring WLAN systems,’’
in Proc. 17th ICT Mobile Wireless Commun. Summit (ICT-MobileSummit),
2008. [Online]. Available: https://e-archivo.uc3m.es/handle/10016/14054
[55] Propagation Data and Prediction Methods for the Planning of Indoor
_Radio Communication Systems and Radio Local Area Networks in the Fre-_
_quency Range 900 MHz to 100 GHz, document ITU-R, Recommendation_
P.1238-7, P Series. Radiowave propagation, 2012.
[56] B. Bellalta, M. Oliver, M. Meo, and M. Guerrero, ‘‘A simple model of the
IEEE 802.11 MAC protocol with heterogeneous traffic flows,’’ in Proc.
_EUROCON Int. Conf. Comput. Tool, Nov. 2005, pp. 1830–1833._
[57] M. Nurchis and B. Bellalta, ‘‘Target wake time: Scheduled access in IEEE
802.11ax WLANs,’’ IEEE Wireless Commun., vol. 26, no. 2, pp. 142–150,
Apr. 2019.
[58] C.-C. Wang, IEEE P802.11, Wireless LANs, Specification Framework for TGaz. Accessed: Jan. 29, 2021. [Online]. Available: https://mentor.ieee.org/802.11/dcn/17/11-17-0462-16-00az-11-az-tg-sfd.doc
TONI ADAME received the M.Sc. degree in
telecommunications engineering from the Universitat Politecnica de Catalunya (UPC), in 2009.
He is currently a Senior Researcher with the
Department of Information and Communication
Technologies (DTIC), Universitat Pompeu Fabra
(UPF), responsible for the design of technical solutions in research and development projects based on heterogeneous wireless technologies. He also
collaborates as an Associate Lecturer in several IT
degrees with UPF and Universitat Oberta de Catalunya (UOC).
MARC CARRASCOSA received the B.Sc. degree
in telematics engineering and the M.Sc. degree in
intelligent and interactive systems from the Universitat Pompeu Fabra (UPF), in 2018 and 2019,
respectively. He is currently pursuing the Ph.D.
degree with the Wireless Networking Research
Group, Department of Information and Communication Technologies (DTIC), UPF. His research
interest includes performance optimization in
wireless networks.
BORIS BELLALTA is currently an Associate Professor with the Department of Information and
Communication Technologies (DTIC), Universitat
Pompeu Fabra (UPF), where he is also the Head of
the Wireless Networking Research Group.
IVÁN PRETEL received the M.Sc. degree in development and integration of software solutions from
the University of Deusto, in 2010, and the Ph.D.
degree in computer engineering and telecommunications, in 2015. He is currently a Research Engineer with Fon Labs. In 2008, he began his research
career with the MORElab Research Group, Deusto
Foundation, where he started as a Research Intern
in the mobile services area participating in more
than 20 international and national research projects
related to system architectures, human–computer interaction, and societal
challenges. He is also involved in research projects related to data science and
5G technologies, such as the 5GENESIS H2020 project. He also collaborates
in several master degrees as an Associate Lecturer with the University of
Deusto, giving several courses on mobile platforms, big data, and business
intelligence. His research interests include data science and advanced mobile
services.
IÑAKI ETXEBARRIA received the degree in
telecommunications engineering from the Escuela
Superior de Ingeniería de Bilbao (ETSI). He has developed his professional career in private business; before Fon, he worked at Erictel M2M on IoT, embedded equipment development, and fleet management software solutions.
Since 2015, he has been working with Fon Labs,
where he has specialized in communication networks, specifically WiFi, developing innovation
projects on product and technology. He is currently a Research and Development Engineer with Fon Labs. He has worked on several projects in international consortiums integrating WiFi in 5G networks. He also combines
engineering work with the management of the Fon Labs team.
-----
Volume/Cilt: 6, Issue/Sayı: 4 Year/Yıl: 2023, pp. 243-253
E-ISSN: 2636-7718
[URL: https://journals.gen.tr/arts](https://journals.gen.tr/arts)
DOI: https://doi.org/10.31566/arts.2163
Received / Geliş: 08/08/2023
Acccepted / Kabul: 27/09/2023
RESEARCH ARTICLE / ARAŞTIRMA MAKALESİ
# Paradigm shift in the music industry: Adaptation of blockchain technology and its transformative effects
## Betül Yarar Koçer
Assist. Prof. Dr., Mersin University, State Conservatory, Department of Music, Türkiye, e-mail: betulyarar@mersin.edu.tr
**Abstract**
The music industry is undergoing a profound transformation thanks to blockchain technology. This article extensively
examines how the core components of music – production, distribution, performance, and presentation – are undergoing
radical changes through the integration of blockchain technology. The traditional music industry faces significant
challenges, particularly in vital areas like copyright management, music distribution, and artist compensation.
These challenges have become even more complex with the digitization of music and the rise of online platforms.
However, blockchain technology, with its decentralized and transparent structure, has the potential to overcome
these obstacles. This technology takes important steps in addressing disputes related to copyright by enhancing the
traceability and verifiability of music works throughout their lifecycle, thereby contributing to fairer compensation
for artists. Moreover, this article also delves into other intersecting domains related to the music industry, focusing
on safeguarding intellectual property in music and presenting innovative solutions to the intricate music economy.
Relevant data gathered through qualitative research methods is systematically presented to comprehensively explore
the potential role of blockchain technology in the music industry’s future. This exploratory analysis also investigates
blockchain-supported platforms, providing an in-depth examination of their current development status and business
models. The article places special emphasis on fundamental concepts such as copyright, ownership of artistic works,
cultural heritage, and the role of blockchain technology in shaping the music industry, artists, and the ongoing digital
transformation. In this rapidly evolving dynamic process, the transformative role of blockchain technology in the music
industry and its potential must be continuously monitored, serving as a foundation for future-oriented initiatives. This
comprehensive approach reflects the concerted effort to understand the effects of blockchain technology, which is
shaping the trajectory of the music industry’s future, from a broader perspective.
**Keywords: Blockchain, Music Industry, Digital Music Distribution, Licensing, Copyright**
**Citation/Atıf: YARAR KOÇER, B. (2023). Paradigm shift in the music industry: Adaptation of blockchain technology and its transformative**
effects. Journal of Arts. 6(4): 243-253, DOI: 10.31566/arts.2163
**Corresponding Author/ Sorumlu Yazar:**
Betül Yarar Koçer
E-mail: betulyarar@mersin.edu.tr
Bu çalışma, Creative Commons Atıf 4.0 Uluslararası
Lisansı ile lisanslanmıştır.
This work is licensed under a Creative Commons
Attribution 4.0 International License.
## 1. INTRODUCTION
The advancement of technology has brought
about a radical transformation in the distribution
and recording methods of music. Recorded
sounds have enabled musical compositions
to reach wider audiences, expanding the
boundaries of cultural sharing. However, this
transformation has also given rise to concepts
such as copyright and intellectual property.
Traditionally, ownership of music compositions,
copyright regulations, and protection methods
have been shaped by legal frameworks aimed at
controlling the usage of musicians’ and artists’
works.
Technological progress, especially since the
mid-20th century, has significantly widened
the scope of cultural sharing through the use
of recorded sounds, making music accessible to
broader audiences. Music, previously limited to
live performances, has become easily accessible
in recorded formats. Cassettes, records, and then digital formats made it easy for everyone to listen to music in the setting of their choice.
Yet, this technological shift has also brought
forth issues related to copyright and intellectual
property. The rise of digital formats has
increased the copyability and shareability of
music compositions, leading to unauthorized
use, duplication, and distribution of artists’
works. Copyright and intellectual property laws
have been developed to protect the control and
income of musicians by adapting to the changing
dynamics of the music industry and aiming to
safeguard artists’ creative efforts.
While the music industry is rapidly adapting
to the effects of digital transformation, the
traditional processes of music distribution and
copyright management remain complex and
contentious. Lack of trust can arise in areas
such as revenue sharing and copyright tracking
among artists, producers, songwriters, and
other stakeholders. The challenge of generating
reasonable income from music production
has become increasingly difficult, driven by
the surge in inter-stakeholder content sharing
and the distribution of intellectual property
rights. Currently, the involvement of numerous
intermediaries in the distribution stage has led
to a chaotic process, contributing to a significant
reduction in artists’ income due to low sales and
inadequate royalty payments.
“New technologies can radically simplify
methods of identifying and compensating music
rights owners, enabling sustainable business
models for artists, entrepreneurs, and music
enterprises.” (Panay, 2016). In this context,
blockchain technology emerges as a potential
solution. Mougayar (2016) asserts that blockchain
technology is as crucial as the World Wide Web.
The potential impacts of blockchain technology
on the music industry encompass a wide range.
In the transformation of music, it is observed that
blockchain technology could play a significant
role in music production, distribution, and
consumption. Particularly, by reducing the
number of intermediaries and enabling
instant payments, it can address complex
payment issues within the music industry.
Additionally, blockchain technology has the
capacity to enhance copyright management and
traceability of music compositions by providing
a decentralized structure. It can assist artists
in gaining more control and transparency
by enabling digital ownership, tracking, and
sharing of music compositions.
The historical and cultural evolution of music,
combined with technological advancements and
new business models, is shaping the future of the
music industry. The impact of this technology
on the future of the music industry is of great
importance in terms of preserving cultural
heritage and valuing artists’ efforts.
This article aims to delve deeper into
understanding the potential impact of blockchain
technology in the field of music and to discuss
the transformation in the industry. To achieve
this goal, after examining the current state and
dynamics of the music industry, a comparative
analysis of global blockchain music companies
will be conducted. The analysis results will
provide a discussion on the potential contribution
of blockchain technology to the music industry
and its possible effects.
## 2. METHOD
This article employs a qualitative research method
to examine the potential and transformative
effects of blockchain technology on the music
industry. Specifically, the focus has been on
how blockchain can impact the music industry
and how new business models can be defined
through technology. “Qualitative research can
be defined as a series of interpretive techniques
that attempt to explain, analyze, and translate
concepts and phenomena rather than record
their frequency in society” (Van Maanen, 1983).
For this purpose, a qualitative approach has been
adopted since it deals with “how” questions.
The research initiated with a comprehensive
literature review. The literature review laid the
foundation for data collection and analysis by
providing guiding frameworks for the research
(Vom Brocke et al., 2015). Relevant sources were
selected from platforms such as Scopus and
Google Scholar, and an overview was obtained
by skimming through identified texts. Online
materials like social media content, blockchain
platforms, and industry reports were also
utilized to gain a comprehensive understanding.
Additionally, the snowball sampling method was employed as a time-efficient way to find relevant literature.
In the initial phase, the supply chain processes
and relationships of the traditional music
industry were examined in detail through an
exploratory analysis. This analytical approach
was utilized to comprehend how the chain
operates and to identify challenges within these
processes. The same analytical method was
applied in the exploration of new music platforms
supported by blockchain technology. Prominent
blockchain-based platforms like Resonate,
Opus, Musicoin, and Audius were investigated
at this point. The functionality, purpose of use,
adopted practices, and how they are used were
systematically explored using a comprehensive
content analysis method. Content analysis
proved to be a critical tool in shedding light on
the unique features and usage patterns of each
platform. Through this analytical approach,
the advantages, challenges, functionality, and
purpose of use of each platform were discussed
in detail.
In this context, this study comprehensively
addresses the impact of blockchain technology on
the digital transformation of the music industry.
Comparing the traditional music industry’s
supply chain model with the potential offered
by blockchain technology provides valuable
perspectives for the industry’s future evolution.
This analytical framework aims to contribute to a
broader understanding of the transformation of
the music industry within a larger context.
## 3. DEVELOPMENT OF DIGITAL MUSIC AND CHALLENGES IN THE EVOLUTION OF THE INTERNET
The evolution of the internet has revolutionized
the music industry and brought significant
changes to how music is created, distributed,
and consumed. Digital music, defined in its
fundamental sense, is a visual-auditory medium
stored in digital format that can be transmitted
over the internet and wireless networks. When
compared to traditional music, digital music is not
only low-cost, highly efficient, and personalized,
but also caters to the consumption needs of
consumers in the era of new technologies.
The internet has completely transformed music
distribution today by enabling easier and
faster access to music. Instead of traditional
physical formats like vinyl, cassette, or CDs,
the internet allows music to be downloaded
digitally or streamed online through streaming
services, making music more accessible. Digital
technologies and the internet have democratized
the process of music creation and recording.
Digital audio workstations (DAWs) and various
software tools allow musicians to create
professional-quality music from their homes.
Additionally, online collaboration platforms
facilitate musicians’ collaboration from around
the world. Moreover, new avenues have been
provided for artists and record labels to promote
and market their music. Through social media,
music videos, online radio, podcasts, music
platforms like Spotify and Apple Music, and
other digital platforms, artists can reach a global
audience. Music listeners can now interact with
their favorite artists through social media and
listen to artists live through online concerts and
live streams.
However, this transition has come with various
challenges and impacts. The launch of the
iTunes Music Store by Apple in April 2003
is considered a significant milestone in the
digital music transformation. This platform
reduced the cost of downloading a single song
to $0.99 and an album to $9.99 through iTunes
4.0, providing a 33% discount compared to
traditional CD formats (Dutra et al., 2018). This
move encouraged the consumption of digital
music and marked a significant transformation
in the music industry. The price reduction made
music more accessible on digital platforms and
influenced music consumers’ habits.
Digital music and the internet have created new
challenges and opportunities in the realms of
copyright and licensing, necessitating a change
in how artists and rights holders manage the
use of their music online and how they earn
income from it. This process of transformation
has brought forth numerous new prospects alongside its challenges. For instance, issues
like piracy and copyright infringements have
emerged as significant problems affecting both
the music industry and artists.
In recent years, the rise in popularity of music
streaming services has somewhat mitigated
piracy, as these services often offer users access
to a vast music library at a low cost or for
free. Nevertheless, piracy continues to pose a
significant challenge for the music industry.
To address these and other issues, the music
industry and technology developers continually
explore new solutions and models.
Furthermore, many artists contend that the
revenue derived from music streaming platforms
is unfair. Notably, musicians, including
influential figures like David Bowie, have been
at the forefront of advocating for and actively
engaging in discussions on this transformation
(For more detailed information, refer to Pareles,
2002). However, during the complex transitional
period spanning from 2000 to 2015, public
discourse paid limited attention to how musicians
would generate income in this emerging digital
age, the funding sources available to them,
and the means by which they could sustain
their music careers. Discussions during this
period primarily revolved around speculations
regarding new opportunities and changes, with
relatively little focus on the income-generation challenges faced by musicians (Hesmondhalgh, 2021, p. 3594).

**Table 1. Global Recorded Music Industry Revenues 1999 - 2022 (Billion US Dollars) (ifpi.org)**
The global music industry has continued to grow in recent years. The International Federation of the Phonographic Industry (IFPI), which tracks industry growth based on data from record companies, reported total revenue of $26.2 billion in 2022.
that while revenue from physical formats (such
as vinyl and CD revenue) has decreased over
the past decade, digital revenue has increased.
Furthermore, there is an observed increase in
online streaming, while download rates are
declining.
The revenue from digitally sold music has been
unequally distributed among stakeholders in
the music industry. In broad terms, the revenue
from music streams is divided as follows:
30% to on-demand streaming services, 60% to
record companies and publishers, and 10% to
songwriters, artists, and music groups. According
to analyses, Apple Music has paid unsigned
artists $0.0064 and signed artists $0.0073, while
Spotify has paid $0.007 and $0.0044 respectively.
In 2017, in the United States, for an artist to
earn the minimum wage of $1,472, their songs
would need to be streamed around 230,000
times on Apple and 380,000 times on Spotify.
For YouTube, considering an artist receives
only $0.003 per stream, their content would
need to be streamed around 4.2 million times
(Sanchez, 2017). Ensuring fair and equitable
distribution of revenues among rights holders,
particularly among stakeholders, is critical for
the sustainability of the industry and to support
artists. At this point, collaboration among all
stakeholders in the industry is necessary to
explore appropriate solutions.
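A quick arithmetic check of the per-stream figures quoted above is straightforward (streams needed = target income / payout per stream); note that only the Apple Music unsigned-artist rate exactly reproduces the quoted total, while the other quoted totals depend on which per-stream rate is assumed.

```python
# Sanity check of the quoted per-stream payouts: the streams needed to
# reach a target income are simply target / payout_per_stream.
def streams_needed(target_usd, per_stream_usd):
    return round(target_usd / per_stream_usd)

print(streams_needed(1472, 0.0064))  # 230000, matching the Apple Music figure
print(streams_needed(1472, 0.0044))  # ~334545 at Spotify's signed-artist rate
```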
## 4. BLOCKCHAIN TECHNOLOGY
“Blockchain is a shared, trusted, public ledger
that everyone can inspect, but which no single
user controls. It operates by consensus, and once
recorded, the data in any given block cannot
be altered retroactively.” (BlockchainHub,
2023). Since its introduction through the Bitcoin
whitepaper published by an anonymous
individual or group using the pseudonym
Satoshi Nakamoto in 2008, blockchain
technology has come a long way (Nakamoto,
2008). A blockchain consists of a virtual chain
of blocks, each with a unique identifier (referred
to as a hash) and containing information
such as financial transactions, contracts, or
other documents. A blockchain operates on a
decentralized network of computers (referred to
as nodes) collectively verifying the information
entering a block. Reaching consensus on which information should be included in a block is necessary to minimize the chance of accepting incorrect information; the nodes can collectively reject an invalid block without the need for a central entity (Peters and Panayi, 2016). The database is distributed based on the principle that each copy of new data is sent not just to one computer but to all users in the chain or system. To change any bit of the database, an attacker would need to alter at least 51% of the copies held across the system, and each copy would need to include all previous interactions with that data (Nguyen & Dang, 2018, pp. 483-484).
In essence, no singular entity owns a blockchain,
making it immutable and devoid of a single
point of vulnerability for those attempting to
hack or otherwise tamper with the data in the
blockchain ledger. For this reason, blockchain
is the first technology to enable the transfer of
digital ownership in a decentralized and trustless
manner (Iinuma, 2018). Creating a blockchain
transaction involves the following steps:
defining the transaction and providing access
to the sender network, including the recipient’s
address, transaction value, and digital signature.
Nodes verify the user’s digital signature through
encryption. The verified transaction is added to
a pool. Pending transactions are combined into a
block, creating an updated record maintained by
a node. The block is accepted by the network’s
verification nodes and added to the blockchain.
This process is typically completed within 2 to 10
seconds (Gheorghe et al., 2017, p. 218).
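The hash-linking just described can be made concrete with a minimal, self-contained Python sketch (a toy illustration, not a production ledger): each block stores the hash of its predecessor, so tampering with any earlier record invalidates every later block.

```python
# Toy hash-linked chain: altering an earlier block breaks the link that
# later blocks store, making retroactive edits detectable.
import hashlib, json, time

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def new_block(prev_hash, transactions):
    block = {"prev_hash": prev_hash, "timestamp": time.time(),
             "transactions": transactions}
    block["hash"] = block_hash({k: v for k, v in block.items() if k != "hash"})
    return block

genesis = new_block("0" * 64, [])
b1 = new_block(genesis["hash"], [{"from": "listener", "to": "artist",
                                  "amount_usd": 0.01}])

# Tampering with the genesis block changes its hash, so b1's stored link
# no longer matches and the chain is detectably broken.
genesis["transactions"].append({"forged": True})
assert b1["prev_hash"] != block_hash({k: v for k, v in genesis.items()
                                      if k != "hash"})
```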
Blockchain provides high security and flexibility through its highly interactive, distributed design, successfully eliminating third parties and rendering processes more transparent, democratic, decentralized, cost-effective, and secure. This technology has various
applications, including smart contracts, supply
chain traceability, digital identity verification,
and many more. Blockchain technology offers
transformation potential across numerous
industries through these and other applications.
Due to its decentralized and transparent nature,
blockchain can offer a reliable framework
for copyright management. Through smart
contracts, automatic and transparent revenue
sharing can occur between copyright holders
and licensees. Additionally, copyright tracking
and monitoring processes can be automated,
reducing copyright infringements and disputes.
In the field of music distribution, blockchain
technology can enable artists to directly reach
listeners and eliminate the costs of traditional
intermediaries. This could create a fairer and
more sustainable revenue model, particularly for
independent artists.
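As a hedged illustration of the revenue-sharing logic such a smart contract might encode, the sketch below registers fixed shares for a track and applies them automatically to each incoming payment; the role names and percentages are invented for the example.

```python
# Hypothetical revenue-split logic: fixed shares applied automatically
# to every incoming payment, as a smart contract might enforce on-chain.
SHARES = {"songwriter": 0.50, "performer": 0.35, "producer": 0.15}

def split_payment(amount_usd, shares=SHARES):
    assert abs(sum(shares.values()) - 1.0) < 1e-9, "shares must sum to 1"
    return {holder: round(amount_usd * share, 6)
            for holder, share in shares.items()}

print(split_payment(0.0064))  # one per-stream payout divided among holders
```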
## 5. USAGE AND PRACTICE OF BLOCKCHAIN IN MUSIC
In the digital age, music is considered data, and
metadata is the data about that data, containing
information about the music itself. Metadata
embedded in each recorded music track can
include usage conditions and contact details
of copyright holders, making it easier to locate
owners of a recorded music piece and acquire
licenses. The concept is to attribute a purpose
to music, allowing it to act as if it were alive.
Gradually placing copyright data onto the
blockchain could eventually lead to the creation
of a comprehensive copyright database for music
(O’Dair, 2016).
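A minimal sketch of what such a registry entry could look like follows, pairing a content fingerprint (a hash of the audio bytes) with the metadata fields described above; every field name and value here is a hypothetical placeholder, and an entry of this kind could be recorded in a hash-linked block like the one sketched earlier.

```python
# Hypothetical registry entry: a content fingerprint plus copyright
# metadata, suitable for recording on a blockchain-style ledger.
import hashlib, json

def register_track(audio_bytes, metadata):
    return {"content_hash": hashlib.sha256(audio_bytes).hexdigest(),
            "metadata": metadata}

entry = register_track(
    b"...raw FLAC or PCM bytes...",            # placeholder payload
    {"title": "Example Song",                  # all values invented
     "rights_holder": "Jane Artist",
     "license_contact": "rights@example.com",
     "usage_terms": "streaming-only"})
print(json.dumps(entry, indent=2))
```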
In the contemporary music landscape, the fusion
of blockchain technology, smart contracts, and
cryptocurrency is forming the foundation of a
new music ecosystem that reflects inclusivity,
integrity, transparency, and fair compensation
ethics. Producers and consumers of digital music
content are deciding how to share their content
in the online world. On these new-generation
platforms, artists can easily upload their music
and associated content to a centralized online
location, making it accessible to everyone.
Rights, ownership, and usage of the content
shift the focus from traditional music company
or distributor policies to a technically artist-centric model built on blockchain architecture. This model enables artists to offer their work for listening, sharing, remixing, or purchase directly to audiences (Tapscott & Tapscott, 2016, pp. 287-290).
The music industry ecosystem is a centralized
database network. These databases connect
rights and licensing flows while providing a
revenue stream. The DotBlockchain architecture is built on blockchain technology and aims to develop the future music ecosystem by utilizing a balanced ring architecture. This architecture encompasses all participants, from traditional labels and publishers to performing rights organizations and composition editors. Collaborating partners can store their data in a metadata chain by combining their individual databases. This chain resides within a public data block. The DotBlockchain architecture works compatibly with existing media formats, maintaining data safety and accuracy (Gheorghe et al., 2017, pp. 2022-2024).
However, integrating blockchain technology
into the music industry could face challenges
such as standardizing copyright management
and licensing processes and creating a legal
framework. Additionally, collaboration and
data sharing among all stakeholders need to be
encouraged.
The various roles and applications of blockchain
technology in the music industry include:
**Copyrights and Licensing:** Blockchain can
be used to verify and track ownership of a
song or album. This facilitates the verification
of copyright and licensing information for
each track, leading to more accurate revenue
distribution.
**Music Distribution: Artists and groups can**
distribute music directly to consumers using
blockchain technology. This bypasses traditional
distribution channels, giving them more control
and potential revenue.
**Micro Payments: Blockchain facilitates artists**
receiving micro payments for their tracks. This
allows listeners to directly purchase specific
songs or albums.
**NFTs (Non-Fungible Tokens): Artists can use**
NFTs to create unique digital products. This
provides fans with the opportunity to own
unique pieces and offers artists new revenue
streams.
**Interaction with Fans:** Some artists use
blockchain technology to engage more with
their fans. They can provide exclusive access and
experiences using tokenized rewards.
Nevertheless, the full impact of blockchain technology on the music industry is still unfolding and evolving. Even so, this technology has
significant potential to fundamentally change
how musicians create, distribute, and earn from
their music.
## 6. INNOVATIVE BLOCKCHAIN-BASED PLATFORMS IN MUSIC DISTRIBUTION AND A LOOK INTO THE FUTURE
The music industry is undergoing profound
changes due to the impact of digital
transformation. Challenges such as traditional
distribution models, copyright issues, and the
lack of fair compensation for artists necessitate
new and innovative solutions for music to adapt
to the digital age. At this juncture, blockchain
technology comes into play, enabling data
to be stored transparently, securely, and
in a decentralized manner. However, the
widespread adoption and acceptance of these
platforms by the general public can give rise to
significant challenges, considering factors such
as technological capabilities, user behavior, and
industry standards. Blockchain-based music
platforms are offering a new perspective to
the music industry, reshaping the interaction
between artists and listeners. However, how
these platforms will be embraced as alternatives
to traditional music distribution models and
how they will impact the music industry will be
better understood through future studies and
adoption processes.
Since its early days, blockchain technology has
garnered significant interest across various
industries. Platforms like Bittunes, Ujo Music,
Voise, Musicoin, and Resonate are standout
examples of blockchain-based streaming
platforms that have emerged in recent years.
These platforms promise to employ smart
contracts to reward artists and pledge fair
compensation. They also provide the capability
for users to directly tip artists. However, the
acquisition of the necessary cryptocurrency
(such as Bitcoin, Ethereum, or Musicoin) for
these platforms might not be as user-friendly
as the payment processes of traditional music
platforms, potentially slowing down the
adoption process (Sciaky, 2019).
Some of the innovative platforms in the music
industry are as follows:
**Audius[1]: Built on the Ethereum blockchain,**
Audius allows artists to independently release
their music and interact directly with listeners.
This enables artists to overcome the limitations
of traditional music distribution channels and
reach broader audiences, effectively marketing
their music (Audius, 2023). Audius boasts
several important features that set it apart
from other blockchain-based music platforms.
These features enable the platform to provide
a more effective and appealing experience for
users. The user-friendly interface facilitates
the rapid adoption of Audius. Both artists and
listeners find navigation and content uploading
on the platform hassle-free, ensuring a more
comfortable and enjoyable user experience.
Audius’ ability to provide wider access is also
a noteworthy feature. When artists upload their
music to the platform, they can reach listeners
from different cultures and geographies,
allowing their music to reach broader audiences.
Audius’ innovative business models distinguish
it from other platforms. Artists can choose to
make their music available for free listening or
license it for a certain fee. Additionally, adjusting
usage rights for works based on different regions
or platforms is also possible. These features
differentiate Audius from other blockchain-based music platforms. The platform presents an
innovative approach aimed at providing a more
sustainable, fair, and enriching music experience
for both artists and listeners.
**Musicoin[2]:** Operating on micro-payments
between artists and listeners, Musicoin offers a
fair payment model. As listeners enjoy music,
they can make payments to artists using
cryptocurrency. Artists, in turn, are rewarded
with the Musicoin cryptocurrency as they share
their content. This approach enables artists to
better determine the value of their music and
manage their copyrights more directly. Artists can
establish closer connections with their listeners,
receive feedback, and even offer exclusive content
or experiences for a certain amount of Musicoin.
This not only allows for listening to music but
also enriches the experience by forming a more
personal connection with artists. Being free
and ad-free, this platform utilizes the Universal
Basic Income (UBI) model, ensuring that each
contribution is fairly rewarded (Musicoin, 2023).
**Resonate[3]: It presents an alternative approach**
to subscription-based models like Spotify and
Apple Music. It offers music to listeners at
affordable prices while committing to providing
artists with higher payments compared to their
competitors. Operating on a blockchain-based
democratic governance system, it ensures that
artists receive their earnings through a per-listener payment model, while listeners gain
access to music through a fair subscription model.
One of Resonate’s most striking features is the
“Stream2Own” model. In this model, users make
micro-payments for each streaming session, and
the amount they pay is instantly transferred to
the artist’s wallet. This enables artists to earn
instant income from every play, fostering a more
equitable distribution of revenue compared to
traditional music streaming platforms.
Additionally, the Resonate platform grants
artists more control over how they license their
music. Artists can determine the usage terms for
their works, which are automatically enforced
through smart contracts. This enhances the
protection of copyright and empowers artists
to manage their music. Another area where
Resonate stands out is user experience. The
platform allows users not only to listen to music
but also to get closer insights into artists’ stories
and music creation processes. This approach
transforms music into not just sound but also
a story and experience, fostering a deeper
connection. Resonate’s unique features offer
a fresh perspective on the digitalization of the
music industry, with functions like fair revenue
sharing, licensing control, and enhanced music
experience. Serving as a robust and effective
bridge between artists and music consumers,
this platform positions itself as a contender to
shape the future of music by innovating in the
realms of fair revenue distribution, licensing
control, and music experience.
**Opus[4]: It stands out as a blockchain-based**
platform built to store and distribute high-quality
audio files. It specifically enables the storage of
high-resolution audio files in the FLAC (Free
Lossless Audio Codec) format. The FLAC format
maintains audio quality while incorporating
compression capabilities. This provides music
artists and producers with the capacity to
preserve their creations at the highest level.
Opus’s primary goal is to enhance audio quality
within the music industry. Traditional digital
music platforms often use compressed audio
formats, which may lead to quality degradation.
Opus, on the other hand, aims to deliver a
superior listening experience by offering high-resolution audio through the FLAC format.
Opus is built on a blockchain technology that
ensures fair revenue sharing. Artists receive direct
payments as listeners engage with their music on
the platform. Smart contracts facilitate revenue
sharing based on predefined ratios. Furthermore,
Opus enables accurate management of copyright.
Artists can establish usage terms for their works,
and smart contracts automatically enforce these
conditions. The platform supports various
licensing models. Artists can make their works
available for free streaming or license them for
a specified fee. Additionally, they can customize
the usage rights based on geographical regions
or platforms. With a global vision, Opus provides
access to listeners and artists worldwide. When
artists upload their works to the platform,
listeners from different geographies and cultural
backgrounds can access these creations. Opus’s
fundamental aim is to deliver a high-quality
audio experience while ensuring fair revenue
sharing and copyright management. With its
innovative approach, Opus contributes to a more
transparent and accessible future for the music
industry.
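To make the licensing mechanism concrete, here is a hypothetical sketch, not Opus contract code: a smart-contract-style check that gates each play on the artist's declared terms, such as licensed regions and a per-license fee.

```python
# Illustrative sketch only (hypothetical structures, not Opus contract
# code): a smart-contract-style check that enforces an artist's usage
# terms, such as region restrictions and a fee, before a play is allowed.
LICENSE_TERMS = {
    "track-7": {"regions": {"EU", "US"}, "fee": 0.0},   # free streaming
    "track-9": {"regions": {"US"}, "fee": 0.05},        # paid, US only
}

def authorize_play(track_id: str, region: str, paid: float) -> bool:
    """Return True only if the request satisfies the artist's terms."""
    terms = LICENSE_TERMS.get(track_id)
    if terms is None or region not in terms["regions"]:
        return False
    return paid >= terms["fee"]

print(authorize_play("track-9", "US", 0.05))  # True
print(authorize_play("track-9", "EU", 0.05))  # False: region not licensed
```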
-----
These platforms in the music industry carry
the potential to offer a more equitable and
transparent experience to artists and listeners.
They exemplify instances of the digital
transformation within the music industry. While
each platform shares a similar core purpose and
functionality, their unique features and intended
uses exhibit noticeable differences. Notably,
Resonate’s innovative distribution model,
Opus’s high-quality audio storage concept,
Musicoin’s copyright management, and Audius’s
decentralized music streaming platform all stand
out for their distinctive attributes. The adoption
process of these platforms and their impacts on
the music industry will become clearer based on
the outcomes of future research and studies.
## 7. DISADVANTAGES OF BLOCKCHAIN-BASED MUSIC PLATFORMS
While blockchain platforms offer several
advantages, they also come with certain
disadvantages. The requirement to transact with
cryptocurrencies and the complexity of payment
processes can be significant factors limiting user
acceptance. Blockchain-based music platforms
are often designed to facilitate payments with
cryptocurrencies and manage copyright, which may require users to hold or purchase cryptocurrencies. Despite the prevalence of
cryptocurrencies, many individuals may still
have limited knowledge or desire to engage with
them. This can lead to hesitation among music
enthusiasts or artists to use such platforms.
The complexity of payment processes can also
pose a barrier. Payments on blockchain-based
platforms are typically conducted through
smart contracts, which can be unfamiliar and more technical than traditional payment
methods. Users may need to understand and
navigate these processes correctly. Additionally,
factors such as the volatility of cryptocurrency
values and the verification process of payment
transactions can complicate the payment
experience for users.
These disadvantages, especially for users
who are less familiar with technology or have
limited exposure to cryptocurrencies, can reduce
their willingness to adopt the platforms. User
tendencies to prioritize security and simplicity
can influence the adoption rate of these platforms.
Therefore, platform providers might reach a
broader audience by offering user-friendly
interfaces, simplifying payment processes, and
supporting traditional payment methods instead
of cryptocurrencies. Taking these issues into
account, the future success of blockchain-based
music platforms will depend on how effectively
they can make the user experience simple and
secure.
Another disadvantage is scalability issues.
Blockchain infrastructure might struggle to
handle high-volume transactions, limiting the
platforms’ growth and reach to a wider user
base. Scalability issues might become even more
pronounced if the platform gains popularity.
Energy consumption is another concern. Some
blockchain protocols can require high energy
consumption for transaction validation, raising
environmental and sustainability concerns.
Transaction speed and duration could also
pose a disadvantage. During peak periods or
increased network traffic, transaction speeds
could slow down, making it unsuitable for
scenarios requiring instant payments or quick
transactions. Data storage concerns should also
be considered. Since blockchain records every
transaction on a public ledger, safeguarding
personal or sensitive data might be challenging.
This becomes a risk, especially when situations
demand the storage of confidential information.
Lastly, the legal and regulatory aspects of
blockchain technology are still uncertain.
Matters like the legal status and taxation of
cryptocurrencies can vary from country to
country. These uncertainties might affect the
operational processes of platforms. These
disadvantages stand out as factors that could
limit the widespread acceptance and usage of
blockchain-based music platforms. However,
with developers’ efforts to address these
challenges and the evolution of technology,
these disadvantages could be overcome over
time, allowing platforms to reach a broader user
base.
-----
## 8. CONCLUSION
Blockchain technology has advanced rapidly and is now widely used in various fields. Music has transitioned from the recording era to digitalization, and digitized works can be accessed widely via the Internet, but this access also brings contentious copyright
issues. Effective strategies using blockchain
technology in the future will make it possible to
protect digital music copyrights on the internet,
which will greatly enhance the development of
the digital music industry. Blockchain has the
potential to simplify many complex processes by
offering benefits such as reduced transaction costs,
payment speed, elimination of intermediaries,
and sharing of copyright fees through smart
contracts, compared to traditional methods.
However, there are disadvantages associated
with blockchain technology, including the lack of
recognition among potential users, indifference
towards music that can be accessed in this way,
and the high volatility of cryptocurrency values.
In addition, speculative and misinformation-laden discourse still surrounds blockchain technology. In the music sector, the technology offers benefits both in the online sale of concert tickets and in the delivery of live music performances to listeners, helping amateur musicians and small groups as well. The open-source nature of the underlying software allows all these processes to be decentralized,
enabling users and existing institutions in the
music industry to set up their own web stores
and reach listeners directly. Additionally, by
creating synchronous streams that suit the nature
of music, it provides brand new performance
models and economic gain systems.
Blockchain technology can play a significant role
in areas such as copyright management, music
distribution, and artist compensation in the
music industry. However, given the technical,
legal, and collaboration challenges, the adoption
process for this technology will be lengthy and
require careful planning. In the future, it is
expected that blockchain technology will bring
more innovation and transformation to the
music industry.
As noted above, blockchain-based music platforms face disadvantages such as the
requirement to transact with cryptocurrencies
and the complexity of payment processes. These
factors can limit user acceptance and particularly
deter users unfamiliar with cryptocurrencies.
Additionally, challenges like scalability issues,
high energy consumption, transaction speed, and
data storage concerns can restrict the expansion
and usage of these platforms. Although these
issues can be overcome with solutions that
enhance user experience, it’s anticipated that
as technology evolves and regulatory clarity is
achieved, blockchain-based music platforms will
become more widespread.
**Endnotes**
1. https://audius.co/
2. https://musicoin.org/
3. https://resonate.coop/
4. https://opus.audio/
**REFERENCES**
AUDIUS. 2023. Audius. Accessed on June 9, 2023.
https://audius.co/
BLOCKCHAINHUB (2017). _Blockchain Explained - Intro - Beginners Guide to Blockchain_. Available at: https://blockchainhub.net/blockchain-intro/ (Accessed on May 17, 2023).
DUTRA, A., TUMASJAN, A., & WELPE, I. M. (2018).
Blockchain is Changing How Media and Entertainment
Companies Compete. _MIT Sloan Management Review,_
Fall 2018.
GHEORGHE, P. N. M., TIGĂNOAIA, B., &
NICULESCU, A. (2017, October). Blockchain and
smart contracts in the music industry–streaming vs.
downloading. _International Conference on Management_
_and Industrial Engineering (pp. 215-228)._ _Niculescu_
_Publishing House._
HESMONDHALGH, D. (2021). Is music streaming
bad for musicians? Problems of evidence and
argument. _New Media & Society_, 23(12), 3593–3615. https://doi.org/10.1177/1461444820953541
IINUMA, A. (2018, April 5). What Is Blockchain And What
Can Businesses Benefit From It? _Forbes_. https://www.forbes.com/sites/forbesagencycouncil/2018/04/05/what-is-blockchain-and-what-can-businesses-benefit-from-it/?sh=3aef0a6f675f (Accessed on July 5, 2023).
O’DAIR, M. (2016). _Music on the Blockchain_ [online]. Blockchain For Creative Industries Research Cluster, Middlesex University, Report No. 1. https://www.mdx.ac.uk/__data/assets/pdf_file/0026/230696/Music-On-The-Blockchain.pdf (Accessed on July 10, 2023).
MOUGAYAR, W. (2016). _The business blockchain:_
_promise, practice, and application of the next Internet_
_technology. John Wiley & Sons._
MUSICOIN. 2023. Musicoin. Accessed on June 9, 2023.
https://musicoin.org/.
NAKAMOTO, S. (2008). _Bitcoin: A peer-to-peer electronic cash system_. https://bitcoin.org/bitcoin.pdf (Accessed on July 10, 2023).
NGUYEN, Q. K., & DANG, Q. V. (2018, November).
Blockchain Technology for the Advancement of
the Future. _2018 4th international conference on green_
_technology and sustainable development (GTSD) (pp. 483-_
486). IEEE.
OPUS. (2023). Accessed on June 9, 2023. https://opus.
audio/
PANAY, P. (2016). Why Us, Why Now: Convening the Open Music Initiative. _BerkleeICE_. https://www.berklee.edu/panos-panay-open-music-initiative (Accessed on July 3, 2023).
PARELES, J. (2002). David Bowie, 21st-century entrepreneur. _The New York Times_, June 9, 2002.
PETERS, G.W., & PANAYI, E. (2016). Understanding
modern banking ledgers through blockchain
technologies: Future of transaction processing and
smart contracts on the internet of money. _Banking_
_Beyond Banks and Money. Springer, pp. 239–278._
RESONATE. (2023). Resonate Mission. Accessed on
June 9, 2023. https://resonate.is/.
SANCHEZ, D. (2017). What Streaming Music
Services Pay. _Digital Music News_. https://www.digitalmusicnews.com/2017/07/24/what-streaming-music-services-pay-updated-for-2017/ (Accessed on July 1, 2023).
SCIAKY, D. (2019). The digital transformation of the
music industry through applications of blockchain
technology. https://kth.diva-portal.org/smash/record.
jsf?pid=diva2%3A1375894&dswid=984 (Accessed on
June 1, 2023).
TAPSCOTT, D., & TAPSCOTT, A. (2016). _Blockchain_
_revolution: how the technology behind bitcoin is changing_
_money, business, and the world. Penguin._
VAN MAANEN, J. (1983). _Qualitative methodology._
Sage.
VOM BROCKE, J., SIMONS, A., RIEMER, K.,
NIEHAVES, B., PLATTFAUT, R., CLEVEN, A. (2015).
Standing on the Shoulders of Giants: Challenges and
Recommendations of Literature Search in Information
Systems Research. _Communications of the Association for Information Systems_, 37(9).
-----
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.31566/arts.2163?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.31566/arts.2163, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "GOLD",
"url": "https://journals.gen.tr/index.php/arts/article/download/2163/1441"
}
| 2,023
|
[
"JournalArticle"
] | true
| 2023-10-23T00:00:00
|
[] | 9,408
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Education",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/00f24857803eefcd0900689dda52c32505105132
|
[
"Computer Science"
] | 0.905613
|
THUNDER: helping underfunded NPO’s distribute electronic resources
|
00f24857803eefcd0900689dda52c32505105132
|
Journal of Cloud Computing: Advances, Systems and Applications
|
[
{
"authorId": "145729001",
"name": "Gabriel Loewen"
},
{
"authorId": "2068756",
"name": "Jeffrey M. Galloway"
},
{
"authorId": "2115206220",
"name": "Jeffrey A. Robinson"
},
{
"authorId": "1696277",
"name": "X. Hong"
},
{
"authorId": "1770322",
"name": "Susan V. Vrbsky"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": null,
"id": null,
"issn": null,
"name": null,
"type": null,
"url": null
}
|
As federal funding in many public non-profit organizations (NPO’s) seems to be dwindling, it is of the utmost importance that efforts are focused on reducing operating costs of needy organizations, such as public schools. Our approach for reducing organizational costs is through the combined benefits of a high performance cloud architecture and low-power, thin-client devices. However, general-purpose private cloud architectures are not easily deployable by average users, or even those with some computing knowledge. For this reason, we propose a new vertical cloud architecture, which is focused on ease of deployment and management, as well as providing organizations with cost-efficient virtualization and storage, and other organization-specific utilities. We postulate that if organizations are provided with on-demand access to electronic resources in a way that is cost-efficient, then the operating costs may be reduced, such that the user experience and organizational efficiency may be increased. In this paper we discuss our private vertical cloud architecture called THUNDER. Additionally, we introduce a number of methodologies that could enable needy non-profit organizations to decrease costs and also provide many additional benefits for the users. Specifically, this paper introduces our current implementation of THUNDER, details about the architecture, and the software system that we have designed to specifically target the needs of underfunded organizations.
|
# THUNDER: helping underfunded NPO’s distribute electronic resources
### Gabriel Loewen[*], Jeffrey Galloway, Jeffrey Robinson, Xiaoyan Hong and Susan Vrbsky
**Abstract**
As federal funding in many public non-profit organizations (NPO’s) seems to be dwindling, it is of the utmost
importance that efforts are focused on reducing operating costs of needy organizations, such as public schools. Our
approach for reducing organizational costs is through the combined benefits of a high performance cloud architecture
and low-power, thin-client devices. However, general-purpose private cloud architectures are not easily deployable
by average users, or even those with some computing knowledge. For this reason, we propose a new vertical cloud
architecture, which is focused on ease of deployment and management, as well as providing organizations with
cost-efficient virtualization and storage, and other organization-specific utilities. We postulate that if organizations are
provided with on-demand access to electronic resources in a way that is cost-efficient, then the operating costs may
be reduced, such that the user experience and organizational efficiency may be increased. In this paper we discuss
our private vertical cloud architecture called THUNDER. Additionally, we introduce a number of methodologies that
could enable needy non-profit organizations to decrease costs and also provide many additional benefits for the
users. Specifically, this paper introduces our current implementation of THUNDER, details about the architecture, and
the software system that we have designed to specifically target the needs of underfunded organizations.
**Introduction**
Within the past several years there has been a lot of
work in the area of cloud computing. Some may see this
as a trend in which the term “cloud” is used simply as
a buzzword. However, if viewed as a serious contender
for managing services offered within an organization,
or a specific market, cloud computing is a conglomerate of several very desirable qualities. Cloud computing
is known for being scalable, which means that resource
availability scales up or down based on need. Additionally, cloud computing represents highly available and on-demand services, which allow users to easily satisfy their
computational needs, as well as access any other required
services, such as storage and even complete software systems. Although there is no formal definition for cloud
computing, we define cloud computing as a set of service-oriented architectures, which allow users to access a number of resources in a way that is elastic, cost-efficient, and
on-demand. General cloud computing can be separated
into three categories: Infrastructure-as-a-Service (IaaS),
[*Correspondence: gloewen@crimson.ua.edu](mailto:gloewen@crimson.ua.edu)
Department of Computer Science, The University of Alabama, Tuscaloosa, AL,
USA
Platform-as-a-Service (PaaS), and Software-as-a-Service
(SaaS). Infrastructure-as-a-Service provides access to virtual hardware and is considered the lowest service layer
in the typical cloud stack. An example of Infrastructure-as-a-Service is the highly regarded Amazon EC2, which is a subsystem of Amazon Web Services [1]. At the highest
layer is Software-as-a-Service, which provides complete
software solutions. An example software solution, which
exists as a cloud service is Google Docs. Google Docs
is a SaaS which gives users access to document editing tools, which may be used from a web browser. In
between SaaS and IaaS is Platform-as-a-Service, which
allows users to access programming tools and complete API’s for development. An example of a PaaS is
Google AppEngine, which gives developers access to
robust API’s and tools for software development in a
number of different languages. We are beginning to
see many software services being offered by a number
of public cloud providers, including image editing software, email clients, development tools, and even language
translation tools. However, these tools are all offered
by different providers and are not necessarily free for
general use.
© 2013 Loewen et al.; licensee Springer. This is an Open Access article distributed under the terms of the Creative Commons
[Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction](http://creativecommons.org/licenses/by/2.0)
in any medium, provided the original work is properly cited.
-----
Considering that non-profit organizations cannot
always afford to purchase access to software, we propose
that these organizations should simply maintain their own
private cloud, which could decrease the costs associated
with software licensing. There are several freely available
cloud architectures that may be considered. However,
general-purpose cloud architectures are not suitable for
organizations that do not have highly trained professionals to manage such a system. This downfall of most general-purpose architectures is due to the lack of an easy-to-use interface and a somewhat complicated deployment process. Many architectures, such as Eucalyptus
[2] and OpenStack [3], rely heavily on the command line
for interfacing with the system, which isn’t desirable for
markets that do not have experts readily available for
troubleshooting. A cloud architecture designed for these
specific markets must have the following attributes: ease
of deployment, user friendly interface, energy efficiency,
and cost effectiveness. In consideration of these qualities
we have designed a new IaaS cloud architecture, which we
call THUNDER (THUNDER Helps Underfunded NPO’s
Distribute Electronic Resources). THUNDER utilizes the
notion of simplicity at all levels in order to ensure that all
users, regardless of their technical experience, will be able
to use the system or redeploy the architecture if necessary.
Most IaaS cloud architectures rely upon the general
case model. In the general case, an IaaS cloud architecture supports low-level aspects of the cloud stack, such as
hardware virtualization, load balancing of virtual machine
instances, elastic storage, and modularity of physical hardware. Vertical clouds, on the other hand, are defined by
a specific market, and therefore, are able to abstract the
general case IaaS cloud model to provide features that
are tailored for a specific set of uses. We see vertical
clouds predominantly in the healthcare sector with the e-health cloud architecture. The THUNDER architecture is
an abstraction of the general case model by taking care of
the low-level details of hardware virtualization, load balancing, and storage in a way that is considerate of the technical maturity of the users, as well as the level of expertise
expected from the administrators. This abstraction is possible in a vertical cloud designed for the non-profit sector
because we can make an assumption about the maximum number of virtual machines, the type of software
required, and the expected level of experience of the users.
We assume the number of virtual machine instances is
equal to the number of client devices in an office or
computer lab. Additionally, the software available on the
cloud is defined by a set of use cases specific to the organization. For example, THUNDER deployed to a school may
be used in conjunction with a mathematics course, which
would be associated with a virtual machine image containing mathematics software, such as Matlab or Maple.
Additionally, we assume that the technical experience of
administrators and instructors in a school setting is low.
Therefore, by deviating from the general case model of
an IaaS cloud architecture, and by considering the special needs of the market, we can minimize the complexity
of deployment by removing the necessity of a fine-tuned
configuration.
In the following sections we discuss related background
work in private vertical cloud architectures, our proposed
architecture, future work, and we end with a summary and
conclusion.
**Background and motivation**
There has been much discussion on the topic of cloud
computing for various administrative purposes at educational institutions. However, cloud computing is a topic
that until recently has not been widely considered for the
high school grade bracket. Due to the nature of cloud
computing, being a service oriented architecture, there is
a lot of potential in adopting a cloud architecture that can
be used in a classroom [4]. Cloud computing in the classroom could be used to provide valuable educational tools
and resources in a way that is scalable, and supportive
of the ever-changing environment of the classroom. Production of knowledgeable students is not a trivial task.
Researchers in education are focused on providing young
students with the tools necessary to be productive members of society [4]. The past decade has seen, in some
cases, a dramatic decrease in state and local funding
for public secondary education. This reduction in funding indicates that a paradigm shift in how technology is
utilized in the classroom is necessary in order to continue to provide high quality education. The authors of
[4-7] believe that cloud computing may be a viable solution to recapture students’ interests and improve student
success.
**Education**
Researchers at North Carolina State University (NCSU)
have developed a cloud architecture, which is designed
to provide young students with tools that help to engage
students in the field of mathematics [4]. This cloud architecture, known as “Virtual Computing Lab” or “VCL”, has
been provided as a public service to rural North Carolina
9th and 10th grade algebra and geometry classes. The goal
of this study is to broaden the education of STEM-related topics using the VCL in these schools, and two applications were selected for use in the course curricula: Geometer’s Sketchpad 5 and Fathom 2. The authors
describe a set of key challenges that were encountered
during the study, including: diversity of software, software
licensing, security, network availability, life expectancy of
hardware, affordability, as well as technical barriers. Software availability is a prime concern when it comes to
provisioning educational tools for academic use.
-----
The specific needs of the classroom, in many cases,
require specific software packages. When deploying software to a cloud architecture, it is not always possible
to provide certain software packages as cloud services.
For this reason, it is common to bundle software with
virtual machine images, which are spawned on an IaaS
cloud. A virtual machine image is a single file that contains a filesystem along with a guest operating system and
software packages. Additionally, software packages may
have some conflicts with one another that can create an
issue with the logistics of the system [4]. Another software concern is related to software-specific licensing, and
how it affects the cloud. Many software packages require
licensing fees to be paid per user of the system, or as a
volume license, which may or may not impose a maximum number of users allowed access to the software.
Therefore, depending on the specific requirements of the
school and course, software licensing fees must be paid for
accordingly. For example, when Geometer’s Sketchpad was
deployed to the VCL, the authors made sure that the software licensing fees were paid for in accordance with the
software publishers’ license agreement. The necessity for
licensing does affect the cost effectiveness of using a cloud
in this setting, however it is no different than licensing
software for traditional workstations [4].
The authors of [8] have created a private cloud architecture, called CloudIA, which supports e-Learning services at every layer of the cloud stack. At the IaaS layer,
the CloudIA architecture supports an automated virtual
machine image generator, which utilizes a web interface
for creating custom virtual machine images with predefined software packages installed. At the PaaS layer, the
CloudIA architecture supports computer science students
with a robust API for writing software that utilizes cloud
services. At the SaaS layer, the CloudIA architecture supports collaborative software for students to utilize for
projects and discussion.
The authors of [9] describe the benefits of cloud computing for education. The main point that the authors
make is that cloud computing provides a flexible and cost
effective way to utilize hardware for improving the way
information is presented to students. Additionally, the
authors describe details about the ability of cloud computing to shift the traditional expenses from a distributed
IT infrastructure model to a more pay-as-you-go model,
where services are paid for based on specific needs.
Authors of [5] discuss “Seattle”, which is a cloud application framework and architecture, enabling users to
interact with the cloud using a robust API. By using this
platform students can execute experiments for learning
about cloud computing, networking, and other STEM
topics. The authors also describe a complementary programming language built upon Python, which gives students easy access to the Seattle platform.
The authors of [10] discuss a new model for SaaS, which
they have named ESaaS. ESaaS is defined as a Software-as-a-Service cloud architecture with a focus on providing
educational resources. The authors discuss the need for a
managed digital library and a global repository for educational content, which is easily accessible through a web
interface. The proposed architecture is meant to integrate into existing secondary and post-secondary institutions as a supplementary resource to their existing
programs.
**LTSP**
One approach is the use of thin client devices, which have
been used in other educational endeavors, such as the
Linux Terminal Server Project (LTSP) [11]. Thin client
solutions, when paired with an IaaS cloud, offer low power
alternatives to traditional computing infrastructures. The
authors of [12] analyze energy savings opportunities in the
thin-client computing paradigm.
Authors of [13] discuss design considerations for a low
power and modular cloud solution. In this study the LTSP
[11] architecture is reviewed and compared to the authors’
cloud architecture design. LTSP is a popular low power
thin client solution for accessing free and open source
Linux environments using a cluster of server machines
and thin client devices. The LTSP architecture provides
services, which are very similar to an IaaS cloud architecture with a few notable limitations. Firstly, LTSP only
offers Linux environments, which differs from an IaaS
cloud in that the cloud can host Linux, Windows, and
in some instances Apple OSX virtual machine instances.
Additionally, LTSP does not utilize virtualization technology; rather, it provides several minimal Linux and X Window environments on the same host computer.
Interfacing with an LTSP instance also differs from an
IaaS cloud in that an LTSP terminal will boot directly
from the host machine using PXE or NetBoot, which
is a remote booting protocol. A client connected to an
IaaS cloud will typically rely upon the Remote Desktop
Protocol (RDP) for accessing Windows instances, or the
Virtual Network Computing (VNC) protocol for Linux
instances.
**Other work**
All of the previous work relate to educational resources
and services in the cloud. However, most of the related
work is integrated using public cloud vendors and is specific to one particular subject, as is presented in
[4] and [5]. The authors of [14] present their solution,
SQRT-C, which is a light-weight and scalable resource
monitoring and dissemination solution using the publisher/subscriber model, similar to what is described in this
manuscript. The approach considers three major design
implementations as top priority: Accessing physical
-----
resource usage in a virtualized environment, managing
data distribution service (DDS) entities, and shielding
cloud users from complex DDS QoS configurations.
SQRT-C can be deployed seamlessly in other cloud platforms, such as Eucalyptus and OpenStack, since it relies
on the libvirt library for information on resource management in the IaaS cloud.
In [15], the authors propose a middleware for enterprise
cloud computing architectures that can automatically
manage the resource allocation of services, platforms,
and infrastructures. The middleware API set used in
their cloud is built around servicing specific entities
using the cloud resources. For end users, the API
toolkit provides interaction for requesting services. These
requests are submitted through a web interface. Internal interface APIs communicate between physical and
virtual cloud resources to construct interfaces for users
and determine resource allocation. A service directory API is provided for users based on user privileges. A monitoring API is used to monitor and calculate the use of cloud system resources. This relates
to the middleware introduced in this manuscript; however, it addresses architectures more suitable for large
enterprises.
The authors of [16] propose a resource manager that
handles user requests for virtual machines in a cloud environment. Their architecture deploys a resource manager
and a policy enforcer module. First, the resource manager
decides if the user has the rights to request a certain virtual machine. If the decision is made to deploy the virtual
machine, the policy enforcer module communicates with
the cloud front-end and executes an RPC procedure for
creating the virtual machine.
Authors of [17] describe how cloud platforms should
provide services on-demand that helps the user complete their job quickly. Also mentioned is the cloud’s
responsibility of hiding low-level technical issues, such as
hardware configuration, network management, and maintenance of guest and host operating systems. The cloud
should also reduce costs by using dynamic provisioning of
resources, consuming less power to complete jobs (within
the job constraints), and by keeping human interaction to
cloud maintenance to a minimum.
Development of cloud APIs is discussed in [18]. The
author mentions three goals of a good cloud API: Consistency, Performance, and Dependencies. Consistency
implies the guarantees that the cloud API can provide.
Performance is considered in terms of decreasing latency while performing actions. Cloud dependencies are other processes that must be handled, other
than spawning virtual machines and querying cloud
resource and user states. These three issues are considered in the development process of our own IaaS cloud
architecture.
**Proposed architecture**
Our focus is to provide underfunded non-profit organizations with the means to facilitate the computing needs
of their users in a cost-effective manner. The THUNDER
architecture is composed of a special purpose private
cloud stack, and an array of low power embedded systems,
such as Raspberry Pi’s [19] or other low-power devices.
The THUNDER stack differs from the general-purpose
private cloud model in a number of ways. General-purpose cloud stacks, such as Eucalyptus [2] and OpenStack
[3], are focused on providing users with many different
options as to how the cloud can be configured. These
general-purpose solutions are great for large organizations because the architecture is flexible enough to be useful for diverse markets. However, non-profit organizations
do not typically have the resources to construct a general-purpose cloud architecture. Therefore, a special-purpose
or vertical cloud architecture is desirable because it circumvents the typical cloud deployment process by making
assumptions about the use of the architecture. THUNDER
may be utilized by various NPO’s and for various purposes, but a secondary focus of THUNDER is the education market. Research in cloud computing
for education has shown that educational services in high
school settings are successful in motivating students to
learn and achieve greater success in the classroom [4].
The THUNDER cloud stack utilizes a number of commodity compute nodes, in addition to persistent storage
nodes with a redundant backup, as well as a custom
DHCP, MySQL, and system administration server. Each
compute node is capable of accommodating four Windows virtual machines or twelve Linux virtual machines.
The lab consists of low-power client devices with a keyboard, mouse, and monitor connected to a gigabit network. A custom web-based interface allows users to log in,
select their desired virtual machine from a list of predefined images, and then launch the virtual machine
image. For example, students taking a course in Python
programming might be required to use a GNU/Linux
based computer for development. However, a receptionist in an office setting might be required to use a
Microsoft Windows system. Therefore, regardless of the
user requirements, THUNDER will be able to provide
all necessary software components to each user independently. Figure 1 illustrates the THUNDER network
topology. The THUNDER network topology resembles a
typical cloud topology, where the compute cluster is connected to a single shared LAN switch, and support nodes
share a separate LAN switch. Additionally, the topology
shows the client devices and how they interface with the
rest of the system. Table 1 shows a power cost comparison
between THUNDER and a typical 20 PC lab, and shows
a possible savings of 50% when compared to a traditional
computer lab.
-----
**Figure 1 Networking topology for the THUNDER cloud architecture.**
**Network topology**
The THUNDER network topology in Figure 1 is most
cost-efficient when combined with low-power or thin-
client devices, but can also be paired with regular desktop
and laptop computers. One of the common advancements in wired Ethernet technology is the use of switches.
**Table 1 Power cost comparison between THUNDER and a typical lab**

| # | Typical lab | Watts | # | THUNDER | Watts |
|---:|---|---:|---:|---|---:|
| 20 | PC Desktops | 6,000 | 20 | Thin client | 60 |
| 20 | Display | 2,000 | 20 | Display | 2,000 |
| | | | 3 | Compute node | 1,200 |
| | | | 2 | Storage node | 500 |
| | | | 1 | Admin/Web | 250 |
| | **Total** | **8,000** | | **Total** | **4,010** |
| | **Monthly bill** | **$152** | | **Monthly bill** | **$77** |
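As a sanity check on the monthly figures in Table 1, the paper does not state its assumed hours of use or electricity rate, but the numbers are reproduced by, for example, a lab running roughly eight hours a day, 22 days a month, at about $0.108/kWh; these usage and rate values are assumptions, not figures from the paper.

```python
# Back-of-the-envelope check of Table 1. Usage pattern (~8 h/day,
# ~22 days/month) and rate (~$0.108/kWh) are assumptions not stated
# in the paper; they happen to reproduce the table's monthly bills.
HOURS_PER_MONTH = 8 * 22          # assumed lab usage hours per month
RATE = 0.108                      # assumed USD per kWh

for name, watts in [("Typical lab", 8000), ("THUNDER", 4010)]:
    kwh = watts / 1000 * HOURS_PER_MONTH
    print(f"{name}: {kwh:.0f} kWh -> ${kwh * RATE:.0f}/month")
# Typical lab: 1408 kWh -> $152/month
# THUNDER: 706 kWh -> $76/month (Table 1 rounds this to $77)
```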
Ethernet switches allow for adjacent nodes connected to
the switch to communicate simultaneously without causing collisions. The network interface cards used in all
of the devices of THUNDER support full duplex operation, which further allows nodes to send and receive data
over the network at the same time. Ethernet is a link-layer protocol, which determines how physical devices
on the network communicate. The clients communicate
with THUNDER through simple socket commands and
a virtual desktop viewing client, such as VNC or RDP
viewers.
**Compute and store resources**
THUNDER compute and storage resources will consume
a considerable amount of network bandwidth. The compute nodes are responsible for hosting virtual machines
that are accessed by the clients. These compute nodes will
mount the user’s persistent data as the virtual machine is
booting. Each compute node will communicate with the
THUNDER cloud resources and the client devices using
a 1 Gbps network interface. The client devices should
-----
be equipped with a 10/100/1000 Mbps network adapter,
and considering the limited number of cloud servers, it
is unlikely that the 1 Gbps network switch will become
completely saturated with traffic. Userspace storage nodes
are connected to the same switch as the compute nodes,
which allows for tighter coupling of storage nodes and
compute nodes, decreasing the potential delay for persistent file access. Backup storage nodes are connected to a
separate network switch, and are used to back up the cloud
system in case of a system failure.
**_Administrative resources_**
The THUNDER administrative resources include a
MySQL database, Web interface, and Networking services. These services are hosted on a single physical
machine, with a backup machine isolated to the same network switch. There will be no need for a high amount
of resources in the administrative node, since the numbers of compute and storage nodes determine the number of clients that can be connected to THUNDER. The
THUNDER cloud, when accessed from devices external
to the organization’s private network, can be routed to a
secondary administrative node, such that the on-site users
will not experience any quality of service issues.
**Network provisioning for low latency**
Using a top-down approach, the maximum amount of bandwidth needed for a twenty-node THUNDER lab can be described. If we assume that each client device
requires a sustained 1 Mbps network throughput, we
would need to accommodate for sustaining 20 Mbps
within the network switch used by the clients. Since the
network switch is isolated to communicating with the
resources of THUNDER, this throughput needs to be sustainable on the uplink port. This is relatively easy, given
the costs of gigabit switches on the market today. The
specification that needs close attention is the total bandwidth of the switch backplane. Assuming that this bandwidth equals the number of ports multiplied by the port speed is not always correct. In our case, the bandwidth needed, 20 Mbps, is much lower than the maximum
throughput of a twenty-four port gigabit switch.
There is little to no communication between THUNDER
compute node resources. These devices are used to host
virtual machines that are interfaced to the clients directly.
Given a THUNDER lab size of twenty clients, the network
bandwidth needed on the isolated network containing the
compute nodes should be above 20 Mbps, assuming each
client consumes 1 Mbps of bandwidth.
The THUNDER storage node resources are also isolated to the same gigabit network switch as the compute
node resources. When the user logs into THUNDER and
requests a virtual machine, their persistent storage is
mounted inside the virtual machine for them to use. The
data created by the users has to be accessed while they are
using a virtual machine.
**Middleware design and implementation**
One of the core components in building a cloud architecture is the development of a middleware solution, allowing
for ease in resource management. Additionally, in order
to improve the quality of service (QoS), an emphasis
on minimizing resource utilization and increasing system reliability is desirable. Our reasoning for developing
a new cloud middleware API is to address issues that we
have encountered in current cloud middleware solutions,
which are centered upon ease of deployment and ease of
interfacing with the system. Additionally, we have utilized
our API to build a novel cloud middleware solution for
use in THUNDER. Specifically, this middleware solution
is designed for management of compute resources, including instantiation of virtual machine images, construction
and mounting of storage volumes, metadata aggregation,
and other management tasks. We present the design and
implementation for our cloud middleware solution and we
introduce preliminary results from our study into the construction of THUNDER, which is our lightweight private
vertical IaaS cloud architecture.
Management of resources is a key challenge in the
development of a cloud architecture. Moreover, there
is a necessity for minimizing the complexity and overhead in management solutions in addition to facilitating
attributes of cloud computing, such as scalability and elasticity. Another desirable quality of a cloud management
solution is modularity. We define modularity as the ability to painlessly add or remove components on-the-fly
without the necessity to reconfigure any services or systems. The field of cloud management exists within several
overlapping domains, which include service management,
system deployment, access control management, and others. We address the requirements of a cloud management
middleware API, which is intended to support the implementation of the private cloud architecture currently in
development. Additionally, we compare our cloud management solution to solutions provided by freely available
private IaaS cloud architectures.
When examining the current state of the art in cloud
management, there are few options. We are confined to
free and open source (FOSS) cloud implementations, such
as Eucalyptus [2] and OpenStack [3]. Cloud management
solutions used in closed-source, and often more popular
cloud architectures, such as Amazon EC2, are out of reach
from an academic and research perspective due to their
closed nature. However, there has been an effort to make
Eucalyptus and Openstack compatible with Amazon EC2
by implementing a compatible API and command line
tools, such as eucatools [20] and Nova [21], respectively.
The compatibility of API’s makes it easy to form a basis
-----
of comparison between different architectures. However,
this compatibility may also serve as a downfall because if
one API suffers from a bug, it may also be present in other
API’s.
**Eucalyptus discussion**
The methodology for management of resources in Eucalyptus is predominantly reliant upon establishing a control
structure between nodes, such that one cluster is managed by one second-tier controller, which is managed by
a centralized cloud controller. In the case of Eucalyptus,
there are five controller types: cloud controller, cluster
controller, block-based storage controller (EBS), bucket-based storage controller (S3), and node controller. The
cloud controller is responsible for managing attributes of
the cloud, such as the registration of controllers, access
control management, as well as facilitating user interaction through command-line and, in some cases, web-based interfacing. The cluster controller is responsible
for managing a cluster of node controllers, which entails
transmission of control messages for instantiation of virtual machine images and other necessities required for
compute nodes. Block-based storage controllers provide
an abstract interface for creation of storage blocks, which
are dynamically allocated virtual storage devices that can
be utilized as persistent storage. Bucket-based storage
controllers are not allocated as block-level devices, but
instead are treated as containers by which files, namely
virtual machine images, may be stored. Node controllers
are responsible for hosting virtual machine instances and
for facilitating remote access via RDP [22], SSH [23], VNC
[24], and other remote access protocols.
**OpenStack discussion**
Similar to the methodology used by Eucalyptus, OpenStack also maintains a control structure based on the
elements present in the Amazon EC2 cloud. OpenStack maintains five controllers: compute controller
(Nova), object-level storage (Swift), block-level storage
(Cinder), networking controller (Quantum), and dashboard (Horizon). There are many parallels between the
controller of OpenStack and the controllers of Eucalyptus. The Nova controller of OpenStack is similar to the
node controller of Eucalyptus. Similarly we see parallels between Swift in OpenStack with the bucket-based
controller in Eucalyptus, and Cinder in OpenStack with
the block-based storage of Eucalyptus. There seems to
be a discrepancy in implementation between the highest-level controllers in each architecture. OpenStack maintains different controllers for interfacing and network
management, while Eucalyptus maintains a single cloud
controller combining these functionalities. Additionally,
OpenStack does not maintain a higher-level control
structure for managing compute components, which is a
deviation from the cluster controller mechanism present
in Eucalyptus.
**Middleware interfacing, communication, and**
**authentication**
In developing our middleware solution we encountered
challenges regarding the method by which it would interface with the various resources in the cloud. Many different methodologies were considered. However, we decided
to use an event-driven mechanism, which is similar to
remote procedure calls (RPC).
One of the prime differences in the way Eucalyptus and
OpenStack perform management tasks is in the means of
communication. Eucalyptus utilizes non-persistent SSH
connections between controllers and nodes in order to
remotely execute tasks. OpenStack, on the other hand, utilizes remote procedure calls, or RPCs. In keeping with the
methodology introduced by OpenStack and its current
momentum in the open source cloud computing community, we utilize an event driven model, which presents a
very similar mechanism to that of RPC. However, these
two architectures share a common component. They both
utilize the libvirt [25] library, which is the same library that
we utilize in our architecture.
Additionally, authentication was a challenge because in
reducing the complexity of authentication we introduce
new possible security threats. Although we believe the
security threats posed by our authentication model are
minimal, additional threats could be uncovered during
system testing. We believe that this solution is important
because we address concerns regarding the overall usage
of the cloud architecture, and our initial performance
results in Figure 2 show that our middleware performs
well when compared to Eucalyptus [2].
**Node-to-node communication scheme**
In contrast to the methodologies used by Eucalyptus,
OpenStack, and presumably Amazon, our cloud middleware API addresses resource management in a simplified
and more direct manner. The hierarchy of controllers
used in Eucalyptus introduces extra complexity that we
have deemed unnecessary. For this reason, our solution
utilizes a simple publisher/subscriber model by which
compute, storage, and image repository nodes may construct a closed network. The publisher/subscriber system
operates in conjunction with event driven programming,
which allows events to be triggered over the private network to groups of nodes subscribed to the controller
node. Figure 3 shows the logical topology and lines of
communication constructed using this model.
In constructing the communication in this manner we
are able to broadcast messages to logical groups in order
to gather metadata about the nodes subscribed to that
group. Message passing is useful for retrieving the status
-----
**Figure 2 Performance comparison of NetEvent and SSH authentication protocols.**
of nodes, including virtual machine utilization, CPU and
memory utilization, and other details pertaining to each
logical group. Additionally, we are able to transmit messages to individual nodes in order to facilitate virtual
machine instantiation, storage allocation, image transfer,
and other functions that pertain to individual nodes.
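A minimal sketch of this publisher/subscriber broadcast follows. All names (the node service, broadcast, the STATUS event) are hypothetical: the NetEvent source is not reproduced in the paper, so this only mirrors the mechanism it describes, using non-persistent socket connections.

```python
# Minimal pub/sub sketch (hypothetical names, not the NetEvent source):
# a controller publishes an event to every node in a logical group over
# non-persistent sockets and collects the replies.
import json
import socket
import threading
import time

def node(port: int) -> None:
    """A subscribed node: serve one non-persistent connection per event."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    conn, _ = srv.accept()
    event = json.loads(conn.recv(4096).decode())
    if event["event"] == "STATUS":
        conn.sendall(json.dumps({"port": port, "vms": 2}).encode())
    conn.close()
    srv.close()

def broadcast(group, event):
    """Publish an event to every node in a logical group; collect replies."""
    replies = []
    for port in group:
        with socket.create_connection(("127.0.0.1", port)) as c:
            c.sendall(json.dumps({"event": event}).encode())
            replies.append(json.loads(c.recv(4096).decode()))
    return replies

compute_group = [15001, 15002]   # ports stand in for subscribed node addresses
for p in compute_group:
    threading.Thread(target=node, args=(p,), daemon=True).start()
time.sleep(0.2)                  # let the nodes bind before publishing
print(broadcast(compute_group, "STATUS"))
```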
**_Registration of nodes_**
Communication between nodes utilizes non-persistent
socket connections, such that the controller node maintains a static pre-determined port for receiving messages,
while other nodes may use any available port on the
system. Thus, each node in the cloud, excluding the controller node, automatically selects an available port at boot
time. Initial communication between nodes is done during boot time to establish a connection to the controller
node. We utilize a methodology for automatically finding
and connecting to the controller node via linear search
over the fourth octet of the private IP range (xxx.xxx.xxx.0
to xxx.xxx.xxx.255). Our assumption in this case is that
the controller node will exist on a predefined subnet that
allows us to easily establish lines of communication without having to manually register nodes. Additionally, we
can guarantee sequential ordering of IP addresses with
our privately managed DHCP server. Once a communication link is established between a node and the controller
node, the node will request membership within a specific logical group, after which communication between
the controller node and that logical group will contain the
node in question.
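A sketch of this boot-time discovery step is below: a linear search over the last octet of the private subnet, where the subnet and controller port are assumptions, not values from the paper.

```python
# Boot-time controller discovery as described above: probe each address
# in the subnet's last octet. Subnet and port values are assumptions.
import socket
from typing import Optional

CONTROLLER_PORT = 9000  # the controller's static, pre-determined port

def find_controller(subnet: str = "192.168.1") -> Optional[str]:
    """Probe subnet.0 through subnet.255; return the first responsive host."""
    for octet in range(256):
        addr = f"{subnet}.{octet}"
        try:
            # Short timeout keeps the worst-case scan to a few seconds.
            with socket.create_connection((addr, CONTROLLER_PORT), timeout=0.05):
                return addr  # found: the node would now request group membership
        except OSError:
            continue
    return None
```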
The registration methodology used in our middleware
solution differs from the methodology used by Eucalyptus
and OpenStack. For example, Eucalyptus relies upon command-line tools to perform RSA key sharing and for
**Figure 3 Logical topology - logical groups represent group-wise membership in publisher/subscriber model.**
-----
establishing membership with a particular controller. We
do not perform key sharing, and instead rely upon a
pre-shared secret key and generated nonce values. This
approach is commonly known as challenge-response [26],
and it ensures that nodes requesting admission into the
cluster are authentic before communication is allowed.
When a node wishes to be registered as a valid and
authentic node within a cluster, a nonce value is sent
to the originating node. The node will then encrypt the
nonce with the pre-shared key and transmit the value
back to the controller. We validate the message by comparing the decrypted nonce produced by the receiver and
the nonce produced by the sender. Thus, we do not rely
upon manual sharing of RSA keys beforehand, and instead
we eliminate the need for RSA keys altogether and utilize
a more dynamic approach for validation of communication during the registration process. Figure 4 presents the
registration protocol.
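A sketch of the challenge-response exchange is shown below. The paper describes encrypting the nonce with the pre-shared key; this sketch uses an HMAC over the nonce instead, which serves the same authentication purpose of proving possession of the key.

```python
# Challenge-response registration sketch (Figure 4). The paper encrypts
# the nonce with a pre-shared key; an HMAC of the nonce is used here as
# a stand-in that achieves the same authentication goal.
import hashlib
import hmac
import os

PRE_SHARED_KEY = b"assumed-shared-secret"  # provisioned on every node

def issue_challenge() -> bytes:
    """Controller side: generate a fresh nonce for the joining node."""
    return os.urandom(16)

def answer_challenge(nonce: bytes) -> bytes:
    """Node side: prove possession of the pre-shared key."""
    return hmac.new(PRE_SHARED_KEY, nonce, hashlib.sha256).digest()

def verify(nonce: bytes, response: bytes) -> bool:
    """Controller side: recompute and compare in constant time."""
    expected = hmac.new(PRE_SHARED_KEY, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

nonce = issue_challenge()
assert verify(nonce, answer_challenge(nonce))  # node admitted to the group
```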
**Middleware API**
As stated in the introduction, our methodology for
constructing a middleware API for cloud resource management centers around decreasing overhead when
compared to general-purpose solutions. In order to facilitate a simple middleware solution, our API was designed
to provide a powerful interface for cloud management
while not introducing excessive code overhead. We have
titled our API “NetEvent”, which is indicative of its
intended purpose as an API for triggering events over a
network. This API is utilized within our private IaaS cloud
architecture as a means for communication, management
of resources, and interaction with our cloud interface.
Figures 5 and 6 illustrate the manner in which the API
is accessed. Although the code examples presented here
are incomplete, they illustrate the simplicity of creating
events to be triggered by the system for management of
resources.
In Figure 5 we present sample code for the creation of
a controller node, which is responsible for relaying commands from the web interface to the cloud servers. In
Figure 6 we present a skeleton for the creation of a compute node with events written for instantiation of virtual
machine images and for retrieving the status of the node.
Although we do not present code for the implementation
of storage or image repository nodes, the implementations
are similar to that of the compute node. In addition, the
code examples presented in this paper show only a subset of the functionality contained within the production
code.
The API presented here provides a powerful interface
for implementing private cloud architectures. By means
of event triggering over a private network we are able
to instantiate virtual machine images, mount storage volumes, retrieve node status data, transfer virtual machine
images, monitor activity, and more. The implementation of the system is completely dependent upon the
developer’s needs and may even be used in distributed
systems, which may or may not be implemented as a
**Figure 4 Node registration protocol.**
-----
**Figure 5 Example controller node service written in pseudocode.**
cloud architecture. This approach is different from the
more traditional approach of remote execution of tasks by
means of SSH tunneling.
**User interfacing**
In the previous section we introduced our middleware
API for managing cloud resources. However, another
important component is a reasonable way to interface
with the middleware solution. Although the middleware API solution is completely independent of the
interface, we have chosen to use a message passing
approach that is different from that of general-purpose
architectures. In this approach our web interface, which
is written in PHP, connects to the controller node in
order to trigger the “INVOKE” event. By interfacing
with the controller node we are able to pass messages
to groups or individual nodes in order to manage the
resources of that node and receive responses. The ability to interface in this manner allows our interface to
remain decoupled from the logical implementation, while
allowing for flexibility in the interface and user experience. Figure 7 shows an example PHP script for interfacing with the resources in the manner described in this
section.
**Figure 6 Skeleton for compute node service written in pseudocode.**
-----
**Figure 7 Example communication interface in PHP.**
The PHP interface presented in Figure 7 illustrates the
methodology behind how we may capture and display
data about the nodes, as well as provide a means for
user interaction in resource allocation and management.
Although we do not present the full source code in this
paper, additional functions could be written. For example, a function could be written that instructs compute
nodes to instantiate a particular virtual machine image.
One important aspect of this system is that the mode of
communication remains consistent at every level of the
cloud stack. Every message sent is implemented via non-persistent socket connections. This allows for greater data
consistency without modifying the semantics of messages
between the different systems. Figure 8 shows an example
interface for metadata aggregation of a logical compute
group. Figure 9 presents a sequence diagram for the VM
selection interface.
**Supporting storage services**
Central to the development of a complete cloud architecture, and a prerequisite to supporting compute services, is the ability for a cloud middleware to support the mounting and construction of persistent storage volumes. Storage service support is a prerequisite of compute services because it is common for virtual machine images to
reside on a separate image repository or network attached
storage device. Therefore, before compute services can
be fully realized it is necessary to be able to mount the
image repository, such that the local hypervisor may have
access to the virtual machine images. We can support
storage services using the storage driver provided by libvirt. Figure 10 shows the XML specification provided to
libvirt, which is required by the storage driver.
Once the storage pool has been mounted, the user of the cloud may be provided access to storage space if the share is persistent userspace storage. Alternatively, if the share is an image repository, then the compute node will be given access to the virtual machine images provided by the storage pool.
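Although Figure 10's exact XML is not reproduced in the text here, the following sketch shows how a storage pool of this kind might be defined and mounted with the libvirt Python bindings; the pool name, NFS host, and export path are assumptions for illustration.

```python
import libvirt  # pip install libvirt-python

# Sketch of mounting an NFS-backed storage pool with the libvirt Python
# bindings. The XML mirrors the kind of specification shown in Figure 10;
# the host name and paths below are assumptions.
POOL_XML = """
<pool type='netfs'>
  <name>vm-images</name>
  <source>
    <host name='storage.local'/>
    <dir path='/exports/images'/>
  </source>
  <target>
    <path>/var/lib/libvirt/images</path>
  </target>
</pool>
"""

conn = libvirt.open("qemu:///system")
pool = conn.storagePoolDefineXML(POOL_XML, 0)  # register the pool
pool.create(0)                                 # mount it now
pool.setAutostart(1)                           # remount on reboot
print("Active pools:", [p.name() for p in conn.listAllStoragePools()])
conn.close()
```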
**Supporting compute services**
The NetEvent API allows for services to be written and
distributed to nodes within a private cluster. These services utilize the NetEvent API as a means for triggering
events remotely. Within cloud architectures there are a
few important events that must be supported. Firstly, the
instantiation of virtual machine images must be supported
by all cloud architectures. Compute services may be supported by combining the flexibility of the NetEvent API
and a hypervisor, such as KVM. A proper compute service should maintain an image instantiation event which
invokes the hypervisor and instructs it to instantiate a
specific virtual machine image.
The steps involved in supporting compute services start
with mounting the storage share containing the virtual
machine images. This is made possible with the function,
_mountVMPool, which constructs a storage pool located in the directory "/var/lib/libvirt/images", the default location in which libvirt looks for the available domains
or virtual machine images available to the system. Once
the virtual machine pool is mounted then a specific virtual machine may be instantiated, which is made possible
with the function, instantiateVM. This function looks up
the virtual machine, and if it exists in the storage pool, it
will be instantiated. Once the VM is instantiated, a domain
object will be returned to the node, which provides the
methods for managing the virtual machine instantiation.
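A minimal sketch of this instantiation step, using the libvirt Python bindings rather than the THUNDER service code itself, might look as follows; the domain name is an assumption.

```python
import libvirt  # pip install libvirt-python

# Sketch of the instantiateVM step. In THUNDER the equivalent logic lives
# inside the compute node service; the domain name is a placeholder.
conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("ubuntu-lab-image")  # image from the mounted pool;
                                             # raises libvirtError if absent
if not dom.isActive():
    dom.create()  # boot the VM; the domain object then manages the instance
print("Instantiated:", dom.name(), "id:", dom.ID())
conn.close()
```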
**Figure 8 Example interface for metadata aggregation with two nodes being polled for data.**
**Figure 9 Sequence diagram for virtual machine image selection process.**
**Figure 10 Storage pool XML specification required by libvirt.**
**Supporting metadata aggregation**
Metadata aggregation refers to the ability to retrieve data
about each node within a specific group. This data may
be used for informative purposes, or for more complex
management tasks. Example metadata includes the node's IP address, operating system, and kernel. Additionally,
dynamic data may be aggregated as well, including
RAM availability and CPU load. We can support metadata aggregation in each service by introducing events
that retrieve the data and transmit it to the controller
node.
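As a rough illustration, a node-side handler could assemble such a payload along the following lines; the field names and the use of the third-party psutil package are assumptions, not the THUNDER schema.

```python
import json
import platform
import socket

import psutil  # third-party; pip install psutil

# Sketch of the metadata payload a node-side event handler might return to
# the controller. Field names are illustrative.

def collect_metadata() -> str:
    return json.dumps({
        "hostname": socket.gethostname(),
        "ip": socket.gethostbyname(socket.gethostname()),
        "os": platform.system(),
        "kernel": platform.release(),
        # dynamic data
        "cpu_load_percent": psutil.cpu_percent(interval=0.1),
        "ram_available_mb": psutil.virtual_memory().available // 2**20,
    })

if __name__ == "__main__":
    print(collect_metadata())
```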
**Performance results**
One of the many reasons for not using SSH, which seems
to be the industry standard approach for inter-node communication in general-purpose cloud architectures, is that
SSH produces excessive overhead. The communication
approach used by NetEvent is very simplified and does
not introduce data encryption or a lengthy handshake
protocol. The drawback of simplifying the communication structure is that the system becomes at risk of exposing sensitive data transmitted between nodes. However,
in the case of this system no sensitive data is ever transmitted, and instead only simple commands are ever sent
between nodes. For this reason encryption is unnecessary. However, authentication is still required in order to
determine if nodes are legitimate. In testing the performance of NetEvent we compared the elapsed time for
authenticating a node with the controller and establishing
a connection with the elapsed time for SSH to authenticate
and establish a connection. We gathered data over five trials, which is presented in Table 2. Additionally, Figure 2
presents the average latencies between SSH and NetEvent.
From the performance comparison we draw the conclusion that general-purpose cloud architectures that utilize SSH connections, such as Eucalyptus, incur up to a 99% performance penalty when compared to traditional sockets. However, this comparison was made under optimal conditions, because the servers were under minimal
load. More data needs to be gathered to determine how
much the performance is affected when the servers are
overloaded.
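The paper does not spell out the measurement procedure, but a comparison of this kind can be sketched as follows; the hostnames are placeholders, and the SSH timing includes process start-up, so the numbers measure the same general quantity rather than reproducing Table 2 exactly.

```python
import socket
import subprocess
import time

# Rough sketch of the comparison behind Table 2: time a bare TCP
# connect/close against a no-op SSH command over several trials.

def time_socket(host: str, port: int) -> float:
    start = time.perf_counter()
    socket.create_connection((host, port), timeout=5).close()
    return (time.perf_counter() - start) * 1000  # milliseconds

def time_ssh(host: str) -> float:
    start = time.perf_counter()
    subprocess.run(["ssh", host, "true"], check=True, capture_output=True)
    return (time.perf_counter() - start) * 1000  # milliseconds

for trial in range(1, 6):
    print(trial,
          f"socket: {time_socket('node1.local', 9999):.1f} ms",
          f"ssh: {time_ssh('node1.local'):.1f} ms")
```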
**Supporting software services**
The preceding sections discussed our implementation of
the software system necessary for supporting IaaS cloud
services, namely hardware virtualization and persistent
storage. Building upon virtualization of hardware, we
are able to provide software services as custom virtual
machine instances. The approach that THUNDER takes
is instantiation of server virtual machine images, which
deploy web services for user collaboration, research,
and other tools and utilities. By implementing services
in this fashion, no modifications are required to the
infrastructure of the cloud, and administrators may easily start services by allocating hardware resources and
stop services by deallocating resources. This approach
differs from typical SaaS architectures in that no additional configuration is necessary outside of what is
required for regular instantiation of virtual machines.
The only difference is that regular users do not have direct access to service instances using VNC.
**Future work and conclusion**
We would like to introduce this architecture in a select
number of organizations in order to determine the effectiveness and usability of the architecture from both the
user’s and administrator’s perspectives. Based on the
results of the study, we will alleviate any possible concerns from users or administrators. We plan to form
an incremental process, such that various aspects of the
system are studied in different organizational environments, then small changes will be made to the system before running another study.

**Table 2 Performance results comparing NetEvent to traditional SSH-based authentication**

| Trial # | SSH (ms) | NetEvent (ms) |
| --- | --- | --- |
| 1 | 298 | 3.1 |
| 2 | 301 | 3.2 |
| 3 | 302 | 9.2 |
| 4 | 298 | 3.1 |
| 5 | 299 | 3.0 |

In this fashion, we will
have more control over which features are beneficial to
organizations, and which features are least significant.
Currently, the THUNDER cloud is composed of a set of standalone servers that are not organized in a shared structure, such as a rack-style chassis. We would like to
build a complete prototype that is presentable and able
to be easily taken to organizations for demonstration purposes. One of the key challenges in building a cloud
infrastructure is the development of a middleware solution, which allows for ease in resource management. The
work presented in this paper demonstrates that a middleware solution does not have to be as complex as those
found in the popular cloud architectures, Eucalyptus and
Openstack. We also introduced the model by which our
middleware API offers communication between nodes,
namely utilizing event-driven programming and socket communication. We have developed our API to be efficient, lightweight, and easily adaptable for the development of vertical cloud architectures. Additionally, we
showed the manner in which a web interface may interact
with the middleware API in order to send messages and
receive responses from nodes within the cloud. For future
work we would like to investigate approaches for fault tolerance in this architecture. Additionally, we would like to
perform an overall system performance benchmark and
make comparisons between other cloud architectures. We
would also like to implement a method for obfuscation of
management traffic so that the system is less susceptible to malicious users.
We have presented our work in designing and
implementing a new private cloud architecture, called
THUNDER. This architecture is implemented as a vertical
cloud, which is designed for use in non-profit organizations, such as publicly funded schools. We leverage a
number of technologies, such as the Apache2 web server and MySQL, for the implementation of the architecture. Additionally, we introduce socket programming and RPC as a viable alternative to the more common SSH-based solution for inter-node communication. We have established
that the primary goal of THUNDER is not to replace traditional private cloud architectures, but to serve as an alternative, which is custom tailored for reducing complexity,
costs, and overhead in underprivileged and underfunded
markets. We also demonstrate that if an organization were
to adopt the THUNDER architecture they could benefit
by reducing up to 50% of their power bill due to the low
power usage when compared to a traditional computer
lab. We believe that the power and cost savings, when
combined with the features and qualities of THUNDER
as presented in this paper, make THUNDER a desirable
architecture for school computer labs and other organizations. Further studies and analysis will validate the
effectiveness of the architecture.
**Competing interests**
The authors declare that they have no competing interests.
**Authors’ contributions**
GL performed the research, design, and development of the THUNDER
architecture. JG and JR revised the manuscript and contributed to the
background work. XH provided insight and guidance in developing the
networking model for THUNDER. SV edited and revised the final manuscript.
All authors read and approved the final manuscript.
Received: 11 August 2013 Accepted: 14 December 2013
Published: 21 December 2013
**References**
1. Amazon Web Services. http://aws.amazon.com/. Accessed 18 Dec 2012
2. Eucalyptus Enterprise Cloud. http://eucalyptus.com/
3. OpenStack. http://openstack.org/
4. Stein S, Ware J, Laboy J, Schaffer HE (2012) Improving K-12 pedagogy via a Cloud designed for education. Int J Inform Manage. http://linkinghub.elsevier.com/retrieve/pii/S0268401212000977
5. Cappos J, Beschastnikh I (2009) Seattle: a platform for educational cloud computing. ACM SIGCSE 111–115. http://dl.acm.org/citation.cfm?id=1508905
6. Donathan K, Ericson B (2011) Successful K-12 outreach strategies. In: Proceedings of the 42nd ACM technical symposium on Computer science education, pp 159–160. http://dl.acm.org/citation.cfm?id=1953211
7. Ercan T (2010) Effective use of cloud computing in educational institutions. Procedia Soc Behav Sci 2(2):938–942. http://linkinghub.elsevier.com/retrieve/pii/S1877042810001709
8. Doelitzscher F, Sulistio A, Reich C, Kuijs H, Wolf D (2010) Private cloud for collaboration and e-Learning services: from IaaS to SaaS. Comput 91:23–42. http://www.springerlink.com/index/10.1007/s00607-010-0106-z
9. Sultan N (2010) Cloud computing for education: A new dawn? Int J Inform Manage 30(2):109–116. http://linkinghub.elsevier.com/retrieve/pii/S0268401209001170
10. Masud M, Huang X (2011) ESaaS: A new education software model in E-learning systems. Inform Manage Eng 468–475. http://www.springerlink.com/index/H5547506220H73K1.pdf
11. Linux Terminal Server Project. http://ltsp.org/. Accessed 23 Dec 2012
12. Vereecken W, et al. (2010) Energy efficiency in thin client solutions. GridNets 25:109–116
13. Cardellini V, Iannucci S (2012) Designing a flexible and modular architecture for a private cloud: a case study. In: Proceedings of the 6th international workshop on Virtualization Technologies in Distributed Computing Date, VTDC '12. ACM, New York, NY, USA, pp 37–44. http://doi.acm.org/10.1145/2287056.2287067
14. An K, Pradhan S, Caglar F, Gokhale A (2012) A publish/subscribe middleware for dependable and real-time resource monitoring in the cloud. In: Proceedings of the Workshop on Secure and Dependable Middleware for Cloud Monitoring and Management, SDMCMM '12. ACM, New York, NY, USA, pp 1–3:6. http://doi.acm.org/10.1145/2405186.2405189
15. Lee SY, Tang D, Chen T, Chu WC (2012) A QoS assurance middleware model for enterprise cloud computing. In: IEEE 36th Annual Computer Software and Applications Conference Workshops (COMPSACW), pp 322–327
16. Apostol E, Baluta I, Gorgoi A, Cristea V (2011) Efficient manager for virtualized resource provisioning in cloud systems. In: IEEE International Conference on Intelligent Computer Communication and Processing (ICCP), pp 511–517
17. Khalidi Y (2011) Building a cloud computing platform for new possibilities. Computer 44(3):29–34
18. Pallis G (2010) Cloud computing: the new frontier of internet computing. IEEE Internet Comput 14(5):70–74
19. Raspberry Pi Foundation (2013). http://www.raspberrypi.org/. Accessed 15 Nov 2012
20. EC2 Tools (2013). http://www.eucalyptus.com/eucalyptus-cloud/tools/ec2
21. OpenStack Nova (2013). http://nova.openstack.org/
22. Surhone L, Timpledon M, Marseken S (2010) Remote desktop protocol. VDM Verlag Dr. Mueller AG & Company Kg, Saarbruecken, Germany
23. Barrett DJ, Silverman RE, Byrnes RG (2005) SSH, the secure shell: the definitive guide. O'Reilly Media, Sebastopol, CA, USA
24. VNC - Virtual network computing (2013). http://www.hep.phy.cam.ac.uk/vnc_docs/index.html. Accessed 18 Dec 2012
25. libvirt - The virtualization API (2013). http://libvirt.org/. Accessed 21 Jul 2013
26. M'Raihi D, Rydell J, Bajaj S, Machani S, Naccache D (2011) OCRA: OATH Challenge-Response Algorithm. RFC 6287
doi:10.1186/2192-113X-2-24
**Cite this article as:** Loewen et al.: THUNDER: helping underfunded NPO's distribute electronic resources. *Journal of Cloud Computing: Advances, Systems and Applications* 2013, 2:24.
DOI: 10.1002/qua.27035
#### R E V I E W
# Blockchain technology in quantum chemistry: A tutorial review for running simulations on a blockchain
## Magnus W. D. Hanson-Heine [1] | Alexander P. Ashmore [2]
1School of Chemistry, University of
Nottingham, Nottingham, UK
2Oxinet, Bath, UK
Correspondence
Magnus W. D. Hanson-Heine, School of
Chemistry, University of Nottingham,
University Park, Nottingham NG7 2RD, UK.
Email: magnus.hansonheine@nottingham.ac.uk, magnus.hansonheine@gmail.com
### Abstract

Simulations of molecules have recently been performed directly on a blockchain virtual computer at atomic resolution. This tutorial review covers the current applications of blockchain technology for molecular modeling in physics, chemistry, and biology, and provides a step-by-step tutorial for computational scientists looking to use blockchain computers to simulate physical and scientific processes in general. Simulations of carbon monoxide have been carried out using molecular dynamics software on the Ethereum blockchain in order to facilitate the tutorial.

KEYWORDS: blockchain, computational science, distributed ledger, quantum chemistry

### 1 | INTRODUCTION
Blockchain and distributed ledger technology is widely used in areas ranging from finance [1], to medicine [2], energy markets [3], and transparent
and censorship resistant computers [4]. Blockchains have been integrated into chemical databases for genomics [5–21], electron microscopy [22],
and automated experimental chemistry [23], and these databases aim to improve both privacy and openness when storing, accessing, and sharing
chemical and molecular data. Blockchains have been integrated with machine learning techniques for computational modeling in order to facilitate
data sharing and collaborative or federated model construction [24–41]. Distributed ledgers have driven novel algorithm developments in quantum computing [42–52], blockchains have been used to create non-fungible tokens (NFTs) from scientific data [53], and blockchains have seen
extensive use in the chemical industry [54–59].
However, the development of blockchains that are capable of performing generalized Turing complete computation now means that, in principle, the full range of computational science, quantum chemistry, and quantum physics simulations that are possible using conventional computers can also be implemented directly on blockchain computers. Quantum chemistry applications that use blockchain architectures to solve the
Schrödinger equation have not yet been implemented; however, the implicit solutions for electronic structures that are contained within empirical
parameterization of molecular mechanics and molecular dynamics force fields are available using existing software. Calculations of this type have
been performed directly on a blockchain virtual computer to model the relatively simple physical process of a vibrating carbon monoxide molecule
[60, 61]. The ability to perform computational and quantum science using blockchain computers is the focus of the current tutorial review.
### 2 | BLOCKCHAIN COMPUTERS—AN OVERVIEW
Blockchains are a series of algorithms that enable peer-to-peer consensus on the state and history of a decentralized ledger in an open network
[1]. Open public blockchains aim to store a provably tamperproof record of data or computational operations. This is done through the use of
cryptographic hashing functions that map variable length input data to a fixed-length output called a hash. Any change in the input results in an
unpredictable but deterministic change to the hash, and consensus on the state of the blockchain is maintained using algorithms such as proof-of-work [1, 62]. Proof-of-work algorithmic details and the details of blockchain virtual computer operations have been discussed in detail elsewhere
[1, 4]. However, in basic terms, a blockchain virtual computer could be described using the following protocol: a set of desired computations, termed "transactions," are broadcast and executed by the computer nodes in the blockchain network, including "miner" nodes. The outputs of these
computations are hashed along with supplemental data by a miner node. The supplemental data includes an update to the state of the ledger that
creates new tokens, or “coins,” that are under the miner node's control. The supplemental data is modified by a miner node until the numerical
hash of the combined input is below a threshold (network difficulty), which is set in order to ensure that on average a certain amount of computational hashing work needs to be performed by a given miner in order for a valid combination of inputs to be found. Once a sufficiently small hash
is found, the miner node that found the corresponding set of inputs broadcasts this new “block” of data to the rest of the network. Other nodes
in the network then perform a much smaller number of hashing operations on these broadcast inputs in order to check that the hash is valid, and
check that the other rules that the blockchain operates by have been followed. If the new block of data is accepted by the rest of the nodes in the
network, then the miner receives control of the new tokens in addition to any fees that were sent to them as part of the computational work. This
remunerates them for the costs associated with doing the hashing and broadcasting work. If the rules are not followed, then the new block is
rejected by the other nodes, and the newly created tokens are not received in the copies of the blockchain held by the other nodes in the network. This creates an incentive for accurate and honest computational work, and disincentivizes fraudulent or erroneous work. Newly appended blocks of data contain a hash of the previous block as part of their supplemental data, which then causes revisions to old blocks of data to invalidate the hashes of the subsequent blocks, leading to them being rejected. The information in each "block" is therefore mathematically "chained"
to the other blocks in the series through the sequential cryptographic hashes. All computer nodes in the network can therefore reliably maintain
consensus on the state and history of the blockchain database, and on the computational work used to modify that state, creating a decentralized
virtual computer. At the time of writing, the Ethereum blockchain and associated Ethereum virtual machine (EVM) is the oldest and most widely
used blockchain that performs Turing complete general computation [4].
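The search loop at the heart of this protocol can be illustrated with a toy sketch; real networks use different hash functions, block layouts, and far harder difficulty targets, so everything below is a simplification.

```python
import hashlib

# Toy illustration of the proof-of-work search described above: vary a
# nonce in the supplemental data until the block hash falls below a
# difficulty threshold. All values here are illustrative.
prev_hash = "00" * 32            # hash of the previous block
transactions = "tx1;tx2;tx3"     # the computations being committed
target = 2**240                  # easy difficulty threshold for a demo

nonce = 0
while True:
    block = f"{prev_hash}|{transactions}|{nonce}".encode()
    digest = hashlib.sha256(block).hexdigest()
    if int(digest, 16) < target:  # "sufficiently small hash" found
        break
    nonce += 1

print(f"valid nonce: {nonce}, block hash: {digest}")
```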
### 3 | BLOCKCHAIN MOLECULAR DYNAMICS
The first computational chemistry experiments performed entirely within a blockchain virtual computer have been reported in the scientific literature. These simulations involved modeling the vibrational motion of the carbon monoxide molecule on the Ethereum blockchain [60, 61]. In this
case, atomic resolution molecular dynamics simulations were performed for the carbon monoxide molecule with a harmonic potential representing
the energetics of the carbon–oxygen chemical bond over a 1 and 40 fs timeframe, respectively. Computer software compatible with the EVM
was written in the specialized Solidity programming language, and was used to calculate a diatomic molecular dynamics trajectory with a variation of the velocity Verlet algorithm by integrating Newton's equations of motion in atomic units [63]. These simulations were executed for 10 and 400 time steps of 0.1 fs each, respectively, with an initial carbon–oxygen bond length of 120 pm. This model used an equilibrium bond length parameter of 112.8 pm and a force constant of 1855 N/m, together with the masses of ¹²C and ¹⁶O assigned to the atoms. An equivalent simulation was also written using the C# programming language, and was executed on a local machine for comparison (see Figure 1). The outputs of these calculations were recorded on the Ethereum blockchain in blocks 9360161 and 9360178, respectively, and are available for the public to review. These specific transactions can be identified in the listed blocks using the hexadecimal identification numbers in Table 1.

FIGURE 1 Molecular dynamics trajectories showing carbon–oxygen bond length variations for the carbon monoxide molecule when (A) coded in C# and run on a local computer, and (B) coded in Solidity and run on the Ethereum blockchain. Figure reproduced from Hanson-Heine and Ashmore [60]

FIGURE 2 Abstract representation of computational state replication in blockchain virtual computers. Image credit: Giovanni Ciatto, University of Bologna

TABLE 1 The Ethereum blockchain transaction identification numbers (hexadecimal) for the simulations of carbon monoxide

| Operation | Transaction identification number |
| --- | --- |
| Software upload | 0x1c98dbb671dbe76b7cb4188d7585296e5adbe0f64fe8144546b4a82081089152 |
| Preliminary molecular dynamics | 0x13eb9c343f380334262706975c676aa5006ac2103b2132dfba0e1667c7dfddc6 |
| Production molecular dynamics | 0x36b510f3bdb2fa67a0d4f749899ab8630b54000ee5a211172d699d750f94f94f |
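For readers who want to reproduce the numbers off-chain, the following sketch integrates the same harmonic model with standard velocity Verlet using the published parameters; it illustrates the algorithm rather than reproducing the deployed Solidity source, whose exact variation of the integrator may differ in detail.

```python
# Off-chain sketch of the simulation above: velocity Verlet integration of
# a harmonic C-O bond with k = 1855 N/m, r_eq = 112.8 pm, r_0 = 120 pm,
# dt = 0.1 fs. Not the deployed contract; an illustrative re-derivation.
AMU = 1.66053906660e-27        # kg
M_C, M_O = 12.0 * AMU, 15.9949146 * AMU
MU = M_C * M_O / (M_C + M_O)   # reduced mass of the diatomic
K = 1855.0                     # N/m
R_EQ = 112.8e-12               # m
DT = 0.1e-15                   # s
BOHR = 5.29177210903e-11       # m, for output in atomic units

r, v = 120.0e-12, 0.0          # initial bond length and velocity
a = -K * (r - R_EQ) / MU
for step in range(1, 401):     # 400 steps of 0.1 fs = 40 fs
    r += v * DT + 0.5 * a * DT**2
    a_new = -K * (r - R_EQ) / MU
    v += 0.5 * (a + a_new) * DT
    a = a_new
    if step <= 5:              # first values match Table 2 (2.2676 a0, ...)
        print(f"{step * DT * 1e15:.1f} fs  R_CO = {r / BOHR:.4f} a0")
```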
### 4 | SMART CONTRACTS
Ethereum, and many other blockchains, allow for software to be deployed and executed through transactions on the blockchain. These pieces of
software are known as “smart contracts,” and since they are deployed as part of blocks, any contract added to the blockchain cannot be changed
and any transactions interacting with the contract are also recorded permanently. This means that any deployed contract is always accessible
regardless of the source code being lost, made closed source, or any hardware changes that might otherwise limit the accessibility.
Interactions with the blockchain are broadcast as messages by the nodes that form part of the blockchain network and are packaged into
blocks by miner nodes as part of the proof-of-work algorithm. These messages can include instructions to deploy or execute smart contracts, and
are termed "transactions." The term transaction is used in blockchain nomenclature as a holdover from the first blockchain, Bitcoin, which was initially used exclusively for financial payments, and this term is still used today when referring to the corresponding operations on blockchain networks that are capable of Turing complete computation. For example, the molecular dynamics simulation described here was carried out by
executing a smart contract uploaded in Ethereum block 9360156 with the transaction identification number shown in Table 1.
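As an illustration, one of the recorded transactions can be fetched for inspection with the web3.py library; the JSON-RPC endpoint URL below is a placeholder that must be replaced with a real node or provider.

```python
from web3 import Web3  # pip install web3

# Sketch of looking up one of the recorded transactions with web3.py.
# Any Ethereum JSON-RPC endpoint will work; this URL is a placeholder.
w3 = Web3(Web3.HTTPProvider("https://ethereum-rpc.example.org"))

tx_hash = "0x36b510f3bdb2fa67a0d4f749899ab8630b54000ee5a211172d699d750f94f94f"
tx = w3.eth.get_transaction(tx_hash)
print("block:", tx["blockNumber"])             # should be 9360178
print("to (contract):", tx["to"])
print("input data:", tx["input"].hex()[:20], "...")  # encoded call parameters
```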
### 5 | BLOCKCHAIN TIMESTAMPS
When similar scientific discoveries are made independently, the original discovery is often attributed to the person considered to have made the
relevant observations or calculations first. One of the more famous examples of this is the controversy between Leibniz and Newton over who
invented calculus; however, this can be a regular occurrence when researchers share a common knowledge base and goal set. Knowing the order
in which experiments were carried out is also useful when analyzing methods of hypothesis testing and conclusion formation that can differ
depending on the order in which observations happen. An important property of many blockchains is therefore the creation of an internal chronology. The regularity and sequential way in which new blocks of data are added and immutably chained to the existing blocks cryptographically means that the position that calculations occupy in the blockchain acts as an automatic timestamp that can be used to verify the order in which the calculations were performed. In the case of the two carbon monoxide simulations discussed previously, the 1 fs simulation took place chronologically prior to the 40 fs simulation. Under normal circumstances this statement would be hard to prove. However, in this case the two trajectories are recorded in Ethereum blocks 9360161 and 9360178 respectively, which creates an effective timestamp proving the order in which the simulations were performed for as long as the Ethereum blockchain continues to operate with a coherent state and history. Similar uses of this timestamp data have been proposed in the blockchain integrated automatic experiment platform (BiaeP) protocol [23].

FIGURE 3 Screen capture showing the default workspace in the Remix IDE

FIGURE 4 Screen capture showing the provided contract in the text editor of the Remix IDE
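The chronology itself can be read back programmatically; the sketch below queries the block numbers of the two simulations for their consensus-backed timestamps (the RPC endpoint is again a placeholder).

```python
from web3 import Web3  # pip install web3

# Sketch of reading the on-chain chronology discussed above: block numbers
# and block timestamps for the two simulations.
w3 = Web3(Web3.HTTPProvider("https://ethereum-rpc.example.org"))

for number in (9360161, 9360178):
    block = w3.eth.get_block(number)
    print(number, "timestamp:", block["timestamp"])  # Unix time, consensus-backed
```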
### 6 | SCIENTIFIC REPRODUCTION
Scientific reproduction, and the replication of findings, including computational work, is of critical importance for error checking and establishing consensus among scientists. The internal state of blockchain computers is reproduced by multiple independent local computer nodes (Figure 2). This makes them an ideal target for running scientific simulations in an openly verifiable and repeatable manner, which can enable improved peer review and replication in the computational sciences. Any simulation written and deployed to the blockchain can always be found in the same place on that blockchain, and the simulation can be verified by anyone running a node and can be re-executed by anyone using the blockchain and willing to pay the transaction fees. Previous executions of smart contracts can also often be identified, and all of the parameters used to execute them can be easily verified and retrieved, ensuring that any simulation that was run can also be repeated with the computational certainty that executing the same code with the same parameters will result in the same output.

FIGURE 5 Screen capture showing the Remix IDE Solidity compiler tab

FIGURE 6 Screen capture showing the Remix IDE "Deploy and Run Transactions" tab
FIGURE 7 Screen capture of the manual interface for the DiatomicMD contract in Remix IDE
FIGURE 8 Screen capture of the Remix IDE transaction viewer, displaying the example simulation results
While this does not prevent a malicious contract, for example, one that would check the number of times it has been run and then produce a
different result dependent on that number, the compiled bytecode of the contract can be retrieved from the blockchain and decompiled into code
that can be manually checked for any errors or attempts at deception. This can be done regardless of whether the original source code is available,
and the bytecode can also be used to confirm that the source code, if provided, is the same as the code that was deployed on the blockchain.
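Retrieving the deployed bytecode for such a comparison is a one-line query; in the sketch below the contract address is a placeholder, and note that compiler output also contains constructor code and a metadata hash, so only the runtime portion is expected to match exactly.

```python
from web3 import Web3  # pip install web3

# Sketch of retrieving deployed bytecode for comparison against a local
# compilation. The address is a placeholder; use the one recorded at
# deployment time.
w3 = Web3(Web3.HTTPProvider("https://ethereum-rpc.example.org"))

address = Web3.to_checksum_address("0x0000000000000000000000000000000000000000")
onchain = w3.eth.get_code(address).hex()
print("deployed bytecode:", onchain[:34], "...")
# Compare this string against the runtime bytecode emitted by the compiler;
# "deployment" bytecode additionally contains constructor code and a
# metadata hash, so only the runtime portion should match exactly.
```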
FIGURE 9 Screen capture showing the manual interface for the DiatomicMD contract with the getSimOutput call expanded
FIGURE 10 Screen capture showing the transaction viewer displaying results retrieved with a call to getSimOutput
FIGURE 11 Screen capture of the “Deploy and Run Transactions” tab after compiling the contract
FIGURE 12 Screen capture of the MyEtherWallet website homepage
FIGURE 13 Screen capture of the MyEtherWallet website wallet access menu
Smart contracts can make records and retrieve data from blocks, which can serve as an optional additional record of a simulation. While it is
generally more computationally expensive to record data on the blockchain itself, if that data is to be accessed later multiple times, this could
prove more cost effective, as well as making the simulation easier to verify, since any output stored will have been confirmed and recorded in a
way that makes tampering almost impossible when the blockchain is operating as intended.
### 7 | CENSORSHIP RESISTANT COMPUTATION
Calculations on open public blockchains can be made available to anyone for review. Previous blockchain executions can be identified,
verified, and executed again using the same code to produce the same output, and new pieces of software can be uploaded and executed with pseudo-anonymity for the researchers involved. Once the transactions are broadcast and the correct fees are paid, a given
computation cannot be stopped by any central authority so long as the blockchain operates normally. Blockchains can therefore be
useful for scientists working under conditions where concerns about fraud and censorship become meaningful considerations.
Blockchain consensus algorithms can create permanent and practically immutable or tamperproof records of computational data.
While relatively rare, in extreme cases, state actors have been known to block or edit data contained in peer-reviewed journals
[64, 65].
FIGURE 14 Screen capture showing the MyEtherWallet “Deploy Contract” menu option
### 8 | BLOCKCHAIN LIMITATIONS
The requirement that multiple computer nodes reproduce the computational work in each block in order to reach and maintain internal consensus
currently makes the computational rate of blockchains significantly slower than conventional computers. More efficient architectures are currently under development; however, this remains a significant limitation. For example, the production molecular dynamics simulation of carbon monoxide previously mentioned in this tutorial review was included in Ethereum block 9360178, and this block was mined in 11 s, compared to a ∼135 ms execution time for an equivalent C# simulation executed on a standalone desktop machine running with an i7-4790k CPU and 32 GB of ∼1333 MHz DDR3 RAM.
The relative inefficiency of blockchain computation compared with trusted and centralized computational work, and the electricity needed to
perform the hashing operations in proof-of-work, have led to open questions over the ecological impact of blockchain computation [66]. Consensus algorithms such as proof-of-stake and even green-proof-of-work have been proposed as energy-efficient alternatives [67]. The mining competition in proof-of-work means that miners use energy to reach consensus even when they fail to find the correct hash before their competitors,
which could be considered wasteful when compared with alternative methods. However, this process also incentivizes miners to use energy in an
efficient manner in order to maximize their chance of being the first to generate a new block by using electricity in areas and at times when it is
cheaper, and would otherwise be wasted or used less efficiently. This makes direct watt-for-watt comparisons between blockchain proof-of-work
mining and other types of computational energy usage potentially misleading as a direct measure of their relative environmental impact.
Blockchain computers are currently many orders of magnitude less efficient than centralized or trusted alternatives; however, they also provide
certain computational advantages that these other systems cannot replicate.
FIGURE 15 Screen capture of MyEtherWallet, showing the contract deployment section for the tutorial program
Ethereum also has a limit to the maximum amount of computation that can be performed as part of generating a single block. This is measured in units
known as “gas,” and exists in part to make sure that individual programs do not halt the blockchain and prevent it from regularly forming new blocks. A
detailed description of the relationship between gas and specific computational operations can be found in the Ethereum yellow paper “Ethereum: A
Secure Decentralized Generalized Transaction Ledger EIP-150 Revision" by Gavin Wood and currently hosted on the website gavwood.com/paper.pdf at
the time of writing. The hard limit on the maximum amount of computation that can be performed as part of generating a single block in the case of the
Ethereum blockchain is currently set to 30 000 000 gas per-block, with a target of 15 000 000. The example molecular dynamics transaction carried out
in this work made use of 545 720 gas. However, if the entire block limit were used for a simulation it would then be stopped by the Ethereum network at
that point even if instructions and additional funds were available for a longer run time. In principle both the results and the current state of a simulation
after one transaction can be stored and used to send a second transaction in a future block in order to continue the simulation from where it left off,
but this is not currently possible without action from outside the Ethereum blockchain in order to trigger the continuation of the simulation.
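Before broadcasting, a transaction's gas demand can be checked against the block limit; the sketch below uses web3.py's gas estimation with a placeholder address and calldata, and is an illustration rather than part of the tutorial workflow.

```python
from web3 import Web3  # pip install web3

# Sketch of checking a transaction's gas demand against the per-block
# limit before broadcasting. The RPC URL, address, and calldata are
# placeholders for illustration.
w3 = Web3(Web3.HTTPProvider("https://ethereum-rpc.example.org"))

estimate = w3.eth.estimate_gas({
    "to": "0x0000000000000000000000000000000000000000",  # contract address
    "data": "0x",                                        # encoded runMd call
})
block_limit = w3.eth.get_block("latest")["gasLimit"]
print(f"estimated {estimate} gas of {block_limit} per-block limit")
```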
Furthermore, since each node needs to re-execute every transaction and reproduce the same results, the programming languages used for
smart contracts are either entirely deterministic, or, in the cases where they use pre-existing languages, are limited to ensure that certain features
are not used. As an example, this means that smart contracts cannot use random number generators that generate (pseudo)random numbers
within a contract in a way which might lead to different computers generating different values, they also cannot contain any kind of variable that
involves an estimation which can vary across different programming languages and hardware, and they cannot contain any calculation involving
the current date, time, or location specific data that is determined by the local computer node at the point that the code is executed. It is possible
to implement pseudo-random number generators and provide seed data as an input parameter to generate pseudo-randomness in a deterministic
manner that allows for consensus on the output between nodes in the blockchain.
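A seeded, hash-based generator of the kind described can be sketched in a few lines; the example uses Python's SHA3 for illustration, whereas a Solidity contract would more naturally use keccak256, and the seed and output range are arbitrary.

```python
import hashlib

# Sketch of seeded, deterministic pseudo-randomness: repeatedly hashing a
# caller-supplied seed yields a number stream that is identical on every
# node, so consensus is preserved. Illustrative only.

def prng(seed: bytes):
    state = seed
    while True:
        state = hashlib.sha3_256(state).digest()
        yield int.from_bytes(state, "big") % 1000  # e.g. values in [0, 1000)

stream = prng(b"user-supplied seed, passed in as a transaction parameter")
print([next(stream) for _ in range(5)])  # same 5 numbers on every machine
```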
### 9 | BLOCKCHAIN SIMULATION TUTORIAL
The smart contract used in this tutorial is the source code and set of execution instructions for a piece of molecular dynamics software that has been
designed to simulate the general vibration of diatomic molecules using a variation of the velocity Verlet algorithm [63] with a harmonic potential representing the energetics of chemical bonding. This contract is similar to the contract used to carry out the original simulations of the carbon monoxide molecule mentioned previously, but it has been written to allow for a greater number of user-defined input parameters in order to facilitate the general simulation of any diatomic molecule and to improve its usefulness as a potential teaching aid. The source code for this smart contract can be found in the Supporting Information and on the Ethereum blockchain (vide infra), and will be referenced explicitly here. This molecular dynamics software has two modes of execution. The first mode reproduces the original molecular dynamics of carbon monoxide using the parameters used during the simulations carried out in Ethereum blocks 9360161 and 9360178, with a user-specified output precision and number of time steps. The second mode also allows for user-defined input values to be set for the masses of the two atoms individually, the harmonic bond force constant, the time step size, the initial bond length, and the equilibrium bond length. The details of these functions can be found in the smart contract, and for simplicity the first mode will be used for the demonstration in this tutorial. However, any custom-built simulation software that is written in Solidity can also be substituted for the contract we have used here in order to run any other physically meaningful simulation or any other computational science application of the user's choice.

FIGURE 16 Screen capture showing the MyEtherWallet smart contract deployment transaction confirmation
### 10 | SUPPORTING SOFTWARE AND TOOLS
To follow this tutorial, some third-party software will be required. To simplify this process, minimize hardware requirements, and to reduce the
chances of encountering version differences, web applications have been primarily chosen here. There are, however, many different options for each of these categories, and this is only one of many ways to deploy a smart contract.
### 10.1 | Ethereum wallet
A wallet is a piece of software (or hardware) that stores the private key of a cryptographic private and public key pair that is used to access and sign transactions for an account on a blockchain. Wallets will often contain further features ranging from running entire nodes or miner nodes, to lite-wallets that interact with third-party nodes in order to broadcast transactions and query the blockchain.

FIGURE 17 Screen capture showing the smart contract deployment on the Ethereum blockchain using the Etherscan blockchain block explorer website
For this tutorial, a browser-based lite-wallet was chosen. These are wallets that are integrated into a web browser and allow for more convenient access to web applications that utilize blockchains. While this tutorial is written with these wallets in mind, any wallet that is supported by
MyEtherWallet (MEW) can be used here, which includes a variety of physical hardware wallets and mobile applications.
To complete the deployment and execution process, a sum of the Ether token (ETH), or the native token associated with the blockchain being
used to compute the simulation, will be required in order to pay for any associated transaction fees (Appendix). This sum can vary greatly
depending on the relative exchange value of the native token, and the congestion of the network.
Two notable Ethereum browser wallets that function very similarly are the “Brave Wallet” and “Metamask” software. The former is a wallet
integrated with the Brave browser, while the latter is a plug-in that can be installed on most mainstream browsers such as Chrome and Firefox.
Brave can be downloaded from brave.com and Metamask can be downloaded from metamask.io.
### 10.2 | Integrated development environment
An integrated development environment (IDE) is an application designed to fulfill many or all of the requirements of software development. This
typically incorporates a text editor for working with source code, alongside other features such as file management, a code compiler, and/or
debugging tools.
For this tutorial, an IDE for Solidity, the native programming language of Ethereum, is required. The Remix IDE is the one used here, and it
has the capability to run smart contracts locally for debugging purposes. Remix can be accessed through the website remix.ethereum.org.
FIGURE 18 Screen capture showing the “interact with contract” option for the deployment smart contract in MyEtherWallet
FIGURE 19 Screen capture of the contract listed in MyEtherWallet (MEW)'s search after deployment
FIGURE 20 Screen capture of the contract once selected, with address and ABI fields populated
### 10.3 | MyEtherWallet
MEW is a website that allows users to view and manage their account on the Ethereum blockchain using a variety of different wallets. It offers a
simple graphical user interface (GUI) for both deploying and interacting with smart contracts, and it supports both of the browser wallets
suggested for this tutorial alongside numerous other options. While it is possible to deploy a contract in many other ways, MEW's GUI has been
deemed to be a particularly user-friendly option. MEW can be accessed through myetherwallet.com.
### 10.4 | Off-chain smart contract testing
Before deploying a contract, it first needs to be compiled into bytecode that can be understood by the EVM. During this stage, it is also important
to test the contract to ensure that it functions as intended, since, once deployed, the contract cannot be altered or removed, and each new
deployment incurs an additional cost in transaction fees.
To test the contract, the first step is to open Remix IDE. By default, Remix will provide an example workspace with sample code. A new
workspace is not required and the existing sample code can be safely ignored as it will not be used. To begin, a new file must be created by
clicking the document icon located below the workspace name as shown in Figure 3. In this instance the file is named “diatomicmd.sol.”
Once created, the new diatomicmd.sol file must be opened and the contents of the provided contract pasted in, ensuring that the entire contract is present and that no lines have been missed. An example of this is shown in Figure 4. The contract begins with a "pragma solidity" statement denoting the version number of the compiler to use and ends with a closing curly brace "}" on the final line.
Once the contract code has been entered into Remix's text editor, it must be compiled before it can be tested or deployed. On the toolbar to
the left, clicking the third item listed will open the Solidity Compiler tab. Once opened, a button will be displayed with the text “Compile
diatomicmd.sol," shown in Figure 5, which, after being clicked, will have prepared the bytecode and application binary interface (ABI) for this contract. The ABI is an interface that describes the functions of a contract, which makes it possible for other applications to understand how to interact with the deployed contract itself. A single compiler warning should also appear below this panel, but can be safely ignored for the purposes of this tutorial.
FIGURE 21 Screen capture showing MyEtherWallet's contract interaction panel with the RunMD function selected
With the contract compiled, it needs to then be deployed to an environment for testing. Remix IDE supports running an EVM inside the local
web browser using JavaScript so that contracts can be tested without being deployed to a live or public environment. Clicking the fourth item in
the toolbar on the left will open the “Deploy & Run Transactions” tab.
The environment can then be set to the JavaScript VM option, and with the contract set to the DiatomicMD contract that was compiled, the
“Deploy” button shown in Figure 6 can then be clicked to deploy the contract for testing outside of the blockchain.
The contract functions will be presented at the bottom of the tab and can be run in the browser using the JavaScript VM. There will be two
functions named "runMD," one accepting two parameters and one accepting eight. The two-parameter version will, when called, run the simulation using pre-configured parameters taken from the first simulation of carbon monoxide run on the EVM [60, 61].
The eight-parameter version can be used to run a custom simulation; however, each input variable will first need to be converted to bytes using the "getValueBytes" function, providing an integer representation of the input value and an integer indicating the number of decimal places. The output of this call is then used as the input for calling the "runMd" function. For example, if an input value was meant to be 1.25, you would call "getValueBytes" using 125 as the "val" parameter and 2 as the "precision" parameter.
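The fixed-point convention is easy to mirror off-chain; the helpers below are illustrative and are not part of the contract itself.

```python
# Off-chain sketch of the fixed-point convention used by getValueBytes:
# a value is passed as an integer plus a decimal-place count, so 1.25
# becomes (125, 2). These helpers are illustrative, not contract code.

def to_fixed(value: float, precision: int) -> int:
    return round(value * 10**precision)   # "val" argument for getValueBytes

def from_fixed(raw: int, precision: int) -> float:
    return raw / 10**precision

assert to_fixed(1.25, 2) == 125
print(from_fixed(125, 2))  # 1.25
```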
Expand the second listed two-parameter “runMd” function and provide a number of steps and precision. In this case it has been carried out
using five steps, with a precision of ten decimal places, shown in Figure 7. When ready, click "transact" to run the test simulation. Due to limitations of running a copy of the EVM in a web browser with JavaScript, and the fact that software has not yet been designed with these types of simulation in mind, it is likely that longer-running functions will cause the browser to stop responding or crash. While this is not an issue on a full Ethereum node, it is recommended to use a smaller number of steps during testing to mitigate this.
FIGURE 22 Screen capture showing the Etherscan block explorer data associated with this new molecular dynamics trajectory of carbon monoxide
At the bottom of the debug window, two transactions will now be listed (shown in Figure 8) and can be expanded. The first transaction is the
deployment of the contract itself, while the second is the test execution of the simulation. Expanding the second transaction and looking at the
“decoded input” line will list the parameters that were entered previously, while the “decoded output” line will list the results of the simulation as
a comma-separated list.
The first value in this list is the run number, which can be used later to retrieve these results, with the following values being the raw simulation output, which must then be divided by 10^precision to produce the diatomic internuclear separation distances for each time step in atomic units.
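The post-processing step can be expressed in a couple of lines of Python; the sample values below are illustrative rather than taken from a specific run.

```python
# Sketch of post-processing the decoded output: drop the leading run
# number, then divide each raw value by 10**precision to recover bond
# lengths in atomic units. Sample values are illustrative.
decoded_output = [1, 22676, 22672, 22667, 22659, 22649]  # run 1, precision 4
precision = 4

run_number, *raw = decoded_output
bond_lengths = [r / 10**precision for r in raw]
print(f"run {run_number}:", bond_lengths)  # [2.2676, 2.2672, ...] in a0
```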
With the simulation having been run, it is now possible to confirm that the results have been recorded by using the “getSimOutput” function.
Expanding the "getSimOutput" menu in Figure 9, entering the run number of the test, and clicking "call," will add a further item below the previous two transactions shown in Figure 10. The "decoded input" line will list the run number that was entered and the "decoded output" will display
the same results as the previous transaction.
### 10.5 | Blockchain contract deployment
Once satisfied that the contract is working as intended, the next step is to obtain the bytecode and ABI for the contract.
The bytecode is the compiled contract in a form that the EVM can understand, and it is what will actually be deployed on the blockchain network.
Switching back to the "Deploy and Run Transactions" tab, there should be two buttons to copy both the bytecode and ABI. These buttons
are located at the bottom of Figure 11. These will be needed in the next steps, and should each be copied and recorded where they can be easily
retrieved during later steps.
FIGURE 23 Screen capture showing the results of this molecular dynamics trajectory
With the ABI and bytecode prepared, the next step is to switch to MEW and access MetaMask or the Brave wallet from within the Brave
browser. In order to do this, navigate to the MEW website and click the "Access My Wallet" button (Figure 12), followed by the Browser extension button (Figure 13).
If using a different wallet from those suggested but still using MEW, select the appropriate option on the MEW site and follow the instructions provided. The remaining steps will be identical to using the suggested wallets, with the only exception being the process of signing
transactions.
When the wallet is connected, expand the “Contract” section and navigate to the “Deploy Contract” page in Figure 14 using the menu on
the left side of the webpage.
Once the webpage loads, paste the bytecode and ABI into the two text areas, and enter a name for the smart contract. Only the
bytecode is deployed to the blockchain, while the ABI and smart contract name are recorded by the website to simplify accessing the
contract in the future. While the contract name is unimportant, the ABI should be retained, as there is no guarantee that MEW will maintain a record of it; without a recognized standard for contracts of this type, interacting with the contract without the ABI may be difficult.
TABLE 2 The carbon monoxide bond lengths calculated using the Ethereum virtual computer in blocks 14710078 and 9360178 [60, 61]

| Time (fs) | R_CO (a0), Block 14710078 | R_CO (a0), Block 9360178 |
| --- | --- | --- |
| 0.1 | 2.2676 | 2.2676 |
| 0.2 | 2.2672 | 2.2672 |
| 0.3 | 2.2667 | 2.2667 |
| 0.4 | 2.2659 | 2.2659 |
| 0.5 | 2.2649 | 2.2649 |
TABLE 3 Decimal block numbers, hexadecimal block header hashes, and hexadecimal transaction identification numbers for the simulation in this work

| | Block number | Transaction ID | Block header hash |
| --- | --- | --- | --- |
| Contract deployment | 14709931 | 0xa3f600317f39a6e065299b506c2e950676befc3080dce1d4d68eafdec7c5bdbc | 0x03eefc07fcefc9d06f43c877986d7394deb065e445a5a5234e039bb659b262fb |
| Molecular dynamics simulation | 14710078 | 0x7b8bdb43dadc827d80de190f5df2753baecfb56ed1b2536ea0141de4a1f97e32 | 0x440c0aec7ed4ceff351575fd9f72693e4c61f232e68fddf70fd9155d11c14ff7 |
With the contract data filled in, the “Sign Transaction” button in Figure 15 can be clicked to deploy the contract onto the blockchain. Once a
smart contract is deployed it can no longer be changed, and modifying any software contained in the smart contract will therefore require that an
updated contract be deployed separately, so a user may wish to take a moment to ensure that everything is filled out correctly at this point.
After clicking the “Confirm & Send” button shown in Figure 16, the wallet will prompt for the transaction to be cryptographically signed, at
which point it will be broadcast to the Ethereum network. MEW will also update to confirm that the transaction has been initiated. The transaction can be viewed, and progress of the broadcast can be monitored, by selecting the "View on Etherscan" option.
Recommended practice is to copy and save the contract address from Etherscan for future reference. This may not be needed on MEW initially, but can save time in the future when calling the contract and looking up transactions. The contract address is listed in the "To:" section on
the Etherscan block explorer (see Figure 17).
Should you lose or forget to record the contract address, it can always be retrieved from the transaction history of the account used to deploy
the contract.
With the contract now deployed, in this case in block 14709931 of the Ethereum blockchain with the transaction identification number
0xa3f600317f39a6e065299b506c2e950676befc3080dce1d4d68eafdec7c5bdbc, return to MEW and switch to the “Interact with Contract”
section in the menu (Figure 18).
Select the contract in the contract dropdown menu (Figure 19). If the contract is not listed by name, copy the contract address from earlier
into the appropriate field and add the ABI (see Figure 20).
MEW does not, at time of writing, support calling overloaded functions (having two functions with the same name but a different set of
parameters). When running the simulation with custom inputs this will not be an issue; however, it will not be possible to call the simplified function used in testing, that is, to recreate the original simulations of carbon monoxide, without an altered ABI on MEW. For simplicity, this tutorial
uses a slightly altered ABI which is provided in the Supporting Information. This is a limitation of MEW specifically, and not a limitation of
Ethereum itself.
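As an aside, overloaded functions can still be called programmatically even though MEW cannot: web3.py, for example, allows a function to be selected by its full signature. In the sketch below, "deployed" is a contract object attached to the on-chain address, and the "uint256" parameter types for "RunMD" are an assumption for illustration; the authoritative signature is the one in the contract's ABI.

```python
# Sketch: selecting an overloaded function by its full signature in web3.py,
# side-stepping the MEW limitation. The "uint256" types are assumed here.
deployed = w3.eth.contract(address=receipt.contractAddress, abi=abi)
fn = deployed.get_function_by_signature("RunMD(uint256,uint256)")
tx = fn(5, 5).build_transaction({
    "from": acct.address,
    "nonce": w3.eth.get_transaction_count(acct.address),
})
# Sign and broadcast exactly as in the deployment sketch above.
```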
With the address and ABI filled in, click "Interact" and then select the "RunMD" function from the list provided. For demonstration purposes we
have opted for five steps with a precision of five (Figure 21). Click "Call" and sign the transaction just as before when deploying the contract; this
will likely take longer to complete and can again be monitored using Etherscan or an equivalent block explorer (see Figure 22).
The simulation has now taken place on the Ethereum blockchain. In this case the simulation of carbon monoxide was recorded in Ethereum
block 14710078, with a transaction ID of 0x7b8bdb43dadc827d80de190f5df2753baecfb56ed1b2536ea0141de4a1f97e32 (Table 3). Once completed,
return to MEW and select the "GetSimOutput" function, enter "1" as the "runNum" parameter, and click "Call." This function is call-only
and has no cost, nor will it be recorded on the blockchain. If everything has worked correctly, the output of the simulation will be retrieved and
displayed below in the "Results" section, starting with the run number and followed by an array of values. These values will need to be multiplied
by $10^{-\mathrm{precision}}$ in order to produce the final intended output for this contract and should again be quoted to one fewer decimal place than the
precision specification. The results for this simulation of carbon monoxide are shown in unprocessed form in Figure 23, and compared with the
original outputs from the simulation of carbon monoxide in block 9360178 in Table 2. Finally, the block header hashes for the specific blocks that
contained the simulation data can be taken either directly from the blockchain or using block explorer software like Etherscan to provide extra
information about the blocks' reproducibility. The block header hashes for the blocks in which the software deployment and molecular dynamics
run took place are given with the block numbers and transaction ID numbers in Table 3.
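The same retrieval and rescaling can be scripted. Below is a minimal sketch, again using web3.py and continuing from the deployment example; the assumption that "GetSimOutput" returns the run number together with the value array follows the description above, but the exact return shape depends on the contract's ABI.

```python
# Sketch: running the simulation and rescaling its fixed-point output.
# "w3", "acct", "abi", and "receipt" are as in the deployment sketch above.
deployed = w3.eth.contract(address=receipt.contractAddress, abi=abi)

# State-changing call: costs gas and is recorded on the blockchain.
run_tx = deployed.functions.RunMD(5, 5).build_transaction({
    "from": acct.address,
    "nonce": w3.eth.get_transaction_count(acct.address),
})
# ... sign and broadcast as before, then wait for the receipt ...

# Read-only call: free, and not recorded on the blockchain. We assume the
# function returns (runNum, values); the exact shape depends on the ABI.
precision = 5
run_num, raw_values = deployed.functions.GetSimOutput(1).call()
bond_lengths = [v * 10 ** (-precision) for v in raw_values]
print(run_num, bond_lengths)
```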
### 10.6 | Conclusions and perspective
This tutorial provides a template for researchers looking to perform scientific computation using blockchain computers. Blockchains can provide
clear advantages over conventional computers in terms of error recording, computational reproducibility, provenance and determining calculation
order, and censorship resistance in scientific work. The use of blockchain cryptography can also aid in federated calculations that involve data
sharing. The high costs relative to centralized architectures currently make blockchains less useful for routine calculations where these properties
carry less of a premium. However, it is our view that blockchain calculations are likely to have an increasing impact as blockchain throughputs
continue to scale. Hybrid calculations that use blockchains for certain computational steps and conventional off-chain computers for other steps may
also allow more common use of these methods at a significantly reduced cost. In practical terms, while speculative, more developed blockchain
calculations could prevent future uncertainties like the “war over supercooled water” between the research groups of Chandler and Debenedetti
[68]. We look forward to seeing how this field develops over time.
AUTHOR CONTRIBUTIONS
Magnus W. D. Hanson-Heine: Conceptualization; data curation; project administration; supervision; writing – original draft. Alexander
P. Ashmore: Software; writing – original draft.
CONFLICT OF INTEREST
The authors declare no competing financial interest. It has come to our attention that the Ethereum blockchain has begun a hard fork between
the proof-of-work and proof-of-stake consensus algorithms while the present article was under peer review.
DATA AVAILABILITY STATEMENT
The data generated during this study is available in this article, and is also available on the Ethereum blockchain in the referenced blocks.
ORCID
Magnus W. D. Hanson-Heine [https://orcid.org/0000-0002-6709-297X](https://orcid.org/0000-0002-6709-297X)
Alexander P. Ashmore [https://orcid.org/0000-0001-7498-8873](https://orcid.org/0000-0001-7498-8873)
REFERENCES
[[1] S. Nakamoto, Bitcoin: A Peer-to-Peer Electronic Cash System, 2008 Bitcoin.org, https://bitcoin.org/bitcoin.pdf (accessed: June 22 2019).](http://bitcoin.org)
[2] D. R. Wong, S. Bhattacharya, A. J. Butte, Nat. Commun. 2019, 10(1), 917.
[3] M. Andoni, V. Robu, D. Flynn, S. Abram, D. Geach, D. Jenkins, P. McCallum, A. Peacock, Renew. Sustain. Energy Rev. 2019, 100, 143.
[[4] V. Buterin, A Next-Generation Smart Contract and Decentralized Application Platform, 2014 (Cryptorating.eu) https://cryptorating.eu/whitepapers/](https://cryptorating.eu/whitepapers/Ethereum/Ethereum_white_paper.pdf)
[Ethereum/Ethereum_white_paper.pdf. (accessed November 13 2019).](https://cryptorating.eu/whitepapers/Ethereum/Ethereum_white_paper.pdf)
[5] T.-T. Kuo, X. Jiang, H. Tang, X. F. Wang, T. Bath, D. Bu, L. Wang, A. Harmanci, S. Zhang, D. Zhi, H. J. Sofia, L. Ohno-Machado, BMC Med. Genom. 2020,
13(7), 98.
[6] D. Grishin, K. Obbad, G. M. Church, Nat. Biotechnol. 2019, 37(10), 1115.
[7] G. Gürsoy, C. M. Brannon, M. Gerstein, BMC Med. Genom. 2020, 13(1), 74.
[8] N. D. Pattengale, C. M. Hudson, BMC Med. Genom. 2020, 13(7), 102.
[9] T.-T. Kuo, T. Bath, S. Ma, N. Pattengale, M. Yang, Y. Cao, C. M. Hudson, J. Kim, K. Post, L. Xiong, L. Ohno-Machado, Int. J. Med. Inform. 2021, 154,
104559.
[10] S. Ma, Y. Cao, L. Xiong, BMC Med. Genom. 2020, 13(7), 91.
[11] H. I. Ozercan, A. M. Ileri, E. Ayday, C. Alkan, Genome Res. 2018, 28(9), 1255.
[12] M. S. Ozdayi, M. Kantarcioglu, B. Malin, BMC Med. Genom. 2020, 13(7), 82.
[13] G. Gürsoy, R. Bjornson, M. E. Green, M. Gerstein, BMC Med. Genom. 2020, 13(7), 78.
[14] T.-T. Kuo, R. A. Gabriel, L. Ohno-Machado, J. Am. Med. Inform. Assoc. 2019, 26(5), 392.
[15] B. Adanur Dedeturk, A. Soran, B. Bakir-Gungor, PeerJ 2021, 9, e12130.
[16] F. Carlini, R. Carlini, S. Dalla Palma, R. Pareschi, F. Zappone, Front. Blockchain 2020, 3(57), 1.
[17] V. Racine, Sci. Eng. Ethics 2021, 27(3), 35.
[18] X.-L. Jin, M. Zhang, Z. Zhou, X. Yu, J. Med. Internet Res. 2019, 21(9), e13587.
[19] B. S. Glicksberg, S. Burns, R. Currie, A. Griffin, Z. J. Wang, D. Haussler, T. Goldstein, E. Collisson, J. Med. Internet Res. 2020, 22(3), e16810.
[20] M. Shabani, J. Am. Med. Inform. Assoc. 2019, 26(1), 76.
[21] M. Kang, V. Lemieux, Ledger 2021, 6, 126.
[22] D. R. Ortega, C. M. Oikonomou, H. J. Ding, P. Rees-Lee, Alexandria, G. J. Jensen, PLoS One 2019, 14(4), e0215531.
[23] Y. Xu, R. Liu, J. Li, Y. Xu, X. Zhu, J. Phys. Chem. Lett. 2020, 11(23), 9995.
[24] H. Kim, S. Kim, J. Y. Hwang, C. Seo, IEEE Access 2019, 7, 136481.
[25] F. Chen, H. Wan, H. Cai, G. Cheng, Can. J. Stat. 2021, 49(4), 1364.
[26] D. Li, D. Han, T.-H. Weng, Z. Zheng, H. Li, H. Liu, A. Castiglione, K.-C. Li, Soft Comput. 2021, 26, 4423.
[27] M. Imran, U. Zaman, Imran, J. Imtiaz, M. Fayaz, J. Gwak, Electronics 2021, 10(20), 2501.
[28] F. Zerka, V. Urovi, A. Vaidyanathan, S. Barakat, R. T. H. Leijenaar, S. Walsh, H. Gabrani-Juma, B. Miraglio, H. C. Woodruff, M. Dumontier, P. Lambin,
IEEE Access 2020, 8, 183939.
[29] V. Drungilas, E. Vaiciukynas, M. Jurgelaitis, R. Butkienˇ ė, L. Ceponien[ˇ] ė, Appl. Sci. 2021, 11(3), 1010.
[30] E. A. Mantey, C. Zhou, J. H. Anajemba, I. M. Okpalaoguchi, O. D.-M. Chiadika, Front. Public Health. 2021, 9(1256), 737269.
[31] Y. Liu, Y. Qu, C. Xu, Z. Hao, B. Gu, Sensors 2021, 21(10), 3335.
[32] R. Zhang, M. Song, T. Li, Z. Yu, Y. Dai, X. Liu, G. Wang, J. Syst. Archit. 2021, 118, 102205.
[33] J. Weng, J. Zhang, M. Li, Y. Zhang, W. Luo, IEEE Trans. Dependable Secure Comput. 2021, 18(5), 2438.
[34] H. Jin, X. Dai, J. Xiao, B. Li, H. Li, Y. Zhang, IEEE Internet Things J. 2021, 8(21), 15776.
[35] M. Ali, H. Karimipour, M. Tariq, Comput. Secur. 2021, 108, 102355.
[36] D. Unal, M. Hammoudeh, M. A. Khan, A. Abuarqoub, G. Epiphaniou, R. Hamila, Comput. Secur. 2021, 109, 102393.
[37] M. Shen, H. Wang, B. Zhang, L. Zhu, K. Xu, Q. Li, X. du, IEEE Internet Things J. 2021, 8(4), 2265.
[38] Z. Mahmood, V. Jusas, Symmetry 2021, 13(7), 1116.
[39] C. Hickman, H. Alshubbar, J. Chambost, C. Jacques, C.-A. Pena, A. Drakeley, T. Freour, Fertil. Steril. 2020, 114(5), 927.
[40] U. Majeed, L. U. Khan, A. Yousafzai, Z. Han, B. J. Park, C. S. Hong, IEEE Access 2021, 9, 155634.
[41] C. Jiang, C. Xu, Y. Zhang, Inf. Sci. 2021, 576, 288.
[42] E. O. Kiktenko, N. O. Pozhar, M. N. Anufriev, A. S. Trushechkin, R. R. Yunusov, Y. V. Kurochkin, A. I. Lvovsky, A. K. Fedorov, Quantum Sci. Technol.
2018, 3(3), 035004.
[43] I. Stewart, D. Ilie, A. Zamyatin, S. Werner, M. F. Torshizi, W. J. Knottenbelt, R. Soc. Open Sci. 2018, 5(6), 180410.
[44] M. Edwards, A. Mashatan, S. Ghose, Quantum Inf. Process. 2020, 19(6), 184.
[45] Y.-L. Gao, X. B. Chen, G. Xu, K. G. Yuan, W. Liu, Y. X. Yang, Quantum Inf. Process. 2020, 19(12), 420.
[46] S. Singh, N. K. Rajput, V. K. Rathi, H. M. Pandey, A. K. Jaiswal, P. Tiwari, Neural Process. Lett. 2020, 1.
[47] C. Li, X. Chen, Y. Chen, Y. Hou, J. Li, IEEE Access 2019, 7, 2026.
[48] G. Iovane, IEEE Access 2021, 9, 39827.
[49] S. Banerjee, A. Mukherjee, P. K. Panigrahi, Phys. Rev. Res. 2020, 2(1), 013322.
[50] Q. Li, M. Luo, C. Hsu, L. Wang, D. He, IEEE Syst. J. 2022, 16, 4816.
[51] P. Zhang, L. Wang, W. Wang, K. Fu, J. Wang, Secur. Commun. Netw. 2021, 2021, 6671648.
[52] Z. Cai, J. Qu, P. Liu, J. Yu, IEEE Access 2019, 7, 138657.
[53] J. Nicola, Nature 2021, 594(7864), 481.
[54] Z. C. Kennedy, D. E. Stephenson, J. F. Christ, T. R. Pope, B. W. Arey, C. A. Barrett, M. G. Warner, J. Mater. Chem. C 2017, 5(37), 9570.
[55] J. J. Sikorski, J. Haughton, M. Kraft, Appl. Energy 2017, 195, 234.
[56] J. Leng, P. Jiang, K. Xu, Q. Liu, J. L. Zhao, Y. Bian, R. Shi, J. Cleaner Prod. 2019, 234, 767.
[57] S. S. Takhar, K. Liyanage, Int. J. Internet Technol. Secur. Trans. 2021, 11(1), 75.
[58] K. Bhubalan, A. M. Tamothran, S. H. Kee, S. Y. Foong, S. S. Lam, K. Ganeson, S. Vigneswari, A.-A. Amirul, S. Ramakrishna, Environ. Res. 2022, 213,
113631.
[59] H. Lu, K. Huang, M. Azimi, L. Guo, IEEE Access 2019, 7, 41426.
[60] M. W. D. Hanson-Heine, A. P. Ashmore, Chem. Sci. 2020, 11(18), 4644.
[61] M. W. D. Hanson-Heine, A. P. Ashmore, J. Phys. Chem. Lett. 2020, 11(16), 6618.
[62] C. Dwork, M. Naor, Pricing via processing or combatting junk mail. in Advances in Cryptology—CRYPTO '92 (Ed: E. F. Brickell), Springer, Berlin Heidelberg 1993, p. 139.
[63] W. C. Swope, H. C. Andersen, P. H. Berens, K. R. Wilson, J. Chem. Phys. 1982, 76(1), 637.
[64] W. D. Carey, Science 1983, 219(4587), 911.
[65] U. Hoßfeld, L. Olsson, Nature 2006, 443(7109), 271.
[66] C. Schinckus, Renew. Sustain. Energy Rev. 2021, 152, 111682.
[67] N. Lasla, L. Al-Sahan, M. Abdallah, M. Younis, Comput. Netw. 2022, 214, 109118.
[[68] G. S. Ashley, Phys. Today 2018, 1945. https://doi.org/10.1063/PT.6.1.20180822a](https://doi.org/10.1063/PT.6.1.20180822a)
-----
SUPPORTING INFORMATION
Additional supporting information can be found online in the Supporting Information section at the end of this article.
APPENDIX
This appendix details the steps of how to setup a blockchain wallet on the Ethereum blockchain and purchase Ether tokens to run a simulation of
the kind outlined in this publication. The appendix assumes the use of the Chrome web browser and MetaMask wallet plug-in for this purpose at
the time of writing.
1. Go to the Chrome web store and install the MetaMask plug-in. A copy of this plug-in can also be found through the MetaMask website
[(which is currently metamask.io).](http://metamask.io)
2. Click the “Add to Chrome” button to install the plug-in.
3. Once installed, a new tab titled “Welcome to MetaMask” will open. Click the “Get started” button.
4. Click either the "No thanks" or "I agree" button to decide whether to share diagnostic data with MetaMask, based on personal preference. Neither
will directly affect any further steps in this process.
5. Click the “Create a wallet” button.
6. Enter a password of your choice and click “Create” to generate a new “wallet.”
7. The next page will allow you to watch a video explaining what a "recovery phrase" is in the context of cryptographic key pairs, as well as
presenting some basic information on how to keep this phrase secure.
8. The next page will allow you to click to view your recovery phrase. You should record this somewhere secure; if you do not, it is
possible to retrieve it from the MetaMask plug-in at a later date. Should this phrase be lost, however, there is no method to retrieve it, and
any tokens associated with the wallet will be lost.
9. Click either “Remind me later” or “Next.” If you click “Next,” you will be prompted to enter the phrase to confirm you have backed it
up. Once done, click “Confirm.”
10. The wallet is now set up and ready to use. To obtain some Ether (ETH), which is the native token of the Ethereum blockchain, click the “Buy”
button.
11. The option to select from multiple payment gateways will then be presented, where you can choose to purchase ETH with fiat currencies such as
Dollars ($), Euros (€), or Pounds Sterling (£), and have it directly deposited into the wallet you have just set up. The payment gateway to select
will depend on the preferred method of paying for the ETH (e.g., debit card, credit card, bank transfer, Apple Pay, etc.).
12. Once installed, MetaMask is available from the top-right plug-in bar in the Chrome browser.
13. Should this wallet need to be used on a separate machine, repeat the same steps until the “Create a wallet” option is presented and instead
click “Recover a wallet,” at which point you should enter the recovery phrase from the original setup to restore that wallet instead.
-----
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.1002/qua.27035?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1002/qua.27035, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "GREEN",
"url": "https://nottingham-repository.worktribe.com/preview/13449034/Int%20J%20of%20Quantum%20Chemistry%20-%202022%20-%20Hanson___Heine%20-%20Blockchain%20technology%20in%20quantum%20chemistry%20%20A%20tutorial%20review%20for.pdf"
}
| 2,022
|
[
"Review"
] | true
| 2022-10-30T00:00:00
|
[
{
"paperId": "de67654589823c196f81521726d78fc35ebac509",
"title": "Leveraging blockchain concepts as watermarkers of plastics for sustainable waste management in progressing circular economy."
},
{
"paperId": "7db7e190025906c1b3741413eb5e2c8668fbd54a",
"title": "Proof-of-work based blockchain technology and Anthropocene: An undermined situation?"
},
{
"paperId": "29717b25795cd599543ccbdfc05e80e3e73d5a38",
"title": "A Decentralized Identity-Based Blockchain Solution for Privacy-Preserving Licensing of Individual-Controlled Data to Prevent Unauthorized Secondary Data Usage"
},
{
"paperId": "2becfa0f1f61441e2ba5dd0721cfc44ad5ffdfd0",
"title": "Blockchain for federated learning toward secure distributed machine learning systems: a systemic survey"
},
{
"paperId": "b62ddfa76a90b48fa2096814f98572ce2f0187eb",
"title": "Comprehensive Survey of IoT, Machine Learning, and Blockchain for Health Care Applications: A Topical Assessment for Pandemic Preparedness, Challenges, and Solutions"
},
{
"paperId": "87a54749baeb5c1642b5d7a1cad1bf401860991b",
"title": "Integration of federated machine learning and blockchain for the provision of secure big data analytics for Internet of Things"
},
{
"paperId": "bb0c90fd90ca9481919d1fa14e20044709e3cf5e",
"title": "Blockchain for genomics and healthcare: a literature review, current status, classification and open issues"
},
{
"paperId": "82921c5f3622cb54aba641e7e12f5bb0befe8764",
"title": "Blockchain-Secured Recommender System for Special Need Patients Using Deep Learning"
},
{
"paperId": "1f467e7d2805b945ebca678be3240c023d524cbc",
"title": "Democratic learning: hardware/software co-design for lightweight blockchain-secured on-device machine learning"
},
{
"paperId": "887f6f6a4ebe631f2332e7f2c7be847fa07b1c43",
"title": "Integration of blockchain and federated learning for Internet of Things: Recent advances and future challenges"
},
{
"paperId": "b277f77fef0b5d117765140359927fe45c6c6fa6",
"title": "Benchmarking blockchain-based gene-drug interaction data sharing methods: A case study from the iDASH 2019 secure genome analysis competition blockchain track"
},
{
"paperId": "b2c4e325c49611a2886134dafc2fe4271264a08f",
"title": "Implementation Framework for a Blockchain-Based Federated Learning Model for Classification Problems"
},
{
"paperId": "1ddc151446a22f4da7263d410a1ee31ac41aad83",
"title": "How scientists are embracing NFTs"
},
{
"paperId": "696133fc007432f68045379e0618fb4bf011fa94",
"title": "PFLM: Privacy-preserving federated learning with membership proof"
},
{
"paperId": "5477382231adec3319c0a2089eb63c3d4edb71e9",
"title": "Can Blockchain Solve the Dilemma in the Ethics of Genomic Biobanks?"
},
{
"paperId": "166d0012a6711aab6be7ba957d0249f6260a0a4b",
"title": "Blockchain-Enabled Asynchronous Federated Learning in Edge Computing"
},
{
"paperId": "b64c941afa33573a441cc512e68f38b7b7766af8",
"title": "Towards Blockchain-Based Federated Machine Learning: Smart Contract for Model Inference"
},
{
"paperId": "eeabfd10ec53acbd0a9cdf4a47730ede06853e46",
"title": "A novel quantum blockchain scheme base on quantum entanglement and DPoS"
},
{
"paperId": "629d2985d0a72fbf0ba69097c787c80b20072ca3",
"title": "The Blockchain Integrated Automatic Experiment Platform (BiaeP)."
},
{
"paperId": "7493e0e7fe9be5de44ebc6cbd09b3693c79b8a68",
"title": "Data sharing: using blockchain and decentralized data technologies to unlock the potential of artificial intelligence: What can assisted reproduction learn from other areas of medicine?"
},
{
"paperId": "c60c4667076bc0cfe40870212b745eb785c86386",
"title": "Calculating with Permanent Marker: How Blockchains Record Immutable Mistakes in Computational Chemistry."
},
{
"paperId": "b656b2c65dc0c3b22a08f7f1aa1a1f5a9f01f2fc",
"title": "Green-PoW: An Energy-Efficient Blockchain Proof-of-Work Consensus Algorithm"
},
{
"paperId": "386c4d9e439132ec06ff7bdc9ae14e0c9843a37c",
"title": "Decentralized genomics audit logging via permissioned blockchain ledgering"
},
{
"paperId": "a4c0b3e005cc8ff86ea36fd89b640a9b36faa019",
"title": "iDASH secure genome analysis competition 2018: blockchain genomic data access logging, homomorphic encryption on GWAS, and DNA segment searching"
},
{
"paperId": "6c8660f101d6fb712217d6e036f68a3785a130a0",
"title": "Using blockchain to log genome dataset access: efficient storage and query"
},
{
"paperId": "66776d20e43f25b5f8b5cc303e02caa10271cd07",
"title": "Blockchain-Authenticated Sharing of Genomic and Clinical Outcomes Data of Patients With Cancer: A Prospective Cohort Study"
},
{
"paperId": "07734c769838f696fd2cebae792d8944dbdc2db8",
"title": "Leveraging blockchain for immutable logging and querying across multiple sites"
},
{
"paperId": "47cea5a55d5f36f0c3db63c4b7ba6798bff0584b",
"title": "A review of quantum and hybrid quantum/classical blockchain protocols"
},
{
"paperId": "8bda3f00fd1db58cdcd3198c4cc14707b9773b34",
"title": "Using Ethereum blockchain to store and query pharmacogenomics data via smart contracts"
},
{
"paperId": "42bcfcd7a1372643f34ab500c1fe0d676e254765",
"title": "Makerchain: A blockchain with chemical signature for self-organizing process in social manufacturing"
},
{
"paperId": "080a16243c559464798895642e6ee616a496fbc4",
"title": "Data privacy in the age of personal genomics"
},
{
"paperId": "47b5f5fce2ba7799e3750deaf762c604ba037e1d",
"title": "Machine learning in/for blockchain: Future and challenges"
},
{
"paperId": "b779cdd9e02232eff538f8ce6f8ae6e7399df425",
"title": "Efficient logging and querying for blockchain-based cross-site genomic dataset access audit"
},
{
"paperId": "9b510736a3cbdff7a1417142b8e89b38b0702553",
"title": "Fair compute loads enabled by blockchain: sharing models by alternating client and server roles"
},
{
"paperId": "3f48b08829cd419e2898fbeb14beab2d07f9744d",
"title": "Prototype of running clinical trials in an untrustworthy environment using blockchain"
},
{
"paperId": "7d31850f128e4bf854f6dc6d2eb4ab91568927e6",
"title": "Application of a Blockchain Platform to Manage and Secure Personal Genomic Data: A Case Study of LifeCODE.ai in China"
},
{
"paperId": "60be2610dba19761d6458bbac27527b744b0109e",
"title": "Blockchain technology in the energy sector: A systematic review of challenges and opportunities"
},
{
"paperId": "c5f70ee65eabde057f0ecc37c12e333182c5f94d",
"title": "Blockchain-based platforms for genomic data sharing: a de-centralized approach in response to the governance problems?"
},
{
"paperId": "fa24095e91e253064000da94c63e14562dc578b6",
"title": "ETDB-Caltech: A blockchain-based distributed public database for electron tomography"
},
{
"paperId": "bb6716d6b37ebba651e882d5077b1f9bf3f88408",
"title": "Realizing the potential of blockchain technologies in genomics"
},
{
"paperId": "32fd79744afaa92970a3a57305497121af55785b",
"title": "Committing to quantum resistance: a slow defence for Bitcoin against a fast quantum computing attack"
},
{
"paperId": "eabb07994b757d329a434a082e72b81fca9f6237",
"title": "Blockchain technology in the chemical industry: Machine-to-machine electricity market"
},
{
"paperId": "8f7579327850b5719547287f540b88c2efd4f10b",
"title": "Quantum-secured blockchain"
},
{
"paperId": "efcf9e4952a6904241975b95eaf45a645a59cd08",
"title": "Freedom of the mind got Nature banned by the Nazis"
},
{
"paperId": "0ffa8cd252fbf280c088296a94728cb6ede16e31",
"title": "Censorship, soviet style."
},
{
"paperId": "ac81ede712252992b364566fc2f44637e3b0eab3",
"title": "A computer simulation method for the calculation of equilibrium constants for the formation of physi"
}
] | 13,199
|
en
|
[
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
},
{
"category": "Environmental Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/00f3a470a4e9d926afcc53cfcd5deeb000595e12
|
[] | 0.885048
|
A Secure Land Asset Transfer System using Blockchain
|
00f3a470a4e9d926afcc53cfcd5deeb000595e12
|
International Journal of Information & Computation Technology
|
[
{
"authorId": "2105991621",
"name": "P. Ramya"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": null,
"id": null,
"issn": null,
"name": null,
"type": null,
"url": null
}
| null |
International Journal of Information & Computation Technology.
ISSN 0974-2239 Volume 13, Number 1 (2023), pp. 1-6
© International Research Publications House
https://dx.doi.org/10.37624/IJICT/13.1.2023.1-6
# A Secure Land Asset Transfer System using
Blockchain
**P. N. Ramya[1], Geetha Krishna Guruju[2], Ayushi Verma[3], Achintya Prathikantam[4] and Priyanka Myaragalla[5]**
_1,2,3,4,5 G. Narayanamma Institute of Technology and Science (for Women), Hyderabad, TS 500084, India_
_[1e-mail: ramyapn1@gmail.com](mailto:ramyapn1@gmail.com)_
**Abstract**
Property transfer is one of the use cases that relies on many intermediaries to establish trust
in the system. In the present scenario, property transactions are carried out on paper,
giving rise to countless conflicts. Maintaining accurate records of land ownership and
transfers is a very difficult task, made even more challenging by fraudulent or
incomplete registries that can be extremely hard to trace back through history. The
integrity of these records is crucial, but ensuring their accuracy is a complex
undertaking. Blockchain can be utilized to overcome these predicaments faced in land
dealings. The transparent nature of blockchain makes it possible to securely track the
transfer of ownership from one individual to another reliably. Blockchain’s immutable,
auditable, and traceable features make it a suitable solution for this use case. IPFS is a
decentralized protocol and peer-to-peer network that facilitates the storage and sharing
of data in a distributed file system. It's designed to enable efficient and secure sharing
of files across a network of computers without relying on a central server. A solution
of decentralized application or DAPP on Ethereum Blockchain is proposed through this
work, which will be a one stop platform for buying, selling, or registering land. A
systematic approach is used, right from the registration of the land
inspector/buyer/seller to the registration of lands, making it available to sell, etc.
**Keywords: Blockchain, Smart Contracts, Ganache, Ethereum, IPFS, Decentralized**
Application
**1** **Introduction**
In our country, property ownership is a contentious issue because of the lack of proper
documentation and legal conflicts. The system's weaknesses lie in legacy paper trails
and poorly maintained centralized systems, which can be easily manipulated by
fraudulent users. To address these issues, an Ethereum blockchain-based Decentralized
application is being proposed. Blockchain technology combines a peer-to-peer network with
an append-only data structure: the data is stored in blocks that are interconnected, with each
block holding a hash reference to the previous block. Hash references are also used for
storing transaction data. A hash function is a tool that takes in data of any size and converts
it into a fixed-length string of bits called a hash value or hash reference. While these hash
values can be quickly calculated, it is computationally infeasible to reverse the process and
turn a hash value back into the original data. The proposed system uses IPFS,
which is a distributed file system for storing the confidential land and user identity
documents. The proposed system aims to create a secure, decentralized, and tamper-proof platform for buying, selling, and registering land. By enabling direct
communication between buyers and sellers and eliminating the need for intermediaries,
the system will increase transparency and efficiency. The goal is to create a one-stop
decentralized application that can maintain immutable and tamper-proof records of
transactions.
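As a concrete illustration of the one-way, fixed-length property of hash functions described above, the short Python snippet below hashes two nearly identical inputs with SHA-256 (Ethereum itself uses Keccak-256, but the property is the same; the parcel strings are invented for illustration):

```python
import hashlib

# Any-size input maps to a fixed-length digest; changing one character
# produces an unrelated digest, and inverting the mapping is infeasible.
print(hashlib.sha256(b"parcel 1042 owned by A").hexdigest())
print(hashlib.sha256(b"parcel 1042 owned by B").hexdigest())
```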
**2** **Related work**
In 2021, Suganthe et al. [1] proposed a system that provides the precise details of land
records and ownership. The major drawback of this system is that it can only retrieve land
details and store them within the blockchain; it does not enable users to buy or sell land or
to transfer ownership.
The proposed work of Mohammed Moazzam Zahuruddin et al. [2] is implemented on the
Ethereum blockchain with Solidity. The system being proposed employs a double
consensus mechanism for transactions, where the landowner initiates the transaction
and the buyer completes it. This approach addresses situations where the landowner is
unavailable, and assigns ownership to the government. The major drawback of this
system is offline land details verification.
This paper [3] proposes a land document registration system based on Ethereum and
IPFS. This method ensures that user documents kept in IPFS storage are secure. They
developed an information storage application to illustrate the process. The log files are
saved on the IPFS network, which also provides the hash. The major drawback of the
system is that it simply secures the documents stored on IPFS and doesn't provide any
provision for communication between land buyers and sellers.
**3** **Proposed system**
The system we propose is a decentralized application which provides a user-friendly
interface for direct communication between buyers and sellers without any middleman.
We aim to implement this system using Solidity and Flutter. The DAPP will be a
one stop platform for buying, selling and transferring land assets. The system has
user/land authentication and verification to prevent fraudulent activity. The system
facilitates the users to buy/sell land assets conveniently and every transaction is
recorded on the blockchain to maintain transparency and immutability. IPFS is used to
-----
store the land and identity documents in a decentralized way. The GIS mapping
software is used to draw and display the layout of the land. All the transactions take
place in Ethers using Metamask wallet. As a proof of ownership transfer, a land sale
deed document is generated and stored in IPFS.
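To make the storage step concrete, the sketch below pins a hypothetical sale-deed file to a locally running IPFS daemon using the Python ipfshttpclient package; the returned content identifier (CID) is the compact, tamper-evident reference that would be recorded on-chain. The file name is illustrative only.

```python
# Sketch: pinning a sale-deed document to a local IPFS daemon and obtaining
# the content identifier (CID) that the smart contract would store on-chain.
# The file name is hypothetical.
import ipfshttpclient

client = ipfshttpclient.connect()          # connects to the local daemon API
result = client.add("sale_deed_1042.pdf")  # returns Name/Hash/Size metadata
print("CID:", result["Hash"])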
**4** **Implementation**
4.1 **Module Description**
**Authentication Module.**
In this module, the verification of the user and the lands is done. The user first registers
with the help of his private key and identity documents. After the user has registered,
the Land Inspector verifies his identity documents and authenticates him. If the user
adds a land to his profile, the Land Inspector should verify the land documents in order
to enable the user to take further actions.
**User Processes Module.**
In this module, the user can add lands to his dashboard, make lands available for sale
or buy the available lands from the land gallery. Each of these actions are followed by
verification in every step. Finally, the transfer of ownership takes place and a digital
document is generated and stored.
**Transactions Module.**
Metamask is used for making transactions in our system. Right from the buyer’s
payment to recording the transfer of ownership, all the transactions are tamper-proof
and unchangeable.
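The paper's own implementation uses Solidity with Web3.js and Flutter; as an illustration only, the sketch below shows what the buy-and-transfer flow might look like from a Python client using web3.py against a local Ganache node. The contract address, ABI, and the function names (requestToBuy, transferOwnership) are hypothetical stand-ins, since the full contract interface is not published here.

```python
# Illustrative sketch of the buy/transfer flow against a local Ganache node.
# The contract address, ABI, and the function names requestToBuy and
# transferOwnership are hypothetical, not the paper's published interface.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))  # local Ganache endpoint
land = w3.eth.contract(address="0x<contract-address>", abi=[])  # placeholders

buyer, inspector = w3.eth.accounts[0], w3.eth.accounts[1]
land_id = 1042  # hypothetical parcel identifier

# Buyer expresses interest; the payment is escrowed by the contract.
land.functions.requestToBuy(land_id).transact(
    {"from": buyer, "value": w3.to_wei(2, "ether")}
)

# After off-chain verification, the land inspector finalizes the transfer.
land.functions.transferOwnership(land_id, buyer).transact({"from": inspector})
```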
**4.2** **System Design**
Three stakeholders, namely the contract owner, the land inspector, and the user, interact
with the decentralized web application. It is built using Flutter and Solidity.
Web3.js is used as API support for communication between the DAPP and the blockchain.
An Ethereum wallet is required for performing all the blockchain transactions. The
documents are stored on IPFS which is a decentralized file system.
**Figure 1. Proposed System Design**
-----
**4.3** **Working**
The contract owner initiates the whole process by deploying the smart contract on the
Ethereum blockchain and acts as the Admin of the system. The contract owner can
add/remove the Land Inspectors. First-time users have to register into the system by
using a private key and by providing personal information. The system verifies the user
and the land inspector verifies the identity document produced at the time of
registration. Once the user is authenticated, the option to add lands is enabled. The land
inspector verifies the land documents before approving the addition of land to the user’s
profile. Once the land is verified, it gets added to the land gallery of all the users. The
user can make it available for sale or buy land that is already available. If the seller
accepts the request, payment is made and the transaction begins. The land inspector verifies
the transaction and transfers the ownership of the land. A transfer of ownership
document proof is generated and stored in IPFS securely.
**5** **Results**
**Figure 2: Land Inspector Dashboard**
**Figure 3: User Dashboard-Land Gallery**
-----
**Figure 4: Land Details Page showing Land Map drawn using GIS**
**Figure 5: Payment Confirmation Page**
**Figure 6: Payment in transit using Metamask wallet**
-----
**Figure 7: Land sale deed document generated and stored using IPFS**
**6** **Conclusion and Future Scope**
The proposed system is a single platform for the users to buy/sell/register a land. By
providing the details of the user, identity documents and land documents which are
verified by a Land Inspector, the system ensures the credibility of the data. By contrast,
if these were provided in the traditional system, the data could be altered
easily. With the help of blockchain and by storing the data on a decentralized file
system, we have ensured that no fraudulent activity takes place and the data remains
tamper-proof.
As there is always scope for improvement, certain aspects could be added to our system
to increase its overall efficacy. The system can be further enhanced by automating the
user and land verification processes. We can also predict the approximate price of land
and suggest current land price trends to users.
We can also include land splitting or gifting options.
**References**
[1] Suganthe, R. C., Shanthi N., Latha R. S., Gowtham K., Deepakkumar S., and
Elango R.: Blockchain enabled Digitization of Land Registration. In: 2021
International Conference on Computer Communication and Informatics
(ICCCI)(2021)
[2] Mohammed M.Z., Gupta S., Shaik W.A.: Land Registration using Blockchain
Technology. In: International Journal of Emerging Technologies and Innovative
Research (2021)
[3] Kumar K.V.R., Gokul A.R., Kumar.V.N.: Blockchain and Smart Contract for
Land Registration using Ethereum Network. In: International Journal of
Engineering Research & Technology (IJERT)(2022)
[4] Roopa, C., Suganthe, R. C., Shanthi, N.: Blockchain Based Certificate
Verification Using Ethereum And Smart Contract. In: Journal of Critical
Reviews(2020)
-----
|
{
"disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.37624/ijict/13.1.2023.1-6?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.37624/ijict/13.1.2023.1-6, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "BRONZE",
"url": "https://doi.org/10.37624/ijict/13.1.2023.1-6"
}
| 2,023
|
[
"JournalArticle"
] | true
| 2023-07-06T00:00:00
|
[] | 2,482
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Engineering",
"source": "s2-fos-model"
},
{
"category": "Environmental Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/00f658832a6f1d5d53f20b9bc361594b72a8ddf8
|
[
"Computer Science"
] | 0.85086
|
A Review of Dynamic Wireless Power Transfer for In‐Motion Electric Vehicles
|
00f658832a6f1d5d53f20b9bc361594b72a8ddf8
|
[
{
"authorId": "2093280131",
"name": "Kai Song"
},
{
"authorId": "39474042",
"name": "K. Koh"
},
{
"authorId": "3333859",
"name": "Chunbo Zhu"
},
{
"authorId": "31139085",
"name": "Jinhai Jiang"
},
{
"authorId": "2144447138",
"name": "Chao Wang"
},
{
"authorId": "47932866",
"name": "Xiaoliang Huang"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": null,
"id": null,
"issn": null,
"name": null,
"type": null,
"url": null
}
| null |
-----
##### Chapter 6
#### A Review of Dynamic Wireless Power Transfer for In‐
Motion Electric Vehicles
##### Kai Song, Kim Ean Koh, Chunbo Zhu, Jinhai Jiang,
Chao Wang and Xiaoliang Huang
Additional information is available at the end of the chapter
http://dx.doi.org/10.5772/64331
**Abstract**
A dynamic wireless power transfer (DWPT) system in urban areas ensures an uninterrupted
power supply for electric vehicles (EVs), extending or even providing an infinite driving
range with significantly reduced battery capacity. The underground power supply
network also saves more space and hence is important in urban areas. It must be noted
that the railways have become an indispensable form of public transportation to reduce
pollution and traffic congestion. In recent years, there has been a consistent increase in
the number of high‐speed railways in major cities of China, thereby improving accessibility.
Wireless power transfer for trains is safer and more robust when compared with
conductive power transfer through a pantograph mounted on the train. Direct contact is
subject to wear and tear; in particular, the average speed of modern trains has been
increasing. When the pressure of the pantograph is not sufficient, arcs, variations of the current,
and even interruption in power supply may occur. This chapter provides a review of the
latest research and development of dynamic wireless power transfer for urban EV and
electric train (ET). The following key technology issues have been discussed: (1) power
rails and pickups, (2) segmentations and power supply schemes, (3) circuit topologies
and dynamic impedance matching, (4) control strategies, and (5) electromagnetic
interference.
**Keywords:** dynamic wireless power transfer, magnetic coupler, circuit topologies,
control strategies, electromagnetic interference
© 2016 The Author(s). Licensee InTech. This chapter is distributed under the terms of the Creative Commons
Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution,
and reproduction in any medium, provided the original work is properly cited.
-----
##### 1. Introduction
In recent years, studies on DWPT have gained traction especially from The University of
Auckland, Korea Advanced Institute of Science and Technology (KAIST), The University of
Tokyo, Oak Ridge National Laboratory (ORNL), and many other international institutions. The
topics discussed include system modeling, control theories, converter topologies, magnetic
coupling optimization, and electromagnetic shielding technologies for DWPT.
The University of Auckland and Conductix‐Wampfler manufactured the world's first WPT
bus with 30 kW power. A demo ET with 100 kW WPT capability and a 400 m long track without
any on‐board battery was also constructed [1] as shown in Figure 1.
**Figure 1. WPT for EV and ET.**
KAIST constructed electric buses powered by an online electric vehicle (OLEV) system. The
buses are deployed in Gumi city for public transportation, running on two fixed routes
covering a total distance of 24 km as shown in Figure 2. The OLEV system on these routes is
able to supply 100 kW power with 85% of transfer efficiency [2].
**Figure 2. KAIST OLEV.**
The research in Oak Ridge National Laboratory focuses on coupling configuration, transfer
characteristics, medium loss, and magnetic shielding. The dynamic charging system as shown
in Figure 3 constructed by ORNL consists of a full bridge inverter powering two transmitters
simultaneously through a series connection. The experimental results show that the positions
of the electric vehicle significantly affect the transferred power and efficiency [3].
-----
**Figure 3. DWPT system of ORNL.**
Researchers in The University of Tokyo proposed using the combination of a feedforward
controller and a feedback controller to adjust the duty cycle of the power converters in the
DWPT system to achieve optimum efficiency. With the advanced control method, a wireless
in‐wheel motor is developed as shown in Figure 4. The current WPT is from the car body to
the in‐wheel motor. In future, the wireless in‐wheel motor can be powered directly from the
ground using a dynamic charging system [4].
**Figure 4. Wireless in‐wheel motor.**
On the other hand, the Korea Railroad Corporation (KRRI) designed a WPT system for the
implementation in railway track. A 1 MW, 128‐m‐long railway track was developed to
demonstrate the dynamic charging technology for EV. The coupling mechanism consists of a
long transmitter track and two small U‐shaped magnetic ferrites to increase the coupling
strength. As a long transmitter track has high inductance, high voltage drop will occur when
the current flows through it. In order to reduce this voltage stress, the compensation capacitors
are distributed along the track as shown in Figure 5 [5].
**Figure 5. Wireless power rail developed by KRRI.**
-----
The researchers from the Japan Railway Technical Research Institute proposed a different
design of coupling mechanism for the ET. The transmitters are long bipolar coils, and “figure‐
8” coils are used as the matching pickups as shown in Figure 6. The system is able to transfer
50 kW of power across a 7.5‐mm gap with 10‐kHz frequency [6].
**Figure 6. The non‐contact power supply system for railway vehicle.**
Bombardier Primove from Germany is currently leading in WPT technology for EV and ET.
Studies have been primarily conducted for better exploitation of the technology. Apparently,
the technical information of the WPT system developed by Bombardier Primove has not been
published. In 2013, the company proposed a design shown in Figure 7 to ensure high reliability
when powering the EV. The main DC bus is supplied by k‐number of AC/DC substations
connected in parallel. This configuration is used to increase the robustness of the system. If
one of the AC/DC substations breaks down, that particular substation will be disconnected
from the system and other neighboring substations can continue functioning normally, thus
avoiding power interruption. Each transmitter cluster is supplied by multiple high‐frequency
DC/AC inverters in parallel. Similar to the DC bus, the power supply at the AC bus will not
be interrupted if an inverter breaks down. At the receiver side, the train contains a DC bus as
shown in Figure 7. Multiple receivers are supplying to the DC bus simultaneously via AC/DC
rectification. The DC bus powers the motor through a controller. If any of the rectifiers is
damaged, other receivers can continue providing sufficient power to the DC bus [7].
**Figure 7. DWPT system for railway vehicle.**
-----
The Harbin Institute of Technology demonstrated dynamic charging using segmented
transmitters with parallel connections to the inverter [8]. At the receiver side, two layers of flat
coils wound in the same direction are stacked against each other to cancel the points where
the transferred power is zero, thereby increasing the overall efficiency. Using the decoupling
principle to design the size and position of the two‐phase coil, the cross‐coupling is cancelled
and high efficiency is then achieved at any position [9].
Although several studies have been conducted all over the world yielding exceptional results,
factors such as power transfer performance, construction cost, and maintenance cost still
require improvement. Other important considerations for practical DWPT implementation
include high‐power rail, robust control strategies, and EMC.
##### 2. Power rails and pickups
Core‐less rectangular coils and bipolar coils are the two general types of coils used in WPT.
The University of Auckland proposed using long rectangular rails to transfer power. A larger
surface area for road construction means that less power is transferred per unit
surface area. The design is also sensitive to lateral displacement of the electric vehicles.
Moreover, a high level of magnetic field leakage occurs at both sides of the rail [10]. KAIST
proposed an improved version by adding a magnetic core with an optimized design. Com‐
pared to the transmitter rail proposed by the University of Auckland, the transfer efficiency
and transfer distance are increased. However, the construction cost is also higher.
KAIST presented an advanced coupling mechanism design and optimization technology in
their past research. In 2009, the first‐generation OLEV was successfully produced. An E‐shaped
magnetic core is used as the power transmission rail. The air gap is only 1 cm and the transfer
efficiency 80% [2]. A U‐shaped transmission rail was also proposed in the same year by
significantly increasing the transmission gap to 17 cm with an efficiency of 72%. In 2010, a
skeleton‐type W‐shaped magnetic core is proposed, thus further increasing the transfer
distance to 20 cm and efficiency to 83% [2]. From 2011 to 2015, researchers from KAIST
designed fourth‐generation I‐shaped bipolar rails and fifth‐generation S‐shaped bipolar rails
with even larger transfer gap, narrower frame, and higher efficiency [2]. With bipolar rails, the
magnetic field path is parallel to the moving direction of the vehicle instead of being orthogonal
to the moving direction. The new design is well suited for DWPT due to its advantages such
as high power density, narrow frame, and therefore lower construction complexity, robust to
lateral displacement, and lower magnetic field exposure on both sides of the rail [10–12]
(Tables 1 and 2).
In 2015, KAIST proposed using a dq two‐phase transmitter rail for cancelling the zero coupling
points along the moving direction [13]; the required control method is relatively complex.
A double loop control is implemented by detecting the phase of the primary current. The
amplitudes and phases of the d‐q currents are controlled using a phase‐locked loop and DC
chopper according to the position of the receiver.
-----
| Type | Coreless long coil | Bipolar rail |
|---|---|---|
| Merits | Even magnetic field distribution, stable power transfer, coreless, and low manufacturing cost | High power density, narrow design, robust to lateral displacement, low construction complexity, and low level of magnetic field exposure |
| Demerits | Low power density, sensitive to lateral displacement, large surface area is needed for construction, and high level of magnetic field exposure | Uneven magnetic field distribution, zero coupling point, and high cost due to the usage of ferrite core |

**Table 1. Advantages and disadvantages of commonly used powering rail.**
**Table 2. Wireless power rails and receiving pickups developed by KAIST (From generation 1 to 6).**
##### 3. Segment and power supply scheme
In order to overcome the issues of low transfer efficiency and high sensitivity to the changing
parameters in a centralized power supply system, a new segmented scheme is proposed [14].
The voltage at the 50 Hz AC bus is first stepped up to reduce transmission loss. Then, before
the segmented transmitters, the voltage is stepped down via the inverter. Constant current is
also used at the transmitters. Efficient converter topologies are also reviewed for implementing
a centralized power supply system.
(1) Centralized power supply scheme (Figure 8)
-----
With the increasing length of the transmitter rail, the bandwidth of the primary side channel
becomes narrower. Therefore, the system is more sensitive to the variations of parameters, and
the robustness is decreased. The controller for the centralized power supply is also relatively complex. The main disadvantages of this scheme are as follows:
**a.** High requirements of the components due to a single module supporting large power.
**b.** The whole rail is activated and causes high loss.
**c.** Low reliability due to any breakdown will affect the whole rail.
**d.** The efficiency is low when the load is small.
**e.** High self‐inductance and therefore high voltage across capacitor.
**f.** Highly sensitive toward the variations in parameters, causing low stability.
**Figure 8. Centralized power supply scheme.**
**Figure 9. Power frequency scheme—segmented rail mode.**
(2) Power frequency scheme—segmented rail mode (Figure 9)
The advantages of segmented rails are as follows:
**a.** Different segments can be turned on at different time periods, decreasing the power loss;
**b.** Smaller‐sized power converters;
**c.** Higher reliability, when one of the segments breaks down, other segments will still be
functioning normally;
-----
**d.** Lower self‐inductance, less sensitive to variations in parameters, and therefore increasing
the system stability.
However, segmented rails also have the following disadvantages:
**a.** A high number of converters, which are difficult to control, and high maintenance and construction costs;
**b.** High number of components is required and therefore low reliability of the whole system.
(3) High frequency scheme—segmented rail mode (Figure 10)
With segmented rails and centralized power supply, the advantages of this design are as
follows:
**a.** Lesser power converter units, easier to maintain;
**b.** Different segments can be activated at different time periods, lesser power loss;
**c.** Lower self‐inductance, less sensitive to variations in parameters, increases the system
stability.
**Figure 10. High frequency scheme—segmented rail mode.**
However, this design has the following disadvantages:
**a.** When the power supply breaks down, all of the segmented rails will stop functioning,
thus lowering the system reliability;
**b.** High loss in the cable connecting the power supply to the segmented rails;
**c.** A high‐capacity power supply and therefore demanding component requirements.
(4) High frequency and high voltage scheme and low voltage and constant current rail mode
(Figure 11).
-----
**Figure 11. High frequency and high voltage scheme—low voltage and constant current rail mode.**
(5) Combination scheme (Figure 12)
This type of rail combines the advantages of the abovementioned rails; however, the system is
complex and only suitable for a large‐scale dynamic charging system.
**Figure 12. Combined type rail scheme.**
##### 4. Circuit topologies and impedance matching
In the DWPT system, the gap between the receiver and transmitter is always changing.
Different cars have different heights with respect to the ground and the coupling coefficient
will varies significantly. Coupling coefficient is an important parameter in WPT. If the value
is too low, the efficiency may drop considerably. Contrarily, frequency splitting phenomena
may occur if the coupling coefficient is too high, and the system functions in the unstable
-----
region. Therefore, the circuit topology should be designed to be insensitive to coupling
changes.
In order to achieve a steady power supply with variations in coupling and to increase the
system stability in the light‐load region, an LCLC topology can be used. The current at the
primary is kept constant and stress on switches is reduced during on‐off. At the receiver side,
a parallel‐T configuration can increase the tolerance of the system toward coupling variation.
The proposed topology is shown in Figure 13.
**Figure 13. Circuit topology of double LCLC.**
The transmitter current is written as follows:
$$i_p = \frac{U_i - U_{r0}}{\omega_0 L_p} \tag{1}$$
With $\lambda = L_s / L_2 < 1$ as the load coefficient, the receiver output voltage is as follows:
$$U_o = \frac{U_{oc}}{\lambda} = \frac{\omega_0 k \sqrt{L_p L_s}\, i_p}{\lambda} \tag{2}$$
The output voltage is 1/λ times the receiver voltage. A step‐up voltage converter is used to
provide sufficient power when coupling is low, therefore increasing the tolerance of the system
against lateral displacement.
The voltage ratio and efficiency are given as follows:
$$G = \frac{\omega_0 M R_\lambda}{L_0 (R_\lambda + r_s) + r_0 C_p \left(\omega_0^2 M^2 + r_p (R_\lambda + r_s)\right)}$$
$$\eta = \frac{\omega_0^2 M^2 R_\lambda \lambda^2 L_0}{\left(\omega_0^2 M^2 + r_p(\lambda^2 R_\lambda + r_s)\right)\left(L_0(\lambda^2 R_\lambda + r_s) + C_p r_0\left(\omega_0^2 M^2 + r_p(\lambda^2 R_\lambda + r_s)\right)\right)} \tag{3}$$
where $r_0$ is the internal resistance of the inverter circuit, $r_p$ is the resistance of the transmitter, and $r_s$ is the resistance of the receiver.
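For intuition, the following is a minimal numeric sketch of Eqs. (1) and (2); all component values are illustrative assumptions, not measurements from the cited systems.

```python
# Numeric sketch of Eqs. (1) and (2). All values are illustrative assumptions.
import math

w0 = 2 * math.pi * 85e3    # resonant angular frequency (assumed 85 kHz)
Lp = Ls = 120e-6           # transmitter/receiver inductances (assumed)
k, lam = 0.15, 0.5         # coupling coefficient and load coefficient lambda
Ui, Ur0 = 310.0, 0.0       # inverter voltage and reflected voltage (assumed)

ip = (Ui - Ur0) / (w0 * Lp)              # Eq. (1): constant primary current
Uoc = w0 * k * math.sqrt(Lp * Ls) * ip   # open-circuit receiver voltage
Uo = Uoc / lam                           # Eq. (2): stepped up by 1/lambda
print(f"ip = {ip:.2f} A, Uo = {Uo:.1f} V")
```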
The power and efficiency curves are given in Figure 14. The efficiency is high in the low‐coupling region, which is particularly important for the DWPT application.
As shown by the curves in Figure 15, the efficiency and power are significantly improved for
different loads and coupling coefficient compared to series topology.
-----
**Figure 14. Efficiency and voltage gain vs. coupling coefficient.**
**Figure 15. Power and efficiency of the two kinds of structure vs. coupling coefficient.**
While designing the circuit of WPT, the compensation is performed under no‐load condition.
In normal operating condition, frequency tracking is used to ensure resonance by keeping the
same phase between primary voltage and primary current [12]. Besides, to ensure the EMC
and system stability, control is used to achieve constant current. The magnetic field from the
transmitter is in steady state. For example, in the WPT system developed by KAIST, the input
voltage of the inverter is adjusted using a three‐phase thyristor converter, shown in Figure 16, to achieve constant current at the transmitter.
-----
**Figure 16. Diagram of the KAIST IPTS showing a power inverter, a power supply rail, and a pickup.**
For the secondary side, in order to realize constant current, constant voltage, or constant power,
a DC/DC converter is usually implemented. Figures 17 and 18 show the DC/DC converters
used in the WPT systems of the University of Auckland and KAIST [15, 16].
**Figure 17. Secondary DC/DC converter.**
**Figure 18. Functional diagram of OLEV power receiver system.**
-----
Figure 19 shows a secondary‐side circuit which consists of both a controllable rectifier and a DC/DC converter. SPWM synchronous
rectification is employed at the controllable rectifier. The duty cycle of the rectifier is regulated through SPWM, so the effective
resistance can be adjusted in the range of R_load ∼ ∞, while for a boost converter the effective resistance can be in the range of 0 ∼ ∞.
Therefore, any desired value of the effective resistance can be realized to improve the overall system efficiency.
**Figure 19. Dynamic impedance adjustment for secondary side pickups.**
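To make the adjustable-resistance claim concrete, the sketch below evaluates the textbook idealization of a boost converter's input-side resistance in continuous conduction mode, R_eff = R_load(1 − D)²; this standard relation is an assumption added here, not a formula from the chapter. Sweeping the duty cycle D toward 1 drives R_eff toward zero, which together with the SPWM rectifier stage (R_load to ∞) covers the full range.

```python
# Effective input resistance of an ideal boost converter in CCM:
# R_eff = R_load * (1 - D)**2 (standard idealized relation, assumed here).
R_load = 20.0  # ohm, illustrative load value
for D in (0.0, 0.25, 0.5, 0.75, 0.9):
    print(f"D = {D:.2f} -> R_eff = {R_load * (1 - D) ** 2:5.2f} ohm")
```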
##### 5. Control strategies
Three types of control were proposed for DWPT: primary control, secondary control, and
double‐side control. The University of Auckland proposed adjusting the duty cycle of the
inverter to control primary resonant current, simplifying the system configuration [17]. KAIST
designed constant current control at the primary. A DC/DC converter is added before the
inverter, and the DC voltage from the main line is adjusted to achieve constant current for
different loads [13]. The main objective of primary control is to produce a constant magnetic
field, on which robust power control can then be implemented. The University of Tokyo utilizes
a secondary control strategy. A buck converter is added after the rectifier [4]. General state space
averaging (GSSA) is used to construct the small‐signal model. Constant power or maximum
efficiency is then realized using PI pole placement [18]. In addition, controllable rectifier and
hysteresis comparator are also proposed for implementation at the secondary side to control
the output power or maximum efficiency [19]. Double‐side control can be with or without
communication. ORNL combines the control of both sides, using a closed loop control and
frequency adjustment with communication to realize wireless charging [3]. The University of
Hong Kong proposed simultaneous control of both power and maximum efficiency without
communication. The smallest input power is searched to realize constant output power of the
inverter [20] (Table 3).
-----
| Control strategy | Merits | Demerits |
|---|---|---|
| Primary control | Constant current in transmitter, steady magnetic field, no need to consider reflected impedance | Unable to control for maximum efficiency; limited control of output load; constant-current charging is not realizable |
| Secondary control | Constant charging current, constant charging voltage, or maximum efficiency | Adjustable range of the secondary side is limited, and an accurate model is required |
| Both-side control without closed-loop communication | Both desired power and maximum efficiency are achievable simultaneously | Conflicting control between primary side and secondary side |
| Both-side control with closed-loop communication | Both desired power and maximum efficiency are achievable simultaneously | Additional wireless communication is required, lowering system reliability and real-time performance |
**Table 3. Comparison of advantages and disadvantages of various control strategies.**
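As a loose illustration of primary-side constant-current regulation (in the spirit of, but not reproducing, the KAIST scheme in Table 3), the sketch below closes a discrete-time PI loop around an assumed first-order plant; the plant model, gains, and set-point are all hypothetical.

```python
import numpy as np

def simulate_primary_current_pi(i_ref=40.0, kp=0.8, ki=120.0,
                                tau=2e-3, dt=1e-4, steps=1000):
    """Discrete-time PI regulation of the track current.  The track is
    modeled as an assumed first-order lag  tau * di/dt = v - i  driven
    by the converter output v (a toy plant, not a validated IPT model)."""
    i, integ = 0.0, 0.0
    trace = []
    for _ in range(steps):
        err = i_ref - i                    # current error
        integ += err * dt                  # integral of the error
        v = kp * err + ki * integ          # PI control law
        i += dt * (v - i) / tau            # explicit Euler plant update
        trace.append(i)
    return np.array(trace)

current = simulate_primary_current_pi()
print(f"track current after {len(current)*1e-4:.2f} s: {current[-1]:.2f} A "
      "(set-point 40 A)")
```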
The DWPT system is subject to disturbances such as variation of the mutual inductance caused by movement of the vehicles. New robust control strategies, which outperform PID controllers [4, 18, 19] in disturbance suppression, are currently being studied.
##### 6. Electromagnetic interference
The DWPT uses a high-frequency, strong magnetic field to transfer power wirelessly. EMC is an important consideration, as the DWPT system is surrounded by many sensitive electronic circuits. The requirements include shielding design, frequency allocation, and grounding design. According to the standard set by the International Commission on Non-Ionizing Radiation Protection (ICNIRP), the current-density limit for public exposure is 200 mA/m² at a frequency of 100 kHz; higher values may affect the human nervous system. The limit on specific absorption rate (SAR) is 2 W/kg and on power density is 10 W/m²; if the exposure of the human body exceeds these limits, heating of human tissue may occur (Table 4).
| Shielding method | Merits | Demerits |
|---|---|---|
| Metal conductor | Fully enclosed metal conductor housing provides excellent shielding effect | Eddy loss affecting the system efficiency |
| Magnetic material | Magnetic field shaping, increasing coupling coefficient and therefore low loss | Limited shielding effect |
| Active shielding | Flexible placement, good shielding effect | Additional coil lowers the system efficiency |
| Resonant reactive shielding | Does not consume power from the system, controllable | Difficult to design, complex configuration |
**Table 4. Comparison of merit and demerit of various magnetic shielding methods.**
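A minimal helper for checking measurements against the limits quoted above; the limit values follow the text, while the function and the sample measurements are illustrative.

```python
# Illustrative compliance check against the exposure limits quoted above
# (ICNIRP: 200 mA/m^2 induced current density at 100 kHz for the general
# public; SAR limit 2 W/kg; power density limit 10 W/m^2).
LIMITS = {
    "current_density_mA_per_m2": 200.0,
    "sar_W_per_kg": 2.0,
    "power_density_W_per_m2": 10.0,
}

def within_limits(measured):
    """Return a dict mapping each quantity to True if it is within limit."""
    return {name: value <= LIMITS[name] for name, value in measured.items()}

print(within_limits({
    "current_density_mA_per_m2": 150.0,   # hypothetical measurement
    "sar_W_per_kg": 0.4,                  # hypothetical measurement
    "power_density_W_per_m2": 12.0,       # exceeds the 10 W/m^2 limit
}))
```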
The suppression of the leakage field can be divided into active shielding and passive shielding. In passive shielding, a magnetic path is created using magnetic material, or a canceling field is created using a low-magnetic-permeability metallic conductor [21–23]. The self-inductance and mutual inductance are increased when using magnetic material; the magnetic flux distribution is improved due to a higher coupling coefficient, and the transfer loss is decreased. However, the shielding effect is limited. Metallic shields are widely used in high-frequency magnetic fields to suppress electromagnetic interference; both KAIST and ORNL utilize this kind of shielding method. The advantages include simple design and ease of use. However, metallic shielding cannot cover the transmitter and receiver completely, and the exposed conductor is subject to friction and eddy currents, which increase the heat loss. KAIST proposed a new active shielding method in 2015, in which a conventional ferrite plate is embedded in multiple metallic sheets as shown in Figure 20. Experimental results show that the magnetic interference is effectively reduced [24].
**Figure 20. Ferrite shielding structure using an embedded metal sheet.**
Regarding active shielding, additional coils with or without power supply are implemented
at the WPT system to create a cancelling field as shown in Figure 21. Compared to metallic
shielding, the space required is smaller.
**Figure 21. Magnetic field cancellation using a resonant coil.**
KAIST published a paper in 2013 proposing an active shielding method using a resonant coil. A switching array is used to change the values of the compensation capacitors, thereby controlling the amplitude and phase of the canceling field; an experiment was performed on green public transportation [25]. In 2015, an improved version using a double loop and phase adjustment to achieve resonance was proposed, realizing active shielding without a power supply. The shielding coils are placed at the side of the coupling mechanism as shown in Figure 22. The current induced by the leakage field is sensed, and a magnetic field with the same amplitude but opposite polarity to the leakage field is then created for field cancellation [26].
**Figure 22. Resonant reactive power shielding with double coils and four capacitors.**
In 2013, ORNL proposed using an aluminum board to reduce electromagnetic interference
[27]. As shown in Figure 23, a 1‐mm‐thick aluminum shield is placed above the cables. The
magnetic field measured at the passenger‐side front tire is reduced from 18.72 μT to 3.22 μT.
**Figure 23. Suppression of magnetic field after adding aluminum plate and its effect.**
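For reference, the reported field reduction can be restated as a shielding effectiveness in decibels via the standard definition SE = 20·log10(B_unshielded / B_shielded); the sketch below applies it to the quoted ORNL values.

```python
import math

def shielding_effectiveness_db(b_unshielded, b_shielded):
    """Shielding effectiveness in dB: SE = 20*log10(B0/B1)."""
    return 20.0 * math.log10(b_unshielded / b_shielded)

# ORNL measurement at the passenger-side front tire (fields in tesla):
se = shielding_effectiveness_db(18.72e-6, 3.22e-6)
print(f"1 mm aluminum shield: SE = {se:.1f} dB")  # about 15.3 dB
```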
##### 7. Conclusions
With the advancement of EV and ET, the significance of DWPT has been consistently growing. Recent developments in DWPT for EV and ET have been presented throughout this chapter. Five different aspects of this technology are reviewed: power rail and pickup design, power supply schemes, circuit topologies and impedance matching, control strategies, and EMC. Despite the significant results obtained in this field, some issues of concern remain unresolved. Previous results as well as the challenges in deploying DWPT in real applications have been highlighted in this chapter.
##### Author details
Kai Song[1*], Kim Ean Koh[2], Chunbo Zhu[1], Jinhai Jiang[1], Chao Wang[1] and Xiaoliang Huang[2]
*Address all correspondence to: kaisong@hit.edu.cn
1 School of Electrical Engineering, Harbin Institute of Technology, Harbin, China
2 Department of Electrical Engineering, The University of Tokyo, Tokyo, Japan
##### References
[1] Chen L, Nagendra G.R, Boys J.T, Covic G.A. Double‐Coupled Systems for IPT Roadway
Applications. IEEE Journal of Emerging and Selected Topics in Power Electronics.
2015;3(1):37–49. DOI: 10.1109/JESTPE.2014.2325943
[2] Choi S.Y, Gu B.W, Jeong S.Y, Rim C.T. Advances in Wireless Power Transfer Systems
for Roadway‐Powered Electric Vehicles. IEEE Journal of Emerging and Selected Topics
in Power Electronics. 2015;3(1):18–36. DOI: 10.1109/JESTPE.2014.2343674
[3] Miller J.M, Onar O.C, Chinthavali M. Primary‐Side Power Flow Control of Wireless
Power Transfer for Electric Vehicle Charging. IEEE Journal of Emerging and Selected
Topics in Power Electronics. 2015;3(1):147–162. DOI: 10.1109/JESTPE.2014.2382569
[4] Kobayashi D, Imra T, Hori Y. Real‐time coupling coefficient estimation and maximum
efficiency control on dynamic wireless power transfer for electric vehicles. In: IEEE
WoW 2015 ‐ IEEE PELS Workshop on Emerging Technologies: Wireless Power,
Proceedings ; June 5, 2015 ‐ June 6, 2015; Daejeon, Korea. Piscataway, United States:
Institute of Electrical and Electronics Engineers Inc.; 2015. p. 1–6. DOI: 10.1109/WoW.
2015.7132799
[5] Kim J.H, Lee B.S, Lee J.H, Lee S.H, Park C.B, Jung S.M. Development of 1‐MW Inductive
Power Transfer System for a High‐Speed Train. IEEE Transactions on Industrial
Electronics. 2015;62(10):6242–6250. DOI: 10.1109/TIE.2015.2417122
[6] Ukita K, Kashiwagi T, Sakamoto Y, Sasakawa T. Evaluation of a non‐contact power
supply system with a figure‐of‐eight coil for railway vehicles. In: IEEE WoW 2015 ‐
IEEE PELS Workshop on Emerging Technologies: Wireless Power, Proceedings; June
5, 2015 ‐ June 6, 2015; Daejeon, Korea. Piscataway, United States: Institute of Electrical
and Electronics Engineers Inc.; 2015. p. 1–6. DOI: 10.1109/WoW.2015.7132807
[7] Winter J, Mayer S, Kaimer S, Seitz P, Pagenkopf J, Streit S. Inductive power supply for heavy rail vehicles. In: 2013 3rd International Electric Drives Production Conference, EDPC 2013 - Proceedings; October 29, 2013 - October 30, 2013; Nuremberg, Germany. Piscataway, United States: IEEE Computer Society; 2013. p. 1–9. DOI: 10.1109/EDPC.2013.6689749
[8] Song K, Zhu C, Li Y, Guo Y, Jiang J, Zhang J. Wireless power transfer technology for
electric vehicle dynamic charging using multi‐parallel primary coils. Zhongguo Dianji
Gongcheng Xuebao/Proceedings of the Chinese Society of Electrical Engineering.
2015;35(17):4445–4453. DOI: 10.13334/j.0258‐8013.pcsee.2015.17.020
[9] Zhu C, Song K, Wei G, Zhang Q. Novel power receiver for dynamic wireless power transfer system. In: Industrial Electronics Society, IECON 2015 - 41st Annual Conference of the IEEE; 9 November 2015 - 12 November 2015; Yokohama, Japan. Piscataway, United States: Institute of Electrical and Electronics Engineers Inc.; 2015. p. 002247–002251. DOI: 10.1109/IECON.2015.7392436
[10] Chen L, Nagendra G.R, Boys J.T, Covic G.A. Ultraslim S‐Type Power Supply Rails for
Roadway‐Powered Electric Vehicles. IEEE Transactions on Power Electronics.
2015;30(11):6456–6468. DOI: 10.1109/TPEL.2015.2444894
[11] Lee W.Y, Huh J, Choi S.Y, Thai X.V, Kim J.H, Al‐Ammar E.A. Finite‐Width Magnetic
Mirror Models of Mono and Dual Coils for Wireless Electric Vehicles. IEEE Transac‐
tions on Power Electronics. 2013;28(3):1413–1428. DOI: 10.1109/TPEL.2012.2206404
[12] Huh J, Lee S.W, Lee W.Y, Cho G.H, Rim C.T. Narrow‐Width Inductive Power Transfer
System for Online Electrical Vehicles. IEEE Transactions on Power Electronics.
2011;26(12):3666–3679. DOI: 10.1109/TPEL.2011.2160972
[13] Park C, Lee S, Jeong S.Y, Cho G.H, Rim C.T. Uniform Power I‐Type Inductive Power
Transfer System With DQ Power Supply Rails for On‐Line Electric Vehicles. IEEE
Transactions on Power Electronics. 2015;30(11):6446–6455. DOI: 10.1109/TPEL.
2015.2420372
[14] Tian Y. Research on Key Issues of Sectional Track‐Based Wireless Power Supply
Technology for Electric Vehicles [thesis]. Chongqing, China: Chongqing University;
2012.
[15] Keeling N.A, Covic G.A, Boys J.T. A Unity‐Power‐Factor IPT Pickup for High‐Power
Applications. IEEE Transactions on Industrial Electronics. 2010;57(2):744–751. DOI:
10.1109/TIE.2009.2027255
[16] Shin J, Shin S, Kim Y, Ahn S, Lee S, Jung G, et al. Design and Implementation of Shaped
Magnetic‐Resonance‐Based Wireless Power Transfer System for Roadway‐Powered
Moving Electric Vehicles. IEEE Transactions on Industrial Electronics. 2014;61(3):1179–
1192. DOI: 10.1109/TIE.2013.2258294
[17] Abdolkhani A, Hu A.P. Improved autonomous current‐fed push‐pull resonant
inverter. Iet Power Electronics. 2014;7(8):2103–2110. DOI: 10.1049/iet‐pel.2013.0749
[18] Hata K, Imura T, Hori Y. Maximum efficiency control of wireless power transfer via
magnetic resonant coupling considering dynamics of DC‐DC converter for moving
electric vehicles. In: Conference Proceedings ‐ IEEE Applied Power Electronics
Conference and Exposition ‐ APEC; March 15, 2015 ‐ March 19, 2015; Charlotte, NC,
United states. Piscataway, United States: Institute of Electrical and Electronics Engi‐
neers Inc.; 2015. p. 3301–3306. DOI: 10.1109/APEC.2015.7104826
[19] Gunji D, Imura T, Fujimoto H. Envelope model of load voltage on series‐series
compensated wireless power transfer via magnetic resonance coupling. In: IEEE WoW
2015 ‐ IEEE PELS Workshop on Emerging Technologies: Wireless Power, Proceedings;
June 5, 2015 ‐ June 6, 2015; Daejeon, Korea. Piscataway, United States: Institute of
Electrical and Electronics Engineers Inc.; 2015. p. 1–6. DOI: 10.1109/WoW.2015.7132853
[20] Zhong W.X, Hui S.Y.R. Maximum Energy Efficiency Tracking for Wireless Power
Transfer Systems. IEEE Transactions on Power Electronics. 2015;30(7):4025‐4034. DOI:
10.1109/TPEL.2014.2351496
[21] Kim J, Kim H, Kim M, Ahn S, Kim J, Kim J. Analysis of EMF noise from the receiving coil topologies for wireless power transfer. In: 2012 Asia-Pacific Symposium on Electromagnetic Compatibility, APEMC 2012 - Proceedings; May 21, 2012 - May 24, 2012; Singapore, Singapore. Piscataway, United States: IEEE Computer Society; 2012. p. 645–648. DOI: 10.1109/APEMC.2012.6237964
[22] Kim H, Song C, Kim J, Kim J. Shielded Coil Structure Suppressing Leakage Magnetic
Field from 100W‐Class Wireless Power Transfer System with Higher Efficiency.
International Microwave Workshop Series on Innovative Wireless Power Transmis‐
sion: Technologies. 2012;16(3):83–86. DOI: 10.1109/IMWS.2012.6215825
[23] Ahn S, Park H.H, Choi C.S, Kim J, Song E, Paek H.B, et al. Reduction of electromagnetic
field (EMF) of wireless power transfer system using quadruple coil for laptop appli‐
cations. In: 2012 IEEE MTT‐S International Microwave Workshop Series on Innovative
Wireless Power Transmission: Technologies, Systems, and Applications, IMWS‐IWPT
2012 ‐ Proceedings; May 10, 2012 ‐ May 11, 2012; Kyoto, Japan. Piscataway, United
States: IEEE Computer Society; 2012. p. 65–68. DOI: 10.1109/IMWS.2012.6215821
[24] Park H.H, Lwon J.H, Kwak S.I, Ahn S. Magnetic Shielding Analysis of a Ferrite Plate
with a Periodic Metal Strip. IEEE Transactions on Magnetics. 2015;51(8):1–8. DOI:
10.1109/TMAG.2015.2425796
[25] Kim J, Kim J, Kong S, Kim H, Suh I.S, Suh N.P, et al. Coil Design and Shielding Methods
for a Magnetic Resonant Wireless Power Transfer System. Proceedings of the IEEE.
2013;101(6):1332–1342. DOI: 10.1109/JPROC.2013.2247551
[26] Moon H, Kim S, Park H.H, Ahn S. Design of a Resonant Reactive Shield With Double
Coils and a Phase Shifter for Wireless Charging of Electric Vehicles. IEEE Transactions
on Magnetics. 2015;51(3):1–4. DOI: 10.1109/TMAG.2014.2360701
[27] Ahn S, Park J, Song T, Lee H, Byun J, Kang D, et al. Low frequency electromagnetic
field reduction techniques for the On‐Line Electric Vehicle (OLEV). In: IEEE Interna‐
tional Symposium on Electromagnetic Compatibility; July 25, 2010 ‐ July 30, 2010; Fort
Lauderdale, FL, United states. Piscataway, United States: Institute of Electrical and
Electronics Engineers Inc.; 2010. p. 625–630. DOI: 10.1109/ISEMC.2010.5711349
|
{
"disclaimer": "Notice: The following paper fields have been elided by the publisher: {'abstract', 'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.5772/64331?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.5772/64331, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "BRONZE",
"url": "https://www.intechopen.com/citation-pdf-url/51247"
}
| 2,016
|
[
"Review"
] | true
| 2016-06-29T00:00:00
|
[] | 9,236
|
|
en
|
[
{
"category": "Physics",
"source": "external"
},
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Mathematics",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
},
{
"category": "Physics",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/00f6cf73c647f116d7a23d17e8a4c45a0555dd81
|
[
"Physics",
"Computer Science",
"Mathematics"
] | 0.859475
|
Phase transitions in distributed control systems with multiplicative noise
|
00f6cf73c647f116d7a23d17e8a4c45a0555dd81
|
arXiv.org
|
[
{
"authorId": "8054025",
"name": "N. Allegra"
},
{
"authorId": "1713790",
"name": "Bassam Bamieh"
},
{
"authorId": "48850725",
"name": "P. Mitra"
},
{
"authorId": "5225213",
"name": "C. Sire"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"ArXiv"
],
"alternate_urls": null,
"id": "1901e811-ee72-4b20-8f7e-de08cd395a10",
"issn": "2331-8422",
"name": "arXiv.org",
"type": null,
"url": "https://arxiv.org"
}
|
Contemporary technological challenges often involve many degrees of freedom in a distributed or networked setting. Three aspects are notable: the variables are usually associated with the nodes of a graph with limited communication resources, hindering centralized control; the communication is subject to noise; and the number of variables can be very large. These three aspects make tools and techniques from statistical physics particularly suitable for the performance analysis of such networked systems in the limit of many variables (analogous to the thermodynamic limit in statistical physics). Perhaps not surprisingly, phase-transition like phenomena appear in these systems, where a sharp change in performance can be observed with a smooth parameter variation, with the change becoming discontinuous or singular in the limit of infinite system size. In this paper, we analyze the so called network consensus problem, prototypical of the above considerations, that has previously been analyzed mostly in the context of additive noise. We show that qualitatively new phase-transition like phenomena appear for this problem in the presence of multiplicative noise. Depending on dimensions, and on the presence or absence of a conservation law, the system performance shows a discontinuous change at a threshold value of the multiplicative noise strength. In the absence of the conservation law, and for graph spectral dimension less than two, the multiplicative noise threshold (the stability margin of the control problem) is zero. This is reminiscent of the absence of robust controllers for certain classes of centralized control problems. Although our study involves a ‘toy’ model, we believe that the qualitative features are generic, with implications for the robust stability of distributed control systems, as well as the effect of roundoff errors and communication noise on distributed algorithms.
|
## Phase transitions in distributed control systems with multiplicative noise
**Nicolas Allegra[1,2], Bassam Bamieh[1], Partha Mitra[3] and Clément Sire[4]**
1 Department of Mechanical Engineering, University of California, Santa Barbara, USA
3 Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, USA
4 Laboratoire de Physique Théorique, Université de Toulouse, UPS, CNRS, F-31062 Toulouse, France
**Abstract.** Contemporary technological challenges often involve many degrees of freedom in a distributed
or networked setting. Three aspects are notable: the variables are usually associated with the nodes of a
graph with limited communication resources, hindering centralized control; the communication is subject
to noise; and the number of variables can be very large. These three aspects make tools and techniques from
statistical physics particularly suitable for the performance analysis of such networked systems in the limit
of many variables (analogous to the thermodynamic limit in statistical physics). Perhaps not surprisingly,
phase-transition like phenomena appear in these systems, where a sharp change in performance can be
observed with a smooth parameter variation, with the change becoming discontinuous or singular in the
limit of infinite system size.
In this paper we analyze the so called network consensus problem, prototypical of the above considerations,
that has been previously analyzed mostly in the context of additive noise. We show that qualitatively new
phase-transition like phenomena appear for this problem in the presence of multiplicative noise. Depending
on dimensions and on the presence or absence of a conservation law, the system performance shows a
discontinuous change at a threshold value of the multiplicative noise strength. In the absence of the
conservation law, and for graph spectral dimension less than two, the multiplicative noise threshold (the
stability margin of the control problem) is zero. This is reminiscent of the absence of robust controllers for
certain classes of centralized control problems. Although our study involves a “toy” model, we believe that the qualitative features are generic, with implications for the robust stability of distributed control systems, as well as the effect of roundoff errors and communication noise on distributed algorithms.
PACS numbers: 05.10.-a, 02.30.Yy, 89.75.Fb
Submitted to: JSM
**Contents**
**1 Introduction**
1.1 Example of a simple Laplacian consensus algorithm
1.2 Plan of the article
**2 Consensus algorithms in a random environment**
2.1 Definition of the system and conservation laws
2.2 Correlations induced by exact conservation
2.2.1 Asymmetric links
2.2.2 Symmetric links
2.2.3 Isotropic links
2.2.4 Gain noise
2.3 Average conservation law
**3 Threshold behavior of the network coherence**
3.1 Phase diagram of the exactly conserved model
3.1.1 Noise threshold in any dimension
3.1.2 Finite size dependence of the coherence
3.1.3 Exponential regime above the noise threshold
3.2 Phase diagram of the average conserved model
3.2.1 Noise threshold in high dimensions
3.2.2 Robustness of the coherence and finite-size dependence
3.3 A word on the other forms of symmetry
**4 Summary and conclusions**
**1. Introduction**
Phase transition phenomena have played a central role in twentieth century theoretical physics, ranging from
condensed matter physics to particle physics and cosmology. In recent decades, phase transition phenomena,
corresponding to non-analytic behavior arising from large system-size limit in systems of many interacting
variables, have increasingly appeared in the engineering disciplines, in communications and computation
[1], robotics [2, 3], control theory [4, 5] and machine learning. Related phenomena have also been noted in
behavioral biology [6], social sciences [7] and economics [8, 9]. It is not as widely appreciated that phase
transition-like behavior may also be observed in distributed control systems (or distributed algorithms) with
many variables, for similar mathematical reasons that they appear in statistical physics, namely the presence
of many interacting degrees of freedom.
Problems in a variety of engineering areas that involve interconnections of dynamic systems are closely
related to consensus problems for multi-agent systems [10]. The problem of synchronization of coupled
oscillators [11] has attracted numerous scientists from diverse fields ([12] for a review). Flocks of mobile agents
equipped with sensing and communication devices can serve as mobile sensor networks for massive distributed
sensing [13, 2, 4]. In recent years, network design problems for achieving faster consensus algorithms have attracted considerable attention from a number of researchers [14]. Another common form of consensus
problems is rendezvous in space [15, 16]. This is equivalent to reaching a consensus in position by a number
of agents with an interaction topology that is position induced. Multi-vehicle systems are also an important
category of networked systems due to their technological applications [17]. Recently, consensus algorithms have been generalized to quantum systems [18], opening new research directions towards distributed quantum information applications [19].
In contrast to the above mentioned work, a relatively less studied problem is that of robustness or
resilience of the algorithms to uncertainties in models, environments or interaction signals. This is the central
question in the area of Robust Control, but it has been relatively less studied in the context of distributed control
systems. This issue is quite significant since algorithms that work well for a small number of interacting
subsystems may become arbitrarily fragile in the large-scale limit. Thus, of particular interest to us in this
paper is how large-scale distributed control algorithms behave in various uncertainty scenarios. Additive
noise models have been studied in the context of networked consensus algorithms [20, 21], including scaling
limits for large networks [22, 23]. More relevant to the present work are uncertainty models where link or
node failures, message distortions, or randomized algorithms [24, 25, 26, 27, 28] are modeled by multiplicative
noise. In this latter case, the phenomenology is much richer than the case of only additive noise. The basic
building block of multi-agent and distributed control systems is the so-called consensus algorithm. In this
paper we develop a rather general model of consensus algorithms in random environments that covers the
above mentioned uncertainty scenarios, and we study its large-size scaling limits for d-dimensional lattices.
We now motivate the problem formulation of this paper using a simple version of the consensus algorithm.
_1.1. Example of a simple Laplacian consensus algorithm_
We consider a local, linear first-order consensus algorithm over an undirected, connected network modeled by an undirected graph G with N nodes and M edges. We denote the adjacency matrix of G by A, and D is the degree matrix. The Laplacian matrix of the graph G is denoted by L and is defined as L = D − A. In the first-order consensus problem, each node i has a single state u_i(t). The state of the entire system at time t is given by the vector u(t) ∈ R^N. Each node state is subject to stochastic disturbances, and the objective is for the nodes to maintain consensus at the average of their current states. The dynamics of this system in continuous time is given by

u̇(t) = −αLu(t) + n(t),   (1)

where α is the gain on the communication links, and n(t) ∈ R^N is a white noise. This model covers a large number of systems. A few examples are
number of systems. A few examples are
_• In flocking problems ui(t) is the current heading angle of a vehicle or agent [6]. L is the Laplacian of_
the agents’ connectivity network determined by e.g. a distance criterion. The disturbance ni(t) models
random forcing on the i’th agent.
_• In load balancing algorithms for a distributed computation network ui(t) is the current load on a_
computing node. L is the Laplacian of the connectivity graph [29, 30] . The disturbance ni(t) models
arriving (when positive) or completed (when negative) jobs.
• In distributed sensing networks [13, 2, 4], the i'th sensor makes a local measurement u_i(0) of a global quantity. The aim is for the sensors to communicate and agree on the value of the sensed quantity without having a central authority; thus they communicate locally based on a connectivity graph with L as its Laplacian. The dynamics (1) represents averaging of each sensor with its neighbors, while the disturbances n_i(t) can represent the effective noise in communicating with neighbors. The aim is for all sensors to reach the same estimate (which would be the initial network mean) asymptotically, i.e. for each i, lim_{t→∞} u_i(t) = (1/N) Σ_i u_i(0) (a simulation sketch of these dynamics follows this list).
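As a concrete illustration of these dynamics, the following minimal sketch integrates Eq. (1) on an undirected ring with an Euler-Maruyama step; the graph, noise level, and step size are illustrative choices, not taken from the cited works.

```python
import numpy as np

rng = np.random.default_rng(0)

def ring_laplacian(n):
    """Graph Laplacian L = D - A of an undirected ring of n nodes."""
    lap = 2.0 * np.eye(n)
    for i in range(n):
        lap[i, (i - 1) % n] -= 1.0
        lap[i, (i + 1) % n] -= 1.0
    return lap

def simulate(n=100, alpha=1.0, sigma_n=0.1, dt=0.01, steps=20_000):
    """Euler-Maruyama integration of du = -alpha*L*u dt + dW."""
    lap = ring_laplacian(n)
    u = rng.normal(size=n)           # random initial opinions/loads
    for _ in range(steps):
        noise = sigma_n * np.sqrt(dt) * rng.normal(size=n)
        u += -alpha * dt * (lap @ u) + noise
    return u

u = simulate()
print("spatial variance around the mean:", np.var(u - u.mean()))
```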
We should note that in much of the literature, the algorithm Eq. (1) has been studied for finite systems and in the absence of disturbances n(t). In the large-scale limit, however, there are significant differences in phenomenology between the disturbance-free and the uncertain scenarios. In the absence of disturbances and for a connected graph, the system Eq. (1) converges asymptotically to the average of the initial states. With the additive noise term, the nodes do not converge to consensus; instead, node values fluctuate around the average of the current node states. Let us denote the spatial average across the network ("network mean") of the sites by m(t):
m(t) = (1/N) Σ_{k=1}^{N} u_k(t).   (2)
In the absence of noise, m(t) = m(0) for a finite graph. Although m(t) shows a diffusive behavior for finite-sized networks in the presence of additive noise, the diffusion coefficient is proportional to 1/N, so that in the infinite-lattice-size limit one still obtains m(t) = m(0). This can be seen by spatially averaging the consensus equation. The noise-free dynamics preserves the spatial average (equivalently, the graph Laplacian is chosen to be left stochastic), so in the absence of noise ṁ(t) = 0. In the presence of the additive noise,

ṁ_N(t) = (1/N) Σ_i n_i(t).   (3)
Assuming the noise is uncorrelated between sites, this means that var(m_N(t)) ∝ t/N, so that lim_{N→∞} var(m_N(t)) = 0. Thus, in the infinite-lattice-size limit, m(t) = m(0). Note that this continues to be the case for the multiplicative noise model discussed in the paper when there is a conservation law that preserves the network mean in the absence of the additive noise component. The spatial variance across the lattice sites has been called the "network coherence" in previous work [23]:
C_N^∞ := lim_{t→∞} (1/N) Σ_{i=1}^{N} var(u_i(t) − m(t)).   (4)

Note that in the infinite-size limit for the consensus problem with additive noise m(t) = m(0), so the spatial variance ("network coherence") also characterizes the variance of the individual site variables from the desired mean m(0) at the initial time. If this variance is finite in the limit of large N, then under the consensus dynamics the individual lattice sites settle down to stationary fluctuations around the desired initial spatial mean, and this quantity can be recovered from a single site at long times by taking a time average.
It has been shown that C_N^∞ is completely determined by the spectrum of the matrix L (see [10] for a review). Let the eigenvalues of L be denoted by 0 = λ_1 < ... < λ_N. The network coherence is then equal to

C_N^∞ := (1/(2αN)) Σ_{i=2}^{N} 1/λ_i.   (5)
A classical result [23] shows that, if one considers the network to be Z^d, then the large-scale (N → ∞) properties of the coherence show the following behavior:

C_N^∞ ∼ N in d = 1,   log N in d = 2,   1 in d > 2.   (6)
Compared to a perfect Laplacian algorithm (without additive noise), the addition of disturbances makes the system unable to reach the global average in low dimensions. In higher dimensions, the network coherence becomes finite and the algorithm performs a statistical consensus at large times. This result shows that there are fundamental limitations to the efficiency of the algorithm for a large-scale system in the presence of additive noise in low dimensions. Let us mention that this algorithm can be extended to a second-order dynamics where each node has two state variables u(t) and v(t). Such system dynamics arise in the problem of autonomous vehicle formation control (the state of a node is given by the position and velocity of the vehicle). The vehicles attempt to maintain a specified formation traveling at a fixed velocity while subject to stochastic external perturbations; a similar coherence definition can be introduced, the scaling of the coherence with N has been computed [23], and similar dimensional limitations have been unraveled.
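The scalings in Eq. (6) can be checked directly from the spectral formula Eq. (5). The sketch below does so for the nearest-neighbour Laplacian on a d-dimensional torus (an illustrative choice of graph), whose eigenvalues are sums of ring eigenvalues 2 − 2 cos(2πk/L):

```python
import numpy as np

def coherence_torus(L, d, alpha=1.0):
    """Evaluate Eq. (5), C_N^inf = (1/(2 alpha N)) sum_{i>=2} 1/lambda_i,
    for the nearest-neighbour Laplacian on a d-dimensional torus of side
    L (N = L**d nodes)."""
    lam_1d = 2.0 - 2.0 * np.cos(2.0 * np.pi * np.arange(L) / L)
    lam = sum(np.meshgrid(*([lam_1d] * d), indexing="ij")).ravel()
    lam = lam[lam > 1e-12]        # drop the single zero mode lambda_1
    return (1.0 / lam).sum() / (2.0 * alpha * L**d)

for d in (1, 2, 3):
    print(f"d={d}:", [round(coherence_torus(L, d), 3) for L in (8, 16, 32)])
# The output grows linearly with N for d = 1, logarithmically for d = 2,
# and saturates for d = 3, matching Eq. (6).
```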
_Robustness and Multiplicative Noise_
In engineering systems, N may be large but perhaps not as large as might be considered in condensed matter
problems. In the distributed control setting, one can give a robustness interpretation of asymptotic limits
like Eq. (6). For example, in the d = 1 case (which is relevant to the so-called automated vehicular platoons
problem [23]) one can say that as the formation size increases, it becomes susceptible to arbitrarily small
forcing disturbances. A more proper robustness interpretation however requires considering scenarios with
_multiplicative noise as a model for uncertain system dynamics, and not just uncertain forcing or measurement_
noise (which is typically modeled with additive noise). Consider the following more general version of Eq. (1):
u̇(t) = −αL(t) u(t) + n(t),   (7)
where the Laplacian L(t) is now a time-varying matrix with some “structured” randomness. This is now a
multiplicative noise model since the randomness multiplies the state u(t). This is used to model effects of
random environments such as networked algorithms in which links or nodes may fail randomly, randomly
varying system parameters, round-off error in floating-point arithmetic, or more generally any random errors
that are proportional to the current state [24, 25, 26, 27, 28]. If arbitrarily small probabilities of error in
_L(t) can produce large effects in the dynamics of Eq. (7) then the algorithms are fragile and lack robustness._
The main question in this paper is to study a fairly general model of networked algorithms in random environments like Eq. (7). In particular, we study scaling limits in d-dimensional lattices, and characterize how the system responds to multiplicative noise and whether (or not) the algorithm can remain stable and perform its task once some arbitrarily small multiplicative noise is incorporated. It is highly unlikely that taking account of multiplicative noise (with variance σ²) is going to improve the scaling in low dimensions, but the challenge is to characterize the robustness of the algorithm in high dimensions. This measure of robustness can be quantified by the amount of noise variance σ² that the system can sustain before hitting a threshold σ_c² above which the algorithm becomes unstable and the coherence grows unboundedly. The range of noise

0 ⩽ σ² < σ_c²,   (8)

in which the coherence is bounded is often called the margin of stability in the control literature [31]. The main goal of this article is to quantify this margin, and to explicitly give its behavior in a wide range of models.
_1.2. Plan of the article_
The plan of the article is the following: after the introduction of the general system and its main properties, we shall discuss the several conservation laws that one might consider and their effects on the correlations
in the system. Furthermore, different cases of the form of the random environment, which is modeled by multiplicative random variables, will be analyzed, leading to various types of correlations between the different degrees of freedom of our model. The main part of the article is the study of the time behavior of the network coherence Eq. (14) via the 2-point correlation function. We shall see that, depending on the conservation law that we impose on the system, different behaviors of the coherence are found. The stability margin Eq. (8), introduced in the previous section, will be fully characterized and its explicit value will be given for each considered case. The complete phase diagram of the network coherence will be explored in any dimension, with an emphasis on the presence of a noise threshold between various large-scale behaviors. The phase diagrams will be explained from a more general point of view by analyzing the various universality classes of the system and making contact with the well-known phenomenology of disordered systems. Although our results are derived for first-order consensus algorithms, Eq. (1), we expect qualitatively similar behaviors to occur in higher-order consensus algorithms such as those used in vehicular formation control [23].
**2. Consensus algorithms in a random environment**
_2.1. Definition of the system and conservation laws_
In this section, we introduce a class of algorithms generalizing the noisy Laplacian algorithm Eq. (1). The generalized algorithm is a discrete-time stochastic evolution for the quantity u_i(t) ∈ R with both additive and multiplicative noises. Let us define the system with a general coupling kernel K on a square lattice of size L and dimension d:

u_i(t+1) = Σ_{j∈𝒱_i} ξ_ij(t) K(r_i, r_j) u_j(t) + V_i(t) u_i(t) + n_i(t),   (9)
where ξ_ij(t) and V_i(t) are random variables with means ⟨ξ⟩ and ⟨V⟩, and 𝒱_i is the neighborhood of the site i. Here we choose the convention that ξ_ii(t) = 0, such that the diagonal randomness is only in V_i. The link ξ_ij(t) and on-site V_i(t) variables are Gaussian variables with correlations that will be detailed in the next sections. The additive noise is centered and uncorrelated in space and time with variance σ_n²:

⟨n_i(t)⟩ = 0,   (10)
⟨n_i(t) n_j(t′)⟩ = σ_n² δ_ij δ_tt′,   (11)

where δ_ab is the Kronecker symbol. The system is translationally invariant, so that K(r_i, r_j) = K(r_i − r_j). In the following, we shall use the convenient notation K_{i−j} := K(r_i − r_j). We assume also that the distributions do not vary with time. As a consequence, all the moments of the different variables do not depend on the space-time coordinates (e.g. ⟨ξ_ij(t)^p⟩ = ⟨ξ^p⟩ ∀ i, j). Let us define the variances σ_ξ² = ⟨ξ²⟩ − ⟨ξ⟩² and σ_V² = ⟨V²⟩ − ⟨V⟩². In the spirit of Eq. (1), the system can be written in matrix form:
**u(t + 1) = M(t)u(t) + n(t),** (12)
where the matrix M(t) is a time-dependent random matrix with matrix elements M_ij(t) = K_{i−j} ξ_ij(t) + δ_ij V_j(t), and n(t) is the additive noise vector. This algorithm is a generalization of Eq. (1) in which the Laplacian matrix is replaced by a time-dependent random matrix. The problem is then related to the asymptotic properties of products of random matrices (see [32], or [33, 34] for examples in the context of consensus and wireless communications). The exponential growth rate of the matrix powers M^t as t → ∞ is controlled by the eigenvalue of M with the largest absolute value. While the stationary distribution was guaranteed in the additive model Eq. (1), the multiplicative model Eq. (9) may or may not have a stationary solution depending on the parameters and the dimension d. The mean value can be written
⟨u_i(t+1)⟩ = (∥K∥₁⟨ξ⟩ + ⟨V⟩) ⟨u_i(t)⟩,   (13)

where ∥K∥₁ = Σ_i K_i. Then obviously ⟨u_i(t)⟩ = 0 when ⟨V⟩ = −∥K∥₁⟨ξ⟩, and ⟨u_i(t+1)⟩ = ⟨u_i(t)⟩ when ⟨V⟩ = 1 − ∥K∥₁⟨ξ⟩. In general, the sequence converges if |∥K∥₁⟨ξ⟩ + ⟨V⟩| ⩽ 1 and diverges otherwise for a
positive initial condition ⟨u(0)⟩ > 0. One way of studying the margin will be to compute the time behavior of the coherence

C_L(t) = (1/L^d) Σ_{r=1}^{L^d} ⟨(u_r(t) − m(t))(u_0(t) − m(t))⟩ = (1/L^d) Σ_{r=1}^{L^d} G_r(t) − ⟨m(t)⟩²,   (14)
where G_r(t) is the local 2-point correlation function ⟨u_r(t)u_0(t)⟩ that captures correlations between an arbitrary node 0 and a node r, and m(t) = L^{−d} Σ_i u_i(t) is the space average of the values u_i(t). The term ⟨m(t)⟩² is stationary if |∥K∥₁⟨ξ⟩ + ⟨V⟩| ⩽ 1. Therefore, the time evolution of C_L(t) is simply governed by the time evolution of G_r(t). This function will be the main quantity of interest to characterize the scaling behavior of the performance of the algorithm. In the following, we shall impose the time conservation of the average m(t), either exactly or on average, in the thermodynamic limit L → ∞.
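Before detailing the conservation laws, a minimal simulation sketch of the model may help fix ideas: it iterates Eq. (9) on a ring with a nearest-neighbour kernel (K_{±1} = 1) and independent (asymmetric) Gaussian links, with the on-site variables fixed by the exact conservation law of Section 2.2 below; all parameter values are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def step(u, mean_xi=0.25, sigma_xi=0.1, sigma_n=0.05):
    """One update of Eq. (9) on a ring with nearest-neighbour kernel
    (K_{+-1} = 1), asymmetric links, and V_i fixed by the exact
    conservation law Eq. (15): V_i = 1 - xi_{i-1,i} - xi_{i+1,i}."""
    xi_l = mean_xi + sigma_xi * rng.normal(size=u.size)  # xi_{i,i-1}
    xi_r = mean_xi + sigma_xi * rng.normal(size=u.size)  # xi_{i,i+1}
    v = 1.0 - np.roll(xi_r, 1) - np.roll(xi_l, -1)       # Eq. (15)
    return (xi_l * np.roll(u, 1) + xi_r * np.roll(u, -1) + v * u
            + sigma_n * rng.normal(size=u.size))

u = np.zeros(256)
for t in range(5000):
    u = step(u)
print("network mean (conserved up to additive noise):", u.mean())
print("coherence C_L(t):", np.mean((u - u.mean()) ** 2))
```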
_2.2. Correlations induced by exact conservation_
In this section, we summarize all the particular cases of Eq. (12) that preserve the conservation of m(t) exactly (i.e. Σ_i M_ij = 1) in the thermodynamic limit. The randomness is fully defined by specifying the form of the correlations ⟨M_ij(t) M_pq(t)⟩. Exact conservation is defined by m(t+1) = m(t). As a consequence, the matrix M satisfies column-stochasticity, Σ_i M_ij = 1. Hence, it is straightforward to show that we need to impose

V_j(t) = 1 − Σ_i ξ_ij(t) K_{i−j},   (15)
for that conservation law to be satisfied. Let us notice that the summation is over the first index, while it is over the second in the evolution equation Eq. (9). In this case, the multiplicative quantities are linearly dependent and one can write everything in terms of the variance of one or the other. Additionally, one can assume many different forms for the link variables ξ_ij(t). The simplest case is to assume that ξ_ij(t) and ξ_ji(t) are independent; this assumption does not add any new correlations between the on-site variables V_i(t). The opposite scenario is to consider ξ_ij(t) = ξ_ji(t), which enforces correlations between on-site variables, as we shall see in the following. We will also consider both cases where the link variables ξ_ij(t) depend only on one index, i.e. ξ_ij(t) = J_j(t) and ξ_ij(t) = g_i(t), which we call respectively the isotropic and gain noise cases. For each considered possibility, we shall explicitly compute the relevant correlations that we will need later on for the calculation of the coherence, and in particular for the calculation of G_r(t).
_2.2.1. Asymmetric links_ Now let us consider the model with asymmetric links
_ξij(t) ̸= ξji(t)._ (16)
The model can be seen as acting on a directed graph where the incoming and outgoing links are two different
independent random variables. As we saw in the previous section, exact conservation enforces

V_i(t) = 1 − Σ_k ξ_ki(t) K_{k−i},  then  ⟨V⟩ = 1 − ⟨ξ⟩∥K∥₁.   (17)
Let us write down an explicit example (ignoring the additive noise) for a line of three sites with closed boundary conditions and nearest-neighbor interactions. The evolution equation Eq. (9), written in the matrix form Eq. (12), reads

⎛ u_{i−1}(t+1) ⎞   ⎛ V_{i−1}    ξ_{i−1,i}   0         ⎞ ⎛ u_{i−1}(t) ⎞
⎜ u_i(t+1)     ⎟ = ⎜ ξ_{i,i−1}  V_i         ξ_{i,i+1} ⎟ ⎜ u_i(t)     ⎟,   (18)
⎝ u_{i+1}(t+1) ⎠   ⎝ 0          ξ_{i+1,i}   V_{i+1}   ⎠ ⎝ u_{i+1}(t) ⎠

where the column-stochastic condition is

V_{i−1}(t) = 1 − ξ_{i,i−1}(t),
V_i(t) = 1 − ξ_{i−1,i}(t) − ξ_{i+1,i}(t),   (19)
V_{i+1}(t) = 1 − ξ_{i,i+1}(t).
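As a quick numerical sanity check of Eqs. (18) and (19) (the same conservation property is verified analytically just below), the sketch draws arbitrary Gaussian link values, builds M with the diagonal fixed by Eq. (19), and confirms column-stochasticity and conservation of Σ_i u_i:

```python
import numpy as np

rng = np.random.default_rng(2)

# xi[0] = xi_{i-1,i}, xi[1] = xi_{i,i-1}, xi[2] = xi_{i,i+1}, xi[3] = xi_{i+1,i}
xi = rng.normal(0.3, 0.1, size=4)
m = np.array([
    [1 - xi[1], xi[0],              0.0      ],
    [xi[1],     1 - xi[0] - xi[3],  xi[2]    ],
    [0.0,       xi[3],              1 - xi[2]],
])
u = rng.normal(size=3)
assert np.allclose(m.sum(axis=0), 1.0)        # every column of M sums to 1
assert np.isclose((m @ u).sum(), u.sum())     # total sum_i u_i is conserved
print("u(t).sum() =", u.sum(), "-> u(t+1).sum() =", (m @ u).sum())
```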
**Figure 1. Diagram corresponding to the asymmetric case. It describes the simplest example defined by**
Eq. (18) and Eq. (19). The link variables are four different uncorrelated random variables (represented by
different colors). The rules are the following, the arrows exiting site i correspond to the quantity which
is distributed to the neighbors. And the arrows coming in i represent the quantity that is given from the
neighbors.
**Figure 2. Diagram corresponding to the symmetric case for the example Eq. (18). The link variables are**
now the same ξij (t) = ξji(t) between two neighbors.
We can check that u_{i−1}(t+1) + u_i(t+1) + u_{i+1}(t+1) = u_{i−1}(t) + u_i(t) + u_{i+1}(t)‡. In this case, the matrix M is not correlated: all the entries are independent. It can be useful to represent Eq. (18) and Eq. (19) as a diagram (see Fig. 1). The rules are the following: the arrows exiting site i correspond to the quantity that is distributed to the neighbors, and the arrows coming into i represent the quantity that is given by the neighbors. With the exact conservation law, the variances of the link and on-site noises are related by σ_V² = ∥K∥² σ_ξ², where ∥K∥ = (Σ_i K_i²)^{1/2}. Consequently, the correlations between the link variables and the on-site variables take the form

⟨ξ_rp(t) V_0(t)⟩ − ⟨ξ⟩⟨V⟩ = −σ_ξ² K_r δ_{p,0}.   (20)

The exact conservation law spatially correlates the link and on-site variables together. Because of Eq. (17), there is no cross-correlation between on-site variables at different sites:

⟨V_r(t) V_0(t)⟩ − ⟨V⟩² = σ_ξ² ∥K∥² δ_{r,0} = σ_V² δ_{r,0}.   (21)
That shows that the onsite variables are simply uncorrelated in space. This asymmetric link assumption will
be fully detailed in the next sections, but it will be useful to explore the other cases for a better understanding
of the differences. The asymmetric case is the only one which does not create correlations inside the matrix
_M, as we shall see next._
_2.2.2. Symmetric links_ Now let us consider the same model but with symmetric links
_ξij(t) = ξji(t),_ (22)
and where one still has the deterministic relation Eq. (15) between the variables. The example of the previous section can be written down as well, and the diagram becomes the following. In this situation, the matrix M is now doubly stochastic because of the symmetry, but not uncorrelated anymore. The corresponding diagram
‡ Let us remark that ⟨V⟩ = 1 − ⟨ξ⟩∥K∥₁ is not satisfied here because our example is not invariant by translation.
can be seen in Fig. 2. Indeed, the symmetry induces some counter-diagonal correlations in the matrix M. Here we still have the relation σ_V² = ∥K∥² σ_ξ², and

⟨ξ_rp(t) V_0(t)⟩ − ⟨ξ⟩⟨V⟩ = σ_ξ² K_r δ_{p,0},   (23)

like in the previous model. The main difference comes from the fact that the V's are now spatially correlated; using Eq. (15) one finds

⟨V_r(t) V_0(t)⟩ − ⟨V⟩² = σ_ξ² (∥K∥² δ_{r,0} + K_r²).   (24)

We can observe that the on-site variables are not delta-correlated anymore but are short-range correlated with a correlation kernel K_r². The symmetry between links induces correlations between the on-site variables at different positions§.
_2.2.3. Isotropic links_ Let us consider the case where the link variable does not depend on the first index,

ξ_ij(t) := J_j(t),   (25)

of variance σ_J²; then all the outgoing links have the same value, but the incoming links are independent. The diagram of Eq. (9) in this case is Fig. 3. In this case, one has

V_i(t) = 1 − J_i(t) Σ_k K_{k−i} = 1 − J_i ∥K∥₁,   (26)

and σ_V² = σ_J² (∥K∥₁)², where ∥K∥₁ = Σ_i K_i. The exact conservation condition again implies column-stochasticity of the matrix M. In this case, the correlations between the link variables and the on-site variables are
**Figure 3. Diagram corresponding to the isotropic case ξij** (t) = Jj (t) for the example Eq. (18). The outgoing
links are equal (red) and the incoming are different uncorrelated random variables (blue and green).
⟨J_r(t) V_0(t)⟩ − ⟨J⟩⟨V⟩ = −σ_J² ∥K∥₁ δ_{r,0}.   (27)

Because of Eq. (26), there are no correlations between V_r(t) and V_0(t) at different sites:

⟨V_r(t) V_0(t)⟩ − ⟨V⟩² = σ_J² (∥K∥₁)² δ_{r,0}.   (28)

Contrary to the previous symmetric case, the isotropic assumption does not couple the on-site variables. This case is actually very interesting because of the form of the matrix M, although it will not be discussed in this article.
_2.2.4. Gain noise_ The last case that one might be interested in here is

ξ_ij(t) := g_i(t),   (29)

of variance σ_g². This assumption can be seen as the opposite of the previous isotropic situation. Here we have

V_i(t) = 1 − Σ_k g_k(t) K_{k−i}.   (30)

The diagram is now Fig. 4 (notice that the diagram is reversed compared to the previous isotropic case). The relation between the variances is now σ_V² = ∥K∥² σ_g². Here the exact conservation condition implies
**Figure 4.** Diagram corresponding to the gain noise case ξij (t) = gi(t) for the example Eq. (18). The
incoming links are equal (red) and the outgoing are different independent random variables (blue and green).
column-stochasticity of the matrix M. The correlations between link variables and on-site variables take the
form
_⟨gr(t)V0(t)⟩−⟨g⟩⟨V ⟩_ = −σg[2][K][r][.] (31)
The correlations between the onsite variables are also different, one can show that
_⟨Vr(t)V0(t)⟩−⟨V ⟩[2]_ = σg[2][∥][K][∥][1][K][r][.] (32)
In this case, the V _[′]s are spatially correlated by the kernel Kr du to the form of the links and are also_
correlated with the links with the same kernel. In a sense, this specific form is closer to the symmetric case,
but the expressions of the correlations are different.
_2.3. Average conservation law_
The other scenario is to impose conservation on average in the thermodynamic limit, ⟨m(t+1)⟩ = ⟨m(t)⟩. This condition is a less restrictive constraint on the time evolution of the system. It only imposes a relation on average between the on-site and link variables:

⟨V⟩ = 1 − ⟨ξ⟩∥K∥₁,   (33)

where ∥K∥₁ = Σ_i K_i. The diagonal and off-diagonal noises are now independent random quantities, while they were linearly dependent when exact conservation was enforced. We do not repeat the analysis of correlations between noises for every case here; let us just focus on the asymmetric model, which is the one we will be interested in for the rest of the article. Of course, when only average conservation is enforced, no relation between the variances σ_V² and σ_ξ² exists, and there is no cross-correlation between on-site and link variables:

⟨ξ_rp(t) V_0(t)⟩ − ⟨ξ⟩⟨V⟩ = 0.   (34)

Furthermore, the on-site variables V are simply delta-correlated:

⟨V_r(t) V_0(t)⟩ − ⟨V⟩² = σ_V² δ_{r,0}.   (35)

In the formulation Eq. (12), the matrix M is not stochastic anymore (Σ_i M_ij ≠ 1) but is stochastic on average, Σ_i ⟨M_ij⟩ = 1. We shall see that these two different conservation conditions (exact or average) lead to different results for the behavior of the coherence, which can be understood in terms of their corresponding continuum space-time evolution equations.
**3. Threshold behavior of the network coherence**
_3.1. Phase diagram of the exactly conserved model_
As explained earlier, the behavior of the coherence of the system can be analyzed by computing the 2-point correlation function G_r(t) = ⟨u_r(t)u_0(t)⟩ between u_r(t) and u_0(t) at the same time t. The calculation is shown here for the asymmetric case with the exact conservation law, but the steps are the same for all the different
§ If we now impose average conservation and the M_ij's are Gaussian distributed, then M belongs to the Gaussian Orthogonal Ensemble (GOE) [35].
particular cases. In the asymmetric case with exact conservation, we have Eq. (20) and Eq. (21); therefore, it is straightforward to write down the time evolution of the correlator G_r(t) as

G_r(t+1) = ⟨ξ⟩² Σ_{k,l} K_{r−k} K_l G_{k−l}(t) + 2⟨V⟩⟨ξ⟩ Σ_k K_{r−k} G_k(t) + ⟨V⟩² G_r(t) + δ_{r,0} G_0(t) (σ_ξ² ∥K∥² + σ_V² − 2σ_ξ² K_r²) + σ_n² δ_{r,0}.   (36)
From now on, we will focus on the analysis of this evolution equation for a short-range kernel, but let us mention that this equation can be easily solved on the complete graph, where the kernel takes the form K_r = 1 − δ_{r,0}; the details of this calculation will be presented elsewhere.
_3.1.1. Noise threshold in any dimension_ One can notice that Eq. (36) can be written as a convolution; in Fourier space the result is then a simple product plus a term that contains G_r(t) at r = 0:

Ĝ(q, t+1) = (⟨ξ⟩ K̂(q) + ⟨V⟩)² Ĝ(q, t) + G_0(t) (σ_ξ² ∥K∥² + σ_V² − 2σ_ξ² K̂²(q)) + σ_n²,   (37)

with the following definition of the Fourier transform: û_q(t) = Σ_{r_i ∈ Z^d} u_{r_i}(t) e^{i q·r_i}; here K̂²(q) denotes the Fourier transform of K_r², so that K̂²(0) = ∥K∥². As we saw earlier, the exact conservation law implies σ_V² = σ_ξ² ∥K∥²; then Eq. (37) reads

Ĝ(q, t+1) = λ(q)² Ĝ(q, t) + 2G_0(t) σ_ξ² (∥K∥² − K̂²(q)) + σ_n²
          = λ(q)² Ĝ(q, t) + 2G_0(t) σ_ξ² (K̂²(0) − K̂²(q)) + σ_n²,   (38)

where

λ(q) = ⟨ξ⟩ K̂(q) + ⟨V⟩ = 1 − ⟨ξ⟩ (∥K∥₁ − K̂(q)) = 1 − ⟨ξ⟩ (K̂(0) − K̂(q)).   (39)
Now let us consider the continuum-space limit of this problem, u_{r∈Z^d}(t) → u(r ∈ R^d, t). Let us notice that the case ⟨ξ⟩ = 0 is rather particular: indeed, it implies ⟨V⟩ = 1 → λ(q) = 1, and the evolution equation is trivial to solve; hence the correlator is always exponential for any value of the variances∥. For ⟨ξ⟩ ≠ 0, the stationary solution Ĝ_st(q) is given by solving Ĝ(q, t+1) − Ĝ(q, t) = 0; it follows that

Ĝ_st(q) = [2 G_st(0) σ_ξ² (K̂²(0) − K̂²(q)) + σ_n²] / (1 − λ(q)²),   (40)

where we used the notation G_st(0) := G_st(r = 0). The self-consistent equation for G_st(0) gives G_st(0) = c_d σ_n² / (1 − 2σ_ξ² f_d), where, in the continuum-lattice limit, we have
c_d = (1/(2π)^d) ∫_{−∞}^{∞} d^d q / (1 − λ(q)²),   (41)

and

f_d = (1/(2π)^d) ∫_{−∞}^{∞} d^d q (K̂²(0) − K̂²(q)) / (1 − λ(q)²).   (42)
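For completeness, the self-consistency step behind the quoted expression for G_st(0) follows by integrating Eq. (40) over q and using the definitions Eq. (41) and Eq. (42):

G_st(0) = (1/(2π)^d) ∫ d^d q Ĝ_st(q) = 2 G_st(0) σ_ξ² f_d + c_d σ_n²  ⟹  G_st(0) = c_d σ_n² / (1 − 2σ_ξ² f_d).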
Until now, the calculation holds for any integrable kernel; we now focus on a local kernel which scales as a power law in Fourier space, K̂(q) ∼ q^{2θ}, the simplest being the Laplacian, corresponding to θ = 1. For a local kernel, one has λ(q)² ∼ 1 + p⟨ξ⟩q^{2θ}, where p is a constant. The integral c_d converges in d > 2θ and
∥ For symmetric links, this case is not trivial anymore.
f_d is convergent in any dimension. The integrals c_d and f_d can be defined, by dimensional regularization, for any real d¶. The stationary solution exists if and only if 1 − 2f_d σ_ξ² > 0. The critical value

σ_c² = 1 / (2 f_d),   (43)
where f_d is given by Eq. (42), is the maximum value of the variance σ_ξ² such that the stationary state is reachable and such that the system is stable. The constant f_d depends only on the explicit form of the kernel, the mean of the link variables ⟨ξ⟩, and the dimension d of the space. The value of this threshold can be tuned by changing the value of ⟨ξ⟩. So a first result shows that, in this situation, the stability margin Eq. (8) is always positive in any dimension; therefore, the system can support some amount of multiplicative noise and reach its stationary state. The full explicit form of the stationary correlator can be computed. For σ_ξ² < σ_c² and for d > 2θ, the stationary solution can be written

G_st(r) = (2σ_ξ² / (2π)^d) · (c_d σ_n² / (1 − 2σ_ξ² f_d)) ∫ d^d q [(K̂²(0) − K̂²(q)) / (1 − λ(q)²)] e^{−i q·r}.   (44)
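To put a number on the threshold Eq. (43), the sketch below evaluates f_d in d = 1 for the nearest-neighbour kernel K_{±1} = 1 (so K̂(q) = 2 cos q and K̂²(q) = 2 cos q), integrating over the Brillouin zone [−π, π] as a lattice stand-in for the continuum integral above; the value of ⟨ξ⟩ is an arbitrary choice.

```python
import numpy as np
from scipy.integrate import quad

MEAN_XI = 0.25  # illustrative value of <xi>

def f_1():
    """Lattice analogue of Eq. (42) in d = 1 for K_{+-1} = 1, where
    lambda(q) = 1 - <xi>(2 - 2 cos q).  The integrand
    (2 - 2 cos q)/(1 - lambda(q)^2) simplifies to 1/(<xi>(1 + lambda(q))),
    which removes the removable singularity at q = 0."""
    def integrand(q):
        lam = 1.0 - MEAN_XI * (2.0 - 2.0 * np.cos(q))
        return 1.0 / (MEAN_XI * (1.0 + lam))
    value, _ = quad(integrand, -np.pi, np.pi)
    return value / (2.0 * np.pi)

fd = f_1()
print(f"f_1 = {fd:.4f}  ->  sigma_c^2 = 1/(2 f_1) = {1.0 / (2.0 * fd):.4f}")
# With <xi> = 0.25 this gives f_1 ~ 2.83 and a finite threshold ~ 0.18,
# illustrating the positive stability margin in d = 1.
```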
The stationary solution converges to a finite value in d > 2θ. As usual, the divergence is logarithmic at d_c. The behavior can be easily understood in renormalization group (RG) language. In the phase σ_ξ² < σ_c², we can define the quantity A^ex_{σ²}:

A^ex_{σ²} = 1 / (1 − 2σ_ξ² f_d).   (45)

This quantity, which goes to 1 when σ_ξ² → 0 and to ∞ when σ_ξ² → σ_c², can be seen as a coefficient which renormalizes the additive noise. Indeed, we have G_st(0) = A^ex_{σ²} σ_n² c_d, which is, up to the constant A^ex_{σ²}, the stationary solution of a pure additive system (σ_ξ² = 0), as we shall see in the next paragraph. What this means is that, below the noise threshold, the randomness of the link and on-site variables is irrelevant in the RG sense, and the system renormalizes to a pure additive model with a renormalized additive noise ñ_i(t) = (A^ex_{σ²})^{1/2} n_i(t), where n_i(t) is the original additive noise. The correlator equation becomes

Ĝ(q, t+1) = λ(q)² Ĝ(q, t) + A^ex_{σ²} σ_n².   (46)
Since σ_ξ² is the only relevant parameter dictating the large-scale behavior of the system, the effective description Eq. (46) of the system below σ_c² is still valid in the non-stationary regime d ⩽ 2θ. The full space-time-dependent solution of Eq. (46), for a kernel of the form K̂(q) ∼ q^{2θ}, verifies the following scaling form
$$G(r, t) = A^{\mathrm{ex}}_{\sigma^2}\, \sigma_n^2\, r^{2\theta - d}\, \Psi\!\left( \frac{t}{r^{2\theta}} \right). \tag{47}$$
This form of the correlator is valid for any dimension below the threshold. Here Ψ(y) is a scaling function with the properties that Ψ(y) → const as y → ∞ and Ψ(y) ∼ y^{(2θ−d)/2θ} as y → 0. This scaling form Eq. (47) is the so-called Family-Vicsek (FV) scaling [36], well known in the statistical physics of interfaces. The cases θ = 1 and θ = 2 correspond respectively to the Edwards-Wilkinson (EW) and Mullins-Herring (MH) universality classes [37]. The upper critical dimension of this system is d_c = 2θ; above that dimension the correlator converges to a finite value. For σ_ξ² < σ_c² and for d ⩽ 2θ the behavior is a power law following Eq. (47). The correlator grows as t^{(2θ−d)/2θ} and the stationary solution is never reached for an infinite system. This exponent is named 2β in the context of growing interfaces and equals β = 1/4 (resp. β = 3/8) in d = 1 for the EW (resp. MH) class. The exponent increases with θ.
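For quick reference, the growth exponent implied by Eq. (47) can be computed directly; the following minimal Python helper (ours, purely illustrative) encodes β = (2θ − d)/(4θ) and checks the two values quoted above in d = 1:

```python
def beta(theta: float, d: float) -> float:
    """Growth exponent implied by Eq. (47): G ~ t^{(2θ-d)/(2θ)} = t^{2β}."""
    return (2 * theta - d) / (4 * theta)

assert beta(theta=1, d=1) == 0.25   # Edwards-Wilkinson class in d = 1
assert beta(theta=2, d=1) == 0.375  # Mullins-Herring class in d = 1
```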
_3.1.2. Finite size dependence of the coherence._ Now we can extend the result on the large-time coherence scaling for the Laplacian algorithm (θ = 1) sketched in the introduction. From the result on the calculation of G(r, t) below the noise threshold, one can translate this information into the time dependence of C_N(t) and the stationary value C_N^∞ of the network coherence defined in the introduction (see Eq. (4) and Eq. (14)) for a system of N agents. The finite-size behavior of the coherence can be extracted by introducing a cut-off in Fourier space in Eq. (41) and Eq. (42). For a system of size L with N = L^d nodes, the coherence behaves for short times (t ≪ L²) as
$$C_N(t) \propto \frac{1}{N} \int d^d r\, G(r, t) \sim t^{(2-d)/2}. \tag{48}$$
So for short times, the coherence grows as a power law, independently of system size. The coherence then reaches a stationary value which scales with system size as
$$C_N^\infty \propto \frac{1}{N} \int d^d r\, G_{\mathrm{st}}(r) \sim N^{\frac{2-d}{d}}, \tag{49}$$
for any d. This value is reached within a time scale t_c ∼ L² = N^{2/d}. The value of the stationary coherence C_N^∞ grows unboundedly in d < 3 and converges to a finite value in d > 2, in agreement with Eq. (6). In terms of performance, in d > 2 the algorithm is still capable of performing its averaging task even in the presence of multiplicative noise, as long as the variance is below the threshold σ_c².
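As an illustration, the following Python sketch simulates a toy one-dimensional, exactly conserved version of the dynamics (a ring of N agents exchanging conservative fluxes with random edge conductances, plus on-site additive noise; the parameter values are our own choices, assumed to lie below the noise threshold) and measures the early-time coherence growth C_N(t) ∼ t^{(2−d)/2}:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 256, 5000
xi_mean, sig_xi, sig_n = 0.4, 0.05, 0.1   # assumed to sit below the noise threshold
u = np.zeros(N)
coh = np.empty(T)
for t in range(T):
    xi = xi_mean + sig_xi * rng.standard_normal(N)  # random conductance of edge (i, i+1)
    flux = xi * (np.roll(u, -1) - u)                # flux into site i through edge (i, i+1)
    u += flux - np.roll(flux, 1)                    # conservative update: sum(u) unchanged
    u += sig_n * rng.standard_normal(N)             # additive on-site noise
    coh[t] = u.var()                                # coherence C_N(t), Eq. (4)

lo, hi = 10, 1000                                   # early times, t << L^2
slope = np.polyfit(np.log(np.arange(lo, hi)), np.log(coh[lo:hi]), 1)[0]
print(f"measured growth exponent ~ {slope:.2f} (prediction (2-d)/2 = 0.5 in d = 1)")
```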
_3.1.3. Exponential regime above the noise threshold._ Now that we have understood the behavior of the correlator below the threshold, where the large-scale behavior was dictated by the additive noise, one can study the behavior in the other regime, where the behavior is controlled by the parameter σ_ξ². For σ_ξ² ⩾ σ_c², the additive noise is irrelevant since we expect an exponential growth of the correlator, and Eq. (38) can be written as
$$\hat G(q, t+1) = D\big(\hat G(q, t)\big). \tag{50}$$
The large-time behavior of this equation is dominated by the largest eigenvalue λ_max of the operator D. If λ_max = 1, the FV scaling Eq. (47) holds. If λ_max > 1, then the correlator behaves exponentially as
$$\hat G(q, t) \sim \lambda_{max}^t\, \hat g(q), \tag{51}$$
where ĝ(q) has to be determined self-consistently. Inserting Eq. (51) into Eq. (37), one ends up with
$$\hat g(q) = \frac{2\sigma_\xi^2 \left( \hat K^2(0) - \hat K^2(q) \right)}{\lambda_{max} - \lambda(q)^2}. \tag{52}$$
Now let us write a condition on the form of the largest eigenvalue λ_max of the operator D. We have
$$\hat G(q, t) = \frac{2\sigma_\xi^2 \left( \hat K^2(0) - \hat K^2(q) \right)}{\lambda_{max} - \lambda(q)^2}\, G(0, t) = \hat g(q)\, G(0, t). \tag{53}$$
Now using (2π)^{−d} ∫ d^dq Ĝ(q, t) = G(0, t), we end up with a condition on the largest eigenvalue λ_max,
$$\int \frac{d^d q}{(2\pi)^d}\, \hat g(q) = 1, \tag{54}$$
where
$$\hat g(q) = \frac{2\sigma_\xi^2 \left( \hat K^2(0) - \hat K^2(q) \right)}{\lambda_{max} - \lambda(q)^2}, \tag{55}$$
and λ(q)² ≈ 1 + p⟨ξ⟩q². The integral Eq. (54), when λ_max → 1, converges in any d by dimensional regularization, meaning that the FV scaling holds in any d below the threshold. The behavior of the coherence C(t) in the regime σ_ξ² ⩾ σ_c² is always exponential, in any dimension and for any system size. These different regimes can be understood by looking at the corresponding continuum equation describing the large-scale fluctuations of Eq. (9). Without loss of generality, let us focus on the Laplacian kernel from now on. We showed previously that, when exact conservation is enforced, there exists a deterministic relation between the onsite and link noises, Eq. (17). Therefore the asymptotic behavior of our model is governed by a continuum space-time equation with only one multiplicative source of randomness plus additive noise:
$$\partial_t u(x, t) = \partial_x \left[ \xi(x, t)\, \partial_x u(x, t) \right] + n(x, t). \tag{56}$$
[Figure 5: phase diagram in the (σ_ξ², d) plane. Left of the critical value σ_c² = 1/(2f_d): for d ⩽ d_c = 2, an algebraic and stationary phase with C_N(t ≪ t_c) ∼ t^{2β} and C_N(t ≫ t_c) ∼ N^{(2−d)/d}; for d > 2, a finite phase with C_N^∞ = c_d σ_n²/(1 − 2σ_ξ² f_d). Right of σ_c²: an exponential phase with C_N(t) ∼ λ_max^t.]
**Figure 5.** Phase diagram in the exact conservation case on a d-dimensional lattice for the Laplacian kernel θ = 1 with N sites. On the left of the critical line (red line), σ_ξ² < σ_c², the behavior of the coherence follows the FV scaling Eq. (47), i.e. algebraic (with exponent β = (2 − d)/4) at short times and then stationary at large times in low d (below the blue dashed line), and finite for higher d. On the other side of the line, the coherence grows exponentially for any d. In that situation, the algorithm remains resilient to multiplicative noise below the noise threshold, showing that the stability margin is positive in any dimension.
From our analysis, one shows that there are two physically different regimes. The first regime is when ξ(x, t) is irrelevant at large distance and one ends up with the large-scale behavior of the Edwards-Wilkinson equation
$$\partial_t u(x, t) = \partial_x^2 u(x, t) + \tilde n(x, t), \tag{57}$$
where the additive noise ñ(x, t) is renormalized by the variance σ_ξ² of the multiplicative noise as ñ(x, t) = √(A^ex_σ²) n(x, t), with n(x, t) the original additive noise. The other regime is exponential (see Eq. (51)): the large-scale behavior is dominated by the multiplicative noise ξ(x, t), and the additive noise does not change the asymptotic behavior. These two regimes are separated by a threshold σ_c², which is finite in any dimension and equal to σ_c² = 1/(2f_d), where f_d is the integral given by Eq. (42).
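Taking λ_max → 1 in Eqs. (54)–(55) and comparing with Eq. (43) suggests f_d = (2π)^{−d} ∫ d^dq (K̂²(0) − K̂²(q))/(1 − λ(q)²) (an inference on our part, since Eq. (42) is not reproduced here). Under this reading, the threshold can be estimated numerically as a Brillouin-zone sum on a finite periodic lattice; the nearest-neighbour forms K̂(q) = (1/d) Σ_a cos q_a and λ(q) = 1 − ε(1 − K̂(q)) used below are illustrative assumptions, not the paper's exact kernel:

```python
import numpy as np

def f_d(d: int, L: int = 32, eps: float = 0.4) -> float:
    """Lattice estimate of f_d = (2π)^{-d} ∫ d^dq (K̂²(0) - K̂²(q)) / (1 - λ(q)²).
    Assumed toy forms: K̂(q) = mean_a cos(q_a) (so K̂(0) = 1), λ(q) = 1 - eps·(1 - K̂(q)).
    The integrand stays bounded as q -> 0, so the sum converges in any d, as stated above."""
    qs = 2 * np.pi * np.arange(L) / L
    grids = np.meshgrid(*([qs] * d), indexing="ij")
    Khat = np.mean([np.cos(g) for g in grids], axis=0)
    num = 1.0 - Khat**2
    den = 1.0 - (1.0 - eps * (1.0 - Khat)) ** 2
    den.flat[0] = np.inf          # q = 0: numerator and denominator both vanish; drop it
    return float((num / den).mean())

for d in (1, 2, 3):
    fd = f_d(d)
    print(f"d = {d}: f_d ~ {fd:.3f}, threshold sigma_c^2 = 1/(2 f_d) ~ {1 / (2 * fd):.3f}")
```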
_3.2. Phase diagram of the average conserved model_
Now let us look at our system when the time evolution of m(t) is conserved on average. We will see that the behavior changes drastically when one imposes average conservation, especially in high dimensions. In the asymmetric case with average conservation there is no correlation between the onsite and link variables, therefore the evolution of the correlator G_r(t) takes the following form:
$$G_r(t+1) = \langle\xi\rangle^2 \sum_{k,l} K_{r-k} K_l\, G_{k-l}(t) + 2\langle V\rangle \langle\xi\rangle \sum_k K_{r-k}\, G_k(t) + \langle V\rangle^2 G_r(t) + \delta_{r,0}\, G_0(t) \left( \sigma_\xi^2 \|K\|^2 + \sigma_V^2 \right) + \sigma_n^2\, \delta_{r,0}. \tag{58}$$
Let us focus on the analysis of the evolution equation in the next section. The steps of the calculation are very similar to the exactly conserved system, therefore we jump immediately to the analysis of the solution.

_3.2.1. Noise threshold in high dimensions._ Let us again focus on the Laplacian kernel without loss of generality. The self-consistent equation for G_st(0) gives
$$G_{\mathrm{st}}(0) = \frac{c_d\, \sigma_n^2}{1 - c_d \sigma^2}, \tag{59}$$
where c_d is given by the same integral Eq. (41) as in the previous section and σ² = σ_V² + σ_ξ² ∥K∥². The stationary solution exists if and only if 1 − c_d σ² > 0, the critical value of the variance being
$$\sigma_c^2 = \frac{1}{c_d}. \tag{60}$$
[Figure 6: the (σ_V², σ_ξ²) plane, split by the critical ellipse into a finite phase (inside) and an exponential phase (outside).]
**Figure 6.** There is a critical line f(σ_V², σ_ξ²) in the plane (σ_V², σ_ξ²), parametrized by the ellipse equation σ_V²/σ_c² + (σ_ξ²/σ_c²)∥K∥² = 1. In the white domain, defined by f(σ_V², σ_ξ²) ⩾ 1, the algorithm becomes unstable and the coherence is exponential. In the blue domain, defined by f(σ_V², σ_ξ²) < 1, it is finite and the algorithm reaches a global consensus. Let us mention that the critical line exists only in high dimensions d > 2; the size of the ellipse decreases (the dashed ellipses) as d → d_c and eventually shrinks down to a single point at d_c. The system is then completely unstable for any non-zero variances.
There is a critical line f(σ_V², σ_ξ²) in the plane (σ_V², σ_ξ²), parametrized by the ellipse equation (see Fig. (6))
$$\frac{\sigma_V^2}{\sigma_c^2} + \frac{\sigma_\xi^2}{\sigma_c^2} \|K\|^2 = 1. \tag{61}$$
The constant c_d depends only on the explicit form of the kernel, the mean ⟨ξ⟩ of the link variables and the dimension d of the space. Because of the divergence of c_d in d ⩽ 2, there is no threshold in low dimensions and the system is always controlled by the variables ξ and V, as we will see later on. In d ⩽ 2, the stability margin Eq. (8) becomes infinitesimal and any non-zero amount of multiplicative noise makes the algorithm unstable. The full space dependence of the stationary solution below σ_c² and in d > 2 takes the following form:
$$G_{\mathrm{st}}(r) = \frac{\sigma_n^2}{(2\pi)^d}\, \frac{1}{1 - c_d \sigma^2} \int_{-\infty}^{\infty} d^d q\; \frac{e^{-iqr}}{1 - \lambda(q)^2}. \tag{62}$$
The picture is almost the same as in the exactly conserved system: below the threshold, the system behaves as a pure additive model, Eq. (46). In this regime, the renormalization factor becomes
$$A^{\mathrm{av}}_{\sigma^2} = \frac{1}{1 - c_d \sigma^2}. \tag{63}$$
Below σ_c², the system is also described by the pure additive model Eq. (46), with ñ_i(t) = √(A^av_σ²) n_i(t), and the FV scaling Eq. (47) holds with the same set of exponents. The main difference is that the FV scaling holds only in d > 2, where the exponent β goes to zero, and where the stationary solution Eq. (62) converges to
a finite value, Eq. (59). We actually have a critical line (see Fig. (6)) in the plane (σ_V², σ_ξ²). The behavior is the same anywhere below the line, where the large-scale dynamics is described by Eq. (46). Because there is no relation between the variances σ_V² and σ_ξ², we can send σ_V² or σ_ξ² to zero independently. The large-scale dynamics above the threshold can then be described by
$$u_i(t+1) = \langle\xi\rangle \sum_j K_{i-j}\, u_j(t) + \tilde V_i(t)\, u_i(t), \tag{64}$$
where Ṽ_i(t) has the renormalized variance σ_V² + σ_ξ² ∥K∥². The behavior of our system when average conservation is imposed varies greatly from the exactly conserved algorithm, where the system could not be described by
[Figure 7: phase diagram in the (σ², d) plane for average conservation. For d ⩽ d_c = 2 the coherence is exponential, C_N(t) ∼ λ_max^t, for any σ². For d > 2, a finite phase with C_N^∞ = c_d σ_n²/(1 − c_d σ²) below σ_c² = 1/c_d, and an exponential phase above.]
**Figure 7.** Phase diagram in the average conservation case for the Laplacian kernel θ = 1. Here the scenario is rather different from the exactly conserved case in dimension d. In low d, the behavior of the coherence is always exponential for any system size. In higher d there is a transition between a finite phase and an exponential regime at the critical value σ_c².
Eq. (64) due to the relation between the variances σ_V² and σ_ξ². The eigenvalue equation for this model can also be written as Eq. (50). If λ_max > 1, then the correlator behaves as Ĝ(q, t) ∼ λ_max^t ĝ(q), where
$$\hat g(q) = \frac{\sigma_V^2 + \sigma_\xi^2 \|K\|^2}{\lambda_{max} - \lambda(q)^2}. \tag{65}$$
The condition on λ_max is now
$$\int \frac{d^d q}{(2\pi)^d}\, \hat g(q) = 1. \tag{66}$$
The convergence properties of this integral are fairly different from those found in the previous section. The integral Eq. (66), when λ_max → 1, converges in d > 2. Thus in d ⩽ 2 there is no stationary state and the correlator grows exponentially. In d > 2 there is a noise threshold σ_c²: below this value the behavior is algebraic, and above it the behavior is exponential for any σ². In that case, the phase transition occurs only for d > 2; indeed, in d ⩽ 2 the system remains exponential for any value of σ². The low-d phase is the strong coupling regime, while in higher d there are two phases depending on the value of σ². Let us finally notice that, contrary to the exact conservation case, here the phase transition may happen at an infinitesimal value of σ just above the critical dimension d_c. The complete phase diagram is shown in Fig. (7). Two regimes are present, just as before: the EW limit, where the variance σ² is smaller than the critical value and the multiplicative noise is irrelevant, and the strong coupling regime, which is the only phase in low dimensions and can be shown to be described by the stochastic heat equation (SHE)
$$\partial_t u(x, t) = \partial_x^2 u(x, t) + \tilde V(x, t)\, u(x, t). \tag{67}$$
This equation is known to describe the partition function u(x, t) of a directed polymer in a quenched random potential Ṽ(x, t) and the height h(x, t) ∝ log u(x, t) of a random interface verifying the Kardar-Parisi-Zhang equation (see [38] for details or [39] for even more details).
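As a numerical companion to the phase boundary of this section, one can test whether a pair of variances (σ_V², σ_ξ²) lies inside the stability ellipse of Eq. (61); the lattice form of λ(q) and the value ∥K∥² = 1/(2d) for a nearest-neighbour kernel are again our illustrative assumptions:

```python
import numpy as np

def c_d(d: int, L: int = 32, eps: float = 0.4) -> float:
    """Finite-lattice estimate of c_d = (2π)^{-d} ∫ d^dq / (1 - λ(q)²), Eq. (41), with the
    toy form λ(q) = 1 - eps·(1 - K̂(q)), K̂(q) = mean_a cos(q_a).  For d <= 2 the sum
    diverges with L, consistent with the absence of a threshold in low dimensions."""
    qs = 2 * np.pi * np.arange(L) / L
    grids = np.meshgrid(*([qs] * d), indexing="ij")
    Khat = np.mean([np.cos(g) for g in grids], axis=0)
    denom = 1.0 - (1.0 - eps * (1.0 - Khat)) ** 2
    denom.flat[0] = np.inf                      # drop the q = 0 zero mode
    return float((1.0 / denom).mean())

def stable(sig_V2: float, sig_xi2: float, d: int, K_norm2: float) -> bool:
    """Eq. (61): stable iff σ_V²/σ_c² + (σ_ξ²/σ_c²)·||K||² < 1, with σ_c² = 1/c_d."""
    sc2 = 1.0 / c_d(d)
    return sig_V2 / sc2 + (sig_xi2 / sc2) * K_norm2 < 1.0

print(stable(sig_V2=0.01, sig_xi2=0.01, d=3, K_norm2=1 / 6))  # ||K||² = 1/(2d) for d = 3
```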
_3.2.2. Robustness of the coherence and finite-size dependence._ The analysis of the correlator G(r, t) tells us that in low dimensions the consensus algorithm is extremely sensitive to multiplicative noise. The network coherence becomes essentially exponential as soon as a finite amount of multiplicative noise is introduced in the system, making the algorithm unstable and unable to perform its task. The stability margin is essentially zero in d < 3. In higher dimensions d > 2, the stability margin becomes non-zero, and the system remains stable as long as the noise variance remains below the threshold. The higher the dimension, the bigger the stability margin. From the result on the calculation of G(r, t) below the noise threshold, one can translate this information into the stationary value C_N^∞ of the network coherence defined in Eq. (4). The finite-size behavior of the coherence can be extracted by introducing a cut-off in Fourier space in Eq. (41). For a system of size L with N = L^d nodes, the coherence reaches a stationary value which scales with system size as
$$C_N^\infty \propto \frac{1}{N} \int d^d r\, G_{\mathrm{st}}(r) \sim N^{\frac{2-d}{d}}. \tag{68}$$
In that context, in d > 2 the algorithm is still capable of performing its task, even in the presence of multiplicative noise, as long as the variance is below the threshold σ_c². In lower dimensions, the stability margin is nonexistent and there is no stationary state. An infinitesimal amount of multiplicative noise makes the system exponentially unstable; the system is thus infinitely fragile.
_3.3. A word on the other forms of symmetry_

As we said earlier in the article, the asymmetric link case is the simplest case, where the link and onsite variables are uncorrelated in space (see Eq. (21)), regardless of the type of conservation condition that we consider. Therefore, this case leads to a rather simple equation for the time evolution of the correlator, and the calculation can be done for any kernel. The other cases may be more or less involved depending on the form of the links and the conservation condition; indeed, when one enforces symmetry between link variables, for example, the onsite random variables V_i(t) become spatially correlated (see Eq. (24)), leading to different asymptotic behaviors. Nevertheless, the method that we have proposed here can be applied directly to those situations, leading to the explicit form of the noise threshold. The details of those models will be presented elsewhere. Let us finally mention that the crucial ingredient in the different universal behaviors that we have observed is the presence of some form of a conservation law, and we believe that the specific details of the system will not change the dimensional behaviors and phase diagrams Fig. (5) and Fig. (7), but could be relevant for more specific applications to real consensus algorithms and communication networks.
**4. Summary and conclusions**

In this paper, we have quantified the performance of a consensus algorithm or a distributed system by studying the network coherence Eq. (4) of a system of N agents,
$$C_N(t) = \frac{1}{N} \int d^d r \left( G(r, t) - \langle m(t) \rangle^2 \right), \tag{69}$$
in the large-scale limit N → ∞, where G(r, t) is the local two-point correlation ⟨u(r, t)u(0, t)⟩. We were interested in the behavior of this quantity for a system with diverse sources of uncorrelated multiplicative and additive noise (see Eq. (9)) on a lattice of dimension d in the continuum-space limit. We showed how the quantity G(r, t) behaves in both cases, where one imposes conservation of m(t) either exactly or on average. Let us summarize the results of the continuum-space calculation of the time behavior of the network coherence C_N(t), for a system of N nodes, when varying the strength of the link and onsite random variables. The general behavior in the exact conservation case is that, in any dimension, below a critical value σ_c² the network coherence grows algebraically and then reaches the stationary state at large times, while above σ_c² the network coherence grows exponentially, see Fig. (5). The stability margin is always non-zero in any dimension, and the system remains stable as long as the variance of the noise is below the threshold. The other case is when average conservation of m(t) is enforced. In that case, the link and onsite variables are independent, making the phenomenology richer than in the exact conservation case. In low dimensions, the network coherence grows exponentially for any value of the variances σ_V² and σ_ξ². The system is then highly sensitive to multiplicative noise and there is no stability margin. Obviously, in that case no stationary solution is reachable and the network coherence becomes infinite at large times. In higher d > 2, a stability margin appears, and the system is capable of performing the average for small noise. The network coherence is then finite below the noise threshold, and for σ² ⩾ σ_c² the network coherence grows exponentially again, see
Fig. (7). Another quantity that might be extremely insightful in the context of network consensus algorithms is the variance with respect to random initial conditions u_i(t_0) at time t_0,
$$C_N(t, t_0) := \frac{1}{N} \sum_{i=1}^{N} \operatorname{var}\!\left( u_i(t) - \frac{1}{N} \sum_{k=1}^{N} u_k(t_0) \right). \tag{70}$$
This quantity tells us how the system forgets (or not) the initial state values on the network and how time correlations grow in the system. In our formalism, the calculation boils down to computing G(r, t, t_0) = ⟨u(r, t)u(0, t_0)⟩. Those correlations have not been studied in detail in the control literature but are well known in the ageing literature [40]. A problem not addressed in this work is the case of a static (quenched) disorder, which may also have many interesting applications in averaging systems and consensus algorithms. A similar threshold, known in the context of Anderson localization [41], appears in those systems as well, and it will be interesting to see how localization emerges in distributed systems with quenched disorder. The SHE equation with this type of disorder is often called the parabolic Anderson problem, see for example [42]. One of the most ambitious perspectives would be to show that the noise threshold that we have observed in these classical systems also appears in quantum systems. Indeed, it is well known in the field of quantum computation [19] that noise is the main obstacle to efficient computation and that, above a certain value, computation is no longer possible. This well-known result is called the "quantum threshold theorem" [43], and it has later been shown that this phenomenon is actually a phase transition [44]. It would be of great interest if our results could be extended to quantum systems, to prove that a similar threshold theorem holds for quantum distributed systems and quantum consensus [18].
This work is partially supported by NSF Awards PHY-1344069 and EECS-1408442. NA is grateful to Timothée Thiery for discussions during the KPZ program at KITP. CS is grateful to the Labex NEXT, the CSHL, and the MUSE IDEX Toulouse contract for funding his visit to NY.
[1] M. Mezard and A. Montanari, Information, physics, and computation. Oxford University Press, 2009.
[2] R. Olfati-Saber, “Flocking for multi-agent dynamic systems: Algorithms and theory,” IEEE Transactions on automatic
_control, vol. 51, no. 3, pp. 401–420, 2006._
[3] D. Chowdhury, L. Santen, and A. Schadschneider, “Statistical physics of vehicular traffic and some related systems,”
_Physics Reports, vol. 329, no. 4, pp. 199–329, 2000._
[4] R. Olfati-Saber, J. A. Fax, and R. M. Murray, “Consensus and cooperation in networked multi-agent systems,” Proceedings
_of the IEEE, vol. 95, no. 1, pp. 215–233, 2007._
[5] Y.-Y. Liu, J.-J. Slotine, and A.-L. Barabási, “Controllability of complex networks,” Nature, vol. 473, no. 7346, pp. 167–173,
2011.
[6] T. Vicsek and A. Zafeiris, “Collective motion,” Physics Reports, vol. 517, no. 3, pp. 71–140, 2012.
[7] C. Castellano, S. Fortunato, and V. Loreto, “Statistical physics of social dynamics,” Reviews of modern physics, vol. 81,
no. 2, p. 591, 2009.
[8] J.-P. Bouchaud and M. Potters, Theory of financial risk and derivative pricing: from statistical physics to risk management.
Cambridge university press, 2003.
[9] R. N. Mantegna and H. E. Stanley, Introduction to econophysics: correlations and complexity in finance. Cambridge
university press, 1999.
[10] F. Bullo, Lectures on Network Systems. Version 0.85, 2016. With contributions by J. Cortes, F. Dorfler, and S. Martinez.
[11] Y. Kuramoto, Chemical oscillations, waves, and turbulence, vol. 19. Springer Science & Business Media, 2012.
[12] J. A. Acebrón, L. L. Bonilla, C. J. P. Vicente, F. Ritort, and R. Spigler, “The Kuramoto model: A simple paradigm for
synchronization phenomena,” Reviews of modern physics, vol. 77, no. 1, p. 137, 2005.
[13] J. Cortes, S. Martinez, T. Karatas, and F. Bullo, “Coverage control for mobile sensing networks,” in Robotics and
_Automation, 2002. Proceedings. ICRA’02. IEEE International Conference on, vol. 2, pp. 1327–1332, IEEE, 2002._
[14] L. Xiao and S. Boyd, “Fast linear iterations for distributed averaging,” Systems & Control Letters, vol. 53, no. 1, pp. 65–78,
2004.
[15] J. Lin, A. S. Morse, and B. D. Anderson, “The multi-agent rendezvous problem-the asynchronous case,” in Decision and
_Control, 2004. CDC. 43rd IEEE Conference on, vol. 2, pp. 1926–1931, IEEE, 2004._
[16] J. Lin, A. S. Morse, and B. D. Anderson, “The multi-agent rendezvous problem. part 2: The asynchronous case,” SIAM
_Journal on Control and Optimization, vol. 46, no. 6, pp. 2120–2147, 2007._
[17] W. Ren and R. W. Beard, Distributed consensus in multi-vehicle cooperative control. Springer, 2008.
[18] L. Mazzarella, A. Sarlette, and F. Ticozzi, “Consensus for quantum networks: Symmetry from gossip interactions,” IEEE
_Transactions on Automatic Control, vol. 60, no. 1, pp. 158–172, 2015._
[19] M. A. Nielsen and I. L. Chuang, Quantum computation and quantum information. Cambridge university press, 2010.
[20] L. Xiao, S. Boyd, and S.-J. Kim, “Distributed average consensus with least-mean-square deviation,” Journal of Parallel
_and Distributed Computing, vol. 67, no. 1, pp. 33–46, 2007._
[21] M. Huang and J. H. Manton, “Coordination and consensus of networked agents with noisy measurements: stochastic
algorithms and asymptotic behavior,” SIAM Journal on Control and Optimization, vol. 48, no. 1, pp. 134–161, 2009.
[22] S. Patterson and B. Bamieh, “Consensus and coherence in fractal networks,” IEEE Transactions on Control of Network
_Systems, vol. 1, no. 4, pp. 338–348, 2014._
[23] B. Bamieh, M. R. Jovanovic, P. Mitra, and S. Patterson, “Coherence in large-scale networks: Dimension-dependent
limitations of local feedback,” IEEE Transactions on Automatic Control, vol. 57, no. 9, pp. 2235–2249, 2012.
[24] F. Fagnani and S. Zampieri, “Randomized consensus algorithms over large scale networks,” IEEE Journal on Selected
_Areas in Communications, vol. 26, no. 4, pp. 634–649, 2008._
[25] R. Carli, F. Fagnani, P. Frasca, and S. Zampieri, “Gossip consensus algorithms via quantized communication,” Automatica,
vol. 46, no. 1, pp. 70–80, 2010.
[26] S. Patterson, B. Bamieh, and A. El Abbadi, “Convergence rates of distributed average consensus with stochastic link
failures,” IEEE Transactions on Automatic Control, vol. 55, no. 4, pp. 880–892, 2010.
[27] J. Wang and N. Elia, “Distributed averaging under constraints on information exchange: emergence of Lévy flights,” IEEE
_Transactions on Automatic Control, vol. 57, no. 10, pp. 2435–2449, 2012._
[28] J. Wang and N. Elia, “Consensus over networks with dynamic channels,” International Journal of Systems, Control and
_Communications, vol. 2, no. 1-3, pp. 275–297, 2010._
[29] G. Cybenko, “Dynamic load balancing for distributed memory multiprocessors,” Journal of parallel and distributed
_computing, vol. 7, no. 2, pp. 279–301, 1989._
[30] J. E. Boillat, “Load balancing and poisson equation in a graph,” Concurrency and Computation: Practice and Experience,
vol. 2, no. 4, pp. 289–313, 1990.
[31] J. C. Doyle, B. A. Francis, and A. R. Tannenbaum, Feedback control theory. Courier Corporation, 2013.
[32] A. Crisanti, G. Paladin, and A. Vulpiani, Products of Random Matrices: in Statistical Physics, vol. 104. Springer Science
& Business Media, 2012.
[33] B. Touri, Product of random stochastic matrices and distributed averaging. Springer Science & Business Media, 2012.
[34] R. R. Muller, “On the asymptotic eigenvalue distribution of concatenated vector-valued fading channels,” IEEE
_Transactions on Information Theory, vol. 48, no. 7, pp. 2086–2091, 2002._
[35] M. L. Mehta, Random matrices, vol. 142. Academic press, 2004.
[36] F. Family and T. Vicsek, “Scaling of the active zone in the eden process on percolation networks and the ballistic deposition
model,” Journal of Physics A: Mathematical and General, vol. 18, no. 2, p. L75, 1985.
[37] A.-L. Barabási and H. E. Stanley, Fractal concepts in surface growth. Cambridge university press, 1995.
[38] J. Krug and H. Spohn, “Kinetic roughening of growing surfaces,” Cambridge University Press, Cambridge, vol. 1, no. 99,
p. 1, 1991.
[39] T. Halpin-Healy and Y.-C. Zhang, “Kinetic roughening phenomena, stochastic growth, directed polymers and all that.
aspects of multidisciplinary statistical mechanics,” Physics reports, vol. 254, no. 4, pp. 215–414, 1995.
[40] M. Henkel, H. Hinrichsen, S. Lübeck, and M. Pleimling, Non-equilibrium phase transitions, vol. 1. Springer, 2008.
[41] P. W. Anderson, “Absence of diffusion in certain random lattices,” Physical review, vol. 109, no. 5, p. 1492, 1958.
[42] Y. B. Zel’Dovich, S. Molchanov, A. Ruzmaikin, and D. D. Sokolov, “Intermittency in random media,” Physics-Uspekhi,
vol. 30, no. 5, pp. 353–369, 1987.
[43] D. Aharonov and M. Ben-Or, “Fault-tolerant quantum computation with constant error,” in Proceedings of the twenty-ninth annual ACM symposium on Theory of computing, pp. 176–188, ACM, 1997.
[44] D. Aharonov, “Quantum to classical phase transition in noisy quantum computers,” Physical Review A, vol. 62, no. 6,
p. 062311, 2000.
|
{
"disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/1610.00653, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "GREEN",
"url": "https://arxiv.org/pdf/1610.00653"
}
| 2,016
|
[
"JournalArticle"
] | true
| 2016-10-03T00:00:00
|
[
{
"paperId": "26815359d454f17bfdd1ff46e482182b6ff85929",
"title": "Extreme value statistics of correlated random variables"
},
{
"paperId": "ba3844f0112a761fc9354582f5254e8e9db63d28",
"title": "Consensus and Coherence in Fractal Networks"
},
{
"paperId": "68074c123229f29fa742e354a2c0a8b6af8057aa",
"title": "Consensus for Quantum Networks: Symmetry From Gossip Interactions"
},
{
"paperId": "ddbf9bc7a13e503f9afcaa4aea1a6495afb41dc8",
"title": "Quantum Computation and Quantum Information"
},
{
"paperId": "7c588932ea3c80975f063afefee653732084c717",
"title": "Product of Random Stochastic Matrices and Distributed Averaging"
},
{
"paperId": "46db537b24e707bf6229c3b74afbaf1b867e27e6",
"title": "Distributed Averaging Under Constraints on Information Exchange: Emergence of Lévy Flights"
},
{
"paperId": "2784dcfdc3fc5726bc2426692dbad06a5b3338ce",
"title": "Coherence in Large-Scale Networks: Dimension-Dependent Limitations of Local Feedback"
},
{
"paperId": "3b62c331be63a928278282418de40f63ba9b0e48",
"title": "Controllability of complex networks"
},
{
"paperId": "04a7df9214f1c1165dd7e169953ae4d5de412566",
"title": "Diffusion and multiplication in random media"
},
{
"paperId": "cd67aaa9b8bc9ad53a2f0b6392e3ceb8f8c589f7",
"title": "SPECTRAL DISTRIBUTIONS OF ADJACENCY AND LAPLACIAN MATRICES OF RANDOM GRAPHS"
},
{
"paperId": "d015a49ef3685ba48a765a17a442ba2c6507d67a",
"title": "Free-energy distribution of the directed polymer at high temperature"
},
{
"paperId": "dd409a309ba4768c910688efbb0a76ee71e2885d",
"title": "Convergence Rates of Distributed Average Consensus With Stochastic Link Failures"
},
{
"paperId": "24f02bb525da2091346984890401a312a20703fc",
"title": "Gossip consensus algorithms via quantized communication"
},
{
"paperId": "3533a122e4db6b9c8f97dfe6516c7ccc9661e164",
"title": "Coordination and Consensus of Networked Agents with Noisy Measurements: Stochastic Algorithms and Asymptotic Behavior"
},
{
"paperId": "ac01e2bfe4961bef7af7322437bb39b5f0feaa26",
"title": "Consensus over networks with dynamic channels"
},
{
"paperId": "bbadd4370cb353cd31a260668824f27a41c9af56",
"title": "Distributed Consensus in Multi-vehicle Cooperative Control - Theory and Applications"
},
{
"paperId": "6cf90d5847351130d978c2097bfe21957a024f0a",
"title": "Distributed average consensus with stochastic communication failures"
},
{
"paperId": "6ed59b022b11aaa03ed58ec9fb70e16ea64ab9c4",
"title": "Randomized consensus algorithms over large scale networks"
},
{
"paperId": "e419cfbbdd1de7f9a2ed6bb2d5392840dcb2a4fd",
"title": "Statistical physics of social dynamics"
},
{
"paperId": "aa6be519b394b44ab24c6ad964f8a2c6a9b23571",
"title": "Consensus and Cooperation in Networked Multi-Agent Systems"
},
{
"paperId": "bf4c74068a75b3b01497f29c0bfb090cd8124510",
"title": "The Multi-Agent Rendezvous Problem. Part 2: The Asynchronous Case"
},
{
"paperId": "d12c412eaf6c8e6be3814e546c36a8776d5e8fff",
"title": "Flocking for multi-agent dynamic systems: algorithms and theory"
},
{
"paperId": "6ce6aa385045b19268f0620c9192028df51f65e6",
"title": "The Kuramoto model: A simple paradigm for synchronization phenomena"
},
{
"paperId": "8da11c1b11b0d62c68992d019fd3d2d2ed063bfb",
"title": "The Parabolic Anderson Model"
},
{
"paperId": "48372b9fdbe64ec8d619babaf7f7ee734b00127c",
"title": "Fast linear iterations for distributed averaging"
},
{
"paperId": "028440361b95d0377e850da47175198ae227a6f0",
"title": "Coverage control for mobile sensing networks"
},
{
"paperId": "64bda138bb95e4d3796dfc291461f0496a564e33",
"title": "Products of random matrices."
},
{
"paperId": "f0400ba52bf050d36782fe721d394371eba81f83",
"title": "Theory Of Financial Risk And Derivative Pricing"
},
{
"paperId": "85b04239d38468e276deb89f8e02d3e4b5f2cf01",
"title": "Random Incidence Matrices: Moments of the Spectral Density"
},
{
"paperId": "292897bb00885a38d54f5425512031d905676be5",
"title": "Statistical physics of vehicular traffic and some related systems"
},
{
"paperId": "ed19bd9fd44a1b5e4aaede814e1eeba0925b12f8",
"title": "An Introduction to Econophysics: Correlations and Complexity in Finance"
},
{
"paperId": "35a8766be2b5862e99397a77b37f076c42a18d07",
"title": "Quantum to classical phase transition in noisy quantum computers"
},
{
"paperId": "6b811738b111869d50b643d978f63adb72092223",
"title": "Collective Motion"
},
{
"paperId": "24d228df6b7c2e196482a4251c22eadff85eb475",
"title": "Fault-tolerant quantum computation with constant error"
},
{
"paperId": "07a333cf0c75ad7ba4321f24af1a49b98b725792",
"title": "Kinetic roughening phenomena, stochastic growth, directed polymers and all that. Aspects of multidisciplinary statistical mechanics"
},
{
"paperId": "651a41f7078f86c9320bd128ad13718f7721bb9e",
"title": "Generic behavior in linear systems with multiplicative noise."
},
{
"paperId": "b8183abf0a3c9b364caf33b4d75699231a5dc580",
"title": "Replica-scaling analysis of diffusion in quenched correlated random media."
},
{
"paperId": "73d8775c6bb21b2faa882107b3cd59d83e7328cf",
"title": "Load Balancing and Poisson Equation in a Graph"
},
{
"paperId": "1d1efce717535589753503a09eb7ff2c876c0128",
"title": "Anomalous diffusion in disordered media: Statistical mechanisms, models and physical applications"
},
{
"paperId": "706af6fa043dcbed058fdfed8fe939084661f7f7",
"title": "Diffusion in a random catalytic environment, polymers in random media, and stochastically growing interfaces."
},
{
"paperId": "ccc91e6fc8c43f7b3c845ce271acedaef2c515eb",
"title": "Dynamic Load Balancing for Distributed Memory Multiprocessors"
},
{
"paperId": "51eb4d3880943ce8c0c3dc5823f3f5b1257b9542",
"title": "Multidimensional diffusion in random potentials."
},
{
"paperId": "08f5226065e0646e27e27061390570036dadc8bf",
"title": "Scaling of directed polymers in random media."
},
{
"paperId": "919736effeef76a8edaa11a843394680d976707b",
"title": "Intermittency in random media"
},
{
"paperId": "fe3e2ab70e3d106128f85f239d302c00c97d613f",
"title": "Scaling of the active zone in the Eden process on percolation networks and the ballistic deposition model"
},
{
"paperId": "f6f215bf741217b7acf66ffe301dd11b2a4f4d2b",
"title": "Chemical Oscillations, Waves, and Turbulence"
},
{
"paperId": "0c8e7919b31430753d1d3951a21a06a8b2c4f6c2",
"title": "Absence of Diffusion in Certain Random Lattices"
},
{
"paperId": "f657f01c94c62117145c579700cb03beafc2fe9c",
"title": "Stochastic differential equations"
},
{
"paperId": null,
"title": "Lectures on Network Systems . Version 0.85"
},
{
"paperId": "a970d706c7c43f4330f334117041cb316000e87d",
"title": "Theory of Financial Risk and Derivative Pricing: From Statistical Physics to Risk Management"
},
{
"paperId": "0b0cc2c941c10ff180e036c068d51c0d6cc1676c",
"title": "Feedback Control Theory"
},
{
"paperId": null,
"title": "Non-Equilibrium Phase Transitions vol 1 (Berlin: Springer"
},
{
"paperId": "5359fb2362ee22a18a5cc1bf9ff7f69d7ce533bf",
"title": "Distributed average consensus with least-mean-square deviation"
},
{
"paperId": "d70fc00c99028c1f29f53b849903230a9482d5e4",
"title": "The multi-agent rendezvous problem - the asynchronous case"
},
{
"paperId": null,
"title": "Random Matrices vol 142 (New York: Academic"
},
{
"paperId": null,
"title": "Random matrices Academic press"
},
{
"paperId": "69b2c55f2cec2a9144e5bd90b47ba02722f44cc4",
"title": "On the asymptotic eigenvalue distribution of concatenated vector-valued fading channels"
},
{
"paperId": "655c280a2e526b8af5bc03bc6b5fd51940263c5a",
"title": "Coverage control for mobile sensing networks"
},
{
"paperId": "9d6641e3dba49d6b7cecf42d42101af05e295f17",
"title": "An Introduction to Econophysics: Contents"
},
{
"paperId": "310cfa8ebe987b4e0a6c4f361f7e0dbd34dda615",
"title": "Information, physics, and computation"
},
{
"paperId": "4c149382eff152778ba58a8ee8c91732364686e2",
"title": "Fractal Concepts in Surface Growth"
},
{
"paperId": "d3e8569557ad7fd551addab72d32573b22d259d4",
"title": "Fractal Concepts in Surface Growth: Frontmatter"
},
{
"paperId": "beeffbf0b18653ceab7f5bb80a2187b84c18c71c",
"title": "Products of random matrices in statistical physics"
},
{
"paperId": null,
"title": "Kinetic roughening of growing surfaces"
},
{
"paperId": null,
"title": "Let us remark that V = 1 − ξK1 is not completely satisfied in that case because our example is not periodic"
},
{
"paperId": null,
"title": "In the case of symmetric links another term σ"
},
{
"paperId": null,
"title": "would hold for quantum distributed systems and quantum consensus [18]. This work is partially supported by"
}
] | 20,565
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/00f77951251798ae562e1ebe066753594f4b37be
|
[
"Computer Science"
] | 0.866705
|
Private Stream Aggregation with Labels in the Standard Model
|
00f77951251798ae562e1ebe066753594f4b37be
|
Proceedings on Privacy Enhancing Technologies
|
[
{
"authorId": "2053351159",
"name": "J. Ernst"
},
{
"authorId": "2059727806",
"name": "Alexander Koch"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"Proc Priv Enhancing Technol"
],
"alternate_urls": null,
"id": "d5dc4224-e4c3-43c9-918a-bd6326650b5b",
"issn": "2299-0984",
"name": "Proceedings on Privacy Enhancing Technologies",
"type": null,
"url": "https://www.degruyter.com/view/j/popets"
}
|
Abstract A private stream aggregation (PSA) scheme is a protocol of n clients and one aggregator. At every time step, the clients send an encrypted value to the (untrusted) aggregator, who is able to compute the sum of all client values, but cannot learn the values of individual clients. One possible application of PSA is privacy-preserving smart-metering, where a power supplier can learn the total power consumption, but not the consumption of individual households. We construct a simple PSA scheme that supports labels and which we prove to be secure in the standard model. Labels are useful to restrict the access of the aggregator, because it prevents the aggregator from combining ciphertexts with different labels (or from different time-steps) and thus avoids leaking information about values of individual clients. The scheme is based on key-homomorphic pseudorandom functions (PRFs) as the only primitive, supports a large message space, scales well for a large number of users and has small ciphertexts. We provide an implementation of the scheme with a lattice-based key-homomorphic PRF (secure in the ROM) and measure the performance of the implementation. Furthermore, we discuss practical issues such as how to avoid a trusted party during the setup and how to cope with clients joining or leaving the system.
|
### Johannes Ernst* and Alexander Koch
# Private Stream Aggregation with Labels in the Standard Model[†]
**Abstract: A private stream aggregation (PSA) scheme**
is a protocol of n clients and one aggregator. At every time step, the clients send an encrypted value to
the (untrusted) aggregator, who is able to compute the
sum of all client values, but cannot learn the values of
individual clients. One possible application of PSA is
privacy-preserving smart-metering, where a power supplier can learn the total power consumption, but not the
consumption of individual households.
We construct a simple PSA scheme that supports labels
and which we prove to be secure in the standard model.
Labels are useful to restrict the access of the aggregator, because it prevents the aggregator from combining
ciphertexts with different labels (or from different timesteps) and thus avoids leaking information about values
of individual clients.
The scheme is based on key-homomorphic pseudorandom functions (PRFs) as the only primitive, supports
a large message space, scales well for a large number of
users and has small ciphertexts.
We provide an implementation of the scheme with
a lattice-based key-homomorphic PRF (secure in the
ROM) and measure the performance of the implementation. Furthermore, we discuss practical issues such as
how to avoid a trusted party during the setup and how
to cope with clients joining or leaving the system.
**Keywords: Private Stream Aggregation, Aggregator**
Obliviousness, Standard Model, Pseudorandom Function, Lattice-Based Cryptography, Learning With
Rounding, Smart-Meters
DOI 10.2478/popets-2021-0063
Received 2021-02-28; revised 2021-06-15; accepted 2021-06-16.
***Corresponding Author: Johannes Ernst: University of**
St. Gallen (most of the work done while at KIT, Karlsruhe)
**Alexander Koch: Competence Center for Applied Security**
Technology (KASTEL), Karlsruhe Institute of Technology
† An extended abstract of this work appeared in [19]
## 1 Introduction
Smart meters are becoming more and more ubiquitous
in many countries. This has advantages for the power
suppliers, because they get near real-time power consumptions from their clients, which they can use for
load-balancing and prediction in their networks. However, this raises the question of privacy. Sensitive information like the work schedule can easily be guessed from
the variation in the power consumption of a household.
In practice, it is often sufficient for the power supplier to
know the sum of the consumptions of all clients within
a certain area, and this is exactly what a protocol for
private stream aggregation (PSA) [22] can offer. PSA
considers the scenario where an aggregator wishes to
periodically compute the sum of values that are supplied by different clients. The values are encrypted in
such a way that the aggregator can only compute their
sum, but not the individual values. This is captured in
the game-based security definition of aggregator obliviousness (AO), which is given in Definition 3.
It is desirable for PSA schemes to support the use
of labels. Labels restrict the aggregator to only be able
to compute the sum of values which were encrypted under the same label. This prevents the aggregator from
mixing ciphertexts of different time steps and thereby
learning more about the individual values than would
otherwise be possible.
A clear advantage of PSA schemes is that they do
not require the clients to exchange messages, nor does
the aggregator need to send messages to the clients. After the keys have been distributed, the only messages in
the protocol are the ciphertexts which the clients send
to the aggregator. PSA stays secure even if the aggregator colludes with an arbitrary subset of the clients.
In that case, the aggregator only learns the sum of the
non-colluding clients’ values.
When we apply a PSA protocol to the smart meter
scenario, this means that every smart meter encrypts
(e.g. every fifteen minutes) its current power consumption and sends it to the supplier. The supplier is then able to compute the sum and thereby learns the power consumption of all households in the specific area. Because of the way the clients’ values are encrypted, the _only_ information the power supplier gets is the sum of all values.
To further protect the privacy of each client, techniques from differential privacy can be used. In this case,
it means that every client adds a small amount of noise
to their value before encrypting it. This induces a small
error in the resulting sum, but in many cases a small error is tolerable. However, in this paper we focus on the
encryption part. Differential privacy can then be added
by standard techniques, e.g. as described in [22].
Apart from privacy-preserving smart metering, PSA
has a lot more of possible applications. For example it
can be used in federated learning in a similar way to
the protocol of [11], to compute a global model update
from the local updates that are supplied by the clients.
This can help to prevent an adversary from using the
model to infer information on the, possibly sensitive,
data which the clients used to train the model.
PSA (without differential privacy) can be seen as a special case of inner-product multi-client functional encryption (IP-MCFE), first introduced by [16]. In inner-product multi-client functional encryption there are several clients and one or more aggregators. The aggregators can ask for functional decryption keys associated
with an arbitrarily chosen vector y. The functional decryption key then enables them to compute the inner
product of the clients’ values with the vector y. When
we only allow the vector y = (1, . . ., 1), then this is exactly the case of PSA (without differential privacy). The
main challenge in IP-MCFE is that the scheme must be
secure, although the vectors y are not known at the
beginning and an aggregator can hold keys for many
different vectors.
### 1.1 Contribution
**Provably secure PSA scheme with labels in the standard model:** The scheme we propose is the first
PSA scheme that both supports labels and is proven to
be aggregator oblivious (AO) with adaptive corruptions
in the standard model. Although, strictly speaking, the
IP-MCFE scheme of [1] can also be used as PSA scheme
with the same properties, compared to their scheme,
ours is more efficient. The size of each user secret key
and the length of the ciphertexts in their scheme grow
linearly with the number of clients, whereas ours are constant, i.e. independent of the number of clients. Becker
et al. have also proposed a PSA scheme in the standard model, but without labels [9]. They roughly explain how to extend their scheme to support labels, but
they provide no security proof of this extension. Furthermore, their scheme seems to be subject to a patent [8].
Our scheme is very similar to the PSA scheme of [24]
who used key-homomorphic weak PRFs but only proved
non-adaptive security in the standard model. Also, as
opposed to our scheme, the labels have to be precomputed at setup time and distributed to all clients. Thus,
the scheme does not support an unbounded number of
labels and the key size grows linearly with the number
of labels.
Our scheme needs as its only building block a key-homomorphic pseudorandom function (PRF) whose output space is Z_R for some integer R. It thus relies only on secret-key primitives and is quite flexible. It makes efficient use of the key-homomorphic PRF, as both encryption and decryption only require one PRF evaluation. The scheme can be instantiated with a lattice-based key-homomorphic PRF, which is assumed to be secure against quantum adversaries.
Additionally, we implemented the scheme and a simple key-homomorphic PRF based on the learning with
rounding (LWR) problem. For simplicity and efficiency,
we chose a PRF that relies on the random oracle model
(ROM). The performance tests show that both encryption and decryption are very efficient, in this case.
One restriction of our scheme is that it only supports the encryption of one message per label. However,
this is a mild restriction, because any (correct) PSA
scheme leaks information about individual messages, if
a user encrypts more than one message per label.
Furthermore, we concretely describe how the
scheme can be used for privacy-preserving smart-meter
aggregation. We discuss issues that arise in that setting,
such as clients joining or leaving the system and how to
execute the setup without a trusted party.
### 1.2 Related Work
In this section, we give an overview of other work that
is related to ours.
**1.2.1 Privacy Preserving Aggregation**
Shi et al. [22] were the first to formalize the notion of
PSA together with the security definition of AO. They
propose a scheme that is based on the Decisional Diffie–
Hellman (DDH) problem and prove it to be aggregator
oblivious in the ROM. Despite its simplicity, the decryption procedure is inefficient because it has to compute
a discrete logarithm. This limits the size of the message
space such that the discrete logarithm can be computed
in reasonable time.
Subsequently, Benhamouda et al. [10] propose a general way to build PSA schemes from key-homomorphic
smooth projective hash functions (SPHF). Their construction yields schemes that are aggregator oblivious
in the ROM. They give concrete instantiations from the
DDH-, DCR- and several other assumptions. An advantage over previous schemes is the low reduction loss,
which does not depend on the number of users, but only
on the maximum number of labels used. This allows for
smaller keys and thereby makes the schemes more efficient.
Valovich constructs a PSA scheme from keyhomomorphic weak PRFs [24]. In contrast to the other
PSA schemes, the author only considers semi-honest adversaries. He proves the scheme to be aggregator oblivious for non-adaptive corruptions in the standard model.
For proving adaptive security, the author resorts to the
ROM. The main differences to our scheme are that [24]
has a weaker security model and that the author uses
_weak PRFs. Because of the use of weak PRFs, the labels_
need to be uniformly random. Therefore, the set of labels is created at setup time and given to all parties, to
ensure that everyone uses the same labels. This means
that the scheme does not support an unbounded number of labels and that the key size grows linearly with
the number of labels.
Becker et al. propose a generic PSA scheme [9] that
can be instantiated with an additively homomorphic encryption scheme, where the addition of ciphertexts corresponds to the addition of plaintexts, with the additional property that the ciphertexts are indistinguishable from random strings. The security of their scheme
relies on the Learning with Errors (LWE) assumption,
or its ring variant, and is proven to be secure in the
standard model. Two advantages over the scheme of Shi
et al. [22] are the prospective post-quantum security and
the more efficient decryption algorithm. In contrast to
the other PSA schemes, they provide an implementation
and give performance results. Their implementation has
a message space of size 2^16. The scheme does not directly
support labels which limits the practical use cases. Although the authors sketch how to extend the scheme
to work with labels, they provide no security proof for
that.
Except for [9] all other PSA schemes, including ours,
have the restriction that every client must only encrypt
one message per label. However, this restriction is also
reasonable from a security perspective, because even a
perfectly secure PSA scheme leaks information about individual client values, if a client encrypts more than one
message per label. We elaborate on this in Section 2.4.1.
The advantage of our scheme over [22] and the DDH
version of [10] is that the size of the message space is
not restricted. The advantage over [10, 22, 24] is that
we prove our scheme to be secure under adaptive corruptions in the standard model. A disadvantage of our
scheme is the larger key size. Benhamouda et al. [10]
report key sizes of 592 bit for 128 bit security for their
DDH based scheme when using elliptic curves. For our
choices of parameters our scheme provides a security
of 114 bit¹ with key sizes of 268288 bit. Note, however,
that this is mainly due to the use of a post-quantum
secure PRF. We give more details on this in Section 4.4.
Advantages of our scheme over [9] are that our scheme
has a security proof for the case that labels are used
and that it is much simpler. Our scheme relies on key-homomorphic PRFs while theirs needs an additively homomorphic public-key encryption scheme with ciphertexts indistinguishable from random.
The following works are not directly comparable to
ours, because their focus is a bit different.
Emura considers the verifiability of aggregated
sums [18]. This means that, when the aggregator publishes the sum, they must provide a publicly verifiable
proof that the sum is correct. The author proposes two
schemes, which are based on the DDH version of [10]
and require pairings. In this paper we do not consider
the verifiability of the result.
Ács and Castelluccia [3] propose an aggregation
scheme that uses similar pair-wise masking as the
MCFE scheme of [1]. The users in their scheme agree
on shared keys via the Diffie–Hellman key-exchange. Additionally, the scheme offers a mechanism by which the
aggregator can decrypt the sum, even when some clients
drop out. As opposed to PSA, this mechanism needs interaction between the clients and the aggregator. The
authors do not provide a security proof, but argue that
the scheme is secure against certain attacks.
Bonawitz et al. construct a protocol for the aggregation of model updates in distributed machine learning [11]. Their protocol is resistant to user failures and
is proven secure against malicious adversaries in the
ROM. They use pairwise masks that are set up by
Diffie–Hellman key exchanges between the users. The
protocol has four rounds of communication to aggre
**1** When used with 1,000,000 clients; with 10,000 clients the security is 132 bit.
gate one model update. The key difference to PSA is
that their protocol is resistant to user failures, but requires several rounds of communication, whereas PSA
is non-interactive.
**1.2.2 Multi-Client Functional Encryption**
Chotard et al. [16] are the first to define (decentralized)
multi-client functional encryption (DMCFE) and show
two instantiations for the inner product functionality.
Their schemes have several practical limitations and rely
on pairings and the ROM. Abdalla et al. [2] address
these limitations and remove the need for pairings, while
still relying on the ROM.
Finally, Abdalla et al. [1] construct the first MCFE
scheme with labels that is secure in the standard model.
Their scheme works with any PRF and (single-input)
functional encryption scheme for inner products. Due
to that, their scheme can be based upon many different
mathematical problems including LWE, DDH and DCR.
Our security proof strongly relies on techniques from the
security proof of their MCFE scheme.
The main differences between PSA and MCFE are
that in MCFE the aggregator(s) can compute inner
products with many different vectors and not just the
vector consisting of one-entries. Furthermore, in MCFE
these vectors can be chosen by the aggregator(s) adaptively during the protocol execution. This makes schemes for MCFE harder to construct and usually less efficient.
Nevertheless, both areas have a lot of techniques in common.
### 1.3 Concurrent Work
Independent of and concurrent to our work, two more
papers on PSA have been published in online pre-print
archives recently.
The authors of [23] propose two PSA schemes
based on variants of two fully homomorphic encryption
schemes. The security of both schemes relies on the ring-LWE assumption. They also implement the schemes and
provide a performance analysis. According to this analysis our scheme seems to be faster, however.
Waldner et al. propose a PSA scheme based on
PRFs [25]. As opposed to our scheme, the PRF does
not need to be key-homomorphic. This enables the use
of very efficient PRFs. In [25], the authors use AES
and SHA3. However, this comes at the cost of requiring n evaluations of the PRF for encrypting one message, where n is the number of users. The authors also
provide an implementation and performance results.
In Section 4, we compare the running time of the
aforementioned schemes with our implementation.
Table 1 shows a comparison of the different properties of the PSA schemes that we described in this section.
|Scheme|Proof in standard model|Number of supported labels|Adaptive corruptions|Encryption cost per client|
|---|---|---|---|---|
|Shi et al. [22]|✗|unbounded|✓|O(1)|
|Benhamouda et al. [10]|✗|unbounded|✓|O(1)|
|Valovich [24] (1st scheme)|✓|bounded|✗|O(1)|
|Valovich [24] (2nd scheme)|✗|bounded|✓|O(1)|
|LaPS [9]|✓|none|✓|O(1)|
|SLAP [23]|✗|unbounded|✓|O(1)|
|LaSS [25]|✓|unbounded|✓|O(n)|
|Our scheme|✓|unbounded|✓|O(1)|

**Table 1.** Comparison of PSA schemes. Note that the DDH-based schemes in [22] and [10] have the limitation that the message space needs to be small in order to allow taking a discrete logarithm in reasonable time.
### 1.4 Outline
We give the necessary definitions and background in Section 2. In Section 3 we explain our PSA scheme and
prove its security according to the game-based security
definition of AO. In Section 4 we describe the implementation and choices of parameters and give performance
results. In Section 5 we discuss several issues related
to the deployment of the scheme in practice, with a focus on smart-meters. The last section summarizes our
paper.
## 2 Preliminaries
In this section, we explain our basic notation and define
the cryptographic problems and primitives we use in
this paper.
### 2.1 Notation
Here, we quickly explain some of the notation that we use. By [n] we denote the set {1, . . ., n} and by [n]_0 we mean {0, . . ., n}. With log(x) we mean the logarithm to base 2 and with ln(x) the logarithm to base e. As security parameter we use λ. Lower-case boldface letters such as v denote vectors. We use the terms _client_ and _user_ synonymously. By a _PPT Turing machine_, we mean a probabilistic Turing machine that runs in polynomial time. By x ←$ X we mean that x is chosen uniformly at random from the set X. With ⟨x, y⟩ we denote the inner product of two vectors x and y. Let q, p ∈ N with q > p. Then, for a value x ∈ Z_q we define ⌊x⌋_p := ⌊x · p/q⌋.
### 2.2 Learning With Rounding (LWR)
We will define the learning with rounding (LWR) problem, which can be seen as a deterministic version of
the learning with errors (LWE) problem. Learning with
rounding was introduced in [7] and has turned out to
be very useful to construct secret-key primitives such as
pseudorandom functions.
**Definition 1.** Let $\lambda, q, p \in \mathbb{N}$ with $q > p$ and $s \leftarrow_{\$} \mathbb{Z}_q^\lambda$. Let $L_s$ be the following distribution over $\mathbb{Z}_q^\lambda \times \mathbb{Z}_p$: choose $a \leftarrow_{\$} \mathbb{Z}_q^\lambda$ and output $(a, \lfloor \langle a, s \rangle \rfloor_p)$. The (decision) LWR problem then is to distinguish between the distribution $L_s$ and the uniform distribution over $\mathbb{Z}_q^\lambda \times \mathbb{Z}_p$.
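As a concrete illustration, the following Go sketch draws a single sample from the distribution $L_s$ of Definition 1. It uses deliberately tiny toy parameters, not the secure ones of Section 4.2; all names are our own and purely illustrative.

```go
// Toy LWR sampler for Definition 1: returns (a, floor(<a,s>)_p) for a
// uniformly random a in Z_q^lambda. These parameters are far too small
// to be secure; see Section 4.2 for the parameters we actually use.
package main

import (
	"crypto/rand"
	"fmt"
	"math/big"
)

func lwrSample(s []*big.Int, q, p *big.Int) ([]*big.Int, *big.Int) {
	a := make([]*big.Int, len(s))
	ip := new(big.Int)
	for i := range s {
		a[i], _ = rand.Int(rand.Reader, q) // a_i uniform in Z_q
		ip.Add(ip, new(big.Int).Mul(a[i], s[i]))
	}
	ip.Mod(ip, q)            // <a, s> in Z_q
	ip.Mul(ip, p).Div(ip, q) // round down to Z_p: floor(x * p / q)
	return a, ip
}

func main() {
	q, p := big.NewInt(1<<16), big.NewInt(1<<8)
	s := make([]*big.Int, 4) // toy secret with lambda = 4
	for i := range s {
		s[i], _ = rand.Int(rand.Reader, q)
	}
	a, b := lwrSample(s, q, p)
	fmt.Println(a, b)
}
```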
We will use a key-homomorphic pseudorandom function
that is based on the LWR problem to instantiate our
scheme. Next we define pseudorandom functions and
key-homomorphic pseudorandom functions.
### 2.3 Pseudorandom Functions

Intuitively, a pseudorandom function (PRF) is a function that is indistinguishable from a random function (RF). A random function is a function that returns truly random values on all distinct inputs. We use pseudorandom functions $\mathrm{PRF}_k : \mathcal{X} \to \mathcal{Y}$ that are indexed by a key $k \in \mathcal{K}$. For a PPT adversary $\mathcal{A}$, we define $\mathcal{A}$'s advantage in distinguishing a pseudorandom function PRF from a random function as

$$\mathrm{Adv}^{\mathrm{prf}}_{\mathcal{A},\mathrm{PRF}}(\lambda) := \left|\Pr[\mathrm{Exp}^{\mathrm{PRF}}(\lambda, \mathcal{A}) = 1] - \Pr[\mathrm{Exp}^{\mathrm{RF}}(\lambda, \mathcal{A}) = 1]\right|,$$

where the experiment $\mathrm{Exp}^{\mathrm{PRF}}(\lambda, \mathcal{A})$ samples $k \leftarrow_{\$} \mathcal{K}$ and outputs $b \leftarrow \mathcal{A}^{\mathrm{PRF}_k(\cdot)}$, and the experiment $\mathrm{Exp}^{\mathrm{RF}}(\lambda, \mathcal{A})$ outputs $b \leftarrow \mathcal{A}^{\mathrm{RF}(\cdot)}$.

In the first case, $\mathcal{A}$ has oracle access to PRF indexed by a random key $k$, whereas in the second case $\mathcal{A}$ has oracle access to a random function. Intuitively, $\mathcal{A}$'s goal can be seen as finding out whether they are in $\mathrm{Exp}^{\mathrm{PRF}}$ or $\mathrm{Exp}^{\mathrm{RF}}$. A pseudorandom function is *computationally indistinguishable* from a random function if for all PPT adversaries $\mathcal{A}$ there exists a negligible function negl such that for all sufficiently large $\lambda$ it holds that

$$\mathrm{Adv}^{\mathrm{prf}}_{\mathcal{A},\mathrm{PRF}}(\lambda) \leq \mathrm{negl}(\lambda).$$
**2.3.1 Key-Homomorphic Pseudorandom Functions**

A useful special case of pseudorandom functions are key-homomorphic pseudorandom functions. A pseudorandom function $\mathrm{PRF}_k : \mathcal{X} \to \mathcal{Y}$ is key-homomorphic if, for all $x \in \mathcal{X}$, $\mathrm{PRF}_{(\cdot)}(x)$ is a group homomorphism between the key space $\mathcal{K}$ and $\mathcal{Y}$. To define this formally, let $(\mathcal{K}, *)$ and $(\mathcal{Y}, \bullet)$ be groups. Then, for all $x \in \mathcal{X}$ and $k_1, k_2 \in \mathcal{K}$,

$$\mathrm{PRF}_{k_1}(x) \bullet \mathrm{PRF}_{k_2}(x) = \mathrm{PRF}_{k_1 * k_2}(x)$$

must hold. A key-homomorphic PRF must fulfill the same security definition as a PRF.

A PRF is *almost* key-homomorphic if $\mathrm{PRF}_{k_1+k_2}(x) = \mathrm{PRF}_{k_1}(x) + \mathrm{PRF}_{k_2}(x) + e$ for a small $e \in \mathbb{N}$. The PRF we use as the main building block in our PSA scheme is almost key-homomorphic with $e \in \{0, 1\}$.
### 2.4 Private Stream Aggregation
In private stream aggregation (PSA) we have an aggregator and several clients. In each time step, the clients
send an encrypted value to the aggregator. The aggregator is then able to compute the sum of these values but
no individual client value. It is important that the aggregator can only compute the sum of values that were
encrypted under the same time-stamp or label. There
is no further interaction beyond the messages that the
clients send to the aggregator.
Often in PSA differential privacy is additionally
used. For this, every client adds a small amount of noise
to their value before encrypting it. The aggregator can
then compute the resulting noisy sum of the plaintexts.
When the noise is chosen appropriately, and enough
clients honestly add noise, the noisy sum maintains differential privacy. However, in this paper we are only concerned with the encryption and will leave out the noise
in the definition of PSA. Nevertheless, it is no problem
to add differential privacy via standard techniques (e.g.
as in [22]).
Our definition roughly follows the definition of [10],
as it is also without noise.
**Definition 2 (Private Stream Aggregation).** A *private stream aggregation* scheme PSA over $\mathbb{Z}_R$ (for $R \in \mathbb{N}$) and label space $\mathcal{L}$ consists of the following three PPT algorithms for the setup, the encryption and the decryption of the aggregate sum:

– $\mathrm{Setup}(1^\lambda, 1^n)$: Given the security parameter $\lambda$ and the number of users $n$ in unary, it outputs public parameters pp and $n + 1$ keys $(k_i)_{i \in [n]_0}$. The key $k_0$ is the (secret) key of the aggregator, and each $k_i$ is a (secret) key of a user $i \in [n]$.
– $\mathrm{Enc}(\mathrm{pp}, k_i, l, x_i)$: Given the public parameters pp, a key $k_i$ of user $i \in [n]$, a label $l \in \mathcal{L}$ and a value $x_i \in \mathbb{Z}_R$, it outputs an encryption $c_i$ of $x_i$ under key $k_i$ with label $l$. This algorithm is supposed to be executed by each user at every time step, where the time step is used as label. The user then sends $c_i$ to the aggregator.
– $\mathrm{AggrDec}(\mathrm{pp}, k_0, l, \{c_i\}_{i \in [n]})$: Given the public parameters pp, the aggregator's key $k_0$, a label $l \in \mathcal{L}$, and a set of $n$ ciphertexts $\{c_i\}_{i \in [n]}$ that were encrypted under the same label $l$, it outputs $\sum_{i \in [n]} x_i \pmod R$.

We additionally require PSA = (Setup, Enc, AggrDec) to satisfy *correctness*, i.e., that for any $n, \lambda \in \mathbb{N}$, $x_1, \ldots, x_n \in \mathbb{Z}_R$ and any label $l \in \mathcal{L}$, for $(\mathrm{pp}, \{k_i\}_{i \in [n]_0}) \leftarrow \mathrm{Setup}(1^\lambda, 1^n)$ and $c_i \leftarrow \mathrm{Enc}(\mathrm{pp}, k_i, l, x_i)$, we have

$$\mathrm{AggrDec}(\mathrm{pp}, k_0, l, \{c_i\}_{i \in [n]}) = \sum_{i \in [n]} x_i \bmod R.$$
In most PSA schemes (including ours) the sum is computed modulo a public integer $R$. When the goal is to compute the sum over $\mathbb{Z}$ instead of $\mathbb{Z}_R$, the clients must be restricted to only encrypt values smaller than a certain value $\omega$, and $R$ must be chosen to be greater than $n \cdot \omega$. This difference can be important, because some proofs only go through when the message space is a group. However, in our proofs it makes no difference whether the clients are allowed to encrypt values from $\mathbb{Z}_R$ or $\{0, \ldots, \omega\}$.
Usually in a PSA scheme, a trusted third party executes the setup algorithm and gives the secret keys to
the clients and the aggregator. The clients then regularly encrypt some value and send the ciphertext to the
aggregator. By calling AggrDec the aggregator is then
able to decrypt the sum of the values. In Section 5.1, we describe how the trusted setup can be avoided.
Next we define the security notion of aggregator obliviousness. We only define encrypt-once security,
which is security in the case that every client encrypts
only one message per label. This is a reasonable restriction, because it can be easily enforced in practice. Furthermore, encrypting two messages per label leaks the
difference of the messages as explained in Section 2.4.1.
The PSA schemes of [22] and [10] both have this restriction as well.
**Definition 3 (Aggregator obliviousness).** The game-based security notion of aggregator obliviousness (AO) is defined via the security experiment $\mathrm{AO}_b(\lambda, n, \mathcal{A})$, $b \in \{0, 1\}$, given in Figure 1. First, the challenger runs Setup and passes the public parameters pp to the adversary $\mathcal{A}$. Then, $\mathcal{A}$ can adaptively ask queries to the following oracles:

$\mathrm{QEnc}(i, x_i, l)$: Given a user index $i \in [n]$, a value $x_i \in \mathbb{Z}_R$, and a label $l$, it answers with $c = \mathrm{Enc}(\mathrm{pp}, k_i, l, x_i)$.

$\mathrm{QCorrupt}(i)$: Given a user index $i \in [n]_0$ (including the aggregator's index 0), it returns the secret key $k_i$.

$\mathrm{QChallenge}(\mathcal{U}, \{x_i^0\}_{i \in \mathcal{U}}, \{x_i^1\}_{i \in \mathcal{U}}, l^*)$: The adversary specifies a set of users $\mathcal{U} \subseteq [n]$, a label $l^*$ and two challenge messages for each user from $\mathcal{U}$. The oracle answers with encryptions of $x_i^b$, that is, $\{c_i \leftarrow \mathrm{Enc}(\mathrm{pp}, k_i, l^*, x_i^b)\}_{i \in \mathcal{U}}$. This oracle can only be queried once during the game. (If it is not queried, we set $\mathcal{U} = \emptyset$ in the discussion below.)

At the end, $\mathcal{A}$ outputs a guess $\alpha$ of whether $b = 0$ or $b = 1$.
$\mathrm{AO}_b(\lambda, n, \mathcal{A})$:
```
(pp, {k_i}_{i in [n]_0}) <- Setup(1^lambda, 1^n)
alpha <- A^{QCorrupt(.), QEnc(.,.,.), QChallenge(.,.,.,.)}(pp)
if condition (*) is satisfied (see below):
    output alpha
else:
    output 0
```

**Fig. 1.** Aggregator obliviousness experiment for PSA schemes. Depending on the bit $b$, the oracle QChallenge answers with encryptions of $x_i^0$ or $x_i^1$.
To formally define the condition (∗), we introduce the following sets:
– Let $E_l \subseteq [n]$ be the set of all users for which $\mathcal{A}$ has asked an encryption query on label $l$.
– Let $\mathcal{CS} \subseteq [n]$ be the set of users for which $\mathcal{A}$ has asked a corruption query. Even if the aggregator is corrupted, we define this set to only contain the corrupted users and not the aggregator.
– Let $Q_{l^*} := \mathcal{U} \cup E_{l^*}$ be the set of users for which $\mathcal{A}$ asked a challenge or encryption query on label $l^*$.

We say that condition (∗) is satisfied (as used in Figure 1) if all of the following conditions hold:

– $\mathcal{U} \cap \mathcal{CS} = \emptyset$. This means that all users for which $\mathcal{A}$ receives a challenge ciphertext must stay uncorrupted during the entire game.
– $\mathcal{A}$ has not queried $\mathrm{QEnc}(i, x_i, l)$ twice for the same $(i, l)$. Doing so would violate the encrypt-once restriction.
– $\mathcal{U} \cap E_{l^*} = \emptyset$. This means that $\mathcal{A}$ is not allowed to get a challenge ciphertext from users for which they ask an encryption query on the challenge label $l^*$. Doing this would violate the encrypt-once restriction.
– If $\mathcal{A}$ has corrupted the aggregator and $Q_{l^*} \cup \mathcal{CS} = [n]$, then we require that
$$\sum_{i \in \mathcal{U}} x_i^0 = \sum_{i \in \mathcal{U}} x_i^1.$$
We will call this condition the *balance condition*.
The balance condition captures the fact that if $\mathcal{A}$ has corrupted the aggregator and received a ciphertext from every uncorrupted user, then they can compute the sum of the plaintexts. If the plaintexts submitted in the challenge query would sum to different values, then $\mathcal{A}$ could trivially win the game by using their aggregation capability. Note that the balance condition does not apply if there is a single honest user for which $\mathcal{A}$ did not get a ciphertext on label $l^*$.
We say that corruptions are *adaptive*, because $\mathcal{A}$ can ask corruption queries depending on previously asked queries. If $\mathcal{A}$ has to decide at the beginning of the game which users they want to corrupt, the term *static corruptions* is used in the literature. In this paper we only consider adaptive corruptions, because it is the more realistic assumption and because security under adaptive corruptions implies security under static corruptions. We define $\mathcal{A}$'s advantage as

$$\mathrm{Adv}^{\mathrm{AO}}_{\mathcal{A},\mathrm{PSA}}(\lambda, n) = \left|\Pr[\mathrm{AO}_0(\lambda, n, \mathcal{A}) = 1] - \Pr[\mathrm{AO}_1(\lambda, n, \mathcal{A}) = 1]\right|.$$

A PSA scheme is *aggregator oblivious* if for every PPT adversary $\mathcal{A}$ there is a negligible function negl such that for all sufficiently large $\lambda$

$$\mathrm{Adv}^{\mathrm{AO}}_{\mathcal{A},\mathrm{PSA}}(\lambda, n) \leq \mathrm{negl}(\lambda).$$
**2.4.1 Inherent Leakage of Sum Queries**

Here we briefly explain why it is dangerous in a PSA scheme when a client encrypts more than one message per label, even though the scheme may be formally secure. Imagine that user $i \in [n]$ encrypts both $x_i$ and $x_i'$ as ciphertexts $c_i$ and $c_i'$, respectively, with the same label $l$. When the aggregator has ciphertexts for the same label $l$ from the other users as well, they can use AggrDec to compute

$$\mathrm{AggrDec}(\mathrm{pp}, k_0, l, (c_1, \ldots, c_i, \ldots, c_n)) - \mathrm{AggrDec}(\mathrm{pp}, k_0, l, (c_1, \ldots, c_i', \ldots, c_n)) = \Big(\sum_{j \in [n] \setminus \{i\}} x_j\Big) + x_i - \Big(\sum_{j \in [n] \setminus \{i\}} x_j\Big) - x_i' = x_i - x_i'.$$

With this, the aggregator learns the difference of the two messages of user $i$. It also means that if the aggregator knows one of the two messages, they can compute the other one. If the aggregator has two ciphertexts from more than one client, then they can combine them in arbitrary ways to get even more information. This leakage cannot be avoided, because it is leaked by the sum functionality itself. This is also the reason why, in this paper, we restrict the clients to encrypting only one message per label (encrypt-once).
## 3 Adaptively Secure PSA
In this section, we construct a scheme for private stream
aggregation and prove that it is aggregator oblivious
under adaptive corruptions in the standard model. We
will define the scheme without noise. The noise can be
added via standard techniques (e.g. as in [22]), to ensure
differential privacy.
In the security proofs, we use diagrams to illustrate the game hops. Figure 2 shows how to read these diagrams. In this example there are the four games G0 to G3. In game G0, only the unmodified lines are executed, that is, the lines which are neither framed nor gray. Thus, in G0 only line 1 is executed. Game G1 additionally executes the lines that are framed by a rectangular box, but that are not gray. In our example, G1 executes lines 1 and 2. Game G2 executes all unmodified lines, all framed lines and all gray lines, which in this case are all four lines. In game G3, only the unmodified lines are executed and the lines that are gray, but not framed. Therefore, in G3 the lines 1 and 4 are executed.

**Fig. 2.** Figure showing how to read the game hop diagrams.
### 3.1 The Construction

Our scheme makes use of a key-homomorphic PRF to create pseudorandom pads which are added to the messages as encryption. Let $\mathrm{PRF}_k : \mathcal{X} \to \mathcal{Y}$ be the key-homomorphic PRF, where the key space $(\mathcal{K}, +)$ and $(\mathcal{Y}, +)$ are abelian groups. Thus, for all $x \in \mathcal{X}$, $\sum_i \mathrm{PRF}_{k_i}(x) = \mathrm{PRF}_{\sum_i k_i}(x)$ holds. For our use we require that $(\mathcal{Y}, +)$ is the group $(\mathbb{Z}_R, +)$ for some integer $R$. In Section 4, we describe how to instantiate the scheme with a lattice-based key-homomorphic PRF. Throughout this section we will often write $\sum_{i \in [n]} x_i$ and omit the mod $R$ when it is clear from the context; that is, $\sum_{i \in [n]} x_i$ denotes the sum in $\mathbb{Z}_R$ and not the sum in $\mathbb{Z}$. Also, we will write $\sum_{i \in [n]} k_i$, by which we mean the sum in the group $\mathcal{K}$.
We define the PSA scheme PSA = (Setup, Enc, AggrDec) in Figure 3. The setup algorithm chooses $n$ random keys for the key-homomorphic PRF and defines the aggregation key as the sum of the client keys. The encryption algorithm uses the key-homomorphic PRF to create a pseudorandom pad and adds it to the message modulo the public modulus $R$. The decryption algorithm sums together all ciphertexts, which yields the sum of the client values plus the sum of the pseudorandom pads. Because the key of the aggregator is the sum of the client keys, the aggregator can compute the sum of the pseudorandom pads and subtract it from the ciphertexts' sum to obtain the sum of the plaintexts.

In Section 3.2, we show that this construction is aggregator oblivious under adaptive corruptions if the key-homomorphic PRF is indistinguishable from a random function. If the PRF is secure in the standard model, then our construction is also secure in the standard model.

$\mathrm{Setup}(1^\lambda, 1^n)$:
```
for i in [n]: k_i <-$ K
k_0 := sum_{i in [n]} k_i
pp := R (the modulus)
return (pp, {k_i}_{i in [n]_0})
```

$\mathrm{Enc}(\mathrm{pp}, k_i, l, x_i)$:
```
t_{i,l} := PRF_{k_i}(l)
c_{i,l} := x_i + t_{i,l} mod R
return c_{i,l}
```

$\mathrm{AggrDec}(\mathrm{pp}, k_0, l, \{c_{i,l}\}_{i \in [n]})$:
```
t_{0,l} := PRF_{k_0}(l)
return sum_{i in [n]} c_{i,l} - t_{0,l} mod R
```

**Fig. 3.** The PSA scheme that uses key-homomorphic PRFs.
Next, we show that PSA is correct. Note that because PRF is key-homomorphic, we have $\sum_{i \in [n]} \mathrm{PRF}_{k_i}(l) = \mathrm{PRF}_{\sum_{i \in [n]} k_i}(l) = \mathrm{PRF}_{k_0}(l)$.

**Correctness:** Let $n, \lambda \in \mathbb{N}$, $(\mathrm{pp}, \{k_i\}_{i \in [n]_0}) \leftarrow \mathrm{Setup}(1^\lambda, 1^n)$, $l \in \mathcal{L}$, $x_i \in \mathbb{Z}_R$, $c_{i,l} \leftarrow \mathrm{Enc}(\mathrm{pp}, k_i, l, x_i)$. Then we have

$$\begin{aligned}
\mathrm{AggrDec}(\mathrm{pp}, k_0, l, \{c_{i,l}\}_{i \in [n]}) &= \sum_{i \in [n]} c_{i,l} - \mathrm{PRF}_{k_0}(l) \bmod R \\
&= \sum_{i \in [n]} \big(x_i + \mathrm{PRF}_{k_i}(l)\big) - \mathrm{PRF}_{k_0}(l) \bmod R \\
&= \sum_{i \in [n]} x_i + \sum_{i \in [n]} \mathrm{PRF}_{k_i}(l) - \mathrm{PRF}_{k_0}(l) \bmod R \\
&= \sum_{i \in [n]} x_i + \mathrm{PRF}_{\sum_{i \in [n]} k_i}(l) - \mathrm{PRF}_{k_0}(l) \bmod R \\
&= \sum_{i \in [n]} x_i \bmod R.
\end{aligned}$$

If $\sum_{i \in [n]} x_i < R$, then we get the sum over the integers $\sum_{i \in [n]} x_i$ as result.
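To make the mechanics of Figure 3 concrete, here is a hedged, self-contained Go sketch. The stand-in `prf` below is a plain inner product mod $R$; it is exactly key-homomorphic but of course not a secure PRF, so the sketch only illustrates how Setup, Enc and AggrDec interact. A real instantiation uses the rounded, LWR-based PRF of Section 4. All names are illustrative and not taken from the reference implementation.

```go
// Sketch of the PSA scheme of Figure 3 with an insecure, exactly
// key-homomorphic stand-in "PRF" (a linear map). Illustration only.
package main

import (
	"crypto/rand"
	"crypto/sha256"
	"fmt"
	"math/big"
)

const lambda = 8 // toy key dimension

var R = new(big.Int).Lsh(big.NewInt(1), 85) // public modulus R = 2^85

// hashToZR hashes (label, i) into Z_R.
func hashToZR(label string, i int) *big.Int {
	h := sha256.Sum256([]byte(fmt.Sprintf("%s %d", label, i)))
	v := new(big.Int).SetBytes(h[:])
	return v.Mod(v, R)
}

// prf is linear in the key k over Z_R^lambda, hence key-homomorphic:
// prf(k1, l) + prf(k2, l) = prf(k1 + k2, l) (mod R). NOT a secure PRF.
func prf(k []*big.Int, label string) *big.Int {
	s := new(big.Int)
	for i, ki := range k {
		s.Add(s, new(big.Int).Mul(hashToZR(label, i), ki))
	}
	return s.Mod(s, R)
}

// Setup draws one random key per client; the aggregator key k0 is
// their componentwise sum.
func Setup(n int) (k0 []*big.Int, keys [][]*big.Int) {
	k0 = make([]*big.Int, lambda)
	for j := range k0 {
		k0[j] = new(big.Int)
	}
	keys = make([][]*big.Int, n)
	for i := range keys {
		keys[i] = make([]*big.Int, lambda)
		for j := range keys[i] {
			keys[i][j], _ = rand.Int(rand.Reader, R)
			k0[j].Add(k0[j], keys[i][j]).Mod(k0[j], R)
		}
	}
	return
}

// Enc adds the pseudorandom pad PRF_k(l) to the message mod R.
func Enc(k []*big.Int, label string, x *big.Int) *big.Int {
	c := new(big.Int).Add(x, prf(k, label))
	return c.Mod(c, R)
}

// AggrDec sums the ciphertexts and removes the aggregated pad.
func AggrDec(k0 []*big.Int, label string, cs []*big.Int) *big.Int {
	s := new(big.Int)
	for _, c := range cs {
		s.Add(s, c)
	}
	s.Sub(s, prf(k0, label))
	return s.Mod(s, R)
}

func main() {
	k0, keys := Setup(3)
	xs := []*big.Int{big.NewInt(5), big.NewInt(7), big.NewInt(11)}
	cs := make([]*big.Int, len(xs))
	for i := range xs {
		cs[i] = Enc(keys[i], "label-1", xs[i])
	}
	fmt.Println(AggrDec(k0, "label-1", cs)) // prints 23
}
```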
### 3.2 Security Proof
In this section, we prove the aggregator obliviousness of
the above scheme. For this, we follow the proof strategy of [1], who used it to show the security of an inner-product MCFE scheme.² In the proof of this theorem we show that PSA is aggregator oblivious if the key-homomorphic PRF is indistinguishable from a random function.

**Theorem 1.** For any PPT adversary $\mathcal{A}$ on the aggregator obliviousness game, there is a PPT adversary $\mathcal{B}$ on the PRF such that

$$\mathrm{Adv}^{\mathrm{AO}}_{\mathcal{A},\mathrm{PSA}}(\lambda) \leq 2\left(4n^2(n-1) + 2n(n-1) + 2n^2\right) \cdot \mathrm{Adv}^{\mathrm{prf}}_{\mathcal{B},\mathrm{PRF}}(\lambda) \leq (8n^3 + 8n^2) \cdot \mathrm{Adv}^{\mathrm{prf}}_{\mathcal{B},\mathrm{PRF}}(\lambda),$$

where $n$ is the number of users. The adversary $\mathcal{B}$ has roughly the same running time as $\mathcal{A}$.

*Proof.* We use four intermediate games to go from $\mathrm{AO}_0$ to $\mathrm{AO}_1$. A description of the games is depicted in Figure 4. We provide the lemmas for the transition between the games in Appendix A.
**Game $G_0$:** This is the $\mathrm{AO}_0$ game, in which the challenge query is answered with encryptions of $x_i^0$.

**Game $G_1$:** This game still answers the challenge query with encryptions of $x_i^0$, but changes the pseudorandom pads that are used for the encryption. For correct decryption, these changes must not affect the sum of the pseudorandom pads. Therefore, to each pseudorandom pad we add a share of a perfect $\eta$-out-of-$\eta$ secret sharing of zero, where $\eta$ is the number of users in the challenge query. This makes the ciphertexts useless unless they are all summed together. This fact enables us to make the change of the next game.

**Game $G_2$:** This game answers the challenge query with encryptions of $x_i^1$ instead of $x_i^0$. This is possible because the secret shares of the previous game hide all information on the individual ciphertexts.

**Game $G_3$:** Here we remove the secret shares from the pseudorandom pads again. Therefore, this game is identical to $\mathrm{AO}_1$, in which the challenge query is answered with encryptions of $x_i^1$.

**Fig. 4.** Game hops of the proof of Theorem 1.

² The definition of the games G0 to G3 is very similar to theirs. However, there are differences in the proof. Because in the game of aggregator obliviousness the adversary is only allowed to ask one challenge query, the games do not have to guess the number of honest users. Also, we need one game less than [1], because we do not have a layer of functional encryption. Because we use key-homomorphic PRFs instead of general PRFs, the transition from $G_0$ to $G_1$ is also different. Lastly, we directly prove aggregator obliviousness without relying on lemmas to upgrade the security. Doing so adds an extra case distinction to the proof, but reduces the reduction loss. This is possible because PSA is a simpler primitive than MCFE.
We distinguish two cases. In the first case the adversary $\mathcal{A}$ corrupts the aggregator. This is the more challenging case, because it allows the adversary to decrypt the sum of ciphertexts. Care must be taken to introduce the changes for the games in a way that cannot be recognized by the adversary. In Lemma 1, we use a hybrid argument over all users and several intermediate games to show that

$$|\Pr[G_0(\lambda, \mathcal{A}) = 1] - \Pr[G_1(\lambda, \mathcal{A}) = 1]| \leq 2n^2(n-1) \cdot \mathrm{Adv}^{\mathrm{prf}}_{\mathcal{B},\mathrm{PRF}}(\lambda).$$

To get from $G_1$ to $G_2$, we show in Lemma 2 that

$$|\Pr[G_1(\lambda, \mathcal{A}) = 1] - \Pr[G_2(\lambda, \mathcal{A}) = 1]| \leq 2n(n-1) \cdot \mathrm{Adv}^{\mathrm{prf}}_{\mathcal{B},\mathrm{PRF}}(\lambda).$$

Finally, to get from $G_2$ to $G_3$, we apply Lemma 3, in which we show that

$$|\Pr[G_2(\lambda, \mathcal{A}) = 1] - \Pr[G_3(\lambda, \mathcal{A}) = 1]| \leq 2n^2(n-1) \cdot \mathrm{Adv}^{\mathrm{prf}}_{\mathcal{B},\mathrm{PRF}}(\lambda).$$

In the second case, $\mathcal{A}$ does not corrupt the aggregator. This enables us to directly go from $G_0$ to $G_3$ by a hybrid argument over all users. Thus, in Lemma 4 we show that

$$|\Pr[G_0(\lambda, \mathcal{A}) = 1] - \Pr[G_3(\lambda, \mathcal{A}) = 1]| \leq 2n^2 \cdot \mathrm{Adv}^{\mathrm{prf}}_{\mathcal{B},\mathrm{PRF}}(\lambda).$$

The reduction $\mathcal{B}$ uses an unbiased coin to decide whether to simulate case 1 or case 2, so in conclusion we get

$$\mathrm{Adv}^{\mathrm{AO}}_{\mathcal{A},\mathrm{PSA}}(\lambda) \leq 2\left(4n^2(n-1) + 2n(n-1) + 2n^2\right) \cdot \mathrm{Adv}^{\mathrm{prf}}_{\mathcal{B},\mathrm{PRF}}(\lambda) \leq (8n^3 + 8n^2) \cdot \mathrm{Adv}^{\mathrm{prf}}_{\mathcal{B},\mathrm{PRF}}(\lambda).$$
In this section, we proposed a PSA scheme that is based
on key-homomorphic PRFs. We proved that the scheme
is aggregator oblivious in the standard model. In the
next section, we describe how to instantiate the scheme
with a lattice-based PRF and explain our implementation.
## 4 Implementation
In this section, we describe the implementation of our scheme, the choice of parameters, and performance results. Our Go implementation is available at https://github.com/johanernst/khPRF-PSA. Both the encryption and the decryption algorithm are fast, so they can also be executed on computationally limited devices such as smart-meters. The setup algorithm is slower, because for our parameters it needs to draw $\lambda = 2096$ random numbers per client. However, it is only executed very rarely.
### 4.1 Choice of the Pseudorandom Function
For the implementation, we chose the almost key-homomorphic PRF mentioned in [12]. It relies on the LWR assumption and is secure in the random oracle model (ROM). Therefore, with this concrete instantiation our scheme is also only secure in the ROM. We chose a ROM-based PRF for its simplicity and efficiency. The standard-model PRF of Boneh et al. [12] requires quite large parameters to be secure. The public parameters are two $\lambda' \times \lambda'$ matrices. Because the matrices are sampled from $\{0,1\}^{\lambda' \times \lambda'}$ instead of $\mathbb{Z}_q^{\lambda' \times \lambda'}$, the dimension of the matrices must be increased by a factor of $\log_2(q)$. Since we use $\lambda = 2096$ and $q = 2^{128}$, this means that $\lambda' = 2096 \cdot 128$, and thus the square matrix would have $\lambda'^2 \approx 7 \cdot 10^{10}$ entries. Even if each entry is stored as a single bit, this is 70 gigabits. Also, for evaluating the PRF, the matrices need to be multiplied together multiple times, whereby the intermediate entries which need to be kept in memory get much larger. Thus, this PRF does not seem practical because of both running time and memory constraints.

While the key-homomorphic PRF of Banerjee and Peikert [6] is more efficient, it is also more complex and thus more prone to implementation errors, which endangers the security of the implementation. Kim proposes an approach that allows a smaller modulus $q$ at the cost of larger keys [21]. Since we need $q > p = 2^{85}$ for a message space of $2^{64}$, we would not gain very much from a smaller modulus. When our scheme is instantiated with any of the above-mentioned standard-model key-homomorphic PRFs, we obtain a PSA scheme that is secure in the standard model.
Next, we describe the ROM-based key-homomorphic PRF of [12], which we use in the following. For $\lambda, q, p \in \mathbb{N}$ with $q > p$, $k \in \mathbb{Z}_q^\lambda$ and a hash function $H : \mathcal{X} \to \mathbb{Z}_q^\lambda$, the PRF is defined as

$$F_k(x) := \lfloor \langle H(x), k \rangle \rfloor_p.$$

Because [12] contains no security proof for this function, we provide a short proof in Appendix B. Because the output of $F$ is from $\mathbb{Z}_p$, we set the public modulus $R$ in our scheme equal to $p$.

The hash function $H$ is required to map to $\mathbb{Z}_q^\lambda$. We construct such a hash function from a standard hash function $H'$, such as SHA3, as follows:

$$H(x) := \begin{pmatrix} H'(\mathrm{byte}(x \,\|\, \text{" "} \,\|\, \text{"1"})) \bmod q \\ \vdots \\ H'(\mathrm{byte}(x \,\|\, \text{" "} \,\|\, \text{"}\lambda\text{"})) \bmod q \end{pmatrix}, \tag{1}$$

where byte converts its argument to a byte array. The space between $x$ and $i \in \{1, \ldots, \lambda\}$ is necessary to ensure that all inputs to $H'$ are different. Note that if $q$ does not divide the size of the output space of $H'$, extra analysis is needed to make sure that the mod $q$ operation does not induce any bias. However, in our case we choose $q$ as a power of 2, whereby it divides the size of the output space of $H'$. In our implementation we used SHA3-512 as $H'$.
The rounding function $\lfloor a \rfloor_p$ is not exactly linear, but almost linear, which means that

$$\lfloor a + b \rfloor_p = \lfloor a \rfloor_p + \lfloor b \rfloor_p + e$$

for $e \in \{0, 1\}$. This entails that the PRF is only almost key-homomorphic:

$$F_{k_1+k_2}(x) = \lfloor \langle H(x), k_1 + k_2 \rangle \rfloor_p = \lfloor \langle H(x), k_1 \rangle + \langle H(x), k_2 \rangle \rfloor_p = \lfloor \langle H(x), k_1 \rangle \rfloor_p + \lfloor \langle H(x), k_2 \rangle \rfloor_p + e = F_{k_1}(x) + F_{k_2}(x) + e$$

for $e \in \{0, 1\}$. For our use case this is not a problem. Because the PRF values of $n$ clients are summed in the decryption algorithm, the error from the non-linearity is at most $n - 1$. The idea is to use a larger message space where all legitimate messages have a difference larger than $n$. The decrypted message, which potentially contains an error of up to $n - 1$, is then rounded to the next legitimate message. In Section 4.3, we describe this in more detail.
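A hedged Go sketch of this PRF with SHA3-512 as $H'$ (via the golang.org/x/crypto/sha3 package) and the parameters of Section 4.2 might look as follows. It is our own illustrative code, not an excerpt from the reference implementation.

```go
// Sketch of the ROM-based almost key-homomorphic PRF
// F_k(x) = floor(<H(x), k>)_p with SHA3-512 as H', q = 2^128,
// p = 2^85 and lambda = 2096 (Section 4.2). Illustrative only.
package main

import (
	"fmt"
	"math/big"

	"golang.org/x/crypto/sha3"
)

const lambda = 2096

var (
	q = new(big.Int).Lsh(big.NewInt(1), 128)
	p = new(big.Int).Lsh(big.NewInt(1), 85)
)

// hashToZq implements H from Equation (1): one SHA3-512 call per
// coordinate, with a space separating x from the counter. Since q is
// a power of two, reducing mod q introduces no bias.
func hashToZq(x string) []*big.Int {
	out := make([]*big.Int, lambda)
	for i := 1; i <= lambda; i++ {
		h := sha3.Sum512([]byte(fmt.Sprintf("%s %d", x, i)))
		v := new(big.Int).SetBytes(h[:])
		out[i-1] = v.Mod(v, q)
	}
	return out
}

// prf computes floor(<H(x), k> * p / q) for a key k in Z_q^lambda.
func prf(k []*big.Int, x string) *big.Int {
	hx := hashToZq(x)
	ip := new(big.Int)
	for i := range k {
		ip.Add(ip, new(big.Int).Mul(hx[i], k[i]))
	}
	ip.Mod(ip, q)
	ip.Mul(ip, p)
	return ip.Div(ip, q)
}

func main() {
	k := make([]*big.Int, lambda)
	for i := range k {
		k[i] = big.NewInt(int64(i + 1)) // fixed toy key for the demo
	}
	fmt.Println(prf(k, "2021-05-08 13 2"))
}
```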
### 4.2 Choice of Parameters
In this section, we describe how we chose the parameters
for the PRF and approximately which security level we
get from these parameters.
The PRF is parameterized by $\lambda, q, p \in \mathbb{N}$ and reduces tightly to $\mathrm{LWR}_{\lambda,q,p}$. We used the LWE estimator from [5] to estimate the security level for certain choices of parameters. The value $1/p$ corresponds to the error rate $\alpha$ in LWE, so we need to choose $\lambda, q, p \in \mathbb{N}$ such that $\mathrm{LWE}_{\lambda,q,1/p}$ is hard. For $\lambda = 2096$, $q = 2^{128}$ and $p = 2^{85}$, the program estimates a hardness of over $2^{178}$. Note that these parameters also satisfy the recommendation of [7] that $q/p > \sqrt{\lambda}$.

According to Theorem 1, the reduction loss is less than $8n^3 + 8n^2$. When we suppose we have $n = 2^{20}$ users, the reduction loss is less than $2^{64}$. This yields a security of $178 - 64 = 114$ bits.³

³ For the 10000 users in our implementation, the security is at least 132 bits.
### 4.3 Concrete Instantiation

As described in the previous section, we set $\lambda := 2096$, $q := 2^{128}$ and $p := 2^{85}$. In our implementation, we set the public modulus $R = p = 2^{85}$ and $\mathrm{PRF} := F_k(x) = \lfloor \langle H(x), k \rangle \rfloor_p$ with key space $\mathbb{Z}_q^\lambda$ and $H$ as defined in (1). As labels we use strings that are converted to byte arrays before being given to the hash function.
Next, we describe how to mitigate the error introduced by the non-linearity of the rounding function. Because $\sum_{i \in [n]} k_i = k_0$, we have

$$\sum_{i \in [n]} F_{k_i}(l) + e \bmod R = F_{k_0}(l)$$

for $e \in \{0, \ldots, n-1\}$. This means that

$$\sum_{i \in [n]} \big(x_i + F_{k_i}(l)\big) - F_{k_0}(l) \bmod R = \sum_{i \in [n]} x_i - e \bmod R.$$

To ensure correctness, each client, before calling $\mathrm{Enc}(\mathrm{pp}, k_i, l, x_i')$, computes $x_i' := n \cdot x_i + 1$. The multiplication with $n$ ensures that all legitimate messages are $n$ apart, and the addition of 1 ensures that the non-linearity error does not cause an underflow mod $R$. The aggregator, after executing $\bar{s} = \mathrm{AggrDec}(\mathrm{pp}, k_0, l, \{c_i\}_{i \in [n]})$, rounds $\bar{s}$ up to the next multiple $s'$ of $n$ and computes $s := (s' - n)/n$.
**Correctness:** After encryption we have

$$c_i = n \cdot x_i + 1 + F_{k_i}(l) \bmod R.$$

$\mathrm{AggrDec}(\mathrm{pp}, k_0, l, \{c_i\}_{i \in [n]})$ yields

$$\begin{aligned}
\bar{s} &= \sum_{i \in [n]} \big(n \cdot x_i + 1 + F_{k_i}(l)\big) - F_{k_0}(l) \bmod R \\
&= \sum_{i \in [n]} (n \cdot x_i + 1) - e \bmod R \\
&= n \cdot \sum_{i \in [n]} x_i + n - e \bmod R \\
&= n \cdot \sum_{i \in [n]} x_i + n - e.
\end{aligned}$$

For the last equality to hold, we need that

$$0 \leq n \cdot \sum_{i \in [n]} x_i + n - e < R,$$

which means that $e$ neither creates an underflow nor an overflow mod $R$. Because $e$ is at most $n - 1$, we have $0 < n \cdot \sum_{i \in [n]} x_i + n - e$, and if $\sum_{i \in [n]} x_i < (R - n)/n$, then we also have

$$n \cdot \sum_{i \in [n]} x_i + n - e < R.$$

When we suppose we have at most $n = 2^{20}$ users, this means that $\sum_{i \in [n]} x_i$ must be smaller than $(R - 2^{20})/2^{20} = 2^{85}/2^{20} - 1 = 2^{65} - 1$. In the next step, the aggregator rounds $\bar{s}$ up to the next multiple of $n$, which is $s' = n \cdot \sum_{i \in [n]} x_i + n$. After computing $s = (s' - n)/n$, they get $s = \sum_{i \in [n]} x_i$, which is the desired result. Therefore, if the total sum is at most $2^{64}$, the scheme works correctly.
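As a hedged sketch with illustrative names, the client-side encoding and the aggregator-side rounding can be written as follows:

```go
// Error mitigation from Section 4.3: encode runs on the client before
// Enc; decode runs on the aggregator on the raw AggrDec output sBar,
// which equals n*sum + n - e with 0 <= e <= n-1.
package main

import (
	"fmt"
	"math/big"
)

// encode maps x to x' = n*x + 1, so legitimate messages are n apart
// and the rounding error cannot cause an underflow mod R.
func encode(x, n *big.Int) *big.Int {
	return new(big.Int).Add(new(big.Int).Mul(n, x), big.NewInt(1))
}

// decode rounds sBar up to the next multiple s' of n and returns
// (s' - n)/n, i.e. ceil(sBar/n) - 1 = sum.
func decode(sBar, n *big.Int) *big.Int {
	s := new(big.Int).Add(sBar, new(big.Int).Sub(n, big.NewInt(1)))
	s.Div(s, n) // ceil(sBar / n)
	return s.Sub(s, big.NewInt(1))
}

func main() {
	n := big.NewInt(4)   // four clients, so the error e is at most 3
	sum := big.NewInt(23)
	// Worst case e = 3: sBar = n*sum + n - 3 = n*sum + 1.
	sBar := new(big.Int).Add(new(big.Int).Mul(n, sum), big.NewInt(1))
	fmt.Println(encode(big.NewInt(5), n), decode(sBar, n)) // 21 23
}
```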
**Security:** Because the clients input $n \cdot x_i + 1$ to the encryption algorithm of the PSA scheme, Theorem 1 guarantees that the aggregator can only compute $\sum_{i \in [n]} (n \cdot x_i + 1) \bmod R$. Because $n$ is publicly known, this value contains no more information than $\sum_{i \in [n]} x_i$. Thus, our instantiation inherits the security of the general scheme.
### 4.4 Performance
In this section, we analyze the performance of our
scheme both in theory and by performing running time
measurements.
Every secret key consists of $\lambda = 2096$ elements of $\mathbb{Z}_q = \mathbb{Z}_{2^{128}}$, which means that every secret key needs $2096 \cdot 128 = 268288$ bits of memory, roughly 33.5 kilobytes. For a security level of 128 bits, Benhamouda et al. [10] report a key size of 592 bits for their scheme and of 416 bits for the scheme of Shi et al. [22]. The size of a ciphertext in our scheme is $\log(p) = 85$ bits. Again for a security level of 128 bits, Benhamouda et al. [10] report a ciphertext size of 296 bits for their PSA scheme and of 416 bits for the scheme of [22]. Both values were computed for $2^{20}$ users and $2^{20}$ labels. Neither Takeshita et al. [23] nor Waldner et al. [25] report their key sizes; however, the key size of [25] increases linearly with the number of clients, because every client needs a shared secret with every other client. In many cases, smaller ciphertexts are preferable over small keys, because the ciphertexts have to be sent over the network at every time step. In our scheme the cost for encryption is mainly the cost of evaluating the PRF. The PRF needs 2096 evaluations of the underlying hash function, 2096 modulo operations, and 2096 additions and multiplications for evaluating the inner product of the hash output and the key.
We executed the performance tests on a laptop on
a single thread of an Intel Core i5-10210U CPU. We
measured the running time of both our scheme and [25].
For a better comparison, we executed their scheme with a message space of $2^{64}$ and without noise. We executed the tests 40 times and took the average. In every test we ran the encryption algorithm once for every client and executed the decryption algorithm 1000 times. Figure 5
shows the average running time of a single execution
of the encryption and decryption algorithm of both our
scheme and [25]. As expected the running time for the
encryption algorithm of our scheme does not depend on
the number of users, whereas the running time of [25]
grows linearly. Somewhat surprisingly the running time
of our decryption algorithm increases with the number
of users. This means that the running time is not completely dominated by the cost of evaluating the PRF,
but summing together all ciphertexts also takes significant time. As the figure clearly shows, our scheme outperforms [25] starting from about 3500 clients. Table 2
shows the exact numbers of the average running time of
our scheme and [25] for different numbers of users. Table 3 shows the running time of [23] and [9] taken from
the respective papers.
In both our scheme and [25], the evaluation of the
PRF does not depend on the plaintext. Therefore, encryption could be sped up by computing the PRF beforehand. Then, when the plaintext is available, encryption
only consists of adding the PRF output to the plaintext
and one modulo operation. The same can be done for
decryption as well.
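A minimal sketch of this offline/online split is given below; the type and function names are our own, not from the reference implementation.

```go
// Offline/online split for encryption: the pad PRF_k(label) for the
// next time step is computed ahead of time, so the online step is
// one addition and one reduction mod R. Illustrative names only.
package psa

import "math/big"

type precomputedPad struct {
	label string   // the label the pad belongs to
	pad   *big.Int // PRF_k(label), evaluated in advance
}

// encOnline finishes the encryption once the plaintext x is known.
func encOnline(pc precomputedPad, x, R *big.Int) *big.Int {
	c := new(big.Int).Add(x, pc.pad)
	return c.Mod(c, R)
}
```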
| Users | Enc (ours) | Dec (ours) | Enc (LaSS) | Dec (LaSS) |
|---|---|---|---|---|
| 1000 | 0.913 (0.010) | 0.875 (0.002) | 0.295 (0.011) | 0.277 (0.007) |
| 5000 | 0.929 (0.007) | 1.209 (0.006) | 1.590 (0.022) | 1.508 (0.042) |
| 10000 | 0.901 (0.004) | 1.805 (0.007) | 3.941 (0.046) | 3.643 (0.201) |

**Table 2.** Running time in milliseconds of one execution of the encryption/decryption algorithm of our scheme and the AES version of LaSS [25]. The value in parentheses is the standard deviation. For both schemes we executed 40 measurements and took the average.
| Users | Enc (SLAPBGV) | Dec (SLAPBGV) | Enc (LaPS) | Dec (LaPS) |
|---|---|---|---|---|
| 1000 | 1.17 | 3.26 | 3.724 | 1.964 |

**Table 3.** Running time in milliseconds of the optimized version of SLAPBGV [23] and of LaPS [9]. This version of LaPS only provides a security of 80 bits. For a security of 128 bits, Becker et al. [9] report a running time of 77.33 ms for encryption and 67.62 ms for decryption. Both implementations were measured with a message space of size $2^{16}$. Note that the numbers are taken from the respective papers; thus, a comparison is not entirely reliable.
The setup algorithm of our scheme has to draw 2096
random elements from $\mathbb{Z}_q$ for each user and then compute the sum of the user keys to get the key for the
aggregator. As shown in Figure 6, the running time of
our setup algorithm grows linearly in the number of
users. The running time of [25] grows quadratically with
the number of users, because every pair of users needs
a shared key. The generation of the random numbers
can trivially be parallelized and thereby be made much
faster in practice. Also, since the setup algorithm is executed very rarely, its running time is not as critical as
the running time of the encryption or decryption algorithm.
**Fig. 5.** This figure shows the running time of one execution of the encryption and decryption algorithm of both our scheme and [25]. The vertical bars show the standard deviation, which is large enough to be seen only for the decryption algorithm of LaSS.

**Fig. 6.** This figure shows the running time of the setup algorithm of both our scheme and LaSS. As in Figure 5, the error bars are too small to be seen.
## 5 Deployment Considerations
In this section, we discuss practical issues when deploying our scheme, with a special focus on the smart-meter
application.
### 5.1 Setup and Key Management
In the PSA literature the setup procedure is usually
considered to be executed by a trusted party who distributes secret keys to the clients and the aggregator.
This often means that the trusted party is able to decrypt all messages sent by the clients. In the following,
we discuss techniques to overcome this limitation.
One approach to achieve a decentralized setup is to
use techniques very similar to the ones of Chotard et al.
[17]. The idea is that we let each client choose their PRF
key ki at random. The aggregator’s key is supposed to
be the sum of the keys chosen by the clients. So we only
need to let the aggregator know the sum of the client
keys in a secure way. The solution for this is to combine
non-interactive key exchange (NIKE) ([13], [20]) with
a technique from [15] as done in [17]. First the clients
execute the NIKE, i.e. each client generates a public
key and a secret key and uploads the public key to a key
server. Each client downloads the public keys of all other
clients and uses each other client’s public key together
with their own secret key to derive a shared secret with
that client. Then the clients use the shared pairwise keys
to generate random pads with the property that the
pads sum to zero. The clients then add these pads to
their PSA secret keys and send the resulting ciphertext
to the aggregator. The aggregator will obtain the sum
of the client keys by adding all ciphertexts, but learns
nothing else about the keys. The process is essentially
the same as the DSum functionality of [17], but without
the layer of All-or-Nothing Encapsulation and is shown
in Figure 7.
In principle, one can use the same key-homomorphic
PRF as in our PSA scheme. However, the property of
key-homomorphism is not needed here. Hence, we recommend using a block cipher such as AES as PRF.
One may now argue that we do not need the rest of the PSA scheme anymore, because we already have a way of letting the aggregator know the sum of client values. Indeed, this would basically give us the PSA scheme of [25]. However, then the encryption of each value requires $n$ invocations of the PRF, where $n$ is the number of users. Doing this for every encryption becomes
inefficient as the number of clients becomes large. So it is preferable to only execute this step once for the decentralized setup and then continue with our PSA scheme, which only requires one invocation of the key-homomorphic PRF in the encryption algorithm.

**Fig. 7.** The decentralized setup algorithm as executed by every client. The algorithm takes as input the client's PSA secret key $k_i$, the NIKE secret key $sk_i$ and the NIKE public keys of the other clients $pk_j$.
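The zero-sum pads of Figure 7 can be sketched as follows. The sketch is hedged and illustrative: HMAC-SHA256 stands in for the recommended block-cipher-based PRF, and only a single $\mathbb{Z}_q$ component is padded, whereas the actual PSA key share in $\mathbb{Z}_q^\lambda$ would be padded componentwise.

```go
// Zero-sum pads for the decentralized setup (Figure 7): client i's
// pad is sum over j != i of +/- PRF_{k_ij}(label), with the sign
// fixed by the order of i and j, so the pads of all clients cancel
// mod q. Illustrative names only.
package psa

import (
	"crypto/hmac"
	"crypto/sha256"
	"math/big"
)

var q = new(big.Int).Lsh(big.NewInt(1), 128)

// prfToZq derives a pseudorandom Z_q element from a pairwise NIKE
// key. Since q is a power of two, the reduction mod q is unbiased.
func prfToZq(key []byte, label string) *big.Int {
	mac := hmac.New(sha256.New, key)
	mac.Write([]byte(label))
	v := new(big.Int).SetBytes(mac.Sum(nil))
	return v.Mod(v, q)
}

// pad computes client i's pad from its pairwise keys {j -> k_ij}.
// Summed over all clients, the pads cancel mod q, because each pair
// contributes the same value once with + and once with -.
func pad(i int, shared map[int][]byte, label string) *big.Int {
	p := new(big.Int)
	for j, kij := range shared {
		m := prfToZq(kij, label)
		if i < j {
			p.Add(p, m)
		} else {
			p.Sub(p, m)
		}
	}
	return p.Mod(p, q)
}
```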
For illustration, let us present a simple example of how a NIKE scheme can be built from the Diffie–Hellman key exchange. Every client $i$ publishes their public key $pk_i := g^{x_i}$ and downloads the public keys of all other clients. To compute a shared key with client $j$, client $i$ computes $pk_j^{x_i} = g^{x_j x_i}$ and hashes this together with their identities, $H(i, j, g^{x_j x_i})$. For more details on constructions and security models see [13] and [20].
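Using Go's standard crypto/ecdh package, such a NIKE can be sketched with X25519 as follows. This is a hedged illustration; the identity binding via SHA-256 stands in for the abstract $H$, and the names are our own.

```go
// Diffie-Hellman NIKE sketch with X25519 (crypto/ecdh, Go 1.20+).
package main

import (
	"crypto/ecdh"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

// sharedKey derives the pairwise key of clients i and j from i's
// secret key and j's public key, binding both identities as in
// H(i, j, g^{x_i x_j}). Ordering i, j makes the derivation symmetric.
func sharedKey(own *ecdh.PrivateKey, peer *ecdh.PublicKey, i, j int) ([32]byte, error) {
	secret, err := own.ECDH(peer) // raw shared point g^{x_i x_j}
	if err != nil {
		return [32]byte{}, err
	}
	if i > j {
		i, j = j, i
	}
	return sha256.Sum256(append([]byte(fmt.Sprintf("%d %d ", i, j)), secret...)), nil
}

func main() {
	curve := ecdh.X25519()
	a, _ := curve.GenerateKey(rand.Reader)
	b, _ := curve.GenerateKey(rand.Reader)
	kab, _ := sharedKey(a, b.PublicKey(), 1, 2)
	kba, _ := sharedKey(b, a.PublicKey(), 2, 1)
	fmt.Println(kab == kba) // true: both clients derive the same key
}
```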
The most efficient way to distribute the public keys
is to use a key server which all clients use to upload
and download their public keys. The key server needs
to be semi-honest, i.e. it can share all its knowledge
with the adversary without compromising security, but
must follow the protocol. A malicious key server could
perform a person-in-the-middle attack by replacing all
client public keys with its own public keys. The key
server would then compute a shared secret with every
client and would be able to decrypt all messages. However such an attack can be detected when clients manually compare their keys with each other.
The communication cost for the decentralized setup of one client consists of uploading their NIKE public key to the key server, downloading $n - 1$ public keys and sending one aggregation key share (the encrypted PSA secret key) to the aggregator. The computational costs
are n − 1 calls to NIKE.SharedKey to compute the pairwise shared keys and n − 1 PRF or AES evaluations to
compute the pseudorandom pad that encrypts the PSA
secret key. These operations only have to be executed
for the setup and have no influence on the cost for encryption, which is still independent of the number of
clients.
An alternative to relying on a key server would be
to let the clients broadcast their public key to all other
clients. However, here a person-in-the-middle attack is
also possible. Furthermore, this approach would cause roughly $n^2$ messages in the setup phase, which can be too much in the smart-meter scenario, where the number of clients can be large.
Having a decentralized setup as described above has
the additional advantage that the setup can be repeated
at regular intervals to achieve some sort of forward secrecy. Repeating a centralized setup would mean that
the trusted party would have to be available at each time
the setup is repeated. Only relying on a semi-honest key
server which needs to be online once every interval is
much easier to assure.
Another approach from the literature to decentralize the setup is ad hoc multi-input functional encryption (adhoc-MIFE) [4]. In adhoc-MIFE the clients also generate shares of the aggregation key in a decentralized way, by taking as input the public keys of the other parties. The authors consider the computation of inner products, which can be seen as a generalization of the computation of sums in our setting. They use 2-round MPC protocols with specific properties in their construction. However, their construction does not support labels and is less efficient due to the use of MPC.
### 5.2 Client Failures
If one client fails to submit a valid ciphertext, then
the aggregator cannot compute the overall sum, because that client's PRF term is missing. The result will then look like a random value. In the context of smart-metering this is a problem, because there are many devices and it is quite possible that one device fails due to technical problems, loss of network connection or active manipulation by the user. We have several options to cope with such failures.
A straightforward approach is to partition the users
in several groups and run one instance of the PSA
scheme for each group. For example, if we have 1000
users, we can divide them in ten groups of 100 users
each. When one user fails, we only lose the values of
100 users, instead of 1000. Two disadvantages of this
approach are that a few failing users can cause a lot of
lost values and that it reduces the privacy of the users,
because their values are only aggregated with a smaller
set of other users.
To solve this problem, Chan et al. [14] have proposed a generic solution that incorporates differential
privacy in an essential way. They let the aggregator
learn partial sums of the users’ inputs in such a way
that the sum of the values of the non-failing users can
always be reconstructed. They use a binary tree, where
the clients are the leaf nodes. Each inner node corresponds to the partial sum of the values of the clients
beneath that node. So each client produces log(n) ciphertexts of which each one corresponds to an inner node on
the path from that client to the root of the tree. When
all clients send ciphertexts, then the aggregator uses the
ciphertexts corresponding to the root node to compute
the total sum. If some clients fail, the aggregator has to
use ciphertexts corresponding to the inner nodes to be
able to reconstruct the sum of the remaining users as
shown in Figure 8. The noise for differential privacy is
essential here, because otherwise the aggregator would
get all the users’ values in the clear.
            [0,3]
           /     \
       [0,1]     [2,3]
       /   \     /   \
      0     1   2     3

**Fig. 8.** If client 2 fails, the aggregator uses the ciphertexts corresponding to the black nodes to compute the sum of the remaining clients' values. The notation [i,j] means the (noisy) sum of the values of clients {i, ..., j}. This figure is a smaller version of Figure 1b from [14].
With this approach each client has log(n) secret
keys, one corresponding to each node on the path to the
root. Also, instead of sending one message, each client
sends log(n) messages. The aggregator holds one aggregation key for each inner node in the tree. Another way
to view this is that we are running one instance of the
PSA scheme for each inner node. This approach does
not add additional rounds of communication. The aggregator will always be able to compute the overall sum no matter how many clients fail; however, when more clients fail, the resulting sum is noisier. The scheme is
generic and does not pose any special requirements on
the underlying PSA scheme, so it can be used to make
our scheme failure tolerant. Only one small adaptation has to be made: the scheme of Chan et al. only considers user values in $\{0, 1\}$, whereas in our case the values are from $\{0, \ldots, \Delta\}$, where $\Delta$ is the largest reasonable power consumption. To accommodate this, we need to multiply the $\epsilon$ and $\delta$ parameters for differential privacy by $1/\Delta$. In Appendix C we describe in a bit more detail how the adapted encryption algorithm works and also provide figures with pseudocode.
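For illustration, the following hedged Go sketch computes, for a leaf (client), the indices of the inner nodes on its path to the root in a standard 1-indexed heap layout; the client encrypts under one key per returned node. The layout and names are our own, not taken from [14].

```go
// Heap-layout bookkeeping for the binary tree of [14]: leaves sit at
// indices n..2n-1, inner nodes at 1..n-1, and node k's parent is k/2.
package main

import "fmt"

// nodePath returns the inner nodes on the path from a 0-based leaf
// index to the root, for a tree with n leaves (n a power of two).
func nodePath(leaf, n int) []int {
	node := n + leaf // leaf index -> heap index
	var path []int
	for node > 1 {
		node /= 2 // move to the parent
		path = append(path, node)
	}
	return path // log2(n) nodes, root (index 1) last
}

func main() {
	// For the tree of Figure 8 (n = 4), client 2 lies below the inner
	// node [2,3] (heap index 3) and the root [0,3] (heap index 1).
	fmt.Println(nodePath(2, 4)) // [3 1]
}
```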
### 5.3 Client Joins and Leaves
If clients leave the system, for example if they changed
their power supplier, we can treat them as permanently
failed (cf. Section 5.2), as suggested by [14]. This will
slightly increase the noise relative to the number of remaining clients. Therefore, when many clients have left,
we can repeat the setup for the remaining clients and
start over with a new tree. Repeating the setup is more
practical when using the decentralized setup described
in Section 5.1, because then we do not need to entrust
a third party with the key generation.
To accommodate joining clients, [14] propose to create a tree that has more leaves than there are clients, where the trusted party creates secret keys for every leaf node. The not-yet-present clients are treated as failed until they join. When a new client joins, they get the secret keys corresponding to their leaf node from the trusted party. This has the advantage that neither the other clients nor the aggregator need to be notified when a client joins. However, it has the disadvantage that the trusted party needs to be available whenever a new client joins.
In the following we describe how client-joins can
work together with the decentralized setup from Section 5.1. Since we are using a binary tree now, we are
essentially running one instance of the PSA scheme for
every inner node, where the clients of each instance are
the clients below the respective inner node. This means
that in the beginning, each client creates log(n) aggregation key shares and sends them to the aggregator.
Whenever a new client joins, they broadcast their public key to the other clients by uploading it to the key server, download the public keys of the other clients, and compute a shared key with each of them.
The other clients download the public key of the new
client and use it to compute a shared secret with the
new client. Then a new aggregation key is created for
each node on the path from the new client to the root.
This means that each client, which shares an inner node
with the new client, chooses a new secret key for the respective node and sends the aggregation key share to
the aggregator. The aggregator receives all the aggregation key shares for each node on the path from the new
client to the root and combines them to obtain a completely new aggregation key for all these log(n) nodes.
Another way to view this is that whenever a new client
joins, (a part of) the decentralized setup is repeated for
the inner nodes on the path from the new client to the
root.
The cost for the new (n + 1)th client is computing
a shared secret with the n other clients and computing aggregation key shares for each node on the path to
the root. The number of PRF/AES evaluations depends
on the position of the new node in the tree, but is at
least n (for the aggregation key associated with the root
node) and at most 2n. The other clients have to compute one new shared secret. The number of PRF/AES
evaluations depends on their position in relation to the
new node, but is again at least n and at most 2n.
### 5.4 Concrete Smart-Meter Example
In this section, we summarize how our PSA scheme together with the above extensions can be used for privacy-preserving smart-metering. Every smart-meter is preconfigured with the IP addresses of the key server and the power-supplier, the public key of the power-supplier, and an upper bound on the number of other smart-meter devices that are expected to join the system.⁴ Every smart-meter creates a random key for the PSA scheme and a public and secret key for the NIKE scheme. The smart-meters then upload their public keys to the key server and compute pairwise shared keys as described in Section 5.1. They then compute their key share as described in Figure 7, encrypt it with the power-supplier's public key and send the resulting ciphertext to the power-supplier. The power-supplier is then able to compute the aggregation key for each inner node. This concludes the setup.
At every time step (e.g. every fifteen minutes) the smart-meters encrypt their current power consumption and send it to the power-supplier. There are two straightforward ways of choosing the label which is needed as input for the PRF. One possibility is to use a counter that starts at 0 and is incremented at every time step. This has the disadvantage that if a client malfunctions and misses a time step, its counter becomes asynchronous to the counters of the other smart-meters, whereby its ciphertexts cannot be decrypted anymore. The other possibility provides a solution for this: the label can be chosen as the current date and hour plus a value between 0 and 3 which indicates the quarter of the hour we are currently in. Then every smart-meter can deduce the next label from the current time. The clocks of the smart-meters only need to agree up to fifteen minutes of precision, and the current time can easily be obtained via the network. Note that this ensures that every label is only used once, whereby the encrypt-once condition is satisfied and thus our security proof applies.

⁴ When more users than expected join the system, the (non-interactive) setup can simply be repeated.
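The time-derived label just described can be sketched in a few lines of Go; the format string is our own illustrative choice, and using UTC is an assumption that keeps all meters in agreement regardless of time zone.

```go
// Time-derived label: date, hour, and quarter of the hour. Any two
// smart-meters whose (UTC) clocks agree to within the same quarter
// hour derive the same label without any shared counter state.
package main

import (
	"fmt"
	"time"
)

func currentLabel(t time.Time) string {
	return fmt.Sprintf("%s %02d %d", t.Format("2006-01-02"), t.Hour(), t.Minute()/15)
}

func main() {
	fmt.Println(currentLabel(time.Now().UTC())) // e.g. "2021-05-08 13 2"
}
```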
To accommodate users joining the system, every
smart-meter checks the key server for new public keys
at regular intervals such as once every day. If there are
new public keys, the clients recompute some of their
key-shares as described in Section 5.3 and send them to
the aggregator. Failing or leaving smart-meters are also
treated as described in Section 5.3. Since we use the
generic approach of [14] to provide fault-tolerance, the
resulting power-consumption, which is decrypted by the
power-supplier, contains a small amount of noise. However, as stated by [14], the additive error is only slightly larger than logarithmic in the number of users.
The smart-meters can also be configured to repeat the setup at regular intervals to provide better forward secrecy, or to reduce the noise when too many smart-meters have left the system.
## 6 Conclusion
PSA is a useful protocol for privately letting an aggregator know the sum of client-supplied values. It has applications from privacy-preserving smart metering to distributed machine learning.
In this paper, we proposed a PSA scheme that uses key-homomorphic PRFs as its only building block. It supports a large message space and scales well to a large number of users. It also has very small ciphertexts (85 bits in our implementation with a message space of $2^{64}$). Both encryption and decryption mostly consist of a single PRF evaluation.

We proved the security of our scheme in the standard model. Furthermore, we implemented our scheme using a lattice-based key-homomorphic PRF in the ROM. We analyzed the performance both in theory and by measuring the running time in practice, and thereby showed that the scheme is quite efficient. Moreover, we discussed possible solutions for practical issues such as how to decentralize the setup and how to accommodate clients joining or leaving the system.
### Acknowledgments
We would like to thank the anonymous reviewers.
Alexander Koch was supported by the Competence Center for Applied Security Technology (KASTEL).
## References
[1] M. Abdalla, F. Benhamouda, and R. Gay. From single-input to multi-client inner-product functional encryption. In
S. D. Galbraith and S. Moriai, editors, ASIACRYPT 2019,
_Proceedings, Part III, volume 11923 of LNCS, pages 552–_
[582. Springer, 2019. 10.1007/978-3-030-34618-8_19.](https://doi.org/10.1007/978-3-030-34618-8_19)
[2] M. Abdalla, F. Benhamouda, M. Kohlweiss, and H. Waldner.
Decentralizing inner-product functional encryption. In D. Lin
and K. Sako, editors, Public-Key Cryptography, PKC 2019,
_Proceedings, Part II, volume 11443 of LNCS, pages 128–157._
[Springer, 2019. 10.1007/978-3-030-17259-6_5.](https://doi.org/10.1007/978-3-030-17259-6_5)
[3] G. Ács and C. Castelluccia. I have a DREAM! (DiffeRentially
privatE smArt Metering). In T. Filler, T. Pevný, S. Craver,
and A. D. Ker, editors, Information Hiding, IH 2011, volume
[6958 of LNCS, pages 118–132. Springer, 2011. 10.1007/978-](https://doi.org/10.1007/978-3-642-24178-9_9)
[3-642-24178-9_9.](https://doi.org/10.1007/978-3-642-24178-9_9)
[4] S. Agrawal, M. Clear, O. Frieder, S. Garg, A. O’Neill, and
J. Thaler. Ad hoc multi-input functional encryption. In
T. Vidick, editor, Innovations in Theoretical Computer Science Conference, ITCS 2020, volume 151 of LIPIcs, pages 40:1–40:41. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2020. 10.4230/LIPIcs.ITCS.2020.40.
[5] M. R. Albrecht, R. Player, and S. Scott. On the concrete
hardness of Learning with Errors. Journal of Mathematical
_[Cryptology, 9(3):169–203, 2015. 10.1515/jmc-2015-0016.](https://doi.org/10.1515/jmc-2015-0016)_
[6] A. Banerjee and C. Peikert. New and improved key-homomorphic pseudorandom functions. In J. A. Garay
and R. Gennaro, editors, CRYPTO 2014, Proceedings, Part
_I, volume 8616 of LNCS, pages 353–370. Springer, 2014._
[10.1007/978-3-662-44371-2_20.](https://doi.org/10.1007/978-3-662-44371-2_20)
[7] A. Banerjee, C. Peikert, and A. Rosen. Pseudorandom
functions and lattices. In D. Pointcheval and T. Johansson,
editors, EUROCRYPT 2012, Proceedings, volume 7237 of
_[LNCS, pages 719–737. Springer, 2012. 10.1007/978-3-642-](https://doi.org/10.1007/978-3-642-29011-4_42)_
[29011-4_42.](https://doi.org/10.1007/978-3-642-29011-4_42)
[8] D. Becker and J. G. Merchan. Post-quantum secure private stream aggregation, Apr. 21 2020. [URL https://](https://patents.google.com/patent/US10630655B2/en)
[patents.google.com/patent/US10630655B2/en. US Patent](https://patents.google.com/patent/US10630655B2/en)
10,630,655.
[9] D. Becker, J. Guajardo, and K.-H. Zimmermann. Revisiting
private stream aggregation: Lattice-based PSA. In NDSS
_[2018. The Internet Society, 2018. URL https://www.ndss-](https://www.ndss-symposium.org/wp-content/uploads/2018/02/ndss2018_02B-3_Becker_paper.pdf)_
[symposium.org/wp-content/uploads/2018/02/ndss2018 02B-](https://www.ndss-symposium.org/wp-content/uploads/2018/02/ndss2018_02B-3_Becker_paper.pdf)
[3 Becker paper.pdf.](https://www.ndss-symposium.org/wp-content/uploads/2018/02/ndss2018_02B-3_Becker_paper.pdf)
[10] F. Benhamouda, M. Joye, and B. Libert. A new framework
for privacy-preserving aggregation of time-series data. ACM
_Transactions on Information and System Security (TISSEC),_
[18(3):10:1–10:21, 2016. 10.1145/2873069.](https://doi.org/10.1145/2873069)
[11] K. Bonawitz, V. Ivanov, B. Kreuter, A. Marcedone, H. B.
McMahan, S. Patel, D. Ramage, A. Segal, and K. Seth.
Practical secure aggregation for privacy-preserving machine
learning. In B. M. Thuraisingham, D. Evans, T. Malkin,
and D. Xu, editors, ACM SIGSAC Conference on Computer
_and Communications Security, CCS 2017, pages 1175–1191._
[ACM, 2017. 10.1145/3133956.3133982.](https://doi.org/10.1145/3133956.3133982)
[12] D. Boneh, K. Lewi, H. Montgomery, and A. Raghunathan.
Key homomorphic PRFs and their applications. In R. Canetti
and J. A. Garay, editors, CRYPTO 2013, Proceedings, Part
_I, volume 8042 of LNCS, pages 410–428. Springer, 2013._
[10.1007/978-3-642-40041-4_23.](https://doi.org/10.1007/978-3-642-40041-4_23)
[13] D. Cash, E. Kiltz, and V. Shoup. The Twin Diffie-Hellman
problem and applications. In N. P. Smart, editor, EURO_CRYPT 2008, Proceedings, volume 4965 of LNCS, pages_
[127–145. Springer, 2008. 10.1007/978-3-540-78967-3_8.](https://doi.org/10.1007/978-3-540-78967-3_8)
[14] T.-H. H. Chan, E. Shi, and D. Song. Privacy-preserving
stream aggregation with fault tolerance. In A. D. Keromytis,
editor, Financial Cryptography and Data Security, FC 2012,
volume 7397 of LNCS, pages 200–214. Springer, 2012.
[10.1007/978-3-642-32946-3_15.](https://doi.org/10.1007/978-3-642-32946-3_15)
[15] M. Chase and S. S. Chow. Improving privacy and security in
multi-authority attribute-based encryption. In E. Al-Shaer,
S. Jha, and A. D. Keromytis, editors, ACM Conference on
_Computer and Communications Security, CCS 2009, pages_
[121–130. ACM, 2009. 10.1145/1653662.1653678.](https://doi.org/10.1145/1653662.1653678)
[16] J. Chotard, E. D. Sans, R. Gay, D. H. Phan, and
D. Pointcheval. Decentralized multi-client functional encryption for inner product. In T. Peyrin and S. D. Galbraith,
editors, ASIACRYPT 2018, Proceedings, Part II, volume
[11273 of LNCS, pages 703–732. Springer, 2018. 10.1007/978-](https://doi.org/10.1007/978-3-030-03329-3_24)
[3-030-03329-3_24.](https://doi.org/10.1007/978-3-030-03329-3_24)
[17] J. Chotard, E. Dufour-Sans, R. Gay, D. H. Phan, and
D. Pointcheval. Dynamic decentralized functional encryption.
In D. Micciancio and T. Ristenpart, editors, CRYPTO 2020,
_Proceedings, Part I, volume 12170 of LNCS, pages 747–775._
[Springer, 2020. 10.1007/978-3-030-56784-2_25.](https://doi.org/10.1007/978-3-030-56784-2_25)
[18] K. Emura. Privacy-preserving aggregation of time-series
data with public verifiability from simple assumptions. In
J. Pieprzyk and S. Suriadi, editors, Australasian Conference
_on Information Security and Privacy, ACISP 2017, Pro-_
_ceedings, Part II, volume 10343 of LNCS, pages 193–213._
[Springer, 2017. 10.1007/978-3-319-59870-3_11.](https://doi.org/10.1007/978-3-319-59870-3_11)
[19] J. Ernst and A. Koch. Efficient private stream aggregation with labels in the standard model. In S.-L. Gazdag,
D. Loebenberger, and M. Nüsken, editors, crypto day mat_ters 32, Bonn, 2021. Gesellschaft für Informatik e.V. / FG_
[KRYPTO. 10.18420/cdm-2021-32-16.](https://doi.org/10.18420/cdm-2021-32-16)
[20] E. S. Freire, D. Hofheinz, E. Kiltz, and K. G. Paterson. Non-interactive key exchange. In K. Kurosawa and G. Hanaoka,
editors, Public-Key Cryptography, PKC 2013, Proceed_ings, volume 7778 of LNCS, pages 254–271. Springer, 2013._
[10.1007/978-3-642-36362-7_17.](https://doi.org/10.1007/978-3-642-36362-7_17)
[21] S. Kim. Key-homomorphic pseudorandom functions from
LWE with small modulus. In A. Canteaut and Y. Ishai,
editors, EUROCRYPT 2020, Proceedings, Part II, volume
[12106 of LNCS, pages 576–607. Springer, 2020. 10.1007/978-](https://doi.org/10.1007/978-3-030-45724-2_20)
[3-030-45724-2_20.](https://doi.org/10.1007/978-3-030-45724-2_20)
[22] E. Shi, T. H. Chan, E. G. Rieffel, R. Chow, and D. Song. Privacy-preserving aggregation of time-series data. In NDSS 2011. The Internet Society, 2011. URL https://www.ndss-symposium.org/ndss2011/privacy-preserving-aggregation-of-time-series-data.
[23] J. Takeshita, R. Karl, T. Gong, and T. Jung. SLAP: Simple
lattice-based private stream aggregation protocol. Cryptology
_[ePrint Archive, Report 2020/1611, 2020. URL https://](https://eprint.iacr.org/2020/1611/20210513:151621)_
[eprint.iacr.org/2020/1611/20210513:151621.](https://eprint.iacr.org/2020/1611/20210513:151621)
[24] F. Valovich. Aggregation of time-series data under differential
privacy. In T. Lange and O. Dunkelman, editors, LATIN_CRYPT 2017, Revised Selected Papers, volume 11368 of_
_[LNCS, pages 249–270. Springer, 2017. 10.1007/978-3-030-](https://doi.org/10.1007/978-3-030-25283-0_14)_
[25283-0_14.](https://doi.org/10.1007/978-3-030-25283-0_14)
[25] H. Waldner, T. Marc, M. Stopar, and M. Abdalla. Private
stream aggregation from labeled secret sharing schemes.
_Cryptology ePrint Archive, Report 2020/81, 2021. URL_
[https://eprint.iacr.org/2020/081.](https://eprint.iacr.org/2020/081)
## A Lemmas for the Security Proof of the PSA Scheme
**Lemma 1. (Transition from $G_0$ to $G_1$):** For every PPT adversary $\mathcal{A}$, which corrupts the aggregator, there is a PPT adversary $\mathcal{B}$ on the PRF with
$$|\Pr[G_0(\lambda, \mathcal{A}) = 1] - \Pr[G_1(\lambda, \mathcal{A}) = 1]| \le 2n^2(n-1) \cdot \mathrm{Adv}^{\mathrm{prf}}_{\mathcal{B},\mathrm{PRF}}(\lambda).$$
_Proof._ We prove the transition by a hybrid argument. As in the proof of [1], the goal in each hybrid step is to add a random pad to the PRF value of one more user. Each hybrid step consists of three game hops as depicted in Figure 9.

We define hybrid games $G_{0,\mu}$ for $\mu \in [n]$. Let $\eta = |\mathcal{U}|$ and $\mathcal{U} = \{i_1, \ldots, i_\eta\}$ be the set of users for which $\mathcal{A}$ asks the challenge query. Let $\Theta = \min(\eta, \mu)$. If $\Theta \ge 2$, then in hybrid step $\mu$ the game adds random pads to the PRF values of the first $\Theta$ users in $\mathcal{U}$. The condition $\Theta \ge 2$ is necessary because we need two users in $\mathcal{U}$ to change a PRF into a RF, since we must not change the overall sum of the ciphertexts. The pads are set up such that they are a perfect $\Theta$-out-of-$\Theta$ secret sharing of 0, which ensures that the pads have no effect on the sum of the ciphertexts. If $\mu > \eta$, then there are already random pads on the PRF values of all users from $\mathcal{U}$, so the subsequent games are the same. We have $G_0 = G_{0,1}$ and $G_{0,n} = G_1$. In the following we briefly describe the games of the transition from $G_{0,\mu-1}$ to $G_{0,\mu}$:
**Game $G_{0,\mu-1}$:** This is step $\mu-1$ of the hybrid argument between $G_0$ and $G_1$. Let $\Theta = \min(\eta, \mu)$. If $\Theta \ge 2$, then in this game random pads are added to the PRF values of the first $\Theta-1$ honest users.

**Fig. 9.** Game hops for one step of the hybrid argument

**Game $G'_{0,\mu-1}$:** In this game, the PRF of user $i_1$ is replaced by a RF. The pseudorandom pad $t_{i_\Theta,l}$ of user $i_\Theta$ is set such that $\sum_{i\in[n]} t_{i,l} = t_{0,l}$ still holds. The game has to guess the first and the $\Theta$-th user of $\mathcal{U}$ before seeing the challenge query, because it must be able to answer encryption queries before seeing the challenge query.

**Game $G''_{0,\mu-1}$:** Here we still have the RF of the previous game. When answering the challenge query, the game adds a random pad $u_\Theta$ to the RF. Now the first $\Theta$ honest users have a random pad added to their PRF/RF values.

**Game $G_{0,\mu}$:** This game again uses a PRF instead of a RF for the honest users $i_1$ and $i_\Theta$. Because of the changes in the previous games, random pads are added to the PRF values of the first $\Theta$ users of $\mathcal{U}$, so this is the $\mu$-th hybrid game.

Next we give the reductions for the transitions between the games.
**Transition from $G_{0,\mu-1}$ to $G'_{0,\mu-1}$:** The difficulty in this step is that $\sum_{i\in[n]} \mathrm{PRF}_{k_i}(l^*) = \mathrm{PRF}_{k_0}(l^*)$. Replacing one PRF with a RF while holding the other keys fixed would violate this property. This is why in each hybrid step of the transition from $G_0$ to $G_1$ the game guesses two honest users: the pad of one user is determined by the answer of the PRF challenger, and the pad of the other user is set such that the pads still sum to $\mathrm{PRF}_{k_0}(l^*)$.

Now we show the indistinguishability of $G_{0,\mu-1}$ and $G'_{0,\mu-1}$ by a reduction to the security of the PRF. We assume that there is an adversary $\mathcal{A}$ which can distinguish $G_{0,\mu-1}$ and $G'_{0,\mu-1}$. The reduction $\mathcal{B}$ guesses the honest users $i_1^*$ and $i_\Theta^*$. They generate the secret keys $\{k_i\}_{i \in [n]_0 \setminus \{i_1^*, i_\Theta^*\}}$ for all users and the aggregator, except for users $i_1^*$ and $i_\Theta^*$. They send the public parameters $\mathrm{pp}$ to $\mathcal{A}$ and answer the queries as follows:

$\mathsf{QCorrupt}(i)$: If the guess of $i_1^*$ and $i_\Theta^*$ was correct, these two users stay uncorrupted. Since $\mathcal{B}$ generated the keys of all the other clients, they can simply answer with the corresponding secret key. If $i = 0$, then $\mathcal{B}$ returns the aggregation key $k_0$. Note that here $k_0$ is not the sum of the client keys, but chosen randomly as well; the pads of $i_1^*$ and $i_\Theta^*$ will be chosen such that $\sum_{i\in[n]} t_{i,l} = t_{0,l}$ still holds.

$\mathsf{QEnc}(i, x_i, l)$: If $i = i_1^*$ or $i = i_\Theta^*$, $\mathcal{B}$ queries $l$ to their own PRF challenger and receives $a_l$, which is either $\mathrm{PRF}_{k'}(l)$, for some $k'$, or $\mathrm{RF}(l)$. They set $t_{i_1^*,l} := a_l$ and $t_{i_\Theta^*,l} := \mathrm{PRF}_{k_0}(l) - \sum_{j\in[n]\setminus\{i_1^*, i_\Theta^*\}} \mathrm{PRF}_{k_j}(l) - a_l = t_{0,l} - \sum_{j\in[n]\setminus\{i_\Theta^*\}} t_{j,l}$. This ensures that all $t_{i,l}$ still sum to $t_{0,l}$. Note that $t_{i_\Theta^*,l} = t_{0,l} - \sum_{j\in[n]\setminus\{i_\Theta^*\}} t_{j,l}$ also holds in the unmodified scheme (Figure 3) due to the key homomorphism of the PRF. If $i = i_1^*$, $\mathcal{B}$ sends $x_i + t_{i_1^*,l}$ to $\mathcal{A}$. If $i = i_\Theta^*$, $\mathcal{B}$ sends $x_i + t_{i_\Theta^*,l}$ to $\mathcal{A}$ and stores $t_{i_1^*,l}$ until $\mathcal{A}$ asks an encryption query for $i_1^*$ and label $l$. For the other clients $\mathcal{B}$ knows the corresponding secret keys and can therefore answer the queries without asking their PRF challenger.

$\mathsf{QChallenge}(\mathcal{U}, \{x_i^0\}_{i\in\mathcal{U}}, \{x_i^1\}_{i\in\mathcal{U}}, l^*)$: Here $\mathcal{B}$ encrypts $x_i^0$ the same way as in the $\mathsf{QEnc}$ queries. If the PRF challenger uses $\mathrm{PRF}_{k'}$ instead of a RF, then $k_{i_1^*}$ is implicitly set to $k'$ and $k_{i_\Theta^*}$ is set such that $\sum_{i\in[n]} k_i = k_0$. In that case the $k_i$ are a perfect secret sharing of $k_0$, which is exactly as in $G_{0,\mu-1}$, so $\mathcal{B}$ perfectly simulates $G_{0,\mu-1}$.

If the PRF challenger uses a RF, then $t_{i_1^*,l} = \mathrm{RF}(l)$ and $t_{i_\Theta^*,l}$ is set such that all $t_{i,l}$ sum to $t_{0,l}$. So in this case $\mathcal{B}$ perfectly simulates game $G'_{0,\mu-1}$.
**Transition from $G'_{0,\mu-1}$ to $G''_{0,\mu-1}$:** In this step the goal is to add random pads $u_\Theta$ to the RF of users $i_1$ and $i_\Theta$ in the answers of the challenge query. Because we consider encrypt-once security, $\mathcal{A}$ cannot ask an encryption query for user $i_1$ or $i_\Theta$ on label $l^*$. Therefore, the only information that $\mathcal{A}$ has about $\mathrm{RF}(l^*)$ comes from the answer to the challenge query. Since $\mathrm{RF}(l^*)$ is identically distributed to $\mathrm{RF}(l^*) + u_\Theta$, $\mathcal{A}$ cannot tell that they received $\mathrm{RF}(l^*) + u_\Theta$ instead of $\mathrm{RF}(l^*)$. Thus, $G'_{0,\mu-1}$ and $G''_{0,\mu-1}$ are perfectly indistinguishable.

**Transition from $G''_{0,\mu-1}$ to $G_{0,\mu}$:** In this step we need to change the RF of users $i_1$ and $i_\Theta$ back into a PRF. This works exactly as the transition from $G_{0,\mu-1}$ to $G'_{0,\mu-1}$.
After $\eta - 1$ of these steps we have reached game $G_1$. In $G_1$ the challenge query $(\mathcal{U}, \{x_i^0\}_{i\in\mathcal{U}}, \{x_i^1\}_{i\in\mathcal{U}}, l^*)$ is answered with $x_i^0 + t_{i,l^*} + \bar{u}_i$, where
$$\bar{u}_i = \begin{cases} \sum_{j \in \mathcal{U}\setminus\{i_1\}} u_j & \text{if } i = i_1,\\ -u_i & \text{if } i \in \mathcal{U}\setminus\{i_1\},\\ 0 & \text{else.} \end{cases}$$
Therefore, the $\{\bar{u}_i\}_{i\in\mathcal{U}}$ are a perfect $\eta$-out-of-$\eta$ secret sharing of 0.
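For concreteness, here is a minimal sketch (an illustration, not part of the proof; the function name and modulus are hypothetical) of how such a perfect $\eta$-out-of-$\eta$ additive secret sharing of 0 can be sampled:

```python
import secrets

def zero_shares(eta: int, modulus: int) -> list[int]:
    """Sample eta additive shares that are uniformly random subject to
    summing to 0 mod `modulus` -- a perfect eta-out-of-eta sharing of 0."""
    shares = [secrets.randbelow(modulus) for _ in range(eta - 1)]
    shares.append(-sum(shares) % modulus)  # last share fixes the sum to 0
    return shares

# Adding these pads to the ciphertexts leaves the aggregate unchanged:
pads = zero_shares(5, 2**16)
assert sum(pads) % 2**16 == 0
```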
The guessing of the two honest users in each hybrid step incurs a reduction loss of $n(n-1)$, and using $n$ hybrid games leads to a loss of $n^2(n-1)$ for the hybrid argument. Since the hybrid argument is applied twice, once to add and once to remove the random pads, we get a total reduction loss of $2n^2(n-1)$.
**Lemma 2. (Transition from $G_1$ to $G_2$):** For every PPT adversary $\mathcal{A}$, which corrupts the aggregator, there is a PPT adversary $\mathcal{B}$ on the PRF with
$$|\Pr[G_1(\lambda, \mathcal{A}) = 1] - \Pr[G_2(\lambda, \mathcal{A}) = 1]| \le 2n(n-1) \cdot \mathrm{Adv}^{\mathrm{prf}}_{\mathcal{B},\mathrm{PRF}}(\lambda).$$
_Proof._ The goal in this step is to change the answer of the challenge query from encryptions of $x_i^0$ to encryptions of $x_i^1$. We distinguish two cases. Recall that $Q_{l^*}$ is the set of clients for which $\mathcal{A}$ has received a ciphertext on label $l^*$, either by an encryption or a challenge query. In the first case $Q_{l^*} = \mathcal{HS}$, which means that $\mathcal{A}$ gets a ciphertext of every honest user for the challenge label $l^*$. Then we have $Q_{l^*} \cup \mathcal{CS} = [n]$, whereby $\mathcal{A}$'s challenge query is restricted by the balance condition. We then argue that, since $\sum x_i^0 = \sum x_i^1$, the change is covered by the $\bar{u}_i$.

In the second case there is an honest user of whom $\mathcal{A}$ does not get a ciphertext on label $l^*$. Therefore, the challenge messages are not restricted by the balance condition. Here we use the fact that $\mathcal{A}$ is lacking a ciphertext of an honest user $i_q$ and thereby has no information about $\mathrm{PRF}_{k_{i_q}}(l^*)$.

**Case 1 ($Q_{l^*} = \mathcal{HS}$):** In this case $\mathcal{A}$ knows $k_0$, because they corrupted the aggregator. Furthermore $Q_{l^*} = \mathcal{HS}$, whereby $\mathcal{A}$'s challenge messages are restricted by the balance condition. We argue as the authors in [1]. For $t_{0,l} := \mathrm{PRF}_{k_0}(l)$, we have $\sum t_{i,l^*} = t_{0,l^*}$, and $\{\bar{u}_i\}_{i\in\mathcal{U}}$ is a perfect $\eta$-out-of-$\eta$ secret sharing of 0. Therefore, $\{x_i^0 + t_{i,l^*} + \bar{u}_i\}_{i\in\mathcal{U}}$ and $\{x_i^1 + t_{i,l^*} + \bar{u}_i\}_{i\in\mathcal{U}}$ are perfect secret sharings of $\sum_{i\in\mathcal{U}}(x_i^0 + t_{i,l^*})$ and $\sum_{i\in\mathcal{U}}(x_i^1 + t_{i,l^*})$, respectively. The balance condition requires that $\sum_{i\in\mathcal{U}} x_i^0 = \sum_{i\in\mathcal{U}} x_i^1$. So $\{x_i^0 + t_{i,l^*} + \bar{u}_i\}_{i\in\mathcal{U}}$ and $\{x_i^1 + t_{i,l^*} + \bar{u}_i\}_{i\in\mathcal{U}}$ are both perfect secret sharings of the same value and are therefore perfectly indistinguishable.

**Case 2 ($Q_{l^*} \ne \mathcal{HS}$):** Unlike in Case 1, $\mathcal{A}$'s messages in the challenge query are not restricted by the balance condition, because there is at least one honest user for which $\mathcal{A}$ has no ciphertext on label $l^*$. Therefore, in this case we need another reduction. If $\mathcal{A}$ asks no challenge query, or a challenge query with $\mathcal{U} = \emptyset$, then $\mathcal{A}$ has no information about the challenge bit, and therefore $\mathcal{A}$'s advantage is 0. Thus, we can assume that $\mathcal{U}$ contains at least one user. Additionally, in this case $Q_{l^*} \ne \mathcal{HS}$, so $\mathcal{HS} \setminus Q_{l^*}$ contains an honest user that is different from the user in $\mathcal{U}$. The reduction $\mathcal{B}$ guesses the users $i_q^* \in \mathcal{HS} \setminus Q_{l^*}$ and $i_u^* \in \mathcal{U}$. Then $\mathcal{B}$ changes the PRF of these two users to a RF by setting $t_{i_u^*,l} := \mathrm{RF}(l)$ and $t_{i_q^*,l} := t_{0,l} - \sum_{i\in[n]\setminus\{i_q^*\}} t_{i,l}$. This is done the same way as in the transition from $G_{0,\mu-1}$ to $G'_{0,\mu-1}$.

In the next step we change all ciphertexts in the challenge query from $x_i^0 + t_{i,l^*}$ to $x_i^1 + t_{i,l^*}$. Because of the $\bar{u}_i$, $\{x_i^0 + t_{i,l^*} + \bar{u}_i\}_{i\in\mathcal{U}}$ and $\{x_i^1 + t_{i,l^*} + \bar{u}_i\}_{i\in\mathcal{U}}$ are perfect $\eta$-out-of-$\eta$ secret sharings of $\sum_{i\in\mathcal{U}}(x_i^0 + t_{i,l^*})$ and $\sum_{i\in\mathcal{U}}(x_i^1 + t_{i,l^*})$, respectively. For both $b = 0$ and $b = 1$ we have $\sum_{i\in\mathcal{U}}(x_i^b + t_{i,l^*}) = \sum_{i\in\mathcal{U}\setminus\{i_u^*\}}(x_i^b + t_{i,l^*}) + x_{i_u^*}^b + t_{i_u^*,l^*}$. Because $t_{i_u^*,l^*} = \mathrm{RF}(l^*)$, both $\{x_i^0 + t_{i,l^*} + \bar{u}_i\}_{i\in\mathcal{U}}$ and $\{x_i^1 + t_{i,l^*} + \bar{u}_i\}_{i\in\mathcal{U}}$ are secret sharings of a truly random value and are therefore perfectly indistinguishable.

In the last step we change the RF back into a PRF.

The reduction loss of $n(n-1)$ comes from guessing the two users $i_u^*$ and $i_q^*$. The factor of two is there because the RF needs to be changed back into a PRF. Therefore, the total reduction loss is $2n(n-1)$.
**Lemma 3. (Transition from $G_2$ to $G_3$):** For every PPT adversary $\mathcal{A}$, which corrupts the aggregator, there is a PPT adversary $\mathcal{B}$ on the PRF with
$$|\Pr[G_2(\lambda, \mathcal{A}) = 1] - \Pr[G_3(\lambda, \mathcal{A}) = 1]| \le 2n^2(n-1) \cdot \mathrm{Adv}^{\mathrm{prf}}_{\mathcal{B},\mathrm{PRF}}(\lambda).$$

_Proof._ This transition is just the $G_0$-to-$G_1$ transition applied backwards.
**Lemma 4.** For every PPT adversary $\mathcal{A}$, which does not corrupt the aggregator, there is a PPT adversary $\mathcal{B}$ on the PRF with
$$|\Pr[G_0(\lambda, \mathcal{A}) = 1] - \Pr[G_3(\lambda, \mathcal{A}) = 1]| \le 2n^2 \cdot \mathrm{Adv}^{\mathrm{prf}}_{\mathcal{B},\mathrm{PRF}}(\lambda).$$

_Proof._ In this case we can go directly from $G_0$ to $G_3$ via a hybrid argument over all users. Let $\{i_1, \ldots, i_\eta\} = \mathcal{U}$ be the set of users specified in the challenge query. In hybrid game $H_\mu$ the challenge query for $(i_1, \ldots, i_\mu)$ is answered with an encryption of $x_i^1$, whereas for the other users it is answered with an encryption of $x_i^0$.

Formally, in $H_\mu$ we have:

$\mathsf{QChallenge}(\mathcal{U} = \{i_1, \ldots, i_\eta\}, \{x_i^0\}_{i\in\mathcal{U}}, \{x_i^1\}_{i\in\mathcal{U}}, l^*)$:
$$c_{i,l^*} := \begin{cases} \mathrm{Enc}(\mathrm{pp}, k_i, x_i^0) & \text{if } i = i_\tau \text{ for } \tau > \mu,\\ \mathrm{Enc}(\mathrm{pp}, k_i, x_i^1) & \text{if } i = i_\tau \text{ for } \tau \le \mu, \end{cases}$$
return $\{c_{i,l^*}\}_{i\in\mathcal{U}}$.

We have that $G_0 = H_0$ and $G_3 = H_n$. To get from $H_{\mu-1}$ to $H_\mu$ we use the two intermediate games $H'_{\mu-1}$ and $H''_{\mu-1}$. In $H'_{\mu-1}$, the PRF of user $i_\mu$ is replaced by a RF, and in $H''_{\mu-1}$ the challenge query of user $i_\mu$ is answered with an encryption of $x_{i_\mu}^1$ instead of $x_{i_\mu}^0$. In Lemma 1 we needed two honest users in order to change the PRF to a RF. This was necessary because we had to make sure that the sum of the ciphertexts remained unchanged. Here we are in the case that $\mathcal{A}$ does not corrupt the aggregator; therefore, $\mathcal{A}$ is unable to recognize when the sum of the ciphertexts changes. Thus, we only need one honest user to exchange the PRF for a RF. We now describe the games in a bit more detail:

**Game $H_{\mu-1}$:** In this game the challenge query for users $i_\tau$ with $\tau \le \mu-1$ is answered with encryptions of $x_{i_\tau}^1$, whereas the challenge query for the users $i_\tau$ with $\tau > \mu-1$ is answered with encryptions of $x_{i_\tau}^0$.

**Game $H'_{\mu-1}$:** This game guesses user $i_\mu$ and uses a RF instead of a PRF to answer the encryption and challenge queries for this user.

**Game $H''_{\mu-1}$:** In this game the challenge query for user $i_\mu$ is answered with an encryption of $x_{i_\mu}^1$ instead of $x_{i_\mu}^0$.

**Game $H_\mu$:** This game uses a PRF instead of a RF for user $i_\mu$ again. The answer to the challenge query is an encryption of $x_{i_\tau}^1$ for all users with $\tau \le \mu$.

Next, we prove the transitions between the games by reductions to the security of the PRF.

**Transition from $H_{\mu-1}$ to $H'_{\mu-1}$:** The reduction $\mathcal{B}$ guesses the user $i_\mu^*$ and generates keys $k_i$ for all users, including the aggregator, except for user $i_\mu^*$. Then $\mathcal{B}$ answers the queries as follows:

$\mathsf{QCorrupt}(i)$: If $i \ne i_\mu^*$ then $\mathcal{B}$ returns the self-generated key $k_i$. If $i = i_\mu^*$ then the guess that $i_\mu^*$ would be honest was wrong and $\mathcal{B}$ aborts. If $i = 0$, i.e. $\mathcal{A}$ wishes to corrupt the aggregator, $\mathcal{B}$ aborts, because we are in the case where $\mathcal{A}$ does not corrupt the aggregator.

$\mathsf{QEnc}(i, x_i, l)$: If $i \ne i_\mu^*$ then $\mathcal{B}$ simply answers with $x_i + \mathrm{PRF}_{k_i}(l) \bmod R$. If $i = i_\mu^*$ then $\mathcal{B}$ asks $l$ to their PRF challenger $\mathcal{C}_{\mathrm{PRF}}$, gets the answer $a_l$ and sends $x_{i_\mu^*} + a_l \bmod R$ to $\mathcal{A}$.

$\mathsf{QChallenge}(\mathcal{U} = \{i_1, \ldots, i_\eta\}, \{x_i^0\}_{i\in\mathcal{U}}, \{x_i^1\}_{i\in\mathcal{U}}, l^*)$: The reduction asks $l^*$ to $\mathcal{C}_{\mathrm{PRF}}$ and receives the answer $a_{l^*}$. Then $\mathcal{B}$ answers with $x_{i_\tau}^1 + \mathrm{PRF}_{k_{i_\tau}}(l^*)$ for $\tau < \mu$, with $x_{i_\tau}^0 + \mathrm{PRF}_{k_{i_\tau}}(l^*)$ for $\tau > \mu$, and with $x_{i_\tau}^0 + a_{l^*}$ for $\tau = \mu$. If $i_\mu \ne i_\mu^*$, then $\mathcal{B}$'s guess of $i_\mu^*$ was wrong and they abort the game.

The reduction $\mathcal{B}$ can directly use $\mathcal{A}$'s output as the guess for $\mathcal{C}_{\mathrm{PRF}}$. If $\mathcal{C}_{\mathrm{PRF}}$ used a PRF to generate its answers, then $\mathcal{B}$ perfectly simulates $H_{\mu-1}$, and if $\mathcal{C}_{\mathrm{PRF}}$ used a RF, then $\mathcal{B}$ perfectly simulates $H'_{\mu-1}$.

**Transition from $H'_{\mu-1}$ to $H''_{\mu-1}$:** The games $H'_{\mu-1}$ and $H''_{\mu-1}$ are perfectly indistinguishable, because $\mathrm{RF} + x_{i_\mu}^0$ is identically distributed to $\mathrm{RF} + x_{i_\mu}^1$.

**Transition from $H''_{\mu-1}$ to $H_\mu$:** Here we only need to change the RF of user $i_\mu$ back into a PRF, so this transition is the transition from $H_{\mu-1}$ to $H'_{\mu-1}$ applied backwards.

Guessing user $i_\mu^*$ entails a reduction loss of $n$, for both the transition from $H_{\mu-1}$ to $H'_{\mu-1}$ and that from $H''_{\mu-1}$ to $H_\mu$. Together with the $n$ hybrid steps, this leads to a reduction loss of $2n^2$.
## B Security Proofs of the PRFs

In this section we prove that the PRF used in Section 4.1 is secure.

**Theorem 2.** Let $\lambda, q, p \in \mathbb{N}$ with $q > p$, $k \leftarrow_\$ \mathbb{Z}_q^\lambda$ and $H: \mathcal{X} \to \mathbb{Z}_q^\lambda$. When we model $H$ as a random oracle, then for any PPT adversary $\mathcal{A}$ on the PRF $F_k(x) := \lfloor \langle H(x), k \rangle \rfloor_p$, there is an adversary $\mathcal{B}$ on $\mathrm{LWR}_{\lambda,q,p}$ with
$$\mathrm{Adv}^{\mathrm{prf}}_{\mathcal{A},F}(\lambda) \le \mathrm{Adv}^{\mathrm{LWR}}_{\mathcal{B}}(\lambda).$$

_Proof._ The idea is, for an LWR sample $(a, b) \in \mathbb{Z}_q^\lambda \times \mathbb{Z}_p$, to interpret $a$ as $H(x)$ for some $x$ and the LWR secret $s$ as the PRF key $k$; then $b = \lfloor \langle H(x), k \rangle \rfloor_p$. This is possible because $\mathcal{B}$ can program the random oracle accordingly.

The reduction works as follows. $\mathcal{B}$ maintains a table of triples $(\cdot, \cdot, \cdot)$. If $\mathcal{A}$ asks a PRF query for value $x$, $\mathcal{B}$ looks for an entry $(x, a, b)$ in the table and returns $b$ if such an entry is present. If there is no such entry, then $\mathcal{B}$ requests an LWR sample from their LWR challenger, receives $(a, b)$, stores $(x, a, b)$ in the table and returns $b$ to $\mathcal{A}$.

If $\mathcal{A}$ asks a RO query for value $x$, $\mathcal{B}$ again looks for an entry $(x, a, b)$ in the table, but this time returns $a$ if an entry is found. If there is no such entry, $\mathcal{B}$ again queries their LWR challenger, stores the answer $(a, b)$ in the table as $(x, a, b)$ and returns $a$ to $\mathcal{A}$.

In LWR the values of $a$ are uniformly random; therefore, $\mathcal{B}$'s answers to the RO queries of $\mathcal{A}$ are uniformly random as well. Furthermore, by maintaining the table of triples $(x, a, b)$, $\mathcal{B}$ ensures that the answers to the RO queries are consistent with the answers to the PRF queries. If the LWR challenger returns random values $b$, then $\mathcal{B}$ perfectly simulates a random function. If the LWR challenger returns actual LWR samples, then $\mathcal{B}$ perfectly simulates the PRF $F_k(x) := \lfloor \langle H(x), k \rangle \rfloor_p$, where $k$ is the LWR secret $s$. Therefore, $\mathcal{B}$ can directly forward $\mathcal{A}$'s guess to their LWR challenger and wins the game if $\mathcal{A}$ wins.
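To make the construction concrete, here is a minimal sketch of the rounded-inner-product PRF $F_k(x) = \lfloor \langle H(x), k \rangle \rfloor_p$, with $H$ instantiated (for illustration only) by counter-mode SHA-256 expansion; the parameter values are toy choices, not a vetted LWR parameterization:

```python
import hashlib

LAMBDA, Q, P = 512, 2**32, 2**16  # illustrative parameters (assumption)

def hash_to_Zq(x: bytes) -> list[int]:
    """Stand-in for the random oracle H: X -> Z_q^lambda,
    built by counter-mode expansion of SHA-256."""
    coords: list[int] = []
    ctr = 0
    while len(coords) < LAMBDA:
        block = hashlib.sha256(x + ctr.to_bytes(4, "big")).digest()
        # Q = 2^32, so 4-byte chunks are already uniform elements of Z_q.
        coords.extend(int.from_bytes(block[i:i + 4], "big") for i in range(0, 32, 4))
        ctr += 1
    return coords[:LAMBDA]

def prf(k: list[int], x: bytes) -> int:
    """F_k(x) = floor(<H(x), k>)_p: inner product over Z_q, rounded down to Z_p."""
    v = sum(a * b for a, b in zip(hash_to_Zq(x), k)) % Q
    return (v * P) // Q
```

Note that the rounding makes the PRF only almost key homomorphic: $F_{k_1+k_2}(x)$ equals $F_{k_1}(x) + F_{k_2}(x)$ up to a small rounding error, the usual situation for LWR-based key-homomorphic PRFs.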
**Fig. 11.** The diluted geometric mechanism from [14]. The function $\mathrm{Geom}(\alpha)$ returns a value $k$ with probability $\frac{\alpha-1}{\alpha+1} \cdot \alpha^{-|k|}$.

**Fig. 10.** The encrypt procedure for the fault-tolerant scheme, where $B(i)$ is the set of all $\log_2(n)$ nodes of the tree from client $i$ to the root. For each of these nodes it calls PSA.Encrypt, which is the encrypt algorithm of our proposed scheme in Figure 3.
## C Noise and Fault Tolerance

In this section we briefly describe how the noise is added in the fault-tolerant version of our scheme. This is only a slight adaptation of the method of Chan et al. [14], especially of their Figure 2. They only considered user values in $\{0, 1\}$, whereas we consider values in $\{0, \ldots, \Delta\}$, where $\Delta$ is the largest possible power consumption for which privacy shall still hold. Therefore we have an additional factor of $1/\Delta$ in the computation of $\epsilon_0$; the rest remains unchanged. Note that the factor of $1/\Delta$ is also present in [22], which introduced PSA. In the differential privacy literature, $\Delta$ is also called the sensitivity of the function.
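As an aside, the distribution $\mathrm{Geom}(\alpha)$ of Figure 11 is easy to sample: the difference of two i.i.d. one-sided geometric variables with success probability $1 - 1/\alpha$ has exactly the stated two-sided law. A minimal sketch (illustrative only; `random` is not a cryptographically secure source):

```python
import random

def geom(alpha: float) -> int:
    """Sample Geom(alpha): returns k with probability
    (alpha - 1) / (alpha + 1) * alpha**(-abs(k))."""
    p = 1.0 - 1.0 / alpha  # success probability of the one-sided geometric

    def one_sided() -> int:
        # Number of failures before the first success.
        n = 0
        while random.random() >= p:
            n += 1
        return n

    # The difference of two i.i.d. one-sided geometrics is two-sided geometric.
    return one_sided() - one_sided()
```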
Figure 10 shows the encryption algorithm of the fault-tolerant scheme. For each node on the path from client $i$ to the root, i.e. for each instance of the PSA scheme of which client $i$ is a part, the algorithm creates one ciphertext by calling the encrypt function of our PSA scheme in Figure 3. The parameters $\epsilon$ and $\delta$ are the same for all clients. One execution of the encrypt algorithm in Figure 10 provides $(\epsilon, \delta)$ computational differential privacy. The values $\epsilon_0$ and $\delta_0$ are then set to accommodate the fact that the aggregator gets $K$ ciphertexts per client.
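The following sketch shows the shape of this per-node encryption. It is only an illustration of the structure of Figure 10: the helper `prf_eval` is a toy hash-based stand-in (the scheme itself uses the key-homomorphic LWR PRF), `geom` is the sampler sketched above, and the dilution probability `beta` reflects the mechanism of [14], where each client adds $\mathrm{Geom}(\alpha)$ noise only with some probability:

```python
import hashlib
import random

R = 2**16  # plaintext-space modulus (illustrative assumption)

def prf_eval(key: bytes, label: bytes) -> int:
    # Toy PRF stand-in for illustration; not key homomorphic.
    return int.from_bytes(hashlib.sha256(key + label).digest()[:4], "big") % R

def encrypt_fault_tolerant(x_i: int, label: bytes,
                           path_keys: dict[str, bytes],
                           alpha: float, beta: float) -> list[tuple[str, int]]:
    """Client i emits one ciphertext per node in B(i), the log2(n) tree
    nodes covering it, each masked by that node's key and carrying
    diluted geometric noise."""
    ciphertexts = []
    for node, key in path_keys.items():
        noise = geom(alpha) if random.random() < beta else 0  # dilution
        ciphertexts.append((node, (x_i + noise + prf_eval(key, label)) % R))
    return ciphertexts
```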
# Distributed Volumetric Scene Geometry Reconstruction With a Network of Distributed Smart Cameras
## Shubao Liu † Kongbin Kang † Jean-Philippe Tarel ‡ David B. Cooper † †Division of Engineering, Brown University, Providence, RI 02912
{sbliu, kk, cooper}@lems.brown.edu
## ‡Laboratoire central des Ponts et Chaussées (LCPC), Paris, France
jean-philippe.tarel@lcpc.fr
## Abstract
Central to many problems in scene understanding based on using a network of tens, hundreds or even thousands of randomly distributed cameras with on-board processing and wireless communication capability is the "efficient" reconstruction of the 3D geometry structure in the scene. What is meant by "efficient" reconstruction? In this paper we investigate this from different aspects in the context of visual sensor networks and offer a distributed reconstruction algorithm roughly meeting the following goals: 1. close to achievable 3D reconstruction accuracy and robustness; 2. minimization of the processing time by adaptive computing-job distribution among all the cameras in the network and asynchronous parallel processing; 3. communication optimization and minimization of the (battery-stored) energy, by reducing and localizing the communications between cameras. A volumetric representation of the scene is reconstructed with a shape-from-apparent-contours algorithm, which is suitable for distributed processing because it is essentially a local operation in terms of the involved cameras, and apparent contours are robust to outdoor illumination conditions. Each camera processes its own image and performs the computation for a small subset of voxels, and updates the voxels through collaborating with its neighbor cameras. By exploring the structure of the reconstruction algorithm, we design the minimum-spanning-tree (MST) message passing protocol in order to minimize the communication. Of interest is that the resulting system is an example of "swarm behavior". 3D reconstruction is illustrated using two real image sets, running on a single computer. The iterative computations used in the single-processor experiment are exactly the same as those used in the network computations. Distributed concepts and algorithms for network control and communication performance are theoretical designs and estimates.
## 1. An Overview of the System
### 1.1. Motivation
With the recent development of cheap and powerful visual sensors, wireless chips and embedded systems, cameras have enough computing power to do some on-board
“smart” processing. These “smart cameras” can form a
network to collaboratively monitor, track and analyze the
scenes of interest. This area has drawn a lot of attention in both academia and industry over the past years (see [1], [11] and [13] for an overview). However, compared with the maturity and availability of the camera network hardware, the software capable of fully utilizing the huge amount of visual
information is greatly under-developed. This has become
the bottleneck for the wide deployment of the smart camera
network (also called visual sensor network, VSN). There is
an obvious demand to synchronize the recent development
of vision algorithms with the development of the visual sensor network hardware. Our paper presents a completely new
and natural approach to 3D reconstruction within a smart
camera network.
### 1.2. The Goal
Our goal is minimum-error 3D scene reconstruction
based on edge information with N calibrated smart cameras through their collaborative distributed processing, as illustrated in Fig. 1. A Bayesian approach is taken to 3D
reconstruction, where the surface is treated as a stochastic
process modelling the smoothness of the surface. Thanks
to the apparent contours' robustness to environmental factors, shape-from-apparent-contours is more suitable for outdoor distributed camera applications than intensity-based multi-view reconstruction. The representation for the
estimated surface is a discretized level set function defined
on a grid of voxels. The cost function to be minimized is
the sum of the area of the 3D surface and the integral of
“consistency” between the apparent contour of the current
surface estimate and the image edges. The object surface
is to be reconstructed distributedly with N smart cameras
co-operating to minimize both the processing time and the
consumed on-board battery energy. Computation and communication load-balancing are investigated to make battery
usage roughly at the same level over all the cameras.
### 1.3. 3D Surface Reconstruction
The proposed surface estimation procedure is iterative
through numerical solution of the first order variation of the
energy functional (i.e., the cost function). It turns out that
each iteration is a linear incremental change of the current
estimated surface. All of the computation takes place within
a thin band around the estimated surface. For each camera
c, the data and information available for the (t + 1)th iteration is: its projection matrix; the edges in its image; and
a subset of voxels that this camera maintains. The incremental update is the sum of two increments. The first increment comes from the contribution of the image edge data.
This increment attempts to align the contour generators of
the estimated 3D surface with the edge-data in the observed
image. The second increment is the contribution of the a
priori stochastic model of the 3D surface. Hence, for each
voxel on the estimated 3D surface at the start of an incremental surface update-iteration, the cameras contributing to
the voxel update are the ones whose contour generators are
close to that voxel. A voxel is in the primary responsibility set (PRS) of each camera whose image information
contributes to the voxel’s updating. Each contributes to the
first update-increment. One of these cameras takes responsibility for computing the second update-increment. This
group of cameras each has a record of the changes made,
and therefore of the total update change made. For a 3D
surface voxel not contributed by any camera at the start of
an update-iteration, there is no first incremental-update, and
one of the cameras takes responsibility for computing and
communicating the second incremental-update. Such a voxel belongs to the secondary responsible set (SRS) of that camera. Fig. 2
illustrates these concepts on a sphere shape.
### 1.4. Distributed Processing
It is known that the battery power for two wireless
cameras to communicate is approximately proportional to
the square of their distance. Hence, rather than two cameras
communicating directly, the signals from the transmitting
camera travel to the receiving camera through a sequence
of relays where it travels from one camera to a camera that
is close, then to another close camera, etc. In this way, communication power increases linearly with distance between
cameras. This is a camera communication network (CCN)
optimization problem that finds the best routing for each
communication, i.e. how to send messages.
For our purpose, only the end-to-end communication in
the application layer is considered. Inspired by the observation that the communication in the proposed algorithm
works more like broadcast (although not exactly, which will
be discussed later) than point-to-point ad-hoc communication, we can optimize the communication further by deciding what to send and whom to send to, instead of only optimizing how to send. This results in an efficient message passing protocol based on the minimum spanning tree of the camera reconstruction network (CRN; the exact meaning will be discussed later).

Figure 1. Volumetric world, smart cameras and their observations.
Distributing the voxel updating job among all the smart
cameras to enable parallel processing is achieved by each
camera processing those voxels in its primary and secondary
responsibility sets. These sets for the various cameras are
close enough in size such that the partition results in balanced parallel processing. A camera determines its secondary responsibility set through negotiating the boundaries
with its neighbor cameras in the CRN. Also incurred is battery energy for the communications in determining the secondary responsibility set. Rough minimization of communications battery energy is achieved by routing communications over paths contained in an MST (Minimum Spanning Tree). Also some communication is required among
cameras having primary sets that are close in order for the
cameras to figure out their secondary responsibility sets.
## 2. Shape From Apparent Contours
A shape-from-apparent-contours algorithm is first developed to reconstruct the 3D shape from apparent edges in
different views. The algorithm also incorporates the prior
knowledge about the surface (e.g., surface smoothness) to
produce a complete shape. The proposed algorithm combines the ideas of 2D active contours and variational surface reconstruction [7, 6, 15, 9] based on implicit surface deformation. In active contour fitting, the best curve $C^\star$ is found by deforming a curve $C(s)$ to make it fit the object boundaries:
$$C^\star(s) = \arg\min_{C(s)} E(C(s)).$$
The functional $E(C(s))$ is usually defined as
$$E(C(s)) = \mu \int_0^1 ds - \int_0^1 \|\nabla_G I(C(s))\|\, ds \qquad (1)$$
Figure 2. Illustration of the key concepts (including contour generators, band, PRS, SRS) in the distributed reconstruction algorithm, with a simple setting (a sphere shape and evenly distributed cameras around the equator of the sphere).
where $ds$ is the infinitesimal curve length, $\nabla_G I(C(s)) = \nabla(G * I)(C(s))$ is the data term measuring the influence of the image intensity gradient along the fitted curve, $G * I$ is the convolution of the intensity image $I$ with a Gaussian filter $G$, and $\mu$ is a scalar value controlling the influence of the length of the curve.
Apparent contours are curves coming from contour generators on the surface through perspective projection. So instead of assuming that the contours can be deformed freely, we constrain them with a 3D surface:
$$C_i(s) = \Pi_i(G_i(s)), \qquad (2)$$
where $\Pi_i$ is the $i$-th camera's perspective projection, which maps a 3D point $X$ to a 2D image point $x_i$; $C_i(s)$ is the apparent contour in image $i$, and $G_i(s)$ is the corresponding contour generator on the surface, as shown in Fig. 2. Notice that
$$\int_0^1 \|\nabla_G I(C_i(s))\|\, ds = \int_S \mathbf{1}_{G_i}(X)\, \|\nabla_G I_i(\Pi_i(X))\|\, dA \qquad (3)$$
where $S$ is the surface. Equation (3) turns the line integral into a surface integral with the introduction of the contour generator indicator function $\mathbf{1}_{G_i}(X)$, which is an impulse function. (In experiments, it is approximated with a Gaussian function.) Through the occluding-geometry relationship between the surface normal $N$ and the tangent plane $\bar{N}_i$ (obtained by back-projecting the tangent line of the apparent contours), we extend (3) to
$$\int_S \mathbf{1}_{G_i}(X)\, \|\nabla_G I(x_i)\| \cdot |\bar{N}_i^T N|\, dA \qquad (4)$$
to further enforce the tangency constraint. The higher-order term $|\bar{N}_i^T N|$ makes the shape evolution converge faster and more accurately.
The surface to be reconstructed, $S^\star$, is the optimal surface that minimizes an energy functional in the form of a weighted area, with the weights depending on the $M$ observed images as in (4), plus a prior term:
$$E(S) = \int_S \Phi(X, N)\, dA = \int_S \Big( \sum_{i=1}^{M} \mathbf{1}_{G_i}(X)\, \|\nabla_G I(x_i)\| \cdot |\bar{N}_i^T N| + \mu \Big)\, dA, \qquad (5)$$
where $dA$ is an infinitesimal area element and $\mu$ is a parameter controlling the smoothness of the surface. Interpreted in Bayesian language, the prior energy term $\int_S \mu\, dA$ corresponds to a prior probability $\frac{1}{Z} e^{-\mu\, \mathrm{Area}(S)}$, which is the energy representation of a first-order continuous Markov random field, encouraging smooth surfaces instead of bumpy ones. The functional (5) is minimized through gradient descent by computing its first-order variation.
The gradient descent flow for (5) can be written as [7]:
$$S_t = F N, \qquad (6)$$
$$F = 2\kappa\Phi - \langle \Phi_X, N \rangle - 2\kappa \langle \Phi_N, N \rangle, \qquad (7)$$
where $\kappa$ is the mean curvature of the surface $S$. With the level set representation, $S = \{X : \phi(X) = 0\}$, the above evolution equation can be rewritten as:
$$\phi_t = F\, \|\nabla\phi\|. \qquad (8)$$
Through some calculus, we get the speed function for (8) as
$$F = 2\mu\kappa - \sum_{i=1}^{M} \langle \Phi_{iX}, N \rangle. \qquad (9)$$
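To make the evolution (8) concrete, here is a minimal sketch of one explicit Euler step on a voxel grid, assuming NumPy and a precomputed speed field; it uses central differences for brevity, whereas a production level-set solver would use upwind differencing and a CFL-bounded time step:

```python
import numpy as np

def level_set_step(phi: np.ndarray, F: np.ndarray, dt: float) -> np.ndarray:
    """One explicit Euler step of phi_t = F * ||grad phi||.

    phi: level set function on a 3D grid; F: speed field, e.g.
    2*mu*kappa minus the per-camera data terms of equation (9)."""
    gx, gy, gz = np.gradient(phi)
    grad_norm = np.sqrt(gx**2 + gy**2 + gz**2)
    return phi + dt * F * grad_norm
```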
## 3. Distributed Algorithm for Scene Reconstruction

In the above, we have briefly described a centralized algorithm for shape from apparent contours, in which one central processor collects data from all cameras and processes them in batch. In visual sensor network applications, distributed algorithms are preferred, where each smart camera runs an identical program but with different states and different image inputs. In this section we show that the proposed algorithm can be run distributedly on the smart camera network by augmenting it with a job division module and a communication module. In principle the algorithm can be made distributed because: (1) the algorithm reconstructs the contour generators, and the other part of the surface is interpolated through the prior energy, equivalently, an a priori stochastic model for the 3D surface; (2) it has been shown that the contour generators can be reconstructed locally by studying the differential geometry of the apparent contour change [4, 3, 10].
Table 1. Main notation summary

| Symbol | Meaning |
|---|---|
| $\mathcal{V}_c$ | the set of voxels that camera $c$ maintains |
| $\mathcal{C}_v$ | the set of cameras that maintain voxel $v$ |
| $F_v^c$ | a scalar representing camera $c$'s contribution to voxel $v$'s updating |
| $\mathrm{PRS}_c$ | the primary responsible set of camera $c$ |
| $\mathrm{SRS}_c$ | the secondary responsible set of camera $c$; $\mathrm{PRS}_c \cup \mathrm{SRS}_c = \mathcal{V}_c$ |

Table 2. An example of the data structures that each camera maintains.

| voxel ID | voxel value | neighbor cameras in MST |
|---|---|---|
| 1001 | 1.302 | {2, 30} |
| 2187 | -2.630 | {10} |
| ... | ... | ... |

PRS: {1001, ...}; SRS: {2187, ...}; watching voxel list: {...}; boundary voxel list: {...}

To highlight the structure of the reconstruction procedure, we summarize each voxel's updating with this formula:
$$\phi_v^{t+\Delta t} = \phi_v^t + \Big( 2\mu\kappa + \sum_{c \in \mathcal{C}_v} F_v^c \Big)\, \|\nabla\phi\|\, \Delta t, \qquad (10)$$
where $\mathcal{C}_v$ is the set of cameras $c$ that have $F_v^c \ne 0$ for voxel $v$, and $F_v^c$ is the speed contribution from camera $c$ to voxel $v$:
$$F_v^c = -\langle \Phi_{cX}(v), N(v) \rangle. \qquad (11)$$
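A minimal sketch of the per-voxel update (10), with hypothetical argument names; the host camera applies the smoothing term and folds in the data speeds $F_v^c$ gathered from the cameras in $\mathcal{C}_v$:

```python
def update_voxel(phi_v: float, kappa_v: float, grad_norm_v: float,
                 contributions: list[float], mu: float, dt: float) -> float:
    """One application of formula (10) at a single voxel v.

    contributions: the speeds F_v^c received from the cameras in C_v
    (delivered by the MST message passing of Section 3.2)."""
    speed = 2.0 * mu * kappa_v + sum(contributions)
    return phi_v + speed * grad_norm_v * dt
```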
Formula (10) describes the updating operation for each voxel. A naïve parallel implementation of the algorithm is to divide the entire set of voxels into M (the number of smart cameras) subsets, with each camera taking care of one subset of the voxels. The problem with this naïve approach is that (1) each camera needs to maintain a copy of all the other cameras' observed images, which implies a huge amount of data communication and prevents the algorithm from scaling up to a large camera network; and (2) the contour generators change dynamically as the surface shape evolves, so fixing the set of voxels that each camera maintains requires distant cameras to exchange information about voxels' states and image observations, which prevents the communication between cameras from being localized.
Instead we build a camera-centric distributed algorithm,
in which each camera c maintains a gradually changing dynamic subset $\mathcal{V}_c$ of voxels around the currently estimated contour generators seen by this camera. Algorithm 1 describes the overall procedure at a high level, with each subroutine discussed in detail in Algorithms 2 and 5.
Through each camera maintaining a subset of voxels Vc
and localizing the computation and communication, Algorithm 1 has good scalability with respect to the number of
cameras and the resolution of the volumetric representation.
In the following, we elaborate on different aspects of the distributed algorithm, including complete surface coverage, computation load balancing among cameras, communication optimization, etc. For the sake of clarity, Table 1 summarizes the main notation used in the following discussion, and Table 2 shows the main data structures that each camera maintains to support the distributed algorithm. The usage of these data structures is discussed below.

Figure 3. Illustration of the job distribution scheme in the 2D case. The light-green strip indicates the narrow band. The dark blue indicates the PRS of camera 1; the shallow blue indicates the SRS voxels of camera 1. The dark red indicates the PRS of camera 2; the shallow red indicates the SRS of camera 2.
**Algorithm 1 Camera-centric distributed algorithm for**
scene geometry reconstruction
1: for each smart camera c, do
2: Compute the incremental updates $F_v^c$, $\forall v \in \mathcal{V}_c$, according to formula (11). If $\max_{v \in \mathcal{V}_c} |F_v^c| < \varepsilon$ (where $\varepsilon$ is a stop-criterion threshold), then terminate.
3: Send $F_v^c$ to all the cameras in $\mathcal{C}_v$, $\forall v \in \mathcal{V}_c$, through a minimum-spanning-tree (MST) message passing protocol as described in Algorithm 5.
4: Update each voxel’s level set value according to formula (10), after receiving messages from the other
tree branches of this node in the MST, as described
in Algorithm 5.
5: Update the voxel set $\mathcal{V}_c$ as described in Algorithm 2.
6: end for
### 3.1. Job Distribution Scheme
In the level set method ([12, 14]), a narrow-band implementation is commonly used to save memory and computation. It is based on the fact that only the voxels around the surface (the zero-level set) contribute to the shape evolution. So in the implementation, a band $\Omega$ around the surface $S$ is defined with an interval $[D_L, D_H]$ on each voxel's level set function value, and only the voxels inside the band are updated (see Fig. 2). The price for this is that after each iteration the band should be updated to keep the new surface always inside the band, through keeping a watching list of voxels that tracks the boundary of the narrow band. As illustrated in Fig. 2 (3D version) and Fig. 3 (2D version), we need to further divide the band into patches so that each camera takes care of one patch. (1) Each patch should contain at least all the "core voxels", those voxels around its contour generator defined by the contour generator indicator function; the set of "core voxels" is called the Primary Responsible Set (PRS). (2) Each patch should include some "free voxels", those voxels around the core voxels that are not taken care of by any other camera; the "free voxels" hosted by camera $c$ belong to the Secondary Responsible Set (SRS) of camera $c$. To effectively distribute the reconstruction job among the cameras, there are three criteria that the job division scheme should address:
$$\mathrm{PRS}_c \subseteq \mathcal{V}_c \quad \text{(correctness)} \qquad (12)$$
$$\textstyle\bigcup_c \mathcal{V}_c = \Omega \quad \text{(complete coverage)} \qquad (13)$$
$$|\mathcal{V}_c| \text{ approximately equal across cameras} \quad \text{(load balance)} \qquad (14)$$
Eqn. (12) guarantees the correctness of the speed computation $F_v^c$; Eqn. (13) ensures that all voxels inside the narrow band $\Omega$ are updated. With (12) and (13) satisfied, the "free" voxels are distributed with consideration of load balance among the cameras by Algorithm 4.
**Algorithm 2 Update the voxel set $\mathcal{V}_c$ for each camera c**
1: Update the PRS of camera c as described in Algorithm 3.
2: Update the SRS of camera c as described in Algorithm 4.
As described in Algorithm 1, after each iteration, for each camera $c$, its voxel set $\mathcal{V}_c$ (composed of PRS and SRS) should be updated. First, each camera's new PRS can be computed easily, given the newly detected contour generator, through narrow-band updating, as described in Algorithm 3. Besides the PRS, there are other portions of the surface that are not covered by any camera. To ensure that these "free" voxels are updated correctly, we need to assign them to host cameras; these "free" voxels are put in the SRS of their corresponding host cameras. There are two considerations in distributing these voxels. First, these voxels may belong to neighbor cameras' PRS in the next iterations, so placing them on those potential cameras saves communication later. The other concern is the
**Algorithm 3 Update the PRS**
1: % update the narrow band
2: for each voxel v in the watching list, do
3: **if its level set function value $\phi(v) \in [D_L, D_H]$ then**
4: expand the boundary voxels by adding the neighbor voxels whose level set function’s absolute values are greater than |φ(v)|.
5: **else**
6: delete this voxel.
7: **end if**
8: end for
9: Update the contour generator indicator values $\mathbf{1}_{G_c}(v)$, $\forall v \in \mathcal{V}_c$, for camera $c$. Put voxels whose indicator value is above a threshold $T_G$ into the new PRS.
**Algorithm 4 Update the SRS for camera c**
1: % update the boundary list
2: for each boundary voxel v, do
3: **for each $c' \in \mathcal{C}_v$, do**
4: **if $v \in \mathrm{PRS}_{c'}$ then**
5: delete v from $\mathcal{V}_c$; add its neighbors to the boundary voxel list.
6: **end if**
7: **end for**
8: end for
9: % At this stage, each boundary voxel has only two hosts.
10: % Now start pairwise load balancing.
11: for each voxel v in the boundary list, do
12: $c' = \mathcal{C}_v \setminus \{c\}$,
13: **if $|\mathcal{V}_{c'}| < |\mathcal{V}_c|$ then**
14: delete v from $\mathcal{V}_c$, and add its neighbors to the boundary voxel list.
15: **end if**
16: end for
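As a small illustration of the balancing phase of Algorithm 4 (helper and variable names are hypothetical; a real camera would perform these checks via messages to its neighbor), the following sketch lets camera c give up each shared boundary voxel whenever the other host currently maintains fewer voxels:

```python
def balance_boundary(c: int, vox: dict[int, set[int]],
                     host_pairs: dict[int, int]) -> None:
    """Pairwise load-balance sketch. host_pairs maps each boundary voxel v,
    currently hosted by camera c and exactly one other camera, to that
    other host c'. Camera c drops v when c' maintains fewer voxels."""
    for v, c_prime in list(host_pairs.items()):
        if len(vox[c_prime]) < len(vox[c]):
            vox[c].discard(v)  # v remains covered by its other host c'
```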
load balance. Due to the non-uniformity of the surface and
the distribution of the cameras, the size of the PRS for each
camera is different. The existence of these “free” voxels
provides us a leverage to balance the workload among cameras. The PRSs are fixed for the given surface and the cameras’ locations; The SRSs are flexible as long as together
with PRS they cover the whole surface. We can take advantage of this to assign these “free” voxels to the cameras
that have relatively small PRSs. The workload balance is negotiated pairwise by neighbor cameras that share boundaries, as described in Algorithm 4. The communications in Algorithm 4 happen in two steps: 1) communication between $c$ and $c'$ when checking $v \in \mathrm{PRS}_{c'}$; 2) communication between $c$ and $c'$ when checking $|\mathcal{V}_{c'}| < |\mathcal{V}_c|$.
Since this operation is performed for each boundary voxel,
the communication cost is proportional to the number of
boundary voxels.
(a) (b) (c)
Figure 4. Illustration of a simple communication case: (a) the virtual communication path in the naïve approach; (b) the physical communication path in the naïve approach, with a communication cost of 8 units; (c) the physical communication path in the MST case, with a communication cost of 4 units.
### 3.2. Communication Optimization
As discussed above, cameras need to communicate with
each other locally to share information about their common
voxels and dynamically assign work loads among cameras.
Here we examine the problem of optimizing the communications between these cameras. From the above description
(especially in (10)), we know that each voxel's incremental update is the sum of the participating cameras' contributions. So the basic communication job is: sending each camera $c$'s incremental updating contribution $F_v^c$ to all the other cameras in $\mathcal{C}_v$. Now let us analyze the communication cost of the naïve approach, in which each camera sends its own value $F_v^c$ directly to all other cameras in the set $\mathcal{C}_v$. Suppose the communication cost between neighbor cameras in the graph is 1 unit. For a random graph, the average communication complexity for one message passing is $O(D) = O(\log N)$, where $D$ is the diameter of the communication graph of the network. Then the total average communication complexity is $O(N^2 \log N)$. The worst case for one message passing is $N$, with the worst total communication complexity being $N^3$.

Instead of sending $F_v^c$ directly to all the other cameras in $\mathcal{C}_v$, there exists a more efficient way. Look at what each camera needs: the summation of $F_v^c$ over all the participating cameras $c \in \mathcal{C}_v$. Based on this observation, our
solution is the tree message passing protocol, as described
in Algorithm 5 and illustrated in Fig. 5. We store this tree
representation of the CRN distributedly, through each camera maintaining a list of directly connected camera nodes
for each voxel, as shown in Table 2. Why does the message passing work correctly for the tree structure? This is
because there is “no loop” in the tree, which guarantees
that cutting any edge separates the tree into two subtrees, and the message sent through an edge is all the summed information from the subtree behind it. In this way,
each node’s value is contributed to other nodes exactly once.
Take the tree in Fig. 5 for example: node $j$ receives messages from $k$, $l$ and $i$, and each message from $k$, $l$, $i$ is the summation of the values in their respective subtrees $\{k\}$, $\{l\}$, and $\{i, m, n, o, p, q\}$.

Figure 5. Illustration of the minimum spanning tree message passing. Each node sends a message along one of its edges once the messages from its other edges have arrived.
**Algorithm 5 Tree message passing protocol**
1: for each node in the tree, do
2: Compute and send message to one edge if the messages from the other edges have been received;
3: Otherwise, wait.
4: end for
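The following is a minimal, sequential simulation of Algorithm 5 (function and variable names are illustrative). Each directed edge of the tree carries exactly one message, the sum over the subtree behind it, so $2(N-1)$ messages suffice and every node ends up knowing the full sum:

```python
def tree_message_passing(adj: dict[int, set[int]],
                         value: dict[int, float]) -> dict[int, float]:
    """adj: adjacency lists of the (spanning) tree; value: each node's
    local contribution F_v^c. Returns each node's total sum."""
    sent: dict[tuple[int, int], float] = {}  # sent[(u, v)] = message u -> v
    progress = True
    while progress:
        progress = False
        for u, nbrs in adj.items():
            for v in nbrs:
                if (u, v) in sent:
                    continue
                others = [w for w in nbrs if w != v]
                # A node may send along an edge once it has heard from
                # all of its other edges (leaves can send immediately).
                if all((w, u) in sent for w in others):
                    sent[(u, v)] = value[u] + sum(sent[(w, u)] for w in others)
                    progress = True
    return {u: value[u] + sum(sent[(w, u)] for w in adj[u]) for u in adj}

# Star tree with center 1: every node learns the same total.
tree = {0: {1}, 1: {0, 2, 3}, 2: {1}, 3: {1}}
totals = tree_message_passing(tree, {0: 1.0, 1: 2.0, 2: 3.0, 3: 4.0})
assert all(abs(t - 10.0) < 1e-9 for t in totals.values())
```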
Next, the communication cost of the tree message passing scheme is analyzed. For a tree with $N$ nodes, there are $N-1$ edges, and since we send information bidirectionally, the communication cost is $2(N-1)$ units. Given the set of cameras $\mathcal{C}_v$ for a fixed voxel $v$, there are many trees that can be constructed; which one is the best? Given a weighted undirected graph $G$, we define a minimum spanning tree (MST) as a connected subgraph of $G$ for which the combined weight of all the included edges is minimized. In our case, the minimum spanning tree is the one that has the minimum communication cost. Since voxel updating is a key operation in the algorithm, improving this operation greatly speeds up the algorithm. Tree message passing is very useful for distributed smart camera systems in general, since summarizing a message in one subgraph and sending it to the other branch is a common operation, and the protocol described above works correctly for any tree topology. The MST can also be constructed and updated distributedly (see [2, 8, 5] for more details).
With this MST protocol described in Algorithm 5, we
can see that each camera updates its own copy of voxels
only after receiving messages from all its neighbor cameras.
Thus, there is no need to synchronize among the cameras after each iteration. Each camera runs its own algorithm and updates its own state asynchronously, only after it receives all the information needed; the synchronization is implicitly controlled by the message passing.
Figure 6. Two sample images of the Toy Dinosaur dataset
Figure 7. The reconstructed dinosaur shape
Figure 8. Shape evolution path of the Toy Dinosaur
## 4. Experimental Results
We first test the proposed algorithm on a public dataset, Toy Dinosaur.¹ In Fig. 6, two sample images out of a total of 23 images are shown. In this dataset, the background
is relatively simple. The level set function is defined on a
56 × 120 × 96 grid, µ is set as 0.01 (a small value to prevent
smoothing out the dinosaur’s high curvature parts). Fig. 7
shows the reconstructed shape after 200 iterations. From
the results, we see that the overall shape is successfully reconstructed. Fig. 8 shows the whole shape evolution process, starting from a bounding rectangular box. It successfully converges to the concave parts, e.g. recovering the two hands and separating the two legs. The reconstruction accuracy is measured by the projection error, which is defined as the distance between the projected apparent contour and the image apparent contour. The average projection error for this dataset is 0.21 pixels.
The next experiment is on the David bust dataset which
consists of 20 calibrated images taken by one moving real
camera. Fig. 9 shows samples of the image sequence.
¹ This dataset is available at http://www-cvr.ai.uiuc.edu/ponce_grp/data/mview/.
Figure 9. (top) Image sequence with indoor background; (bottom) projected silhouette contours (in red) estimated by the algorithm during the 3D reconstruction process, overlaid with the image edges.

Figure 10. Shape evolution path of the David bust

This dataset is challenging in two respects. First, the object is textureless and non-Lambertian, and the illumination changes (due to the flash light), which challenges most multi-view stereo algorithms based on intensity matching. Second, the object is embedded in a natural indoor background. In the experiment on this dataset, the level set function is defined on a 64 × 64 × 64 grid, and the parameter µ is set to 0.05. The projection error for the David dataset is 0.37 pixels. The projected 3D reconstructed apparent contours are shown in Fig. 9. Fig. 10 shows the whole evolution path, starting from a cubic bounding box. It can be seen that the shape evolution process converges to the object even though the background in the image is complex.

A rough estimate of the communication load, and hence the battery energy expenditure, follows. As discussed previously, neighbor cameras communicate with each other 1) to figure out the ownership of the "free" voxels, and 2) to exchange information about the voxels' update values. In these two experiments, the total number of iterations is set to 200. The narrow-band width is 6, so the number of voxels in the narrow band is the surface area times the narrow-band width (approximately $3 \times 10^4$ for the Dinosaur dataset). Each voxel is maintained by 3 cameras on average, so the total number of voxel copies that all the cameras take care of is about three times that number: $6 \times 10^4$. Since 20–23 cameras can cover the surface tightly, the number of "free" voxels is small compared to the size of the union of the PRSs, so the messages exchanged are dominated by the voxel updating messages. As shown in Section 3.2, the total number of update values exchanged per iteration is $2(N-1) \approx 1.2 \times 10^5$. Each update value is 4 bytes (stored in single-precision format), so the total communication volume for each iteration is $4.8 \times 10^5$ bytes, about 480 KB; with 200 iterations, the total data exchanged is about 96 MB. This order of communication cost is affordable in VSNs.

## 5. Conclusions and Discussion

In this paper we define the problem and present a solution to the 3D reconstruction of an object, indoors or outdoors, from silhouettes in images taken by a network of many randomly distributed battery-powered cameras having onboard processing and wireless communication. The goal is reconstruction close to the achievable accuracy
while roughly minimizing processing time and battery usage. (More generally, other constraints may be present, e.g.,
communication bandwidth limitation.) The challenge is to
use few image pixels in an image, to communicate as little data as possible, and for each camera to communicate
to as few other cameras as possible. Our solution involves
maximum a posteriori probability estimation for achieving
close to optimal accuracy, introducing and using a dynamically changing vision graph for assigning computation tasks
to the various cameras for achieving minimum computation
time, and routing camera communications over a minimum
spanning tree (MST) for achieving minimum communications battery usage.
The main contribution of this work is the distributed processing of shape-from-contours, including the region-specific vision graph, the job division schemes, and the MST message passing protocol. The region-specific vision graph and MST message passing developed in the paper can be applied to other distributed vision tasks generally. The job division scheme is linked more tightly to the shape-from-contours approach, but the principles developed here, including the three constraints (correctness, complete coverage and load balance), can be extended to other vision problems. For example, for shape from texture and contours, similar job division schemes can be developed by selecting the most valuable image observations. The distributed algorithm proposed in the paper is applicable not only to smart camera networks but also to multiprocessor systems such as today's many-core CPUs and GPUs. We compute rough estimates of the amount of required computation and the required communication cost. The approach is appropriate for networks of very large numbers of cameras.
## References
[1] I. Akyildiz, W. Su, Y. Sankarasubramaniam, and E. Cayirci.
Wireless sensor networks: a survey. Computer Networks,
38:393–422, 2002.
[2] B. Awerbuch. Optimal distributed algorithms for minimum
weight spanning tree, counting, leader election and related
problems. In Proc. 19th Symp on Theory of Computing,
pages 230–240, May 1987.
[3] M. Brand, K. Kang, and D. Cooper. Algebraic solution to
visual hull. In CVPR, 2004.
[4] R. Cipolla and P. Giblin. Visual Motion of Curves and Surfaces. Cambridge University Press, 2000.
[5] B. Das and V. Loui. Reconstructing a minimum spanning
tree after deletion of any node. Algorithmica, 31:530–547,
2001.
[6] O. Faugeras, J. Gomes, and R. Keriven. Variational principles in computational stereo. In Geometric Level Set Methods in Imaging, Vision and Graphics, S. Osher and N. Paragios, Eds., 2003.
[7] O. Faugeras and R. Keriven. Variational principles, surface
evolution, PDE’s, level set methods and the stereo problem.
_IEEE Trans. Image Processing, 7(3):336–344, 1998._
[8] R. Gallager, P. Humblet, and P. Spira. A distributed algorithm for minimum weight spanning tree. ACM Trans. on
_Programming Languages and Systems, 5(1):66–77, January_
1983.
[9] P. Gargallo, E. Prados, and P. Sturm. Minimizing the reprojection error in surface reconstruction from images. In ICCV,
pages 1–8, 2007.
[10] S. Liu, K. Kang, J.-P. Tarel, and D. Cooper. Free-form object reconstruction from occluding edges and texture edges:
A unified and robust operator based on duality. _PAMI,_
30(1):131–146, January 2008.
[11] K. Obraczka, R. Manduchi, and J. Garcia-Luna-Aveces.
Managing the information flow in visual sensor networks. In
_5th Symp. Wireless Personal Multimedia Communications,_
volume 3, pages 1177–1181, 2002.
[12] S. Osher and R. Fedkiw. Level Set Methods and Dynamic
_Implicit Surfaces. Springer-Verlag, New York, 2002._
[13] B. Rinner and W. Wolf. An introduction to distributed smart
cameras. Proceedings of the IEEE, 96:1565–1575, 2008.
[14] J. A. Sethian. Level Set Methods and Fast Marching Meth_ods. Cambridge University Press, 1999._
[15] A. J. Yezzi and S. Soatto. Stereoscopic segmentation. In
_ICCV, 2001._
|
{
"disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references', 'abstract'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1109/CVPR.2009.5206589?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1109/CVPR.2009.5206589, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "GREEN",
"url": "http://perso.lcpc.fr/tarel.jean-philippe/publis/jpt-cvpr09.pdf"
}
| 2,009
|
[
"JournalArticle",
"Conference"
] | true
| 2009-06-20T00:00:00
|
[] | 8,753
|
en
|
[
{
"category": "Business",
"source": "external"
},
{
"category": "Environmental Science",
"source": "s2-fos-model"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/00fe625102de79328a9ee3bf867afa05f8be82d7
|
[
"Business"
] | 0.907663
|
Blockchain-based multi-organization taxonomy for smart cities
|
00fe625102de79328a9ee3bf867afa05f8be82d7
|
SN Applied Sciences
|
[
{
"authorId": "1581528660",
"name": "Ekleen Kaur"
},
{
"authorId": "72563368",
"name": "Anshul Oza"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"SN Appl Sci"
],
"alternate_urls": null,
"id": "4766beb3-bec6-4b4a-a5fc-cf12ba2339b6",
"issn": "2523-3963",
"name": "SN Applied Sciences",
"type": null,
"url": "https://www.springer.com/engineering/journal/42452"
}
| null |
**Research Article**
# Blockchain‑based multi‑organization taxonomy for smart cities
**Ekleen Kaur[1] · Anshul Oza[1]**

[1] IPS IES Academy, Indore, India. Correspondence: Ekleen Kaur, ekleenkaur17@gmail.com; Anshul Oza, ansuhuloza@ipsacademy.org.
Received: 14 October 2019 / Accepted: 5 February 2020 / Published online: 18 February 2020
© Springer Nature Switzerland AG 2020
**Abstract**
With the tremendous development in distributed ledger technology, assimilation of tokenization in sustainable assets
has been a proof of concept. This paper is a documentation of ERC20 standards where cryptocontracts are an evident
implementation of WRC tokens. Issuing and earning of these token credits rely on aggregate hazardous waste released
into water as a by-product and careful monitoring of water quality standards (after treatment) using IOT sensors. The
minting of this token currency is carried out by exchanging ether. This research is a twofold attempt: to eliminate the differences between Small and Medium Enterprises and large-scale enterprises, and to establish a business ground with equal opportunity of earning credits based on recycled wastewater.
**Keywords Blockchain · Water analysis · IOT sensing · Permissioned blockchain network**
## 1 Introduction
Distributed ledger technology has progressed since Satoshi Nakamoto offered a solution to the double-spending problem by implementing peer-to-peer transactions after mining the first block [1]. Blockchain technology has continuously evolved since then; a blockchain is nothing but cryptographically linked blocks storing information. Each block contains a cryptographic hash of the previous block, a timestamp (the time at which the transaction was confirmed after successful mining), and transaction data. With the advent of blockchain came the
concerns about the anonymity of miners and the organizational theory of decentralization [2]. Decentralized governance revolves around transparency and trust among
the members. Blockchain has 4 major features:
_Immutability Once the data of a transaction is confirmed_
and recorded, it can never be changed or amended. The
same asset can undergo various other transactions
adding to the list of confirmed recorded transactions,
but the state of those confirmed transactions remains
immutable.
_Provenance Immutability of recorded transaction gives_
provenance of assets, providing an entire history of the
asset, including where it has been, who or how many
members have owned the asset previously and so on.
_Consensus This weeds potentially fraudulent transactions out of the database. A transaction cannot be confirmed on a blockchain without consensus, which is essential for validating a transaction._
_Distributed It enables various organizations to share and exchange data._
Smart contracts are the building pillars of blockchain in a business organization. Within distributed ledger technology, smart contracts are audited for the blockchain on which they are designed to be implemented [3]. Ethereum is a well-known public as well as private blockchain network. Ethereum demonstrates a successful implementation of a complex Merkle tree, i.e., the
Merkle Patricia tree. Ethereum works on PoW (proof of work), PoS (proof of stake), and PoA (proof of authority) consensus algorithms. In terms of efficiency, PoA
is considered more robust than PoS [4]. PoA allows
non-consecutive block approval from any one authority. Ethereum is usually taken to be a permissionless blockchain network, just like Bitcoin.
A permissionless blockchain is a network in which
anyone can become a member and take part in consensus, add a new block and confirm new transactions. A
permissioned blockchain allows ‘any node’ to take part
if the identity and role are defined, whereas a private
blockchain only allows ‘known nodes’ to participate in
the network.
ERC20 is the standard used for smart contracts on the
ethereum blockchain for implementing tokens. Crypto
tokens are a special kind of virtual currency that represent an asset or utility. Tokenization of assets [5] on blockchain means creating tokens using smart contracts based on some standard (ERC20 in this case) and regulating those tokens by writing functions in the contract that allow the token to be traded and exchanged in the form of currency or in the form of an asset. Each token item is a hash value representing a crypto asset, which is under the consensus of all consortium members. Typically, a token supports both direct access and user-authenticated access. The hash value in the token is made available only upon the regulator's or owner's authentication.
ERC20 token standard is complemented by ERC223
[6] and ERC777 [7]. Just like ERC20, there are many other
smart contract-based token standards such as ERC721,
ERC1155, NEP5, NEP11, QRC20 although ERC223 and
ERC777 are very analogous to ERC20. Token transfers to smart contracts involve two function calls under ERC20, whereas tokens following ERC223 and ERC777 involve only a single function call. Despite these advantages of the newer standards, ERC20 remains more prominent than ERC223 and ERC777.
The implementation of code in a smart contract is done using Solidity, a high-level, contract-oriented programming language used to build smart contracts. Solidity follows the principles of object-oriented programming, just like languages such as C++, Java, and Python.
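As a rough illustration of the ERC20-style accounting described above, here is a minimal Python sketch of a toy token ledger. It mirrors only the balance/allowance semantics of the standard; the real WRC token would be a Solidity contract on Ethereum, and all names here are hypothetical.

```python
class ToyToken:
    """In-memory sketch of ERC20-style accounting (illustrative only;
    a real token would be a Solidity contract on Ethereum)."""
    def __init__(self, name, symbol):
        self.name, self.symbol = name, symbol
        self.balances = {}   # address -> token balance
        self.allowed = {}    # (owner, spender) -> remaining allowance

    def mint(self, to, amount):
        self.balances[to] = self.balances.get(to, 0) + amount

    def transfer(self, sender, to, amount):
        assert self.balances.get(sender, 0) >= amount, "insufficient balance"
        self.balances[sender] -= amount
        self.balances[to] = self.balances.get(to, 0) + amount

    def approve(self, owner, spender, amount):
        self.allowed[(owner, spender)] = amount

    def transfer_from(self, spender, owner, to, amount):
        assert self.allowed.get((owner, spender), 0) >= amount, "not approved"
        self.allowed[(owner, spender)] -= amount
        self.transfer(owner, to, amount)

wrc = ToyToken("Wastewater Reprocessing Coins", "WRC")
wrc.mint("org_A", 10_000)
wrc.transfer("org_A", "org_B", 2_500)
print(wrc.balances)  # {'org_A': 7500, 'org_B': 2500}
```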
Smart contracts are necessary for regulating a business network for the purpose it is meant to fulfill. Tokenizing sustainable assets is under active examination; such sustainable assets contribute to waste management and establish sanitation. Improperly treated wastewater is one of the largest sources of water pollution in India [8]. Water quality and pollution are generally measured in terms of concentration [9], so waste management mainly revolves around the non-biodegradable constituents that require treatment.
Some major types of wastewater chemical contaminants include:
(i.) _Total Dissolved Solids (TDS)_ Comprises inorganic salts and small amounts of organic matter dissolved in water.
(ii.) _Biochemical Oxygen demand (BOD) Amount of oxy-_
gen required by aerobic microorganisms.
(iii.) _Chemical Oxygen demand (COD) Oxygen equivalent_
of organic matter content susceptible to oxidation
by a strong chemical oxidant.
Water quality is identified based on some standards:
- _pH_ Scale used to specify the acidity of a solution. Normal water has a pH near 7.
- _Turbidity_ The measure of the degree to which water loses its transparency due to the presence of suspended particulates (size greater than 1000 nm). Turbidity is measured in NTUs (Nephelometric Turbidity Units).
- _Biochemical Oxygen Demand (BOD)_ 3–5 parts per million for normal water.
- _Total Dissolved Solids (TDS)_ For normal water, the value of total dissolved solids is 300–500 mg/liter.
- _Temperature (°C)_ This property has a varying range based on the type of microorganisms perpetuating. Industrial water hosts psychrophiles at 0–20 °C and mesophiles at 10–45 °C. A normal water body has a temperature of about 13 °C.
- _Hardness and oil/grease_ Normal water ranges between 45–46 mg/l and 5–6 mg/l, respectively.
Now that certain quality standards have been established, the question arises: with what should the water quality standards be monitored?

As sensor technology advances, everyday challenges are readily met with advanced solutions. Internet of Things (IoT) devices possess the ability to transfer data over a network at high speed with great accuracy. IoT sensors are devices that record data and provide new insights by monitoring it. IoT sensors play a big role in monitoring water quality, which is why it is significant to convert the data recorded by the sensors into useful information and to deploy this information for interaction between stakeholders [10].
### 1.1 Challenges with wastewater management
- _Inadequate treatment of water Since there are organi-_
zations that are already recycling wastewater, these
organizations do not realize the need for amending their current systems. Due to inappropriate treatment of wastewater, the water quality standards are never met, so the treated water cannot be profitably reused by the company.
- _Lack of careful monitoring_ The major reason for inadequate treatment of water is the lack of an efficiently established monitoring system that can regulate these organizations and regularly keep the water quality standards in check.
- _No incentive earned As of now, India lacks a well-estab-_
lished wastewater monitoring system that can provide
an incentive to the organizations that are a part of the
business network.
- _Freshwater dependency still intact_ There is no major monitoring of the treatment plants, so the resulting recycled water is of no major use to the organization itself, due to which the freshwater dependency of such organizations remains intact. Correspondingly, the probability rises that major pollutants are still present in the water.
## 2 Literature review
Swan [11]: Melanie Swan writes the manifestation of blockchain and how it is soon going to take over the economy,
the crucial necessity of decentralization in business, also
token exchange that has been evolving since the arrival of
blockchain and how vast the magnitude of the functionality of a smart contract in business is. However, blockchain technology is only limited to theoretical terms,
and its potential is defined without a scope of the actual
implementation. The book is a beginning ground for the
new economy that majorly lacks the actual complexities
of guidelines to implement blockchain tokenization on a
protuberant scale; the real-time challenges are just briefly
mentioned.
Mougayar [12]: William Mougayar surrounds blockchain
around exploring the new ways in which it is capable of
extending its service in business and economies, solving the contortions of Melanie Swan’s [11] work. In our
research, our real-time asset, i.e., wastewater is getting
monitored by IOT applications and traded as tokens; we
cite this book to further blockchain’s immense capability
in business to resolve larger problems that current centralized authorities are facing in the present time such as
maintaining water contamination and establishing real
standards to monitor the treatment of wastewater. William Mougayar’s paper doesn’t discuss the shortcomings
of extending blockchain as a service or provide real-time
solutions to those shortcomings.
Duca et al. [13]: The paper states the self-purification
of water, hydrochemicals that are essential for determining water quality and how the concept of mineralization
works, a very detailed illustration of H2O2 as an oxidizing
and reducing agent and radical formation across chemical
bonds, modeling of these chemical links in the interim of
redox reactions along with the role of transition metals
in quasi-reducing state. The author pens down the fact
that the dependency of self-purification on OH radical is
directly proportional and biotic components in water play
a key role in self-purification. Gheorge Duca's 2008 paper points to the efficiency of the detailed water purification processes; the blockchain architecture for smart cities discussed in our research aligns with this emphasis on monitoring water purification standards while growing the business economy. We adopt acceptance sampling to target the set of organizations with previously set-up water purification management systems, and our research aims at monitoring this efficiency in a business network, as we discuss in Sect. 3 of this paper.
Echard [14]: The data in a business network is prone to
attack; the author talks about the importance of encryption and the chances of vulnerable data, the need for
security keys and the decryption of data. The conclusion
speaks of hazy ideas regarding the security of data over a
network. Due to the presence of malicious crackers, data
are never too safe or completely secure on a network.
Schaad [15]: This paper is an additional focus on devices
that form a major part of Internet of Things today. Data
modeling using binary data, the signing and encryption
standards of objects (COSE), after amending from JOSE,
well-defined basic COSE structure, multi-signature on
objects have been a part of the modern updates. Signing
and verifying altogether with encryption algorithms, the
author also explains signature algorithms while defining
security aspects and authentication. COSE’s registries and
Media Type explain the usage of keys in IOT-based applications and are very necessary while maintaining security
standards.
Echard [14] mentions data security on a network connecting IOT devices; Schaad [15] advances his work by
COSE protocol encryption for cryptographic signing of
data, enhancing data security; COSE protocol follows layer
encryption, i.e., content and recipient layer. Our work uses
cryptographic encryption for ethereum blockchain which
uses Elliptic Curve Digital Signature algorithm as a key pair
encryption standard for unique public and private key
pairs owned by each organization.
Hong et al. [16] exemplify policy-making complications given a local government that stands as regulator, emphasizing schemes and target-based production results. The game-theoretic model gives wealth and welfare equal importance. The paper promotes the reduction
of hazardous emissions at the cost of adopting green technology inside big firms. The complexity of the problem case is analyzed depending upon the initial allowances, which are expected to be clearly defined. The hybrid algorithm comprises polynomial dynamic programming (PDP), a genetic algorithm (GA), and binary search, addressing the efficiency of the business model on the grounds of policy making. Regarding business trade, the paper states that firms exchange allowances, either selling or buying them; this theory functions in our research. Consensus on blockchain ensures
decentralization, so in a decentralized business network
exchanging of these allowances is aiding to the circulation
of the token economy that resolves the big question of the
token trade between organizations.
Zecchini [17]: This paper extends the above-proposed
business model governed by a regulator in its research.
The author clearly defines the problem statement of water
impurity standards and how IOT can help in revolutionizing data elucidation. The merits and demerits of a regulator organization like the government are well elaborated
and authentically structured; the entire thesis of the storage of data and transmission is through low power wide
area network and efficient comparisons with other data
transmission techniques and its architecture. Cryptographically encrypting data storage while laying out the merits
of blockchains, the analysis of a permissioned blockchain
over a permissionless blockchain, the rules for establishing
quality credits to offer incentive in exchange of water quality has been just theoretically defined by the author. His
future works include the implementation of permissioned
blockchain for quality credit. Our research on multi-organization blockchain taxonomy establishes proof for practical implementation of smart contracts over permissioned
blockchain network and a profit formula for anticipating
quality credit along with the token price.
Silvestre [18]: supply chain ambiguity and stimulating factors are certain management challenges faced by
the economies which eventually hinder the sustainable
growth of such economies. The research has a description
on the geographical factors that affect the social demands
and their roles in supply chain. Even though the rise of
globalization in complex business networks is facing high
uncertainty, all are promoting stages in aggrandizing
green technology in sustainable business networks.
Klassen and Vereecke [19]: Silvestre’s [18] work in sustainable development of economies in supply chain
results in expanding business which is associated to
certain complexities. Compared to Silvestre’s work, this
paper proposes that supply chain risks and social responsibility are inter-related terms. The social intervention
leads to increasing risks alongside opportunities. Social
responsibility must include close monitoring of customers’
demands and supplies. Collaborative decentralization
offers flexibility of workflows and processes; organizations
with too many regulations face human health challenges.
Innovation strategies involve management policies and
product innovations that legitimize performance. Responsibility extends to auditing social standards by stakeholders that include alignment of risks and profits combined.
We differ in this context because auditing social standards in order to increase sustainable business needs to be
maintained; smart contract in our business model maintains those standards. The most important factor is which
organization owns what quantified amount of the sustainable asset chosen in the model, i.e., the waste water token.
Li et al. [20]: Tokens are classified depending upon the
work it was supposed to fulfill. Utility tokens are the currency base of applications, determining access control of
the application. Security tokens derive its value on the top
of blockchain, externally, but it can be subject to mandatory surveillance and supervision. Asset-backed are tokens
converted from real and virtual assets under open asset
protocol, a policy backed token offering privacy conservancies on data. Recorded data on the ledger can be only
traded. Our research uses a real-world asset as an assetbased token implemented on ERC20 standard with the
token contract laying the rules of ownership. The contract
provides proof of ownership which can be transferred if
the ownership of the recorded data is specified with its
quantified amount given that the amount is less than the
total available token balance of that particular time stamp.
Zhou [21]: Data ownership in a permissionless blockchain is proposed using a Dlattice architecture of a Double
DAG, as the presence of an Account DAG data security is
immune to influence from other accounts. DPOS-BA-DAG
protocol establishes decentralization by consensus in the
presence of forks. Tokenizing data is given because of
Dlattice in the paper with consensus in the time period of
just 10 s. The author compares the economic incentives
of Algorand protocol, a large amount of signature data,
and establishes comparisons between various consensus
protocols like fault Tolerance, Ouroboros, etc. Dlattice aims
to extend healthcare with the help of IOT, exemplifying our
research on sustainable assets; however, IOT supervision
in our business model is a part of permissioned network
that uses proof of authority consensus. According to the
paper, the Account DAG structure allows same user to own
multiple accounts where each account has a separate public key as an identity, but this is not the current possibility
in our blockchain architecture because multiple accounts
owned by the same organization enterprise is irrelevant
and it may lead to loss of track on records on monthly
quality credits and penalties issued to the organization.
Sánchez-Corcuera [22]: The smart city application
faces many challenges; sustainable assets carry risks that can outweigh their benefits, as discussed by Klassen's [19]
research in 2012; these risks are only going to augment
in the future. Urban planning is comprehended by sustainable factors revolving around the environment. Waste
management, as stated by the paper, discusses different
approaches like smart containers and new infrastructures
but does not include authority access and data transparency, which is a major feature evident for smart cities. Data
tokenization, consensus on blockchain, is a big solution to
these problems.
Kundu [23]: Kundu Debasish supports and proves
Sánchez-Corcuera Ruben’s [22] work; transparency is a key
factor for an efficient smart city despite the available set of
technologies carrying the society in the imminent time. To
establish transparency and trust which is the building of
any business contract, smart contracts or cryptocontracts
direct the workflows and access control of transactions,
which we implemented for tokenizing WRC—wastewater
recycling certificates. Access control has a lot of contortions in token trade, defining the rules of data ownership
[20], such as
- Who has the right to mint ether,
- Who has the right to withdraw back from the token
amount back to ether,
- Who can send and regulate transactions,
- What is the minimum token price each organization
must own in order to be a part of the network.
The author elucidates the importance of transactions
without third-party intervention liquid economy that
tokenizes land base assets allowing the transferring of
such assets at a much faster and safer state. We focus our
findings on the basis of the four layer topology as discussed by Kundu Debasish.
Kouhizadeh [24]: Circular economy in a supply chain
using blockchain technology can help monitor and keep
a record of previously deleted products, by storing the
record of each transaction on the ledger and helps keep
track of all updates. Circular economy subsidizes exchanging of products. Blockchain’s shift over renewable energy
has helped reduce emissions on environment. The author
states about waste exchanges supplementing blockchain.
Accessing products, transaction records, final stage data,
by-products are all potential aspects of blockchain based
waste management. Data tokenization managed by local
governments or semi-government bodies namely regulator is the prime proposal of our research. Detailed elucidation of complex inter-relationships advancing toward
development in circular economy which are defined in
terms of participant owned assets and regulator regulated transactions. One of the major differences is that
the author claims on using blockchain as an alternative
for product deletion, whereas our major closure is on using
blockchain as a restate for resource elimination; wastewater cycle extends the capacity of these potential resources
for a more optimized utilization by organizations.
Savelyev [25]: Kouhizadeh Mahtab proves discarded
products in a supply chain can be regulated as a circular economy on blockchain [24]; Savelyev discusses these
blockchain economies in association with the legal aspects
ascending within such businesses. The paper clearly discusses tokenization of assets that can be registered on a
network working on blockchain; tokens function according to the rules essential for the business network in a
blockchain architecture. What is evident is that the author
discusses about the possibilities of providing tokenized
assets some cognizable rights that are recognized for
every token economy to inherit and transfer. It is lucid from
the paper about the existence of certain anomalies related
to the ownership of blockchain tokens that are still being
analyzed to legalize tokenized assets in the future. Our
research manifests blockchain mining; sustainable asset
solves the problem of ownership [20] of the tokenized
asset at the time of transactions which eliminates a major
research gap, since our research is not tokenizing private
property or objects owned by organizations.
Blemus et al. [26]: Increased transparency with the intervention of distributed ledger technology has improved
governance. Token economy is raising ICO (Initial Coin
Offering) investment; ever since the distributed ledger
technology has stepped into business the amount raised
has been progressing on the graph. The author mentions
token rights specific to the issuing of tokens; our research
is apropos for providing rights to identity of ownership of
tokens that are issued by the regulator symbolizing a successful quality check and utilization of wastewater. Corporate governance using tokenization on blockchain is just a
theoretical survey in the paper, without signifying major
protocols involved. Similarly the paper briefly mentions
about the establishment of a relationship between tokens
and economic/noneconomic ecosystem by proposing IOT
as a solution medium to develop that link but there is no
further progress in the research in this regard.
Roth et al. [27]: Blemus et al. [26] and Li et al. [20] classify
tokens as security, utility and crypto tokens similarly. The
author makes an interactive comparison between UTXObased, layer-based and smart contract-based tokens. The
paper contemplates the impact of tokenizing assets on
blockchain by eliminating the need of a platform provider
or a third-party authorization for transactions to enforce.
The paper briefly mentions regulatory issues of legal policies but majorly focuses on crowdfunding equity using
blockchain. Wastewater is not only quenchable to the
need of crowdfunding in our research but also retorts
the problem of asset ownership, even though it is not a central factor in our research. Our research points toward the innovation of using wastewater as a circulating
business economy. Crowdfunding is a major problem in
the business economy; the author proposes solutions to
the problem of crowdfunding using blockchain. Blemus
et al. [26] points to the monopoly of token stakeholders
that form a major part in any corporate governance, and
Roth et al. [27] contravenes this fact by providing an analysis of market captalization of various blockchains with
ethereum token market capitalization at the maximum; his
findings justify the safety of assets from arbitrary manipulation and monopoly of token holders in a network using
blockchain; the end of this section sums up our differences
with this research.
Sanghavi et al. [28]: The author proposes to employ
asset tokenization in quotidian phases of decision making in an organization, with only a glimpse at the actual possibility. Our research, bolstering the above, focuses on implementing the amalgamation property of cryptocontracts and proposes a touchstone to enhance governance in the regulation of enterprises or organizations that decline over time. Taking the above into consideration, the author acknowledges possibilities without results: the paper lacks significant results regarding the decision-making process within the business network and makes predictions without expert review.
Prince Michael von und zu Liechtenstein [29]: According
to the paper policies to be put forth under the blockchain
law, as planned and argued by the Liechtenstein government, majorly involves the embryonic idea of tokenbased economy. The vast ground of asset tokenization
has extended to a large scale in the context of business;
cognizable blockchain rights ameliorate degree of trust
amidst organizations. Apart from immutable data and
security due to data enciphering, blockchain law contradicts all the belied theories of losing copious amounts of
money due to bugs or lack of proper auditing of smart
contracts that in the past has resulted in the loss of millions of dollars. The author extrapolates tokenization as a
method of establishing a trust to bolster entrepreneurial
participation in blockchain environments. There is a major
research gap; it doesn’t provide an actual plan on how the
token economy will be regulated once the law is imposed.
Davydov and Khalilova [30]: Unlike Liechtenstein’s [29]
proposal of cognizable rights for blockchain, the author
mentions the incrementing possibilities of money acquisition for the banks and also P2P market services by proposing a business model creating entrepreneurial opportunities. The research gap is lucid from the paper for not
discussing any of the legal aspects indulged with the
bank loan amount upon tokenization. The author’s finding attenuates the research because of a lack of sufficient
result that could actually supplement the theory proposed
or some amount of implemented result that could dwell
the readers’ trust on the less probability of money lost
due to tokenization, which the author fails to accomplish.
There is a significant probability of money getting lost in
the proposed business model; it is not reliable under load
conditions.
Levin [31]: The author mentions about the low liquidity property of smart contracts. The four steps involved in
asset offering include the process of digitization, tokenization, asset trading and dealing. With the instance of APT
tokens in the paper, the author mentions the contingency
of escrow accounts that hold in actual assets compatible
with the ERC20 standard. Some key emphasis is on information transparency that enhances trust and promotes
the sales market; tokenization is cogent in the present-day
economy because it deals with the problem of high capital investment. Escrowing assets save the property from
rental scams. The author discusses the potential of cryptocurrencies to be traded as real-time assets but the research
gap is that it lacks many possibilities and the instances fail
to sufficiently extend the theory. The author has provided
no relevance of multi-signature wallet in escrow account
holdings to uphold its credibility.
To sum up, the above papers extend various aspects
of blockchain; they deal with many of the incoming challenges rising with the diversification of blockchain in the
modern day ecosystem. This research is another aspect to
extend blockchain’s capability for business; however, none
of the aforementioned research builds token economy
with IOT using a sustainable asset. We tokenize wastewater
on ethereum due to its largest market capitalization; to the
best of our knowledge, this is the first research that builds
such a business model on private ethereum network using
POA consensus.
We deal with the challenges on introspection; IOT monitoring of water quality in a POA network has a regulator;
Proof of Authority Consensus demands the presence of
authorities to approve transactions for the issuing of quality credits. For this regard, we concern those organization
having favorable outcome in the quotidian scrutiny.
Section 3 of this paper provides our work for smart city
modeling; finally, we provide our discussions in the conclusion section that highlight the crux and summarize the
research.
## 3 Methodology
Sustainable energy credits are making their way into business. Tokenizing these energy credits for business in ERC20 fashion raises issues that need scrutiny. India has many organizations utilizing freshwater for commercial purposes, with detailed and well-structured water treatment platforms.
Problem case
- The dependency on freshwater is still intact.
- There has been no considerable curtailing of waste emissions into water bodies.
- There is a need for regulator [16] that can monitor and
examine the water quality standards.
- The organizations purifying wastewater need to reuse
that treated water for merchandising.
- We need a profit formula for the reuse of corresponding
recycled water.
- Large-Small organizations combined have a huge difference in water intake, so there is a need to standardize profits.
Based on these conglomerate quandaries, this research tokenizes WRC using cryptocontracts from ether. WRC is the token minted from ether. Any token created from ether using ERC20 mandates a token price: the exchange price used when minting ether into WRC tokens.
So, when converting 1 ether into WRC coins, the token price is taken to be 10^14 Wei, minting 10,000 WRC coins for one ether (since 1 ether = 10^18 Wei):

Tp = 100000000000000 Wei
Ts = WRC
Tn = Wastewater Reprocessing Coins

where Tp is the token price, Ts is the token symbol, and Tn is the token name.
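A quick sanity check of this conversion, assuming the standard 10^18 Wei per ether; the function name is illustrative only.

```python
WEI_PER_ETHER = 10**18
TOKEN_PRICE_WEI = 10**14   # Tp, as set above

def wrc_minted(ether_sent):
    """Tokens minted when exchanging ether at token price Tp."""
    return (ether_sent * WEI_PER_ETHER) // TOKEN_PRICE_WEI

print(wrc_minted(1))   # 10000 WRC for 1 ether
print(wrc_minted(3))   # 30000 WRC
```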
Organizations exchanging coins by minting and withdrawing WRC standardizes trade in the business network.
The problem case mentions a profit formula that issues
this WRC as quality credits for reusing wastewater. The regulator's (or local government's) benchmark for issuing these tokens is the regulator-set ground zero: the minimum percentage reuse of recycled wastewater.
The regulator set ground zero is the arbitrator of quality credits. Such qualified organizations contribute to the
maximum supply of coins and trade tokens amidst the
organizations that fail to meet the quality benchmark.
For the quality benchmark to be set, it must be decided by auditing the water quality data acquired from the treatment plants. The data readings are received every hour, with lead-acid battery backups to keep the sensors from losing contact with the server, since one of the major challenges while storing data is the problem of data tampering. It is very important for the IOT sensors to stay in contact with the server; the data are gathered in the database, and the percentage reuse is calculated from this hourly received data. The percentage reuse determines the quality credits earned.
A break in the data readings is treated as evidence of data tampering, resulting in heavy penalties or no coins issued for the month. In this way, efficient monitoring of water quality, together with detection of data tampering, accompanies the earning of tokens. The user interface provides the percentage to the cryptocontract, where the transactions are defined.
The profit formula varies according to the proportion of organizations above and below ground zero. Percentage reuse standardizes equal competence in acquiring quality credits, which lets large firms with an annual dependency of several thousand gallons of water trade with SMEs (Small and Medium Enterprises) requiring only a few gallons of water intake.
For instance, given a set of sample data to elucidate the profit formula:

Organization 1: A chemical industry that requires 25,000 gallons of freshwater, emits 10,000 gallons of wastewater, and reuses 6,000 gallons after recycling water that has met the quality standards. The percentage reuse of this chemical industry is approximately 60%.

Organization 2: A leather industry that requires 8,000 gallons of freshwater, emits 2,000 gallons of wastewater, and reuses 800 gallons of water after recycling water that has met the quality standards. The percentage reuse of this leather industry is approximately 40%.

Organization 3: A pharmaceutical industry that requires 2,000 gallons of freshwater, emits 800 gallons of wastewater, and reuses 300 gallons of water after recycling water that has met the quality standards. The percentage reuse of this pharmaceutical industry is approximately 37.5%.
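The three sample percentages can be reproduced directly, assuming (consistently with the figures above) that percentage reuse is computed as reused volume over emitted wastewater:

```python
# (freshwater intake, wastewater emitted, water reused) in gallons
orgs = {
    "chemical":       (25_000, 10_000, 6_000),
    "leather":        (8_000,   2_000,   800),
    "pharmaceutical": (2_000,     800,   300),
}
for name, (_, emitted, reused) in orgs.items():
    print(f"{name}: {100 * reused / emitted:.1f}% reuse")
# chemical: 60.0%, leather: 40.0%, pharmaceutical: 37.5%
```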
The above statistics bolster the fact that,
- any type of corporate sector depending upon freshwater and
- any firm of large/small/medium scale
have an equal opportunity for fair trade in the business network.
Water quality monitoring is examined in the database; for organizations fitting those quality checks, the regulator issues a monthly exchange of WRC, while the others pay a token amount as a penalty. The Internet of Things administers water quality using sensors. Sensors record the data and check it against the range of normal water quality; the UI connects to the blockchain node and the frontend to the database. Some common sensors used for this purpose are:
in this perspective are:
- Turbidity sensor
- BOD—biochemical oxygen demand sensor
- PH sensor
- Volume sensor
- Temperature sensor
- TDS—total dissolved solids sensor
There is a detailed reason behind using individual sensors over multi-sensor water quality sensing appliances
like YSI ProDSS/Pro 20, H198193, Proteus, etc.
1. The cost and maintenance of a multi-sensor are comparatively higher than installing individual sensors.
2. The range of individual water quality-testing sensors is higher than that of a multi-sensor.
3. In a multi-sensor appliance, the sensors depend on one another: if one stops working, the others cannot function, and while the faulty sensor is being repaired, even the working sensors cannot be used to record the hourly data.
The mechanism of IOT sensing is that, on any physical change detected during operation of the model, the calibrated sensor sends a digital/analog signal to the central unit, the ESP32 IC, which in turn connects to the web server using its Wi-Fi chip and updates the data at unit intervals of time.
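As a rough sketch of this hourly reporting, the following Python simulation shows what one hourly payload might look like. The sensor list, field names, and fake readings are all assumptions; a real deployment would run on the ESP32 and transmit the payload to the monitoring server.

```python
import json, random, time

SENSORS = ["turbidity", "ph", "temperature", "bod", "cod", "tds", "volume"]

def read_sensor(name):
    """Stand-in for an ADC read on the ESP32; returns a fake value here."""
    return round(random.uniform(0, 100), 2)

def hourly_payload(org_id):
    """One hourly report for one participant organization."""
    return json.dumps({
        "org": org_id,
        "timestamp": int(time.time()),
        "readings": {s: read_sensor(s) for s in SENSORS},
    })

print(hourly_payload("org_A"))
```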
What concerns the regulation of participant organizations is the fluctuation around the regulator-set ground zero: the arrival of an organization B below ground zero and of an organization C above it. This fluctuation is the bottom-layer architecture of the cryptostock, and it determines the present value of the token price. Understanding profits starts with the count of organizations.
The profit gain formula can be considered as a ratio as follows. Let Pr be the profit ratio, given by

Pr = Oa/(Ua + Oa), given Pr > 50%

where Oa is the count of over-achieving organizations and Ua is the count of under-achieving organizations.
_Loss ratio:_ This damage is equivalent to the above profit ratio, extended in one criterion:

Pr = Lr, where Pr < 50%

So,

Lr = Oa/(Ua + Oa), Lr < 50%
The deviations in the above ratio are linked to the cryptostock. WRC is a token on ERC20; therefore, this token
corresponds to a token price. The regulator defines the
token price as a firm value standardized by the value corresponding to the ground zero percentage ratio.
Let Sp be the stock price. Sp has a constant value at ground zero, equivalent to Tp, when the Pr ratio is 50%:

Sp = Tp ± Pr

where Pr is > or < 50%; when Pr < 50%, then Pr = Lr.
For instance, Organization B in the bar graph is counted as an under-achiever with a percentage reuse of 20%; similarly, Organization C is counted as an over-achiever with a percentage reuse of 60%. Assuming this data, the stock price is not affected by Pr, since

Oa = 1, Ua = 1

Pr = Oa/(Ua + Oa) = 1/2, i.e., 50%
Note: The above formula for calculating Sp is valid only
when Pr is not equal to 50%, i.e., Pr ≠ 50%
The issuing of quality credits in tokens will be provided
to the respective groups (Oa or Ua) depending upon the
percentage reuse calculated.
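Reading the formulas above literally (the ratio Pr, or Lr, added to or subtracted from Tp, with Sp pegged to Tp at ground zero), a small sketch of the price computation might look as follows; all names are illustrative.

```python
def stock_price(tp, oa, ua):
    """Token stock price Sp from the counts of over- (Oa) and
    under-achieving (Ua) organizations, per the formulas above."""
    r = oa / (oa + ua)        # Pr (called Lr when below 50%)
    if r == 0.5:
        return tp             # Sp is pegged to Tp at ground zero
    return tp + r if r > 0.5 else tp - r

TP = 10**14  # Wei
print(stock_price(TP, oa=1, ua=1))  # == Tp, since Pr = 50%
print(stock_price(TP, oa=3, ua=1))  # Tp + 0.75
print(stock_price(TP, oa=1, ua=3))  # Tp - 0.25
```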
[Architecture diagram: IOT meters 1…N at Organizations 1…N report to a central server backed by a database.]
The application of IOT sensors in wastewater monitoring involves analyzing the water quality standard of wastewater after treatment. The volumetric analysis is done before and after the treatment of wastewater. The percentage reuse of wastewater used for issuing tokens is computed from this set of gathered readings from each participant enterprise. Enterprises in the business network earn quality token credits based on this endorsement ratio. The set of readings from each sensor must pass a quality probe in every unit of time. The concluding volumetric analysis is the average of those volumetric readings that pass the quality check during the entire month at every standard unit time interval.
Percentage reuse is given by the ratio of the volumetric analysis before and after the treatment of wastewater. Let Re be the required ratio:

Re = (VA/V) × 100

where V is the volume of wastewater and VA is the volume after treatment; VA is the average of those readings that pass the quality probe (see the sample table below for the sensing process):

VA = Σ(volume analyzed with acceptable quality)/n

where n is the number of times the quality was maintained out of the total number of readings (N) taken, such that n > N/2, i.e., 50%.
| S. no. | Turbidity sensor | PH sensor | Temperature sensor | BOD sensor | COD sensor | TDS sensor | Oil sensor | Volume | Result |
|---|---|---|---|---|---|---|---|---|---|
| 1 | ✓ | ✗ | ✓ | ✗ | ✗ | ✓ | ✗ | Va | Fail |
| 2 | ✓ | ✓ | ✗ | ✓ | ✓ | ✓ | ✓ | Vb | Pass |
| 3 | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | Vc | Pass |
An overview of the deployment of IOT sensors in the business model, testing the water quality probe, can be seen in the accompanying diagram. A feature of the ESP32 microcontroller is that it can be programmed with Wi-Fi and Bluetooth. To obtain a digital signal from the analog sensors, we use an ADC converter and an IOT meter port to connect the sensors (turbidity, pH, temperature, etc.), a triac T1 BT138-V for output pulse modulation, and an IC2 7805T for voltage regulation.
## 4 Limitations
- The token is built on ERC20, which means it depends on ether, prohibiting full user access.
- The cost of ethereum nodes mining the blocks varies according to demand and supply; this indirectly causes the tokens to be minted at varying prices relative to ether.
- The token price fluctuation is explicitly elucidated, but
_Pr is not the only deciding factor affecting Sp. So token_
price is not completely determined by factors inside
the business network.
- The cost associated with the setting up of IOT sensors
for a large number of organizations might be huge. This
involves a large amount of capital invested for IOT sensors; these sensors are then fitted in the pipe holes and
openings to record and monitor the task it is calibrated
to perform.
- After huge capital investment, the IOT sensors also
include the cost of maintenance; setup cost is not the
prerequisite alone. To attain the desired set of efficient
results and accurately measure data, the IOT sensors
need a different budget for its maintenance. The architecture after set up needs to be periodically inspected
and checked, any requirement regarding waste accumulation around the sensors has to be dealt with the
immediate action, so that the recording of data doesn’t
get affected and business network prolongates to serve
the fair trade.
- pH sensors might not record data as efficiently as expected after 2–3 months because of the accumulation of embedded salt layers around them. This problem can only be addressed by either replacing the IOT sensor or ensuring regular inspection and cleaning around the sensor body.
- The organization where the sensors are deployed
needs to have a continual internet network to stay connected to the server while reading the data.
- The research does not implement or provide a solution
to the cross border challenges for business in countries
with varying legal regulations and degree of acceptance for tokenizing blockchain economy [32].
## 5 Conclusion
With any physical change observed by the IoT sensors, the record of our data maintaining the water quality is amended with the server. This directly concerns the marking of standards in the growth statistics of each business organization composing a part of the network. The contribution of quality credits merchandises blockchain for business at a prominent scale. Encrypting the identities of organizations by the cryptographic hash of the account address in ethereum manifests security. This research coalesces trade on the ethereum blockchain, rooted in sustainable assets like wastewater. A private network on ethereum provides a limited
record of transactions happening only within the network.
Our readings put in proportion the percentage endorsement of reuse relative to the regulator-set ground zero. The tokens are issued based upon this proportion, i.e., the percentage reuse of wastewater, which is the highlighted part of this research alongside the factors affecting Tp. What needs careful thought is the prominence of the profits for the use cases that involve organizational trade.
After the successful completion of this research, we pen
down the holistic reported results. We validate our business logic deployed on a permissioned ethereum network
on the POA consensus.
Using the quality benchmark of 35%, we establish the
relationship and dependency of two important token
parameters, i.e., Sp and Tp in the business network.
According to the current business logic, all the organizations can mint and withdraw tokens; however, the purpose of research was to implement this innovation to build
smart cities, so as per real-time deployment challenges
and for better administration over decentralization, the
business logic is expected to concede with the regulator’s
consent for minting tokens from ethers at the organizational level of deployment.
Future work includes 2D/3D data visualization of the IOT meter data clusters to minimize the probability of tampering; these real-time challenges are not a hindrance to the research at this level. Proportioning each organization's profit to its volumetric dependency on fresh water, after successful reuse of treated water, is another consideration.

Further research might consider the use of Zigbee and the 6LoWPAN protocol for connectivity and information transfer between the IOT sensors, featuring automation in this corporate taxonomy. The ESP32 merely demonstrates an overview of the sensing application; the location of deployment will be a strong determining factor for future changes.
### Compliance with ethical standards
**Conflict of interest On behalf of all authors, the corresponding au-**
thor states that there is no conflict of interest.
## References
1. Nakamoto S (2009) Bitcoin: a peer-to-peer electronic cash sys[tem. Cryptography Mailing list at https://metzdowd.com](https://metzdowd.com)
2. Atzori M (2017) Blockchain technology and decentralized gov[ernance: is the state still necessary? J Gov Regul. https://doi.](https://doi.org/10.22495/jgr_v6_i1_p5)
[org/10.22495/jgr_v6_i1_p5](https://doi.org/10.22495/jgr_v6_i1_p5)
3. Udokwu C, Kormiltsyn A, Thangalimodzi K, Norta A (2018) An
exploration of blockchain enabled smart-contracts application
[in the enterprise. https://doi.org/10.13140/rg.2.2.36464.97287](https://doi.org/10.13140/rg.2.2.36464.97287)
4. De Angelis S et al. PBFT vs proof-of-authority: applying CAP theorem to permissioned blockchain
5. Li X, Wu X, Pei X, Yao Z (2019) Tokenization: open asset protocol
on blockchain. In: 2019 IEEE 2nd international conference on
information and computer technologies (ICICT), Kahului, HI,
[USA, pp 204-209. https://doi.org/10.1109/infoct.2019.8711021](https://doi.org/10.1109/infoct.2019.8711021)
[6. See https://github.com/ethereum/EIPs/issues/223](https://github.com/ethereum/EIPs/issues/223)
[7. See http://eips.ethereum.org/EIPS/eip-777](http://eips.ethereum.org/EIPS/eip-777)
8. Amoatey P, Bani R (2011) Wastewater management
9. Chakraborty D, Water pollution in India: an input-output analysis
10. Frank B et al (2019) How urban storm- and wastewater management prepares for emerging opportunities and threats: digital
transformation, ubiquitous sensing, new data sources, and
beyond: a horizon scan. Environ Sci Technol 53(15):8488–8498.
[https://doi.org/10.1021/acs.est.8b06481](https://doi.org/10.1021/acs.est.8b06481)
11. Swan M (2015) Blockchain: blueprint for a new economy.
O’Reilly. P. Vii
12. Mougayar W (2016) The Business blockchain: promise, practice,
and application of the next internet technology. Wiley, London
13. Duca G, Bunduchi E, Gladchi V, Romanciuc L, Goreaceva N (2008)
Estimation of the natural water self-purification capacity from
the kinetic standpoint
14. Echard C (2017) Ensuring software integrity in IoT devices
15. Schaad J (2019) CBOR, Object Signing and Encryption (COSE),
Accessed 2019
16. Hong Z, Chu C, Zhang LL, Yu Y (2017) Optimizing an emission
trading scheme for local governments: a Stackelberg game
mode and hybrid algorithm
17. Zecchini M (2019) Data collection, storage and processing for
water monitoring based on IoT, and blockchain technologies.
zecchini.1596071@studenti.uniroma1.it
18. Silvestre BS (2015) Sustainable supply chain management in
emerging economies: environmental turbulence, institutional
voids and sustainability trajectories. Int J Prod Econ 167:156–169
19. Klassen RD, Vereecke A (2012) Social issues in supply chains:
capabilities link responsibility, risk (opportunity), and performance. Int J Prod Econ 140(1):103–115
20. Li X, Wu X, Pei X, Yao Z (2019) Tokenization: open asset protocol
on blockchain. In: 2019 IEEE 2nd international conference on
information and computer technologies, pp 204–209
21. Zhou T et al (2019) DLattice: permission-less blockchain based
on DPoS-BA-DAG consensus for data tokenization. IEEE Access.
[https://doi.org/10.1109/ACCESS.2019.2906637](https://doi.org/10.1109/ACCESS.2019.2906637)
22. Sánchez-Corcuera R et al (2019) Smart cities survey: technologies, application domains and challenges for the cities of the
future. Int J Distrib Sens Netw 15(6)
23. Kundu D (2019) Blockchain and trust in a Smart City. Environ
Urban Asia 10(1):31–43
24. Kouhizadeh M et al (2019) At the Nexus of Blockchain technology, the circular economy, and product deletion. Appl Sci 9:1712
25. Savelyev A (2018) Some risks of tokenization and blockchainizai[tion of private law. Comput Law Secur Rev 34(4):863–869. https](https://doi.org/10.1016/j.clsr.2018.05.010)
[://doi.org/10.1016/j.clsr.2018.05.010](https://doi.org/10.1016/j.clsr.2018.05.010)
26. Blemus S et al (2019) Initial crypto-asset offerings (ICOs), tokenization and corporate governance, 1905.03340
27. Roth J, Schär F, Schöpfer A (2019) The Tokenization of assets:
using blockchains for equity crowdfunding. Available at SSRN
3443382
28. Sanghavi V et al (2018) International Journal of Research in Engineering, IT and Social Sciences, 08(11), 60–64. ISSN 2250-0588
29. Prince Michael von und zu Liechtenstein HSH (2019) The
tokenization of assets and property rights. Trusts Trustees
25(6):630–632
30. Davydov V, Khalilova M (2019) Business model of creating digital platform for tokenization of assets on financial markets. In:
Conference series: materials science and engineering
31. Levin B (2018) Potential for cryptocurrency to fund investment
in sustainable real assets
32. Barsan IM (2017) Legal challenges of initial coin offerings (ICO).
Revue Trimestrielle de Droit Financier (RTDF) 3:54–65
**Publisher’s Note Springer Nature remains neutral with regard to**
jurisdictional claims in published maps and institutional affiliations.
|
{
"disclaimer": "Notice: The following paper fields have been elided by the publisher: {'abstract', 'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1007/s42452-020-2187-4?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1007/s42452-020-2187-4, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "BRONZE",
"url": "https://link.springer.com/content/pdf/10.1007/s42452-020-2187-4.pdf"
}
| 2,020
|
[] | true
| 2020-02-18T00:00:00
|
[] | 11,641
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/00ff024e0800db79a8f7c661b47357fc98e6c0af
|
[
"Computer Science"
] | 0.874122
|
Adversarial-Playground: A visualization suite showing how adversarial examples fool deep learning
|
00ff024e0800db79a8f7c661b47357fc98e6c0af
|
Visualization for Computer Security
|
[
{
"authorId": "2064028216",
"name": "Andrew P. Norton"
},
{
"authorId": "121817403",
"name": "Yanjun Qi"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"Vis Comput Secur",
"VizSEC"
],
"alternate_urls": null,
"id": "55cbbee6-4000-432f-af6a-c0ffd25ba0c0",
"issn": null,
"name": "Visualization for Computer Security",
"type": "conference",
"url": null
}
|
Recent studies have shown that attackers can force deep learning models to misclassify so-called “adversarial examples:” maliciously generated images formed by making imperceptible modifications to pixel values. With growing interest in deep learning for security applications, it is important for security experts and users of machine learning to recognize how learning systems may be attacked. Due to the complex nature of deep learning, it is challenging to understand how deep models can be fooled by adversarial examples. Thus, we present a web-based visualization tool, Adversarial-Playground, to demonstrate the efficacy of common adversarial methods against a convolutional neural network (CNN) system. Adversarial-Playground is educational, modular and interactive. (1) It enables non-experts to compare examples visually and to understand why an adversarial example can fool a CNN-based image classifier. (2) It can help security experts explore more vulnerability of deep learning as a software module. (3) Building an interactive visualization is challenging in this domain due to the large feature space of image classification (generating adversarial examples is slow in general and visualizing images are costly). Through multiple novel design choices, our tool can provide fast and accurate responses to user requests. Empirically, we find that our client-server division strategy reduced the response time by an average of 1.5 seconds per sample. Our other innovation, a faster variant of JSMA evasion algorithm, empirically performed twice as fast as JSMA and yet maintains a comparable evasion rate1.
|
## ADVERSARIAL-PLAYGROUND: A Visualization Suite Showing How Adversarial Examples Fool Deep Learning
#### Andrew P. Norton (apn4za@virginia.edu), Yanjun Qi (yanjun@virginia.edu)
Department of Computer Science, University of Virginia
**ABSTRACT**
Recent studies have shown that attackers can force deep learning
models to misclassify so-called “adversarial examples:” maliciously
generated images formed by making imperceptible modifications
to pixel values. With growing interest in deep learning for security
applications, it is important for security experts and users of machine learning to recognize how learning systems may be attacked.
Due to the complex nature of deep learning, it is challenging to
understand how deep models can be fooled by adversarial examples.
Thus, we present a web-based visualization tool, ADVERSARIAL-PLAYGROUND, to demonstrate the efficacy of common adversarial methods against a convolutional neural network (CNN) system. ADVERSARIAL-PLAYGROUND is educational, modular and interactive. (1) It enables non-experts to compare examples visually and to understand why an adversarial example can fool a CNN-based image classifier. (2) It can help security experts explore further vulnerabilities of deep learning as a software module. (3) Building an interactive visualization is challenging in this domain due to the large feature space of image classification (generating adversarial examples is slow in general and visualizing images is costly). Through multiple novel design choices, our tool can provide fast and accurate
responses to user requests. Empirically, we find that our client-server
division strategy reduced the response time by an average of 1.5
seconds per sample. Our other innovation, a faster variant of JSMA
evasion algorithm, empirically performed twice as fast as JSMA and
yet maintains a comparable evasion rate[1].
**Index Terms:** I.2.6 [Artificial Intelligence]: Learning—
Connectionism and neural nets; K.6.5 [Management of Computing
and Information Systems]: Security and Protection—Unauthorized
access
**1** **INTRODUCTION**
Investigating the behavior of machine learning systems in adversarial environments is an emerging topic at the junction of computer security and machine learning [1]. While machine learning models may appear to be effective for many security tasks like malware classification [7] and facial recognition [9], these classification techniques were not designed to withstand manipulations made by intelligent and adaptive adversaries. In contrast with applications of machine learning to other fields, security tasks involve adversaries that may respond maliciously to the classifier [1].

Recent studies show that intelligent attackers can force machine learning systems to misclassify samples by performing nearly imperceptible modifications to the sample before attempting classification [4,10]. These samples, named “adversarial examples,” have effectively fooled many state-of-the-art deep learning models. Adversarial examples for Deep Neural Network (DNN) models are usually crafted through an optimization procedure that searches for small, yet effective, perturbations of the original image (details in Sect. 2). Understanding why a DNN model performs as it does is quite challenging, and it is even more so to understand how such a model can be fooled by adversarial examples.

With growing interest in adversarial deep learning, it is important for security experts and users of DNN systems to understand how DNN models may be attacked in the face of an adversary. This paper introduces a visualization tool, ADVERSARIAL-PLAYGROUND, to enable better understanding of how different types of adversarial examples fool DNN systems. ADVERSARIAL-PLAYGROUND provides a simple and intuitive interface to let users visually explore the impact of three attack algorithms that generate adversarial examples. Users may specify parameters for a variety of attack types and generate new samples on-demand. The interface displays the resulting adversarial example compared to the original, alongside classification likelihoods on both images from the DNN.

ADVERSARIAL-PLAYGROUND provides the following benefits:

- Educational: Fig. 1 shows a screenshot of the visualization. This intuitive and simple visualization helps practitioners of deep learning understand how their models misclassify and how adversarial examples produced by various algorithms differ.

- Interactive: We add two novel strategies in ADVERSARIAL-PLAYGROUND to make it respond to users' requests in a sufficiently quick manner. The interactive visualization allows users to gain deeper intuition about the behavior of DNN classification through maliciously generated inputs.

- Modular: Security experts can easily plug ADVERSARIAL-PLAYGROUND into their benchmarking frameworks as a module. The design also allows experts to easily add other DNN models or more algorithms for generating adversarial examples in the visualization.

To the authors' best knowledge, this is the first visualization platform showing how adversarial examples are generated and how they fool a DNN system.
The rest of this paper takes the following structure: Sect. 2 discusses the relevant backgrounds, Sect. 3 introduces the system organization and software design of ADVERSARIAL-PLAYGROUND,
Sect. 4 presents an empirical evaluation with respect to different design choices, and Sect. 5 concludes the paper by discussing possible
extensions.
**2** **BACKGROUND**
[1] Project source code and data from our experiments are available at: https://github.com/QData/AdversarialDNN-Playground.
**2.1** **Adversarial Examples and More**
Studies regarding the behavior of machine learning models in adversarial environments generally fall into one of three categories: (1)
_poisoning attacks, in which specially crafted samples are injected_
into the training of a learning model, (2) privacy-aware learning,
which aim to preserve the privacy of information in data samples,
or (3) evasion attacks, in which the adversary aims to create inputs
that are misclassified by a target classifier. Generating adversarial
examples is part of this last category.
Figure 1: ADVERSARIAL-PLAYGROUND User Interface
The goal of adversarial example generation is to craft an input for
a particular classifier that, while improperly classified, reveals only
slight alteration on the input. To formalize the extent of allowed
alterations, evasion algorithms minimize the difference between
the “seed” input and the resulting adversarial example based on a
predefined norm (a function measuring the distance between two
inputs).
In some cases, the adversary specifies the “target” class of an
adversarial sample — for example, the adversary may desire an
image that looks like a “6” to be classified as a “5” (as in Fig. 1).
This is referred to as a targeted attack. Conversely, if the adversary
does not specify the desired class, the algorithm is considered to be
_untargeted._
Formally, let us denote by $f : X \to C$ a classifier that maps the set of all possible inputs, $X$, to a finite set of classes, $C$. Then, given a target class $y_t \in C$, a seed sample $x \in X$, and a norm function $\|\cdot\|$, the goal of generating a targeted adversarial example is to find $x' \in X$ such that:

$$x' = \operatorname*{argmin}_{s \in X} \{\, \|x - s\| : f(s) = y_t \,\} \qquad (1)$$

Similarly, in the untargeted case, the goal is to find $x'$ such that:

$$x' = \operatorname*{argmin}_{s \in X} \{\, \|x - s\| : f(s) \neq f(x) \,\} \qquad (2)$$
In this formalization, we see there are two key degrees of freedom in creating a new evasion algorithm: targeted vs. untargeted attacks and the choice of norm functions. The latter category provides a useful grouping scheme for algorithms generating adversarial inputs, suggested by Carlini and Wagner [2].
ADVERSARIAL-PLAYGROUND uses two evasion algorithms provided by the cleverhans library [3]: the Fast Gradient Sign Method (FGSM), based on the $L^\infty$ norm, and the Jacobian Saliency Map Approach (JSMA), based on the $L^0$ norm [8].
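To make the $L^\infty$ case concrete, here is a minimal sketch of the FGSM perturbation. It is written against the TensorFlow 2 eager API rather than the TF 1.0 / cleverhans code the paper uses, and `model` stands for any differentiable Keras-style classifier with inputs scaled to [0, 1]; it illustrates the technique, not the tool's actual implementation.

```python
import tensorflow as tf

def fgsm(model, x, y_true, epsilon=0.1):
    """Fast Gradient Sign Method: move each pixel by +/- epsilon in the
    direction that increases the classification loss, i.e. an
    L-infinity-bounded perturbation of the seed image."""
    x = tf.convert_to_tensor(x, dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = tf.keras.losses.sparse_categorical_crossentropy(y_true, model(x))
    grad = tape.gradient(loss, x)
    x_adv = x + epsilon * tf.sign(grad)
    return tf.clip_by_value(x_adv, 0.0, 1.0)  # keep pixel values valid
```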
**2.2** **DNNs and the MNIST Dataset**
DNNs can efficiently learn highly-accurate models in many domains
[5,7]. Convolutional Neural Networks (CNNs), first popularized by
LeCun et al. [6], perform exceptionally well on image classification.
ADVERSARIAL-PLAYGROUND uses a state-of-the-art CNN model
on the popular MNIST “handwritten digits” dataset for visualizing
evasion attacks. This dataset contains 70,000 images of hand-written
digits (0 through 9). Of these, 60,000 images are used as training
data and the remaining 10,000 images are used for testing. Each
sample is a 28 × 28 pixel, 8-bit grayscale image. Users of our system
are presented with a collection of seed images, selected from each
of the 10 classes in the testing set (see the right side of Fig. 1).
**2.3** **TensorFlow Playground**
Our proposed package follows the spirit of TensorFlow Playground
— a web-based educational tool that helps users understand how
neural networks work [11]. TensorFlow Playground has been used
in many classes as a pedagogical aid and helps the self-guided
student learn more. Its impact inspires us to visualize adversarial
examples through ADVERSARIAL-PLAYGROUND. Our web-based
visualization tool assists users in understanding and comparing the
impact of standard evasion techniques on deep learning models.
**3** **ADVERSARIAL-PLAYGROUND: A MODULAR AND INTERACTIVE VISUALIZATION SUITE**
In creating our system, we made several design decisions to make
ADVERSARIAL-PLAYGROUND educational, modular and interactive. Here, we present the four major system-level decisions we
made: (1) building ADVERSARIAL-PLAYGROUND as a web-based
application, (2) utilizing both client- and server-side code, (3) rendering images with the client rather than the server, and (4) implementing a faster variant of the JSMA attack.
We released all project code on GitHub in the interest of providing
a high-quality, easy-to-use software package to demonstrate how
adversarial examples fool deep learning.
**3.1** **A Web-based Visualization Interface**
ADVERSARIAL-PLAYGROUND provides quick and effective visualizations of adversarial examples through an interactive webapp
as shown by Fig. 1. The user selects one attacking algorithm from
the navigation bar at the top of the webapp. On the right-hand pane,
the user sets the attacking strength of the algorithm using the slider,
selects a seed image, and (if applicable) a target class. (Fig. 1 at
right.) Selecting a seed image immediately loads the image to the
left-hand display and displays the output of the CNN classifier in a
bar chart below.
After setting the parameters, the user clicks “Generate Adversarial
Sample.” This runs the chosen adversarial algorithm in real-time to
attempt generating an adversarial sample. The sample is displayed
in the primary pane to the left of the controls (Fig. 1 at center). The
generated sample is fed through the CNN classifier, and then the
likelihoods are displayed in a bar chart below the sample. Finally,
the classification of generated sample is displayed below the controls
at right.
This web-based visualization generates adversarial examples “on-demand” from user-specified parameters. Therefore, users can see
the impact of different adversarial algorithms with varying configurations.
Developing ADVERSARIAL-PLAYGROUND as a web-based (as
opposed to a local) application enables a large number of users to
utilize the application without requiring an installation process on
each computer. By eliminating an installation step, we encourage
potential users who may be only casually interested in adversarial
machine learning to explore what it is. This supports the pedagogical
goals of the software package.
**3.2** **A Modular Design with Client-Server Division**
Two key features of ADVERSARIAL-PLAYGROUND are its modular
design and the division of the functionality between the client and
server; the client handles user interaction and visualization, while
the server handles more computationally intensive tasks. Fig. 2
diagrams the interaction between each component of our system. At
the upper right, we have the user who may specify hyperparameters
for the evasion algorithm. Moving counter-clockwise, these parameters are transferred to the server, where the appropriate adversarial
algorithm module is selected and run against the pre-trained CNN
module. TensorFlow is used to reduce computation time and improve compatibility. Finally, the resulting sample is sent to the client
and plotted using the JavaScript library Plotly.JS.
Figure 2: ADVERSARIAL-PLAYGROUND System Sketch
Users running a local copy of the webapp may easily customize
the tool to their needs; by separating the deep learning model and
the evasion methods from the main visualization and interface codebase, changing or adding DNN models or adding new adversarial
algorithms is straightforward.
**TensorFlow based Server-side:** Our inspiration, the TensorFlow Playground, was written entirely in JavaScript and other client-side technologies, allowing a lightweight server to host the service
for many users. Unfortunately, adversarial examples are usually
generated on larger, deeper networks than those created by users of
TensorFlow Playground, and this makes a JavaScript-only approach
prohibitively slow.
Instead, we chose to use a GPU-enabled server running Python
with TensorFlow to generate the adversarial examples on the backend (server-side), then send the image data to the client. This provides increased speed (aiding interactivity), adds compatibility with
other TensorFlow-based deep learning models and allows the flexibility of evasion algorithms (promoting modularity).
**Server-side Configuration: The server-side of ADVERSARIAL-**
PLAYGROUND requires a computer with Python 3.5, TensorFlow
1.0 (or higher), the standard SciPy stack, and the Python package
`Flask`. We have tested the code on Windows, Linux, and Mac operating systems.

To install, clone the GitHub repository and install the prerequisites via `pip3 install -r requirements.txt`. A pre-trained MNIST model is already stored in the GitHub repository; all that is needed to start the webapp is to run `python3 run.py`. Once the app is started, it will run on `localhost:9000`.
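As a rough illustration of this Flask setup, here is a minimal sketch of such an endpoint. The route name and the helpers `run_attack` and `classify` are hypothetical, not the webapp's actual API; they stand in for the cleverhans attack wrappers and the pre-trained MNIST CNN described above.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/generate", methods=["POST"])
def generate():
    # Hypothetical helpers: run_attack would wrap a cleverhans attack,
    # classify would evaluate the pre-trained MNIST CNN.
    params = request.get_json()
    x_adv = run_attack(params["attack"], params["seed"], params["strength"])
    likelihoods = classify(x_adv)
    # Only raw pixel values and class likelihoods cross the wire;
    # the client renders both with Plotly.JS (Sect. 3.3).
    return jsonify(pixels=x_adv.tolist(), likelihoods=likelihoods.tolist())

if __name__ == "__main__":
    app.run(port=9000)
```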
**3.3** **Visualizing Sample through Client-side Rendering**
As shown in Fig. 2, through the client, the user adjusts hyperparameters and submits a request to generate an adversarial sample to
the server. Once the TensorFlow back-end generates the adversarial
image and classification likelihoods, the server returns this data to
the client. Finally, this information is displayed graphically to the
user through use of the Plotly JavaScript library.
As we generate adversarial samples on the server-side, it was
tempting to produce the output images on the server as well. In
our prototype, we used server-side rendering of these images with
the Python library matplotlib, then downloaded the image for
display on the client. However, we ultimately decided to assign all
visualization tasks to the client, using JavaScript and the Plotly.JS
library, after realizing this approach was faster.
This is because generating images on the server with the default
`matplotlib` utilities required creating a full PNG image, writing
it to disk, then transferring the image to the client; this took time
and increased latency. Fortunately, client-side rendering of images
required transmission of far less data; only pixel values for the 28 ×
28 MNIST images and the 10 values for classification likelihoods
needed to be sent. Additionally, the Plotly.JS library provided
interactive plots that enable users to view the underlying values
for each pixel. Empirically, switching to a client-side rendering
of images reduced response time by approximately 1.5 seconds.
(Sect. 4.2.)
ADVERSARIAL-PLAYGROUND’s modularity extends into the visualization code, too. Although it may be hosted on any machine
that supports TensorFlow, the web-based client/server division of
the webapp allows the computationally intensive “back-end” to be
hosted on a powerful server while the visualizations may be accessed
from any device.
**3.4** **Faster Variant of JSMA Attack**
While dividing the computation and visualization steps between the
client and server saved some time, actually generating the adversarial example is where the most time is consumed (Table 1). In
particular, the Jacobian Saliency Map Approach (JSMA) algorithm
by Papernot et al. [8] can take more than two-thirds of a second to
generate a single adversarial output. In order to provide an interactive experience, our web app must generate adversarial samples
quickly. We therefore introduce a new, faster variant of the JSMA that maintains a comparable evasion rate to the original but takes roughly half as much time.
**JSMA Background: Most state-of-the-art evasion algorithms**
are slow due to the expensive optimization and the large search space
involved in image classification [2,3].
The original JSMA algorithm is a targeted attack that uses the $L^0$ norm in Equation 1. To generate $x'$ from $x$, JSMA iteratively selects the “most influential” combination of two features to alter. To rank features by their influence, JSMA uses a saliency map of the forward derivative of the classifier. The ranking and alteration process is repeated until the altered sample is successfully classified as $y_t$ or the $L^0$ distance between the altered and seed samples exceeds a provided threshold, $\Upsilon$.
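A compact sketch of that iterate-until-evasion outer loop follows, under heavy simplifications: `classify` and `most_salient_pair` are hypothetical callables standing in for the CNN and the saliency-map ranking, and selected pixels are simply saturated to 1.0.

```python
import numpy as np

def jsma(classify, most_salient_pair, x, y_target, upsilon):
    """Sketch of the JSMA outer loop: repeatedly saturate the most
    salient feature pair until the target class y_target is reached
    or the L0 budget (upsilon, a fraction of all features) runs out."""
    x_adv = x.copy()
    budget = int(upsilon * x_adv.size)
    changed = set()
    while classify(x_adv) != y_target and len(changed) < budget:
        p, q = most_salient_pair(x_adv, y_target)  # the Theta(M^2) step
        x_adv[p] = x_adv[q] = 1.0                  # alter the chosen pair
        changed.update((p, q))
    return x_adv
```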
The largest consumption of time in JSMA is the combinatorial search over all feature pairs to determine the “best” pair to alter; if there are $M$ features in a given sample, JSMA must evaluate $\Theta(M^2)$ candidates at each iteration. When working on high-dimensional data, this can become prohibitively expensive. We introduce a new, faster variant of JSMA that maintains a comparable evasion rate to the original, which we call Fast Jacobian Saliency Map Apriori (FJSMA).
**FJSMA Improvement:** Our FJSMA approach is an approximation of JSMA that uses an a priori heuristic to significantly reduce the search space. Instead of considering all pairs of features $(p, q)$, our improvement only considers such pairs where $p$ is in the top $k$ features when ranked by the derivative in the $p$-coordinate, where $k$ is a small constant. (See red-bolded modifications to JSMA in Algorithm 1.)
If we denote the set consisting of the top $k$ elements of $A$, as ranked by $f$, by $\operatorname{argtop}_{x \in A}(f(x);\, k)$, then the loop in our Fast Jacobian Saliency Map Apriori (FJSMA) selection routine is $\Theta(k \cdot |\Gamma|)$, where $k \ll |\Gamma|$ and $|\Gamma| = M$ is the size of the feature set. Since determining the top $k$ features can be done in linear time, this is a considerable improvement in asymptotic terms.
This modification improves the runtime from $\Theta(M^2)$ to $\Theta(M \cdot k)$, where $M$ is the feature size and $k$ is some small constant. Our experiments show $k$ may be as little as 15% of $M$ and still maintain the same efficacy in terms of evasion rate as JSMA.
**Algorithm 1** Fast Jacobian Saliency Map Apriori Selection

$\nabla F(X)$ is the forward derivative, $\Gamma$ the features still in the search space, $t$ the target class, and $k$ a small constant.

**Input:** $\nabla F(X)$, $\Gamma$, $t$, $k$

1: $K = \operatorname{argtop}_{p \in \Gamma}\left(-\frac{\partial F_t(X)}{\partial X_p};\, k\right)$ ▷ Changed for FJSMA
2: **for** each pair $(p, q) \in K \times \Gamma$, $p \neq q$ **do** ▷ Changed for FJSMA
3: $\quad \alpha = \sum_{i=p,q} \frac{\partial F_t(X)}{\partial X_i}$
4: $\quad \beta = \sum_{i=p,q} \sum_{j \neq t} \frac{\partial F_j(X)}{\partial X_i}$
5: $\quad$ **if** $\alpha < 0$ and $\beta > 0$ and $-\alpha \times \beta > \mathit{max}$ **then**
6: $\qquad p_1, p_2 \leftarrow p, q$
7: $\qquad \mathit{max} \leftarrow -\alpha \times \beta$
8: $\quad$ **end if**
9: **end for**
10: **return** $p_1, p_2$

**4** **PERFORMANCE TESTING**

We conducted a series of timing tests to quantify how our design choices have influenced the speed of interactive responses by ADVERSARIAL-PLAYGROUND. First, we consider the impact of relegating the visualization code to the client (from Sect. 3.2); then, we show that FJSMA is faster and just as accurate as the JSMA implementation in the cleverhans package.
**4.1** **Client-side Rendering Improves Response Speed**
|Rendering done on...|Server|Image Download|Total|
|---|---|---|---|
|Server-side|4472|350|4821|
|Client-side|3335|—|3335|
|**Difference**|1137|350|1486|

Table 1: Latency (in milliseconds) with and without client-side visualization. A time profiling of the latency experienced by the user when (1) the server handled all computation and visualization and (2) the visualization was offloaded to the client. The “Server” column denotes the time taken for the server to respond, while the “Image Download” column shows the additional time taken to transfer each image (only applicable for server-side rendering).
We first conducted timing tests to evaluate how the choice of
client-side rendering (Sect. 3.2) has influenced the speed of responding to users’ requests. We loaded the webapp and measured the
response time of the server for a variety of seed images and target classes. We repeated these for a total of between 10 and 16
times (depending on the algorithm), averaged the response time, and
reported the result in Table 1. The majority of the time for both
with and without client-side visualization is in the server computation. However, offloading the visualization to the client resulted
in a nearly 1.5-second speedup (an approximately 30% difference).
Interestingly, not only did the image download time get eliminated,
but the server computation time was reduced as well. This is, in part,
due to the reduction of I/O operations and image generation required
by the server when visualization is done by the client.
**4.2** **FJSMA Improvement**
We propose a faster approximation of the JSMA attacking algorithm: FJSMA in Sect. 3.4. Using the same CNN model we used
in ADVERSARIAL-PLAYGROUND for the MNIST dataset, we compared FJSMA with JSMA through two metrics: (1) the “wall clock”
time needed for successfully generating an adversarial example, and
(2) the evasion rate — a standard metric that reports the percentage
of seed images that were successfully converted into adversarial
samples.
This comparison was conducted in a batch manner. We ran both evasion attacks on the 10,000-sample MNIST testing set for a range of values of the ϒ parameter for both algorithms. For FJSMA, we also varied the value of the input parameter k (the percentage of the feature-set size). Intuitively, this k value controls how tight an approximation FJSMA is to JSMA; as k grows larger, we should
expect the performance of the two approaches to converge to each other.

|ϒ|10%|15%|20%|25%|
|---|---|---|---|---|
|JSMA Evasion Rate|0.658|0.824|0.867|0.879|
|FJSMA Evasion Rate [k = 10%]|0.583|0.777|0.823|0.826|
|FJSMA Evasion Rate [k = 15%]|0.613|0.816|0.867|0.871|
|FJSMA Evasion Rate [k = 20%]|0.633|0.833|0.878|0.887|
|FJSMA Evasion Rate [k = 30%]|0.638|0.844|0.896|0.901|
|JSMA Time (s)|0.606|0.745|0.807|0.803|
|FJSMA Time [k = 10%] (s)|0.411|0.468|0.490|0.485|
|FJSMA Time [k = 15%] (s)|0.414|0.473|0.483|0.484|
|FJSMA Time [k = 20%] (s)|0.415|0.466|0.482|0.483|
|FJSMA Time [k = 30%] (s)|0.415|0.464|0.490|0.485|

Table 2: JSMA and FJSMA Comparison. Each column represents a test run with a particular value of ϒ (the maximum allowed perturbation for a sample). The top half of the table gives the average evasion rate for each algorithm, while the bottom half gives the average time (in seconds) taken to generate an adversarial example. FJSMA was run with multiple values of k, where k was 10%, 15%, 20%, and 30% of the feature-space size.
Results of this experiment are summarized in Table 2. The
`cleverhans` JSMA and the proposed FJSMA attack achieve similar
evasion rates for all tested values of ϒ and k, with larger values of k increasing the evasion rate. Curiously, for k ≥ 20%, our implementation of FJSMA even outperforms that of cleverhans JSMA;
this is likely due to implementation details. The average time to
form an evasive sample from a seed benign sample is given in the
second half of the table. Our FJSMA approach greatly improves
upon the speed of JSMA. However, varying the value of k does not
produce a significant variation in runtime per sample; we conjecture
this is because of the small feature space of MNIST and that searching 30% of the feature space likely does not dominate the runtime.
In summary, FJSMA achieves a significant improvement in speed,
while maintaining essentially the same evasion rate — an important
advantage for interactive visualization.
**5** **DISCUSSION AND FUTURE WORK**
The study of evasion attacks on machine learning models is a
rapidly growing field. In this paper, we present a web-based tool
ADVERSARIAL-PLAYGROUND for visualizing the performance of
adversarial examples against deep neural networks. ADVERSARIAL-PLAYGROUND enables non-experts to compare adversarial examples visually and can help security experts explore further vulnerabilities of deep learning. It is modular and interactive. To our knowledge, our
platform is the first visualization-focused package for adversarial
machine learning.
A straightforward extension of this work is to increase the variety of supported evasion methods. For example, including the new
attacks based on $L^0$, $L^2$, and $L^\infty$ norms from Carlini and Wagner’s
recent paper [2] would be a good next step in comparing the performance of multiple evasion strategies. However, expansion in this
manner presents an additional issue of latency. To generate evading
samples “on-demand,” the adversarial algorithm must run quickly;
these other algorithms take much longer to execute than those we
selected, so some time-saving techniques must be explored.
Another direction for development is to provide more choices of
classifiers and datasets. Allowing the user to select from CIFAR,
ImageNet, and MNIST data would highlight the similarities and
differences between how a single attack method deals with different data. Similarly, providing the user with a choice of multiple
pre-trained models — possibly hardened against attack through adversarial training — would help distinguish artifacts of model choice
from the behavior of the attack. These two extensions would help
users more fully understand the behavior of an adversarial algorithm.
**REFERENCES**
[1] M. Barreno, B. Nelson, A. D. Joseph, and J. Tygar. The Security of
Machine Learning. Machine Learning, 81(2):121–148, 2010.
[2] N. Carlini and D. Wagner. Towards evaluating the robustness of neural
networks. CoRR, abs/1608.04644, 2016.
[3] I. J. Goodfellow, N. Papernot, and P. D. McDaniel. cleverhans v0.1: an
adversarial machine learning library. CoRR, abs/1610.00768, 2016.
[4] I. J. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing
adversarial examples. arXiv preprint arXiv:1412.6572, 2014.
[5] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet Classification
with Deep Convolutional Neural Networks. In Advances in Neural
_Information Processing Systems, pp. 1097–1105, 2012._
[6] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based
learning applied to document recognition. Proceedings of the IEEE,
86(11):2278–2324, November 1998.
[7] Microsoft Corporation. Microsoft Malware Competition Challenge.
https://www.kaggle.com/c/malware-classification, 2015.
[8] N. Papernot, P. D. McDaniel, S. Jha, M. Fredrikson, Z. B. Celik, and
A. Swami. The limitations of deep learning in adversarial settings.
_CoRR, abs/1511.07528, 2015._
[9] O. M. Parkhi, A. Vedaldi, and A. Zisserman. Deep Face Recognition.
In British Machine Vision Conference, 2015.
[10] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. J. Goodfellow, and R. Fergus. Intriguing properties of neural networks. CoRR,
abs/1312.6199, 2013.
[11] J. Yosinski, J. Clune, A. M. Nguyen, T. J. Fuchs, and H. Lipson.
Understanding neural networks through deep visualization. CoRR,
abs/1506.06579, 2015.
-----
|
{
"disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/1708.00807, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "GREEN",
"url": "https://arxiv.org/pdf/1708.00807"
}
| 2,017
|
[
"JournalArticle"
] | true
| 2017-08-01T00:00:00
|
[
{
"paperId": "ed46777287b287969571f4e9bffbbf3741ebb769",
"title": "Cleverhans V0.1: an Adversarial Machine Learning Library"
},
{
"paperId": "df40ce107a71b770c9d0354b78fdd8989da80d2f",
"title": "Towards Evaluating the Robustness of Neural Networks"
},
{
"paperId": "819167ace2f0caae7745d2f25a803979be5fbfae",
"title": "The Limitations of Deep Learning in Adversarial Settings"
},
{
"paperId": "1b5a24639fa80056d1a17b15f6997d10e76cc731",
"title": "Understanding Neural Networks Through Deep Visualization"
},
{
"paperId": "bee044c8e8903fb67523c1f8c105ab4718600cdb",
"title": "Explaining and Harnessing Adversarial Examples"
},
{
"paperId": "d891dc72cbd40ffaeefdc79f2e7afe1e530a23ad",
"title": "Intriguing properties of neural networks"
},
{
"paperId": "abd1c342495432171beb7ca8fd9551ef13cbd0ff",
"title": "ImageNet classification with deep convolutional neural networks"
},
{
"paperId": "afd0859a858481d2f36109f68090aebd77456b7f",
"title": "The security of machine learning"
},
{
"paperId": "162ea969d1929ed180cc6de9f0bf116993ff6e06",
"title": "Deep Face Recognition"
},
{
"paperId": null,
"title": "Microsoft Malware Competition Challenge"
},
{
"paperId": "162d958ff885f1462aeda91cd72582323fd6a1f4",
"title": "Gradient-based learning applied to document recognition"
}
] | 7,086
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/0100d00584f7b6a2e3969fd57a9d164103f39a69
|
[
"Computer Science"
] | 0.910426
|
ENHANCED DYNAMIC RESOURCE ALLOCATION SCHEME BASED ON PACKAGE LEVEL ACCESS IN CLOUD COMPUTING : A REVIEW
|
0100d00584f7b6a2e3969fd57a9d164103f39a69
|
Bioinformatics
|
[
{
"authorId": "144912228",
"name": "Manpreet Kaur"
},
{
"authorId": "145713817",
"name": "Rajinder Singh"
}
] |
{
"alternate_issns": [
"1367-4811"
],
"alternate_names": [
"Int Conf Bioinform",
"International Conference on Bioinformatics",
"BIOINFORMATICS"
],
"alternate_urls": null,
"id": "15d4205f-903b-403c-9a2e-906f02ce04d8",
"issn": "1367-4803",
"name": "Bioinformatics",
"type": "journal",
"url": "http://bioinformatics.oxfordjournals.org/"
}
|
Cloud computing is distributed computing, storing, sharing and accessing data over the Internet. It provides a pool of shared resources to users on a pay-as-you-go basis, meaning users pay only for the services they use, according to their access times. This research work deals with balancing the work load in a cloud environment. Load balancing is one of the essential factors for enhancing the working performance of a cloud service provider. Maintaining load information would consume a lot of cost, since the system is too huge to disperse load in a timely manner. Load balancing is one of the main challenges in cloud computing; it is required to distribute the dynamic workload across multiple nodes to ensure that no single node is overwhelmed. It helps in the optimal utilization of resources and hence in enhancing the performance of the system. We propose an improved load balancing algorithm for job scheduling in the cloud environment using a load distribution table in which the current status, current package, VM capacity and the number of cloudlets submitted to each and every virtual machine are stored. The user's job is submitted to the datacenter broker. The datacenter broker first finds a suitable VM according to the requirements of the cloudlet, matching the most suitable VM according to its availability or the machine with the least load in the distribution table. A number of experiments have been conducted with different configurations of cloudlets and virtual machines. Various parameters like waiting time, execution time, turnaround time and usage cost have been computed inside the CloudSim environment to demonstrate the results. The main contribution of this research work is to balance the entire system load while trying to minimize the makespan of a given set of jobs. The experimental results show that the improved load balancing algorithm can outperform other job scheduling algorithms.
|
# ENHANCED DYNAMIC RESOURCE ALLOCATION SCHEME BASED ON PACKAGE LEVEL ACCESS IN CLOUD COMPUTING: A REVIEW
## Manpreet Kaur [(1)], Dr. Rajinder Singh[(2)]
(1) Research Scholar, Department of Computer Science & Engineering, GGSCET, Bathinda, Punjab.
### manpreet12395@gmail.com
(2) Assistant Professor, Department of Computer Science & Engineering, GGSCET, Bathinda, Punjab.
### rajneel2807@gmail.com
## ABSTRACT
Cloud computing is Internet-based development and use of computer technology. It is a style of computing in which dynamically scalable and often virtualized resources are provided as a service over the Internet. Users need not have knowledge of, expertise in, or control over the technology infrastructure "in the cloud" that supports them. Scheduling is one of the core steps to efficiently exploit the capabilities of heterogeneous computing systems. On a cloud computing platform, load balancing of the entire system can be handled dynamically by using virtualization technology, through which it becomes possible to remap virtual machines and physical resources according to changes in load. However, in order to improve performance, the virtual machines have to fully utilize their resources and services by adapting to the computing environment dynamically. Load balancing with proper allocation of resources must be guaranteed in order to improve resource utility.
## Keywords
Cloud Computing, Load Balancing, Virtual Machine, Packages, Leases
## INTRODUCTION
Cloud Computing (CC) [1] is an emerging technology that has a close connection to the Grid Computing (GC) paradigm and to other relevant technologies such as utility computing, distributed computing and cluster computing. The aim of both GC and CC is to achieve resource virtualization. In spite of the aim being similar, GC and CC have significant differences. The main emphasis of GC is to achieve maximum computing, while that of CC is to optimize the overall computing capacity. CC also provides a way to handle a wide range of organizational needs by providing dynamically scalable servers and applications to work with. Leading CC service providers such as Amazon, IBM, Dropbox, Apple's iCloud, Google's applications, Microsoft's Azure, etc., are able to attract ordinary users throughout the world. CC has introduced a new paradigm, which helps its users to store data or develop applications dynamically and access them from anywhere and at any time just by connecting to an application over the Internet. Depending on the customer's requirements, CC provides easy and customizable services to access or work with cloud applications. Based on user requirements, CC can be used to provide a platform for designing applications, infrastructure to store and work on a company's data, and also applications for users' routine tasks. When a customer chooses to use cloud services, data stored in local repositories is sent to a remote data center. This data in remote locations can be accessed or managed with the help of services provided by cloud service providers. This makes clear that for a user to store or process a piece of data in the cloud, he/she needs to transmit the data to a remote server over a channel (the Internet). This data processing and storage needs to be done with utmost care to avoid data breaches.
It is a model for convenient, on-demand network access, with minimal management effort, enabling easy and fast access to resources that are ready to use. It is an upcoming paradigm that offers tremendous advantages in economic aspects, such as reduced time to market, flexible computing capabilities, and limitless computing power. The popularity of cloud computing is increasing day by day in the distributed computing environment, and there is a growing trend of using cloud environments for storage and data processing needs. To use the full potential of cloud computing, data is transferred, processed, retrieved and stored by external cloud providers. However, data owners are very skeptical about placing their data outside their own sphere of control.
**Figure 1. Cloud Computing**
### BENEFITS OF CLOUD COMPUTING
Some common benefits of cloud computing are:
- **Reduced Cost:** Since cloud technology is implemented incrementally (step-by-step), it reduces an organization's total expenditure.
- **Increased Storage:** Compared to private computer systems, far larger amounts of data can be stored.
- **Flexibility:** Compared to traditional computing methods, cloud computing allows an entire organizational segment, or a portion of it, to be outsourced.
- **Greater Mobility:** Information can be accessed whenever and wherever needed, unlike traditional systems (storing data on personal computers and accessing it only when near them).
- **Shift of IT Focus:** Organizations can focus on innovation (i.e., implementing new product strategies) rather than worrying about maintenance issues such as software updates or computing issues.

These benefits of cloud computing draw a lot of attention from the Information and Technology Community (ITC). Surveys by the ITC in 2008 and 2009 show that many companies and individuals are finding that CC is proving to be helpful when compared to traditional computing methods.
**Figure 2. Benefits of Cloud Computing**
### CLOUD COMPUTING: SERVICE MODELS
Cloud computing can be accessed through a set of service models. These services are designed to exhibit certain characteristics and to satisfy organizational requirements. From these, the best suited service can be selected and customized for an organization's use. Some of the common distinctions in cloud computing services are Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), Infrastructure-as-a-Service (IaaS), Hardware-as-a-Service (HaaS) and Data storage-as-a-Service (DaaS). Service model details are as follows:
- **Software as a Service (SaaS) [4]:** The service provider in this context provides the capability to use one or more applications running on a cloud infrastructure. These applications can be accessed from various thin-client interfaces such as web browsers. A user of this service need not maintain, manage or control the underlying cloud infrastructure (i.e. network, operating systems, storage, etc.). Examples of SaaS clouds are Salesforce and NetSuite.
- **Platform as a Service (PaaS) [5]:** The service provider in this context provides the user with resources to deploy, onto the cloud infrastructure, supported applications that are designed or acquired by the user. A user of this service has control over the deployed applications and the application hosting environment, but has no control over infrastructure such as network, storage, servers, operating systems, etc. Examples of PaaS clouds are Google App Engine, Microsoft Azure and Heroku.
- **Infrastructure as a Service (IaaS):** The consumer is provided with the power to control processing, storage, networks and other fundamental computing resources, on which arbitrary software, including operating systems and applications, can be run. With this kind of service, the user has control over operating systems, storage and deployed applications, and possibly limited control over selected networking components. Examples of IaaS clouds are Eucalyptus (The Eucalyptus Open-source Cloud-computing System), Amazon EC2, Rackspace and Nimbus.
**Figure 3. Cloud Computing Service Model**
### CLOUD COMPUTING: DEPLOYMENT MODELS
Among the service models explained above, SaaS, PaaS and IaaS are popular among providers and users. These services can be deployed on one or more deployment models, such as public cloud, private cloud, community cloud and hybrid cloud, to use the features of cloud computing. Each of these deployment models is explained as follows:
- **Public cloud:** This type of infrastructure is made available to large industrial groups or the public. It is maintained and owned by an organization selling cloud services.
- **Private cloud:** This type of cloud deployment is kept accessible only to the organization that commissions it. Private clouds can be managed by a third party or the organization itself. In this scenario, cloud servers may or may not exist in the same place where the organization is located.
- **Hybrid cloud:** Within this deployment model there can be two or more clouds, such as private, public or community clouds. These constituent clouds (combinations such as private and public, public and community, etc.) remain distinct but are bound together by standardized or proprietary technology that enables application and data portability.
- **Community cloud:** This type of cloud infrastructure is shared by several organizations and supports a specific community with shared concerns. It can be managed by an organization or a third party and can be deployed on or off the organizational premises.

Usage of the deployment models and service models provided by CC changes how systems are connected and how work is done in an organization. It adds a dynamically expandable nature to the applications, platforms, infrastructure or any other resource that is ordered and used in CC.
**Figure 4. Types of Cloud**
## LOAD BALANCING
One of the most common applications of load balancing is to provide a single service from multiple servers, typically called a server data center. Load-balanced systems are commonly found in popular internet sites, big chat networks, high-bandwidth File Transfer Protocol sites, and Domain Name System (DNS) servers. Load balancing additionally prevents clients from contacting back-end servers directly, which can have security advantages by hiding the structure of the inner network. Some load balancers provide a mechanism for improving a particular parameter within the back-end servers. Load balancing offers the IT team an opportunity to attain considerably higher fault tolerance: it can automatically provide the capacity needed to handle any increase or decrease in application traffic.

It is also necessary that the load balancer itself does not become a cause of failure. Load balancers implemented on high-availability servers can additionally replicate the user’s session needed by the application. Load balancing divides the work load between a set of computers so that good response times are achieved, all nodes are equally loaded and, in general, all users are served more quickly. It may be implemented with hardware, software, or a mix of each. Often, an unbalanced load is the main reason for a server’s poor response time. Load balancing aims to optimize the usage of resources, maximize the overall success ratio, minimize the waiting interval, and avoid overloading of resources. Using multiple algorithms and mechanisms for load balancing, rather than a single algorithm, may increase reliability and efficiency. Load balancing in the cloud differs from classical thinking on load balancing design and implementation: older algorithms simply dispatch requests to data center servers on a first-come, first-served basis, according to the order of incoming client requests.
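As a toy illustration of the alternative, the sketch below dispatches each incoming request to the currently least-loaded server rather than simply to the next request slot; the server names and connection counts are invented for the example.

```python
def least_loaded(servers):
    """Pick the back-end server with the fewest active connections."""
    return min(servers, key=servers.get)

# Hypothetical back-end pool with current connection counts.
servers = {"web-1": 12, "web-2": 4, "web-3": 9}
for _ in range(3):                 # dispatch three incoming requests
    target = least_loaded(servers)
    servers[target] += 1           # record the newly assigned request
print(servers)                     # {'web-1': 12, 'web-2': 7, 'web-3': 9}
```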
## RELATED WORK
Nguyen Khac Chien et al. (2016) proposed a load balancing algorithm, based on estimating the end of service time, that is used to enhance the performance of the cloud environment. They succeeded in improving the service time and response time experienced by the user.
Ankit Kumar et al. (2016) focused on a load balancing algorithm which optimally distributes incoming jobs among VMs in cloud data centers. The proposed algorithm was implemented using the Cloud Analyst simulator, and its performance was compared, on the basis of response time, with three pre-existing algorithms. In the cloud computing milieu, cloud data centers and the users of cloud computing are globally distributed, so it is a big challenge for cloud data centers to efficiently handle the requests coming from millions of users and to service them in an efficient manner.
S. Yakhchi et al. (2015) discussed how energy consumption has become a major challenge in cloud computing infrastructures. They proposed a novel power-aware load balancing method, named ICA-MMT, to manage power consumption in cloud computing data centers. They exploited the Imperialist Competitive Algorithm (ICA) to detect over-utilized hosts, and then migrate one or several virtual machines from these hosts to other hosts to decrease their utilization. Finally, they treat the remaining hosts as underutilized; where possible, all of their VMs are migrated to other hosts and the underutilized hosts are switched to sleep mode.
Surbhi Kapoor et al. (2015) aimed at achieving high user satisfaction by minimizing the response time of tasks and improving resource utilization through even and fair allocation of cloud resources. The traditional Throttled load balancing algorithm is a good approach for load balancing in cloud computing, as it distributes incoming jobs evenly among the VMs. Its major drawbacks are that it works well only for environments with homogeneous VMs, does not consider the resource-specific demands of tasks, and has the additional overhead of scanning the entire list of VMs every time a task arrives. These issues were addressed by proposing a cluster-based load balancing algorithm which works well in heterogeneous-node environments, considers the resource-specific demands of tasks and reduces scanning overhead by dividing the machines into clusters.
Shikha Garg et al. (2015) aimed to distribute the workload among multiple cloud systems or nodes to get better resource utilization, which is the prominent means of achieving efficient resource sharing and utilization. Load balancing has become a challenging issue in cloud computing systems. To meet users' huge number of demands, a distributed solution is needed, because it is not always possible or cost-efficient to maintain one or more idle services, and servers cannot be assigned to particular clients individually. Cloud computing comprises a large network and components that are present throughout a wide area; hence, there is a need for load balancing across its different servers or virtual machines. They proposed an algorithm that focuses on load balancing to reduce overload or underload situations on virtual machines, which leads to substantially improved cloud performance.
Reena Panwar et al. (2015) described how cloud computing has become an essential buzzword in Information Technology and is the next stage in the evolution of the Internet. The load balancing problem is an important and critical component of adequate operations in a cloud computing system, and poor load balancing can also hinder the rapid development of cloud computing. Many clients from all around the world have been demanding various services at a rapid rate in recent times. Although various load balancing algorithms have been designed that are efficient in request allocation through the selection of correct virtual machines, they proposed a dynamic load management algorithm for effectively distributing the entire incoming request load among the virtual machines.
Mohamed Belkhouraf et al. (2015) aimed to deliver different services for users, such as infrastructure, platform or software, at a reasonable and steadily decreasing cost for clients. To achieve these goals, some matters have to be addressed, mainly using the available resources in an effective way in order to improve overall performance, while taking into consideration the security and availability sides of the cloud. Hence, one of the most studied aspects is load balancing in cloud computing, especially for big distributed cloud systems that deal with many clients and large amounts of data and requests. The proposed approach mainly ensures better overall performance with efficient load balancing, continuous availability and a security aspect.
Lu Kang et al. (2015) improved the weighted least connections scheduling algorithm and designed the Adaptive Scheduling Algorithm Based on Minimum Traffic (ASAMT). ASAMT conducts real-time minimum-load scheduling of node service requests and configures the available idle resources in advance to ensure the QoS requirements of the service. OPNET is adopted to simulate the traffic scheduling algorithm on the cloud computing architecture.
Hiren H. Bhatt et al. (2015) presented a Flexible Load Sharing algorithm (FLS) which introduces a third function. This third function partitions the system into domains and helps in the selection of other nodes present in the same domain. By applying flexible load sharing to particular domains in the distributed system, performance can be improved when any node is in an overloaded situation.
## RESEARCH GAP
Cloud computing thus involves distributed technologies to satisfy a variety of applications and user needs. Sharing resources, software and information via the Internet are the main functions of cloud computing, with the objectives of reduced capital and operational cost, better performance in terms of response time and data processing time, maintaining system stability, and accommodating future modifications to the system. There are various technical challenges that need to be addressed, such as virtual machine migration, server consolidation, fault tolerance, high availability and scalability, but the central issue is load balancing: the mechanism of distributing the load among the various nodes of a distributed system to improve both resource utilization and job response time, while avoiding a situation where some nodes are heavily loaded while others are idle or doing very little work. It also ensures that every processor in the system, or every node in the network, does approximately the same amount of work at any instant of time. To make the final determination, the load balancer retrieves information about the candidate server's health and current workload in order to verify its ability to respond to the request. Load balancing solutions can be divided into software-based load balancers and hardware-based load balancers. Hardware-based load balancers are specialized boxes that include Application Specific Integrated Circuits (ASICs) customized for a specific use; they have the ability to handle high-speed network traffic. Software-based load balancers run on standard operating systems and standard hardware components.
## PROBLEM FORMULATION
Clients request virtual machines, and the cloud broker handles each client request according to the available virtual machines. If a VM is idle, the broker allocates that VM to the user for processing; if no VM is free, incoming client requests are sent into a waiting state until resources are free. The following limitations motivate this work (a hedged sketch addressing points 2–4 follows below):
1. The current algorithm works only in a homogeneous cloud system where all the resources have the same configuration.
2. Cloudlets are not assigned to virtual machines according to their capacities. There may be a scenario where a cloudlet with the highest priority is assigned to the machine with the lowest capacity in the host.
3. The processing capacity (number of processors / MIPS) is not considered when assigning a VM to a job.
4. Every time before allocation, extra overhead is involved in parsing the table of virtual machines from top to bottom.
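Here is a minimal sketch of the kind of capacity-aware matching implied by points 2–4; the VM records, field names and numbers are illustrative only, not the proposed algorithm's actual data structures.

```python
def pick_vm(vms, cloudlet_mips):
    """Capacity-aware allocation: only VMs whose MIPS rating can serve
    the cloudlet are eligible, and the least-loaded eligible VM wins.
    (Point 4 would further call for an index or heap, so that the table
    need not be scanned top to bottom on every allocation.)"""
    eligible = [v for v in vms if v["mips"] >= cloudlet_mips]
    if not eligible:
        return None  # broker holds the request until capacity frees up
    return min(eligible, key=lambda v: v["queued"])

# Illustrative VM table: per-VM capacity and current queue length.
vms = [{"id": 0, "mips": 500, "queued": 2}, {"id": 1, "mips": 1500, "queued": 5}]
print(pick_vm(vms, cloudlet_mips=1000))  # -> {'id': 1, 'mips': 1500, 'queued': 5}
```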
### CLOUD SIM
Cloud service providers charge users depending upon the space or service provided. In R&D [16], it is not always possible
to have the actual cloud infrastructure for performing experiments. For any research scholar, academician or scientist, it is
not feasible to hire cloud services every time and then execute their algorithms or implementations. For the purpose of
research, development and testing, open source libraries are available, which give the feel of cloud services. Nowadays,
in the research market, cloud simulators are widely used by research scholars and practitioners, without the need to pay
any amount to a cloud service provider.
**Tasks performed by cloud simulators:**
The following tasks can be performed with the help of cloud simulators:
- Modelling and simulation of large scale cloud computing data centres.
- Modelling and simulation of virtualised server hosts, with customisable policies for provisioning host resources to VMs.
- Modelling and simulation of energy-aware computational resources.
- Modelling and simulation of data centre [18] network topologies and message-passing applications.
- Modelling and simulation of federated clouds.
- Dynamic insertion of simulation elements, stopping and resuming simulation.
- User-defined policies for allocation of hosts to VMs, and policies for allotting host resources to VMs.
**The scope and features of cloud simulations include:**
- Data centres
- Load balancing
- Creation and execution of cloudlets
- Resource provisioning
- Scheduling of tasks
- Storage and cost factors
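The toy schedule below (plain Python, not the CloudSim API, which is Java) shows how the metrics such simulators report fall out of a cloudlet trace: cloudlet lengths in millions of instructions (MI) divided by the VM's MIPS rating give execution times, and queueing gives waiting and turnaround times. All numbers are made up for illustration.

```python
def simulate(cloudlet_lengths, vm_mips):
    """Run cloudlets back-to-back on one VM and report, per cloudlet,
    (waiting, execution, turnaround): the metrics typically computed
    in CloudSim experiments."""
    clock, results = 0.0, []
    for length in cloudlet_lengths:
        execution = length / vm_mips   # seconds to run at vm_mips
        waiting = clock                # time spent queued before starting
        clock += execution
        results.append((waiting, execution, waiting + execution))
    return results

print(simulate([4000, 1000, 2500], vm_mips=1000))
# [(0.0, 4.0, 4.0), (4.0, 1.0, 5.0), (5.0, 2.5, 7.5)]
```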
## CONCLUSION
This paper is based on cloud computing technology, which has vast and still largely unexplored potential. The capabilities of cloud computing are endless. Cloud computing provides everything to the user as a service, including platform as a service, application as a service and infrastructure as a service. One of the major issues in cloud computing is load balancing, because overloading a system may lead to poor performance, which can make the technology unsuccessful. So there is always a requirement for an efficient load balancing algorithm for efficient utilization of resources. Our paper focuses on the various load balancing algorithms and their applicability in the cloud computing environment.
## REFERENCES
[1] S. Yakhchi, S. Ghafari, M. Yakhchi, M. Fazeli and A. Patooghy, "ICA-MMT: A Load Balancing Method in Cloud
Computing Environment," IEEE, 2015.
[2] S. Kapoor and D. C. Dabas, "Cluster Based Load Balancing in Cloud Computing," IEEE, 2015.
[3] S. Garg, R. Kumar and H. Chauhan, "Efficient Utilization of Virtual Machines in Cloud Computing using Synchronized Throttled Load Balancing," 1st International Conference on Next Generation Computing Technologies (NGCT-2015), pp. 77-80, 2015.
[4] R. Panwar and D. B. Mallick, "Load Balancing in Cloud Computing Using Dynamic Load Management Algorithm,"
IEEE, pp. 773-778, 2015.
[5] M. Belkhouraf, A. Kartit, H. Ouahmane, H. K. Idrissi,, Z. Kartit and M. . E. Marraki, "A secured load balancing
architecture for cloud computing based on multiple clusters," IEEE, 2015.
[6] L. Kang and X. Ting, "Application of Adaptive Load Balancing Algorithm Based on Minimum Traffic in Cloud Computing
Architecture," IEEE, 2015.
[7] N. K. Chien, N. H. Son and H. D. Loc, "Load Balancing Algorithm Based on Estimating Finish Time of Services in
Cloud Computing," ICACT, pp. 228-233, 2016.
[8] H. H. Bhatt and H. A. Bheda, "Enhance Load Balancing using Flexible Load Sharing in Cloud Computing," IEEE, pp.
72-76, 2015.
[9] S. S. Moharana, R. D. Ramesh and D. Powar, "Analysis of Load Balancers in Cloud Computing," International Journal of Computer Science and Engineering (IJCSE), pp. 102-107, 2013.
[10] M. P. V. Patel, H. D. Patel and P. J. Patel, "A Survey On Load Balancing In Cloud Computing," International Journal of Engineering Research & Technology (IJERT), pp. 1-5, 2012.
[11] R. Kaur and P. Luthra, "Load Balancing in Cloud Computing," Int. J. of Network Security, pp. 1-11, 2013.
[12] Kumar Nishant, P. Sharma, V. Krishna, Nitin and R. Rastogi, "Load Balancing of Nodes in Cloud Using Ant Colony Optimization," IEEE, pp. 3-9, 2012.
[13] Y. Xu, L. Wu, L. Guo, Z. Chen, L. Yang and Z. Shi, "An Intelligent Load Balancing Algorithm Towards Efficient Cloud
Computing," AI for Data Center Management and Cloud Computing: Papers from the 2011 AAAI Workshop (WS-11-08),
pp. 27-32, 2011.
[14] A. K. Sidhu and S. Kinger, "Analysis of Load Balancing Techniques in Cloud Computing," International Journal of
Computers & Technology Volume 4 No. 2, March-April, 2013, ISSN 2277-3061, pp. 737-741, 2013.
[15] O. M. Elzeki, M. Z. Reshad and M. A. Elsoud, "Improved Max-Min Algorithm in Cloud Computing," International
Journal of Computer Applications (0975 – 8887), pp. 22-27, 2012.
-----
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.24297/ijct.v16i2.6046?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.24297/ijct.v16i2.6046, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "HYBRID",
"url": "https://cirworld.com/index.php/ijct/article/download/6046/5981"
}
| 2017
|
[
"Review"
] | true
| 2017-04-06T00:00:00
|
[
{
"paperId": "7d49c7f1dc2187077873b8561c0c92277ef17288",
"title": "Load balancing in cloud computing using dynamic load management algorithm"
},
{
"paperId": "88866c141a5cbcf802bf1e64861885928dd2a8db",
"title": "Efficient utilization of virtual machines in cloud computing using Synchronized Throttled Load Balancing"
},
{
"paperId": "9fdd53cb97a70ccb19589475a2b831948f08441c",
"title": "Enhance load balancing using Flexible load sharing in cloud computing"
},
{
"paperId": "68780ce5ce5dbcd23bc084e06d397767d0288d19",
"title": "Cluster based load balancing in cloud computing"
},
{
"paperId": "fbf1a24d685097a963e66d9ab52e6f32b22d3d2b",
"title": "Application of adaptive load balancing algorithm based on minimum traffic in cloud computing architecture"
},
{
"paperId": "e0756b7a204ee52b82ca5a2824c74bd546467534",
"title": "A secured load balancing architecture for cloud computing based on multiple clusters"
},
{
"paperId": "94b1f5eef5f37799a005efd24bafefaef97b7a17",
"title": "ICA-MMT: A load balancing method in cloud computing environment"
},
{
"paperId": "b20ece3866c93c2e8d4fa6ce8cf4db91cf13e4e5",
"title": "Improved Max-Min Algorithm in Cloud Computing"
},
{
"paperId": "8a64a37f426f030266d22eaac0b9ca18bde798e1",
"title": "Load Balancing of Nodes in Cloud Using Ant Colony Optimization"
},
{
"paperId": "d632593703a604d9ab27e9310ecd9a849d405346",
"title": "Analysis of Load Balancing Techniques in Cloud Computing"
},
{
"paperId": "3aa6121d76b8661d7911ebce4fa8f88f20e49def",
"title": "Load balancing algorithm based on estimating finish time of services in cloud computing"
},
{
"paperId": "8583671da2e3a598f8c1842fdf6da053a7f9ec29",
"title": "Load Balancing in Cloud Computing"
},
{
"paperId": "d0af10255083ce3fd567ca67192542c0b36eab53",
"title": "ANALYSIS OF LOAD BALANCERS IN CLOUD COMPUTING"
},
{
"paperId": "49f26118a6728ea07412328873e726560c0b658a",
"title": "An Intelligent Load Balancing Algorithm Towards Efficient Cloud Computing"
}
] | 5,719
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/0101445aec81d2dec8562a83e656ac6ccd633ee2
|
[
"Computer Science"
] | 0.87755
|
Founding Cryptography on Tamper-Proof Hardware Tokens
|
0101445aec81d2dec8562a83e656ac6ccd633ee2
|
IACR Cryptology ePrint Archive
|
[
{
"authorId": "1707396",
"name": "Vipul Goyal"
},
{
"authorId": "1688856",
"name": "Yuval Ishai"
},
{
"authorId": "1695851",
"name": "A. Sahai"
},
{
"authorId": "144002578",
"name": "R. Venkatesan"
},
{
"authorId": "144506573",
"name": "A. Wadia"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"IACR Cryptol eprint Arch"
],
"alternate_urls": null,
"id": "166fd2b5-a928-4a98-a449-3b90935cc101",
"issn": null,
"name": "IACR Cryptology ePrint Archive",
"type": "journal",
"url": "http://eprint.iacr.org/"
}
| null |
# Founding Cryptography on Tamper-Proof Hardware Tokens
Vipul Goyal^1,⋆, Yuval Ishai^2,⋆⋆, Amit Sahai^3,⋆⋆⋆, Ramarathnam Venkatesan^4, and Akshay Wadia^3
1 UCLA and MSR India
vipul.goyal@gmail.com
2 Technion and UCLA
yuvali@cs.technion.ac.il
3 UCLA
{sahai,awadia}@cs.ucla.edu
4 Microsoft Research, India and Redmond
venkie@microsoft.com
**Abstract. A number of works have investigated using tamper-proof**
hardware tokens as tools to achieve a variety of cryptographic tasks.
In particular, Goldreich and Ostrovsky considered the problem of software protection via oblivious RAM. Goldwasser, Kalai, and Rothblum
introduced the concept of one-time programs: in a one-time program, an
honest sender sends a set of simple hardware tokens to a (potentially
malicious) receiver. The hardware tokens allow the receiver to execute a
secret program specified by the sender’s tokens exactly once (or, more
generally, up to a fixed t times). A recent line of work initiated by Katz
examined the problem of achieving UC-secure computation using hardware tokens.
Motivated by the goal of unifying and strengthening these previous
notions, we consider the general question of basing secure computation
on hardware tokens. We show that the following tasks, which cannot be
realized in the “plain” model, become feasible if the parties are allowed
to generate and exchange tamper-proof hardware tokens.
**– Unconditional and non-interactive secure computation.** We show that by exchanging simple stateful hardware tokens, any functionality can be realized with unconditional security against malicious parties. In the case of two-party functionalities f(x, y) which take their inputs from a sender and a receiver and deliver their output to the receiver, our protocol is non-interactive and only requires a unidirectional communication of simple stateful tokens from the
⋆ Research supported in part by a Microsoft Research Graduate Fellowship and the grants of Amit Sahai mentioned below.
⋆⋆ Supported in part by ISF grant 1310/06, BSF grant 2008411, and NSF grants 0830803, 0716835, 0627781.
⋆⋆⋆ Research supported in part from NSF grants 0916574, 0830803, 0627781, and 0716389, BSF grant 2008411, an equipment grant from Intel, and an Okawa Foundation Research Grant.
-----
sender to the receiver. This strengthens previous feasibility results
for one-time programs both by providing unconditional security and
by offering general protection against malicious senders. As is typically the case for unconditionally secure protocols, our protocol is
in fact UC-secure. This improves over previous works on UC-secure
computation based on hardware tokens, which provided computational security under cryptographic assumptions.
**– Interactive secure computation from stateless tokens based on one-way functions.** We show that stateless hardware tokens are sufficient to base general secure (in fact, UC-secure) computation on the existence of one-way functions.
**– Obfuscation from stateless tokens.** We consider the problem
of realizing non-interactive secure computation from stateless tokens
for functionalities which allow the receiver to provide an arbitrary
number of inputs (these are the only functionalities one can hope
to realize non-interactively with stateless tokens). By building on
recent techniques for resettably secure computation, we obtain a
general positive result under standard cryptographic assumptions.
This gives the first general feasibility result for program obfuscation
using stateless tokens, while strengthening the standard notion of
obfuscation by providing security against a malicious sender.
## 1 Introduction
A number of works (e.g. [1,2,3,4,5,6,7,8,9,10,11,12,13]) have investigated using
tamper-proof hardware tokens[1] as tools to achieve a variety of cryptographic
goals. There has been a surge of research activity on this front of late. In particular, the recent work of Katz [9] examined the problem of achieving UC-secure [14]
two party computation using tamper-proof hardware tokens. A number of followup papers [10,11,12] have further investigated this problem. In another separate
(but related) work, Goldwasser et al. [13] introduced the concept of one-time
_programs: in a one-time program, a (semi-honest) sender sends a set of very_
simple hardware tokens to a (potentially malicious) receiver. The hardware tokens allow the receiver to execute a program specified by the sender’s tokens
exactly once (or, more generally, up to a fixed t times). This question is related
to the more general goal of software protection using hardware tokens, which was
first addressed by Goldreich and Ostrovsky [1] using the framework of oblivious
RAM.
The present work is motivated by the observation that several of these pre
vious goals and concepts can be presented in a unified way as instances of one
general goal: realizing secure computation using tamper-proof hardware tokens.
The lines of work mentioned above differ in the types of functionalities being
1 Informally, a tamper-proof hardware token provides the holder of the token with
black-box access to the functionality of the token. We will often omit the words
“tamper-proof” when referring to hardware tokens, but all of the hardware tokens
referred to in this paper are assumed to be tamper-proof.
-----
considered (e.g., non-reactive vs. reactive), the type of interaction between the
parties (interactive vs. non-interactive protocols), the type of hardware tokens
(stateful vs. stateless, simple vs. complex), and the precise security model (standalone vs. UC, semi-honest vs. malicious parties). This unified point of view also
gives rise to strictly stronger notions than those previously considered, which in
turn give rise to new feasibility questions in this area.
The introduction of tamper-proof hardware tokens to the model of secure com
putation, as formalized in [9], invalidates many of the fundamental impossibility
results in cryptography. Taking a step back to look at this general model from a
foundational perspective, we find that a number of natural feasibility questions
regarding secure computation with hardware tokens remain open. In this work
we address several of these questions, focusing on goals that are impossible to
realize in the plain model without tamper-proof hardware tokens:
**– Is it possible to achieve unconditional security for secure computa-**
**tion with hardware tokens? We note that this problem is open even for**
stand-alone security, let alone UC security, and impossible in the plain model
[15]. While in the semi-honest model this question is easy to settle by relying
on unconditional protocols based on oblivious transfer (OT) [16,17,18,19],
this question is more challenging when both parties as well as the tokens
they generate can be malicious. (See Sections 1.2 and 3.1 for relevant discussion.) In the case of stateless tokens, which may be much easier to implement,
security against unbounded adversaries cannot be generally achieved, since
an unbounded adversary can “learn” the entire description of the token. A
natural question in this case is whether stateless tokens can be used
**to realize (UC) secure computation based on the assumption that**
**one-way functions exist.**
Previous positive results for secure two-party computation with hardware
tokens relied either on specific number theoretic assumptions [9] or the existence of oblivious transfer protocols in the plain model [10,11], or alternatively offered weaker notions of security [20].
A related question is: is it possible to obtain unconditionally secure
**one-time programs for all polynomial-time computable functions?**
The previous work of [13] required the existence of one-way functions in order to construct one-time programs.
**– Is it possible to realize non-interactive secure two-party computa-**
**tion with simple hardware tokens? Again, this problem is open[2]** even
for stand-alone security, and impossible in the plain model. Constructions of
oblivious RAM [1] and one-time programs [13] provide partial solutions to
2 All the previous questions were open even without any restriction on the size of the
tokens. In the current and the following questions we restrict the tokens to be simple
in the sense that the size of each token can only depend on the security parameter.
This rules out a trivial solution of building a token which realizes a party in a secure
two-party computation protocol.
-----
this problem; however, in these models the sender is semi-honest.[3] Thus, in
the context of one-time programs we ask: is it possible to achieve one**time programs tolerating a malicious sender? We note that [13] make**
partial progress towards this question by constructing one-time zero knowledge proofs, where the prover can be malicious. However, in the setting of
hardware tokens, the GMW paradigm [21] of using zero knowledge proofs to
compile semi-honest protocols into protocols tolerating malicious behavior
does not apply, since one would potentially need to prove statements about
hardware tokens (as opposed to ordinary NP statements).
**– Which notions of program obfuscation can be realized using simple**
**hardware tokens? Again, this problem can be captured in an elegant way**
within the framework of secure two-party computation, except that here we
need to consider reactive functionalities which may take a single input from
the “sender” and a sequence of (possibly adaptively chosen) inputs from the
“receiver”. Obfuscation can be viewed as a non-interactive secure realization
of such functionalities. While this general goal is in some sense realized by the
construction of oblivious RAM [1] (which employs stateful tokens), several
natural questions remain: Is it possible to achieve obfuscation using
**only stateless tokens? Is it possible to offer a general protection**
**against a malicious sender using stateless or even stateful tokens?**
To illustrate the motivation for the latter question, consider the goal of
obfuscating a poker-playing program. The receiver of the obfuscated program
would like to be assured that the sender did not violate the rules of the game
(and in particular cannot bias the choice of the cards).
**– What are the simplest kinds of tamper-proof hardware tokens**
**needed to realize the above goals? For example, Goldwasser et al. [13] in-**
troduce a very simple kind of stateful token that they call an OTM (one-time
memory) token.[4] An OTM token stores two strings s0 and s1, takes a single
bit b as input, and then outputs sb and stops working (or self-destructs).
Note that an OTM token essentially implements the one-out-of-two string
OT functionality; a subtle distinction between OTM and traditional OT is
discussed in Section 3.1. An even simpler type of token is a bit-OTM token,
where the strings s0 and s1 are restricted to be single bits. Is it possible
**to realize unconditional, non-interactive, or UC-secure two-party**
**computation using only bit-OTM tokens? We note that previous works**
on secure two-party computation with hardware tokens [9,10,11,20] all make
use of more complicated hardware tokens.
3 In these models, the sender is allowed to arbitrarily specify the functionality of the
oblivious RAM or the one-time program, and the receiver knows nothing about this
functionality except an upper bound on its circuit size or running time. (Thus, the
issue of dishonest senders does not arise in these models.) In the present work, by
a one-time program tolerating a malicious sender, we mean that the receiver knows
some partial specification of the functionality – modeled in the usual paradigm of
secure two-party computation.
4 The use of OTM tokens in [13] is motivated in part by the goal of achieving leakage
_resilience, a feature that our constructions using such tokens inherit as well._
-----
**1.1** **Our Results**
We show that the following tasks, which cannot be realized in the “plain”
model, become feasible if the parties are allowed to generate and exchange simple
tamper-proof hardware tokens.
**– Unconditional non-interactive secure computation.** We show that by
exchanging stateful hardware tokens, any functionality can be realized with
_unconditional security against malicious parties. In the case of two-party_
functionalities f (x, y) which take their inputs from a sender and a receiver
and deliver their output to the receiver, our protocol is non-interactive and
only requires a unidirectional communication of tokens from the sender to
the receiver (in case an output has to be given to both parties, adding a reply
from the receiver to the sender is sufficient). This result strengthens previous
feasibility results for one-time programs by providing unconditional security,
by offering general protection against malicious senders, and by using only
bit-OTM tokens.
As is typically the case for unconditionally secure protocols, our protocol
is in fact UC-secure. This improves over previous works on UC-secure computation based on hardware tokens, which provided computational security
under cryptographic assumptions.
See Sections 3.1 and 3.2 for details of this result and a high level overview
of techniques.
**– Interactive secure computation from stateless tokens based on one-way functions.** We show that stateless hardware tokens are sufficient to base general secure (in fact, UC-secure) computation on the existence of one-way functions. One cannot hope for security against unbounded adversaries
with stateless tokens since an unbounded adversary could query the token
multiple times to “learn” the functionality it contains. See Section 4 for
details.
**– Obfuscation from stateless tokens.** We consider the problem of real-
izing non-interactive secure computation from stateless tokens for reactive
functionalities which take a single input from the sender and an arbitrary
sequence of inputs from the receiver (these are the only functionalities one
can hope to realize non-interactively with stateless tokens). By building on
recent techniques for resettably secure computation [22], we obtain a general positive result under standard cryptographic assumptions. This gives the
first general feasibility result for program obfuscation using stateless tokens,
while strengthening the standard notion of obfuscation by providing security
against a malicious sender. We also propose constructions of non-interactive
secure computation for general reactive functionalities with stateful tokens.
See the full version for details.
In all of the above results, the size of each hardware token is either constant or
polynomial in the security parameter, and its code is independent of the inputs
of the parties. Thus, the tokens could theoretically be “mass-produced” before
being used in any particular protocol with any particular inputs.
-----
We stress that in contrast to some previous results along this line (most no
tably, [1,13,20]), our focus is almost entirely on feasibility questions, while only
briefly discussing more refined efficiency considerations. However, in most cases
our stronger feasibility results can be realized while also meeting the main efficiency goals pursued in previous works.
The first two results above are obtained by utilizing previous protocols for
secure computation based on OT [18,19], and thus a main ingredient in our
constructions is showing how to securely implement OT using hardware tokens.
Note that in the case of non-interactive secure computation, additional tools are
needed since the protocols of [18,19] are (necessarily) interactive.
**1.2** **Related Work**
The use of tamper-proof hardware tokens for cryptographic purposes was first
explored by Goldreich and Ostrovsky [1] in the context of software protection
(one-time programs [13] is a relaxation of this goal, generally called program
obfuscation [23]), and by Chaum, Pederson, Brands, and Cramer [2,3,4] in the
context of e-cash. Ishai, Sahai, and Wagner [5] and Ishai, Prabhakaran, Sahai
and Wagner [24] consider the question of how to construct tamper-proof hardware tokens when the hardware itself does not guarantee complete protection
against tampering. Gennaro, Lysyanskaya, Malkin, Micali, and Rabin [6] consider a similar question, when the underlying hardware guarantees that part of
the hardware is tamper-proof but readable, while the other part of the hardware
is unreadable but susceptible to tampering. Moran and Naor [8] considered a
relaxation of tamper-proof hardware called “tamper-evident seals,” and given
number of constructions of graphic tasks based on this relaxed notion. Hofheinz,
M¨uller-Quade, and Unruh [25] consider a model similar to [9] in the context of
UC-secure protocols where tamper-proof hardware tokens (signature cards) are
issued by a trusted central authority.
The model that we primarily build on here is due to Katz [9], who considers
a setting in which users can create and exchange tamper-proof hardware tokens
where malicious users have full control over the functionality realized by each
token they create. The main result of [9] is a general protocol for UC-secure two-party computation using stateful tokens, under the DDH assumption. Chandran,
Goyal, Sahai [10] implement UC-secure two-party computation using stateless
tokens, under the assumption that oblivious transfer protocols exist in the plain
model. Aside from just considering stateless tokens, [10] also introduce a variant
of the model of [9] that allows for the adversary to pass along tokens, and in
general allows the adversary not to know the code of the tokens he produces. We
do not consider this model here. Moran and Segev [11] also implement UC-secure
two-party computation under the same assumption as [10], but using stateful tokens, and only requiring tokens to be passed in one direction. Damgård, Nielsen,
and Wichs [12] show how to relax the “isolation” requirement of tamper-proof
hardware tokens, and consider a model in which tokens can communicate a fixed
number of bits back to its creator. Hazay and Lindell [20] propose constructions of practical protocols for various problems of interest using trusted stateful
-----
tokens. Very recently and independently of our work, practical oblivious transfer
protocols using stateless tokens and relying only on one-way functions were suggested by Kolesnikov [26]. In contrast to the corresponding feasibility result from
our work, these protocols either provide a weaker security guarantee or assume
that tokens are well-formed, but on the other hand they offer better practical
efficiency.
Goldwasser, Kalai, and Rothblum [13] introduced the notion of one-time pro
grams, and showed how to realize it under the assumption that one-way functions
exist, as we have already discussed. They also construct one-time zero-knowledge
proofs under the same assumption. Their results focus mainly on achieving efficiency in terms of the number of tokens needed, and a non-adaptive use of the
tokens by the receiver.
Finally, in a seemingly unrelated work which is motivated by quantum physics,
Buhrman, Christandl, Unger, Wehner and Winter [27] consider the application
of non-local boxes to cryptography. Using non-local boxes, Buhrman et al. show
an unconditional construction for oblivious transfer in the interactive setting. A
non-local box implements a trusted functionality taking input and giving output to both the parties (as opposed to OTM tokens which could be prepared
maliciously). However, the key problem faced by Buhrman et al. is similar to a
problem we face as well: delayed invocation of the non-local box by a malicious
party. Indeed, one can give a simple interactive protocol (omitted here) for building a trusted non-local-box using OTM tokens. This provides an alternative to
the interactive variant of our construction of unconditional secure computation
from hardware tokens described in Section 3.1.
## 2 Preliminaries
In this section we briefly discuss some of the underlying definitions and concepts.
The reader is referred to the full version for the details.
We use the UC-framework of Canetti [28] to capture the general notion of se
cure computation of (possibly reactive) functionalities. Our main focus is on the
two-party case. We will usually refer to one party as a “sender” and to another as
a “receiver”. A non-reactive functionality may receive an input from each party
and deliver output to each party (or only to the receiver). A reactive functionality may have several rounds of inputs and outputs, possibly maintaining state
information between rounds.
Our model for tamper-proof hardware is similar to that of Katz [9]. As we
consider both stateful and stateless tokens, we define different ideal functionalities for the two. By F_wrap^single we denote an ideal functionality that allows a sender
to generate a “one-time token” which can be invoked by its designated receiver.
A one-time token is a stateful token which takes an input from the receiver and
returns a function which is specified in advance by the sender. (Note that if the
sender is malicious, this function can be arbitrary.) After being invoked by the
receiver, such a token “self-destructs”. Thus, the only state these tokens keep is
a flag which indicates whether the token has been run or not. Simple tokens of
this type were used in [13].
-----
We also define an ideal functionality F_wrap^stateless for stateless tokens. Here the
token computes some (deterministic) function specified by the sender, and the
receiver can query the token an unbounded number of times. Note that this
makes stateless tokens useless if the receiver has enough resources to “learn” the
token’s description (either because the token is too small or the receiver is too
powerful). [5]
By a non-interactive protocol we refer to a protocol in which the communi
cation only involves a single batch of tokens, possibly along with an additional
message, communicated from a sender to a receiver.
## 3 Unconditional Non-interactive Secure Computation Using Stateful Tokens
In this section we establish the feasibility of unconditionally non-interactive secure computation based on stateful hardware tokens. As is typically the case for
unconditionally secure protocols, our protocols are in fact UC secure.
This section is organized as follows. In Subsection 3.1 we present an interactive
protocol for arbitrary functionalities, which requires the parties to engage in
multiple rounds of interaction. This gives an unconditional version of previous
protocols for UC-secure computation based on hardware tokens [9,10,11], which
all relied on computational assumptions.[6] This subsection also introduces some
useful building blocks that are used for the non-interactive solution in the next
subsection.
In Subsection 3.2 we consider the case of secure evaluation of two-party func
tionalities which deliver output to only one of the parties (the “receiver”). We
strengthen the previous result in two ways. First, we show that in this case interaction can be completely eliminated: it suffices for the sender to non-interactively
send tokens to the receiver, without any additional communication. Second, we
show that even very simple, constant-size stateful tokens are sufficient for this
purpose. This strengthens previous feasibility results for one-time programs [13]
by providing unconditional security (in fact, UC-security), by offering general
protection against malicious senders, and by using constant-size tokens.
**3.1** **The Interactive Setting**
Unconditionally secure two-party computation is impossible to realize for most
nontrivial functionalities, even with semi-honest parties [29,30]. However, if the
parties are given oracle access to a simple ideal functionality such as Oblivious
5 While the formal definition of this functionality forces a malicious sender to also use
only stateless tokens, this requirement can be relaxed without affecting the security
of our protocols. See Section 4 for details.
6 The work of [11] realizes an unconditionally UC-secure commitment from stateful tokens. This does not directly yield protocols for secure computation without additional
computational assumptions.
-----
Transfer (OT) [16,17], then it becomes possible not only to obtain unconditionally secure computation with semi-honest parties [31,32,33], but also unconditional UC-security against malicious parties [18,19]. This serves as a natural
starting point for our construction.
In the OT-hybrid model, the two parties are given access to the following
ideal OT functionality: the input of P1 (the “sender”) consists of a pair of k-bit
strings (s0, s1), the input of P2 (the “receiver”) is a choice bit c, and the receiver’s
output is the chosen string sc. The natural way to implement a single OT call
using stateful hardware tokens is by having the sender send to the receiver a
token which, on input c, outputs s_c and erases s_{1-c} from its internal state.
The use of such hardware tokens was first suggested in the context of one-time
programs [13]. Following the terminology of [13], we refer to such tokens as OTM
(one-time-memory) tokens.
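As a concrete illustration of this behavior (a hedged sketch, not part of the paper's formal model), an OTM token can be viewed as an object that answers exactly one query and then erases its contents:

```python
class OTMToken:
    """Illustrative one-time-memory (OTM) token: it stores (s0, s1),
    answers a single choice bit, and then self-destructs."""

    def __init__(self, s0: bytes, s1: bytes):
        self._strings = [s0, s1]
        self._used = False              # the only state the token keeps

    def query(self, c: int) -> bytes:
        if self._used:
            raise RuntimeError("token has self-destructed")
        self._used = True
        chosen = self._strings[c]
        self._strings = None            # erase s_{1-c} along with s_c
        return chosen
```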
An appealing feature of OTM tokens is their simplicity, which can also lead
to better resistance against side-channel attacks (see [13] for discussion). This
simplicity feature served as the main motivation for using OTM tokens as a
basis for one-time programs. Another appealing feature, which is particularly
important in our context, is that the OTM functionality does not leave room for
bad sender strategies: whatever badly formed token a malicious sender may send
is equivalent from the point of view of an honest receiver to having the sender
send a well-formed OTM token picked from some probability distribution. (This
is not the case for tokens implementing more complex functionalities, such as
2-out-of-3 OT or the extended OTM functionality discussed below, for which
badly formed tokens may not correspond to any distribution over well-formed
tokens.)
Given the above, it is tempting to hope that our goal can be achieved by
simply taking any unconditionally secure protocol in the OT-hybrid model, and
using OTM tokens to implement OT calls. However, as observed in [13], there
is a subtle but important distinction between the OT-hybrid model and the
OTM-hybrid model: while in the former model the sender knows the point in
the protocol in which the receiver has already made its choice and received its
output, in the latter model invoking the token is entirely at the discretion of the
receiver. This may give rise to attacks in which the receiver adaptively invokes
the OTM tokens “out of order,” and such attacks may have a devastating effect
on the security of protocols even in the case of unconditional security. A more
detailed discussion of such attacks and simple solution ideas (that do not work)
is included in the full version.
**Extending the OTM functionality. To solve the above problem, we will**
realize an extended OTM functionality which takes from the sender a pair of
strings (s0, s1) along with an auxiliary string r, takes from the receiver a choice
bit c, and delivers to the receiver both sc and r. We denote this functionality
by ExtOTM. What makes the ExtOTM functionality nontrivial to realize using
hardware tokens is the need to protect the receiver from a malicious sender who
may try to make the received r depend on the choice bit c while at the same
-----
_time protecting the sender from a malicious receiver who may try to postpone_
its choice c until after it learns r.
Using the ExtOTM functionality, it is easy to realize a UC-style version of the OT functionality which not only delivers the chosen string to the receiver (as in the OTM functionality) but also delivers an acknowledgement to the sender. This flavor of the OT functionality, which we denote by F^OT, can be realized by having the sender invoke ExtOTM with (s0, s1) and a randomly chosen r, and having the receiver send r to the sender. In contrast to OTM, the F^OT functionality allows the sender to force any subset of the OT calls to be completed before proceeding with the protocol. This suffices for instantiating the OT calls in the unconditionally secure protocols from [18,19]. We refer the reader to the full version of this paper for a UC-style definition of the OTM, ExtOTM, and F^OT functionalities.
**Realizing ExtOTM using general[7] stateful tokens.** As discussed above,
we cannot directly use a stateful token for realizing the ExtOTM functionality,
because this allows the sender to correlate the delivered r with the choice bit
_c. On the other hand, we cannot allow the sender to directly reveal r to the_
receiver, because this will allow the receiver to postpone its choice until after
it learns r. In the following we sketch our protocol for realizing ExtOTM using
stateful tokens. This protocol is non-interactive (i.e., it only involves tokens sent
from the sender to the receiver) and will also be used as a building block towards
the stronger results in the next subsection. We refer the reader to the full version
of this paper for a formal description of the protocol and its proof of security.
Below we include a detailed overview.
As mentioned above, at a high level, the challenge we face is to prevent un
wanted correlations in an information-theoretic way for both malicious senders
and malicious receivers. This is a more complex situation than a typical similar
situation where only one side needs to be protected against (c.f. [34,35]). To
accomplish this goal, we make use of secret-sharing techniques combined with
additional token-based “verification” techniques to enforce honest behavior.
Our ExtOTM protocol Π_ExtOTM starts by having the sender break its auxiliary string r into 2k additive shares r^i, and pick 2k pairs of random strings (q_0^i, q_1^i). (Each of the strings q_b^i and r^i is k-bit long, where k is a statistical security parameter.) It then generates 2k OTM tokens, where the i-th token contains the pair (q_0^i ◦ r^i, q_1^i ◦ r^i) (where ‘◦’ is the concatenation operator). Note that a malicious sender may generate badly formed OTM tokens which correlate r^i with the i-th choice of the receiver; we will later implement a token-based verification strategy that convinces an honest receiver that the sender did not cheat (too much) in this step.
Now the receiver breaks its choice bit c into 2k additive shares c_i, and invokes the 2k OTM tokens with these choice bits. Let (q̂^i, r̂^i) be the pair of k-bit strings
7 Here, we make use of general tokens. Later in this section, we will show how to achieve
the ExtOTM functionality (and in fact every poly-time functionality) using only very
simple tokens – just bit OTM tokens.
-----
obtained from the i-th token. Note that if the sender is honest, the receiver can already learn r. We would like to allow the receiver to learn its chosen
string s_c while convincing it that the sender did not correlate all of the auxiliary strings r̂^i with the corresponding choice bits c_i. (The latter guarantee is required to assure an honest receiver that r̂ = ⊕_i r̂^i is independent of c as required.)
This is done as follows. The sender prepares an additional single-use hardware token which takes from the receiver its 2k received strings q̂^i, checks that for each q̂^i there is a valid selection ĉ_i such that q̂^i = q_{ĉ_i}^i (otherwise the token returns ⊥), and finally outputs the chosen string s_{ĉ_1 ⊕ ··· ⊕ ĉ_{2k}}. (All tokens in the protocol can be sent to the receiver in one shot.) Note that the additive sharing of r in the first 2k tokens protects an honest sender from a malicious receiver who tries to learn s_ĉ where ĉ is significantly correlated with r, as it guarantees that the receiver effectively commits to c before obtaining any information about r. The receiver is protected against a malicious sender because even a badly formed token corresponds to some (possibly randomized) ideal-model strategy of choosing (s0, s1).
Finally, we need to provide to the receiver the above-mentioned guarantee that a malicious sender cannot correlate the receiver's auxiliary output r̂ = ⊕_i r̂^i with the choice bit c. To explain this part, it is convenient to assume that both the sender and the badly formed tokens are deterministic. (The general case is handled by a standard averaging argument.) In such a case, we call each of the first 2k tokens well-formed if the honest receiver obtains the same r^i regardless of its choice c_i, and we call it badly formed otherwise. By the additive sharing of c, the only way for a malicious sender to correlate the receiver's auxiliary output with c is to make all of the first 2k tokens badly formed. To prevent this from happening, we require the sender to send a final token which proves that it knows all of the 2k auxiliary strings r̂^i obtained by the receiver. This suffices to convince the receiver that not all of the first 2k tokens are badly formed. Note, however, that we cannot ask the sender to send these 2k strings r^i in the clear, since this would (again) allow a malicious receiver to postpone its choice c until after it learns r.
Instead, the sender generates and sends a token which first verifies that the receiver knows r (by comparing the receiver's input to the k-bit string r) and only then outputs all 2k shares r^i. The verification step prevents correlation attacks by a malicious receiver. The final issue to worry about is that the string r received by the token (which may be correlated with the receiver's choices c_i) does not reveal to the sender enough information to pass the test even if all of its first 2k tokens are badly formed. This follows by a simple information-theoretic argument: in order to pass the test, the token must correctly guess all 2k bits c_i, but this cannot be done (except with 2^{-Ω(k)} probability) even when given arbitrary k bits of information about the c_i.
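To make the flow of Π_ExtOTM easier to follow, here is a hedged Python sketch of an honest execution. It reuses the OTMToken class sketched earlier, XOR stands in for the additive sharing, and the two verification tokens are modeled as plain in-process checks rather than hardware; the parameters are toy values for illustration only.

```python
import secrets
from functools import reduce

K = 16          # share/string length in bytes (stand-in for k bits)
k = 8           # statistical security parameter for this toy run

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def xor_shares(secret: bytes, n: int):
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    shares.append(reduce(xor, shares, secret))   # XOR of all n shares = secret
    return shares

# Sender: share r into 2k pieces and build 2k OTM tokens holding q_b^i || r^i.
r = secrets.token_bytes(K)
r_shares = xor_shares(r, 2 * k)
q = [(secrets.token_bytes(K), secrets.token_bytes(K)) for _ in range(2 * k)]
tokens = [OTMToken(q0 + ri, q1 + ri) for (q0, q1), ri in zip(q, r_shares)]

# Receiver: share the choice bit c into 2k bits and query each token once.
c = 1
c_shares = [secrets.randbelow(2) for _ in range(2 * k - 1)]
c_shares.append(c ^ reduce(lambda x, y: x ^ y, c_shares))
q_hat, r_hat = [], []
for tok, ci in zip(tokens, c_shares):
    out = tok.query(ci)
    q_hat.append(out[:K])
    r_hat.append(out[K:])
assert reduce(xor, r_hat) == r          # receiver reconstructs r

# Final token (modeled as a check): validate each q̂^i, output s_{ĉ1 ⊕ ... ⊕ ĉ2k}.
s = (secrets.token_bytes(K), secrets.token_bytes(K))
c_rec = 0
for qh, (q0, q1) in zip(q_hat, q):
    assert qh in (q0, q1)               # otherwise the token returns ⊥
    c_rec ^= int(qh == q1)
assert c_rec == c
chosen = s[c_rec]                       # the receiver's chosen string s_c
```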
The above protocol shows the following (see full version for proof):
Claim. Protocol Π_ExtOTM realizes ExtOTM with statistical UC-security in the F_wrap^single-hybrid model.
We are now ready to prove the main feasibility result of this subsection.
-----
**Theorem 1 (Interactive unconditionally secure computation using stateful tokens).** Let f be a (possibly reactive) polynomial-time computable functionality. Then there exists an efficient, statistically UC-secure interactive protocol which realizes f in the F_wrap^single-hybrid model.
Proof. We compose three reductions. The protocols of [18,19] realize unconditionally secure two-party (and multi-party) computation of general functionalities using F^OT. A trivial reduction described above reduces F^OT to ExtOTM. Finally, the above Claim reduces ExtOTM to F_wrap^single.
**3.2** **The Non-interactive Setting**
In this subsection we restrict the attention to the case of securely evaluating
two-party functionalities f (x, y) which take an input x from the sender and an
input y from the receiver, and deliver f (x, y) to the receiver. We refer to such
functionalities as being sender-oblivious. Note that here we consider only non_reactive sender-oblivious functionalities, which interact with the sender and the_
receiver in a single round. The reactive case will be discussed in the full version.
Unlike the case of general functionalities, here one can hope to obtain non-interactive protocols in which the sender unidirectionally sends tokens (possibly along with additional messages[8]) to the receiver.
For sender-oblivious functionalities, the main result of this subsection
strengthens the results of Section 3.1 in two ways. First, it shows that a noninteractive protocol can indeed realize such functionalities using stateful tokens.
Second, it pushes the simplicity of the tokens to an extreme, relying only on
OTM tokens which contain pairs of bits.
Below we provide only a high-level description of the construction and the
underlying ideas. We refer the reader to the full version for the full description
of the protocols and their analysis.
**One-time programs. Our starting point is the concept of a one-time pro-**
_gram (OTP) [13]. A one-time program can be viewed in our framework as a_
non-interactive protocol for f (x, y) which uses only OTM tokens, and whose security only needs to hold for the case of a semi-honest sender (and a malicious
receiver).[9] The main result of [13] establishes the feasibility of computationallysecure OTPs for any polynomial-time computable f, based on the existence
of one-way functions. The construction is based on Yao’s garbled circuit technique [37]. Our initial observation is that if f is restricted to the complexity
class NC[1], one can replace Yao’s construction by an efficient perfectly secure
variant (cf. [38]). This yields perfectly secure OTPs for NC[1]. Alternatively, we
8 Since our main focus is on establishing feasibility results, the distinction between the
“hardware” part and the “software” part is not important for our purposes.
9 The original notion of OTP from [13] is syntactically different in that it views f as a
function of the receiver’s input, where a description of f is given to the sender. This
can be captured in our framework by letting f (x, y) be a universal functionality.
-----
also present a general construction of an OTP from any "decomposable randomized encoding" of f. This can be used to derive perfectly secure OTPs for larger
classes of functions (including NL) based on randomized encoding techniques
from [39,38]. See the full version for further details.
A next natural step is to construct unconditionally secure OTPs for any
polynomial-time computable function f . In the full version of this paper, we
describe a direct and self-contained construction which uses the perfect OTPs
for NC^1 described above to build a statistically secure construction for any f.
However, this result will be subsumed by our main result, which can be proved
(in a less self-contained way) without relying on the latter construction.
**Handling malicious senders. As in Section 3.1, the main ingredient in our**
solution is an interactive secure protocol Π for f. The high-level idea of our construction is to obtain a non-interactive protocol for f which emulates Π by having
the sender generate and send a one-time token which computes the sender’s
next message function for each round of Π (a similar idea was used in [13] to
construct one-time proofs). Using the above procedure, we transform Π into a
non-interactive protocol Π _[′]_ which uses very complex one-time tokens (for implementing the next message functions of Π). The next idea is that we can break
each such complex token into simple OTM tokens by using a one-time program
realization of each complex token. More details are provided in the full version.
**From the plain model to the OT-hybrid model. So far we assumed the**
protocol Π to be secure in the plain model. This rules out unconditional security
as well as UC-security, which are our main goals in this section. A natural approach for obtaining unconditional UC-security is to extend the above compiler
to protocols in the OT-hybrid model. This introduces a subtle difficulty which
was already encountered in Section 3.1: the sender cannot directly implement
the OT calls by using OTM tokens. To solve this problem, we build on the
(non-interactive) ExtOTM protocol from Section 3.1. See full version for details.
**From string-OTM to bit-OTMs. As a final optimization, in the full version**
we show how to obtain an unconditionally UC-secure non-interactive implementation of a string-OTM token using bit-OTM tokens.
This yields the following main result of this section:
**Theorem 2 (Non-interactive unconditionally secure computation using bit-OTM tokens).** Let f(x, y) be a non-reactive, sender-oblivious, polynomial-time computable two-party functionality. Then there exists an efficient, statistically UC-secure non-interactive protocol which realizes f in the F_wrap^single-hybrid model in which the sender only sends bit-OTM tokens to the receiver.
## 4 Two-Party Computation with Stateless Tokens
In this section, we again address the question of achieving interactive two-party
computation protocols, but asking the following questions: (1) Can we rely on
-----
_stateless tokens while only assuming that one-way functions exist? (2) Can the_
above be achieved without requiring that the complexity or number of the tokens
grows with the complexity of the function being computed, as was the case in
the previous section? We show how to positively answer both questions: We use
stateless tokens, whose complexity is polynomial in the security parameter, to
implement the OT functionality. Since (as discussed earlier) secure protocols for
any two-party task exist given OT, this suffices to achieve the claimed result.
Before turning to our protocols, we make a few observations about stateless
tokens to set the stage. First, we observe that with stateless tokens, it is always
possible to have protocols where tokens are exchanged only at the start of the
_protocol. This is simply because each party can create a “universal” token that_
takes as input a pair (c, x), where c is a (symmetric authenticated/CCA-secure)
encryption[10] of a machine M, and outputs M(x). Then, later in the protocol,
instead of sending a new token T, a party only has to send the encryption of the
code of the token, and the other party can make use of that encrypted code and
the universal token to emulate having the token T . The proof of security and
correctness of this construction is straightforward.
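The following is a minimal sketch of such a universal token. It uses the Fernet authenticated-encryption scheme from the Python `cryptography` package as the "encrypt-then-MAC" layer; executing decrypted source with `exec` is purely for illustration and is not how a real token would be built.

```python
from cryptography.fernet import Fernet   # authenticated symmetric encryption

class UniversalToken:
    """Illustrative stateless 'universal' token: on input (c, x) it decrypts
    c to a program M (authenticity is checked during decryption) and
    returns M(x)."""

    def __init__(self, key: bytes):
        self._fernet = Fernet(key)

    def run(self, c: bytes, x: bytes) -> bytes:
        source = self._fernet.decrypt(c)   # raises if c was forged or tampered
        env: dict = {}
        exec(source, env)                  # defines M; for illustration only
        return env["M"](x)

# Creator side: instead of shipping a new token, ship an encryption of M's code.
key = Fernet.generate_key()
token = UniversalToken(key)
c = Fernet(key).encrypt(b"def M(x):\n    return x[::-1]\n")
print(token.run(c, b"hello"))              # b'olleh'
```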
**Dealing with dishonestly created stateful tokens. The above discussion,**
however, assumes that dishonest players also only create stateless tokens. If that
is not the case, then re-using a dishonestly created token may cause problems
with security. If we allow dishonest players to create stateful tokens, then a
simple solution is to repeat the above construction and send separate universal
tokens for each future use of any token by the other player, where honest players
are instructed to only use each token once. Since this forces all tokens to be used
in a stateless manner, this simple fix is easily shown to be correct and secure;
however, it may lead to a large number of tokens being exchanged. To deal
with this, as was discussed in the previous section, we observe that by Beaver’s
OT extension result [36] (which requires only one-way functions), it suffices to
implement O(k) OTs, where k is the security parameter, in order to implement
any polynomial number of OTs. Thus, it suffices to exchange only a polynomial
number of tokens even in the setting where dishonest players may create stateful
tokens.
**Convention for intuitive protocol descriptions. In light of the previous**
discussions, in our protocol descriptions, in order to be as intuitive as possible, we
describe tokens as being created at various points during the protocol. However,
as noted above, our protocols can be immediately transformed into ones where
a bounded number of tokens (or in the model where statelessness is guaranteed,
only one token each) are exchanged in an initial setup phase.
**4.1** **Protocol Intuition**
We now discuss the intuition behind our protocol for realizing OT using stateless
tokens; due to the complexity of the protocol, we do not present the intuition
10 An “encrypt-then-MAC” scheme would suffice here.
-----
for the entire protocol all at once, but rather build up intuition for the different
components of the protocol and why they are needed, one component at a time.
For this intuition, we will assume that the sender holds two random strings s0
and s1, and the receiver holds a choice bit b. Note that OT of random strings is
equivalent to OT for chosen strings [41].
**The Basic Idea. Note that, since stateless tokens can be re-used by malicious**
players, if we naively tried to create a token that output sb on input the receiver’s
choice bit b, the receiver could re-use it to discover both s0 and s1. A simple
idea to prevent this reuse would be the following protocol, which is our starting
point:
1. Receiver sends a commitment c = com(b; r) to its choice bit b.
2. Sender sends a token that, on input (b, r), checks if this is a valid decommitment of c, and if so, outputs s_b.
3. Receiver feeds (b, r) to the token it received, and obtains w = s_b.
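A hedged sketch of this starting-point protocol follows; a hash-based commitment com(b; r) = SHA-256(b || r) stands in for a generic commitment scheme, and nothing here yet prevents the reuse and abort problems discussed next.

```python
import hashlib
import secrets

def com(b: int, r: bytes) -> bytes:
    # Hash-based commitment as an illustrative stand-in for a generic scheme.
    return hashlib.sha256(bytes([b]) + r).digest()

class SenderToken:
    """Sketch of the sender's token: on a valid decommitment of c it
    outputs s_b. Being stateless, it could be queried again, which is
    exactly the reuse problem the text is addressing."""

    def __init__(self, commitment: bytes, s0: bytes, s1: bytes):
        self._c = commitment
        self._s = (s0, s1)

    def open(self, b: int, r: bytes) -> bytes:
        if com(b, r) != self._c:
            raise ValueError("invalid decommitment")
        return self._s[b]

# Honest protocol flow:
b, r = 1, secrets.token_bytes(16)
c = com(b, r)                                      # 1. receiver commits to b
token = SenderToken(c, b"secret-0", b"secret-1")   # 2. sender sends the token
w = token.open(b, r)                               # 3. receiver obtains s_b
assert w == b"secret-1"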
**Handling a Malicious Receiver. Similar to the problem discussed in the**
previous section, there is a problem that the receiver may choose not to use
the token sent by the sender until the end of the protocol (or even later!). In
our context, this can be dealt with easily. We can have the sender commit to a
random string π at the start of the protocol, and require that the sender’s token
must, in addition to outputting sb, also output a valid decommitment to π. We
then add a last step where the receiver must report π to the sender. Only upon
receipt of the correct π value does the sender consider the protocol complete.
**Proving Knowledge. While this protocol seems intuitive, we note that it is**
actually insecure for a fairly subtle reason. A dishonest sender could send a token
that on input (b, r), simply outputs (b, r) (as a string). This means that at the
end of the protocol, the dishonest sender can output a specific commitment c,
such that the receiver’s output is a decommitment of c showing that it was a
commitment to the receiver’s choice bit b. It is easy to see that this is impossible
in the ideal world, where the sender can only call an ideal OT functionality.
To address the issue above, we need a way to prevent the sender from creating
a token that can adaptively decide what string it will output. Thinking about it
in a different way, we want the sender to “prove knowledge” of two strings before
he sends his token. We can accomplish this by adding the following preamble to
the protocol above:
1. Receiver chooses a pseudo-random function (PRF) f_γ : {0, 1}^{5k} → {0, 1}^k, and then sends a token that, on input x ∈ {0, 1}^{5k}, outputs f_γ(x).
2. Sender picks two strings x_0, x_1 ∈ {0, 1}^{5k} at random, and feeds them (one at a time) to the token it received, obtaining y_0 and y_1. The sender sends (y_0, y_1) to the receiver.
3. Sender and receiver execute the original protocol above with x_0 and x_1 in place of s_0 and s_1. The receiver checks whether the string w that it obtains from the sender's token satisfies f_γ(w) = y_b, and aborts if not.
-----
The crucial feature of the protocol above is that a dishonest sender is effectively committed to two values x_0 and x_1 after the second step (and in fact the simulator can use the PRF token to extract these values), such that later on it must output x_b on input b, or abort.
Note that a dishonest receiver may learn k bits of useful information about x_0 and x_1 each from its token, but this can be easily eliminated later using the Leftover Hash Lemma (or any strong extractor).
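As an illustration of this preamble (a sketch under the assumption that HMAC-SHA256 may stand in for the PRF f_γ), the receiver's token and the sender's commitment-by-PRF-image can be modeled as follows.

```python
import hashlib
import hmac
import secrets

class PRFToken:
    """Sketch of the receiver's token: evaluates f_gamma(x), modeled here
    with HMAC-SHA256. In the security argument, the simulator observes the
    sender's queries to this token and thereby extracts x0 and x1."""

    def __init__(self, gamma: bytes):
        self._gamma = gamma

    def eval(self, x: bytes) -> bytes:
        return hmac.new(self._gamma, x, hashlib.sha256).digest()

gamma = secrets.token_bytes(32)
prf_token = PRFToken(gamma)                      # 1. receiver sends the token

x0, x1 = secrets.token_bytes(80), secrets.token_bytes(80)   # 5k-bit strings
y0, y1 = prf_token.eval(x0), prf_token.eval(x1)  # 2. sender publishes images

# 3. Later, the receiver checks the string w obtained from the sender's token:
b, w = 1, x1                                     # honest run with choice b = 1
assert prf_token.eval(w) == (y0, y1)[b]          # abort otherwise
```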
**Preventing correlated aborts. A final significant subtle obstacle remains,**
however. A dishonest sender can still send a token that causes an abort to be
correlated with the receiver’s input, e.g. it could choose whether or not to abort
based on the inputs chosen by the receiver (see full version for a discussion of
why this is a problem).
To prevent a dishonest sender from correlating the probability of abort with the receiver's choice, the input b of the receiver is additively shared into bits b_1, ..., b_k such that b_1 + b_2 + ··· + b_k = b. The sender, on the other hand, chooses strings z_1, ..., z_k and r uniformly at random from {0, 1}^{5k}. Then the sender and receiver invoke k parallel copies of the above protocol (which we call the Quasi-OT protocol), where for the i-th execution, the sender's inputs are (z_i, z_i + r), and the receiver's input is b_i. Note that at the end of the protocol, the receiver either holds Σ z_i if b = 0, or r + Σ z_i if b = 1.
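A hedged sketch of this reduction (with XOR playing the role of '+', over byte strings rather than 5k-bit strings) shows how the k Quasi-OT outputs combine; the Quasi-OT executions themselves are modeled by their honest outcomes.

```python
import secrets
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

k, L = 8, 80                       # number of parallel copies; string length
b = 1                              # the receiver's real choice bit

# Receiver: additively share b into bits b_1, ..., b_k with b_1 + ... + b_k = b.
bits = [secrets.randbelow(2) for _ in range(k - 1)]
bits.append(b ^ reduce(lambda x, y: x ^ y, bits))

# Sender: random z_1, ..., z_k and r; the i-th Quasi-OT offers (z_i, z_i + r).
z = [secrets.token_bytes(L) for _ in range(k)]
r = secrets.token_bytes(L)

# Honest outcome of the i-th Quasi-OT with the receiver's input b_i:
received = [z[i] if bits[i] == 0 else xor(z[i], r) for i in range(k)]

# Combining all k outputs yields sum(z_i) if b = 0, and r + sum(z_i) if b = 1.
combined = reduce(xor, received)
expected = reduce(xor, z) if b == 0 else xor(r, reduce(xor, z))
assert combined == expected
```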
Intuitively speaking, this reduction (variants of which were previously used
by, e.g. [34,35]) forces the dishonest sender to make one of two bad choices: If
each token that it sends aborts too often, then with overwhelming probability
at least one token will abort and therefore the entire protocol will abort. On
the other hand, if few of the sender’s tokens abort, then the simulator will be
able to perfectly simulate the probability of abort, since the bits b_i are (k − 1)-wise independent (and therefore all but one of the Quasi-OT protocols can
be perfectly simulated from the receiver’s perspective). We make the receiver
commit to its bits bi using a statistically hiding commitment scheme (which can
be constructed from one-way functions [42]) to make this probabilistic argument
go through.
This completes the intuition behind our protocol. The result of this section is
summarized by the following theorem, whose proof appears in full version.
**Theorem 3 (Interactive UC-secure computation using stateless tokens).** Let f be a (possibly reactive) polynomial-time computable functionality. Then, assuming one-way functions exist, there exists a computationally UC-secure interactive protocol which realizes f in the F_wrap^stateless-hybrid model. Furthermore, the protocol only makes black-box use of the one-way function.
_Oblivious Reactive Functionalities in the Non-Interactive Setting._ In the full
version, we generalize our study of non-interactive secure computation to the
case of reactive functionalities. Roughly speaking, reactive functionalities are
the ones for which in the ideal world, the parties might invoke the ideal trusted
party multiple times and this trusted party might possibly keep state between
-----
different invocations. For the interactive setting (i.e., when the parties are allowed multiple rounds of interaction in the F_wrap-hybrid models) there are standard techniques by which, given a protocol for a non-reactive functionality, a protocol for securely realizing a reactive functionality can be constructed. However, these techniques fail in the non-interactive setting. In the full version, we study which class of reactive functionalities can be securely realized in the non-interactive setting for the case of stateless as well as stateful hardware tokens.
_Acknowledgements. We thank Jürg Wullschleger for pointing out the relevance_
of [27] and for other helpful comments. We thank Guy Rothblum for useful
discussions.
## References
1. Goldreich, O., Ostrovsky, R.: Software protection and simulation on oblivious rams.
J. ACM 43(3), 431–473 (1996)
2. Chaum, D., Pedersen, T.P.: Wallet databases with observers. In: Brickell, E.F. (ed.)
CRYPTO 1992. LNCS, vol. 740, pp. 89–105. Springer, Heidelberg (1993)
3. Brands, S.: Untraceable off-line cash in wallets with observers (extended abstract).
In: Stinson, D.R. (ed.) CRYPTO 1993. LNCS, vol. 773, pp. 302–318. Springer,
Heidelberg (1994)
4. Cramer, R., Pedersen, T.P.: Improved privacy in wallets with observers (extended
abstract). In: Helleseth, T. (ed.) EUROCRYPT 1993. LNCS, vol. 765, pp. 329–343.
Springer, Heidelberg (1994)
5. Ishai, Y., Sahai, A., Wagner, D.: Private circuits: Securing hardware against prob
ing attacks. In: Boneh, D. (ed.) CRYPTO 2003. LNCS, vol. 2729, pp. 463–481.
Springer, Heidelberg (2003)
6. Gennaro, R., Lysyanskaya, A., Malkin, T., Micali, S., Rabin, T.: Algorithmic
tamper-proof (ATP) security: Theoretical foundations for security against hardware tampering. In: Naor, M. (ed.) TCC 2004. LNCS, vol. 2951, pp. 258–277.
Springer, Heidelberg (2004)
7. Hofheinz, D., Müller-Quade, J., Unruh, D.: Universally composable zero-knowledge
arguments and commitments from signature cards. In: Proc. of the 5th Central European Conference on Cryptology MoraviaCrypt 2005, Mathematical Publications
(2005)
8. Moran, T., Naor, M.: Basing cryptographic protocols on tamper-evident seals. In:
Caires, L., Italiano, G.F., Monteiro, L., Palamidessi, C., Yung, M. (eds.) ICALP 2005.
LNCS, vol. 3580, pp. 285–297. Springer, Heidelberg (2005)
9. Katz, J.: Universally composable multi-party computation using tamper-proof
hardware. In: Naor, M. (ed.) EUROCRYPT 2007. LNCS, vol. 4515, pp. 115–128.
Springer, Heidelberg (2007)
10. Chandran, N., Goyal, V., Sahai, A.: New constructions for UC secure computation
using tamper-proof hardware. In: Smart, N.P. (ed.) EUROCRYPT 2008. LNCS,
vol. 4965, pp. 545–562. Springer, Heidelberg (2008)
11. Moran, T., Segev, G.: David and Goliath commitments: UC computation
for asymmetric parties using tamper-proof hardware. In: Smart, N.P. (ed.)
EUROCRYPT 2008. LNCS, vol. 4965, pp. 527–544. Springer, Heidelberg (2008)
-----
12. Damgård, I., Nielsen, J.B., Wichs, D.: Isolated proofs of knowledge and isolated
zero knowledge. In: Smart, N.P. (ed.) EUROCRYPT 2008. LNCS, vol. 4965,
pp. 509–526. Springer, Heidelberg (2008)
13. Goldwasser, S., Kalai, Y.T., Rothblum, G.: One-time programs. In: Wagner, D.
(ed.) CRYPTO 2008. LNCS, vol. 5157, pp. 39–56. Springer, Heidelberg (2008)
14. Canetti, R.: Universally composable security: A new paradigm for cryptographic
protocols. In: FOCS, pp. 136–145 (2001)
15. Chor, B., Kushilevitz, E.: A zero-one law for boolean privacy. SIAM J. Discrete
Math. 4(1), 36–47 (1991)
16. Rabin, M.O.: How to exchange secrets with oblivious transfer (1981)
17. Even, S., Goldreich, O., Lempel, A.: A randomized protocol for signing contracts.
Commun. ACM 28(6), 637–647 (1985)
18. Kilian, J.: Founding cryptography on oblivious transfer. In: STOC, pp. 20–31
(1988)
19. Ishai, Y., Prabhakaran, M., Sahai, A.: Founding cryptography on oblivious transfer
- efficiently. In: Wagner, D. (ed.) CRYPTO 2008. LNCS, vol. 5157, pp. 572–591.
Springer, Heidelberg (2008)
20. Hazay, C., Lindell, Y.: Constructions of truly practical secure protocols using standard smartcards. In: Ning, P., Syverson, P.F., Jha, S. (eds.) ACM Conference on
Computer and Communications Security, pp. 491–500. ACM, New York (2008)
21. Goldreich, O., Micali, S., Wigderson, A.: How to play any mental game or a com
pleteness theorem for protocols with honest majority. In: STOC, pp. 218–229 (1987)
22. Goyal, V., Sahai, A.: Resettably secure computation. In: Joux, A. (ed.)
EUROCRYPT 2009. LNCS, vol. 5479, pp. 54–71. Springer, Heidelberg (2009)
23. Barak, B., Goldreich, O., Impagliazzo, R., Rudich, S., Sahai, A., Vadhan, S.P.,
Yang, K.: On the (im)possibility of obfuscating programs. In: Kilian, J. (ed.)
CRYPTO 2001. LNCS, vol. 2139, pp. 1–18. Springer, Heidelberg (2001)
24. Ishai, Y., Prabhakaran, M., Sahai, A., Wagner, D.: Private circuits ii: Keeping
secrets in tamperable circuits. In: Vaudenay, S. (ed.) EUROCRYPT 2006. LNCS,
vol. 4004, pp. 308–327. Springer, Heidelberg (2006)
25. Hofheinz, D., Müller-Quade, J., Unruh, D.: Universally composable zero-knowledge
arguments and commitments from signature cards. In: 5th Central European Conference on Cryptology (2005), http://homepages.cwi.nl/~hofheinz/card.pdf
26. Kolesnikov, V.: Truly efficient string oblivious transfer using resettable tamper
proof tokens. In: Micciancio, D. (ed.) TCC 2010. LNCS, vol. 5978. Springer,
Heidelberg (2010)
27. Buhrman, H., Christandl, M., Unger, F., Wehner, S., Winter, A.: Implications
of superstrong nonlocality for cryptography. Proceedings of the Royal Society
A 462(2071), 1919–1932 (2006)
28. Canetti, R.: Universally composable security: A new paradigm for cryptographic
protocols. In: FOCS, pp. 136–145 (2001)
29. Ben-Or, M., Goldwasser, S., Wigderson, A.: Completeness theorems for non
cryptographic fault-tolerant distributed computation (extended abstract). In:
STOC, pp. 1–10 (1988)
30. Kushilevitz, E.: Privacy and communication complexity. SIAM J. Discrete
Math. 5(2), 273–284 (1992)
31. Goldreich, O., Vainish, R.: How to solve any protocol problem - an efficiency im
provement. In: Pomerance, C. (ed.) CRYPTO 1987. LNCS, vol. 293, pp. 73–86.
Springer, Heidelberg (1988)
-----
32. Galil, Z., Haber, S., Yung, M.: Cryptographic computation: Secure fault-tolerant
protocols and the public-key model. In: Pomerance, C. (ed.) CRYPTO 1987. LNCS,
vol. 293, pp. 135–155. Springer, Heidelberg (1988)
33. Goldreich, O.: Foundations of Cryptography: Basic Applications. Cambridge Uni
versity Press, Cambridge (2004)
34. Kilian, J.: Uses of Randomness in Algorithms and Protocols. MIT Press, Cambridge
(1990)
35. Lindell, Y., Pinkas, B.: An efficient protocol for secure two-party computation
in the presence of malicious adversaries. In: Naor, M. (ed.) EUROCRYPT 2007.
LNCS, vol. 4515, pp. 52–78. Springer, Heidelberg (2007)
36. Beaver, D.: Correlated pseudorandomness and the complexity of private computa
tions. In: STOC, pp. 479–488 (1996)
37. Yao, A.: How to generate and exchange secrets. In: FOCS, pp. 162–167 (1986)
38. Ishai, Y., Kushilevitz, E.: Perfect constant-round secure computation via perfect
randomizing polynomials. In: Widmayer, P., Triguero, F., Morales, R., Hennessy,
M., Eidenbenz, S., Conejo, R. (eds.) ICALP 2002. LNCS, vol. 2380, pp. 244–256.
Springer, Heidelberg (2002)
39. Feige, U., Kilian, J., Naor, M.: A minimal model for secure computation (extended
abstract). In: STOC, pp. 554–563 (1994)
40. Brassard, G., Crépeau, C., Santha, M.: Oblivious transfers and intersecting codes.
IEEE Transactions on Information Theory 42(6), 1769–1780 (1996)
41. Beaver, D., Goldwasser, S.: Multiparty computation with faulty majority (extended
announcement). In: FOCS, pp. 468–473. IEEE, Los Alamitos (1989)
42. Haitner, I., Reingold, O.: Statistically-hiding commitment from any one-way func
tion. In: STOC, pp. 1–10 (2007)
43. Anderson, W.E.: On the secure obfuscation of deterministic finite automata. Cryp
tology ePrint Archive, Report 2008/184 (2008)
44. Canetti, R., Goldreich, O., Goldwasser, S., Micali, S.: Resettable zero-knowledge
(extended abstract). In: STOC, pp. 235–244 (2000)
-----
|
{
"disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references', 'abstract'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1007/978-3-642-11799-2_19?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1007/978-3-642-11799-2_19, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "BRONZE",
"url": "https://link.springer.com/content/pdf/10.1007%2F978-3-642-11799-2_19.pdf"
}
| 2,010
|
[
"JournalArticle"
] | true
| 2010-02-09T00:00:00
|
[] | 14,854
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/01044c3e265ad414aec8cf608c24e1d1cf406077
|
[
"Computer Science"
] | 0.916699
|
Hardened Bloom Filters, with an Application to Unobservability
|
01044c3e265ad414aec8cf608c24e1d1cf406077
|
Ann. UMCS Informatica
|
[
{
"authorId": "50198404",
"name": "Nicolas Bernard"
},
{
"authorId": "1953911",
"name": "Franck Leprévost"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": null,
"id": null,
"issn": null,
"name": null,
"type": null,
"url": null
}
|
Classical Bloom filters may be used to elegantly check if an element e belongs to a set S, and, if not, to add e to S. They do not store any data and only provide boolean answers regarding the membership of a given element in the set, with some probability of false positive answers. Bloom filters are often used in caching system to check that some requested data actually exist before doing a costly lookup to retrieve them. However, security issues may arise for some other applications where an active attacker is able to inject data crafted to degrade the filters’ algorithmic properties, resulting for instance in a Denial of Service (DoS) situation. This leads us to the concept of hardened Bloom filters, combining classical Bloom filters with cryptographic hash functions and secret nonces. We show how this approach is successfully used in the TrueNyms unobservability system and protects it against replay attacks.
|
Annales UMCS Informatica AI XII, 4 (2012) 11–22
DOI: 10.2478/v10065-012-0018-y
### Hardened Bloom Filters, with an Application to Unobservability
Nicolas Bernard[1][∗], Franck Leprévost[1][†]
1 LACS, University of Luxembourg
162 a, Avenue de la Faïencerie, L-1511 Luxembourg
Abstract – Classical Bloom filters may be used to elegantly check if an element e belongs to a set
S, and, if not, to add e to S. They do not store any data and only provide boolean answers regarding
the membership of a given element in the set, with some probability of false positive answers. Bloom
filters are often used in caching systems to check that some requested data actually exist before doing
a costly lookup to retrieve them. However, security issues may arise for some other applications where
an active attacker is able to inject data crafted to degrade the filters’ algorithmic properties, resulting
for instance in a Denial of Service (DoS) situation. This leads us to the concept of hardened Bloom
filters, combining classical Bloom filters with cryptographic hash functions and secret nonces. We
show how this approach is successfully used in the TrueNyms unobservability system and protects it
against replay attacks.
### 1 Introduction
Many applications in computer science depend on the result of the following problem:
check if an element e belongs to a set S, and, if it does not, add e to S. Depending on
the application we have in mind, the "match" or "no match" answer will usually lead
to additional processing, like for instance in the following two examples:
(1) Filtering duplicated packets on a network connection: on a network connection, it can happen that a packet is duplicated. The destination host then
receives it twice, and so does the application. This is for instance the case on a
UDP connection.
∗Nicolas.Bernard@uni.lu
†Franck.Leprevost@uni.lu
-----
(2) Counting the number of distinct elements in a collection: each element is matched against the set of elements already seen. If it is not in
this set, a counter is incremented and the element is added to the set.
Bloom filters [1] address these problems in an elegant manner. A Bloom filter is
a probabilistic data structure that represents a finite set S without storing
the actual elements of the set S. Among their main properties, Bloom filters have
small footprints and a fast lookup time, they allow elements to be added quickly to the represented
set S, and the addition of an element cannot fail due to the data structure being
“full”. Bloom filters do not store any data and can only provide boolean answers on
the membership of a given element in the set, with some probability of false positive
answers. They are often used in caching systems to check that some requested data
actually exist before doing a costly lookup to retrieve them.
In the situation of example (1) above, a Bloom filter at the receiving end could
be used to drop the duplicated packets: packets that do not match are processed
(i.e., used by the application) and added to the set, while packets that do match are
considered duplicates and discarded.
In the situation of example (2), each element of the collection is matched against
a Bloom filter representing an “already accounted for” set. While the result is only probabilistic in nature, its complexity is O(m), whereas the complexity of a classical algorithm remains O(m log m), where m is the number of elements of the collection.
This being said, security issues may be raised for many applications, leading e.g. to
Denial of Service (DoS) attacks. The purpose of this article is to provide a solution
to these issues by introducing hardened Bloom filters. Moreover, we show their use in
the seminal example of the TrueNyms protocol [2], which raised our interest in Bloom
filters and motivated the present contribution.
This article is organized as follows: in section 2, we briefly explain the underlying
concept of a classical Bloom filter. In section 3, we describe the security issues that an
external malicious party may exploit, leading to the construction of hardened Bloom
filters. In section 4, we briefly describe the TrueNyms unobservability system, and
describe how to efficiently use hardened Bloom filters to prevent replay attacks on this
system. We conclude this article with some further ideas for the enhancements of our
approach, which we plan to develop in due time.
### 2 Classical Bloom filters
A Bloom filter (in the classical understanding as defined in [1]) is a probabilistic
data structure representing a finite set S. It consists of a bit array A of size 2^n (in
practice n is small, say n < 25), and k distinct hash functions (H_j)_{1≤j≤k} such that

H_j(data) = i_j ∈ [0, 2^n − 1].    (1)
-----
In other words, i_j is an index of A, depending on the data considered. Moreover,
k is also small: its chosen value — in a first approach — depends on the allowed
probabilistic "false-positive" occurrences according to formula (2) below. The discussion
about the (lack of) requirements on hash functions in the context of classical Bloom
filters is addressed in part 2.2.
2.1 Construction of S and A
Initially, S = ∅ and all the bit values of A are equal to 0. An element e is added to S by
setting to 1 all the positions of the array A indexed by the hash values i_1 = H_1(e), i_2 = H_2(e), ..., i_k = H_k(e):

∀j ∈ [1, k], A[H_j(e)] ← 1.

The test to determine if an element e is already in S is performed by generating the
indices for this element. An element e is then probably in S if, and only if:

∀j ∈ [1, k], A[H_j(e)] = 1.
The probability in the previous sentence applies only to the "if" part. Indeed, there
can be values i, j, e, e′ s.t.

H_i(e) = H_j(e′).

In other words, an index in the array A could be “part of” multiple elements of S. As
a consequence, there is no way to remove elements from S and, once set to 1, a value
A[i] is never reset to 0. It implies in particular that, once added, an element belonging
to S is always found if matched against the filter.
Now, with some probability, the filter can represent an element e as belonging to S
although it is not the case: it may indeed happen that all the indices corresponding to
e are equal to 1, while e ∉ S. Such a “false-positive” occurs with a probability:

(1 − (1 − 1/2^n)^{km})^k ≈ (1 − e^{−km/2^n})^k,    (2)

where m is the number of elements in S.
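To make the construction concrete, here is a minimal Python sketch (our own illustration, not code from the paper; the parameter values and the trick of salting a single hash with the function index j are arbitrary stand-ins for the k hash functions):

```python
import hashlib
import math

class BloomFilter:
    """Classical Bloom filter: a bit array A of size 2**n and k hash functions."""

    def __init__(self, n: int = 20, k: int = 4):
        self.n, self.k = n, k
        self.bits = bytearray(2 ** n)   # one byte per bit, for simplicity
        self.m = 0                      # number of elements added to S

    def _indices(self, data: bytes):
        # k hash functions derived by salting one hash with the index j;
        # section 2.2 discusses what is actually required of these functions.
        for j in range(self.k):
            h = hashlib.md5(bytes([j]) + data).digest()
            yield int.from_bytes(h[:4], "big") % (2 ** self.n)

    def add(self, data: bytes):
        for i in self._indices(data):
            self.bits[i] = 1            # A[H_j(e)] <- 1; never reset to 0
        self.m += 1

    def __contains__(self, data: bytes) -> bool:
        return all(self.bits[i] for i in self._indices(data))

# Duplicate filtering as in example (1) of the introduction.
bf = BloomFilter()
for pkt in (b"pkt-1", b"pkt-2", b"pkt-1"):
    if pkt in bf:
        print("duplicate, dropped:", pkt)
    else:
        bf.add(pkt)

# Approximate false-positive probability from formula (2).
print((1 - math.exp(-bf.k * bf.m / 2 ** bf.n)) ** bf.k)
```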
2.2 Non-cryptographic hash functions
A hash function as used in the context of classical Bloom filters a priori differs
strongly from a hash function used in the context of cryptology. It is a function¹:

H : ℕ −→ [0, 2^n − 1]

with good statistical distribution properties for given “normal data”, as described for
instance in section 6.4 of [3]. In particular, these hash functions usually lack the
¹ We consider here any finite word on any finite alphabet as mappable to an element of ℕ, and
that distinct words lead to distinct elements of ℕ.
-----
compression property (see [4, section 9.2.1]) that is a mandatory and important part
of a cryptographic hash function.
Such a hash function can be very simple, and it usually is, in order to be fast. For
instance, it may consist of a modular division by some prime, chosen according to
the needed size of the image. In fact, since the hash function does not need to consider
all the data given but only a suitable part to obtain a correct distribution, we can even
construct hash functions with complexity in O(1).
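As a toy illustration of such a cheap function (our own example, not from the paper), the hash below reads only a fixed-size prefix of the input and reduces it modulo a prime, so its cost is independent of the data size; picking k distinct primes yields k distinct functions, as item (1) below notes. A real implementation would of course sample bytes where "normal data" actually differ.

```python
# Toy non-cryptographic hashes in the spirit of section 2.2: modular division
# of a fixed-size prefix, so the cost does not grow with the size of the data.
PRIMES = (1048573, 999983, 65521)   # well-known primes, all below 2**20

def make_hash(prime: int):
    def h(data: bytes) -> int:
        prefix = int.from_bytes(data[:8].ljust(8, b"\0"), "big")
        return prefix % prime       # an index in [0, prime - 1]
    return h

H1, H2, H3 = (make_hash(p) for p in PRIMES)
print(H1(b"a packet"), H2(b"a packet"), H3(b"a packet"))
```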
Consequences are multiple, but here we will only note the following three:
(1) Recall that we need k distinct hash functions for the classical Bloom filters.
We can create many different functions with similar properties by changing a
parameter in one fixed scheme. For instance, in a scheme based on modular
division, the choice of k distinct appropriate primes leads to k distinct hash
functions.
(2) It is possible to find preimages: it means that given an i, it is possible to find
D_x, D_y, ... such that H(D_x) = H(D_y) = ... = i. Indeed, many simple hash
functions can be easily inverted. Anyway, given the usual size of the image
set, it would be easy to find such values by brute force.
(3) It is usually even possible, given a few such hash functions H_1, ..., H_j and
corresponding indices i_1, ..., i_j, to find a common preimage D such that

H_1(D) = i_1, ..., H_j(D) = i_j.    (3)
### 3 Security issues and Hardened Bloom filters
As mentioned in the introduction (section 1), security issues may be raised in some
applications. For instance, assume the elements to be matched can be tampered with by
an external malicious party, say Mallory. Recall then that the probability given in
equation (2) applies to “ordinary” elements. Since the hash functions H_j are a priori
non-cryptographic ones, Mallory can craft special elements that will fill A with bits set
to 1 much faster than random data would². Of course, once all the bits of the array A
are equal to 1, every element tried against the filter will match, which results in a denial
of service (DoS) attack in the cases given beforehand: all the elements are considered
as already in the set, even when they are not. So, Bloom filters must be hardened to
prevent such attacks if Mallory controls the incoming data.
If an attacker can inject as many elements as he wants to, the battle is lost: even
if he is restricted to the probability given by equation (2), with m growing, the
probability will converge to 1. However, such a case is rare, and most of the time the
attacker will find himself unable to add more than a fixed number of elements per time
unit. Here, it is possible to fight back and design appropriate countermeasures.
² The irony being that, while collisions are usually a sign of weakness in cryptographic hash
functions, here Mallory has to find non-colliding elements in order to set to 1 all the bits of the array
A as fast as possible.
-----
3.1 Protection against index selection attacks
To prevent Mallory from just deciding upon a set of indices and creating suitable
data to send, the first idea is to use Bloom filters where the k hash functions have some
cryptographic properties.
Notably, it must be hard — with no faster way than brute force — to find preimages,
to ensure that the attacker will not be able in practice to find a common preimage as
defined in equation (3). With such hash functions, it would be far harder for Mallory to
find non-colliding packets than simply deciding which bits in the array A he wants to
set and generating the corresponding data.
The natural choice for a hash function with such cryptographic properties is to take
a cryptographic hash function H^c [4, page 323]. Note however that the properties of
a cryptographic hash function are a superset of what is actually needed: we comment
on these aspects in section 5.
3.2 k cryptographic hash functions?
The first difficulty is to find k such functions. As we have seen in section 2.2, it is
easy to have many non-cryptographic hash functions. Unfortunately, even for a small
relevant k, we cannot find k different standard cryptographic hash functions. The list
of such hash functions mainly consists of MD5, SHA-1, and the SHA-2 and RIPEMD families
[4, 5], and this list can hardly be extended much further.
Nonetheless, there are multiple ways to solve this issue:
(1) Conceptually, the easiest way is probably to add the index of the function
before the data. In other words, given one cryptographic hash function H^c,
and using the | symbol for concatenation, we define the k hash functions as

H_i(data) := H^c(i | data), 1 ≤ i ≤ k.

Some variants of this method can be imagined. For instance, the index could
be used in the initialization vector of the compression function of the hash
function. However, this proposal only makes the implementation harder, as
specifying this vector is usually not possible through the API of the cryptographic libraries providing such functions.
(2) One can also think of using the iterated application of the cryptographic hash
function H^c to produce the (H_i)_{1≤i≤k}. More precisely, the k hash functions
are defined as

H_i(data) := (H^c)^i(data), 1 ≤ i ≤ k,

with

(H^c)^i(data) = H^c(data) if i = 1, and (H^c)^i(data) = H^c((H^c)^{i−1}(data)) if 2 ≤ i ≤ k.
(3) Another way is to notice that the fingerprint returned by a cryptographic
hash function is a lot longer than an index for the bit array of the Bloom
filter. Indeed, the shortest fingerprints are at least 128 bits long, while it is
unusual for an index to be more than 25 bits long, as noted in section 2. The
idea then is to see the fingerprint provided by a cryptographic hash
function as the concatenation of l indices:

H^c(data) = i_1 | i_2 | i_3 | ... | i_l | r,

where r is an unused “remainder” if the size of the fingerprint is not a multiple
of the size of an index, and the i_j are the indices of equation (1). Of course, it may
happen that l < k; this scheme would then need to be combined with one of
the previous two to generate the k required indices. However, as there are
standard hash functions with fingerprint sizes of at least 512 bits, it should
be possible to use it alone in most cases (a sketch of this digest-splitting
approach appears after this list).
(4) Another possibility that we will not detail here would be to construct custom
hash functions using block ciphers [4, section 9.4.1].
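The digest-splitting scheme (3) can be sketched as follows (our own illustration; the choice of SHA-512 and of 20-bit indices is arbitrary):

```python
import hashlib

N_BITS = 20                                   # index width: i_j in [0, 2**20 - 1]

def indices_from_fingerprint(data: bytes, k: int):
    """Scheme (3): slice one 512-bit fingerprint into k indices of N_BITS bits."""
    digest = hashlib.sha512(data).digest()    # one call to the hash function
    stream = int.from_bytes(digest, "big")
    l = (8 * len(digest)) // N_BITS           # here l = 25; the leftover bits are r
    assert k <= l, "if k > l, combine with scheme (1) or (2)"
    mask = 2 ** N_BITS - 1
    return [(stream >> (j * N_BITS)) & mask for j in range(k)]

print(indices_from_fingerprint(b"some packet", k=4))
```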
Security-wise, there is no evidence that one of the previous schemes has some obvious
advantage over the others. Let us then compare them on their speed. The algorithmic
complexity of a cryptographic hash function is at least in O(s), where s is the size
of the data to be hashed. To simplify, assume that the algorithmic complexity of the
cryptographic hash function is indeed s; the complexity of the different schemes would
then be:
(1) ks for the first one, as the H^c function is called k times on data of size s + ε
(ε being the size of the index added before the actual data).
(2) s + (k − 1)f for the second one, where f is the size of a fingerprint: H^c is
called once on data of size s, then k − 1 times on the fingerprint of size f
generated at the previous step. The second scheme is hence faster than the
first one if the data size is large.
(3) The third one needs only one call to the cryptographic hash function if l ≥ k.
If l < k, the exact complexity depends on the combination with one of the
other schemes, but will be reduced compared to it anyway.
The third scheme then seems to be the best choice, since it is the fastest one. It must
be noted, however, that a cryptographic hash function is in any case much slower than a
non-cryptographic one. To take an example, the number of operations to hash data
of size s can be as low as 1 for a non-cryptographic hash function as described in section 2.2,
while it would be of the order of 160s for a typical cryptographic hash function like
RIPEMD-160 [6].
3.3 Protection against offline attacks
Let us recall that the hash functions considered here give a value that is an index
for the array A, i.e., a value belonging to [0, 2^n − 1], with n < 25, and hence preimages
can be found by brute force. Moreover, because Bloom filters are deterministic (and
the different schemes presented in section 3.2 do not change this), the same input will fill two
filters in the same way. Mallory can then perform the following offline DoS attack:
-----
Brute-force the hash functions to create a set of elements that would fill the Bloom
filter faster than “normal” data would. Even if he is no longer able to select indices
and craft data to set them specifically, he can still generate a lot of data packets and
send the group of them that sets the greatest number of bits in the array A. While
such elements would have some collisions on indices, they would still fill the filter a lot
faster than the statistical probability predicts.
Let us summarize the situation: to ensure protection against index selection
attacks (seen in part 3.1), we rely on Bloom filters using cryptographic hash functions.
Now, to furthermore ensure protection against an offline attack as described above,
we add the utilization of secret nonces. A nonce is a random value, which in our context
is generated at the instantiation of a Bloom filter and is then used as a key so that the
cryptographic hash functions are in fact replaced by MACs (or keyed hash functions,
see [4, page 325]). Instead of giving all details, we provide here the conceptual idea,
which amounts to specializing H^c for each Bloom filter F in something like

H^{c,F}(data) := H^c(n_F | data),    (4)

where n_F is the nonce used for filter F.
With such a scheme, Mallory is blinded: he is not able to know the effect of an
element and hence cannot craft special elements anymore. As a consequence, an
active DoS attack by Mallory against Bloom filters hardened this way does not work,
provided that Mallory is only able to add a limited number of elements per second.
The main drawback is that it is no longer possible to take the union of two sets
by using a bit-wise OR operation on the arrays of the corresponding Bloom filters
unless they are using the same nonce. For most applications, this however should not
be a significant issue.
We define here a hardened Bloom filter as a classical Bloom filter using
cryptographically enhanced hash functions together with a secret nonce, addressing index-selection attacks as well as offline attacks.
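As a minimal sketch of such a hardened filter (ours, not the TrueNyms code; the use of HMAC-SHA-512 as the keyed hash and the reuse of the digest-splitting idea from section 3.2 are illustrative choices):

```python
import hashlib
import hmac
import os

class HardenedBloomFilter:
    """Bloom filter whose hash is keyed with a secret per-filter nonce n_F."""

    def __init__(self, n: int = 20, k: int = 4):
        self.n, self.k = n, k
        self.bits = bytearray(2 ** n)
        self.nonce = os.urandom(32)   # n_F: secret, fresh for every filter instance

    def _indices(self, data: bytes):
        # H^{c,F}(data) = MAC(n_F, data), as in equation (4): without n_F,
        # Mallory cannot predict which bits an element sets, online or offline.
        digest = hmac.new(self.nonce, data, hashlib.sha512).digest()
        stream = int.from_bytes(digest, "big")
        mask = 2 ** self.n - 1
        return [(stream >> (j * self.n)) & mask for j in range(self.k)]

    def add(self, data: bytes):
        for i in self._indices(data):
            self.bits[i] = 1

    def __contains__(self, data: bytes) -> bool:
        return all(self.bits[i] for i in self._indices(data))
```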
### 4 Hardened Bloom filters and TrueNyms
We now describe how such hardened Bloom filters are used in the TrueNyms unobservability system [7, 8, 2] as a protection against some forms of active traffic analysis.
Let us first recall what TrueNyms[3] is.
³ We partially rely on [8] for the wording of some paragraphs of subsections 4.1 and 4.2, as well
as for the figures 1 and 2.
-----
4.1 The TrueNyms unobservability system
The TrueNyms system allows Alice and Bob to communicate over an IP network
without any observer knowing it. More precisely, when parties are using TrueNyms for
their communications, an observer, as powerful as he may be, is unable to know who
they are communicating with. He is unable to know when a communication occurs.
He is even unable to know if a communication occurs at all.
This TrueNyms system is a peer-to-peer overlay network based on Onion-Routing [9,
10], to which it adds protection against all forms of traffic analysis, including replay
attacks. Its performance has been experimentally validated and is appropriate for most uses
(e.g., Web browsing and other HTTP-based protocols like RSS, instant messaging, file
transfers, audio and video streaming, remote shell, ...), but the usability of applications
requiring a very low end-to-end latency (like, for instance, telephony over IP) may be
degraded.
Briefly, Onion-Routing transmits data through nested encrypted tunnels established
through multiple relays R1, R2, etc. (see Figure 1 — in the following, a node denotes
either a relay or Alice or Bob). These relays agree to take part in an anonymity
system, but are not assumed to be trusted. Indeed, some of them can cooperate with a
passive observer Eve or with an active observer Mallory. Relays see only enciphered
traffic and know only the previous and next nodes on the route. They do not know if
those nodes are other relays or end-points.
Fig. 1. In Onion-Routing, to communicate with Bob, Alice creates a set of
nested encrypted tunnels. For every packet, each relay removes the outermost encryption layer (hence the name of this scheme).
To clarify some terminology used throughout this section, an encrypted tunnel between
Alice and one of the nodes is called a connection. Then, a set of nested connections
between Alice and Bob is called a route. Despite being created by Alice, those routes are
-----
not related to IP source routing or other IP-level routing. Standard IP routing is still
used between successive nodes if these nodes are on an IP network as we consider here.
Finally, in TrueNyms, a communication comprises one or more routes between
Alice and Bob that are used to transmit data between them. A communication can
use multiple routes simultaneously and/or sequentially.
4.2 Replay attacks
An issue with standard cryptography modes when used in Onion-Routing is that they
allow an active replay attack[4]. Let us examine the situation at a relay at a given time:
for instance, let us assume that this specific relay is a part of three routes, as depicted
in Figure 2.
Fig. 2. Cryptography hides connection bindings to a passive observer (left),
but not to an active observer able to inject duplicate packets (right).
On the left of Figure 2, the observer sees three distinct incoming connections (A, B, C),
while there are also three outgoing connections (1, 2, 3). To render the relaying useless,
the observer must discover the relationship between the incoming and the outgoing
connections, or at least he must discover the outgoing connection corresponding to an
incoming one he is interested in.
As an encryption layer is removed on each connection, he cannot discover this by a
casual glance at the content of the packets. Moreover, in TrueNyms, the packet size
and rate are normalized, and care is taken to prevent information leaks when a route
is established or closed (as described in [7, 8, 2]).
Those standard traffic analysis methods are hence closed to an attacker.
However, as the cryptography is deterministic, if nothing is done, a given packet entered
twice through the same incoming connection would be output twice — in its form with
an encryption layer removed — on the corresponding outgoing connection. So Mallory
takes a packet and duplicates it, say on connection A, which leads to the right side
of Figure 2. He then looks for two identical packets on the output, and finds them
⁴ This is different from the replay attacks well known in cryptography, where an attacker can play
part of a protocol back from a recording, and which are usually prevented by the use of nonces or
timestamps.
-----
on the connection 3, so he learns that connection A and connection 3 are part of
the same route. Obviously, depending on the interest of Mallory, he can perform a
similar attack on the next relay having the connection 3 as an incoming connection,
and then see where it leads ultimately. Or he can perform the same attack on the other
incoming connections B and C, and figure out exactly which outgoing connection 1 or
2 corresponds to them.
The obvious way to prevent an external attacker from injecting packets would be to use
node-to-node authentication on a route, but in this case it would not be sufficient
since, even if we assume that the replay of an authenticated packet is not possible, the
possibility for Mallory to operate a node must also be accounted for. This means there
is no way to actually prevent packet injection by an active observer, and so the system
has to be designed in a way that makes such injection useless.
4.3 Using hardened Bloom filters to prevent replay attacks
Recall that packets between two successive nodes on a route can be replayed by
Mallory, and hence will be output on the corresponding outgoing connection to the
downstream relay.
In the TrueNyms implementation, to prevent such replay attacks, a relay “remembers” all the packets of a transmission and compares each incoming packet on the same
connection to them. If it does not match, the packet is forwarded; if it does match, it
is dropped (and a dummy packet is forwarded).
Of course this approach requires a very fast way to compare a new packet to the
previous ones, hence the need for Bloom filters.
The situation is then similar to the context described in the example (1) of section 1:
an accepted packet is added to the filter if it “was not” already in it. In TrueNyms,
as the traffic is shaped, Mallory cannot simply flood the filter as the addition to the
filter is only done for transmitted packets, and packets outside the shaping envelope
are simply dropped.
In order to protect our unobservability system against the security issues raised in
section 3, TrueNyms relies on hardened Bloom filters.
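A rough sketch of this relay-side logic is given below (our own reconstruction, reusing the HardenedBloomFilter sketch of section 3; the class and method names are ours, and the actual TrueNyms implementation differs in details such as traffic shaping and the forwarding of dummy packets):

```python
# Assumes the HardenedBloomFilter sketch from section 3 is in scope.
class Connection:
    """Replay suppression for one incoming connection at a relay."""

    def __init__(self):
        self.seen = HardenedBloomFilter(n=20, k=4)

    def handle(self, packet: bytes):
        if packet in self.seen:       # replayed packet (or a rare false positive)
            return None               # drop it; TrueNyms forwards a dummy instead
        self.seen.add(packet)
        return packet                 # forward, with one encryption layer removed

    def rekey(self):
        # On re-keying, the same plaintext re-encrypts differently, so the old
        # filter (and its nonce) can simply be replaced by a fresh, empty one.
        self.seen = HardenedBloomFilter(n=20, k=4)
```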
Notice that, as false positives can occur, legitimate packets may be dropped. This
may slightly alter the performance of the system, but is not otherwise an issue, as
TrueNyms provides end-to-end reliability if needed: the packet will then be resent with
another appearance. To ensure this different appearance, unacknowledged packets are buffered
unencrypted. If it is necessary to retransmit a packet, a nonce it includes (unrelated to the nonces
used in the hardened Bloom filters of part 3.3) is changed before the packet
is re-encrypted. As the cipher is used in bi-IGE mode (see below), the new encrypted
packet will have no similarities with the old one.
-----
Nonetheless, a long-term connection would start to swamp the hardened Bloom filter
after some time, and packets would start to be lost more and more. In TrueNyms, this
is not an issue due to two distinct features:
(1) Even if the communication is long-term, this is not the case for the routes it
uses. The lifetime of a route is chosen at random and is fixed before it is
used;
(2) Routes are re-keyed from time to time. This means the encryption keys used
for the connections are changed. As the same packet entering twice but going
through the encryption layer with different keys would give different (and a
priori unmatchable without knowing the keys) outputs, the hardened Bloom
filters can be replaced by new ones during the key changes.
Of course, this only prevents Mallory from replaying identical packets. If left unhindered, he will replay slightly different packets, and his attack would be successful because,
after adding or removing an encryption layer with a standard block cipher mode, the
original and replayed packets will have similarities. For the use of hardened Bloom
filters to be effective, this attack must be prevented too, for instance by employing a
special mode like bi-IGE (which is a bi-directional application of the Infinite Garble
Extension mode — Campbell, 1977, [11]) as is done in TrueNyms.
### 5 Conclusions and further work
In this paper, after recalling the functioning and the main properties of classical
Bloom filters, we considered the situation where a malicious party may mount index-selection attacks or offline attacks against some applications, leading e.g. to Denial
of Service situations. We then designed hardened Bloom filters able to withstand
such attacks, combining classical Bloom filters with cryptographic hash
functions and secret nonces. Although these hardened Bloom filters are slower than
classical Bloom filters, mostly due to the use of cryptographic hash functions over
non-cryptographic ones, we described how they are successfully used in practice in
the TrueNyms unobservability system to defend it against active traffic analysis attacks.
Should the need arise, performance can probably be improved by further work on
the hash functions. Our proposed hardened Bloom filters rely notably on cryptographic hash functions. However, the requirements are probably weaker: for instance,
while compression and preimage resistance appear to be needed, it is not obvious
that second-preimage and collision resistance are necessary as well. It may hence be
possible to construct custom hash functions with only the mandatory properties, which
would be faster than the usual cryptographic hash functions. We intend to study these
possibilities in a future work.
Finally, multiple variants of Bloom filters have been proposed over the years (Bloomier filters, etc.),
some faster, some using less space, some allowing elements to be removed,
-----
etc. In a future work, we also intend to study the possibility of similarly hardening some
of these numerous existing variants of Bloom filters.
### Acknowledgements
The FNR/04/01/05/TeSeGrAd grant partially supported this research.
### References
[1] Bloom B. H., Space/time trade-offs in hash coding with allowable errors, Communications of the
ACM 13 (7) (1970): 422.
[2] Bernard N., Leprévost F., Unobservability of low-latency communications: the TrueNyms protocol, work in progress.
[3] Knuth D. E., Sorting and Searching, The Art of Computer Programming, vol. 3 (1998).
[4] Menezes A. J., van Oorschot P. C., Vanstone S. A., Handbook of Applied Cryptography, Discrete
Mathematics and its Applications, CRC Press (1997).
[5] Anderson R., Security Engineering: A Guide to Building Dependable Distributed Systems, Wiley
(2001).
[6] Preneel B., Dobbertin H., Bosselaers A., The Cryptographic Hash Function RIPEMD-160, CryptoBytes 3 (2) (1997): 9.
[7] Bernard N., Non-observabilité des communications à faible latence, Université du Luxembourg,
Université de Grenoble 1 – Joseph Fourier (2008).
[8] Bernard N., Leprévost F., Beyond TOR: The TrueNyms Protocol, Security and Intelligent Information Systems 7053 (2012): 68.
[9] Goldschlag D. M., Reed M. G., Syverson P. F., Hiding Routing Information, Proceedings of
Information Hiding: First International Workshop, Springer-Verlag, LNCS 1174 (1996): 137.
[10] Reed M. G., Syverson P. F., Goldschlag D. M., Anonymous connections and Onion Routing,
IEEE Journal on Selected Areas in Communications 16(4) (1998): 482.
[11] Knudsen L., Block Chaining Modes of Operation, Department of Informatics, University of Bergen
(2000); http://www.ii.uib.no/publikasjoner/texrap/ps/2000-207.ps
-----
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.2478/v10065-012-0018-y?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.2478/v10065-012-0018-y, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "GREEN",
"url": "https://journals.umcs.pl/ai/article/download/3361/2555"
}
| 2,012
|
[
"JournalArticle"
] | true
| null |
[
{
"paperId": "45ad696d9922ae2aba4a7122091e494953e29607",
"title": "Beyond TOR: The TrueNyms Protocol"
},
{
"paperId": "c60b9518c2559caf7a0f3b9b7f939b2126628ff2",
"title": "Non-observabilité des communications à faible latence"
},
{
"paperId": "1161313527071f5825d65402474e2de0a8fb72e8",
"title": "Cryptographic Hash Function"
},
{
"paperId": "a3336e6f4d5cfe621dd616d9d9bac443ba4f8b6b",
"title": "The Art of Computer Programming: Volume 3: Sorting and Searching"
},
{
"paperId": "d3c44bf122dc45c95496ebedfb47247fe07cd4bb",
"title": "Anonymous connections and onion routing"
},
{
"paperId": "961011a97535f89d386f81926335bdd8196ff300",
"title": "Hiding Routing Information"
},
{
"paperId": "f39a2c11983b21fd5054d5393614959bfbc4e50f",
"title": "Space/time trade-offs in hash coding with allowable errors"
},
{
"paperId": null,
"title": "The TrueNyms Protocol, Security and Intelligent Information Systems"
},
{
"paperId": "c1515b351a216d4896cec060e48bcf57f8b62b8b",
"title": "Block Chaining Modes of Operation"
},
{
"paperId": null,
"title": "Handbook"
},
{
"paperId": "9e0c718def0567d068959cd55b67d15d990748a8",
"title": "Sorting and Searching"
},
{
"paperId": "b4c9a5000cfd1a8717b3a20117eeba34d80df600",
"title": "Sorting and searching\" the art of computer programming"
},
{
"paperId": null,
"title": "It is possible to find preimages"
},
{
"paperId": "e86c10dc325d6777ecf7eb02c113157d9e324677",
"title": "1 The Cryptographic Hash Function RIPEMD-160"
},
{
"paperId": null,
"title": "Unobservability of low-latency communications: the TrueNyms protocol"
}
] | 7,442
|
en
|
[
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
},
{
"category": "Business",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/01060a62a51c08248abc0b204b4e255a09731ad8
|
[] | 0.852913
|
A Framework for Data Privacy Preserving in Supply Chain Management Using Hybrid Meta-Heuristic Algorithm with Ethereum Blockchain Technology
|
01060a62a51c08248abc0b204b4e255a09731ad8
|
Electronics
|
[
{
"authorId": "2212173325",
"name": "Yedida Venkata Rama Subramanya Viswanadham"
},
{
"authorId": "3382395",
"name": "Kayalvizhi Jayavel"
}
] |
{
"alternate_issns": [
"2079-9292",
"0883-4989"
],
"alternate_names": null,
"alternate_urls": [
"http://www.e-helvetica.nb.admin.ch/directAccess?callnumber=bel-247562",
"http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:ch:bel-247562",
"https://www.mdpi.com/journal/electronics"
],
"id": "ccd8e532-73c6-414f-bc91-271bbb2933e2",
"issn": "1450-5843",
"name": "Electronics",
"type": "journal",
"url": "http://www.electronics.etfbl.net/"
}
|
Blockchain is a recently developed advanced technology. It has been assisted by a lot of interest in a decentralized and distributed public ledger system integrated as a peer-to-peer network. A tamper-proof digital framework is created for sharing and storing data, where the linked block structure is utilized to verify and store the data. A trusted consensus method has been adopted to synchronize the changes in the original data. However, it is challenging for Ethereum to maintain security at all blockchain levels. As such, “public–private key cryptography” can be utilized to provide privacy over Ethereum networks. Several privacy issues make it difficult to use blockchain approaches over various applications. Another issue is that the existing blockchain systems operate poorly over large-scale data. Owing to these issues, a novel blockchain framework in the Ethereum network with soft computing is proposed. The major intent of the proposed technology is to preserve the data for transmission purposes. This new model is enhanced with the help of a new hybrid algorithm: Adaptive Border Collie Rain Optimization Algorithm (ABC-ROA). This hybrid algorithm generates the optimal key for data restoration and sanitization. Optimal key generation is followed by deriving the multi objective constraints. Here, some of the noteworthy objectives, such as information preservation (IP) rate, degree of modification (DM), false rule (FR) generation, and hiding failure (HF) rate are considered. Finally, the proposed method is successfully implemented, and its results are validated through various measures. The recommended module ensures a higher security level for data sharing.
|
# electronics
_Article_
## A Framework for Data Privacy Preserving in Supply Chain Management Using Hybrid Meta-Heuristic Algorithm with Ethereum Blockchain Technology
**Yedida Venkata Rama Subramanya Viswanadham * and Kayalvizhi Jayavel**
Department of Computer Science and Engineering, SRM Institute of Science and Technology,
Chennai 603203, Tamil Nadu, India
*** Correspondence: yvrsvish@gmail.com**
**Citation: Viswanadham, Y.V.R.S.;**
Jayavel, K. A Framework for Data
Privacy Preserving in Supply Chain
Management Using Hybrid
Meta-Heuristic Algorithm with
Ethereum Blockchain Technology.
_[Electronics 2023, 12, 1404. https://](https://doi.org/10.3390/electronics12061404)_
[doi.org/10.3390/electronics12061404](https://doi.org/10.3390/electronics12061404)
Academic Editor: Akshya Swain
Received: 12 January 2023
Revised: 7 March 2023
Accepted: 8 March 2023
Published: 15 March 2023
**Copyright:** © 2023 by the authors.
Licensee MDPI, Basel, Switzerland.
This article is an open access article
distributed under the terms and
conditions of the Creative Commons
[Attribution (CC BY) license (https://](https://creativecommons.org/licenses/by/4.0/)
[creativecommons.org/licenses/by/](https://creativecommons.org/licenses/by/4.0/)
4.0/).
**Abstract: Blockchain is a recently developed advanced technology. It has attracted a great deal of**
interest as a decentralized and distributed public ledger system operated over a peer-to-peer network.
A tamper-proof digital framework is created for sharing and storing data, where the linked block
structure is utilized to verify and store the data. A trusted consensus method has been adopted to
synchronize the changes in the original data. However, it is challenging for Ethereum to maintain
security at all blockchain levels. As such, “public–private key cryptography” can be utilized to
provide privacy over Ethereum networks. Several privacy issues make it difficult to use blockchain
approaches over various applications. Another issue is that the existing blockchain systems operate
poorly over large-scale data. Owing to these issues, a novel blockchain framework in the Ethereum
network with soft computing is proposed. The major intent of the proposed technology is to preserve
the data for transmission purposes. This new model is enhanced with the help of a new hybrid
algorithm: Adaptive Border Collie Rain Optimization Algorithm (ABC-ROA). This hybrid algorithm
generates the optimal key for data restoration and sanitization. Optimal key generation is followed by
deriving the multi-objective constraints. Here, some of the noteworthy objectives, such as information
preservation (IP) rate, degree of modification (DM), false rule (FR) generation, and hiding failure (HF)
rate are considered. Finally, the proposed method is successfully implemented, and its results are
validated through various measures. The recommended module ensures a higher security level for
data sharing.
**Keywords: data privacy preservation system; Ethereum blockchain technology; adaptive border**
collie rain optimization; supply chain network; data sanitization and restoration
**1. Introduction**
Developers can create many distributed apps based on smart contracts over the
Ethereum programming platform. For example, voting, financial transactions, business
administration, and contract signing are the applications used in the Ethereum platform [1].
During data sharing, the protection of privacy is very important. If shared data fall into the
wrong hands, they could be accessed and misused by intruders, leading to outcomes such as
the denial of loans and health insurance or the victimization of a person through financial fraud [2]. However, if the data
are shared only with the right users, then no information can be stolen and misused by
unauthorized entities [3]. The newly developed privacy-preserving data publishing (PPDP)
and privacy-preserving data mining (PPDM) approaches are utilized to reduce privacy
issues [4]. Data are a commodity, and mining turns them into economic
value [5]. The digital world increases the possibility of losing control over one’s own
intellectual, emotional, and situational knowledge, eroding the sphere of informational privacy
and one’s autonomy [6]. The primary problem in this situation is that individual control
over privacy leakage must be weighed against the freedom of the flow of information
the technology enables, the connections it facilitates, and the advantage supplied by the
-----
_Electronics 2023, 12, 1404_ 2 of 29
information source [7]. Government organizations also create legislative guidelines to
protect personal data, including what purposes the particular data is used for, how it is
gathered, and how it should be preserved. Corporate privacy issues are also solved by
these guidelines [8].
Ethereum is a programming environment that makes it possible to provide privacy
preservation with the help of blockchain by building distributed applications on top of it [9].
With a “smart contract”, the contract may be carried out without needing a centralized
authority after it has been deployed. Ideally, the smart contract executes as intended
and produces reliable results [10]. Blockchain is a new technology that has gained
popularity due to the rise of cryptocurrencies such as Ethereum and Bitcoin [11]. It may be
considered a distributed ledger secured with cryptography, and it accurately records the results of
every transaction. Firstly, a transaction recorded on the blockchain cannot be changed beyond that
point, and everyone can see it [12]. Secondly, every network member will reach a consensus on the
blockchain, ensuring that all valid transactions, and no invalid ones, are
recorded [13]. Thirdly, all transactions recorded on the blockchain are auditable by network
members and cannot be tampered with by other parties. Smart contracts are being deployed
alongside the advancement of blockchain technologies such as Ethereum and Hyperledger
to increase the potential of blockchain [14]. Addressing a transaction to a certain smart
contract’s address will cause it to be activated [15]. When a smart contract is activated, it
can run its predefined program independently, without a centralized authority.
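For instance, with the widely used web3.py library, activating a deployed contract amounts to sending a transaction to its address; the node URL, contract address, ABI fragment, and function name below are placeholders we made up for illustration, not details taken from the paper (the calls assume a recent, snake_case web3.py release and a local node with an unlocked account).

```python
from web3 import Web3

# Connect to a local development node (placeholder URL).
w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))

# Hypothetical address and ABI of an already-deployed contract.
ADDRESS = "0x0000000000000000000000000000000000000000"
ABI = [{"name": "store", "type": "function", "stateMutability": "nonpayable",
        "inputs": [{"name": "value", "type": "uint256"}], "outputs": []}]

contract = w3.eth.contract(address=ADDRESS, abi=ABI)

# Sending this transaction to the contract's address activates it: the
# predefined code runs on every node, with no central authority mediating.
tx_hash = contract.functions.store(42).transact({"from": w3.eth.accounts[0]})
receipt = w3.eth.wait_for_transaction_receipt(tx_hash)
print("executed in block", receipt.blockNumber)
```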
The challenges of the existing supply chain are described below. Rival actors can be
found in the same supply chain, which makes data sharing very difficult [16]. Due to these
challenges, research on and implementation of traceability are growing slowly. Although
there has been work to create supply chain traceability, it is not yet practical in the real world.
That work demonstrates several procedures over a centralized system’s data structures and
a transparent framework. Researchers’ interest in blockchain technology and its application
to the supply chain has grown over the last few years [17]. Some limitations of the existing
research and their techniques are depicted here. The supply chains’ global character makes
it challenging to achieve the desired traceability. Several methods have been implemented to
provide better privacy-preserved data transfer in the supply chain network. However,
these methods fail to address issues with certificate verifiability and raise concerns about
the existence of privacy-sensitive information. Hash pointers link the blocks of a blockchain in
chronological order. Each node holds a copy of the ledger, and a few mutually distrusting
nodes often maintain this hash chain. This system provides better confidentiality
and non-repudiation over Ethereum networks, but the cost requirement is high and
the system’s feasibility low. To overcome these challenges, a novel blockchain-based
privacy preservation model is implemented in the Ethereum network to provide better
privacy preservation.
The contributions of the designed Ethereum-based privacy preservation model are
listed below.
_•_ To design a blockchain-based data privacy preservation model with a hybrid meta-heuristic algorithm over the supply chain network to secure information exchange and guarantee the privacy of data access on the Ethereum platform. Here, the performance improvement of the proposed model is applicable to different applications regarding cryptocurrency, food supply chains, and sealed-bid auctions.
_•_ To generate the key with the help of the developed ABC-ROA for exchanging secured data in the supply chain framework using data restoration and data sanitization procedures in the Ethereum environment. The developed ABC-ROA algorithm is utilized to restore the data on the receiver side. Consequently, it helps to access the original data, which can be recovered from the original key.
_•_ To implement the hybrid meta-heuristic algorithm known as ABC-ROA for choosing the best optimal key to maximize the performance of the developed blockchain-based privacy preservation model. Here, the designed ABC-ROA algorithm improves the system’s robustness. It is also used to solve complex issues.
_•_ To compare the developed ABC-ROA-based privacy preservation system with existing meta-heuristic algorithms using a variety of metrics to verify the performance of the developed model.
This paper is organized as follows. The merits and demerits of recent blockchain-based
privacy preservation models for the Ethereum network are described
in Section 2. The dataset used for the blockchain-based privacy preservation system
in Ethereum and the explanation of the model are covered in Section 3. The procedures
used to create the privacy preservation model, such as data restoration and sanitization
in the supply chain method, are shown in Section 4. The Ethereum privacy preservation
model with the ABC-ROA algorithm, the optimal key details, and the objective function details
are described in Section 5. The obtained outcomes of the recommended method are
summarized in Section 6. The paper is concluded in Section 7.
**2. Literature Survey**
_2.1. Related Works_
In 2021, Lin et al. [18] introduced a privacy-preserving blockchain architecture (PPChain),
whose design modifies that of Ethereum. PPChain’s architecture allowed
regulation to be combined with security. They specifically incorporated cryptographic primitives
such as broadcast encryption and group signature into a workable byzantine fault tolerance
consensus protocol with a separation mechanism to remove the transaction fee and mine
for a reward instead of using the existing mining model. They offered in-depth security
and privacy analysis and a performance study to demonstrate the usefulness of PPChain.
Examples from the food supply chain, sealed-bid auctions, and cryptocurrency described
how the PPChain might be used in regulation applications. In 2022, Rahmadika et al. [19]
implemented an efficient architecture for secure misbehavior detection in lightweight IoMT
tools in the “artificial pancreas” model (APS). The suggested method used deep learning,
which protected privacy, and boosted security by integrating blockchain technology built
on the Ethereum smart contract ecosystem. The efficacy of the developed system has been
empirically tested for commensurate incentive schemes, exhaustiveness with compact
findings, and an untraceable characteristic from a different neural deep learning technique.
Consequently, the model has a high recall rate, demonstrating that it is almost completely
capable of identifying harmful events in the case being studied.
In 2022, Xiong et al. [20] designed a secure privacy-preserving authentication mechanism for inter-constellation collaboration. They created both permanent and transient
identities for each satellite to protect privacy. The permanent identity was used for inter-constellation collaboration, whereas the temporary identity was utilized for communication
inside the constellation. For information exchange among cooperative satellite constellations, a consortium blockchain was introduced. A replica storage node mechanism has been
suggested to enable effective authentication with minimal resources, where well-resourced
satellites cache the duplicated data exchanged across the blockchain. A branch-and-bound
approach has been used to address the integer programming issue of choosing the replica
storage node. According to a security study, the suggested authentication technique was
secure against various possible attacks, which included formal analysis using informal
verification and BAN Logic. The suggested system provided efficiency in communication
overheads, signaling, and processing with greater performance, based on a comparison
between it and other privacy preservation schemes. Evaluations showed that the suggested
onboard caching approach achieved minimal storage costs and communication delay.
In 2022, Singh et al. [21] implemented a privacy preservation model in smart healthcare using a blockchain-based federated learning method, which
used IoT cloud platforms to provide privacy and security. Scalable machine learning
applications such as health-care use federated learning technologies. Users could also
utilize a well-trained deep learning system without putting their private information in the
-----
_Electronics 2023, 12, 1404_ 4 of 29
cloud. Additionally, it covered the uses of federated learning in a smart city’s distributed
secure environment. In 2020, Guo et al. [22] implemented a blockchain-based privacy
preservation system in the Ethereum network to provide better privacy preservation of data.
In that model, they used various mechanisms in the blockchain. Privacy protection and
anonymity analysis methods were used in digital currency. The encryption
mechanism was focused on the privacy protection scheme.
In 2022, Mohan et al. [23] designed the proof of authority (PoA) consensus process,
which required little computational power, and it has been implemented on a Raspberry Pi
network. The elliptic integrated encryption process used a double-encryption process to
protect the secrecy of data. With a speed of at least 25 transactions per second, it has been
reported to perform well in contrast to previous systems. It was also readily expandable to
accommodate various health-care workers. A new range of real-time health monitoring
tools with excellent security and data privacy might come from further work on this concept,
potentially leading to significant innovation in the IoMT sector. In 2018, Elisa et al. [24]
developed a model for a decentralized deep learning e-government system utilizing a
blockchain framework that guaranteed data confidentiality and privacy and boosted public
sector confidence. A prototype of the suggested system was also provided, supported by a
theoretical and in-depth examination of the system’s security and privacy consequences.
In 2022, Dewangan et al. [25] suggested a system that used tokens to create pupils' identities and saved them in a file system. The suggested model used SHA-256 for cryptographic hashing and the Edwards-curve digital signature algorithm (EdDSA) with IPFS for digital signature and verification. The results of this method show the transaction speed, the time needed for validating and signing a transaction, and the time needed for each transaction. They compared the privacy, transaction costs, huge file storage, blockchain registration, and implementation costs of this system with those of already built solutions.
_2.2. Statement of Problem_
Personal information is stolen by many hackers and intruders, so privacy preservation is much needed nowadays. Personal data theft, virus threats, and spamming are illegal activities. However, limited hardware resources in IoT applications, including network bandwidth, computing power, and storage, pose unique challenges to the blockchain. Therefore, various researchers have developed efficient techniques to secure the data along with blockchain technology. Some of the advantages and disadvantages of the existing blockchain-based privacy preservation techniques are listed in Table 1. PPChain [18] provides qualitative security and efficient privacy over personal data; additionally, it ignores the correlation between variables to enhance training performance. However, it is not sufficiently developed to achieve conflicting properties, such as regulation, transparency, anonymity, and confidentiality, and is computationally expensive. Bi-LSTM [19] significantly increases the storage cost; additionally, it achieves higher prediction accuracy, precision, and f1 score, yet it requires more training time, is slower than other convolutional techniques, and lacks the expensive hardware needed to perform complex mathematical calculations. The SGIN scheme [20] yields a high percentage of false positives; additionally, its onboard caching method achieves low storage costs and communication latency. However, the service cannot be accessed from anywhere, and no security system is provided for communication and privacy protection, so it suffers slightly from the security perspective. Federated learning [21] achieves efficiency in computation, communication overheads, and low signaling with more functionality; additionally, it achieves efficient authentication with replica storage and limited resources, yet its efficiency is low when a massive number of devices must be accommodated in the security system, and it is very expensive. Data encryption [22] efficiently achieves strict rights management and is used for several functions; additionally, it increases the integrity of the data and is very cheap to implement. However, it provides less computation for compatibility, and its scalability is very low. For the Raspberry Pi network [23], the implementation cost is low and the model solution is feasible and reliable, yet to handle a large amount of data, more storage is needed in this network, and it does not provide a proper balance between connectivity and storage requirements. Peer-to-peer technology [24] improves computation rationality and identity anonymity; additionally, it is easy to set up the client data without any special knowledge. However, the data confidentiality is very poor, and the file resources are not centrally organized, so it takes more time. EdDSA [25] improves immutability and transparency to enhance the privacy system; additionally, it reduces the computational complexity of the decentralization algorithms, yet when handling enormous amounts of data, the private key is sometimes leaked, and it does not support merging complex data. Therefore, these challenges motivated us to develop an efficient privacy-preserving system with blockchain technology.
**Table 1.** Features and disadvantages of the existing blockchain-based privacy preservation techniques in Ethereum.

Lin et al. [18] (PPChain)
Features: provides qualitative security and efficient privacy over personal data; ignores the correlation between variables to enhance the training performance.
Disadvantages: not developed for attaining conflicting parameters such as regulation, confidentiality, anonymity, and transparency; computationally expensive.

Rahmadika et al. [19] (BiLSTM)
Features: achieves higher prediction accuracy, precision, and f1 score, though it significantly increases the storage cost.
Disadvantages: requires more training time and is slower than other convolutional techniques; does not contain the expensive hardware needed for complex mathematical calculations.

Xiong et al. [20] (SGINs)
Features: the onboard caching scheme achieves low storage costs and low communication latency, although it gives a high percentage of false positive rates.
Disadvantages: the service cannot be accessed from anywhere; no security system is provided for communication and privacy protection, so it suffers slightly from the security issue.

Singh et al. [21] (Federated learning)
Features: achieves more functionality attributes regarding efficiency in signaling, communication overheads, and computation; achieves efficient authentication with replica storage and limited resources.
Disadvantages: efficiency is low when a massive number of devices is compressed into the security system; the federated communication system is very expensive.

Guo et al. [22] (Data encryption)
Features: efficiently achieves strict rights management, mainly used for several applications; increases the integrity of the data and is very cheap to implement.
Disadvantages: provides less computation for compatibility; scalability is very low.

Mohan et al. [23] (Raspberry Pi network)
Features: the implementation cost is low; the model solution is more feasible and reliable.
Disadvantages: to handle a large amount of data, more storage is needed in this network; it does not provide a proper balance between connectivity and storage requirements.

Elisa et al. [24] (Peer-to-peer)
Features: improves computation rationality and identity anonymity; easy to set up the client data without special knowledge.
Disadvantages: the data confidentiality is very poor; the file resources are not centrally organized, so it takes more time.

Dewangan et al. [25] (EdDSA)
Features: improves immutability and transparency to enhance the privacy system; reduces the computational complexity of the decentralization algorithms.
Disadvantages: while using enormous data, the private key is sometimes leaked; it does not support merging complex data.
**3. Privacy Preservation of Supply Chain Management Data: New Meta-Heuristic with**
**Ethereum Blockchain**
_3.1. Data Used for Privacy Preservation_
Client data are protected to establish a privacy preservation system. SCM gathers input
data from a dataset called DataCo Smart Supply Chain for Big Data Analysis, which is available at https://www.kaggle.com/shivkp/customer-behaviour (accessed on 10 January 2023). The firm DataCo Global uses these data related to supply networks for its analysis. This dataset comprises registered operations that support analysis with R software and machine learning techniques in areas such as commercial distribution, sales, production, and supply. It also incorporates the relationship between structured and unstructured data. The gathered data are separated into three subsets: dataset 1 comprises the manufacturer data, dataset 2 contains the data transmitted to managers in different nations, and dataset 3 contains the data transferred to each firm in each country.
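As a rough illustration, the split into the three subsets can be sketched in Python with pandas. The file name, encoding, and grouping columns below are assumptions about the Kaggle CSV, not part of the developed model.

```python
import pandas as pd

# Load the DataCo supply chain CSV (file name and encoding are assumptions
# about the Kaggle download).
df = pd.read_csv("DataCoSupplyChainDataset.csv", encoding="latin-1")

# Dataset 1: manufacturer data (grouping column assumed).
dataset_1 = df[df["Department Name"] == "Technology"]

# Dataset 2: data transmitted to managers in different nations.
dataset_2 = {country: g for country, g in df.groupby("Customer Country")}

# Dataset 3: data transferred to each firm in each country.
dataset_3 = {key: g for key, g in df.groupby(["Order Country", "Category Name"])}

print(len(dataset_1), len(dataset_2), len(dataset_3))
```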
_3.2. SCM Privacy Preservation Framework_
SCM is one of the best-known commercial practices owing to its capacity to improve the efficiency of a firm. Supply chains and Ethereum blockchains are coupled to improve the security of supply chain networks. Many methods for ensuring the data privacy preservation and security of the Ethereum network with blockchains have been introduced, including a VMI mode system, homomorphic encryption, fully observable supply chain management, the PBFT algorithm, and a consensus-based collaborative management mechanism. These methods help with cost-effective reconciliation, minimizing dispute settlement, increasing security, reducing dwell time, enhancing transparency, decentralizing data distribution, lowering complexity, increasing throughput, and addressing issues with information sharing and data tracking. The architectural representation of the Ethereum blockchain technology developed for privacy preservation is given in Figure 1.

**Figure 1.** Architectural representation of the privacy preservation platform developed in the Ethereum blockchain.
Owing to these issues, and to attain higher credibility and dependability of data sharing while keeping security in the Ethereum blockchain technology, a data privacy preservation model using blockchain has been established. Initially, the desired data are gathered from publicly available online databases. Data restoration and data sanitization are the two stages of secured data transfer in the developed model. The initial manufacturer data are cleaned during the sanitization procedure to guarantee the security of the data during transmission. Here, the originally collected data undergo sanitization. Then, the optimal key is generated with the ABC-ROA algorithm, which follows multi-objective functions, namely hiding failure (HF), information preservation (IP), false rule generation (FR), and degree of modification (DM). Then, the sanitized data progress to the restoration process, where they are stored. The restored data are transferred to the supply chain framework, where the blockchain is adapted to preserve the data during transmission. These overall processes are performed in an Ethereum environment using blockchain technology. This technique sends the sanitized data over a single blockchain to prevent unauthorized access and delay in data transmission. The performance of this blockchain-based privacy preservation model is verified through various heuristic algorithms regarding key sensitivity, cost function, Euclidean distance, and harmonic mean.
**4. Supply Chain Network Creation and Privacy Preservation Steps Handled**
_4.1. Supply Chain Networks_
The supply chain network is divided into key levels: level 1 indicates the raw materials, level 2 denotes the supplier, level 3 indicates the manufacturer, level 4 denotes the manager, level 5 indicates the delivery, and level 6 denotes the customer. The network is then incorporated into the blockchain to raise the level of information-sharing security. Here, the manufacturer's data undergo data sanitization to conceal them by employing the "chosen optimal key" created by the ABC-ROA algorithm. Then, in the restoration phase, these cleaned data are restored using the same optimum key through the authenticated users on the receiver end. These actions protect the confidentiality of the data shared in supply chain networks. For the privacy preservation process itself, the proposed model condenses the supply chain network into four main levels: level 1 designates the product's manufacturer, level 2 the management, level 3 the product delivery, and level 4 the vendor. The phenomena that are part of the supply chain network are further characterized. Manufacturers in various industries create their databases with information on the price, weight, product manager, delivery method, and suppliers of their manufactured goods. The manufacturer's data are concealed or cleaned up using an ideal key. Then, the managers upload their data into the blockchain, dividing it into various subchains to increase security.

Additionally, the data are sent appropriately along their supply chain subchains when transferred from the first management level to the last delivery level. The data restoration then happens at the vendor level. By restoring the actual data, the vendor utilizes the best key to access the private information. Five manufacturers with their required production goods are part of the supply chain network. Suppose these companies produce leather, cosmetics, electronics, paper, and wooden goods. They create a database based on the product information, which contains fields such as item price, description, weight (kg), amount, brand, controlling vendor manager, and shipment mode (delivery). The price, item quality, brand, and item description are understood to be the sensitive fields in this situation, and they must be sanitized using the developed ABC-ROA algorithm.
The manufacturer's subchains are shown in Equation (1).

$$JM_1^{(1)},\; JM_1^{(2)},\; JM_1^{(3)},\; JM_1^{(4)},\; JM_1^{(5)} \tag{1}$$

The terms $JM_1^{(1)}, JM_1^{(2)}, JM_1^{(3)}, JM_1^{(4)}$, and $JM_1^{(5)}$ are the subchains and are calculated using Equation (2).

$$JM_1^{(1(n))};\, n=1,2,\quad JM_1^{(2(n))};\, n=1,2,\quad JM_1^{(3(n))};\, n=1,2,\quad JM_1^{(4(n))};\, n=1,2,3,\quad JM_1^{(5(n))};\, n=1,2,3 \tag{2}$$
The single blockchain $JM_1$ is created by combining these subchains. These data are then transferred to the manager level. The managers and their corresponding subchains are shown in Equations (3) and (4), where the single blockchain is indicated by $JM_2$.

$$FR_1,\; FR_2,\; FR_3,\; FR_4,\; FR_5 \tag{3}$$

$$JM_2^{(1)},\; JM_2^{(2)},\; JM_2^{(3)},\; JM_2^{(4)},\; JM_2^{(5)} \tag{4}$$

The subchains of the manager are given in Equation (5).

$$JM_2^{(1(n))};\, n=1,2,\quad JM_2^{(2(n))};\, n=1,2,\quad JM_2^{(3(n))};\, n=1,2,\quad JM_2^{(4(n))};\, n=1,2,3,\quad JM_2^{(5(n))};\, n=1,2,3 \tag{5}$$

The vendors and their corresponding subchains are measured by Equations (6) and (7), respectively.

$$WT_1,\; WT_2,\; WT_3,\; WT_4,\; WT_5 \tag{6}$$

$$JM_3^{(1)},\; JM_3^{(2)},\; JM_3^{(3)},\; JM_3^{(4)},\; JM_3^{(5)} \tag{7}$$

Then, the subchains of the vendor are denoted in Equation (8).

$$JM_3^{(1(n))};\, n=1,2,\quad JM_3^{(2(n))};\, n=1,2,\quad JM_3^{(3(n))};\, n=1,2,\quad JM_3^{(4(n))};\, n=1,2,3,\quad JM_3^{(5(n))};\, n=1,2,3 \tag{8}$$

Finally, the delivery level is represented in Equation (9).

$$DE_1,\; DE_2,\; DE_3,\; DE_4,\; DE_5 \tag{9}$$

Here, the term $DE_1$ is a single delivery level, and the term $WT_1$ is the vendor value in the supply chain. The representation of the supply chain network with blockchain is given in Figure 2.
**Figure 2.** Supply chain network with blockchain.
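The subchain notation above can be mirrored in a short data-structure sketch. The record fields follow the product database described earlier in this section; all names and values are illustrative assumptions, not the implemented system.

```python
from dataclasses import dataclass, field

@dataclass
class SubChain:
    """One manufacturer subchain JM1^(i); records mimic the product database."""
    manufacturer: str
    records: list = field(default_factory=list)

# The six key levels of the supply chain network.
LEVELS = ["raw materials", "supplier", "manufacturer", "manager", "delivery", "customer"]

# Five manufacturer subchains JM1^(1)..JM1^(5), as in Equation (1).
subchains = [SubChain(m) for m in ("leather", "cosmetics", "electronics", "paper", "wood")]
subchains[2].records.append(
    {"price": 120.0, "weight_kg": 1.5, "brand": "X", "vendor_manager": "VM-1", "delivery": "air"}
)

# JM1: the single blockchain created by combining the subchains.
JM1 = [rec for sc in subchains for rec in sc.records]
print(len(JM1))
```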
_4.2. Data Sanitization and Data Restoration_
The sanitization and restoration techniques used in this blockchain-based privacy
preservation system are described below.
Sanitization [26]: The data sanitization phase of the recommended data privacy preservation system in the Ethereum network is described in this section. The collected data from the real databases undergo data sanitization. In most cases, the sanitization process occurs at the manufacturer level, and blockchain sanitization also happens. The sensitive data in the blockchains' subblocks are cleaned after being separated into subblocks. It is not necessary to sanitize the non-sensitive data in the subblocks. The term $JM_1^*$ is a blockchain-sanitized database calculated by Equation (10).

$$JM_1^* = (C_2 \oplus JM_1) + 1 \tag{10}$$

The term $JM_1$ is the actual data, and $JM_1^*$ is the binarized sanitized data. The term $U_{Y1(1)\times X}$ is a sensitive field, and the corresponding subchain is represented by $JM_1^{(4(n))};\, n=1,2,3,4$. The data columns to be sanitized are in the blockchain $JM_1^{(1)}$. The sensitive field is given in the blockchain matrix $JM_1^{(4(n))};\, n=1$. The blockchain matrix is given in Equation (11).
$$JM_1^{(4(n));\,n=1} = \begin{bmatrix} M_1 & M_3\\ F_1 & F_3\\ V_1 & V_3\\ KF_1 & KF_3\\ Q_1 & Q_3\\ D_1 & D_3\\ U_1 & U_3 \end{bmatrix} = \begin{bmatrix} 1 & 1\\ 4 & 3\\ 2 & 2\\ 3 & 5\\ 5 & 4\\ 4 & 1\\ 2 & 3 \end{bmatrix} \tag{11}$$

The term $U_{Y1(1)\times X}$ is a sensitive matrix. It is taken from the blockchain matrix $JM_1^{(4(n));\,n=1}$ and is given in Equation (12).

$$U_{Y1(1)\times X} = \begin{bmatrix} KF_1 & KF_8\\ Q_1 & Q_8\\ D_1 & D_8\\ U_1 & U_8 \end{bmatrix} = \begin{bmatrix} 3 & 5\\ 5 & 4\\ 4 & 1\\ 2 & 3 \end{bmatrix} \tag{12}$$

Here, the sanitized matrix obtained from the blockchain matrix based on the rule is given. The sensitive data are present in the binarized data $C_2$. The subblock is indicated by $JM_1^{(4(n));\,n=1}$, and the identity matrix is denoted by $Y_{PS}$. In addition, the terms $JM_1^{(4(n));\,n=1}$ and $Y_{PS}$ are added to obtain the sanitized data $JM_1^{*(4(n));\,n=1}$.
Restoration: The restoration process is very efficient for a privacy-preserving system. When employing the ABC-ROA algorithm on the receiver side, the receiver can access the original data using the generated optimal key. The first step is to binarize the blockchain. The key generation methods, the data $C_2$, and the sanitized blockchain $JM_1^*$ are binarized. The sanitized data's binarized form is further altered through the unit phase. To extract the restored data $JM_3$, the binarized key matrix $JM_1^*$ and the binarized $Y_{PS}$ are subtracted. The data restoration process is measured using Equation (13).

$$JM_3^{(n(n))} = \left(JM_1^{*(n(n))} - 1\right) \oplus A_2 \tag{13}$$

Then, the newly designed ABC-ROA algorithm is used to recreate the sanitized key $A_2$. The restored data are denoted by $JM_3$, and hence the lossless function can be performed. The data sanitization and restoration process in the Ethereum blockchain network for the data privacy preservation system is depicted in Figure 3.
**Figure 3.** Data sanitization and data restoration process in the blockchain-based privacy preservation system.
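A minimal numerical sketch of this round trip is given below, assuming integer-coded sensitive fields and treating the optimal key as given (in the full system it would come from ABC-ROA; the key values here are arbitrary).

```python
import numpy as np

def sanitize(jm1: np.ndarray, key: np.ndarray) -> np.ndarray:
    """Eq. (10)-style sanitization: XOR the sensitive fields with the key, then add one."""
    return (jm1 ^ key) + 1

def restore(jm1_star: np.ndarray, key: np.ndarray) -> np.ndarray:
    """Eq. (13)-style restoration: subtract the unit step, then XOR with the key."""
    return (jm1_star - 1) ^ key

# Sensitive fields of the blockchain matrix (values from Equation (12)).
U = np.array([[3, 5], [5, 4], [4, 1], [2, 3]])

# Stand-in for the optimal key A2 that ABC-ROA would generate.
A2 = np.array([[6, 2], [1, 7], [3, 5], [4, 2]])

sanitized = sanitize(U, A2)
assert np.array_equal(restore(sanitized, A2), U)   # lossless round trip
print(sanitized)
```

Because XOR is its own inverse, the restoration is exact whenever the correct key is supplied, which matches the lossless property claimed for the restoration phase.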
**5. Adaptive Border Collie Rain Optimization Algorithm with Ethereum Blockchain for SCM Data Privacy Preservation**

_5.1. Optimal Key Generation_

In the above section, the selected sensitive fields are chosen based on the sanitization process. The ABC-ROA optimization algorithm chooses the optimal key in sensitive fields. The term $A_n$ is an optimal key value. In the key generation phase, the Khatri–Rao product is adapted to transfer the solution. The optimal key is recreated with the matrix dimension and calculated using Equation (14).
$$JM_1^{*(4(n));\,n=1} = \begin{bmatrix} 5 & 1 & 1 & 1\\ 7 & 2 & 4 & 3\\ 7 & 3 & 2 & 2\\ 1 & 8 & 2 & 5\\ 1 & 4 & 4 & 1\\ 2 & 6 & 2 & 3 \end{bmatrix},\qquad A_1 = \begin{bmatrix} 5 & 3 & 5 & 4 \end{bmatrix}_{\left[\sqrt{M_{JM}^n}\times JM_{\max}\right]} \tag{14}$$

Here, the term $JM_1^*$ is sanitized data. The rule-hiding technique is used in the sanitized database. The ABC-ROA algorithm is adapted to optimize the key values between $A_1$ and $A_n$, respectively. The length of the key value is the same as the number of sensitive fields; the key length is determined by using $\sqrt{M_{JM}^n}$. Finally, the optimal key is formed by using the ABC-ROA algorithm.
_5.2. Objective Function_
The ABC-ROA-based privacy preservation system chooses the optimal key in the restoration and sanitization process. These methods are used to solve the constraints $H_1$, $H_2$, $H_3$, and $H_4$. The objective function of the model is calculated by Equation (15).

$$GG = \underset{\{A_n\}}{\operatorname{argmin}}\left(H_1 + H_2 + H_3 + H_4\right) \tag{15}$$

The selected optimal key is denoted by $A_n$. The term $H_1$ is the normalized hiding failure and is measured using Equations (16) and (17).

$$H_1 = \frac{h_1}{\max(h_1)_{\forall iters}} \tag{16}$$

$$= \frac{no.\ of\ sensitive\ JM^*}{no.\ of\ sensitive\ JM} \tag{17}$$

The number of sensitive rules is given in Equation (18). The ratio of hidden sensitive rules to the number of sensitive rules is presented in Equation (19).

$$h_1 = \left|J_1 \cap ST\right| \tag{18}$$

$$H_1 = \frac{\left|J_1 \cap ST\right|}{\left|ST\right|} \tag{19}$$

The information preservation ratio is calculated using Equations (20) and (21).

$$H_2 = \frac{h_2}{\max(h_2)_{\forall iters}} \tag{20}$$

$$= \frac{no.\ of\ non\text{-}sensitive\ wrongly\ hidden\ JM^*}{no.\ of\ non\text{-}sensitive\ JM} \tag{21}$$

The information loss is measured using Equation (22).

$$h_2 = 1 - \frac{\left|J - J_1\right|}{\left|J\right|} \tag{22}$$

Here, the term $H_3$ is false rule generation, calculated by Equations (23)–(25).

$$H_3 = \frac{h_3}{\max(h_3)_{\forall iters}} \tag{23}$$

$$= \frac{no.\ of\ data\ out\ of\ bounce\ JM^*}{total\ no.\ of\ records\ JM} \tag{24}$$

$$h_3 = \frac{\left|J - J_1\right|}{\left|J\right|} \tag{25}$$

The modified degree is measured in Equations (26) and (27).

$$H_4 = \frac{h_4}{\max(h_4)_{\forall iters}} \tag{26}$$

$$h_4 = dist(JM, JM^*) \tag{27}$$

The optimal key is generated in the blockchain-based privacy preservation system. The optimal key selection improves the performance of the model.
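The four normalized terms combine as in Equation (15). The sketch below is a simplified stand-in: the sensitive/hidden masks and the distance measure are assumptions for illustration rather than the exact rule-mining quantities of the paper.

```python
import numpy as np

def combined_fitness(JM, JM_star, sensitive, hidden):
    """Simplified stand-in for GG = H1 + H2 + H3 + H4 (Eq. (15))."""
    sensitive = np.asarray(sensitive, bool)
    hidden = np.asarray(hidden, bool)
    # H1: hiding failure -- sensitive entries still visible after sanitization.
    H1 = (sensitive & ~hidden).sum() / max(sensitive.sum(), 1)
    # H2: non-sensitive entries wrongly hidden (information preservation term).
    H2 = (~sensitive & hidden).sum() / max((~sensitive).sum(), 1)
    # H3: false rule generation -- sanitized entries out of the original bounds.
    H3 = np.mean((JM_star < JM.min()) | (JM_star > JM.max()))
    # H4: degree of modification -- distance between JM and JM*.
    H4 = np.linalg.norm(JM - JM_star) / JM.size
    return H1 + H2 + H3 + H4

# The ABC-ROA search then minimizes combined_fitness over candidate keys {A_n}.
```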
_5.3. Ethereum Blockchain Technology_

A blockchain is a decentralized ledger of data kept by network nodes, and a single entity does not control it. The blockchain data blocks are connected using cryptographic concepts. Everyone on a blockchain is responsible for their actions since the transaction data are immutable and open to the public. A blockchain-based application is transparent and attack-resistant. Ethereum is an open-source blockchain framework for decentralized applications that manage digital wealth. "Smart contracts" refers to the applications that operate on the Ethereum virtual machine (EVM). Two widely used scripting languages for creating smart contracts on Ethereum are Solidity and Vyper. The two types of accounts in Ethereum are contract accounts and externally owned accounts (EOAs). Every account type contains a different 20-byte hexadecimal string-based unique address. The data are transmitted with the help of the owner's private key, which controls the EOA with its ether balance (i.e., sending data to prompt the initiation of a smart contract). There is no code associated with an EOA. On the other hand, the corresponding code for a contract account with an ether balance is triggered by a transaction or another smart contract. A few benefits of this model's use of Ethereum blockchain technology include restricted access to the consumer's or generator's private data, practical calculation techniques that may be implemented on the smart contract, and complete decentralization, achieving transparent on-chain market clearing. The architecture representation of Ethereum blockchain technology is given in Figure 4.
**Figure 4.** Privacy preservation framework in Ethereum blockchain technology.
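For orientation, a minimal web3.py sketch of the two account types is given below. The node endpoint is an assumption, and the snippet only illustrates how an EOA differs from a contract account; it is not the full privacy preservation workflow.

```python
from web3 import Web3

# Connect to an Ethereum node (local endpoint assumed; a testnet provider works too).
w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))

# An externally owned account (EOA): a key pair with a 20-byte address, no code.
eoa = w3.eth.account.create()
print("EOA address:", eoa.address)
print("ether balance:", w3.eth.get_balance(eoa.address))

# A contract account, by contrast, is referenced by its deployed address and ABI;
# its code runs only when a transaction or another smart contract triggers it, e.g.:
# contract = w3.eth.contract(address=contract_address, abi=contract_abi)
```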
_5.4. Proposed ABC-ROA_
The ABC-ROA algorithm is designed to enhance the effectiveness of the developed data privacy preservation system by selecting the optimal key in the data sanitization and data restoration processes in the SCM. The existing algorithms face a few challenges in privacy preservation in Ethereum blockchain technology: the existing heuristic algorithms have a limited number of resources to store the data and hence face security and scalability challenges. The existing ROA and BCO algorithms are utilized in this work. Here, the BCO algorithm is selected in the developed method to increase the performance and robustness of the model; additionally, it reduces errors to increase the effectiveness concerning precision, f1 score, and accuracy. BCO solves multi-objective combinatorial optimization problems but suffers from low scalability and utility. Due to these issues in the BCO algorithm, the ABC-ROA algorithm is developed by combining it with ROA. ROA is chosen in the implemented model since it can perform parallel computing, is well suited to high privacy protection, and improves the user experience. The combined advantages of BCO and ROA yield efficient performance in the suggested blockchain-based privacy preservation system by overcoming these conventional issues. The ABC-ROA algorithm is implemented based on the current fitness and the average fitness: the term $Fit_{cr}$ defines the current fitness, and the average fitness is denoted by $Fit_{avg}$. If $Fit_{cr} < Fit_{avg}$, the parameter is set to $s = 2$; otherwise, $s = 3$. In the conventional algorithm, by contrast, the random parameter $s$ is selected randomly in the interval [0, 1]. If $s == 2$, the solution is updated with the ROA algorithm; otherwise, the BCO algorithm is used. The developed ABC-ROA algorithm thereby increases the fitness function.
BCO [27]: Three dogs and a flock of sheep are considered in the Border Collie optimization process. In a real-world scenario, one dog can manage the herd by itself; however, three dogs are considered because the search space of certain optimization problems can be large. When the algorithm starts, a group of three dogs and some sheep is exhibited. The dogs are in charge of returning the sheep to the farm after they have gone out in different directions for grazing.

Random variables are used to initialize the positions of the sheep and dogs. According to their positions, the dogs are designated lead, left, and right. From the front, the lead dog directs the herd; the individual with the fitness $Fit_g$ is chosen to be the dog in front of the herd, or the lead dog.
The major task of these dogs is to observe and stalk the herd. The terms $Fit_{sj}$ and $Fit_{mf}$ denote the fitness values of the side dogs, and the fitness of the sheep is known as $Fit_t$. The velocity of the lead dog is calculated using Equation (28).

$$W_g(u+1) = \sqrt{W_g(u)^2 + 2 \times NC_g(u) \times P_g(u)} \tag{28}$$

Then, the velocity of the left dog is measured using Equation (29).

$$W_{sj}(u+1) = \sqrt{W_{sj}(u)^2 + 2 \times NC_{sj}(u) \times P_{sj}(u)} \tag{29}$$

The term $NC$ is the acceleration of the dog, the term $P$ is the dog's position, and the term $W$ indicates the velocity of the dog. The right dog velocity is calculated using Equation (30).

$$W_{mf}(u+1) = \sqrt{W_{mf}(u)^2 + 2 \times NC_{mf}(u) \times P_{mf}(u)} \tag{30}$$

Here, the variables $W_{sj}(u+1)$, $W_{mf}(u+1)$, and $W_g(u+1)$ denote the velocities at time $(u+1)$ for the right, left, and lead dogs. Moreover, the terms $NC_g(u)$, $NC_{sj}(u)$, and $NC_{mf}(u)$ denote the acceleration of the lead, right, and left dogs, and $P_g(u)$, $P_{sj}(u)$, and $P_{mf}(u)$ describe the positions of the lead, right, and left dogs.
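The three updates share one rule, so they can be sketched with a single helper. The absolute value inside the square root is a defensive implementation choice, since the formula itself assumes a non-negative radicand; all numeric inputs are illustrative.

```python
import numpy as np

def dog_velocity(W: float, NC: float, P: float) -> float:
    """Shared update of Eqs. (28)-(30): W(u+1) = sqrt(W(u)^2 + 2*NC(u)*P(u))."""
    return float(np.sqrt(abs(W**2 + 2.0 * NC * P)))

# The same rule is applied to the lead, left, and right dogs.
W_g  = dog_velocity(W=1.2, NC=0.3, P=0.8)   # Eq. (28)
W_sj = dog_velocity(W=0.9, NC=0.2, P=1.1)   # Eq. (29)
W_mf = dog_velocity(W=1.0, NC=0.4, P=0.5)   # Eq. (30)
print(W_g, W_sj, W_mf)
```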
Gathering: In the sheep gathering method, the updated sheep velocity is considered. The approaches of stalking, gathering, and eyeing are used in this algorithm. The sheep near the lead dog follow the lead dog's direction. The sheep that is closer to the lead dog is determined using Equation (31); a positive value of $E_h$ shows that the sheep is nearer to the lead dog.

$$E_h = \left(Fit_g - Fit_t\right) - \left(\frac{Fit_{mf} - Fit_{sj}}{2} - Fit_t\right) \tag{31}$$
Equation (32) indicates the sheep's velocity. The term $P_g$ is the current sheep location.

$$W_{th}(u+1) = \sqrt{W_g(u+1)^2 + 2 \times NC_g(u) \times P_g(u)} \tag{32}$$

The variable $W_{th}$ is defined as the velocity of the sheep influenced by the lead dog.

Stalking: To keep guiding the herd, the dogs must stalk the sheep closer to the left and right dogs. The stalked sheep velocity is updated using Equations (33)–(35).

$$W_{sj} = \sqrt{\left(W_{sj}(u+1)\tan(s_1)\right)^2 + 2 \times NC_{sj}(u) \times P_{sj}(u)} \tag{33}$$

$$W_{mf} = \sqrt{\left(W_{mf}(u+1)\tan(s_2)\right)^2 + 2 \times NC_{mf}(u) \times P_{mf}(u)} \tag{34}$$

$$W_{tt}(u+1) = \frac{W_{mf} + W_{sj}}{2} \tag{35}$$

Consequently, the term $W_{tt}$ represents the combined velocity of the left and right dogs. The traversing angles $s_1$ and $s_2$ are chosen randomly.

Eyeing: In this scenario, it is anticipated that the least fit dog will follow the sheep and give them a close look. The velocity of the left dog is given in Equation (36). The variables $W_{mf}$ and $NC_{mf}$ describe the velocity and acceleration of the left dog; additionally, $W_{sj}$ and $NC_{sj}$ are the velocity and acceleration of the right dog. The term $P_{sj}$ defines the collection of sheep present in the current location, and the average time of an individual can be represented as $e$.

$$W_{tf}(u+1) = \sqrt{W_{mf}(u+1)^2 - 2 \times NC_{mf}(u) \times P_{mf}(u)} \tag{36}$$

The velocity of the right dog is given in Equation (37).

$$W_{tf}(u+1) = \sqrt{W_{sj}(u+1)^2 - 2 \times NC_{sj}(u) \times P_{sj}(u)} \tag{37}$$

The updated acceleration of the sheep and dogs is calculated by Equation (38).

$$NC_j(u+1) = \frac{W_j(u+1) - W_j(u)}{Time_j(u)} \tag{38}$$

The updated time of the sheep and dogs is measured by Equation (39).

$$Time_j(u+1) = Avg\sum_{j=1}^{e}\frac{W_j(u+1) - W_j(u)}{NC_j(u+1)} \tag{39}$$

The lead dog's position is updated using Equation (40).

$$P_g(u+1) = W_g(u+1) \times Time_g(u+1) + \frac{1}{2}NC_g(u+1) \times Time_g(u+1)^2 \tag{40}$$

The left dog's position is updated using Equation (41).

$$P_{mf}(u+1) = W_{mf}(u+1) \times Time_{mf}(u+1) + \frac{1}{2}NC_{mf}(u+1) \times Time_{mf}(u+1)^2 \tag{41}$$

The position of the right dog is updated using Equation (42).

$$P_{sj}(u+1) = W_{sj}(u+1) \times Time_{sj}(u+1) + \frac{1}{2}NC_{sj}(u+1) \times Time_{sj}(u+1)^2 \tag{42}$$
The updated locations of the sheep are determined using Equations (43) and (44), respectively.

$$P_{th}(u+1) = W_{th}(u+1) \times Time_{th}(u+1) + \frac{1}{2}NC_{th}(u+1) \times Time_{th}(u+1)^2 \tag{43}$$

$$P_{tt}(u+1) = W_{tt}(u+1) \times Time_{tt}(u+1) - \frac{1}{2}NC_{tt}(u+1) \times Time_{tt}(u+1)^2 \tag{44}$$

The eyed sheep are updated using Equation (45).

$$P_{tf}(u+1) = W_{tf}(u+1) \times Time_{tf}(u+1) - \frac{1}{2}NC_{tf}(u+1) \times Time_{tf}(u+1)^2 \tag{45}$$

Then, the sheep go on track with the help of the dogs' guidance, as given in Equation (46).

$$P_g(u+1) = W_g(u+1) \times Time_g(u+1) + \frac{1}{2}\,\frac{W_g(u+1) - W_g(u)}{Time_j(u)}\, NC_{tf}(u+1) \times Time_{tf}(u+1)^2 \tag{46}$$

The stalking, gathering, and eyeing behavior of the dogs over the sheep has been described. By substituting the value of $NC(u+1)$ in Equation (46), the population values are attained based on the gathered sheep, left dog, stalked sheep, eyed sheep, and right dog.
ROA [28]: Raindrops fall on the ground randomly, and each raindrop can serve as a metaphor for a possible solution. As raindrops fall randomly on the ground, certain places in the solution space can be chosen randomly. Each raindrop's most distinguishing characteristic is its radius. After the first population of solutions is generated, the radius of each droplet can shrink over time or grow as the droplet joins other droplets within a suitable range. Every droplet checks its nearest neighborhood based on its size at each cycle; if a droplet is still unconnected to any other droplet, the end of the area it has covered is checked. When addressing a problem in $n$ dimensions, every droplet has $n$ variables. Here, the term $S$ is the radius of a large drop: the radii $S_1$ and $S_2$ combine to form a large raindrop, and the term $m$ defines the number of variables in each droplet, as calculated using Equation (47).

$$S = \left(S_1^m + S_2^m\right)^{\frac{1}{m}} \tag{47}$$

Therefore, as the number of iterations increases, weak droplets disappear or merge to create strong droplets. The initial population decreases continuously, which speeds up finding the correct answer.
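The merge rule of Equation (47) is a one-liner; the sketch below shows it with illustrative radii.

```python
def merge_radius(s1: float, s2: float, m: int) -> float:
    """Eq. (47): radius of the droplet formed when two droplets merge,
    S = (S1^m + S2^m)^(1/m), with m the number of variables per droplet."""
    return (s1**m + s2**m) ** (1.0 / m)

# Two droplets of a two-variable problem merge into one stronger droplet.
print(merge_radius(0.3, 0.4, m=2))   # 0.5
```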
The term $\gamma$ represents the soil characteristic given in Equation (48).

$$S = \left(\gamma S_1^m\right)^{\frac{1}{m}} \tag{48}$$

Here, the variable $S_1$ is the radius of a droplet that does not move, which depends on the properties of the soil, depicted as $\gamma$. As a result, the droplet's radius is used to establish the lower and upper bounds of the variable in the initial stage. Two endpoints of the variable are examined in the next stage, and so on until the last variable. The term $PD_p$ is an ordering cost and is measured using Equation (49).

$$PD_p = \sum_{j=1}^{M}\frac{ES_j(U)}{R_j} \tag{49}$$
The initial droplet cost would be adjusted at this point. The inventory holding cost is indicated by $ID_{ps}$ and is given in Equation (50).

$$ID_{ps} = \sum_{j=1}^{M}\frac{I_j}{2R_j}\left(R_j - T_j\right)^2 \tag{50}$$

The shortage cost is denoted by $TD_p$ and is shown in Equation (51).

$$TD_p = \sum_{j=1}^{M}\frac{M_j}{2R_j}\,T_j^2 \tag{51}$$

The term $Us_{Dp}$ is a transportation cost measured using Equation (52).

$$Us_{Dp} = \sum_{j=1}^{M} ES_j\, TID_{pj} \tag{52}$$

The objective of the solution is calculated by Equation (53).

$$Up_{Dp} = ED_p + CD_p + PD_p + ID_{ps} + TD_p + Us_{Dp} \tag{53}$$

Here, the term $Up_{Dp}$ denotes the total cost. The total investment is indicated by $InV$, as shown in Equation (54).

$$InV = \sum_{k=1}^{M}\sum_{J_{kj}=1}^{J_{kj}} GMJ_{jk}\, YMJ_{jk} \tag{54}$$

Here, the term $YMJ_{jk}$ is an inventory capacity level, and the fixed cost is represented by $GMJ_{jk}$. The term $EU$ is the total time, calculated by Equation (55).

$$EU = \sum_{y=1}^{y}\sum_{k=1}^{m}\sum_{j=1}^{o}\left(U_{MO} + UY_{MO}\right) ES_j \cdot ZY_{MO} \tag{55}$$

The number of raindrops is denoted by $y$, and the number of warehouses is represented by $m$.
For each droplet, this situation would be repeated. Nearby droplets on their route may interact with one another, greatly speeding up the process. A droplet's radius continuously decreases at the lowest point, greatly improving the accuracy of the response. The pseudocode of the implemented ABC-ROA is presented in Algorithm 1.
**Algorithm 1: Developed ABC-ROA**

Initialize the population and acceleration values
Find the fitness of each solution
Calculate the velocity using Equation (29)
For j = 1 to Max_iter
    For k = 1 to PoP
        If (CurrentFit < AvgFit)
            Assign the value of s = 2
        Else
            Assign the value of s = 3
        End if
        If (s == 2)
            Select the radius of the raindrop using Equation (47)
            Update the solution with the ROA algorithm using Equation (48)
        Else
            Update the solution with the BCO algorithm using Equation (38)
            Determine the best fitness of the sheep
            Update the velocity of the sheep in the BCO algorithm
            Evaluate the sheep's position
            Update the position of the sheep using Equation (32)
        End if
    End for
End for
Obtain the best position
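A runnable Python sketch of this loop is given below. The ROA and BCO moves are deliberately simplified stand-ins for the full updates of Equations (47)/(48) and (32)/(38); only the adaptive fitness-based switch (s = 2 if the current fitness beats the average, else s = 3) is taken directly from Algorithm 1.

```python
import numpy as np

rng = np.random.default_rng(0)

def abc_roa(fitness, dim=5, pop=10, max_iter=50, lb=0.0, ub=1.0):
    """Simplified sketch of Algorithm 1 with stand-in ROA/BCO moves."""
    X = rng.uniform(lb, ub, (pop, dim))            # initialize the population
    fit = np.array([fitness(x) for x in X])
    for it in range(max_iter):
        avg_fit = fit.mean()
        best = X[fit.argmin()].copy()
        for k in range(pop):
            if fit[k] < avg_fit:                   # s = 2: ROA-style droplet move
                radius = 0.5 * (1.0 - it / max_iter)   # shrinking droplet radius
                cand = X[k] + rng.uniform(-radius, radius, dim)
            else:                                  # s = 3: BCO-style herding move
                cand = X[k] + rng.random(dim) * (best - X[k])
            cand = np.clip(cand, lb, ub)
            c_fit = fitness(cand)
            if c_fit < fit[k]:                     # keep the better solution
                X[k], fit[k] = cand, c_fit
    return X[fit.argmin()], fit.min()

# Example: minimize the sphere function with a chromosome length of 5.
best_key, best_cost = abc_roa(lambda x: float(np.sum(x**2)))
print(best_key, best_cost)
```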
**6. Results and Discussion**
_6.1. Simulation Setting_
The developed ABC-ROA-based privacy preservation model over the Ethereum network using blockchain technology was implemented in the MATLAB environment. In this developed system, the chromosome length was set at five, and the population size was set at 10. The efficiency analysis was conducted over key sensitivity, cost function, Euclidean distance, known-plaintext attack (KPA), harmonic mean, known-ciphertext attack (KCA), arithmetic mean, CCA, and CPA. The efficiency of the developed model has been compared with various heuristic algorithms, namely the Harris hawks optimizer (HHO) [29], the entity framework Harris hawks optimizer (EF-HHO) [30], BCO [27], and ROA [28].
_6.2. Effectiveness Analysis Using Euclidean Distance_
The overall analysis of the recommended ABC-ROA-based privacy preservation model over the Ethereum network with three datasets in terms of Euclidean distance is given in Figure 5. From the analysis, dataset 2 gives a much lower Euclidean distance than datasets 1 and 3. While using dataset 2, the developed ABC-ROA-based privacy preservation model gives an improved Euclidean distance of 36.03%, 8.62%, 2.54%, and 8.77% over HHO, EF-HHO, BCO, and ROA. In the graph analysis, the developed ABC-ROA method shows effective performance, and the existing EF-HHO algorithm attains the second-best performance. Considering all three datasets, the Euclidean distance of the proposed method shows the best performance in dataset 2. Thus, the developed ABC-ROA-based data privacy preservation model over the Ethereum network gives higher effectiveness than the other heuristic algorithms.
**Figure 5.** Effectiveness analysis on the offered blockchain-based privacy preservation model using Euclidean distance.

_6.3. Performance Analysis Using the Harmonic Mean_

The harmonic mean analysis in terms of Euclidean distance, Pearson correlation, and Spearman correlation on the developed privacy preservation system over the Ethereum network among various datasets is given in Figure 6. In dataset 2, the developed ABC-ROA-based privacy preservation system over the Ethereum network provides enhanced harmonic means of 25.71%, 27.77%, 35%, and 27.1% over HHO, EF-HHO, BCO, and ROA. From the given graph analysis, the Pearson correlation can be used to measure the strength and direction of the relationship between two variables; it lies in the range [−1, 1], where a negative correlation is denoted by −1 and a positive correlation by 1. Spearman's correlation, in turn, measures the association between the variables. As a result, the designed ABC-ROA-based privacy preservation system is superior to the other heuristic approaches.
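These indicators can be reproduced with standard NumPy/SciPy routines. The two arrays below are toy stand-ins for an original and a restored data column, not values from the experiments.

```python
import numpy as np
from scipy.stats import hmean, pearsonr, spearmanr

# Toy stand-ins for one original and one restored data column.
original = np.array([3.0, 5.0, 4.0, 2.0, 6.0])
restored = np.array([3.1, 4.8, 4.2, 2.1, 5.9])

euclidean = float(np.linalg.norm(original - restored))
pearson = pearsonr(original, restored)[0]
spearman = spearmanr(original, restored)[0]

# Harmonic mean of the three (positive) indicators, as reported in Figure 6.
print(euclidean, pearson, spearman, hmean([euclidean, pearson, spearman]))
```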
**Figure 6.** Effectiveness analysis of the developed blockchain-based privacy preservation system: (a) Euclidean distance, (b) Pearson correlation, and (c) Spearman correlation.
_6.4. Effectiveness Analysis Using the Arithmetic Mean_

Comparison of the Pearson and Spearman correlations of the designed ABC-ROA-based privacy preservation system among various heuristic algorithms is shown in Figure 7. The sum of the numerical values of all observations divided by the total number of observations is known as the arithmetic mean. From the analysis, the developed ABC-ROA-based privacy preservation achieves secured data transfer when dataset 3 shows a low value. Regarding the Spearman correlation value in dataset 2, the developed ABC-ROA-based privacy preservation model has arithmetic means 40%, 20%, 20%, and 34.4% better than HHO, EF-HHO, BCO, and ROA. As a result, the designed ABC-ROA-based privacy preservation system over the Ethereum network shows higher effectiveness than other heuristic algorithms.
**Figure 7.** Effectiveness analysis of the designed blockchain-based privacy preservation model in terms of (a) Pearson correlation and (b) Spearman correlation.
_6.5. Cost Function Analysis on the Proposed Model_

Comparison of the proposed ABC-ROA-based privacy preservation system convergence rate to existing meta-heuristic algorithms with various datasets is shown in Figure 8. Compared to HHO, EF-HHO, BCO, and ROA, the cost function of the ABC-ROA-based privacy preservation system is improved by 2.98%, 3.07%, 3.56%, and 4.12%, respectively, in dataset 3 at an iteration value of 15. As the iterations increase, the cost function of the designed ABC-ROA method decreases, and the graph analysis shows better performance for the recommended method. The existing EF-HHO algorithm achieves the second-best performance. As a result, the developed ABC-ROA-based privacy preservation model using blockchain technology performs more effectively than other algorithms.
**Figure 8.** Convergence analysis on the developed blockchain-based privacy preservation model in terms of (a) dataset 1, (b) dataset 2, and (c) dataset 3.
_6.6. Effectiveness Analysis Using Key Sensitivity_

The key sensitivity of the obtained optimum key in the ABC-ROA-based privacy preservation system for three datasets with various existing meta-heuristic algorithms at various percentage levels is shown in Figure 9. The key sensitivity of the proposed system takes a lower value as the percentage of the key increases for all three datasets. From the analysis, the developed model shows key sensitivity improvements of 11%, 15%, 20%, and 19% over heuristic algorithms such as HHO, EF-HHO, BCO, and ROA, respectively. Based on the key sensitivity value, the ROA algorithm is not effective at securing the data in the blockchain. At a learning percentage of 30, the key sensitivity of the existing EF-HHO algorithm secures the second-best performance. As a result, the suggested blockchain-based privacy preservation model executes more effectively than other heuristic algorithms.
**Figure 9.** Performance analysis of the designed blockchain-based privacy preservation model concerning (a) dataset 1, (b) dataset 2, and (c) dataset 3.
_6.7. Performance Analysis Using CPA and CCA_

Comparison of performance analysis of the ABC-ROA-based privacy preservation system over the existing heuristic algorithms in terms of chosen ciphertext attacks (CCA) and chosen plaintext attacks (CPA) with three datasets is given in Tables 2 and 3, respectively. In a CPA, the attacker can encrypt messages of their choosing; the goal of the attack is to reduce the security of the encryption scheme. Here, symmetric and asymmetric cryptography can be used. The CPA is often feasible in diverse applications and is essential for public key cryptography, where the encryption key is public and attackers can therefore encrypt any plaintext they choose. Moreover, in a CCA the attacker can obtain decryptions of chosen ciphertext messages; the CCA is widely used in cryptanalysis to collect information by obtaining the decryptions of chosen ciphertexts, since it aims to recover the hidden secret key used for decryption. While analyzing the results, the proposed model shares data more securely in transmission. The CCA value of the developed system is guaranteed to be lower as the proportion of key variations rises for all three datasets. From the dataset 2 analysis of CPA, the ABC-ROA-based privacy preservation model performs 7.58%, 5.93%, 4.15%, and 7.64% better than HHO, EF-HHO, BCO, and ROA for key variations of 50. The proposed blockchain-based privacy preservation model is more effective than other heuristic algorithms.
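A small sketch of this key-variation style of test is given below: it applies the Equation (10)-style sanitization, damages a percentage of the key elements, and reports how much of the data an attacker with the damaged key still fails to recover. The perturbation rule and all values are assumptions for illustration, not the paper's exact attack protocol.

```python
import numpy as np

rng = np.random.default_rng(1)

def concealment(U, key, variation_pct):
    """Flip ~variation_pct% of the key elements and report the share of
    entries an attacker still cannot restore (higher = more resistant)."""
    sanitized = (U ^ key) + 1                      # Eq. (10)-style sanitization
    wrong_key = key.copy()
    n_flip = max(1, int(key.size * variation_pct / 100))
    idx = rng.choice(key.size, n_flip, replace=False)
    wrong_key.flat[idx] ^= 0b1011                  # arbitrary bit damage
    attempt = (sanitized - 1) ^ wrong_key          # attacker's restoration
    return 100.0 * np.mean(attempt != U)

U = np.array([[3, 5], [5, 4], [4, 1], [2, 3]])
key = np.array([[6, 2], [1, 7], [3, 5], [4, 2]])
for pct in (10, 20, 30, 40, 50):
    print(pct, concealment(U, key, pct))
```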
**Table 2.** Effectiveness analysis using CCA with three datasets for the developed blockchain-based data privacy preservation system over the Ethereum network.

Key Variations (%)   HHO [29]   EF-HHO [30]   BCO [27]   ROA [28]   ABC-ROA
Dataset 1
10                   99.911     91.762        95.836     99.319     87.762
20                   99.877     93.113        96.495     99.396     89.113
30                   99.845     94.312        97.079     99.673     90.312
40                   99.873     95.739        97.806     99.787     91.739
50                   99.927     96.462        98.194     99.861     92.462
Dataset 2
10                   99.909     91.626        95.767     99.31      87.626
20                   99.876     92.894        96.385     99.389     88.894
30                   99.843     94.071        96.957     99.669     90.071
40                   99.871     95.452        97.661     99.784     91.452
50                   99.926     96.283        98.105     99.859     92.283
Dataset 3
10                   99.926     92.981        96.454     99.429     88.981
20                   99.899     93.969        96.934     99.494     89.969
30                   99.872     94.909        97.391     99.729     90.909
40                   99.895     96.057        97.976     99.823     92.057
50                   99.94      96.858        98.399     99.884     92.858
**Table 3.** Effectiveness analysis using CPA with three datasets for the developed blockchain-based data privacy preservation system over the Ethereum network.

Key Variations (%)   HHO [29]   EF-HHO [30]   BCO [27]   ROA [28]   ABC-ROA
Dataset 1
10                   58.784     52.465        55.624     58.236     43.465
20                   62.388     59.351        60.87      61.866     50.351
30                   65.623     63.651        64.637     65.13      54.651
40                   68.534     65.454        66.994     68.075     56.454
50                   71.114     65.723        68.418     70.684     56.723
Dataset 2
10                   57.875     49.222        53.548     57.313     40.222
20                   61.469     55.313        58.391     60.935     46.313
30                   64.722     60.534        62.628     64.219     51.534
40                   67.695     63.39         65.542     67.223     54.39
50                   70.332     64.698        67.515     69.891     55.698
Dataset 3
10                   68.646     61.093        64.87      68.191     52.093
20                   71.919     66.26         69.089     71.501     57.26
30                   74.772     70.154        72.463     74.389     61.154
40                   77.269     72.702        74.986     76.918     63.702
50                   79.448     74.553        77         79.127     65.553
-----
_6.8. Statistical Analysis of the Designed Method_
The statistical analysis of the designed blockchain-based data privacy preservation
system over the Ethereum network is shown in Table 4. The designed ABC-ROA method
attains 14.9%, 0.7%, 11.4%, and 4.1% better performance than HHO, EF-HHO, BCO, and
ROA regarding dataset 1. Throughout the analysis, the experimental outcome has attained
superior performance when compared to other traditional approaches.

**Table 4. Statistical analysis on developed data privacy preservation model over the Ethereum network.**

**Terms** **HHO [29]** **EF-HHO [30]** **BCO [27]** **ROA [28]** **ABC-ROA**
Dataset 1
Best 6.5506 5.5954 6.2903 5.8088 5.5696
Worst 6.6609 6.6609 6.6609 6.6609 6.125
Mean 6.5815 5.6644 6.3546 5.9459 5.6203
Median 6.5506 5.5963 6.2927 5.8587 5.5836
Standard deviation 0.050579 0.21239 0.12052 0.18825 0.11126
Dataset 2
Best 6.5078 5.0281 6.2831 5.2618 4.8695
Worst 6.6822 6.6822 6.6716 6.6822 5.7822
Mean 6.5701 5.1008 6.3662 5.4352 4.9401
Median 6.5303 5.0323 6.3072 5.3044 4.8807
Standard deviation 0.071683 0.32951 0.11869 0.3084 0.18276
Dataset 3
Best 6.552 6.2897 6.5597 6.3459 6.2438
Worst 6.7356 6.7356 6.7356 6.7356 6.5356
Mean 6.6597 6.3163 6.625 6.4021 6.3077
Median 6.6337 6.2905 6.6029 6.3933 6.2847
Standard deviation 0.080236 0.088813 0.054733 0.082943 0.063698
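For reproducibility, the following minimal sketch (not from the original paper) illustrates how run-level statistics such as those in Table 4 are typically computed, assuming each optimizer is executed several independent times and the final fitness of every run is recorded. The fitness values are hypothetical placeholders, and the use of the sample standard deviation is an assumption, since the paper does not state which variant it reports.

```python
# Sketch: computing the Table 4-style summary for two optimizers from
# hypothetical per-run final fitness values (lower fitness = better).
import statistics

fitness_runs = {
    "ABC-ROA": [5.57, 5.58, 5.62, 5.59, 6.12],  # placeholder run results
    "ROA":     [5.81, 5.86, 5.95, 6.05, 6.66],
}

for name, runs in fitness_runs.items():
    print(name,
          "best:", min(runs),                    # minimization problem
          "worst:", max(runs),
          "mean:", round(statistics.mean(runs), 4),
          "median:", round(statistics.median(runs), 4),
          "std:", round(statistics.stdev(runs), 4))
```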
_6.9. ANOVA Test for the Developed Data Privacy Preservation Model over the Ethereum Network_
The validation of the ANOVA test for the designed ABC-ROA method regarding the fitness
function is shown in Figure 10. Thus, the experimental result of the developed method
attains superior performance compared to other traditional approaches.

**Figure 10. ANOVA of the designed method for privacy preservation over the Ethereum network**
regarding fitness function (a) dataset 1, (b) dataset 2, and (c) dataset 3.
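As a hedged illustration of the test behind Figure 10, the sketch below runs a standard one-way ANOVA over per-run fitness samples with `scipy.stats.f_oneway`. The sample arrays are hypothetical, and the paper does not specify its exact ANOVA configuration.

```python
# Sketch: one-way ANOVA across the per-run fitness samples of the
# competing optimizers; the arrays below are illustrative placeholders.
from scipy.stats import f_oneway

hho     = [6.55, 6.58, 6.60, 6.62, 6.66]
ef_hho  = [5.60, 5.62, 5.66, 5.70, 6.66]
bco     = [6.29, 6.31, 6.35, 6.40, 6.66]
roa     = [5.81, 5.86, 5.95, 6.05, 6.66]
abc_roa = [5.57, 5.58, 5.62, 5.59, 6.12]

f_stat, p_value = f_oneway(hho, ef_hho, bco, roa, abc_roa)
print(f"F = {f_stat:.3f}, p = {p_value:.4f}")
# A small p-value indicates that mean fitness differs significantly
# between at least two of the algorithms.
```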
_6.10. Validation of Control for Parameters of Different Algorithms Using the Designed Method_
The validation of control for parameters of different existing algorithms regarding
Euclidean distance and Pearson and Spearman correlations is shown in Figure 11. Here,
the evaluation of the parameter for the proposed ABC-ROA method is taken as ird = 0.06.
Throughout the analysis, the developed method achieves enhanced performance compared
to the other existing methods.

**Figure 11. Analysis of controlling the parameters of the designed method using (a) Euclidean distance,**
(b) Pearson correlation, and (c) Spearman correlation.
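For readers who want to reproduce the three agreement measures of Figure 11, the following sketch computes them with numpy and scipy between an original record and its sanitized counterpart. The arrays are illustrative stand-ins for the actual dataset fields, which the sketch does not assume access to.

```python
# Sketch: the three measures used in Figure 11, computed between one
# original record and its sanitized version (hypothetical values).
import numpy as np
from scipy.stats import pearsonr, spearmanr

original  = np.array([12.0, 30.5, 18.2, 44.1, 27.9])
sanitized = np.array([11.6, 29.8, 19.0, 43.5, 28.4])

euclidean = np.linalg.norm(original - sanitized)   # straight-line distance
pearson, _ = pearsonr(original, sanitized)         # linear association
spearman, _ = spearmanr(original, sanitized)       # rank association

print(f"Euclidean distance:  {euclidean:.3f}")
print(f"Pearson correlation: {pearson:.3f}")
print(f"Spearman correlation: {spearman:.3f}")
```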
**7. Security Analysis**
The developed ABC-ROA-based privacy preservation system model is evaluated with various
attacks such as KCA, KPA, adaptive chosen-plaintext analysis (ACPA), and Ciphertext-Only
Analysis (COA), assessed on three datasets by comparing with recently used algorithms, as
shown in Figures 12–15, respectively. Correlating one original datum with all original data
and one sanitized datum with all sanitized data defines the KPA analysis. The KCA analysis
is described as correlating each sanitized datum with its restored datum. The ACPA attack is
similar to the CPA attack; it selects the plaintext and ciphertext that are learned from past
encryptions. A COA attack uses a known data collection. In the ABC-ROA-based privacy
preservation system, the ACPA, COA, KPA, and KCA analyses show the lowest values and
indicate the minimum error. While analyzing the evaluation of different attacks for the
designed ABC-ROA method, it is revealed that the designed ABC-ROA-based privacy
preservation over the Ethereum network attains effective performance.
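The textual definitions of KPA and KCA above leave the aggregation step open. The sketch below is therefore only one possible reading (not the authors' exact procedure): it compares record-to-record correlation structure between the original and sanitized views for KPA, and averages per-record sanitized-versus-restored correlations for KCA, over hypothetical data.

```python
# Sketch: one hedged interpretation of the KPA/KCA correlation scores.
import numpy as np

def kpa_score(original: np.ndarray, sanitized: np.ndarray) -> float:
    """Correlate each record with all records, within the original view and
    within the sanitized view, and report how far apart the two correlation
    structures are; a lower score suggests the attacker learns less."""
    corr_orig = np.corrcoef(original)   # record-vs-record correlation matrix
    corr_san = np.corrcoef(sanitized)
    return float(np.mean(np.abs(corr_orig - corr_san)))

def kca_score(sanitized: np.ndarray, restored: np.ndarray) -> float:
    """Correlate every sanitized record with its restored counterpart."""
    corrs = [np.corrcoef(s, r)[0, 1] for s, r in zip(sanitized, restored)]
    return float(np.mean(np.abs(corrs)))

rng = np.random.default_rng(0)
orig = rng.random((10, 6))                    # 10 hypothetical records
san = orig + rng.normal(0, 0.1, orig.shape)   # sanitized view
rest = orig + rng.normal(0, 0.01, orig.shape) # restored view
print("KPA:", kpa_score(orig, san), "KCA:", kca_score(san, rest))
```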
**Figure 12. Effectiveness analysis of the implemented ABC-ROA based privacy preservation model**
using KCA in terms of (a) dataset 1, (b) dataset 2, and (c) dataset 3.

**Figure 13. Performance analysis of the developed ABC-ROA based privacy preservation model using**
KPA regarding (a) dataset 1, (b) dataset 2, and (c) dataset 3.

**Figure 14. Performance analysis of the developed ABC-ROA based privacy preservation model using**
ACPA regarding (a) dataset 1, (b) dataset 2, and (c) dataset 3.

**Figure 15. Performance analysis of the developed ABC-ROA based privacy preservation model using**
COA regarding (a) dataset 1, (b) dataset 2, and (c) dataset 3.

-----
**8. Conclusions**
A new blockchain-based privacy preservation model over the Ethereum network was developed
for preserving data privacy using blockchain technology. The data were collected from
standard databases. Initially, the data were sanitized, and an optimal key was developed
using the ABC-ROA algorithm. The optimal key generation followed
the objective functions HF rate, IP rate, FR, and DM. The sanitized data progressed to
the data restoration process, which restored the data in the database. These data were
formed as subchains, known as the supply chain framework. The developed blockchain
framework gave better privacy for the data over the supply chain network with the help
of the generated optimal key. The effectiveness of the proposed blockchain-based privacy
preservation model was compared with the existing privacy preservation models. The
proposed ABC-ROA-based privacy preservation model performed 20.2% better than HHO,
17.4% better than EF-HHO, 13.7% better than BCO, and 20.7% better than ROA while
considering dataset 2 with the key variation of 50. Therefore, compared to other privacy
preservation approaches, the developed ABC-ROA-based privacy preservation model
performs better for all key variations than other heuristic algorithms. One of the most
important challenges in existing privacy preservation models over the Ethereum network
is scalability. Due to scalability issues, such models cannot provide an optimal solution
and also suffer from problems such as inefficiency and a limited block size. In this
research, the developed ABC-ROA method was utilized to solve these issues. The estimation
of convergence and the optimization of deep-structure architectures were utilized to
resolve the scalability issues. Moreover, implementing standard machine learning and deep
learning approaches provides the ability to solve these issues.
**Author Contributions: Conceptualization, Y.V.R.S.V.; Methodology, Y.V.R.S.V.; Data curation, Y.V.R.S.V.;**
Writing—Original draft preparation, Y.V.R.S.V.; Visualization, K.J.; Investigation, K.J.; Validation, Y.V.R.S.V.;
Reviewing and Editing, K.J. All authors have read and agreed to the published version of the manuscript.
**Funding: This research did not receive any specific funding.**
**Data Availability Statement: The data underlying this article are available in DataCo Smart Supply**
[Chain for Big Data Analysis database, at https://www.kaggle.com/shivkp/customer-behaviour](https://www.kaggle.com/shivkp/customer-behaviour)
(accessed on 10 January 2023).
**Conflicts of Interest: The authors declare no conflict of interest.**
**References**
1. Weng, J.; Weng, J.; Zhang, J.; Li, M.; Zhang, Y.; Luo, W. DeepChain: Auditable and Privacy-Preserving Deep Learning with
[Blockchain-Based Incentive. IEEE Trans. Dependable Secur. Comput. 2021, 18, 2438–2455. [CrossRef]](http://doi.org/10.1109/TDSC.2019.2952332)
2. Tahir, S.; Tahir, H.; Sajjad, A.; Rajarajan, M.; Khan, F. Privacy-preserving COVID-19 contact tracing using blockchain. J. Commun.
_[Netw. 2021, 23, 360–373. [CrossRef]](http://doi.org/10.23919/JCN.2021.000031)_
3. Huang, C.; Zhao, Y.; Chen, H.; Wang, X.; Zhang, Q.; Chen, Y.; Wang, H.; Lam, K.-Y. ZkRep: A Privacy-Preserving Scheme for
[Reputation-Based Blockchain System. IEEE Internet Things J. 2021, 9, 4330–4342. [CrossRef]](http://doi.org/10.1109/JIOT.2021.3105273)
4. Wu, G.; Wang, S.; Ning, Z.; Zhu, B. Privacy-Preserved Electronic Medical Record Exchanging and Sharing: A Blockchain-Based
[Smart Healthcare System. IEEE J. Biomed. Health Inform. 2021, 26, 1917–1927. [CrossRef] [PubMed]](http://doi.org/10.1109/JBHI.2021.3123643)
5. Yang, Y.; Wu, J.; Long, C.; Liang, W.; Lin, Y.-B. Blockchain-Enabled Multiparty Computation for Privacy Preserving and Public
[Audit in Industrial IoT. IEEE Trans. Ind. Inform. 2022, 18, 9259–9267. [CrossRef]](http://doi.org/10.1109/TII.2022.3177630)
6. Du, R.; Ma, C.; Li, M. Privacy-Preserving Searchable Encryption Scheme Based on Public and Private Blockchains. Tsinghua Sci.
_[Technol. 2023, 28, 13–26. [CrossRef]](http://doi.org/10.26599/TST.2021.9010070)_
7. Yang, Q.; Wang, H. Privacy-Preserving Transactive Energy Management for IoT-Aided Smart Homes via Blockchain. IEEE Internet
_[Things J. 2021, 8, 11463–11475. [CrossRef]](http://doi.org/10.1109/JIOT.2021.3051323)_
8. Tran, Q.N.; Turnbull, B.P.; Wu, H.-T.; de Silva, A.J.S.; Kormusheva, K.; Hu, J. A Survey on Privacy-Preserving Blockchain Systems
[(PPBS) and a Novel PPBS-Based Framework for Smart Agriculture. IEEE Open J. Comput. Soc. 2021, 2, 72–84. [CrossRef]](http://doi.org/10.1109/OJCS.2021.3053032)
9. Yang, Y.; Wei, L.; Wu, J.; Long, C.; Li, B. A Blockchain-Based Multidomain Authentication Scheme for Conditional Privacy
[Preserving in Vehicular Ad-Hoc Network. IEEE Internet Things J. 2021, 9, 8078–8090. [CrossRef]](http://doi.org/10.1109/JIOT.2021.3107443)
10. Baza, M.; Sherif, A.; Mahmoud, M.M.E.A.; Bakiras, S.; Alasmary, W.; Abdallah, M.; Lin, X. Privacy-Preserving Blockchain-Based
[Energy Trading Schemes for Electric Vehicles. IEEE Trans. Veh. Technol. 2021, 70, 9369–9384. [CrossRef]](http://doi.org/10.1109/TVT.2021.3098188)
11. Zhang, X.; Jiang, S.; Liu, Y.; Jiang, T.; Zhou, Y. Privacy-Preserving Scheme with Account-Mapping and Noise-Adding for Energy
[Trading Based on Consortium Blockchain. IEEE Trans. Netw. Serv. Manag. 2021, 19, 569–581. [CrossRef]](http://doi.org/10.1109/TNSM.2021.3110980)
12. Wu, Y.; Tang, S.; Zhao, B.; Peng, Z. BPTM: Blockchain-Based Privacy-Preserving Task Matching in Crowdsourcing. IEEE Access
**[2019, 7, 45605–45617. [CrossRef]](http://doi.org/10.1109/ACCESS.2019.2908265)**
13. Abdelsalam, H.A.; Srivastava, A.K.; Eldosouky, A. Blockchain-Based Privacy Preserving and Energy Saving Mechanism for
[Electricity Prosumers. IEEE Trans. Sustain. Energy 2021, 13, 302–314. [CrossRef]](http://doi.org/10.1109/TSTE.2021.3109482)
-----
14. Zou, S.; Xi, J.; Xu, G.; Zhang, M.; Lu, Y. CrowdHB: A Decentralized Location Privacy-Preserving Crowdsensing System Based on
[a Hybrid Blockchain Network. IEEE Internet Things J. 2021, 9, 14803–14817. [CrossRef]](http://doi.org/10.1109/JIOT.2021.3084937)
15. Rahman, M.S.; Khalil, I.; Moustafa, N.; Kalapaaking, A.P.; Bouras, A. A Blockchain-Enabled Privacy-Preserving Verifiable Query
Framework for Securing Cloud-Assisted Industrial Internet of Things Systems. IEEE Trans. Ind. Inform. 2021, 18, 5007–5017.
[[CrossRef]](http://doi.org/10.1109/TII.2021.3105527)
16. Zhang, C.; Zhu, L.; Xu, C.; Sharif, K. PRVB: Achieving Privacy-Preserving and Reliable Vehicular Crowdsensing via Blockchain
[Oracle. IEEE Trans. Veh. Technol. 2020, 70, 831–843. [CrossRef]](http://doi.org/10.1109/TVT.2020.3046027)
17. Chulerttiyawong, D.; Jamalipour, A. A Blockchain Assisted Vehicular Pseudonym Issuance and Management System for
[Conditional Privacy Enhancement. IEEE Access 2021, 9, 127305–127319. [CrossRef]](http://doi.org/10.1109/ACCESS.2021.3112013)
18. Lin, C.; He, D.; Huang, X.; Xie, X.; Choo, K.-K.R. PPChain: A Privacy-Preserving Permissioned Blockchain Architecture for
[Cryptocurrency and Other Regulated Applications. IEEE Syst. J. 2021, 15, 4367–4378. [CrossRef]](http://doi.org/10.1109/JSYST.2020.3019923)
19. Rahmadika, S.; Astillo, P.V.; Choudhary, G.; Duguma, D.G.; Sharma, V.; You, I. Blockchain-Based Privacy Preservation Scheme for
[Misbehavior Detection in Lightweight IoMT Devices. IEEE J. Biomed. Health Inform. 2022, 27, 710–721. [CrossRef]](http://doi.org/10.1109/JBHI.2022.3187037)
20. Xiong, T.; Zhang, R.; Liu, J.; Huang, T.; Liu, Y.; Yu, F.R. A blockchain-based and privacy-preserved authentication scheme for
[inter-constellation collaboration in Space-Ground Integrated Networks. Comput. Netw. 2022, 206, 108793. [CrossRef]](http://doi.org/10.1016/j.comnet.2022.108793)
21. Singh, S.; Rathore, S.; Alfarraj, O.; Tolba, A.; Yoon, B. A framework for privacy-preservation of IoT healthcare data using Federated
[Learning and blockchain technology. Futur. Gener. Comput. Syst. 2021, 129, 380–388. [CrossRef]](http://doi.org/10.1016/j.future.2021.11.028)
22. Guo, L.; Xie, H.; Li, Y. Data encryption based blockchain and privacy preserving mechanisms towards big data. J. Vis. Commun.
_[Image Represent. 2019, 70, 102741. [CrossRef]](http://doi.org/10.1016/j.jvcir.2019.102741)_
23. Mohan, D.; Alwin, L.; Neeraja, P.; Lawrence, K.D.; Pathari, V. A private Ethereum blockchain implementation for secure data
[handling in Internet of Medical Things. J. Reliab. Intell. Environ. 2021, 8, 379–396. [CrossRef]](http://doi.org/10.1007/s40860-021-00153-2)
24. Elisa, N.; Yang, L.; Chao, F.; Cao, Y. A framework of blockchain-based secure and privacy-preserving E-government system. Wirel.
_[Netw. 2018, 1–11. [CrossRef]](http://doi.org/10.1007/s11276-018-1883-0)_
25. Dewangan, N.K.; Chandrakar, P.; Kumari, S.; Rodrigues, J.J. Enhanced privacy-preserving in student certificate management in
[blockchain and interplanetary file system. Multimedia Tools Appl. 2022, 1–20. [CrossRef]](http://doi.org/10.1007/s11042-022-13915-8)
26. Abidi, M.H.; Alkhalefah, H.; Umer, U.; Mohammed, M.K. Blockchain-based secure information sharing for supply chain
[management: Optimization assisted data sanitization process. Int. J. Intell. Syst. 2020, 36, 260–290. [CrossRef]](http://doi.org/10.1002/int.22299)
27. [Dutta, T.; Bhattacharyya, S.; Dey, S.; Platos, J. Border Collie Optimization. IEEE Access 2020, 8, 109177–109197. [CrossRef]](http://doi.org/10.1109/ACCESS.2020.2999540)
28. Moazzeni, A.R.; Khamehchi, E. Rain optimization algorithm (ROA): A new metaheuristic method for drilling optimization
[solutions. J. Pet. Sci. Eng. 2020, 195, 107512. [CrossRef]](http://doi.org/10.1016/j.petrol.2020.107512)
29. Elgamal, Z.M.; Yasin, N.B.M.; Tubishat, M.; Alswaitti, M.; Mirjalili, S. An Improved Harris Hawks Optimization Algorithm with
[Simulated Annealing for Feature Selection in the Medical Field. IEEE Access 2020, 8, 186638–186652. [CrossRef]](http://doi.org/10.1109/ACCESS.2020.3029728)
30. Hu, H.; Ao, Y.; Bai, Y.; Cheng, R.; Xu, T. An Improved Harris’s Hawks Optimization for SAR Target Recognition and Stock Market
[Index Prediction. IEEE Access 2020, 8, 65891–65910. [CrossRef]](http://doi.org/10.1109/ACCESS.2020.2985596)
**Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual**
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.
-----
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.3390/electronics12061404?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.3390/electronics12061404, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "GOLD",
"url": "https://www.mdpi.com/2079-9292/12/6/1404/pdf?version=1678939714"
}
| 2,023
|
[] | true
| 2023-03-15T00:00:00
|
[
{
"paperId": "9156c07c27c6739408bcfceae55d9bbd1cc14df3",
"title": "Privacy-Preserving Searchable Encryption Scheme Based on Public and Private Blockchains"
},
{
"paperId": "d7fad1aa2bac0e7368d2abe21d4409d5c15af221",
"title": "Blockchain-Enabled Multiparty Computation for Privacy Preserving and Public Audit in Industrial IoT"
},
{
"paperId": "2f8fbae457ea0a597f57342d77505aebc14f09b5",
"title": "Enhanced privacy-preserving in student certificate management in blockchain and interplanetary file system"
},
{
"paperId": "7dd5e5efa92765c909cd8eaa9c9949c82ea794c8",
"title": "CrowdHB: A Decentralized Location Privacy-Preserving Crowdsensing System Based on a Hybrid Blockchain Network"
},
{
"paperId": "acb772e615373b3856abcc9411ff4778a61e683f",
"title": "A Blockchain-Enabled Privacy-Preserving Verifiable Query Framework for Securing Cloud-Assisted Industrial Internet of Things Systems"
},
{
"paperId": "2e85d94f5a6b541e11bb7a06c7e3bbde57268c32",
"title": "Blockchain-Based Privacy Preservation Scheme for Misbehavior Detection in Lightweight IoMT Devices"
},
{
"paperId": "ef89aac4a4cce36f2986d173a10dcfdc3d973493",
"title": "A Blockchain-Based Multidomain Authentication Scheme for Conditional Privacy Preserving in Vehicular Ad-Hoc Network"
},
{
"paperId": "0b21645d8e7f84ddc97734a597b9c5f222d760a3",
"title": "Privacy-Preserving Scheme With Account-Mapping and Noise-Adding for Energy Trading Based on Consortium Blockchain"
},
{
"paperId": "8776224a0a099597d5cf841776f72c0b775afe11",
"title": "A blockchain-based and privacy-preserved authentication scheme for inter-constellation collaboration in Space-Ground Integrated Networks"
},
{
"paperId": "210ac509ad7310226c3b4ceb5099d2361f888dd1",
"title": "Blockchain-Based Privacy Preserving and Energy Saving Mechanism for Electricity Prosumers"
},
{
"paperId": "d055ec0254351d709a59ecb8f20eecf3e85370ef",
"title": "A framework for privacy-preservation of IoT healthcare data using Federated Learning and blockchain technology"
},
{
"paperId": "f6f63f46c3955113a850c22f4c40fa6333ffe416",
"title": "Privacy-Preserved Electronic Medical Record Exchanging and Sharing: A Blockchain-Based Smart Healthcare System"
},
{
"paperId": "0cd88c83357ec0aaf9dbac8c79237f66a0d5ec99",
"title": "Privacy-Preserving Blockchain-Based Energy Trading Schemes for Electric Vehicles"
},
{
"paperId": "4d2cad9f22c590a474dbe58869955648a044faa0",
"title": "ZkRep: A Privacy-Preserving Scheme for Reputation-Based Blockchain System"
},
{
"paperId": "d45d23c874b6831cd22787362079c48adf318049",
"title": "A private Ethereum blockchain implementation for secure data handling in Internet of Medical Things"
},
{
"paperId": "33dac2d110a0cf088e145b12a1184af7628a248c",
"title": "Privacy-Preserving Transactive Energy Management for IoT-Aided Smart Homes via Blockchain"
},
{
"paperId": "daaaebbe1910f41da860db2e90105d84372a4691",
"title": "PRVB: Achieving Privacy-Preserving and Reliable Vehicular Crowdsensing via Blockchain Oracle"
},
{
"paperId": "1a8c6b39d67103fe0e772a63535abb0f4a16110a",
"title": "Rain optimization algorithm (ROA): A new metaheuristic method for drilling optimization solutions"
},
{
"paperId": "85be5b16c9749a4eeef3568426fd6385bbe4b29f",
"title": "Blockchain‐based secure information sharing for supply chain management: Optimization assisted data sanitization process"
},
{
"paperId": "e4de4928dfd42a68ce602d075f2d98b1f88a6bb3",
"title": "PPChain: A Privacy-Preserving Permissioned Blockchain Architecture for Cryptocurrency and Other Regulated Applications"
},
{
"paperId": "80f614b891b30f2a86aabc00c3c5d25e6d0b8f82",
"title": "Data encryption based blockchain and privacy preserving mechanisms towards big data"
},
{
"paperId": "07f229cc6e5b80fc8ffbbb3e4db85142466f55a9",
"title": "DeepChain: Auditable and Privacy-Preserving Deep Learning with Blockchain-Based Incentive"
},
{
"paperId": "fa1bbc1cd49a9872b993feb0d33a405ec2f3c94d",
"title": "BPTM: Blockchain-Based Privacy-Preserving Task Matching in Crowdsourcing"
},
{
"paperId": "4a4587a43228c9b7e0784190656ab10d583cebc1",
"title": "A framework of blockchain-based secure and privacy-preserving E-government system"
},
{
"paperId": "71e3b0b2fb888dc5aa0ff6b2d4659dff32498023",
"title": "A Survey on Privacy-Preserving Blockchain Systems (PPBS) and a Novel PPBS-Based Framework for Smart Agriculture"
},
{
"paperId": "52fba58cd69d0f71878004dd2c616d88a0799479",
"title": "Privacy-preserving COVID-19 contact tracing using blockchain"
},
{
"paperId": "c4dc2fe1d1189d5c8215b0fcc73b7708bb3d8146",
"title": "A Blockchain Assisted Vehicular Pseudonym Issuance and Management System for Conditional Privacy Enhancement"
},
{
"paperId": "b6a934727f869f620e50f84fa7f46d6159164fdc",
"title": "An Improved Harris’s Hawks Optimization for SAR Target Recognition and Stock Market Index Prediction"
},
{
"paperId": "97eabc4d6bfd27cd2f33c1c3de5cb073cd645b59",
"title": "Border Collie Optimization"
},
{
"paperId": "b384e0796db848d7b14d214d886798c1900eb09b",
"title": "An Improved Harris Hawks Optimization Algorithm With Simulated Annealing for Feature Selection in the Medical Field"
}
] | 25,772
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Business",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/0106a389bab04617737f3a786e08e31fa4afe7e5
|
[
"Computer Science"
] | 0.895569
|
Why and how informatics and applied computing can still create structural changes and competitive advantage
|
0106a389bab04617737f3a786e08e31fa4afe7e5
|
Applied Computing and Informatics
|
[
{
"authorId": "2022247",
"name": "S. Mitropoulos"
},
{
"authorId": "1728886",
"name": "C. Douligeris"
}
] |
{
"alternate_issns": [
"2634-1964"
],
"alternate_names": [
"Appl Comput Informatics"
],
"alternate_urls": [
"https://www.journals.elsevier.com/applied-computing-and-informatics/",
"https://www.emeraldgrouppublishing.com/journal/ajms"
],
"id": "256a5738-77cc-4cee-ad61-ef18264f639c",
"issn": "2210-8327",
"name": "Applied Computing and Informatics",
"type": "journal",
"url": "http://www.elsevier.com/wps/find/journaldescription.cws_home/724375/description#description"
}
|
Purpose: In the new digital age, enterprises are facing increasing global competition. In this paper, we first examine how Information Technology (IT) can play an important role in giving a significant competitive advantage to modern enterprises. The business value of IT is examined, as well as the limitations and the trade-offs that its applicability faces. Next, we present the basic principles for a successful IT strategy, considering the development of a long-term IT renovation plan, the strategic alignment of IT with the business strategy, and the adoption of an integrated, distributed, and interoperable IT platform. Finally, we examine how a highly functional and efficient IT organization can be developed. Design/methodology/approach: Our methodological approach was based on the answers to the following questions: 1. Does IT still matter? 2. What is the business value created by IT along with the corresponding limitations and trade-offs? 3. How could a successful IT strategy be built up? 4. How could an effective IT planning aligned with the business strategy be built up? 5. How could a homogenized and distributed corporate IT platform be developed? and, finally, 6. How could a high-performance IT-enabled enterprise be built up? Findings: In order to succeed in the new digital era, enterprises need to: 1. synchronize their IT strategy with their business strategy, 2. formulate a long-term IT strategy, 3. adopt IT systems and solutions that are implemented with elasticity, interoperability, distribution, and service-orientation, and 4. keep a strategic direction towards the creation of an exceptional organization based on IT. Originality/value: This paper is original with respect to the integrated approach with which the overall problem is examined. There is a prototype combined investigation of all perspectives for an effective enforcement of IT in a way that causes acceleration in competitive advantage when conducting business.
|
https://www.emerald.com/insight/2210-8327.htm

# Why and how informatics and applied computing can still create structural changes and competitive advantage

## Sarandis Mitropoulos
### Regional Development, Ionian University, Lefkada, Greece, and
## Christos Douligeris
### Informatics, University of Piraeus, Piraeus, Greece

Abstract
Purpose – In the new digital age, enterprises are facing increasing global competition. In this paper, we first
examine how Information Technology (IT) can play an important role in giving a significant competitive
advantage to modern enterprises. The business value of IT is examined, as well as the limitations and the
trade-offs that its applicability faces. Next, we present the basic principles for a successful IT strategy,
considering the development of a long-term IT renovation plan, the strategic alignment of IT with the business
strategy, and the adoption of an integrated, distributed, and interoperable IT platform. Finally, we examine
how a highly functional and efficient IT organization can be developed.
Design/methodology/approach – Our methodological approach was based on the answers to the following
questions: 1. Does IT still matter? 2. What is the business value created by IT along with the corresponding
limitations and trade-offs? 3. How could a successful IT strategy be built up? 4. How could an effective
IT planning aligned with the business strategy be built up? 5. How could a homogenized and distributed
corporate IT platform be developed? and, finally, 6. How could a high-performance IT-enabled enterprise be
built up?
Findings – In order to succeed in the new digital era, enterprises need to: 1. synchronize their IT strategy
with their business strategy, 2. formulate a long-term IT strategy, 3. adopt IT systems and solutions that are
implemented with elasticity, interoperability, distribution, and service-orientation, and 4. keep a strategic
direction towards the creation of an exceptional organization based on IT.
Originality/value – This paper is original with respect to the integrated approach with which the overall
problem is examined. There is a prototype combined investigation of all perspectives for an effective
enforcement of IT in a way that causes acceleration in competitive advantage when conducting business.
Keywords New technologies, IT strategy, Strategic alignment, Business value, SOA
Paper type Research paper

1. Introduction
The steadily increasing global market competition is significantly reducing the turnover time
of innovation products. Information Technology (IT) can provide a significant competitive
market advantage to an enterprise because it dynamically promotes relationships with its
customers and suppliers, strengthens strategic alliances, promotes innovative products and
services, adapts existing solutions, and achieves lower operating costs, while it improves the
internal business processes. It is true that new technologies, such as the cloud, mobile
computing, the Internet of Things (IoT), and the use of artificial intelligence (AI) have brought

© Sarandis Mitropoulos and Christos Douligeris. Published in Applied Computing and Informatics.
Published by Emerald Publishing Limited. This article is published under the Creative Commons
Attribution (CC BY 4.0) licence.
Anyone may reproduce, distribute, translate and create derivative works\nof this article (for both commercial and non-commercial purposes), subject to full attribution to the\n[original publication and authors. The full terms of this licence may be seen at http://creativecommons.](http://creativecommons.org/licences/by/4.0/legalcode)\n[org/licences/by/4.0/legalcode](http://creativecommons.org/licences/by/4.0/legalcode)\n\n\n## Informatics and applied computing\n\nReceived 7 June 2021\nRevised 19 July 2021\n16 September 2021\nAccepted 17 September 2021\n\nApplied Computing and\nInformatics\nEmerald Publishing Limited\ne-ISSN: 2210-8327\np-ISSN: 2634-1964\n[DOI 10.1108/ACI-06-2021-0149](https://doi.org/10.1108/ACI-06-2021-0149)\n\n\n-----\n\n## ACI\n\n\naround structural changes in the way businesses operate as well as in the way markets\nfunction. For example, the internet has created hyperarchies that influence the creation of\nnew business models [1], by replacing the traditional enterprise hierarchies, while on the other\nhand it has necessitated a comprehensive review of the traditional business strategies. This\nrevision of the companies’ business strategies, as well as the new supply value chain have\nsignificantly increased the productivity, the quality of services offered to customers and the\noverall value proposition of the enterprises.\nNevertheless, there are many enterprises, which, despite investing a significant amount of\nmoney and human resources in applied computing, as well as in relevant research, have failed\nin having the initially expected advantages. This happened because these enterprises failed\nto adopt and then successfully implement the new e-business and e-governance models [2]. In\nthe context of rapidly responding to market needs and to their fierce competition, many\nenterprises have only pursued short-term benefits from applied computing and informatics\nby trying only to accelerate the development of new services and products at the lowest\npossible operating costs. But ignoring the fundamentals of strategy formulation has led to a\nconvergence of business practices based on cost leadership [3], something we have seen\nhappening in the past with the dotcom enterprises. This problem still exists with the latest\ninformatics technologies, where we see that enterprises still do not incorporate the\ninformatics and computing applications in their processes in the right way [2, 4, 5]. The main\nfaults that enterprises make regarding the adoption of informatics are that they do not:\n\n(1) synchronize their IT strategy with the goals of their business strategy,\n\n(2) formulate a long-term IT strategy,\n\n(3) keep a strategic direction towards the creation of an exceptional organization based\non IT,\n\n(4) adopt effective IT systems, and integrated IΤ solutions; instead these are\nimplemented hastily, inelastically and without interoperability.\n\nThe exponential growth of ubiquitous computing drives the need for new business models,\nwhich must serve an effective IT-enabled business strategy directly related to the Digital\nTransformation. The problem space here is how the effective adoption of new IT technologies\ncan be achieved which in turn drives the requirements of renewed business strategy and\nprocesses, and of culture change [29]. The main problem which modern enterprises face\nconcerns the right IT implementation along with the adaptation of the right strategy, process,\nand culture. 
This paper tries to answer all these requirements in a consistent and
methodological way. Even though a considerable body of research towards this direction
exists, there is still a gap for the development of a perfect alignment between the strategy the
enterprises must follow and an intelligent and effective way the new technologies are
implemented so that they create a competitive advantage [5]. The Internet, cloud computing,
mobile computing, service-oriented architectures (SOA), the Internet of Things (IoT), blockchain,
server virtualization and other modern technologies provide the enterprises with the
opportunity to migrate from the traditional business models to new ones [30, 31, 35, 36].
Although there exist theoretical frameworks in the literature [2–5], there is still a lack of
frameworks that combine the characteristics of hybrid approaches, i.e. both research and
practical methods. The outcomes of the existing approaches show that the use of IT has not
been proven fully effective, except in some exceptional cases.
To achieve an effective IT incorporation, this paper proposes a framework with very
specific suggestions. This framework can be used as a roadmap for future readiness and
growth. The evaluation discussion shows the effectiveness of the proposed framework and
methodology. This framework in short consists of the following steps:

-----

(1) Examination of the expected business value to be created by IT along with any
expected limitations and necessary trade-offs.

(2) Formulation of an effective IT strategy aligned with the overall business strategy.

(3) Development of an effective IT planning.

(4) Development of a homogenized, interoperable, and distributed IT platform.

(5) Development of a high IT-enabled organization in all its dimensions.

(6) Establishment of an on-going evaluation system for the continuous improvement of
the IT strategy and the corresponding infrastructure.

The issues and steps mentioned above are thoroughly analysed and potential solutions are
proposed. A thorough and comprehensive investigation of whether IT can still make
structural changes in the operation of businesses is performed. In addition, it is examined how
this can be achieved with respect to the creation of business value considering the
corresponding limitations and necessary trade-offs. But since creating value without
developing a rational strategy is impossible, the principles of a sound IT strategy are
examined, as well as how this can be effectively developed. Finally, implementation issues
and evaluation results are discussed, as well as potential future work.

2. IT still does matter
Many researchers argue that IT no longer offers any innovation over the competition and
that it has, therefore, reached the stage of maturing as a service [6]. This viewpoint has been
expressed without considering its high and disruptive evolution. There is obviously a part of
IT that is common to almost all enterprises, making this utility approach workable.
Nevertheless, this is only one side of the coin because IT is creating new situations that
accelerate the developments in the operation of markets and enterprises. The concept of
ubiquitous computing, for example, is a new trend which will surely cause structural changes,
while new ideas in virtual communities can create significant changes in corporate
collaboration by forming on-demand virtual business partner hyperarchies.
In addition, new\nintelligent algorithms for machine learning, artificial intelligence and biotechnology are being\ndeveloped, thus facilitating research into new products (e.g. drugs and crystal solid materials)\nand services (e.g. telemedicine) with significant business and social benefits [7]. Thus, the new\ninformation technologies can leverage the capacity for innovation. Enterprises through the\nsmart and targeted use of IT will be able to create growth, and innovative products and\nservices at the right time.\nIn addition, enterprises need to focus on another perspective of IT, that of the growing\nbusiness value through digital culture. A well-designed business strategy, synchronized\nwith that of IT, creates an innovation-oriented corporate culture. Such a culture is not easy\nto be copied from the competition, thus, giving the enterprises that adopt it a clear lead.\nThere are many important examples that prove this to be true, such as the ones of Toyota\nand Dell – their supply chain and production chain management practices make them\nstand out [6].\nThe innovative and smart use of IT (e.g. green informatics) can create an additional\ncompetitive advantage which, most of the times, is closely connected with a change of culture\n\n[8]. However, the technology is not a panacea. In practice, many enterprises do not effectively\nenforce the new IT-enabled business models because their adoption of IT has not been\ndirected to develop a highly IT-enabled corporate culture, they have not integrated their\ndigital strategy with the overall business strategy, or they have not adequately understood\nthe new technology trends while developing a corresponding strategy.\n\n\n## Informatics and applied computing\n\n\n-----\n\n## ACI\n\n\n3. Business value creation: limitations and trade-offs\nNowadays, the relationship between business strategy and IT-based innovation is a strongly\ninteracting one. A successful business strategy must define an IT infrastructure driven by\ninnovation. Obviously, all of the expected benefits offer improved business performance and\nare, therefore, critical to the business success [5, 9].\nIT enables an increased control of the operating costs as well as of productivity. Indeed,\nthe assembly of products e.g. through the Computer Integrated Manufacturing (CIM) and the\nRobotics systems of the industry 4.0 era, increases productivity and quality, while reducing\ncosts. We also notice that information technology significantly increases the knowledge of\nthe business environment through monitoring systems, such as the ones used in the product\ndelivery status or in the stock levels of warehouses. IT systems can help develop staff skills\nthrough e-learning and on-the-job on-line assistance.\nNevertheless, there exist several criteria which need to be fulfilled for enterprises to be able\nto differentiate from competitors. Such criteria include among others the brand name visibility,\nthe quick-to-market response, and the service quality [10]. Despite these advantages, there are\nimportant limitations that need to be considered, such as the existence of isolated and\nproprietary software applications and databases. Outdated information (legacy) systems,\nwhich require high-cost adapters to bridge the newer applications, and the use of a wide range\nof different and heterogeneous technologies, are two other examples. The low utilization of\nthe existing IT resources is also a problem. 
All these limitations significantly reduce the\nefficiency of IT, the process support, the access and dissemination of information, the\nimplementation of new projects, and the staff adaptation to the new conditions, while they\nfurther increase the operating costs.\nThere are solutions to such problems, which enterprises are called upon to adopt, such as\ne.g. the Enterprise Application Integration (EAI), the rapid development of new applications,\nand the development of business process management systems By gradually adopting such\nsolutions an enterprise can in a reasonable time develop a much more integrated business\nenvironment, launch new services in a timely fashion, achieve world-wide access, improve its\nbusiness operations control, acquire resources on demand, work on a flexible infrastructure\nand lower costs [2, 10].\nMost of the times, the enforcement of these IT solutions requires the consideration of a\nvariety of trade-offs, while the main question is: “is there adequate capacity, qualities and\nstrategic direction for a change from the old economy environment to the new one?”. The\nanswer to this question includes: (1) the adoption of an innovation culture, which in turn\nrequires management of change and capability building up for the human resources, and (2)\nan elastic and interoperable infrastructure [11, 12].\nIn addition, the changing of an operational business model can drive the requirements\nfor higher adaptation in the application software portfolio creating another trade-off\namong efficiency, innovation, experimentation, and conformance to the relevant standards\n\n[11–13].\n\n4. The proposed methodological approach\nGiven that IT is still considered to be able to bring about significant structural changes and\nprovide a business competitive advantage in the modern markets along with all the related\noperational benefits, the question is how this goal can be achieved methodologically. This\ndevelopment can be considered to have the following four main dimensions:\n\n(1) Development of a successful IT strategy, whose goals should be synchronized with\nthe business strategy, e.g. there can be no strategy to penetrate global markets, while\nthe company’s information system cannot support the internationalization of the\ninformation it offers and manages.\n\n\n-----\n\n(2) The IT development plan which should consider the organizational structure of\nenterprise, the demanded quality of service to its customers, the firm’s human\nresources and the existing IT infrastructure and systems. Everything may need to be\nchanged partially or to a very large extent.\n\n(3) Development of the IT Infrastructure as a homogenized and distributed service\nplatform provided either to the internal customers of the company or to the external\nones.\n\n(4) Development of services and policies for management, research and innovation,\ntraining for the creation of a technologically high-performance enterprise,\n\nAll these key dimensions of development of innovation and IT solutions are shown in\nFigure 1 as phases.\n\n5. Building up a successful IT strategy\nA business strategy expresses the vision, the mission and the main business goals of a\nenterprise or organization. The business goals must be interpreted into subgoals that are\nenforced on specific high-level business domains, e.g. sales, customer relationships,\nproduction, and logistics. 
In fact, these domains, according to the balanced scorecard\n(BSC) approach [5], are influenced by the corporate strategy regarding the perspectives of the\nfuture readiness and the innovation, the internal process improvement, the customer\norientation, the cost control, and the financial goals. It is obvious that enterprises need to\neffectively approach these perspectives if they want to achieve a transition from the oldfashion business models to the new ones. Thus, the strategic components incorporated in\nthese perspectives, must be refined and improved.\nThese components, in fact, construct the high-level business domains upon which the\ngoals of the business strategy must be enforced. Of course, these domains must contain\nsubdomains or, in other words, lower-level domains that express the implementation\ncomponents of an enterprise or organization. Such components include among others\ntechnical processes, business data, operations, and informatics implementations. This topdown approach is very useful for the executives because it provides them with a tool for the\nsuccessful enforcement of a business strategy considering all its dimensions.\nThe IT potentiality in terms of new technologies and solutions, makes the IT strategy to be\nthe determinant while the business strategy to be the weak area which needs improvement\n\n[5]. This fact is expressed by the Strategic Alignment (SA) model [14] for the business and IT\nstrategies and the organisational and IT infrastructures.\nFor example, ubiquitous computing brings new capabilities to the enterprises and, thus, it\npushes for a revision of the corresponding business strategies. The goal is to achieve a\n\n\n## Informatics and applied computing\n\nFigure 1.\nThe basic dimensions\nof a high-performance\nenterprise based on\ninnovation and new\ntechnologies\n\n\n-----\n\n## ACI\n\n\ncompetitive advantage over the competition through the development of new products and\nthe appropriate modification of the business scope, the distinct competencies, and the\nbusiness governance, along with the improvement of the organizational infrastructure which\nconcerns the business processes, the human resource capability, and the administrative\nstructure. As mentioned, this approach helps enterprises to conform to the new requirements\nthat arise due to the new technological solutions and the potentialities created by them.\nTowards this direction, the following are proposed:\n\n(1) IT and business strategy alignment,\n\n(2) dimensioning of IT resource requirements,\n\n(3) adoption of new IT architectures and their management, based on innovation and\nadaptability, and\n\n(4) selection of Key Performance Indicators (KPI) for evaluation reasons.\n\nThe first point imposes an IT strategic plan fully aligned with the business strategy. The next\npoint asks for an open, interoperable, scalable, and distributed IT infrastructure, while the\nthird point relates to a high-performance IT-enabled enterprise [4, 5, 15]. Key performance\nmetrics are always required so that the evaluation of the adopted overall approach to be\npossible.\n\n6. Developing an effective IT planning aligned with the business strategy\nIT planning must incorporate the current as well as the future enterprise needs and\ntechnological trends. Thus, it must cover long-term issues as well as whatever is necessary\nfor future readiness. 
A Strategic Alignment (SA) between the business and IT strategies\naddresses four main areas: business strategy, IT strategy, organizational infrastructure, and\nIT infrastructure [5, 16]. These areas need to be aligned with each other according to the\nbusiness requirements and the type of enterprises. Namely, the requirement for alignment\nvaries from enterprise to enterprise and while it is generally very helpful, it may, however,\nlimit the degrees of freedom in some business cases. In fact, trade-offs and equilibria between\nstrategic alignment and flexibility must be addressed successfully. For example, the\nproduction and other business operations, like the customer relationship management (CRM),\nare positively affected by the Strategic Alignment. On the other hand, the business planning,\nthe marketing, and the sales are less influenced by the SA. Specifically, enterprises, whose\ncritical operations do not focus on IT, do not require such a strict SA. The more supportive\nand functional the role of IT is, the more SA it requires. On the contrary, where IT moves\nwithin a strategic role, the need for a strict SA decreases [17–19]. The following perspectives\ncan be considered in an IT environment [20]:\n\n(1) IT infrastructure: emerging IT infrastructure technologies and solutions call for the\nreformulation of the IT strategy, while the business strategy is implicitly impacted.\n\n(2) IT organization infrastructure: emerging IT calls for the reformulation of the\norganizational infrastructure, while the business strategy is implicitly impacted.\n\n(3) competitive potentiality: emerging IT capabilities, as well as new IT governance\npatterns call for the reformulation of the business strategy, while the organizational\ninfrastructure is implicitly impacted.\n\n(4) service level: the IT resources use, as well as the orientation to the customer calls for\nthe reformulation of the IT infrastructure, while the organizational infrastructure is\nimplicitly impacted.\n\n\n-----\n\n7. Developing a homogenized and distributed corporate IT platform\nThe following services can be offered in an enterprise network in an open and interoperable\nmanner [21, 22]:\n\n(1) electronic services for channel-management, as far as all the involved parties, like\nenterprises, clients, and suppliers, are concerned,\n\n(2) security, that concerns the IT resources protection,\n\n(3) communication mechanisms needed for the communication and interworking\nbetween all the internal/external business entities,\n\n(4) database and file management services, for the purpose of making the required data\nand files available over the enterprise network,\n\n(5) application services, for the purpose of making the required applications, like EPR,\nSCM and HRM, available over the enterprise network, and\n\n(6) management of IT facilities, needed for integration and synchronization of the\ninfrastructure layers and for provision of servers and platforms.\n\nNew trends in IT technologies call for the distribution, homogenization, integration, and\ninteroperability of systems and services [23]. 
Open IT standards and SOA are among the
current information systems technologies that enterprises need to move towards because
they offer increased interoperability, transfer of application services to heterogeneous
environments, enterprise application integration, service reusability, high operational control
and monitoring, and flexible service configuration and measurability in service
performance [24].
A SOA helps to create networks of services for the purpose of their common management.
Service orientation here is mentioned as an architectural approach for the new IT systems
without restricting its implementation to specific offered solutions, such as the Enterprise
Service Bus (ESB). In most cases, service orientation must be followed due to the benefits
mentioned above. Figure 2 introduces the conceptual approach of a service grid, where
application services interact with each other through several underlying platform services,
like routing, message exchange and message queuing, data and knowledge management,
distributed APIs, security, accounting, QoS assurance, and system monitoring and control.

Figure 2. The conceptual approach of SOA and the Service Grid (platform services comprise
protocols such as REST, XML/SOAP, WSDL and UDDI; transportation via routing, queuing and
MOM; utilities for accounting and security; data and knowledge management via directories,
discovery, DBMS and NFS; and management of QoS, SLAs and monitoring).

Indeed, the adoption of IT service grids can prove to be a key factor for enterprises in their
efforts to gain a significant competitive advantage. New informatics and applied services
need to be distributed, scalable, open, and reliable in a low-cost and quick-to-market
deployment fashion. Furthermore, this adoption will cause channel enhancements, like
disintermediation, mitigation of information asymmetry, world-wide access, virtualization,
cost reduction and control, efficient management information, economies of scale, and
improved strategic positioning.
Service grids will be enhanced with the new ubiquitous computing capabilities. For
example, mobile computing can effectively leverage the quality of service, as well as the
collaboration between employers, employees, customers, strategic partners, and third parties
[25]. Furthermore, smartphones can provide a variety of functionalities, like user interaction,
task management, user online help, blogging, wiki, chatting, and remote access from trusted
parties or customers through appropriate authentication and authorizations, that can
significantly leverage the operational effectiveness of modern enterprises. The dissemination
of notifications (e.g. Google Cloud Messages) in mobile apps is another example which can
facilitate business operations.
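To make the service-grid idea concrete, the following toy sketch (not part of the original paper) shows the publish/discover role such a grid plays in a SOA. The class and method names, and the sample "inventory.check" service, are purely illustrative; a production deployment would add the security, QoS, accounting, and transport layers of Figure 2, typically behind an ESB or API gateway.

```python
# Sketch: a minimal service registry illustrating SOA-style discovery.
from typing import Callable, Dict


class ServiceRegistry:
    """Toy publish/discover registry for application services."""

    def __init__(self) -> None:
        self._services: Dict[str, Callable[..., object]] = {}

    def publish(self, name: str, handler: Callable[..., object]) -> None:
        # In a real grid the service would be advertised via WSDL/UDDI
        # or a REST service catalogue rather than an in-process dict.
        self._services[name] = handler

    def invoke(self, name: str, *args, **kwargs) -> object:
        if name not in self._services:
            raise LookupError(f"no provider registered for '{name}'")
        return self._services[name](*args, **kwargs)


registry = ServiceRegistry()
registry.publish("inventory.check", lambda sku: {"sku": sku, "in_stock": True})
print(registry.invoke("inventory.check", "A-1001"))
```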
Towards this direction [21, 24], the following management services can be identified:

(1) IT administration services, for the platforms, IT systems planning and project management, the SLAs, and the negotiations with IT suppliers,

(2) IT architecture services, which require the incorporation of system management policies for the effective management of IT resources,

(3) IT R&D services, concerning new products, services, processes, and operations using IT, and

(4) training services in the use of IT, strongly needed for the capacity building of the enterprise's staff.

Along with the management, the overall business processes, as well as any activity involved in a process, need to be designed effectively. It has been well documented that business processes which are supported by IT, along with the right organizational and management services, are significantly more efficient than those of the old economy [26, 27]. The supply chain is a classic example that proves this point. Figure 3 illustrates the supply chain process. (Figure 3. The supply chain.) In brief, creating a performance-oriented IT organization requires process innovation and efficiency, effective communication with the customers, the partners, and the suppliers, inventory management, development of new products and services, and agile adaptation of the existing ones.

IT-enabled enterprises need to be supported by several key functionalities and technologies [22, 28, 33, 34]. Bridges between the different information systems via appropriate interfaces, e.g. using web services (XML- or JSON-based approaches), are also required. Of course, the adoption of these technologies may raise critical security and risk management issues. All the vulnerabilities and security holes must be eliminated in an effective way.

9. Analysis, discussion and evaluation
Hereafter, some examples are provided regarding the right way to employ new technologies in modern enterprises. As mentioned in the Accenture report "Improving Business ROI with Digital Technologies" [1], "to become a more efficient finance organization, considerable investments must be made to improve processes and technology". According to this survey, the reasons for failure are that (1) there is no clear strategy and vision, (2) legacy systems are not dealt with effectively, and (3) the existing digital capabilities are not adequately understood. The outcomes of this survey are fully aligned with the assumptions of our research presented above.

In the report entitled "The ROI of IoT: The 7 benefits it can bring to your business" [2], it is stated that "if you know how your key equipment and assets are behaving, and how people are interacting with them, you can unlock the value of that data through an IoT software program that delivers actionable insights and improves business processes in future". This is a key issue for the development of improved long-term planning and strategy, which in turn is necessary for the success of an IT investment project, according to our research.

According to [32], 88% of companies reported that they already use cloud services, while 50% of companies expect to have all their data stored in the cloud within a period of two years. The same survey revealed that a significant share of enterprises (about 47%) already have cloud-based applications in production, while a significant percentage intend to develop such applications.
However, these enterprises are almost entirely concerned with the security of their data, which requires a high-performance IT-enabled enterprise, according to our research.

Finally, according to the study conducted in [22], by adopting virtualization technology for a computer centre, the investment gradually yielded profits exceeding the annual investment cost over a period of five years. The return on investment (ROI) became positive after the second year, while during the fourth year, the total investment cost became significantly lower than the projected profits.

It should be noted that, for a continuous evaluation of the IT strategy and its implementation, it is necessary to define critical success factors of the organizational and IT infrastructures that are linked to certain KPIs. These indicators will assess not only the effectiveness of internal processes, customer satisfaction or financial gains, but also the achievement of the structural changes necessary to adapt the organization to an ever-evolving environment.

10. Conclusions and future work
This paper presented, in a concise yet detailed way, the very important role that IT still plays nowadays in modern enterprises, giving them a competitive advantage in modern globalized markets. It was posited that enterprises that adopt IT in a smart and innovative way have a significant capacity building capability, improved future readiness, and better strategic positioning.

In the future, we intend to evaluate our approach in an enterprise. The process of analysis will include the business strategy, processes and practices, organizational structures, IT infrastructure, etc. The whole task faces the restrictions of the confidentiality of business information, which makes access to all this information rather difficult, especially when this information may be made public. Before the application of our methodology, an organization requires an adequate mapping (a master plan); then, a road map should be developed with all the technical details of the integration of practices, plans, technological solutions, evaluations, etc. Evaluations could be based, among others, on the implementation of a balanced scorecard, where the results will be expressed via Key Performance Indicators concerning critical success factors (CSFs) in the internal processes, the quality of services to the customers, and the financial profit, which is essentially the main motivation of every enterprise.

Notes
1. https://www.accenture.com/nl-en/blogs/insights/how-digital-technologies-improve-business-roi (last access: 14/7/2021).
2. https://blog.worldsensing.com/critical-infrastructure/roi-iot/ (last access: 14/7/2021).

References

1. Naved K, Tabassum F. Reinventing business organizations: the information culture framework. Singapore Management Rev. 2005; 27(2): 37-63.

2. Porter ME, Heppelmann JE. How smart, connected products are transforming competition. Harv Business Rev, Spotlight on Managing the Internet of Things. November 2014.

3. Teece DJ. A capability theory of the firm: an economics and (strategic) management perspective. New Zealand Economic Papers. 2019; 53(1), Taylor & Francis.

4. Tiwana A. IT strategy for non-IT managers. MIT Press.
2017, London, England.

5. Mitropoulos S. An integrated model for formulation, alignment, execution and evaluation of business and IT strategies. Int J Business Syst Res. 2021; 15(1): 90.

6. Vandenbosch B, Lyytinen K. Much ado about IT: a response to "the corrosion of IT advantage" by Nicholas G. Carr. J Business Strategy. 2004; 25(6): 10-12.

7. Schmidt J, Marques MRG, Botti S, Marques MAL. Recent advances and applications of machine learning in solid-state materials science. 2019. Available at: https://www.nature.com/articles/s41524-019-0221-0.

8. Benlamri R, Sparer M. Leadership, innovation and entrepreneurship as driving forces of the global economy. Proceedings of the 2016 ICLIE, Springer Proceedings in Business and Economics. 2016.

9. Koi-Akrofi GW. Justification for IT investments: evaluation methods, frameworks, and models. Texila Int J Management. 2017; 3(2).

10. Melarkode A, From-Poulsen M, Warnakulasuriya S. Delivery agility through IT. Business Strategy Rev. Autumn 2004.

11. Wadhwa M, Harper A. Technology, innovation and enterprise transformation. IGI Global Book Series Advances in Business Information Systems and Analytics (ABISA). 2015, Hershey, USA.

12. Schrage M, Kiron D, Hancock B, Breschi R. Performance management's digital shift. MIT Sloan Management Rev. February 26, 2019. Available at: https://sloanreview.mit.edu/projects/performance-managements-digital-shift/.

13. Dong J, Yang CH. Business value of big data analytics: a systems-theoretic approach and empirical test. Inf Management. 2020; 57(1): 103124.

14. Ilmudeen A, Bao Y, Alharbi IM. How does strategic alignment affect firm performance? The roles of information technology investment and environmental uncertainty. J Enterprise Inf Management. 2019; 32(3): 457-76.

15. Davies P. Strategic objectives and principles. Version V1.0, University of Sussex, 9 April 2015. Available at: http://www.sussex.ac.uk/its/pdfs/IT_Strategy_2015.pdf.

16. Chugunov A, Misnikov Y, Roshchin E, Trutnev D. Electronic governance and open society: challenges in Eurasia. 5th International Conference, EGOSE 2018. 2018.

17. Tallon P. How information technology infrastructure flexibility shapes strategic alignment: a case study investigation with implications for strategic IS planning. Plann Inf Syst. 2015; 15: 425-55.

18. Tallon P, Kraemer K. Investigating the relationship between strategic alignment and IT business value: the discovery of a paradox. In: Shin N (Ed.), Creating business value with IT: challenges and solutions. Hershey, PA: Idea Group Publishing, 2003.

19. Kaleka A, Morgan NA. How marketing capabilities and current performance drive strategic intentions in international markets. Ind Marketing Management. 2017; 78: 108-121.

20. Coleman P, Papp R. Strategic alignment: analysis of perspectives. Proceedings of the 2006 Southern Association for Information Systems Conference, March 11–12, 2006, Florida, FL.

21. Weill P, Subramani M, Broadbent M. Building IT infrastructure for strategic agility. MIT Sloan Management Rev. Fall 2002.

22. Lambropoulos G, Mitropoulos S, Douligeris C.
Improving business performance by employing virtualization technology: a case study in the financial sector. Computers. 2021; 10(4): 52.

23. Razis M, Mitropoulos S. An integrated approach for the banking intranet/extranet information systems: the interoperability case. Int J Business Syst Res. 2021, Inderscience Publishers, forthcoming paper.

24. Katsikogiannis G, Kallergis D, Garofalaki Z, Mitropoulos S, Douligeris C. A policy-aware service oriented architecture for secure machine-to-machine communications. Ad Hoc Networks. 2018; 80, November 2018, Elsevier Science Publishers.

25. Chassapis P, Mitropoulos S, Douligeris C. A prototype mobile application for the Athens Numismatic Museum. J Appl Comput Inform. 2020, ahead-of-print, Emerald Publishers.

26. Mitropoulos S. A simulation-based approach for IT and business strategy alignment and evaluation. Int J Business Inf Syst. 2012; 10(4): 369-396, Inderscience Publishers.

27. Mitropoulos S, Giannakos K, Achlioptas J, Douligeris C. A prototype workflow MIS for supply chain management: architecture, implementation and business evaluation. Int J Business Inf Syst. 2021, Inderscience Publishers, forthcoming paper.

28. Mitropoulos S, Mitsis C, Valacheas P, Douligeris C. An online emergency medical management information system using mobile computing. J Appl Comput Inform. 2020, online version, Emerald Publishers.

29. Sayabek Z, Suieubayeva S, Utegenova A. Digital transformation in business. ISCDTE 2019: Digital Age: Chances, Challenges and Future, Lecture Notes in Networks and Systems. 2020; 84: 408-415, Springer, Cham.

30. Gimpel H, Röglinger M. Digital transformation: changes and chances - insights based on an empirical study. 2015. Available at: https://fim-rc.de/wp-content/uploads/2020/02/Fraunhofer-Studie_Digitale-Transformation.pdf.

31. IDC. Exploring the impact of infrastructure virtualization on digital transformation strategies and carbon emissions, white paper. 2019. Available at: https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/company/vmware-exploring-impact-of-infrastructure-virtualization-on-digital-transformation-strategies-and-carbon-emissions-whitepaper.pdf.

32. Oracle, KPMG. Oracle and KPMG cloud threat report. 2020. Available at: https://www.oracle.com/a/ocom/docs/cloud/oracle-cloud-threat-report-2020.pdf [Accessed 12 June 2021].

33. Huang M, et al. An effective service-oriented networking management architecture for 5G-enabled internet of things. Computer Networks.
2020; 173: 107208.

34. Niu Y, et al. Exploiting device-to-device communications in joint scheduling of access and backhaul for mmWave small cells. IEEE JSAC. 2015; 33(10): 2052-2069.

35. Ahmad Qadri Y, et al. The future of healthcare internet of things: a survey of emerging technologies. IEEE Commun Surv Tutorials. 2020; 22(2): 1121-1167.

36. Bera B, et al. Designing blockchain-based access control protocol in IoT-enabled smart-grid system. IEEE Internet Things J. 2021; 8(7): 5744-5761.

Corresponding author
Sarandis Mitropoulos can be contacted at: smitropoulos@ionio.gr
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.1108/aci-06-2021-0149?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1108/aci-06-2021-0149, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "GOLD",
"url": "https://www.emerald.com/insight/content/doi/10.1108/ACI-06-2021-0149/full/pdf?title=why-and-how-informatics-and-applied-computing-can-still-create-structural-changes-and-competitive-advantage"
}
| 2021
|
[] | true
| 2021-10-01T00:00:00
|
[
{
"paperId": "f5d9cdc4e20b80e455d1016d797f626788edbc07",
"title": "An online emergency medical management information system using mobile computing"
},
{
"paperId": "943df55f3385793fbbee9ca23597baa1cde334a2",
"title": "Improving Business Performance by Employing Virtualization Technology: A Case Study in the Financial Sector"
},
{
"paperId": "3456d7b76609193aec77ed861b6c40c4736763d8",
"title": "Designing Blockchain-Based Access Control Protocol in IoT-Enabled Smart-Grid System"
},
{
"paperId": "67023ff6f414784305e0ddf5aa7af8e2f167a5ab",
"title": "A prototype mobile application for the Athens Numismatic Museum"
},
{
"paperId": "d6ea75b325737824d9232f95a227ad2d04fe9c6a",
"title": "An effective service-oriented networking management architecture for 5G-enabled internet of things"
},
{
"paperId": "efcc84bc5542453001483bc9c81c7d4c6c409b80",
"title": "The Future of Healthcare Internet of Things: A Survey of Emerging Technologies"
},
{
"paperId": "b0aea93214fc501217f5091e8886520ff6a8b794",
"title": "Business value of big data analytics: A systems-theoretic approach and empirical test"
},
{
"paperId": "0273507eb05f1135f3a05f9c7adc9a56f12c7c5c",
"title": "Recent advances and applications of machine learning in solid-state materials science"
},
{
"paperId": "3150c9cde9ba3bc220c87971e59fd8bb5979318f",
"title": "How does business-IT strategic alignment dimension impact on organizational performance measures"
},
{
"paperId": "5b9d8e9a622dad3f0550b51ac1641295cc4497b2",
"title": "A capability theory of the firm: an economics and (Strategic) management perspective"
},
{
"paperId": "758153bebcd9e4e90cac7cb1166d5cf8f303d842",
"title": "How Does Strategic Alignment Affect Firm Performance? The Roles of Information Technology Investment and Environmental Uncertainty"
},
{
"paperId": "35b5c808a8e82dba941252d7447acf3679edf5c6",
"title": "A policy-aware Service Oriented Architecture for secure machine-to-machine communications"
},
{
"paperId": "4c9885c4a071b864732ef7f89b89bd8d130bd516",
"title": "Justification for IT Investments: Evaluation Methods, Frameworks, and Models"
},
{
"paperId": "22cfeafc9fd60144cc8e74fbab10c3a6701bd198",
"title": "How marketing capabilities and current performance drive strategic intentions in international markets"
},
{
"paperId": "06765dd985e923bb50787f2679af61e45c5a2f88",
"title": "How Information Technology Infrastructure Flexibility Shapes Strategic Alignment: A Case Study Investigation with Implications for Strategic IS Planning"
},
{
"paperId": "76116b7fa6ace076fc82a9b0508a990ad6624070",
"title": "Exploiting Device-to-Device Communications in Joint Scheduling of Access and Backhaul for mmWave Small Cells"
},
{
"paperId": "8119a80c6059bfda198e1f6e5b52cf7351b0962d",
"title": "How Smart, Connected Products Are Transforming Competition"
},
{
"paperId": "4d632c0828decf67cba6b58a05ed305a65dd4b54",
"title": "Technology, Innovation, and Enterprise Transformation"
},
{
"paperId": "a8be48fa449bb6db40ac7ea062d433a0add5681f",
"title": "A simulation-based approach for IT and business strategy alignment and evaluation"
},
{
"paperId": "194edbd38b15ee3b1c4d72d98ce109ff966385fe",
"title": "Much ado about IT: a response to “the corrosion of IT advantage” by Nicholas G. Carr"
},
{
"paperId": "959a535954985bcc2ec55b9f76ec9cbbab47fab4",
"title": "Building IT Infrastructure for Strategic Agility"
},
{
"paperId": "c3d9ef3c9c664559bb7e7bb63f6beefd3a43e86b",
"title": "A prototype workflow MIS for supply chain management: architecture, implementation and business evaluation"
},
{
"paperId": "605a7299c12396b64ec2b5bb1b0f0cb175a5025c",
"title": "An integrated approach for the banking intranet/extranet information systems: the interoperability case"
},
{
"paperId": "044750a44d4accc4724beaeb73eb2c4c1e133086",
"title": "An integrated model for formulation, alignment, execution and evaluation of business and IT strategies"
},
{
"paperId": "35cbc9b8d5250b99416f80b764c5ac2eb17f9d93",
"title": "Electronic Governance and Open Society: Challenges in Eurasia"
},
{
"paperId": "6fcba108774e4023a43649930d55c612780e8730",
"title": "IT Strategy for Non-IT Managers: Becoming an Engaged Contributor to Corporate IT Decisions"
},
{
"paperId": "b7d6826b49c6f152dd3c223e13216e377dd55bc2",
"title": "Leadership, Innovation and Entrepreneurship as Driving Forces of the Global Economy"
},
{
"paperId": "e8f3656f673e7edaf1b6ada2366df07710c5f801",
"title": "Digital Transformation : Changes and Chances – Insights based on an Empirical Study"
},
{
"paperId": "06865e2dd2ee73aeacf83e3daceb30b86a0550a4",
"title": "STRATEGIC ALIGNMENT: ANALYSIS OF PERSPECTIVES"
},
{
"paperId": "a8dd24d8f87cc7d3d8d69542be9bed166c85c031",
"title": "Investigating the Relationship between Strategic Alignment and Information Technology Business Value: The Discovery of a Paradox"
},
{
"paperId": "4df9d9574ebd255ca8cf1fb8cc1d78adab605618",
"title": "Investigating the relationship between strategic alignment and IT business value: the discovery of a paradox"
},
{
"paperId": null,
"title": "Strategic objectives and principles"
},
{
"paperId": null,
"title": "Digital transformation in business, ISCDTE 2019: digital age: chances, challenges and future lecture notes in networks and systems"
},
{
"paperId": null,
"title": "Performance management's digital shift. MIT Sloan Management Rev"
},
{
"paperId": null,
"title": "Reinventing business organizations: the information culture framework"
},
{
"paperId": null,
"title": "Exploring the Impact of infrastructure virtualization on digital transformation strategies and carbon emissions"
},
{
"paperId": null,
"title": "Delivery agility through IT"
},
{
"paperId": null,
"title": "Oracle and KPMG cloud threat report"
}
] | 9,334
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/01079c5c50c9b17fe9c05eccc3764f25df77393c
|
[
"Computer Science"
] | 0.906166
|
A Link-Layer Virtual Networking Solution for Cloud-Native Network Function Virtualisation Ecosystems: L2S-M
|
01079c5c50c9b17fe9c05eccc3764f25df77393c
|
Future Internet
|
[
{
"authorId": "2072871840",
"name": "Luis F. Gonzalez"
},
{
"authorId": "143917913",
"name": "I. Vidal"
},
{
"authorId": "145764713",
"name": "F. Valera"
},
{
"authorId": "2232383964",
"name": "Raul Martin"
},
{
"authorId": "2232320867",
"name": "Dulce Artalejo"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": [
"http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:ch:bel-156830",
"https://www.mdpi.com/journal/futureinternet"
],
"id": "c3e5f1c8-9ba7-47e5-acde-53063a69d483",
"issn": "1999-5903",
"name": "Future Internet",
"type": "journal",
"url": "http://www.e-helvetica.nb.admin.ch/directAccess?callnumber=bel-156830"
}
|
Microservices have become promising candidates for the deployment of network and vertical functions in the fifth generation of mobile networks. However, microservice platforms like Kubernetes use a flat networking approach towards the connectivity of virtualised workloads, which prevents the deployment of network functions on isolated network segments (for example, the components of an IP Telephony system or a content distribution network). This paper presents L2S-M, a solution that enables the connectivity of Kubernetes microservices over isolated link-layer virtual networks, regardless of the compute nodes where workloads are actually deployed. L2S-M uses software-defined networking (SDN) to fulfil this purpose. Furthermore, the L2S-M design is flexible to support the connectivity of Kubernetes workloads across different Kubernetes clusters. We validate the functional behaviour of our solution in a moderately complex Smart Campus scenario, where L2S-M is used to deploy a content distribution network, showing its potential for the deployment of network services in distributed and heterogeneous environments.
|
_Article_
# A Link-Layer Virtual Networking Solution for Cloud-Native Network Function Virtualisation Ecosystems: L2S-M
**Luis F. Gonzalez *** **, Ivan Vidal *** **, Francisco Valera** **, Raul Martin and Dulce Artalejo**
Telematic Engineering Department, Universidad Carlos III de Madrid, Avda. Universidad, 30,
28911 Leganés, Spain; fvalera@it.uc3m.es (F.V.); 100384060@alumnos.uc3m.es (R.M.);
100384053@alumnos.uc3m.es (D.A.)
*** Correspondence: luisfgon@it.uc3m.es (L.F.G.); ividal@it.uc3m.es (I.V.)**
**Abstract:** Microservices have become promising candidates for the deployment of network and
vertical functions in the fifth generation of mobile networks. However, microservice platforms
like Kubernetes use a flat networking approach towards the connectivity of virtualised workloads,
which prevents the deployment of network functions on isolated network segments (for example, the
components of an IP Telephony system or a content distribution network). This paper presents L2S-M,
a solution that enables the connectivity of Kubernetes microservices over isolated link-layer virtual
networks, regardless of the compute nodes where workloads are actually deployed. L2S-M uses
software-defined networking (SDN) to fulfil this purpose. Furthermore, the L2S-M design is flexible
to support the connectivity of Kubernetes workloads across different Kubernetes clusters. We validate
the functional behaviour of our solution in a moderately complex Smart Campus scenario, where
L2S-M is used to deploy a content distribution network, showing its potential for the deployment of
network services in distributed and heterogeneous environments.
**Keywords: microservices; cloud computing; virtual networks**
**Citation:** Gonzalez, L.F.; Vidal, I.; Valera, F.; Martin, R.; Artalejo, D. A Link-Layer Virtual Networking Solution for Cloud-Native Network Function Virtualisation Ecosystems: L2S-M. Future Internet 2023, 15, 274. https://doi.org/10.3390/fi15080274

Academic Editor: Izzat Alsmadi

Received: 14 July 2023; Revised: 7 August 2023; Accepted: 15 August 2023; Published: 17 August 2023

**Copyright:** © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

**1. Introduction**
In the last couple of years, the continuous development of the Internet has led to an
unprecedented increase in the demand for telecommunication services from users. This
increase has brought new challenges for operators and service providers, which have
been forced to adopt new models and disruptive paradigms to accommodate the ever-increasing demand. These challenges include, among others, shortening development cycles and reducing the time needed to launch new services to the market; supporting the continuous update of services, transparently to users, to satisfy the constant demand for innovation; and supporting the scalable operation of services, taking into account a potentially high number of users to whom an appropriate quality of experience must be delivered.
In response to these challenges, cloud technologies, particularly the cloud-native
model, have received great interest from the involved actors in the provision of Internet
services. According to the Cloud Native Computing Foundation (CNCF) [1], “Cloud native
_technologies empower organisations to build and run scalable applications in modern, dynamic_
_environments such as public, private, and hybrid clouds”. This model favours an application_
design based on microservice architectures [2], in contrast to traditional approaches based
on monolithic designs. By following a microservices approach, an application is developed
as a set of small services that communicate through simple, well-defined mechanisms, for example, based on HTTP. This model overcomes the inherent limitations of the monolithic design, where every application is developed as a single indivisible block, which requires more coordination between development teams, introduces further complexity in the updating processes, and does not allow parts of an application to be scaled independently.
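As a toy illustration of this communication style (not taken from the paper), the sketch below wires two hypothetical microservices together over plain HTTP using only the Python standard library; the port number and the /items endpoint are arbitrary assumptions.

```python
# A toy sketch of two microservices communicating over HTTP, using only the
# Python standard library. Port and endpoint names are arbitrary assumptions.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class CatalogueHandler(BaseHTTPRequestHandler):
    """A minimal 'catalogue' microservice exposing one well-defined endpoint."""
    def do_GET(self):
        if self.path == "/items":
            body = json.dumps({"items": ["video-1", "video-2"]}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

# Run the catalogue service in a background thread (in a real deployment,
# each service would typically run in its own container).
server = HTTPServer(("127.0.0.1", 8081), CatalogueHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# A second "frontend" microservice consumes the catalogue over plain HTTP.
with urllib.request.urlopen("http://127.0.0.1:8081/items") as resp:
    print(json.load(resp))  # {'items': ['video-1', 'video-2']}

server.shutdown()
```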
In a cloud-native model, microservices can be executed in virtualisation containers [3].
This allows a high degree of flexibility when deploying an application since containers are
lightweight in comparison with traditional virtualisation platforms based on hypervisors.
Containers can be exported to other virtualisation platforms with enough computational,
networking and storage capacity to run them. Furthermore, each container packs all the software needed to run its microservice independently from the rest, without the need to emulate an entire operating system. In the same fashion, containers offer a scalable solution that allows a service to be flexibly adapted to its demand. For example, if an application experiences a sharp
increase in traffic, new containers can be quickly deployed to provision for this demand,
minimising service cut-offs. Given the rise in the popularity of container technology, there
are several platforms that allow the management and orchestration of containers, both
open source, such as Kubernetes (K8s) [4], Docker Swarm [5] or OpenStack [6], as well as
solutions offered by cloud providers, like Google Kubernetes Engine [7] or Amazon Elastic
Kubernetes Service [8]. It is worth mentioning that K8s has become the most popular tool in the cloud-native services market. According to the 2021 survey of CNCF [9], performed over the global cloud-native community (including software and technology, financial services, consultancy and telecommunication organisations), 96% of respondents reported the use
of K8s in their organisation.
The rise of the cloud-native model has provided multiple benefits for the deployment
of Internet services. However, microservice technologies have also been regarded as excellent candidates for the deployment of network functions in cloud-native environments for
the next generation of mobile networks (5G and 6G). The network function virtualisation
(NFV) paradigm has greatly assisted in the agile deployment and development of network
services (NSes) in both cloud and edge environments. NFV aims at the softwarisation of
network services and functionalities through the use of virtualisation technologies, such
as virtual machines (VMs), reducing the deployment and development costs since it is
not necessary to develop and maintain the dedicated hardware involved in the provision
of some network functions. Naturally, containers have been regarded as the next step
for the deployment of NSes under the NFV umbrella since their lightweight nature and
easier management can enhance the provision of NSes in comparison with more computationally demanding solutions. The provision of a network service naturally involves a certain degree of orchestration among containers to ensure its proper functionality. In this
regard, microservice platforms like K8s are excellent candidates for this purpose, thanks
to their orchestration and abstraction tools (automated rollouts and rollbacks, self-healing
properties, service discovery, etc.) that assist in the swift and efficient deployment of NSes in
data centre environments, allowing microservices to properly interact with each other to
offer a complex application.
Some examples of NSes that could be deployed using NFV technologies include
load balancing, service discovery and routing functionalities. All these services must be
connected to one or several virtual networks able to isolate each VNF in different local area
network (LAN) domains. This behaviour provides these functions with a finer control of
their networking aspects within the platform where they are deployed, regardless of their
location. These virtual networks are of the utmost importance for any NFV deployment, since they enable different VNF instances to operate independently and securely, isolated at the network level, thus making it possible to implement the VNF chaining necessary for
the deployment of complex NSes.
Virtual networks are necessary for the deployment of NSes in cloud-native environments. However, microservice platforms, like K8s, usually take a hands-off approach
towards the connectivity of container networking: the flat networking approach. In this
model, all microservices are visible to each other at the network (IP) layer through the
use of networking agents deployed over the platform. These communications are especially useful when applications and workloads communicate through application-layer mechanisms (i.e., using APIs to communicate with one another), since these mechanisms dissociate the network configuration from the application itself. This approach in
turn can provide high availability and higher resilience to failures, and its implementation
can benefit microservice-based applications in multiple ways (high availability, automatic
service discovery, etc.).
Unfortunately, there is an important downside that could limit the deployment of
NSes in microservice platforms, as this flat networking approach prevents the creation
and management of virtual networks, which are necessary to interconnect all the VNFs
that compose an NS. Since all microservices are able to “see” each other at the network layer, there is no isolation between them. In consequence, the required VNF chaining cannot be performed. Therefore, it is impossible to effectively deploy NSes in microservice
platforms that only implement a flat networking approach towards their connectivity, as
they lack the necessary tools to create and manage the required virtual networks used in
NFV deployments.
In order to address this limitation, this paper presents a networking solution that
enables the link-layer connectivity of microservice platforms using software-defined networking (SDN) technology. More concretely, link-layer secure connectivity for microservice
platforms (L2S-M) provides a programmable data plane that virtualised workloads can use
to connect with each other at the link layer, enabling the establishment of point-to-point or multi-point links between workloads in a microservice environment. Furthermore,
this paper also explores the potential of L2S-M to provide link-layer communications
between workloads located in different clusters if they are managed by a microservice
platform, or sites, which can be managed by other virtualisation platforms based on VMs.
As a validation use case, this paper presents a Smart Campus scenario, where L2S-M is deployed to interconnect different campuses located in geographically distributed
scenarios and managed using distinct virtualisation technologies/orchestration functions,
implementing a content delivery network (CDN) service to provide multimedia content in
a university environment.
**2. Background**
The world of telecommunications and services has recently experienced an unprecedented demand for more efficient, resilient and robust applications able to support the
ever-increasing demand of consumers, thanks to the new services and solutions that have
flourished under the umbrella of the 5th Generation of mobile networks (5G). In this regard,
the traditional monolithic design used in the development of telecommunication services
falls short in many aspects. Under this model, the functionality of a complex application
is wrapped as a single program or process, usually running inside a single host able to
have all the necessary resources to execute its modules and functionalities. This model
has significant drawbacks inherent to this design architecture: higher complexity, less
adaptability (a change in one module can have effects in the entirety of the code), lower
scalability and long-term support [10]. To combat all these challenges that prevent the
effective development of new telecommunication applications and services, microservices
architectures have risen as a key enabler towards the development and deployment of
scalable, resilient and cost-efficient applications. In this model, each application is split
into individual modules that can be distributed among several hosts and architectures.
In order to build a complex functionality, the modules are able to communicate with each other regardless of their physical location, and each module can operate independently of the rest of the services, as it will usually perform a single task [11]. This model assists with all of
the problems that come with monolithic applications since complexity can be alleviated by
focusing on each module (i.e., not having to modify an entire program), and scalability is
increased through the deployment of multiple copies of each module used in the provision
of the service (as well as being distributed in multiple architectures).
Precisely due to this paradigm shift in application development, there is a conscious
effort to apply novel virtualisation technologies to enable this transition, since traditional virtualisation technologies (i.e., hypervisor-based solutions) use more resources than
container-based technologies [3]. Virtual machines rely on hypervisors, which are software
programs that operate at the hardware level within a host: their purpose is the emulation of an entire operating system isolated from the host on which they run, including its
kernel. Containers, on the other hand, use the core operating system in the host to execute
some of their functionalities, while having a file system isolated from the host [12]. Since
containers require far fewer resources than a virtual machine, a single host is able to deploy
a wider array of functionalities and applications, which in turn provides higher scalability,
resiliency and efficiency in comparison with virtual machines [3,13,14].
This performance improvement makes containers the perfect candidate for distributed architectures and for the deployment of applications within the 5th generation of networks. With this idea in mind, there have been
conscious efforts to apply container technology for the deployment of network functions
and verticals, for instance, in [14], where the authors define an application/service development kit (SDK) for the deployment of NFV-based applications for both hypervisor-based
and containerised approaches. In [15], the authors build a lightweight Docker-based architecture for virtualising network functions to provide IoT security (although VNFs were
connected using standard Docker network capabilities).
One fundamental aspect of container-based environments is guaranteeing that these
containers are provided with a functional network interface, which enables the communication between microservices and other elements or devices outside (for example, devices
connected to the Internet). In this regard, CNCF provides the container network interface
(CNI) solution [16], a specification and a set of libraries to provide a reference framework to
develop plugins that allow the configuration of network interfaces for containers. Currently,
this reference framework has been adopted by multiple container management solutions,
such as K8s, which supports a wide array of networking plugins for microservice platforms: Flannel [17], Calico [18] or Multus [19] are some of the
most widely known (and utilised) in K8s. In consequence, the CNI model eases the addition
of network interfaces in containers, allowing the connectivity between containers through
their respective network interfaces. This connectivity model is clearly appropriate for
microservice-based applications, where all of them must be able to communicate with each other (due to their intrinsic design, as described in previous paragraphs).
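To make the CNI model concrete, the sketch below builds the kind of JSON configuration file that standard CNI plugins consume, here the reference bridge plugin with host-local IPAM; the network name, bridge name and subnet are illustrative assumptions, not taken from the paper.

```python
import json

# A minimal sketch of a CNI network configuration, as consumed by the
# reference "bridge" plugin of the CNI project. The network name, bridge
# name and subnet below are illustrative assumptions.
cni_config = {
    "cniVersion": "0.4.0",
    "name": "example-net",          # hypothetical network name
    "type": "bridge",               # reference bridge plugin
    "bridge": "cni0",               # Linux bridge created on the node
    "isGateway": True,              # the bridge acts as the pods' gateway
    "ipMasq": True,                 # masquerade traffic leaving the node
    "ipam": {
        "type": "host-local",       # node-local IP address allocation
        "subnet": "10.244.0.0/16",  # illustrative pod address range
        "routes": [{"dst": "0.0.0.0/0"}],
    },
}

# CNI runtimes typically read such files from a directory like /etc/cni/net.d.
print(json.dumps(cni_config, indent=2))
```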
Naturally, microservice platforms tend to use these CNI solutions to define how the different microservices interact with each other. In this regard, K8s explicitly states that every pod, the minimal computation unit that can be deployed in K8s, composed of one or more containers sharing a common network namespace, must have its own IP address, so it is not necessary to explicitly build links between pods. Moreover, K8s
imposes the restriction that all pods deployed in any networking implementation must be
able to communicate with all other pods on any other node without the use of network
address translation (NAT) functionalities.
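This flat model can be observed directly from the K8s API: every pod carries a cluster-wide routable IP address in its status. The short sketch below uses the official kubernetes Python client to list those addresses; it assumes a reachable cluster and a valid kubeconfig.

```python
# A minimal sketch illustrating the flat networking model: every pod
# exposes its own IP address, reachable from any other pod without NAT.
# Assumes the official "kubernetes" Python client and a valid kubeconfig.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    # pod.status.pod_ip is the cluster-wide address assigned by the CNI plugin
    print(f"{pod.metadata.namespace}/{pod.metadata.name} -> {pod.status.pod_ip}")
```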
Regarding the logical interactivity of the services running within a microservice
platform, although CNI plugins enable the network connectivity of the containers that are
deployed over a cluster (as explained beforehand), they do not define how the microservices interact with each other using these communications, i.e., they do not
implement the networking logic that a module (or set of modules) must have in order to
provide the functionality of a complex application, or its relation with each module. For this
reason, microservice platforms usually rely on the service mesh concept to connect different
services (networking abstractions that can integrate one or several microservices, usually of
the same type) and define how each microservice communicates within an infrastructure.
Service mesh solutions like Istio [20] and Envoy [21] use proxy functionalities that build
a data plane between the microservices over a cluster, and use these functions to filter
traffic and apply policies to the data sent to/from the proxies. This service mesh has its
limitations, however, as it does not provide isolation between microservices since it only
modifies routing information in a host, so the flat network implemented with the CNIs
is still present regardless (i.e., all microservices can still see each other at the network
(IP) level).
However, although this connectivity model is appropriate for applications, it is important to mention that it presents some limitations when deploying NFV services. In
the NFV paradigm, services are deployed as sets of VNFs interconnected through virtual
networks. These virtual networks provide the abstraction of point-to-point or multi-access
links to the VNFs: they allow two or more VNFs to effectively connect to a single link-layer
network, sharing a single broadcast domain, where all connected VNFs can be seen as
‘neighbours’ at a single IP hop distance. Furthermore, the traffic transmitted over a virtual
network is not accessible to VNFs and entities outside the virtual network. Under the
previous considerations, the connectivity model of cloud-native platforms presents some difficulties in supporting the abstraction offered by the virtual networks commonly used in
NFV ecosystems.
Despite these limitations, the integration of cloud-native technologies in the NFV
ecosystem can present important advantages. Among others, we can mention the following: (a) the use of lightweight containers, portable and scalable, and the use of continuous
integration and continuous deployment (CI/CD) methodologies, offering a solution to
tackle the development and deployment of NFV services in microservice platforms; (b) the
immense popularity of the cloud-native model and its adoption state, both in development and production environments, opens up new opportunities to the incorporation
of developers, manufacturers and cloud service providers into the NFV market, which
will positively impact the innovation process and the flexibility of options for service
deployment; additionally, access to a vast catalogue of virtual functions, developed through the cloud-native model, would be enabled for the provision of NFV services of added value; and (c) the current initiatives to bring cloud-native technologies to edge environments, like KubeEdge [22], OpenYurt [23] or K3s [24], centred around the use of K8s. These initiatives represent a promising alternative for providing potentially
limitless computation, storage and networking resources for the automatic deployment of
operator services and verticals in the future.
Even though the flat networking approach has been the de facto standard in microservice platforms, some CNI solutions have tried to go beyond this model to provide
networking functionalities similar to the ones implemented in other virtual infrastructure
manager (VIM) solutions. For example, the OpenShift SDN network plugin [25] allows the
isolation of pods at the project level and the application of traffic policies, which can help
with isolating workloads in a cluster (although they will still see each other at the network
level, and it is only available for OpenShift clusters). The Nodus network controller [26]
enables the creation of subnetworks that pods can use to enable their connectivity in a K8s
cluster. However, this subnetting is limited since the subnetworks are all located in the
same IP range, so pods are not completely isolated within a K8s cluster. The Kube-OVN
CNI plugin [27] implements an OVN-based network virtualisation with K8s, which allows
the creation of virtual subnetworks (not dependent on a single IP address range, contrary
to the Nodus solution) that pods are able to attach to. This solution has its limitations, however, as it does not allow the implementation of traffic engineering policies to route traffic between pods, it is not compatible with physical networking devices in a K8s cluster, and its inter-cluster capabilities are limited (it can only connect workloads at the network level).
The open-source software community is taking the first steps towards the evolution
of networking models in cloud-native technologies, as well as adapting them to the NFV
ecosystem. As an example, the ETSI Open Source MANO (OSM) project [28], whose main
objective is developing a management and orchestration platform for NFV environments
in accordance with the ETSI standards, has supported the deployment of virtualised functions
in K8s clusters since its SEVEN release. Nevertheless, this support is limited since it does
not enable the creation of virtual links (VLs) to enable the isolated connectivity of different
Kubernetes-based VNFs (KNFs) in an NS, as K8s does not natively provide a networking
solution for creating virtual networks (i.e., OSM only deploys KNFs but does not define
their connectivity, and all KNFs can communicate with each other). Managing VLs (usually represented as virtual networks) is nevertheless an important step towards the integration of NFV in microservice platforms, since it is a fundamental aspect of the effective
deployment of NSes, as seen in works like [29], where the authors perform a comprehensive
analysis of NFV systems and deployments, or [30], where the authors explain in detail
the use of virtual networking and its performance impact depending on the virtual link
types. In this regard, the interest of the research community in closing the gap between
NFV and microservice platforms can be seen in such works as [31], where the authors
propose an architecture based on the monitoring, analysis, planning, and execution (MAPE)
paradigm for the optimisation of network service performance to guarantee quality of
service (QoS) in NFV environments, including container-based solutions, such as Docker
and Kubernetes. Similarly, another example of this effort can be seen in the work [32],
where the authors propose an implementation of a novel framework that enables the optimal deployment of container-based VNFs, using a multi-layer approach based on monitoring
and analysing data from the physical, virtual and service layers.
There have also been proposals to enhance the networking of microservice platforms
in the NFV context. One prominent example is the network service mesh (NSM) [33].
NSM offers a set of application programming interfaces (APIs) that allow the definition of
network services (for example, an IP router, a firewall or a VPN tunnel), and establishes
virtual point-to-point links between pods that want to access determined network services
and pods that implement such services. NSM is designed to provide its connectivity service
in scenarios with multiple clusters or cloud infrastructures, keeping the CNI solution used
in every cluster. NSM presents a promising approach for exploiting the potential of cloud-native technologies in an NFV ecosystem. In such an ecosystem, NSM would provide the
abstraction of a network service. For example, if a VNF offers an IP routing service, NSM
would allow the establishment of virtual point-to-point links among this VNF and the
remaining VNFs that must have IP connectivity with the former, depending on the NFV
service to be deployed. However, this connectivity service does not provide the versatility
of a virtual network. On the one hand, NSM does not allow connecting multiple VNFs to a single link-layer network in such a way that they can share a single broadcast domain
(i.e., NSM does not offer the multi-access link abstraction). This aspect can be a limiting
factor to deploy telecommunication or vertical services in an NFV ecosystem. On the other
hand, the NSM APIs do not offer an open interface that allows the cloud infrastructure
administrator to flexibly manage the existing virtual links. Following the previous example,
it could be desirable to change the configuration of a point-to-point link in such a way that
it terminates in another IP router instance in order to support load balancing; mirroring
port configurations could be required to monitor data traffic transmitted over a link; or
the temporary shutdown of certain links and their subsequent activation could be needed
when managing a security incident; etc.
**3. Virtual Networking for Microservice Platforms: L2S-M**
_3.1. Problem Statement_
Due to the intrinsic nature of cloud-native environments, applications are developed
and deployed following these architectural principles: scalability, since applications should be able to increase or decrease their performance based on demand and cost-effectiveness; elasticity, as applications should be able to dynamically adapt their workloads to react to possible changes in the infrastructure; and agility, as applications should be deployed as fast as possible to minimise service downtime.
In principle, an application should be able to be deployed over a distributed cloud in
such a way that it can provide its service with no, or minimal, disruption, while also
adapting its performance (and distribution) to the demand and available resources in an
infrastructure [2]. One prominent case that focuses on this philosophy is microservice
platforms: each function is composed of several modules (usually implemented as separate
containers, as described in previous sections) that interact with each other to execute a
complex functionality for one (or sometimes several) applications. These modules might
not be deployed in the same infrastructure, physical equipment or virtual machine: it is
usual that they are distributed across multiple geographical locations, or even spread across
multiple clouds managed by different service providers. This model is an antithesis of
the monolithic design (i.e., a single application with all its components embedded in its
code), and it provides several advantages over its counterpart: high availability since a
module might have several copies distributed over the cloud; resilience since a failure in
one module does not compromise the entire functionality of the application; and shorter
development and deployment cycles.
Taking into account the main characteristics of cloud-native environments, it is essential to define the networking between applications and modules to allow for seamless
connectivity without compromising any of the benefits of the microservice model. In
order to preserve such advantages, most solutions rely on a flat networking approach that
facilitates communication among deployed functions and modules across different clouds,
independently of the physical location and network configurations [17,18]. In this model,
each microservice is able to reach, at the IP level, all of the containers that are deployed within an infrastructure. By using this approach, high-level APIs, such as RESTful APIs, can be implemented to enable the exchange of information between functions (since all containers are connected with each other, their proper communication can only be guaranteed
through application-layer mechanisms).
Naturally, in order to effectively implement a complex application, it is essential that the networking solution enables external connectivity for some of its modules (containers). As it is usually not possible to directly send data to a particular container from outside a microservice platform, platforms like K8s rely on networking abstractions to enable this connectivity from the outside. In this regard, K8s relies on Services to expose its pods to the exterior. For this purpose, K8s uses the Service API to define a logical set of endpoints (usually pods) and make those pods accessible within the platform; K8s itself then distributes the incoming traffic to each pod. Furthermore, some CNI plugins (like Calico [18]) also implement mechanisms to filter traffic to/from the pods, for instance, defining network policies that filter out undesired traffic (at the application layer).
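As an illustration, the sketch below creates such a Service abstraction with the official kubernetes Python client; the service name, selector label, namespace and ports are illustrative assumptions.

```python
# A minimal sketch of exposing a set of pods through a K8s Service, using the
# official "kubernetes" Python client. Name, labels, namespace and ports are
# illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

service = client.V1Service(
    metadata=client.V1ObjectMeta(name="web-frontend"),
    spec=client.V1ServiceSpec(
        selector={"app": "web"},  # endpoints: pods labelled app=web
        ports=[client.V1ServicePort(port=80, target_port=8080)],
        type="ClusterIP",         # reachable from inside the cluster
    ),
)

# K8s itself load-balances traffic arriving at the Service across the
# selected pods.
v1.create_namespaced_service(namespace="default", body=service)
```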
It is clear that this model has its advantages in cloud-native environments: as applications might not be permanently deployed in a single node or cloud (either due to its
unavailability or to optimise resources), a flat networking approach allows communication
with/to an application regardless of its distribution and IP addressing. Furthermore, this
model allows application developers to disregard the inner networking being performed
in an infrastructure since it is completely transparent to the applications themselves. In
other words, developers can assume that other modules of an application will always have
connectivity, and that no further configurations should be required in the modules to be
used in the infrastructure.
Despite its advantages, this model might not be suitable for all the services that could
be provided through cloud-native platforms. In a previous work, we realised the potential
of microservice platforms [34] as enablers for the deployment of network and vertical services
in the context of the 5th and 6th generations of mobile networks (5G/6G) in cloud-native
environments (particularly, microservice platforms, which are a subset of functions in
the cloud-native model). Microservice platforms, like Kubernetes (K8s), usually employ
container technology to build applications in their managed infrastructures. Due to their
lightweight nature, in some environments (e.g., resource-constrained scenarios), these platforms have multiple benefits over traditional virtualisation solutions like OpenStack, which
usually require more resources for the management and deployment of NSes. However,
due to the intrinsic nature of NSes, it is necessary to ensure that communication at the
lower layers is available (and not just at the application layer) for the functions that build
the service.
One prominent example of the necessities of virtual networking can be seen in the
implementation of a router functionality. A traditional router must analyse the incoming packets at the IP level from each of its network interfaces in order to determine the destination to which they will be forwarded. However, this functionality can only be performed if each one of its
interfaces is located in a different LAN. The same situation is present when dealing with
virtual networks: each function is located in a different LAN (regardless of its geographical location), and a router functionality would have one interface in each
virtual network, enabling the analysis of the incoming/outgoing packets to be forwarded to
the corresponding functions located in different (isolated) virtual networks. In consequence,
functions are isolated between each other at the network level and can only communicate
through the corresponding router, which enables the secure deployment and isolation of
network functionalities, as well as their chaining (necessary for NS implementations).
However, this behaviour is impossible to achieve in flat networking approaches: if
all the functionalities are located in the same LAN (like they are in the flat networking
approach), then there is no isolation between functions since they are in the same network
domain, so the router cannot “decide” the routing/forwarding of the incoming/outgoing
packets to each function. Hence, this problem heavily hinders the implementation of networking services in cloud-native environments to host NSes in one, or several, clouds.
To further illustrate the issue that microservice platforms have for the implementation
of NSes in cloud environments, Figure 1 showcases a logic implementation of a multimedia
content delivery service, in particular, a simple content delivery network (CDN) service.
This CDN has two HTTP proxies: the first one is utilised to cache the content sent from a multimedia server, bringing the content closer to the end users (reducing download times and the bandwidth used in the network), while the second one is a firewall
that filters undesired requests from a function outside the CDN. Ideally, a service of such
characteristics must enable sending the requests from/to the corresponding proxies and
servers involved in the CDN, following the schema depicted in the upper side of the
figure. Furthermore, the HTTP proxy must be able to analyse incoming packets at the
network layer to appropriately filter undesired requests. Implementing services like the
one depicted in this example is often performed in cloud environments through virtual
links (i.e., virtual networks) that enable the isolated communication of different modules,
which can only reach the corresponding peers of a particular virtual network. This enables
the creation of links between functions similar to the ones that are used in NSes in the
context of NFV, enabling the connectivity of microservices in schemas like the one shown
in the upper part of the figure.
**Figure 1. CDN service in a traditional networking approach vs. flat networking.**
Unfortunately, due to the intrinsic behaviour of the networking model used in microservice platforms, an implementation of such characteristics is impossible to achieve. As
it is depicted in the bottom part of the figure, microservice platforms with flat networking
approaches do not isolate these components: all of them are able to see each other at the
network level, usually through a set of networking agents in charge of forwarding traffic to
the rest of the components deployed over the platform. Although these agents can usually
discriminate traffic based on some protocols (normally application-layer based), they do not completely isolate these components from one another since they all remain reachable at the IP layer.
With the drawbacks of flat networking approaches in microservice platforms in mind,
this paper presents a solution that enables the use of virtual networking for the deployment
of network services and verticals in cloud-native environments: L2S-M. L2S-M aims at the
provision of secure link-layer connectivity to NSes in cloud-native environments. Instead of
being developed as a full networking solution to replace established connectivity solutions
in different microservice platforms, L2S-M provides a flexible complementary approach to
allow containers present in a cloud to attach into a programmable data plane that enables
point-to-point or multi-point link-layer connectivity with any other container managed by
the platform, regardless of its placement inside the infrastructure. This programmable data
plane relies on software-defined networking (SDN) techniques to ensure the isolation of the
link-layer traffic exchanged between containers. Moreover, SDN allows the application of
traffic engineering mechanisms over the programmable data plane based on several factors
like traffic priority or delay.
_3.2. Functional Design of L2S-M_
In our previous work [35], L2S-M was first introduced as a complementary networking
service to enable the deployment of NSes in NFV infrastructures (NFVIs) composed of
resource-constrained environments (particularly, aerial microservice platforms based on
K8s, as it is the de facto microservice platform and the one that could provide significant
advantages in comparison with other VIMs [34] in UAV networks). In this regard, L2S-M
was created to address the limitations that NFV Orchestration could have in such scenarios
as seen in our previous works [36,37]. Particularly, L2S-M enables the creation of virtual
networks that could connect different VNFs at the link-layer level, which is essential to
ensure the deployment of NSes for aerial networks (as seen in our previous works like [38]).
Moreover, the use of SDN could allow modifying the paths that traffic could use in aerial
ad hoc network scenarios in response to sudden cut-offs, instead of leaving this task to the
routing protocol in an aerial network.
However, it is clear that there is a need for bringing virtual networking solutions in
cloud and edge microservice-based platforms, which is necessary to ensure the proper
provision of applications and services in the form of cloud network functions (CNFs),
particularly in K8s platforms. The flexibility of its design allows L2S-M to not interfere
with standard networking models already implemented in microservice platforms, such
as K8s, bringing a complementary solution instead that can be exploited by any developers/platform owners interested in deploying CNFs in microservice platforms. Moreover,
L2S-M also has the potential to enable secure link-layer connectivity between several cloud
and edge solutions, effectively enabling inter-site communications for network functions
and verticals deployed over multiple infrastructures. With these ideas in mind, this paper
showcases the design of L2S-M as a cloud-native solution that enables the management
of virtual networks in microservice platforms. Particularly, this paper presents a full
architectural design of this solution, envisioned as a K8s operator used in data centre
infrastructures that, due to its flexibility, can be exported to other kind of scenarios (e.g.,
edge environments). This work presents an implementation of L2S-M as a K8s operator in
order to detail its functionality in the well-known microservice platform (although it can
be exported to any microservice platform).
Figure 2 showcases the design of L2S-M in a cloud infrastructure. L2S-M delivers a
programmable data plane that applications and services can use to establish point-to-point
or multi-point links between each network function on demand. This objective is achieved
through the creation and management of virtual networks, to which applications are able
to attach, sharing the same broadcast domain between all the containers that joined one of
these virtual networks, i.e., all containers will see each other in the same fashion as if they
were in the same local network, regardless of their physical location within a cluster (set
of machines governed by the same cloud-native management platform). This behaviour
enables the direct point-to-point and multi-point link-layer communication between each
container, isolating the traffic from each network to avoid unnecessary traffic filtering (for
example, by having to implement multiple traffic policies for each application) and to
ensure their secured operation.
**Figure 2. L2S-M design in a cloud infrastructure.**
The way that L2S-M is able to introduce this virtual networking model is through a set
of programmable link-layer switches spread across the infrastructure as seen in Figure 2.
These switches can either be physical equipment (labelled as Switch in the figure (such as
the ones that can be found in traditional data centre infrastructures)) or virtual switches
(labelled as V-SW in the figure), which can take advantage of the benefits of container virtualisation technology. In order to establish the point-to-point links between the switches, allowing their communication and enabling the desired connectivity in the cluster, IP tunnelling mechanisms are used, for instance virtual extensible LANs (VXLANs) [39] or generic routing encapsulation (GRE) [40]. That way, the basis of the L2S-M programmable data plane is
established through this infrastructure of switches. Figure 2 showcases an infrastructure
that is divided into three different availability zones with different characteristics. For example,
one has physical switches that could be used to deploy hardware-accelerated functionalities,
while another zone is an edge with resource-constrained devices like UAVs.
It is worth mentioning that most networking solutions in cloud-native environments
also rely on IP tunnelling mechanisms to build their communications ‘backend’. However,
there are noticeable differences with respect to the approach used in L2S-M: first of all,
these solutions build the IP tunnels to interconnect their own networking agents, which
perform routing tasks in host IP tables themselves, and can interfere with the networking
of some machines and/or functions (and cannot be easily modified) in turn. Furthermore,
these tunnels are built between all members of a cluster to build a mesh, while L2S-M has
the flexibility to allow the use of any kind of topology and can be dynamically adapted
depending on the necessities of the platform owners.
This overlay of programmable link-layer switches serves as the basis for the creation
of the virtual networks. In order to provide the full programmable aspect of the overlay,
L2S-M uses an SDN controller (SDN-C in the figure) to inject the traffic rules in each one
of the switches, specifying which ports must be used in order to forward, or to block,
the corresponding traffic coming from the containers attached to the switches and/or
other members of the overlay. This SDN controller can also be embedded into the virtualisation infrastructure itself, as shown in Figure 2. The use of this SDN approach can also
enable the application of traffic engineering mechanisms to the traffic distributed across
the programmable data plane. For instance, priority mechanisms could be implemented in
certain services that are sensitive to latency constraints.
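As an illustration of the kind of rules that the SDN controller injects, the following sketch shows equivalent flow entries installed manually on an OVS bridge with ovs-ofctl; the bridge name and port numbers are hypothetical:

```bash
# Hedged sketch: flow rules equivalent to those an SDN controller would
# inject into a programmable switch. Bridge name and ports are hypothetical.
# Forward traffic arriving from port 1 (a container in virtual network A)
# only towards port 2 (its peer in the same virtual network), and back.
ovs-ofctl add-flow br0 "priority=100,in_port=1,actions=output:2"
ovs-ofctl add-flow br0 "priority=100,in_port=2,actions=output:1"
# Drop anything else by default, keeping the remaining ports isolated.
ovs-ofctl add-flow br0 "priority=0,actions=drop"
```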
Service mesh solutions like Istio [20] could be seen as alternatives that would enable
a similar behaviour as the one provided by L2S-M. However, the service mesh was developed with the same ideas and concepts as the flat networking approach: instead of
managing the network interfaces directly, service mesh solutions use proxy functionalities
to forward/block traffic based on networking services (network abstractions) in a separate
data plane from the one presently used in the cluster, basing its routing/forwarding in
their logical definition (i.e., the user defines which services must communicate with each
other). Although this is a favourable approach to provide high availability and keep the
abstraction models present in the microservice platforms, this solution still does not address
the isolation aspects needed for NFV deployments since all their containers are still located
in the same LAN domain (through its CNI Plugin agents) and can be directly reached by
the rest of the containers. Therefore, L2S-M provides a behaviour that service mesh cannot
provide, as it does not have the proper tools to enable the use of virtual networks needed
in NFV deployments.
It is true that other solutions have explored similar virtual networking concepts in
microservice platforms, most notably Nodus [26] and Kube-OVN [27]. However, all these
solutions have tried to implement a substitute for current CNI plugins, while L2S-M
provides this behaviour as a complementary solution for those applications that may
require such a degree of networking control. Furthermore, L2S-M has been designed to
enable its seamless use with physical switching infrastructures (commonly found in data
centre networks) through single root input/output virtualisation (SR-IOV) interfaces [41],
which can greatly extend its use for multiple use cases in the NFV space (e.g., network
acceleration, and 5G CORE deployments). Finally, L2S-M has a higher degree of flexibility
to accommodate different SDN applications to introduce traffic engineering mechanisms
based on the required scenario (which cannot be performed with the previous solutions, as
they rely on an internal SDN mechanism that cannot be easily modified to implement new
algorithms and applications).
_3.3. Inter-Cluster Communication through L2S-M_
The previous section explained how L2S-M allows establishing link-layer virtual
networks that connect CNFs executed in the same cluster through the combined use of
network-layer tunnelling and SDN technologies. We refer to this type of connectivity as
intra-cluster communications. Nonetheless, this idea can be extended to the inter-domain
scope to provide link-layer connectivity between CNFs that run in different clusters. We
will refer to the latter as inter-cluster communications. For this section, clusters may include
any kind of cloud-native environments (not only K8s), since the L2S-M design is flexible
enough to accommodate any kind of infrastructure.
At this point, it is necessary to introduce two new elements in the L2S-M design to
enable the inter-cluster communications: the network edge devices (NEDs) and the inter
domain connectivity orchestrator (IDCO).
The NEDs are programmable switches similar to the ones shown in the previous
subsections: they can be either implemented as software, or be physical hardware present in
every site. Each cluster can be connected to one or more NEDs to constitute an inter-domain
programmable switch infrastructure. Each NED must have network-layer connectivity
with at least another NED. Following a similar approach to the one used for the intra-cluster
communications, an overlay network is created by connecting the NEDs through secure
network-layer tunnels that encapsulate the link-layer frames (e.g., VXLAN over IPSec).
This overlay can be manually created when deploying the NEDs, although a new overlay
manager can be present in order to manage the establishment of these tunnels.
Each frame that is transmitted from a certain cluster A to a cluster B travels from one
of the NEDs of cluster A to one of the NEDs of cluster B, traversing the overlay network
and possibly going through other NEDs in other clusters. The interconnection of the cloud-native platforms with the NEDs will vary depending on their nature: for instance, a K8s cluster
could deploy a NED as a pod in one of the nodes of the cluster and attach several ports into
an L2S-M switch so that the communication in the cluster is managed through several predefined virtual networks in the cluster (to indicate to L2S-M to which ports the traffic should
be sent for inter-domain communications); in OpenStack environments, a NED can be a
VM attached to a provider network, which can be relied on to distribute and/or send traffic
accordingly. Nevertheless, this setup provides the link-layer communications between
elements in different clusters (although it does not isolate traffic between them yet).
The IDCO element is in charge of managing the inter-cluster virtual networks. It has
both northbound and southbound interfaces. The northbound interface is implemented as
an HTTP REST API that allows external authorised entities to create, modify or delete the
virtual networks. The southbound interface is used by the IDCO to obtain information from
the NEDs and to inject the switching rules in them through a SDN southbound protocol
(e.g., OpenFlow [42] or P4Runtime [43]).
The IDCO decides how the frames that belong to each inter-cluster virtual network
should traverse the overlay network and injects several rules in the NEDs to create the
needed paths and accomplish the network isolation between them.
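The exact northbound API is implementation specific; as a sketch, a request to create an inter-cluster virtual network could look as follows (the endpoint path, payload fields and authentication header are assumptions for illustration only, not the definitive interface):

```bash
# Hypothetical use of the IDCO northbound HTTP REST API to create an
# inter-cluster virtual network spanning two sites. The endpoint, payload
# fields and token are illustrative assumptions.
curl -X POST "https://idco.example.org/api/v1/inter-cluster-networks" \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/json" \
  -d '{"name": "net3", "clusters": ["campus1-cloud", "campus2-edge"]}'
```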
**4. Implementing L2S-M in a Cloud-Native Platform for Intra-Site Connectivity:**
**K8s Case**
_4.1. L2S-M Implementation as a K8s Operator_
This subsection introduces an implementation of L2S-M as a K8s operator, enabling the
creation and management of virtual networks in a distributed K8s infrastructure to securely interconnect workloads at the link-layer level. Although this subsection focuses on the
intra-site implementation of L2S-M, the validation present in this paper will showcase the
basic functionality of inter-cluster communications (described in the previous section) to
demonstrate its functionality in those scenarios.
Figure 3 depicts the detailed implementation of the full L2S-M solution in a K8s
cluster. This implementation contemplates the deployment of all the components depicted
in Figure 2 with their respective particularities to allow functionality inside a K8s cluster
since the implementation of this solution is not a straightforward task due to the complexity
that the pod namespace isolation and the API model introduce in K8s.
First of all, L2S-M requires the deployment of a set of L2 switches over the K8s infrastructure, as it is showcased in Figure 2. These switches are necessary components to enable
the establishment of the L2 overlay required to exchange data between pods. However,
instead of directly installing the switch in the node itself, L2S-M relies on the advantages
that K8s provides (containerisation, automatic deployment, life-cycle management, etc.)
to deploy each switch as a pod on every node of the cluster. Particularly, L2S-M uses a
“daemonset” (a K8s resource that deploys one pod per node in the cluster) that installs an
Open vSwitch (OVS) instance in the node. Although any L2 switch solution can be used for
this purpose, L2S-M uses OVS due to its compatibility with multiple OS and distributions,
as well as its simplicity for its installation and configuration.
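A condensed, illustrative sketch of such a daemonset is shown below; the image name and labels are hypothetical, and the actual manifests can be found in the public L2S-M repository [45]:

```bash
# Condensed sketch of a daemonset that deploys one OVS switch pod per node.
# The container image and labels are hypothetical; see the L2S-M
# repository [45] for the actual manifests.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: l2sm-switch
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: l2sm-switch
  template:
    metadata:
      labels:
        app: l2sm-switch
    spec:
      containers:
        - name: ovs
          image: example/l2sm-ovs:latest   # hypothetical image name
          securityContext:
            privileged: true               # needed to manage network interfaces
EOF
```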
**Figure 3. L2S-M implementation in a K8s infrastructure.**
Once the switch infrastructure is available in the K8s cluster, it is necessary to deploy
point-to-point links between the desired neighbouring nodes to enable link-layer connectivity between each other through IP tunnelling mechanisms, as showcased in the design and
in Figure 2. Instead of building a mesh with all the members of a K8s cluster (a common
practice in CNI plugin solutions), each programmable switch is only interconnected to the
desired peer, which can either be a virtual switch or a physical one. The figure depicts these
connections (thick blue links between V-SWs) performed using VXLAN tunnels, although
any IP tunnelling mechanism can be used (for instance, GRE [40]).
L2S-M must be able to create this overlay as well. However, having the switches
containerised introduces a problem, as it can be seen in Figure 3: the pods are not able to
reach the other nodes directly since they are located behind their own namespace inside
the node, so directly building an IP tunnel with the node would not work, or it would be
necessary to use the CNI plugin for the standard K8s networking (which follows the flat
networking approach and is incompatible with the concept of the solution). To avoid this
issue, L2S-M builds the VXLAN tunnels beforehand in the host namespace (since they must be able to see each other at the IP layer, as a requirement of the K8s cluster), either using
a dedicated interface for this purpose or its main interface. Afterwards, L2S-M “moves”
these tunnel interfaces into the switch pod using the Multus CNI plugin since this plugin makes it possible to bring the pre-created VXLAN interfaces into the OVS pod without losing the link-layer configuration.
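For reference, creating one of these tunnel endpoints in the host namespace could be sketched as follows; the VXLAN network identifier (VNI), parent device and remote node address are illustrative:

```bash
# Illustrative creation of a VXLAN tunnel endpoint in the host namespace,
# which is later moved into the OVS switch pod through Multus. The VNI,
# parent device and remote node address are hypothetical.
ip link add vxlan100 type vxlan id 100 remote 192.168.1.20 dstport 4789 dev eth0
ip link set vxlan100 up
```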
Regardless of the containerisation of the switches, it is necessary to enable the attachment of the pods to the switches of the overlay deployed in every node in order to exchange data with other pods. However, every pod is located in its
own namespace, so connectivity between the pods and their (virtualised) switch cannot
be directly established (as it can be seen in Figure 3). To overcome this problem, a virtual
Ethernet (vEth) element can be used to exchange messages between each namespace, as
it mimics a “real” Ethernet cable, where packets sent at one end of the vEth appear at the
other end, regardless of the namespaces at which they are located.
L2S-M builds a set of vEth pairs in the host namespace, and then L2S-M attaches one
extreme to the switch, leaving the remaining one in the host namespace. Once a pod desires
to connect to a virtual network, L2S-M uses the Multus plugin to attach the other end into
the pod, effectively connecting these two elements (just as it is done in a physical switch):
when a packet is generated inside the pod, the vEth will forward the packet into the Linux
Kernel, and the packet will be forwarded to the switch.
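A minimal sketch of the vEth mechanism just described (interface and bridge names are hypothetical) is the following:

```bash
# Minimal sketch of the vEth mechanism: one end is attached to the OVS
# bridge, the other end waits in the host namespace until L2S-M (through
# Multus) moves it into a pod. Names are hypothetical.
ip link add veth1 type veth peer name veth1-sw
ovs-vsctl add-port br0 veth1-sw   # attach one end to the switch
ip link set veth1-sw up
# "veth1" remains idle in the host namespace; when a pod joins a virtual
# network, it is moved into the pod, completing the "virtual cable".
```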
However, K8s does not have the tools to deal with the definition and management
of virtual networks on demand, or to allow the assignment of the corresponding vEth
pairs in each one of the hosts to the workloads deployed in the cluster. This is where a key
element in the L2S-M design comes into play: the L2S-M K8s operator. A K8s operator [44]
is a software extension to Kubernetes that allows the management of custom resources in a
K8s cluster, which might contain any information and can only be used by coordinating
the K8s API events with the operator to perform certain actions or events in the cluster.
The L2S-M K8s operator takes advantage of a pre-existing CustomResourceDefinition (CRD, i.e., a resource that is not native to the K8s environment) from Multus (the well-known NetworkAttachmentDefinition (NAD)) to define the virtual networks that we want
to create in a cluster. If we want a pod to be added into one of these virtual networks,
it will be written in its metadata in the same fashion as it would be done in a standard
Multus definition (a common standard in real K8s deployments). However, these are not
mere Multus annotations since they must be managed by an element that is able to identify
which pods want to belong to which virtual networks, as well as to perform the necessary
actions to attach the ports of each switch into the pod.
Therefore, L2S-M includes the definition of the L2S-M operator, which is an agent
(deployed as a pod) that is deployed in a controller node of a cluster as it can be seen
in Figure 3. This pod will constantly monitor the calls to the K8s API, picking up
several events that occur within the cluster. Depending on the type of event that is picked
up by the operator, it will perform a different action. In this fashion, the L2S-M operator
will be triggered when a “creation event” with a NAD is registered from the K8s API to
check if the corresponding resource is a virtual network or a standard Multus definition (in
such a case, the operator will not perform additional actions). If it is a virtual network, it
will register its creation in the cluster, writing this network in its database (L2S-M DB).
After the creation of a virtual network, once a pod starts its deployment in a cluster, it
will generate a creation event and, if the pod being generated includes a NAD annotation
in its metadata, the operator will begin to process this annotation prior to its deployment.
The operator will then identify each one of these annotations to see if the pods express
their desire to be attached into one, or several, virtual network(s), checking if any of the
networks created in the cluster are present. If not, the operator will let Multus handle the
deployment. Otherwise, the operator will retrieve the node where the pod is going to be
deployed, and will check if there are available interfaces (i.e., free vEth ends) in the host
namespace. Once a vEth is selected, the operator will assign that interface to the pod and
register that it belongs to a particular virtual network. In case the Kubernetes API schedules
the pod’s deletion, the operator will remove the interface from the virtual network in the
corresponding node, and the vEth will be returned to the host namespace to be available for
future workloads. During all these events, the operator will be modifying its DB depending
on their actions.
In order to provide the mechanisms to isolate traffic between virtual networks, L2S-M
contemplates the deployment of a software-defined networking (SDN) controller in the
K8s cluster as seen in Figure 3. The L2S-M operator and the controller interact through
a common API, which allows the operator to communicate the interfaces (ports) where
each pod is attached since the operator knows which virtual networks the pod belongs to.
The SDN controller will use this information to send the appropriate traffic rules to each
one of the programmable switches to ensure that the traffic generated in each network will
only be sent to the proper ports (either forwarding the information to the corresponding
neighbour in the overlay or to one of the ports in the switch). This mechanism ensures that
the traffic of each virtual network is isolated from the others since traffic will not be forwarded between workloads unless they belong to the same virtual network (i.e.,
they are treated as if they were in the same LAN). The current version of L2S-M [45] does
not implement the entire isolation mechanism: it is able to isolate most of the traffic within a
virtual network since it can interact with an ONOS controller [46] to enable the forwarding
only with the appropriate ports depending on the network. However, traffic destined to
multiple hosts/pods (e.g., broadcast traffic in ARP Requests) must be forwarded to all
elements in the overlay since ONOS does not natively implement a way to isolate this
traffic. Future versions of L2S-M will fully isolate traffic in their respective virtual networks,
regardless of their nature, using a specific SDN application used with ONOS.
_4.2. Virtual Network Management Flow_
Figures 4–7 showcase the communications that are established between all the components of the L2S-M solution, divided into four main steps: the creation of a virtual network
in the K8s cluster, the attachment of a pod into a virtual network, the deletion of a pod in
the cluster and the deletion of a virtual network in a K8s cluster.
**Figure 4. Creation of a virtual network in L2S-M.**
**Figure 5. Attachment of a pod into a virtual network in L2S-M.**
**Figure 6. Deleting a pod from a virtual network in L2S-M.**
4.2.1. Virtual Network Creation
First of all, when a user wants to create a virtual network, as seen in Figure 4, the user
will instruct K8s through its command-line interface (kubectl) to create the resource inside the
cluster (i.e., a NAD with the definition of the virtual network). Once this creation is performed,
the L2S-M operator picks up the K8s event and checks that this NAD definition corresponds
to a virtual network. The way the L2S-M knows is that the NAD includes a virtual interface
(i.e., an interface that is not physically defined in the host) called “l2sm-vNet”, informing
L2S-M that the annotation is a virtual network. Once the event is picked up, the operator
registers the creation inside its database, completing the creation of the virtual network.
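As an illustration, such a NAD could be sketched as follows; the network name and the exact CNI config body are illustrative (see the L2S-M repository [45] for the actual format), while the "l2sm-vNet" device field is the marker described above:

```bash
# Illustrative NAD that the L2S-M operator recognises as a virtual network
# because of the "l2sm-vNet" device field. The name and the CNI type in the
# config body are assumptions for illustration.
kubectl apply -f - <<'EOF'
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: my-first-network
spec:
  config: '{ "cniVersion": "0.3.0", "type": "host-device", "device": "l2sm-vNet" }'
EOF
```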
4.2.2. Attachment of a Pod into a Virtual Network
The attachment of a pod into one, or more, virtual networks using L2S-M follows this
structure as seen in Figure 5:
1. When a pod wants to be deployed in the cluster associated with one (or several)
virtual network, it will introduce the corresponding annotation in its descriptor, using
the standard Multus annotation format. The user will then use kubectl to deploy the
pod, generating a creation event in the K8s cluster.
2. The L2S-M operator will pick up the event and check whether the pod has the corresponding
annotation and if so, it will check each annotation element to see if it corresponds to a
virtual network NAD from its database. Once it matches, L2S-M checks in its database
the free vEth in the node where the deployment is being performed (these data are
retrieved using the K8s API), writing an entry in the database for that interface with
the name of the pod and the virtual network that this interface is associated with.
3. The L2S-M operator updates the deployment with the new interface annotation,
instructing the Multus agent in the node about the vEth pair interface that will be attached to the pod. Once this operation is completed, the pod finishes its deployment
phase attached to the OVS switch of the node.
4. After the deployment, the L2S-M operator sends the SDN controller the new attachment of the pod, notifying the controller that the new port of the switch is associated to
a new virtual network. With this information, the SDN controller can configure all the
switches of the overlay with the corresponding rules to exclusively forward packets
between the members of the virtual network. This behaviour is up to the application
running in the SDN controller, which is in charge of finding the appropriate path
between the pods and configuring the forwarding rules of the switches. One way
to perform this could be using intent-based connectivity in such a way that L2S-M
provides the MAC address of the members of the virtual network to the SDN controller
using intents so that the controller can properly configure the paths between them.
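As a sketch of this intent-based option, a host-to-host intent could be submitted to an ONOS controller [46] through its REST API as follows; the controller address, credentials, application identifier and host MAC identifiers are hypothetical:

```bash
# Sketch: submitting a host-to-host intent to ONOS so that traffic between
# two members of a virtual network is forwarded along a computed path.
# Address, credentials, appId and host identifiers are hypothetical.
curl -X POST "http://onos-controller:8181/onos/v1/intents" \
  -u <user>:<password> \
  -H "Content-Type: application/json" \
  -d '{
        "type": "HostToHostIntent",
        "appId": "org.onosproject.cli",
        "one": "0A:00:00:00:00:01/None",
        "two": "0A:00:00:00:00:02/None"
      }'
```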
4.2.3. Detachment of a Pod from a Virtual Network
The procedure to delete a pod from a virtual network is very similar to the one for its
deployment as seen in Figure 6:
1. Once the pod is scheduled to be deleted from the cluster (either from a deletion event
or a failure in the pod/node), L2S-M picks up the event generated from the K8s API
and realises that the pod being removed is attached into a virtual network.
2. L2S-M removes the pod entry from its database, marking the interface as idle.
3. Simultaneously, L2S-M sends to the SDN controller the instruction to remove the
pod from the virtual network (e.g., removing the previous intent(s) generated in the
attachment phase).
4. The SDN controller will configure the forwarding tables from the switches to remove
the entries related to the pod that have been deleted, effectively removing the pod
from the virtual network.
4.2.4. Virtual Network Deletion
Finally, once a user wants to delete a virtual network, as seen in Figure 7, the user will
instruct K8s through kubectl to delete the resource inside the cluster. Similarly to the creation
of networks, L2S-M picks up the K8s event and removes the virtual network entry from its
database. This action will only be possible if all pods have been detached from the network;
otherwise the operator will throw an error and prevent the deletion of the virtual network.
**Figure 7. Deleting a virtual network in L2S-M.**
_4.3. L2S-M Information and Uses_
L2S-M has been released as publicly available open-source code that can be used in most K8s distributions [45]. This solution has received interest from the research community, as it has been used in multiple European research projects in different contexts: LABYRINTH [47], focused on providing security functions using UAVs, where L2S-M enables communications between the aircraft; and the FISHY project [48], focused on providing a coordinated framework for cyber-resilient supply chain systems, where L2S-M
is used as the basis of the main platform built to deploy the functionalities used in the
project, the FISHY reference framework (FRF).
As described in previous paragraphs, current NFV orchestrators, like the well-known Open Source MANO (OSM) [28], have limited support for the deployment of NFV
cloud functions using K8s since there is no native way to create virtual networks able
to interconnect several VNFs in the clusters. However, our ongoing work includes the
definition of a new feature to be included in the codebase of OSM to enable the deployment
of network functions in K8s clusters. This feature, named “Connectivity among CNFs
using SDN”, has already been approved and it is currently in the design phase in direct
collaboration with the OSM community. The details of this feature can be seen in the official
OSM site [49]. In this regard, we will briefly describe the steps that will be performed to
add a K8s cluster in the OSM ecosystem using L2S-M, as well as the deployment of a NS
using virtual networks within the cluster:
- At the cluster (VIM) registration time, the OSM user selects that the data plane used in
the CNFs communications is provided by L2S-M. Then, the user defines the resource
definitions (i.e., .yaml templates or Helm [50] charts) in order to tell the orchestrator
how to build and manage these networking resources within the cluster.
- When a new network is deployed using the orchestrator, the resource orchestrator
component (RO) of OSM takes the values and configuration parameters of each VL in the
descriptor and translates them into the parameters used in the L2S-M virtual networks.
After this process, the orchestrator contacts the K8s cluster and follows the flow seen in
Figure 4.
- Once the VLs have been processed, the orchestrator adds the corresponding K8s annotations to each VNF, associating the VLs with them, and starts their deployment within the K8s cluster (as seen in Figure 5), finalising the deployment
of the NS using L2S-M as the data plane networking solution in the cluster.
**5. Practical Experience with L2S-M**
_5.1. Description of the Testbed_
This section describes the testbed that is considered to validate the implementation
of the design introduced in Section 3. This testbed mimics the infrastructure that could
be deployed in a university Smart Campus environment. Universities, due to the nature
of the academic and research activities that they perform on a daily basis, must have a
powerful, reliable and secure infrastructure that allows them to flexibly deploy various
applications used by the members of the university. These applications, ideally, should be
able to effectively use the resources provided by the university infrastructure, as well as
having good scalability properties (to adapt the service to the possible demands, which can
dynamically be modified) and be resilient to temporary failures and/or service cut-offs. In
this regard, microservices are able to provide most of these characteristics. Unfortunately,
some of these services require networking capabilities that solutions like K8s are not able
to provide. One prominent example of this situation is the use case presented in this paper:
the implementation of a content-delivery network (CDN) for the distribution of academic
content in a Smart Campus scenario.
Generally, universities are composed of several campuses spread in distributed geographical locations. Each campus has its own size and importance inside the structure
of the university, especially regarding the resources that they are able to provide for their
own cloud environments for content distribution. This can potentially be an issue since a purely centralised model might impact the performance needed to effectively send content to remote campuses, making it more desirable to move the content closer to the users to avoid overloading the main infrastructure of the network, which in turn may reduce latency as well. A centralised model can also introduce a single point of failure if
the main infrastructure is down and/or link disruptions occur. Finally, it is important to
mention that the network infrastructures of these campuses should be able to be dynamically modified to accommodate the demand that each campus may require at each moment,
without interfering with the functionality.
Microservice solutions like K8s are not usually able to provide the necessary tools to
implement a distributed scenario due to their flat networking approach and the limitations of
inter-cluster communications. However, L2S-M will be used in this paper to provide a CDN
to distribute content across different campuses of a wide variety of characteristics located
across geographically distributed regions, while also making it easy to accommodate new
infrastructures and members to fit the necessities of the university (for instance, setting up
a temporary network for an event).
This use case includes four different sites distributed along two campuses to prove the
effectiveness of the solution in an intra-campus scenario and of the NEDs for the inter-cluster
communications. Each campus is designed to include different resources to showcase the
interaction between heterogeneous infrastructures. Accordingly, there are two different sites
inside each campus, resembling the cloud and edge of each campus, where each site can
have its own infrastructure and implementation, to validate the initial statements.
The scenario presented in this paper can be seen in Figure 8. The first campus in the
scenario (on the right) is composed of two different sites: one central campus environment,
and one temporary edge site, which is set up only if an event is performed in the university
facilities (e.g., a conference or a workshop). The central infrastructure of this campus is
regarded as the main cloud of the whole organisation, holding most of the software and
teaching resources available to the students and teachers. In consequence, the representation
of this site is conducted through two VMs (since these are considered “heavier” machines
in terms of available resources). One of the VMs is used as a general-purpose server to
host multimedia and software contents. The remaining VM hosts the corresponding NED
used to connect this site with the rest of the sites of the university, both the ones in other
campuses (inter-campus communications) and the ones in its own campus (intra-campus
communications), as long as they are in different sites or clusters. The second site is the edge
environment of the campus, and it is meant to represent the temporary devices deployed
for an opportunistic event that members of the university will use to retrieve the content, as
well as providing a cache server for the university’s CDN. It comprises two Raspberry Pi 4
Model B computers that act as nodes of a K8s cluster where L2S-M is installed. The NED of
this site provides the connection of this site with the general cloud, which is the other site in
this same campus, and with the other cloud present in the second campus.
The second campus has two different sites as well. The first one is the designated
campus’ cloud, which is used as a proxy for the connectivity of the edge environment
deployed in the remaining site. This edge environment provides the infrastructure that
students will use on a regular basis to download the university content. Naturally, the
site offers a proxy as part of the CDN in order to store content on its premises, bringing the information closer to the students. The NED of the cloud allows for the connection
to the two sites in Campus 1 and to the other site in the same campus’ edge. The second
edge NED connects this site with the same campus’ cloud. The structure of both edges is
symmetrical in our deployment, although each site may have a different infrastructure and
configuration, validating the initial premise of the benefits of L2S-M.
The use case that is implemented to validate this research aims to simulate the previously described CDN, where a content server is located in the general cloud, to store
all the desired data, one proxy server is located in the edge of the first campus, and two
proxy servers are located on the second campus, one on its cloud and one on the edge, all
aiming to cache data closer to the user. They will be deployed as different pods running
an nginx [51] web server with the functionality of an HTTP reverse caching proxy. In the
cloud of the first campus, the server was installed inside a VM to act as the main content
server (i.e., where the information is permanently stored). In order to protect the access to
this main cloud from external sources, an HTTP proxy was deployed in the cloud of the
neighbouring campus. Regarding the edge sites (present in both campuses), an additional
HTTP proxy was deployed to cache the content coming from the remote server. Both edges
were designed in a symmetrical way. Apart from the proxy, one access point (AP) was
deployed as a pod on each edge, giving the user the possibility to download the content
from the CDN by connecting to the proxy available in the edge. To effectively enable this
connection, a domain name system (DNS) provides the user with the IP address of the
edge HTTP proxy when introducing the URL corresponding to that content. To avoid
reaching the HTTP server directly without connecting to the enabled AP service, a firewall
service (developed as a Linux router) was introduced in both edges, and it is in charge of
forwarding the traffic from the AP into the nginx proxy and vice versa.
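For reference, the caching behaviour of these edge proxies could be sketched with an nginx configuration along the following lines; the upstream name, paths and cache sizes are illustrative:

```bash
# Illustrative nginx reverse caching proxy configuration for an edge site.
# The upstream address, paths and cache sizes are hypothetical.
cat > /etc/nginx/conf.d/cdn-proxy.conf <<'EOF'
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=edge_cache:10m
                 max_size=2g inactive=60m;
server {
    listen 80;
    location / {
        proxy_pass http://content-server;   # next hop in the CDN chain
        proxy_cache edge_cache;             # cache the content at the edge
        proxy_cache_valid 200 60m;          # keep successful responses 60 min
    }
}
EOF
nginx -s reload
```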
**Figure 8. Use case high-level design.**
For this scenario, some virtual networks must be created to attach the different components, once again in a symmetrical way between campuses 1 and 2, connecting, in the first
network, the router and firewall with the DNS and the AP. This can be seen in Figure 9,
where Net1 corresponds to the virtual network in the first campus and Net2 to the one in
the second campus. Another virtual network is deployed among the different campuses,
connecting the content server with the different proxies and routers. This is shown in Figure 9
as Net3. All these deployments allowed for the connection between elements of the network
to be established and for the behaviour of the scenario to be as expected.
**Figure 9. Use case detailed implementation.**
_5.2. Experimental Environment_
Figure 9 showcases the detailed implementation of the scenario, with the different
pods that were deployed on each site and the connections between them.
Starting with the second campus, the edge site is represented using a rack of four Raspberry Pi 4 Model B computers, all of them having 8 GB of RAM and running an Ubuntu Server 20.04 installation. All these RPis were connected using gigabit Ethernet connectivity (since these are considered fixed devices that could be placed in classrooms all over the campus). All RPis are part of the same K8s cluster, using the L2S-M operator for intra-cluster communications. In order to enable the communications between pods and avoid loops over the L2 overlay created within the cluster, a RYU SDN controller was deployed
(running as a pod) using the spanning tree protocol (STP).
Each RPi hosts a different functionality, all of them deployed as K8s pods, which can
be seen in more detail in Figure 9. The first RPi, the closest one to the users, hosts the AP
functionality (enabling the requests of content downloads) and the DNS service within the
edge cluster. The next hop of the CDN service, once a download has been requested from
the AP, is the firewall (implemented as a Linux router) that will redirect the HTTP requests
to the proxy, located in the second RPi. This proxy also provides a cache that allows hosting
some of the requested content inside the cluster (allowing to have the information closer
to the users). The cache can be dynamically modified depending on the demand and the
status of the network. If the content is not found in this proxy, it will redirect the request
to the proxy located in the campus cloud through the NED present in the remaining RPi,
which oversees these inter-cluster communications.
This cloud is composed of a single VM (Ubuntu Cloud 20.04) that hosts a K8s cluster.
Two functionalities were deployed (as pods): one HTTP proxy, in charge of redirecting
the requests from the edge campus to the main cloud of the university, and the SDN
controller used for inter-cluster communication. This last component is essential for the
whole functionality of the university since it allows the configuration of all the NEDs
present in each cluster/site. Similar to the intra-site communications, this controller was
implemented using the RYU SDN controller running STP to avoid network loops.
All of the previously described K8s clusters were installed using version 1.26 of kubeadm [52], running the K8s 1.26 release and using containerd as the container runtime. For
the default networking CNI Plugin, we selected Flannel [17] since it is one of the most-used
CNI plugin solutions in production clusters.
In the case of the main campus, the content server is directly installed and configured
in one VM within an OpenStack cluster, and it is connected to the cloud of the second
campus and to the other site in the campus premises through a NED, installed inside
another VM. Both VMs run an Ubuntu Cloud 20.04 image, using 2 CPUs and 8 GB of
RAM. This content server stores all the multimedia files, Linux images, Debian packages
and many other types of files that could be used daily in a university environment. The
combination of all the functionalities and elements of both campuses build the CDN service
that is implemented in this work.
Due to the nature of the activities that could be present in a university, the edge
environment of the main campus is considered a temporary infrastructure that is aggregated
into the Smart Campus infrastructure. This new edge site, built as a K8s cluster using
L2S-M for intra-cluster communications, has the same functionalities as the edge of the
second campus, with the exception that its proxy will directly request the content from the main content server, rather than using an intermediate proxy.
All of the configuration, deployment and network files used in this validation section
can be found in the corresponding repository [53].
_5.3. Functional Validation and Results_
5.3.1. Throughput Performance
This first set of validation tests aimed at showcasing the possible impact that L2S-M could have on the available throughput for the virtual functions deployed over a K8s cluster.
To test this impact, we used the well-known traffic-generation tool iperf3 [54] in order to test
the total available bandwidth between the two pods deployed in the scenario, testing both
the standard K8s networking (flannel [17] for intra-cluster networking, and K8s NodePort
for inter-cluster networking) and L2S-M in each scenario. In particular, the pods were
deployed using the following configurations:
- Two pods deployed in the same node of Campus 2 edge cluster (RPi1).
- Two pods deployed in different nodes of Campus 2 edge cluster (RPi1 and RPi3).
- Two pods deployed in different clusters (RPi1 of Campus 2 edge cluster and Campus
2 cloud cluster).
For each configuration, an iperf3 flow of 180 s was established between each pod in
both directions, using the standard K8s networking and the L2S-M in every iteration of the
test. In this regard, each iteration was run 30 times, and then the average of each run was
used to calculate the values shown in Table 1.
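For reproducibility, each iteration followed the usual iperf3 client/server pattern; the server pod address shown is illustrative:

```bash
# Inside the first pod: run iperf3 in server mode.
iperf3 -s
# Inside the second pod: 180-second TCP flow towards the server pod
# (illustrative address); the reverse direction is tested with -R.
iperf3 -c 10.244.1.15 -t 180
iperf3 -c 10.244.1.15 -t 180 -R
```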
**Table 1. Throughput comparison between Flannel and L2S-M in all the possible scenario configurations.**
| Test | Flannel (Mb/s) | L2S-M (Mb/s) |
| --- | --- | --- |
| intra-node | 4860 | 5350 |
| intra-cluster | 870 | 847 |
| inter-cluster | 915 | 869 |
As it can be seen in Table 1, L2S-M does not introduce any significant performance degradation in comparison with the standard K8s networking approaches
(Flannel and service abstraction). L2S-M improves the throughput between the pods when
they are co-located in the same node since the traffic generated between them does not
need to pass through the Flannel agent deployed in the node, which introduces some performance degradation. For the connectivity between pods that are located inside the K8s
cluster, but in different nodes, L2S-M and Flannel exhibit quite similar performance, with
a slight throughput decrease in the L2S-M case. Overall, the performance between both
solutions is very similar, and showcases that L2S-M does not harm the traffic performance
over the K8s cluster.
Regarding the inter-site connectivity scenario, L2S-M provides approximately 50 Mb/s
less throughput than its counterpart as expected since K8s services establish the connection
directly without the use of IP tunnelling mechanisms, while L2S-M still requires the use of
VXLAN tunnels between the infrastructures, which in turn introduces some overhead in
the packets exchanged between the pods/VMs. Nevertheless, NodePort communications
do not isolate the exchanged traffic between pods, unlike L2S-M, which only distributes
traffic to the pods located in the same virtual network.
These tests showcase that L2S-M does not introduce significant performance degradation in comparison with standard CNI plugin communications in K8s.
5.3.2. CDN Download Test
The purpose of this test is to show the capacity of L2S-M to implement complex NSes,
like the CDN proposed in this paper, over heterogeneous infrastructures implemented with
different management and orchestration solutions.
For this round of tests, we downloaded a fairly large video file (1 GB) from the Campus 2 edge and measured the average throughput that the CDN achieves when a user tries to download some media content in the Smart Campus scenario.
For this test, a pod located in the edge cluster of Campus 2 downloads a video using
a specific URL that represents the video content (https://university-content/edu.mp4).
When the pod tries to download the video content (using the well-known wget program),
the DNS service present in the campus edge will translate the URL into the IP address of
the local (i.e., campus) nginx server, sending the HTTP request in the process, following
the process described in the previous subsection.
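The download step itself amounts to a single command in the client pod, using the URL given above, while (as described next) the congested runs added a parallel iperf3 flow on the inter-cloud link; the iperf3 endpoint address is illustrative:

```bash
# Download from the client pod; the edge DNS resolves the name to the
# local nginx proxy of the campus.
wget https://university-content/edu.mp4
# In the congested runs, a parallel TCP flow saturates the link between
# both campus clouds (illustrative endpoint address).
iperf3 -c 10.0.3.2 -t 600 &
```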
In order to test the efficiency of the CDN in realistic scenarios, these tests emulate the
congestion of one of the links used for the download, using iperf3 to inject a TCP flow at the maximum available rate. In this case, the link that is congested during
the tests is the one interconnecting both campus clouds since in a realistic scenario it is
expected to be the link that exchanges the highest amount of data between campuses.
The cache was set in two different ways: firstly, by disabling the whole cache (making the server a simple HTTP proxy), and secondly, by allowing the cache to store the whole video. These modes clearly showcase the impact of having a cache inside the scenario by reflecting whether there is any significant improvement to the available throughput and/or the download speed.
With all these considerations, the performed tests were the following: one set of tests where the cache in the edge was disabled, performing the download when the link between campuses was idle and then congested. Each set was performed 30 times. Afterwards, the cache was enabled to host the whole video, repeating the aforementioned tests (idle
and congested links). Figure 10 showcases the average throughput results, including the
95% confidence intervals.
**Figure 10. Throughput with cache enabled and disabled.**
As it can be seen in the figure, the presence of a cache obviously improves performance
in terms of throughput in both scenarios: since the content is closer to the user, it traverses
fewer functions and infrastructures, which in turn makes it easier for the nginx to send the
content from one site to the other. When the cache is disabled, there is also a significant
decrease between the idle and congested scenario. This is the expected behaviour since the
content must traverse (and be processed) by an additional nginx server (the one located in
the second campus).
This behaviour can be further seen in Figure 11, where the traffic on each nginx
element can be seen for two runs: one where the cache is disabled, and one where it is
enabled (in both cases, the link was not congested). As it can be seen in all figures, traffic
is present in all pods/VMs when the cache is disabled since the traffic is generated from
the video-source VM in the main campus cloud and traverses the middle nginx pod. Since
these entities must process this traffic, the overall throughput is lower, and the download
takes significantly more time to finish. On the other hand, when the cache is enabled, the
traffic is only generated from the edge nginx in Campus 2, which in turn requires less
packet processing in the infrastructure (fewer nginx servers are involved), so the overall
bandwidth is higher, and the download is significantly shorter.
**Figure 11. Traffic capture in every CDN element of the cluster.**
5.3.3. Cluster Addition and Wi-Fi Download Test
The last test of this validation section will provide some insights about the capabilities
of L2S-M to incorporate a new infrastructure into a complex scenario such as the Smart
Campus. It is common that universities hold events with many participants that require
access to the Internet or/and retrieve content from the university to perform some activity.
Some examples might include congresses, practical/lab sessions, etc. In all these cases,
it is important to be able to set up an adequate infrastructure able to effectively host the
network services required for each activity. Nevertheless, this set-up must be quick and
dynamic since these events frequently change, so the requirements and equipment needed
will vary depending on the type of activity being performed.
In this regard, the set-up of a K8s cluster is well known to be simple and fast to deploy
(a cluster can be set up in 20 min using administration tools such as Kubeadm [52]). In a
similar fashion, L2S-M (and its inter-cluster functions) can be easily deployed within a K8s
cluster as well since its configuration and set-up can be performed in a short period of time
(approximately 10 min).
For this test, the Campus 1 edge was deployed using two RPis (which can act as
AP) and a Mini-ITx compute node to provide the connectivity with the main university
cloud. This last battery of tests uses the RPis' Wi-Fi modules to provide the channel for the downloads, emulating a real scenario where other clients would connect to an AP and
download the content from there (rather than downloading it in the equipment, unlike in
previous tests).
In this case, the audiovisual content was downloaded from an external PC connected
into the virtualised AP of an RPi, enabling and disabling the cache to test the CDN functionality. This process was repeated 30 times for every mode, obtaining the download values
that can be seen in Table 2.
**Table 2. Download throughput and time used for retrieving content using Wi-Fi.**
| Cache Enabled | Throughput (Mb/s) | Download Time (min:s) |
| --- | --- | --- |
| Off | 4.207 | 3:20 |
| On | 4.780 | 3:05 |
As it can be seen in the table, the download speeds are lower than the ones in previous
tests. This is due to the use of a less stable medium, namely the Wi-Fi connectivity of
the RPis. Nevertheless, both download speeds and average throughput were improved
when the cache was enabled in the site, proving again the effectiveness of the CDN.
Beyond the particular results (throughput, etc.) obtained for the different featured
scenarios, the main objective accomplished was to show that L2S-M can be used to provide
complex network services that might involve different kinds of network interfaces, like wireless interfaces. This is also a starting point for the exploration of this concept since L2S-M could be used to enable connectivity with private networks over an infrastructure
(e.g., the same way in which VIMs like OpenStack connect to provider networks).
**6. Conclusions and Future Work**
The rise of new paradigms like NFV in the context of the 5th generation of mobile
networks has provided new ways to enable the development and deployment of network
services. In this regard, microservice platforms assist in the optimisation and orchestration
of network functions in distributed infrastructures. However, these platforms have some
limitations due to the flat networking approach that they implement for the communication
of their workloads (containers) in order to build a complex application. Furthermore,
these solutions can also have some limitations for connectivity with other infrastructures,
requiring networking abstractions that could prevent communicating network functions
between clusters or other platforms.
To address these issues, this paper has presented L2S-M as a solution that enables link-layer connectivity as a service in cloud-native ecosystems. L2S-M provides a programmable
data plane that microservice platforms (like K8s) can use to create virtual networks that
can be used by the containers to communicate at the link-layer level, since they will see
the rest of the containers as if they were located in the same local area network (even
though they might be distributed in different locations, depending on the cluster and the
underlying infrastructure). Using these virtual networks, L2S-M provides the necessary
network isolation required to deploy network and vertical specific functions in microservice
platforms, which current solutions cannot easily provide. Furthermore, since L2S-M uses
SDN technology to establish the paths between pods in a cluster, other SDN applications
can be flexibly deployed to support traffic engineering and optimise traffic distribution,
using an alternative network path across the L2S-M overlay.
This paper also provides a first exploration of the potential of L2S-M to provide inter-cluster communications between containers using virtual networks, enabling the direct
communication of network functions between heterogeneous infrastructures managed by
different platforms, which can implement different virtualisation techniques, or even run
bare-metal functions.
This paper also presented the implementation and use of L2S-M in a complex Smart Campus scenario, deploying a CDN to distribute multimedia content in a complex, distributed
and heterogeneous scenario. The tests performed in the validation of this paper showed
that L2S-M is suitable for deploying complex NSes based on microservices that require the
use of multiple isolated virtual networks for their proper functionality, interconnecting
workloads located in different infrastructures over geographically distributed locations.
-----
_Future Internet 2023, 15, 274_ 26 of 28
Moreover, these tests depicted the flexibility of L2S-M to incorporate new infrastructures,
like Wi-Fi access points, to extend the functionality of the NSes in the use cases.
Our future work on the advancement and further development of L2S-M includes the implementation of the overlay manager component of the solution to dynamically modify the overlay network. Furthermore, we will also explore the implementation of L2S-M with
the overlay network. Furthermore, we will also explore the implementation of L2S-M with
SR-IOV interfaces to enable its direct use with the physical switching equipment commonly
present in data centre infrastructures. This future work will also involve the exploration of
alternative SDN controllers to increase the functionality and isolation aspects of L2S-M, as
well as the development and application of SDN algorithms to apply traffic engineering
mechanisms. Finally, we want to contribute to the relevant open-source communities with
L2S-M. In this regard, we are working on a feature in OSM to support the creation of virtual
networks in K8s clusters, using L2S-M as the reference operator [49].
**Author Contributions: Funding acquisition, F.V.; Investigation, L.F.G., I.V. and F.V.; Supervision, I.V.**
and F.V.; Validation, L.F.G., R.M. and D.A.; Writing—original draft, L.F.G., R.M. and D.A.; Writing—
review and editing, L.F.G., I.V. and F.V. All authors have read and agreed to the published version of
the manuscript.
**Funding: This article has partially been supported by the H2020 FISHY Project (Grant agreement**
ID: 952644) and by the TRUE5G project (PID2019-108713RB681) funded by the Spanish National
Research Agency (MCIN/AEI/10.13039/5011000110).
**Institutional Review Board Statement: Not applicable.**
**Informed Consent Statement: Not applicable.**
**Data Availability Statement: Data sharing not applicable.**
**Conflicts of Interest: The authors declare no conflict of interest.**
**References**
1. Cloud Native Computing Foundation. Building Sustainable Ecosystems for Cloud Native Software. Available online: https://www.cncf.io (accessed on 11 June 2023).
2. Liu, G.; Huang, B.; Liang, Z.; Qin, M.; Zhou, H.; Li, Z. Microservices: Architecture, container, and challenges. In Proceedings of
the 2020 IEEE 20th International Conference on Software Quality, Reliability and Security Companion (QRS-C), Macau, China,
[11–14 December 2020; pp. 629–635. [CrossRef]](http://doi.org/10.1109/QRS-C51114.2020.00107)
3. Morabito, R.; Kjällman, J.; Komu, M. Hypervisors vs. Lightweight Virtualization: A Performance Comparison. In Proceedings of
[the 2015 IEEE International Conference on Cloud Engineering, Tempe, AZ, USA, 9–13 March 2015; pp. 386–393. [CrossRef]](http://dx.doi.org/10.1109/IC2E.2015.74)
4. [The Linux Foundation. Kubernetes: Production-Grade Container Orchestration. Available online: https://kubernetes.io (accessed](https://kubernetes.io)
on 11 July 2023).
5. [Docker. Swarm Mode Overview. Available online: https://docs.docker.com/engine/swarm/ (accessed on 11 July 2023).](https://docs.docker.com/engine/swarm/)
6. [The OpenStack Project. OpenStack: Open Source Cloud Computing Infrastructure. Available online: https://www.openstack.org](https://www.openstack.org)
(accessed on 11 July 2023).
7. [Google Kubernetes Engine. Available online: https://cloud.google.com/kubernetes-engine (accessed on 11 July 2023).](https://cloud.google.com/kubernetes-engine)
8. [Amazon Elastic Kubernetes Service (EKS). Available online: https://aws.amazon.com/es/eks/ (accessed on 11 July 2023).](https://aws.amazon.com/es/eks/)
9. [Cloud Native Computing Foundation. CNCF Survey 2021. Available online: https://github.com/cncf/surveys (accessed on](https://github.com/cncf/surveys)
11 June 2023).
10. Ponce, F.; Márquez, G.; Astudillo, H. Migrating from monolithic architecture to microservices: A Rapid Review. In Proceedings of
the 2019 38th International Conference of the Chilean Computer Science Society (SCCC), Concepcion, Chile, 4–9 November 2019;
[pp. 1–7. [CrossRef]](http://dx.doi.org/10.1109/SCCC49216.2019.8966423)
11. Ren, Z.; Wang, W.; Wu, G.; Gao, C.; Chen, W.; Wei, J.; Huang, T. Migrating Web Applications from Monolithic Structure to Microservices Architecture. In Proceedings of the 10th Asia-Pacific Symposium on Internetware, Beijing, China, 16 September 2018;
[Association for Computing Machinery: New York, NY, USA, 2018. [CrossRef]](http://dx.doi.org/10.1145/3275219.3275230)
12. Joy, A.M. Performance comparison between Linux containers and virtual machines. In Proceedings of the 2015 International
Conference on Advances in Computer Engineering and Applications, Ghaziabad, India, 19–20 March 2015; pp. 342–346.
[[CrossRef]](http://dx.doi.org/10.1109/ICACEA.2015.7164727)
13. Moravcik, M.; Segec, P.; Kontsek, M.; Uramova, J.; Papan, J. Comparison of LXC and Docker Technologies. In Proceedings
of the 2020 18th International Conference on Emerging eLearning Technologies and Applications (ICETA), Kosice, Slovenia,
[12–13 November 2020; pp. 481–486. [CrossRef]](http://dx.doi.org/10.1109/ICETA51985.2020.9379212)
-----
_Future Internet 2023, 15, 274_ 27 of 28
14. Acar, U.; Ustok, R.F.; Keskin, S.; Breitgand, D.; Weit, A. Programming Tools for Rapid NFV-Based Media Application Development
in 5G Networks. In Proceedings of the 2018 IEEE Conference on Network Function Virtualization and Software Defined Networks
[(NFV-SDN), Verona, Italy, 27–29 November 2018; pp. 1–5. [CrossRef]](http://dx.doi.org/10.1109/NFV-SDN.2018.8725610)
15. Sairam, R.; Bhunia, S.S.; Thangavelu, V.; Gurusamy, M. NETRA: Enhancing IoT security using NFV-based edge traffic analysis.
_[IEEE Sens. J. 2019, 19, 4660–4671. [CrossRef]](http://dx.doi.org/10.1109/JSEN.2019.2900097)_
16. [Cloud Native Computing Foundation. CNI: The Container Network Interface. Available online: https://www.cni.dev (accessed](https://www.cni.dev)
on 11 July 2023).
17. Flannel: A Simple and Easy Way to Configure a Layer 3 Network Fabric Designed for Kubernetes. Available online: https://github.com/flannel-io/flannel (accessed on 11 July 2023).
18. [Tigera. Calico Open Source. Available online: https://www.tigera.io/project-calico/ (accessed on 11 July 2023).](https://www.tigera.io/project-calico/)
19. Multus: A Container Network Interface (CNI) Plugin for Kubernetes That Enables Attaching Multiple Network Interfaces to
[Pods. Available online: https://github.com/k8snetworkplumbingwg/multus-cni (accessed on 11 July 2023).](https://github.com/k8snetworkplumbingwg/multus-cni)
20. Istio Authors. Istio: Simplify Observability, Traffic Management, Security, and Policy with the Leading Service Mesh. Available
[online: https://istio.io (accessed on 4 August 2023).](https://istio.io)
21. Envoy Project Authors. ENVOY: An Open Source Edge and Service Proxy, Designed for Cloud-Native Applications. Available
[online: https://www.envoyproxy.io (accessed on 4 August 2023).](https://www.envoyproxy.io)
22. [KubeEdge: A Kubernetes Native Edge Computing Framework. Available online: https://kubeedge.io (accessed on 11 July 2023).](https://kubeedge.io)
23. [OpenYurt: An Open Platform That Extends Upstream Kubernetes to Edge. Available online: https://openyurt.io (accessed on](https://openyurt.io)
11 July 2023).
24. [Lightweight Kubernetes: The Certified Kubernetes Distribution Built for IoT & Edge Computing. Available online: https://k3s.io](https://k3s.io)
(accessed on 11 July 2023).
25. Red Hat, Inc. About the OpenShift SDN Network Plugin. Available online: https://docs.openshift.com/container-platform/4.13/networking/openshift_sdn/about-openshift-sdn.html (accessed on 5 June 2023).
26. [Akraino. Nodus. Available online: https://github.com/akraino-edge-stack/icn-nodus/tree/master (accessed on 5 June 2023).](https://github.com/akraino-edge-stack/icn-nodus/tree/master)
27. The Linux Foundation. Kube-OVN: The Most Advanced Kubernetes Network Fabric for Enterprises. Available online:
[https://www.kube-ovn.io (accessed on 5 June 2023).](https://www.kube-ovn.io)
28. [European Telecommunications Standards Institute (ETSI). Open Source MANO (OSM). Available online: https://osm.etsi.org](https://osm.etsi.org)
(accessed on 11 July 2023).
29. Mijumbi, R.; Serrat, J.; Gorricho, J.L.; Bouten, N.; De Turck, F.; Boutaba, R. Network Function Virtualization: State-of-the-Art and
[Research Challenges. IEEE Commun. Surv. Tutor. 2016, 18, 236–262. [CrossRef]](http://dx.doi.org/10.1109/COMST.2015.2477041)
30. Lai, W.P.; Wang, Y.H. On the performance impact of virtual link types to 5G networking. In Proceedings of the 2017 Asia-Pacific
Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Kuala Lumpur, Malaysia,
[12–15 December 2017; pp. 1470–1474. [CrossRef]](http://dx.doi.org/10.1109/APSIPA.2017.8282265)
31. Karkazis, P.A.; Railis, K.; Prekas, S.; Trakadas, P.; Leligou, H.C. Intelligent Network Service Optimization in the Context of
[5G/NFV. Signals 2022, 3, 587–610. [CrossRef]](http://dx.doi.org/10.3390/signals3030036)
32. Uzunidis, D.; Karkazis, P.; Roussou, C.; Patrikakis, C.; Leligou, H.C. Intelligent Performance Prediction: The Use Case of a
[Hadoop Cluster. Electronics 2021, 10, 2690. [CrossRef]](http://dx.doi.org/10.3390/electronics10212690)
33. [Network Service Mesh: The Hybrid/Multi-Cloud IP Service Mesh. Available online: https://networkservicemesh.io (accessed](https://networkservicemesh.io)
on 11 July 2023).
34. Gonzalez, L.F.; Vidal, I.; Valera, F.; Sanchez-Aguero, V. A Comparative Study of Virtual Infrastructure Management Solutions for
UAV Networks. In Proceedings of the 7th Workshop on Micro Aerial Vehicle Networks, Systems, and Applications, Virtual,
24 June–2 July 2021; Association for Computing Machinery: New York, NY, USA, 2021; pp. 13–18.
35. Gonzalez, L.F.; Vidal, I.; Valera, F.; Lopez, D.R. Link Layer Connectivity as a Service for Ad-Hoc Microservice Platforms. IEEE
_[Netw. 2022, 36, 10–17. [CrossRef]](http://dx.doi.org/10.1109/MNET.001.2100363)_
36. Gonzalez, L.F.; Vidal, I.; Valera, F.; Sanchez-Aguero, V.; Nogales, B.; Lopez, D.R. NFV orchestration on intermittently available
SUAV platforms: Challenges and hurdles. In Proceedings of the IEEE INFOCOM 2019—IEEE Conference on Computer
[Communications Workshops (INFOCOM WKSHPS), Paris, France, 29 April–2 May 2019; pp. 301–306. [CrossRef]](http://dx.doi.org/10.1109/INFCOMW.2019.8845040)
37. Gonzalez, L.F.; Vidal, I.; Valera, F.; Nogales, B.; Sanchez-Aguero, V.; Lopez, D.R. Transport-Layer Limitations for NFV Orchestration in Resource-Constrained Aerial Networks. Sensors 2019, 19, 5220. [CrossRef] [PubMed]
38. Nogales, B.; Sanchez-Aguero, V.; Vidal, I.; Valera, F.; Garcia-Reinoso, J. A NFV System to Support Configurable and Automated
Multi-UAV Service Deployments. In Proceedings of the 4th ACM Workshop on Micro Aerial Vehicle Networks, Systems, and
Applications, Munich, Germany, 10–15 June 2018; Association for Computing Machinery: New York, NY, USA, 2018; pp. 39–44.
[[CrossRef]](http://dx.doi.org/10.1145/3213526.3213534)
39. Mahalingam, M.; Dutt, D.; Duda, K.; Agarwal, P.; Kreeger, L.; Sridhar, T.; Bursell, M.; Wright, C. Virtual eXtensible Local
Area Network (VXLAN): A Framework for Overlaying Virtualized Layer 2 Networks over Layer 3 Networks. RFC 7348, 2014.
[Available online: https://www.rfc-editor.org/rfc/rfc7348.txt (accessed on 16 August 2023).](https://www.rfc-editor.org/rfc/rfc7348.txt)
40. Li, T.; Farinacci, D.; Hanks, S.P.; Meyer, D.; Traina, P.S. Generic Routing Encapsulation (GRE). RFC 2784, 2000. Available online:
[https://www.rfc-editor.org/rfc/rfc2784.txt (accessed on 16 August 2023).](https://www.rfc-editor.org/rfc/rfc2784.txt)
-----
_Future Internet 2023, 15, 274_ 28 of 28
41. VMWare. What is Single Root I/O Virtualization (SR-IOV). Available online: https://docs.vmware.com/en/VMware-vSphere/8.0/vsphere-networking/GUID-CC021803-30EA-444D-BCBE-618E0D836B9F.html (accessed on 11 June 2023).
42. Open Networking Foundation (ONF). OpenFlow Switch Specification v1.0–v1.5. Available online: https://opennetworking.org/software-defined-standards/specifications/ (accessed on 18 April 2023).
43. [Open Networking Foundation (ONF). P4 Open Source Programming Language. Available online: https://p4.org (accessed on](https://p4.org)
18 April 2023).
44. [The Linux Foundation. Operator Pattern. Available online: https://kubernetes.io/docs/concepts/extend-kubernetes/operator/](https://kubernetes.io/docs/concepts/extend-kubernetes/operator/)
(accessed on 28 June 2023).
45. Gonzalez, L.F.; Vidal, I.; Valera, F.; Lopez, D.R. Link-Layer Secure Connectivity for Microservice Platforms (L2S-M). Available
[online: http://l2sm.io (accessed on 11 July 2023).](http://l2sm.io)
46. Open Networking Foundation. Open Network Operating System (ONOS), Open Source SDN Controller for Building Next-Generation SDN/NFV Solutions. Available online: https://opennetworking.org/onos (accessed on 11 July 2023).
47. [European H2020 LABYRINTH Project. Ensuring Drone Traffic Control and Safety. Available online: https://labyrinth2020.eu/](https://labyrinth2020.eu/)
(accessed on 11 July 2023).
48. European H2020 FISHY Project. A Coordinated Framework for Cyber Resilient Supply Chain Systems Over Complex ICT
[Infrastructures. Available online: https://fishy-project.eu/ (accessed on 11 July 2023).](https://fishy-project.eu/)
49. Gonzalez, L.F.; Vidal, I.; Valera, F.; Nogales, B.; Lopez, D.R. Feature 10921: Connectivity among CNFs Using SDN. Available
[online: https://osm.etsi.org/gitlab/osm/features/-/issues/10921 (accessed on 11 July 2023).](https://osm.etsi.org/gitlab/osm/features/-/issues/10921)
50. [Helm Authors. HELM: The Package Manager for Kubernetes. Available online: https://helm.sh (accessed on 6 August 2023).](https://helm.sh)
51. [F5 Inc. Nginx Documentation. Available online: https://nginx.org/en/docs/ (accessed on 18 April 2023).](https://nginx.org/en/docs/)
52. The Linux Foundation. Creating a Cluster with Kubeadm. Available online: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/ (accessed on 18 April 2023).
53. Gonzalez, L.F.; Vidal, I.; Valera, F.; Artin, R.M.; Artalejo, D. Smart Campus Scenario. Available online: https://github.com/Networks-it-uc3m/Smart-Campus-Scenario (accessed on 6 August 2023).
54. [Dugan, J.; Elliott, S.; Mah, B.A.; Poskanzer, J.; Prabhu, K. What Is iPerf/iPerf3? Available online: https://iperf.fr/ (accessed on](https://iperf.fr/)
11 June 2023).
**Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual**
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.
# MONSOON: A Coevolutionary Multiobjective Adaptation Framework for Dynamic Wireless Sensor Networks
## Pruet Boonma and Junichi Suzuki
Department of Computer Science, University of Massachusetts, Boston
{pruet, jxs}@cs.umb.edu
## Abstract
_Wireless sensor networks (WSNs) are often required_
_to simultaneously satisfy conflicting operational objectives_
_(e.g., latency and power consumption). Based on an obser-_
_vation that various biological systems have developed the_
_mechanisms to overcome this issue, this paper proposes a_
_biologically-inspired adaptation mechanism, called MON-_
_SOON. MONSOON is designed to support data collection_
_applications, event detection applications and hybrid appli-_
_cations. Each application is implemented as a decentralized_
_group of software agents, analogous to a bee colony (appli-_
_cation) consisting of bees (agents). Agents collect sensor_
_data and/or detect an event (a significant change in sen-_
_sor reading) on individual nodes, and carry sensor data to_
_base stations. They perform these data collection and event_
_detection functionalities by sensing their surrounding en-_
_vironment conditions and adaptively invoking biologically-_
_inspired behaviors such as pheromone emission, reproduc-_
_tion and migration. Each agent has its own behavior pol-_
_icy, as a gene, which defines how to invoke its behaviors._
_MONSOON allows agents to evolve their behavior policies_
_(genes) and adapt their operations to given objectives. Sim-_
_ulation results show that MONSOON allows agents (WSN_
_applications) to simultaneously satisfy conflicting objec-_
_tives by adapting to dynamics of physical operational envi-_
_ronments and network environments (e.g., sensor readings_
_and node/link failures) through evolution._
## 1. Introduction
Autonomous adaptability is a key challenge in wireless
sensor networks (WSNs) [1–4]. With minimal intervention
to/from human operators, WSN applications are required to
adapt their operations to dynamic changes in physical operational environments (e.g., sensor readings) and network
environments (e.g., network traffic and node/link failures).
A critical issue in this challenge is that each WSN application tends to have conflicting operational objectives. For
example, the success rate of data transmissions from individual nodes to base stations is an important objective because higher success rate ensures that base stations have
more data for operators to better understand a physical oper
ational environment and make better informed decisions. At
the same time, the latency of data transmissions from individual nodes to base stations is another important objective.
Lower latency ensures that base stations can collect sensor
data for operators to understand a physical operational environment more quickly and make more timely decisions.
Success rate and latency conflict with each other. For improving success rate, hop-by-hop recovery is often applied;
however, this can degrade latency. For improving latency,
nodes may transmit data to base stations with the shortest
paths; however, success rate can degrade because of traffic
congestion on the paths.
In order to address this adaptability issue, the authors
of the paper envision autonomous WSN applications that
understand their operational objectives and simultaneously
satisfy them against the dynamics of network environments.
Toward this vision, the authors observe that various biological systems have developed the mechanisms to overcome the above adaptability issue. For example, each
bee colony autonomously satisfies conflicting objectives to
maintain its well-being [5]. Those objectives include maximizing the amount of collected honey, maintaining temperature inside a nest and minimizing the number of dead
drones. If bees focus only on foraging, they fail to ventilate their nest and remove dead drones. Given this observation, the proposed application architecture, called BiSNET/e (Biologically-inspired architecture for Sensor NETworks, evolutionary edition), applies key biological mechanisms to implement adaptive WSN applications.
Figure 1 shows the BiSNET/e runtime architecture. The
BiSNET/e runtime operates atop TinyOS on each node. It
consists of two software components: agents and middleware platforms, which are modeled after bees and flowers,
respectively. Each WSN application is designed as a decentralized group of agents. This is analogous to a bee colony
(application) consisting of bees (agents). Agents collect
sensor data and/or detect an event (a significant change
in sensor reading) on platforms (flowers) atop individual
nodes. Then, they carry sensor data to base stations, in
turn, to a backend server (the MONSOON server in Figure
1), which is modeled after a nest of bees. Agents perform
**Figure 1. The BiSNET/e Runtime Architecture**
these data collection and event detection functionalities by
autonomously sensing their surrounding environment conditions and adaptively performing biological behaviors such
as pheromone emission, reproduction, migration, swarming
and death. A middleware platform runs on each node, and
hosts an arbitrary number of agents (Figure 1). It provides
a series of runtime services that agents use to perform their
functionalities and behaviors.
This paper describes a key mechanism in BiSNET/e,
called MONSOON[1], which is a co-evolutionary adaptation framework for agents. Each agent possesses its own
behavior policy, as a gene, which defines how to invoke its
behaviors. MONSOON allows agents to evolve their behavior policies via genetic operations (mutation and crossover)
across generations and simultaneously adapt the behavior
policies to conflicting objectives in dynamic physical operational environments and network environments. Currently, MONSOON considers three objectives: success rate,
latency and power consumption. The evolution process
in MONSOON frees application designers from anticipating all possible environment conditions and tuning their
agents’ behavior policies to the conditions at design time.
Instead, agents can autonomously evolve and tune their behavior policies. This significantly simplifies the implementation and maintenance of agents (i.e., WSN applications).
MONSOON supports data collection applications, event
detection applications and hybrid applications. Different
types of applications are implemented with different types
of agents. Data collection and event detection applications
are implemented with data collection agents (DAs) and
_event detection agents (EAs), respectively. Both DAs and_
EAs are used to implement hybrid applications, which perform both data collection and event detection. In hybrid
applications, DAs and EAs coevolve and adapt their behavior policies (genes) in a symbiotic manner. EAs help DAs
improve their behavior policies, and vice versa.
This paper is organized as follows. Section 2 overviews
the BiSNET/e runtime, and Section 3 describes the design
of MONSOON. Section 4 evaluates MONSOON with a
series of simulation results. Simulation results show that
MONSOON allows agents (WSN applications) to simultaneously satisfy conflicting objectives by adapting to the dynamics of physical operational environments and network environments (e.g., sensor readings and node/link failures) through evolution. Sections 5 and 6 conclude with some discussion on related work.
1 Multiobjective Optimization for Network of Sensors using a cOevOlutionary mechaNism
## 2. The BiSNET/e Runtime
At the beginning of a WSN’s operation, one DA and
one EA are deployed on each node. They have randomly generated behavior policies. A DA collects sensor data on each node periodically (i.e., at each data collection cycle) and carries the data to a base station on a hop-by-hop basis.
An EA collects sensor data on each node periodically, and if
it detects an event (i.e., a significant change in sensor data),
carries the data to a base station on a hop-by-hop basis. If
an event is not detected, the EA discards the data.
### 2.1. Agent Structure and Behaviors
Each agent consists of attributes, body and behaviors.
_Attributes carry descriptive information on an agent. They_
include agent type (i.e., EA or DA), behavior policy (gene),
sensor data to be reported to a base station, the data’s time
stamp, and the ID of a node where the data is collected.
_Body implements the functionalities of an agent: collect-_
ing and processing sensor data (e.g., discarding it or reporting it to a base station).
_Behaviors implement actions inherent to all agents. Sim-_
ilar to biological entities (e.g., bees), agents sense their
surrounding environment conditions and behave according
to the sensed conditions without any intervention from/to
other agents, platforms, base stations and human operators.
This paper focuses on the following seven behaviors.
**(1) Food gathering and consumption: Biological enti-**
ties strive to seek food for living. For example, bees gather
nectar to produce honey. Similarly, each agent periodically
reads sensor data (as nectar) to gain energy (as honey)[2], and
consumes a constant amount of energy for living.
**(2) Pheromone emission: Agents may emit different**
types of pheromones: migration and alert pheromones.
They emit migration pheromones on their local nodes
when they migrate to neighboring nodes. Each migration
pheromone references the destination node an agent has migrated to. Agents also emit alert pheromones when they fail
migrations within a timeout period. Each alert pheromone
references a possibly failed node that an agent could not migrate to. Each pheromone has its own concentration, which
decays by half at every data collection cycle. A pheromone
disappears when its concentration becomes zero.
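As a minimal illustration of this bookkeeping (the class layout and the disappearance threshold are assumptions made here, since repeatedly halving a floating-point concentration never reaches exactly zero):

```python
class Pheromone:
    """Toy model of a pheromone deposited on a node (illustrative only)."""
    def __init__(self, kind, target_node, concentration=1.0):
        self.kind = kind                # "migration" or "alert"
        self.target_node = target_node  # node referenced by the pheromone
        self.concentration = concentration

def decay_pheromones(pheromones):
    """Halve every concentration at each data collection cycle and drop the
    pheromones whose concentration has (effectively) reached zero."""
    for p in pheromones:
        p.concentration /= 2.0
    return [p for p in pheromones if p.concentration > 1e-6]
```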
**(3) Replication: EAs may make a copy of themselves**
in response to the abundance of stored energy, while DAs
always make a copy of themselves in each data collection
2 The concept of energy in BiSNET/e does not represent the amount of physical battery in a node; it only logically affects agent behaviors.
-----
cycle. A replicated (child) agent is placed on the node that
its parent resides on, and it inherits the parent’s agent type
and behavior policy (gene). Replicated agents are intended
to move toward base stations to report collected sensor data.
**(4) Migration: Agents may move from one node to an-**
other. Migration is used to transmit agents (sensor data) to
base stations. Each agent chooses a migration destination
node by sensing three types of pheromones available on the
local node: base station, migration and alert pheromones.
Each base station periodically propagates base station
_pheromones to individual nodes in the network. Their con-_
centration decays on a hop-by-hop basis. Using base station
pheromones, agents can sense where base stations exist approximately, and move toward the base stations by climbing
pheromone’s concentration gradient[3].
An agent may move to a base station by following a migration pheromone trace on which many other agents have
traveled. The trace can be the shortest path to the base
station. Conversely, an agent may go off a migration pheromone trace and follow another path to a base station
when the concentration of migration pheromones is too high
on the trace (i.e., when too many agents have followed the
trace). This avoids separating the network into islands. The
network can be separated with the migration paths that too
many agents follow, because the nodes on the paths consume more power and go down earlier than the others.
An agent may also avoid moving to a node referenced
by an alert pheromone. This allows agents to reach base
stations by bypassing link/node failures.
**(5) Swarming: Agents may swarm (or merge) with oth-**
ers on their ways to base stations. Multiple agents become
a single agent. (A DA can merge with both DAs and EAs,
and an EA can merge with both EAs and DAs.) The resulting agent (swarm) aggregates sensor data contained in
other agents, and uses the behavioral policy of the best agent
in the swarm in terms of latency and power consumption.
This data aggregation saves power consumption of nodes
because in-node data processing requires much less power
consumption than data transmission does.
**(6) Reproduction: Once agents arrive at the MON-**
SOON server (Figure 1), they are evaluated according to
their objectives. Then, MONSOON selects best-performing
(or elite) agents, and propagates them to individual nodes.
An agent running on each node performs reproduction with
one of the propagated agents. A reproduced agent inherits a behavior policy (gene) from its parents via crossover,
and mutation may occur on the inherited behavior policy.
Reproduced agents perform a generation change by taking
over existing agents running on individual nodes.
Reproduction is intended to evolve agents so that the
agents that fit better to the environment become more abundant. It retains the agents whose fitness to the current network conditions is high (i.e., the agents that have effective behavior policies, such as moving toward a base station with a short latency), and eliminates the agents whose fitness is low (i.e., the agents that have ineffective behavior policies, such as consuming too much power to reach a base station). Through successive generations, effective behavior policies become abundant in the agent population while ineffective ones become dormant or extinct. This allows agents to adapt to dynamic network conditions.
3 Base station pheromones are designed after the Nasonov gland pheromone, which guides bees to move toward their nest [6].
**(7) Death: Agents periodically consume energy for liv-**
ing, and expend energy to invoke their behaviors. (The energy costs to invoke behaviors are constant for all agents.)
Agents die due to lack of energy when they cannot balance energy gain and expenditure. The death behavior is
intended to eliminate the agents that have ineffective behavior policies. For example, an agent would die before arriving at a base station if it follows a too long migration path.
When an agent dies, the local platform removes the agent
and releases all resources allocated to the agent.
### 2.2. Behavior Sequences for DAs and EAs
Figures 2 and 3 show a sequence of behaviors that each
DA and EA perform on a node in each data collection cycle.
A DA reads sensor data (as nectar) with the underlying
sensor device and gains a constant amount of energy (as
honey). Given the energy intake (EF), each agent updates
its energy level as follows.
E(t) = E(t − 1) + EF    (1)

E(t) is the current energy level of the DA, and E(t − 1) is the DA’s energy level in the previous data collection cycle. t is incremented by one at each data collection cycle.
If a DA’s energy level (E(t)) falls below the death threshold (TD), the DA dies due to starvation[4].
A DA replicates itself in each data collection cycle. A replicating (parent) agent splits its energy units into halves ((E(t) − ER)/2 each), gives one half to its child agent, and keeps the other half. ER is the energy cost for an agent to perform the replication behavior. A child agent contains the sensor data that its parent collected, and carries it to a base station.
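A minimal sketch of this per-cycle energy bookkeeping follows; the constants EF, TD and ER are placeholders chosen for illustration, since their values are not given here.

```python
E_F = 1.0  # constant energy intake per sensor reading (assumed value)
T_D = 0.1  # death threshold (assumed value)
E_R = 0.2  # energy cost of the replication behavior (assumed value)

def da_collection_cycle(energy):
    """One DA cycle: Equation 1, the death check, and the replication split."""
    energy += E_F                # E(t) = E(t - 1) + EF
    if energy < T_D:
        return None              # the DA dies of starvation
    half = (energy - E_R) / 2.0  # parent and child each keep (E(t) - ER) / 2
    return half, half            # (parent energy, child energy)
```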
Each replicated DA migrates toward a base station on a hop-by-hop basis. On each intermediate node, it examines Equation 2 to determine the next node it migrates to.
A DA calculates the weighted sum WS_j below for each neighboring node j, and moves to the node that generates the highest weighted sum:

WS_j = \sum_{t=1}^{3} w_t (P_{t,j} − P_{t,min}) / (P_{t,max} − P_{t,min})    (2)

t denotes the pheromone type; P_{1,j}, P_{2,j} and P_{3,j} represent the concentrations of base station, migration and alert pheromones on node j. P_{t,max} and P_{t,min} denote the maximum and minimum concentrations of P_t among the neighboring nodes.
4 If all agents are dying on a node at the same time, a randomly selected agent of each type (i.e., EA and DA) will survive. At least one agent of each type runs on each node.
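A direct transcription of this destination choice, under the reconstruction of Equation 2 given above, might look like the following sketch; the data layout and example values are invented for illustration.

```python
def migration_destination(neighbors, weights):
    """Choose the neighbor with the highest weighted sum WS_j (Equation 2).

    neighbors: dict mapping a node id to (P1, P2, P3), the concentrations of
               base station, migration and alert pheromones on that node.
    weights:   (w1, w2, w3) taken from the agent's behavior policy (gene).
    """
    ws = {node: 0.0 for node in neighbors}
    for t in range(3):
        values = [p[t] for p in neighbors.values()]
        p_min, p_max = min(values), max(values)
        span = (p_max - p_min) or 1.0  # guard: all neighbors equal for type t
        for node, p in neighbors.items():
            ws[node] += weights[t] * (p[t] - p_min) / span
    return max(ws, key=ws.get)

# Example with two hypothetical neighbors and an invented behavior policy:
# migration_destination({"n1": (0.9, 0.1, 0.0), "n2": (0.4, 0.6, 0.3)},
#                       (1.0, -0.5, 0.2))
```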
When a DA is migrating to a neighboring node, it emits
a migration pheromone on the local node. If the DA’s
migration fails, it emits an alert pheromone. Each alert
pheromone spreads to one-hop away neighboring nodes.
**for each data collection cycle do**
    Read sensor data and gain energy (EF).
    Update energy level (E(t)).
    **if E(t) < the death threshold (TD) then**
        Invoke the death behavior.
    Invoke the replication behavior to make a child agent.
    Give half of the current energy level to the replicated (child) agent.
    **for each migrating agent do**
        Determine the destination node of migration.
        Emit a migration pheromone on the local node.
        Migrate to a neighboring node.
        **if migration fails then**
            Emit an alert pheromone on the local node.
            Propagate it to neighboring nodes.
**Figure 2. A Sequence of DA Behaviors in Each Data Collection Cycle**
**while true do**
    Read sensor data and gain energy (EF).
    Update energy level (E(t)).
    **if E(t) < the death threshold (TD) then**
        Invoke the death behavior.
    **while E(t) > the replication threshold (TR(t)) do**
        Invoke the replication behavior to make a child agent.
        Give half of the current energy level to the child agent.
    **for each migrating agent do**
        Determine the destination node of migration.
        Emit a migration pheromone on the local node.
        Migrate to a neighboring node.
        **if migration fails then**
            Emit an alert pheromone on the local node.
            Propagate it to neighboring nodes.
**Figure 3. A Sequence of EA Behaviors**
When an EA reads sensor data (as nectar) with the underlying sensor device and gains energy (as honey), its current energy level (E(t)) is updated with Equation 3:

E(t) = E(t − 1) + S · M    (3)

S represents the absolute difference between the current and previous sensor data. M is the metabolic rate, which is a constant value between 0 and 1.
Each EA replicates itself if its energy level exceeds the replication threshold TR(t) (Figure 3). The replication threshold is continuously adjusted as the EWMA (Exponentially Weighted Moving Average) of each EA’s energy level:

TR(t) = (1 − α) TR(t − 1) + α E(t)    (4)

TR(t) is the current replication threshold, and TR(t − 1) is the one in the previous data collection period. The EWMA is used to smooth out short-term minor oscillations in the data series of E. It places more emphasis on the long-term transition trend of E; only significant changes in E have the effect of changing TR. The α value is a constant that controls the responsiveness of the EWMA against changes in E.
Similar to DAs, a parent EA splits its energy units into halves, gives one half to its child agent, and keeps the other half. The EA keeps replicating itself until its energy level becomes less than its TR. A child agent contains the sensor data that its parent collected, and carries it to a base station.
EAs perform the migration behavior with Equation 2 in the same way as DAs do.
### 2.3 Agent Behavior Policy
EAs and DAs have the same structure for behavior policies (genes). Each behavior policy contains a set of weight values in Equation 2 (wt, 1 ≤ t ≤ 3). w1 and w3 are non-negative, and w2 can be negative. These weight values govern how agents perform the migration behavior. For example, if an agent has zero for w2 and w3, the agent ignores migration and alert pheromones, and moves toward the base stations by climbing the concentration gradient of base station pheromones. If an agent has a positive value for w2, it follows a migration pheromone trace on which many other agents have traveled. A negative w2 value allows an agent to go off a migration pheromone trace and follow another path toward a base station. If an agent has a positive w3, it moves to a base station by bypassing link/node failures.
## 3. MONSOON
MONSOON is a coevolutionary multiobjective adaptation mechanism designed for agents in BiSNET/e. It allows agents to heuristically adapt to multiple objectives simultaneously. This adaptation process is performed through _elite selection_ and _genetic operations_. The elite selection process evaluates each type of agent (DAs and EAs) that arrives at base stations, based on given objectives, and chooses the best (or elite) ones. Elite agents are propagated to the network in order to perform genetic operations and reproduce an offspring (next-generation) agent on each node. Elite selection is performed in the MONSOON server (see Figure 1), and genetic operations are performed in each node.
### 3.1. Operational Objectives
Agents (DAs and EAs) consider three conflicting objectives: the latency, cost and success rate of their migration (i.e., data transmission) from individual nodes to base stations.
**(1) Latency** represents the time required for an agent (DA or EA) to travel to a base station from the node where the agent is born (replicated). As depicted below, latency is measured as a ratio of this agent travel time to the physical distance (PD) between a base station and the node where the agent is born. The MONSOON server knows the location of each node with a certain localization mechanism.

Latency = Agent travel time (sec) / PD (meter)    (5)
**(2) Cost** represents the amount of power consumption required for an agent (DA or EA) to travel to a base station from the node where the agent is born. It is measured with the total number of data transmissions, each node’s radio transmission range (radius), and PD:

Cost = Total # of data transmissions / (Transmission range / PD)    (6)

The total number of data transmissions includes successful and unsuccessful (failed) agent migrations as well as the transmissions of migration or alert pheromones.
**(3) Success Rate** is measured differently for DAs and EAs. For DAs, it is measured as follows:

Success rateDA = # of agents that arrive at base stations / The total # of nodes    (7)

For EAs, success rate is measured as follows:

Success rateEA = # of successful agent migrations / The total # of attempts of agent migrations    (8)
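For concreteness, the per-agent objective values can be computed as in the sketch below; the parameter names are invented here, and Equation 6 is coded in the extracted form shown above.

```python
def objective_values(travel_time_s, pd_m, transmissions, tx_range_m,
                     arrived, total_nodes):
    """Objective values following Equations 5-7 (as reconstructed above)."""
    latency = travel_time_s / pd_m              # Equation 5
    cost = transmissions / (tx_range_m / pd_m)  # Equation 6, extracted form
    success_rate_da = arrived / total_nodes     # Equation 7 (DAs)
    return latency, cost, success_rate_da
```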
### 3.2. Elite Selection
Figure 4 shows how elite selection occurs at the MONSOON server in each data collection cycle. The MONSOON server performs the same selection process for EAs and DAs separately. The first step is to obtain three objective values (i.e., latency, cost and success rate) from each of the agents that reach the MONSOON server via base stations. Then, each agent is evaluated as to whether it is dominated by another agent. An agent is considered to be dominated if another agent outperforms it in all three objectives.
Empty the archive.
**for each data collection cycle do**
    Empty the population pool.
    Collect agents from the network.
    Add collected agents to the population pool.
    Move agents from the archive to the population pool.
    Empty the archive.
    **for each agent in the population pool do**
        **if not dominated by any other agent in the population pool then**
            Add the agent to the archive.
    Select elite agents from the archive.
    Propagate elite agents to the network.
**Figure 4. Elite Selection in MONSOON**
In the next step, a subset of the non-dominated agents is selected as elite agents. This is performed with a hypercube space, a three-dimensional space whose axes represent the three objectives (i.e., latency, cost and success rate). Each axis of the hypercube space is divided so that the space is divided into small cubes. Each non-dominated agent is plotted in this hypercube space based on its objective values. A single agent is randomly selected from each cube as an elite agent. This elite selection is designed to maintain the diversity of elite agents’ genes. The diversification of agent genes contributes to improving agents’ adaptation even to unanticipated network conditions.
Figure 5 shows an example hypercube space. Each axis is divided into two ranges; therefore, eight cubes exist in total. Thus, the maximum number of elite agents is eight. In this example, six non-dominated agents (A to F) are plotted in the hypercube space. Three agents (B, C, and D) are plotted in the lower left cube, while the other three agents (A, E, and F) are plotted in three different cubes. From the lower left cube, only one agent is randomly selected as an elite agent. A, E, and F are selected as elite agents because they are in different cubes.
**Figure 5. An Example Elite Selection** (axes: latency and cost, minimized; success rate, maximized)
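The dominance test and per-cube sampling described above can be sketched as follows, representing each agent simply by its objective tuple (latency, cost, success rate). This is an illustrative reconstruction, not the MONSOON server code (which is implemented in Java, per Section 4).

```python
import random

def dominates(a, b):
    """a dominates b if it outperforms b in all three objectives
    (latency and cost are minimized; success rate is maximized)."""
    return a[0] < b[0] and a[1] < b[1] and a[2] > b[2]

def select_elites(agents, divisions=2):
    """Pick one random non-dominated agent per occupied hypercube cell."""
    nondom = [a for a in agents
              if not any(dominates(b, a) for b in agents if b is not a)]
    lo = [min(a[i] for a in nondom) for i in range(3)]
    hi = [max(a[i] for a in nondom) for i in range(3)]
    cells = {}
    for a in nondom:
        key = tuple(
            min(int((a[i] - lo[i]) / ((hi[i] - lo[i]) or 1.0) * divisions),
                divisions - 1)
            for i in range(3))
        cells.setdefault(key, []).append(a)
    return [random.choice(group) for group in cells.values()]
```

With divisions = 2 this yields at most eight cells, matching the example of Figure 5.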
### 3.3. Genetic Operations
Once elite DAs and EAs are selected, the MONSOON
server propagates them to each node in the network. They
are propagated with base station pheromones.
Based on a certain reproduction probability, an agent
performs the reproduction behavior on each node through
genetic operations (crossover and mutation) when elite
agents arrive at the node. As a mating partner, the agent selects one of the elite agents that has the most similar gene.
Gene similarity is measured with the Euclidean distance between the values of two genes. DAs can mate with elite
EAs, and EAs can mate with elite DAs. This cross-mating
allows DAs and EAs to coevolve their behavior policies;
DAs can improve EAs’ genes, and vice versa.
During reproduction, an agent inherits half of its gene
from its parent agent and the other half from its parent’s
mating partner. Mutation occurs on the child agent’s gene
with a certain mutation probability by randomly changing
gene values within a predefined value range.
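A minimal sketch of this reproduction step is given below. Since the exact way the gene is split between the two parents is not specified here, uniform crossover is used as a stand-in, and the gene value range is an assumption; the mutation probability matches the simulation set-up in Section 4.

```python
import math
import random

MUTATION_PROB = 0.025       # mutation probability used in Section 4
WEIGHT_RANGE = (-1.0, 1.0)  # predefined gene value range (assumed here)

def closest_elite(gene, elites):
    """Mating partner: the elite with the smallest Euclidean gene distance."""
    return min(elites, key=lambda elite: math.dist(gene, elite))

def reproduce(parent_gene, elites):
    """Crossover with the most similar elite, then a possible point mutation."""
    partner = closest_elite(parent_gene, elites)
    child = [p if random.random() < 0.5 else q
             for p, q in zip(parent_gene, partner)]
    if random.random() < MUTATION_PROB:
        i = random.randrange(len(child))
        child[i] = random.uniform(*WEIGHT_RANGE)
    return child
```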
## 4. Simulation Results
This section shows a set of simulation results to evaluate
MONSOON. It is evaluated with a data collection application (Section 4.1), event detection application (Section 4.2)
and hybrid application (Section 4.3). A simulated WSN
consists of 100 nodes uniformly deployed in an observation area of 300x300 square meters. Each node’s communication range is 30 meters. A base station is deployed
on the northwestern corner of the observation area. The
base station links the MONSOON server via emulated serial port connection. All the software components in the
BiSNET/e runtime are implemented in nesC, and the MONSOON server is implemented in Java. Simulation time is
counted with ticks. Each tick represents five minutes. In
genetic operations, the reproduction probability is 0.75, and
the mutation probability is 0.025.
### 4.1 Data Collection Application
A data collection application is implemented with DAs
that perform the sequence of behaviors shown in Figure 2.
No EAs are used in this application. The data collection
cycle corresponds to a simulation tick (five minutes).
Figure 6 (a) shows the average objective values produced
by DAs at each simulation tick. Each objective value gradually improves and converges at the 17th tick. This simulation result shows that MONSOON allows DAs to simultaneously satisfy conflicting objectives by evolving their behavior policies.
Figures 8, 9 and 10 show the objective values that the elite DAs produced at the 20th tick. Since each objective value’s change is less than 1% from the 17th to the 20th tick, it is fair to say that the elite DAs are on the Pareto front at the 20th tick. Figures 8, 9 and 10 plot the elite DAs from three different perspectives: latency-cost, cost-success rate, and latency-success rate. Each
gray dot represents an elite DA, and a black dot represents
overlapping elite DAs. These figures demonstrate that elite
agents are well diversified as intended by an elite selection
process described in Section 3.2.
Figure 6 (b) shows how the performance of DAs changes
against a dynamic node addition. 25 nodes are added at random locations at the 20th tick. Upon this change in the network environment, objective values degrade dramatically
because DAs have randomly-generated behavior policies on
the new nodes. Those DAs cannot migrate efficiently toward the base station. Also, enough pheromones are not
available on new nodes; DAs cannot make proper migration decisions when they move to the new nodes. However,
DAs gradually improve their performance again, and objective values converge again at the 43rd tick. MONSOON allows DAs to autonomously recover application performance
despite dynamic node addition by evolving their behavior
policies.
Figure 6 (c) shows how the performance of DAs changes
against dynamic node failures. 25 nodes randomly fail
at the 20th tick. Objective values degrade because some
DAs try to migrate to failed nodes referenced by migration pheromones.
**Figure 6. Objective Values of DAs without EAs: (a) Static Network, (b) Node Addition, (c) Random Node Failure, (d) Selective Node Failure, (e) Base Station Failure (plots of Cost, Latency and Success Rate over Simulation Ticks).**
**Figure 7. Objective Values of EAs without DAs: (a) Static Network, (b) Node Addition, (c) Random Node Failure, (d) Selective Node Failure, (e) Base Station Failure (same format as Figure 6).**
**Figure 8. Latency-Cost Objective Values on the Pareto Front**
**Figure 9. Cost-Success Rate Objective Values on the Pareto Front**
**Figure 10. Latency-Success Rate Objective Values on the Pareto Front**
This increases the number of unsuccessful
agent migrations. However, DAs gradually improve their
performance again, and objective values converge again at
the 45th tick. MONSOON allows DAs to autonomously
recover application performance despite dynamic node failures by evolving their behavior policies.
Figure 6 (d) shows how the performance of DAs changes
when nodes selectively fail in a specific area. At the 20th
tick, 20 nodes fail in the middle of the WSN observation area. Hence, the WSN has a hole in its middle area. Compared with Figure 6 (c), it takes a longer time for DAs to recover their performance. Objective values converge again at the 50th tick. The converged cost and latency are worse than the ones at the 20th tick because DAs have to detour around the hole (i.e., the set of failed nodes) and take longer migration paths to the base station. This simulation result shows that MONSOON allows DAs to survive selective node failures through evolution.
Figure 6 (e) shows how the performance of DAs changes
against base station failures. In this simulation scenario,
two base stations are deployed at the northwestern and
southeastern corners of the WSN observation area. At the 20th
tick, a base station at the southeastern corner fails. Objective values degrade because some DAs try to migrate
toward the failed base station referenced by base station
pheromones. This increases the number of unsuccessful
agent migrations. However, DAs gradually improve their
performance again, and objective values converge again at
the 37th tick. MONSOON allows DAs to autonomously
evolve and recover application performance despite dynamic base station failures.
### 4.2 Event Detection Application
An event detection application is implemented with EAs
that perform the sequence of behaviors shown in Figure 3 in
every simulation tick. No DAs are used in this application.
This simulation study simulates an event, which occurs in
the middle of the WSN observation area at the 50th tick and
radially spreads over time.
Figure 7 (a) shows the average objective values at each
simulation tick. Upon an event detection, objective values
are low because EAs use random behavior policies at first.
However, each objective value gradually improves and converges at the 45th tick. This simulation result shows that
MONSOON allows EAs to simultaneously satisfy conflicting objectives by evolving their behavior policies.
Figure 7 (b) shows how the performance of EAs changes
against a dynamic node addition. 25 nodes are added at
random locations at the 50th tick. Upon this environmental
change, objective values degrade slightly because EAs have
randomly-generated behavior policies on the new nodes.
Those EAs cannot migrate efficiently toward the base station. However, the EAs improve their performance again, and objective values converge again at the 70th tick. MONSOON allows EAs to autonomously recover application performance despite dynamic node addition by evolving their behavior policies.
Figure 7 (c) shows how the performance of EAs changes
against dynamic node failures. 25 nodes randomly fail at the
50th tick. Objective values degrade slightly because some
EAs try to migrate to failed nodes referenced by migration
pheromones. This increases the number of unsuccessful
agent migrations. However, EAs gradually improve their
performance again, and objective values converge again at
the 72nd tick. MONSOON allows EAs to autonomously
recover application performance despite dynamic node failures by evolving their behavior policies.
Figure 7 (d) shows the result of a simulation in which 20 sensor nodes are selectively deactivated at the 50th tick so as to create a hole in the middle of the network. Compared with the result in Figure 7 (c), MONSOON takes a longer time to improve the performance of the WSN. The success rate converges at about the 75th tick to approximately 38%. The cost and latency also show
a similar trend. In particular, after the 52nd tick, the average values of cost and latency are higher than they were just before the failure, because agents have to detour along a longer path to avoid the hole in the middle of the network.
The simulation results show that MONSOON allows a WSN to survive a selective sensor node failure by adjusting the operational parameters of the WSN to suit the changes in network conditions.
Figure 7 (e) shows the result of a simulation which initially has two base stations deployed at the northwestern and southeastern corners of the observation area. Then, at the 50th tick, the base station at the southeastern corner is deactivated. At the 51st tick, the success rate drops sharply to about 20% from around 50% at the 50th tick, because more than half of the agents still try to move to the base station at the southeastern corner. However, the success rate improves successively and, at the 66th tick, reaches the same level as before the base station was deactivated. Cost and latency show the same trend. MONSOON allows a WSN to survive a base station failure by autonomously directing all agents to the remaining base station.
### 4.3 Hybrid Application
This section presents simulation results from a sensor network with two applications deployed simultaneously. Figure 11 shows the average objective values from collected DAs (i.e., for the data collection application) at each simulation tick. Figure 12 shows the average objective values from collected EAs (i.e., for the event detection application) at each simulation tick.
In Figure 11 (a), at the 50th simulation tick, an oil spill event occurs and EAs start detecting it and moving to the base station. The impact of the EAs on the DAs can be observed in the figure as a drop in success rate and an increase in cost and latency. However, within ten simulation ticks, MONSOON allows the DAs to adapt to the EAs and retain their performance. The simulation results show that MONSOON allows a WSN application to adapt to another application such that they can co-exist in the same sensor network.
Figure 11 (b), (c), (d) and (e) show scenarios similar to those in Figure 6 (b), (c), (d) and (e), respectively. The simulation results in the former set of figures show the same trends as in the latter set; therefore, MONSOON allows a WSN application to adapt to network changes (i.e., partial node failures or a base station failure) even when it has to work simultaneously with another application on the same network.
**Figure 11. Objective Values of DAs with EAs** (panels: (a) Static Network, (b) Node Addition, (c) Random Node Failure, (d) Selective Node Failure, (e) Base Station Failure; objective values plotted against Simulation Ticks)

**Figure 12. Objective Values of EAs with DAs** (panels (a) through (e) as in Figure 11)

Figure 12(a) portrays the same scenario as Figure 7(a). In Figure 12(a), the sensor network hosts two applications, data collection and event detection. However, the objective values of the event detection application (i.e., EAs) in Figure 12(a) improve faster than in Figure 7(a). For example, latency drops below 0.05 at around the 28th tick in Figure 12(a), whereas it takes until about the 38th tick in Figure 7(a) to reach the same level. Thanks to cross-mating (see Section 3.3), MONSOON allows the event detection application (i.e., EAs) to improve its objective values by using information from the other application. Figures 12(b), (c), (d) and (e) show similar results.
### 4.4 Power Consumption
Figure 13 shows the impact of MONSOON and BiSNET/e on power consumption and compares it with the power consumption of RUGGED [7, 8], a gradient-based routing protocol. Figure 13 compares the average power consumption of nodes running BiSNET/e and RUGGED in the simulation scenario of Figure 6(a). BiSNET/e consumes more power than RUGGED at first because agents use random behavior policies. However, MONSOON allows agents to evolve their behavior policies and, in turn, reduce power consumption. After the 17th tick, power consumption is roughly the same in BiSNET/e and RUGGED. Power consumption is nearly constant in RUGGED because it has no dynamic adaptation mechanisms.
**Figure 13. Average Power Consumption** (BiSNET/e vs. RUGGED, plotted against Simulation Ticks 1 through 19)
### 4.5 Memory Footprint
Table 1 shows the memory footprint of the BiSNET/e runtime on a MICA2 mote and compares it with the footprints of Blink (an example program in TinyOS that periodically turns an LED on and off), RUGGED, and Agilla, a mobile agent platform for WSNs [9]. The BiSNET/e runtime has a lightweight footprint thanks to the simplicity of the biologically inspired mechanisms in BiSNET/e. BiSNET/e can even run on smaller-scale nodes, for example TelosB, which has 48 KB of ROM.
**Table 1. Memory Footprint in a MICA2 Mote**
|Platform|RAM (KB)|ROM (KB)|
|---|---|---|
|BiSNET|2.5|30.0|
|Blink|0.04|1.6|
|RUGGED|0.84|20|
|Agilla|3.59|41.6|
## 5. Related Work
This work is an extension of the authors' prior work, BiSNET [10]. BiSNET allows agents to autonomously adapt to dynamic network conditions. However, it does not investigate evolutionary adaptation (i.e., MONSOON); agent behavior policies are manually configured through trial and error and fixed at runtime. Unlike BiSNET, BiSNET/e allows agents to dynamically adapt their behavior policies even to unanticipated network conditions.
MONSOON is designed as an extension of an existing multiobjective optimization algorithm, PESA-II [11], which in turn extends the NSGA-II algorithm [12]. MONSOON executes elite selection and genetic operations at physically different locations (i.e., at the MONSOON server and on individual nodes, respectively), whereas both PESA-II and NSGA-II execute the two processes at the same location. In MONSOON, an agent chooses the mate whose gene is closest to its own, in order to keep the agent's performance stable. In PESA-II, a mate is chosen at random from the elite archive; in NSGA-II, a mate is selected with a binary tournament. Moreover, unlike PESA-II and NSGA-II, MONSOON considers coevolution between different types of agents (DAs and EAs).
kOS is an operating system that applies biological mechanisms to implement adaptive WSN applications [13]. However, kOS has not yet implemented specific biologically inspired mechanisms, and [13] provides neither evaluation results nor implementation details of kOS. In contrast, BiSNET/e implements specific biologically inspired mechanisms such as pheromone emission, reproduction, genetic operations and migration. Moreover, this paper evaluates the impacts of those mechanisms on the adaptability of WSN applications (i.e., agents).
Agilla proposes a programming language for implementing mobile agents in WSNs and provides a runtime system (interpreter) to operate agents on TinyOS [9]. BiSNET/e, on the other hand, does not focus on investigating a new programming language for WSNs. BiSNET/e and Agilla provide a similar set of behaviors, such as migration and replication. However, Agilla does not address the research issue that BiSNET/e focuses on: evolutionary adaptation to conflicting objectives. In addition, BiSNET/e emphasizes design simplicity and a lightweight runtime; as shown in Table 1, BiSNET/e is much more lightweight than Agilla.
Several research efforts have applied genetic algorithms to WSNs, for example, to cluster-based routing [14–17], data processing [18], localization [19] and node placement [20,21]. Each of these works uses a fitness function that combines multiple objective values as a weighted sum and uses that function to rank agents/genes in elite selection. Application designers need to manually configure the weight values in such a fitness function through trial and error. In BiSNET/e, no manually configured parameters exist for elite selection, thanks to a domination ranking mechanism; as a result, BiSNET/e imposes a much lower configuration cost on application designers. Also, [14,15,17,19–21] do not consider dynamics in the network, but assume the network is static.
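To make the contrast with weighted-sum fitness concrete, the sketch below (hypothetical Python, not from the BiSNET/e code base; all objectives are assumed to be minimized) ranks candidates by Pareto domination, so no weight values need to be configured:

```python
# Hypothetical sketch of weight-free elite selection via Pareto domination.
# Objectives (e.g., latency, cost) are assumed to be minimized; nothing here
# is taken from the actual BiSNET/e implementation.

def dominates(a, b):
    """True if candidate a is at least as good as b in every objective
    and strictly better in at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def domination_rank(candidates):
    """Rank each candidate by how many others dominate it (0 = Pareto-optimal)."""
    return [sum(dominates(other, c) for other in candidates) for c in candidates]

def select_elites(candidates, k):
    """Keep the k least-dominated candidates -- no weights to configure."""
    ranks = domination_rank(candidates)
    order = sorted(range(len(candidates)), key=lambda i: ranks[i])
    return [candidates[i] for i in order[:k]]

# Example: the first two objective vectors are mutually non-dominated,
# while the third is dominated by both and is therefore dropped.
print(select_elites([(0.1, 1.5), (0.3, 0.9), (0.4, 1.6)], k=2))
```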
Evolutionary multiobjective optimization algorithms have been used for node placement [22–24] and routing [25,26]. In each of these works, the optimization process is performed on a central server, which can lead to scalability issues as the network size increases. In contrast, MONSOON is carefully designed to perform its adaptation process on both the MONSOON server and individual nodes.
## 6. Conclusion
This paper describes an evolutionary multiobjective adaptation framework, MONSOON, within a biologically inspired application architecture called BiSNET/e. MONSOON allows WSN applications to simultaneously satisfy conflicting operational objectives by adapting, through evolution, to the dynamics of their physical and network environments (e.g., sensor readings and node/link failures). Thanks to a set of simple biologically inspired mechanisms, the BiSNET/e runtime is lightweight.
## References
[1] K. Akkaya and M. Younis, “A survey of routing protocols
in wireless sensor networks,” Elsevier Ad Hoc Networks,
vol. 3, no. 3, pp. 325–349, 2005.
[2] J. Blumenthal, M. Handy, F. Golatowski, M. Haase, and
D. Timmermann, “Wireless sensor networks - new challenges in software engineering,” in Proc. of IEEE Emerging
_Technologies and Factory Automation, September 2003._
[3] I. F. Akyildiz, W. Su, Y. Sankarasubramaniam, and
E. Cayirci, “Wireless sensor networks: a survey,” Elsevier
_J. of Computer Networks, vol. 38, pp. 393–422, 2002._
[4] P. Rentala, R. Musunuri, S. Gandham, and U. Sexena, “Survey on sensor networks,” in Proc. of the 7th ACM Int’l Conf.
_on Mobile Computing and Networking, 2001._
[5] T. Seeley, The Wisdom of the Hive. Harvard University
Press, 2005.
[6] J. B. Free and I. H. Williams, “The role of the nasonov gland
pheromone in crop communication by honey bees,” Int’l J.
_of Behavioural Biology, Brill Publishing, vol. 41, 1972._
[7] J. Faruque, K. Psounis, and A. Helmy, “Analysis of gradientbased routing protocols in sensor networks,” in Proc. of
_IEEE/ACM Int’l Conf. on Distributed Computing in Sensor_
_Systems, 2005._
[8] J. Faruque and A. Helmy, “RUGGED: Routing on fingerprint gradients in sensor networks,” in Proc. of IEEE Int’l
_Conf. on Pervasive Services, 2004._
[9] C. L. Fok, G. C. Roman, and C. Lu, “Rapid development and
flexible deployment of adaptive wireless sensor network applications,” in Proc. of 25th IEEE Int’l Conf. on Distributed
_Computing Systems, June 2005._
[10] P. Boonma and J. Suzuki, “BiSNET: A biologicallyinspired middleware architecture for self-managing wireless sensor networks,” Elsevier J. of Computer Networks.
doi:10.1016/j.comnet.2007.06.006, 2007.
[11] D. W. Corne, N. R. Jerram, J. D. Knowles, and M. J. Oates,
“PESA-II: Region-based selection in evolutionary multiobjective optimization,” in Proc. of Genetic and Evolutionary
_Computation Conference, 2001._
[12] K. Deb, S. Agrawal, A. Pratab, and T. Meyarivan, “A fast
elitist non-dominated sorting genetic algorithm for multiobjective optimization: NSGA-II,” in Proc. of INRIA Par_allel Problem Solving from Nature, 2000._
[13] M. Britton, V. Shum, L. Sacks, and H. Haddadi, “A biologically inspired approach to designing wireless sensor networks,” in Proc. of The 2nd European Workshop on Wireless
_Sensor Networks, 2005._
[14] R. Khanna, H. Liu, and H. Chen, “Self-organisation of sensor networks using genetic algorithms,” Inderscience Int’l J.
_of Sensor Networks, vol. 1, no. 3, pp. 241–252, 2006._
[15] S. Hussain and A. W. Matin, “Hierarchical cluster-based
routing in wireless sensor networks,” in Proc. of the 5th Int’l
_Conf. on Info. Processing in Sensor Nets, 2006._
[16] S. Jin, M. Zhou, and A. S. Wu, “Sensor network optimization using a genetic algorithm,” in Proc. of Multiconf. on
_Systemics, Cybernetics and Informatics, 2003._
[17] K. P. Ferentinos and T. A. Tsiligiridis, “Adaptive design optimization of wireless sensor networks using genetic algorithms,” Elsevier J. of Computer Nets., vol. 51, no. 4, 2007.
[18] J. Hauser and C. Purdy, “Sensor data processing using genetic algorithms,” in Proc. of the 43rd IEEE Midwest Sym_posium on Circuits and Systems, 2000._
[19] V. Tam, K. Y. Cheng, and K. S. Lui, “Using micro-genetic
algorithms to improve localization in wireless sensor networks,” J. of Comm., Academy Publisher, vol. 1, no. 4, 2006.
[20] H. Y. Guo, L. Zhang, L. L. Zhang, and J. X. Zhou, “Optimal
placement of sensors for structural health monitoring using
improved genetic algorithms,” Smart Materials and Struc_tures, vol. 13, no. 3, pp. 528–534, 2004._
[21] J. Zhao, Y. Wen, R. Shang, and G. Wang, “Optimizing sensor node distribution with genetic algorithm in wireless sensor network,” in Proc. of Int’l Symp. on Neural Nets., 2004.
[22] D. B. Jourdan and O. L. de Weck, “Multi-objective genetic
algorithm for the automated planning of a wireless sensor
network to monitor a critical facility,” in Proc. of SPIE De_fense and Security Symposium, 2004._
[23] R. Rajagopalan, P. K. Varshney, C. K. Mohan, and K. G.
Mehrotra, “Sensor placement for energy efficient target detection in wireless sensor networks: A multi-objective optimization approach,” in Proc. of Annual Conf. on Information
_Sciences and Systems, 2005._
[24] A. M. Raich and T. R. Liszkai, “Multi-objective genetic algorithm methodology for optimizing sensor layouts to enhance structural damage identification,” in Proc. of the 4th
_Int’l Workshop on Structural Health Monitoring, 2003._
[25] R. Rajagopalan, C. Mohan, P. Varshney, and K. Mehrotra, “Multi-objective mobile agent routing in wireless sensor networks,” in Proc. of IEEE Congress on Evolutionary
_Computation, 2005._
[26] R. Rajagopalan, P. K. Varshney, K. G. Mehrotra, and C. K.
Mohan, “Fault tolerant mobile agent routing in sensor networks: A multi-objective optimization approach,” in Proc.
_of the 2nd IEEE Upstate New York Workshop on Communi-_
_cation and Networking, 2005._
-----
|
{
"disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references', 'abstract'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1109/HICSS.2008.323?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1109/HICSS.2008.323, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "GREEN",
"url": "http://www.cs.umb.edu/~jxs/pub/hicss07-bisnet-e.pdf"
}
| 2,008
|
[
"JournalArticle",
"Conference"
] | true
| 2008-01-07T00:00:00
|
[] | 14,728
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/010a74a07848f32bfc5c6155fa3977f24606075a
|
[
"Computer Science"
] | 0.917127
|
Second Price Auctions - A Case Study of Secure Distributed Computing
|
010a74a07848f32bfc5c6155fa3977f24606075a
|
IFIP International Conference on Distributed Applications and Interoperable Systems
|
[
{
"authorId": "1701915",
"name": "B. Decker"
},
{
"authorId": "1788020",
"name": "G. Neven"
},
{
"authorId": "1739936",
"name": "Frank Piessens"
},
{
"authorId": "2891948",
"name": "E. V. Hoeymissen"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"Distrib Appl Interoper Syst",
"DAIS",
"Distributed Applications and Interoperable Systems",
"IFIP Int Conf Distrib Appl Interoper Syst"
],
"alternate_urls": null,
"id": "246ecad8-9680-4191-a119-7529c01ee421",
"issn": null,
"name": "IFIP International Conference on Distributed Applications and Interoperable Systems",
"type": "conference",
"url": "https://link.springer.com/conference/dais"
}
| null |
# SECOND PRICE AUCTIONS
## A Case Study of Secure Distributed Computing
Bart De Decker[1], Gregory Neven[2], Frank Piessens[3], Erik Van Hoeymissen[1]
1K. U. Leuven, Dept. Computer Science, Celestijnenlaan 200A, B-3001 Leuven, Belgium
{Bart.DeDecker,Erik.VanHoeymissen}@cs.kuleuven.ac.be
2Research Assistant of the Fund for Scientific Research, Flanders, Belgium (F.W.O)
Gregory.Neven@cs.kuleuven.ac.be
_Postdoctoral Fellow of the Belgian National Fund for Scientific Research (F.W.O.)_
Frank.Piessens@cs.kuleuven.ac.be
**Abstract** Secure distributed computing addresses the problem of performing a computation
with a number of mutually distrustful participants, in such a way that each of
the participants has only limited access to the information needed for doing the
computation. Over the past two decades, a number of solutions requiring no
_trusted third party have been developed using cryptographic techniques. The_
disadvantage of these cryptographic solutions is the excessive communication
overhead they incur.
In this paper, we use one of the SDC protocols for one particular application:
second price auctions, in which the highest bidder acquires the item for sale at
the price of the second highest bidder. The protocol assures that only the name
of the highest bidder and the amount of the second highest bid are revealed. All
other information is kept secret (the amount of the highest bid, the name of the
second highest bidder, ...). Although second price auctions may not seem very
important, small variations on this theme are used by many public institutions:
e.g., a call for tenders, where contract is given to the lowest offer (or the second
lowest).
The case study serves two purposes: first, we show that SDC protocols can be used for these kinds of applications; second, we assess the network overhead and
how well these applications scale. To overcome the communication overhead,
we use mobile agents and semi-trusted hosts.
**Keywords:** Secure distributed computing, SDC, mobile agents, second price auction, agents,
semi-trusted execution platform
#### 1. INTRODUCTION
Secure distributed computing (SDC) addresses the problem of distributed
computing where some of the algorithms and data that are used in the computation must remain private. Usually, the problem is stated as follows, emphasizing privacy of data. Let f be a publicly known function taking n inputs, and suppose there are n parties (named P_1, ..., P_n), each holding one private input x_i. The n parties want to compute the value f(x_1, ..., x_n) without leaking any information about their private inputs (except, of course, the information about the x_i that is implicitly present in the function result) to the other parties. An example is voting: the function f is addition, and the private inputs represent yes or no votes. In case an algorithm is to be kept private, instead of just data, one can make f an interpreter for some (simple) programming language, and let one of the x_i be an encoding of a program.
In descriptions of solutions to the secure distributed computing problem,
the function f is usually encoded as a boolean circuit, and therefore secure
distributed computing is also often referred to as secure circuit evaluation.
It is easy to see that an efficient solution to the secure distributed computing
problem would be an enabling technology for a large number of interesting
distributed applications across the Internet. Some example applications are:
auctions ([8]), charging for the use of algorithms on the basis of a usage count
([9, 10]), querying a secret database ([6]), various kinds of weighted voting,
protecting mobile code integrity and privacy ([10, 5]), ...
Secure distributed computing is trivial in the presence of a globally trusted third party (TTP): all participants send their data and code to the TTP (over a secure channel), the TTP performs the computation and broadcasts the results. The main drawback of this approach is the large amount of trust that must be placed in the TTP.
Solutions without a TTP are also possible. Over the past two decades, a
fairly large variety of solutions to the problem has been proposed. An overview
is given by Franklin [3] and more recently by Cramer [2] and Neven [7]. These
solutions differ from each other in the cryptographic primitives that are used,
and in the class of computations that can be performed (some of the solutions
only allow for specific kinds of functions to be computed). The main drawback,
however, of these solutions is the heavy communication overhead that they incur.
In this paper, we investigate a case study: second price auctions. Here, the
highest bidder wins but has to pay the second highest bid. The final outcome
will only reveal the name of the winner and the amount of the second highest
bid. All other bids and even the name of the second highest bidder remain secret.
We have chosen this application because it illustrates the merits of SDC and is somewhat exemplary of many other useful applications. For instance, the authorities and many public institutions request quotations before awarding a job or purchase to the lowest or second lowest offer. The reader can easily
verify that determining the lowest (or second lowest) offer, without revealing
the other quotations, is only a small variation on our case study.
-----
In this case study, we try to be as specific as possible. We will show how SDC can be used in this application. Moreover, we will look at its performance. In particular, we examine the communication overhead and the scalability of the application in terms of the number of participants.
overhead seems prohibitively high, a reasonable remedy is proposed, using
mobile agents and semi-trusted sites. Indeed, mobile agents employing SDC
protocols can provide for a trade-off between communication overhead and
trust. The communication overhead is alleviated if the communicating parties
are brought close enough together. In our approach, every participant sends
its representative agent to a trusted execution site. The agent contains a copy
of the private data xi and is capable of running an SDC-protocol. Different
participants may send their agents to different sites, as long as these sites are
located closely to each other. Of course, a mobile agent needs to trust his
execution platform, but we will show that the trust requirements in this case are
much lower than for a classical TTP. Also, in contrast with protocols that use
unconditionally trusted TTPs, the trusted site is not involved directly. It simply offers a secure execution platform: i.e., it executes the mobile code correctly, does not
spy on it and does not leak information to other mobile agents. Moreover, the
trusted host does not have to know the protocol used between the agents. In
other words, the combination of mobile agent technology and secure distributed
computing protocols makes it possible to use a generic TTP that, by offering a
secure execution platform, can act as TTP for a wide variety of protocols in a
uniform way. A detailed discussion of the use of mobile agent technology for
advanced cryptographic protocols is given in [7].
The remainder of the paper is organized as follows. In Section 2, we review one of the SDC protocols that will be used by the application. A design of the application, second price auctions, is given in Section 3; in that section, we also examine the communication overhead and tackle the scalability issue. In Section 4, we introduce a modus operandi for the application. Finally, in Section 5, we summarize the main outcomes of this paper.
### 2. SECURE DISTRIBUTED COMPUTING USING
GROUP-ORIENTED CRYPTOGRAPHY
In [4], Franklin and Haber propose a protocol that evaluates a boolean circuit
on data encrypted with a homomorphic probabilistic encryption scheme for any
number of participants. It resembles the protocol for two parties, proposed by
Abadi and Feigenbaum ([1]).
To extend the idea of [1] to the multi-party case, an encryption scheme is
needed that allows anyone to encrypt but requires the cooperation of all participants to perform a decryption. In a joint encryption scheme, all participants know the public key, while each participant has his own private key. Using the public key, anyone can create a joint encryption of some message m such that the private key of each participant is needed to decrypt it. The plaintext m should be easily recoverable once every participant has contributed his share of the decryption.
In the joint encryption scheme used by Franklin and Haber, a bit b is encrypted as the pair [g^r mod N, (-1)^b * h^r mod N], where N = pq for two primes p and q such that p = q = 3 mod 4, g is a public base, r is a random value chosen by the encryptor, and h = g^(s_1 + ... + s_n) mod N. The public key is given by (N, g, g^(s_1), ..., g^(s_n)), where each s_i represents the private (secret) key of participant i.
This scheme has some additional properties that are used in the protocol:
_XOR-Homomorphic._ Anyone can compute a joint encryption of the XOR of two jointly encrypted bits. Indeed, if [a_1, c_1] is a joint encryption of b_1 and [a_2, c_2] is a joint encryption of b_2, then [a_1 * a_2 mod N, c_1 * c_2 mod N] is a joint encryption of b_1 XOR b_2.

_Blindable._ Given an encrypted bit, anyone can create a random ciphertext that decrypts to the same bit. Indeed, if [a, c] is a joint encryption of b and r' is a fresh random value, then [a * g^(r') mod N, c * h^(r') mod N] is a joint encryption of the same bit.

_Witnessable._ Any participant can withdraw from a joint encryption by providing the other participants with a single value. Indeed, given [a, c], participant i only needs to reveal the witness a^(s_i) mod N for the others to remove his key from the joint encryption.
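A toy implementation of the scheme and its three properties is sketched below (hypothetical Python, assuming the ciphertext form [g^r, (-1)^b * h^r] mod N reconstructed above; the tiny primes and parameter ranges are illustrative only and offer no real security):

```python
import random

# Toy joint encryption, assuming ciphertexts [g^r, (-1)^b * h^r] mod N with
# h = g^(s_1 + ... + s_n). Parameters are illustrative only: a real deployment
# would use a large N = p*q with p = q = 3 (mod 4).
p, q = 23, 31                  # both congruent to 3 mod 4
N = p * q
g = 5                          # assumed public base

secrets = [random.randrange(2, 50) for _ in range(3)]  # one s_i per participant
h = pow(g, sum(secrets), N)                            # joint public key

def encrypt(b):
    r = random.randrange(2, 50)
    return [pow(g, r, N), (-1) ** b * pow(h, r, N) % N]

def xor_cipher(c1, c2):
    # XOR-homomorphism: componentwise multiplication of ciphertexts.
    return [c1[0] * c2[0] % N, c1[1] * c2[1] % N]

def blind(c):
    # Re-randomize by multiplying with a fresh encryption of zero.
    return xor_cipher(c, encrypt(0))

def witness(c, s_i):
    # A participant's single-value decryption witness for ciphertext c.
    return pow(c[0], s_i, N)

def decrypt(c, witnesses):
    # Divide out all witnesses; the residue is +1 (bit 0) or -1 (bit 1) mod N.
    acc = c[1]
    for w in witnesses:
        acc = acc * pow(w, -1, N) % N
    return 0 if acc == 1 else 1

c = blind(xor_cipher(encrypt(1), encrypt(0)))        # encrypts 1 XOR 0 = 1
print(decrypt(c, [witness(c, s) for s in secrets]))  # -> 1
```

Running the last two lines decrypts the blinded XOR of an encrypted one and an encrypted zero back to 1, exercising all three properties at once.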
First of all, the participants must agree on values for N and g; each participant then chooses a secret key s_i and broadcasts g^(s_i) mod N to form the public key. To start the actual protocol, each participant broadcasts a joint encryption of his input bits. For an XOR-gate, everyone simply applies the XOR-homomorphism. The encrypted output of a NOT-gate can be found by applying the XOR-homomorphism with a default encryption of a one, e.g. [1, -1].

However, it is the AND-gate that causes some trouble. Suppose the encrypted inputs of the AND-gate are joint encryptions of two bits. To compute a joint encryption of their conjunction, the participants proceed as follows:
-----
1 Each participant chooses random bits and and broadcasts
and
2 Each participant repeatedly applies the XOR-homomorphism to calculate
and
Each participant broadcasts decryption witnesses and
3 Everyone can now decrypt and We have the following relation
between and
Each participant is able to compute a joint encryption of he knows
and (he chose them himself) and he received encryptions from the
other participants, so he can compute as follows:
If then so any default encryption for a zero will
do, e.g. [1,1].
If then so is a valid substitution for
and can be computed in an analogous way. He uses
the XOR-homomorphism to combine all these terms, blinds the result
and broadcasts this as
4 Each participant combines and again using the
XOR-homomorphism, to form
When all gates in the circuit have been evaluated, every participant has a
joint encryption of the output bits. Finally, all participants broadcast decryption
witnesses to reveal the output.
#### 3. SECOND PRICE AUCTIONS
In this section we consider second price auctions, where there is one item
for sale and there are n bidders. The item will only be sold if the bid of one
participant is strictly higher than the other bids. In all other cases there is no
-----
winner. The clearing price is the second highest bid. The requirements for this
type of auction are the following:
if there is no winner, nothing is revealed;
if there is a winner:
– the identity of the highest bidder is revealed, but the highest bid
remains secret;
– the 2[nd] highest bid is revealed, but the identity of the 2[nd] highest
bidder is kept secret;
– no other information (such as the other bids) is to be revealed.
For three participants X, Y and Z, the boolean circuit is shown in Figure 1. The inputs to the circuit are 32-bit bids[1]. The output is the identity of the winner, represented by two bits (00: no winner, 01: winner is X, 10: winner is Y, 11: winner is Z), and the clearing price. If there is no winner, the clearing price is set to zero. To determine the winner, the circuit uses three comparators and a number of AND and OR gates. To determine the clearing price, four multiplexers are used. Consider the situation where X makes the highest bid. In this case the winner bits are 01, and so the second input of the final multiplexer will be chosen. The input on this line is determined by the bids made by Y and Z: if Y's bid is at least Z's, then Y's bid will be selected as the clearing price; otherwise Z's bid will be the clearing price.
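In other words, the circuit computes the following functionality; a plain, non-secure reference in hypothetical Python (the function name is illustrative, not from the paper) serves as a specification of what the boolean circuit must output:

```python
def second_price_outcome(bids):
    """Reference semantics of the auction circuit: returns (winner_index, price),
    or (None, 0) when no single strictly highest bid exists."""
    top = max(bids)
    if bids.count(top) != 1:
        return None, 0                     # no winner -> clearing price is zero
    winner = bids.index(top)
    price = max(b for i, b in enumerate(bids) if i != winner)
    return winner, price

print(second_price_outcome([12, 40, 33]))  # -> (1, 33)
print(second_price_outcome([40, 40, 33]))  # -> (None, 0)
```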
Our goal is to estimate the communication overhead of an implementation of
secure distributed second price auctions with the protocol proposed by Franklin
and Haber. The auction is designed as a boolean circuit and the communication overhead for secure circuit evaluation is estimated. The communication
overhead is determined by the following steps in the protocol:
broadcast of the encrypted input bits of each participant;
evaluation of an AND gate:
– broadcast of the encrypted bits
– broadcast of the decryption witnesses
– broadcast of the blinded
broadcast of the output decryption witnesses.
The associated communication overhead is:
1 In reality, fewer bits (e.g. 8 or 16) would suffice.
-----
_Figure 1._ Boolean circuit implementation of second price auctions.
for the broadcast of the input bits;
for the evaluation of an AND gate;
for the decryption broadcast,

where |N| is the length of N in bits, which is the same as the number of bits needed to represent an element of Z*_N; in_i is the number of input bits of participant i; n is the number of participants; and out is the number of output
bits of the circuit. In order to estimate the communication overhead, we need to
be able to determine the number of AND gates in the boolean circuit (note that
each OR gate can be implemented with AND and NOT gates). Each comparator
can be built with 374 AND-gates[2]
2The boolean function can be expressed as
Hence if A and B are k-bit numbers, AND gates are needed. Both
functions, and are needed for each comparator.
-----
For n participants, the circuit changes as follows. The number of comparators needed is now n(n - 1)/2. The final multiplexer will need to distinguish between n + 1 different cases, i.e. n possible winners or no winner at all. The other n multiplexers are there to select the clearing price out of n - 1 bids when there is a winner. The number of AND gates needed for each multiplexer as a function of the number of inputs m is shown in Figure 2. Besides the comparators and the multiplexers, some additional AND and OR gates are needed. However, the number of these gates is negligible compared to the number of gates needed for the comparators and multiplexers. In summary, the total gate complexity of the circuit grows quadratically in the number of participants n.
_Figure 2._ Number of AND gates needed in a multiplexer
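To see how the circuit grows with n, the sketch below (hypothetical Python) combines the counts above: 374 AND gates per comparator, as in the footnote, and an assumed cost of roughly 32 * m AND gates for an m-input, 32-bit multiplexer, a rough linear reading of Figure 2 rather than an exact figure from the paper:

```python
# Hypothetical gate-count estimator for the n-party second-price auction circuit.
# Assumptions: 374 AND gates per 32-bit comparator (per the footnote above);
# an m-input, 32-bit multiplexer is approximated as 32 * m AND gates.

COMPARATOR_GATES = 374
BID_BITS = 32

def and_gates(n):
    comparators = n * (n - 1) // 2                 # one per pair of bidders
    final_mux = BID_BITS * (n + 1)                 # n winners or no winner
    price_muxes = n * BID_BITS * (n - 1)           # clearing price out of n-1 bids
    return comparators * COMPARATOR_GATES + final_mux + price_muxes

for n in (3, 5, 10, 20):
    print(f"n = {n:2d}: ~{and_gates(n):,} AND gates")
```

The comparator term, quadratic in n, quickly dominates the total.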
The results of estimating the communication overhead for this circuit as a function of the number of participants n are summarized in Table 1[3]. Franklin and Haber's protocol is linear in the number of broadcasts. However, it must be noted that this only holds on a network with broadcast or multicast functionality, such that the communication overhead of sending a message to all participants is the same as that of sending a message to a single participant. In the absence of such infrastructure, each broadcast must be simulated by n point-to-point messages, so the total message complexity grows by an additional factor of n.
3 We choose |N| to be 1024 bits.
-----
#### 4. MODUS OPERANDI
From the previous section, it should be clear that the design of the application
has pros and cons:
A major advantage is that our solution does not require a globally trusted
third party that plays the role of the arbiter.
The worst drawback is the immense communication overhead and the
fact that the solution does not scale very well.
There is a trade-off between trust and communication overhead in both options: the first uses a TTP, while the second uses SDC. In this section, we investigate this trade-off and present a practical remedy for the communication
overhead.
If a globally trusted third party is used, every participant has to send its private bid to the TTP, who will select the highest bidder, determine the second highest bid, and disseminate its decision to the participants (see Figure 3). Of course, before sending its private data to the TTP, every
_Figure 3._ 2nd Price Auction Using a TTP.
participant must first authenticate the TTP and then send its private bid through a secure channel. This can be accomplished via conventional cryptographic techniques. It is clear that this approach has a very low communication overhead: the data is sent only once to the TTP; later, every participant receives the result of the computation. However, every participant must unconditionally trust the TTP. It is not clear
-----
whether n mutually distrustful participants will easily agree on one single trustworthy site. If this site is compromised, all secrets may be compromised. Moreover, the site needs the appropriate software for this particular application; hence, for every new application, new software needs to be installed. Therefore, the participants not only need to trust the (security of the) site, but also the software for this application.
In our approach (see Figure 4), the trust requirements are minimal: every participant trusts its own execution site and expects the other participants to provide correct values for their own inputs. (Note that in this protocol a participant cannot cheat, because of the use of witnesses.) Although our approach is very attractive, it suffers from extensive communication overhead and does not scale well.
_Figure 4._ 2nd Price Auction Using SDC.
The communication overhead of SDC techniques can be remedied by introducing semi-trusted execution sites and mobile agents (see Figure 5). Every participant sends its representative agent to a trusted execution site. The agent contains a copy of the private data and is capable of running an SDC protocol. Different participants may send their agents to different sites; the only restriction is that the sites should be located close to each other, i.e., they should have high-bandwidth communication between them. Of course, every execution site needs a mechanism to safely download an agent, but this can easily be accomplished through conventional cryptographic techniques. The amount of long-distance communication is moderate: every participant sends its agent to a remote site, and receives the result from its
-----
_Figure 5._ 2nd Price Auctions Using Agents (SDC) and Semi-Trusted Sites.
agent. The agents use an SDC protocol, which unfortunately involves a high communication overhead. However, since the agents execute on sites that are near each other, the overhead of the SDC protocol is acceptable. No high-bandwidth communication between the participants is necessary, and there is no longer a need for one single trusted execution site. The agents that participate in the secure computation are protected against malicious behaviour of other (non-trusted) execution sites by the SDC protocols. That is sufficient to make this approach work. Moreover, in contrast with the approach that uses an unconditionally trusted third party, the trusted sites are not involved directly. They simply offer a secure execution platform: the trusted hosts do not have to know the protocol used between the agents. In other words, the combination of mobile agent technology and secure distributed computing protocols makes it possible to use generic trusted third parties that, by offering a secure execution platform, can act as trusted third parties for a wide variety of protocols in a uniform way. Finally, the question remains whether it is realistic to assume that participants can find execution sites that are close enough to each other. Given, however, that these execution sites can be generic, we believe that providing such execution sites could be a viable commercial activity. Various deployment strategies are possible. Several service providers, each administering a set of geographically dispersed "secure hosts", can propose to their subscribers an appropriate site for the secure computation. The site is chosen to be in the neighborhood of a secure site of the other service providers involved. Another
-----
approach is to have execution parks, offering high-bandwidth communication facilities, where companies can install their proprietary "secure sites". The park itself could be managed by a commercial or government agency.
#### 5. CONCLUSIONS
This paper demonstrates that second price auctions, and many other relevant applications, can be implemented using SDC protocols. In this way, the participants can make sure that all confidential information is kept secret. The major disadvantage, the overwhelming communication overhead, can be remedied through the use of mobile agents and semi-trusted sites. There is no need for one generally trusted site, nor does the program code have to be endorsed by all participants. The trusted execution sites are generic and can be small (which might make it possible to draft a formal security argument for these sites). The communication overhead of secure distributed computing protocols is no longer prohibitive for their use, since the execution sites are located close to each other.
#### References
[1] M. Abadi and J. Feigenbaum, “Secure circuit evaluation, a protocol based on hiding infor
mation from an oracle,” Journal of Cryptology, 2(1), p. 1–12, 1990
[2] R. Cramer. “An introduction to secure computation”, in LNCS 1561, pp 16–62, 1999.
[3] M. Franklin, “Complexity and security of distributed protocols,” Ph. D. thesis, Computer
Science Department of Columbia University, New York, 1993
[4] M. Franklin and S. Haber, “Joint encryption and message-efficient secure computation,”
Journal of Cryptology, 9(4), p. 217–232, Autumn 1996
[5] S. Loureiro and R. Molva, “Privacy for Mobile Code”, Proceedings of the workshop on
_Distributed Object Security, OOPSLA ’99, p. 37–42._
[6] G. Neven, F. Piessens, B. De Decker, “On the Practical Feasibility of Secure Distributed
Computing: a Case Study”, Information Security for Global Information Infrastructures (S.
Qing, J. Eloff, ed.), Kluwer Academic Publishers, 2000, pp. 361-370.
[7] G. Neven, E. Van Hoeymissen, B. De Decker, F. Piessens, “Enabling Secure Distributed
Computations : Semi-trusted Hosts and Mobile Agents”, to appear in Networking and
Information Systems Journal 3 (2001).
[8] N. Nisan, “Algorithms for selfish agents”, Proceedings of the 16th Annual Symposium on
_Theoretical Aspects of Computer Science, Trier, Germany, March 1999, p. 1–15._
[9] T. Sander and C. Tschudin, “On software protection via function hiding”, Proceedings of
_the second workshop on Information Hiding, Portland, Oregon, USA, April 1998._
[10] T. Sander and C. Tschudin, “Towards mobile cryptography,” Proceedings of the 1998 IEEE
_Symposium on Security and Privacy, Oakland, California, May 1998._
[11] T. Sander, A. Young, M. Yung, “Non-Interactive CryptoComputing for NC1,” preprint.
-----
|
{
"disclaimer": "Notice: The following paper fields have been elided by the publisher: {'abstract', 'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1007/0-306-47005-5_19?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1007/0-306-47005-5_19, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "BRONZE",
"url": "https://link.springer.com/content/pdf/10.1007%2F0-306-47005-5_19.pdf"
}
| 2,001
|
[
"JournalArticle"
] | true
| 2001-09-17T00:00:00
|
[] | 5,508
|
en
|
[
{
"category": "Political Science",
"source": "s2-fos-model"
},
{
"category": "Law",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/010a9df80b73d0c589eaffcb3a9f4cb05b07d225
|
[] | 0.907507
|
The Elections Clause Obligates Congress to Enact a Federal Plan to Secure U.S. Elections Against Foreign Cyberattacks
|
010a9df80b73d0c589eaffcb3a9f4cb05b07d225
|
[
{
"authorId": "3964911",
"name": "S. Malempati"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": null,
"id": null,
"issn": null,
"name": null,
"type": null,
"url": null
}
|
While foreign adversaries continue to launch cyberattacks aimed at disrupting elections in the United States, Congress has been reluctant to take action. After Russia interfered in the 2016 election, cybersecurity experts articulated clear measures that must be taken to secure U.S. election systems against foreign interference. Yet the federal government has failed to act. Congress’s reticence is based on a misguided notion that greater federal involvement in the conduct of elections unconstitutionally infringes on states’ rights. Both state election officials and certain congressional leaders operate under the assumption that federalism principles grant states primacy in conducting federal elections. This Comment dispels the myth that Congress must defer to states to regulate federal elections. The text of the Elections Clause in Article I, Section 4 of the U.S. Constitution confers to Congress final authority in determining the “Times, Places and Manner” of federal elections. Therefore, the system of administering federal elections is based on decentralization rather than federalism. The risk of foreign interference in U.S. elections was a precise reason the founders bestowed on Congress ultimate control over federal elections. States and municipalities lack the capacity to effectively combat foreign cyber invasion. This Comment makes the case that Congress has a responsibility to exercise its power under the Elections Clause to create a federal plan to secure voter registration databases and voting mechanisms against cyberattacks in order to protect the integrity of American democracy.
|
# Emory Law Journal
### Volume 70 Issue 2
2020
# The Elections Clause Obligates Congress to Enact a Federal Plan to Secure U.S. Elections Against Foreign Cyberattacks
### Suman Malempati
### Recommended Citation Recommended Citation
Suman Malempati, The Elections Clause Obligates Congress to Enact a Federal Plan to Secure U.S.
Elections Against Foreign Cyberattacks, 70 Emory L. J. 417 (2020).
[Available at: https://scholarlycommons.law.emory.edu/elj/vol70/iss2/4](https://scholarlycommons.law.emory.edu/elj/vol70/iss2/4?utm_source=scholarlycommons.law.emory.edu%2Felj%2Fvol70%2Fiss2%2F4&utm_medium=PDF&utm_campaign=PDFCoverPages)
-----
## THE ELECTIONS CLAUSE OBLIGATES CONGRESS TO ENACT A FEDERAL PLAN TO SECURE U.S. ELECTIONS AGAINST FOREIGN CYBERATTACKS
ABSTRACT
_While foreign adversaries continue to launch cyberattacks aimed at_
_disrupting elections in the United States, Congress has been reluctant to take_
_action. After Russia interfered in the 2016 election, cybersecurity experts_
_articulated clear measures that must be taken to secure U.S. election systems_
_against foreign interference. Yet the federal government has failed to act._
_Congress’s reticence is based on a misguided notion that greater federal_
_involvement in the conduct of elections unconstitutionally infringes on states’_
_rights. Both state election officials and certain congressional leaders operate_
_under the assumption that federalism principles grant states primacy in_
_conducting federal elections._
_This Comment dispels the myth that Congress must defer to states to regulate_
_federal elections. The text of the Elections Clause in Article I, Section 4 of the_
_U.S. Constitution confers to Congress final authority in determining the “Times,_
_Places and Manner” of federal elections. Therefore, the system of administering_
_federal elections is based on decentralization rather than federalism._
_The risk of foreign interference in U.S. elections was a precise reason the_
_founders bestowed on Congress ultimate control over federal elections. States_
_and municipalities lack the capacity to effectively combat foreign cyber_
_invasion. This Comment makes the case that Congress has a responsibility to_
_exercise its power under the Elections Clause to create a federal plan to secure_
_voter registration databases and voting mechanisms against cyberattacks in_
_order to protect the integrity of American democracy._
-----
INTRODUCTION ............................................................................................. 423
I. THE CURRENT CYBERSECURITY THREAT TO U.S. ELECTION
INFRASTRUCTURE .............................................................................. 427
_A. Russian Interference in the 2016 U.S. Election ........................ 428_
_B. Recommendations of Cybersecurity Experts to Strengthen_
_U.S. Election Infrastructure ...................................................... 432_
_C. States Responded Inadequately and Ineffectively to Russian_
_Cyberattacks ............................................................................. 437_
II. THE LANDSCAPE OF CONGRESSIONAL AUTHORITY OVER FEDERAL
ELECTIONS ......................................................................................... 440
_A. Congressional Authority Under the Reconstruction_
_Amendments and the Voting Rights Act .................................... 441_
_B. The Demise of the Voting Rights Act and Shifting State-_
_Federal Authority to Regulate Elections .................................. 446_
_C. The Elections Clause Grants Congress Broad Authority to_
_Regulate Federal Elections ...................................................... 448_
_1. Decentralization Versus Federalism .................................. 449_
_2. Congress Has Used Its Election Clause Authority to a_
_Limited Degree ................................................................... 450_
III. CONGRESS SHOULD ACT TO PROTECT U.S. ELECTION
INFRASTRUCTURE .............................................................................. 454
_A. Congress Has an Obligation Under the Elections Clause to_
_Protect U.S. Democracy ........................................................... 454_
_B. Congress Has a Duty to Secure U.S. Elections Against Foreign_
_Interference Because States Are Ill-Equipped and Reluctant to_
_Do So ........................................................................................ 457_
_C. Congress Must Enact a Federal Plan to Preserve the Right of_
_All Citizens to Vote ................................................................... 459_
IV. A PROPOSED FEDERAL PLAN TO SECURE U.S. ELECTIONS ............... 462
_A. Congress Should Establish Binding Federal Standards for_
_States to Register Voters, Maintain Secure Voter Databases,_
_and Check-in Voters at the Polls .............................................. 463_
_B. Congress Should Mandate Uniform Paper Ballots for All_
_Federal Elections ...................................................................... 465_
_C. Congress Should Require All States to Submit to Federal_
_Election Audits .......................................................................... 466_
CONCLUSION ................................................................................................. 467
-----
INTRODUCTION
Does federalism prevent Congress from taking action to secure U.S.
elections against foreign cyberattacks? Since its founding, the United States has
grappled with how to balance the authority of state governments against that of
the federal government in managing elections.[1] Article I, Section 4 of the U.S.
Constitution, often called the “Elections Clause,” grants each state the power to
designate the “Times, Places and Manner” of federal elections, but it also states
that “Congress _may at any time by Law make or alter such Regulations.”[2]_
Despite the seemingly sweeping power designated to Congress by the Elections
Clause, scholars and the Supreme Court have traditionally viewed the regulation
of elections and the voting process through the lens of state sovereignty.[3]
Currently, U.S. election infrastructure consists of a heterogeneous array of voter
registration procedures, registered voter databases, pollbooks, voting machines,
and vote counting mechanisms that vary from state to state.[4] States are also
inconsistent in the degree to which they delegate election management to
counties and municipalities.[5]
Two hundred and thirty years ago in the Federalist Papers, Alexander
Hamilton explained the rationale for embedding Congress’s power to regulate
elections into the Constitution.[6] Hamilton explained that leaving control of
federal elections solely in the hands of state governments could create an
existential risk to the nation.[7] With the Elections Clause, the drafters of the
Constitution “reserved to the national authority a right to interpose, whenever
_extraordinary circumstances might render that interposition necessary to its_
safety.”[8] Hamilton presciently recognized that the threat of foreign interference
1 Guy-Uriel E. Charles & Luis Fuentes-Rohwer, State’s Rights, Last Rites, and Voting Rights, 47 CONN.
L. REV. 481, 514 (2014) (“This struggle between the states and the national government with respect to the
apportionment of powers over elections has waxed and waned throughout American history.”).
2 U.S. CONST. art. I, § 4 (emphasis added).
3 _See, e.g., Shelby Cnty. v. Holder, 570 U.S. 529, 535 (2013) (stating the Voting Rights Act of 1965_
which granted federal oversight over the voting laws of certain states was “a drastic departure from the principle
of federalism”); Justin Weinstein-Tull, Election Law Federalism, 114 MICH. L. REV. 747, 753 (2016) (describing
“election law federalism” as consisting of “multiple sovereigns” at the federal, state, and local government
levels).
4 Weinstein-Tull, supra note 3, at 754 (listing the differences in voting hours, funding schemes, absentee
voting rules, and voter registration, or the “nuts and bolts of the election”).
5 _Id._
6 THE FEDERALIST NO. 59 (Alexander Hamilton).
7 _Id. (“With so effectual a weapon in [state legislators’] hands as the exclusive power of regulating_
elections for the national government, a combination of few such men, in a few of the most considerable States,
where the temptation will always be the strongest, might accomplish the destruction of the union.”).
8 _Id. (emphasis added)._
-----
in U.S. elections would be such an extraordinary circumstance.[9] He wrote in
_Federalist 59 that “a firm union of this country, under an efficient government,_
will probably be an increasing object of jealousy to more than one nation of
Europe; and that enterprises to subvert it will sometimes originate in the
intrigues of foreign powers.”[10]
In 2016, for the first time in the history of this nation, Hamilton’s prediction
of foreign interference came true when Russia attempted to interfere with and
influence the U.S. presidential election.[11] Along with a campaign of
misinformation, Russia directly attacked U.S. election systems.[12] Beginning as
early as 2014, the Russian government directed extensive activity against U.S.
election infrastructure at the state and local levels.[13] A 2019 report by the Senate
Intelligence Committee revealed that Russian operatives attempted to hack into
the election systems of each of the fifty states.[14] Russia attacked a point of
vulnerability in U.S. election infrastructure—states’ supposed primacy in
conducting federal elections.[15] According to the Senate Intelligence Report,
“[s]tate elections officials, who have primacy in running elections, were not
sufficiently warned or prepared to handle an attack from a hostile nation-state
actor.”[16] Hamilton’s interpretation of the Elections Clause suggests that Russian
aggression is a clear reason for Congress to exert its constitutional authority to
protect U.S. election infrastructure.[17]
Despite the obvious risk that our democracy may be undermined by foreign
interference, some members of Congress have expressed reluctance to take a
greater role in protecting federal elections.[18] State officials have also pushed
back and even rejected federal help in securing their state and local election
9 _Id._
10 _Id._
11 NAT’L ACAD. OF SCIS., ENG’G & MED., SECURING THE VOTE: PROTECTING AMERICAN DEMOCRACY 13
(2018) [hereinafter NAS REPORT].
12 _Id. at 14._
13 S. SELECT COMM. ON INTEL., S. REP. NO. 116-XX, REPORT ON RUSSIAN ACTIVE MEASURES
CAMPAIGNS AND INTERFERENCE IN THE 2016 U.S. ELECTION 3 (2019) (partially redacted) [hereinafter SENATE
INTELLIGENCE REPORT].
14 _Id. at 12._
15 _Id. at 4 (“Russian efforts exploited the seams between federal authorities and capabilities, and_
protections for the states.”).
16 _Id._
17 _See_ _infra Part III._
18 _See, e.g., Dean Dechiaro,_ _Election Officials Want Security Money, Flexible Standards, ROLL CALL_
(Aug. 15, 2019), https://www.rollcall.com/news/congress/election-officials-want-security-money-flexiblestandards (describing Senate Majority Leader Mitch McConnell’s reluctance to bring House-passed election
security bills up for votes in the Senate).
-----
systems out of concern for maintaining state sovereignty.[19] Although Congress
has previously overridden the right of states to conduct elections by passing the
Voting Rights Act of 1965 (VRA) under the Fifteenth Amendment, it has yet to
invoke its full Elections Clause powers.[20] With its holding in Shelby County v.
_Holder in 2013, the Supreme Court gutted the VRA, tilting the balance toward_
state autonomy in conducting elections.[21] Therefore, Congress can no longer
rely solely on its power to enforce the Reconstruction Amendments to supersede
state authority over elections.[22]
This Comment argues that the threat of foreign attacks against U.S. election
infrastructure requires Congress to exercise its power under the Elections Clause
to enact legislation establishing a uniform system for federal elections.[23] This
Comment takes the position that foreign cyber intrusion is the type of existential
threat for which the Elections Clause gives Congress the authority to act.
Because the Constitution grants Congress the ultimate authority to regulate
federal elections, the creation of a federal system for elections does not intrude
on state sovereignty.
Part I describes the current cybersecurity threat to U.S. election
infrastructure. A paucity of federal regulations poses significant risks in the face
of such twenty-first-century threats. This Part describes the scope of Russia’s
attacks on state and local election systems during the 2016 election and catalogs
the recommendations of cybersecurity experts in how best to secure election
infrastructure against future attacks. By detailing how state and local election
officials responded ineffectively to cyberattacks in 2016 and leading up to the
2018 election, this Comment predicts that without a comprehensive federal plan,
Russia and other foreign actors may successfully disrupt future federal elections.
19 _See_ _infra Part III.B._
20 Voting Rights Act of 1965, 52 U.S.C. § 10301; see South Carolina v. Katzenbach, 383 U.S. 301, 308
(1966) (upholding the invalidation of state laws restricting voter access to the polls as an appropriate means for
carrying out Congress’s constitutional responsibilities under the Fifteenth Amendment).
21 570 U.S. 529, 557 (2013).
22 The Thirteenth, Fourteenth, and Fifteenth Amendments to the U.S. Constitution are often called the
“Reconstruction Amendments.” The Thirteenth Amendment prohibited slavery. U.S. CONST. amend. XIII. The
Fourteenth Amendment established birthright citizenship and created due process and equal protection rights
against state action. U.S. CONST. amend. XIV. The Fifteenth Amendment guaranteed the right to vote regardless
of color or condition of previous servitude. U.S. CONST. amend. XV.
23 This Comment does not address one aspect of Russia’s interference in the 2016 election—a social
media campaign of disinformation aimed at influencing voters. For a summary of that issue and
recommendations for confronting Russia’s efforts, see Alex Stamos, Sergey Sanovich, Andrew Grotto & Allison
Berke, _Combatting State-Sponsored Disinformation Campaigns from State-aligned Actors,_ _in SECURING_
AMERICAN ELECTIONS: PRESCRIPTIONS FOR ENHANCING THE INTEGRITY AND INDEPENDENCE OF THE 2020 U.S.
PRESIDENTIAL ELECTION AND BEYOND 43 (Michael McFaul ed., 2019).
-----
Next, Part II explores the history of the Supreme Court’s interpretation of
constitutional provisions that confer differential authority to states and the
federal government to regulate federal elections. This Part describes how the
Court’s recognition of congressional authority to control federal elections has
waxed and waned over the past 150 years. The Court has previously granted
relatively broad powers to Congress to invalidate state legislation that infringed
on citizens’ right to vote under the enforcement provisions of the Fourteenth and
Fifteenth Amendments.[24] The expansion of congressional authority under the
Reconstruction Amendments was followed by a reversion to greater state
sovereignty over elections with the Court’s holding in Shelby County.[25] This Part
explains that Shelby County represents a shift in the Court’s view towards greater
state autonomy in conducting elections. Therefore, Congress must find another
source of authority to enact federal election legislation. Part II argues that such
authority can be found in the Elections Clause, which provides an
underrecognized source of power for Congress to regulate federal elections.
Despite the Supreme Court’s reluctance to infringe on states’ purported
sovereignty in conducting elections, the Elections Clause gives Congress the
power to supersede any state action regarding elections. The text and purpose of
the Elections Clause provide a system for U.S. elections based on
decentralization rather than federalism.
Part III contends that, for three main reasons, Congress has an obligation to
use its Election Clause authority to enact a federal election plan. First, foreign
attacks on U.S. election infrastructure fall within the category of “extraordinary
circumstances” as described by Hamilton, which provides the impetus for
Congress to regulate the “Times, Places and Manner” of federal elections.[26]
Cyber invasion by Russia and potentially other nation-states is a matter of
national security that requires a federal response. Second, state and local
officials lacked the capacity to manage the attacks during the 2016 U.S. election.
Cyberattacks will continue to intensify without a coordinated national response,
and states cannot be left to defend election infrastructure from such attacks.
Third, insecure voting systems in several states violate the rights of voters under
the Fourteenth Amendment by preventing voters from confidently knowing that
their votes will count.[27] Therefore, despite the Supreme Court’s holding in
_Shelby County, Congress also has a responsibility to step in where states have_
24 U.S. CONST. amend. XIV, § 5; U.S. CONST. amend. XV, § 2.
25 _Shelby County, 570 U.S. at 544; Charles & Fuentes-Rohwer, supra note 1, at 514–15, 518._
26 U.S. CONST. art. I, § 4; THE FEDERALIST NO. 59 (Alexander Hamilton).
27 _See Curling v. Kemp, 334 F. Supp. 3d 1303, 1328 (N.D. Ga. 2018) (“A wound or reasonably threatened_
wound to the integrity of a state’s election system carries grave consequences beyond the results in any specific
election, as it pierces citizens’ confidence in the electoral system and the value of voting.”).
-----
failed in securing their election systems pursuant to the Fourteenth
Amendment’s enforcement provision.
Lastly, Part IV provides a prescriptive solution and suggests legislation that
Congress may enact. Namely, Congress should enact a federal election plan that
provides for federal oversight of uniform procedures and standards that each
state must follow while maintaining the decentralized conduct of elections.[28]
The plan should include federally mandated standards for maintaining
registration databases and electronic pollbooks. The federal plan should also
require that all states use the same mechanism to generate voter-verified paper
ballots, which are read by federally certified optical scanners. Finally, a federal
election plan should mandate that all states submit to federal post-election audits.
I. THE CURRENT CYBERSECURITY THREAT TO U.S. ELECTION INFRASTRUCTURE
Securing U.S. elections and citizens’ confidence in the election process is of
paramount importance to maintain this nation’s republican form of government.
After the 2016 presidential election, the evidence is clear that foreign powers are
capable of interfering with U.S. election systems to, at a minimum, erode voter
confidence and, at worst, suppress voter turnout, manipulate vote tallies, and
sway election results.[29] Along with hacking into the Democratic National
Committee’s servers and launching a disinformation campaign on social media,
Russia directly targeted U.S. election infrastructure and continues to do so.[30]
Cybersecurity experts are fully aware of the vulnerability of U.S. election
systems and have developed clear, consensus recommendations on how best to
secure elections against cyberattacks.[31] The onus is now on the federal
government to create a national plan that will implement these
recommendations.
While decentralization provides some protection from a single crippling
attack, it also creates a barrier to generating a cohesive and uniform response to
foreign cyberattacks.[32] Although states and municipalities play a critical
administrative role in conducting elections, they are generally ill-prepared to
28 _See NAS_ REPORT, supra note 11, at 16 n.11 (noting decentralization of U.S. elections is one aspect of
the current U.S. election system that protects against cyberattacks).
29 Kim Zetter, The Crisis of Election Security, N.Y. TIMES MAG. (Sept. 26, 2018), https://www.nytimes.
com/2018/09/26/magazine/election-security-crisis-midterms.html.
30 _See generally SENATE INTELLIGENCE REPORT, supra note 13; NAS_ REPORT, supra note 11.
31 _See_ _infra Part I.B._
32 Lawrence Norden, How to Secure Elections for 2020 and Beyond, BRENNAN CTR. FOR JUST. (Oct. 23,
2019), https://www.brennancenter.org/our-work/research-reports/how-secure-elections-2020-and-beyond.
-----
confront a threat from a foreign nation-state.[33] States and municipalities have
demonstrated an inability to handle attacks from a foreign nation-state and have
still not taken adequate steps to secure election infrastructure at the local level.[34]
Therefore, a foreign threat to U.S. elections requires a uniform federal response,
and Congress must pass legislation to preserve the integrity of federal elections.
_A. Russian Interference in the 2016 U.S. Election_
The 2016 U.S. election presented challenges that states, municipalities, and
the nation had not previously faced. Russia made a concerted effort to
interfere with and disrupt many aspects of the election.[35] One line of attack was
to launch cyberattacks against electronic components of state election systems.[36]
Actors sponsored by the Russian government “obtained and maintained access
to multiple U.S. state or local electoral boards.”[37] Although the Senate
Intelligence Committee found no evidence that vote tallies were changed or that
voter registration records were altered, the committee’s insight is limited in this
regard because a full forensic analysis has not been done.[38] What is certain is
that Russian government-affiliated actors “conducted an unprecedented level of
activity” that targeted state election systems leading up to the 2016 election.[39]
Russian hacking into U.S. election infrastructure was a “watershed moment”
in the history of U.S. elections.[40] Protecting election infrastructure became a
national security issue when Russia targeted cyberattacks against U.S. voter
databases and election systems.[41] The Intelligence Community first detected
evidence of hacking into state election systems in the summer of 2016.[42] In July
33 _Id. (“[I]t is not reasonable to expect each of these state and local election officials to independently_
defend against hostile nation-state actors.”) (statement of Bob Brehm, co-executive director of the New York
State Board of Elections) (internal quotation marks omitted); see _infra Part III.B._
34 _See infra Part III.B._
35 SENATE INTELLIGENCE REPORT, supra note 13, at 3.
36 NAS REPORT, supra note 11, at 1.
37 _Id. (quoting OFF. OF THE DIR. OF NAT’L INTEL.,_ ASSESSING RUSSIAN ACTIVITIES AND INTENTIONS IN
RECENT US ELECTIONS iii (2017), https://www.dni.gov/files/documents/ICA_2017_01.pdf).
38 NAS REPORT, supra note 11, at 2 n.3. The NAS committee was not aware of any ongoing investigation
into the possibility that vote tallies were changed. Deficiencies in “intelligence gathering, information sharing,
and reporting” leave some uncertainty about the exact consequences of Russia’s attacks. _Id.; SENATE_
INTELLIGENCE REPORT, supra note 13, at 5; Zetter, supra note 29.
39 SENATE INTELLIGENCE REPORT, supra note 13, at 5.
40 NAS REPORT, supra note 11, at xii.
41 _Id. at 117._
42 SENATE INTELLIGENCE REPORT, _supra note 13, at 6. The U.S. Intelligence Community consists of_
sixteen agencies working under the coordination of the Office of the Director of National Intelligence. The
sixteen agencies are: Central Intelligence Agency, Defense Intelligence Agency, Federal Bureau of
Investigation, National Geospatial-Intelligence Agency, National Reconnaissance Office, National Security
-----
2016, Illinois noticed unusual activity on the state’s Board of Elections voter
registry website.[43] An FBI investigation discovered that the activity resulted in
data being exfiltrated from the voter registration database.[44] Ultimately, the FBI
determined that Russian actors successfully penetrated Illinois’s voter
registration database, viewed multiple database tables, and eventually accessed
up to 200,000 voter registration records.[45] Russian cyber actors were in a
position to delete or change voter data, although there is no evidence that they
did so.[46]
Further, evidence shows that Russian operatives targeted several small
jurisdictions around the country. In the summer of 2016, General Staff of the
Russian Army (GRU) officers sought “access to state and local election
computer networks by exploiting known software vulnerabilities” on state and
local government websites.[47] By mid-August 2016, federal cybersecurity
personnel became confident that Russian cyber actors were probing the election
infrastructures and voter registration databases of several states.[48] By late
September of that year, U.S. intelligence agencies identified twenty-one states
that were targeted by Russian government cyber actors.[49] Eventually,
intelligence officials concluded that Russia had attempted to invade the election
systems of all fifty states.[50]
In one line of attack, GRU officers sent spear-phishing emails to over 120
Florida county election officials.[51] The emails contained an attached Word
document carrying a virus that would permit the GRU to access an infected
computer.[52] The FBI believes, through this operation, the GRU was able to gain
access to the network of at least one county government in Florida.[53] Eventually,
Agency/Central Security Service, U.S. Department of Energy, U.S. Department of Homeland Security, U.S.
Department of State, U.S. Department of the Treasury, Drug Enforcement Administration, U.S. Air Force, U.S.
Army, U.S. Coast Guard, U.S. Marine Corps, and U.S. Navy. NAS REPORT, supra note 11, at 1 n.2. Russian
activity began as early as 2014. SENATE INTELLIGENCE REPORT, supra note 13, at 3.
43 SENATE INTELLIGENCE REPORT, supra note 13, at 6.
44 _Id._
45 _Id. at 22._
46 _Id._
47 Michael McFaul & Bronte Kass, _Understanding Putin’s Intentions and Actions in the 2016 U.S._
_Presidential Election, in SECURING AMERICAN ELECTIONS,_ _supra note_ 23, at 5, 14.
48 SENATE INTELLIGENCE REPORT, supra note 13, at 7.
49 _Id._
50 _Id. at 12._
51 Herbert Lin, Alex Stamos, Nate Persily & Andrew Grotto, Increasing the Security of U.S. Election
_Infrastructure, in SECURING AMERICAN ELECTIONS,_ _supra note_ 23, at 17, 18.
52 ROBERT S. MUELLER, III, U.S. DEP’T OF JUST., REPORT ON THE INVESTIGATION INTO RUSSIAN
INTERFERENCE IN THE 2016 PRESIDENTIAL ELECTION 51 (2019).
53 _Id._
-----
a Russian operative was indicted by Special Counsel Robert Mueller for probing
election websites of certain rural counties in Georgia, Florida, and Iowa in
October 2016.[54]
Russia also targeted electronic pollbook systems in several states.[55] In one
example of an attack on Election Day in 2016, registered voters in North
Carolina were denied the right to vote when the local electronic pollbook
systems could not locate their records.[56] Although hacking was never proven to
be the cause of the electronic pollbook discrepancy, a forensic analysis was not
conducted as county election officials in North Carolina declined the FBI’s offer
to investigate.[57]
The Intelligence Community understood the seriousness of the foreign
attacks.[58] In October 2016, the Department of Homeland Security (DHS) and
the Office of the Director of National Intelligence issued a joint statement on
election security, which revealed that the probing of state election systems had
originated from “servers operated by a Russian company.”[59] The statement also
warned state and local governments about the cybersecurity threats and asked
them to seek assistance from DHS.[60] In January 2017, then-DHS Secretary Jeh
Johnson issued a statement designating U.S. election infrastructure as a part of
the nation’s critical infrastructure, which made election systems an ongoing
“priority for cybersecurity assistance and protections” from DHS.[61] Members of
the Intelligence Community generally agreed that some of Russia’s motives for
the cyberattack were to sow discord and undermine voters’ confidence in the
54 Indictment at 26, U.S. v. Netyksho, No. 18-cr-00215 (D.D.C. Jul. 13, 2018).
55 Benjamin Wofford, The Hacking Threat to the Midterms Is Huge. And Technology Won’t Protect Us,
VOX (Oct. 25, 2018, 5:00 AM), https://www.vox.com/2018/10/25/18001684/2018-midterms-hacked-russiaelection-security-voting.
56 _Id. Electronic pollbooks are electronic voter check-in databases that are increasingly being used in_
place of paper voter rolls in precincts around the U.S. See _infra Part I.B._
57 Wofford, supra note 55.
58 SENATE INTELLIGENCE REPORT, supra note 13, at 7–8.
59 Press Release, DHS & ODNI Election Sec., Joint Statement on Election Security (Oct. 7, 2016),
https://www.dni.gov/index.php/newsroom/press-releases/press-releases-2016/item/1635-joint-dhs-and-odnielection-security (“We believe, based on the scope and sensitivity of these efforts, that only Russia’s senior-most
officials could have authorized these activities.”).
60 _Id._
61 Press Release, Jeh Johnson, DHS Sec’y, Statement on the Designation of Election Infrastructure as a
Critical Infrastructure Subsector (Jan. 6, 2017), https://www.dhs.gov/news/2017/01/06/statement-secretaryjohnson-designation-election-infrastructure-critical. Election infrastructure is comprised of “storage facilities,
polling places, and centralized vote tabulations locations used to support the election process, and information
and communications technology to include voter registration databases, voting machines, and other systems to
manage the election process and report and display results on behalf of state and local governments.” Id.
-----
U.S. election system.[62] However, intelligence officials believed that the general
public did not fully comprehend the threat and had a dim understanding of the
vastness of Russia’s attack during the 2016 election.[63]
The attacks did not subside after the 2016 election. Russia continued to
attack U.S. election infrastructure for the purpose of interfering with the 2018
midterm elections.[64] The Intelligence Community was clearly aware of the
ongoing threat from Russia.[65] As one U.S. cybersecurity expert noted before the
2018 midterm elections, “The Russians will attempt, with cyberattacks and with
information operations, to go after us again. They’re doing it right now.”[66] An
October 11, 2018, DHS Report stated, “We judge that numerous actors are
regularly targeting election infrastructure, likely for different purposes,
including to cause disruptive effects, steal sensitive data, and undermine
confidence in the election. We are aware of a growing volume of malicious
activity targeting election infrastructure in 2018[.]”[67] There is now abundant
evidence that Russia targeted the campaigns of at least a dozen House and Senate
candidates in the 2018 midterm elections.[68] The Intelligence Community also
believes that Russia continued its activity against state and local election
systems.[69] The extent to which Russia succeeded in its endeavors in 2018 is still
not known.[70]
Russia has demonstrated it has sufficient sophistication and knowledge of
U.S. voting patterns to understand that cyberattacks on local election systems
could cause significant disruption.[71] Although it may be difficult to change vote
tallies across the country in national elections, cyber actors can access databases
in particular districts, manipulate voter files, and cause enough voter suppression
to impact the outcome.[72] Therefore, an attack on a few key battleground states
62 SENATE INTELLIGENCE REPORT, supra note 13, at 35–36.
63 Wofford, supra note 55.
64 _Id._
65 _Id._
66 _Id. (quoting Eric Rosenbach, former Pentagon Chief of Staff)._
67 SENATE INTELLIGENCE REPORT, supra note 13, at 21.
68 Wofford, supra note 55.
69 _See SENATE_ INTELLIGENCE REPORT, _supra_ note 13, at 10 (stating that prior to the 2018 midterm
election, DHS determined “numerous actors are regularly targeting election infrastructure, likely for different
purposes, including to cause disruptive effects, steal sensitive data, and undermine confidence in the election”).
70 _See Lin et al., supra note 51, at 18–19 (“[T]here is no evidence that votes were actually changed and_
that no lasting damage was done to voter registration databases. Nonetheless, these incidents should be viewed
as precursors or dress rehearsals for similar attacks against the 2020 U.S. presidential election.”).
71 Eric Manpearl, _Securing U.S. Election Systems: Designating U.S. Election Systems as Critical_
_Infrastructure and Instituting Election Security Reforms, 24 B.U._ J. SCI. & TECH. L. 168, 175 (2018).
72 _Id. at 173–74; Zetter, supra note 29._
-----
during a presidential race could swing the election.[73] Because small
manipulations are easier to perpetrate without detection, the risk that
cyberattacks may affect the result of an election is “greatest when the electorate
is evenly divided and vote counts are close, as has been the case recently in a
number of Presidential elections.”[74] Attacks on specific competitive districts
during congressional elections could also substantially change the composition
of the federal legislature.[75] No proof exists that such attacks have occurred, but
they are certainly a risk for the future.[76] The consensus opinion among the
Intelligence Community is that the threat of foreign cyberattacks on U.S.
election systems persists.[77] And the risk is not just from Russia. Evidence shows
that China, Iran, North Korea, and ISIS have all conducted cyber intrusions
against U.S. election infrastructure.[78]
_B. Recommendations of Cybersecurity Experts to Strengthen U.S. Election Infrastructure_
Election cybersecurity experts generally agree that certain remedies would
create a more secure U.S. election system. Because of long-standing concerns
about insecure voting systems and the recent recognition of foreign cyberattacks,
the National Academies of Sciences, Engineering, and Medicine (“NAS”)
appointed an ad hoc committee to consider the future of voting in the United
States.[79] The NAS committee determined that, due to the events of the 2016
election and the ongoing threat of cyberattacks, the current U.S. system of voting
must evolve.[80] In its report, the NAS committee noted that because of the new
73 Manpearl, supra note 71, at 175; NAS REPORT, supra note 11, at 16 n.11; see Zetter, supra note 29
(describing how a few thousand missing votes and a 537-vote victory for George W. Bush in Florida determined
the result of the 2000 presidential election).
74 Lin et al., supra note 51, at 19.
75 Manpearl, supra note 71, at 175; NAS REPORT, supra note 11, at 16 n.11.
76 Zetter, supra note 29.
77 SENATE INTELLIGENCE REPORT, supra note 13, at 43 (quoting Russian Interference in the 2016 U.S.
_Elections: Open Hearing Before the S. Comm. on Intelligence, 115th Cong. 117 (2017) (statement of Alex_
Halderman, Professor of Computer Science and Engineering, University of Michigan)); see Jeremy Herb, Brian
Fung, Jennifer Hansler & Zachary Cohen, Russian Hackers Targeting State and Local Governments Have Stolen
_Data, US Officials Say, CNN, https://www.cnn.com/2020/10/22/politics/russian-hackers-election-data/index._
html (Oct. 23, 2020, 11:39 AM) (reporting that “Russian state-sponsored hackers” targeted state and local
government and stole voter registration information in the weeks leading up to the 2020 election).
78 William Roberts, Election Security: The Fight to Secure the Vote, 33 WASH. LAW. 12, 14 (2018).
79 The committee was charged with: (1) documenting the current state of technology, standards, and
resources for voting technologies; (2) examining the challenges arising out of the 2016 federal election;
(3) evaluating advances in current and upcoming technology that can improve voting; and, (4) providing
recommendations to make voting “easier, accessible, reliable, and verifiable.” NAS REPORT, supra note 11, at
3–4.
80 _Id. at 121._
-----
foreign threat, “[w]e must think strategically and creatively about the
administration of U.S. elections” and must “seriously reexamine . . . the role of
federal and state governments in securing our elections.”[81] While cybersecurity
experts are not in a position to opine on the constitutionality of federal authority
to regulate states in conducting federal elections, they have a strong, coherent,
consensus opinion on how best to secure election infrastructure against
cybersecurity threats. Experts recommend measures to secure two critical
aspects of elections: voter registration databases and vote-casting mechanisms.[82]
First, voter registration lists must be complete and accurate.[83] The Help
America Vote Act of 2002 (HAVA) required each state to create a statewide
voter database, rather than leave the maintenance of voter registration to counties
and municipalities.[84] The administration of voter registration databases requires
two main large-scale tasks.[85] Election administrators must (1) maintain the
correct status and relevant information of citizens who are properly registered to
vote; and (2) deliver precinct-specific lists of registered voters to each precinct.[86]
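To make these two administrative tasks concrete, the following sketch (a hypothetical record layout in Python, not any state’s actual schema) shows how a precinct-specific list might be derived from a statewide registration database:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class VoterRecord:
    # Hypothetical fields for illustration; real statewide schemas vary.
    voter_id: str
    name: str
    address: str
    precinct: str
    active: bool  # task (1): administrators must keep this status current

def precinct_pollbook(statewide: list[VoterRecord], precinct: str) -> list[VoterRecord]:
    """Task (2): derive the precinct-specific list of active registered voters."""
    return sorted(
        (v for v in statewide if v.precinct == precinct and v.active),
        key=lambda v: v.name,
    )

# Example: a two-voter statewide list yields a one-name list for precinct 7.
statewide = [
    VoterRecord("001", "Ada Smith", "1 Elm St", "7", True),
    VoterRecord("002", "Bo Jones", "2 Oak St", "12", True),
]
print([v.name for v in precinct_pollbook(statewide, "7")])  # ['Ada Smith']
```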
Because of the complexity and flexibility needed to maintain accurate, up-to-date lists of registered voters, lists are by necessity kept electronically.[87]
Electronic voter registration databases are easier than paper counterparts to
manage and maintain but are vulnerable to cyberattacks.[88] And in many states,
“databases containing voter registration lists are connected, directly or
indirectly, to the Internet or to state computer networks.”[89] This connectivity
creates a significant risk of cyber invasion and manipulation.[90] Manipulation of
voter registration data would cause chaos when voters arrive at the polls and find
their names have been removed from the rolls.[91] Removing or changing data for
a small number of voters in contentious congressional races or in swing states
81 _Id._
82 Lin et al., supra note 51, at 17.
83 NAS REPORT, supra note 11, at 59.
84 Help America Vote Act of 2002, Pub. L. No. 107-252, 116 Stat. 1666 (codified as amended at 42
U.S.C. §§ 15301–15545) (requiring “a single, uniform, official, centralized, interactive, computerized statewide
voter registration list defined, maintained, and administered at the state level”).
85 Lin et al., supra note 51, at 17.
86 _Id._
87 NAS REPORT, supra note 11, at 57–61.
88 _Id. at 61._
89 _Id. at 57._
90 _See_ _infra Part I.C. Russia breached online voter databases in Illinois and Arizona, obtaining personal_
information on tens of thousands of registered voters. SENATE INTELLIGENCE REPORT, supra note 13, at 22–24;
NAS REPORT, supra note 11, at 25.
91 SENATE INTELLIGENCE REPORT, supra note 13, at 2.
-----
for a presidential race could change the results of an election.[92] The NAS
recommends that election administrators routinely assess the integrity of voter
registration databases and put in place systems that detect evidence of probing
or tampering with the system.[93] The Senate Intelligence Committee recommends
updating software in state voter registration systems and maintaining paper
backup copies of registration databases.[94]
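One way to operationalize these recommendations, sketched here in Python on the assumption that a canonical snapshot of the registration data can be exported, is to record a cryptographic digest of the database when the offline backup is made and to compare the live database against that digest on a routine schedule:

```python
import hashlib

def snapshot_digest(rows: list[tuple[str, ...]]) -> str:
    """SHA-256 digest over a canonical (sorted) ordering of registration rows."""
    h = hashlib.sha256()
    for row in sorted(rows):  # canonical order: equal content, equal digest
        h.update("|".join(row).encode("utf-8") + b"\n")
    return h.hexdigest()

def tampering_suspected(live_rows: list[tuple[str, ...]], reference: str) -> bool:
    """True if the live database no longer matches the digest taken at backup time."""
    return snapshot_digest(live_rows) != reference

# Example: record a digest at backup time, then re-check later.
backup = [("001", "Ada Smith", "active"), ("002", "Bo Jones", "active")]
reference = snapshot_digest(backup)
live = [("001", "Ada Smith", "inactive"), ("002", "Bo Jones", "active")]
print(tampering_suspected(live, reference))  # True -> flag for forensic review
```

In practice the digest would be recomputed after each legitimate, logged update; it is an unexplained mismatch that signals possible tampering and warrants forensic review against the paper backup.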
Managing statewide voter registration databases requires states to deliver
precinct-specific lists, also known as pollbooks, to each precinct.[95] Pollbooks,
which can either be paper-based or electronic, are used to verify voter eligibility
and check in voters.[96] Over 80% of jurisdictions use preprinted paper pollbooks
to check in voters, but the use of electronic pollbooks (e-pollbooks) is
increasing.[97] Between 2012 and 2016, there was a 75% increase in the use of e-pollbooks, and now almost half of voters are checked in electronically.[98]
E-pollbooks, which may or may not be networked or connected to the
internet, provide some advantages over paper pollbooks. E-pollbooks generally
speed up the check-in process and can better track which voters have already
cast ballots.[99] When networked, e-pollbooks allow polling places to send and
receive real-time updates to voter registration data, which is critical for states
that use same-day registration.[100] However, e-pollbooks are vulnerable to
cyberattacks that could change voter data, disrupt check-in procedures, and
manipulate information on who has and has not voted.[101] Alternatively, a “denial
of service” attack could simply shut down operation of an e-pollbook, which
would altogether disrupt voting at a particular precinct.[102]
Currently no national security standards exist for e-pollbooks, and security
practices vary by state.[103] The NAS recommends that jurisdictions using e-pollbooks have paper backup lists available to be used in the event of any
92 Manpearl, supra note 71, at 175.
93 NAS REPORT, supra note 11, at 63.
94 SENATE INTELLIGENCE REPORT, supra note 13, at 57 (noting that one state’s voter registration system
is more than ten years old).
95 NAS REPORT, supra note 11, at 69.
96 _Id. at 69–70._
97 _Id. at 70._
98 _Id._
99 _Id._
100 _Id. at 71._
101 _Id._
102 _Id. at 72._
103 _Id. at 71._
-----
disruption or compromise to the electronic version.[104] The NAS also
recommends that Congress provide funds for the U.S. Election Assistance
Commission to develop national security standards for the use of e-pollbooks.[105]
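A hypothetical check-in routine illustrates the fallback the NAS recommends: the precinct consults the e-pollbook first, and if the electronic system is down or compromised, check-in continues against the paper backup list printed before the election:

```python
def check_in(voter_id: str, e_pollbook: dict[str, bool] | None,
             paper_backup: set[str]) -> str:
    """Return a check-in decision; an e_pollbook of None models an outage."""
    if e_pollbook is not None:  # normal path: electronic lookup
        if e_pollbook.get(voter_id) is False:  # registered, has not yet voted
            e_pollbook[voter_id] = True  # mark as having voted
            return "checked in (electronic)"
        return "not registered here or already voted"
    # fallback path: the paper list keeps the precinct operating
    if voter_id in paper_backup:
        return "checked in (paper backup); reconcile after polls close"
    return "not on paper list; offer a provisional ballot"

print(check_in("001", None, {"001", "002"}))  # outage -> paper fallback
```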
Second, cybersecurity experts generally agree that security risks are
inherent when states rely entirely on computers for voters to cast ballots.[106]
Currently, jurisdictions use several types of ballots, including paper, card,
and machine-only, and votes are cast by a variety of mechanisms.[107]
majority of jurisdictions, voters mark their choices on paper ballots, either by
hand or by using a ballot-marking device (BMD).[108] Paper ballots are either
hand-counted or machine-counted, most commonly by optical scanners.[109]
Several states use direct recording electronic (DRE) voting machines in at
least some jurisdictions.[110] DREs are free-standing computer units that record
selections voters make using a touchscreen.[111]
States purchased DREs with funding from HAVA, which was passed as a
response to the problems with lever machines and punch card ballots in the 2000
presidential election.[112] The advent of DREs introduced “new technical
challenges,” such as touchscreen miscalibration, which causes a voter’s intended
selection of one candidate to be misinterpreted as a vote for another candidate.[113]
Almost immediately, several security risks with DREs were identified, leading
some states to decertify and stop using the machines as early as 2007.[114]
Cybersecurity experts now recognize the full extent of the cybersecurity
risks with DREs. In its report on election security, the NAS noted that because
they are completely paperless, DREs create a risk that a cyberattack on the
104 _Id. at 72._
105 _Id._
106 _Id. at 78._
107 _Id. at 37, 39._
108 When voting with a BMD, a voter uses a touchscreen or keypad to mark his or her choices, after which
the BMD prints a paper copy of the selections. The paper printout is human-readable. The paper is then scanned
and tabulated by a separate device. With some BMD printouts, an optical scanner records and tallies the human-readable ballot. With other BMDs, the actual selections are recorded on a barcode, which is then read by the
tabulating machine. Id. at 39.
109 _Id. at 80._
110 Lawrence Norden & Andrea Cordova, Voting Machines at Risk: Where We Stand Today, BRENNAN
CTR. FOR JUST. (Mar. 5, 2019), https://www.brennancenter.org/our-work/research-reports/voting-machinesrisk-where-we-stand-today.
111 NAS REPORT, supra note 11, at 78.
112 Zetter, supra note 29.
113 NAS REPORT, supra note 11, at 78.
114 Zetter, supra note 29.
-----
machines will be undetectable.[115] A computer virus could steal votes from one
candidate and assign them to another or could stop a machine from accepting
votes altogether.[116] According to the Senate Intelligence Report, DRE voting
machines “can be programmed to show one result to the voter while recording a
different result in the tabulation.”[117] Therefore, the report called for states to
discontinue using DREs, which “are now out of date.”[118] A cybersecurity expert
actually demonstrated in a courtroom how a DRE machine could be infected
with malware that could alter vote counts on the machine.[119] The same expert
showed that malware could be introduced remotely and be spread from machine
to machine.[120]
The Senate Intelligence Report concluded that “[p]aper ballots and optical
scanners are the least vulnerable to cyberattack.”[121] Secure voting systems must
allow a voter to verify that the recorded ballot reflects his or her intent, which is
not possible with paperless DRE machines.[122] Therefore, the NAS recommends
that “[w]ell designed, voter-marked paper ballots” be the standard way for voters
to cast their votes.[123] The consensus opinion from national cybersecurity experts
is that an independent record of the voter’s physical ballot is essential as a
reliable audit tool.[124] An auditable record can be achieved by using hand-marked
paper ballots.[125] When voting machines are used to mark ballots, the machine
must provide a physical, human-readable record of the voter’s selections.[126]
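As a bare-bones illustration of why the paper record matters (a simplified tally comparison, not a statistically rigorous risk-limiting audit), officials can hand-read a random sample of paper ballots and check the result against the scanner’s record for those same ballots:

```python
import random
from collections import Counter

def audit_matches(paper_reads: list[str], machine_reads: list[str]) -> bool:
    """True if the hand count of sampled ballots equals the machine's count for
    the same ballots; a mismatch escalates to a larger sample or full recount."""
    return Counter(paper_reads) == Counter(machine_reads)

# Example: sample three of six ballot positions and compare both readings.
paper = ["A", "B", "A", "A", "B", "A"]    # hand-read paper ballots
machine = ["A", "B", "A", "A", "A", "A"]  # scanner's cast-vote records
sample = random.sample(range(len(paper)), k=3)
print(audit_matches([paper[i] for i in sample], [machine[i] for i in sample]))
```

Because sampling catches a discrepancy only when a mismatched ballot falls within the sample, real audit designs size the sample to the margin of victory; without the paper record, there is nothing independent to sample at all.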
National security experts also agree that the threat of foreign interference in
U.S. elections persists.[127] In his testimony before the Senate Intelligence
Committee, former Assistant Attorney General for National Security John
Carlin stated,
I’m very concerned about . . . our actual voting apparatus, and the
attendant structures around it . . . . We’ve literally seen it already, so
115 NAS REPORT, supra note 11, at 78.
116 SENATE INTELLIGENCE REPORT, supra note 13, at 42.
117 _Id._
118 _Id._ at 59; _see also Zetter, supra note 29 (noting that as early as 2007, some states have decertified_
electronic voting machines after finding them to be susceptible to viruses and malicious software).
119 Curling v. Kemp, 334 F. Supp. 3d 1303, 1308 (N.D. Ga. 2018).
120 _Id. at 1309. Accordingly, a federal judge in Georgia ordered a permanent injunction against the use of_
DRE machines in the state after 2019. See infra Part III.C.
121 SENATE INTELLIGENCE REPORT, supra note 13, at 59.
122 NAS REPORT, supra note 11, at 79.
123 _Id._
124 _Id. at 79–80._
125 _Id. at 42._
126 _Id. at 78._
127 SENATE INTELLIGENCE REPORT, supra note 13, at 43.
-----
shame on us if we can’t fix it heading into the next election cycles.
And it’s the assessment of every key intel professional, which I share,
that Russia’s going to do it again because they think it was successful.
So we’re in a bit of a race against time heading up to the two-year
election. Some of the election machinery that’s in place should not
be.[128]
Consequently, “[g]iven Russian intentions to undermine the credibility of the
election process, states should take urgent steps to replace outdated and
vulnerable voting systems.”[129]
_C. States Responded Inadequately and Ineffectively to Russian Cyberattacks_
In the summer of 2016, after it became clear to the Intelligence Community
that foreign actors were attacking state election infrastructure, intelligence
officials began the process of reaching out to states to offer cybersecurity
support.[130] During a call with state election officials on August 15, 2016, DHS
Secretary Jeh Johnson offered to provide help to states by inspecting voting
systems for viruses and other signs of cyber invasion.[131] DHS proposed
conducting on-site risk and vulnerability assessments as well as remote “cyber
hygiene scans” on internet-connected election management systems such as
voter registration databases.[132] Several states rejected the offer of help.
According to Secretary Johnson, the general response from state officials was
“[t]his is our responsibility and there should not be a federal takeover of the
election system.”[133] Then-Georgia Secretary of State Brian Kemp cited concerns
about “federal overreach” and claimed that help from federal intelligence
agencies would “subvert the [C]onstitution to achieve the goal of federalizing
elections under the guise of security.”[134] Similarly, Louisiana Secretary of State
Tom Schedler chided Congress for overemphasizing the extent of the risk and
stated that election administration should be left to the states because “[t]hat’s
128 _Id. (quoting Interview by Senate Select Comm. on Intel. with John Carlin, Former Assistant Att’y Gen._
for Nat’l Sec. (Sept. 25, 2017)).
129 _Id. at 58._
130 _Id. at 46–47._
131 _Id. at 47–48; Aliya Sternstein, At Least One State Declines Offer for DHS Voting Security, NEXTGOV_
(Aug. 25, 2016), https://www.nextgov.com/cybersecurity/2016/08/some-swing-states-decline-dhs-votingsecurity-offer/131037/.
132 SENATE INTELLIGENCE REPORT, supra note 13, at 52.
133 _Id. at 47._
134 Sternstein, supra note 131.
-----
what the Constitution says.”[135] Republican legislators also blocked funds for
election security in Minnesota and Arizona.[136]
Even more concerning, many states failed to recognize the extent or
seriousness of the threat and chose not to heed warnings from the Intelligence
Community.[137] Several states also opposed the decision of Secretary Johnson to
designate U.S. election systems as critical infrastructure.[138] DHS initially
intended to make the designation in August 2016 but held off until January 2017
because of pushback from state election officials.[139] Again rejecting federal
support, the National Association of Secretaries of State (NASS) expressed
opposition to DHS’s critical infrastructure designation, mistakenly citing states’
primacy in regulating elections.[140] The NASS stated that DHS “has no authority
to interfere with elections, even in the name of national security.”[141] Secretary
Kemp declared that “[d]esignating voting systems or any other election system
as critical infrastructure would be a vast federal overreach.”[142]
Despite the dire warnings and offers to help from the Intelligence
Community, states did little to respond to the ongoing threat of cyberattacks on
election systems. Even after the breaches to databases in Illinois and Arizona
were known, states continued to struggle to respond to security risks.[143] States
have displayed widely varying degrees of concern about election security and
made uneven efforts to address those risks. For the most part, states relied on the same
insecure infrastructure to conduct elections in 2018 as they did in 2016, despite
the known risks.[144] But the attacks on local election systems did not subside
135 Aliya Sternstein, _9 States Accept DHS’s Election Security Support, NEXTGOV_ (Sept. 21, 2016),
https://www.nextgov.com/cybersecurity/2016/09/9-states-accept-dhss-election-security-support/131741/.
136 Gopal Ratnam, Democrats Target State Elections with Focus on Election Security, ROLL CALL (Aug.
22, 2019), https://www.rollcall.com/news/congress/democrats-target-state-elections-focus-election-security.
137 _See infra Part III.B._
138 Manpearl, supra note 71, at 186. The purpose of a critical infrastructure designation is to allow the
Federal Government to partner with and provide support to the identified sectors. The designation added U.S.
election systems to the other critical infrastructure sectors: chemical; commercial facilities; communications;
critical manufacturing; dams; defense industrial base; emergency services; energy; financial services; food and
agriculture; government facilities; health care and public health; information technology; nuclear reactors,
materials, and waste; transportation systems; and water and wastewater systems. Press Release, Off. of the Press
Sec’y, Presidential Policy Directive—Critical Infrastructure Security and Resilience (Feb. 12, 2013),
https://obamawhitehouse.archives.gov/the-press-office/2013/02/12/presidential-policy-directive-criticalinfrastructure-security-and-resil.
139 SENATE INTELLIGENCE REPORT, supra note 13, at 48–49.
140 Manpearl, supra note 71, at 187.
141 Nat’l Ass’n of Sec’ys of State, NASS Resolution Opposing the Designation of Elections as Critical
_Infrastructure, at 21–22 (Feb. 18, 2017)._
142 Sternstein, supra note 131.
143 SENATE INTELLIGENCE REPORT, supra note 13, at 39; Norden & Cordova, supra note 110.
144 Wofford, supra note 55.
-----
after the 2016 election, and states continue to be ill-equipped to handle the
attacks.[145]
Georgia, for example, exhibited a grossly inadequate response to the
cybersecurity challenges that came to light in the 2016 election. The Georgia
Secretary of State’s Office left its registration database completely open to
hackers with 6.5 million voter records exposed during a six-month period in
2016–17.[146] U.S. cybersecurity experts were able to access the database and even
plant files during that time.[147] Malicious actors could have manipulated the data,
including dropping voters from the database or changing their data.[148] But
Georgia election officials claimed they saw no evidence that any election related
data was compromised.[149] However, a forensic evaluation was not done initially
because Georgia officials wiped the server that housed the data after the breach
was discovered.[150] Evidence from an FBI image taken of the server before it was
wiped shows that there may have been signs of tampering.[151]
Georgia also knew of the substantial evidence that Russia was targeting
election systems and that its paperless, internet-connected voting system was
ripe for hacking.[152] Yet, it made no significant changes, and in the 2018 federal
election, voters cast ballots on the same outdated, insecure system used in
2016.[153] Georgia election officials were reluctant to acknowledge the full extent
of the vulnerability of Georgia’s electronic voting equipment even though
security flaws in DRE machines had been known for over a decade and Georgia
had not updated the software on its machines since 2005.[154] Therefore, Georgia
voters used the same hackable and non-auditable voting machines in the 2018
145 _Id._
146 NAS REPORT, supra note 11, at 58.
147 Frank Bajak, Georgia Election Server Wiped After Suit Filed, PBS NEWSHOUR (Oct. 26, 2017, 9:34
AM), https://www.pbs.org/newshour/politics/georgia-election-server-wiped-after-suit-filed.
148 NAS REPORT, supra note 11, at 57.
149 Frank Bajak, Georgia Election Server Showed Signs of Tampering, AP (Jan. 16, 2020), https://apnews.
com/39dad9d39a7533efe06e0774615a6d05.
150 Kim Zetter, Georgia Election Systems Could Have Been Hacked Before 2016 Vote, POLITICO (Jan. 16,
2020, 11:07 PM), https://www.politico.com/news/2020/01/16/georgia-election-systems-could-have-beenhacked-before-2016-vote-100334.
151 _Id._
152 _See Curling v. Kemp, 334 F. Supp. 3d 1303, 1327 (N.D. Ga. 2018) (“[Georgia] stood by for far too_
long, given the mounting tide of evidence of the inadequacy and security risks of Georgia’s DRE voting system
and software.”).
153 _See_ Curling v. Raffensperger, 397 F. Supp. 3d 1334, 1382–92 (N.D. Ga. 2019) (summarizing the
affidavits of 137 Georgia voters, 2 county pollworkers, and 15 pollwatchers, and concluding that the “same
pattern of problems with Georgia’s voting systems and registration databases has persisted across multiple
elections cycles”).
154 _Id. at 1339, 1348._
-----
midterm elections.[155] As a result, voters in Georgia experienced significant
difficulty voting in 2018.[156] Problems reported by voters included long lines due
to malfunctioning machines being taken out of service, machines selecting the
wrong candidates when voters marked their choices on touchscreens, and check-in problems with e-pollbooks, including incorrect polling places or incorrect
addresses listed for voters.[157] A federal court noted that Georgia state election
officials had “stood by for far too long” and “buried their heads in the sand”
rather than address the inadequacy and insecurity of Georgia’s voting system.[158]
Similarly, North Carolina refused an offer from the FBI to investigate
election irregularities in 2016.[159] A forensic analysis was never conducted after
registered voters could not be located in local e-pollbook systems.[160] Although
hacking was never proven as the cause of the e-pollbook discrepancy, it was
discovered that Russia targeted e-pollbook systems in several states, including
North Carolina.[161] Despite knowing that information, county election officials
in North Carolina declined the FBI’s offer to investigate.[162]
Given that some states and municipalities have demonstrated that they are
incapable of securing election infrastructure, and in some instances unwilling to
do so, the United States needs a national election infrastructure plan.
Such a plan should follow the recommendations of national cybersecurity
experts to provide uniformity and address vulnerabilities in many state and local
election systems.
II. THE LANDSCAPE OF CONGRESSIONAL AUTHORITY OVER FEDERAL ELECTIONS
Many state election officials, scholars, and federal legislators consider
primary authority over the conduct of federal elections to belong to the states.
For example, the first recommendation in the Senate Intelligence Report on
155 _Id. at 1392; see Adam Levin & Beau Friedlander, Georgia’s Shaky Voting System, N.Y._ TIMES (Nov.
13, 2018), https://www.nytimes.com/2018/11/13/opinion/voting-machines-georgia-security.html (describing
how Georgia, for its 2018 gubernatorial election, relied on the same voting system it used in 2016 despite the
cybersecurity vulnerabilities that had been identified).
156 Mark Niesse, Long Lines and Equipment Problems Plague Election Day in Georgia, AJC (Nov. 6,
2018), https://www.ajc.com/news/state—regional-govt—politics/long-lines-and-equipment-problems-plagueelection-day-georgia/l7NUidWbMetr5OFdGcb5ZM/.
157 _Curling, 397 F. Supp. 3d at 1383._
158 _Curling, 334 F. Supp. 3d at 1327._
159 Wofford, supra note 55.
160 _Id._
161 _Id._
162 _Id._
-----
Russian interference in the 2016 election is to “reinforce states’ primacy in
running elections.”[163] The Supreme Court’s view on whether the federal
government or states have the ultimate right to prescribe the manner in which
federal elections are conducted has been unclear. The pendulum of the Court’s
interpretation of the differential authority between Congress and the states over
federal elections has swung back and forth for two centuries. From the
antebellum era to the Reconstruction Amendments to the VRA to the Court’s
decision in Shelby County, the Court has expanded and contracted congressional
authority relative to state sovereignty. But even this pendulum swing has
remained in a somewhat narrow range because Congress has never attempted to
exercise the full breadth of its authority under the Elections Clause.
The vast majority of congressional action to regulate elections since the Civil
War has been pursuant to the Reconstruction Amendments rather than the
Elections Clause.[164] Even when congressional authority was at its peak under
the VRA, Congress approached election legislation from a deferential
framework. Congress only passed the VRA after the Civil Rights Movement’s
expansive and concerted fight for voting rights in the South brought national
attention and shifted public opinion on this issue.[165] The Supreme Court upheld
this action by Congress under the Enforcement Clause of the Fifteenth
Amendment because of the long-standing and pernicious evil of racial
discrimination in voting.[166] But Congress has yet to exercise and the Court has
yet to uphold the full extent of Congress’s power to enact federal election
legislation under the Elections Clause, which extends beyond antidiscrimination.
_A. Congressional Authority Under the Reconstruction Amendments and the Voting Rights Act_
The end of the Civil War and the Reconstruction era brought a new paradigm
to the balance of federal authority versus state autonomy. The Fourteenth
Amendment provided an avenue for Congress to ensure that each state did not
abridge or deny certain rights to its own citizens.[167] The Fifteenth Amendment
163 SENATE INTELLIGENCE REPORT, supra note 13, at 54.
164 Franita Tolson, The Spectrum of Congressional Authority Over Elections, 99 B.U. L. REV. 317, 341
(2019).
165 CAROL ANDERSON, ONE PERSON, NO VOTE: HOW VOTER SUPPRESSION IS DESTROYING OUR
DEMOCRACY 21–22 (2018); see South Carolina v. Katzenbach, 383 U.S. 301, 315 (1966) (“The burden is too
heavy—the wrong to citizens is too serious—the damage to our national conscience too great not to adopt more
effective measures than exist today.”).
166 _Id. at 303–04._
167 U.S. CONST. amend. XIV.
-----
prohibited states from denying the right to vote “on account of race, color, or
previous condition of servitude.”[168] Despite the Fifteenth Amendment
guarantee, many former Confederate states still prevented African American
citizens from exercising their new constitutional right to vote.[169] But embedded
in the Reconstruction Amendments were enforcement provisions that
established a role for Congress to protect the rights of all citizens against state
action.[170] The constitutional enfranchisement of African American voters
created a new framework for Congress to play a greater role in elections in order
to protect the right to vote.
While Congress had the power to enforce the Reconstruction Amendments
to prevent states from infringing on their citizens’ right to vote, the
Reconstruction-era framework preserved a concept of federalism and state
sovereignty over the conduct of elections.[171] Congress attempted to exert broad
authority to regulate elections through the Enforcement Acts of 1870 and 1871,
which instituted a system of federal oversight for congressional elections.[172]
However, despite Congress’s greater power to protect voters under the
Reconstruction Amendments, the Supreme Court did not allow Congress full
license to regulate elections. In _United States v. Reese_, the Court struck down
provisions of the Enforcement Act of 1870 because they exceeded the scope of
Congress’s mandate under the Fifteenth Amendment.[173] The Court held that
section 4 of the statute was invalid because it created criminal penalties for state
officials who denied citizens the right to vote.[174] According to the Court, the
Fifteenth Amendment did not confer upon Congress expansive power to regulate
elections and protect voters, but simply prevented states from discriminating
based on race.[175]
Similarly, the Court restrained Congress from using the Enforcement Act of
1870 to assert broad authority over states pursuant to the Fourteenth Amendment
in _United States v. Cruikshank_.[176] In that case, election inspectors in Louisiana
were criminally charged with conspiring to prevent two African American
168 U.S. CONST. amend. XV, § 1.
169 ANDERSON, supra note 165, at 2.
170 U.S. CONST. amend. XIII, § 2; U.S. CONST. amend. XIV, § 5; U.S. CONST. amend. XV, § 2.
171 Tolson, supra note 164, at 354.
172 Enforcement Act of 1870, ch. 114, 16 Stat. 140; Enforcement Act of 1871, ch. 99, 16 Stat. 433; Tolson,
_supra note 164,_ at 358.
173 United States v. Reese, 92 U.S. 214, 220 (1875).
174 _Id. at 217–18, 220._
175 _Id. at 217._
176 United States v. Cruikshank, 92 U.S. 542, 555 (1875).
-----
citizens from exercising their right to vote.[177] The Court dismissed the
indictments, holding that the Louisiana officials did not intentionally
discriminate based on race.[178] Importantly, the Court noted that the federal
government had authority to prohibit discrimination under the Fourteenth
Amendment, but the right to vote itself came from the states.[179] The Court,
however, did not address Congress’s power to regulate elections and ensure the
right to vote under the Elections Clause.
The post-Reconstruction era, beginning with the federal government’s
withdrawal of military troops in 1877, allowed Southern states to construct
significant structural barriers to African American suffrage.[180] Discriminatory
devices to prevent African Americans from voting were enacted into state laws
and even embedded into the constitutions of several former Confederate
states.[181] In addition to literacy tests, poll taxes, and good-morals requirements,
the small percentage of African Americans who were able to cast ballots in the
South often had to overcome outright violence.[182]
During the Jim Crow era of renewed disenfranchisement, the Supreme Court
invalidated several state laws designed to prevent African Americans from
voting as violations of the Fourteenth and Fifteenth Amendments.[183] However,
case-by-case litigation was essentially a game of whack-a-mole. Each time
federal courts struck down a discriminatory state law that restricted the right of
its citizens to vote, states found insidious, creative alternative ways to
disenfranchise African American voters.[184] For example, after two Supreme
Court decisions invalidated all-white primary elections, states such as South
Carolina and Texas found ways to unofficially hold “pre-primaries” without
such laws being on their books.[185] The Civil Rights Movement forced Congress
177 _Id. at 544–45._
178 _Id. at 556–57._
179 _Id. at 554–56 (holding that the Fourteenth Amendment only confers on Congress the power to ensure_
that states do not deny the equality of rights of their citizens, but states still assume the primary duty to guarantee
these rights: “The power of the national government is limited to the enforcement of this guaranty.”).
180 ANDERSON, _supra note 165, at 2–3._
181 Virginia E. Hench, The Death of Voting Rights: The Legal Disenfranchisement of Minority Voters, 48
CASE W. RES. L. REV. 727, 733–43 (1998).
182 ANDERSON, _supra note 165, at_ 14–18.
183 _See,_ _e.g., Schnell v. Davis, 336 U.S. 933, 933 (1949) (striking down, as a violation of the Equal_
Protection Clause, a provision of the Alabama state constitution that required citizens to understand and explain
an article of the U.S. Constitution in order to exercise the right to vote).
184 ANDERSON, _supra note 165, at_ 13.
185 Smith v. Allwright, 321 U.S. 649, 656–57 (1944); ANDERSON, _supra note 165, at_ 13.
-----
to enact a comprehensive plan to “banish the blight of racial discrimination in
voting.”[186]
Nearly a century after the Fourteenth and Fifteenth Amendments were
ratified, Congress responded to the grassroots efforts of the Civil Rights
Movement by passing the Voting Rights Act of 1965.[187] The VRA prescribed
remedies for voting discrimination that it imposed on particular states that were
known to have constructed the greatest barriers for African American voters.[188]
By exercising its power under the Enforcement Clause of the Fifteenth
Amendment, Congress supplanted the right of states to enact particular
discriminatory voter qualification laws.[189] The VRA placed significant
constraints on states’ autonomy in determining voter qualifications.[190] Section 5
of the VRA required states or counties that had a history of discriminating
against African American voters, as defined in section 4(b), to submit to
preclearance by the U.S. Attorney General of any new law that impacted voter
qualifications or registration.[191] The Act also authorized federal examiners to
directly place and remove voters from the registration lists of states and localities
who fell under the VRA’s coverage formula.[192]
When the Supreme Court upheld the VRA as “an appropriate means for
carrying out Congress’ constitutional responsibility,” federal authority to
regulate elections under the Reconstruction Amendments was at its zenith.[193]
South Carolina challenged the VRA on the grounds it exceeded Congress’s
powers and infringed on a function that had traditionally been left to states.[194]
But the Court dismissed these concerns.[195] The Court held that “[a]s against the
reserved powers of the States, Congress may use any rational means to effectuate
the constitutional prohibition of racial discrimination in voting.”[196] The Court in
186 South Carolina v. Katzenbach, 383 U.S. 301, 308 (1966); ANDERSON, _supra note 165, at 21–22._
187 The Voting Rights Act was signed into law by President Lyndon Johnson on August 6, 1965. _See_
Voting Rights Act of 1965, Pub. L. No. 89-110, §§ 1–19, 79 Stat. 437 (codified as amended in scattered sections
of 52 U.S.C.); see Eric S. Lynch, Trusting the Federalism Process Under Unique Circumstances: United States
_Election Administration and Cybersecurity, 60 WM._ & MARY L. REV. 1979, 1991–92 (2019) (noting that
President Johnson introduced the voting rights bill to Congress three days after the “Bloody Sunday” Selma-to-Montgomery march).
188 §§ 1–7, 79 Stat. at 437–41.
189 U.S. CONST. amend. XV, § 2 (“The Congress shall have power to enforce this article by appropriate legislation.”); §§ 1–2, 79 Stat. at 437.
190 §§ 1–6, 79 Stat. at 437–40.
191 §§ 4(b)–5, 79 Stat. at 438–39.
192 § 7, 79 Stat. at 440–41.
193 South Carolina v. Katzenbach, 383 U.S. 301, 308 (1966).
194 _Id. at 323._
195 _Id._
196 _Id. at 324._
-----
_South Carolina v. Katzenbach stated that Congress’s authority relative to states’_
rights under the Enforcement Clause of the Fifteenth Amendment is just as broad
as Congress’s power under the Necessary and Proper Clause.[197] Therefore, to
prevent racial discrimination, the Supreme Court established that Congress had
paramount authority to supersede state autonomy in determining who was
eligible to cast a ballot.
According to the Court, “[t]he Voting Rights Act was designed by Congress
to banish the blight of racial discrimination in voting, which has infected the
electoral process.”[198] The Court emphasized the “unique circumstances” that
permitted Congress to exert such expansive powers to intrude on state sovereignty
under the Fifteenth Amendment.[199] The unique circumstances to which the
Court referred were the overt discriminatory actions of several former slave
states that violated the Fifteenth Amendment.[200] In _Katzenbach, the Court’s_
ratification of Congress’s power to enact the VRA was specific to the era as well
as the manner and degree to which the rights of African Americans were being infringed.[201]
Over the next almost fifty years, the Supreme Court continued to uphold the
VRA as a legitimate exercise of Congress’s power to enforce the Fifteenth
Amendment.[202] The Court recognized Congress’s authority to invalidate
provisions that did not have a stated discriminatory purpose but had a disparate
impact on the right of African Americans to vote. In _City of Rome v. United_
_States, the Court upheld the VRA’s ban on changes to a municipality’s voting_
provisions that would have had a discriminatory effect.[203] In that case, the city
of Rome, Georgia challenged the VRA on federalism grounds.[204] But the Court
made clear that the mandate embedded in the enforcement provisions of the
Reconstruction Amendments trumped federalism concerns.[205] The Court stated
that “principles of federalism that might otherwise be an obstacle to
197 _Id. at 325–26;_ _see Ex parte Virginia, 100 U.S. 339, 345–46 (1879) (“Whatever Legislation is_
appropriate, that is, adapted to . . . secure to all persons the enjoyment of perfect equality of civil rights and the
equal protection of the laws against State denial or invasion, if not prohibited, is brought within the domain of
_congressional power.”) (emphasis added)._
198 _Katzenbach, 383 U.S. at 308._
199 _Id. at 335 (“Under the compulsion of these unique circumstances, Congress responded in a permissibly_
decisive manner.”).
200 _Id._
201 _Id. at 326–31._
202 _See, e.g., Lopez v. Monterey Cnty., 525 U.S. 266, 287 (1999); City of Rome v. United States, 446 U.S._
156, 173 (1980).
203 _City of Rome, 446 U.S. at 173._
204 _Id. at 178._
205 _Id. at 179._
-----
congressional authority are necessarily overridden by the power to enforce the
Civil War Amendments ‘by appropriate legislation.’”[206] The Court held that
Congress has the power to impose voting regulations on states and their political
subdivisions because the “[Reconstruction] Amendments were specifically
designed as an expansion of federal power and an intrusion on state
sovereignty.”[207]
The Supreme Court took its view of federal power over state regulations
under the Fifteenth Amendment one step further in Lopez v. Monterey County.[208]
In that case, Monterey County was subject to the coverage formula under section
4(b) of the VRA, but the State of California as a whole was not.[209] California
passed a state law that determined the manner in which county judges were to
be elected.[210] Voters alleged that the law was invalid as applied to Monterey
County because any changes to existing law that applied to the county had to be
precleared by the federal government.[211] The Court determined that the
California law could not take effect in Monterey County until it received
preclearance pursuant to section 5 of the VRA.[212] Therefore, the Court
recognized that Congress’s authority to enforce the Reconstruction
Amendments includes the power to supersede the rights of states to regulate their
own counties. Accordingly, at the end of the twentieth century, Congress had broad
authority under the Fifteenth Amendment to regulate federal elections through
the VRA.
_B. The Demise of the Voting Rights Act and the Shifting State-Federal_
_Authority to Regulate Elections_
The twenty-first century brought a dramatic shift in the Supreme Court’s
deference to Congress to enforce the Fifteenth Amendment through the VRA,
which culminated in the Court’s gutting of the VRA in _Shelby County v._
_Holder.[213] Chief Justice John Roberts’s general ideology appears to limit_
congressional power in favor of state sovereignty through principles of
federalism.[214] Relying on federalism, the Roberts Court has limited Congress’s
206 _Id. (quoting Fitzpatrick v. Bitzer, 427 U.S. 445, 456 (1976))._
207 _Id. at 179–80._
208 Lopez v. Monterey Cnty., 525 U.S. 266, 287 (1999).
209 _Id. at 269._
210 _Id._
211 _Id. at 271, 274._
212 _Id. at 287._
213 Shelby Cnty. v. Holder, 570 U.S. 529, 556–57 (2013).
214 Joshua A. Douglas, (Mis)Trusting States to Run Elections, 92 WASH. U. L. REV. 553, 580 (2015); see
Adam B. Cox & Thomas J. Miles, Judging the Voting Rights Act, 108 COLUM. L. REV. 1, 3 (2008) (demonstrating that judicial ideology impacts judicial decisions regarding voting rights).
-----
ability to oversee elections and has elevated the role of states in regulating
various aspects of the voting process and election conduct.[215] In sharp contrast
to the Civil Rights era that led to the VRA, the Court in recent years has more
closely scrutinized Congressional regulation of voting and elections while
affording more deference to election laws enacted by states.[216]
In 2009, the Court foreshadowed its holding in _Shelby County_ by expressing
outright hostility to the VRA in _Northwest Austin Municipal Utility District_
_Number One v. Holder.[217] In that case, a Texas municipal district challenged the_
VRA’s preclearance requirement.[218] The Court avoided the question of the
VRA’s constitutionality by resolving the district’s claims on statutory
grounds.[219] In dicta, however, the Court raised concerns about whether the VRA
was constitutional.[220] In his majority opinion, Chief Justice Roberts noted that
section 5 of the VRA “authorizes federal intrusion into . . . state and local
policymaking” and “imposes substantial ‘federalism costs.’”[221] The Court also
stated that section 5 exceeded Congress’s mandate under the Fifteenth
Amendment by suspending all changes to election law in the jurisdictions falling
under its coverage formula.[222] In the concluding paragraphs of the opinion,
which foreshadowed _Shelby County, the Court claimed that the “exceptional_
conditions” that justified the VRA no longer exist as “we are now a very
different Nation.”[223]
Four years later, in Shelby County, the Supreme Court struck down section
4(b) of the VRA.[224] Section 4(b) had delineated the “coverage” formula that
determined which states and localities were subject to federal preclearance
before enacting new voting legislation.[225] In invalidating portions of the VRA,
the Court described its rationale as a combination of federalism issues, concerns
215 Douglas, supra note 214, at 583.
216 _Id. at 579; see Franita Tolson, Election Law “Federalism” and the Limits of the Anti-Discrimination_
_Framework, 59 WM._ & MARY L. REV. 2211, 2215 (2018) (arguing that recent case law has limited the extent of
Congress’s powers under the Fourteenth and Fifteenth Amendments due to federalism concerns and the Supreme
Court now views states as having broad authority to regulate federal elections).
217 Nw. Austin Mun. Util. Dist. No. 1 v. Holder, 557 U.S. 193, 203 (2009).
218 _Id. at 196._
219 _Id. at 205–06._
220 _Id. at 204._
221 _Id. at 202 (quoting Lopez v. Monterey Cnty., 525 U.S. 266, 282 (1999))._
222 _Id._
223 _Id. at 211._
224 Shelby Cnty. v. Holder, 570 U.S. 529, 556–57 (2013).
225 Voting Rights Act of 1965, Pub. L. No. 89-110, § 4(b), 79 Stat. 437, 438 (codified as amended in
scattered sections of 52 U.S.C.).
-----
about equal sovereignty among states, and changed conditions regarding racial
inequality in voting.[226] A concern for state sovereignty pervaded Chief Justice Roberts’s majority opinion.[227] The Court described the VRA’s requirement that
certain states obtain federal permission before enacting voting laws as “a drastic
departure from basic principles of federalism.”[228]
Scholars and interested parties soon discovered that the _Shelby County_
decision definitively altered the Court’s view of the balance between state and
federal government in regulating elections under the Reconstruction
Amendments.[229] Prior to _Shelby County, the Court had generally recognized_
Congress’s authority to supersede state laws regulating elections in order to
protect voters’ rights.[230] _Shelby County turned that assumption on its head._
Contrary to the prior understanding of the federal-state balance regarding
elections, the Court stated that the original intent of the framers was for states to
have primary authority to regulate federal elections.[231] The Court in _Shelby_
_County held that the VRA was only a legitimate exercise of Congress’s power_
when it was enacted because it was the product of a particular time in history.[232]
However, the Court’s emphasis in _Shelby County on federalism and state_
sovereignty in conducting elections was misguided. The Court viewed the
authority to regulate elections solely from an antidiscrimination perspective and,
ignoring its City of Rome precedent, focused on overt discriminatory intent.[233]
By only evaluating Congress’s power to protect the rights of minority voters
under the Fourteenth and Fifteenth Amendments, the Court discounted
Congress’s broad powers to contradict state laws and regulate elections under
the Elections Clause.
_C. The Elections Clause Grants Congress Broad Authority to Regulate_
_Federal Elections_
Congress’s authority to regulate federal elections under the Elections Clause
226 _Shelby County,_ 570 U.S. at 534–44, 547.
227 _Id. at 535 (stating that the VRA infringed on state sovereignty and section 4 violated “the principle that_
all states enjoy equal sovereignty”).
228 _Id._
229 _See Charles & Fuentes-Rohwer, supra note 1, at 488, 522 (presenting the case against an “optimistic”_
reading of the Shelby County holding for voting rights advocates).
230 _Id. at 500–01, 516._
231 _Id. at 517._
232 _Id. at 495 (noting that Chief Justice Roberts’ majority opinion stated that the VRA was only acceptable_
in 1966 because “exceptional conditions can justify legislative measures not otherwise appropriate” (quoting
South Carolina v. Katzenbach, 383 U.S. 301, 334 (1966))).
233 _See Shelby County, 570 U.S. at 551, 553, 556._
-----
is significantly broader than the Court has acknowledged since Shelby County.[234]
In _Federalist 59, Alexander Hamilton explained that the Elections Clause_
invested ultimate authority to regulate federal elections in “the national
legislature.”[235] Because of the clear mandate of the Elections Clause, the
Supreme Court was remiss in Shelby County to overvalue state sovereignty in
regard to the conduct of federal elections.[236] The Court mistakenly relied on
what it called a “prevailing view that federalism best explains” the U.S. election
system.[237]
_1. Decentralization Versus Federalism_
The Elections Clause precludes viewing the balance of state-versus-federal
authority to regulate elections through traditional notions of federalism.[238] The
text and history of the Elections Clause demonstrate that the Constitution
prescribed a system for federal elections based on decentralization rather than
federalism.[239] Though often conflated, “federalism” and “decentralization” are
distinct concepts.[240] Decentralization is a hierarchically organized “managerial
concept” in which the leader at the top has plenary power over the subordinate
units.[241] Federalism may be structurally similar to decentralization.[242] But as a
political concept, federalism implies that the subordinate units retain certain
rights and “areas of jurisdiction that cannot be invaded by the central
authority[.]”[243] In the United States, federalism denotes separate sovereignty and
a “system of parallel federal and state governance.”[244]
Regarding federal elections, the Elections Clause prescribes a system of
decentralization rather than federalism.[245] A traditional notion of federalism
does not bar Congress from enacting broad legislation to dictate the manner in
234 Tolson, supra note 216, at 2217.
235 THE FEDERALIST NO. 59 (Alexander Hamilton).
236 Tolson, supra note 216, at 2214.
237 _Id. at 2216._
238 _Id. at 2215–18; see Tolson, supra note 164, at 321–22._
239 U.S. CONST. art. I, § 4; see Franita Tolson, Reinventing Sovereignty?: Federalism as a Constraint on
_the Voting Rights Act, 65 VAND._ L. REV. 1195, 1247 (2012) (“The organizational structure of the [Elections]
Clause itself is not really federalist, but reflects a decentralized organizational structure that is often confused
with federalism.”); Weinstein-Tull, supra note 3, at 790 (noting that some scholars argue that federal election
statutes do not implicate federalism, but demonstrate a form of “managerial decentralization”).
240 Edward L. Rubin & Malcolm Feeley, Federalism: Some Notes on a National Neurosis, 41 UCLA L.
REV. 903, 910–11 (1994).
241 _Id._
242 _Id. at 911._
243 _Id._
244 Weinstein-Tull, supra note 3, at 775.
245 Tolson, supra note 239, at 1202, 1247.
-----
which federal elections will be conducted.[246] In contrast, states have no plenary
power to regulate federal elections.[247] States can administer federal elections
under a direct grant from the Elections Clause but subject to Congress’s ultimate
authority.[248] Pursuant to the Elections Clause, “the Constitution primarily treats
states as election administrators rather than sovereign entities.”[249] Therefore,
states may only regulate federal elections in a managerial sense.[250] Congress has
the final say in how authority is delegated and has generally left states “to fill
in . . . the blanks with respect to the nuts and bolts of federal elections[.]”[251]
_2. Congress Has Used Its Elections Clause Authority to a Limited Degree_
In addition to exercising federal authority over elections under the Fifteenth
Amendment, Congress has, at times, used its Elections Clause power.[252] Two
examples of statutes enacted under the Elections Clause that have been upheld
by courts are the National Voter Registration Act of 1993 (NVRA) and the Help
America Vote Act of 2002 (HAVA).[253]
Congress enacted the NVRA to increase voter participation in elections by
making voter registration easier for all eligible citizens.[254] The NVRA requires
states to provide opportunities to register to vote when citizens interact with
various state government offices, such as applying for driver’s licenses or
applying for aid through public assistance and disability services offices.[255] The
NVRA also authorizes the federal government to enforce its provisions through
civil actions against states.[256]
Federal courts have generally upheld the NVRA as a legitimate exercise of
Congress’s Elections Clause authority.[257] Despite giving no weight to the
246 Tolson, supra note 216, at 2216 (“Congress and the courts can disregard state sovereignty in enacting,
enforcing, and resolving the constitutionality of legislation passed pursuant to the Elections Clause.”).
247 Michael T. Morley, The Intratextual Independent “Legislature” and the Elections Clause, 109 NW. U.
L. REV. 847, 849 (2015).
248 _Id._
249 Harkless v. Brunner, 545 F.3d 445, 454 (6th Cir. 2008).
250 _See Tolson, supra note 239, at 1197._
251 Tolson, supra note 216, at 2218.
252 Franita Tolson, The Elections Clause and Underenforcement of Federal Law, 129 YALE L.J. F. 171,
173 (2019).
253 _See Help America Vote Act of 2002, Pub. L. No. 107-252, §§ 101–906, 116 Stat. 1666 (codified as_
amended at 52 U.S.C. §§ 20901–21145); see National Voter Registration Act of 1993, Pub. L. No. 103-31, §§ 1–
13, 107 Stat. 77 (codified as amended at 52 U.S.C. §§ 20501–20511).
254 § 2, 107 Stat. at 77.
255 §§ 4–5, 7, 107 Stat. at 78, 80–81.
256 § 11, 107 Stat. at 88.
257 _See Weinstein-Tull, supra note 3, at 762–63, 765._
-----
Elections Clause in Shelby County, the Supreme Court recognized Congress’s
broad power to regulate voter registration procedures under the Elections Clause
in Arizona v. Inter Tribal Council of Arizona, Inc.[258] In Inter Tribal Council, the
Court held that the NVRA preempted an Arizona state law.[259] The Court noted
that the Elections Clause grants Congress final policymaking authority over
many aspects of federal elections.[260] The NVRA required states to accept a
national mail registration form developed by the Federal Election
Commission.[261] The Court held that the NVRA mandate that states “accept and
use” a federal form to register voters superseded Arizona’s law that required
voters to present proof of citizenship to register to vote.[262]
In some cases, courts have noted that Congress’s right to disregard states’
autonomy under the Elections Clause is even broader than its powers under the
Commerce Clause.[263] For example, “[i]f Congress determines that the voting
requirements established by a state do not sufficiently protect the right to vote,
it may force the state to alter its regulations.”[264] In ACORN v. Miller, the Sixth
Circuit rejected Michigan’s challenge to the NVRA.[265] Michigan argued that
“Congress overstepped its power to regulate federal elections by compelling
state legislation to effectuate a federal program, directing states to legislate
toward a federal purpose, and forcing states to bear the financial burden of
enacting a federal scheme.”[266] However, the Sixth Circuit held that, unlike the
Commerce Clause, the Elections Clause “specifically grants Congress the
authority to force states to alter their regulations regarding federal elections.”[267]
Congress’s power under the Elections Clause extends as far as
commandeering state offices and state election officials to carry out federal
258 Arizona v. Inter Tribal Council of Arizona, Inc., 570 U.S. 1, 14–15 (2013).
259 _Id. at 14–15, 20._
260 _Id. at 8–9._
261 National Voter Registration Act of 1993, Pub. L. No. 103-31, § 6, 107 Stat. 77, 79–80 (codified as
amended at 52 U.S.C. §§ 20501–20511). When HAVA was enacted, this function of the Federal Election
Commission transferred to the Election Assistance Commission. See Help America Vote Act of 2002, Pub. L.
No. 107-252, § 303, 116 Stat. 1666, 1713–14 (codified as amended at 52 U.S.C. §§ 20901–21145).
262 _Inter Tribal Council, 570 U.S. at 15._
263 _See Harkless v. Brunner, 545 F.3d 445, 454 (6th Cir. 2008) (“[U]nlike the Commerce Clause . . . Article
I section 4 specifically grants Congress the authority to force states to alter their regulations regarding federal
elections.” (quoting ACORN v. Miller, 129 F.3d 833, 836 (6th Cir. 1997))). Congress’s power to prescribe the
details that state legislatures must adopt to hold federal elections stands in stark contrast to virtually all other
provisions of the Constitution. Id.
264 _ACORN, 129 F.3d at 837._
265 _Id. at 837–38._
266 _Id. at 836._
267 _Id._
-----
law.[268] For example, the NVRA imposes duties on state officials: each state must
designate a particular state election official to be responsible for carrying out
state obligations under the Act.[269] States have claimed that the NVRA violates
the anticommandeering doctrine because it forces them to enact new legislation
to administer a federal program.[270]
The anticommandeering doctrine prohibits the federal government from
compelling states to “implement, by legislation or executive action, federal
regulatory programs.”[271] However, as it relates to commandeering, courts have
distinguished the source of congressional power in upholding federal election
legislation.[272] The prohibition on commandeering under Congress’s Commerce
Clause authority does not extend to Congress’s authority under the Elections
Clause.[273] In contrast to the Commerce Clause, the Elections Clause allows
Congress to “conscript state agencies” to administer a federal election
scheme.[274] Therefore, under the Elections Clause, Congress may “enact election
legislation that forces a state to take action it might not otherwise take, without
violating the anticommandeering doctrine.”[275] Despite this mandate, Congress
has been reluctant to use the full extent of its Elections Clause authority because
of “federalism” concerns.[276]
Congress passed HAVA in response to the challenges encountered in the
2000 presidential election.[277] That election was plagued by unreliable voting
systems that varied by jurisdiction, culminating in the “hanging chad” debacle
in Florida.[278] HAVA provided federal funds for states to update their voting
machines while placing several requirements on states.[279] HAVA’s mandatory
provisions include allowing voters to review and verify votes before casting a
268 Tolson, supra note 216, at 2220 (noting that Congress’s primacy in regulating elections is embodied
by “its independent authority to make legislation, alter state law, and commandeer state officials to implement
federal law”).
269 National Voter Registration Act of 1993, Pub. L. No. 103-31, § 10, 107 Stat. 77, 87 (codified as
amended at 52 U.S.C. §§ 20501–20511).
270 Voting Rts. Coal. v. Wilson, 60 F.3d 1411, 1415–16 (9th Cir. 1995); see ACORN v. Edgar, 56 F.3d
791, 793 (7th Cir. 1995) (describing an argument by the state of Illinois that the NVRA would require it to
change its state laws that govern voter registration).
271 Printz v. United States, 521 U.S. 898, 925 (1997).
272 Weinstein-Tull, supra note 3, at 782.
273 _Id._
274 _Voting Rts. Coal., 60 F.3d at 1415._
275 Weinstein-Tull, supra note 3, at 782.
276 _See infra Part III.A._
277 Weinstein-Tull, supra note 3, at 757.
278 _Id._
279 Help America Vote Act of 2002, Pub. L. No. 107-252, §§ 102, 301, 303, 116 Stat. 1666, 1670–71,
1704–05, 1708 (codified as amended at 52 U.S.C. §§ 20901–21145).
-----
ballot, making voting accessible to people with disabilities, and centralizing
voter registration databases at the state level.[280] But HAVA did not “fully
nationalize election administration.”[281] Even after HAVA, states and
municipalities remain relatively autonomous in conducting elections.[282]
With HAVA, Congress used a carrot as much as a stick to coax states into
making voting more secure and accessible.[283] HAVA required states to update
voting machines and provided funds for the upgrades, but left states to determine
which systems to use.[284] HAVA requires that elections be auditable, but stops
short of requiring paper ballots.[285] In March 2018, the U.S. Election Assistance
Commission announced that it would provide $380 million in election security
grants to states, but it left states with discretion in how to use the funds.[286] Under
the Elections Clause, Congress has much more authority than it exercised with
HAVA. Congress can create a national plan for elections and force states to
comply with and administer the plan.[287]
Thus, unlike the antidiscrimination framework of the Fourteenth and
Fifteenth Amendments, Congress is not constrained by federalism when it exerts
its authority under the Elections Clause.[288] Courts can and should disregard
claims of state sovereignty in resolving the constitutionality of legislation passed
pursuant to the Elections Clause.[289] But Congress has exercised its Elections
Clause power far less often than it has used its authority to enforce the
Fourteenth and Fifteenth Amendments.[290] Because the Supreme Court’s
decision in _Shelby County diminished Congress’s power to regulate elections_
under the Reconstruction Amendments, Congress must rely on its Elections
Clause authority to enact legislation that protects U.S. election infrastructure.[291]
280 §§ 301, 303, 116 Stat. at 1704–05, 1708.
281 Weinstein-Tull, supra note 3, at 759.
282 _Id._
283 _Cf. JAMES T._ BENNETT, MANDATE MADNESS: HOW CONGRESS FORCES STATES AND LOCALITIES TO DO
ITS BIDDING 211, 214–15 (2014) (describing and criticizing the “carrot and stick” approach of HAVA, which
provided federal funds to help induce states to comply with the statute’s requirement that they update and
modernize voting equipment).
284 §§ 102–305, 116 Stat. at 1670–71, 1714.
285 § 301, 116 Stat. at 1704–06.
286 _U.S. Election Assistance Commission to Administer $380 Million in 2018 HAVA Election Security_
_Funds, U.S._ ELECTION ASSISTANCE COMM’N NEWS (Mar. 29, 2018), https://www.eac.gov/news/2018/03/29/us-election-assistance-commission-to-administer-380-million-in-2018-hava-election-security-funds.
287 _See infra Part III.A._
288 Tolson, supra note 252, at 173.
289 Tolson, supra note 216, at 2216.
290 Tolson, supra note 252, at 173.
291 Tolson, supra note 216, at 2215.
-----
While Congress has not previously exercised the full extent of its power under
the Elections Clause, it could do so to create a uniform federal election system.
III. CONGRESS SHOULD ACT TO PROTECT U.S. ELECTION INFRASTRUCTURE
Due to the threat of foreign interference in U.S. elections, Congress has both
the authority and an obligation to act. The notion that Congress cannot create a
federal plan for elections because such action would infringe on states’ rights
misinterprets the Constitution. The Elections Clause gives Congress a definitive
right to regulate federal elections.[292] The combination of multiple sources of
constitutional authority—the Elections Clause and the Reconstruction
Amendments—provides Congress with even greater power to act.[293] Congress
is also duty-bound to protect the integrity of our democracy and to ensure the
rights of all citizens to have their votes properly counted.[294] It has a
responsibility to take action to protect U.S. election infrastructure in the face of
cybersecurity threats because state and local election officials are incapable of
doing so.[295]
Therefore, to combat foreign interference, Congress must enact legislation
to improve the security of election systems throughout the country. Congress
should pass a federal plan for three main reasons. First, the structure and purpose
of the Elections Clause bestow upon Congress a duty to maintain the legitimacy
of the federal government.[296] In other words, Congress must ensure that the
result of federal elections reflects the will of voters. Second, states are ill-equipped and reluctant to take the cybersecurity measures necessary to protect
election infrastructure.[297] Third, the enforcement clauses of the Fourteenth and
Fifteenth Amendments obligate Congress to protect the right of all citizens to
vote.[298]
_A. Congress Has an Obligation Under the Elections Clause to Protect U.S._
_Democracy_
The integrity of elections is critical to maintaining democracy in the United
States. Almost 150 years ago, the Supreme Court analogized the power to
292 _See supra Part II.C._
293 Tolson, supra note 164; see _infra Part III.C._
294 _See_ United States v. Slone, 411 F.3d 643, 649 (6th Cir. 2005) (“Under the Elections Clause, Congress is
authorized to protect the integrity of federal elections.”).
295 _See infra Part III.B._
296 _See U.S. CONST. art. I, § 4, cl. 1; Tolson, supra note 216, at 2218._
297 _See infra Part III.B._
298 U.S. CONST. amend. XIV, § 5; U.S. CONST. amend. XV, § 2; see infra Part III.C.
-----
regulate federal elections to the right to defend the nation itself.[299] In Ex parte
_Yarbrough, the Court stated “[t]hat a government whose essential character is_
republican . . . has no power by appropriate laws to secure this election from the
influence of violence, of corruption, and of fraud, is a proposition so startling as
to arrest attention and demand the gravest consideration.”[300] Foreign
interference in U.S. elections is not a necessary, but a sufficient, condition for
Congress to exercise its authority under the Elections Clause. Congress has a
constitutional responsibility to ensure the integrity of the U.S. election process
and to protect the fundamental right of citizens to vote.
The overarching purpose of the Elections Clause “is to ensure the continued
existence and legitimacy of federal elections.”[301] Hamilton described the critical
point of the Elections Clause: “every government ought to contain in itself the
means of its own preservation.”[302] According to Hamilton, Congress must use
its authority to assume from states the responsibility of regulating the manner of
federal elections “whenever extraordinary circumstances might render that
interposition necessary to its safety.”[303] Foreign interference in U.S. elections is
one such extraordinary circumstance.[304] Therefore, for the safety of the nation
and the preservation of confidence in federal elections, Congress has an
obligation to invoke the Elections Clause to create a federal plan for election
administration.[305]
While Congress has occasionally exercised its broad powers to regulate
elections under the Elections Clause, it has been reluctant to take full action
against the threat of foreign interference. In response to Russia’s cyberattacks in
2016 and 2018, the Democratic-led House of Representatives attempted to take
small steps to improve the security of federal elections. In 2018, Congress
authorized $380 million under HAVA for states to bolster their election
security.[306] While several states used the HAVA funds to strengthen
cybersecurity and purchase new voting equipment, the amount of money is far
299 _Ex parte Yarbrough, 110 U.S. 651, 657–58 (1884)._
300 _Id. at 657._
301 Tolson, supra note 216, at 2218.
302 THE FEDERALIST NO. 59 (Alexander Hamilton).
303 _Id._
304 Lynch, supra note 187, at 2008–11.
305 Tolson, supra note 216, at 2218.
306 Dustin Volz, U.S. Spending Bill to Provide $380 Million for Election Cyber Security, REUTERS (Mar.
21, 2018, 1:30 PM), https://www.reuters.com/article/us-usa-fiscal-congress-cyber/u-s-spending-bill-to-provide-380-million-for-election-cyber-security-idUSKBN1GX2LC; Norden & Cordova, supra note 110.
-----
from sufficient.[307] Congress has otherwise been reluctant to pass legislation that
would be effective enough to prevent further cyberattacks.[308]
Although the House passed three election security bills in 2019,
predominantly along party-line votes, the bills have made no progress in the
Senate.[309] Congressional Republicans have downplayed the extent of foreign
interference in the 2016 and 2018 elections.[310] Objecting to the 2019 Securing
America’s Federal Elections (SAFE) Act, Representative Rodney Davis (R-Ill.)
stated that Congress should not force states to update voting technology because
“there is no evidence of voting machines being hacked in 2016, 2018[,] or
ever[.]”[311] Senate Majority Leader Mitch McConnell (R-Ky.), who has refused
to bring any of the House bills up for a vote in the Senate, has also minimized
the risk.[312] Senator McConnell even chided the media for fostering panic among
voters and for not giving more credit to the current administration for preventing
major security breaches in the 2018 election.[313]
However, in objecting to the SAFE Act, Congressional Republicans have
primarily argued that the bill’s provisions interfere with the authority of states
and localities to conduct elections.[314] Senator McConnell stated that while he
believes Russian meddling to be real, he does not believe that the federal
government should tell states how to run elections.[315]
The Republican sentiment, as expressed by Senator McConnell,
misinterprets the authority granted to Congress under the Constitution. Because
the Elections Clause gives Congress final policymaking authority over the times,
places, and manners of federal elections, it “allows Congress to legislate
independent of and without deference to state sovereignty.”[316] Therefore, the
307 Norden & Cordova, supra note 110.
308 _Id._
309 For the People Act of 2019, H.R. 1, 116th Cong.; Stopping Harmful Interference in Elections for a
Lasting Democracy (SHIELD) Act, H.R. 4617, 116th Cong.; Securing America’s Federal Elections (SAFE) Act,
H.R. 2722, 116th Cong.
310 Maggie Miller & Julie G. Brufke, House Passes Sweeping Democratic-Backed Election Security Bill,
HILL (Jun. 27, 2019, 5:00 PM), http://thehill.com/homenews/house/450737-house-passes-sweeping-democrat-backed-election-security-bill; Hailey Fuchs & Karoun Demirjian, _Divided House Passes Election Security_
_Legislation over Republican Objections, WASH._ POST (Jun. 27, 2019, 4:45 PM), https://www.washingtonpost.com/powerpost/divided-house-passes-election-security-legislation-over-republican-objections/2019/06/27/a071c10c-98f1-11e9-8d0a-5edd7e2025b1_story.html.
311 Miller & Brufke, supra note 310.
312 Fuchs & Demirjian, supra note 310.
313 _Id._
314 _Id._
315 DeChiaro, supra note 18.
316 Tolson, supra note 164, at 324.
-----
notion that Congress must cajole states to undertake security fixes to their
election systems and abide by federal security standards is grossly misguided.[317]
Congress has an obligation under the Elections Clause to preserve the legitimacy
of the federal government by ensuring that federal elections reflect the will of
the people.[318] A strong and uniform federal plan is needed to protect against
efforts by foreign actors to disrupt U.S. elections.
_B. Congress Has a Duty to Secure U.S. Elections Against Foreign_
_Interference Because States Are Ill-Equipped and Reluctant to Do So_
The United States is unique in that it currently has no nationwide election
authority.[319] Conducting elections in the United States is a complex process “that
involves multiple levels of government, personnel with a variety of skills and
capabilities, and numerous electronic systems that interact in the performance of
a multitude of tasks.”[320] State or local officials manage elections in accordance
with state laws and local regulations.[321] Elections are administered by over 9,000
state and local jurisdictions containing over 114,000 polling places.[322] The
thousands of jurisdictions vary widely in size, in funding available for election
administration, and in the ability to detect and manage irregularities, particularly
cyberattacks.[323] Several of the small elections offices “have few dedicated staff
and little access to the latest information technology (IT) training or tools.”[324]
A lack of cyber sophistication was evident in the 2016 election as states and
municipalities were unequipped to deal with the severity of the threat. One state
official said, “I don’t think any of us expected to be hacked by a foreign
government.”[325] Another official stated, “If a nation-state is on the other side,
it’s not a fair fight. You have to phone a friend.”[326] In most states, the
decentralized structure means that counties and municipalities have varying
317 _See SENATE INTELLIGENCE REPORT, supra note 13, at 54 (stating in its recommendations that “[s]tates_
should remain firmly in the lead on running elections, and the federal government should ensure they receive
the necessary resources and information”).
318 _See THE FEDERALIST NO. 59 (Alexander Hamilton) (“Every government ought to contain in itself the_
means of its own preservation.”).
319 NAS REPORT, supra note 11, at 31.
320 _Id. at 4._
321 NAS REPORT, supra note 11, at 17.
322 Manpearl, supra note 71, at 169.
323 NAS REPORT, _supra note 11, at 17._ _See generally David C. Kimball & Brady Baybeck, Are All_
_Jurisdictions Equal? Size Disparity in Election Administration, 12 ELECTION L.J. 130 (2013) (discussing how_
size disparities lead to diverging experiences for election officials and voters in large versus small jurisdictions).
324 NAS REPORT, supra note 11, at 17.
325 SENATE INTELLIGENCE REPORT, supra note 13, at 39.
326 _Id._
-----
levels of resources to conduct elections.[327] County election officials, who are on
the front lines of defending election equipment, often have very limited IT
support.[328] A Wisconsin state election administrator noted that some counties’
election teams may only consist of “a county clerk and one more person working
on elections.”[329]
Many county officials have not received any cybersecurity training, even
after the 2016 cyberattacks were made known. In Pennsylvania, election
officials in three of the four largest counties had not received cybersecurity
training as of August 2017.[330] In Michigan, officials in fewer than one-third of
counties indicated that they received formal cybersecurity training.[331] And in
Arizona, officials in only five of fifteen counties received such training.[332]
States also vary widely in the level of security they maintain around voter
registration databases. DHS analysis of state election systems found significant
variance in the security of state voter registration databases, including lack of
encryption and lack of backups in many states.[333] As of May 2017, forty-one
states were still using voter registration systems that were created more than a
decade prior.[334] Types of vote-casting systems also vary dramatically from state
to state. Forty-five states continue to use outdated voting machines that are no
longer manufactured.[335] Some machines are at least fifteen years old and run on
outdated software that is no longer supported, such as Windows XP.[336] In the
November 2018 election, fourteen states did not use a voting mechanism that
allowed for a voter-verified paper audit trail.[337]
Many states understand the need for more secure voting equipment but lack
sufficient financial resources. Although the 2018 HAVA funds were disbursed
quickly, states did not have enough time to make major improvements to their
327 _See Norden & Cordova, supra note 110._
328 _Id._
329 _Id._
330 Likhitha Butchireddygari, Many County Officials Still Lack Cybersecurity Training, NBC NEWS (Aug.
23, 2017, 5:20 AM), https://www.nbcnews.com/politics/national-security/voting-prep-n790256.
331 _Id._
332 _Id._
333 SENATE INTELLIGENCE REPORT, supra note 13, at 46.
334 Tim Lau, U.S. Elections Are Still Vulnerable to Foreign Hacking, BRENNAN CTR. FOR JUST. (Jul. 18,
2019), https://www.brennancenter.org/our-work/analysis-opinion/us-elections-are-still-vulnerable-foreign-hacking.
335 Norden & Cordova, supra note 110.
336 _Id._
337 Lin et al., supra note 51, at 22.
-----
election systems before the 2018 midterm elections.[338] The funding has also
been insufficient for states to overhaul their elections systems and replace
outdated voting machines.[339] Most states recognized a need to purchase new
equipment before the 2020 election, but two-thirds of state officials claimed that they lacked the money to do so, even with the additional HAVA funds.[340]
Consequently, states and municipalities cannot be relied on to successfully
combat foreign cyberattacks against U.S. election systems. According to Senator
Ron Wyden (D-Or.),
If there was ever a moment when Congress needed to exercise its clear
constitutional authorities, this is it. America is facing a direct assault
on the heart of our democracy by a determined adversary. We would
not ask a local sheriff to go to war against the missiles, planes and
tanks of the Russian army. We shouldn’t ask a county IT employee to
fight a war against the full capabilities and vast resources of Russia’s
cyber army. That approach failed in 2016 and it will fail again.[341]
Simply providing funding to states is also not enough. Congress must create a
comprehensive plan to secure federal elections against foreign attacks.
_C. Congress Must Enact a Federal Plan to Preserve the Right of All Citizens_
_to Vote_
Professor Franita Tolson has effectively described how Congress’s license
to enact comprehensive federal election legislation may be even greater than its
Elections Clause power alone because it derives from multiple sources of
authority.[342] In addition to its obligation to preserve the integrity of federal
elections under the Elections Clause, Congress has a responsibility to exercise
its authority under the enforcement clauses of the Fourteenth and Fifteenth
Amendments to protect the right of all citizens to vote.[343] Multiple sources of
authority confer even broader power when Congress acts to protect
constitutional rights and may provide the impetus for the Supreme Court to find
a federal statute valid where it would have considered it unconstitutional under
a single source of authority.[344] Therefore, notwithstanding the Supreme Court’s
338 The EAC disbursed 96% of the HAVA funds by August 2018. Lynch, supra note 187, at 1999.
339 Norden & Cordova, supra note 110.
340 _Id._
341 SENATE INTELLIGENCE REPORT, supra note 13, Minority Views of Senator Wyden, at 1.
342 Tolson, supra note 164, at 329.
343 _Id. at 324._
344 _Id. at 329. The Supreme Court has been inconsistent in its recognition of a greater scope of authority_
when Congress acts pursuant to multiple sources of authority. Compare Tennessee v. Lane, 541 U.S. 509, 516 (2004) (upholding Title II of the Americans with Disabilities Act (ADA) based on “the power to enforce the [F]ourteenth [A]mendment and to regulate commerce”), with Bd. of Trs. v. Garrett, 531 U.S. 356, 374 (2001) (ignoring Congress’s Commerce Clause authority when invalidating the ADA in part as an improper exercise of the Fourteenth Amendment enforcement clause), _and_ Shelby Cnty. v. Holder, 570 U.S. 529, 553 (2013) (giving no weight to Congress’s additional authority for enacting the VRA under both the Fourteenth and the Fifteenth Amendments).
-----
holding in Shelby County, the Reconstruction Amendments provide additional
power to Congress’s Elections Clause authority to establish a federal system for
election infrastructure.[345] With this power comes a duty for Congress to act.
Cyberattacks that disrupt the voting process and create risks that vote tallies
will be manipulated infringe on the right of citizens to vote. The fundamental
right to vote includes the right to be certain that one’s vote matters.[346] Courts
have found that plaintiffs have standing to bring Fourteenth Amendment Due
Process and Equal Protection claims where they allege that certain voting
methods prohibit their votes from being properly counted.[347] In _Stewart v._
_Blackwell, the Sixth Circuit found that the increased probability that plaintiffs’_
votes would not be properly counted due to a faulty punch-card system was
“neither speculative nor remote” and was therefore a justiciable claim.[348]
Similarly, a Pennsylvania court found that voters had proper standing to bring a
Fourteenth Amendment claim because the machines they used to vote did not
allow them to know whether their votes had been cast or would be counted.[349]
A recent lawsuit brought by voters in Georgia demonstrates how voting
systems that are not secure against cyberattacks infringe on voters’ rights.[350] A
federal court granted an injunction against using insecure DRE machines based
on the merits of the plaintiffs’ Fourteenth Amendment Due Process and Equal
Protection claims.[351] The plaintiffs in Curling claimed that the state had violated
their Due Process rights by placing a “substantial burden” on their fundamental
right to vote and had violated their Equal Protection rights by placing “more
severe burdens” on their right to vote than voters who did not have to use DRE
machines.[352] The court agreed and granted the plaintiffs relief in part because the
345 _See Tolson, supra note 164, at 330 (“[F]ar-reaching and potentially controversial legislation can gain_
substantial legitimacy from the fact that Congress can draw on multiple sources of power.”).
346 _See Curling v. Kemp, 334 F. Supp. 3d 1303, 1328 (N.D. Ga. 2018)._
347 _E.g., id._
348 Stewart v. Blackwell, 444 F.3d 843, 855 (6th Cir. 2006), superseded by Stewart v. Blackwell, 473 F.3d
692 (6th Cir. 2007).
349 Banfield v. Cortes, 922 A.2d 36, 44 (Pa. Commw. Ct. 2007).
350 _Curling, 334 F. Supp. 3d 1303; Curling v. Raffensperger, 397 F. Supp. 3d 1334 (N.D. Ga. 2019)._
351 _Curling, 397 F. Supp. 3d at 1410._
352 _Curling, 334 F. Supp. 3d at 1312._
-----
state’s ongoing use of an insecure voting method “pierce[d] citizens’ confidence
in the electoral system and the value of voting.”[353]
Therefore, in some instances, voting rights advocates can protect the right to
vote against insecure voting systems through litigation.[354] Federal courts may be
willing to recognize that an infringement on voters’ right to feel secure that their
votes will count is an injury for which relief may be granted.[355] Insecure voting
systems can also affect voters’ ability to merely cast a ballot. Long wait times to
vote—resulting from erroneous registration data or voting equipment
dysfunction—may impact minority voting districts to a greater degree than
predominantly white precincts.[356] And as wait times increase, voter participation
drops.[357] Consequently, the Equal Protection Clause and the Fifteenth
Amendment may be implicated when citizens of color are disproportionately
denied the right to vote when cyberattacks disrupt voting on election day.
However, litigation is cumbersome and cannot always protect the rights of
all voters or ensure the integrity of federal elections. Indeed, one impetus for the
VRA in 1965 was that piecemeal litigation had failed to sustainably protect
African Americans’ right to vote in most jurisdictions in the Deep South.[358] With
each hard-fought victory in the courts, state and local governments found ways to
enact new restrictions.[359] Moreover, litigation only grants relief after harm has
occurred. Courts can grant prospective relief to require security measures for
future election cycles.[360] But there is no sufficient remedy for the harm to voters
that has already occurred after they participated in an insecure election.[361] Thus,
353 _Curling, 397 F. Supp. 3d at 1411 (quoting Curling, 334 F. Supp. 3d at 1328)._
354 _Id. at 1410._
355 _Id.; see_ _Curling, 334 F. Supp. 3d at 1328 (“A wound or reasonably threatened wound to the integrity_
of a state’s election system carries grave consequences beyond the results in any specific election, as it pierces
citizens’ confidence in the electoral system and the value of voting.”). Contra Heindel v. Andino, 359 F. Supp.
3d 341, 357 (D.S.C. 2019) (holding that plaintiffs failed to show a clearly impending injury that was traceable
to state election officials because they “merely speculate and make assumptions about whether their votes will
be inaccurately counted as the result of a potential hack” (quoting Clapper v. Amnesty Int’l, 568 U.S. 398, 411
(2013))).
356 Stephanie Mencimer, Even Without Voter ID Laws, Minority Voters Face More Hurdles to Casting
_Ballots, MOTHER_ JONES (Nov. 3, 2014), https://www.motherjones.com/politics/2014/11/minority-voters-election-long-lines-id/; German Lopez, Minority Voters Are Six Times More Likely than White Voters to Wait More
_Than an Hour to Vote, VOX (Nov. 8, 2016, 1:30 PM), https://www.vox.com/identities/2016/11/8/13564406/_
voting-lines-race-2016.
357 _Lopez, supra note 356._
358 South Carolina v. Katzenbach, 383 U.S. 301, 314 (1966).
359 _Id.; ANDERSON, supra note 165, at 13; see supra Part II.A._
360 _See_ _Curling, 397 F. Supp. 3d at 1412._
361 _See_ _Curling, 334 F. Supp. 3d at 1315._
-----
the federal government must respond comprehensively to protect voters’ rights
against cyberattacks from foreign actors.
In sum, Congress must act to protect U.S. election infrastructure and to
combat foreign interference in federal elections. Congress has the primary
obligation to safeguard the legitimacy of the federal government, to protect the
fundamental right of citizens to vote, and to ensure that the election results
reflect the choice of the majority of voters. And Congress has the authority to
act pursuant to the Elections Clause coupled with the enforcement provisions of
the Reconstruction Amendments, which provide additional power to protect the
right of all citizens to vote.
IV. A PROPOSED FEDERAL PLAN TO SECURE U.S. ELECTIONS
Congress has the power under the Elections Clause to enact legislation that
establishes a federal plan to which state election authorities must adhere.[362] The
Elections Clause authorizes Congress to designate the manner in which federal
elections are conducted in order to protect the integrity of the federal government
against a threat of foreign interference.[363] After Russian cyberattacks against
state and local election systems in 2016 and 2018, and the anemic, ineffective
response by state election officials, the need for a uniform federal election plan
is evident.[364] Therefore, Congress has the obligation to enact a national plan that
creates uniform standards across all election jurisdictions to ensure that federal
elections are secure and that all citizens are able to exercise their right to vote
and know their votes will count.
A national plan for federal elections does not imply that the entirety of
election administration should be conducted by the federal government. The
decentralized approach to U.S. elections, which relies on states and localities to
manage the nuts and bolts of elections, provides efficiency.[365] The cybersecurity
benefit of a decentralized structure remains—it protects against the devastating
impact of a single widespread cyberattack or technological breakdown.[366] But
an ongoing role for states to conduct elections does not preclude implementing
uniform rules and standards for federal elections. Measures to secure U.S.
362 _See supra Part III.A._
363 _See id._
364 _See supra Part I.C._
365 _See THE FEDERALIST NO. 59 (Alexander Hamilton) (stating that regulation of federal elections is left_
to local administrations because “it may be more convenient and more satisfactory”).
366 Manpearl, supra note 71, at 182; NAS REPORT, supra note 11, at 119.
-----
election infrastructure would be most effective if they are implemented at a
national level.[367]
Although Congress’s national plan for federal elections should be mandatory
for states to follow, the Elections Clause does not grant Congress authority over
state and local elections.[368] However, Congress can encourage states to follow a
federal election plan for their own internal elections. First, because of logistics,
efficiency, and cost, states would likely use federal election infrastructure to
conduct state and local elections along with federal elections. Second, states’
inability to take appropriate cybersecurity measures for their own elections
provides the impetus for Congress to act under the Fourteenth and Fifteenth
Amendments to protect the right of all citizens to know that their votes will
count.[369] Unlike the Elections Clause, the Fourteenth and Fifteenth Amendments
apply to all elections: federal, state, and local.[370] Third, Congress could use its
Spending Clause power to condition funding for election infrastructure on a
state’s compliance with a federal plan for all elections conducted within the
state.[371]
A national election plan should have three main components. First, it should
create uniform federal standards for securing voter registration databases and for
transmitting voter information to polling places so that voters can be checked in
on election day. Second, Congress should require that all states implement a
secure method of voting that uses a uniform ballot design. All voters should be
allowed to mark and record their selections in the manner that is least susceptible
to cyberattacks: hand-marked paper ballots read by secure, state-of-the-art
optical scanners. Finally, to ensure the integrity of every federal election, states
must be required to submit to federal post-election audits.
367 _See Mark Lanterman, Fair Elections and Cybersecurity, 75 BENCH &_ BAR MINN. 10, 10 (2018) (“[T]he
sorts of measures that would most likely effect positive security outcomes are best implemented at a national
level, where standardized procedures can provide a framework for ongoing improvement.”).
368 U.S. CONST. art. I, § 4, cl. 1.
369 _See supra Part III.C._
370 U.S. CONST. amend. XV, § 1 (“The right of citizens of the United States to vote shall not be denied or
abridged by the United States or by any State . . . .”) (emphasis added).
371 _See U.S. CONST. art. I, § 8, cl. 1 (empowering Congress to “lay and collect Taxes, Duties, Imposts, and Excises, to
pay the Debts and provide for the common Defence and general Welfare of the United States”); South Dakota
v. Dole, 483 U.S. 203, 207 (1987) (“[O]bjectives not thought to be within Article I’s enumerated legislative
fields may nevertheless be attained through the spending power and the conditional grant of federal funds.”)
(internal quotation marks omitted).
-----
_A. Congress Should Establish Binding Federal Standards for States to_
_Register Voters, Maintain Secure Voter Databases, and Check-in Voters at_
_the Polls_
Voter registration databases that are maintained electronically are
particularly vulnerable to manipulation by malicious cyber actors.[372] Election
administrators currently rely on county or state government IT departments to
secure voter registration databases.[373] A DHS analysis found that the security of
voter databases varied significantly by state, and many states lacked encryption
and backups for their databases.[374] Federal intelligence and cybersecurity
officials have made recommendations to states and have offered to provide
cybersecurity measures to protect voter registration databases.[375] But many
states have demonstrated a reluctance to receive help from the federal
government or to follow recommendations.[376]
Consequently, Congress must pass legislation that directs states to
implement specific cybersecurity measures for voter registration databases,
which include updating relevant software, creating paper back-ups, and
instituting two-factor authentication for user access to the databases.[377] This
action would not be novel—Congress has previously set mandatory
requirements for state voter databases.[378] A federal plan should also require
states to put in place standard security procedures for monitoring voter database
integrity.[379] Such measures should include installing monitoring sensors on state
registration systems to detect attempts to hack into the systems and reporting
any identified compromises immediately to DHS.[380]
A national plan must also create standards for transmitting voter data to
polling places for voter verification and check-in. Because they are electronic,
e-pollbooks are vulnerable to cyberattacks, particularly if they are locally
372 SENATE INTELLIGENCE REPORT, supra note 13, at 57.
373 NAS REPORT, supra note 11, at 58.
374 SENATE INTELLIGENCE REPORT, supra note 13, at 46.
375 _Id. at 52._
376 _See supra Part III.B; SENATE INTELLIGENCE REPORT, supra note 13, at 48–49; see also_ _id., Minority_
Views of Senator Wyden, at 2 (“The Committee report describes a range of cybersecurity measures needed to
protect voter registration databases, yet there are currently no mandatory rules that require states to
implement even minimum security measures.”).
377 SENATE INTELLIGENCE REPORT, supra note 13, at 57.
378 Help America Vote Act of 2002, Pub. L. No. 107-252, 116 Stat. 1666 (codified as amended at 52 U.S.C. §§ 20901–21145) (requiring “a single, uniform, official, centralized, interactive, computerized statewide
voter registration list defined, maintained, and administered at the state level”).
379 NAS REPORT, supra note 11, at 63.
380 SENATE INTELLIGENCE REPORT, supra note 13, at 57.
-----
networked or connected to the internet.[381] Cyberattacks could change voter data,
alter information on who has voted, or simply shut down operation of an e-pollbook through a “denial of service” attack.[382] Congress should, therefore,
include national security standards for the use of e-pollbooks in its federal plan.
Because e-pollbooks have advantages over paper and are easy to use, their use
should not be discontinued.[383] Rather, the NAS recommends that Congress
authorize and fund the National Institute of Standards and Technology to
develop security standards along with verification and validation protocols for
e-pollbooks.[384] In addition, each precinct should be required to maintain a paper
copy of the precinct’s pollbook as a back-up in the event that voter data is
manipulated or access to electronic data is disrupted.[385]
_B. Congress Should Mandate Uniform Paper Ballots for All Federal Elections_
Voters across the country cast their ballots using methods that are subject to
varying degrees of cyber risks, and many states are either unwilling or incapable
of following the recommendations of cybersecurity experts.[386] Voting systems
that do not provide human-readable printouts for voters to confirm their
selections and do not maintain a voter-verified paper audit trail are most
vulnerable to cyberattacks.[387] Experts have called for discontinuing the use of
paperless DRE machines because they are vulnerable to hacking without
detection and do not produce auditable paper trails.[388] Yet, in 2019, twelve states
were still using paperless DRE machines in at least some jurisdictions, and four
states still used them statewide.[389] Congress should pass legislation that prohibits
states from using outdated, paperless voting machines and requires the use of a
uniform method of voting that will provide an auditable paper trail.
The Senate Intelligence Report concluded that “[p]aper ballots and optical
scanners are the least vulnerable to cyberattack.”[390] The most secure and cost-effective method for voting would be to use hand-marked paper ballots in all
381 See supra Part I.B. regarding the vulnerability of e-pollbooks.
382 NAS REPORT, supra note 11, at 71, 86.
383 _Id. at 72._
384 _Id._
385 _Id._
386 _See supra Part III.B._
387 SENATE INTELLIGENCE REPORT, supra note 13, at 42.
388 See supra Part I.B. for a detailed description of the security flaws associated with DRE voting
machines.
389 Norden & Cordova, supra note 110.
390 SENATE INTELLIGENCE REPORT, supra note 13, at 59.
federal elections.[391] Using a uniform paper ballot for federal elections that voters
mark by hand would also allow states to continue and expand the use of vote-by-mail.[392] Alternatively, Congress could require and provide funding for
uniform BMD machines to be used across all jurisdictions. The BMDs must
produce a paper record of the voter’s choices, which each voter can review
before casting their ballot. However, because BMD machines are potentially
vulnerable to cyberattacks, the most secure election systems use hand-marked
paper ballots as the primary method for voting.[393] Moving forward, Congress
should mandate that all federal elections be conducted using human-readable
paper ballots that are counted either by hand or by using federally certified
optical scanners.[394]
_C. Congress Should Require All States to Submit to Federal Election Audits_
As part of a federal election plan, Congress should require that all states
submit to post-election audits. Audits require voter-verifiable paper ballots that
provide a human-readable record of the voter’s selections.[395] Such audits
provide assurance that the outcome of any election reflects the voters’ choices
and is based on an accurate tabulation of the ballots cast.[396]
NAS election cybersecurity experts recommend risk-limiting audits as the
most efficient and effective means to ensure the reliability of an election.[397]
Risk-limiting audits examine randomly selected, individual ballots until a
predetermined level of statistical assurance is reached.[398] In 2017, risk-limiting
audits were piloted statewide in Colorado, and several other states plan to
conduct pilots in the next few years.[399] However, rather than leaving the
requirement for audits to the discretion of states, Congress should pass
391 _Id._; Christopher Deluzio & Kevin Skoglund, _Guess Which Ballot Costs Less and Is More Secure—Paper or Electronic?_, PATRIOT NEWS (Aug. 20, 2019), https://www.pennlive.com/opinion/2019/08/guess-which-ballot-costs-less-and-is-more-secure-paper-or-electronic-opinion.html.
392 In 2016, Colorado, Oregon, and Washington used mail-only voting, and most ballots in California and
Utah were cast by mail. NAS REPORT, supra note 11, at 48–50.
393 _See generally Andrew Appel, Richard A. DeMillo & Philip B. Stark, Ballot-Marking Devices (BMDs)_
_Cannot Assure the Will of the Voters,_ 19 ELECTION L.J. 432 (2020) (describing the vulnerability of BMD voting
machines to hacking as well as risk that BMDs may not accurately record a vote as the voter had intended and
arguing that the most secure method of voting is a system that uses hand-marked paper ballots).
394 NAS REPORT, supra note 11, at 80.
395 _Id. at 94._
396 _Id._
397 _Id. at 95._
398 _Id._
399 _Id._
legislation to require all states to submit to federal risk-limiting audits after each
federal election.
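To make the statistics concrete, the sketch below illustrates one common form of risk-limiting audit, a simplified BRAVO-style ballot-polling audit for a two-candidate contest. It is an illustrative sketch only: the function name, the 5% risk limit, and the ballot encoding are assumptions made for exposition, and real audits must also handle multi-candidate contests, invalid ballots, and sampling logistics.

```python
import random

def bravo_audit(ballots, reported_winner_share, risk_limit=0.05, seed=None):
    """Simplified BRAVO ballot-polling audit for a two-candidate contest.

    Samples ballots without replacement and updates a sequential likelihood
    ratio; the audit stops once the reported outcome is confirmed at the
    chosen risk limit, otherwise it signals escalation (e.g., a full recount).
    """
    rng = random.Random(seed)
    pool = list(ballots)
    rng.shuffle(pool)

    p = reported_winner_share      # reported vote share of the winner (> 0.5)
    t = 1.0                        # likelihood-ratio test statistic
    for sampled, ballot in enumerate(pool, start=1):
        if ballot == "winner":
            t *= p / 0.5           # ballot supports the reported outcome
        else:
            t *= (1 - p) / 0.5     # ballot cuts against it
        if t >= 1.0 / risk_limit:
            return True, sampled   # outcome confirmed within the risk limit
    return False, len(pool)        # evidence insufficient: escalate

# Illustration: 10,000 ballots with a reported 55% winner share
ballots = ["winner"] * 5500 + ["loser"] * 4500
confirmed, n_sampled = bravo_audit(ballots, reported_winner_share=0.55, seed=1)
print(confirmed, n_sampled)
```

The appeal of the design is visible in the sketch: when the reported margin is accurate, the audit typically confirms the outcome after examining only a small fraction of the ballots cast.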
The federal government’s response to ongoing Russian cyberattacks must
extend beyond offers to provide resources to states.[400] To protect and defend
U.S. elections, Congress must “establish mandatory nation-wide cybersecurity
requirements.”[401] Such requirements must designate specific measures to ensure
the security of voter registration databases and pollbooks and should compel the
use of uniform paper ballots and post-election audits.
CONCLUSION
The right of citizens to freely choose who will represent them is the essence
of our republican form of government. The founders understood that
maintaining free and fair elections is a core tenet of this nation. Therefore, they
placed in the Constitution the means for Congress to have final authority to
regulate federal elections when the need arises. Russian cyberattacks on state
and local election systems constitute a challenge to the core values of American
democracy, which require a comprehensive, uniform federal response.
To varying degrees over the past 150 years, Congress has imposed
regulations on states to protect election integrity by ensuring that all citizens
have the right to vote. The current threat requires an even greater response. This
Comment describes a source of authority that authorizes Congress to prescribe
cybersecurity measures to which states must adhere in conducting federal
elections. The value implicit in the Elections Clause is that federal elections must
be administered in a manner that produces a clear and legitimate outcome.
Congress has the authority and an obligation under the Elections Clause to
ensure the integrity of American democracy in the face of cyberattacks by a
foreign adversary. Congress must exercise this power to create a comprehensive
national plan for federal elections.
SUMAN MALEMPATI[*]
400 SENATE INTELLIGENCE REPORT, supra note 13, Minority Views of Senator Wyden, at 1.
401 _Id._
- J.D. Candidate, Emory University School of Law, Class of 2021. I extend my deepest gratitude to
Professor Robert Schapiro for his wisdom, guidance, and support throughout the writing process. Thank you to
Natalie Baber and Connor Hees for providing insightful feedback. To the Emory Law Journal staff, particularly
Brennan Mancil and Sam Reilly, thank you for the incredible work you have done to make this Comment better
and get it published.
|
{
"disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.2139/ssrn.3590843?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.2139/ssrn.3590843, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "CLOSED",
"url": ""
}
| 2,020
|
[] | false
| null |
[] | 32,311
|
|
en
|
[
{
"category": "Economics",
"source": "s2-fos-model"
},
{
"category": "Business",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/010d55f6ebe83e48bb83926c88b0d72be0c08538
|
[] | 0.846713
|
Investment Portfolio Optimization in Indonesia (Study On: Lq-45 Stock Index, Government Bond, United States Dollar, Gold and Bitcoin)
|
010d55f6ebe83e48bb83926c88b0d72be0c08538
|
International Journal of Current Science Research and Review
|
[
{
"authorId": "2241335791",
"name": "I. Made"
},
{
"authorId": "2226312743",
"name": "Gede Abandi Semeru"
},
{
"authorId": "119396810",
"name": "Y. Nainggolan"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"Int J Curr Sci Res Rev"
],
"alternate_urls": null,
"id": "fd0a02a9-7a23-4ec4-ae7b-0a3d06990d81",
"issn": "2581-8341",
"name": "International Journal of Current Science Research and Review",
"type": "journal",
"url": null
}
|
In forming their portfolios, investors should analyze the risk and return of each investment instrument. This is aimed at preventing investors from speculating and gambling with their investments. Conducting an investment portfolio optimization study on LQ-45 stock index, government bond, USD, gold, and Bitcoin can provide valuable insights due to unique market characteristics in Indonesia. This research analyzes the formation of investment instruments over the last 60 months, specifically from January 2018 to December 2022. The research method used in this study is quantitative research aimed at selecting several investment instruments for a portfolio in Indonesia. The portfolio aims to minimize risk and maximize return using the Markowitz method, also known as the optimal portfolio. To fulfill the objectives of this research, data on the prices of each instrument are required. An optimal portfolio can be obtained by combining two instruments: 18% bitcoin and 82% gold. This optimal portfolio can achieve an expected return of 1.29% with a risk level of 5.15%. Considering a risk-free rate of 0.375%, this portfolio forms a slope of 0.1775, which is the largest slope formed between the combination of risk-free instruments and risky portfolios. Investors should allocate their funds more wisely, considering not only the highest return but also the associated risk. High returns often come with high risks, so investors need to assess the risk-return trade-off before making investment decisions.
|
## **International Journal of Current Science Research and Review**
### **ISSN: 2581-8341** Volume 06 Issue 07 July 2023 **DOI: 10.47191/ijcsrr/V6-i7-108, Impact Factor: 6.789** **IJCSRR @ 2023**
### **www.ijcsrr.org**
# **Investment Portfolio Optimization in Indonesia (Study On: Lq-45 Stock Index, Government Bond, United States Dollar, Gold and Bitcoin)**
### **I Made Gede Abandi Semeru [1], Yunieta A. Nainggolan [2]**
1,2 School of Business & Management, Institut Teknologi Bandung
**ABSTRACT:** In forming their portfolios, investors should analyze the risk and return of each investment instrument. This is aimed
at preventing investors from speculating and gambling with their investments. Conducting an investment portfolio optimization
study on LQ-45 stock index, government bond, USD, gold, and Bitcoin can provide valuable insights due to unique market
characteristics in Indonesia. This research analyzes the formation of investment instruments over the last 60 months, specifically
from January 2018 to December 2022. The research method used in this study is quantitative research aimed at selecting several
investment instruments for a portfolio in Indonesia. The portfolio aims to minimize risk and maximize return using the Markowitz
method, also known as the optimal portfolio. To fulfill the objectives of this research, data on the prices of each instrument are
required. An optimal portfolio can be obtained by combining two instruments: 18% bitcoin and 82% gold. This optimal portfolio
can achieve an expected return of 1.29% with a risk level of 5.15%. Considering a risk-free rate of 0.375%, this portfolio forms a
slope of 0.1775, which is the largest slope formed between the combination of risk-free instruments and risky portfolios. Investors
should allocate their funds more wisely, considering not only the highest return but also the associated risk. High returns often come
with high risks, so investors need to assess the risk-return trade-off before making investment decisions.
**KEYWORDS:** Bitcoin, Government Bond, Gold, LQ-45, Portfolio Optimization, USD.
**INTRODUCTION**
The portfolio formed by an investor can provide high returns or, on the contrary, cause losses for the investor. In other words,
risk is a deviation from the expected return. There is a positive relationship between return and risk in investing, known as high risk-high return, which means the greater the risk that must be borne, the greater the resulting return. Return is the result obtained from
an investment, which can be in the form of realized return or expected return that has not yet occurred but is expected to happen in
the future. Meanwhile, portfolio risk consists of systematic and unsystematic risk. Both of these risks are often referred to as total
risk. Some factors that influence this uncertainty include securities prices and interest rates, which can change at any time. The
benefits of diversification are well-known through the principle that says "Don't put all your eggs in one basket", because if that
basket falls, then all the eggs in it will break. In the context of investment, this proverb can be interpreted as a recommendation not
to invest all the funds owned in only one asset, because if that asset fails, then all the invested funds will disappear.
Investors expect to get maximum returns with minimum possible risk. However, the larger the profit obtained from an
investment, the higher the associated risk. Therefore, investors need to consider the balance between risk and return in investing.
Risk can be minimized by diversification or by combining several investment instruments into a portfolio. If one instrument
experiences a loss while another instrument generates a profit, the profit from one instrument can offset the loss from the other
investment instrument. Effective diversification of investment instruments yields efficient results in a portfolio, providing maximum
expected returns with minimal variance for those expected returns. Such a portfolio is called a Markowitz Efficient Portfolio. This study focuses on investment instruments in Indonesia, including the LQ-45 stock index, government bonds, the United States dollar, gold, and Bitcoin; studying these instruments can provide valuable insights due to unique market characteristics, diversification benefits, local investor perspectives, period-specific analysis, and the opportunity to contribute to existing knowledge.
The selected instruments represent different asset classes, each with its own characteristics and potential benefits. By including
a mix of equities (LQ-45 stock index), fixed income (bonds), currencies (US dollar), commodities (gold), and cryptocurrencies
(bitcoin), this study can analyze how diversification across these assets may impact portfolio performance and risk management. The
LQ45 stock index is a widely recognized benchmark index for the Indonesian stock market, providing insights into the performance
of the country's largest and most liquid stocks. Government bonds, on the other hand, represent fixed-income securities issued by
the Indonesian government, offering income and potentially lower risk compared to equities. The US dollar is a commonly used
global reserve currency and serves as a benchmark for many international transactions. Gold is a well-known precious metal and
often considered a store of value. Bitcoin, as a cryptocurrency, represents a digital and decentralized form of currency with its own
unique characteristics. By including a diverse set of instruments, this study can conduct a comprehensive analysis that covers a broader
range of investment options. This can enhance the understanding of portfolio optimization, risk management, and the potential for
achieving better risk-adjusted returns.
When forming a portfolio, investors seek to minimize risk and maximize returns. A portfolio that can achieve these goals is
called an optimal portfolio. To form an optimal portfolio, several assumptions need to be made about investor behavior in making
investment decisions. It is assumed that investors tend to avoid risk (risk averse). This type of investor would choose an investment
with lower risk if presented with two investments with the same expected return but different levels of risk.
**LITERATURE REVIEW**
***A.*** ***Portfolio***
Investing aims to generate profits with a certain level of risk. The purpose of creating an investment portfolio is to diversify risk
so that the funds held have minimum risk. Investing in more than one investment instrument has lower risk compared to investing
in only one instrument. The more investment instruments involved in the portfolio, the lower the risk. If there is a decrease in one
investment instrument, then other instruments can offset or replace it. Therefore, investors must have diversity in their portfolio so
that the funds held do not experience a decrease from their initial value (Markowitz, 1952) Markowitz assumed that investors would
be able to create an efficient portfolio. He also stated that the portfolio should be diversified to achieve risk spreading. Such
diversification will produce an efficient portfolio where it provides a higher level of return than other portfolios with the same risk
and a lower risk than other portfolios with the same level of return.
***B.*** ***Optimal Portfolio***
The optimal portfolio is a portfolio that provides the highest expected return for a given level of risk or the lowest level of risk
for a given level of expected return. In other words, it is the portfolio that offers the best risk-reward trade-off for an investor. The
concept of the optimal portfolio was introduced by Harry Markowitz in his seminal paper "Portfolio Selection" in 1952. To find the
optimal portfolio, an investor needs to consider the expected returns, standard deviations, and correlations of all the assets in the
portfolio. The optimal portfolio can be identified by plotting the efficient frontier, which is a curve that represents the set of portfolios
that offer the highest expected return for a given level of risk, or the lowest level of risk for a given level of expected return. The
point on the efficient frontier that corresponds to the investor's risk tolerance and expected return is the optimal portfolio for that
investor.
The optimal portfolio is crucial for investors who want to maximize their returns while minimizing their risk. By diversifying
their portfolio and selecting assets with low correlations, investors can reduce their portfolio's risk and increase their expected
returns. The optimal portfolio is also useful for portfolio managers who want to construct a portfolio that meets the investment
objectives of their clients while minimizing risk.
***C.*** ***Asset Allocation***
Asset allocation is more focused on placing funds in various investment instruments rather than emphasizing stock choices in
the portfolio. From the study results, differences in performance are more due to asset allocation rather than investment choices.
According to Markowitz (1952), asset allocation is one of the factors that determine the level of return and risk of the portfolio.
Perrit and Lavine (1990) state that besides diversification, this asset allocation is a very important factor in investment, for practical
reasons such as targeting long-term investments, determining the risk that investors can tolerate over time, and eliminating
investment decision changes based on changes in financial conditions.
***D.*** ***Conceptual Framework***
This study presents a conceptual framework encompassing the key elements of modern portfolio theory (MPT), including the
optimal portfolio, Sharpe ratio, portfolio variance and covariance, risk preference, and efficient frontier. Developed by Harry
Markowitz in the 1950s, MPT offers a robust framework for constructing portfolios that strive to optimize the delicate balance
between risk and return.
**Figure 1.** Conceptual Framework
**RESEARCH METHODOLOGY**
***A.*** ***Data Collection Method***
In this research, historical data was obtained by visiting websites that provide the required data. This research analyzes the
formation of investment instruments over the last 60 months, specifically from January 2018 to December 2022. In selecting the
instruments, several instruments were chosen to represent the entire range of instruments available in Indonesia.
***B.*** ***Data Analysis Method***
Due to the complexity of data processing, assistance from computer software, specifically Microsoft Excel, is required. Apart
from being easy to operate, this software also offers the necessary functions and features for performing calculations. The functions
in Microsoft Excel are highly useful for data processing, including stdevp (calculating standard deviation), average (calculating the
mean), correl (calculating correlation), covar (calculating covariance), and varp (calculating variance).
In addition to these functions, the additional features in Microsoft Excel, especially the Solver feature, are crucial for data
processing. This feature allows for finding solution values in linear programming equations by setting value criteria and applying
various constraints or objective function limitations. In addition to its usefulness, one of the advantages of Microsoft Excel is its
ease of application in the portfolio calculation procedure using the Markowitz Method employed in this study. It is user-friendly and
widely popular software in the community.
***C.*** ***Calculating Investment Instrument Returns and Market Value***
The historical data obtained consists of monthly instrument prices or given return values. For data that is still in the form of
instrument prices, the initial step of calculation is to compute the monthly returns.
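As a sketch of this step, the snippet below computes simple monthly returns, r_t = P_t / P_{t-1} - 1, from a small table of hypothetical closing prices; the column names and price levels are illustrative only, not the study's actual data.

```python
import pandas as pd

# Hypothetical monthly closing prices (illustrative values, in rupiah)
prices = pd.DataFrame({
    "BTC":  [190_000_000, 205_000_000, 198_000_000, 210_000_000],
    "Gold": [900_000, 912_000, 905_000, 930_000],
})

# Simple monthly return: r_t = P_t / P_{t-1} - 1
returns = prices.pct_change().dropna()
print(returns)
```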
***D.*** ***Calculating Average Returns of Instruments and Market Value***
The next step is to calculate the average return and standard deviation. From the historical return data, the monthly average return
and standard deviation are calculated. With a total of 60 records, the average return for each instrument and market value is calculated
to obtain the monthly average return.
***E.*** ***Calculating Standard Deviation of Instruments and Market Value***
To simplify the calculation, the stdevp(argument) function is used, where the argument contains the return data of the instruments
during the research period in Microsoft Excel software.
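In Python, the same two quantities can be computed directly; using ddof=0 yields the population standard deviation, matching Excel's stdevp. The return values below are illustrative only.

```python
import numpy as np

# Illustrative monthly returns for a single instrument (as fractions)
r = np.array([0.031, -0.012, 0.024, 0.050, -0.018, 0.007])

mean_return = r.mean()    # equivalent to Excel's AVERAGE
risk = r.std(ddof=0)      # population std dev, equivalent to Excel's STDEVP
print(mean_return, risk)
```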
### 4924 [*] Corresponding Author: I Made Gede Abandi Semeru Volume 06 Issue 07 July 2023 ** Available at: www.ijcsrr.org** ** Page No. 4922-4934**
-----
## **International Journal of Current Science Research and Review **
### **ISSN: 2581-8341 ** Volume 06 Issu e 07 July 2023 **DOI: 10.47191/ijcsrr/V6-i7-108, Impact Factor: 6.789 ** **IJCSRR @ 2023 www.ijcsrr.org**
***F.*** ***Calculating Correlation of Investment Instruments***
The next step is to calculate the correlation coefficient between instruments. The correlation coefficient is used to analyze
whether a variable has a significant relationship with another variable. It helps determine the strength of the relationship, as well as
how one variable influences the other. In this case, the variables are investment instruments.
The correlation coefficient indicates the magnitude of the relationship between the movements of two variables relative to their
respective deviations. In statistics, the correlation coefficient ranges between two extreme values: perfect positive correlation (+1),
indicating a strong positive relationship, perfect negative correlation (-1), indicating a strong inverse relationship, and a correlation
coefficient of zero (0), indicating no correlation.
***G.*** ***Calculating Covariance of Investments Instruments***
The next step is to calculate the covariance between instruments. Covariance is the average of the products of deviations of one instrument's returns and another's from their respective means.
***H.*** ***Calculating Portfolio Variance***
The next step is to calculate the portfolio variance. The portfolio variance is obtained by summing the covariances between every pair of instruments, each weighted by the product of the corresponding portfolio weights; the portfolio standard deviation is then the square root of the portfolio variance.
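In matrix form this is σ_p² = w' Σ w, where w is the weight vector and Σ the covariance matrix. As a check on the numbers reported later in this paper, plugging the bitcoin and gold covariances from Table 3 into this formula with the 18%/82% weights of the eventual optimal portfolio reproduces its 5.15% standard deviation (the restriction to two assets here is purely for brevity):

```python
import numpy as np

w = np.array([0.18, 0.82])            # weights: BTC, gold (from Table 6 below)
cov = np.array([[0.04616, 0.00008],   # covariances from Table 3 (as fractions)
                [0.00008, 0.00169]])

port_var = w @ cov @ w                # portfolio variance = w' * Cov * w
port_std = np.sqrt(port_var)
print(port_var, port_std)             # std dev comes out near 0.0515 (5.15%)
```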
***I.*** ***Calculating Portfolio Return and Standard Deviation***
The next step is to calculate the portfolio return. First, the average return per instrument per month over the research period is calculated; the portfolio return is then obtained by accumulating the weighted average returns of the instruments. The portfolio return and portfolio standard deviation are found using the Solver feature in Microsoft Excel, which eases the calculation process. In this feature, several variables must be filled in to obtain the instrument weights that minimize the variance. Once all the variables have been entered, the spreadsheet calculation is run by clicking the Solve button. The portfolio standard deviation and portfolio return resulting from the Solver calculation represent the combination of all instruments that minimizes the variance, also known as the Global Minimum Variance (GMV) point.
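Outside of Excel, the same minimization can be sketched with scipy.optimize standing in for Solver. The covariance matrix below is copied from the Table 3 estimates in the results (converted to fractions), and the long-only bounds mirror the Solver setup; the resulting weights should land close to the GMV composition reported in Table 5.

```python
import numpy as np
from scipy.optimize import minimize

# Population covariance matrix (fractions), order:
# BTC, gold (XAU_IDR), government bond, LQ45, USD_IDR -- from Table 3
cov = np.array([
    [ 0.04616,  0.00008, -0.00047,  0.00301, -0.00071],
    [ 0.00008,  0.00169, -0.00012, -0.00054,  0.00047],
    [-0.00047, -0.00012,  0.00307, -0.00110,  0.00084],
    [ 0.00301, -0.00054, -0.00110,  0.00273, -0.00092],
    [-0.00071,  0.00047,  0.00084, -0.00092,  0.00075],
])
n = cov.shape[0]

res = minimize(
    lambda w: w @ cov @ w,                 # objective: portfolio variance
    np.full(n, 1.0 / n),                   # start from equal weights
    bounds=[(0.0, 1.0)] * n,               # long-only, mirroring Solver
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
)
print(np.round(res.x, 3))                  # GMV weights
print(np.sqrt(res.fun))                    # GMV standard deviation (~0.015)
```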
***J.*** ***Constructing the Minimum Variance Frontier Curve***
The next step is to find the points that represent combinations of portfolio return and portfolio standard deviation, forming the
minimum variance frontier curve using the solver feature in Microsoft Excel. Before finding these values, it is necessary to identify
the instrument with the highest return and the instrument with the lowest return as individual instruments. If needed, the data should
be plotted on a graph for easier search. Then, determine the number of points to be generated between the highest return and the
lowest return, which will result in a return increment (delta return).
To obtain these points, the solver feature is used with the objective function and constraints as described in section 3.10. The
difference is that in the subject to constraints column, a constraint for the portfolio return is added. The lowest individual return is
added to the delta return, resulting in a different standard deviation. Similarly, the other points are processed by adjusting the subject
to constraints column with multiples of the delta return until reaching the highest individual return. From the generated points, a line
can be drawn through all of them, forming a curve that opens to the right. This curve will also pass through the GMV point directly
since this point represents the minimum point of the efficient frontier.
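The same sweep can be scripted: fix a grid of target returns between the lowest and highest individual expected returns and, for each target, minimize variance subject to the weights summing to one and the portfolio return hitting the target. The expected returns and covariances below are the Table 1 and Table 3 estimates; everything else is an illustrative sketch.

```python
import numpy as np
from scipy.optimize import minimize

mu = np.array([0.0326, 0.0086, 0.0026, -0.0014, 0.0029])  # Table 1 returns
cov = np.array([                                          # Table 3 covariances
    [ 0.04616,  0.00008, -0.00047,  0.00301, -0.00071],
    [ 0.00008,  0.00169, -0.00012, -0.00054,  0.00047],
    [-0.00047, -0.00012,  0.00307, -0.00110,  0.00084],
    [ 0.00301, -0.00054, -0.00110,  0.00273, -0.00092],
    [-0.00071,  0.00047,  0.00084, -0.00092,  0.00075],
])
n = len(mu)

def min_std_for_target(target):
    """Std dev of the long-only minimum-variance portfolio for a target return."""
    res = minimize(
        lambda w: w @ cov @ w,
        np.full(n, 1.0 / n),
        bounds=[(0.0, 1.0)] * n,
        constraints=[
            {"type": "eq", "fun": lambda w: w.sum() - 1.0},
            {"type": "eq", "fun": lambda w: w @ mu - target},
        ],
    )
    return np.sqrt(res.fun)

# Trace 20 frontier points between the lowest and highest individual returns
for target in np.linspace(mu.min(), mu.max(), 20):
    print(f"E(r) = {target: .4f}   sigma = {min_std_for_target(target):.4f}")
```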
***K.*** ***Selecting the Efficient Frontier Curve***
The next step is to determine the efficient frontier, which is a part of the minimum variance frontier curve. By forming the points
described in section 3.10, the minimum variance frontier curve can be created. From the data processing in section 3.9, the GMV
point located on the minimum variance frontier curve is obtained. The curve below the GMV point on the minimum variance frontier
is considered the non-efficient frontier. This is because, with the same standard deviation, portfolios on the minimum variance
frontier curve above the GMV point can achieve higher returns.
***L.*** ***Finding the Optimal Portfolio***
The next step is to find the optimal point in the risky asset portfolio. In order to obtain the optimal portfolio point, the risk-free
rate needs to be determined. Once known, an equation to calculate the reward-to-variability ratio is created in specific cells,
referencing other cells that contain portfolio returns, the risk-free rate, and portfolio standard deviation.
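As a sketch of this step, the snippet below grid-searches the reward-to-variability ratio over two-asset mixes of bitcoin and gold, using the sample moments reported later (Tables 1 and 3) and the 0.375% monthly risk-free rate. Restricting the search to these two instruments is itself an assumption, motivated by the fact that the paper's optimal portfolio ends up holding only them.

```python
import numpy as np

rf = 0.00375                         # 0.375% per month (4.5% annually)
mu = np.array([0.0326, 0.0086])      # expected monthly returns: BTC, gold
cov = np.array([[0.04616, 0.00008],
                [0.00008, 0.00169]])

def reward_to_variability(w_btc):
    w = np.array([w_btc, 1.0 - w_btc])
    return (w @ mu - rf) / np.sqrt(w @ cov @ w)

grid = np.linspace(0.0, 1.0, 101)
ratios = [reward_to_variability(w) for w in grid]
best = grid[int(np.argmax(ratios))]
print(best, max(ratios))             # ~0.18 in BTC, slope ~0.1775
```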
***M.*** ***Finding the Complete Optimal Portfolio***
The next step is to construct a portfolio that involves investment in a risk-free instrument. By combining the risk-free instrument
with the optimal portfolio of risky instruments, a complete optimal portfolio can be formed.
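Under the standard mean-variance algebra (not spelled out in the paper), allocating a fraction y to the optimal risky portfolio and 1 - y to the risk-free asset gives E(r_c) = r_f + y(E(r_p) - r_f) and σ_c = yσ_p, so the complete portfolios trace out the Capital Allocation Line. A brief sketch using the optimal-portfolio figures reported later in the results:

```python
# Complete portfolios along the CAL (figures from the results section)
rf = 0.00375                 # monthly risk-free rate
mu_p, sd_p = 0.0129, 0.0515  # optimal risky portfolio: E(r) and std dev

for y in (0.0, 0.25, 0.5, 0.75, 1.0):  # fraction placed in the risky portfolio
    mu_c = rf + y * (mu_p - rf)
    sd_c = y * sd_p
    print(f"y = {y:.2f}   E(r_c) = {mu_c:.4f}   sigma_c = {sd_c:.4f}")
```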
**RESULT AND DISCUSSION**
***A.*** ***Data Processing with Markowitz Method***
With the available historical data for several sample instruments including Bitcoin, gold, government bond, LQ45, and the US
dollar from the period of 2018 to 2022, data processing is conducted. The goal is to form an optimal portfolio with measured portfolio
performance.
***B.*** ***Instrument Return Analysis***
Investment Return Analysis begins with calculating the return of each instrument. According to Kritzman (1990, p.7) in his book
titled "Asset Allocation for Institutional Portfolios," return is the income generated from an asset, adjusted for changes in prices that
occur over a specific period, divided by the price of the asset at the beginning of the period. According to Levy (1999, p.198) in his
book titled "Introduction to Investments," expected return represents the average of the potential rates. Expected return is also known
as the mean return, simplified as the mean. Expected return has two components, namely the probability and the rate of return of an
asset. The fluctuation in prices of each instrument makes it difficult for the author to estimate the probability distribution of each
instrument. Therefore, to calculate the expected return per month, the researcher assumes that the probability distribution remains constant. This means that the expected return is simply the average of the monthly sample returns (computed from monthly closing prices) for each instrument over the research period. In this study, there are sixty months, from January 2018 to December 2022.
***C.*** ***Average Return and Risk***
The initial step in data processing, according to Bodie, Kane, and Marcus (2011, p.156), is to calculate the average return. From
the historical returns of all instruments obtained, the average return can be calculated for each instrument over the entire research
period. This is done by dividing the total return of each instrument during the research period by the number of months in the
research period. The "average" function in Microsoft Excel can be used with the arguments of each return for the entire research
period to obtain the expected return per instrument.
The next step in data processing, according to Bodie, Kane, and Marcus (2011, p.156), is to calculate the risk (standard deviation)
for each instrument over the entire research period. Risk is the square root of variance, so calculating risk is aligned with calculating
variance. The "stdev" function in Microsoft Excel can be used with the arguments of each return for the entire research period to
obtain the risk per instrument.
**Table 1.** Standard Deviation and Monthly Returns of Individual Instruments

|No.|Instrument|Standard Deviation (σ)|Expected Return (E(r))|
|---|---|---|---|
|1|Bitcoin (BTC)|21.67%|3.26%|
|2|Gold|4.15%|0.86%|
|3|Government Bond|5.59%|0.26%|
|4|LQ45|5.27%|-0.14%|
|5|US Dollar|2.77%|0.29%|
From Table 1 above, it can be seen that Bitcoin has the highest risk, with a standard deviation of 21.67% and an expected return of 3.26%. On the other hand, the US Dollar has the lowest risk, with a standard deviation of 2.77% and an expected return of 0.29%, confirming the concept of high risk-high return.
***D.*** ***Correlation Coefficients***
The next step is to calculate the correlation coefficients for all the instruments. The correlation coefficient, or simply correlation,
is a statistical measure used to assess the relationship between individual instrument returns or the tendency of two instruments to
move together. The correlation coefficient of returns between two instruments is calculated using the statistical function "correl" in
Microsoft Excel, with the arguments being the returns of the two instruments.
**Table 2.** Correlation Coefficients among Instruments

|Correlation|BTC|Gold|Government Bond|LQ45|US Dollar|
|---|---|---|---|---|---|
|BTC|1|0.00888|-0.03914|0.26831|-0.12025|
|Gold|0.00888|1|-0.05084|-0.25173|0.41984|
|Government Bond|-0.03914|-0.05084|1|-0.38037|0.55016|
|LQ45|0.26831|-0.25173|-0.38037|1|-0.64063|
|US Dollar|-0.12025|0.41984|0.55016|-0.64063|1|
From Table 2 above, it can be observed that the correlations among instruments lie in the range -0.64063 < ρ < 0.55016. No
instrument exhibits positive correlation with all other instruments. For example, Bitcoin shows positive correlation with Gold and
LQ45, but negative correlation with Government Bond and US Dollar, with coefficients of -0.03914 and -0.12025, respectively. On
the other hand, no instrument exhibits negative correlation with all other instruments. Government Bond, for instance, shows
negative correlation with Bitcoin, Gold, and LQ45, but positive correlation with US Dollar, with a coefficient of 0.55016.
***E.*** ***Covariance***
The next step is to calculate the covariance of all instruments. Covariance is a measure of how two different sets of data vary
together. Covariance determines the extent to which two variables are related or how they vary together. It is the average of the products of the deviations of each pair of data points from their respective means. By knowing the covariances and correlations among instruments,
investors can determine the composition of available assets to achieve an optimal portfolio with minimal risk and maximum return.
The covariance between two instruments is calculated using the covar statistical function in Microsoft Excel, with the arguments
being the returns of the two instruments.
**Table 3.** Instrument Covariances

|Covariance|BTC|XAU_IDR|Government Bond|LQ45|USD_IDR|
|---|---|---|---|---|---|
|BTC|4.616%|0.008%|-0.047%|0.301%|-0.071%|
|XAU_IDR|0.008%|0.169%|-0.012%|-0.054%|0.047%|
|Government Bond|-0.047%|-0.012%|0.307%|-0.110%|0.084%|
|LQ45|0.301%|-0.054%|-0.110%|0.273%|-0.092%|
|USD_IDR|-0.071%|0.047%|0.084%|-0.092%|0.075%|
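As a quick consistency check (values hand-copied from Tables 1, 2, and 3), each correlation should equal the corresponding covariance divided by the product of the two standard deviations:

```python
# corr(BTC, gold) = cov(BTC, gold) / (sd_BTC * sd_gold)
cov_btc_gold = 0.00008            # 0.008% from Table 3
sd_btc, sd_gold = 0.2167, 0.0415  # 21.67% and 4.15% from Table 1

print(cov_btc_gold / (sd_btc * sd_gold))  # ~0.0089, matching Table 2's 0.00888
```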
***F.*** ***Variance***
The portfolio variance is calculated using equation 3.9 in chapter 3. Because five instruments are used in this study, the equation becomes quite long and complex. The variance of each instrument is calculated using the multiplication function
in Microsoft Excel. The portfolio variance is calculated in the spreadsheet with a matrix arrangement designed to facilitate the
calculation of the long and complex equation.
**Table 4.** Instrument Variances

| |BTC|XAU_IDR|Government Bond|LQ45|USD_IDR|
|---|---|---|---|---|---|
|[Individual] Weight (Wi)|0%|6%|2%|31%|61%|
|[Individual] Variance|0.000%|0.001%|0.000%|0.007%|0.013%|
|[Individual] Expected Return|3.26%|0.86%|0.26%|-0.14%|0.29%|
|[Individual] Expected Return * (Wi)|0.000|0.001|0.000|0.000|0.002|

|Portfolio|Value|
|---|---|
|Total Weight (Wp)|100%|
|Variance|0.022%|
|Standard Deviation|1.50%|
|Expected Return|0.19%|
|Risk-Free Rate|0.375% (4.5% annually)|
|CAL Slope|-12.18%|
***G.*** ***Optimal Portfolio***
To obtain an optimal portfolio, several steps need to be taken, namely forming the minimum variance frontier curve, calculating
the GMV (Global Minimum Variance) Portfolio point, selecting the efficient frontier curve, determining the optimal portfolio point,
and forming several Capital Allocation Lines. The process of determining the optimal portfolio point will be detailed below.
***H.*** ***Forming the Minimum Variance Frontier Curve***
The minimum variance frontier curve is initially formed by the instruments that provide the highest return and the instruments
with the lowest return. Once obtained, 20 other frontier points that minimize variance are formed. As a result, a curve is obtained that opens away from the Y-axis, which represents expected return.
**Figure 2.** Minimum Variance Frontier Curve
***I.*** ***Global Minimum Variance Portfolio***
The principle behind the frontier set of risky portfolios is to capture all levels of risk. However, investors are primarily interested
in portfolios that provide the highest return. The entire range of portfolio compositions between risk levels and return levels is
depicted in the arrangement of points on the efficient frontier of risky assets. From this arrangement, the Global Minimum Variance (GMV) Portfolio is determined, which is the portfolio with the smallest attainable variance.
**Table 5.** Global Minimum Variance

| |Instrument|Weight|
|---|---|---|
|W1|BTC|0%|
|W2|XAU_IDR|6%|
|W3|Government Bond|2%|
|W4|LQ45|31%|
|W5|USD_IDR|61%|
| |Total|100%|

|Portfolio|Value|
|---|---|
|Variance|0.022%|
|Std Dev|1.50%|
|Exp Return|0.19%|
|Risk-Free Rate|0.375%|
|Slope|-12.18%|
**Figure 3.** Global Minimum Variance Portfolio
The GMV point represents the formation of the lowest-risk and efficient portfolio, obtained by minimizing the variance in the
portfolio. Since minimizing variance in the portfolio corresponds to the points on the minimum frontier curve, the GMV point is
guaranteed to lie on this minimum frontier curve. The GMV point is located on the curve with the smallest variance or standard
deviation, so it lies at the end of the curvature of the minimum frontier curve. As this point is at the far end of the curve, it is ensured
to be unique. If a line is drawn from the GMV point parallel to the X-axis (standard deviation), it forms the GMV line that serves to
separate the efficient curve and the inefficient curve. The efficient curve is the minimum frontier curve located above the GMV line,
while the inefficient curve consists of the minimum frontier below the GMV line.
***J. Efficient Frontier Curve of Risky Assets***
The Efficient Frontier Curve of Risky Assets is a segment of the minimum variance frontier curve that provides efficient
performance, aiming to achieve higher portfolio returns with the same level of risk. This curve is formed by a collection of portfolios
that are located above the GMV portfolio line.
**Figure 4.** Efficient Frontier Curve
The connected curve above the GMV portfolio line represents the efficient frontier of risky assets, while the disjointed curve below the GMV portfolio line represents the inefficient frontier. The efficient frontier is a plot of the dominant portfolios, as they have higher returns than portfolios with the same standard deviation located below the GMV portfolio.
Assuming that investors are rational and risk-averse, they will choose the portfolio with the higher return when faced with two portfolios that have the same level of risk. Therefore, portfolios located below the GMV portfolio do not need to be depicted in the graph above. On the efficient frontier curve, the portfolio with the lowest level of risk is the GMV portfolio, with a standard deviation of 1.50% and a return of 0.19%. The curve then bends parabolically, and the maximum return of 3.26% is achieved at a standard deviation of 21.67%, where the entire portfolio is invested in bitcoin.
**Figure 5.** Efficient Frontier Curve 2
As seen in Figure 5, the curve formed below the GMV portfolio line represents the inefficient frontier. This is evident in the case of the LQ45 instrument, which bears a risk of 5.27%. By diversifying and forming a portfolio, the expected return can be increased: by observing the intersection point of LQ45 with the efficient frontier curve (Ev), the expected return can be increased from -0.14% to 1.29% without increasing the risk. Similarly, as shown in Figure 5, in the case of Government Bond, which obtains
a return of 0.26%, diversifying and forming a portfolio can reduce the risk: by observing the intersection point of Government Bond with the efficient frontier curve (Ep), the risk can be reduced from 5.59% to 1.54% without reducing the return. This demonstrates that diversification in the form of a portfolio can reduce the level of risk in investments. In other words, investing in a single instrument alone is inefficient compared to investing in a portfolio.
***K. Optimal Portfolio***
From various combinations and allocations of instruments resulting in a portfolio, with the help of the solver function in
Microsoft Excel, data on portfolio returns and standard deviations are obtained. These data are plotted on a graph to form the efficient
frontier curve.
**Figure 6.** Optimal Portfolio
The Optimal Portfolio can be determined from one of the points on the efficient frontier curve. To determine which point is the optimal portfolio, another factor needs to be considered, which is the return rate of the risk-free asset. The return rate of the risk-free asset at the end of the research period, or at the time of portfolio formation, is 4.5% per year or 0.375% per month. As mentioned earlier, the best portfolio is the one that provides the best trade-off between the risk taken and the return obtained. The slope of the Capital Allocation Line (CAL) is a ratio that measures the relationship between excess return and risk; it is referred to as the reward-to-variability ratio.
**Table 6.** Optimal Portfolio

| |Instrument|Weight|
|---|---|---|
|W1|BTC|18%|
|W2|XAU_IDR|82%|
|W3|Government Bond|0%|
|W4|LQ45|0%|
|W5|USD_IDR|0%|
| |Total|100%|

|Portfolio|Value|
|---|---|
|Variance|0.27%|
|Std Dev|5.15%|
|Exp Return|1.29%|
|Risk-Free Rate|0.375%|
|Slope|17.75%|
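The reported slope can be verified directly from the table: the reward-to-variability ratio is simply the excess return divided by the standard deviation.

```python
# CAL slope (reward-to-variability ratio) of the optimal portfolio in Table 6
exp_return = 0.0129   # 1.29% per month
risk_free = 0.00375   # 0.375% per month
std_dev = 0.0515      # 5.15% per month

print((exp_return - risk_free) / std_dev)  # ~0.1777, close to the 17.75% reported
```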
From Table 6, it can be seen that the portfolio consists of only two instruments, namely bitcoin and gold. Gold has the largest weight, with a composition of 82%, while the weight of bitcoin is only 18%. The optimal portfolio is thus formed from a combination of bitcoin and gold, whose returns have a correlation of only 0.00888, or approximately 0.9%. This is in line with Markowitz's theory that, in order to reduce risk, investors need to form a portfolio from instruments with the lowest possible correlation, so that losses incurred on one or more instruments in the portfolio can be offset by other, weakly correlated instruments.
***L. Capital Allocation Line and Efficient Frontier Curve***
In determining the previous portfolio, all the instruments used were risky assets. If we include an element or opportunity to invest
in a risk-free asset, such as the interest rate of Bank Indonesia Certificates, a new portfolio will be obtained. The risk-free asset will
be linked to a risky portfolio and form a straight line called the Capital Allocation Line (CAL). By finding the point where CAL
intersects the Efficient Frontier curve, an optimal alternative portfolio can be obtained, known as the tangency portfolio, which
represents the maximum slope (CAL slope) between the return of risky assets and the risk-free asset on the Efficient Frontier curve.
The risk-free point (r_f) represents an instrument with a combination of standard deviation and expected return that is free from risk (standard deviation = 0), obtained from the Bank Indonesia interest rate instrument (Sertifikat Bank Indonesia - SBI). In this study, the average interest rate over the research period was taken, which is 4.5% per year or 0.375% per month. Thus, for the risk-free asset, the point (r_f) is obtained at the coordinates (0, 0.375%). Therefore, CAL(A) can be formed by connecting the point
(r_f) and the maximum expected return, which is the return of the bitcoin instrument. Bitcoin has the highest expected return among
individual assets, which is 3.26% with a risk level of 21.67%. For the second asset allocation line, CAL(G) can be formed by
connecting the point (r_f) and the global minimum variance portfolio (GMV portfolio) point. The GMV portfolio has an expected
return of 0.19% with a risk level of 1.5%. As for CAL(P), it is formed from the point (r_f) to the tangency point between CAL and
the efficient frontier curve. This point represents the optimal portfolio that provides the highest performance, with an expected return
of 1.29% and a risk level of 5.15%, as shown in Figure 7 below:
**Figure 7.** Capital Allocation Line
If a line is drawn from the risk-free asset rate point (r_f) parallel to the X-axis (standard deviation), each CAL forms an angle with this line. Among the capital allocation lines (CALs), CAL(P) forms the largest angle. This is the optimal portfolio according to Sharpe (1995), as its slope is the highest among those of the other CALs.
**CONCLUSION AND RECOMMENDATION**
The investment portfolio instruments have varying levels of return and risk. Bitcoin has the highest return among the instruments, at 3.26%. However, it is also associated with a high level of risk, reaching 21.67%. Other instruments such as gold, government bonds, LQ45, and the US dollar have much lower levels of return compared to bitcoin.
Diversification in investment can help investors increase their investment returns while maintaining the same level of risk as
individual assets. Additionally, the risk level of an asset can be reduced in a portfolio investment with the same return level as
individual assets. Based on the calculation of returns from the five instruments and the average standard deviation, an optimal
portfolio can be obtained by combining two instruments: 18% bitcoin and 82% gold. This optimal portfolio can achieve an expected
return of 1.29% with a risk level of 5.15%. Considering a risk-free rate of 0.375%, this portfolio forms a slope of 0.1775, which is
the largest slope formed between the combination of risk-free instruments and risky portfolios. Investors should allocate their funds
more wisely, considering not only the highest return but also the associated risk. High returns often come with high risks, so investors
need to assess the risk-return trade-off before making investment decisions. It is recommended for future research to use data from
a period that is not a transitional phase. The data used in this study covers the years 2018 to 2022, which includes the period affected
by the COVID-19 pandemic starting from early 2020. This global pandemic has significantly influenced all global economic
movements, and it may be beneficial to analyze data from a more stable period for a more accurate assessment of investment
performance.
**REFERENCES**
1. Algarvio, H., Lopes, F., Sousa, J., & Lagarto, J. (2017). Multi-agent electricity markets: Retailer portfolio optimization using
Markowitz theory. Electric Power Systems Research, 148, 282-294.
2. Bodie, Zvi, Alex Kane, & Alan J. Marcus. (2011). Investments. Singapore: Irwin/McGraww-Hill.
3. Dian, C. (2020). Pembentukan Portofolio Optimal Pada Beberapa Indeks Saham Menggunakan Model Markowizt. Jurnal
Akuntansi Muhammadiyah (JAM), 10(2), 149-159.
4. Elton, Edwin J. and Martin J. Gruber (1995). Modern Portfolio Theory and Investment Analysis. John Wiley & Sons.
5. Farkhati, F., Hoyyi, A., & Wilandari, Y. (2014). Analisis Pembentukan Portofolio Optimal Saham dengan Pendekatan
Optimisasi Multiobjektif untuk Pengukuran Value at Risk. Jurnal Gaussian, 3(3), 371-380.
6. Fernández-Navarro, F., Martínez-Nieto, L., Carbonero-Ruz, M., & Montero-Romero, T. (2021). Mean Squared Variance
Portfolio: A Mixed-Integer Linear Programming Formulation. Mathematics, 9(3), 223.
7. Fischer, Donald E. and Jordan, Ronald J. (1995). Security Analysis and Portfolio Management. Prentice Hall Inc.
8. Grinold, Richard C. and Ronald N.Kahn. (1995). Active Portfolio Management: Quantitative Theory and Applications.
Chicago: Probus Publishing.
9. Gurrib, I. (2014). Diversification in Portfolio Risk Management: The Case of UAE Financial Market. International Journal
of Trade, Economic and Finance, 445-449.
10. Hali, N. A., & Yuliati, A. (2020). Markowitz Model Investment Portfolio Optimization: a Review Theory. International
Journal of Research in Community Services, 1(3), 14-18.
11. Hanif, A., Hanun, N. R., & Febriansah, R. E. (2021). Optimization of Stock Portfolio Using the Markowitz Model in the Era
of the COVID-19 Pandemic. The International Journal of Applied Business, 5(1), 37-50.
12. Ivanova, M., & Dospatliev, L. (2017). Application of Markowitz portfolio optimization on Bulgarian stock market from
2013 to 2016. International Journal of Pure and Applied Mathematics, 117(2), 291-307.
13. Jones, Charles P. (2000). Investment: Analysis and Management (7th Edition). USA: Wiley & Son, Inc.
14. Kamali, S. (2014). Portfolio optimization using particle swarm optimization and genetic algorithm. Journal of mathematics
and computer science, 10(2), 85-90.
15. Konno, H., & Yamazaki, H. (1991). Mean-absolute deviation portfolio optimization model and its applications to Tokyo
stock market. Management science, 37(5), 519-531.
16. Kritzman, Mark P. (1990). Asset allocation for institutional investors (2nd Edition). USA: McGraw-Hill Companies.
17. Lee, H. S., Cheng, F. F., & Chong, S. C. (2016). Markowitz portfolio theory and capital asset pricing model for Kuala
Lumpur stock exchange: A case revisited. International Journal of Economics and Financial Issues, 6(3S), 59-65.
18. Levy, Haim. (1998). Introduction to Investments. South-Western Educational Publishing.
19. Lindblad, J. T. (2015). Foreign direct investment in Indonesia: Fifty years of discourse. Bulletin of Indonesian Economic
Studies, 51(2), 217-237.
20. Manurung, Adler Haymans and C. Berlian. (2004). Portofolio investasi: Studi empiris 1996-2003. Majalah Usahawan, No. 8, Th. XXXIII, 44-48.
21. Markowitz, Harry M. (1952). Portfolio Selection. Journal of Finance, 7, 77-91.
22. Muis, M. A., & Adhitama, S. (2021). The Optimal Portofolio Creation Using Markowitz Model. Accounting and Financial
Review, 4(1), 72-81.
23. Negara, I. N. W., Langi, Y. A., & Manurung, T. (2021). Analisis Portofolio Saham Model Mean–Variance Markowitz
Menggunakan Metode Lagrange. D'cartesian, 9(2), 173-180.
24. Reilly, Frank K and Brown, Keith C. (2000). Investment analysis and portfolio management (6th Edition). USA: Harcourt,
Inc.
25. Reilly, Frank K and Brown, Keith C. (2006). Investment analysis and portfolio management (8th Edition). USA: Thomson South-Western.
26. Septyanto, E. D. (2019). Analisis Portofolio Optimal Menggunakan Metode Multi Objektif pada Saham Jakarta Islamic
Index. UNP Journal of Mathematics, 2(1), 1-6.
27. Sharpe, William F.; Gordon J. Alexander; Jeffery V. Bailey. (1995). Investments (5th Edition). Prentice Hall.
28. Siregar, B., & Pangruruk, F. A. (2021). A Portfolio Optimization Based on Clustering in Indonesia Stock Exchange: A Case
Study of The Index LQ45. Indonesian Journal of Business Analytics, 1(1), 59-70.
29. Verdiyanto, R. (2020). An Empirical Implementation of Markowitz Modern Portfolio Theory on Indonesia Sharia Equity
Fund: A Case of Bahana Icon Syariah Mutual Fund. Journal of Accounting and Finance in Emerging Economies, 6(4),
1159-1172.
30. Xiao, Y., & Watson, M. (2019). Guidance on conducting a systematic literature review. Journal of Planning Education and
Research, 39(1), 93-112.
***Cite this Article: I Made Gede Abandi Semeru, Yunieta A. Nainggolan (2023).*** ***Investment Portfolio Optimization in Indonesia***
***(Study On: Lq-45 Stock Index, Government Bond, United States Dollar, Gold and Bitcoin). International Journal of Current***
***Science Research and Review, 6(7), 4922-4934***
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.47191/ijcsrr/v6-i7-108?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.47191/ijcsrr/v6-i7-108, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "GOLD",
"url": "https://ijcsrr.org/wp-content/uploads/2023/07/108-24-2023.pdf"
}
| 2,023
|
[
"JournalArticle"
] | true
| 2023-07-24T00:00:00
|
[
{
"paperId": "ebdbe428e77abe9440101d40b9071b0339cb790c",
"title": "Portfolio Optimization Based on Clustering in Indonesia Stock Exchange: A Case Study of The Index LQ45"
},
{
"paperId": "80c2009d0d3cb61af902034b6655ab5cc2fb7314",
"title": "The Optimal Portofolio Creation using Markowitz Model"
},
{
"paperId": "69ed95136cba84dc62f6a7e42c418df470f58b7a",
"title": "Optimization of Stock Portfolio Using the Markowitz Model in the Era of the COVID-19 Pandemic"
},
{
"paperId": "8104adafaf320c5c5c2b161eceada70cb09e88d4",
"title": "Mean Squared Variance Portfolio: A Mixed-Integer Linear Programming Formulation"
},
{
"paperId": "43afbb410455c8963227306a45df9b363cbf4040",
"title": "Analisis Portofolio Saham Model Mean – Variance Markowitz Menggunakan Metode Lagrange"
},
{
"paperId": "42b798ae53824722daf8cfef1e17142114abf08e",
"title": "An Empirical Implementation of Markowitz Modern Portfolio Theory on Indonesia Sharia Equity Fund: A Case of Bahana Icon Syariah Mutual Fund"
},
{
"paperId": "e4295ff08b2e20b9177fd58e42e5f1de2411353b",
"title": "Markowitz Model Investment Portfolio Optimization: a Review Theory"
},
{
"paperId": "662bdd2a44672f09a3d77e00012e5a8c9430acfa",
"title": "PEMBENTUKAN PORTOFOLIO OPTIMAL PADA BEBERAPA INDEKS SAHAM MENGGUNAKAN MODEL MARKOWIZT"
},
{
"paperId": "454ca515348a27984cc6fd2c6d5eef12ae9e23d5",
"title": "APPLICATION OF MARKOWITZ PORTFOLIO OPTIMIZATION ON BULGARIAN STOCK MARKET FROM 2013 TO 2016"
},
{
"paperId": "5572fcc987d521aa4c0d244291536358b49cbd1a",
"title": "Guidance on Conducting a Systematic Literature Review"
},
{
"paperId": "0ebe8309c30b2d0384b15c179da8bbb5aadbeac2",
"title": "Multi-agent electricity markets: Retailer portfolio optimization using Markowitz theory"
},
{
"paperId": "bffda437077f0dabfb03d47c99c6ec78f895d17c",
"title": "Markowitz Portfolio Theory and Capital Asset Pricing Model for Kuala Lumpur Stock Exchange: A Case Revisited"
},
{
"paperId": "92f49bf7fdd27f42b033daf90f1a1fcc802363e8",
"title": "Investments"
},
{
"paperId": "ff5666f3caec461a4d8683d319073f79c67bda3b",
"title": "ANALISIS PEMBENTUKAN PORTOFOLIO OPTIMAL SAHAM DENGAN PENDEKATAN OPTIMISASI MULTIOBJEKTIF UNTUK PENGUKURAN VALUE AT RISK"
},
{
"paperId": "8447623aec9be706bd7d517bfc700d91c8a8773a",
"title": "Portfolio Optimization using Particle Swarm Optimization and Genetic Algorithm"
},
{
"paperId": "ba4c5aab9f535b63645b7f7947f7ace9c8b0af31",
"title": "Diversification in Portfolio Risk Management: The Case of the UAE Financial Market"
},
{
"paperId": "16053126f5027144fe23feebde9c7cea23193644",
"title": "Mean-absolute deviation portfolio optimization model and its applications to Tokyo stock market"
},
{
"paperId": "a8d5640416e8e7f0d60d62991c1ccedef10023a4",
"title": "Security analysis and portfolio management"
},
{
"paperId": "39a0f436cbd7b98830b80b58f7e7b115673ef295",
"title": "Portfolio Selection"
},
{
"paperId": null,
"title": "Analisis Portofolio Optimal Menggunakan Metode Multi Objektif pada Saham Jakarta Islamic Index"
},
{
"paperId": "7806aa9cc9ab7a47c10bdea38f5b048588a4aa14",
"title": "Investment Analysis And Portfolio Management 7th Edition"
},
{
"paperId": null,
"title": "4933 * Corresponding Author: I Made Gede Abandi Semeru Volume 06 Issue 07 July 2023 19"
},
{
"paperId": null,
"title": "Portofolio investasi: Studi empiris 1996-2003"
},
{
"paperId": "0b50e3fbffdf968df31965b3a521e6a22e239644",
"title": "Active Portfolio Management: Quantitative Theory and Applications"
},
{
"paperId": null,
"title": "Invesment (5th Edition)"
},
{
"paperId": null,
"title": "Asset allocation for institutional investors (2th Edition)"
},
{
"paperId": "3019ae6463569cec9969ebad1b195f5996a333db",
"title": "Modern portfolio theory and investment analysis"
}
] | 11,391
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
},
{
"category": "Environmental Science",
"source": "s2-fos-model"
},
{
"category": "Law",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/010d89793ad314e40acc52f726a8eb4440010419
|
[
"Computer Science"
] | 0.85231
|
Blockchain as privacy and security solution for smart environments: A Survey
|
010d89793ad314e40acc52f726a8eb4440010419
|
arXiv.org
|
[
{
"authorId": "31584949",
"name": "Maad Ebrahim"
},
{
"authorId": "145556030",
"name": "A. Hafid"
},
{
"authorId": "46256177",
"name": "Etienne Elie"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"ArXiv"
],
"alternate_urls": null,
"id": "1901e811-ee72-4b20-8f7e-de08cd395a10",
"issn": "2331-8422",
"name": "arXiv.org",
"type": null,
"url": "https://arxiv.org"
}
|
Blockchain was always associated with Bitcoin, cryptocurrencies, and digital asset trading. However, its benefits are far beyond that. It supports technologies like the Internet-of-Things (IoT) to pave the way for futuristic smart environments, like smart homes, smart transportation, smart energy trading, smart industries, smart supply chains, and more. To enable these environments, IoT devices, machines, appliances, and vehicles, need to intercommunicate without the need for centralized trusted parties. Blockchain replaces these trusted parties in such trustless environments. It provides security enforcement, privacy assurance, authentication, and other key features to IoT ecosystems. Besides IoT-Blockchain integration, other technologies add more benefits that attract the research community. Software-Defined Networking (SDN), Fog, Edge, and Cloud Computing technologies, for example, play a key role in enabling realistic IoT applications. Moreover, the integration of Artificial Intelligence (AI) provides smart, dynamic, and autonomous decision-making capabilities for IoT devices in smart environments. To push the research further in this domain, we provide in this paper a comprehensive survey that includes state-of-the-art technological integration, challenges, and solutions for smart environments, and the role of these technologies as the building blocks of such smart environments. We also demonstrate how the level of integration between these technologies has increased over the years, which brings us closer to the futuristic view of smart environments. We further discuss the current need to provide general-purpose Blockchain platforms that can adapt to unique design requirements of different applications and solutions. Finally, we provide a simplified architecture of futuristic smart environments that integrate these technologies, showing the advantage of such integration.
|
# Blockchain as privacy and security solution for smart environments: A Survey

**MAAD EBRAHIM**[1], **ABDELHAKIM HAFID**[1] (Member, IEEE), and **ETIENNE ELIE**[2]

1 Department of Computer Science and Operations Research (DIRO), University of Montreal, Montreal, QC H3T 1J4, Canada
2 Intel Corporation, 2200 Mission College Blvd, Santa Clara, CA 95054

Corresponding author: Maad Ebrahim (e-mail: maad.ebrahim@umontreal.ca).
**ABSTRACT** Blockchain has always been associated with Bitcoin, cryptocurrencies, and digital asset trading. However, the benefits of Blockchain go far beyond that. It has recently been used to support and augment many other technologies, including the Internet-of-Things (IoT). IoT, with the help of Blockchain, paves the way for futuristic smart environments, like smart homes, smart transportation, smart energy trading, smart industries, smart supply chains, and more. To enable these smart environments, IoT devices, machines, appliances, and vehicles will need to intercommunicate without the need for a centralized trusted party. Blockchain can replace trusted third parties by providing secure means of decentralization in such trustless environments. It also provides security enforcement, privacy assurance, authentication, and other important features to IoT ecosystems. Besides the benefits of Blockchain-IoT integration for smart environments, other technologies also have important features and benefits that have attracted the research community. Software-Defined Networking (SDN) and Fog, Edge, and Cloud Computing technologies, for example, play an important role in enabling realistic IoT applications. Moreover, the integration of Machine Learning and Artificial Intelligence (AI) algorithms provides smart, dynamic, and autonomous decision-making capabilities for IoT devices in smart environments. To push the research further in this domain, we provide in this paper a comprehensive survey that includes state-of-the-art technological integration, challenges, and solutions for smart environments, and the role of Blockchain and IoT technologies as the building blocks of such smart environments. We also demonstrate how the level of integration between these technologies has increased over the years, which brings us closer to the futuristic view of smart environments. We further discuss the current need to provide general-purpose Blockchain platforms that can adapt to the different design requirements of different applications and solutions. Finally, we provide a simplified architecture of futuristic smart environments that integrates all these technologies, showing the advantage of such integration.
**INDEX TERMS** Artificial Intelligence (AI), Blockchain, Cloud Computing, Edge Computing, Fog
Computing, Internet-of-Things (IoT), Software-Defined Networking (SDN), Smart Environments
**I. INTRODUCTION**
Technology is progressing rapidly, even faster than was expected decades ago. The reason for this explosion is the enormous effort of the research community, which aims to facilitate human life by developing a futuristic vision of a smarter earth. There has been a great deal of academic work and industrial adoption to create and implement prototypes of smart cities, which include smart homes, smart factories, smart cars, smart transportation, and various smart human gadgets. Several core technologies helped reach this point of success for this futuristic human civilization. These technologies include, but are not limited to, Internet-of-Things (IoT), Software-Defined Networking (SDN), Artificial Intelligence (AI), and Cloud, Fog, and Edge Computing. In addition, Blockchain was able to augment those technologies with features that are essential for the full automation needed in smart environments. The role of IoT and Blockchain in smart environments can be seen in the growing popularity of the web search terms shown in Fig. 1 over the last few years.
**FIGURE 1.** Google Trends search interest for IoT, Smart Home, and Blockchain terms between 2004 and 2020.

IoT enables every physical device to be connected to the internet in order to communicate with other physical devices and services. This could allow, for example, a future fridge to automatically detect items its users are missing and order them from the nearest grocery store. The system in the grocery store can respond to this order and automatically receive the required monetary value when accepting the transaction. The grocery items are then automatically collected and sent to the customer using a self-driving vehicle. In this futuristic world, the owner of the house does not need to set his smart alarm clock, as it is automatically synchronized to wake him up for his next meeting. To reach his meeting on time, his self-driving car chooses the fastest and safest path using real-time information about traffic in the city. The car is notified in real time of nearby accidents in order to optimize its route. It can also communicate with other smart vehicles along the route to provide the safest driving experience for all vehicles on the road.
Besides IoT, SDN enables dynamic and programmable control and management of the underlying network in a smart way. SDN best suits IoT networks, since they change rapidly in terms of the number of devices, their locations, and the amount of data they send. Moreover, SDN enables the integration of AI and machine learning into the decision-making processes of load balancing, computational offloading, traffic control, and data flow in the network. SDN is considered one of the major factors enabling IoT, and hence smart city innovation [1]. However, it requires intensive computation and storage that cannot be provided by low-power, resource-limited IoT devices. That is why Cloud, Fog, and Edge Computing technologies were introduced: to provide storage and computation resources as paid services to manage such networks.
Cloud Computing provides theoretically unlimited storage and computation resources as paid services. These resources are usually hosted in central data centers in distant geographical locations. This distance can burden the core network with the huge amount of traffic created by IoT devices. It also increases the service response time (delay) for IoT devices, especially when they need feedback on their requests. Such delay may not be acceptable for time-sensitive IoT applications, such as self-driving cars, where a delay of milliseconds can cause catastrophic incidents. Hence, technologies such as Fog and Edge Computing address these problems by bringing those resources closer to the IoT infrastructure. They minimize the delay and save network bandwidth by performing preliminary pre-processing and analysis on IoT data before sending it to the cloud for heavier processing and permanent storage.
**FIGURE 2.** The integration of IoT with Blockchain, SDN, AI, Cloud, Fog, and Edge Computing technologies.

One connection is still missing for all those technologies to enable the futuristic concept of "trustless" smart cities described above. To enable communication among IoT devices that are usually manufactured or owned by different organizations, a trusted third party is typically needed to establish trust among the devices performing the transactions. Blockchain can act as that missing connection by providing this trust mechanism in a decentralized manner. Blockchain can also be used to permanently log the transactions executed by IoT devices, manage digital asset trading, and perform monetary transactions between them. In fact, Blockchains can do much more than that; they can support and enrich the development of the IoT industry, and they can mitigate many of the current limitations of SDN, Cloud, Fog, and Edge solutions for IoT applications.
In this paper, we present a comprehensive survey that shows what Blockchain can provide, through its integration with other technologies, to build the foundation for future smart environments. This integration is oriented towards IoT applications and supported by different emerging technologies (see Fig. 2). We also show a few applications that are brought to life with the help of Blockchain and its integration with those technologies. We further discuss some of the challenges and open research problems of Blockchain and its integration with those technologies. These challenges need to be addressed by the research community in order to provide a ready-to-go Blockchain-based decentralization platform for smart environments.
The rest of the paper is organized as follows. Section II compares this work with existing surveys. We then introduce our definition of smart environments in Section III. Section IV briefly introduces Blockchain and some of its applications. In Section V, we present IoT-Blockchain integration and some applications of such integration. Section VI describes the benefits of integrating Cloud, Fog, and Edge Computing technologies into Blockchain-IoT ecosystems. Sections VII and VIII show how SDN and AI technologies, respectively, help support Blockchain-IoT infrastructures.
In Section IX, we discuss some challenges and open research
problems that need to be addressed to successfully enable
smoother technological integration. Finally, we present in
Section X a general smart environment architecture that
integrates the technologies presented in this survey to satisfy
its requirements.
**II. OUR WORK AND EXISTING SURVEYS**
Presenting Blockchain integration into multiple technologies
in the context of smart environments is what makes this work
unique compared to previous surveys. We show how this
integration is able to augment those technologies, and how
it helps pave the way for smart environments of the future.
Beside giving a brief introduction about Blockchain and
its applications in smart environments, this work discusses
Blockchain integration with IoT to allow for full automation
in smart devices. We then study how Blockchain-based IoT
solutions are augmented with SDN, Cloud, Fog, and Edge
Computing to enhance the capabilities of IoT applications.
Finally, we study the impact of AI and machine learning
algorithms to make such solutions even smarter.
Table 1 shows the level of technological integration to
Blockchain-IoT solutions in reviews and surveys. The table shows that most existing surveys do not consider the
inclusion of all these technologies to enhance BlockchainIoT integration. In addition, existing surveys do not elaborate
on the direct impact such integration on smart environment
applications. Most surveys focus on integrating Cloud Computing to mitigate the resource limitations of IoT devices
while considering Fog and Edge technologies to provide
privacy and minimize the delay. However, there is a lack of
surveys covering the recent interest in using SDN and AI
technologies for dynamic network management and complex
optimization problems, respectively.
Stojkoska and Trivodaliev [24], for example, focused on the role of IoT in smart home applications. In addition, Bhushan *et al.* [25] discussed the integration of Blockchain to support IoT applications in smart cities. Even though our work is also oriented towards smart environments, we additionally study the effect of technological integration in establishing such smart environments. We cover how other technologies help mitigate several problems in Blockchain-based smart IoT systems. Other surveys have also discussed IoT integration with other technologies, like AI [26] or SDN [27]. However, these surveys do not consider smart environment applications, and do not cover the benefits of Blockchain's decentralization properties.
The majority of the surveys focus only on Blockchain and IoT integration, including the benefits and challenges of such integration [2], [5], [6], [9], [14]. Ali *et al.* [11], for example, reviewed Blockchain-based platforms and services that are used to augment IoT applications. Similarly, Lao *et al.* [17] covered the use of Blockchain to address IoT limitations and secure IoT networks. They also gave a comprehensive overview of IoT-Blockchain applications, including architectures, communication protocols, and traffic models for such applications.
**TABLE 1.** The integration of Cloud, Edge, Fog, SDN, and AI technologies in Blockchain-IoT solutions.

| Authors | Year | Edge/Fog | Cloud | SDN | AI |
|---|---|---|---|---|---|
| Conoscenti *et al.* [2] | 2016 | | | | |
| Christidis and Devetsikiotis [3] | 2016 | | | | |
| Reyna *et al.* [4] | 2018 | | | | |
| Ramachandran and Krishnamachari [5] | 2018 | | | | |
| Panarello *et al.* [6] | 2018 | | | | |
| Fernández-Caramés and Fraga-Lamas [7] | 2018 | | | | |
| Banerjee *et al.* [8] | 2018 | | | | |
| Atlam *et al.* [9] | 2018 | | | | |
| Zheng *et al.* [10] | 2018 | | | | |
| Ali *et al.* [11] | 2019 | | | | |
| Dai *et al.* [12] | 2019 | | | | |
| Ferrag *et al.* [13] | 2019 | | | | |
| Makhdoom *et al.* [14] | 2019 | | | | |
| Salah *et al.* [15] | 2019 | | | | |
| Yang *et al.* [16] | 2019 | | | | |
| Lao *et al.* [17] | 2020 | | | | |
| Alharbi [18] | 2020 | | | | |
| Li *et al.* [19] | 2020 | | | | |
| Luo *et al.* [20] | 2020 | | | | |
| Xie *et al.* [21] | 2020 | | | | |
| Mohanta *et al.* [22] | 2020 | | | | |
| Chamola *et al.* [23] | 2020 | | | | |
There are several other surveys that focus only on the effort to secure IoT networks using Blockchain [8], [28]. Riabi *et al.* [28], for example, covered contributions that use Blockchain to mitigate single points of failure in centralized access control architectures for IoT devices.
Even though the majority of the surveys deal only with Blockchain-IoT integration, some of them have a different focus or interest. Reyna *et al.* [4] covered Blockchain-IoT integration, in particular running Blockchain on IoT devices. Fernández-Caramés and Fraga-Lamas [7] reviewed Blockchain-IoT integration in healthcare, logistics, smart cities, and energy management systems. Likewise, Ferrag *et al.* [13] focused on Blockchain-IoT integration for applications in the Internet-of-Vehicles (IoV), Internet-of-Energy (IoE), Internet-of-Cloud (IoC), and Edge Computing. In addition to those applications, Mohanta *et al.* [22] provided an overview of existing security solutions for IoT networks using Blockchain and AI technologies. Garcia [29] reviewed the integration of AI, IoT, and Blockchain from taxation, legal, and economic points of view.
There are also other surveys that focus on the integration of Blockchain with other technologies outside the context of IoT. Ekramifard *et al.* [30], for example, produced a systematic literature review on the integration of Blockchain with AI, particularly identifying the applications that can benefit from such integration. Similarly, Salah *et al.* [15] surveyed the ways Blockchain can enhance and address AI limitations. They also reviewed the role of Blockchain in achieving decentralized AI schemes. Additionally, Akter *et al.* [31] investigated a diverse set of applications in the literature that
are based on Blockchain, AI, and Cloud technologies. Xie *et al.* [21] surveyed Blockchain-based solutions that augment the Cloud Computing technology. They also studied the role of Blockchain in providing decentralized Cloud Exchange services.
Other surveys focused on Blockchain integration with Fog and/or Edge Computing technologies. For instance, Baniata and Kertesz [32] covered contributions that integrate Blockchain with Fog Computing. Likewise, Yang *et al.* [16] provided a survey on Blockchain integration with Edge Computing, which can help decentralize network management. Moreover, SDN integration with Blockchain has also been presented in other surveys [18], [19]. Alharbi [18], for example, surveyed existing papers that secure SDN architectures against different attacks using Blockchain, and Li *et al.* [19] reviewed how Blockchain and SDN technologies complement each other when integrated.
Our work goes far beyond a single technological integration between Blockchain and IoT to mitigate some of their limitations. We review existing work according to different levels of technological integration, oriented towards smart environment applications. We briefly discuss the benefits of the Blockchain technology itself, and the benefits of Blockchain-IoT solutions in smart environments. Then, we present the benefits of integrating Cloud, Fog, Edge, SDN, and AI technologies into Blockchain-IoT smart systems. We then present open research problems to be addressed to create smooth technological integration for smart futuristic environments. Finally, we provide a simplified smart environment architecture that shows how Blockchain helps integrate the various technologies discussed in this paper.
**III. SMART ENVIRONMENTS**
Throughout history, humans have created innovative solutions to make their lives easier. Recent technological inventions allow us to live in an environment that would have been considered science fiction in the past. However, scientists have always wanted to push this further by creating a smarter world where automation is included in every aspect of human life. This became possible with the introduction of IoT technology, where IoT sensors and actuators are embedded in physical devices and machines, making them able to interconnect through the internet. IoT, with the help of big data analysis, AI, machine learning, and many other innovative technologies, allowed for the realization of these smart environments [33], [34].

Governments, industries, and scientists are all racing towards creating and prototyping smart cities to boost the quality of life of their citizens. Smart homes, for example, provide a futuristic domestic environment that delivers a technologically advanced living experience [35], while smart education includes smart campuses, smart universities, and smart classrooms for students in these smart cities [36]. Various innovative solutions have been used to mitigate the challenges in realizing these environments, like using Blockchain to secure smart city applications [37]. The difficulty of realizing these applications increases as the human interactions with the smart infrastructure become more complex, as in the case of smart transportation systems [38]. Therefore, research on these complex smart infrastructures, including smart transportation systems, has become a dominant topic in the context of smart environments [39].
The fourth industrial revolution, also called Industry 4.0, is another important component of smart environments, which could only be realized after introducing smart factories and smart supply chain systems [40]. To decrease cost while increasing the quality of mass production, businesses have pushed to shift from traditional manufacturing to smart factories [41]. The introduction of smart industry allows for the automation of intelligent predictive maintenance strategies, which provide major cost savings over time-based preventive maintenance [42]. The fourth industrial revolution in smart cities has also led to the development of distributed smart energy trading systems, which require Blockchain for security and reliability [43].
Smart farming is another component of smart environments, enabled by IoT, Wireless Sensor Networks (WSN), Cloud Computing, Fog Computing, and big data analytics [44]. Big data analytics play an important role in bringing real-time decision-making capabilities into smart farming environments by obtaining valuable information from the collected data [45]. For example, machine learning can automate decision-making in smart farming by predicting soil drought and crop productivity [46]. Smart vehicles also exist in almost all smart environments, including smart farming, smart factories, smart cities, and smart transportation systems, and Blockchain can secure the transactions between these smart vehicles [47].
IoT paved the way for pervasive computing, also called ubiquitous computing, which is the interconnection of sensors with computing capabilities through the internet. These smart devices are the main building blocks of smart environments, which aim to provide a comfortable living experience for humans by performing repetitive or risky tasks. Furthermore, other technologies are needed to get the full benefit out of these devices, including AI, machine learning, and big data analytics, as well as computer networks, parallel and distributed computing, and much more. The realization of smart environments requires a great deal of work, so in this paper we focus on how Blockchain provides security, privacy, and other features to smart environments with the help of other technologies. The outcomes of this research shed light on what Blockchain can provide for smart environments. In addition, we propose a simplified IoT-based smart environment architecture using Blockchain with Cloud/Fog Computing, SDN, and AI technologies.
**IV. BLOCKCHAIN AND ITS APPLICATIONS**
Starting with Bitcoin in 2008, as proposed by Nakamoto [48], Blockchain has gone far beyond the world of cryptocurrencies. It has enabled many smart environment applications and solutions, as clearly seen in recent studies. For example, Blockchain provides authentication and authorization for smart city applications [49], like managing real estate deals in smart cities [50]. Blockchain has also been used as a core framework to secure smart vehicles [50] and smart grid systems [51].

**FIGURE 3.** The concept of Blockchain.
Blockchain is essentially a distributed ledger made up of blocks (lists of transactions) that are backward-connected through hash pointers, as shown in Fig. 3, with the transactions organized in a Merkle tree inside each block. These pointers guarantee the immutability of the ledger. Different consensus algorithms, such as Proof-of-Work (PoW) [48], have been proposed to securely update the ledger without the need for a centralized entity. In PoW, the nodes in the network compete to solve a puzzle that can only be solved by trying different nonce values. Blockchain can provide pseudonymity and traceability, which are very useful in dozens of domains. Those features are achieved with the help of different techniques, including cryptography, hashing, and digital signatures.
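To make the structure in Fig. 3 concrete, the following minimal Python sketch (an illustration of the concept, not a production implementation) chains blocks through hash pointers over a toy Merkle root and mines each block by searching for a nonce whose hash has a required number of leading zeros:

```python
import hashlib
import json

def merkle_root(transactions):
    """Toy Merkle root: repeatedly hash pairs of transaction hashes."""
    hashes = [hashlib.sha256(tx.encode()).hexdigest() for tx in transactions]
    while len(hashes) > 1:
        if len(hashes) % 2:              # duplicate the last hash if the count is odd
            hashes.append(hashes[-1])
        hashes = [hashlib.sha256((a + b).encode()).hexdigest()
                  for a, b in zip(hashes[::2], hashes[1::2])]
    return hashes[0]

def mine_block(prev_hash, transactions, difficulty=4):
    """PoW: try nonce values until the block hash starts with `difficulty` zeros."""
    header = {"prev_hash": prev_hash, "merkle_root": merkle_root(transactions)}
    nonce = 0
    while True:
        digest = hashlib.sha256(
            (json.dumps(header, sort_keys=True) + str(nonce)).encode()
        ).hexdigest()
        if digest.startswith("0" * difficulty):
            return {**header, "nonce": nonce, "hash": digest}
        nonce += 1

genesis = mine_block("0" * 64, ["tx1", "tx2"])
block_2 = mine_block(genesis["hash"], ["tx3"])  # backward-connected via hash pointer
```

Tampering with any transaction changes the Merkle root, which changes the block hash and invalidates every later hash pointer; this is the immutability property described above.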
In a public Blockchain, anyone can participate in the network to create and validate transactions and blocks. Contrarily, only authorized nodes can join a private Blockchain to read, create, or validate transactions and blocks. A consortium Blockchain mixes features of both private and public Blockchain implementations, where only permissioned users can perform Blockchain transactions, with different levels of restrictions. Bitcoin and Ethereum are examples of public Blockchain implementations, and Hyperledger Fabric [52] is an example of a consortium Blockchain. We can also have a private Blockchain using a private version of Hyperledger Fabric or Ethereum [53]. Zheng *et al.* [10] gave a taxonomy of the different types and implementations of Blockchain, and the different consensus algorithms used in them. Besides the PoW and Proof-of-Stake (PoS) consensus algorithms, Delegated-PoS (DPoS) [54] and Practical Byzantine Fault Tolerance (PBFT) [55] have also been considered in different implementations. However, PoW is still the most secure consensus algorithm despite its huge computational cost and energy consumption.
The peer-to-peer (P2P) network of a Blockchain provides decentralized management of the data that is synchronized among all peers in the network. To keep the data synchronized efficiently, two message transfer protocols are usually adopted between the nodes: Gossip [56] and Kademlia [57]. Bitcoin uses Gossip, which spreads information by communicating only with neighbors, mimicking the spread of epidemic diseases. Ethereum's communication protocol, on the other hand, is inspired by Kademlia, which maintains a distributed hash table that specifies the communicating neighbors for each node. The peers in the network are usually of three types: core, full, and light nodes. All peers participate in validating and broadcasting transactions and blocks. Core nodes are responsible for network routing, whereas full nodes are responsible for storing the whole Blockchain. Light nodes are only responsible for maintaining users' accounts on resource-constrained devices.
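As a rough illustration of the Gossip idea, the toy Python simulation below assumes a ring topology of eight peers; real network layers are far more elaborate, but the epidemic-style spreading pattern is the same:

```python
import random

def gossip(adjacency, source, rounds=6, fanout=2):
    """Epidemic-style spread: each informed node forwards to a few random neighbors."""
    informed = {source}
    for _ in range(rounds):
        for node in list(informed):
            neighbors = random.sample(adjacency[node],
                                      min(fanout, len(adjacency[node])))
            informed.update(neighbors)
    return informed

# A toy ring network of 8 peers, each connected to its two immediate neighbors.
peers = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}
print(sorted(gossip(peers, source=0)))  # the message reaches (almost) all peers
```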
Asymmetric cryptography and zero-knowledge proofs [58] can be used to secure users' data with the help of Blockchain [59]. Hence, Blockchain can be used as a secure distributed database system, e.g., a medical record system. Patients can ensure data integrity and privacy by giving data access only to specific medical firms. Records from different hospitals and clinics can be obtained in a secure manner without vulnerable central authorities. The implementation of Peng *et al.* [60] prevents falsified data retrieval to ensure the authenticity, integrity, and efficiency of Blockchain data queries. With the help of smart contracts, automation in such systems mitigates the need for human-centric auditing and revision.
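The sketch below shows how such a signature-based access grant could look, using the third-party Python `cryptography` package with Ed25519 keys; the record contents and party names are hypothetical:

```python
# Requires the third-party `cryptography` package (pip install cryptography).
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()   # held by the patient
public_key = private_key.public_key()        # shared with the medical firm

record = b'{"patient": "alice", "grant_access_to": "clinic-42"}'
signature = private_key.sign(record)         # patient authorizes access

try:
    public_key.verify(signature, record)     # anyone can check authenticity
    print("access grant is authentic")
except InvalidSignature:
    print("tampered or forged grant")
```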
Smart contracts [61] were proposed in 1994 by Nick Szabo [62] and rediscovered in the context of Blockchain with Ethereum [63]. They enforce rules and conditions inside transactions to lower the cost induced by trusted third parties, such as law firms. These contracts are automated and permanently stored in the Blockchain's immutable ledger. Running code inside the Blockchain using smart contracts adds decentralization to applications in trustless environments. Smart contracts add automation to network management, security services, and IoT applications. However, Blockchain still suffers from many challenges and problems, like scalability [64] and privacy leakage [65]. The electricity consumption of PoW and the wealth-concentration ("capitalism") problem of the PoS algorithm [66] are some consensus-related problems in Blockchain. Moreover, the financial use of Blockchain still needs much work from legal and law enforcement perspectives [67].
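The escrow-style sketch below illustrates the smart contract idea in plain Python, with hypothetical parties and amounts; on a real platform this logic would be written in a contract language such as Solidity and executed by the chain itself:

```python
class EscrowContract:
    """Toy smart contract: funds are released only when a condition is met.
    Plain Python stands in here for the on-chain contract runtime."""

    def __init__(self, buyer, seller, amount):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.delivered = False
        self.paid = False

    def confirm_delivery(self, caller):
        if caller != self.buyer:
            raise PermissionError("only the buyer may confirm delivery")
        self.delivered = True

    def release_funds(self):
        # The rule is enforced by code, not by a third party such as a law firm.
        if self.delivered and not self.paid:
            self.paid = True
            return f"transfer {self.amount} to {self.seller}"
        raise RuntimeError("delivery not confirmed or already paid")

contract = EscrowContract(buyer="alice", seller="bob", amount=10)
contract.confirm_delivery("alice")
print(contract.release_funds())  # -> "transfer 10 to bob"
```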
***A. BLOCKCHAIN APPLICATIONS***
Zheng *et al.* [10] classified Blockchain applications into finance systems, reputation systems, public and social services, and security and privacy applications. In this survey, however, we focus on security [68], AI [15], IoT [14], and healthcare [69] applications. We show how Blockchain has led to the development and enhancement of many applications in these domains. Below, we give brief descriptions of a few Blockchain applications that demonstrate the power of Blockchain in enhancing and simplifying human lives. Such applications and prototypes pave the way for the smart environments of the future we are looking for:
*MedRec* [70]: A decentralized system for Electronic Health Records (EHR) and medical research data. This decentralized medical claim system handles EHRs using Blockchain. It allows patients to easily and securely share their medical records across different health insurance companies, medical institutions, clinics, and pharmacies. It guarantees authentication, confidentiality, and accountability for sharing patients' data. Medical stakeholders can be incentivized to play the role of block miners in this system.
*BitAV* [71]: A fast Blockchain-based anti-malware scanning application that secures entire networks. It can provide security services in a decentralized manner to computationally limited environments, like IoT networks, given enough RAM and storage. It is 1,400% faster than conventional antivirus software, with a 500% lower average update propagation flow. It achieves this performance using a P2P network maintenance mechanism inspired by Blockchain consensus.
*OriginChain* [72]: An adaptable Blockchain-based traceability system. It is decentralized, transparent, and tamper-proof; it traces the origin of products across complex supply chains. Private data, customer/product information, product certificates, and photos are kept off-chain to increase performance and save space, while the hashes of that data are kept on-chain to ensure immutability.
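This off-chain/on-chain split can be illustrated in a few lines of Python; the dictionaries below merely stand in for the ledger and the off-chain store, and the record contents are hypothetical:

```python
import hashlib

on_chain = {}    # the immutable ledger (here just record-id -> hash)
off_chain = {}   # bulky private data kept in ordinary storage

def register(record_id, data: bytes):
    off_chain[record_id] = data
    on_chain[record_id] = hashlib.sha256(data).hexdigest()  # only the hash on-chain

def verify(record_id) -> bool:
    """Detect tampering of the off-chain copy against the on-chain hash."""
    return hashlib.sha256(off_chain[record_id]).hexdigest() == on_chain[record_id]

register("product-123", b"certificate + photos ...")
assert verify("product-123")
off_chain["product-123"] = b"forged certificate"
assert not verify("product-123")   # tampering is detected
```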
*E-Voting*: Decentralized electronic voting systems, which are usually required to scale well to large-scale voting [73]. Yang *et al.* [74] used an Ethereum smart contract to prototype a voting system that provides confidentiality using homomorphic encryption. The eligibility of the voters, and the integrity and validity of their votes, can also be verified. Similarly, Khoury *et al.* [75] created transparent, consistent, and deterministic Ethereum smart contracts, which can be modified by voting organizers. Voters must pre-register with mobile phone numbers and can only vote once on that voting platform. Hjálmarsson *et al.* [76] used a smart contract in a private version of Ethereum to guarantee transparency and privacy. They used Blockchain-as-a-Service to host nationwide elections, but additional measures are still needed to support countries with large populations.
*Reputation Systems*: Decentralized systems for rewards and educational records. Sharples and Domingue [77] democratize educational reputations beyond the academic community using a decentralized Blockchain-based framework. It creates a permanent distributed record of intellectual effort and associated reputational rewards. It can also be used for crowd-sourced, timestamped patenting, i.e., proof of academic, artistic, and scientific work.
**V. BLOCKCHAIN & INTERNET OF THINGS (IOT)**
IoT devices in smart environments should be digitally connected in order to share their data and automate their tasks. These devices are usually made up of sensors and actuators that connect through the internet. Blockchain can be used to increase IoT automation and solve a number of its limitations, including security, privacy, and scalability. That makes Blockchain one of the enabling technologies for IoT networks in smart environments. Using Blockchain for decentralized monetary transactions and digital asset trading is also an enabler for IoT devices in smart environments. Blockchain has also been used as a distributed access management system for ever-growing IoT networks to mitigate the overhead of centralized architectures [78].
IoT is one of the biggest trends in today's innovations [79]. It enables physical devices to communicate through the internet to send and receive data and to perform actions. IoT has already entered human life in different domains, such as smart home devices [24], smart cities [80], and smart transportation [81]. It has proven to be well suited for E-business models, especially with the help of Blockchain and smart contracts [82], [83]. As an example, Zhang and Wen [82] proposed an E-business architecture to build systematic, highly efficient, flexible, reasonable, and low-cost business-oriented IoT ecosystems. In addition, Zheng *et al.* [10] discussed IoT-Blockchain integration and its associated challenges. They identified scalability, ever-growing storage, privacy leakage, and selfish mining as critical problems. They pointed out that big data analytics and AI can enhance Blockchain-IoT integration and its applications.
Blockchain technology is a good solution to mitigate the problems of traditional centralized communication and management systems for large-scale IoT networks. On the one hand, the resource limitations of IoT devices and the scalability issues of Blockchain create big problems for Blockchain-IoT integration. On the other hand, continuous research effort has led to innovative solutions to these problems, such as IoTA [84]. IoTA is not an abbreviation; rather, it comes from IoT and the word "iota", which means an extremely small amount. The name reflects its purpose of connecting IoT devices through micro/zero-value transactions. Shabandri and Maheshwari [85] developed this architecture as a protocol to provide trust in IoT networks. It eliminates transaction fees and the concept of mining to solve both of those problems. The main component of IoTA is what is called the Tangle, a directed acyclic graph (DAG) for transaction storage. Shabandri and Maheshwari [85] demonstrated the performance of IoTA by implementing two IoT applications, namely a smart utility meter system and a smart car transaction system.
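The sketch below illustrates the DAG idea in Python with a toy tip-selection rule (each new transaction approves up to two random unapproved transactions); actual IoTA tip selection uses weighted random walks and is considerably more involved:

```python
import random

class Tangle:
    """Toy Tangle: a DAG where each new transaction approves earlier ones,
    so there are no blocks, no miners, and no transaction fees."""

    def __init__(self):
        self.approves = {"genesis": []}   # tx -> transactions it approves

    def tips(self):
        """Transactions not yet approved by anyone."""
        approved = {t for targets in self.approves.values() for t in targets}
        return [tx for tx in self.approves if tx not in approved]

    def add(self, tx_id):
        tips = self.tips()
        chosen = random.sample(tips, min(2, len(tips)))  # approve up to two tips
        self.approves[tx_id] = chosen

tangle = Tangle()
for i in range(5):
    tangle.add(f"tx{i}")
print(tangle.approves)  # each transaction points back to earlier ones (a DAG)
```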
ADEPT (Autonomous Decentralized Peer-to-Peer Telemetry) [86] is another example of a Blockchain-based, database-like framework for decentralized IoT networks. It is a proof-of-concept produced by a collaboration between IBM and Samsung. ADEPT provides a secure and low-cost interaction mechanism for IoT devices, where devices have the ability to make orders, pay for them, and confirm their shipment autonomously. The underlying technologies behind ADEPT are Ethereum smart contracts, BitTorrent file sharing, and TeleHash peer-to-peer messaging. It uses a mix of PoW and PoS consensus algorithms to provide secure decentralization for transaction approval.
The huge number of IoT devices creates a burden on the network and raises problems such as data security, privacy, and integrity. Two of the most challenging issues in IoT security are the heterogeneity and scalability of IoT devices distributed over the network. Blockchain can solve the security, privacy, and data integrity issues in a decentralized manner. Blockchain is also able to create traceable IoT networks, where transaction data are recorded and verified without intermediary management and control [17]. Distributed Blockchain-based management of IoT devices can also be performed at the edge of the network to avoid using distant resources [87]–[89]. Blockchain can also provide a distributed digital payment system for IoT devices to reduce the cost induced by third parties.
To protect IoT data in a Blockchain, a hybrid combination of private and public Blockchains is needed [90]. Data privacy is maintained by private management nodes, whereas the consensus algorithm is maintained by public nodes. In addition, traditional Blockchain consensus algorithms are not suitable for IoT-Blockchain solutions because of their computational and time requirements. To solve this problem, Samaniego and Deters [91] proposed a Blockchain-as-a-Service platform for IoT applications. They introduced structural improvements to Blockchain, including improved consensus algorithms, to fit IoT networks.
Atlam *et al.* [9] listed several benefits of using Blockchain for IoT, like publicity, decentralization, resiliency, security, speed, cost saving, immutability, and anonymity. However, they also highlighted the challenges, such as scalability, processing power, time delay, storage requirements, lack of skills, legal and compliance issues, and naming and discovery. Similarly, Ramachandran and Krishnamachari [5] proposed Blockchain-based monetary exchange for data and compute in IoT networks. IoT transactions can also be recorded on Blockchain for future accounting and auditing. However, they were also concerned about Blockchain challenges, like latency, bandwidth consumption, transaction fees, transaction volumes, partition tolerance, and physical attacks on IoT devices.
Dorri *et al.* [92] investigated the delay, expensive computation, and bandwidth overhead problems of Blockchain to better fit IoT applications. They proposed a secure, private, and lightweight hierarchical architecture for Blockchain-IoT applications. The hierarchical architecture is made up of three layers, namely the local network (smart home), an overlay network, and cloud storage (see Fig. 4). Likewise, Reyna *et al.* [4] argued for the benefits of Blockchain-IoT integration to securely push code into IoT devices and speed up the deployment of new IoT ecosystems. They proposed Blockchain-based direct firmware updates without the need to trust third parties. Since IoT devices are manufactured by different vendors, those vendors may not agree on sharing a common Blockchain. Hence, IoT devices should be able to send transactions across different Blockchain implementations with different consensus protocols.
Dai *et al.* [12] named the integration of Blockchain and IoT Blockchain-of-Things (BCoT). They stated that a successful integration requires interoperability, traceability [72], reliability, and autonomicity [82]. They also stated that decentralization, heterogeneity, poor interoperability, and privacy and security vulnerabilities [3] are critical issues for such integration. Besides those problems, the lack of publicly available IoT datasets for the research community is another problem to be addressed. Thus, Banerjee *et al.* [8] worked on standards for securely developing and sharing IoT datasets. They proposed two conceptual solutions to ensure IoT data integrity and privacy using Blockchain.

**FIGURE 4.** Hierarchical Blockchain-based IoT architecture for smart homes.
Reyna *et al.* [4] proposed three different approaches for communication between IoT devices with the help of Blockchains (see Fig. 5). The first approach is IoT-IoT interactions, where IoT interactions take place off-chain. This approach is the fastest of the three, since only part of the IoT data is stored on-chain. The second approach is IoT-Blockchain, where all the interactions take place through the Blockchain. With this approach, all IoT data are stored on-chain to ensure traceable interactions. The third approach is hybrid, where only part of the interactions/data goes through the Blockchain, while the rest is done directly between IoT devices. The hybrid approach is better in terms of performance and security; however, it requires careful orchestration of those interactions.
The impact of IoT on industry and enterprise systems called for standardizing this technology to speed up its development and spread in this domain [93]. Christidis and Devetsikiotis [3] showed that Blockchain-IoT integration will cause transformations across industries and open the door for new business models and distributed applications. It will also facilitate service and resource sharing, and automate time-consuming workflows. Blockchain smart contracts are the main source of such automation for complex multi-step processes. They can reduce cost and time for future business models and applications in smart environments.
**FIGURE 5.** Three different Blockchain-IoT interaction architectures proposed by Reyna *et al.* [4]: (a) IoT-IoT interactions, (b) IoT-Blockchain, (c) hybrid approach.
***A. BLOCKCHAIN-IOT APPLICATIONS***
A number of key applications have been developed for smart environments with the help of Blockchain-IoT integration. Prototypes of these applications are necessary to study them and solve their limitations and issues. Here, we list a few applications, case studies, and prototypes for smart environments that are brought to life using Blockchain-IoT integration:
*Smart Homes*: Dorri *et al.* [92] proposed a smart-home case study that uses Blockchain to add security and privacy features to various IoT applications. They presented a lightweight secure system for IoT-based smart homes that minimizes the overhead of consensus algorithms. Their hierarchical architecture consists of smart homes, an overlay network, and cloud storage. Other works later analyzed this architecture and the role of smart homes as miners in a private Blockchain [94], [95]. They used simulation results to show that the overhead of Blockchain, including power consumption, is insignificant compared to the gains in confidentiality, integrity, availability, security, and privacy.
*Energy Trading*: Sikorski *et al.* [96] presented a proof-of-concept and a detailed implementation of energy trading using realistic data. It is a machine-to-machine electricity market for the chemical industry. They used a Blockchain smart contract for automatic confirmation of trading and payment commitments. They implemented a scenario of two electricity producers and one consumer that automatically trade energy using IoT. Producers publish energy trading offers at a given price, while consumers can read, analyse, and accept or refuse those offers. A consumer picks and accepts the offer with the minimum cost, and the smart contract execution acts as an atomic exchange of assets, i.e., currency for energy. Each transaction is saved on the immutable ledger for future proofing.
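A minimal sketch of the consumer-side selection logic follows, with hypothetical producers and prices; in the actual system this decision and the asset exchange are executed atomically by the smart contract:

```python
# Hypothetical offers: (producer, price per kWh, kWh available)
offers = [("producer_a", 0.12, 50), ("producer_b", 0.09, 30)]

def accept_cheapest(offers, demand_kwh):
    """Pick the minimum-cost offer that covers the demand."""
    feasible = [o for o in offers if o[2] >= demand_kwh]
    if not feasible:
        raise RuntimeError("no offer covers the demand")
    producer, price, _ = min(feasible, key=lambda o: o[1])
    # On-chain this would be one atomic step: currency moves iff energy moves.
    return {"producer": producer, "kwh": demand_kwh, "payment": price * demand_kwh}

print(accept_cheapest(offers, demand_kwh=20))
# -> producer_b supplies 20 kWh for about 1.8 units of currency
```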
*Smart Things*: Panarello *et al.* [6] conducted a systematic survey of Blockchain-IoT integration. They covered various smart-application domains, such as smart homes, smart properties, and smart cities. They also covered smart energy trading, smart manufacturing, smart data marketplaces, and other generic smart applications. They classified existing work based on development levels, consensus algorithms, and technical challenges. They also identified challenges that include confidentiality, authentication, integrity, availability, and non-repudiation.
Conoscenti *et al.* [2] gave a systematic literature review of Blockchain applications for IoT. They discussed 18 use cases of Blockchain, 4 of which are specifically designed for IoT. The rest of the use cases are applications for decentralized private data management systems that are in line with IoT applications. The four IoT-related Blockchain applications they discussed are:
1) *E-business models for IoT solutions*: Zhang and Wen [82] designed a methodology for transactions and payments between smart IoT devices using Blockchain smart contracts.
2) *IoT Data-Market*: Wörner and von Bomhard [97] proposed a prototype of a system where sensors can sell data directly to a data market in exchange for Bitcoins.
3) *Public-Key Infrastructure*: Axon and Goldsmith [98] adapted what is called Certcoin [99] into a privacy-aware Blockchain-based public-key infrastructure to avoid web certificate authorities, provide certificate transparency, and mitigate single points of failure.
4) *Enigma*: An autonomous Blockchain-based decentralized computation platform proposed by Shrobe *et al.* [100]. It allows different users to run computations on personal data with guaranteed privacy.
In order to fully benefit from IoT applications and systems, we need to start by mitigating their current limitations and challenges. The power and resource limitations of IoT devices make them unsuitable for processing heavy computations and storing large amounts of data. Cloud Computing helped by providing theoretically unlimited storage and computational resources for IoT devices. In addition, Fog and Edge Computing bring those resources closer to IoT devices to decrease network delay and bandwidth consumption. Cloud, Fog, and Edge resources allow heavy analysis to be performed on IoT data while maintaining real-time performance for time-sensitive IoT applications. These resources extend the capabilities of Blockchain-IoT integration and mitigate many of its limitations and issues.
**VI. BLOCKCHAIN & CLOUD, FOG, AND EDGE**
**COMPUTING**
The huge amount of IoT data generated in smart environments needs to be processed in large data centers that have enough computing and storage capacity. That is why Cloud Computing was proposed as the first solution for big data analysis and storage for IoT-based applications in smart environments [101]. However, the Fog Computing paradigm evolved to support the Cloud by mitigating the latency intolerance of real-time IoT applications in smart environments [102], such as autonomous vehicles. Similarly, Edge Computing utilizes available computational resources in smart environments, such as the resources in smart vehicles or smart phones, to reduce the latency even further [103]. Blockchain was again able to support these technologies by securing and protecting the privacy of this big data in smart environments [104].
With the advent of Cloud Computing technology, it is possible to perform very expensive computational tasks and store tremendous amounts of data. Users pay only for what they use, which is better than purchasing expensive resources for time-framed tasks. The Cloud removes the overhead of maintenance and resource management usually associated with resource ownership by small- to medium-sized companies. Furthermore, the Cloud enables resource- and power-limited devices, such as smartphones and IoT devices, to perform heavy computations and store huge amounts of data. Those devices need only a lightweight remote interface with the cloud to mitigate their power and resource limitations.
Cloud Computing is the outcome of integrating parallel, distributed, and grid computing [105]. Although it was proposed in the 1960s, the technology started to be widely used commercially in 2006 with Amazon [106]. Services are usually provided as packages, namely Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), and Infrastructure-as-a-Service (IaaS). Zhou *et al.* [107] added the possibility of having Data-as-a-Service (DaaS), Identity and Policy Management-as-a-Service (IPMaaS), and Network-as-a-Service (NaaS). X-as-a-Service (XaaS) is the term used for the countless number of services that can be provided by cloud computing [108]. XaaS allows for more service packages, which enable the creation of various applications and systems based on the services provided.
Jadeja and Modi [106] categorized the deployment of cloud infrastructures into public, private, hybrid, and community clouds. They listed easy management, cost reduction, uninterrupted services, disaster management, and green computing as the main advantages of Cloud Computing. A tremendous number of systems and applications have been built on Cloud services since 2010 [107]. Ma and Zhang [108] studied a provider of cloud services called Google App Engine (GAE). They explored three of its services: Google File System (GFS), MapReduce, and Bigtable. They showed how these services opened the door for big data analysis in IoT environments.
Before discussing how the Cloud integrates into Blockchain-based IoT solutions, we first show how Blockchain helped the Cloud technology itself. Blockchain has been used for Cloud Exchange [21], which allows for the provisioning and management of multiple Cloud providers. It can lower the price and provide flexible options for Cloud users. Xie *et al.* [21] proposed using Blockchain to decentralize Cloud Exchange services. It mitigates malicious attacks and the cheating behaviors of third-party auctioneers. Furthermore, Blockchain enables new models for security-aware Cloud schedulers, like the lightweight Proof-of-Schedule (PoSch) consensus algorithm [109]. In addition, integrating the Cloud with Blockchain-IoT solutions enables seamless authentication, data privacy, security, easy deployment, robustness against attacks, and self-maintenance [7].
Because of limited power, storage, and computational resources, IoT devices heavily depend on Cloud resources. However, relying on the cloud can create unacceptable delays, especially when feedback is required. Fog and Edge technologies can be used to reduce these delays; in addition, they provide better privacy by processing IoT data in proximity to IoT devices. Therefore, they are more convenient than the Cloud, especially for Blockchain-IoT applications. Samaniego and Deters [91] evaluated both edge-based and cloud-based Blockchain implementations for IoT networks. They showed, via simulations, that edge-based Blockchain implementations outperform cloud-based implementations.
Transferring the massive amounts of data produced by IoT devices to the Cloud consumes a considerable amount of network bandwidth. Flooding the core network with massive traffic, as in streaming IoT applications, creates network bottlenecks and single points of failure. There are also problems with privacy exposure and with a lack of context and geographical location awareness. Edge Computing solves these problems by forming distributed and collaborative computing resources. It reduces power consumption, provides real-time service, and improves scalability for many industrial IoT solutions [110]. In addition, Unmanned Aerial Vehicles (UAVs) can act as limited-resource edge servers in environments with limited or no infrastructure [111]. Blockchain can be used to provide mutual confidence between UAVs of different providers [112], and to preserve the privacy and security of their data [113].
Fog Computing has also been proposed to pre-process and trim IoT data before sending it to the cloud for computationally expensive analysis [114]. Fog servers are usually deployed in smart gateways, which are equipped with decent computational resources. They eliminate unnecessary communication to the cloud to save network bandwidth and reduce the load on its data centers. Fog Computing should not be confused with Edge Computing, as Edge Computing brings the computations very close to the end devices. Edge servers are usually deployed in Radio Access Networks (RANs) or mobile Base Stations (BSs). Fog Computing, on the other hand, provides distributed mini-cloud resources between the end devices and the cloud (see Fig. 6). Both technologies reduce the network delay and provide better Quality-of-Service and Quality-of-Experience (QoS and QoE). These technologies are essential parts of real-time/streaming IoT applications, which might also require location/context-aware information processing.
**FIGURE 6.** Cloud, Fog, and Edge Computing for IoT networks.

Mobile Edge Computing (MEC), also called Multiaccess Edge Computing, is a specific type of Edge Computing that leverages mobile Base Stations. It complements cloud computing by offloading computations closer to mobile and IoT devices. MEC supports ultra-low-latency and delay-sensitive IoT applications in 5G networks [115]. Xiong *et al.* [116] proposed a prototype for MEC-enabled Blockchain for mobile IoT applications. However, the limited MEC resources make it critical to optimally offload heavy computations to different Edge, Fog, or Cloud resources. Liu *et al.* [117], for example, optimized the joint computation offloading and content caching problems to tackle the intensive computations of the PoW consensus algorithm.

Xiong *et al.* [118] studied the relationship between cloud or fog providers and PoW-based Blockchain miners with limited computational resources. They chose to offload the computationally intensive part of PoW to the cloud and/or fog nodes. The computing nodes offer services to the miners at a given price using a game-theoretic approach. The miners can then decide on the amount of service to purchase from the computing nodes. Tuli *et al.* [119] also used Blockchain to provide authentication and encryption services to secure sensitive IoT data and operations. They proposed FogBus, a lightweight, end-to-end, platform-independent framework for IoT applications. It enables easy deployment, scalability, and cost efficiency by integrating IoT with cloud, fog, and edge computing with the help of Blockchain.

Blockchain has been used to provide distributed access control for IoT devices [28]. Almadhoun *et al.* [120] proposed a user authentication system for IoT devices using fog computing. In their proposed system, fog nodes utilize an Ethereum smart contract to authenticate users and manage access permissions. The proximity of fog nodes to IoT devices provides real-time services for users. To mitigate malicious attacks on fog nodes, Wu and Ansari [121] proposed partitioning the fog nodes into different clusters. The nodes in each cluster have their own access control list, which is protected and managed by Blockchain. They showed, using simulations, the effectiveness of their approach in reducing the computational and storage requirements of Blockchain. They included a heuristic algorithm that reduces the time needed to solve the consensus puzzle by having all fog nodes perform the computations cooperatively.

Cloud, Fog, and Edge computing technologies have provided IoT devices with more capabilities, which has increased their adoption in new applications and domains. The continuous expansion of IoT networks and their dynamic nature call for intelligent and dynamic management of such networks. Adaptive control of the network not only saves in terms of hardware cost, but also dynamically optimizes the operations in the network. Network optimization can be done using Software-Defined Networking, which dynamically changes data flow in the network based on its state. SDN can incorporate Artificial Intelligence algorithms to optimally choose between using Cloud, Fog, or Edge resources, or even a combination of them, to process IoT data.
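As a toy illustration of such a choice between Cloud, Fog, and Edge resources, the following Python sketch picks an offloading target from invented latency, energy, and capacity figures; real systems would measure these quantities online and optimize far richer objectives:

```python
# Hypothetical per-target costs; real systems would measure these online.
TARGETS = {
    "edge":  {"latency_ms": 10,  "energy": 1.0, "capacity": 2},
    "fog":   {"latency_ms": 40,  "energy": 0.7, "capacity": 5},
    "cloud": {"latency_ms": 150, "energy": 0.4, "capacity": 100},
}

def choose_target(deadline_ms, load, w_latency=1.0, w_energy=50.0):
    """Pick the cheapest target whose latency meets the task deadline
    and that still has spare capacity."""
    feasible = {
        name: w_latency * t["latency_ms"] + w_energy * t["energy"]
        for name, t in TARGETS.items()
        if t["latency_ms"] <= deadline_ms and load.get(name, 0) < t["capacity"]
    }
    return min(feasible, key=feasible.get) if feasible else None

print(choose_target(deadline_ms=50,  load={"edge": 2}))  # -> 'fog' (edge is full)
print(choose_target(deadline_ms=500, load={}))           # -> 'edge' (all feasible)
```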
**VII. BLOCKCHAIN & SOFTWARE-DEFINED**
**NETWORKING (SDN)**
SDN Technology demonstrated its importance in managing
routing decisions in smart environment IoT networks since
they are usually vulnerable to node/link failures [122]. SDN
has been used to mitigate latency issues, like congestion and
transmission delays, in time-sensitive smart industrial IoT
environments [123]. SDN also balances the load between Fog nodes, which can be vehicles in IoV environments, and the Cloud to allow time-sensitive tasks to meet their deadlines [124]. When a Fog node is overloaded, SDN dynamically makes offloading decisions to select the best offloading Fog node based on computational and network resource information [125]. Blockchain augments SDN benefits for IoT networks
by providing security, privacy, flexibility, scalability, and
confidentiality to increase energy utilization and throughput
while reducing end-to-end delay [126].
SDN enables network management and protocols to be adaptable and programmable. It was proposed in 2008, in the OpenFlow whitepaper, to test experimental protocols in university campus networks [127]. SDN fits the rapid changes and demands in network applications, and eliminates the need for pre-programmed, vendor-specific, and expensive network devices. SDN achieves this flexibility by separating the control and data planes in the network. The control plane makes decisions on the traffic flow in the network, whereas the data plane is responsible for forwarding that traffic. Indeed, the control plane is the network's brain, and is usually a centralized software entity called the SDN controller. The SDN architecture uses the concept of an Application Programming Interface (API) in a three-layer structure. Fig. 7 shows two common interfaces between these layers, i.e.
**FIGURE 7.** The three-layer SDN architecture: the application layer defines network policies, the control plane defines forwarding rules, and the data plane performs packet forwarding.
Northbound and Southbound APIs. Two additional interfaces
are sometimes considered to allow the communication be
tween multiple controllers in the control layer, i.e. Eastbound
and Westbound APIs [27]. A master controller is usually
needed to coordinate the decisions of multiple controllers.
A controller is the interface between network elements and
applications like firewalls and load balancing applications
[128]. It provides agility to network infrastructures, like
routers and switches, by dynamically optimizing the network
resources. Like other emerging technologies, SDN introduces new security challenges to network infrastructures. At the same time, Blockchain can mitigate those challenges by providing confidentiality, integrity, and availability to network devices [18].
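The control/data-plane split can be illustrated with a few lines of framework-free Python; the class and rule names below are invented for illustration, and real controllers (e.g. ONOS or Ryu) expose far richer northbound and southbound APIs:

```python
class DataPlaneSwitch:
    """Data plane: only forwards packets according to installed rules."""
    def __init__(self):
        self.flow_table = {}                     # match -> action

    def install_rule(self, match, action):      # southbound interface
        self.flow_table[match] = action

    def forward(self, packet):
        # Unknown traffic is punted to the controller in real SDN;
        # here we simply drop it.
        return self.flow_table.get(packet["dst"], "drop")

class Controller:
    """Control plane: decides traffic policy and pushes rules south."""
    def __init__(self, switches):
        self.switches = switches

    def apply_policy(self, policy):              # northbound interface
        for match, action in policy.items():
            for sw in self.switches:
                sw.install_rule(match, action)

sw = DataPlaneSwitch()
ctrl = Controller([sw])
ctrl.apply_policy({"10.0.0.2": "port-1", "10.0.0.9": "drop"})
print(sw.forward({"dst": "10.0.0.2"}))   # -> 'port-1'
print(sw.forward({"dst": "10.0.0.9"}))   # -> 'drop'
```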
Denial of Service (DoS) and Distributed Denial of Service (DDoS) attacks target centralized network architectures. These attacks stop network services from serving legitimate users, devices, and applications. Such attacks can cause SDN controllers to malfunction and paralyze the whole network. Blockchain can mitigate these attacks and help avoid single points of failure in centralized network architectures. Blockchain can provide decentralized trust in physically distributed, logically centralized SDN controller architectures [18]. Using SDN and Blockchain, network administrators can easily program and configure network components using smart contracts. Those components can securely perform their software updates by accessing policies and configurations from Blockchain-based SDN controllers.
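A minimal sketch of this idea, with the ledger modeled as a plain Python dict standing in for a smart contract maintained by a Blockchain-based SDN controller, might look as follows:

```python
import hashlib

# The ledger is modeled as a plain dict here; in practice the hash would
# be recorded by a smart contract on a Blockchain-based SDN controller.
ledger = {}

def publish_config(version, config_bytes):
    """Controller side: record the hash of an approved configuration."""
    ledger[version] = hashlib.sha256(config_bytes).hexdigest()

def verify_and_apply(version, received_bytes):
    """Device side: accept an update only if its hash matches the ledger."""
    expected = ledger.get(version)
    actual = hashlib.sha256(received_bytes).hexdigest()
    return expected is not None and expected == actual

publish_config("fw-1.2", b"good-config")
print(verify_and_apply("fw-1.2", b"good-config"))      # True
print(verify_and_apply("fw-1.2", b"tampered-config"))  # False -> reject update
```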
Blockchain integration with SDN has not yet attracted the required level of attention from the research community [18]. Alharbi [18] explained the role of Blockchain in providing and improving security features in SDN architectures. The dynamism, adaptability, and remote configuration of SDN networks provide a lot of support for IoT networks. Bera *et al.* [27] showed how these features can provide efficient, scalable, seamless, and cost-effective management of IoT devices. SDN also meets the real-time demands of IoT applications through its ability to optimize traffic flow and load balancing. Such optimization improves bandwidth utilization in the network and mitigates bottlenecks.
Jararweh *et al.* [129] proposed a comprehensive SDN-based IoT framework to simplify IoT management and mitigate several problems in traditional IoT architectures. They
integrated software-defined networks, storage, and security
into a single software-defined control model for IoT devices. The software-defined storage manages big IoT data
by separating the data control layer, which controls storage resources, from the underlying infrastructure of storage
assets. Finally, the software-defined security separates the
data forwarding plane from the security control plane. They
included a proof-of-concept to show the performance of their
framework in handling huge amounts of IoT data.
The global network view in SDN controllers addresses
the heterogeneity, scalability, optimal routing, and bottleneck
issues in IoT networks. Kalkan and Zeadally [130] discussed
the benefits and drawbacks of SDNs in IoT networks and focused on single-point-of-failure issues in traditional centralized SDN controllers. They distributed the roles of a single controller across multiple hosts using a distribution-of-risks scheme.
The bandwidth utilization was improved by distributing the
communication traffic among three different controllers, i.e.
Intrusion, Key, and Crypto Controllers. The intrusion controller mitigates possible intrusions besides managing and
securing the routes. The key controller controls symmetric
and asymmetric key distribution in the whole ecosystem.
The crypto controller provides cryptographic services for
authentication, integrity, confidentiality, privacy, and identity
management.
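As a schematic illustration of this separation of duties (the role and request names below are ours, not from [130]), a few lines of Python can express the dispatch:

```python
# Schematic dispatch of control-plane duties across the three controller
# roles described above. Names are invented for illustration only.
ROLES = {
    "route_update": "intrusion_controller",  # secures and manages routes
    "key_request":  "key_controller",        # sym/asym key distribution
    "sign_payload": "crypto_controller",     # auth, integrity, identity
}

def dispatch(request_type: str) -> str:
    """Spread traffic (and risk) over three hosts instead of one,
    so no single controller is a single point of failure."""
    if request_type not in ROLES:
        raise ValueError(f"no controller handles: {request_type}")
    return ROLES[request_type]

for req in ("route_update", "key_request", "sign_payload"):
    print(req, "->", dispatch(req))
```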
Li *et al.* [19] focused on the security challenges and solutions for Blockchain-based SDN systems. In particular, they examined DoS and DDoS attacks on centralized SDN controllers and insider attacks on distributed SDN controllers. They highlighted Blockchain's ability to secure distributed SDN controllers and data-plane forwarding devices. They also listed scanning, spoofing, hijacking, DoS, and Man-in-the-middle attacks as SDN vulnerabilities. Finally, they listed traffic-flow control, policy enforcement, and DoS defence mechanisms as possible solutions for those vulnerabilities.
Blockchain was also used to ensure the security and consistency of the statistics in SDN-based IoT networks [131].
Medhane *et al.* [132] proposed a security framework for next-generation IoT by integrating Blockchain with SDN, edge, and cloud computing technologies. The framework features security attack mitigation, continuous confidentiality, authentication, and robustness. Attacks in IoT networks are detected in the cloud and reduced at the edge nodes. SDN controllers examine and manage traffic flow to actively mitigate suspicious traffic. Similarly, Sharma *et al.* [133] described
a distributed cloud architecture at the edge of the network, i.e. Fog nodes, which is secured using Blockchain and SDN. They securely aggregate IoT data using fog nodes before it is sent to the cloud for heavier analysis and/or long-term storage. Their architecture supports large IoT data using low-cost, high-performance, and on-demand secure services. They significantly reduced traffic load and delay compared to traditional IoT architectures.
Sharma *et al.* [89] proposed DistBlockNet, a decentralized
and secure Blockchain-based SDN architecture that updates
flow-rule tables in large-scale IoT networks. Blockchain was
used to verify flow-rule tables’ versions, and securely download them to IoT/forwarding devices. Sharma *et al.* [134]
also proposed SoftEdgeNet to extend their previous works
[89], [133]. SoftEdgeNet improved their previous designs
by pushing the storage and computations to the extreme
edge to manage real-time traffic and avoid resource starvation. It is a distributed network management architecture
for edge computing networks. It mitigates flooding attacks
and provides real-time network analytics using Blockchain,
SDN, fog, and edge nodes. SoftEdgeNet has an efficient
flow-rule allocation and partitioning algorithm at the edge of
the network that minimizes traffic redirection, and creates a
sustainable network.
Blockchain has also been used to secure the configuration, management, and migration of Virtual Network Functions (VNFs) [135]. VNF, also called Network Function Virtualization (NFV), allows devices with adequate resources to perform multiple tasks simultaneously, or at least in real time. VNF achieves multitasking by separating the control plane from the physical devices [136]. Alvarenga *et al.* [135] implemented a prototype that makes VNF configuration immutable, auditable, non-repudiable, consistent, and anonymous. Their design eliminates single points of failure and provides high availability of the network's configuration information with a delay of about two seconds. The architecture is resilient to Blockchain collusion attacks, and the configuration information cannot be compromised even by a successful collusion attack. Such resiliency is achieved by using a variant of the Byzantine Fault Tolerant (BFT) consensus protocol called the Ripple protocol [137].
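The tamper-evidence behind such immutable and auditable configuration records can be sketched with a simple hash chain; note that this shows only the chaining idea, not the BFT/Ripple-based design of [135], and all identifiers are ours:

```python
import hashlib, json, time

chain = []   # each entry links to the previous one via its hash

def _hash(entry):
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def append_config(vnf_id, config):
    """Record a configuration change as a new, hash-linked chain entry."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    entry = {"vnf": vnf_id, "config": config, "ts": time.time(), "prev": prev}
    entry["hash"] = _hash({k: v for k, v in entry.items() if k != "hash"})
    chain.append(entry)

def audit():
    """Recompute every link; any silent edit breaks the chain."""
    prev = "0" * 64
    for e in chain:
        body = {k: v for k, v in e.items() if k != "hash"}
        if e["prev"] != prev or _hash(body) != e["hash"]:
            return False
        prev = e["hash"]
    return True

append_config("fw-vnf-1", {"rule": "allow tcp/443"})
append_config("fw-vnf-1", {"rule": "deny all"})
print(audit())                              # True
chain[0]["config"]["rule"] = "allow all"    # tamper with history
print(audit())                              # False -> detected on audit
```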
Blockchain was also used in wireless network virtual
ization ecosystems to prevent double spending of wireless
resources at a given time and location [138], [139]. Wireless
network virtualization enables sharing physical wireless infrastructures and radio frequency slices to improve coverage,
capacity, and security. Proof-of-Wireless-Resources (PoWR)
[139] has been proposed to mitigate double spending of the
same wireless resources. Rawat [138] used SDN to provide dynamic and efficient network configuration, and used Edge Computing to decrease delays by avoiding the use of high-speed backhauls. Such a fusion of Blockchain with SDN and edge computing guarantees QoS for end users, and provides trust, transparency, and seamless subleasing of resources in trustless wireless networks. The use of Blockchain makes it practically infeasible to maliciously sublease others' wireless resources.
The dynamic capabilities of SDN-based networks open the doors for many applications, especially in IoT ecosystems. These capabilities can be further enhanced by adding intelligent decision making into their controllers. Such intelligent decisions can now be inferred using AI and machine learning algorithms. With the advent of Deep Neural Networks (DNN), these algorithms can learn in highly complex environments. Recent DNN-based algorithms have achieved accuracies that exceed human abilities in different domains. SDN controllers can deploy DNN-based Reinforcement Learning algorithms to create intelligent agents that dynamically adapt to network changes. These capabilities are a must for IoT networks in future smart environments.
**VIII. BLOCKCHAIN & ARTIFICIAL INTELLIGENCE (AI)**
AI and Machine Learning algorithms play an essential role
in adding automation and intelligence into different smart
environment applications, including smart cities applications
[140]. Besides supporting smart applications, AI also augments various underlying technologies, like optimizing SDN
monitoring to minimize network latency [141]. Furthermore,
Blockchain empowers AI decision-making by making it
more secure and efficient [142]. For example, integrating
Blockchain and AI provides decentralized authentication for
smart cities [143], where user identities are kept secret while
attackers are automatically identified. In smart health systems, Blockchain integration with AI helped secure medical
data sharing [144] and protect personal healthcare records
[145]. For smart energy trading, Blockchain-enforced Machine Learning predictive analysis models provide real-time
support and monitoring as well as immutable transaction
logs for decentralized trading [146]. Likewise, Blockchain
integration with Machine Learning in smart factories can
secure system transactions and deliver smarter quality control
schemes [147].
Akter *et al.* [31] defined the ABCD of digital business as AI, Blockchain, Cloud, and Data analytics. They considered these emerging technologies as transformation factors for future digital business models. For successful digital business, Garcia [29] proposed a complete legal doctrine for a smart digital economy. Technologies like Blockchain, AI, IoT, and big data guide governments' annual budget plans to simplify and maximize the application of taxes for digital businesses. Garcia also showed that Blockchain and cryptocurrencies augment AI and IoT to create a smart economy in a smart digital world. Ekramifard *et al.* [30] studied how AI algorithms
improve Blockchain designs and operations. They discussed
the effect of this integration in the medical field, like the
ability to gather, analyse, and make decisions on medical
datasets. AI and Blockchain helped in different medical
applications, including systems for the ongoing COVID-19
pandemic [148], [149]. Mashamba-Thompson and Crayton [149] integrated Blockchain and AI to create a low-cost self-testing and tracking system for COVID-19. Their system
is ideal in environments with poor access to laboratory infrastructure. Similarly, Nguyen *et al.* [148] discussed how Blockchain and AI have been used in the literature to combat the COVID-19 pandemic.
Al-Garadi *et al.* [26] discussed the role of machine learning and deep learning in IoT security. They suggested integrating Edge computing and Blockchain into machine learning and deep learning to provide reliable and effective IoT security methods. In their work, they investigated the use of Neural Networks (NN) and other machine learning algorithms (see Fig. 8) to detect attacks in IoT networks. Based on IoT and network data, the IoT network state is classified as normal (secure), early warning, or attacked. Fig. 8 shows a brief taxonomy of different algorithms in the field of AI that can be adopted in IoT ecosystems. Those algorithms can be classified into classical machine learning algorithms and algorithms based on DNN. Each of those classes can be further categorized into supervised, unsupervised, and semi-supervised methods. Supervision in learning algorithms means introducing training labels with training examples that were prepared by professionals or computer software. It is sometimes impractical or hard to create labeled datasets; in this case, we can use unsupervised algorithms to categorize and cluster the data into different groups. Semi-supervised algorithms lie between those two classes, where only a small portion of the data is labeled and there are no labels for the rest of the data.
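As a minimal supervised example in the spirit of Fig. 8, the following sketch classifies synthetic traffic features into normal or attacked states; the features, numbers, and labels are invented, and scikit-learn is assumed to be available:

```python
# Synthetic traffic features: [packets/sec, mean payload bytes].
# Labels: 0 = normal, 1 = attack. All values are invented for illustration.
from sklearn.ensemble import RandomForestClassifier

X_train = [[10, 500], [12, 480], [15, 520],      # normal traffic
           [900, 60], [1200, 64], [800, 70]]     # flooding-style traffic
y_train = [0, 0, 0, 1, 1, 1]

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)

print(clf.predict([[11, 510]]))    # -> [0] (normal/secure state)
print(clf.predict([[1000, 62]]))   # -> [1] (early warning / attacked state)
```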
AI integration is essential to provide smart decision-making capabilities in the different technologies of a smart environment ecosystem. The breakthroughs in machine learning and Deep Learning algorithms make them suitable for solving complex problems in rapidly changing environments, such as IoT networks. AI is important to enhance the performance of technologies like Blockchain, IoT, SDN, Cloud, Fog, and Edge Computing. Self-driving vehicles, smart transportation, and automatic delivery robots are some examples of smart environment applications that need AI integration into Blockchain-IoT solutions. AI can also be used to optimize global energy consumption in a smarter and greener world to decrease the effects of climate change and local air pollution.
Kumari *et al.* [150], for example, studied the advantages
and challenges of integrating Blockchain with AI in Energy
Cloud Management (ECM) systems. Using IoT and Smart
Grids (SG), this integration allows for sustainable energy
management and efficient load prediction in a trustless environment (see Fig. 9). They also proposed a decentralized
Blockchain-based AI-powered ECM framework for energy
management to mitigate security and privacy issues in traditional implementations.
**FIGURE 8.** A taxonomy of Artificial Intelligence algorithms for IoT ecosystems.

AI can provide optimal pricing for IoT data to be sold and/or computations to be performed. AI can also empower SDN controllers to choose optimal network routes to forward data traffic. AI-based SDN traffic control can minimize network delay and bandwidth consumption. It is challenging to optimally offload computations between end devices and Cloud, Fog, and/or Edge servers while jointly considering delay, computational, and power resources. AI algorithms, like Deep Reinforcement Learning (DeepRL), have recently been considered for task offloading and orchestration in edge computing applications [151], [152]. Dai *et al.* [151] used DeepRL to dynamically orchestrate edge computing and caching resources in complex vehicular networks. The complexity of such networks comes from vehicle mobility and content popularity/localization, e.g. a car accident at a given location. Furthermore, Dai *et al.* [152] proposed to integrate AI and Blockchain to provide intelligent architectures for flexible and secure resource sharing and content caching in 5G networks.
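To convey the flavor of such DeepRL-based offloading without the deep network, the following tabular Q-learning sketch learns when to offload to the edge versus the cloud; the two-state environment and all rewards are invented for illustration:

```python
import random

# Tabular Q-learning over a toy 2-state, 2-action offloading problem.
# The cited works use deep networks over far larger state spaces.
ACTIONS = ["edge", "cloud"]
STATES = ["low_load", "high_load"]

def step(state, action):
    """Invented environment: edge is fast unless it is highly loaded."""
    if action == "edge":
        reward = -10 if state == "low_load" else -100   # negative latency
    else:
        reward = -50                                    # cloud RTT, load-agnostic
    return random.choice(STATES), reward

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.9, 0.1
state = "low_load"
for _ in range(5000):
    if random.random() < eps:
        action = random.choice(ACTIONS)                      # explore
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])   # exploit
    nxt, r = step(state, action)
    best_next = max(Q[(nxt, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (r + gamma * best_next - Q[(state, action)])
    state = nxt

for s in STATES:   # learned policy: edge when idle, cloud when loaded
    print(s, "->", max(ACTIONS, key=lambda a: Q[(s, a)]))
```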
**FIGURE 9.** Smart energy management for sustainable energy and green smart environments.
Qiu *et al.* [153] provided trust in SDN Industrial IoT
networks using Blockchain consensus protocols. In their
design, Blockchain collects, synchronizes, and distributes
network views between distributed SDN controllers. To im
prove the throughput, they used a Dueling Deep Q-Learning (DDQL) approach to jointly optimize view changes, access selection, and computational resource allocation. Deep Q-Learning (DQL) was also used in Distributed Software-Defined Vehicular Networks (DSDVN) to adapt to the variety of data, network-flow, and vehicle types [154]. To increase the throughput of a permissioned Blockchain, DQL-based consensus schemes were used to reach consensus efficiently and securely in DSDVN, jointly optimizing compute and network resources while considering the trust features of Blockchain nodes.
Blockchain integration with all those technologies was a
major factor in the deployment of smart environment applications. Sharma *et al.* [155], for example, used Blockchain to
allow for decentralized coordination and control for vehicular
networks in smart cities. Moreover, Sharma and Park [156]
integrated Blockchain and SDN to provide a two-layer network architecture that was specially designed for smart cities.
It is composed of core and edge networks, and leverages
the benefits of both centralized and distributed architectures.
Their design supports IoT heterogeneity and provides a scalable and secure architecture using edge computing. They
used a memory-hardened PoW scheme to enforce distributed
privacy and security, and to avoid tampering of information
by attackers. Sharma *et al.* [157] used a private version of
Ethereum to simulate a distributed framework for automotive
industries in smart cities. They proposed a novel miner node selection algorithm to increase trust, provide protection, and save time and cost in automotive supply chain ecosystems that are owned by different organizations.
**IX. OPEN RESEARCH PROBLEMS**
A lot of work is still needed to allow for smoother integration
between Blockchain and IoT technologies to create smarter
things. Law and regulation issues are some of the main problems that are discussed in the literature for using Blockchain
for basic monetary transactions. Hence, these issues will
directly impact machine-to-machine monetary transactions
using Blockchain. Akins *et al.* [67] discussed the income taxation of cryptocurrency transactions, such as Bitcoin transactions, when used for purchases with monetary value. Sapovadia [158]
discussed the legal issues in cryptocurrencies that are similar
to those of foreign currencies. Emelianova and Dementyev
[159] argued for a unified supranational legal act for cryptocurrencies, similar to the European Union (EU) directives.
They discussed the provisions in many European and Asian
countries for the use and taxation of cryptocurrencies, which
differs from one government to another. Omololu [160] emphasized the need for full law enforcement of Blockchain
applications as they still do not conform to current legal
structures. This requires countries to supervise Blockchain
integration in different applications and domains, including
IoT, to ensure that they comply with the law.
To create an IoT-oriented Blockchain platform, both hardware and software should be highly optimized to perform
Blockchain operations. IBM has taken the lead in this path
by creating a tiny 10-cent edge CPU architecture that can efficiently run Blockchain operations [161]. It can be embedded into IoT devices to support Blockchain operations.
IBM called this project "Crypto Anchors", as they want to
anchor physical objects into Blockchain-IoT applications.
However, the difference between such CPU architectures and
standard Computer CPUs requires specific Blockchain implementations for Crypto Anchor CPUs. Hence, we believe
that this is a good research direction to create an operating
system for those CPUs that is capable of bridging this gap,
such as the work by Wright and Savanah [162]. Building
Blockchain-oriented CPUs, firmware, and operating systems
is a great step towards more robust Blockchain-IoT integration. However, we believe that the research should also
continue on improving Blockchain architectures, including
consensus algorithms and communication protocols.
In terms of consensus algorithms, there has been an effort
to replace the most secure consensus algorithm in practice,
i.e. PoW. The main reason is the power consumption of PoW,
which does not fit resource-limited IoT devices. PoS [66] and
DPoS [54] are two promising alternatives, but they are still
criticized for not being as secure as PoW [163]. Ethereum
plans to migrate from using PoW to PoS [164], because
it is currently the best alternative for PoW [165]. DPoS is
currently adopted in EOS [166] and a few other Blockchain
implementations. There is a strong debate around the level
of decentralization in DPoS and PoW [167]. To show the
difference, Li and Palanisamy [167] studied a DPoS-based
cryptocurrency for social media, called Steem [168], and the
PoW-based Bitcoin Blockchain. They showed that Bitcoin is
more decentralized compared to Steem among top miners,
but less decentralized in general due to Bitcoin mining pools.
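The power cost of PoW can be made concrete with a small sketch: the expected number of hash attempts grows roughly as 2^difficulty_bits, which is exactly the brute-force work that resource-limited IoT devices cannot afford. The block contents and difficulty levels below are arbitrary:

```python
import hashlib, time

def mine(block_data: bytes, difficulty_bits: int):
    """Brute-force a nonce so the block hash starts with
    `difficulty_bits` zero bits; expected work grows ~2**bits."""
    target_prefix = "0" * (difficulty_bits // 4)   # hex chars, 4 bits each
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
        if digest.startswith(target_prefix):
            return nonce, digest
        nonce += 1

for bits in (8, 16, 20):
    t0 = time.time()
    nonce, _ = mine(b"iot-telemetry-block", bits)
    print(f"{bits:2d} bits -> {nonce:8d} tries in {time.time() - t0:.3f}s")
```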
Consensus algorithms are a major factor in determining
Blockchain performance, and that is why researchers try
to increase the security and lower power consumption of
these algorithms [169], [170]. Even lightweight Blockchain
platforms that are built specifically for resource-limited IoT
applications have some drawbacks. DAG-based platforms, for example, like IOTA, suffer from double-spending attacks. Hence, there is always a security-vs.-performance trade-off when choosing between different Blockchain platforms or consensus algorithms. Such a trade-off should be carefully weighed to meet the different requirements of different IoT
applications. Pongnumkul *et al.* [53], for example, studied
the trade-off between choosing the most secure PoW vs.
the fastest PBFT consensus algorithms in Ethereum and
Hyperledger Fabric, respectively.
However, it is sometimes necessary to compare different Blockchain platforms using factors other than consensus algorithms. Developers can falsify Blockchain performance and attract investors based solely on consensus performance. However, choosing a Blockchain platform based solely on consensus performance can badly affect its performance in large-scale IoT networks or in smart city applications. Hence, Zheng *et al.* [10] argued for Blockchain testing schemes to help practitioners select Blockchain platforms that fit the requirements of different IoT applications. They also discussed the drawbacks of mining pools in public Blockchain implementations, which can cause a loss of decentralization. Another Blockchain-related problem in IoT applications is Smart Contract bugs. These bugs shorten the life-cycle of the contract code and reduce its agility in ever-changing IoT networks.
Furthermore, using Oracles [171], which are trusted third-party information sources, can cause Blockchain to lose its inherent security. Oracles provide external data and information to Blockchain Smart Contracts to enrich the capabilities of Blockchain applications, including IoT applications. Oracles query, verify, and authenticate external data sources to provide trust for such sources. Blockchain applications must trust oracles, since they make decisions based on the data oracles provide. However, the trust issues of Oracles directly impact the security of Blockchain, which was meant to work in trustless environments. Oracles might still suffer from centralization, collusion, Sybil, and Man-in-the-middle attacks [172]. They are also possibly exposed to physical attacks on the IoT devices that are usually the source of external Blockchain data. An example of a physical attack in IoT food or drug supply chain systems is the displacement of the temperature or GPS sensors that are usually attached to supply chain shipping trucks. The displacement of IoT devices feeds Blockchain ecosystems with falsified information, which causes those systems to lose their security features, and hence to fail.
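A basic mitigation for data tampering in transit is to have sensors sign their readings before an oracle accepts them; the stdlib-only Python sketch below (the key and field names are ours) shows this, while also making clear why it cannot detect a physically displaced but honestly reporting sensor:

```python
import hmac, hashlib, json

# Pre-shared key between the sensor and the oracle; key and field names
# are purely illustrative. HMAC authenticates the message in transit,
# but it cannot detect a physically displaced sensor reporting "honest"
# readings from the wrong location -- the attack described above.
KEY = b"sensor-42-shared-secret"

def sign_reading(reading: dict) -> str:
    msg = json.dumps(reading, sort_keys=True).encode()
    return hmac.new(KEY, msg, hashlib.sha256).hexdigest()

def oracle_accepts(reading: dict, tag: str) -> bool:
    return hmac.compare_digest(sign_reading(reading), tag)

reading = {"truck": "TR-7", "temp_c": 4.1, "ts": 1700000000}
tag = sign_reading(reading)
print(oracle_accepts(reading, tag))     # True
reading["temp_c"] = 20.0                # tampered in transit
print(oracle_accepts(reading, tag))     # False -> rejected by the oracle
```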
51% attacks, cost, regulations, confirmation time, forks,
and scalability are common technical Blockchain-related
problems [173]. Scalability, for example, has been discussed extensively in the literature, and different solutions have been proposed [64]. In addition, there are also domain-specific challenges, such as the challenges of using Blockchain for AI [68], Security [68], Healthcare [174], Education [175], Product Traceability [176], E-Voting [73], [74], [177], [178], and IoT applications. There is a need to reduce Blockchain energy consumption and operation cost to make it feasible to integrate with various technologies, including IoT. However, security, privacy, scalability, and Oracle-inherited Blockchain issues need to be addressed before tackling integration issues [20]. Finally, real deployment of Blockchain-IoT solutions in smart city prototypes might reveal new issues compared to what simulation results currently demonstrate.
To focus more on Blockchain-IoT integration challenges,
Makhdoom *et al.* [14] used a test case for a supply chain
monitoring system. The challenges include the lack of IoT-centric consensus protocols, IoT-based transaction validation rules, IoT-oriented Blockchain interfaces, and storage capacity for IoT data. They added other Blockchain-related challenges, like consensus finality, resistance to DoS attacks, fault
tolerance, scalability, and transaction volume. The test case
required a secure and synchronized software upgrade scheme
for IoT devices and the underlying Blockchain platform. The
upgrade scheme can fix bugs and protect the system against
new vulnerabilities. In addition, we need to be careful when
integrating SDN and Blockchain technologies to fully benefit
from SDN features for IoT applications. Blockchain is decentralized by nature, whereas SDN controllers are supposed to
be centralized. Moreover, SDN controllers need to push flow-rule table updates to the forwarding devices in real time, while Blockchain consensus works periodically on a longer time frame.
The challenges brought by integrating Blockchain with Cloud, Fog, and Edge Computing directly relate to IoT-Blockchain challenges, because these technologies are mainly meant to support IoT networks. The lack of perfectly implemented Blockchain-specific IoT infrastructures and the absence of energy-efficient mining are some of those challenges [13]. Authentication, adaptability, network security, data integrity, verifiable computation, and low latency are requirements for integrating Blockchain with Cloud, Fog, and especially Edge Computing [16]. Yang *et al.* [16] identified load balancing, task offloading, resource management, and function integration on heterogeneous platforms as challenges to be addressed for successful integration. Addressing all these issues is essential to support next-generation applications in fully automated, futuristic smart environments. In such environments, all these technologies should be smoothly integrated, flawlessly functioning, and securely handling IoT data.
**X. DISCUSSION AND CONCLUSION**
In this paper, we present the strengths of Blockchain beyond
its traditional use for monetary and digital asset trading. We
focus on Blockchain integration with IoT to create a futuristic
view of smart environments. Implementing such autonomous
smart environment architectures will simplify human lives
and increase their effectiveness. We discuss the role of AI,
SDN, Cloud, Fog, and Edge Computing in enhancing the capabilities of Blockchain-IoT applications, and providing such
automation in smart environments. Blockchain augments IoT
applications with automation, security, privacy and many
other features that are essential for smart environments. We
showed in this work how Blockchain was able to address
a number of issues, limitations, and challenges in all those
technologies.
Table 2 shows the level of technological integration of
some Blockchain-IoT applications and prototypes in the literature. The table also shows the recent research interest in integrating more technologies into such systems. Powered by Blockchain-IoT integration, these prototypes served different needs and solved different problems in different smart applications. To build those systems, some of the authors had to create their own Blockchain implementation and/or consensus algorithm. Specific application requirements usually call for Blockchain characteristics that are not available in traditional implementations. Hence, there is a need for a Blockchain implementation that allows developers and practitioners to plug in new features when needed. This will allow industry and the research community to focus on developing smarter applications and mitigating technological limitations.
**FIGURE 10.** Architecture of Smart Environments.

Figure 10 shows a simplified architecture for smart environments using the technologies discussed in this paper. It shows the use of Blockchain to securely share and store IoT data in trustless environments. In this architecture, Blockchain can be deployed using Cloud, Fog, or Edge resources, or even a combination of them. The AI-powered SDN traffic control can manage the traffic flow of IoT data in a dynamic, smart way. AI can also be used to provide the best pricing for IoT data that needs to be sold in such a system, like sensor data. It can also be used to select the best target to process this data: the Cloud, Fog, or Edge. In addition, private data can be securely stored in Blockchain and obtained by authenticated users using different encryption and security measures. Blockchain will work as an access control mechanism for data access, including IoT data, for IoT devices and their users.
There are a number of challenges and open research
problems that still need to be tackled to provide a smoother
technological integration. A major problem of such integration is the difficulty of testing using physical deployment
in real smart environments in order to reveal hidden issues.
Physical deployment is needed because simulation results
alone are insufficient to demonstrate the performance and
issues in such systems. In addition, creating a general purpose
Blockchain platform that can be easily adapted to solve different problems is of great importance to the research community. Creating such a general purpose platform will remove the burden imposed by modifying Blockchain architectures to meet certain features and requirements.
**REFERENCES**
[1] U. Ghosh, P. Chatterjee, S. Shetty, and R. Datta, “An sdn-iot-based
framework for future smart cities: Addressing perspective,” 2020.
[2] M. Conoscenti, A. Vetrò, and J. C. De Martin, “Blockchain for the
internet of things: A systematic literature review,” in 2016 IEEE/ACS
13th International Conference of Computer Systems and Applications
(AICCSA), Nov 2016, pp. 1–6.
[3] K. Christidis and M. Devetsikiotis, “Blockchains and smart contracts for
the internet of things,” IEEE Access, vol. 4, pp. 2292–2303, 2016.
[4] A. Reyna, C. Martín, J. Chen, E. Soler, and M. Díaz, “On blockchain and
its integration with IoT. challenges and opportunities,” Future Generation
Computer Systems, vol. 88, pp. 173 – 190, 2018. [Online]. Available:
[http://www.sciencedirect.com/science/article/pii/S0167739X17329205](http://www.sciencedirect.com/science/article/pii/S0167739X17329205)
[5] G. S. Ramachandran and B. Krishnamachari, “Blockchain for the
IoT: Opportunities and challenges,” CoRR, vol. abs/1805.02818, 2018.
[[Online]. Available: http://arxiv.org/abs/1805.02818](http://arxiv.org/abs/1805.02818)
[6] A. Panarello, N. Tapas, G. Merlino, F. Longo, and A. Puliafito,
“Blockchain and IoT integration: A systematic survey,” Sensors, vol. 18,
[no. 8, 2018. [Online]. Available: https://www.mdpi.com/1424-8220/18/](https://www.mdpi.com/1424-8220/18/8/2575)
[8/2575](https://www.mdpi.com/1424-8220/18/8/2575)
[7] T. M. Fernández-Caramés and P. Fraga-Lamas, “A review on the use of
blockchain for the internet of things,” IEEE Access, vol. 6, pp. 32 979–
33 001, 2018.
[8] M. Banerjee, J. Lee, and K.-K. R. Choo, “A blockchain future for
internet of things security: a position paper,” Digital Communications
and Networks, vol. 4, no. 3, pp. 149 – 160, 2018. [Online]. Available:
[http://www.sciencedirect.com/science/article/pii/S2352864817302900](http://www.sciencedirect.com/science/article/pii/S2352864817302900)
[9] H. F. Atlam, A. Alenezi, M. O. Alassafi, and G. Wills, “Blockchain
with internet of things: Benefits, challenges, and future directions,”
International Journal of Intelligent Systems and Applications, vol. 10,
[no. 6, pp. 40–48, June 2018. [Online]. Available: https://eprints.soton.ac.](https://eprints.soton.ac.uk/421529/)
[uk/421529/](https://eprints.soton.ac.uk/421529/)
[10] Z. Zheng, S. Xie, H.-N. Dai, X. Chen, and H. Wang, “Blockchain
challenges and opportunities: A survey,” International Journal of Web and
Grid Services, vol. 14, no. 4, pp. 352–375, 2018.
**TABLE 2.** Technological integration of Cloud, Edge, Fog, SDN, and AI technologies into Blockchain-IoT prototypes.
|Paper|Year|Integration with Edge/Fog Cloud SDN AI|Platform|Functionality|Mining|Type|Transactions|
|---|---|---|---|---|---|---|---|
|[97]|2014||Bitcoin|Data Market|PoW|Public|UTXOs|
|[82]+|2015||Bitcoin|Data Market|PoW|Public|UTXOs & Scripts|
|[179]|2015||Bitcoin6|Decentralized Data Management|PoW|Public|Data & Queries|
|[87]|2016||Ethereum|D2D Communication & Marketplace|PoW|Private|Smart Contracts|
|[92]+|2016||Bitcoin6|Smart Homes||Public|IoT Data|
|[98]|2017||NameCoin|Privacy-Aware PKI|PoW|Public|Digital Signatures|
|[94]+|2017||Bitcoin6|Smart Homes||Private|IoT Data|
|[88]|2017||Ethereum|D2D Communication|PoW|Public|Smart Contracts|
|[112]|2017||Ethereum|D2D Communication|PoW|Public|Smart Contracts|
|[89]+|2017|||Decentralized SDN (DSDN)|PoW|||
|[96]|2017||MultiChain|D2D Marketplace|Round Robin|Private|Digital Assets|
|[133]+|2018| ||Distributed Cloud|PoSer3|Private|Smart Contracts1|
|[120]|2018| |Ethereum|Authentication|PoW|Public|Smart Contracts|
|[117]|2018||Wireless BC7|Offloading PoW Computations|PoW|||
|[139]|2018||Bitcoin6|Wireless Network Virtualization|PoWR5|Public|Network Data|
|[154]|2018| |Fabric|Vehicular Networks|PBFT|Private|Smart Contracts1|
|[134]+|2018| ||DSDN for Edge Computing|PoW|||
|[156]+|2018| |Ethereum|Smart Cities|PoW|Private|IoT Data Hashes|
|[116]|2018| |Ethereum|Edge Computing Mining|PoW|Private|IoT Data|
|[152]+|2019| ||Intelligent Wireless Networks|PBFT|Consortium|Network Data|
|[153]|2019| ||DSDN Consensus|PBFT|Private|Traffic/Control Data|
|[138]|2019| |Bitcoin6|Wireless Network Virtualization||Public|Network Data|
|[157]|2019||Ethereum|Supply Chain Management|PoW|Private|Smart Contracts1|
|[118]|2019| |Ethereum|Offloading PoW Computations|PoW|Private|Random Data|
|[119]|2019| |Bitcoin6|Data Privacy and Integrity|PoW||IoT Data|
|[131]|2020| ||Securing Traffic Measurements|||Network Statistics|
|[132]+|2020| ||Distributed Network Security|||IoT & Network Data|
|[109]|2020||Their Own|Secure Cloud Scheduler|PoSch2|Public|Task Scheduling Info|
|[121]|2020| |Their Own|Secure Fog Computing|Cooperative||Access Control Lists|
|[180]|2020| |Bitcoin6|DSDN Consensus|BFT4|Private|Traffic/Control Data|
1 Contract execution beside monetary and data transactions. 2 Proof-of-Schedule. 3 Proof-of-Service. 4 Aardvark, RBFT, & PBFT. 5 Proof-of-Wireless-Resources.
6 A different Bitcoin implementation based on the same concepts and architecture. 7 A simulation for a wireless Blockchain implementation.
+ The design or prototype has a hierarchical or multilayer architecture. **Blank Cells:** Information was not provided in the corresponding paper.
[11] M. S. Ali, M. Vecchio, M. Pincheira, K. Dolui, F. Antonelli, and M. H.
Rehmani, “Applications of blockchains in the internet of things: A comprehensive survey,” IEEE Communications Surveys Tutorials, vol. 21,
no. 2, pp. 1676–1717, Secondquarter 2019.
[12] H. Dai, Z. Zheng, and Y. Zhang, “Blockchain for internet of things:
A survey,” CoRR, vol. abs/1906.00245, 2019. [Online]. Available:
[http://arxiv.org/abs/1906.00245](http://arxiv.org/abs/1906.00245)
[13] M. A. Ferrag, M. Derdour, M. Mukherjee, A. Derhab, L. Maglaras, and
H. Janicke, “Blockchain technologies for the internet of things: Research
issues and challenges,” IEEE Internet of Things Journal, vol. 6, no. 2, pp.
2188–2204, 2019.
[14] I. Makhdoom, M. Abolhasan, H. Abbas, and W. Ni, “Blockchain’s
adoption in iot: The challenges, and a way forward,” Journal of
Network and Computer Applications, vol. 125, pp. 251 – 279,
[2019. [Online]. Available: http://www.sciencedirect.com/science/article/](http://www.sciencedirect.com/science/article/pii/S1084804518303473)
[pii/S1084804518303473](http://www.sciencedirect.com/science/article/pii/S1084804518303473)
[15] K. Salah, M. H. U. Rehman, N. Nizamuddin, and A. Al-Fuqaha,
“Blockchain for AI: Review and open research challenges,” IEEE Access,
vol. 7, pp. 10 127–10 149, 2019.
[16] R. Yang, F. R. Yu, P. Si, Z. Yang, and Y. Zhang, “Integrated blockchain
and edge computing systems: A survey, some research issues and challenges,” IEEE Communications Surveys Tutorials, vol. 21, no. 2, pp.
1508–1532, 2019.
[17] L. Lao, Z. Li, S. Hou, B. Xiao, S. Guo, and Y. Yang, “A survey of IoT
applications in blockchain systems: Architecture, consensus, and traffic
modeling,” ACM Comput. Surv., vol. 53, no. 1, Feb. 2020. [Online].
[Available: https://doi.org/10.1145/3372136](https://doi.org/10.1145/3372136)
[18] T. Alharbi, “Deployment of blockchain technology in software defined
networks: A survey,” IEEE Access, vol. 8, pp. 9146–9156, 2020.
[19] W. Li, W. Meng, Z. Liu, and M.-H. Au, “Towards blockchain-based
software-defined networking: Security challenges and solutions,” IEICE
Transactions on Information and Systems, vol. E103.D, no. 2, pp. 196–
203, 2020.
[20] C. Luo, L. Xu, D. Li, and W. Wu, “Edge computing integrated
with blockchain technologies,” in Complexity and Approximation: In
Memory of Ker-I Ko, D.-Z. Du and J. Wang, Eds. Cham: Springer
International Publishing, 2020, pp. 268–288. [Online]. Available:
[https://doi.org/10.1007/978-3-030-41672-0_17](https://doi.org/10.1007/978-3-030-41672-0_17)
[21] S. Xie, Z. Zheng, W. Chen, J. Wu, H.-N. Dai, and M. Imran,
“Blockchain for cloud exchange: A survey,” Computers and Electrical
[Engineering, vol. 81, p. 106526, 2020. [Online]. Available: http:](http://www.sciencedirect.com/science/article/pii/S0045790618332750)
[//www.sciencedirect.com/science/article/pii/S0045790618332750](http://www.sciencedirect.com/science/article/pii/S0045790618332750)
[22] B. K. Mohanta, D. Jena, U. Satapathy, and S. Patnaik, “Survey on
IoT security: Challenges and solution using machine learning, artificial
intelligence and blockchain technology,” Internet of Things, vol. 11,
[p. 100227, 2020. [Online]. Available: http://www.sciencedirect.com/](http://www.sciencedirect.com/science/article/pii/S2542660520300603)
[science/article/pii/S2542660520300603](http://www.sciencedirect.com/science/article/pii/S2542660520300603)
[23] V. Chamola, V. Hassija, V. Gupta, and M. Guizani, “A comprehensive review of the covid-19 pandemic and the role of iot, drones, ai, blockchain,
and 5g in managing its impact,” IEEE Access, vol. 8, pp. 90 225–90 265,
2020.
[24] B. L. R. Stojkoska and K. V. Trivodaliev, “A review of internet of
things for smart home: Challenges and solutions,” Journal of Cleaner
Production, vol. 140, pp. 1454 – 1464, 2017. [Online]. Available:
[http://www.sciencedirect.com/science/article/pii/S095965261631589X](http://www.sciencedirect.com/science/article/pii/S095965261631589X)
[25] B. Bhushan, A. Khamparia, K. M. Sagayam, S. K. Sharma,
M. A. Ahad, and N. C. Debnath, “Blockchain for smart cities:
A review of architectures, integration trends and future research
directions,” Sustainable Cities and Society, vol. 61, p. 102360,
[2020. [Online]. Available: http://www.sciencedirect.com/science/article/](http://www.sciencedirect.com/science/article/pii/S2210670720305813)
[pii/S2210670720305813](http://www.sciencedirect.com/science/article/pii/S2210670720305813)
[26] M. A. Al-Garadi, A. Mohamed, A. Al-Ali, X. Du, I. Ali, and M. Guizani,
“A survey of machine and deep learning methods for internet of things
(iot) security,” IEEE Communications Surveys Tutorials, pp. 1–1, 2020.
[27] S. Bera, S. Misra, and A. V. Vasilakos, “Software-defined networking for
internet of things: A survey,” IEEE Internet of Things Journal, vol. 4,
no. 6, pp. 1994–2008, Dec 2017.
[28] I. Riabi, H. K. B. Ayed, and L. A. Saidane, “A survey on blockchain based
access control for internet of things,” in 2019 15th International Wireless
Communications Mobile Computing Conference (IWCMC), 2019, pp.
502–507.
[29] A. R. Garcia, AI, IoT, Big Data, and Technologies in Digital Economy
with Blockchain at Sustainable Work Satisfaction to Smart Mankind: Ac
cess to 6th Dimension of Human Rights. Cham: Springer International
Publishing, 2020, pp. 83–131.
[30] A. Ekramifard, H. Amintoosi, A. H. Seno, A. Dehghantanha, and R. M.
Parizi, A Systematic Literature Review of Integration of Blockchain and
Artificial Intelligence. Cham: Springer International Publishing, 2020,
pp. 147–160.
[31] S. Akter, K. Michael, M. R. Uddin, G. McCarthy, and M. Rahman,
“Transforming business using digital innovations: the application of
ai, blockchain, cloud and data analytics,” Annals of Operations
[Research, May 2020. [Online]. Available: https://doi.org/10.1007/](https://doi.org/10.1007/s10479-020-03620-w)
[s10479-020-03620-w](https://doi.org/10.1007/s10479-020-03620-w)
[32] H. Baniata and A. Kertesz, “A survey on blockchain-fog integration
approaches,” IEEE Access, vol. 8, pp. 102 657–102 668, 2020.
[33] Y. Hajjaji, W. Boulila, I. R. Farah, I. Romdhani, and A. Hussain, “Big data
and iot-based applications in smart environments: A systematic review,”
Computer Science Review, vol. 39, p. 100318, 2021. [Online]. Available:
[https://www.sciencedirect.com/science/article/pii/S1574013720304184](https://www.sciencedirect.com/science/article/pii/S1574013720304184)
[34] D. M. El-Din, A. E. Hassanein, and E. E. Hassanien, Smart
Environments Concepts, Applications, and Challenges. Cham: Springer
International Publishing, 2021, pp. 493–519. [Online]. Available:
[https://doi.org/10.1007/978-3-030-59338-4_24](https://doi.org/10.1007/978-3-030-59338-4_24)
[35] H. S. Woods, “Smart homes: domestic futurity as infrastructure,”
Cultural Studies, vol. 0, no. 0, pp. 1–24, 2021. [Online]. Available:
[https://doi.org/10.1080/09502386.2021.1895254](https://doi.org/10.1080/09502386.2021.1895254)
[36] A. Molnar, “Smart cities education: An insight into existing drawbacks,”
Telematics and Informatics, vol. 57, p. 101509, 2021. [Online]. Available:
[https://www.sciencedirect.com/science/article/pii/S0736585320301684](https://www.sciencedirect.com/science/article/pii/S0736585320301684)
[37] U. Majeed, L. U. Khan, I. Yaqoob, S. A. Kazmi, K. Salah,
and C. S. Hong, “Blockchain for iot-based smart cities: Recent
advances, requirements, and future challenges,” Journal of Network and
Computer Applications, vol. 181, p. 103007, 2021. [Online]. Available:
[https://www.sciencedirect.com/science/article/pii/S1084804521000345](https://www.sciencedirect.com/science/article/pii/S1084804521000345)
[38] T. Roy, A. Tariq, and S. Dey, “A socio-technical approach for resilient
connected transportation systems in smart cities,” IEEE Transactions on
Intelligent Transportation Systems, pp. 1–10, 2021.
[39] B. Xu and G. Thakur, “Introduction to the special issue on
smart transportation,” GeoInformatica, Mar 2021. [Online]. Available:
[https://doi.org/10.1007/s10707-021-00432-3](https://doi.org/10.1007/s10707-021-00432-3)
[40] X.-F. Shao, W. Liu, Y. Li, H. R. Chaudhry, and X.-G. Yue, “Multistage
implementation framework for smart supply chain management under
industry 4.0,” Technological Forecasting and Social Change, vol. 162,
[p. 120354, 2021. [Online]. Available: https://www.sciencedirect.com/](https://www.sciencedirect.com/science/article/pii/S004016252031180X)
[science/article/pii/S004016252031180X](https://www.sciencedirect.com/science/article/pii/S004016252031180X)
[41] J. Lee, Y. C. Lee, and J. T. Kim, “Migration from the traditional
to the smart factory in the die-casting industry: Novel process data
acquisition and fault detection based on artificial neural network,”
Journal of Materials Processing Technology, vol. 290, p. 116972, 2021.
[[Online]. Available: https://www.sciencedirect.com/science/article/pii/](https://www.sciencedirect.com/science/article/pii/S0924013620303939)
[S0924013620303939](https://www.sciencedirect.com/science/article/pii/S0924013620303939)
[42] M. Pech, J. Vrchota, and J. Bednáˇr, “Predictive maintenance and
intelligent sensors in smart factory: Review,” Sensors, vol. 21, no. 4,
[2021. [Online]. Available: https://www.mdpi.com/1424-8220/21/4/1470](https://www.mdpi.com/1424-8220/21/4/1470)
[43] Z. Guan, X. Lu, W. Yang, L. Wu, N. Wang, and Z. Zhang,
“Achieving efficient and privacy-preserving energy trading based
on blockchain and abe in smart grid,” Journal of Parallel and
Distributed Computing, vol. 147, pp. 34–45, 2021. [Online]. Available:
[https://www.sciencedirect.com/science/article/pii/S0743731520303609](https://www.sciencedirect.com/science/article/pii/S0743731520303609)
[44] J. Sahoo and K. Barrett, “Internet of things (iot) application model for
smart farming,” 2021.
[45] E. M. Ouafiq, A. Elrharras, A. Mehdary, A. Chehri, R. Saadane, and
M. Wahbi, “Iot in smart farming analytics, big data based architecture,”
in Human Centred Intelligent Systems, A. Zimmermann, R. J. Howlett,
and L. C. Jain, Eds. Singapore: Springer Singapore, 2021, pp. 269–279.
[46] N. G. Rezk, E. E.-D. Hemdan, A.-F. Attia, A. El-Sayed, and M. A.
El-Rashidy, “An efficient iot based smart farming system using
machine learning algorithms,” Multimedia Tools and Applications,
[vol. 80, no. 1, pp. 773–797, Jan 2021. [Online]. Available: https:](https://doi.org/10.1007/s11042-020-09740-6)
[//doi.org/10.1007/s11042-020-09740-6](https://doi.org/10.1007/s11042-020-09740-6)
[47] G. Madaan, B. Bhushan, and R. Kumar, Blockchain-Based Cyberthreat
Mitigation Systems for Smart Vehicles and Industrial Automation.
Singapore: Springer Singapore, 2021, pp. 13–32. [Online]. Available:
[https://doi.org/10.1007/978-981-15-7965-3_2](https://doi.org/10.1007/978-981-15-7965-3_2)
[48] S. Nakamoto et al., “Bitcoin: A peer-to-peer electronic cash system,”
[2008. [Online]. Available: https://bitcoin.org/bitcoin.pdf](https://bitcoin.org/bitcoin.pdf)
[49] C. Esposito, M. Ficco, and B. B. Gupta, “Blockchain-based
authentication and authorization for smart city applications,” Information
Processing & Management, vol. 58, no. 2, p. 102468, 2021.
[[Online]. Available: https://www.sciencedirect.com/science/article/pii/](https://www.sciencedirect.com/science/article/pii/S0306457320309584)
[S0306457320309584](https://www.sciencedirect.com/science/article/pii/S0306457320309584)
[50] C. Oham, R. A. Michelin, R. Jurdak, S. S. Kanhere, and S. Jha,
“B-ferl: Blockchain based framework for securing smart vehicles,”
Information Processing & Management, vol. 58, no. 1, p. 102426, 2021.
[[Online]. Available: https://www.sciencedirect.com/science/article/pii/](https://www.sciencedirect.com/science/article/pii/S0306457320309183)
[S0306457320309183](https://www.sciencedirect.com/science/article/pii/S0306457320309183)
[51] A. Hasankhani, S. Mehdi Hakimi, M. Bisheh-Niasar, M. Shafie-khah,
and H. Asadolahi, “Blockchain technology in the future smart grids:
A comprehensive review and frameworks,” International Journal of
Electrical Power & Energy Systems, vol. 129, p. 106811, 2021.
[[Online]. Available: https://www.sciencedirect.com/science/article/pii/](https://www.sciencedirect.com/science/article/pii/S014206152100051X)
[S014206152100051X](https://www.sciencedirect.com/science/article/pii/S014206152100051X)
[52] C. Cachin, “Architecture of the hyperledger blockchain fabric,” in Workshop on Distributed Cryptocurrencies and Consensus Ledgers, vol. 310.
IBM Research, Zurich, 2016.
[53] S. Pongnumkul, C. Siripanpornchana, and S. Thajchayapong, “Performance analysis of private blockchain platforms in varying workloads,”
in 2017 26th International Conference on Computer Communication and
Networks (ICCCN), July 2017, pp. 1–6.
[54] D. Larimer, “Delegated proof-of-stake (dpos),” Bitshare whitepaper,
2014.
[55] M. Castro, B. Liskov et al., “Practical byzantine fault tolerance,” in OSDI,
vol. 99, 1999, pp. 173–186.
[56] D. Kempe and J. Kleinberg, “Protocols and impossibility results for
gossip-based communication mechanisms,” in The 43rd Annual IEEE
Symposium on Foundations of Computer Science, 2002. Proceedings.,
Nov 2002, pp. 471–480.
[57] P. Maymounkov and D. Mazières, “Kademlia: A peer-to-peer information
system based on the xor metric,” in Peer-to-Peer Systems, P. Druschel,
F. Kaashoek, and A. Rowstron, Eds. Berlin, Heidelberg: Springer Berlin
Heidelberg, 2002, pp. 53–65.
[58] E. Ben-Sasson, A. Chiesa, D. Genkin, E. Tromer, and M. Virza, “Snarks
for c: Verifying program executions succinctly and in zero knowledge,”
in Advances in Cryptology – CRYPTO 2013, R. Canetti and J. A. Garay,
Eds. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013, pp. 90–108.
[59] W. Ou, M. Deng, and E. Luo, “A decentralized and anonymous data
transaction scheme based on blockchain and zero-knowledge proof in
vehicle networking (workshop paper),” in Collaborative Computing:
Networking, Applications and Worksharing, X. Wang, H. Gao, M. Iqbal,
and G. Min, Eds. Cham: Springer International Publishing, 2019, pp.
712–726.
[60] Z. Peng, H. Wu, B. Xiao, and S. Guo, “Vql: Providing query efficiency
and data authenticity in blockchain systems,” in 2019 IEEE 35th International Conference on Data Engineering Workshops (ICDEW), April
2019, pp. 1–6.
[61] V. Buterin et al., “A next-generation smart contract and decentralized
application platform,” white paper, vol. 3, no. 37, 2014.
[[62] N. Szabo, “Smart contracts,” 1994. [Online]. Available: http://www.](http://www.fon.hum.uva.nl/rob/Courses/InformationInSpeech/CDROM/Literature/LOTwinterschool2006/szabo.best.vwh.net/smart.contracts.html)
[fon.hum.uva.nl/rob/Courses/InformationInSpeech/CDROM/Literature/](http://www.fon.hum.uva.nl/rob/Courses/InformationInSpeech/CDROM/Literature/LOTwinterschool2006/szabo.best.vwh.net/smart.contracts.html)
[LOTwinterschool2006/szabo.best.vwh.net/smart.contracts.html](http://www.fon.hum.uva.nl/rob/Courses/InformationInSpeech/CDROM/Literature/LOTwinterschool2006/szabo.best.vwh.net/smart.contracts.html)
[63] G. Wood et al., “Ethereum: A secure decentralised generalised transaction ledger,” Ethereum project yellow paper, vol. 151, no. 2014, pp. 1–32,
2014.
[64] A. Hafid, A. S. Hafid, and M. Samih, “Scaling blockchains: A comprehensive survey,” IEEE Access, vol. 8, pp. 125 244–125 262, 2020.
[65] A. Biryukov, D. Khovratovich, and I. Pustogarov, “Deanonymisation
of clients in bitcoin p2p network,” in Proceedings of the 2014 ACM
SIGSAC Conference on Computer and Communications Security, ser.
CCS ’14. New York, NY, USA: Association for Computing Machinery,
[2014, p. 15–29. [Online]. Available: https://doi.org/10.1145/2660267.](https://doi.org/10.1145/2660267.2660379)
[2660379](https://doi.org/10.1145/2660267.2660379)
[66] S. King and S. Nadal, “Ppcoin: Peer-to-peer crypto-currency with proofof-stake,” self-published paper, vol. 19, August 2012.
[67] B. W. Akins, J. L. Chapman, and J. M. Gordon, “A whole new world: Income tax considerations of the bitcoin economy,” Pittsburgh Tax Review,
vol. 12, p. 25, 2014-2015.
[68] W. Meng, E. W. Tischhauser, Q. Wang, Y. Wang, and J. Han, “When intrusion detection meets blockchain technology: A review,” IEEE Access,
vol. 6, pp. 10 179–10 188, 2018.
[69] C. Pirtle and J. Ehrenfeld, “Blockchain for healthcare: The next
generation of medical records?” Journal of Medical Systems, vol. 42,
[no. 9, p. 172, 2018. [Online]. Available: https://doi.org/10.1007/](https://doi.org/10.1007/s10916-018-1025-3)
[s10916-018-1025-3](https://doi.org/10.1007/s10916-018-1025-3)
[70] A. Ekblaw, A. Azaria, J. D. Halamka, and A. Lippman, “A case study
for blockchain in healthcare: “MedRec” prototype for electronic health
records and medical research data,” in Proceedings of IEEE Open and
Big Data Conference, vol. 13. IEEE, 2016, p. 13.
[71] C. Noyes, “Bitav: Fast anti-malware by distributed blockchain consensus
and feedforward scanning,” CoRR, vol. abs/1601.01405, 2016. [Online].
[Available: http://arxiv.org/abs/1601.01405](http://arxiv.org/abs/1601.01405)
[72] Q. Lu and X. Xu, “Adaptable blockchain-based systems: A case study for
product traceability,” IEEE Software, vol. 34, no. 6, pp. 21–27, November
2017.
[73] M. Seifelnasr, H. S. Galal, and A. M. Youssef, “Scalable open-vote
network on ethereum,” Cryptology ePrint Archive, Report 2020/033,
[2020. [Online]. Available: https://eprint.iacr.org/2020/033](https://eprint.iacr.org/2020/033)
[74] X. Yang, X. Yi, S. Nepal, and F. Han, “Decentralized voting: A
self-tallying voting system using a smart contract on the ethereum
blockchain,” in Web Information Systems Engineering – WISE 2018,
H. Hacid, W. Cellary, H. Wang, H.-Y. Paik, and R. Zhou, Eds. Cham:
Springer International Publishing, 2018, pp. 18–35.
[75] D. Khoury, E. F. Kfoury, A. Kassem, and H. Harb, “Decentralized voting
platform based on ethereum blockchain,” in 2018 IEEE International
Multidisciplinary Conference on Engineering Technology (IMCET), Nov
2018, pp. 1–6.
[76] F. Þ. Hjálmarsson, G. K. Hreiðarsson, M. Hamdaqa, and G. Hjálmtýsson,
“Blockchain-based e-voting system,” in 2018 IEEE 11th International
Conference on Cloud Computing (CLOUD), July 2018, pp. 983–986.
[77] M. Sharples and J. Domingue, “The blockchain and kudos: A distributed
system for educational record, reputation and reward,” in Adaptive and
Adaptable Learning, K. Verbert, M. Sharples, and T. Klobuˇcar, Eds.
Cham: Springer International Publishing, 2016, pp. 490–496.
[78] O. Novo, “Blockchain meets iot: An architecture for scalable access
management in iot,” IEEE Internet of Things Journal, vol. 5, no. 2, pp.
1184–1195, 2018.
[79] A. Whitmore, A. Agarwal, and L. Da Xu, “The internet of things–
a survey of topics and trends,” Information Systems Frontiers,
[vol. 17, no. 2, pp. 261–274, 2015. [Online]. Available: https:](https://doi.org/10.1007/s10796-014-9489-2)
[//doi.org/10.1007/s10796-014-9489-2](https://doi.org/10.1007/s10796-014-9489-2)
[80] Zhihong Yang, Yingzhao Yue, Yu Yang, Yufeng Peng, Xiaobo Wang,
and Wenji Liu, “Study and application on the architecture and key
technologies for IOT,” in 2011 International Conference on Multimedia
Technology, July 2011, pp. 747–751.
[81] J. Xie and S. M. Shugan, “Electronic tickets, smart cards, and online prepayments: When and how to advance sell,” Marketing Science, vol. 20,
no. 3, pp. 219–243, 2001.
[82] Y. Zhang and J. Wen, “An IoT electric business model based on the protocol of bitcoin,” in 2015 18th International Conference on Intelligence
in Next Generation Networks, Feb 2015, pp. 184–191.
[83] Y. Zhang and J. Wen, “The IoT electric business model: Using blockchain
technology for the internet of things,” Peer-to-Peer Networking and
Applications, vol. 10, no. 4, pp. 983–994, 2017. [Online]. Available:
[https://doi.org/10.1007/s12083-016-0456-1](https://doi.org/10.1007/s12083-016-0456-1)
[84] D. Sonstebo, S. Ivancheglo, D. Schiener, and D. S. Popov, “IOTA,” 2015.
[[Online]. Available: https://www.iota.org/get-started/what-is-iota](https://www.iota.org/get-started/what-is-iota)
[85] B. Shabandri and P. Maheshwari, “Enhancing iot security and privacy using distributed ledgers with iota and the tangle,” in 2019 6th International
Conference on Signal Processing and Integrated Networks (SPIN), 2019,
pp. 1069–1075.
[86] S. Panikkar, S. Nair, P. Brody, and V. Pureswaran, “ADEPT: An IoT Prac[titioner Perspective,” 2015. [Online]. Available: https://www.coindesk.](https://www.coindesk.com/ibm-reveals-proof-concept-blockchain-powered-internet-things)
[com/ibm-reveals-proof-concept-blockchain-powered-internet-things](https://www.coindesk.com/ibm-reveals-proof-concept-blockchain-powered-internet-things)
[87] A. Bahga and V. K. Madisetti, “Blockchain platform for industrial
internet of things,” Journal of Software Engineering and Applications,
vol. 9, no. 10, pp. 533–546, 2016.
[88] S. Huh, S. Cho, and S. Kim, “Managing iot devices using blockchain
platform,” in 2017 19th International Conference on Advanced Communication Technology (ICACT), Feb 2017, pp. 464–467.
[89] P. K. Sharma, S. Singh, Y. Jeong, and J. H. Park, “Distblocknet: A distributed blockchains-based secure SDN architecture for IoT networks,”
IEEE Communications Magazine, vol. 55, no. 9, pp. 78–85, Sep. 2017.
[90] G. Ateniese, M. T. Chiaramonte, D. Treat, B. Magri, and D. Venturi,
“Hybrid blockchain,” May 28 2019, uS Patent 10,305,875.
[91] M. Samaniego and R. Deters, “Blockchain as a service for iot,” in
2016 IEEE International Conference on Internet of Things (iThings) and
IEEE Green Computing and Communications (GreenCom) and IEEE
Cyber, Physical and Social Computing (CPSCom) and IEEE Smart Data
(SmartData), Dec 2016, pp. 433–436.
[92] A. Dorri, S. S. Kanhere, and R. Jurdak, “Blockchain in internet of things:
Challenges and solutions,” 2016.
[93] L. D. Xu, W. He, and S. Li, “Internet of things in industries: A survey,”
IEEE Transactions on Industrial Informatics, vol. 10, no. 4, pp. 2233–
2243, Nov 2014.
[94] A. Dorri, S. S. Kanhere, R. Jurdak, and P. Gauravaram, “Blockchain for
iot security and privacy: The case study of a smart home,” in 2017 IEEE
International Conference on Pervasive Computing and Communications
Workshops (PerCom Workshops), March 2017, pp. 618–623.
[95] A. Dorri, S. S. Kanhere, and R. Jurdak, “Towards an optimized
blockchain for iot,” in 2017 IEEE/ACM Second International Conference
on Internet-of-Things Design and Implementation (IoTDI), 2017, pp.
173–178.
[96] J. J. Sikorski, J. Haughton, and M. Kraft, “Blockchain technology
in the chemical industry: Machine-to-machine electricity market,”
Applied Energy, vol. 195, pp. 234 – 246, 2017. [Online]. Available:
[http://www.sciencedirect.com/science/article/pii/S0306261917302672](http://www.sciencedirect.com/science/article/pii/S0306261917302672)
[97] D. Wörner and T. von Bomhard, “When your sensor earns money:
Exchanging data for cash with bitcoin,” in Proceedings of the 2014 ACM
International Joint Conference on Pervasive and Ubiquitous Computing:
Adjunct Publication, ser. UbiComp ’14 Adjunct. New York, NY, USA:
Association for Computing Machinery, 2014, p. 295–298. [Online].
[Available: https://doi.org/10.1145/2638728.2638786](https://doi.org/10.1145/2638728.2638786)
[98] L. Axon. and M. Goldsmith., “Pb-pki: A privacy-aware blockchainbased pki,” in Proceedings of the 14th International Joint Conference
on e-Business and Telecommunications - Volume 4: SECRYPT, (ICETE
2017), INSTICC. SciTePress, 2017, pp. 311–318.
[99] C. Fromknecht, D. Velicanu, and S. Yakoubov, “A decentralized
public key infrastructure with identity retention.” IACR Cryptol.
[ePrint Arch., vol. 2014, p. 803, 2014. [Online]. Available: https:](https://allquantor.at/blockchainbib/pdf/fromknecht2014decentralized.pdf)
[//allquantor.at/blockchainbib/pdf/fromknecht2014decentralized.pdf](https://allquantor.at/blockchainbib/pdf/fromknecht2014decentralized.pdf)
[100] H. Shrobe, D. L. Shrier, and A. Pentland, CHAPTER 15 Enigma:
Decentralized Computation Platform with Guaranteed Privacy. MITP,
[2018, pp. 425–454. [Online]. Available: https://ieeexplore.ieee.org/](https://ieeexplore.ieee.org/document/8333139)
[document/8333139](https://ieeexplore.ieee.org/document/8333139)
[101] Y. Hajjaji, W. Boulila, I. R. Farah, I. Romdhani, and A. Hussain, “Big data
and iot-based applications in smart environments: A systematic review,”
Computer Science Review, vol. 39, p. 100318, 2021. [Online]. Available:
[https://www.sciencedirect.com/science/article/pii/S1574013720304184](https://www.sciencedirect.com/science/article/pii/S1574013720304184)
[102] M. Kumar, K. Dubey, and R. Pandey, “Evolution of emerging computing paradigm cloud to fog: Applications, limitations and research
challenges,” in 2021 11th International Conference on Cloud Computing,
Data Science Engineering (Confluence), 2021, pp. 257–261.
[103] J. Vijaykumar, P. Rajkumar, and S. Rakoth Kandan, “Fog computing
based secured mobile cloud for cumulative integrity in smart
environment and internet of things,” Materials Today: Proceedings, 2021.
[[Online]. Available: https://www.sciencedirect.com/science/article/pii/](https://www.sciencedirect.com/science/article/pii/S2214785320408521)
[S2214785320408521](https://www.sciencedirect.com/science/article/pii/S2214785320408521)
[104] Z. Lv, L. Qiao, M. S. Hossain, and B. J. Choi, “Analysis of using
blockchain to protect the privacy of drone big data,” IEEE Network,
vol. 35, no. 1, pp. 44–49, 2021.
19
-----
Ebrahim *et al.* : Blockchain as privacy and security solution for smart environments: A Survey
[105] S. Zhang, S. Zhang, X. Chen, and X. Huo, “Cloud computing research
and development trend,” in 2010 Second International Conference on
Future Networks, Jan 2010, pp. 93–97.
[106] Y. Jadeja and K. Modi, “Cloud computing - concepts, architecture and
challenges,” in 2012 International Conference on Computing, Electronics
and Electrical Technologies (ICCEET), March 2012, pp. 877–880.
[107] M. Zhou, R. Zhang, D. Zeng, and W. Qian, “Services in the cloud
computing era: A survey,” in 2010 4th International Universal Communication Symposium, Oct 2010, pp. 40–46.
[108] W. Ma and J. Zhang, “The survey and research on application of cloud
computing,” in 2012 7th International Conference on Computer Science
Education (ICCSE), July 2012, pp. 203–206.
[109] A. Wilczy´nski and J. Kołodziej, “Modelling and simulation of
security-aware task scheduling in cloud computing based on blockchain
technology,” Simulation Modelling Practice and Theory, vol. 99,
[p. 102038, 2020. [Online]. Available: http://www.sciencedirect.com/](http://www.sciencedirect.com/science/article/pii/S1569190X19301698)
[science/article/pii/S1569190X19301698](http://www.sciencedirect.com/science/article/pii/S1569190X19301698)
[110] C. Chen, M. Lin, and C. Liu, “Edge computing gateway of the industrial
internet of things using multiple collaborative microcontrollers,” IEEE
Network, vol. 32, no. 1, pp. 24–32, Jan 2018.
[111] S. Jeong, O. Simeone, and J. Kang, “Mobile edge computing via a UAVmounted cloudlet: Optimization of bit allocation and path planning,”
IEEE Transactions on Vehicular Technology, vol. 67, no. 3, pp. 2049–
2063, March 2018.
[112] A. Kapitonov, S. Lonshakov, A. Krupenkin, and I. Berman, “Blockchainbased protocol of autonomous business activity for multi-agent systems
consisting of UAVs,” in 2017 Workshop on Research, Education and
Development of Unmanned Aerial Systems (RED-UAS), Oct 2017, pp.
84–89.
[113] A. Kumar, A. Kundu, C. A. Pickover, and K. Weldemariam, “Unmanned aerial vehicle data management,” Sep. 20 2018, uS Patent App.
15/463,147.
[114] M. Aazam and E. Huh, “Fog computing and smart gateway based
communication for cloud of things,” in 2014 International Conference
on Future Internet of Things and Cloud, Aug 2014, pp. 464–470.
[115] N. Abbas, Y. Zhang, A. Taherkordi, and T. Skeie, “Mobile edge computing: A survey,” IEEE Internet of Things Journal, vol. 5, no. 1, pp.
450–465, Feb 2018.
[116] Z. Xiong, Y. Zhang, D. Niyato, P. Wang, and Z. Han, “When mobile
blockchain meets edge computing,” IEEE Communications Magazine,
vol. 56, no. 8, pp. 33–39, August 2018.
[117] M. Liu, F. R. Yu, Y. Teng, V. C. M. Leung, and M. Song, “Computation
offloading and content caching in wireless blockchain networks with
mobile edge computing,” IEEE Transactions on Vehicular Technology,
vol. 67, no. 11, pp. 11 008–11 021, Nov 2018.
[118] Z. Xiong, S. Feng, W. Wang, D. Niyato, P. Wang, and Z. Han, “Cloud/fog
computing resource management and pricing for blockchain networks,”
IEEE Internet of Things Journal, vol. 6, no. 3, pp. 4585–4600, June 2019.
[119] S. Tuli, R. Mahmud, S. Tuli, and R. Buyya, “Fogbus: A blockchain-based
lightweight framework for edge and fog computing,” Journal of Systems
and Software, vol. 154, pp. 22 – 36, 2019. [Online]. Available:
[http://www.sciencedirect.com/science/article/pii/S0164121219300822](http://www.sciencedirect.com/science/article/pii/S0164121219300822)
[120] R. Almadhoun, M. Kadadha, M. Alhemeiri, M. Alshehhi, and K. Salah,
“A user authentication scheme of iot devices using blockchain-enabled
fog nodes,” in 2018 IEEE/ACS 15th International Conference on Computer Systems and Applications (AICCSA), Oct 2018, pp. 1–8.
[121] D. Wu and N. Ansari, “A cooperative computing strategy for blockchainsecured fog computing,” IEEE Internet of Things Journal, pp. 1–1, 2020.
[122] S. L. Aljohani and M. J. F. Alenazi, “Mpresisdn: Multipath resilient
routing scheme for sdn-enabled smart cities networks,” Applied
[Sciences, vol. 11, no. 4, 2021. [Online]. Available: https://www.mdpi.](https://www.mdpi.com/2076-3417/11/4/1900)
[com/2076-3417/11/4/1900](https://www.mdpi.com/2076-3417/11/4/1900)
[123] V. Balasubramanian, M. Aloqaily, and M. Reisslein, “An sdn architecture
for time sensitive industrial iot,” Computer Networks, vol. 186,
[p. 107739, 2021. [Online]. Available: https://www.sciencedirect.com/](https://www.sciencedirect.com/science/article/pii/S1389128620313256)
[science/article/pii/S1389128620313256](https://www.sciencedirect.com/science/article/pii/S1389128620313256)
[124] A. J. Kadhim and J. I. Naser, “Proactive load balancing mechanism
for fog computing supported by parked vehicles in iov-sdn,” China
Communications, vol. 18, no. 2, pp. 271–289, 2021.
[125] L.-A. Phan, D.-T. Nguyen, M. Lee, D.-H. Park, and T. Kim,
“Dynamic fog-to-fog offloading in sdn-based fog computing systems,”
Future Generation Computer Systems, vol. 117, pp. 486–497, 2021.
[[Online]. Available: https://www.sciencedirect.com/science/article/pii/](https://www.sciencedirect.com/science/article/pii/S0167739X20330831)
[S0167739X20330831](https://www.sciencedirect.com/science/article/pii/S0167739X20330831)
20
[126] A. Rahman, M. J. Islam, A. Montieri, M. K. Nasir, M. M. Reza, S. S.
Band, A. Pescape, M. Hasan, M. Sookhak, and A. Mosavi, “Smartblocksdn: An optimized blockchain-sdn framework for resource management
in iot,” IEEE Access, vol. 9, pp. 28 361–28 376, 2021.
[127] N. McKeown, T. Anderson, H. Balakrishnan, G. Parulkar, L. Peterson,
J. Rexford, S. Shenker, and J. Turner, “Openflow: Enabling innovation
in campus networks,” SIGCOMM Comput. Commun. Rev., vol. 38,
[no. 2, p. 69–74, Mar. 2008. [Online]. Available: https://doi.org/10.1145/](https://doi.org/10.1145/1355734.1355746)
[1355734.1355746](https://doi.org/10.1145/1355734.1355746)
[128] S. Badotra and S. N. Panda, “Software-defined networking: A novel
approach to networks,” in Handbook of Computer Networks and
Cyber Security: Principles and Paradigms, B. B. Gupta, G. M.
Perez, D. P. Agrawal, and D. Gupta, Eds. Cham: Springer
International Publishing, 2020, pp. 313–339. [Online]. Available:
[https://doi.org/10.1007/978-3-030-22277-2_13](https://doi.org/10.1007/978-3-030-22277-2_13)
[129] Y. Jararweh, M. Al-Ayyoub, A. Darabseh, E. Benkhelifa, M. Vouk,
and A. Rindos, “SDIoT: a software defined based internet of
things framework,” Journal of Ambient Intelligence and Humanized
Computing, vol. 6, no. 4, pp. 453–461, 2015. [Online]. Available:
[https://doi.org/10.1007/s12652-015-0290-y](https://doi.org/10.1007/s12652-015-0290-y)
[130] K. Kalkan and S. Zeadally, “Securing internet of things with software
defined networking,” IEEE Communications Magazine, vol. 56, no. 9,
pp. 186–192, Sep. 2018.
[131] L. Huo, D. Jiang, S. Qi, and L. Miao, “A blockchain-based
security traffic measurement approach to software defined networking,”
[Mobile Networks and Applications, 2020. [Online]. Available: https:](https://doi.org/10.1007/s11036-019-01420-6)
[//doi.org/10.1007/s11036-019-01420-6](https://doi.org/10.1007/s11036-019-01420-6)
[132] D. V. Medhane, A. K. Sangaiah, M. S. Hossain, G. Muhammad, and
J. Wang, “Blockchain-enabled distributed security framework for next
generation iot: An edge-cloud and software defined network integrated
approach,” IEEE Internet of Things Journal, pp. 1–1, 2020.
[133] P. K. Sharma, M. Chen, and J. H. Park, “A software defined fog node
based distributed blockchain cloud architecture for iot,” IEEE Access,
vol. 6, pp. 115–124, 2018.
[134] P. K. Sharma, S. Rathore, Y. Jeong, and J. H. Park, “SoftEdgeNet:
SDN based energy-efficient distributed network architecture for edge
computing,” IEEE Communications Magazine, vol. 56, no. 12, pp. 104–
111, December 2018.
[135] I. D. Alvarenga, G. A. F. Rebello, and O. C. M. B. Duarte, “Securing
configuration management and migration of virtual network functions
using blockchain,” in NOMS 2018 - 2018 IEEE/IFIP Network Operations
and Management Symposium, April 2018, pp. 1–9.
[136] T. Wood, K. K. Ramakrishnan, J. Hwang, G. Liu, and W. Zhang, “Toward
a software-based network: integrating software defined networking and
network function virtualization,” IEEE Network, vol. 29, no. 3, pp. 36–
41, May 2015.
[137] D. Schwartz, N. Youngs, A. Britto et al., “The ripple protocol consensus
algorithm,” Ripple Labs Inc White Paper, vol. 5, no. 8, 2014.
[138] D. B. Rawat, “Fusion of software defined networking, edge computing,
and blockchain technology for wireless network virtualization,” IEEE
Communications Magazine, vol. 57, no. 10, pp. 50–55, 2019.
[139] D. B. Rawat and A. Alshaikhi, “Leveraging distributed blockchain-based
scheme for wireless network virtualization with security and qos constraints,” in 2018 International Conference on Computing, Networking
and Communications (ICNC), 2018, pp. 332–336.
[140] R. P. França, A. C. B. Monteiro, R. Arthur, and Y. Iano, An
Overview of the Machine Learning Applied in Smart Cities. Cham:
Springer International Publishing, 2021, pp. 91–111. [Online]. Available:
[https://doi.org/10.1007/978-3-030-60922-1_5](https://doi.org/10.1007/978-3-030-60922-1_5)
[141] K. Rajkumar, M. Ramachandran, F. Al-Turjman, and R. Patan, “A
reinforcement learning optimization for future smart cities using software
defined networking,” International Journal of Machine Learning and
[Cybernetics, Jan 2021. [Online]. Available: https://doi.org/10.1007/](https://doi.org/10.1007/s13042-020-01245-w)
[s13042-020-01245-w](https://doi.org/10.1007/s13042-020-01245-w)
[142] S. S. Panda and D. Jena, “Decentralizing ai using blockchain technology
for secure decision making,” in Advances in Machine Learning and
Computational Intelligence, S. Patnaik, X.-S. Yang, and I. K. Sethi, Eds.
Singapore: Springer Singapore, 2021, pp. 687–694.
[143] W. Serrano, “The blockchain random neural network for cybersecure
iot and 5g infrastructure in smart cities,” Journal of Network and
Computer Applications, vol. 175, p. 102909, 2021. [Online]. Available:
[https://www.sciencedirect.com/science/article/pii/S1084804520303696](https://www.sciencedirect.com/science/article/pii/S1084804520303696)
[144] R. Kumar, W. Wang, J. Kumar, T. Yang, A. Khan, W. Ali, and
I. Ali, “An integration of blockchain and ai for secure data sharing
-----
Ebrahim *et al.* : Blockchain as privacy and security solution for smart environments: A Survey
and detection of ct images for the hospitals,” Computerized Medical
Imaging and Graphics, vol. 87, p. 101812, 2021. [Online]. Available:
[https://www.sciencedirect.com/science/article/pii/S0895611120301075](https://www.sciencedirect.com/science/article/pii/S0895611120301075)
[145] S. Aich, N. K. Sinai, S. Kumar, M. Ali, Y. R. Choi, M. I. Joo, and
H. C. Kim, “Protecting personal healthcare record using blockchain
federated learning technologies,” in 2021 23rd International Conference
on Advanced Communication Technology (ICACT), 2021, pp. 109–112.
[146] F. Jamil, N. Iqbal, Imran, S. Ahmad, and D. Kim, “Peer-to-peer energy
trading mechanism based on blockchain and machine learning for sustainable electrical power supply in smart grid,” IEEE Access, vol. 9, pp.
39 193–39 217, 2021.
[147] Z. Shahbazi and Y.-C. Byun, “Integration of blockchain, iot and machine
learning for multistage quality control and enhancing security in smart
manufacturing,” Sensors, vol. 21, no. 4, 2021. [Online]. Available:
[https://www.mdpi.com/1424-8220/21/4/1467](https://www.mdpi.com/1424-8220/21/4/1467)
[148] D. Nguyen, M. Ding, P. N. Pathirana, and A. Seneviratne, “Blockchain
and ai-based solutions to combat coronavirus (covid-19)-like epidemics:
A survey,” Apr 2020. [Online]. Available: [https://www.techrxiv.](https://www.techrxiv.org/articles/preprint/Blockchain_and_AI-based_Solutions_to_Combat_Coronavirus_COVID-19_-like_Epidemics_A_Survey/12121962/1)
[org/articles/preprint/Blockchain_and_AI-based_Solutions_to_Combat_](https://www.techrxiv.org/articles/preprint/Blockchain_and_AI-based_Solutions_to_Combat_Coronavirus_COVID-19_-like_Epidemics_A_Survey/12121962/1)
[Coronavirus_COVID-19_-like_Epidemics_A_Survey/12121962/1](https://www.techrxiv.org/articles/preprint/Blockchain_and_AI-based_Solutions_to_Combat_Coronavirus_COVID-19_-like_Epidemics_A_Survey/12121962/1)
[149] T. P. Mashamba-Thompson and E. D. Crayton, “Blockchain and artificial
intelligence technology for novel coronavirus disease-19 self-testing,”
Diagnostics, vol. 10, no. 4, p. 198, Apr 2020. [Online]. Available:
[http://dx.doi.org/10.3390/diagnostics10040198](http://dx.doi.org/10.3390/diagnostics10040198)
[150] A. Kumari, R. Gupta, S. Tanwar, and N. Kumar, “Blockchain
and AI amalgamation for energy cloud management: Challenges,
solutions, and future directions,” Journal of Parallel and Distributed
Computing, vol. 143, pp. 148 – 166, 2020. [Online]. Available:
[http://www.sciencedirect.com/science/article/pii/S074373152030277X](http://www.sciencedirect.com/science/article/pii/S074373152030277X)
[151] Y. Dai, D. Xu, S. Maharjan, G. Qiao, and Y. Zhang, “Artificial intelligence
empowered edge computing and caching for internet of vehicles,” IEEE
Wireless Communications, vol. 26, no. 3, pp. 12–18, June 2019.
[152] Y. Dai, D. Xu, S. Maharjan, Z. Chen, Q. He, and Y. Zhang, “Blockchain
and deep reinforcement learning empowered intelligent 5g beyond,”
IEEE Network, vol. 33, no. 3, pp. 10–17, May 2019.
[153] C. Qiu, F. R. Yu, H. Yao, C. Jiang, F. Xu, and C. Zhao, “Blockchain-based
software-defined industrial internet of things: A dueling deep *Q* -learning
approach,” IEEE Internet of Things Journal, vol. 6, no. 3, pp. 4627–4639,
June 2019.
[154] C. Qiu, F. R. Yu, F. Xu, H. Yao, and C. Zhao, “Blockchain-based
distributed software-defined vehicular networks via deep q-learning,”
in Proceedings of the 8th ACM Symposium on Design and Analysis
of Intelligent Vehicular Networks and Applications, ser. DIVANet’18.
New York, NY, USA: Association for Computing Machinery, 2018, p.
[8–14. [Online]. Available: https://doi.org/10.1145/3272036.3272040](https://doi.org/10.1145/3272036.3272040)
[155] P. K. Sharma, S. Y. Moon, and J. H. Park, “Block-vn: A distributed
blockchain based vehicular network architecture in smart city.” Journal
of information processing systems, vol. 13, no. 1, pp. 184–195, 2017.
[[Online]. Available: http://jips-k.org/q.jips?cp=pp&pn=440](http://jips-k.org/q.jips?cp=pp&pn=440)
[156] P. K. Sharma and J. H. Park, “Blockchain based hybrid network
architecture for the smart city,” Future Generation Computer Systems,
[vol. 86, pp. 650 – 655, 2018. [Online]. Available: http://www.](http://www.sciencedirect.com/science/article/pii/S0167739X1830431X)
[sciencedirect.com/science/article/pii/S0167739X1830431X](http://www.sciencedirect.com/science/article/pii/S0167739X1830431X)
[157] P. K. Sharma, N. Kumar, and J. H. Park, “Blockchain-based distributed
framework for automotive industry in a smart city,” IEEE Transactions
on Industrial Informatics, vol. 15, no. 7, pp. 4197–4205, July 2019.
[158] V. Sapovadia, “Chapter 13 - legal issues in cryptocurrency,” in
Handbook of Digital Currency, D. L. K. Chuen, Ed. San Diego:
[Academic Press, 2015, pp. 253 – 266. [Online]. Available: http:](http://www.sciencedirect.com/science/article/pii/B9780128021170000138)
[//www.sciencedirect.com/science/article/pii/B9780128021170000138](http://www.sciencedirect.com/science/article/pii/B9780128021170000138)
[159] N. N. Emelianova and A. A. Dementyev, “Cryptocurrency, taxation
and international law: Contemporary aspects,” in Artificial Intelligence:
Anthropogenic Nature vs. Social Origin, E. G. Popkova and B. S. Sergi,
Eds. Cham: Springer International Publishing, 2020, pp. 725–731.
[160] A. A. Omololu, “Legal ramifications of blockchain technology,” in
Decentralised Internet of Things: A Blockchain Perspective, M. A.
Khan, M. T. Quasim, F. Algarni, and A. Alharthi, Eds. Cham: Springer
International Publishing, 2020, pp. 217–230. [Online]. Available:
[https://doi.org/10.1007/978-3-030-38677-1_10](https://doi.org/10.1007/978-3-030-38677-1_10)
[161] V. S. K. Balagurusamy, C. Cabral, S. Coomaraswamy, E. Delamarche,
D. N. Dillenberger, G. Dittmann, D. Friedman, O. Gökçe, N. Hinds,
J. Jelitto, A. Kind, A. D. Kumar, F. Libsch, J. W. Ligman, S. Munetoh,
C. Narayanaswami, A. Narendra, A. Paidimarri, M. A. P. Delgado,
J. Rayfield, C. Subramanian, and R. Vaculin, “Crypto anchors,” IBM
Journal of Research and Development, vol. 63, no. 2/3, pp. 4:1–4:12,
2019.
[162] C. S. Wright and S. Savanah, “Operating system for blockchain iot
devices,” May 23 2019, uS Patent App. 16/097,497.
[163] A. Poelstra et al., “Distributed consensus from proof of stake is impossible,” Self-published Paper, 2014.
[164] P. B, M. I, and M. M, “Migration from pow to pos for ethereum,” Oct
2019. [Online]. Available: engrxiv.org/ad8en
[165] F. Saleh, “Blockchain without waste: Proof-of-stake,” Available at SSRN
[3183935, 2020. [Online]. Available: https://ssrn.com/abstract=3183935](https://ssrn.com/abstract=3183935)
[166] D. Larimer, “EOS.IO Technical White Paper v2,” 2018.
[Online]. Available: [https://github.com/EOSIO/Documentation/blob/](https://github.com/EOSIO/Documentation/blob/master/TechnicalWhitePaper.md)
[master/TechnicalWhitePaper.md](https://github.com/EOSIO/Documentation/blob/master/TechnicalWhitePaper.md)
[167] C. Li and B. Palanisamy, “Comparison of decentralization in DPoS and
PoW blockchains,” 2020.
[168] ——, “Incentivized blockchain-based social media platforms: A case
study of steemit,” in Proceedings of the 10th ACM Conference on
Web Science, ser. WebSci ’19. New York, NY, USA: Association
for Computing Machinery, 2019, p. 145–154. [Online]. Available:
[https://doi.org/10.1145/3292522.3326041](https://doi.org/10.1145/3292522.3326041)
[169] M. Vukoli´c, “The quest for scalable blockchain fabric: Proof-of-work vs.
bft replication,” in Open Problems in Network Security, J. Camenisch
and D. Kesdo˘gan, Eds. Cham: Springer International Publishing, 2016,
pp. 112–125.
[170] J. Garay, A. Kiayias, and N. Leonardos, “The bitcoin backbone protocol:
Analysis and applications,” in Advances in Cryptology - EUROCRYPT
2015, E. Oswald and M. Fischlin, Eds. Berlin, Heidelberg: Springer
Berlin Heidelberg, 2015, pp. 281–310.
[171] A. Beniiche, “A study of blockchain oracles,” 2020.
[172] H. Al-Breiki, M. H. U. Rehman, K. Salah, and D. Svetinovic, “Trustworthy blockchain oracles: Review, comparison, and open research challenges,” IEEE Access, vol. 8, pp. 85 675–85 685, 2020.
[173] I.-C. Lin and T.-C. Liao, “A survey of blockchain security issues and
challenges.” IJ Network Security, vol. 19, no. 5, pp. 653–659, 2017.
[174] A. A. Siyal, A. Z. Junejo, M. Zawish, K. Ahmed, A. Khalil, and
G. Soursou, “Applications of blockchain technology in medicine and
healthcare: Challenges and future perspectives,” Cryptography, vol. 3,
[no. 1, p. 3, Jan 2019. [Online]. Available: http://dx.doi.org/10.3390/](http://dx.doi.org/10.3390/cryptography3010003)
[cryptography3010003](http://dx.doi.org/10.3390/cryptography3010003)
[175] V. Garcia-Font, “Blockchain: Opportunities and challenges in the
educational context,” in Engineering Data-Driven Adaptive Trustbased e-Assessment Systems: Challenges and Infrastructure Solutions,
D. Baneres, M. E. Rodríguez, and A. E. Guerrero-Roldán, Eds.
Cham: Springer International Publishing, 2020, pp. 133–157. [Online].
[Available: https://doi.org/10.1007/978-3-030-29326-0_7](https://doi.org/10.1007/978-3-030-29326-0_7)
[176] S. S. Kamble, A. Gunasekaran, and R. Sharma, “Modeling the blockchain
enabled traceability in agriculture supply chain,” International Journal of
Information Management, vol. 52, p. 101967, 2020. [Online]. Available:
[http://www.sciencedirect.com/science/article/pii/S0268401218312118](http://www.sciencedirect.com/science/article/pii/S0268401218312118)
[177] K. M. Khan, J. Arshad, and M. M. Khan, “Investigating performance
constraints for blockchain based secure e-voting system,” Future Generation Computer Systems, vol. 105, pp. 13 – 26, 2020. [Online]. Available:
[http://www.sciencedirect.com/science/article/pii/S0167739X19310805](http://www.sciencedirect.com/science/article/pii/S0167739X19310805)
[178] U. C. Çabuk, E. Adıgüzel, and E. Karaarslan, “A survey on feasibility
and suitability of blockchain techniques for the e-voting systems,”
IJARCCE, vol. 7, no. 3, p. 124–134, Mar 2018. [Online]. Available:
[http://dx.doi.org/10.17148/IJARCCE.2018.7324](http://dx.doi.org/10.17148/IJARCCE.2018.7324)
[179] G. Zyskind, O. Nathan, and A. . Pentland, “Decentralizing privacy:
Using blockchain to protect personal data,” in 2015 IEEE Security
and Privacy Workshops, 2015, pp. 180–184. [Online]. Available:
[https://ieeexplore.ieee.org/document/7163223/keywords#keywords](https://ieeexplore.ieee.org/document/7163223/keywords#keywords)
[180] J. Luo, Q. Chen, F. R. Yu, and L. Tang, “Blockchain-enabled softwaredefined industrial internet of things with deep reinforcement learning,”
IEEE Internet of Things Journal, pp. 1–1, 2020.
21
-----
Ebrahim *et al.* : Blockchain as privacy and security solution for smart environments: A Survey
MAAD EBRAHIM is currently a Ph.D. student at the Department of Computer Science and Operations Research (DIRO), University of Montreal, Canada. He received his M.Sc. degree in 2019 from the Computer Science Department, Faculty of Computer and Information Technology, Jordan University of Science and Technology, Jordan, and his B.Sc. degree in Computer Science and Engineering from the University of Aden, Yemen, in 2013. His research experience includes Computer Vision, Artificial Intelligence, Machine Learning, Deep Learning, Data Mining, and Data Analysis. His current research interests include Fog and Edge Computing technologies, Blockchains, and Reinforcement Learning.
ABDELHAKIM HAFID spent several years as a Senior Research Scientist with Bell Communications Research (Bellcore), NJ, USA, working on major research projects on the management of next-generation networks. He was also an Assistant Professor with Western University (WU), Canada, the Research Director of the Advance Communication Engineering Center (a venture established by WU, Bell Canada, and Bay Networks), Canada, a Researcher with CRIM, Canada, a Visiting Scientist with GMD-Fokus, Germany, and a Visiting Professor with the University of Evry, France. He is currently a Full Professor with the University of Montreal, where he is the Founding Director of the Network Research Laboratory and the Montreal Blockchain Laboratory, and a Research Fellow with CIRRELT, Montreal, Canada. He has extensive academic and industrial research experience in the management and design of next-generation networks. His current research interests include the IoT, fog/edge computing, blockchain, and intelligent transport systems.
ETIENNE ELIE is a Solutions and Systems Architect and Engineering Lead at Intel Corporation, California, USA. Prior to joining Intel Corporation, Dr. Elie was the technology and engineering manager for CARTaGENE, a public research platform and biobank of the Sainte-Justine Learning Hospital. He also served as an ASIC Architecture Engineer at Nortel Networks and Advanced Micro Devices (AMD). Before moving to the US, he spent a short period with PSP Investments, one of Canada's largest pension investment managers. Besides his role at Intel Corporation, Dr. Elie is a key contributor to the development of a large-scale, general-purpose neuromorphic Community Infrastructure (CI). Dr. Elie holds a Ph.D. in Computer Architecture from Université de Montréal, with a focus on the optimization of data movements in computer systems. He also holds a master's degree and a Bachelor of Science in Engineering with great distinction.
|
{
"disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/2203.08901, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "GREEN",
"url": "http://arxiv.org/pdf/2203.08901"
}
| 2022
|
[
"JournalArticle",
"Review"
] | true
| 2022-03-16T00:00:00
|
[
{
"paperId": "ffffb2a637606031e130307997218949fd5776ef",
"title": "A Socio-Technical Approach for Resilient Connected Transportation Systems in Smart Cities"
},
{
"paperId": "7a46f1b84bb84f1ef33fc2d074eb56a51461b5c8",
"title": "Blockchain technology in the future smart grids: A comprehensive review and frameworks"
},
{
"paperId": "8e8f3511597b2ac55458f702d0a17c2eb4b730f3",
"title": "Dynamic fog-to-fog offloading in SDN-based fog computing systems"
},
{
"paperId": "afd020100ef1e125463ac10bb1770996e49e3fdb",
"title": "Migration from the traditional to the smart factory in the die-casting industry: Novel process data acquisition and fault detection based on artificial neural network"
},
{
"paperId": "2a62e54d86f6fe5bd55c37e93ed778295700a98a",
"title": "Smart homes: domestic futurity as Infrastructure"
},
{
"paperId": "dbc0af9f52857be49c75502ff08def22a96fc137",
"title": "Introduction to the special issue on smart transportation"
},
{
"paperId": "a39c07b6394a6289c3c84c91fd433ea73a3eef81",
"title": "MPResiSDN: Multipath Resilient Routing Scheme for SDN-Enabled Smart Cities Networks"
},
{
"paperId": "33f6f9987b07cb69c774b35c99e3aa58f2f5ab3e",
"title": "Blockchain for IoT-based smart cities: Recent advances, requirements, and future challenges"
},
{
"paperId": "5d5633c4df9109419bc7c204005c16074c1af15e",
"title": "Fog computing based secured mobile cloud for cumulative integrity in smart environment and Internet of Things"
},
{
"paperId": "9ea707a2bb054ab2fd08c237aafcefd6d40cce80",
"title": "Protecting Personal Healthcare Record Using Blockchain & Federated Learning Technologies"
},
{
"paperId": "ee5c614a08ea83dd62c1c9f01d26cdd94cc5e24c",
"title": "Predictive Maintenance and Intelligent Sensors in Smart Factory: Review"
},
{
"paperId": "4af84b71e614671902137699069310d769543836",
"title": "The Blockchain Random Neural Network for cybersecure IoT and 5G infrastructure in Smart Cities"
},
{
"paperId": "5e984d5f260355dcd93d2045aa86974a73ae3d1b",
"title": "Integration of Blockchain, IoT and Machine Learning for Multistage Quality Control and Enhancing Security in Smart Manufacturing"
},
{
"paperId": "b4b74ee11a19fef85edb72c1e93ac2336c335644",
"title": "Proactive load balancing mechanism for fog computing supported by parked vehicles in IoV-SDN"
},
{
"paperId": "3b529988da89bf94c4ebdf183831d07d4f31978e",
"title": "Big data and IoT-based applications in smart environments: A systematic review"
},
{
"paperId": "56d5f9c1a25a47ff372529f9320defd218c72d52",
"title": "Evolution of Emerging Computing paradigm Cloud to Fog: Applications, Limitations and Research Challenges"
},
{
"paperId": "3b208ae43be2fb69527669633c2dba67994925da",
"title": "Internet of Things (IoT) Application Model for Smart Farming"
},
{
"paperId": "40f83ee10dd331f3408d939c98d11cd8a3281051",
"title": "A reinforcement learning optimization for future smart cities using software defined networking"
},
{
"paperId": "48c706defd38fd100c81b62b224fc0e694a32ed8",
"title": "Achieving efficient and Privacy-preserving energy trading based on blockchain and ABE in smart grid"
},
{
"paperId": "b554078f4ae1bc7c74cf9739cda77997fd8e36a5",
"title": "Analysis of Using Blockchain to Protect the Privacy of Drone Big Data"
},
{
"paperId": "76bfb2316f4fb7b0122f727ad6b9db2459647396",
"title": "An SDN architecture for time sensitive industrial IoT"
},
{
"paperId": "59ff2b08ad109a3330280183ec4c11a82af4da52",
"title": "Smart Environments Concepts, Applications, and Challenges"
},
{
"paperId": "97f7540d7a270468b9d63be0d3b0bc3650a24de3",
"title": "An Overview of the Machine Learning Applied in Smart Cities"
},
{
"paperId": "15188a99033c1beddeb14f19a4c6f9558fcb9568",
"title": "An Integration of blockchain and AI for secure data sharing and detection of CT images for the hospitals"
},
{
"paperId": "bc93af4375996f95dc6017740d19dd779c734856",
"title": "Multistage implementation framework for smart supply chain management under industry 4.0"
},
{
"paperId": "1a415ec978c083c81148ad9f29a7a807c1acd9f4",
"title": "Blockchain for smart cities: A review of architectures, integration trends and future research directions"
},
{
"paperId": "66836b52b0b658a0d5f448579d0e87906a505d11",
"title": "Smart cities education: An insight into existing drawbacks"
},
{
"paperId": "9aa7afe4664347d65fdaf40a4b3191f808b22f39",
"title": "An efficient IoT based smart farming system using machine learning algorithms"
},
{
"paperId": "b2e1390f9e1d26bf0df91b9d992774a18fa0ecb7",
"title": "Blockchain and AI amalgamation for energy cloud management: Challenges, solutions, and future directions"
},
{
"paperId": "4939f2ab9400c2c8ebc1b0a5fbe515ebdf495bcf",
"title": "Survey on IoT security: Challenges and solution using machine learning, artificial intelligence and blockchain technology"
},
{
"paperId": "67b8efd1d1d67b266447d495d8ad72795ee4ffd3",
"title": "An SDN-IoT-based Framework for Future Smart Cities: Addressing Perspective"
},
{
"paperId": "7e3acf8a52e19479bf759959be67e814368fe4ce",
"title": "B-FERL: Blockchain based Framework for Securing Smart Vehicles"
},
{
"paperId": "03d1b883e9d8474212094e5764646bc6450cf565",
"title": "Blockchain Without Waste: Proof-of-Stake"
},
{
"paperId": "bb7ec6cfa97bbb13e7bcd2d34e40f6288d8768dc",
"title": "Scaling Blockchains: A Comprehensive Survey"
},
{
"paperId": "21b2c213b10b88bdd9976c64bf44bafa49b1c849",
"title": "Modeling the blockchain enabled traceability in agriculture supply chain"
},
{
"paperId": "33469f86bc7ef904c1f4664d1cadf22227e3462a",
"title": "IoT in Smart Farming Analytics, Big Data Based Architecture"
},
{
"paperId": "9fbbd520f24a810a4278e1054f04f481d417c8bd",
"title": "A Comprehensive Review of the COVID-19 Pandemic and the Role of IoT, Drones, AI, Blockchain, and 5G in Managing its Impact"
},
{
"paperId": "a44dff5c3376b5029c8dff928d5c0c5272537e4b",
"title": "Transforming business using digital innovations: the application of AI, blockchain, cloud and data analytics"
},
{
"paperId": "1df83e4dc0ab34d77179962e156b7bd1a65f0e0d",
"title": "Blockchain and AI-Based Solutions to Combat Coronavirus (COVID-19)-Like Epidemics: A Survey"
},
{
"paperId": "a740dcc3da0e3086db21aedb196e5e7ba5b094e1",
"title": "Investigating performance constraints for blockchain based secure e-voting system"
},
{
"paperId": "cd4a6840c9facd0855c882bbd5c1447a4615358a",
"title": "Blockchain and Artificial Intelligence Technology for Novel Coronavirus Disease 2019 Self-Testing"
},
{
"paperId": "55ec2762fa782ac2c580f057e076f33b98e6ddd3",
"title": "A Study of Blockchain Oracles"
},
{
"paperId": "888f1bb4bd8a61f2ba4b8dc4d5991876a107a8f0",
"title": "Cryptocurrency, Taxation and International Law: Contemporary Aspects"
},
{
"paperId": "caaa2906ea4c47516063bb37d86d2e4960e7ea67",
"title": "Blockchain-Enabled Distributed Security Framework for Next-Generation IoT: An Edge Cloud and Software-Defined Network-Integrated Approach"
},
{
"paperId": "63d145309d8c3449a608664f8c63a82b683020f4",
"title": "Edge Computing Integrated with Blockchain Technologies"
},
{
"paperId": "377eaad43415f1b1ddbb94888ac6fec88e13e65e",
"title": "A Cooperative Computing Strategy for Blockchain-Secured Fog Computing"
},
{
"paperId": "28046f46268df13b4d40a5863501d8d6f3e43ae3",
"title": "Scalable Open-Vote Network on Ethereum"
},
{
"paperId": "b1d63649db5db22d418946bcef74a23346332f52",
"title": "A Survey on Feasibility and Suitability of Blockchain Techniques for the E-Voting Systems"
},
{
"paperId": "0b03db96ca7128efd8ee011d3531fcfa0a09b7ac",
"title": "Comparison of Decentralization in DPoS and PoW Blockchains"
},
{
"paperId": "d8e49199939494b41ac30fd2672c05cd8cf3546b",
"title": "A Survey of IoT Applications in Blockchain Systems"
},
{
"paperId": "26b3d3d6a81d536923372f46aea665838a379fe6",
"title": "Towards Blockchain-Based Software-Defined Networking: Security Challenges and Solutions"
},
{
"paperId": "4c83e05e10a47121c2ed8c6868bdbfd3c08f83fe",
"title": "Modelling and simulation of security-aware task scheduling in cloud computing based on Blockchain technology"
},
{
"paperId": "50f7f133fe4f72df1ce89a2e88e9dca4e0184f4f",
"title": "A Blockchain-Based Security Traffic Measurement Approach to Software Defined Networking"
},
{
"paperId": "566dee985717d2dabe7a341185ee3b1139bd0dea",
"title": "Deployment of Blockchain Technology in Software Defined Networks: A Survey"
},
{
"paperId": "550bba213ecf92f6b1fd129ad42ec22c1f8aaecd",
"title": "Blockchain for cloud exchange: A survey"
},
{
"paperId": "46e6fc16fee32c96a5c77c3255fa63765c42e60f",
"title": "Fusion of Software Defined Networking, Edge Computing, and Blockchain Technology for Wireless Network Virtualization"
},
{
"paperId": "7c6bc10f47edffd53ff29d38aa1e072090436dbc",
"title": "Migration from POW to POS for Ethereum"
},
{
"paperId": "694e0e8b4c53df21e6cd4083d36ba05629f648cd",
"title": "A Decentralized and Anonymous Data Transaction Scheme Based on Blockchain and Zero-Knowledge Proof in Vehicle Networking (Workshop Paper)"
},
{
"paperId": "6291cf5f4d01de67deb1c4e39e10ffbcb664cdee",
"title": "Blockchain-Based Distributed Framework for Automotive Industry in a Smart City"
},
{
"paperId": "acca913accc1680d9945a70df29ce4eafb830581",
"title": "A survey on Blockchain based access control for Internet of Things"
},
{
"paperId": "7fe4bbce603abe533f688b888a51d597db600609",
"title": "Blockchain for Internet of Things: A Survey"
},
{
"paperId": "bfe4586b650f39b4db7890a827fe35c108aa1f97",
"title": "Artificial Intelligence Empowered Edge Computing and Caching for Internet of Vehicles"
},
{
"paperId": "53d634e2ec960b4c5356fbef9fa020cbe74eac8d",
"title": "Blockchain-Based Software-Defined Industrial Internet of Things: A Dueling Deep ${Q}$ -Learning Approach"
},
{
"paperId": "cae22ac04006c776a9110802f4b14567e9794d54",
"title": "Blockchain and Deep Reinforcement Learning Empowered Intelligent 5G Beyond"
},
{
"paperId": "eab3cc42abcc604eb2b552076093c18847858f35",
"title": "Incentivized Blockchain-based Social Media Platforms: A Case Study of Steemit"
},
{
"paperId": "3765a0c09976c0e2d0e8825f47388d5c80812e2b",
"title": "VQL: Providing Query Efficiency and Data Authenticity in Blockchain Systems"
},
{
"paperId": "e2666d69d9e354b9184e40ebaffd53f0322f14e6",
"title": "Crypto anchors"
},
{
"paperId": "dcc2e13387dc9d70e576ddf87b1998d84fe37011",
"title": "Enhancing IoT Security and Privacy Using Distributed Ledgers with IOTA and the Tangle"
},
{
"paperId": "0b7a10138d8037d28dff301e207ff92f61554c2a",
"title": "Integrated Blockchain and Edge Computing Systems: A Survey, Some Research Issues and Challenges"
},
{
"paperId": "57004d74645ba5cb9e7732ca733ec910eb27b0be",
"title": "Applications of Blockchain Technology in Medicine and Healthcare: Challenges and Future Perspectives"
},
{
"paperId": "1cccd591e7a72501ff4e418e148d3e0848479a35",
"title": "Blockchain's adoption in IoT: The challenges, and a way forward"
},
{
"paperId": "7a916f5e028e00a482486ffc2e2d2a82516ead8c",
"title": "Smart contracts"
},
{
"paperId": "164e7be27409eff36ad4a8f2ebb355fa3bde3540",
"title": "FogBus: A Blockchain-based Lightweight Framework for Edge and Fog Computing"
},
{
"paperId": "657f57075dab025cc79e6f5b06d8a98176cc6c1f",
"title": "Decentralized Voting: A Self-tallying Voting System Using a Smart Contract on the Ethereum Blockchain"
},
{
"paperId": "b3b42a5963440e844cf692236899ae6dad13c87f",
"title": "Decentralized Voting Platform Based on Ethereum Blockchain"
},
{
"paperId": "03f5de0e96b2cf6c9fe2a8602621809902874e01",
"title": "Blockchain-Based Distributed Software-Defined Vehicular Networks via Deep Q-Learning"
},
{
"paperId": "305edd92f237f8e0c583a809504dcec7e204d632",
"title": "Blockchain challenges and opportunities: a survey"
},
{
"paperId": "29b0df183dbba0b3351e1b10b6c92a14977923fc",
"title": "A User Authentication Scheme of IoT Devices using Blockchain-Enabled Fog Nodes"
},
{
"paperId": "868de1bd341d1bcbb9ddf9c5d05ff2af6c824c49",
"title": "SoftEdgeNet: SDN Based Energy-Efficient Distributed Network Architecture for Edge Computing"
},
{
"paperId": "0919faff858302c01be75739f4dd703fea74a82d",
"title": "Blockchain based hybrid network architecture for the smart city"
},
{
"paperId": "c972800be0371d6931030777df3d9548d6906485",
"title": "Securing Internet of Things with Software Defined Networking"
},
{
"paperId": "9233ab0eaa5fc22a00f2fdb6d7d2d0017637b50b",
"title": "Computation Offloading and Content Caching in Wireless Blockchain Networks With Mobile Edge Computing"
},
{
"paperId": "d00afc726b3130909351f3bb2159f12878c303a5",
"title": "Blockchain for Healthcare: The Next Generation of Medical Records?"
},
{
"paperId": "383057f972b11b99cbc8c0d3e6c47170e9d95c1c",
"title": "Blockchain and IoT Integration: A Systematic Survey"
},
{
"paperId": "da8a949f9c9f1df3a38f12c2cac97b789c705465",
"title": "A Survey of Machine and Deep Learning Methods for Internet of Things (IoT) Security"
},
{
"paperId": "54d50269928dafc6a0744e46044c17d973fdb01c",
"title": "Blockchain-Based E-Voting System"
},
{
"paperId": "16de1f889aa6437c0ca7985e45e8c1bf4707f291",
"title": "Blockchain Technologies for the Internet of Things: Research Issues and Challenges"
},
{
"paperId": "01157f7c700e92323a5933e00c71cf001a8bac88",
"title": "Blockchain with Internet of Things: Benefits, Challenges, and Future Directions"
},
{
"paperId": "02458904f9bd718bd8c6a1a36e9847ad83b0410b",
"title": "A Review on the Use of Blockchain for the Internet of Things"
},
{
"paperId": "de8cec33c0d284f9e1ab657193dde70ff44633ba",
"title": "Blockchain for the IoT: Opportunities and Challenges"
},
{
"paperId": "0c5653210f0a6d9a3680521a45b9ee01e9ae8342",
"title": "Securing configuration management and migration of virtual network functions using blockchain"
},
{
"paperId": "b48a3381491aca0844b47237e9045ea95fc45ef5",
"title": "Leveraging Distributed Blockchain-based Scheme for Wireless Network Virtualization with Security and QoS Constraints"
},
{
"paperId": "9ad22da8353ca1799c7026115c8987e3172ae6dd",
"title": "Blockchain Meets IoT: An Architecture for Scalable Access Management in IoT"
},
{
"paperId": "e7f84b1d7f8378ffaadbf85c33bacc8bcd9e28dd",
"title": "Mobile Edge Computing: A Survey"
},
{
"paperId": "051a8fae323f26a9bd2ca551940b4ba52b99c1be",
"title": "A Software Defined Fog Node Based Distributed Blockchain Cloud Architecture for IoT"
},
{
"paperId": "a30b4b52b1e7b0aff4a5085cdc43ace30ca66f5e",
"title": "When Intrusion Detection Meets Blockchain Technology: A Review"
},
{
"paperId": "ff6ae91ae0e500a1e91f368fca33d2b59aedbe55",
"title": "Edge Computing Gateway of the Industrial Internet of Things Using Multiple Collaborative Microcontrollers"
},
{
"paperId": "461d523f9ba942c7474aef332412fe7b53c731be",
"title": "When Mobile Blockchain Meets Edge Computing"
},
{
"paperId": "7d49d03e62907c45ea3f84cdc626dcbd75dc03f0",
"title": "Adaptable Blockchain-Based Systems: A Case Study for Product Traceability"
},
{
"paperId": "b3f538f13b441c969a998350e63efaded86223e6",
"title": "Cloud/Fog Computing Resource Management and Pricing for Blockchain Networks"
},
{
"paperId": "fa33053af899e6ca0da59ce04802c5437e43ee5a",
"title": "A blockchain future for internet of things security: a position paper"
},
{
"paperId": "e64590b78434a38b931cf86d915335850fb67f2a",
"title": "Blockchain-based protocol of autonomous business activity for multi-agent systems consisting of UAVs"
},
{
"paperId": "76bd712e4908a42c5514c50427a168d6d7952c70",
"title": "DistBlockNet: A Distributed Blockchains-Based Secure SDN Architecture for IoT Networks"
},
{
"paperId": "5bbd5bfa99b4554729ae7bbf5de0388f28ed1151",
"title": "Software-Defined Networking for Internet of Things: A Survey"
},
{
"paperId": "a22f52c7555f955cfc6720f7c1b89eb4af613a4d",
"title": "Performance Analysis of Private Blockchain Platforms in Varying Workloads"
},
{
"paperId": "eabb07994b757d329a434a082e72b81fca9f6237",
"title": "Blockchain technology in the chemical industry: Machine-to-machine electricity market"
},
{
"paperId": "61d69925287bd0e3adea7d6fe9ccaffaf29207cd",
"title": "Towards an Optimized BlockChain for IoT"
},
{
"paperId": "28fe6a3fab2f2097a6f9aac5ae9799577badf883",
"title": "Blockchain for IoT security and privacy: The case study of a smart home"
},
{
"paperId": "cbe2f7dd07d869dd1d0b2ba1cbc03b84dd695bb1",
"title": "Block-VN: A Distributed Blockchain Based Vehicular Network Architecture in Smart City"
},
{
"paperId": "631cc57858eb1a94522e0090c6640f6f39ab7e18",
"title": "Blockchain as a Service for IoT"
},
{
"paperId": "628c2bcfbd6b604e2d154c7756840d3a5907470f",
"title": "Blockchain Platform for Industrial Internet of Things"
},
{
"paperId": "0200d453f5c995c87761e50976ed07692e257a30",
"title": "The Blockchain and Kudos: A Distributed System for Educational Record, Reputation and Reward"
},
{
"paperId": "451729b3faedea24771ac4aadbd267146688db9b",
"title": "Blockchain in internet of things: Challenges and Solutions"
},
{
"paperId": "c998aeb12b78122ec4143b608b517aef0aa2c821",
"title": "Blockchains and Smart Contracts for the Internet of Things"
},
{
"paperId": "f572bcaa97e36d79e0cd01fb18dadb2f58eebebd",
"title": "The IoT electric business model: Using blockchain technology for the internet of things"
},
{
"paperId": "96e1bff12b5acf42aae162b6aa339c0b4db49740",
"title": "BitAV: Fast Anti-Malware by Distributed Blockchain Consensus and Feedforward Scanning"
},
{
"paperId": "efb1a85cf540fd4f901a78100a2e450d484aebac",
"title": "The Quest for Scalable Blockchain Fabric: Proof-of-Work vs. BFT Replication"
},
{
"paperId": "658bcb7729770487216843abd422c5088a96b843",
"title": "SDIoT: a software defined based internet of things framework"
},
{
"paperId": "63572fc8fa50afae559feda91b1867c8e336e454",
"title": "Toward a software-based network: integrating software defined networking and network function virtualization"
},
{
"paperId": "4b9184937da308914b9e13c43bfd75845eaf910b",
"title": "Decentralizing Privacy: Using Blockchain to Protect Personal Data"
},
{
"paperId": "9b5618e4d3295ac642e03681e3b9c7bf6db265e3",
"title": "The Bitcoin Backbone Protocol: Analysis and Applications"
},
{
"paperId": "e46b1aca220cc11583b99ca1e612a85d55f7195e",
"title": "An IoT electric business model based on the protocol of bitcoin"
},
{
"paperId": "26157d17ef7f4f42c1b18bb352eaf593059f7999",
"title": "When your sensor earns money: exchanging data for cash with Bitcoin"
},
{
"paperId": "e4b92eccc5bc7ededff579232f5bed5186bf8302",
"title": "Deanonymisation of Clients in Bitcoin P2P Network"
},
{
"paperId": "aa8a6894735dc32aa7212ab7606ccd6d6e7338b9",
"title": "Fog Computing and Smart Gateway Based Communication for Cloud of Things"
},
{
"paperId": "9ace4e2eff677024c730a61146ab2efa06a6bf6a",
"title": "The Internet of Things—A survey of topics and trends"
},
{
"paperId": "7cd16cdb3abb5e08ba253f1c403598c055c69793",
"title": "Internet of Things in Industries: A Survey"
},
{
"paperId": "5fb646dcdbaad786437f37a1b8d54fdcf3729163",
"title": "A Whole New World: Income Tax Considerations of the Bitcoin Economy"
},
{
"paperId": "b179cf70666c9f67fc8c330be3a55a5dd2266c2e",
"title": "SNARKs for C: Verifying Program Executions Succinctly and in Zero Knowledge"
},
{
"paperId": "a8e0c1e13f9fc1798ac4e213ce56dcbb5ca3a43a",
"title": "The survey and research on application of cloud computing"
},
{
"paperId": "5aa5e0dd8f623f6429650ebf95d5e32e8567fc4a",
"title": "Cloud computing - concepts, architecture and challenges"
},
{
"paperId": "a66d64b90b4c4c0f07000d90146906c90efcb6f9",
"title": "Study and application on the architecture and key technologies for IOT"
},
{
"paperId": "8f2397105f49d3b8b241603e2a319abc0983b81d",
"title": "Services in the Cloud Computing era: A survey"
},
{
"paperId": "60eb6ca2993be1330cb845e1ac60d0188f61f08f",
"title": "Cloud Computing Research and Development Trend"
},
{
"paperId": "0742fa40bf9be455fc6338e3a40ed6f0113d4a61",
"title": "OpenFlow: enabling innovation in campus networks"
},
{
"paperId": "e236eb170d5bf37f8a24622715bcb73e7e8d4ede",
"title": "Protocols and impossibility results for gossip-based communication mechanisms"
},
{
"paperId": "eb51cb223fb17995085af86ac70f765077720504",
"title": "Kademlia: A Peer-to-Peer Information System Based on the XOR Metric"
},
{
"paperId": "b2b23720778727e2e4d9ffba751a6ffafb8c7b35",
"title": "Electronic Tickets, Smart Cards, and Online Prepayments: When and How to Advance Sell"
},
{
"paperId": "8132164f0fad260a12733b9b09cacc5fff970530",
"title": "Practical Byzantine fault tolerance"
},
{
"paperId": "f7ddf5129b392cc23411c2ca28ece3bc0ad682d6",
"title": "A Survey Of Blockchain Security Issues And Challenges"
},
{
"paperId": "603b4f0fa052fdea4e1d96eb368c9d29adca562a",
"title": "Peer-to-Peer Energy Trading Mechanism Based on Blockchain and Machine Learning for Sustainable Electrical Power Supply in Smart Grid"
},
{
"paperId": "63009b9c9392e044d30e0877899d43706b186298",
"title": "Blockchain-based authentication and authorization for smart city applications"
},
{
"paperId": "6d789bc632a1d80a9b8333bcfef7821cde10811b",
"title": "SmartBlock-SDN: An Optimized Blockchain-SDN Framework for Resource Management in IoT"
},
{
"paperId": "7f6d0564e546599143cd028c729c1ade5089552e",
"title": "A Survey on Blockchain-Fog Integration Approaches"
},
{
"paperId": "2933e981ce051e9893dbb1059f36bfd38085ef56",
"title": "Correction to: Legal Ramifications of Blockchain Technology"
},
{
"paperId": "44a5532e4a80d1e4f95e070ca4639811f82f8b95",
"title": "Blockchain-Based Cyberthreat Mitigation Systems for Smart Vehicles and Industrial Automation"
},
{
"paperId": "931bc061fef71c1485946aca1bc3dc391bc44e59",
"title": "Software-Defined Networking: A Novel Approach to Networks"
},
{
"paperId": "f77e99d5d15f32cbdcb464c46849e4d47a477887",
"title": "Hybrid Blockchain"
},
{
"paperId": "bc2207c8f7a3c68a334b6b525b8dbe0866edcf41",
"title": "A Systematic Literature Review of Integration of Blockchain and Artificial Intelligence"
},
{
"paperId": "68a8470244c39ca74b3f4fa1e527ebdb93eb6ae5",
"title": "Decentralizing AI Using Blockchain Technology for Secure Decision Making"
},
{
"paperId": "19fae8ff8eb043efada14903025b3754f4b3d788",
"title": "Trustworthy Blockchain Oracles: Review, Comparison, and Open Research Challenges"
},
{
"paperId": "34162f7c8f69e3e892bb22672042c9d4041317d7",
"title": "AI, IoT, Big Data, and Technologies in Digital Economy with Blockchain at Sustainable Work Satisfaction to Smart Mankind: Access to 6th Dimension of Human Rights"
},
{
"paperId": "2614a0d0213e8cc457ea62b435a8f43dea54245a",
"title": "Blockchain for AI: Review and Open Research Challenges"
},
{
"paperId": "3d267bbcce5a599ac9cc42964fefb40e7b49cbb1",
"title": "Applications of Blockchains in the Internet of Things: A Comprehensive Survey"
},
{
"paperId": "64cce7c078749876b4327cea08df6fc7dd95f0f4",
"title": "Blockchain: Opportunities and Challenges in the Educational Context"
},
{
"paperId": "7247762784cc43596f91adde7a094f85edc30bf6",
"title": "CHAPTER 15 Enigma: Decentralized Computation Platform with Guaranteed Privacy"
},
{
"paperId": null,
"title": "Unmanned aerial vehicle data management"
},
{
"paperId": null,
"title": "EOS.IO Technical White Paper v2"
},
{
"paperId": null,
"title": "Mobile edge computing via a UAVmounted cloudlet: Optimization of bit allocation and path planning"
},
{
"paperId": "bc05bfaae0483caead1660d946c4b557a78bd4f1",
"title": "PB-PKI: A Privacy-aware Blockchain-based PKI"
},
{
"paperId": "24711b2a7a4dc4d0dad74bbbfeea9140abab047b",
"title": "Managing IoT devices using blockchain platform"
},
{
"paperId": "490369507b3a1e425ff4b7150fc9d083783bb908",
"title": "A review of Internet of Things for smart home: Challenges and solutions"
},
{
"paperId": "f852c5f3fe649f8a17ded391df0796677a59927f",
"title": "Architecture of the Hyperledger Blockchain Fabric"
},
{
"paperId": "3ed0db58a7aec7bafc2aa14ca550031b9f7021d5",
"title": "A Case Study for Blockchain in Healthcare : “ MedRec ” prototype for electronic health records and medical research data"
},
{
"paperId": "9d1cb62f7fde9b03882567ee01adba31b7108276",
"title": "Distributed Consensus from Proof of Stake is Impossible"
},
{
"paperId": "0dbb8a54ca5066b82fa086bbf5db4c54b947719a",
"title": "A NEXT GENERATION SMART CONTRACT & DECENTRALIZED APPLICATION PLATFORM"
},
{
"paperId": null,
"title": "ADEPT: An IoT Practitioner Perspective"
},
{
"paperId": null,
"title": "Chapter 13 - legal issues in cryptocurrency"
},
{
"paperId": "3c50bb6cc3f5417c3325a36ee190e24f0dc87257",
"title": "ETHEREUM: A SECURE DECENTRALISED GENERALISED TRANSACTION LEDGER"
},
{
"paperId": "57855fea0eea38a503ae58cbb024a2606002f677",
"title": "A Decentralized Public Key Infrastructure with Identity Retention"
},
{
"paperId": "bff4ecdd2c40bb67abab8d49e99c81287a7b2810",
"title": "The Ripple Protocol Consensus Algorithm"
},
{
"paperId": null,
"title": "Delegated proof-of-stake (dpos)"
},
{
"paperId": "0db38d32069f3341d34c35085dc009a85ba13c13",
"title": "PPCoin: Peer-to-Peer Crypto-Currency with Proof-of-Stake"
},
{
"paperId": "4e9ec92a90c5d571d2f1d496f8df01f0a8f38596",
"title": "Bitcoin: A Peer-to-Peer Electronic Cash System"
},
{
"paperId": "249377e09f6da6eda933ed4f39b4dbe6aa74b592",
"title": "the Internet of Things: a Systematic Literature Review"
}
] | 37,289
|
en
|
[
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/010def07d358187bc10b482d05b77f0e27f833dc
|
[] | 0.929743
|
Governance of Blockchain and Distributed Ledger Technology Projects
|
010def07d358187bc10b482d05b77f0e27f833dc
|
Social Science Research Network
|
[
{
"authorId": "123890248",
"name": "Bronwyn E. Howell"
},
{
"authorId": "1828146",
"name": "P. Potgieter"
},
{
"authorId": "2971618",
"name": "B. Sadowski"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"SSRN, Social Science Research Network (SSRN) home page",
"SSRN Electronic Journal",
"Soc Sci Res Netw",
"SSRN",
"SSRN Home Page",
"SSRN Electron J",
"Social Science Electronic Publishing presents Social Science Research Network"
],
"alternate_urls": [
"www.ssrn.com/",
"https://fatcat.wiki/container/tol7woxlqjeg5bmzadeg6qrg3e",
"https://www.wikidata.org/wiki/Q53949192",
"www.ssrn.com/en",
"http://www.ssrn.com/en/",
"http://umlib.nl/ssrn",
"umlib.nl/ssrn"
],
"id": "75d7a8c1-d871-42db-a8e4-7cf5146fdb62",
"issn": "1556-5068",
"name": "Social Science Research Network",
"type": "journal",
"url": "http://www.ssrn.com/"
}
|
Blockchains are the most well-known example of a distributed ledger technology (DLT). Unlike classic databases, the ledger is not maintained by any central authority. The integrity of the ledger is maintained automatically by an algorithmic consensus process whereby nodes vote and agree upon the authoritative version. In effect, the consensus algorithm operates in the manner of a decision-making process within a governance system. The technological characteristics of blockchain systems are well documented (Narayanan, Bonneau, Felten and Miller, 2016). We propose that one of the reasons why it has so far proved very difficult to seed large-scale commercial DLT (blockchain) projects lies in the arena of project ownership and governance. Unlike classic centralised database systems, DLTs have no one central point of “ownership” of any of the system’s infrastructure or data. In this piece of exploratory research, we propose applying theories of club governance to both the technical design and operational development of a range of DLT (blockchain) systems, including (but not necessarily limited to) cryptocurrencies and enterprise applications to explore how they can explain the development of (or lack of development of) sustainable solutions to real business problems. There are many parallels to the governance arrangements observed historically in the origins of complex distributed telecommunications networks.
|
Howell, Bronwyn E.; Potgieter, Petrus H.; Sadowski, Bert M.
**Conference Paper**
## Governance of Blockchain and Distributed Ledger Technology Projects
2nd Europe - Middle East - North African Regional Conference of the International
Telecommunications Society (ITS): "Leveraging Technologies For Growth", Aswan, Egypt,
18th-21st February, 2019
**Provided in Cooperation with:**
International Telecommunications Society (ITS)
_Suggested Citation: Howell, Bronwyn E.; Potgieter, Petrus H.; Sadowski, Bert M. (2019) :_
Governance of Blockchain and Distributed Ledger Technology Projects, 2nd Europe - Middle
East - North African Regional Conference of the International Telecommunications Society (ITS):
"Leveraging Technologies For Growth", Aswan, Egypt, 18th-21st February, 2019, International
Telecommunications Society (ITS), Calgary
This Version is available at:
[https://hdl.handle.net/10419/201737](https://hdl.handle.net/10419/201737)
# Governance of Blockchain and Distributed Ledger Technology Projects
### Bronwyn E. Howell[*], Petrus H. Potgieter[†], Bert M. Sadowski[‡]
## Abstract
Blockchains are the most well-known example of a distributed ledger technology (DLT). Unlike
classic databases, the ledger is not maintained by any central authority. The integrity of the
ledger is maintained automatically by an algorithmic consensus process whereby nodes vote and
agree upon the authoritative version. In effect, the consensus algorithm operates in the manner
of a decision-making process within a governance system.
The technological characteristics of blockchain systems are well documented (Narayanan,
Bonneau, Felten and Miller, 2016). We propose that one of the reasons why it has so far proved
very difficult to seed large-scale commercial DLT (blockchain) projects lies in the arena of
project ownership and governance. Unlike classic centralised database systems, DLTs have no
one central point of “ownership” of any of the system’s infrastructure or data.
In this piece of exploratory research, we propose applying theories of club governance to both
the technical design and operational development of a range of DLT (blockchain) systems,
including (but not necessarily limited to) cryptocurrencies and enterprise applications to explore
how they can explain the development of (or lack of development of) sustainable solutions to
real business problems. There are many parallels to the governance arrangements observed
historically in the origins of complex distributed telecommunications networks.
_Keywords: blockchain, distributed ledger, governance, club governance, distributed consensus_
## 1 Introduction
“Reform is a profoundly political process, not a technical one.” Fukuyama (2014,
161)
Blockchains are the first, and most well-known example of a distributed ledger technology (DLT).
A distributed ledger (DL) is a database (or file) spread across several nodes or computing devices.
Each node in a network has access to (and probably saves) an identical copy of the ledger. Unlike
*School of Management, Victoria University of Wellington, bronwyn.howell@vuw.ac.nz
†Department of Decision Sciences, University of South Africa, potgiph@unisa.ac.za / php@grensnut.com
‡Department of Industrial Engineering & Innovation Sciences, Eindhoven University of Technology,
b.m.sadowski@tue.nl
classic databases, the ledger is not maintained by any central authority. The integrity of the
ledger is maintained automatically by an algorithmic consensus process whereby nodes vote
and/or agree upon the authoritative version, which is then updated and saved independently
on each node. In effect, the consensus algorithm operates in the manner of a decision-making
process within a governance system.
Blockchain DLs use a chain of blocks linked to one another and secured using public-key
cryptography to provide a secure and valid distributed consensus. A blockchain is usually
distributed across and managed by peer-to-peer networks. Its append-only structure only allows
data to be added to the database: altering or deleting previously entered data on earlier blocks
is impossible. Blockchain technology is therefore well-suited for recording events, managing
records, processing transactions, tracing assets, and voting.
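To make this append-only, hash-linked structure concrete, the following minimal Python sketch (our illustration only; names such as `append_block` and `verify` are hypothetical, not drawn from any particular DLT implementation) shows each block committing to the hash of its predecessor, so that altering an earlier block invalidates every later link:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 digest of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, transactions: list) -> None:
    """Append a block that commits to the hash of the previous block."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def verify(chain: list) -> bool:
    """Check that every block still points at its predecessor's true hash."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain: list = []
append_block(chain, ["A pays B 5"])
append_block(chain, ["B pays C 2"])
assert verify(chain)

chain[0]["transactions"] = ["A pays M 5"]  # tamper with recorded history
assert not verify(chain)                   # the link from block 1 is now broken
```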
The technological characteristics of blockchain systems are well documented (Narayanan, Bonneau, Felten, Miller and Goldfeder, 2016). Considerable faith has been placed in the technology
as a means of revolutionising digital transacting (Mulligan, Scott, Warren and Rangaswami,
2018; Crosby, Nachiappan, Pattanayak, Verma and Kalyanaraman, 2015; Czepluch, Lollike
and Malone, 2015; Swan, 2015). However to date, outside of the arena of highly-publicised
cryptocurrencies such as Bitcoin and Ethereum, few examples exist of the use of the technology
to support significant economic activities. Nonetheless, plans for many other blockchain systems
have been announced – for example Sovrin for identity management, and Halo for supply chain
management.
Comparatively little has however been documented so far about the governance of blockchain
systems and the commercial activities they support, beyond the algorithmic voting processes
via which the nodes agree on the authoritative version of the DL. This paper represents an
exploratory endeavour to address this gap.
We begin by reviewing current interpretations of blockchain “ownership” and “governance”.
Neither the data in the ledgers nor the software governing blockchain operations is claimed to
be owned by anyone in particular. Nonetheless ongoing responsibility for the rules governing
their ongoing operation must be assumed by someone if they are to be created and operated
successfully for commercial endeavours. We propose that the governance of DL systems can be
analogised to that of clubs.
While some of the governance rules are embedded in system software, and may be costly and
difficult to change, leading to stable ledger content, other rules are embedded in the institutional
arrangements linking system participants (club members) outside of the software, and may not
be so costly to change, depending upon how key decision-making rights are distributed across
them. The stability of a given DL system will depend upon the interaction of the decision rights
allocated and exercised within the software with rights allocated and exercised outside of it.
From the theoretical discussion, we develop a framework for examining a given blockchain
DL system to identify and evaluate the effectiveness of its governance arrangements given its
specific commercial application. We apply the framework in two case studies: the cryptocurrency
Bitcoin and the identity management system Sovrin. While both claim to be public blockchains
using very similar “proof of work” algorithms to agree ledger content, Sovrin’s structure as a
“permissioned” blockchain (it poses barriers to entry for node operators (they are required to
become Stewards) and requires end-users to have a relationship with node operators separately
verifiable from their blockchain relationship) differentiates it from Bitcoin. We suggest that the
costs of changing Sovrin’s governance arrangements as circumstances change are much lower
than those of Bitcoin. Thus, Bitcoin is less susceptible to successful forking than Sovrin. Sovrin’s
stability relies more strongly upon the alignment of the interests of its node operators (stewards)
than does Bitcoin (miners are node operators), because Sovrin’s governance arrangements allow
them both greater control over changes to software content and lower costs of co-ordinating
successful forking than their Bitcoin comparators.
## 2 Ownership and governance of distributed ledgers
Who owns and governs a blockchain system? One view (prevailing with the development of
cryptocurrencies) is that a specific DL application is “governed by no-one”, because it is “owned
by no-one” (Sovrin, 2018). Anyone who wishes to use the blockchain application may do so
(it is “public”). Unlike web-based applications, any user operating as a node has a copy of
both the ledger and the software required to participate in the system (although some classes of
user may interact via web pages managed locally by a node operator). Neither the data nor the
software are proprietary to a single controlling entity. As the software code is open source, any
node operator is free at any time to make changes to the code (which they themselves are using)
and institute a new blockchain operating independently of the first (termed “forking”), without
facing the disadvantage of a centralised system of not having access to the accumulated historic
data. All blocks up to the point of forking are identical in both the original and the new chain.
Although the components of a blockchain are “owned by no-one”, it cannot be said that a
blockchain system is “governed by no-one”. All systems operate within a framework of rules,
either derived implicitly from the norms and cultures of the participants or explicitly articulated in
formal agreements (such as constitutions and contracts) (Williamson, 1999; 2000). Collectively,
these rules comprise the governance arrangements under which systemic interaction takes place.
Within them, in order to co-ordinate participants and streamline decision-making, selected
groups of individuals are granted superior decision-making rights by assuming those ceded to
them by specific subsets of system users. Governance arrangements can emerge endogenously
over time (bottom-up) (for example, as has occurred with the constitutional arrangements
of nation states), or be imposed from the outset (top-down) (for example, following military conquest, or in the Constitutions and Articles of Incorporation of firms and marketplaces (e.g. stock exchanges, clubs and trusts)) (Ostrom, 1990; 2005).
Efficient and effective governance arrangements will specify both the set of rules prevailing
for normal transacting, and provisions whereby those rules can be changed in response to
changing circumstances. When the provisions for rule changes are explicit, and users have
clear means of observing those charged with the responsibility for managing the rules process
and holding them to account for their actions (or inactions), then the systems will tend to be
more efficient than if the rules and/or the identity of the decision-makers are unclear, those with
decision-making rights can exercise them covertly, and there are no clear means of users holding
the decision-makers to account or the costs of doing so are so high as to render the probability
of occurrence remote (Hansmann, 1996; Cordery & Howell, 2017).
The classic shareholder-owned firm provides an example of one such explicit system, where
shareholders give up their rights to make decisions about the day-to-day use of the firm’s assets
to boards and management to facilitate more efficient firm functioning than if the shareholders
themselves were required to undertake co-ordination (Berle and Means, 1937; Williamson,
1985). Other examples include the arrangements pertaining to the management of non-owned
non-rival and non-excludable “public” goods where governments as trustees exercise decision-making rights on behalf of all citizens, and those pertaining to the management of non-rival but
excludable “club” goods enjoyed by an identified population (as a subset of general citizenry).
### 2.1 Distributed ledger systems as clubs
Club theory, proposed initially by Buchanan (1965) in respect of clubs dealing in rival, excludable
goods provided and consumed by volunteer-members has been expanded subsequently to take
account of the separate dimensions of non-rivalry and non-excludability of the goods provided
by the clubs, and the exclusivity of club membership (e.g. Olson, 1989; Cornes and Sandler,
1996). Important work by E Ostrom and V Ostrom (Ostrom, 1990; 2005; 2010; Ostrom,
2014) melded the concept of the club with theories of self-organising governance systems,
federalism and polycentrism in government, demonstrating that common resources could be
managed successfully without government regulation or privatisation, by way of decentralised
entities operating as polities using representation relationships (i.e. “membership”) rather than
contractual assignment of rights proportional to asset ownership to allocate decision-making
control.
In this view, a DL system (DLS) is a club with (arguably) open membership. It is a ‘public
system’ as any member of the public agreeing to abide by the rules can join in order to use it. At
any point in time, the club membership is defined by those participating in the DLS. The DLS is
governed by rules covering both membership and operation. Various classes of membership are
usually determined by the nature and form of interactions the members have with it. Operational
rules cover how routine operations will occur and how, in the event that conflicts arise that
cannot satisfactorily be resolved within the existing rules, the rules can be changed to maintain
the integrity of the system and thereby ensure ongoing use by the members (Cordery and Howell,
2017). The consensus arrangements by which the DLS resolves conflicts about the content of
the DL constitute but one part of the system’s operational rules.
Importantly, decision-making powers attach to and differ by membership status. Axiomatically,
users operating nodes that agree the ledger content participate differently in decision-making
from end-users participating only by using the DLS to transact with other end-users. The
over-arching institutional arrangements under which the DLS operates – including the allocation
to membership status and associated decision-making rights – must be decided first by founding
members in order for the system to be created. The founding members will determine the original
governance rules – both those coded into the software and other non-coded arrangements. They
exercise considerable design control. It matters, therefore, whether these individuals exert
decision-making influence in either or both of operational and other representative roles. The
rules must address the potential for conflicts in these decision-making responsibilities to be
resolved.
The founding members assume fiduciary duties to both future members and the DLS club as a
whole. To the extent that the initial rules establish hierarchies of membership and allocation
of decision-making responsibilities, these can be thought of as defining the club committees
and sub-committees, with the founding members being allocated to different roles. Once the
DLS becomes operational, the roles may transfer to new members as they join and as the new
membership begins exercising its rights. This may include the replication of “branches” or
“sub-branches” of the club with their own committees and sub-committees as the number of
node operators expands. The DLT rules specifying how these distributed entities are federated
into the overall club governance functions and how decision-making rights and responsibilities
are distributed across them are effectively constitutions.
However, there is also a threat that new members are offered discriminatory treatment by incumbent members or are excluded altogether. Rey and Tirole (2007) have shown that incumbent
members have an incentive to exploit their monopoly power or restrict entry by new players.
Within the blockchain context, this manifests as a serious centralisation problem: mining power is concentrated within a small group of initial members. One response is the movement from Proof of Work (PoW) towards Proof of Stake (PoS), which tries to address this problem by allocating voting power in proportion to the stake held in the venture.
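The contrast between the two consensus families can be made concrete in a few lines of Python. Under PoW, the probability of proposing the next block is proportional to hash power expended; under PoS, to stake held. The sketch below is a stylised illustration only; the weights and the weighted-draw selection are our assumptions, not any specific protocol’s rule:

```python
import random

hash_power = {"A": 60, "B": 30, "C": 10}  # PoW: shares of total mining capacity
stake = {"A": 10, "B": 30, "C": 60}       # PoS: shares of total coins staked

def next_proposer(weights: dict) -> str:
    """Pick the next block proposer with probability proportional to weight."""
    members = list(weights)
    return random.choices(members, weights=[weights[m] for m in members])[0]

# In expectation, A proposes ~60% of blocks under PoW,
# whereas C proposes ~60% of blocks under PoS.
pow_proposer = next_proposer(hash_power)
pos_proposer = next_proposer(stake)
```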
For a discussion of blockchains as constitutional entities, where club members are equated to
citizens, see Berg, Berg and Novak (2018) and Berg, Novak, Potts and Thomas (2018). Our
analysis differs from theirs in that they focus only on the ongoing operation and management of a
DLS once it has been created, whereas we examine both instantiation and ongoing operation. We
also extend our analysis to include relationships between different classes of member outside of
the operation of the ledger – that is, we see the constitutional rules encompassing both software
and non-software elements. Their analysis focuses predominantly on the software-mediated
elements.
### 2.2 DLS governing rules and club stability
The initial DLS is offered as a “take it or leave it” package to the first public users – that is, as a
‘top-down’ imposition as per Ostrom (2009) (e.g. implementation of a new stock exchange). This
differs from the voluntary agreement of federated arrangements when extant groups negotiate
the rules under which their activities become linked (e.g. when two stock exchanges merge) –
that is, bottom-up arrangements enabling large-scale co-operation (Bednar, 2009). In either case,
once the rules are agreed, they can change via either gradual reinterpretation of the rights and
obligations defined by apparently stable governance rules, or substantive episodic change in
the formal structure of the governance rules (analogous respectively to Buchanan’s distinction
between regular political activity and constitutional moments – Buchanan and Tullock, 1962;
Buchanan and Brennan 1985; Congleton, 2014).
Both clubs and political entities will function effectively so long as there are credible commitments by members/citizens to monitoring, enforcement/sanction and conflict resolution within
the existing rules (North, 1993; Ostrom, 2005), and the ability to bring about changes to those
rules to meet the demands of changing circumstances (Tarko, Schlager and Lutter, 2018). The
former are enhanced by rule stability and certainty; yet too much stability can lead to fragility if
the governance rules are not well-adapted to new challenges. But allowing for ready change
may also lead to instability as competing centres of authority may attempt to devise and impose
rules to benefit themselves at the expense of others. The challenge is to find a workable balance
between stability and flexibility in the governing arrangements.
As long as the original (or extant) arrangements suit the (ever-changing) membership, a DLS will
survive – albeit in a dynamically-adapting form as constitutionally defined. However, when an
issue arises which cannot be agreed using the current rules, a fork may occur (a ‘break-away
club’ forms). The ongoing success of both clubs/DLSs now depends on the proportion of the
members who move to the new DLS or remain with the original one.
On the one hand, the ease with which disaffected node operators can break away provides
significant pressure on the existing DLS to be designed for and operate consistently in node
operator interests, which may not necessarily be in the interests of end-users. On the other hand,
if end-users’ interests are compromised, they will not participate in the DLS in the first place.
Careful balancing of interests of both node operators and end users is necessary to both attract
a critical mass of node operators and user-members and maintain DLS stability. If these are
not well-balanced, the DLS will be unstable – that is, prone to either failure (no members) or
forking.
### 2.3 Trading flexibility and stability in a dynamic context
However, to the extent that many of the governance arrangements must be coded into the
software in advance of the system beginning operation, DLS design is subject to the bounded
rationality of human designers. Arrangements that are satisfactorily balanced at one point in time
within one set of wider environmental circumstances may not be optimal if those circumstances
change (e.g. substantial changes in electricity prices for node operators). Typically, the more
flexible the governance rules are, the more easily they can be altered to take account of the
changes and the DLS as an institutional entity will be more stable. However, the co-ordination
required to institute the changes can be costly.
A strength of DLS consensus algorithms is that the costs of co-ordinating to change the software-governed outcomes are very high. This leads to high confidence in the integrity of the data held
in the ledgers. However, the costs of changing the non-software-governed elements can, but
need not, be high. The lower are the costs of co-ordinating the activities of the human entities
using the system, the easier it is to institute changes – either to enhance the operation of the
existing DLS by a general agreement of all with sufficient powers to amend the software to
maintain its stability, or to facilitate a successful fork by persuading a critical mass of users of
the existing system to support the break-away rules and system, instead of the original.
If all end users are definitively linked via mechanisms that enable them to be easily identified,
then it is cheaper to communicate with them to co-ordinate any action than if they cannot be
easily identified. Change of either type (forking; mutually-agreed rule and software changes) is
more likely as co-ordination costs are lower. Furthermore, the greater is the extent to which the
member activities are linked outside the day-to-day operation of the DL, the cheaper is the cost
of co-ordinating activities for, for example, end users to follow the decisions of their relevant
node operator when it becomes necessary to decide whether to support a software change for
the existing system or to support a fork.
Assume, for example, that the node operators are required by the DLS rules to be identified
and known to each other in order to be authorised to act in this capacity. That is, the DLS is a
permissioned system. The cost of co-ordinating activity amongst a known number of identified
individuals is less than where neither the number nor the identity of the members is known. Both
agreed changes maintaining existing DLS integrity and forking will be less costly, suggesting
that permissioned systems may be less stable. In the long run, this could have the effect of
making it harder for the DLS to attract and retain new node operators, and thus build membership
scale quickly. This effect could be overcome in part by adopting rules that make forking more
expensive – for example, requiring the payment of a substantial bond (membership fee) that is
forfeited if the member initiates or joins a fork.
The requirement for node operators to develop their own software to interface with the DLS
offers only a weak form of a bond against forking once the desired system has been selected
from the range available. If a successful fork occurs, then the software will be equally useful on
either variant (at least initially). Thus expected DLS scale rather than stability likely has a greater
influence on the selection of the system by a given node operator. However, the advantages of a
flexible, permissioned system become more evident when certainty about the future environment
in which the DLS will operate is lower.
Assume now that a DLS is operating in a volatile environment, where changes in these external
circumstances may alter the returns to a subset of node operators such that they may find it
desirable to change the existing rules or create/support a fork with rules more conducive to their
interests. Such actions will be frustrated by the high costs of identifying the likely disaffected
operators and co-ordinating the requisite change. If no one operator once having joined a system
can co-ordinate a defection at low cost, the incentives to join in the first instance, and to invest
in developing the requisite code for a fork are likely low. In these circumstances, it may be
feasible to instigate a new club only if, in the first instance, the rules serve to lower the costs
of changing them when circumstances indicate. This could be the case in the early days of
developing the business case for a DLS to be used for a specific application or in a specific
industry (e.g. identity verification or supply chain management). However, it is not axiomatic
that this state will prevail indefinitely. Generally, the more mature is the DL application, the
more widely used it is, and the more diverse are the interests of its user groups, the more costly it
will be to gain a consensus on changes to the non-software-mediated governance rules, and the
more stable will be the DLS.
We note that, by analogy, the internet did not originate as the public, open entity governed by the
cultures, norms and formal arrangements currently prevailing. Rather, it began as a closed, permissioned entity with substantial restrictions placed by its governance arrangements on its users.
It was initially a network of peers in government and academia with very narrow, homogeneous
research interests in network technology development, beginning in 1969. The governance
arrangements expanded gradually to include users with broader, more general research-oriented
interests with their own network resources that specifically excluded commercial network operators. While users may have utilised PSTN connections to make their connections, the public
telephone companies were unable to participate meaningfully in the internet ecosystem until
changes in the governance arrangements in 1995 enabled wide private sector participation
(Leiner, Cerf, Clark, Kahn, Kleinrock, Lynch, Postel, Roberts and Wolff, 2009). Over time, as
more and more user groups were added, and users became more and more heterogeneous, the
changes in governance arrangements became less and less frequent as the costs of co-ordinating
their interests in order to institute changes became greater. Substantive changes now occur very
rarely indeed, via costly consensus-seeking international processes organised by entities such as
the International Telecommunications Union and the Internet Corporation for Assigned Names
and Numbers.
Furthermore, as DLs exist to serve applications for communities of specific interest, the extent
to which the environments in which they operate are stable or volatile, and hence the costs of co-ordination, will vary depending on a wide range of context-specific factors. In part, this explains
why, despite the existence of several thousand cryptocurrencies, only a handful are operating at
a meaningful scale. Unless nearly all operators are equally affected by the exogenous change
in circumstances or an internally-agreed rule change, then even if the high co-ordination costs
could be overcome, the proportion of defections to a forked variant will be small and it will
be unlikely to appeal to a significant number of end-users. The greater is the choice of nodes
available to an end-user to interact with the original DLS, the lower are the incentives for a
single node to defect, unless the choice of end-users to patronise other nodes is also restricted. It
now matters how end-users’ interaction with the DL is mediated. If their choice of mediating
node is restricted to a limited number of operators, then the costs of co-ordinating the migration
of end-users’ future use from the original DLS to the forked version are substantially reduced
compared to the alternative of multiple (or ‘free’) node choices.
## 3 Developing an inquiry framework
To analyse a DLS as a club, it is first necessary to identify its membership and the rules
governing its operation and governance. As with any club, there may be many different states
of membership, defined in the rules. The rules will specify their relationships with each other,
along with the various powers each class of member may exercise in both regular operation and
in club governance.
### 3.1 Membership
The fundamental technical entities in any DLS are the nodes on which the DL copies are stored.
Each node is managed by a human actor, which may be either a real (unique human) or a legal
(corporate) person. The human actor makes the decision to join the club, and in doing so agrees
to abide by the club rules. Human node operators will subsequently be termed node-members of
the DLS club.
Upon joining, node operators must acquire the current version of the DLS operating software
from the club’s software bank or find equivalent and indistinguishable (from the point of view of the network) software elsewhere. The software bank is managed by a sub-committee of members,
who may or may not fulfil other roles within the club at the point of time the analysis is being
undertaken. However, as the origin of any DLS relies upon the development of the relevant
software, and no nodes can join the club until the software has been ‘released’ for production, most
DLS clubs will begin with a small number of members all of whom have strong stakes in
the software development. Members who participate in the development and maintenance of
software alone will be termed software-members. At the origin of a DLS, the club is most
likely composed almost exclusively of software-members. Over time, however, as more node-members join, the proportions will change. Software-members play a vital role as they have
the knowledge and skills necessary to evaluate the effectiveness of the existing software and
implement changes to it – such as those necessary to generate a fork. Members of the software
subcommittee therefore exert significant power (control) over the software content and hence
the likelihood of forking or changes to the existing software occurring.
Node-members are typically remunerated for holding ledger copies and processing transactions
via a combination of payments from the DLS (in redeemable tokens – system currency) upon
becoming the ‘winner’ who first posts the ultimately-agreed block, and from the entities who
requested the transaction in the first place. They have strong vested interests in both the system-generated rules for token payments and any other agreements about the setting and collection of
transaction fees paid by those using the system.
The development of the software for a DLS, and its promotion, are not costless. Software-members may contribute to software developments without being paid, but nonetheless, they
face an opportunity cost for the time invested. In effect, they donate that time (albeit that they
may expect to be rewarded subsequently via returns from DLS operation as node operators,
transaction generators or end-users). However, the DLS may have members who support the
club’s activities with explicit financial contributions. Members acting in this capacity will
subsequently be termed donor-members. The club nature of a DLS is distinguished from a
proprietary firm by the fact that these donor-members are not shareholders. They have no defined
claim on either the club’s assets or any profits made from operating (though of course, like
software-members they may obtain benefits in other capacities of interaction in the system).
For a DLS to be operational, it requires two other classes of member. These are
- end user-members, who wish to use the system to undertake transactions with other end
users or request information held in the DL, and
- transaction-members, who manage the interfaces via which end-users participate.
Transaction members may hold copies of the DL content, to facilitate raising transactions and
answering queries. However, they do not participate in processing the transactions, which is
undertaken by the node-members.
In some cases, the transaction-members may also be end-users, interacting with the DL for their
own purposes. These users are typically dependent upon using software and/or code provided
by the DLS in order to generate transactions to it or queries on it. However, they do not exercise
any rights in the development of that software/code, unless they also participate separately as
software-members. They must take the code ‘as given’. An example is a cryptocurrency wallet,
managed by an individual end-user.
In other cases, transaction-members may undertake a vast range of activities separate and distinct
from the DL, with a vast range of end-users, as well as posting transactions for node-members
to process and queries to be responded to. These transaction-members may generate their
own bespoke code and applications (e.g. web pages) that build on code supplied by the DLS,
but once again, unless they engage separately as software-members, they exert no influence
on the DLS code per se. An example is a currency exchange, which may interact with many
different cryptocurrency DLs in addition to banks handing traditional fiat currencies and payment
mechanisms. However, to the extent that transaction-members have access to the DLS code, and
have the capacity to understand and modify it, they provide an important discipline on the DLS
because of their potential to create a fork.
Transaction-members are typically remunerated in fees paid to them by end-users. These may
(but do not need to be) determined by club rule processes. However, transaction-members
must pay fees to node operators when transactions are successfully completed. These may
be encoded within the DLS and paid using system tokens, or ‘off-system’ via club rules or
other ‘private’ contracts agreed between members. Club rules can be used to outlaw the latter
agreements, but enforcement is contingent upon the ability to detect their existence and use.
Their primary governance concerns will pertain to the level of, and rules setting, these fees, and
rules concerning how they must relate with equivalent and adjacent club members – that is, other
transaction members, node-members and end user-members.
End user-members are those who participate in the DLS only to the extent that they are originators
or beneficiaries of transactions, or they make inquiries on the ledger. They are the equivalent of
customers of a shareholder-owned firm. They will pay fees to transaction-members for services
requested. They may pay these in stocks of system tokens via processes encoded in the DLS,
but equally, in other currencies, via arrangements that need not be agreed by or encoded in the
DLS. End user-member interests in DLS governance will pertain largely to the rules via which
these fees are determined. As for transaction-members, their primary governance concerns
will pertain to how they must relate with equivalent and adjacent club members – that is, other
user-members and transaction-members.
### 3.2 Governance
As identified above, a single club member may interact with the DL in many different capacities.
The potential overlaps are illustrated in Figure 1. In the early stages of the DLS life cycle, and
especially during its development, all roles (except perhaps donor-membership) may overlap, as
in the hatched portion of Figure 1. However, as the DLS matures, and especially as it increases its scale of operations, the roles would be expected to gradually separate out (i.e. specialisation emerges, as per Williamson, 1986). The nature of the separation will now be governed by the
rules implicitly embedded in the DLS software and explicitly articulated in its “offline” rules.
Figure 1: Membership Status
**3.2.1** **Control**
Broadly speaking, Figure 1 identifies a hierarchy in membership status for mature systems. The
higher up in the hierarchy a member sits, the more power is potentially conferred in decision-making in the governance arrangements. End-user members and transaction-members can exert
very little formal power via the governance and decision-making processes, as they must ‘take
as given’ the package offered by node members. Their power is confined to ‘voting with their
feet’ and either choosing not to patronise the DLS, or (to the extent possible given the costs),
co-ordinating a successful fork.
The costs of organising a successful fork depend on the extent to which the disaffected
transaction-members can ensure that if they leave, end-users will follow them and not defect to transaction-members remaining on the original DLS. This is largely a matter of the design
of the contractual relationships between transaction members and end-users. If these allow a
transaction-member to limit the extent to which an end-user can patronise other transaction-members, then the costs of co-ordinating to organise a fork will be lower. On the one hand,
DLS designers may not want to place many restrictions on these relationships, as reducing the
likelihood of forking reinforces system stability. While the power of members higher up the
membership hierarchy in Figure 1 is reinforced, it will rarely need to be exercised to change the
software and/or other governance rules. On the other hand, as discussed in the preceding theory,
if change is anticipated, then it may be necessary to co-ordinate the actions of all members in
order to change key elements of the DLS rules (software and other rules) without exposing the
DLS to undue risks of forking.
Node-members are pivotal, as the DLS cannot operate without them, but equally, they too may
have little choice but to accept a ‘take it or leave it’ package offered by the founder-members.
Once again, they can opt not to join in the first place, or like transaction members, co-ordinate
to instigate a successful fork. However, to the extent that they are formally engaged in the
process of DLS governance outside of the software channels, they can work constructively with
the software and donor members to change the rules in a manner that ensures their ongoing
patronage.
By either custom or explicit design, therefore, DLS governance is effectively controlled by a
small coalition of software-members, who may also participate as node-members or be closely
affiliated with influential node-members (i.e. they form the club committee). In order to motivate
their participation, it would be expected they anticipate remuneration from either their node
operation activities, or some other arrangement such as an honorarium paid from DLS funds held
off the ledger – for example, financial or in-kind contributions (e.g. time, computing resources)
made by donor-members. Donor-members without other membership stakes are unlikely to
make substantial contributions of this kind unless they too exercise some influence over DLS
governance and management – for example, having some powers to appoint or veto candidates
to the club committee, or specifying in advance how their donations are to be managed and/or
applied – in the same manner as expected by donors to clubs.
Thus, it is more likely that formal articulation of DLS governance arrangements outside of the
software itself (e.g. formal agreement of rules, club or trust agreements, etc.) will be necessary
the more donor-members there are, and the greater is their contribution of resources towards
DLS operation. Sponsorship of these formal arrangements may arise in the event that a group of
donor-members form a club to establish a new DLS for a specific purpose (e.g. to serve a trade
organisation or similar). Formal governance arrangements may also be necessary for a group
of informally-organised software-members wishing to make use of existing entities (e.g. firms,
trade organisations) to take an embryonic DLS from test-state to production.
**3.2.2** **Rule and relationship formalisation**
When a DLS is new and small, all members have homogeneous interests, and all are known to
each other (i.e. all participate in the club in the same manner, as illustrated by the memberships
intersecting in the hatched area of Figure 1), formal rules articulating the relationships between member groups and how conflicts will be resolved are less necessary.
However, as it grows, role specialisation increases and member interests begin to diverge, and rule formalisation becomes more important. In particular, the allocation of important
decision-making powers, processes of appointments to decision-making bodies, the relationships
between different member categories and expectations and obligations of the relevant members
should be made explicit in order to allow members to make appropriate decisions and enabling
them to expect consistent predictable outcomes when interacting with the club.
Figure 2: Transacting Relationships
Figure 2 illustrates some potential patterns in relationships between different classes of members
in a hypothetical DLS. This can be used to illustrate how different restrictions placed upon the
interrelationships of club members affect costs of co-ordination.
For example, node N3 operates in a closed environment with a limited number of transaction-members (TP4 and TP5) who interact with no other node operator. Furthermore, the transaction-members interact with a limited number of end-users (EU7, EU8 and EU9), who do not interact
with any other transaction-members who do not operate through node N3. This arrangement
could be achieved by having rules restricting interactions to a closed subset of members. That is,
N3 will only accept transactions from transaction members known to or recognised by it, and
these members are precluded by software-mediated rules from interacting via any other node-member. Similar obligations can attend the interactions of end user-members with transaction
members. In this example, EU8 can interact with any transaction member affiliated with N3
(TP4 or TP5), but EU7 and EU9 may be limited to interacting with TP4 and TP5 respectively.
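These interaction restrictions can be written down as a simple permission map. The Python fragment below (hypothetical, reusing Figure 2’s labels) encodes which transaction-members each node accepts and which transaction-members each end-user may patronise; the permissionless limb is simply the one whose sets are left unrestricted:

```python
# Which transaction-members each node-member accepts transactions from.
node_accepts = {
    "N1": {"TP1", "TP2", "TP3"},  # permissionless limb: open interaction
    "N2": {"TP1", "TP2", "TP3"},
    "N3": {"TP4", "TP5"},         # permissioned limb: closed subset only
}

# Which transaction-members each end-user may patronise.
end_user_may_use = {
    "EU7": {"TP4"},
    "EU8": {"TP4", "TP5"},
    "EU9": {"TP5"},
}

def allowed(node: str, tp: str, end_user: str) -> bool:
    """A transaction is admissible only if both layers of permission hold."""
    return tp in node_accepts[node] and tp in end_user_may_use.get(end_user, set())

assert allowed("N3", "TP4", "EU8")
assert not allowed("N3", "TP1", "EU8")  # TP1 cannot route via the closed node N3
```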
The N3 limb of Figure 2 is an example of a “permissioned” DLS – each member needs the
‘permission’ of one higher up the membership tree to interact with the DLS. As these arrangements prevent ‘client’ transaction- and end user-members from interacting via any other node,
considerable power is vested in N3. If it instigates a fork, then it can be sure of maintaining its
existing transaction volume at negligible cost. The higher are a node-member’s investments
in the DLS and its operation, and the greater the share of its remuneration it gets from fees
paid by transaction members, as opposed to the DLS, the more likely it is that a node-member
will prefer to use the governance rules to restrict transaction-member and end user member
choices. If all node operators are comparatively homogeneous in their identities and operations, and a high proportion of their remuneration comes from payments agreed ‘off system’ with downstream affiliates (rather than from internal DLS payments), then a strictly hierarchical system is more likely to emerge. Even though the ledger and software are decentralised,
each node will operate as the principal of its own federated ‘sub-branch’ of the DLS club. A
commercial analogy is franchisees operating with exclusive territories. As with the franchise system, inducing participation by the node operators is contingent upon these protections. However,
unlike franchise systems, the node operator can relatively costlessly exit, taking existing systems
and customers along. If the club is to attract node operators in the first place, and remain stable
into the future, those members must be protected from the effects of competition emerging from
forking ex-members. The governance rules must contain provisions that make forking costly
(e.g. very large membership fees, forfeited on forking).
By contrast, nodes N1 and N2 in Figure 2 can interact with any of TP1, TP2 and TP3. It is
an example of a “permissionless” or fully public system. If N2 forks, TP1 and TP3 can shift
their interaction to N1. Fewer (or no) interaction restrictions lower the likelihood of forking and
therefore the risks of joining for a new node. There is less need for governance arrangements to
constrain member defection by forking. Indeed, such a system may be able to operate without
any special rules governing interaction. Competition within and between members may be
satisfactory.
However, in both cases, the more heterogeneous are the node members, the less likely it is that a ‘one size fits all’ set of rules (especially for remuneration) will be optimal for all member
types. Tensions between members are more likely to arise in these circumstances. However,
unless members are identifiable and known to each other, and formal channels established for
resolving disputes, the costs of achieving a satisfactory resolution are likely so large as to be
prohibitive. Change is unlikely to occur, either to the existing rules or via forking. To the extent
that these problems can be anticipated, cost-reducing dispute resolution mechanisms may be
contained within the DLS governance provisions. If they are not, then arrangements external
to it may also facilitate co-ordination – for example, if specific member groups are affiliated in
other ways, such as by being members of an industry association. Knowledge of such potentials
may alter the strategies by which a specific DLS may seek to expand – for example, by engaging with the aggregating entity directly, or seeking to include it as a member, and thereby relying
on its resources to assist in dispute resolution. In this case, it may not be necessary for the
DLS to have direct knowledge of the identity of, or direct communication with, individual club
members. Nonetheless, it is noted that the outcomes of such co-ordination may not be aligned
with preserving the viability of the DLS unless its governance rules contain means of ensuring
the aggregate members are required to prioritise this outcome.
## 4 Case study: Bitcoin
The Bitcoin blockchain is a distributed ledger (DL) which is used to record transactions in the
Bitcoin cryptocurrency. In this paper, however, we consider only the mechanism by which new
blocks are added to the ledger rather than the operation of the cryptocurrency, which is well described elsewhere, for example by Böhme, Christin, Edelman and Moore (2015).
### 4.1 Description of the Bitcoin protocol
On the most basic level, the Bitcoin “network” consists of a large number of entirely independent
computers that exchange messages conforming to certain specifications using the same protocol
and each with a copy of the Bitcoin DL. The Internet protocol (IP) addresses of some key
servers are published on authoritative websites and the network is free to join. These servers can store and
distribute the addresses of other servers on the network.
Each server (or node) can check whether its version of the DL corresponds to those stored by others (up to a certain number of blocks) on the peer-to-peer network, but this is most easily done by consulting some authoritative website. The basic function of a node is to copy
the DL but it can also submit transactions for possible inclusion in the chain using an identity
based on a randomly generated public-private key pair. Before considering which transactions
(which are actually simply messages in a specific format) are included in the DL, we have to
consider the integrity of the system as a whole.
Suppose, as a thought experiment, the Internet were suddenly split into two fully functioning parts – for example, by a single large country detaching itself from the global network. Bitcoin
nodes on the detached part of the network might be unable to find some of the servers with IP
addresses published on the authoritative websites (if these were available) but as long as some
of the authoritative servers are based in the detached portion of the Internet, Bitcoin nodes in
the rump would continue to function as normal. The same would be true for the other portion
of the Internet and the two versions of the Bitcoin DL would simply grow differently. For the
paranoid, in short, there is no way of knowing that they are operating on the “true” DL.
Nevertheless, Bitcoin has proven quite successful as a payment system and has maintained its
integrity and support remarkably well. The main reason for this is the ingenious design of its
proof-of-work system for generating new blocks for inclusion in the ledger.
### 4.2 Adding new blocks
The work is done by “miners”, which are nodes on the network that generate candidates for new
blocks. Each of these candidates must contain valid transactions (moderately easily checked by
other nodes) and the solution (very easily checked) to a mathematical problem that necessarily
involves generating a lot of random candidate solutions, on average. The solution is included in
the candidate block broadcast to the network, as is a transaction that includes awarding a bounty
to the miner.
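In Bitcoin’s case, the problem is to find a nonce such that the hash of the candidate block falls below a difficulty target: a solution takes a great many random trials to find, but only a single hash to check. A deliberately simplified sketch follows (real Bitcoin double-hashes an 80-byte header with SHA-256 and encodes the target compactly; the string header here is a placeholder):

```python
import hashlib

def mine(header: str, difficulty_bits: int) -> int:
    """Search nonces until the block hash has `difficulty_bits` leading zero bits."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{header}{nonce}".encode()).hexdigest()
        if int(digest, 16) < target:  # costly to find ...
            return nonce
        nonce += 1

def check(header: str, nonce: int, difficulty_bits: int) -> bool:
    """... but a single hash to verify."""
    digest = hashlib.sha256(f"{header}{nonce}".encode()).hexdigest()
    return int(digest, 16) < (1 << (256 - difficulty_bits))

header = "prev_hash|merkle_root|timestamp"  # placeholder header fields
nonce = mine(header, difficulty_bits=16)    # roughly 2**16 hashes in expectation
assert check(header, nonce, difficulty_bits=16)
```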
As nodes receive valid candidate blocks from miners, they accept them, add them to their copy
of the ledger and rebroadcast them. Subsets of the nodes can at this stage receive and accept
different new blocks. This is a dilemma but one that is completely resolved (usually within about
an hour) by the nature of the Bitcoin protocol. Whichever new block is accepted by more nodes
will tend to be the block that is accepted by miners and that they use to build subsequent blocks.
This is entirely consistent with miners’ self-interest – they would have no incentive to mine on
chains that are likely to be abandoned. With the operation of this majoritarian mechanism, nodes
that have accepted a less favoured block will eventually find that the chain is a dead-end and
will revert to the surviving chains. This mechanism delivers a consensus that is driven entirely
by the self-interest of all the parties and is subject to no prior explicit arrangement.
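Operationally, “the block accepted by more nodes” wins because each node adopts the valid chain embodying the most accumulated work, which at constant difficulty reduces to the longest chain. A minimal sketch of that tie-breaking rule (our simplification; real clients sum per-block difficulty rather than counting blocks):

```python
def best_chain(candidates: list) -> list:
    """Adopt the chain with the most blocks (constant-difficulty proxy for work)."""
    return max(candidates, key=len)

# Two nodes accepted different blocks at height 3; once miners extend one
# branch first, all nodes converge on it and the other becomes a dead end.
branch_a = ["b0", "b1", "b2", "a3"]
branch_b = ["b0", "b1", "b2", "b3", "b4"]
assert best_chain([branch_a, branch_b]) == branch_b
```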
This majoritarian system of vetting new blocks[1] is the source of fear of the so-called “51%
attack”, which would consist of putative malicious control of a majority of the mining capacity
and the possible introduction of improper blocks (for example, containing invalid transaction
messages) that are then included in the DL. Since all nodes are able to relatively inexpensively
check the validity of blocks, this is extremely unlikely to go unnoticed for very long but it is
likely to cause a great deal of confusion and distrust. Nevertheless, since the miners are rewarded in Bitcoin, they probably have very little incentive to engage in behaviour that reduces trust in
the underlying cryptocurrency.
The governance of the Bitcoin ledger is therefore mechanically implied by the protocol, which is the ingenious invention that engenders great robustness and stability. Nevertheless, this does not exclude the use of explicit agreements among participating nodes (or, indeed, natural or juristic persons). Should, for example, a 51% attack introduce an invalid block, there is nothing preventing a large number of stakeholders from agreeing to make a certain change to the ledger and to restart from that point onward in a specific way. This could however be costly and disruptive
because of the ongoing demand for the DL to record the processing of payments.
### 4.3 Forking the chain
It has happened on several occasions that a sufficiently large section of Bitcoin users managed to agree to change the protocol such that, at a certain point, they started following new rules (“forking” the blockchain at that point) and that this change has been sufficiently sustainable. Bitcoin Cash is one example. At the point of creating Bitcoin Cash, anyone with (say) 2.3 Bitcoin would retain that amount but would also have 2.3 units of Bitcoin Cash as well, attributed to the same public-key identity. Such a fork requires no more than for a viable number of participants to (agree to) do it. In the thought experiment above on the splitting of the Internet in two, the fork would have been involuntary.
In August 2010, a notable fork took place to correct a technical error. A block had been mined
that created 184 467 440 737.09551616 units of Bitcoin (van Wirdum, 2016) and sent them to
two addresses. The number is remarkable since the Bitcoin protocol only allows for a total of 21
million Bitcoin to ever exist. A bulletin board message on 15 August from “Satoshi Nakamoto”
warned
1 New blocks and who mines them can be observed directly at https://www.blockchain.com/explorer.
*** WARNING *** We are investigating a problem. DO NOT TRUST ANY
TRANSACTIONS THAT HAPPENED AFTER 15.08.2010 17:05 UTC (block
74638) until the issue is resolved.
and the error was reversed by a software update within a few hours. This was the most serious
protocol or software error in the history of Bitcoin, and it happened when the DL was not yet two years old. A similar breakdown today would hardly be tolerable in view of the number of
transactions per day.
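The oddly precise number is the fingerprint of a 64-bit integer overflow: converted to satoshis (1 Bitcoin = 10^8 satoshis), it is exactly 2^64. A few lines of Python verify the arithmetic:

```python
from fractions import Fraction

# The block's output value in Bitcoin, converted exactly to satoshis.
btc = Fraction("184467440737.09551616")
satoshis = btc * 10**8
assert satoshis == 2**64  # 18 446 744 073 709 551 616: a 64-bit overflow
print(satoshis)
```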
### 4.4 Informal and formal governance
In addition to the large miners, “a small core of highly skilled developers” (De Filippi and
Loveluck, 2016) for Bitcoin Core, the most widely used Bitcoin client, have an outsize influence
in practice on the arrangements for this DL. The software was initially published by Satoshi
Nakamoto (pseudonym) who also released the Bitcoin founding whitepaper. Development of
Bitcoin Core has been funded by MIT Media Lab and others (van Wirdum, 2016). Given the
practical dominance of Bitcoin Core software, it would not be entirely out of place to view
governance of Bitcoin as identical to the governance of the software project, as a first-order
approximation.
This is nevertheless very informal and unstructured, even anarchic, since there is still absolute
freedom to fork the open source software project (at the same time as the chain). It would not be
incorrect to say that there is no formal governance arrangement for Bitcoin.
### 4.5 Applying the analytical framework to Bitcoin
Applying Figure 1 to Bitcoin, the following membership stakes are identified:
- software-members are an unidentified person/group of people acting under the soubriquet
“Satoshi Nakamoto”. It would appear that this group exercises ultimate control over
Bitcoin governance;
- MIT Media Lab was an original donor-member, but it is not known whether it or any other
unidentified funders continue to contribute actively to Bitcoin governance;
- node members are the miners, who can freely enter and exit of their own accord. There is
no explicit relationship between them and any other members;
- transaction members and end user members can freely enter and exit of their own accord.
There are no explicit requirements governing their interactions. One entity may participate as all of end user-, transaction- and node-member. Overlaps almost certainly occur, given that the majority of issued bitcoin are held by a small number of system participants.
Applying Figure 2, we conclude that in the absence of any apparent rules to the contrary,
Bitcoin is a fully-public DLS with no explicit or externally-articulated governance arrangements.
All applicable rules are embedded in and executed by the software. Changing the rules is
extremely costly - as evidenced by the perpetuation and growth of the DLS despite the absence
of substantive changes to the software-based rules since the August 2010 fork. Given the large
number of nodes and the comparative inability of member interests to organise successful rule
changes or forks where large numbers of members defect, it would seem that Bitcoin members
are heterogeneous, and lack ‘off-system’ means of cost-reducing co-ordination. The inherent
anonymity of Bitcoin members also militates against such actions.
That said, we note that the Bitcoin DLS is underpinned by a relatively simple and reasonably
well-understood financial transacting business case. While the distributed ledger component of
the system is novel, the payments processing function is not. It is much easier at the outset
to identify the governance requirements for a stable, well-understood system where change in
the business model is unlikely to occur. Arguably, the Bitcoin DLS has been remarkably stable
because these characteristics have meant that the circumstances of tensions arising between
different members or classes of members have simply not come about, since 2010 at least. And
that the changes made in 2010 were successful was likely in large part attributable to the fact
that at that stage, membership was small, much more homogeneous and more likely to be comprised of people known to each other (or at least, a sufficiently large coalition was well enough known to each other to co-ordinate the forking at a comparatively lower cost).
To the extent that forking has occurred to start new currencies, it is likely that this has been steered by software-skilled members, if not software-members per se. That none of the forked currencies has grown to rival Bitcoin simply serves to reinforce the dominance of the existing
arrangements.
## 5 Case study: Sovrin
Sovrin is a “global public, permissioned identity utility for exchanging identity more securely”
(Patel, 2018) based on a distributed ledger overseen by the Sovrin Foundation. It is based on
open source blockchain software and trusted participants that issue and verify identities and other pertinent identifying information about natural and juristic persons. The main aim of the
project is to facilitate the reuse of verified information (with the permission of the data subject),
to incentivise the release of information and to record the withdrawal of the right to use such
information (Ldapwiki, 2018).
### 5.1 Description of the Sovrin network
Figure 3 shows the Sovrin Governance Network, in which the Sovrin Governance Framework
Master Document defines the “constitution” of the Sovrin Network laying down the purpose, core
principles and links to other main documents. The Sovrin organisation is formally constituted as
a nonprofit organisation incorporated in Utah, USA on February 2, 2018. Its purposes include,
but are not limited to
(a) To develop, govern and promote an international nonprofit private sector self-sovereign
digital identity system based on the Sovrin distributed ledger;
(b) To own, lease, sell, exchange or otherwise deal with all property, real and personal,
tangible or intangible, to be used in furtherance of these purposes; and
(c) To engage in any and all lawful activities incidental, useful or necessary to the accomplishment of the above-referenced purposes.[2]

While it has no legally-defined members, the term “members” is used “to refer to donors, technology contributors, ledger stewards, members of Corporation committees or work groups, and other participants in the Sovrin community whose roles may be further defined in the Bylaws, agreements, or other governing documents”.[3] Its principal office and mailing address is identified as 151 S 1150 E, Lindon, UT 84042.

Figure 3: The Sovrin Governance Network

2 Sovrin Articles of Incorporation, February 2, 2018. https://drive.google.com/file/d/1QC7Ma9DZUiOjY3G4S1URLXD2CBJzvxqw/view
The Sovrin Foundation is overseen by a Board of Trustees with no fewer than three and no more than twenty-one members. The original Trustees are identified as Phillip J Windley and Jason
A Law, both of Utah, and Drummond S Reed of Washington State. In mid-January 2019, it
comprised 12 members. A nominations committee selected by the Board identifies eligible
nominees, who are elected annually at the Annual Meeting. Trustees are not remunerated,
but are (subject to approval by the Board), reimbursed for expenses incurred on behalf of
the organisation. The Board appoints an Executive Director to supervise day-to-day operations; Chief Executive Officer Heather Dahl, one of the twelve Trustees, is the Executive Director. Along with the CEO, the Chief Financial Officer Roy Avondet, Chief Technology Officer Nathan George and Director of Marketing Helen Garneau comprise the Executive Leadership. Seventeen other staff are
identified on the website. The Board also has the power to create Advisory Councils and
other committees as required, in addition to the Executive Committee and Finance Committee
identified in the Bylaws. In mid-January 2019, a fifteen-member Technical Governance Board,
including CTO Nathan George and founder-trustee Jason Law, is identified as having been
appointed to govern the “technical design, architecture, implementation and operation of the
Sovrin Network as a global public utility for self-sovereign identity”.
3 Sovrin Bylaws, Jan 31, 2018. https://drive.google.com/file/d/1kkuiEp0vA620ydcAND9pIY_hLHVjHKsG/view
Central to the mode of governance within the Sovrin network is the Sovrin Steward Agreement
document specifying the legal obligations, liabilities, etc. for Stewards and the Sovrin Foundation.[4] The agreement, governed by the law of the State of Delaware, contains explicit provisions
to be followed in the event of a dispute between the Stewards and Sovrin.
The trusted participants acting as node members in the project are called Stewards. In mid-January 2019, forty-eight are identified on the Sovrin website.[5] They include banks, telecommunications companies and universities as well as IT companies such as Cisco and IBM. On
the Sovrin network, it is these Stewards that approve transactions for inclusion in the ledger
and that, in fact, submit transactions for inclusion (Tobin, 2018). There is a strong emphasis
on anonymous identity in ledger records (as actual users can create unique reference numbers
for each relationship) as well as zero-knowledge proofs where a user can, for example, use the
system to prove that they are over 18 without revealing their actual age – based on the trusted
information embedded in the system. This feature is unique to the Sovrin Identity Network (SIDN), which allows end user members to create a self-sovereign identity (SSI) (Mühle et al., 2018).
The software used has been part of The Linux Foundation’s Hyperledger project since 2017
under the name Hyperledger Indy.[6] Like the Bitcoin software, it is open source. Sovrin facilitates
engagement of individuals in the Hyperledger Indy project, including direct links on its website
to the weekly Indy Group calls, Chat room and Mailing List (https://sovrin.org/developers/).
No sensitive data is stored in the DL at all – only the Stewards’ identifying information as well
as pointers to the end-users’ data are included in the ledger. Stewards act as validators (similar
to Bitcoin miners) as well as clients (who submit transactions).
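A minimal sketch of this "pointers, not data" idea follows; the field names are hypothetical, and Sovrin's real credential and DID formats are considerably richer and add zero-knowledge proofs on top:

```python
import hashlib
import json

def ledger_pointer(record: dict) -> str:
    # Only this digest would be written to the DL; the record itself stays
    # off-ledger with the data subject (or their agent).
    canonical = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

off_ledger_record = {"subject": "did:example:123", "attribute": "age>=18"}
on_ledger = ledger_pointer(off_ledger_record)

# A verifier shown the off-ledger record can re-derive the pointer and
# compare it with the ledger entry.
assert ledger_pointer(off_ledger_record) == on_ledger
```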
### 5.2 Adding to the ledger
The validation of information submitted to the Sovrin network is entirely the work of the
Stewards. This is strikingly dissimilar to the case of Bitcoin where anyone may start mining.
One cannot be a Steward without formally entering into an agreement with the Foundation. The
validity of transactions is only vouched for by other Stewards and cannot necessarily be easily checked by anyone with access to the ledger (as it can in the case of Bitcoin).
### 5.3 Forking the chain
It is definitely technically possible for a subset of Stewards to agree to defect with a current copy
of the ledger but this act itself would deprive them of the governance arrangements embodied by
the Sovrin Foundation.
The consensus algorithm in Sovrin is called plenum, an enhancement of the redundant byzantine
fault tolerance algorithm.[7] In most general terms, this is a voting algorithm that executes very
quickly and resolves faults possibly introduced by errant nodes.
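As a toy illustration of the general idea (not the actual Plenum/RBFT message flow, which is considerably more involved): with n ≥ 3f + 1 replicas tolerating at most f faults, a value backed by f + 1 identical replies must have been vouched for by at least one honest node.

```python
from collections import Counter

def accept(replies, f):
    # Accept a value once f + 1 identical replies are seen; with at most f
    # faulty nodes, at least one of those replies comes from an honest node.
    value, count = Counter(replies).most_common(1)[0]
    return value if count >= f + 1 else None

# n = 4 replicas tolerate f = 1 fault; one errant node answers differently.
replies = ["txn-committed", "txn-committed", "txn-rejected", "txn-committed"]
print(accept(replies, f=1))  # -> "txn-committed"
```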
4 https://sovrin.org/library/steward-agreement/
5 https://sovrin.org/stewards/
6 https://github.com/hyperledger/indy-sdk/blob/rc/doc/getting-started/getting-started.md
7 https://github.com/hyperledger/indy-plenum/wiki
### 5.4 Informal and formal governance
Essentially all governance in the case of Sovrin is formal. There is, however, a certain devolution of power as regards the Stewards’ dealings with individuals.
### 5.5 Applying the analytical framework to Sovrin
Applying Figures 1 and 2 to Sovrin, the following membership stakes and interactions are
identified:
- The three founder-trustees appear to have acted as initial donor-members. At least one of
these has a technical background, so appears to have been an original software-member.
As Sovrin has expanded, the Technical Governance Board appears to have assumed
responsibility for software oversight. This includes “reviewing the technical qualifications
of the Steward candidates to ensure they meet the requirements and principles of the Sovrin
Trust Framework” - suggesting a degree of central control over not just the blockchain
application but also the applications used by node-member stewards in their engagement
with transaction-members and end user-members (the Sovrin Steward Agreement 3b
obliges the Stewards to “only run software code that has been approved by the Sovrin
Foundation as referenced in the Sovrin Governance Framework”).
- Stewards as node-members are central. Entry is strictly controlled by Sovrin, and steward
identities are known to all other members. Only stewards can raise transactions on the
ledger (although other entities may read data from it), so they fulfil the role of transaction-members as well. The reliability of the identification system relies upon the fact that the
stewards themselves can be trusted by other stewards and users of identity information.
The steward agreement does not mention any membership payments, but there is likely
some considerable expense involved in satisfying the Foundation that admission as a
steward will not bring undue risk to the DLS. The more arduous are these processes, the
more costly they are for the firms concerned. Only firms who are serious about belonging
will put in the effort; if they walk away from the arrangements (i.e. instigate a fork) this
cost is truly sunk. This means the number of nodes is likely to be much smaller than a
public system like Bitcoin, but it also means that the stewards are both known and can be
pursued for any costs caused by seceding.
- End user-members may have many different identities on the ledger, and be effectively
anonymous on it, but each has to raise identity transactions via a Steward who is prepared
to vouch for their identity when posting to the ledger. Hence end user-members cannot be
anonymous in respect of at least that Steward. The knowledge comes from interactions
the stewards have with individuals in other ways, for example as customers of a financial
institution or telecommunications provider. Stewards thus provide the bedrock of trust in
the identities lodged on the DLS. User-member participation will not be possible without
an off-system interaction with at least one steward.
The Sovrin system is in a very early stage of development. It is not clear yet how payments
for services undertaken will be made. However, Sovrin does embody a token system to reward
end-user and steward/node participation.
In contrast to Bitcoin, Sovrin is a new application with a business case that is yet to be fully
proven in any context - either physical or virtual. It is clear that the current Sovrin Framework is
a ‘work in progress’, though one which has moved from truly novel innovation to a small-scale testing phase. Arguably, the stewards who have joined so far are participating as a means of
learning how they can make use of the system as it scales up rather than to achieve an already
clearly-understood outcome. It is not clear at this point what the future might bring for this
venture. However, from our analysis, the much tighter control exerted from the centre, with
clearly-articulated responsibilities, rules and dispute resolution processes binding all parties
is consistent with a system where future developments are uncertain. The strict governance
rules allow for greater flexibility in the direction that can be taken in incorporating rules into the
software. While current abilities for Stewards to formally influence directions are less than for
the software and donor members of the system, they exert significant commercial power as they
mediate the relationships with end users. This close economic co-dependence between them
and the Sovrin Foundation affords them a degree of influence in the governance of their DLS far
greater than that of the Bitcoin miners over their DLS.
## 6 Discussion of the case studies
Ultimately, in the case of Bitcoin, the validity of the entire blockchain can be checked by any interested party. This includes the veracity of the solutions to the mining problems. It also includes checking that the cryptographic signatures of the individual transactions are valid. The only thing that is not possible to check is whether all other nodes might not have conspired to pick certain valid mined blocks rather than others, but the Bitcoin protocol (involving free entry) makes it intuitively rather unlikely that this would take place.
It has been suggested by Vitalik Buterin, creator of Ethereum, that the degree of (de)centralisation
of a network can be examined along three axes (Siriwardena, 2017).
1. Architecture (de)centralisation – what is the physical nature of the system and how robust
is it?
2. Political (de)centralisation – how is membership of and participation in the system governed?
3. Logical (de)centralisation – how flexible are interfaces and data structures in the system?
Considering Bitcoin and Sovrin, we suggest that the decentralisation of the two systems can be
categorised as follows.
| Degree of centralisation | Bitcoin | Sovrin |
| --- | --- | --- |
| Architectural | Low | Medium |
| Political | Low | High |
| Logical | High | High |
Centralisation is linked to governance, a topic to be explored further in future work.
## 7 Conclusion
The authors have investigated distributed ledger (DL) governance in the context of the theory of
clubs. The incentives to passively join and to take part in operating the consensus mechanism
of the DL can be understood using this theory. The examples of Bitcoin and Sovrin illustrate
how formal and informal arrangements operate – either through formal agreement or through
technical arrangements embedded in the software.
We note that DL systems tend to be effectively controlled by a small coalition of software-members, who may also participate as node-members or be closely affiliated with influential
node-members. In order to motivate their participation, it would be expected they anticipate
remuneration from either their node operation activities, or some other arrangement such as an
honorarium paid from contributions made by donor-members. The more donor members there
are, the more likely it is that formal articulation of DLS governance arrangements outside of the
software itself would be required.
## References
Berg, Alastair, Berg, Chris, & Novak, Mikayla. 2018a. Blockchains and Constitutional Catallaxy.
Available at SSRN 3295477.
Berg, Chris, Novak, Mikayla, Potts, Jason, & Thomas, Stuart J. 2018b. From Industry Associations to Ecosystem Associations: Blockchain, Interest Groups and Public Choice. Interest
Groups and Public Choice (November 16, 2018).
Berle, Adolph, & Means, Gardiner. 1932. Private property and the modern corporation. New
York: Mac-millan.
Buchanan, James M. 1962. Predictability: The criterion of monetary constitutions. In search of
a monetary constitution, 155–83.
Buchanan, James M. 1965. An economic theory of clubs. Economica, 32(125), 1–14.
Buchanan, James M. 1987. The constitution of economic policy. The American economic
review, 77(3), 243–250.
Böhme, Rainer, Christin, Nicolas, Edelman, Benjamin, & Moore, Tyler. 2015. Bitcoin: Economics, Technology, and Governance. Journal of Economic Perspectives, 29(2), 213–38.
http://www.aeaweb.org/articles?id=10.1257/jep.29.2.213
Cordery, Carolyn, & Howell, Bronwyn. 2017. Ownership, Control, Agency And Residual
Claims In Healthcare: Insights On Cooperatives And Non-profit Organizations. Annals of Public
and Cooperative Economics, 88(3), 403–424.
Cornes, Richard, & Sandler, Todd. 1996. The theory of externalities, public goods, and club
goods. Cambridge University Press.
Crosby, Michael, Nachiappan, Pattanayak, Pradhan, Verma, Sanjeev, & Kalyanaraman, Vignesh. 2015. Blockchain technology: beyond Bitcoin. Sutardja Center for Entrepreneurship &
Technology. http://scet.berkeley.edu/wp-content/uploads/BlockchainPaper.pdf
Czepluch, Jacob Stenum, Lollike, Nikolaj Zangenberg, & Malone, Simon Oliver. 2015. The use
of block chain technology in different application domains. The IT University of Copenhagen,
Copenhagen.
De Filippi, Primavera, & Loveluck, Benjamin. 2016. The invisible politics of Bitcoin: governance crisis of a decentralised infrastructure. Internet Policy Review, 5(3).
Ostrom, Elinor. 1990. Governing the commons: the evolution of institutions for collective action. Cambridge University Press.
Foroglou, George, & Tsilidou, Anna-Lali. 2015. Further applications of the blockchain.
Columbia University PhD in Sustainable Development, 10.
Fukuyama, Francis. 2014. Political Order and Political Decay: From the Industrial Revolution
to the Globalization of Democracy. New York: Farrar, Straus and Giroux, 455–466.
Hansmann, Henry. 1996. The changing roles of public, private, and nonprofit enterprise in
education, health care, and other human services. Pages 245–276 of: Individual and social
responsibility: Child care, education, medical care, and long-term care in America. University
of Chicago Press.
Krecké, Elisabeth. 2004. 14 The emergence of private lawmaking on the Internet. Markets,
Information and Communication: Austrian Perspectives on the Internet Economy, 289.
Ldapwiki. 2018. Sovrin. Retrieved on 2019-01-13. https://ldapwiki.com/wiki/Sovrin
Leiner, Barry M, Cerf, Vinton G, Clark, David D, Kahn, Robert E, Kleinrock, Leonard, Lynch,
Daniel C, Postel, Jon, Roberts, Larry G, & Wolff, Stephen. 2009. A brief history of the Internet.
ACM SIGCOMM Computer Communication Review, 39(5), 22–31.
Mattila, Juri. 2016. The blockchain phenomenon. Berkeley Roundtable of the International
Economy.
Mazieres, David. 2015. The stellar consensus protocol: A federated model for internet-level
consensus. Stellar Development Foundation.
Mulligan, CJ, Scott, Z, Warren, S, & Rangaswami, JP. 2018. Blockchain Beyond the Hype. In:
World Economic Forum. http://www3.weforum.org/docs/48423_Whether_Blockchain_WP.pdf.
Accessed, vol. 2.
Narayanan, Arvind, & Clark, Jeremy. 2017. Bitcoin’s academic pedigree. Communications of
the ACM, 60(12), 36–45.
Narayanan, Arvind, Bonneau, Joseph, Felten, Edward, Miller, Andrew, & Goldfeder, Steven.
2016. Bitcoin and cryptocurrency technologies: a comprehensive introduction. Princeton
University Press.
Olson, Mancur. 1989. Collective Action. London: Palgrave Macmillan UK. Pages 61–69.
https://doi.org/10.1007/978-1-349-20313-0_5
Ostrom, Elinor. 2005. Understanding institutional diversity. Princeton University Press.
Ostrom, Elinor. 2010. Beyond markets and states: polycentric governance of complex economic
systems. American economic review, 100(3), 641–72.
Ostrom, Vincent. 2014. Polycentricity: The Structural Basis of Self-Governing Systems. Choice, Rules and Collective Action: The Ostroms on the Study of Institutions and Governance, 45.
Patel, Milan. 2018. IBM Blockchain Trusted Identity: Sovrin Steward closed beta offering.
Retrieved on 2019-01-13. https://www.ibm.com/blogs/blockchain/2018/08/ibm-blockchain-trusted-identity-sovrin-steward-closed-beta-offering/
Reijers, Wessel, O’Brolcháin, Fiachra, & Haynes, Paul. 2016. Governance in blockchain
technologies & social contract theories. Ledger, 1, 134–151.
Swan, Melanie. 2015. Blockchain: Blueprint for a new economy. O’Reilly Media, Inc.
Szabo, Nick. 1997. Formalizing and securing relationships on public networks. First Monday,
2(9).
Tarko, Vlad, Schlager, Edella, & Lutter, Mark. 2018. The Faustian Bargain: Power-Sharing,
Constitutions, and the Practice of Polycentricity in Governance. Governing Complexity: Analyzing and Applying Polycentricity, eds. William A. Blomquist, Dustin Garrick and Andreas
Thiel (Cambridge University Press, Forthcoming).
Tobin, Andrew. 2018. Sovrin: What Goes on the Ledger? Retrieved on 2019-01-11.
https://sovrin.org/wp-content/uploads/2018/10/What-Goes-On-The-Ledger.pdf
van Wirdum, Aaron. 2016 (Apr). Who Funds Bitcoin Core Development? How the Industry
Supports Bitcoin’s ‘Reference Client’. https://bitcoinmagazine.com/articles/who-funds-bitcoin-core-development-how-the-industry-supports-bitcoin-s-reference-client-1459967859/
Williamson, Oliver E. 1985. The economic institutions of capitalism: Firms, markets, relational contracting. New York: Free Press.
Williamson, Oliver E. 1999. Strategy research: governance and competence perspectives.
Strategic management journal, 20(12), 1087–1108.
Williamson, Oliver E. 2000. The new institutional economics: taking stock, looking ahead.
Journal of economic literature, 38(3), 595–613.
Windley, Phillip J. 2016. How Sovrin Works. Retrieved on 2019-01-11. https://sovrin.org/wp-content/uploads/2018/03/How-Sovrin-Works.pdf
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.2139/ssrn.3365519?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.2139/ssrn.3365519, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBYNCND",
"status": "GREEN",
"url": "https://www.econstor.eu/bitstream/10419/201737/1/ITS2019-Aswan-paper-24.pdf"
}
| 2,019
|
[] | true
| 2019-02-01T00:00:00
|
[
{
"paperId": "50f1ae445429b401d3fbce145266b05f37f9af62",
"title": "Blockchains and constitutional catallaxy"
},
{
"paperId": "2ac3eb4a601b831c2c88d4960f5cc57cddb10d68",
"title": "The Faustian Bargain: Power-Sharing, Constitutions, and the Practice of Polycentricity in Governance"
},
{
"paperId": "50e03e0797144e6d69aaeb89a024dd6124fccab7",
"title": "From Industry Associations to Ecosystem Associations: Blockchain, Interest Groups and Public Choice"
},
{
"paperId": "1b5dd2f547344ec010a79c2d228ca2470921dab4",
"title": "Ownership, Control, Agency and Residual Claims in Healthcare: Insights on Cooperatives and Non‐Profit Organizations"
},
{
"paperId": "ce64501d692952193be72d450cabe285f7637b9f",
"title": "Bitcoin's academic pedigree"
},
{
"paperId": "01f08210264ee1f338a86e5c95a00d7531c83fe2",
"title": "Governance in Blockchain Technologies & Social Contract Theories"
},
{
"paperId": "2d04c4388dd2b5929cd273b8d1cfb197304cf52b",
"title": "The Invisible Politics of Bitcoin: Governance Crisis of a Decentralized Infrastructure"
},
{
"paperId": "c2de5385c197aab309abd859b36bee1362147688",
"title": "Bitcoin and Cryptocurrency Technologies - A Comprehensive Introduction"
},
{
"paperId": "97fddbbfd681bce9eeb8e0a013353b4d5b2ba0db",
"title": "Blockchain: Blueprint for a New Economy"
},
{
"paperId": "f02d4551fb7832a5209b7c05c2f46a335217bd92",
"title": "Political order and political decay: From the industrial revolution to the globalization of democracy"
},
{
"paperId": "7b8de1a148d61005e92d2cc0de745674fb6faf2f",
"title": "Bitcoin: Economics, Technology, and Governance"
},
{
"paperId": "3a5c55353bb7d4b29bc6ed45d062e78bd8291a66",
"title": "Beyond Markets and States: Polycentric Governance of Complex Economic Systems"
},
{
"paperId": "78be4997d0e889fe88ad2fd3d5dc830219d76103",
"title": "The emergence of private lawmaking on the Internet"
},
{
"paperId": "11040e39cb84397ac26f063058fefcb394da9400",
"title": "The New Institutional Economics: Taking Stock, Looking Ahead"
},
{
"paperId": "1c49353abaea57d33f3297c2ab0919aa52dea5be",
"title": "STRATEGY RESEARCH: GOVERNANCE AND COMPETENCE PERSPECTIVES"
},
{
"paperId": "44773b8f9248b861e80e76166f42e47db556f88d",
"title": "A brief history of the internet"
},
{
"paperId": "5b4cf1e37954ccd1ca6b315986d45904f9d2f636",
"title": "Formalizing and Securing Relationships on Public Networks"
},
{
"paperId": "f1f7964241c12d4c55e2c46195791c115f754080",
"title": "Governing the Commons: The Evolution of Institutions for Collective Action"
},
{
"paperId": "d8c08a4abbd5a601df2c7e11d9b754b1d9af9f34",
"title": "The constitution of economic policy."
},
{
"paperId": "7254b5d903ebe9ec164706304080458e7d821b03",
"title": "The Economic Institutions of Capitalism: Firms, Markets, Relational Contracting."
},
{
"paperId": "81c7ca626f1587b342ccc79eb37156fc5c2ed2f7",
"title": "The Theory of Externalities, Public Goods, and Club Goods"
},
{
"paperId": "3dfb55a740236ab335674b3697c7268d5f75d01e",
"title": "VI PREDICTABILITY: THE CRITERION OF MONETARY CONSTITUTIONS"
},
{
"paperId": "a13b9f1931a71c3f958bec5f62b88cbed1041e93",
"title": "In search of a monetary constitution"
},
{
"paperId": null,
"title": ": What Goes on the Ledger?"
},
{
"paperId": null,
"title": "Sovrin: What Goes on the Ledger? Retrieved on 2019-01-11"
},
{
"paperId": null,
"title": "IBM Blockchain Trusted Identity: Sovrin Steward closed beta offering"
},
{
"paperId": null,
"title": "Blockchain Beyond the Hype"
},
{
"paperId": null,
"title": "Who Funds Bitcoin Core Development? How the Industry Supports Bitcoin’s ‘Reference Client’"
},
{
"paperId": null,
"title": "How Sovrin Works"
},
{
"paperId": null,
"title": "The blockchain phenomenon. Berkeley Roundtable of the International Economy"
},
{
"paperId": "3babb89369eed603ce3c702f447ee6274429cda1",
"title": "The Stellar Consensus Protocol: A Federated Model for Internet-level Consensus"
},
{
"paperId": null,
"title": "Further applications of the blockchain"
},
{
"paperId": null,
"title": "Blockchain technology: beyond Bitcoin"
},
{
"paperId": null,
"title": "The use of block chain technology in different application domains. The IT University of Copenhagen, Copenhagen"
},
{
"paperId": "fde4ebe0f9747f285df5cfaabd95b54e59ecd3e4",
"title": "Collective Action"
},
{
"paperId": null,
"title": "Polycentrictiy: The Structural Basis of Self-Governing Systems. Choice, Rules and Collective Action: The Ostrom's on the Study of Institutions and Governance"
},
{
"paperId": "46881a8dfd3655945909ded6a9c2ff17c2703191",
"title": "Understanding Institutional Diversity"
},
{
"paperId": "588d7b2ce8cbdf172f3321594aff458282170e65",
"title": "The Economic Theory of Clubs"
},
{
"paperId": "5652ad2015c852f02df63c5cc5c9a38e0920305c",
"title": "The Changing Roles of Public, Private, and Nonprofit Enterprise in Education, Health Care, and Other Human Services"
},
{
"paperId": null,
"title": "1985: The economic institutions of capitalism"
},
{
"paperId": null,
"title": "Private property and the modern corporation"
},
{
"paperId": null,
"title": "b) To own, lease, sell, exchange or otherwise deal with all property, real and personal, tangible or intangible, to be used in furtherance of these purposes"
}
] | 17,699
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/010f3407c141dfafbcddf8db50f4b8914f9a3d0d
|
[
"Computer Science"
] | 0.87953
|
Scheduling for Responsive Grids
|
010f3407c141dfafbcddf8db50f4b8914f9a3d0d
|
Journal of Grid Computing
|
[
{
"authorId": "3086404",
"name": "C. Germain"
},
{
"authorId": "143932705",
"name": "C. Loomis"
},
{
"authorId": "29120186",
"name": "J. Moscicki"
},
{
"authorId": "48840171",
"name": "R. Texier"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"J Grid Comput"
],
"alternate_urls": [
"https://link.springer.com/journal/10723",
"http://www.springer.com/journal/10723"
],
"id": "993eb4fa-4cb7-4aed-980e-20e5298efad0",
"issn": "1570-7873",
"name": "Journal of Grid Computing",
"type": "journal",
"url": "https://www.springer.com/journal/10723"
}
| null |
#### EGEE-PUB-2008-002
# Scheduling for Responsive Grids
## Germain-Renaud, C (LRI - CNRS and Université Paris-Sud) et al
### 20 December 2008
Journal of Grid Computing
#### EGEE-II is a project funded by the European Commission Contract number INFSO-RI-031688
The electronic version of this EGEE Publication is available
on the CERN Document Server at the following URL:
<http://cdsweb.cern.ch/search.py?p=EGEE-PUB-2008-002>
### Scheduling for Responsive Grids
Cécile Germain-Renaud
LRI and LAL
Charles Loomis
LAL
Jakub T. Mościcki
CERN
Romain Texier
LRI
June 2006
Abstract. Grids are facing the challenge of seamless integration of the grid power
into everyday use. One critical component for this integration is responsiveness,
the capacity to support on-demand computing and interactivity. Grid scheduling
is involved at two levels in order to provide responsiveness: the policy level and
the implementation level. The main contributions of this paper are as follows.
First, we present a detailed analysis of the performance of the EGEE grid with
respect to responsiveness. Second, we examine two user-level schedulers located
between the general scheduling layer and the application layer. These are the DIANE
(DIstributed ANalysis Environment) framework, a general-purpose overlay system,
and a specialized, embedded scheduler for gPTM3D, an interactive medical image
analysis application. Finally, we define and demonstrate a virtualization scheme,
which achieves guaranteed turnaround time, schedulability analysis, and provides
the basis for differentiated services. Both methods target a brokering-based system
organized as a federation of batch-scheduled clusters, and an EGEE implementation
is described.
Keywords: Responsiveness, Interactive Grids, Meta-scheduler, User-level Scheduling
1. Introduction
The exponential increases in network performance and storage capacity
[41], together with ambitious national and international efforts, have
already enabled the virtualization and pooling of processors and storage
in advanced and relatively stable systems such as the EGEE grid. However, it is more and more evident that the exploitation model for these
grids is somehow lagging behind. At a time when industry acknowledges interactivity as a critical requirement for enlarging the scope of high performance computing [35, 43, 6], grids can no longer be envisioned only as very large computing centres providing batch-oriented
© 2006 Kluwer Academic Publishers. Printed in the Netherlands.
access to complex scientific applications with high job throughput as
the primary performance metric.
A much larger range of grid usage scenarios is possible. Seamless
integration of the grid power into everyday use calls for unplanned and
interactive access to grid resources. We define responsive grids as grid
infrastructures that support on-demand computing and interaction.
This paper describes a set of scheduling methods providing different
levels and types of Quality of Service (QoS) required by responsiveness.
Compared to many recent proposals in this area, our methods target
production grids. They have been implemented within EGEE, on top
of the gLite middleware. EGEE is the largest production grid worldwide, comprising more than 20000 CPUs, 200 sites and 20000 jobs per
day and requiring the strongest constraints on dependability. In this
framework, responsiveness must be built on top of the traditional grid
scheduling tools, which are batch-oriented and dominated by fair-share
policies at institutional time-scales. The associated constraints are:
− delays incurred by non-interactive jobs are bounded,
− resource utilization is not degraded (e.g. by idling processors), and
− the local policies governing resource sharing (Virtual Organizations, advance reservation, etc.) are not impacted.
The rest of this paper is organized as follows. Section 2 describes
use-cases for grid responsiveness. Section 3 presents the scheduling architecture of the EGEE grid and an experimental study of the EGEE
profiles of execution time and overhead. Section 4 presents two examples of user-level scheduling deployed on top of gLite, which is the
EGEE middleware. The first one exemplifies a generic overlay system.
The second one is an application-dedicated environment, which exemplifies grid-enabled computational steering in medical image analysis.
We show that user-level scheduling does improve the quality of service,
by eliminating the middleware overhead, providing a sustained job
output rate, and optimizing the failure recovery. On the other hand,
user-level scheduling is limited to best-effort. Section 5 presents the
Virtual Reservation scheme, which provides guarantees on the overall turnaround time, and its implementation inside gLite. Section 6 discusses related work, and Section 7 presents the conclusions.
2. Motivation
Responsiveness is a key component for real-world grid usage; this section presents a few examples. The first one is grid-enabling medical
image analysis [11, 45, 23]. In a clinical context, medical image analysis
(segmentation, registration) and exploitation (augmented reality for intervention planning or intra-operative support) require full interaction
because computer programs are not yet competitive with the human
visual system for mining these structured and noisy data. Analyzing
large images at a sufficient speed to support smooth visualization requires not only substantial computing power, which can be provided
by the grid, but also unplanned access and sophisticated interaction
protocols.
The second use case is digital libraries. Most of the resource consumption in digital libraries management is related to bulk, off-line
tasks such as indexing. When humans query this massive amount of
data, various actions are triggered such as feature extraction in a query-by-example scheme, which must take place before the actual search
can be carried out, or content protection (e.g. watermarking). User
satisfaction requires nearly instantaneous response.
Finally, in the larger perspective of ubiquitous computing and ambient intelligence, multi-modal interfaces that are capable of natural and seamless interaction with and among individual human users
are mandatory. Responsiveness is a key component for grid-enabling
the methods and technologies that form the back-end of these interfaces, such as pattern analysis, statistical modelling and computational
learning.
Interactive grid applications require a specific grid guarantee, namely
a bound on the overall turnaround time of the grid jobs contributing
to the application. Because such jobs have typically a short execution
time and require completion by a deadline, we call them Short Deadline
Jobs (SDJ) in the remainder of this paper.
3. A case for responsiveness
3.1. EGEE Scheduling
EGEE combines globally-distributed computational and storage resources into a single production infrastructure available to EGEE users.
Each participating site configures, runs, and maintains a batch system containing its computational resources and makes those resources
available to the grid via a gatekeeper. The scheduling policy for each
site is defined by the site administrator. Common scheduling policies
use either FIFO (often with per-user or per-group limits) or fair-share
algorithms. Consequently the overall EGEE scheduling policy is not
centrally defined, but the effect of the interaction of largely autonomous
policies.
The gLite middleware deployed on the EGEE infrastructure integrates the sites’ computing resources through the Workload Management System (WMS) [3]. The WMS is a set of middleware-level
services responsible for the distribution and management of jobs. The
site computational resources present a common interface to the WMS,
the Computing Element (CE) service. The CE specification is one of
the core parts of the Glue information model [4], which is the current
basis for interoperability between EGEE and other grids. From the
middleware point of view, a CE has multiple functions: running jobs,
staging the files required by the job, providing information about resource availability, and notifying the WMS of the job-related events. In
the framework of this paper, a CE can be simply considered as a batch
queue, subject to the above-mentioned policies.
The core of the WMS is the Workload Manager which accepts jobs
from users and dispatches them to computational resources based on the users’ requirements on one hand, and the characteristics (e.g. hardware, software, localization) and state of the resources on the other
hand. The WM is implemented as a distributed set of resource brokers, with some tens of them currently installed; all the brokers get
an approximately consistent view of the resource availability through
the grid information system. Each broker reaches a decision of which
resource should be used by a matchmaking process between submission
requests and available resources. Job requirements are exposed to the
various services of the WMS via the Job Description Language (JDL)
[38], derived from the Condor ClassAd language [39]. The users can
rank acceptable resources (in JDL language) by using an arbitrary
expression which uses state information published by the resources.
Once a job is dispatched, the broker only reschedules it if it failed; it
does not reschedule jobs based on the changing state of the resources.
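A hedged sketch of that matchmaking step follows; the attribute names are illustrative stand-ins, not actual JDL or Glue schema identifiers:

```python
# Keep only the CEs satisfying the job's requirements, then pick the one the
# user's rank expression prefers (here: shortest estimated response time).
def matchmake(job, computing_elements):
    acceptable = [ce for ce in computing_elements
                  if ce["max_wallclock_s"] >= job["wallclock_s"]]
    if not acceptable:
        return None
    return min(acceptable, key=lambda ce: ce["est_response_s"])

ces = [
    {"name": "ce1.example.org", "max_wallclock_s": 3600, "est_response_s": 600},
    {"name": "ce2.example.org", "max_wallclock_s": 7200, "est_response_s": 90},
]
print(matchmake({"wallclock_s": 1200}, ces)["name"])  # -> ce2.example.org
```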
3.2. EGEE usage
The relevant quantities for measuring the responsiveness of the grid are
the running time t, the on-site queuing delay q, and the middleware
overhead s, which includes the various delays experienced by the job
in the WMS. The turnaround time m = s + q + t is the total time from
submission to notification that the job has completed. For the study
presented here, these quantities were derived from information in the
Figure 1. Cumulative distribution of execution times (t, in seconds, log scale).
Logging and Bookkeeping service (LB). This is a companion service to
the resource broker which maintains the state of all jobs managed by
the resource broker.
Because the detailed LB data were not available for all jobs, the
analysis below is limited to a particular broker (grid09.lal.in2p3.fr).
These data cover one year (October 2004 to October 2005) and include
more than 50000 successful production jobs from 66 distinct users.
Fig. 1 shows the distribution of execution time from this trace. The
striking feature is the importance of short jobs: the 80% quantile is
20 s. The second important point is the dispersion of t; the mean is
2 s, but the standard deviation is of the order of 10^4 s. The very large
fraction of extremely short jobs is partially due to the high usage of this
particular broker by the EGEE Biomed Virtual Organization. However,
for more than 50% of the overall EGEE jobs at the same period, the
execution time is less than 3 minutes.
Fig. 2 shows the distribution of the dimensionless overhead factor o_r = (m − t)/t, which is the overhead normalized by the execution
time. The leftmost histogram shows the distribution of the full sample:
only 26% of the 53000 jobs are in the first bin, meaning that 74%
of the jobs suffer an overhead factor larger than 25. A closer look at
the small overheads (rightmost histogram) shows that only 13% of the
jobs experience an overhead factor lower than 2. Clearly, the EGEE
infrastructure can make no claims for responsiveness using only the
base middleware services.
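For concreteness, a minimal sketch of these metrics applied to a hypothetical short job, using the median values reported at the end of this subsection and the 80% quantile execution time:

```python
def overhead_factor(s: float, q: float, t: float) -> float:
    # Turnaround time m = s + q + t; the factor is o_r = (m - t) / t.
    m = s + q + t
    return (m - t) / t

# Median middleware overhead s = 221 s, median queuing delay q = 91 s,
# with the 80% quantile execution time t = 20 s:
print(overhead_factor(s=221, q=91, t=20))  # -> 15.6
```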
The next question is the respective impacts of the middleware and
the queuing time on the global overhead. Fig. 3 plots the distribution
Figure 2. Distribution of the overhead factor (m − t)/t. The left histogram is the distribution of the full sample, the right histogram is the distribution of the small overheads.
Figure 3. Impact of the queuing time and the middleware on the overhead (distribution of q/s).
of q/s, and shows that the queuing time is a significant component
of the overhead. This behaviour was exhibited at an early stage of
EGEE usage, where the pressure on the resource was only starting
to increase. Finally, the median queuing time is 91 seconds, and the
median middleware overhead is 221 seconds.
4. User-level scheduling
Submitting, scheduling and mapping of jobs on a grid take at least
one order of magnitude more time than the execution time for SDJ
even in the absence of competition for resources. For instance, with the
most recent and tuned EGEE middleware, gLite 3.0, the middleware
latency remains on the order of minutes. User-level scheduling is the
most promising way to address the difference of scale between short
execution times and the large grid middleware latencies.
User-level (or application-level) scheduling is a virtualization layer
on the application side. Instead of being executed directly, the application is executed via an overlay scheduling layer (user-level scheduler).
The overlay scheduling layer runs as a set of regular user jobs and therefore it operates entirely inside user space. Because user-level scheduling
does not require any modification to the Grid middleware and infrastructure, nor the deployment of special services in the Grid sites, it provides immediate exploitation of the full range of Grid sites available to a given user.
The user-level scheduling approach has the following constraints:
− user jobs must be instrumented with the scheduling functionality,
and
− jobs run under user-level scheduling must compete on the same
basis with all other jobs on the grid, and their resource usage must be fully reported to the corresponding user.
A user-level scheduler may be embedded into the application or external to it. A scheduler embedded into the application is developed and
optimized specifically for a given application, typically by re-factoring
and instrumenting the original application code. It allows fine tuning
and customizing the scheduling according to the specific execution patterns of the application. Such a scheduler is intrusive at the application
source code level which means that the code reuse of the scheduler is
reduced and the development effort is high for each application. A
scheduler external to the application relies on the general properties
of the application such as a particular parallel decomposition pattern
(e.g. iterative decomposition, geometric decomposition or divide-and-conquer). An application adapter connects the external scheduler to
the application at runtime. Depending on the decomposition pattern,
the application re-factoring at the source code level may or may not
be required. The disadvantage of external schedulers is that it may be
very hard to generalize execution patterns for irregular or speculative
parallelism. In this case, which occurs in various situations ranging from
medical image processing to portfolio optimization [50], the development of a specialized embedded scheduler may be necessary.
In the next sections we examine two user-level schedulers: an external scheduler for generic master-worker applications (DIANE) and an
embedded scheduler for medical image processing (gPTM3D).
4.1. DIANE: a generic, external scheduler
4.1.1. Overview
DIANE (DIstributed ANalysis Environment) is an R&D project developed in the Information Technology Department at CERN in Geneva, Switzerland [36]. It is a generic user-level scheduler based on the extended task farm (master/slave) processing model. The runtime behaviour
of the framework, such as failure recovery or task dispatching, may be
customized with a set of hot-pluggable policy functions. This enables
fine-tuning of the scheduler according to the needs of a particular application and provides support for other parallel decomposition patterns
(e.g. divide-and-conquer).
4.1.2. Applications
DIANE provides a python-based framework and enables rapid integration with existing applications. Both transparent and intrusive application integrations have been demonstrated. Data analysis in the Athena framework for the ATLAS experiment [1] is an example of transparent application integration; the application adapters, in the form of python packages, have been developed without modifying the original application code. Examples of intrusive integration include particle simulation in medical physics using the Geant 4 toolkit [22]. The parallelization of these applications has been based on iterative decomposition and a master/worker processing model with fully independent
tasks. Other applications using DIANE include, among others, the
Geant 4 statistical regression testing application [34], Autodock [10]
tools for bioinformatics and telecommunication applications [32].
4.1.3. Execution model
In the DIANE execution model, a temporary virtual master/worker
overlay network is created for each user job and is destroyed when
the job terminates. The job is split into a number of tasks which are
executed by a number of lightweight worker agents in the Grid. The
worker agents run as regular Grid jobs submitted with the credentials and identity of a single user. Therefore full user-based accounting from the system administration point of view is possible. The agents
are time-limited and the computing resources are released when the
processing terminates (all tasks processed) or if they exceed the time
limit on the batch queue, whichever occurs first. The number of resources acquired by a user is limited by standard mechanisms, i.e. the fair-share
policies in the Grid and in the local sites.
Each task is defined by a set of application-specific parameters. The
dispatching of tasks is the process of allocating the tasks to workers by
sending appropriate parameters to the worker agents. The communication overhead is typically much smaller than in the systems based on
checkpointing and task migration and it allows scheduling with a high
rate of incoming and outgoing tasks. For example, the DIANE Master routinely achieves peaks of 110-120 Hz without observable degradation in performance. This means that the scheduling overhead is negligible for up to N × 120 worker agents if the average task duration is N seconds.
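A back-of-envelope restatement of that claim, assuming a steady state in which each of W workers completes one task every N seconds on average:

```python
# The master must handle about W / N task events per second; staying under
# its measured ~120 Hz peak gives W <= 120 * N workers before the master
# becomes the bottleneck.
def max_workers(master_rate_hz: float, avg_task_duration_s: float) -> float:
    return master_rate_hz * avg_task_duration_s

print(max_workers(120, 1))    # ~120 workers for 1-second tasks
print(max_workers(120, 400))  # ~48000 workers for 400-second tasks
```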
The default scheduling algorithm used in DIANE is based on a dynamic pull approach, also known as self-load-balancing. DIANE makes it easy to plug in alternative algorithms; however, the results described in this paper use the default one.
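A minimal sketch of the pull model, with Python threads standing in for worker agents running as grid jobs; all names are illustrative. Because idle workers pull tasks themselves, faster workers naturally process more tasks:

```python
import queue
import random
import threading
import time

tasks = queue.Queue()
for i in range(20):
    tasks.put(f"task-{i}")

def worker_agent(name: str) -> None:
    while True:
        try:
            task = tasks.get_nowait()  # an idle worker pulls the next task
        except queue.Empty:
            return                     # no tasks left: release the resource
        time.sleep(random.uniform(0.01, 0.05))  # variable task duration
        print(f"{name} completed {task}")

workers = [threading.Thread(target=worker_agent, args=(f"worker-{i}",))
           for i in range(4)]
for w in workers:
    w.start()
for w in workers:
    w.join()
```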
DIANE allows the standard GSI-based authentication and authorization of the worker agents. The Grid proxy certificate is shipped via
standard Grid submission mechanisms to the worker node, while the
master retains the original. The secure mode prevents the accidental
mixing of user credentials in a single overlay. The results described in
this paper refer to the default, non-authenticated mode.
The following sections present three examples of improved QoS characteristics with DIANE User Level Scheduling: the job turnaround
time, job completion rate, and error recovery.
4.1.4. Job turnaround time with high-granularity splitting
DIANE supports high-granularity job splitting, i.e. partitioning a job
into a large number of short or very short tasks. For example, the
radio-frequency compatibility analysis jobs for the ITU RRC06 conference [32] were split into approximately 40 000 tasks performed simultaneously by around 200 worker agents at 6 EGEE Grid sites across
Europe.
Task duration was highly variable (Fig. 4), lasting from a few seconds (the majority of the tasks) to 20 minutes (a few individual tasks). The exact
distribution of the task duration was not known until the job was fully
executed. Consequently, it was not possible to a priori aggregate short
tasks and isolate long tasks. The efficiency of user-level scheduling was
high with the number of tasks executing in parallel very close to the
size of the worker pool (Fig. 5). As shown in previous sections (Fig. 2)
the job turnaround time is orders of magnitude higher in a plain grid
environment.
4.1.5. Job completion rate
User-level scheduling provides a more sustained job completion rate.
Fig. 6 shows the job completion rate for a Geant 4 release statistical
regression testing application [34]. The job has been split in 207 tasks
and average task duration was around 400 seconds. In the Grid, the
Figure 4. High-granularity splitting with exponential distribution of the task execution time (task duration histogram, time in seconds, log-log scale). Most of the 40000 tasks execute in less than 10 seconds, with individual tasks executing in 1000 seconds.
Figure 5. Efficiency of resource utilization: comparison of the number of concurrently processed tasks (the number of busy workers) and the number of available workers (the worker pool size). The difference represents the scheduling overhead, including the network communication cost. Currently, the scheduler does not remove excessive workers from the pool, hence the number of idle workers increases at 4000 s due to a few long-lasting tasks.
Figure 6. Comparison of job completion rate between user-level scheduling based on DIANE (A; runs with 62, 35 and 21 workers) and plain Grid scheduling (B). Geant 4 regression testing jobs were run simultaneously in both scheduling modes. An equal number of available computing resources (85 worker nodes) within the EGEE Grid in each mode was guaranteed. The figure shows three selected jobs with typical behaviour. This figure has been taken from [34].
load on the Computing Elements (queuing time) and the load on the
Resource Broker (efficiency of matchmaking) may change dynamically
in short periods of time resulting in a job completion curve which is less
predictable (B1 and B3) or jobs being stuck in the Grid for a very long time and appearing as incomplete (B2). The user-level scheduler ensures
that, even if the number of effectively available resources is low and
varying, the job output throughput is stable if splitting granularity is
correctly chosen.
4.1.6. Error recovery
Efficient and accurate failure recovery is an important factor for Quality
of Service. Large distributed systems such as the grid are prone to
diverse configuration and system errors. A generic strategy of handling
errors does not exist and the specific strategies depend on the application as well as the environment. An application-oriented scheduler such
as DIANE is capable of distinguishing application and system errors
and reacting appropriately via customizable error recovery methods.
Crashing worker agents are automatically taken out of the worker
pool. Transient connectivity problems in the WAN are detected; the
failed tasks are automatically re-dispatched to other worker agents.
The mechanism uses direct, highly efficient communication links in
the virtual master/worker network and is much more efficient than
standard metascheduling techniques implemented in the middleware
(the JDL RetryCount parameter), which involve the full submission cycle.
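The re-dispatch logic can be sketched compactly. The following Python fragment is a minimal illustration of the pull model described above, not DIANE code: the `Master` class, its task queue, and the application/system error split are hypothetical simplifications.

```python
import queue

class Master:
    """Minimal sketch of a user-level master that re-dispatches failed tasks."""

    def __init__(self, tasks):
        self.todo = queue.Queue()
        for t in tasks:
            self.todo.put(t)
        self.workers = set()

    def register(self, worker_id):
        # A new worker agent joins the virtual master/worker network.
        self.workers.add(worker_id)

    def pull(self, worker_id):
        # Workers pull tasks over direct links (no middleware round trip).
        return None if self.todo.empty() else self.todo.get()

    def report_failure(self, worker_id, task, system_error):
        if system_error:
            # System error: drop the worker, keep the task and re-queue it
            # so another agent picks it up on its next pull.
            self.workers.discard(worker_id)
            self.todo.put(task)
        else:
            # Application error: blindly retrying the same task is useless,
            # so hand it to a custom, application-specific recovery method.
            print(f"task {task!r} failed with an application error")
```

The point of the sketch is the last method: a system failure costs one queue operation inside the virtual master/worker network, whereas a middleware-level retry (the JDL RetryCount path) would pay the full submission cycle again.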
Part of the recent Avian Flu Drug Search [29] was performed
using the DIANE scheduler. A master agent spanning several weeks
took care of efficient error recovery, so the system could be operated by a single person. Because of the long duration of the job, the
worker agents were often aborted because they exceeded the time limits
in the queues at the Computing Elements. The operator kept adding
new worker agents to the system so that at least 200 were available
at any time. DIANE was able to dynamically reconfigure the virtual
master/worker network to accommodate the new worker agents. The
overall efficiency of DIANE user-level scheduling was 84%, compared
to the 38.4% efficiency of pure grid scheduling.
4.2. gPTM3D
PTM3D [42] is a fully-featured DICOM image analyzer developed at
LIMSI. PTM3D transfers, archives and visualizes DICOM-encoded data.
Besides moving independently along the usual three axes, the user is
able to view the cross-section of the DICOM image along an arbitrary
plane and to move it. PTM3D provides computer-aided generation of
three-dimensional (3D) representations from CT, MRI, PET-scan, or
echography 3D data. A reconstructed volume (organ, tumour) is displayed inside the 3D view. The reconstruction also provides the volume
measurement required for therapeutic decisions. The system currently
runs on standard PC computers and it is used online in radiology
centres. Clinical motivation for grid-enabled volume reconstruction is
described in [21].
The first step in grid-enabling PTM3D (gPTM3D) is to speed up
compute-intensive tasks such as the volume reconstruction of the whole
body used in percutaneous nephrolithotomy planning [37]. The volume
reconstruction algorithm includes a semi-automatic segmentation component based on an active contours method, where the user initiates
the segmentation and can correct it at any time. It also includes a
tessellation component, which is the compute-intensive part of the algorithm. The gPTM3D application requires fine-grained parallelism.
Each parallel task is the reconstruction of one slice; in the examples
presented in Fig. 7, the execution time of the majority of the tasks is on
the order of a few hundred milliseconds, but with high variability.
Figure 7. gPTM3D performance
When the geometry of the volume becomes complex, the reconstruction
of the critical slices can last for 20 seconds or more.
The architecture has two components: scheduler/worker agents at
the user level and the Interaction Bridge (IB) as an external service.
The IB acts as a proxy between the PTM3D workstation, which is
not EGEE-enabled, and the EGEE world. When opening an interactive
session, the PTM3D workstation connects to the IB. In turn, the IB
launches a scheduler and a set of workers on an EGEE site, through
fully standard requests to an EGEE User Interface. A stream is established between the scheduler and the PTM3D front-end through the
IB. When the actual volume reconstruction is required, the scheduler
receives contours. The scheduler/worker agents follow a pull model, with
each worker computing one slice of the reconstructed volume at a time
and sending it back to the scheduler, which forwards the slices to the IB,
from where they finally reach the front-end.
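As a rough illustration of this pull model, here is a toy Python sketch. The names (`reconstruct`, `compute_slice`) are invented for the example, the workers run sequentially instead of on remote EGEE nodes, and a simple result queue stands in for the stream through the Interaction Bridge.

```python
from queue import Queue

def reconstruct(contours, workers):
    """Toy pull model: each contour is one task; an idle worker pulls a
    contour, computes the corresponding slice, and every finished slice
    is streamed back towards the front-end as soon as it is ready."""
    tasks, results = Queue(), Queue()
    for c in contours:
        tasks.put(c)

    def run_worker(compute_slice):
        while not tasks.empty():
            results.put(compute_slice(tasks.get()))  # one slice at a time

    for compute_slice in workers:  # remote, parallel agents in reality
        run_worker(compute_slice)
    return [results.get() for _ in range(results.qsize())]

# Example: "computing" a slice just tags the contour index.
print(reconstruct(range(5), [lambda c: ("slice", c)]))
```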
The overall response time is compatible with user requirements (less
than 2 minutes), while the sequential time on a 3 GHz PC with 2 GB
of memory can reach 20 minutes, and more than 30 minutes on less
powerful front-ends. So far, the only bottleneck is the rate at which
the front-end is able to generate contours. Fig. 7 presents the speedup
achieved on EGEE, with one scheduler and up to 14 workers in the
largest case. For small reconstructions, the grid is obviously not necessary; we have included them to prove that there is no penalty (in fact
a small advantage) in this case. Thus there is no need to switch from a
local mode to a grid one in an interactive session. For the largest reconstruction, the speedup is nearly optimal. Lowering the execution time to
this point has strictly no impact on the local interaction scheme, which
includes stopping, restarting and locally improving the segmentation.
5. Grid differentiated services
5.1. Virtual Reservations
As a shared resource, a grid supports a broad spectrum of workloads
ranging from long-running batch workloads executed under best-effort
policy to workflows [28, 20] or parallel applications for which specific
scheduling strategies have been proposed. Examples of these strategies include static [18] or dynamic [47] gang-scheduling using advance
reservation [38] and middleware mechanisms favouring simultaneous
allocation such as the EGEE DAG job type. Grid advance reservations
suffer from two drawbacks: first, planning is not consistent with the goal
of seamless integration with everyday computing practice, for instance
the use cases described in Section 2; second, reservation is inherently
not work-conserving, meaning that processors might idle while eligible
jobs are queued [46].
Providing differentiated QoS either at the processor or network level
usually relies on some implementation of Generalized Processor Sharing
(GPS). However, the fundamental concept required for schedulability
analysis and schedule construction in these frameworks is that the allocation of resources may be broken along quanta of time. The problem
for grid scheduling is that such quanta do not exist. Jobs are not
partitionable. Except for checkpointable jobs, a job that has started
running cannot be suspended and restarted later. Moreover, as shown
before, the execution times exhibit an extremely high variance.
We have defined and implemented the concept of a Virtual Reservation (VRes), which addresses both issues of advance reservation and
scheduling quanta by allowing controlled time-sharing. VRes permits
the definition of time quanta and their exposure at the grid level.
At the site level, each of the p physical processors is virtualized into
k virtual processors, providing pk slots to the site scheduler. When
a virtual slot is unused, the computing bandwidth is transparently
returned to the other classes sharing the same physical processor. Thus,
a fraction of these slots can then be permanently reserved for some class
of applications without jeopardizing utilization.
The mapping of classes first to the virtual processors, then onto the
physical ones is obviously the key for full processor utilization. This
mapping must be controlled so that each class maps to the full range
of physical processors, as shown in Fig. 8. Provided that the mapping
is controlled, the reservation ensures both application isolation with
respect to computational bandwidth and full processor utilization.
[Figure 8 (diagram): 12 virtual processors, arranged in three rows, mapped onto 4 physical processors; the rows assign classes 3/3/3/3, 2/2/2/2 and 1/1/1/3 to the virtual slots.]
Figure 8. Example of VRes: classes 1, 2 and 3 are allocated respectively 1/4, 1/3 and 5/12 of the computational bandwidth.
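The slot layout of Fig. 8 can be reproduced in a few lines of Python. This is an illustrative sketch of the mapping idea only, not the MAUI/gLite implementation; `vres_mapping` and its round-robin placement are assumptions made for the example.

```python
from fractions import Fraction

def vres_mapping(num_phys, k, shares):
    """Deal class slots round-robin across physical processors.

    Each of num_phys physical processors is split into k virtual slots;
    shares maps a class to its fraction of the total bandwidth."""
    total = num_phys * k
    counts = {c: int(Fraction(s) * total) for c, s in shares.items()}
    assert sum(counts.values()) == total, "shares must fill all slots"
    slots = [c for c, n in counts.items() for _ in range(n)]
    # Virtual slot i lands on physical processor i % num_phys, so each
    # class is spread over the physical processors rather than packed.
    layout = [[] for _ in range(num_phys)]
    for i, cls in enumerate(slots):
        layout[i % num_phys].append(cls)
    return layout

# Fig. 8: classes 1-3 get 1/4, 1/3 and 5/12 of the 12 virtual slots.
print(vres_mapping(4, 3, {1: "1/4", 2: "1/3", 3: "5/12"}))
# [[1, 2, 3], [1, 2, 3], [1, 2, 3], [2, 3, 3]]
```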
5.2. EGEE Implementation
An implementation of VRes has been developed for the MAUI scheduler
and the gLite middleware. It can be downloaded from the EGEE SDJ
Working Group site http://egee-na4.ct.infn.it/wiki/index.php/ShortJobs.
The Job Description Language (JDL) has been modified to include a
Boolean attribute SDJ. Sites willing to accept SDJ jobs set up a CE
which permits running one job per SDJ slot. Jobs submitted to this
CE are either immediately scheduled or rejected. The broker is notified in case of rejection and can either reschedule the job on another
resource or notify the user. These sites also configure their scheduler
with parameters controlling the computational bandwidth dedicated
to SDJ. In particular, the wall-clock time and CPU time of SDJ jobs
are limited. While these parameters are lower for SDJ jobs than for
the usual batch jobs, all EGEE jobs are subject to the same kind of
limitations, and all are aborted if they exceed these.
This work has exposed a problem with scheduling in the EGEE
middleware. The system does not permit a CE to provide access control based on job type, which is required for application isolation in
general and for QoS in our case. As a temporary solution, a name-based dispatch has been set up in gLite 3.2. The SDJ-dedicated CEs
[Figure 9 (plot): number of concurrent jobs (0-5) vs. time in seconds; series for SDJ and dteam jobs on a background of batch jobs.]
Figure 9. Number of concurrent jobs on a single dual-processor node as a function
of time.
are named such that they have a trailing "sdj". The submission system
introduces an appropriate regular expression in the job requirements so that
the WMS selects SDJ CEs for short-deadline jobs and prevents
batch jobs from being scheduled on SDJ CEs. It is worth mentioning
that this method can be adopted for early experiments with other classes,
because it requires only minor modifications of the gLite code. A more
elegant and general solution is being investigated. However, the Glue
schema must be modified, and such modifications are a long process.
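A minimal sketch of this name-based dispatch, in Python rather than in JDL/WMS configuration, is shown below. The CE identifiers are made up for the example; only the trailing-"sdj" naming convention comes from the text.

```python
import re

# Hypothetical CE identifiers in the usual "host:port/queue" shape.
ces = [
    "grid10.example.org:2119/jobmanager-pbs-sdj",
    "grid10.example.org:2119/jobmanager-pbs-long",
]

SDJ_RE = re.compile(r"sdj$")  # the temporary naming convention

def eligible(ce, short_deadline):
    """Short-deadline jobs must land on *sdj CEs; batch jobs must not."""
    is_sdj_ce = bool(SDJ_RE.search(ce))
    return is_sdj_ce if short_deadline else not is_sdj_ce

print([ce for ce in ces if eligible(ce, short_deadline=True)])
```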
Tests have been conducted at LAL to ensure the correct behaviour of the
SDJ configuration. Fig. 9 shows a breakdown of the
occupation of one dual-processor node. On a background of batch jobs,
which never exceed 2 (one per processor), SDJ jobs can run within the
same limit, and also concurrently with a third class (dteam) required
by EGEE operational monitoring. Hence there are five slots per dual-processor node. Fig. 10 exemplifies control of the global computational
bandwidth dedicated to SDJ at the site level. In this configuration, a
maximum of ten concurrent SDJ jobs was permitted.
The virtual reservation mechanism and the SDJ CE have been
in production at LAL since May 2006. The LAL site is equipped with
a mixture of IBM and HP 1U rack-mounted dual-processor (AMD
Opteron, 2.2 GHz) machines with 1 GB of RAM per CPU (2 GB total)
and 80 GB of disk. The SDJ slots are routinely used in production
by several biomedical applications and also for EGEE demonstrations
(one cannot wait in queues when the audience is waiting for a live
demonstration), and run concurrently with the usual batch jobs. The
site utilization is extremely high, approaching a steady 100%. This
experimental result provides an empirical answer to the often-raised
[Figure 10 (plot): number of concurrent jobs (0-40) vs. time in seconds.]
Figure 10. Number of concurrent jobs on the site as a function of time.
issue of the negative impact of concurrency (from cache to IO) on
real-world workloads running on high-end processors.
6. Related work
Existing approaches to grid scheduling for QoS follow three distinct
paths: Virtual Machines (VM) encapsulation, statistical prediction,
and service level agreements. Virtual machines provide a powerful new
layer of abstraction in centralized computing environments in order to
ensure fault isolation. Distributed scheduling based on VM encapsulation has been explored as a general tool in the PlanetLab project
[7]. The Virtuoso project has more specifically explored virtualization
for differentiated services [30, 31], and the Virtual Workspaces project
[27] investigates the large-scale deployment of VM inside the Globus
middleware. Virtual machines provide complete freedom of scheduling
and even migrating an entire OS and associated computations which
considerably eases time-sharing between deadline-bound short jobs and
long running batch jobs. On the other hand, the virtual machines
strategy is extremely invasive. All of, or a significant fraction of, the
computations must be run inside virtual machines to provide scheduling
opportunities—something for which traditional batch users have little
incentive. Another issue is that VM interactivity follows the remote
desktop model. In this model, which has often been adopted for
grid-enabling computational steering [44, 24, 26, 40], the user front-end
is a passive terminal. With Grid Differentiated Services and user-level
scheduling, we provide a much more modular environment that can
support any combination of local and remote computations.
Accurate statistical prediction of workloads is possible in a large
range of situations, including shared clusters [16] and batch-scheduled
parallel machines [13]. In particular, [48] shows that statistical prediction allows efficient support of interactive computations in unreserved
cluster environments. At the grid scale, in the current status where
time-sharing is possible only through control mechanisms such as VRes,
predictive methods would apply for instance to the availability of SDJ
slots provided by VRes.
Service level agreements (SLAs) are the standard to represent the
agreed constraints between service consumers and service providers on
a grid [2]. SLAs by themselves do not provide scheduling solutions,
but allow expressing flexible requirements and incorporating multi-criteria approaches. SLAs could be applied to differentiated services
in our context, for instance by proposing a choice between a quick and
reliable turnaround time, with strong completion constraints, and a
more unreliable turnaround time without constraints. SLAs also offer
the perspective of a general framework for the renegotiation of resources
[33] by running jobs. In our context this could be used to switch from
the first mode to the second one, for instance when an SDJ approaches
the end of its allocated time and must be prolonged.
User-level scheduling has been proposed in many other contexts,
and a case for it has been made in the AppLeS [14, 8] project. In
a production grid framework, the DIRAC [51] project has proposed
a permanent grid overlay where scheduling agents pull work from a
central dispatching component. Our work differs from DIRAC on a
major point: both for DIANE and gPTM3D, the execution agents
are regular gLite jobs, and are thus subject to all grid policies and
accounting. The abuse of glideIn techniques, which would permanently
launch execution agents, would be counter-productive. The local EGEE
schedulers (typically MAUI or PBS) do enforce fair share across VOs
and users. Thus, running a useless execution agent will prevent one from being
run on the same site at the next scheduler decision. Obviously, if the site
allows infinite execution, there will never be a scheduler decision, but
the resource usage of this agent would be charged to the appropriate
user or VO.
7. Conclusion
We have presented complementary strategies to address the QoS requirements of a responsive grid: Grid Differentiated Services and user-level schedulers. Grid Differentiated Services provide a general framework for the isolation of classes of applications and the realization
at the grid level of the concepts required for hard or soft real-time
scheduling. User-level schedulers cope with high latencies associated
with grid middleware. Equally important is a clean separation between two optimization problems: at the grid level, the optimization
is related to fair-share and load balancing, while at the user-level, the
optimization is for a specific application workload. Depending on the
application requirements, Grid Differentiated Services and user-level
schedulers can be used separately or combined. In the example of
gPTM3D, combining Grid Differentiated Services and an embedded
user-level scheduler provides a fully transparent coupling of the grid
resources with an augmented-reality desktop software. The scope of
applications deployed on top of the DIANE generic scheduler exemplifies
the impact of user-level scheduling on a number of QoS characteristics.
Both strategies have been deployed on the EGEE grid, as autonomous
site decisions (for the Grid Differentiated Services) or as regular user
jobs (for the user-level schedulers). They are fully compatible with
gLite, the existing EGEE middleware. Their architecture and to a
large extent their implementation depend only on generic grid concepts.
We are convinced that this non-intrusiveness is a key to a progressive
convergence of QoS and grid technology.
Acknowledgements
This work was partially funded by the EGEE EU project (INFSO-RI-508833 Grant). gPTM3D is part of the AGIR project funded by the
ACI Masses de Données program of the French Ministry of Research.
References
1. Atlas Computing - Technical Design Report CERN-LHCC-2005-022.
http://doc.cern.ch//archive/electronic/cern/preprints/lhcc/public/lhcc-2005-022.pdf
2. R. AlAli, K. Amin, G. von Laszewski, O. Rana, D. Walker, M. Hategan and
N. Zaluzec. Analysis and Provision of QoS for Distributed Grid Applications.
Journal of Grid Computing, 2(2):163-182. June 2004
3. P. Andreetto et al. Practical approaches to Grid workload and resource
management in the EGEE grid. In Procs. CHEP’04
4. S. Andreozzi et al. GLUE Schema Specification version 1.2.
http://infnforge.cnaf.infn.it/glueinfomodel/
5. S. Baruah, N. Cohen, C. Plaxton and D. Varvel. Proportionate Progress: A
Notion of Fairness in Resource Allocation. Algorithmica 15(6), pp. 600-625,
1996.
6. S. Basu, V. Talwar, B. Agarwalla and R. Kuma. Interactive Grid Architecture
for Application Service Providers. HP Technical Report HPL-2003-84R1. 2003.
7. A. Bavier, M. Bowman, B. Chun, D. Culler, S. Karlin, S. Muir, L. Peterson, T. Roscoe, T. Spalink, and M. Wawrzoniak. Operating System Support
for Planetary-Scale Network Services. In Procs. of the First Symposium on
Networked Systems Design and Implementation (NSDI), pp 253-266. 2004.
8. F. Berman, R. Wolski and H. Casanova. Adaptive Computing on the Grid
Using AppLeS. IEEE Trans. Parallel Distrib. Syst., 14(4): 369–382. 2003.
9. G. Shao and R. Wolski and F. Berman: Master/Slave Computing on the Grid;
in Proceedings of the 9th Heterogeneous Computing Workshop, Cancun, Mexico, May 2000, pp. 3-16. http://citeseer.ist.psu.edu/wolski00masterslave.html
10. Morris, G. M. et al. (1998), Automated Docking Using a Lamarckian Genetic
Algorithm and an Empirical Binding Free Energy Function. J. Computational
Chemistry, 19: 1639-1662
11. D. Berry, C. Germain-Renaud, D. Hill, S. Pieper and J. Saltz. Report on the
Workshop IMAGE’03: Images, medical analysis and grid environments. TR
UKeS-2004-02, UK National e-Science Centre, Feb. 2004.
12. Christophe Blanchet et al. GPSA: Bioinformatics grid portal for protein sequence analysis on EGEE grid. In Procs of Healthgrid 2006, Studies in Health
Technology and Informatics 120, IOS Press. May 2006
13. J. Brevik, D. Nurmi and R. Wolski. Predicting bounds on queuing delay for
batch-scheduled parallel machines. In Procs. of the eleventh ACM SIGPLAN
symposium on Principles and practice of parallel programming (PPoPP ’06),
pp 110–118. 2006.
14. H. Casanova, G. Obertelli, F. Berman and R. Wolski. The AppLeS parameter
sweep template: user-level middleware for the grid. In Procs 2000 ACM/IEEE
conference on Supercomputing SC’00. 2000.
15. A. Chandra, M. Adler, and P. Shenoy. Deadline fair scheduling: Bridging the
theory and practice of proportionate-fair scheduling in multiprocessor servers.
In Procs. of the 7th IEEE Real-Time Technology and Applications Symposium.
June 2001.
16. P. Dinda. Design, Implementation, and Performance of an Extensible Toolkit
for Resource Prediction In Distributed Systems. IEEE Transactions on Parallel
and Distributed Systems, 17(2). 2006.
17. D.H.J. Epema, M. Livny, R. van Dantzig, X. Evers and J. Pruyne. A worldwide
flock of Condors: Load sharing among workstation clusters. Future Generation
Computer Systems, 12:53–65. 1996.
18. D. G. Feitelson and M. Jette. Improved Utilization and Responsiveness with
Gang Scheduling. In Procs of the IPPS ’97 Workshop on Job Scheduling
Strategies for Parallel Processing, 1997
19. I. Foster, M. Fidler, A. Roy, V, Sander and L. Winkler. End-to-End Quality
of Service for High-end Applications. Computer Communications, 27(14):1375-1388. 2004.
20. T. Glatard, J. Montagnat and X. Pennec. Efficient services composition for
grid-enabled data-intensive applications. In Procs of the IEEE Int. Symp. on
High Performance Distributed Computing (HPDC’06), June 2006.
21. C. Germain-Renaud, R. Texier and A. Osorio. Interactive Reconstruction and
Measurement on the Grid. Methods of Information in Medecine, 44(2):227- 232.
2005.
22. S. Guatelli et al.: Geant4 Simulation in A Distributed Computing Environment;
submitted to IEEE Trans. Nucl. Sci. 2006
23. S. Hastings, M. Kurc, S. Langella, U. V. Catalyurek, T. C Pan and J. H. Saltz,
Image Processing for the Grid: A Toolkit for Building Grid-enabled Image
Processing Applications. In Proc. 3rd Int. Symp. on Cluster Computing and
the Grid, pp 36–43. 2003
24. P. Heinzlreiter and D. Kranzlmüller. Visualization Services on the Grid: The
Grid Visualization Kernel. Parallel Processing Letters 13(2):135–148. 2003.
25. E. Heymann, M. A. Senar, E. Luque and M. Livny. Adaptive Scheduling for Master-Worker Applications on the Computational Grid. In Procs 1st
IEEE/ACM Intl Workshop on Grid Computing (GRID 2000). 2000
26. R. Kumar, V. Talwar and S. Basu. A resource management framework for
interactive Grids. Concurrency and Computation: Practice and Experience,
16(5):489–50. April 2004.
27. K. Keahey, I. Foster, T. Freeman, and X. Zhang. Virtual Workspaces: Achieving
Quality of Service and Quality of Life in the Grid. To appear in The Scientific
Programming Journal. 2006.
28. G. von Laszewski, M. Hategan and D. Kodeboyina.
Work Coordination for Grid Computing. To appear 2006.
http://www.mcs.anl.gov/~gregor/papers/vonLaszewski-workcoordination.pdf
29. H.C. Lee et al.: Grid-enabled High Throughput in-silico Screening Against
Influenza A Neuraminidase; NETTAB 2006, Santa Margherita di Pula, July 10-13, 2006; submitted to a special issue of IEEE Transactions on Nanobioscience
30. B. Lin, P. A. Dinda. VSched: Mixing Batch And Interactive Virtual Machines
Using Periodic Real-time Scheduling. In Procs. ACM/IEEE conference on
Supercomputing 2005 (SC’05). 2005
31. B. Lin, P. A. Dinda and D. Lu. User-driven Scheduling of Interactive Virtual
Machines. In Procs. of the Fifth IEEE/ACM Int.Workshop on Grid Computing
(GRID’04), pp 380–387. 2004
32. A. Manara et al.: Integration of new communities in the Grid for mission-critical applications: distributed radio-frequency compatibility analysis for the ITU
RRC06 conference; submitted to EGEE'06 Conference, 25-29 September 2006,
Geneva, Switzerland
33. J. MacLaren, R. Sakellariou, K. T. Krishnakumar, J. Garibaldi, and D. Ouelhadj. Towards Service Level Agreement Based Scheduling on the Grid. In Procs
14th Int. Conf. on Automated Planning and Scheduling (ICAPS 04). 2004
34. P. Mendez-Lorenzo et al.: Distributed Release Validation of the Geant4
Toolkit in the LCG/EGEE Environment; submitted to IEEE Nuclear Science
Symposium 2006
35. I. Mirman. Going Parallel the New Way. Desktop Computing, 10(11), June
2006
36. J.T. Mościcki: Distributed analysis environment for HEP and interdisciplinary
applications; Nuclear Instruments and Methods in Physics Research A 502
(2003) 426-429
37. A. Osorio, O. Traxer, S. Merran, F. Dargent, X. Ripoche, J. Ati. Real time
fusion of 2D fluoroscopic and 3D segmented CT images integrated into an
Augmented Reality system for percutaneous nephrolithotomies (PCNL). In
InfoRAD 2004, RSNA’04. 2004.
38. F. Pacini and P. Kunzt. Job Description Language Attribute Specification.
EGEE Tech. Rep. 555796. 2005 http://glite.web.cern.ch/glite/documentation/
39. R. Raman, M. Livny, and M. Solomon. Matchmaking: Distributed resource
management for high throughput computing. In Procs. of the Seventh IEEE
International Symposium on High Performance Distributed Computing. July
1998.
40. H. Rosmanith and D. Kranzlmuller. glogin - A Multifunctional, Interactive
Tunnel into the Grid. In Procs 5th IEEE/ACM Int. Workshop on Grid
Computing (GRID’04), 2004.
41. L. G. Roberts. Beyond Moore’s law: Internet growth trends. IEEE Computer,
33(1),pp 117-119, Jan. 2000
42. V. Servois, A. Osorio et al. A new PC based software for prostatic 3D segmentation and measurement : application to permanent prostate brachytherapy
(PPB) evalution using CT and MR images fusion. In InfoRAD RSNA 2002.
88th Annual Meeting of the Radiological Society of NorthAmerica. Dec. 2002.
43. SGI. High-performance computers become interactive.
www.sgi.com/company_info/newsroom/3rd_party/111505_starp.pdf. Jan.
2006
44. J. Shalf and E. W. Bethel Cactus and Visapult: An Ultra-High Performance
Grid- Distributed Visualization Architecture Using Connectionless Protocols.
IEEE Computer Graphics and Applications, 23(2):51–59. 2003.
45. S. Smallen, H. Casanova and F. Berman. Applying scheduling and tuning
to on-line parallel tomography. In Procs 2001 ACM/IEEE conference on
Supercomputing (SC’01). 2001.
46. Q. Snell, M. J. Clement, D. B. Jackson and C. Gregory. The Performance
Impact of Advance Reservation Meta- scheduling. In Procs of the Workshop
on Job Scheduling Strategies for Parallel Processing at IPDPS ’00/JSSPP ’00,
pp 137–153. Springer-Verlag. 2000.
47. A. C. Sodan, S. Doshi, L. Barsanti and Darren Taylor. Gang Scheduling
and Adaptive Resource Allocation to Mitigate Advance Reservation Impact.
In Procs Sixth IEEE Int. Symp. on Cluster Computing and the Grid, pp
649-653, IEEE Comp. Soc. Press, 2006.
48. A. Sundararaj, A. Gupta, and P. Dinda. Increasing Application Performance In
Virtual Environments Through Run-time Inference and Adaptation. In Procs of
the 14th IEEE Int. Symp. on High Performance Distributed Computing (HPDC
2005). 2005.
49. M. Thomas, J Burruss, L Cinquini, G Fox, D. Gannon, I. Glilbert, G. von
Laszewski, K. Jackson, D. Middleton, R. Moore, M. Pierce, B. Plale, A. Rajasekar, R. Regno, E. Roberts, D. Schissel, A. Seth, and W. Schroeder. Grid
Portal Architectures for Scientific Applications. Journal of Physics, 16, pp
596-600. 2005.
50. V. Tola, F. Lillo, M. Gallegati and R. N. Mantegna. Cluster analysis for
portfolio optimization. Preprint. 2005.
51. A. Tsaregorodtsev, V. Garonne, and I. Stokes-Rees. DIRAC: A Scalable
Lightweight Architecture for High Throughput Computing. In Procs 5th
IEEE/ACM Int. Workshop on Grid Computing (GRID’04), 2004.
|
{
"disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references', 'abstract'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1007/s10723-007-9086-4?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1007/s10723-007-9086-4, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "GREEN",
"url": "http://cds.cern.ch/record/1152703/files/EGEE-PUB-2008-002.pdf"
}
| 2008
|
[
"JournalArticle"
] | true
| 2008-03-01T00:00:00
|
[] | 12706
|
en
|
[
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/0112123cfb29e45be6d0a80751184ab7f20888df
|
[] | 0.825609
|
An Approach to Develop a Transactional Calculus for Semi-Structured Database System
|
0112123cfb29e45be6d0a80751184ab7f20888df
|
International Journal of Computer Network and Information Security
|
[
{
"authorId": "8419191",
"name": "R. Ganguly"
},
{
"authorId": "144341858",
"name": "A. Sarkar"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"Int J Comput Netw Inf Secur"
],
"alternate_urls": null,
"id": "75298332-e3d9-44ab-af5e-03b23d7bd4e6",
"issn": "2074-9090",
"name": "International Journal of Computer Network and Information Security",
"type": "journal",
"url": "http://www.mecs-press.org/ijcnis/"
}
|
— Traditional database systems force all data to adhere to an explicitly specified, rigid schema, and most of the limitations of traditional databases may be overcome by semi-structured databases. A traditional transaction system guarantees that either all modifications are done or none of them, i.e. the database must be atomic (all or nothing) in nature. In this paper a transaction is treated as a mapping from its environment to compensable programs, and a transaction refinement calculus is provided. The motivation for the Transactional Calculus for Semi-Structured Database System (TCSS) is that, on a highly distributed network, it is desirable to provide some amount of fault tolerance. The paper proposes a mathematical framework for transactions in which a transaction is treated as a mapping from its environment to compensable programs, together with a transaction refinement calculus. It proposes to show that most semi-structured transactions can be converted to a calculus-based model which simply consists of a forward activity and a compensation module, grounded in the CAP (consistency, availability, and partition tolerance) [12] and BASE (basic availability, soft state and eventually consistent) [45] theorems.
|
Published Online September 2019 in MECS (http://www.mecs-press.org/)
DOI: 10.5815/ijcnis.2019.09.04
# An Approach to Develop a Transactional
Calculus for Semi-Structured
Database System
## Rita Ganguly
Department of Computer Applications; Dr.B.C.Roy Engineering College; Durgapur: 713206; West Bengal, India
E-mail: ganguly.rita@gmail.com
## Anirban Sarkar
Department of Computer Science; National Institute of Technology; Durgapur;713209; West Bengal, India
E-mail: sarkar.anirban@gmail.com
Received: 06 August 2019; Accepted: 25 August 2019; Published: 08 September 2019
**_Abstract—_** Traditional database systems force all data to adhere to an explicitly specified, rigid schema, and most of the limitations of traditional databases may be overcome by semi-structured databases. A traditional transaction system guarantees that either all modifications are done or none of them, i.e. the database must be atomic (all or nothing) in nature. In this paper a transaction is treated as a mapping from its environment to compensable programs, and a transaction refinement calculus is provided. The motivation for the Transactional Calculus for Semi-Structured Database System (TCSS) is that, on a highly distributed network, it is desirable to provide some amount of fault tolerance. The paper proposes a mathematical framework for transactions in which a transaction is treated as a mapping from its environment to compensable programs, and also provides a transaction refinement calculus. It proposes to show that most semi-structured transactions can be converted to a calculus-based model which simply consists of a forward activity and a compensation module, grounded in the CAP (consistency, availability, and partition tolerance) [12] and BASE (basic availability, soft state and eventually consistent) [45] theorems. It is important that the service still performs as expected if some nodes crash or communication links fail. Verification of several useful properties of the proposed TCSS is included in this article. Moreover, a detailed comparative analysis is provided towards evaluation of the proposed TCSS.
**_Index Terms—_** Semi-structured, transactional calculus, X-Query, GOOSSDM, CAP, BASE, GQL-SS.
I. INTRODUCTION
In recent years, researchers have produced several proposals [2, 3, 4, 5, 7, 8, and 9] towards the conceptual modelling of semi-structured database systems. To overcome traditional transactional problems, this work extends the transaction processing system in semi-structured databases by the addition of compensation and coordination based on the consistency, availability, and partition tolerance (CAP) [12] and basic availability, soft state and eventual consistency (BASE) [45] theorems, and enriches a standard design model with new healthiness conditions. There is no specific transactional calculus for semi-structured data. The proposed Transactional Calculus for Semi-structured databases (TCSS) puts forward a mathematical framework for transactions in which a transaction is treated as a mapping from its environment to compensable programs. Further, the transactional calculus is derived from an algebra-based query language, GQL-SS [11], and illustrated using real-life examples. The motivation for the Transactional Calculus for Semi-structured Systems is that, on a highly distributed network, it is desirable to provide some amount of fault tolerance: it is important that the service still performs as expected when some nodes crash or communication links fail. The ACID (Atomicity, Consistency, Isolation and Durability) properties of database transactions seem indispensable, and yet they are incompatible with availability and performance in very large systems. Semi-structured databases violate the ACID properties. Firstly, according to atomicity, the entire transaction fails if one element of the transaction fails, but this is not acceptable in a semi-structured database: if one node is damaged, the entire network should not be affected.
**Secondly,** under isolation no transaction has access to any other transaction that is in an intermediate or unfinished state; each transaction is independent unto itself. This is required for both the performance and the consistency of transactions within a database. A semi-structured database violates this property because it works on a path basis and every node is interlinked with the others. The benefits of a transactional calculus for semi-structured databases are manifold. It provides support towards (1) structural and functional design concerns, with enriched semantics and syntaxes for a transactional calculus of semi-structured databases represented by precise knowledge of domain-independent conceptualization; (2) a systematic methodology used to transform the calculus for functional design; and (3) guidelines for mapping the transactional calculus to a semi-structured database query system. The proposed transactional system for semi-structured data is based on path expressions. The path expressions may also contain label variables to preserve labels or tags. Three types of algorithms are used to evaluate paths in the Graph Object Oriented Semi-Structured Data Model (GOOSSDM) [2, 19, 20, and 21] schema and the Graphical Query Language for Semi-structured data (GQL-SS) [11] schema: one for searching the return node, a second for searching the path from the root of the GOOSSDM schema to the desired node, and a third for searching and listing the tail nodes.
Here the CAP theorem is used in the broader context of distributed computing theory. An important contribution of this paper is to discuss some of the practical implications of the CAP theorem for a transactional calculus for semi-structured databases. Some existing proposals use only the CAP [12] or BASE [25] theorem, or neither. To introduce the transactional calculus for semi-structured databases with the help of the CAP theorem, recall that the CAP theorem was introduced as a trade-off between consistency, availability and partition tolerance. **Consistency:** a read sees all previously completed writes, i.e. all nodes see the same data at the same time. **Availability:** a guarantee that every request receives a response about whether it succeeded or failed, i.e. reads and writes always succeed. This means that in a GOOSSDM schema there should be a searching path that returns some value; the path value should not be null. **Partition Tolerance:** guaranteed properties are maintained even when network failures prevent some machines from communicating with others; the system continues to operate despite arbitrary partitioning due to network failures.
However, despite the several advantages of existing semi-structured databases, developers face some challenges when they apply a transaction processing system. Such challenges are as follows:
_Ch.1:_ Lack of a transactional methodology that blends the semi-structured database specification with the syntaxes of a transactional calculus for semi-structured database systems.
_Ch.2:_ The majority of existing transactional procedures are not usable for large semi-structured database queries.
_Ch.3:_ The few transactional calculus approaches for semi-structured databases present in the literature may represent evolving knowledge of transactions in semi-structured databases, but not precisely.
_Ch.4:_ Appropriate guidelines and tools which may help designers with the specification are absent.
_Ch.5:_ XML-based semi-structured database systems are characterized by an expressive global schema. The main issue here concerns the presence of a significant set of integrity constraints expressed over the schema and the concept of node identity, which requires particular attention when data come from autonomous data sources.
This paper fills the gap of a systematic methodology in the transactional calculus of the GOOSSDM model [44]. The paper is structured as follows. Several related works in this field are specified briefly in Section 2. Section 3 is about the GOOSSDM modelling framework and is subdivided into two parts: the components of GOOSSDM and an illustration of GOOSSDM. The proposed Transactional Calculus for Semi-structured database systems (TCSS) is described and formalised in Section 4. Next, guidelines on how the validation of TCSS can be applied to databases by using the CAP and BASE theorems, together with application-specific conceptualisations, are suggested in Section 5. Further, the proposed TCSS is implemented and visualised using different operators, and Section 6 practically illustrates the proposed work using a suitable example. Following this, Section 7 practically illustrates the proposed work using suitable programming code. Finally, the paper is concluded in Section 8.
Aiming to overcome the issues explained in the above-mentioned challenges, this paper proposes several objectives. First, the proposed framework of the transactional system for semi-structured data is based on path expressions. These may also contain path variables, which evaluate to the empty path or to a path having a length of n edges, as well as label variables to preserve labels or tags. Second, the path operator is used to set the root node in the GOOSSDM [2, 19, 20, and 21] schema and to find the path from the root node to the desired node for any transaction. Third, the proposed work facilitates the early verification of the semi-structured data schema structure in correspondence with the desired transactional calculus. Finally, the transactional calculus is introduced to semi-structured databases with the help of the CAP and BASE theorems. These objectives address the issues described in Ch.2, Ch.3, Ch.4 and Ch.5. The Transactional Calculus for Semi-structured systems represents a framework for specifying the semantics of a transactional facility integrated within a semi-structured database system. In addition, this paper proposes a formal transactional calculus, called the Transactional Calculus for Semi-structured database (TCSS), in terms of concepts, relations and axioms for domain-independent systems, and provides syntaxes and semantics for TCSS. Further, the transactional calculus is derived from the algebra-based query language GQL-SS [11] and illustrated using real-life examples. Moreover, TCSS properties are proved using the CAP and BASE theorems to show the expressiveness of the proposed calculus.
II. RELATED WORK
In previous work [11], the focus was on path expressions in semi-structured database systems. More precisely, (i) it described how the GOOSSDM [2, 19, 20 and 21] schema and GQL-SS [11] data are amalgamated down to the leaves, so the path expressions may carry data variables as abstractions of the content of the leaves. They may also carry path variables, which evaluate to the void path or to a path having a length of n edges, and the path expressions may also contain label variables to preserve labels or tags. (ii) It developed three types of algorithms to evaluate paths in the GOOSSDM schema: one for searching the return node, a second for searching the path from the root of the GOOSSDM schema to the desired node, and a third for searching and listing the tail nodes. (iii) It defined the GQL-SS algebra for the GOOSSDM model, which operates on semi-structured schema concepts and/or the several constructs described in the model. The algebra consists of a set of operators, and a few of them can be used with constructs like ESG and CSG separately.
As a result, it was pointed out that a transactional calculus related to this GQL-SS model had to be developed. To the best of our knowledge, there are no other global solutions addressing a transactional calculus for semi-structured database systems. A small number of research works exist in the literature that deal with semi-structured data in general and use a query language. However, there is still no specific transactional calculus devoted enough to address the five challenges specified in the introduction section. The work on supporting multi-data-store applications in cloud environments [23] gives some idea about semi-structured queries but proposes no calculus. The amalgamation of transactions with programming control structures has provenance in systems such as Argus [28, 29]. There is a body of work that enquires into the formal specification of various flavours of transactions [35, 36, 37]. However, these efforts do not explore the semantics of transactions when integrated into a high-level programming language. Most closely related to our goal is the work of Black et al. [38] and of Choithia and Duggan [39]. The former presents a theory of transactions that specifies atomicity, isolation and durability properties in the form of an equivalence relation on processes. Beyond significant technical differences in the specification of the semantics, our results differ most significantly from theirs insofar as [6] presents a stratified semantics for a realistic kernel language intended to express different concurrency control models within the same framework. Choithia and Duggan present the pik-calculus and pike-calculus, extensions of the pi-calculus that support various abstractions for distributed transactions and optimistic concurrency. Their work is related to other efforts [40, 41] that encode transaction-style semantics into the pi-calculus and its variants. Haines et al. [31] describe a composable transaction facility in ML that supports persistence, undoability, locking and threads. Their abstractions are modular and first class, although their implementation does not rely on optimistic concurrency mechanisms to handle commits. Consequently, none of the existing approaches is appropriate enough to cover the five challenges specified in the introduction section. In this regard, devising a new proposal is essential to resolve the issues addressed in the five challenges.
In this case, since we are dealing with the combination of the CAP and BASE theorems, this proposal for expressing and executing queries and real-time applications is shown by using the calculus. We introduce an approach for a mapping language to map attributes of the data sources to the global schema, and a bridge query language to write the calculus.
III. GOOSSDM: THE BASICS
Extending the object-oriented paradigm to the semi-structured data model, the GOOSSDM was introduced. It specifies the irregular and heterogeneous structure, hierarchical and non-hierarchical relations, n-ary relationships, and the cardinality and participation constraints of instances, with all the details required for a semi-structured data model. The proposed data model (GOOSSDM) allows the entire semi-structured database to be viewed as a graph (V, E) in a layered organization. At the lowest layer, each vertex represents an occurrence of an attribute or a data item.
Consider an example of a Project Management System (PMS) [11]. A Project has attributes like members, department and publications. Several members are associated with a project, and each member can participate in any project. A department contains members, and each individual member may or may not have publications. The PMS is semi-structured in nature. The GOOSSDM schema for the PMS is shown in Fig. 1, and sample data are shown in Table 1.
Fig.1. GOOSSDM Schema for PMS
Table 1. Sample Data Set for PMS (MID, MName and Maddress belong to Member; DID and DeptName to Department; PuID and Ptopics to Publication)

Project 1
|Pname|PID|Topics|MID|MName|Maddress|DID|DeptName|PuID|Ptopics|
|---|---|---|---|---|---|---|---|---|---|
|ABC|P1001|AAAA|M01|Bipin|XX|D01|CSE|P001|RRR|
|XYZ|P1003|CCCC|M03|Ashu|PP|D02|CA|P003|SSS|
|DEF|P1004|DDDD|M04|Rashi|YY|D03|EE|P004|TTT|
|XYZ|P1005|QQQQ|M06|Sashi|RR|D03|EE|P005|VVV|
|ABC|P1001|BBBB|M07|Priya|CC|D01|CSE|P006|MMM|

Project 2
|Pname|PID|Topics|MID|MName|Maddress|DID|DeptName|PuID|Ptopics|
|---|---|---|---|---|---|---|---|---|---|
|PQR|P1006|YYYY|M07|Priya|CC|D02|CA|P007|NNN|
IV. CALCULUS FOR SEMI-STRUCTURED DATABASE
SYSTEM
In previous work, the GQL-SS algebra for the GOOSSDM model was defined; it operates on semi-structured schema concepts and/or the various constructs described in the model. Using the GOOSSDM schema, the semi-structured data is seen as a single-rooted or multi-rooted graph. In every case, while initiating any query, one needs to set an immediate root for the desired CSG and then find the tail nodes with respect to the desired CSG.
In all the algorithms, the searching node and the return node must be of type CSG in the GOOSSDM semantics. The GOOSSDM schema is used as input for the algorithms. The algorithms are invoked when the path operator (ρ) executes. In the proposed calculus, whenever any operator is invoked, it internally also invokes the path operator (ρ) to set the path from the root node to the desired node in the GOOSSDM schema, by invoking Algorithm 1 and Algorithm 2. Moreover, the tail node list is created next by invoking Algorithm 3. If Algorithm 1 and/or Algorithm 2 returns a null value, then the actual operator need not execute, as there is no root available for the transactional calculus. This facilitates the early verification of the semi-structured data schema structure in correspondence with the desired transactional calculus. The running example of the Project Management System (PMS) is used to illustrate the functionalities of the operators. As specified earlier, the path operator (ρ) is an inclusive part of the algebra and is invoked every time any other operator specifically defined for the management of semi-structured data is invoked. In the example, if Project is set as the root, then the path from Project to Department can be established and expressed as Project (Root)→Member→Department [11].
_Algorithm 1: Searching of Node in GOOSSDM Schema_
Step 1: Start
Step 2: Input a node C = (CSG).
Step 3: Let op := search node C and return node C.
Step 4: Let P1 := layer 0; P2 := immediate to layer 0; P3 := next to the immediate layer; P4 := next to the next immediate layer.
Step 5: For i = 1 to 4: if (op Pi(C) ≠ ∅) then go to the next layer, else (op Pi(C)) = (Root).
Step 6: Stop
Fig.2. Searching tail node
Fig.3. Searching path from root to desired node
_Algorithm 2: Searching Path from Root to Desired Node_
Step 1: Start
Step 2: Input C = CSG // the CSG for which the path is searched
Step 3: If (IsRoot(C) = false) then N1 := op Pi(CSG, P, ϴ)
Step 4: If (N1 == ∅) then Path := N1, else go to Step 3
Step 5: Exit
The Root is Project; the algorithm then searches for the desired node layer by layer. Let N1 be the result of the path operator ρ with the layer number and CSG as arguments; the N1 value should not be the Root. If the N1 value is null, then the path value will be N1; if not, the search is checked again from the Root node.
_Algorithm 3: Searching Tail Nodes from the Desired Node_
Step 1: Start
Step 2: Input G = GOOSSDM schema
Step 3: Let the path structure σ = (r, (E)), where E is a binary relation of (CSG, P, ϴ)
Step 4: For i = 1 to n (n = number of iterations): if IsRoot(CSG) = true then op<get the i-th node>(CSG, P, ϴ) = {CSG_R, P_R, ϴ_R}
Step 5: For i = 1 to n: op<get the i-th node>(CSG, P, ϴ) = {CSG_i, P_i, ϴ_i}; if (op(CSG_{i-1}, P_{i-1}, ϴ_{i-1}) == op(CSG_i, P_i, ϴ_i)) then Tail = (CSG_i, P_i, ϴ_i) // finding the tail node; else go to Step 4
Step 6: The destination is denoted as the path ρ(CSG_i, P_i, ϴ_i) = ρ{(CSG_R, P_R, ϴ_R), (CSG_{R-1}, P_{R-1}, ϴ_{R-1}), (CSG_{R-2}, P_{R-2}, ϴ_{R-2}), ..., (CSG_i, P_i, ϴ_i)}; else go to Step 3
Step 7: Stop
Fig.4. Searching tail node from the desired node
The tail node is searched from the desired node layer by layer; when the return of the path operator is equal to the preceding one, the last node, i.e. the tail node, has been reached.
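To make the three algorithms concrete, the following Python sketch runs them over the PMS schema of Fig. 1. The adjacency-dict encoding, the function names and the breadth-first strategy are illustrative assumptions; only the node names and the Project (Root)→Member→Department path come from the running example.

```python
# Fig. 1 schema as a plain adjacency dict (illustrative encoding).
SCHEMA = {
    "Project": ["Member"],
    "Member": ["Department", "Publication"],
    "Department": [],
    "Publication": [],
}

def path_from_root(schema, root, target):
    """Layer-by-layer descent from the root, in the spirit of Algorithm 2."""
    frontier = [[root]]
    while frontier:
        path = frontier.pop(0)
        if path[-1] == target:
            return path
        frontier.extend(path + [nxt] for nxt in schema[path[-1]])
    return None  # null path: the calling operator must not execute

def tail_nodes(schema):
    """Algorithm 3 in miniature: a tail node has no outgoing edges."""
    return [n for n, edges in schema.items() if not edges]

print(path_from_root(SCHEMA, "Project", "Department"))
# ['Project', 'Member', 'Department'], i.e. Project(Root)->Member->Department
print(tail_nodes(SCHEMA))  # ['Department', 'Publication']
```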
_A. Proposed Operators_
In this section, the proposed operators of the Transactional Calculus for Semi-structured (TCSS) databases over the GOOSSDM model are defined. The calculus consists of a set of operators that take one or two CSGs as input and produce a new list of CSGs. The fundamental TCSS operators form a set, and a few of them can also be used with constructs like CSG and ESG separately.
- _Select (σ) Operator_
The select operator selects CSGs and returns the CSGs that satisfy a given predicate over a given list of ESGs or CSGs from the GOOSSDM schema. Thus, to select those CSGs from the GOOSSDM schema, the tuple relational calculus (TRC) notation may be written as

{C | C ∈ CSG} (1)

which denotes that tuple C is in CSG, and

{C | List(C)} (2)

which is the set of all tuples C such that the predicate List is true for C.

σ_List(CSG) = OUTPUT CSG, where List = {list of ESGs} (3)

i.e. the set of all CSGs for which List(C) evaluates to true. The corresponding path expression is built as follows: for all levels, an existential CSG sets the path, and if it does not have any edge then it is set to Root; the desired CSG is then searched level by level to obtain the ultimate CSG. (4)
- _Retrieve (π) Operator:_
The retrieve operation produces the CSGs from the GOOSSDM schema that satisfy a given condition. The retrieve operator extracts ESGs or CSGs from a CSG using some constraint CON over one or more ESGs or CSGs defined in the GOOSSDM schema.

{C | ∃C1 ∈ CSG (CON(C1))} (5)

[C1 belongs to some CSG with the satisfied condition], i.e. the set of all tuples C such that, for tuples C1 in the CSG, the predicate CON is true.

∀C1 (CON(C1) ∧ C1 ∈ CSG ⇒ CON(CSG)) (6)

[C1 belongs to the CSG with the specified condition, and the operator returns the restricted CSG.] This means that for all tuples C1, if the predicate CON is true for C1 and C1 exists in the CSG, then the predicate CON is true for the specified CSG.

Let Constraints = CON.

CON1 = {C1 | ∃C1 (C1 ∈ CSG ∧ CON(C1.f1))} (7)

[The dot operator extracts ESGs or CSGs from the CSG using the specified constraint CON over one or more ESGs or CSGs defined in the schema.] CON1 contains all tuples C1 for which there exists a C1 in the CSG with file name f1 such that CON(C1.f1) is true.

CON2 = {C2 | ∃C2 (C2 ∈ CSG ∧ CON(C2.f2))} (8)

CON2 contains all tuples C2 for which there exists a C2 in the CSG with file name f2 such that CON(C2.f2) is true.

{C | … } (9)
- _Union, Intersection (∪, ∩) Operators:_
These operators have their usual meaning. The union of any two sets A and B, denoted by A ∪ B, is the set of all elements which belong to A, or to B, or to both. Hence, A ∪ B = {x : x ∈ A or x ∈ B}, and

∀C1, C2: (C1 ∈ CSG ∨ C2 ∈ CSG ∨ CON(C1.C2)) ⇒ C1 ∪ C2 (10)

[C1, or C2, or the specified constraints over the dot product of C1 and C2, return the CSGs or ESGs which belong to C1, to C2, or to both.] For all C1 and C2, C1 exists in the CSG, or C2 exists in the CSG, or CON holds over both CSGs, which implies the union of C1 and C2.

The intersection, denoted by A ∩ B, is the set of elements which belong to both A and B, and can be expressed as A ∩ B = {x : x ∈ A and x ∈ B}.

∀C1, C2: (C1 ∈ CSG ∧ C2 ∈ CSG ∧ CON(C1.C2)) ⇒ C1 ∩ C2 (11)

[C1, or C2, or the specified constraints over the dot product of C1 and C2, return the CSGs or ESGs which belong to both C1 and C2.] For all C1 and C2, C1 exists in the CSG and C2 exists in the CSG and CON holds over both CSGs, which implies the intersection of C1 and C2.
- _Join (|X|) Operator:_
The join operator is a special case of the Cartesian product operator. It is a binary operator relating two CSGs which must have one identical ESG in common. Let the two CSGs be CSG1 and CSG2. Also let a set of ESGs E1 = (E11, E12, ..., E1r) and a set of ESGs E2 = (E21, E22, ..., E2s) be related to CSG1 and CSG2 respectively. The join between CSG1 and CSG2 is possible iff E1 ∧ E2 ≠ ∅. Now let E1 ∧ E2 = {Ea, Eb, Ec}; then

{C | C ∈ CSG1 ∧ ∃C2 ∈ CSG2 ∧ C.Ea = C2.Ea ∧ C.Eb = C2.Eb ∧ C.Ec = C2.Ec} (12)

[the specified CSG in C1 joined with an existential CSG in C2, where both satisfy a common ESG field], and

∀C1, C2: (C1 ∈ CSG1 ∧ C2 ∈ CSG2 ∧ the common ESG field is satisfied) ⇒ all common ESGs are returned (13)

[for all CSGs in C1 and CSGs in C2, if a common ESG field is satisfied then the join returns all the common ESGs.]
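A toy, relational reading of these operators over the rows of Table 1 is sketched below in Python. Representing CSG instances as dictionaries is an illustrative simplification of the graph model, not the paper's construction; union and intersection are omitted because they behave exactly like Python's built-in set operations.

```python
members = [
    {"MID": "M01", "MName": "Bipin", "DID": "D01"},
    {"MID": "M03", "MName": "Ashu", "DID": "D02"},
]
departments = [
    {"DID": "D01", "DeptName": "CSE"},
    {"DID": "D02", "DeptName": "CA"},
]

def select(csgs, pred):
    # sigma: keep the CSGs satisfying the given predicate
    return [c for c in csgs if pred(c)]

def retrieve(csgs, fields):
    # pi: extract only the named ESGs from each CSG
    return [{f: c[f] for f in fields} for c in csgs]

def join(left, right, key):
    # |X|: binary join over one common ESG field
    return [{**l, **r} for l in left for r in right if l[key] == r[key]]

print(select(members, lambda c: c["DID"] == "D01"))
print(retrieve(members, ["MName"]))
print(join(members, departments, "DID"))
```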
V. ILLUSTRATION OF TRANSACTIONAL CALCULUS OF
SEMI-STRUCTURED (TCSS) DATABASE BY CAP THEOREM
AND BASE THEOREM
In this section, the CAP theorem is stated in the proposed semi-structured calculus system as follows:
_In a network subject to communication failures, it is impossible for any web service to implement an atomic read/write shared memory that guarantees a response to every request._
**Proof Sketch:** Having stated the CAP theorem, it is relatively straightforward to prove it correct. Consider an execution in which the nodes (servers) are partitioned into two disjoint sets: {N1} and {N2, ..., Nn}. Some node (client) sends a read request to server node N2. Since N1 is in a different component of the partition from N2, every message from N1 to N2 is lost. Thus it is impossible for N2 to distinguish the following two executions:
**_i._** There has been a preceding write of path value p1 requested at node N1, and N1 has sent an ok response.
**_ii._** There has been a preceding write of path value p2 requested at node N1, and N1 has sent an ok response.
No matter how long N2 waits, it cannot distinguish these two cases, and as a consequence it cannot decide whether to return response p1 or p2. Server node N2 must eventually return a response, even if the system is partitioned; if the message delay from N1 to N2 is sufficiently large that N2 believes the system to be partitioned, then it may return an erroneous response, even in the absence of partitions.
The primary reason for presenting the CAP theorem is to make the point that in the majority of instances a distributed system can only guarantee two of the features, not all three. Ignoring this trade-off could have catastrophic results, including the possibility of all three elements failing simultaneously.
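The indistinguishability argument can be made concrete with a few lines of Python. This is a toy simulation of the proof sketch under its stated assumption that the partition drops every message from N1 to N2; all names are invented for the example.

```python
def observations_at_n2(write_value, partitioned=True):
    """N1 accepts a write of write_value, but the partition drops every
    message from N1 to N2, so N2's observed history stays empty."""
    n1_state = write_value          # preceding write of p1 or p2 at N1
    seen_by_n2 = []
    if not partitioned:             # only without a partition does the
        seen_by_n2.append(("update", n1_state))  # update ever arrive
    return seen_by_n2

# N2 observes exactly the same (empty) history in both executions, so
# whichever value it returns to a read may be the wrong one.
assert observations_at_n2("p1") == observations_at_n2("p2") == []
```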
**_Consistency:_** A read sees all previously completed writes, i.e. all nodes see the same data at the same time. E.g., as Fig. 1 shows, if Project is set as the Root then the path from Project to Department can be established and expressed as Project (Root)→Member→Department. Let the path be denoted as ρ. Then it can be expressed as ρ(R, C) = the path from Root to CSG, where the Root is denoted as R, C is a CSG, and E is a ternary relation of (CSG, P, ϴ).
[ ] (14)
[ ] (15)
[ ] (16)
For all i, ρ satisfies the layer; for all i and an existential C, if the operator ρ with layer and CSG satisfies the Root, then the Root implies the operator ρ with layer and CSG. If the operator ρ with layer and CSG satisfies the preceding layer and CSG, then it implies the tail node.
_Therefore, all nodes see the same data at the same time. In addition, this also satisfies the basic availability property of the BASE theorem, i.e. the system responds to any request._
**_Availability:_** Guarantee that every request receives a response about whether it succeeded or failed, i.e., reads and writes always succeed. This means that in the GOOSSDM schema there should be a searching path that returns some value; the path value should not be null. Defining a path here guarantees that every request receives a response about whether it succeeded or failed: when it succeeds, it is a succeeded path; otherwise, it is a failed path.
Succeeded path = N1
Failed path = ∅ or null
(∃C)(SucceededPath(C) Ʌ C ≠ Root) (17)
(∃N1)(value(N1) = null → ¬SucceededPath(N1)) (18)
(∀C)(SucceededPath(C) → value(C) ≠ null) (19)
(∃N1)(FailedPath(N1) ∨ SucceededPath(N1)) (20)
SucceededPath(N1) → (FailedPath(N1) ∨ (SucceededPath(N1) Ʌ value(N1) ≠ null)) (21)
For an existential C, let the succeeded path not be the Root. A succeeded path implies that, for some N1, if the value of N1 is null then it cannot serve as the path value (the path value should not be null); otherwise, for some N1, the operation returns a failed path, or a succeeded path, or a succeeded path with a not-null value.
_Therefore, every searching path must return some value._
_Again, this also satisfies the BASE theorem's Soft State: according to the user's requirement the desired path will change, and it must still return some value._
**_Partition Tolerance:_** Guaranteed properties are maintained even when network failures prevent some machines from communicating with others. The system continues to operate despite arbitrary partitioning due to network failures.
(∀i) Root → ρ(layer, C) (22)
(ρ(layer, C) ⊨ ρ(layer−1, C′)) → TailNode (23)
(path1 ∨ path2 ∨ ... ∨ pathk) → DesiredNode (24)
For all i, the Root implies the operator _ρ_ with layer and CSG. If the operator _ρ_ with layer and CSG satisfies the preceding layer and CSG, then it implies the tail node. The OR of all possible paths implies the desired node.
_Therefore, every node will eventually propagate its data to everywhere it should, but the path will continue to receive input and does not check the consistency of every transaction before it moves on to the next node._
_Read-Write Operation Algorithms_
Assume node R is the Root node and A is the desired node. The algorithms behave as follows.
_Algorithm 1: Read at node A_
_Step 1: A sends a request to R for the recent value._
_Step 2: If A receives a response from R, i.e., finds a path value, then save the value and send it to the client._
By applying the algorithm, R is the root node and, scanning from R to the desired node, A returns the path value with arguments of operator, layer number, and CSG; this is how the path value is found.
_Algorithm 2: Write at node A_
_Step 1: A sends a message to R with the new path value._
_Step 2: If A receives an ACK from R, then A sends an ACK to the client and stops._
_Step 3: If A has not yet received an ACK from R, then A sends the message to R with the new value again._
Fig.5. Example of read at node A
Fig.6. Example of write at node A
A sends a request to R for the new path value, and R scans from right to left, i.e., R→B→A; A has to wait for the ACK, B gets the ACK prior to A, and then A sends a message to R with the new value.
_Algorithm 3: New value is receiving at node R_
_Step 1: R increments its sequence no by 1._
_Step 2: R sends out the new value and sequence no to_
_every node._
Fig.7. Example of a new value received at node R.
According to the previous algorithm, the Root increments its layer value by 1 and every node then receives its layer number, i.e., the sequence number.
VI. VALIDATION OF TRANSACTIONAL CALCULUS OF
SEMI-STRUCTURED (TCSS) DATABASE BY CAP
AND BASE THEOREM
Data validation is intended to provide certain well-defined guarantees for the fitness, accuracy, and consistency of any of various kinds of user input into an application or automated system. Data validation rules can be defined and designed using any of various methodologies and be deployed in any of various contexts.
Data validation, as explained above, means making sure that all data (whether user input variables, read from a file, or read from a database) are valid for their intended data types and stay valid throughout the application that is driving this data. This means that data validation, in order to be as successful as it can be, must be implemented at all parts that get the data, process it, and save or print the results.
**_Validation_**
In evaluating the basics of data validation, generalizations can be made regarding the different types of validation, according to the scope, complexity, and purpose of the various validation operations to be carried out. For example:
**_Data type validation:_** Data type validation is customarily carried out on one or more simple data fields. The simplest kind of data type validation verifies that the individual characters provided through user input are consistent with the expected characters of one or more known primitive data types, as defined in a programming language or data storage and retrieval mechanism. As Figure 1 above shows, if Project is set as Root then the path from Project to Department can be established and expressed as: **_Project (Root)→Member→Department._**
Let the path be denoted as ρ. Then it can be expressed as _ρ(R,C)_, i.e., the path from Root to CSG.
(∀i) ρ(i) satisfies layer i (25)
(∀i)(∃C) (ρ(layer, C) ⊨ Root) (26)
(ρ(layer, C) ⊨ Root) → (Root → ρ(layer, C)) (27)
(ρ(layer, C) ⊨ ρ(layer−1, C′)) → TailNode (28)
This is a simple example of data type validation, verifying that the individual characters provided through user input are consistent with the expected characters of one or more known primitive data types, as defined in a programming language or data storage and retrieval mechanism; the previous section already proved that it satisfies the CAP and BASE theorems.
**_Constraint validation:_** Constraint validation may examine user input for consistency with a minimum/maximum range, or for consistency with a test for evaluating a sequence of characters.
**_Consistency:_** A read sees all previously completed writes, i.e., all nodes see the same data at the same time.
_E.g., as Figure 1 above shows, if Project is set as Root then the path from Project to Department can be established and expressed as: Project (Root)→Member→Department._
Let the path be denoted as ρ. Then it can be expressed as _ρ(R,C)_, i.e., the path from Root to CSG, where _Root_ is denoted as _R_, _C_ is the CSG, and _E_ is a ternary relation of (CSG, P, ϴ).
_ρ(i)_ [i is the layer] (29)
(∀i)(∃C) (ρ(layer, C) ⊨ Root) → (Root → ρ(layer, C)) (30)
(ρ(layer, C) ⊨ ρ(layer−1, C′)) (31)
(ρ(layer, C) ⊨ ρ(layer−1, C′)) → TailNode (32)
**_Therefore, all nodes see the same data at the same time._**
This is a simple example of constraint validation, which examines for consistency; the previous section already proved that consistency satisfies the CAP and BASE theorems.
**_Structured validation:_** Structured validation allows for the combination of any of various basic data-type validation steps, along with more complex processing. Such complex processing may include the testing of conditional constraints for an entire complex data object or a set of process operations within a system.
_Path(Root, E)_ [C is the CSG and E is a ternary relation of (CSG, P, ϴ)]
(∀i) Root → ρ(layer, C) (33)
(ρ(layer, C) ⊨ ρ(layer−1, C′)) → TailNode (34)
(path1 ∨ path2 ∨ ... ∨ pathk) → DesiredNode (35)
_Therefore, every node will propagate to everywhere it should sooner or later, but the path will continue to receive input._
This is an example of structured validation; it includes complex processing, and such complex processing may include the testing of conditional constraints for an entire complex data object or a set of process operations within a system.
VII. TCSS OPERATORS WITH EXAMPLE
In previous work, the GQL-SS algebra was defined for the GOOSSDM model, operating on semi-structured schema concepts and/or the several constructs described in the model. The algebra consists of a set of operators, and a few of them can also be used with constructs like ESG and CSG separately. The running example of a _Project Management System (PMS)_ is used to illustrate the functionalities of the operators. As specified earlier, the _path operator (ρ)_ is also an inclusive part of the algebra and is invoked every time any other operator specifically defined for the management of semi-structured data is invoked.
Let us consider an example of a Project Management System (PMS) where a project has several members and members are associated with some departments. Individual members may or may not have publications. Moreover, each member may participate in any number of projects. The database for PMS is purely semi-structured in nature. The sample data is shown in Table I.
_A. Operators in GOOSSDM_
Let us note that in GOOSSDM the data are seen as single-rooted or multi-rooted graphs. In every case, an immediate root has to be set for the desired CSG, and the tail node has to be found with respect to the desired CSG.
- **_Select (σ) operator:_** The select operator selects a CSG and returns the CSG instances that satisfy a given list of ESGs or CSGs from the GOOSSDM schema. The tuple relational calculus (TRC) notation is,
{C | C ∈ CSG} (36)
{C | List(C)} (37)
[List(CSG) = OUTPUT CSG, where list = {list of ESGs}]
i.e., the set of all CSGs for which List(C) evaluates to true. The path expression is of the form
(∀l)(∃C) setPath(l, C) (38)
[for all levels, an existential CSG sets the path, and if it does not have any edge then it is set to Root]
Root → ρ(1, C) (39)
ρ(l, C) → ρ(l+1, C′) (40)
ρ(l, C) → UltimateCSG (41)
[Searching for the desired CSG level by level to get the ultimate CSG.]
- **_Retrieve (π) operator:_** The retrieve operator extracts ESGs or CSGs from the CSG using some constraints _CON_ over one or more ESGs or CSGs defined in the GOOSSDM schema.
{C1 | CSG(C1) Ʌ CON(C1)} (42)
[C1 belongs to some CSG with the condition satisfied]
{C1 | CSG(C1) Ʌ CON(C1)} returns the restricted CSG (43)
[C1 belongs to a CSG with the specified condition, and that returns the restricted CSG.] Let Constraints = CON.
C.CON (44)
[The dot operator extracts ESGs or CSGs from the CSG using the specified constraints CON over one or more ESGs or CSGs defined in the schema.]
{C1.E | CSG(C1) Ʌ CON(C1.E)} (45)–(46)
- **_Union, Intersection and Difference (∪, ∩, and −) operators:_** These operators have their usual meaning. The union of any two sets _A_ and _B_, denoted by A ∪ B, is the set of all elements which belong to A or B or both. Hence, A ∪ B = {x : x ∈ _A_ OR x ∈ _B_}.
{C | C1(C) ∨ C2(C) ∨ CON(C1.C2)} (47)
[C1 or C2, or the specified constraints over the dot product of C1 and C2, returns the CSGs or ESGs that belong to C1 or C2 or both.]
Intersection, denoted by A ∩ B, is the set of elements which belong to both A and B, expressed as
A ∩ B = {x : x ∈ _A_ AND x ∈ _B_}.
{C | C1(C) Ʌ C2(C) Ʌ CON(C1.C2)} (48)
[C1 or C2, or the specified constraints over the dot product of C1 and C2, returns the CSGs or ESGs that belong to both C1 and C2.]
- **_Join (|X|) operator:_** The join operator is a special case of the Cartesian product operator. It is a binary operator that relates two CSGs where one identical ESG must be common. Let the two CSGs be _CSG1_ and _CSG2_. Also let a set of ESGs _E1 = (E11, E12, ..., E1r)_ and a set of ESGs _E2 = (E21, E22, ..., E2s)_ be related with _CSG1_ and _CSG2_ respectively. The join operator between _CSG1_ and _CSG2_ is possible iff E1 Ʌ E2 ≠ ∅. Now let E1 Ʌ E2 = {Ea, Eb, Ec}; then,
{C | CSG1(C) Ʌ (∃D)(CSG2(D) Ʌ C.Ea = D.Ea Ʌ C.Eb = D.Eb Ʌ C.Ec = D.Ec)} (49)
[Specified CSG in C1 with existential CSG in C2, where both satisfy a common ESG field.]
{C.Ea | CSG1(C) Ʌ (∃D)(CSG2(D) Ʌ C.Ea = D.Ea)} (50)
[All CSGs in C1 and CSGs in C2 that satisfy a common ESG field; this returns all the common ESGs.]
_B. Capabilities of the proposed calculus TCSS_
In this section, the expressiveness capabilities of the proposed calculus TCSS are demonstrated by applying the tuple relational calculus to suitable example queries.
_a. Find the project name and project id from the CSG Project1._
In this query, the Select operator has been used to select a list like PName and PID from Project1. The calculus can be expressed as follows:
{P.PName, P.PID | Project1(P)}
Result:
<Project1>
<PName> ABC</PName>
<PID>P1001</PID>
<PName> XYZ</PName>
<PID>P1003</PID>
<PName> DEF</PName>
<PID>P1004</PID>
<PName> XYZ</PName>
<PID>P1005</PID>
<PName>ABC</PName>
<PID>P1001</PID>
</Project1>
_b. Find the details of the publication whose Member Id MID = 'M03' and Publication Id PuID = 'P003'._
In this query, the Retrieve operator has been used with the constraints of a select operation on the list MID = 'M03' from the Member CSG and the list PuID = 'P003' from the Publication CSG. The calculus can be expressed as follows:
{P.Publication | Project1(P) Ʌ (∃M)(Member(M) Ʌ M.MID = 'M03') Ʌ (∃B)(Publication(B) Ʌ B.PuID = 'P003')}
Result:
<Project1>
<Publication>
<PuID> P003 </PuID>
<Ptopics>SSS</Ptopics>
</Publication>
</Project1>
_c. Find the details of the member where MName = 'Bipin' from Project1 and also the details of the member where MName = 'Priya' from Project2._
In this query, the Retrieve operator has been used with the constraints of a select operation on the lists MName = 'Bipin' and MName = 'Priya' from the Member CSG. The calculus can be expressed as follows:
{P.Member | Project1(P) Ʌ (∃M)(Member(M) Ʌ M.MName = 'Bipin')} ∨ {P.Member | Project2(P) Ʌ (∃M)(Member(M) Ʌ M.MName = 'Priya')}
Result:
<Project1>
<Member>
<MID> M01</MID>
<MName>Bipin</MName>
<MAddress> XX </MAddress>
</Member>
</Project1>
<Project2>
<Member>
<MID> M07</MID>
<MName>Priya</MName>
<MAddress> CC </MAddress>
</Member>
</Project2>
_d. Find the names of all members who have the same department id DID = 'D03' and department name 'EE'._
In this query, the Retrieve operator has been used with the constraints of a select operation on the list DID = 'D03' from the Department CSG. Another Retrieve operator has been used with constraints on the list DName = 'EE' from the Department CSG. Finally, the intersection operator has been used. The calculus can be expressed as follows:
{P.Member | Project1(P) Ʌ (Member(M)) Ʌ (∃D)(Department(D) Ʌ D.DID = 'D03' Ʌ D.DName = 'EE')}
Result:
<Project1>
<Member>
<MName>Rashi</MName>
</Member>
<Member>
<MName>Sashi</MName>
</Member>
</Project1>
_e. Find the names of all members who have the same department id._
In this query, it is required to set a custom root and then apply the join operator. For this purpose, the Member CSG needs to be set as the root. The calculus and the corresponding result are as follows:
{P.Member | Project1(P) Ʌ (∃M1)(∃M2)(Member(M1) Ʌ Member(M2) Ʌ M1 ≠ M2 Ʌ M1.DID = M2.DID)}
Result:
<Member>
<MName>Bipin</MName>
<MName>Rashi</MName>
<MName>Sashi</MName>
<MName>Priya</MName>
</Member>
_f. Find the project name and project id from the CSG Project1 and the CSG Project2._
In this query, the Select operator has been used to select a list like PName and PID from Project1 and also from Project2. The calculus can be expressed as follows:
{P.PName, P.PID | Project1(P)} ∨ {P.PName, P.PID | Project2(P)}
Result:
<Project1>
<PName> ABC</PName>
<PID>P1001</PID>
<PName> XYZ</PName>
<PID>P1003</PID>
<PName> DEF</PName>
<PID>P1004</PID>
<PName> XYZ</PName>
<PID>P1005</PID>
<PName>ABC</PName>
<PID>P1001</PID>
</Project1>
<Project2>
<PName>PQR</PName>
<PID>P1006</PID>
</Project2>
_g. Find the details of the publications where MName = 'Bipin' from Project1 and also the details of the publication where MName = 'Priya' from Project2._
In this query, the Retrieve operator has been used with the constraints of a select operation on the lists MName = 'Bipin' and MName = 'Priya' from the Member CSG. The calculus can be expressed as follows:
{P.Publication | Project1(P) Ʌ (∃M)(Member(M) Ʌ M.MName = 'Bipin')} ∨ {P.Publication | Project2(P) Ʌ (∃M)(Member(M) Ʌ M.MName = 'Priya')}
Result:
<publication>
<puid> P001</puid>
<ptopics> RRR </ptopics>
<puid> P007</puid>
<ptopics> NNN</ptopics>
</publication>
VIII. AN IMPLEMENTATION OF PROPOSED TCSS
_A. Transaction Execution_
Fig.8. Example of transaction execution
Figure 8 above shows that the root node is 1; then, scanning from the right, the next node is 2 and the node after that is 4; after that, the left node, 3, is scanned. We focus on a simplified variant of TCSS that is dynamically typed. To introduce the syntax and semantics of TCSS, let us start with a simple example of a transactional query using XQuery. In this section, the expressiveness capabilities of the proposed transactional calculus TCSS are demonstrated by applying the calculus to suitable example queries.
<project>
<project1>
<pname>ABC</pname>
<pid>P1001</pid>
<topics>AAAA</topics>
<member>
<mid>M01</mid>
<mname>BIPIN</mname>
<maddress>xx</maddress>
<department>
<did>D01</did>
<dname>CSE</dname>
<publication>
<puid>P001</puid>
<ptopics>RRR</ptopics>
</publication>
</department>
</member>
<pname>XYZ</pname>
<pid>P1003</pid>
<topics>CCCC</topics>
<member>
<mid>M03</mid>
<mname>ASHU</mname>
<maddress>PP</maddress>
<department>
<did>D02</did>
<dname>CA</dname>
<publication>
<puid>P003</puid>
<ptopics>SSS</ptopics>
</publication>
</department>
</member>
<pname>DEF</pname>
<pid>P1004</pid>
<topics>DDDD</topics>
<member>
<mid>M04</mid>
<mname>RASHI</mname>
<maddress>YY</maddress>
<department>
<did>D03</did>
<dname>EE</dname>
<publication>
<puid>P004</puid>
<ptopics>TTT</ptopics>
</publication>
</department>
</member>
<pname>XYZ</pname>
<pid>P1005</pid>
<topics>QQQQ</topics>
<member>
<mid>M06</mid>
<mname>SASHI</mname>
<maddress>RR</maddress>
<department>
<did>D03</did>
<dname>EE</dname>
<publication>
<puid>P005</puid>
<ptopics>VVV</ptopics>
</publication>
</department>
</member>
<pname>ABC</pname>
<pid>P1001</pid>
<topics>BBBB</topics>
<member>
<mid>M07</mid>
<mname>PRIYA</mname>
<maddress>CC</maddress>
<department>
<did>D01</did>
<dname>CSE</dname>
<publication>
<puid>P006</puid>
<ptopics>MMM</ptopics>
</publication>
</department>
</member>
</project1>
<project2>
<pname>PQR</pname>
<pid>P1006</pid>
<topics>YYYY</topics>
<member>
<mid>M07</mid>
<mname>PRIYA</mname>
<maddress>cc</maddress>
<department>
<did>D02</did>
<dname>CA</dname>
<publication>
<puid>P007</puid>
<ptopics>NNN</ptopics>
</publication>
</department>
</member>
</project2>
</project>
**_1._** **_Find the project name and project id from the CSG_**
**_project1._**
_for $p1 in doc("demo1.xml")//project1_
_for $p2 in doc("demo1.xml")//project1_
_where $p1//topics != $p2//topics_
_return <table ID="project">_
_<pname>{data($p1//pname)}</pname>_
_<pid>{data($p1//pid)}</pid>_
_</table>_
<table ID="project">
<pname> ABC XYZ DEF XYZ ABC </pname>
<pid> P1001 P1003 P1004 P1005 P1001 </pid>
</table>
**_2._** **_Find the details of the publication whose Member Id MID="M03" and Publication Id PuID="P003"._**
_for $p in doc("demo1.xml")//member_
_where $p//mid = "M03"_
_and $p//puid = "P003"_
_return $p//publication_
<publication>
<puid> P003 </puid>
<ptopics> SSS</ptopics>
</publication>
**_3._** **_Find the details of member where MName=”Bipin” from_**
**_project1 and also find the details of Member where_**
**_MName=”Priya” from Project2._**
_for $p1 in doc("demo.xml")/project/project1/member_
_for $p2 in doc("demo.xml")/project/project2/member_
_where $p1//mname = "BIPIN"_
_and $p2//mname = "PRIYA"_
_return<table ID= "project">_
_<member>_
_{$p1//(mid,mname,maddress)}_
_{$p2//(mid,mname,maddress)}_
_</member>_
_</table>_
<table ID="project">
<member>
<mid> M01</mid>
<mname> BIPIN </mname>
<maddress> XX </maddress>
<mid> M07</mid>
<mname> PRIYA</mname>
<maddress> CC</maddress>
</member>
</table>
**_4. Find the name of all members who have the same department_**
**_id “DID=D03” and department name “EE”._**
_for $p in doc("demo1.xml")//member_
_where $p//dname = "EE"_
_and $p//did = "D03"_
_return<project1>_
_<member>_
_{$p//(mid,mname,maddress)}_
_</member>_
_</project1>_
<project1>
<member>
<mname> RASHI </mname>
</member>
</project1>
<project1>
<member>
<mname> SASHI </mname>
</member>
</project1>
**_5._** **_Find the name of the all members who have the_**
**_department id same_**
_for $p1 in doc("demo1.xml")/project/project1/member_
_for $p2 in doc("demo1.xml")/project/project1/member_
_where $p1//did = $p2//did_
_and $p1//puid != $p2//puid_
_return<member>_
_<mname>{data($p1//mname)}</mname>_
_</member>_
<member>
<mname> BIPIN </mname>
</member>
<member>
<mname> RASHI </mname>
</member>
<member>
<mname> SASHI </mname>
</member>
<member>
<mname> PRIYA </mname>
</member>
**_6._** **_Find the project name and project id from the CSG_**
**_Project1 and Project2_**
_for $p1 in doc("demo1.xml")//project1_
_for $p2 in doc("demo1.xml")//project_
_where $p1//topics != $p2//topics_
_return<table ID="project">_
_<pname>{data($p1//pname)}</pname>_
_<pid>{data($p1//pid)}</pid>_
_<pname>{data($p2//pname)}</pname>_
_<pid>{data($p2//pid)}</pid>_
_</table>_
<table ID="project">
<pname>ABC XYZ DEF XYZ ABC</pname>
<pid> P1001 P1003 P1004 P1005 P1001</pid>
<pname> PQR</pname>
<pid> P1006</pid>
</table>
**_7._** **_Find the details of publications where MName=”Bipin”_**
**_from project1 and also find the details of publication_**
**_where MName=”Priya” from Project2._**
_for $p1 in doc("demo1.xml")/project/project1/member_
_for $p2 in doc("demo1.xml")/project/project2/member_
_where $p1//mname = "BIPIN"_
_and $p2//mname = "PRIYA"_
_return<table ID= "project">_
_<publication>_
_{$p1//(puid,ptopics)}_
_{$p2//(puid,ptopics)}_
_</publication>_
_</table>_
<table ID="project">
<publication>
<puid> P001</puid>
<ptopics> RRR </ptopics>
<puid> P007</puid>
<ptopics> NNN</ptopics>
</publication>
</table>
_B. Implementation of TCSS XQuery_
To examine the scalability of the proposed TCSS XQuery implementation, we perform an experimental evaluation using the "Project" XML data. We also compare TCSS XQuery with the open-source XML processor BaseX.
_Queries_
We consider five basic types of queries: selection, retrieve, union, intersection, and join.
**_Selection:_** Query 1 finds the project name and project id from the CSG project1.
_for $p1 in doc("demo1.xml")//project1_
_for $p2 in doc("demo1.xml")//project1_
_where $p1//topics != $p2//topics_
_return<table ID="project">_
_<pname>{data($p1//pname)}</pname>_
_<pid>{data($p1//pid)}</pid>_
_</table>_
_Query 1_
**_Retrieve:_** Query 2 finds the details of the publication whose Member Id MID="M03" and Publication Id PuID="P003".
_for $p in doc("demo1.xml")//member_
_where $p//mid = "M03"_
_and $p//puid = "P003"_
_return $p//publication_
_Query 2_
**_Union:_** Query 3 finds the details of the member where MName="Bipin" from project1 and also the details of the member where MName="Priya" from Project2.
_for $p1 in_
_doc("demo.xml")/project/project1/member_
_for $p2 in_
_doc("demo.xml")/project/project2/member_
_where $p1//mname = "BIPIN"_
_and $p2//mname = "PRIYA"_
_return<table ID= "project">_
_<member>_
_{$p1//(mid,mname,maddress)}_
_{$p2//(mid,mname,maddress)}_
_</member>_
_</table>_
_Query 3_
**_Intersection:_** Query 4 finds the names of all members who have the same department id DID="D03" and department name "EE".
_for $p in doc("demo1.xml")//member_
_where $p//dname = "EE"_
_and $p//did = "D03"_
_return<project1>_
_<member>_
_{$p//(mid,mname,maddress)}_
_</member>_
_</project1>_
_Query 4_
**_Join:_** Query 5 finds the names of all members who have the same department id.
_for $p1 in_
_doc("demo1.xml")/project/project1/member_
_for $p2 in_
_doc("demo1.xml")/project/project1/member_
_where $p1//did = $p2//did_
_and $p1//puid != $p2//puid_
_return<member>_
_<mname>{data($p1//mname)}</mname>_
_</member>_
_Query 5_
_C. Experimental Results_
This performance study explores TCSS XQuery's ability. Figure 9 shows that, in the case of TCSS XQuery, each query's execution time is nearly the same as the others', maintaining parity, whereas the BaseX XQuery processor takes more time for selection queries and less time for join queries. TCSS XQuery's time remains comparable, i.e., the additional data is processed in the same amount of time. Here TCSS XQuery is demonstrated using a real 10 KB XML dataset (the "Project" XML data) for various XML selection, retrieve, union, intersection, and join queries. In the future, we plan to analyze big XML data and optimize the query compiler.
Fig.9. Above: TCSS XQuery; below: BaseX XQuery
IX. CONCLUSION
The proposed framework blends the semantics of a transactional calculus specification and abstraction mechanism with syntax in specific modelling. Thus, the paper fills the gap of a systematic methodology for a transactional calculus of the GOOSSDM model. In addition, this paper proposes a formal transactional calculus called Transactional Calculus for Semi-structured database (TCSS). Further, the transactional calculus is derived from an algebra-based query language [11] and illustrated using real-life examples. The benefits of the proposed work are manifold. It provides support towards (1) representation of precise knowledge of domain-independent conceptualisation from structural and functional design concerns, with enriched semantics and syntax for a transactional calculus of semi-structured data; (2) realisation of the proposed TCSS working with the CAP and BASE theorems; (3) a systematic methodology that paves the way for transforming domain analysis; (4) guidelines for the mapping of the Transactional Calculus for Semi-structured database; (5) a transactional system for semi-structured data based on path expressions; (6) a path operator used to set the root node in the GOOSSDM schema and to find the path from the root node to the desired node for any transaction; and (7) early verification of the semi-structured data schema structure in correspondence with the desired transactional calculus. A perspective extension of this calculus would support a larger class of complex queries such as aggregates and group-by operations.
REFERENCES
[1] Conrad R., Scheffner D., Freytag J. C., "XML conceptual
modeling using UML", 19[th]Intl. Conf. on Conceptual
Modeling, PP: 558-574, 2000.
[2] Anirban Sarkar, “Design of Semi-structured Database
System: Conceptual Model to Logical Representation”,
Book Titled: Designing, Engineering, and Analyzing
Reliable and Efficient Software, Editors: H. Singh and K.
Kaur, IGI Global Publications, USA, PP 74 – 95, 2013.
[3] McHugh J., Abiteboul S., Goldman R., Quass D., Widom
J., "Lore: a database management system for
semistructured data", Vol. 26 (3), PP: 54 - 66, 1997.
[4] Badia, A., "Conceptual modeling for semistructured data",
3[rd]International Conference on Web Information Systems
Engineering, PP: 170 – 177, 2002.
[5] Mani M., “EReX: A Conceptual Model for XML”,
2[nd]International XML Database Symposium, PP 128-142,
2004.
[6] Suresh Jagannathan, Jan Vitek, Adam Welc, Antony Hosking, "A Transactional Object Calculus", Dept. of Computer Science, Purdue University, West Lafayette, IN 47906, United States.
[7] Liu H., Lu Y., Yang Q., "XML conceptual modeling with
XUML", 28[th]International Conference on Software
Engineering, PP: 973–976, 2006.
[8] Combi C., Oliboni B., "Conceptual modeling of
XMLdata", ACM Symposium on Applied Computing, PP:
467– 473, 2006.
[9] Wu X., Ling T. W., Lee M. L., Dobbie G.," Designing
semistructured databases using ORA-SSmodel",
2[nd]International Conference on Web Information Systems
Engineering, Vol. 1, PP: 171 –180, 2001.
[10] Seth Gilbert and Nancy Lynch, "Brewer's conjecture and the feasibility of consistent, available, partition-tolerant web services", SIGACT News, June 2002.
[11] Rita Ganguly, Rajib Kumar Chatterjee, Anirban Sarkar, "Graph Semantic Based Approach for Querying Semi-structured Database System", 22nd International Conference on SEDE-2013, pp. 79-84.
[12] Seth Gilbert (National University of Singapore) and Nancy Lynch (Massachusetts Institute of Technology), "Perspectives on the CAP Theorem".
[13] Soichiro Hidaka, Zhenjiang Hu, Kazuhiro Inaba, Hiroyuki Kato, "Bidirectionalizing Structural Recursion on Graphs", Technical Report, National Institute of Informatics / The University of Tokyo / JSPS Research Fellow / The University of Electro-Communications, August 31, 2009.
[14] Data Validation, Data Integrity, Designing Distributed
Applications with Visual Studio NET, Arkady
Maydanchik (2007), "Data Quality Assessment",
Technics Publications, LLC
[15] William S. Frantz (Periwinkle Computer Consulting, 16345 Englewood Ave., Los Gatos, CA, USA 95032) and Charles R. Landau (Tandem Computers Inc., 19333 Vallco Pkwy, Loc 3-22, Cupertino, CA, USA 95014), "Object Oriented Transaction Processing in the KeyKOS® Microkernel".
[16] Kazimierz Subieta, "Introduction to Object-Oriented Databases", http://www.ipipan.waw.pl/~subieta; Ni W., Ling T. W., "GLASS: A Graphical Query Language for Semi-structured Data", 8th International Conference on Database Systems for Advanced Applications, pp. 363-370, 2003.
[17] R. K. Lomotey and R. Deters, “Datamining from
document-append NoSQL,” Int. J. Services Comput., vol.
2, no. 2, pp. 17–29, 2014.
[18] Braga, D., Campi, A. and Ceri, S., “XQBE (XQuery By
Example): A visual interface to the standard XML query
language”, ACM Transactions on Database
Systems(TODS), Vol.30 (5), pp. 398 – 443, 2003.
[19] AnirbanSarkar, "Conceptual Level Design of Semi
structured Database System: Graph-semantic Based
Approach", International Journal of Advanced Computer
Science and Applications, The SAI Pubs., New York,
USA, Vol. 2, Issue 10, PP 112 – 121,November, 2011.
[ISSN: 2156-5570(Online) &ISSN : 2158-107X(Print)].
[20] T. W. Ling, "A normal form for sets of not-necessarily normalized relations", Proceedings of the 22nd Hawaii International Conference on System Sciences, pp. 578-586, IEEE Computer Society Press, 1989.
[21] T. W. Ling and L. L. Yan. NF-NR: A Practical Normal
Form for Nested Relations. Journal of Systems
Integration. Vol4, 1994, pp309-340.
[22] Rita Ganguly and Anirban Sarkar, "Evaluations of Conceptual Models for Semi-structured Database System", International Journal of Computer Applications, Vol. 50, Issue 18, pp. 5-12, July 2012. [ISBN: 973-93-80869-67-3]
[23] Rami Sellami, Sami Bhiri, and Bruno Defude, "Supporting Multi Data Stores Applications in Cloud Environments", IEEE Transactions on Services Computing, Vol. 9, No. 1, pp. 59-71, January/February 2016.
[24] O. Cur_e, R. Hecht, C. Le Duc, and M. Lamolle, “Data
integration over NoSQL stores using access path based
mappings,” inProc. 22nd Int. Conf. Database Expert Syst.
Appl., Part I, 2011, pp. 481–495.
[25] Charles Roe, "ACID vs. BASE: The Shifting pH of Database Transaction Processing", www.dataversity.net.
[26] Martin Abadi (Microsoft Research / University of California, Santa Cruz), Tim Harris (Microsoft Research), and Katherine F. Moore (Microsoft Research / University of Washington), "A Model of Dynamic Separation for Transactional Memory".
[27] Manfred Schmidt-Schauß and David Sabel (Goethe University, Frankfurt, Germany), "Correctness of an STM Haskell Implementation", ICFP '13, Boston, USA.
[28] B. Liskov and R. Scheifler. Guardians and actions:
Linguistic support for robust distributed programs. ACM
Transactions on Programming Languages and Systems,
5(3):381–404, July 1983.
[29] J. Eliot B. Moss. Nested Transactions: An Approach to
Reliable Distributed Computing.MIT Press, Cambridge,
Massachusetts, 1985.
[30] Jeffrey L. Eppinger, Lily B. Mummert, and Alfred Z.
Spector, editors. Camelot and Avalon: A Distributed
Transaction Facility. Morgan Kaufmann, 1991.
[31] D. D. Detlefs, M. P. Herlihy, and J. M. Wing. Inheritance
of synchronization and recovery in Avalon/C++. IEEE
Computer, 21(12):57–69, December 1988.
[32] Nicholas Haines, Darrell Kindred, J. Gregory Morrisett,
Scott M. Nettles, and Jeannette M. Wing. Composing
first-class transactions. ACM Transactions on
Programming Languages and Systems, 16(6):1719–1736,
November 1994.
[33] Alex Garthwaite and Scott Nettles. Transactions for Java.
In Malcolm P. Atkinson and Mick J. Jordan, editors,
Proceedings of theFirst International Workshop on
Persistenceand Java, pages 6–14. Sun Microsystems
Laboratories Technical Report 96-58, November1996.
[34] Richard J. Lipton. Reduction: a new method of proving
properties of systems of processes. InProceedings of the
2nd ACM SIGACT-SIGPLAN symposium on Principles
of programming languages, pages 78–86. ACM Press,
1975.
[35] Shaz Qadeer, Sriram K. Rajamani, and Jakob Rehof, "Summarizing procedures in concurrent programs", Proceedings of the 31st ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, pp. 245-255, ACM Press, 2004.
[36] Nancy Lynch, Michael Merritt, William Weihl, and Alan
Fekete. Atomic Transactions. Morgan-Kaufmann, 1994.
[37] Panos Chrysanthis and Krithi Ramamritham. Synthesis of
Extended Transaction Models Using ACTA. ACM
Transactions on Database Systems, 19(3):450–491, 1994.
[38] Jim Gray and Andreas Reuter. Transaction Processing:
Concepts and Techniques. Data Management Systems.
Morgan Kaufmann, 1993.
[39] Andrew Black, Vincent Cremet, Rachid Guerraoui, and
Martin Odersky. An Equational Theory for Transactions.
Technical Report CSE 03-007, Department of Computer
Science, OGI School of Science and Engineering, 2003.
[40] Tom Chothia and Dominic Duggan. Abstractions for
Fault-Tolerant Computing. Technical Report 2003-3,
Department of Computer Science, Stevens Institute of
Technology, 2003.
[41] N. Busi, R. Gorrieri, and G. Zavattaro. On the
Serializability of Transactions in Java Spaces.
InConCoord 2001, International Workshop on
Concurrency and Coordination, 2001.
[42] R. Bruni, C. Laneve, and U. Montanari. Orchestrating
Transactions in the Join Calculus.In 13th International
Conference on Concurrency Theory, 2002.
[43] E. Preston Carman, Jr. (University of California, Riverside), Till Westmann (Couchbase), Vinayak R. Borkar (X15 Software, Inc.), Michael J. Carey (University of California, Irvine), and Vassilis J. Tsotras (University of California, Riverside), "A Scalable Parallel XQuery Processor", 2015 IEEE International Conference on Big Data (Big Data), p. 164.
[44] Shreya Banerjee and Anirban Sarkar “Ontology-driven
approach towards domain-specific system design
“.International journal semantics and ontologies, vol 11,
no 1, pp- 39-60.
[45] Charles Roe, "ACID vs. BASE: The Shifting pH of Database Transaction Processing", www.dataversity.net.
**Authors’ Profiles**
**Rita Ganguly** received the M.Tech degree from NIT Durgapur, India, and is enrolled as a part-time Research Scholar in the Computer Science department (formerly the Computer Application department), NIT Durgapur, under the supervision of Dr. Anirban Sarkar. Presently she is working as an Assistant Professor in the Computer Application Department, Dr. B. C. Roy Engineering College, Durgapur, India.
**Anirban Sarkar is presently a faculty**
member in the Department of Computer
Applications, National Institute of
Technology, Durgapur, India. He received
his PhD degree from National Institute of
Technology, Durgapur, India in 2010. His
areas of research interests are Database
Systems and Software Engineering. His total number of publications on various international platforms is above 100.
He is actively involved in collaborative research with several
Institutes in India and USA and has also served in the
committees of several international conferences in the area of
software engineering and computer applications.
**How to cite this paper:** Rita Ganguly, Anirban Sarkar, "An Approach to Develop a Transactional Calculus for Semi-Structured Database System", International Journal of Computer Network and Information Security (IJCNIS), Vol. 11, No. 9, pp. 24-39, 2019. DOI: 10.5815/ijcnis.2019.09.04
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.5815/ijcnis.2019.09.04?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.5815/ijcnis.2019.09.04, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "GOLD",
"url": "http://www.mecs-press.org/ijcnis/ijcnis-v11-n9/IJCNIS-V11-N9-4.pdf"
}
| 2,019
|
[] | true
| 2019-09-08T00:00:00
|
[
{
"paperId": "02bc9dffd65ad41edd6b3a51ebfe747f64dc9d2d",
"title": "DATA MINING FROM DOCUMENT-APPEND NOSQL"
},
{
"paperId": "1c3554d61c41996149e2edde56f8732633061293",
"title": "Correctness of an STM Haskell implementation"
},
{
"paperId": "21595186c0570f654034cc189faa8ec00fa2dc44",
"title": "Evaluations of Conceptual Models for Semi-structured Database System"
},
{
"paperId": "5feaafaa4103a751dc39b1be389a56004536c218",
"title": "Conceptual Level Design of Semi-structured Database System: Graph-semantic Based Approach"
},
{
"paperId": "e7cf963e61e767f480a6e03300b0a4cd74686435",
"title": "Perspectives on the CAP Theorem"
},
{
"paperId": "05352f0809cd42eabd275738af180d9311df7448",
"title": "Data Integration over NoSQL Stores Using Access Path Based Mappings"
},
{
"paperId": "b69295dafcd49459eef48d69cf6a5d06b26ddbf2",
"title": "XML conceptual modeling with XUML"
},
{
"paperId": "7b8d3ff9559708c7bc61ab6b8458eeab452fdaae",
"title": "A transactional object calculus"
},
{
"paperId": "180570e4a015111874cc7bfde1aee923da93338e",
"title": "XQBE (XQuery By Example): A visual interface to the standard XML query language"
},
{
"paperId": "0ad47673386f7630a25b2ab571cbfc17bd6229f8",
"title": "EReX: A Conceptual Model for XML"
},
{
"paperId": "9311704307e73b74e9860b55e12b0c1e3cacd32f",
"title": "Summarizing procedures in concurrent programs"
},
{
"paperId": "24b3d948ad277862d96c0a77579b9649176a9949",
"title": "An Equational Theory for Transactions"
},
{
"paperId": "18c84d6e03edc5b482ea76326a81b8fd89d8890c",
"title": "Conceptual modeling for semistructured data"
},
{
"paperId": "f8525b50a77f8730a661807be7b04f394a6dbd44",
"title": "Orchestrating Transactions in Join Calculus"
},
{
"paperId": "bbe7a291e74d12d0bcad6f623d8a13e084b83255",
"title": "Brewer's conjecture and the feasibility of consistent, available, partition-tolerant web services"
},
{
"paperId": "f192c76b6fcda1b274cd1fd65efd1467f20e0127",
"title": "Designing semistructured databases using ORA-SS model"
},
{
"paperId": "acf312724831f680eeb9198cc7cacbb522f3b03e",
"title": "XML Conceptual Modeling Using UML"
},
{
"paperId": "ff486da7373f75a6f20d3b64f4c07c0067542683",
"title": "Transactions for Java"
},
{
"paperId": "099f9eccf15aed58801051148a5e937d05f4ede3",
"title": "Lore: a database management system for semistructured data"
},
{
"paperId": "34e72fa4beed59a8fa9adbaafe0dcf7923887dc5",
"title": "NF-NR: A practical normal form for nested relations"
},
{
"paperId": "9e6263d1e02acd28601c2e2ccfc0fbed0c36fca9",
"title": "Composing first-class transactions"
},
{
"paperId": "b7d2b569049706ed33b36605adcaefef22da08f8",
"title": "Synthesis of extended transaction models using ACTA"
},
{
"paperId": "66f8fcd2cbbaa169943b6b6b3b7633e4c08f85e7",
"title": "Object-Oriented Transaction Processing in the KeyKOS Microkernel"
},
{
"paperId": "66ec4f8c1e6f471881cae3e2d97735713c5c071d",
"title": "Transaction Processing: Concepts and Techniques"
},
{
"paperId": "9d4fb7fba5b1a3fe006cdc5bc083629826a2df69",
"title": "Camelot and Avalon: A Distributed Transaction Facility"
},
{
"paperId": "38d1c6d72afd3e215d537226236f0e18d16fd29b",
"title": "A normal form for sets of not-necessarily normalized relations"
},
{
"paperId": "b422a8fff5ce4a46eb4c964c88b1003c2a2a710e",
"title": "Guardians and actions: linguistic support for robust, distributed programs"
},
{
"paperId": "6a5632a789cbe7c1a690ca310731c2cde0155c76",
"title": "Ontology-driven approach towards domain-specific system design"
},
{
"paperId": "4a41efb7d12d11eec7e672c836e7a45ed7e3b66a",
"title": "Supporting Multi Data Stores Applications in Cloud Environments"
},
{
"paperId": null,
"title": "Tsotras11University of California, Riverside 2Couchbase 3X15 Software, Inc. 4University of California, Irvine Email: ecarm002@ucr.edu A Scalable Parallel XQuery Processor 2015"
},
{
"paperId": null,
"title": "Graph Semantic based Approach for Quering Semistructured Database System.”22"
},
{
"paperId": "715653150bddd42c017220596bef7e9601991f8c",
"title": "Design of Semi-Structured Database System: Conceptual Model to Logical Representation"
},
{
"paperId": null,
"title": "“Bidirectionalizing Structural Recursion on Graphs”"
},
{
"paperId": null,
"title": "Data Validation, Data Integrity, Designing Distributed Applications with Visual Studio NET"
},
{
"paperId": null,
"title": "Abstractions for Fault-Tolerant Computing"
},
{
"paperId": null,
"title": "USA 95014 landau_charles@tandem.com. Introduction to Object-Oriented Databases. Prof. Kazimierz Subieta ,subieta@pjwstk"
},
{
"paperId": null,
"title": "On the Serializability of Transactions in Java Spaces"
},
{
"paperId": "39667791d5a1e289d47f2fb93b557a503ce6603f",
"title": "Atomic Transactions"
},
{
"paperId": null,
"title": "Inheritance of synchronization and recovery in Avalon/C++"
},
{
"paperId": "cfdc0b56a1789e2e67128a0e48534641f80443c9",
"title": "Nested Transactions: An Approach to Reliable Distributed Computing"
},
{
"paperId": "f4a00bdc978e316a85509d9b872dfc996c3da1cd",
"title": "Reduction: a new method of proving properties of systems of processes"
},
{
"paperId": null,
"title": "ACID vs. BASE: The Shifting pH of Database Transaction Processing, By Charles Roe"
},
{
"paperId": null,
"title": "“A Model of Dynamic Seperation for Transactional Memory”"
}
] | 18,891
|
en
|
[
{
"category": "Engineering",
"source": "s2-fos-model"
},
{
"category": "Environmental Science",
"source": "s2-fos-model"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/0112d65d2ea11c065d7eb5f6fada9287002aa158
|
[] | 0.895491
|
A Data Management Model for Intelligent Water Project Construction Based on Blockchain
|
0112d65d2ea11c065d7eb5f6fada9287002aa158
|
Wireless Communications and Mobile Computing
|
[
{
"authorId": "2005591",
"name": "Zhoukai Wang"
},
{
"authorId": "2124633689",
"name": "Kening Wang"
},
{
"authorId": "2143491502",
"name": "Yichuan Wang"
},
{
"authorId": "2158400663",
"name": "Zheng Wen"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"Wirel Commun Mob Comput"
],
"alternate_urls": [
"https://onlinelibrary.wiley.com/journal/15308677",
"http://www.interscience.wiley.com/jpages/1530-8669/"
],
"id": "501c1070-b5d2-4ff0-ad6f-8769a0a1e13f",
"issn": "1530-8669",
"name": "Wireless Communications and Mobile Computing",
"type": "journal",
"url": "https://www.hindawi.com/journals/wcmc/"
}
|
The engineering construction-related data is essential for evaluating and tracing project quality in industry 4.0. Specifically, the preservation of the information is of great significance to the safety of intelligent water projects. This paper proposes a blockchain-based data management model for intelligent water projects to achieve standardization management and long-term preservation of archives. Based on studying the concrete production process in water conservancy project construction, we first build a behavioral model and the corresponding role assignment strategy to describe the standardized production process. Then, a distributed blockchain data structure for storing the production-related files is designed according to the model and strategy. In addition, to provide trust repository and transfer on the construction data, an intelligent keyless signature based on edge computing is employed to manage the data’s entry, modification, and approval. Finally, standardized and secure information is uploaded onto the blockchain to supervise intelligent water project construction quality and safety effectively. The experiments showed that the proposed model reduced the time and labor cost when generating the production data and ensured the security and traceability of the electronic archiving of the documents. Blockchain and intelligent keyless signatures jointly provide new data sharing and trading methods in intelligent water systems.
|
Hindawi
Wireless Communications and Mobile Computing
Volume 2022, Article ID 8482415, 16 pages
[https://doi.org/10.1155/2022/8482415](https://doi.org/10.1155/2022/8482415)
# Research Article A Data Management Model for Intelligent Water Project Construction Based on Blockchain
## Zhoukai Wang,[1,2] Kening Wang,[3] Yichuan Wang,[1,2] and Zheng Wen 4
1School of Computer Science and Engineering, Xi’an University of Technology, Xi’an, 710048, China
2Shaanxi Provincial Key Laboratory of Network Computing and Security Technology, Xi’an, 710048, China
3School of Automation and Information Engineering, Xi’an University of Technology, Xi’an, 710048, China
4School of Fundamental Science and Engineering, Waseda University, Tokyo 169-8050, Japan
Correspondence should be addressed to Zhoukai Wang; zkwang@xaut.edu.cn
Received 7 December 2021; Accepted 16 February 2022; Published 9 March 2022
Academic Editor: Qingqi Pei
[Copyright © 2022 Zhoukai Wang et al. This is an open access article distributed under the Creative Commons Attribution](https://creativecommons.org/licenses/by/4.0/)
[License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is](https://creativecommons.org/licenses/by/4.0/)
properly cited.
The engineering construction-related data is essential for evaluating and tracing project quality in industry 4.0. Specifically, the
preservation of the information is of great significance to the safety of intelligent water projects. This paper proposes a
blockchain-based data management model for intelligent water projects to achieve standardization management and long-term
preservation of archives. Based on studying the concrete production process in water conservancy project construction, we first
build a behavioral model and the corresponding role assignment strategy to describe the standardized production process.
Then, a distributed blockchain data structure for storing the production-related files is designed according to the model and
strategy. In addition, to provide trust repository and transfer on the construction data, an intelligent keyless signature based on
edge computing is employed to manage the data’s entry, modification, and approval. Finally, standardized and secure
information is uploaded onto the blockchain to supervise intelligent water project construction quality and safety effectively.
The experiments showed that the proposed model reduced the time and labor cost when generating the production data and
ensured the security and traceability of the electronic archiving of the documents. Blockchain and intelligent keyless signatures
jointly provide new data sharing and trading methods in intelligent water systems.
## 1. Introduction
In the water conservancy project management, archives have
the characteristics of large numbers and comprehensive coverage, and they play an essential role in all aspects of engineering construction. With the increasing investment of
water conservancy projects, the scale gradually grows, and
the project gradually becomes complex. The management
of water conservancy project archives also faces more and
more problems, which restrict the development of water
conservancy projects. On the other side, the traditional file
management mode can no longer adapt to the rapidly developing economic needs, so the introduction of digital archives
for water conservancy projects has become an inevitable
trend [1, 2, 3]. However, because the construction of water conservancy projects requires the global deployment and management of various units and resources, the digitization of their archives is difficult, and the archive management practices of water conservancy institutions require an accelerated transformation [4].
At present, digital archives of water conservancy projects have received little relevant research abroad, and the research in China is also in its initial stage [5, 6]. Although the new "Archives Law of the People's Republic of China"
the new “Archives Law of the People’s Republic of China”
provides legal and policy guarantees for the informatization of water conservancy project construction files, the relevant research and application are still
focused on the initial stage of construction [7]. Other important aspects of water conservancy project construction, such
as concrete production and mixing, and metal structure
installation, still lack effective information management
means [8]. In addition, the current digital file management
methods are relatively simple, with the drawbacks of poor antitampering and antirepudiation capabilities, and their application range is also limited. In short, the current digital file management methods cannot support engineering construction work involving significant safety needs [9].
In response to the shortcomings of traditional file management methods, this paper introduces blockchain and keyless signature techniques [10], takes the concrete mixing process as the research object, and studies data management in intelligent water conservancy construction. The main contributions of this paper are as follows.
(1) By employing smart keyless signatures, this paper establishes a paperless concrete production and operation management model to monitor the concrete mixing process and prevent data tampering during the process
(2) With the help of a consortium blockchain, this paper builds an intelligent document storage method to effectively supervise the progress, quality, and safety of concrete production and explores general methods for the encryption, storage, and traceability of production files
(3) Integrating the corresponding model and method, this paper proposes a blockchain-based file management system for concrete mixing procedures and implements it in the Hanjiang-to-Weihe River Project, markedly improving production management capacity
## 2. Motivation
Concrete mixing is a vital link in the construction of water
conservancy projects, and many engineering archival documents are generated during the mixing process to record
the concrete mixing details [11]. These files are crucial basic
information for project quality control and problem tracing
and are related to the whole life cycle safety of the project.
However, the management of concrete production files currently suffers from problems such as low informatization and insufficient security [12, 13]. Firstly, the current management method wastes paper. The volume of files related to concrete mixing production is enormous: the amount of grouting required for reservoir construction is usually more than 100,000 cubic meters, which generates a massive amount of paper data that is difficult to store and manage. Secondly, the paper-based management method has low credibility; manually transcribed paper files are not standardized, and falsification of paper files often occurs. Thirdly, the traceability of paper files is feeble: the cataloging and archiving of concrete production files have not yet formed a strict and complete discipline and management system, so practical issue tracing is difficult to achieve. Finally, the current management methods provide insufficient security, owing to the lack of security and confidentiality controls for the massive volume of paper files.
In response to the above problems, more and more researchers have devoted their efforts to studying the digital file management of water conservancy projects, especially the informatization of the concrete mixing process, and preliminary research results have been achieved. Representative projects include the Jingtaichuan Dam in Gansu Province, the Daxing Water Conservancy Hub Project in Guizhou Province, and the Chushandian Reservoir in Henan Province [14]. However, these research results still fail to completely overcome the low antitampering ability and poor antirepudiation ability of digitized archives [15]. Digital archives remain exposed to risks, and it is difficult for them to effectively manage the concrete mixing procedure and ensure the procedure's safety.
The quality of concrete production and the management
of related production documents are closely associated with
the safety of people’s lives and property. They have a high
level of tamper-proof and repudiation-proof requirements
[16]. Although the Chinese government has established the
corresponding laws to push forward electronic signatures
steadily, digital files are bound to face severe trust and security concerns when transmitted over the Internet and stored on centralized servers for long periods [17]. The higher the sensitivity of the data, the greater the risk of using high-tech means to "blacken" it. Apparently, there are substantial technical difficulties in achieving highly informatized, paperless management of concrete mixing files. How to free the concrete mixing files and the corresponding data from these security threats has become a hot topic in the intelligent water conservancy field.
In 2009, blockchain technology was proposed to guarantee the security of data. Recently, applications based on
blockchain have increasingly appeared in various fields of
daily social life, such as finance, public services, culture and
entertainment, data insurance, and general welfare [18–20].
However, the present archival research on the blockchain
mainly focuses on the feasibility of document archive management and specific application methods [21]. Many scholars have proposed blockchain-based applications for standard archival management, such as museum
archives, student archives, and medical information archives
[22, 23]. Other scholars have also discussed the challenges
and troubles that blockchain technology may face when
applied in archival management [24, 25]. But there are only
a few cases of the practical application of blockchain-based
archive management in hydraulic engineering fields [26].
In summary, applying blockchain and related technologies
in engineering construction archive management, especially
to critical aspects such as concrete production, has received
less attention in relevant studies at home and abroad.
Based on the current research foundation in related fields, this paper takes the whole concrete production process as the research object, integrates blockchain technology with the specific needs of water conservancy projects, and improves the management quality of electronic files in the concrete mixing process. Meanwhile, this paper also proposes a highly integrated information management system to guarantee the data security of each step in the concrete mixing procedure. The specific steps are as follows: first, study the relationship between the different concrete production departments and establish a behavioral model describing the concrete production process; second, design and implement a distributed blockchain data structure for concrete production process management; third, use keyless signature technology to manage the type-in, modification, and approval process of the concrete production files; finally, upload all the files generated in the concrete production process to the blockchain to achieve openness and transparency of the entire process, guaranteeing accurate traceability of production files and quick location of quality problems, thus effectively supervising data quality and safety in intelligent water conservancy project construction.
## 3. Behavioral Model for the Concrete Production Process
3.1. Process Sorting and Role Assignment. To establish a behavioral model for the concrete production process, we first need to sort out the production process. As shown in Figure 1, the concrete production process is divided into a raw material preparation part and a concrete production part. Specifically, the raw material preparation part can be divided into import and test subparts, while the concrete production part can be divided into the mix proportion design subpart, the concrete mixture subpart, and the concrete test subpart.
The raw materials for concrete production include
cement, fly ash, admixtures, coarse aggregate, and fine aggregate. The first three materials are transported and supplied
by the corresponding manufacturers, while the other materials can be produced by the mixing plant itself. As
Figure 1 illustrates, in the material import stage, the quality
and quantity reports are provided along with the entry of
the purchased raw materials. When the raw materials are
in storage, the laboratory of the mixing plant will sample
and measure them and then record the report of the material
test results in the ledger by computer. Besides, self-made raw
materials like coarse and fine aggregates are also tested in
detail and recorded by the laboratory of the mixing plant
either. At last, all these raw material inspection reports are
submitted to the supervision, and the supervision’s approval
allows the materials to participate in concrete production.
In the concrete production stage, the construction unit
submits an application of concrete to the mixing plant.
Moreover, the required concrete grade and performance
requirements, the required quantity, and the use purpose
are also informed to the mixing plant at the same time. After
receiving the application, the laboratory personnel in the
mixing plant will inspect the moisture content of the sand and stone, check the oversize and inferior grains in the aggregate according to the relevant regulations, and then design the
concrete mixing proportion. After the supervisor confirms
the mixing ratio, the relevant mixing information is provided to the mixing plant. The mixing plant strictly follows
the ratio, sets the raw material feeding value, and operates
the mixing plant for concrete production. Besides, the raw
material temperature and weighing information are
recorded during the concrete mixing process according to
the regulations. After the concrete is mixed, samples are
taken from the outlet of the mixing plant; then, the construction unit tests the samples’ quality and forms the sample
record and test report.
By sorting out the concrete production process, the roles can be assigned as follows. The main characters
involved in the production process are the mixing plant,
the laboratory of the mixing plant, the construction department of China Railway 12th Bureau (CR-12 in short), the
laboratory of CR-12, the supervisor, and the third-party testing center. Specifically, the mixing plant and its laboratory
worked in the raw material preparation stage, while CR-12
and the corresponding laboratory worked in the concrete
production stage. Finally, the supervisor and the third-party testing center took part in every stage of the concrete
production process to ensure the safe and reliable quality
of the whole concrete production process.
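To make the role assignment easier to follow, the sketch below models the production stages and participating roles described above as plain Python data structures. It is an illustrative sketch only: the class, enum, and stage names are our own shorthand, and the stage-to-role mapping follows our reading of the text rather than any identifier from the actual system.

```python
from dataclasses import dataclass
from enum import Enum
from typing import List

class Role(Enum):
    MIXING_PLANT = "mixing plant"
    MIXING_PLANT_LAB = "laboratory of the mixing plant"
    CR12_CONSTRUCTION = "construction department of CR-12"
    CR12_LAB = "laboratory of CR-12"
    SUPERVISOR = "supervisor"
    THIRD_PARTY_LAB = "third-party testing center"

@dataclass
class Stage:
    name: str
    participants: List[Role]

# The supervisor and the third-party testing center take part in
# every stage to ensure quality across the whole production process.
OVERSIGHT = [Role.SUPERVISOR, Role.THIRD_PARTY_LAB]

STAGES = [
    Stage("material import", [Role.MIXING_PLANT] + OVERSIGHT),
    Stage("material test", [Role.MIXING_PLANT_LAB] + OVERSIGHT),
    Stage("mix proportion design", [Role.MIXING_PLANT_LAB] + OVERSIGHT),
    Stage("concrete mixture", [Role.MIXING_PLANT] + OVERSIGHT),
    Stage("concrete test", [Role.CR12_CONSTRUCTION, Role.CR12_LAB] + OVERSIGHT),
]
```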
3.2. Classification of Concrete Production Files. The second
step of building the concrete production behavioral model
is to classify all the files involved in the concrete production
process according to their attributes. The files include the
raw material performance testing records before concrete
mixing, the concrete supply contact sheets, the descriptions
on concrete mixing proportion, the records about the mixing process, the result of the concrete performance testing,
forms related to each cycle's errata, and summaries. The flow among these files is as follows: the manufacturers supply the raw materials to the mixing plant for
concrete production. After production, the mixing plant’s
laboratory samples the concrete and conducts a quality
inspection. If the concrete meets the quality standards, it
would be transported to the construction department of
CR-12 by vehicles. After the additional tests conducted by
the laboratory of CR-12, the construction department of
CR-12 builds the water conservancy facilities with qualified
concrete. At last, as a neutral third party, the supervisor
keeps on inspecting the concrete by commissioning a
third-party laboratory to sample and test the concrete at all
stages during the production.
In total, after summarizing the files involved in the concrete production process, 50 categories of forms are
obtained. There are a total of 29 forms related to raw materials, 1 contact sheet for material supply, 7 forms related to
the concrete mixing process, 12 forms related to testing,
and 1 form for erratum summary. The details are in
Figure 2.
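As a quick sanity check of this tally, the five category counts quoted above can be summed programmatically:

```python
# Form categories and counts as reported in the text (Figure 2).
form_counts = {
    "raw materials": 29,
    "material supply contact sheet": 1,
    "concrete mixing process": 7,
    "testing": 12,
    "erratum summary": 1,
}
assert sum(form_counts.values()) == 50  # 50 categories in total
```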
## 4. Distributed Blockchain Data Structure
4.1. General Framework Design. Based on the behavioral
model, the distributed blockchain data structure can be constructed, and then, the preservation, categorization, and
management of the concrete production-related archives
can be achieved. The general framework design is illustrated
in Figure 3. In Figure 3, the archives generated in concrete
production are divided into temporal and spatial levels in
the order of warehouse blocks, procedure blocks, branch
blocks, and unit blocks.

Figure 1: Schematic diagram of the concrete production process.

Figure 2: Classification and statistics of the concrete production files.

Figure 3: General framework of the distributed blockchain data structure.

The structure in Figure 3 represents a
comprehensive mixing procedure for a warehouse of concrete, and the warehouse is the fundamental quantity unit
in the concrete production process. Inside the data structure,
the subblock is composed of one or several distributed ledgers. The functions and properties of each subblock in the
distributed blockchain are described below.
The top element in the distributed blockchain data structure is the warehouse block. The warehouse block contains
all the files during the concrete mixing process. In the actual
environment, the whole construction procedure of the water
conservancy project is often divided into unit projects, division projects, and cell projects. Further, the cell projects are
refined into a series of sequential subtasks, and the data and
corresponding files generated in each subtask are formed as
a warehouse block. Like the example in Figure 3, during the
mixing process, the computers automatically record the production data for each tray of concrete, including the set and
actual usage amount of the raw materials, the mixing time,
the use of the concrete, and other detailed information.
The warehouse blocks are made up of cyclic packets, and
they are numbered sequentially from 0001 onwards in chronological order.
The procedure blocks are the blocks that indicate the
specific flows of the concrete production. Note that a block cannot be formed until the previous one has been generated, and all blocks in the same layer are chained together in a tandem pattern. As shown in Figure 3, the concrete production process contains five procedure blocks: data before mixing, supply list, data in mixing, outlet sampling, and errata and summary.
The blocks in the branch block layer are the distributed
ledgers created and categorized by different roles in the concrete mixing procedure. For example, @Aa, @Ab, @Ac, and @Ad
are the branch blocks under the same procedure block A
in Figure 3; they represent the file collections in the mixing
plant, the producer, the lab, and the supervisor, respectively.
The bottom layer in the distributed blockchain data
structure is the unit block layer. In this layer, the unit blocks
are the specific files, forms, images, or other media items with archival requirements in the branch blocks, marked
with 001, 002, and so on. As shown in Figure 3, each unit
block refers to one file created by a specific role.
Besides, the naming scheme of the proposed distributed
blockchain data structure is as follows: Firstly, the “@” or “#” in front of each unit block number indicates whether the block is shared (“@”) or newly generated (“#”). Secondly, the warehouse block is often
divided into blocks for the raw material test, blocks for the
concrete inspection, and the other blocks. Among them,
the raw material inspection blocks record the samples and the test results of raw materials, with one sampling unit per 200–400 t of cement, per 100–200 t of fly ash, and per 50 t of admixture. Thirdly, if the unit
block is shared, the provenance of the shared data should
be indicated, and the indication method is to add the name
of the warehouse block which contains the shared block.
For instance, consider a cement test report “×××××Ac013”: suppose a new cement test report is generated in warehouse block “0020”; then its number is “0020#Ac013.” If the next five warehouse blocks “0021,” “0022,” “0023,” “0024,” and “0025” need to quote the previous report rather than generating new cement inspection reports, then the quoted report is named “0020@Ac013.” But if the warehouse block
“0026” generates a new cement inspection report, then the
name of the report is “0026#Ac013.”
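The naming rule above can be captured in a few lines of code. The sketch below is a minimal illustration assuming four-digit warehouse numbers as in the example; the helper function name is our own.

```python
from typing import Optional

def unit_block_name(warehouse_no: str, block_id: str,
                    shared_from: Optional[str] = None) -> str:
    """Name a unit block under the @/# scheme: '#' marks a file newly
    generated in this warehouse, while '@' marks a quotation of a file
    from an earlier warehouse, prefixed with that warehouse's number."""
    if shared_from is None:
        return f"{warehouse_no}#{block_id}"
    return f"{shared_from}@{block_id}"

# Warehouse 0020 generates a new cement test report:
print(unit_block_name("0020", "Ac013"))                      # 0020#Ac013
# Warehouses 0021-0025 quote that report instead of creating new ones:
print(unit_block_name("0021", "Ac013", shared_from="0020"))  # 0020@Ac013
```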
4.2. Distributed Storage Architecture for Digital Archives.
Based on the design of blockchain data structure, this paper
classifies the files according to different production roles and
then stores them in a distributed manner. Specifically, the main characters participating in the concrete production procedure
keep their own files locally. For instance, the construction
department that initialized and transmitted the supply list
would leave a copy of the list in the server of CR-12. Similarly, if the mixing plant initiates the batching notification
form, then the form is stored in the computer of the mixing
plant. The rules for the rest of the file storage locations are
similar, except that the files that are shared by different
branches should be stored by both the sending and the
receiving units.
Moreover, the data-sharing scheme is another crucial
part of distributed storage architecture, and it consists of
two parts: the data sharing between files and inside files.
The data sharing between files means keeping the same sections’ consistency and accuracy in different files. There is a
mapping or logical relationship between information in
some files and information in the other files during transmission. Therefore, when creating such files, we first store
this information in public memory and then automatically
obtain the corresponding data with the same content on different files. For illustration, “construction site, strength
grade, collapse level, and planned quantity” in the batching
order are derived from the contents “construction site, seepage and frost resistance, collapse level and outlet temperature, and concrete supply order” in the supply list.
Similarly, the “oversize content” and “undersize content”
on the batching notice come from the same contents on
the coarse aggregate test records. However, if the shared data
is inconsistent, we will issue warning messages to senders
and receivers. Then, the file is rejected by the receiver until
the sender makes corrections. During the revision, the role that created the file first checks whether the inconsistency was indeed caused by its own entry and, if so, resolves the dispute by amending the filled-in content. Otherwise, the inconsistency is caused by incorrect data in the system; in that case, the dispute is temporarily put on hold through the consensus mechanism and resolved through the errata at the end of this warehouse block.
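A minimal sketch of this inter-file consistency check, assuming files are represented as simple field dictionaries; the field values and the function name are illustrative, not taken from the actual system.

```python
def check_shared_fields(sender_file: dict, receiver_file: dict,
                        shared_keys: list) -> list:
    """Return the shared fields whose values disagree between two files.

    An empty result means the shared sections are consistent; otherwise
    the receiver rejects the file and warning messages are issued."""
    return [k for k in shared_keys
            if sender_file.get(k) != receiver_file.get(k)]

supply_list = {"construction site": "Dam section 3", "collapse level": "S2"}
batching_order = {"construction site": "Dam section 3", "collapse level": "S3"}

mismatches = check_shared_fields(supply_list, batching_order,
                                 ["construction site", "collapse level"])
if mismatches:
    print("reject file; inconsistent shared fields:", mismatches)
```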
The data-sharing scheme inside files means that the files
are shared between different warehouse blocks. For illustration, in the raw material test stage, an inspection form for
raw materials may cover more than one warehouse block;
then, these blocks share the same inspection form. As mentioned above, the specific method to distinguish the shared
and the unshared data uses “@” and “#” symbols as
indicators.
## 5. Edge Computing Supported Intelligent Keyless Signature
During the concrete mixing, every authentic and valid file
requires the signature of every department’s principal, and
the signature means the approval of the file content. This
signature process is represented as the form-filling operation
in the proposed model. However, there is a risk of tampering
with the file during the filling process. The traditional
approach is to introduce asymmetric encryption technology
in the file approval process to ensure the security of
transmission and the file’s integrity. But this technical
approach has certain management risks because it involves
the management of an individual’s private key. Therefore,
this paper employs a keyless signature technology based on
edge computing to standardize the form-filling process and
provide security for electronic files.
5.1. Hash Tree Construction Based on Edge Computing. The
fundamental method for securing data during file transmission is to compute a hash over the file and treat the result as a digital fingerprint that proves the file’s authenticity. In detail, the proposed management model uses the SHA-256 hash algorithm to hash each file into a 256-bit value, performs a series of operations on the hash values, and builds up a hash tree. The process is in Figure 4.
In Figure 4, x1 to x8 represent the hash values calculated with the SHA-256 algorithm, and these values are the input of the leaf nodes in the hash tree. h(·) denotes the hash function, and the vertical line represents the join operation; note that h(x1 | x2) ≠ h(x2 | x1). The hash tree uses the hash function to fulfill a zero-knowledge proof and ensure that the file is authentic. For example, suppose the holder of the initial data x3 knows the hash values {x4, x12, x58} and their position markers {1, 0, 1}. In that case, the root value can be recreated, thus proving that x3 is involved in calculating the generated root value. In total, based on hash chains, goals such as fast comparison of massive data, locating modified data, and constructing zero-knowledge proofs can be easily achieved. The hash chain computing process is shown in Figure 5.
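The membership proof in the x3 example can be reproduced in a few lines of Python. The sketch below builds the eight-leaf tree of Figure 4 over placeholder data and recomputes the root from x3, the sibling hashes {x4, x12, x58}, and the position markers {1, 0, 1}, where a marker of 1 is taken to mean that the sibling is joined on the right; the file contents are made up for illustration.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def recompute_root(leaf: bytes, audit_path: list) -> bytes:
    """Recompute the Merkle root from a leaf and its audit path of
    (sibling_hash, position) pairs; position 1 joins on the right."""
    node = leaf
    for sibling, position in audit_path:
        node = h(node + sibling) if position == 1 else h(sibling + node)
    return node

# Leaves x1..x8 are themselves SHA-256 digests of the data items.
x = [h(f"file-{i}".encode()) for i in range(1, 9)]
x12, x34 = h(x[0] + x[1]), h(x[2] + x[3])
x56, x78 = h(x[4] + x[5]), h(x[6] + x[7])
x14, x58 = h(x12 + x34), h(x56 + x78)
root = h(x14 + x58)

# Prove x3 participated in the root using {x4, x12, x58} and {1, 0, 1}:
assert recompute_root(x[2], [(x[3], 1), (x12, 0), (x58, 1)]) == root
```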
Further, to secure data transfer and file integrity from
the spatial dimension, a large number of hash trees need to
be aggregated into Merkle trees simultaneously, and edge
computing is the best way to achieve such goals. A Merkle
tree consists of a root node, some intermediate nodes, and
a set of leaf nodes. Each leaf node is labeled with the hash
value of the digital file, while intermediate nodes other than
the leaf nodes are marked with the cryptographic hash of
their child node labels. Creating a complete Merkle tree
requires recursively hashing a set of nodes and inserting
the generated hash nodes into the tree until only one hash
node remains, which is also called the Merkle root. The construction process of the Merkle tree is in Figure 6.
As shown in Figure 6, Merkle trees are created and
destroyed once per second. These trees are composed of a
hierarchical network of geographically independent, distributed computing nodes. Each node operates in an asynchronous aggregation fashion, generating a hash tree by receiving hash values from its subtrees and transmitting the hash root values to multiple parents. The aggregation process is theoretically unbounded and runs on top of virtual machines or dedicated hardware. Moreover, in a keyless signature system with a multilayer aggregation hierarchy, the acceptable theoretical limit of the system is 2^64 signatures per second, since an aggregation tree of depth 64 can absorb 2^64 leaf hashes in each one-second round.
5.2. The Intelligent Keyless Signature System. The keyless signature system based on Merkle trees is shown in Figure 7,
and the specific tree construction process can be described
as follows. Firstly, the department participating in the concrete mixing procedure submits the hash value (the blue dots
in Figure 7) of the file to the customized keyless signature
gateway. Secondly, the adjacent hash values are connected
in series, and an additional hash operation is performed on the concatenated values. Subsequently, the newly calculated hash value is submitted to the upper layer for serial hash operations until the
Merkle tree’s root is created. Finally, the keyless signature
gateway returns a keyless signature to the department. The
keyless signature contains the hash value submitted in the
previous step and the sequence to regenerate the hash root
value. This keyless signature is a hash chain composed of
coordinates like the red dots in Figure 7. With this keyless
signature system, the concrete construction department
can ensure the spatial integrity of electronic data.
Besides guaranteeing the spatial integrity of the electronic files, the intelligent keyless signature system based on Merkle trees can also ensure their temporal reliability. The mechanism is as follows: first, the keyless signature system stores the hash root values of the per-second trees, which are created and destroyed every second, in a shared database called the calendar database. Specifically, since 00:00:00 on January 1, 1970, the hash value of each second has been regarded as a leaf node, forming a particular type of permanent hash tree, also known as a Merkle forest. The calendar hashes are periodically aggregated to generate the integrity code's hash value. In a keyless signature system, the calendar database's integrity code is regularly issued in electronic and paper form in the world media, as shown in Figure 8 [27]. After the integrity code is released in the electronic or paper-based public media, the authenticity of all signatures can be evaluated by tracing back the integrity code, thus ensuring the temporal integrity of the data [28].
5.3. Signing and Verification of Production Files. Signing and
verifying the production files based on keyless signature are
illustrated in Figure 9. As described at the top of
Figure 9, when a file is created and needs to be signed during
the concrete production process, first, the signatories make a
hash calculation on the file with the SHA-256 function and
then submit the hash value to the distributed keyless signature server. From the one-way nature of the hash function,
it is clear that the hash value is only the credential for applying a keyless signature, so the privacy of the original file is
still preserved. In the second step, the keyless signature server that
receives the hash value performs a calculation through the
hash chain and returns a keyless signature starting from
the root node of the Merkle tree to the signatories as a
response. In the third step, the keyless signature server
timely releases the integrity code through newspapers or
other forms. Note that the integrity code is preserved in
the online calendar database after its release.
The verification of signed files usually occurs in the file
approval stage. As shown at the bottom of Figure 9, when
the validator receives a signed file from the previous signatory, in order to verify the authenticity of the data, first
and foremost, the received file and its corresponding keyless
signature should be aggregated to conduct a hash
computation. Next, the integrity code associated with the signed file is retrieved from the online database and compared with the hash computation result.

Figure 4: Schematic diagram on hash tree construction (leaf hashes x1–x8 are pairwise joined and hashed, e.g., x12 = h(x1 | x2), up to the root xroot = h(x14 | x58)).

Figure 5: Schematic diagram on hash chain computing.

Figure 6: Parallel construction of the Merkle tree based on edge computing.

Figure 7: The keyless signature system legend.

Figure 8: The Merkle forest structure.

If the comparison result is consistent, it indicates that the signature data is accurate and trustworthy, the file transmission and approval process is in line with the standard requirements, and the data has not been tampered with. If the comparison result is
inconsistent, it proves that the concrete production file management process is not standardized and data security risks
have occurred.
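A compact end-to-end sketch of the signing and verification steps above, with the keyless signature gateway mocked as a local function that aggregates a single sibling hash. In a real deployment the gateway, the calendar database, and the published integrity code are external services; all names and the one-sibling aggregation are simplifying assumptions.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def gateway_sign(file_hash: bytes, other_hashes: list):
    """Mock keyless signature gateway: aggregates this round's hashes
    into a Merkle tree and returns (audit_path, integrity_code)."""
    sibling = other_hashes[0]            # real gateways aggregate many
    root = sha256(file_hash + sibling)
    audit_path = [(sibling, 1)]          # 1: sibling joined on the right
    return audit_path, root              # root stands in for the code

# Signing: the signatory submits only the file's hash (privacy kept).
file_bytes = b"batching notification form, warehouse 0020"
signature, integrity_code = gateway_sign(sha256(file_bytes),
                                         [sha256(b"other form")])

# Verification at the approval stage: recompute and compare.
node = sha256(file_bytes)
for sibling, pos in signature:
    node = sha256(node + sibling) if pos == 1 else sha256(sibling + node)
print("file authentic:", node == integrity_code)
```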
In summary, the implementation of the keyless signature
system could standardize the approval and writing process
of files during the concrete production process, supervise
every step of data generation, and eliminate irregular data recording and internal tampering, thus protecting the security of the concrete production files to a higher degree and over the long term.

Figure 9: Data signing and verification process based on the keyless signature.

Figure 10: Schematic diagram of the chain structure.
## 6. Production File Management Based on Blockchains
6.1. Chain Structure Design. Based on the data structure and
the keyless signature system, the chain structure of the concrete production file and the corresponding data on-chaining process are in Figure 10. When all the files involved
in each concrete warehouse are collected, each file’s hash
value, also known as the unit block in the distributed blockchain data structure, is calculated separately. Then, a series
of unit blocks aggregate two by two to form a binary tree,
the root of which is called a procedure block. Thirdly, many
procedure blocks are aggregated into a compound Merkle
tree, and the root is regarded as the warehouse block. Finally,
by aggregating the current warehouse block with the previous warehouse block into a Merkle tree and storing the root
of the tree on a trusted blockchain, the information security
of the adjacent two warehouse blocks can be ensured.
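The four-level aggregation just described can be sketched as below: unit hashes aggregate into procedure blocks, procedure blocks into the warehouse block, and the warehouse block is chained with its predecessor before the root goes on chain. Padding an odd level by promoting its last hash is one common Merkle convention and an assumption on our part, since the paper does not specify how odd counts are handled.

```python
import hashlib

def h2(a: bytes, b: bytes) -> bytes:
    return hashlib.sha256(a + b).digest()

def merkle_root(hashes: list) -> bytes:
    """Pairwise-aggregate a list of hashes up to a single root."""
    level = list(hashes)
    while len(level) > 1:
        if len(level) % 2:               # odd count: promote the last hash
            level.append(level[-1])
        level = [h2(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

unit_hashes = [hashlib.sha256(f"file-{i}".encode()).digest() for i in range(40)]
procedure_blocks = [merkle_root(unit_hashes[i:i + 8]) for i in range(0, 40, 8)]
warehouse_block = merkle_root(procedure_blocks)

# Aggregate the current warehouse with the previous one; the root of
# this two-leaf Merkle tree is what gets stored on the trusted chain.
previous_warehouse = hashlib.sha256(b"warehouse 0019").digest()
on_chain_root = h2(previous_warehouse, warehouse_block)
```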
As Figure 10 illustrates, compared with the traditional
paper form files, the electronic files are more conducive to
data search and analysis. Besides, electronic information
security and traceability can be improved markedly by the blockchain-based information deposition mechanism compared with the conventional centralized storage database.

Figure 11: The organization of the warehouse block.
6.2. Automatic Data On-Chaining Mechanism. Creating one
warehouse block and uploading it onto the blockchain means that the mixing plant has finished an entire concrete production task, from batching through mixing to completion, according to the instructions. A warehouse usually produces tens to hundreds
of cubic meters of concrete. In the proposed model, the
warehouse blocks are at the top level, and adjacent warehouse blocks are linked in tandem with time stamps. The
organization of each warehouse is in Figure 11.
As shown in Figure 11, the blocks of each warehouse
employ the Merkle tree structure to organize data, which is
compatible with the signature generation mechanism in
the keyless signature system. In Merkle trees, the two leaf
nodes on each set of forks represent two files, and the files
are paired two-by-two in the order of their generation time.
In Figure 11, the file hash is regarded as a unit hash, and the
branch hash is generated by two-by-two aggregation of all
unit hashes. Further, the procedure hash is composed of
a two-by-two accumulation of branch hashes, and the warehouse hash is made up of pair-wise procedure hashes.
Finally, the automatic data uploading is finished when all
unit hashes are chained to form a warehouse hash.
Due to the structural characteristics of the Merkle tree,
any changes in the underlying data will lead to changes in
its parent nodes and eventually affect the changes in the
Merkle root. So the Merkle tree has the advantages of efficient comparison of a large amount of data, fast location of
modified data, and fast verification of incorrect data, which
are all demonstrated explicitly in the proposed management
model. For illustration, when two Merkle tree roots are the
same, the data they represent must be the same, which
makes data verification between different users possible.
Besides, when the underlying data is changed, its location
can be quickly detected by inspecting the corresponding
branch. With this feature, the proposed model can easily fulfill fast querying of the information about the abnormal data.
Last but not least, when it is necessary to prove the originality and authenticity of the data, only the hash summary of
the data needs to be validated without knowing the exact
content of the data.
6.3. Smart Contracts and Consensus Algorithm. The smart
contracts in our blockchain management model are fulfilled
by introducing various forms of notification measures such
as emails and cell phone applets to inform users of pending
matters and remind them of the approval delays during file
flow. Beyond that, to ensure data consistency during the
automatic data on-chaining process, our model adopts
Byzantine Fault Tolerance (BFT) [29] as the consensus algorithm to synchronize the data to be recorded.

Figure 12: The implementation model of the consensus mechanism.

The automatic concrete production data on-chaining mechanism based on the BFT consensus algorithm is shown in Figure 12 and can be divided into five steps, as follows (a toy simulation follows the list):
(1) When the supervisor starts the approval of the concrete mixing order, the action will be considered a
transaction, and the proposed model broadcasts this
transaction to all blockchain nodes, including raw
material providers and construction units
(2) After hash computation, the supervisor broadcasts
the hash value of the transaction to all blockchain
nodes
(3) Each blockchain node (participating construction
unit) makes a hash after receiving the transaction
and compares it with the supervisor’s hash sequence
(4) After all nodes receive the message that more than
half of the comparisons are approved, the transaction is deemed to be established
(5) The transaction is recorded into the block
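The toy simulation below illustrates the majority-vote core of steps (1)-(5); it is not a full BFT implementation, and the node count and transaction strings are made up.

```python
import hashlib

def digest(tx: str) -> str:
    return hashlib.sha256(tx.encode()).hexdigest()

def reach_consensus(supervisor_hash: str, received_txs: list) -> bool:
    """Each node hashes the transaction it received and compares the
    result with the supervisor's broadcast hash; the transaction is
    established once more than half of the comparisons approve."""
    approvals = sum(1 for tx in received_txs if digest(tx) == supervisor_hash)
    return approvals > len(received_txs) / 2

tx = "approve concrete mixing order, warehouse 0020"
supervisor_hash = digest(tx)

# Three nodes received the transaction intact; one received a tampered
# copy. Three approvals out of four exceed half, so the block is written.
received = [tx, tx, tx, tx + " (tampered)"]
print("transaction recorded:", reach_consensus(supervisor_hash, received))
```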
6.4. Validation and Abnormal Block Tracking. The validation and tracking of the production files can also be fulfilled
by the blockchain. As Figure 13 shows, the process is to verify the file’s integrity and record the location of the files that
failed the verification. Specifically, the information of the
abnormal file is retrieved from the database; then, the values
of the file and the corresponding warehouse are recalculated
according to the calculation rules at the time of uploading.
Subsequently, the newly calculated warehouse values are
compared with the corresponding uploaded warehouse
values on the blockchain in sequence according to the warehouse organization order.
If the comparison of the hash values is consistent, all the electronic files in the warehouse are safe and have not been tampered with, so it is unnecessary to continue comparing the detailed information of this warehouse. But if an inconsistency occurs, the files contained in that warehouse have been lost or tampered with, so it is neces
sary to continue to compare the hash value of each file in
that warehouse. The processes of file hash matching and
warehouse hash matching are the same; the newly calculated
file hash is compared with the file hash recorded on the
blockchain. The file that contains inconsistent hash comparison results is recorded. Thus, the traceability of the problematic blocks can be achieved.
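This two-stage comparison can be sketched as below: the warehouse hash is checked first, and only on a mismatch is each file hash compared to locate the problem. The flat warehouse hash here is a simplified stand-in for the Merkle aggregation, and all names are illustrative.

```python
import hashlib

def file_hash(content: bytes) -> bytes:
    return hashlib.sha256(content).digest()

def warehouse_hash(hashes: list) -> bytes:
    # Simplified stand-in for the Merkle aggregation used on chain.
    return hashlib.sha256(b"".join(hashes)).digest()

def locate_tampered_files(local_files: dict, chain_file_hashes: dict,
                          chain_warehouse_hash: bytes) -> list:
    """Compare warehouse hashes first; only on a mismatch compare each
    file hash against the on-chain record to locate the problem."""
    recomputed = warehouse_hash([file_hash(c) for c in local_files.values()])
    if recomputed == chain_warehouse_hash:
        return []                                    # warehouse intact
    return [name for name, content in local_files.items()
            if file_hash(content) != chain_file_hashes[name]]

# On-chain record taken at upload time:
files = {"Aa001": b"material quality report", "Ca001": b"mixing record"}
chain_hashes = {k: file_hash(v) for k, v in files.items()}
chain_root = warehouse_hash(list(chain_hashes.values()))

files["Ca001"] = b"mixing record (tampered)"         # later modification
print(locate_tampered_files(files, chain_hashes, chain_root))  # ['Ca001']
```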
## 7. Model Application
7.1. Overall Architecture. In this paper, a concrete production management system based on the proposed model has
been developed and implemented in the Hanjiang to Weihe
River Project in Shaanxi Province to verify the model’s practicality and security. The system adopts a browser-server (B/S) architecture,
and all users can log in and use it directly through a browser.
Figure 14 shows the overall architecture.
The concrete production information management system mainly manages data related to concrete production in
the water conservancy project construction, including standardized management of file filling, unified management of
data archiving, and automatic uploading of production files.
The management system consists of user management,
menu management, process management, parameter management, authority management, and log management.
Through the network interface provided by the management
system, different construction units in the concrete production system automatically import or manually enter various
information about concrete production and create electronic
files. After that, the file, branch, procedure, and warehouse
hash are generated sequentially, and then, they are organized
to the tree structure according to the distributed blockchain
data structure.
The generated hash values are uploaded to a credible
blockchain for deposition. The information interaction
between the blockchain and the information management
platforms is fulfilled through port calls. In our information
management system, the blockchain is the consortium
blockchain called the Blockchain-based Service Network.
This blockchain was jointly initiated by the State Information Center, China Mobile Communications Corporation,
China UnionPay Corporation, and Beijing Red Date
Technology Corporation [30]. Besides, this consortium blockchain provides the storage, verification, and traceability of hash values and facilitates historical data security verification.

Figure 13: Schematic diagram of the abnormal block tracking flow.

Figure 14: Architecture of concrete production information management system.

In practice, the system was implemented in the Hanjiang to Weihe River Project to collect and organize the concrete production-related files in 2020. The total concrete produc
tion volume in the project in 2020 was about 170,000 cubic meters, which generated about 16,000 related paper forms in
total. At present, we have entered and uploaded some of the
files, including 18,000 cubic meters of concrete related to
more than 3,500 forms, and stored these records on the consortium blockchain. The data server and the application
server configurations in our system are the same: both are
dual-core and quad-threaded, with 8 GB of memory and 500 GB of storage space, which meets the minimum requirements for civil engineering applications [31].

Figure 15: Comparison of system time consumption by phase (per-file time for keyless signature generation, Merkle tree creation, data on-chaining, and signature verification).
7.2. Experiment and Analysis. The experiment for testing the
system performance is designed as follows: one warehouse
block is selected randomly as the experiment object from
the actual concrete production process. The target warehouse block contains 40 files, including supply contact
sheets, production notification sheets, quality inspection
sheets, and errata summary sheets, recording a complete
concrete production process. The file creation and transmission are standardized with the keyless signature. After the
files are filled and verified, they are uploaded on the blockchain for permanent storage. Besides, the time for the Merkle tree construction, the keyless signature creation, the
signature verification, and files’ chaining are recorded separately. The specific time spent on the four steps of the 40
concrete production files is shown in Figure 15.
As shown in Figure 15, the overall trend of keyless signature generation time per file rises as the production data increase. By fitting a linear regression model, it can be seen that the slope of the total keyless signature time is about 0.109: as the size of the Merkle tree grows with each additional keyless signature registration, the generation time of the following file increases by about 0.109 s.
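The slope figures in this subsection come from ordinary least-squares fits over the per-file timings. Since the raw measurements behind Figure 15 are not published, the sketch below reproduces the fitting procedure on synthetic timings generated around the reported trend:

```python
import numpy as np

# Hypothetical per-file keyless-signature times (seconds) for 40 files,
# generated around the reported slope of 0.109 s per additional file.
file_no = np.arange(1, 41)
sig_time = 5.0 + 0.109 * file_no + np.random.default_rng(0).normal(0, 0.3, 40)

slope, intercept = np.polyfit(file_no, sig_time, 1)
print(f"fitted slope: {slope:.3f} s per file")  # close to 0.109
```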
Secondly, the average time consumption for data on-chaining is about 3.39 s, with a slope of -0.009; the erratic fluctuation is caused by blockchain instability and network variation.
Thirdly, the time for the Merkle tree generation also shows an increasing trend correlated with the keyless signature generation time, because the keyless signature is built from the hash values along the root-to-leaf sequence of the Merkle tree and the corresponding sequence coordinates. The slopes of Merkle tree creation and keyless signature generation are similar under linear fitting, which indicates that the creation time of the Merkle tree is the main factor driving the growth of the keyless signature generation time.
Fourthly, the average time to verify the on-chain data is
about 1.38 s. The slope of the linear fit function is 0.019,
indicating that the verification is swift for on-chain data.
The verification efficiency is mainly affected by the structure
of the warehouse block.
Besides, we also conducted tests on keyless signature sizes. As shown in Figure 16, the keyless signature size of each file is about 157 KB, and its storage cost is less than 0.01 yuan. The keyless signature storage cost of the whole warehouse is less than 0.1 yuan, a cost that is almost negligible compared with the benefits of data security.
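A back-of-the-envelope check of these storage figures, taking the reported 157 KB per signature and the 40 files of the experimental warehouse block as given:

```python
per_signature_kb = 157      # measured keyless signature size per file
files_per_warehouse = 40    # files in the experimental warehouse block

total_mb = per_signature_kb * files_per_warehouse / 1024
print(f"signatures per warehouse: about {total_mb:.1f} MB")  # ~6.1 MB
```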
Finally, we use the number of transactions processed per
second as the criterion for system throughput to evaluate the
entire performance. The throughput of the relevant smart
contracts is calculated for different concurrent requests.
The number of concurrent requests is set from 100 to
1000, and 10 experiments are conducted in sequence. At last,
the average values are taken as the experimental results. The
throughput of the smart contracts is in Figure 17.
In Figure 17, the throughput of the write operation (data
on-chaining) is overall lower than that of the read operation
(signature verification). In other words, the write operation
tends to be more time-consuming than the read operation.

Figure 16: Storage space consumed by keyless signatures (x-axis: file number).

Figure 17: Throughput of the smart contract with the concurrent requests.

It
is because the write operation needs to hash the data and
generate a historical version of the old data to ensure the
traceability of the blockchain. On the other hand, the read operation only needs to search and validate data based on index
positions, thus taking less time. Besides, the system’s
throughput increases with the number of concurrent
requests. However, when the number of simultaneous
requests reaches a certain value, the growth trend slows
down slightly. By calculation, the throughput plateaus at about 71 for the smart contract with data on-chaining, while the throughput for the signature verification smart contract plateaus at about 62.
## 8. Conclusions
The digital archiving of engineering construction files in
intelligent water projects is of great significance. The blockchain can provide security verification and integrity check
for electronic files, which guarantees the security of archive
informatization and contributes to the realization of electronic archiving of files. This paper proposes a comprehensive data management model for smart water system
construction based on blockchain and edge intelligence
and then implements it in the Hanjiang to Weihe River Project in Shaanxi Province. Firstly, the behavioral model for
the concrete production process is summarized, and the corresponding roles that participate in the process are
abstracted out simultaneously. Secondly, the intelligent keyless signature based on parallel edge computing is introduced to ensure data security. The proposed model uses
the Merkle tree to construct a chained file structure and
standardizes the data entering, uploading, and checking procedure by the consensus mechanism. In the case study, we
have created a blockchain of 3,500 blocks according to the
decentralization requirement. Overall, the proposed model and the corresponding system take a significant step forward in saving labor and material resources and in markedly improving the security and traceability of construction archives. We believe that through a more extensive
scope of application and continuous improvement, the management of archives in civil engineering, especially in smart
water projects, will eventually achieve the goal of
digitalization.
## Data Availability
The experiment data used to support the findings of this
study are available from the corresponding author upon
request.
## Conflicts of Interest
All authors declare no conflict of interest in this paper.
## Acknowledgments
This research work is supported by the National Natural Science Foundation of China (62072368), Basic Research in Natural
Science and Enterprise Joint Fund of Shaanxi (2021JLM-58),
and Special Scientific Research Project of Education Department of Shaanxi (21JK0781).
## References
[1] N. Nizamuddin, K. Salah, and M. A. Azad, “Decentralized document version control using ethereum blockchain and IPFS,”
Computers & Electrical Engineering, vol. 76, pp. 183–197,
2019.
[2] C. Feng, B. Liu, Z. Guo, K. Yu, Z. Qin, and K. K. R. Choo,
“Blockchain-based cross-domain authentication for intelligent
5G-enabled Internet of drones,” IEEE Internet of Things Journal, 2021.
[3] W. Wang, H. Xu, M. Alazab, T. R. Gadekallu, Z. Han, and
C. Su, “Blockchain-based reliable and efficient certificateless
signature for IIoT devices,” IEEE Transactions on Industrial
Informatics, p. 1, 2021.
[4] Y. Zhang, W. Luo, and F. Yu, “Construction of Chinese smart
water conservancy platform based on the blockchain: technol
ogy integration and innovation application,” Sustainability,
vol. 12, no. 20, p. 8306, 2020.
[5] J. Song, Z. Han, W. Wang, J. Chen, and Y. Liu, “A new secure
arrangement for privacy-preserving data collection,” Computer Standards & Interfaces, vol. 80, article 103582, 2022.
[6] C. Feng, B. Liu, and K. Yu, “Blockchain-empowered decentralized horizontal federated learning for 5G-enabled UAVs,”
IEEE Transactions on Industrial Informatics, 2022.
[7] T. Bui, D. Cooper, and J. Collomosse, “Tamper-proofing video
with hierarchical attention autoencoder hashing on blockchain,” IEEE Transactions on Multimedia, vol. 22, no. 11,
pp. 2858–2872, 2020.
[8] B. Zhong, H. Wu, and L. Ding, “Hyperledger fabric-based consortium blockchain for construction quality information management,” Frontiers of Engineering Management, vol. 7, no. 4,
pp. 512–527, 2020.
[9] L. Zhang, M. Peng, and W. Wang, “Secure and efficient data
storage and sharing scheme for blockchain-based mobile-edge computing,” Transactions on Emerging Telecommunications Technologies, article e4315, 2021.
[10] G. Nagasubramanian, R. K. Sakthivel, and R. Patan, “Securing
e-health records using keyless signature infrastructure blockchain technology in the cloud,” Neural Computing and Applications, vol. 32, no. 3, pp. 639–647, 2020.
[11] J. Zhang, Y. Huang, and Y. Wang, “Multi-objective optimization of concrete mixture proportions using machine learning
and metaheuristic algorithms,” Construction and Building
Materials, vol. 253, article 119208, 2020.
[12] Y. Gong, L. Zhang, and R. Liu, “Nonlinear MIMO for industrial Internet of Things in cyber-physical systems,” IEEE
Transactions on Industrial Informatics, vol. 17, no. 8,
pp. 5533–5541, 2021.
[13] C. Feng, K. Yu, M. Aloqaily, M. Alazab, Z. Lv, and S. Mumtaz,
“Attribute-based encryption with parallel outsourced decryption for edge intelligent IoV,” IEEE Transactions on Vehicular
Technology, vol. 69, no. 11, pp. 13784–13795, 2020.
[14] K. A. Nguyen, R. A. Stewart, H. Zhang, O. Sahin, and
N. Siriwardene, “Re-engineering traditional urban water management practices with smart metering and informatics,” Environmental Modelling & Software, vol. 101, pp. 256–267, 2018.
[15] J. Feng, L. Liu, and Q. Pei, “Min-Max cost optimization for efficient hierarchical federated learning in wireless edge networks,” IEEE Transactions on Parallel and Distributed
Systems, 2022.
[16] H. Li, K. Yu, B. Liu, C. Feng, Z. Qin, and G. Srivastava, “An
efficient ciphertext-policy weighted attribute-based encryption
for the Internet of health things,” IEEE Journal of Biomedical
and Health Informatics, vol. PP, 2021.
[17] K. Yu, M. Arifuzzaman, and Z. Wen, “A key management
scheme for secure communications of information centric
advanced metering infrastructure in smart grid,” IEEE transactions on instrumentation and measurement, vol. 64, no. 8,
pp. 2072–2085, 2015.
[18] L. Tan, K. Yu, and N. Shi, “Towards secure and privacypreserving data sharing for covid-19 medical records: a
blockchain-empowered approach,” IEEE Transactions on Network Science and Engineering, 2022.
[19] L. Zhao, J. Li, and A. Al-Dubai, “Routing schemes in softwaredefined vehicular networks: design, open issues and challenges,” IEEE Intelligent Transportation Systems Magazine,
vol. 13, no. 4, pp. 217–226, 2020.
[20] J. Feng, F. R. Yu, and Q. Pei, “Cooperative computation offloading and resource allocation for blockchain-enabled
mobile-edge computing: a deep reinforcement learning
approach,” IEEE Internet of Things Journal, vol. 7, no. 7,
pp. 6214–6228, 2019.
[21] L. Tan, K. Yu, F. Ming, X. Chen, and G. Srivastava, “Secure and
resilient artificial intelligence of things: a HoneyNet approach
for threat detection and situational awareness,” IEEE Consumer Electronics Magazine, p. 1, 2021.
[22] K. Yu, L. Tan, and L. Lin, “Deep-learning-empowered breast
cancer auxiliary diagnosis for 5GB remote E-health,” IEEE
Wireless Communications, vol. 28, no. 3, pp. 54–61, 2021.
[23] L. Liu, C. Chen, and Q. Pei, “Vehicular edge computing and
networking: a survey,” Mobile Networks and Applications,
vol. 26, no. 3, pp. 1145–1168, 2021.
[24] S. Hakak, W. Z. Khan, and G. A. Gilkar, “Securing smart cities
through blockchain technology: architecture, requirements,
and challenges,” IEEE Network, vol. 34, no. 1, pp. 8–14, 2020.
[25] K. Yu, Z. Guo, and Y. Shen, “Secure artificial intelligence of
things for implicit group recommendations,” IEEE Internet
of Things Journal, 2022.
[26] T. Alladi, V. Chamola, and R. M. Parizi, “Blockchain applications for industry 4.0 and industrial IoT: a review,” IEEE
Access, vol. 7, pp. 176935–176951, 2019.
[27] H. Huang, J. Lin, and B. Zheng, “When blockchain meets distributed file systems: an overview, challenges, and open issues,”
IEEE Access, vol. 8, pp. 50574–50586, 2020.
[28] L. Liu, J. Feng, and Q. Pei, “Blockchain-enabled secure data
sharing scheme in mobile-edge computing: an asynchronous
advantage actor–critic learning approach,” IEEE Internet of
Things Journal, vol. 8, no. 4, pp. 2342–2353, 2020.
[29] H. Xiong, C. Jin, M. Alazab et al., “On the design of
blockchain-based ECDSA with fault-tolerant batch verification
protocol for blockchain-enabled IoMT,” IEEE Journal of Biomedical and Health Informatics, vol. PP, p. 1, 2021.
[30] J. Ma, S. Zhang, and H. Li, “Sparse Bayesian learning for the
time-varying massive MIMO channels: acquisition and tracking,” IEEE Transactions on Communications, vol. 67, no. 3,
pp. 1925–1938, 2018.
[31] L. Liu, M. Zhao, M. Yu, M. A. Jan, D. Lan, and A. Taherkordi,
“Mobility-aware multi-hop task offloading for autonomous
driving in vehicular edge computing and networks,” IEEE
Transactions on Intelligent Transportation Systems, pp. 1–14,
2022.
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.1155/2022/8482415?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1155/2022/8482415, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "GOLD",
"url": "https://downloads.hindawi.com/journals/wcmc/2022/8482415.pdf"
}
| 2,022
|
[] | true
| 2022-03-09T00:00:00
|
[
{
"paperId": "c795828f5390d4208bc70b8a110082a7fc3884fd",
"title": "Mobility-Aware Multi-Hop Task Offloading for Autonomous Driving in Vehicular Edge Computing and Networks"
},
{
"paperId": "50bf499b417fae819700f1050a9f4c92c21bf818",
"title": "Blockchain-Empowered Decentralized Horizontal Federated Learning for 5G-Enabled UAVs"
},
{
"paperId": "cfea5d552e6ab2dbfe2de80f6c830640dd09f84f",
"title": "Secure and Resilient Artificial Intelligence of Things: A HoneyNet Approach for Threat Detection and Situational Awareness"
},
{
"paperId": "7cdf2f3e5ff6ced1c31dd6f21ded0f9b40451d9a",
"title": "Blockchain-Based Cross-Domain Authentication for Intelligent 5G-Enabled Internet of Drones"
},
{
"paperId": "04bfc754c0e28df90de3b7e0a3194b8df7707cdc",
"title": "A new secure arrangement for privacy-preserving data collection"
},
{
"paperId": "f0ce78953f7a5f68a13c7f5b8614850f3e73fc24",
"title": "Towards Secure and Privacy-Preserving Data Sharing for COVID-19 Medical Records: A Blockchain-Empowered Approach"
},
{
"paperId": "d5aba97f9927528f304be85cd3133d17b462fb36",
"title": "On the Design of Blockchain-Based ECDSA With Fault-Tolerant Batch Verification Protocol for Blockchain-Enabled IoMT"
},
{
"paperId": "7dd7966b17fb3f97cd7de85bfaa6994951312a1d",
"title": "Nonlinear MIMO for Industrial Internet of Things in Cyber–Physical Systems"
},
{
"paperId": "3b230f14c46e7e177e9bebb2ebc9f46b346b646d",
"title": "Secure and efficient data storage and sharing scheme for blockchain‐based mobile‐edge computing"
},
{
"paperId": "bb258e5d79ea55d9275db74268830b525367fafa",
"title": "Deep-Learning-Empowered Breast Cancer Auxiliary Diagnosis for 5GB Remote E-Health"
},
{
"paperId": "3ffc477b8a6bbc33fa62c126bd64bebf5194c2ac",
"title": "Blockchain-Based Reliable and Efficient Certificateless Signature for IIoT Devices"
},
{
"paperId": "84769d6c3c7c387a49f2fa5562900e5236138adb",
"title": "An Efficient Ciphertext-Policy Weighted Attribute-Based Encryption for the Internet of Health Things"
},
{
"paperId": "42272a6c530601c22f782872061809031c2a50c8",
"title": "Secure Artificial Intelligence of Things for Implicit Group Recommendations"
},
{
"paperId": "9dad453f4b0273354074e1735b3c73fb829f25f3",
"title": "Blockchain-Enabled Secure Data Sharing Scheme in Mobile-Edge Computing: An Asynchronous Advantage Actor–Critic Learning Approach"
},
{
"paperId": "0f8d81afc0574e8da70447c0c060e141b8ea9076",
"title": "Construction of Chinese Smart Water Conservancy Platform Based on the Blockchain: Technology Integration and Innovation Application"
},
{
"paperId": "21358e2b08b18dad28716e38841002d1d05abf2b",
"title": "Attribute-Based Encryption With Parallel Outsourced Decryption for Edge Intelligent IoV"
},
{
"paperId": "d37ae1ef49f11980515703e0ebf0bbd7cc75fc61",
"title": "Multi-objective optimization of concrete mixture proportions using machine learning and metaheuristic algorithms"
},
{
"paperId": "405769586840e523e5f56fe38c68c7f45509b618",
"title": "Hyperledger fabric-based consortium blockchain for construction quality information management"
},
{
"paperId": "ad35de1aeb563d5ce74b7554bde592f20ba6e0a6",
"title": "Cooperative Computation Offloading and Resource Allocation for Blockchain-Enabled Mobile-Edge Computing: A Deep Reinforcement Learning Approach"
},
{
"paperId": "f375e69427fc50251d8fa089fbe089ac6ca721a1",
"title": "When Blockchain Meets Distributed File Systems: An Overview, Challenges, and Open Issues"
},
{
"paperId": "5ae45bad7cbd6e3e174d6b60193790e314583899",
"title": "Tamper-Proofing Video With Hierarchical Attention Autoencoder Hashing on Blockchain"
},
{
"paperId": "ee59451deedd0c7532b5b4c34340a615c1d76312",
"title": "Securing Smart Cities through Blockchain Technology: Architecture, Requirements, and Challenges"
},
{
"paperId": "282112cc8470c0bac43e6e0266adeb8b7800fd67",
"title": "Blockchain Applications for Industry 4.0 and Industrial IoT: A Review"
},
{
"paperId": "5973e94435ad78899e8d94c786e02193c539c76c",
"title": "Vehicular Edge Computing and Networking: A Survey"
},
{
"paperId": "aaa545332c9da2f75ed8399f3f26200c92509ff6",
"title": "Decentralized document version control using ethereum blockchain and IPFS"
},
{
"paperId": "450a3df92973f32af833173a941c703fd4e424a7",
"title": "Sparse Bayesian Learning for the Time-Varying Massive MIMO Channels: Acquisition and Tracking"
},
{
"paperId": "bfbdba5420d45da7679811595e9393a8948d8890",
"title": "Securing e-health records using keyless signature infrastructure blockchain technology in the cloud"
},
{
"paperId": "08e6b3959f1d0c62f84820eafe83237025042156",
"title": "Re-engineering traditional urban water management practices with smart metering and informatics"
},
{
"paperId": "520809931bafc3a95d8ea43b88c68d477a00d53e",
"title": "A Key Management Scheme for Secure Communications of Information Centric Advanced Metering Infrastructure in Smart Grid"
},
{
"paperId": "42f98bfbba0c06653f60c8317373c614911029c1",
"title": "Min-Max Cost Optimization for Efficient Hierarchical Federated Learning in Wireless Edge Networks"
},
{
"paperId": "2273984cc4c17d8e96216d7bdaa481ce4bf10546",
"title": "Routing Schemes in Software-Defined Vehicular Networks: Design, Open Issues and Challenges"
},
{
"paperId": null,
"title": "production management system based on the proposed model has been developed and implemented in the Hanjiang to Weihe River Project in Shaanxi Province to verify the model ’ s practicality"
},
{
"paperId": null,
"title": "After hash computation, the supervisor broadcasts the hash value of the transaction to all blockchain nodes"
},
{
"paperId": null,
"title": "Each blockchain node (participating construction unit) makes a hash after receiving the transaction and compares it with the supervisor ’ s hash sequence"
},
{
"paperId": null,
"title": "By employing the smart keyless signatures"
}
] | 14,370
|
en
|
[
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/01155ceeaefeab3737c49cebde0b4b9e01f7d9cd
|
[] | 0.821515
|
TCP/UDP-Based Exploitation DDoS Attacks Detection Using AI Classification Algorithms with Common Uncorrelated Feature Subset Selected by Pearson, Spearman and Kendall Correlation Methods
|
01155ceeaefeab3737c49cebde0b4b9e01f7d9cd
|
Revue d'Intelligence Artificielle
|
[
{
"authorId": "48536036",
"name": "Kishore Babu Dasari"
},
{
"authorId": "9364224",
"name": "N. Devarakonda"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": null,
"id": null,
"issn": null,
"name": null,
"type": null,
"url": null
}
|
The Distributed Denial of Service (DDoS) attack is a serious cyber security attack that attempts to disrupt the availability security principle of computer networks and information systems. It's critical to detect DDoS attacks quickly and accurately while using as less computing power as possible in order to minimize damage and cost efficient. This research proposes a fast and high-accuracy detection approach by using features selected by proposed method for Exploitation-based DDoS attacks. Experiments are carried out on the CICDDoS2019 datasets Syn flood, UDP flood, and UDP-Lag, as well as customized dataset. In addition, experiments were also conducted on a customized dataset that was constructed by combining three CICDDoS2019 datasets. Pearson, Spearman, and Kendall correlation techniques have been used for datasets to find un-correlated feature subsets. Then, among three un-correlated feature subsets, choose the common un-correlated features. On the datasets, classification techniques are applied to these common un-correlated features. This research used conventional classifiers Logistic regression, Decision tree, KNN, Naive Bayes, bagging classifier Random forest, boosting classifiers Ada boost, Gradient boost, and neural network-based classifier Multilayer perceptron. The performance of these classification algorithms was also evaluated in terms of accuracy, precision, recall, F1-score, specificity, log loss, execution time, and K-fold cross-validation. Finally, classification techniques were tested on a customized dataset with common features that were common in all of the dataset’s common un-correlated feature sets.
|
Vol. 36, No. 1, February, 2022, pp. 61-71
Journal homepage: http://iieta.org/journals/ria
# TCP/UDP-Based Exploitation DDoS Attacks Detection Using AI Classification Algorithms with Common Uncorrelated Feature Subset Selected by Pearson, Spearman and Kendall Correlation Methods
Kishore Babu Dasari[1*], Nagaraju Devarakonda[2]
1 Department of CSE, Acharya Nagarjuna University, Guntur 522510, Andhra Pradesh, India
2 School of Computer Science and Engineering, VIT-AP University, Amaravati 522237, India
Corresponding Author Email: dasari2kishore@gmail.com
https://doi.org/10.18280/ria.360107
**Received: 13 January 2022**
**Accepted: 24 February 2022**
**_Keywords:_**
_CICDDoS2019, classification algorithms,_
_DDoS attacks, Kendall correlation, Pearson_
_correlation, spearman correlation, syn flood,_
_UDP flood, UDP-Lag_
**ABSTRACT**
The Distributed Denial of Service (DDoS) attack is a serious cyber security attack that
attempts to disrupt the availability security principle of computer networks and information
systems. It is critical to detect DDoS attacks quickly and accurately while using as little
computing power as possible, in order to minimize damage and remain cost efficient. This
research proposes a fast and high-accuracy detection approach for exploitation-based DDoS
attacks that uses features selected by the proposed method. Experiments are carried out on
the CICDDoS2019 Syn flood, UDP flood, and UDP-Lag datasets, as well as on a customized
dataset constructed by combining the three CICDDoS2019 datasets. The Pearson, Spearman, and
Kendall correlation techniques are applied to the datasets to find un-correlated feature
subsets, and the common un-correlated features among the three subsets are then chosen.
Classification techniques are applied to the datasets with these common un-correlated
features. This research uses the conventional classifiers Logistic regression, Decision
tree, KNN, and Naive Bayes, the bagging classifier Random forest, the boosting classifiers
Ada boost and Gradient boost, and the neural-network-based classifier Multilayer perceptron.
The performance of these classification algorithms is evaluated in terms of accuracy,
precision, recall, F1-score, specificity, log loss, execution time, and K-fold
cross-validation. Finally, the classification techniques are tested on a customized dataset
with the features that are common to all of the datasets' common un-correlated feature sets.
**1. INTRODUCTION**

Availability-based attacks are network security attacks
carried out by a malicious node with the goal of denying access
to resources on computer networks. Denial of service (DoS) is
an availability-based security attack in which the attacker aims
to make network resources unavailable to its intended users by
temporarily or indefinitely disrupting the services of a host
connected to a network. A DoS attack launched by more than
one attacker is called a Distributed Denial of Service (DDoS)
attack [1].
DDoS attacks that exploit a variety of vulnerabilities in the
TCP/UDP-based protocols at the application layer to deny service to
users are called exploitation-based DDoS attacks. DDoS has
become more prevalent among cyberattacks due to the
extensive use of the TCP protocol and the easily exploited features of
the TCP three-way handshake mechanism. Syn flood is a TCP-based
exploitation DDoS attack, while UDP flood and UDP-Lag are
UDP-based exploitation DDoS attacks.
SYN flood [2] is a commonly used exploitation-based
DDoS attack that exploits a feature of the
TCP three-way handshake to overflow the TCP queue of the
server and make it consume resources, resulting in it being
unavailable to legitimate users' requests. A TCP connection is
established between a client and a server using the TCP three-way handshake mechanism. A client must send a synchronized
flag packet (SYN) to the server to establish a TCP connection.
The server sends the client an acknowledgment flag for the
synchronized packet (SYN-ACK) after receiving the SYN
packet delivered by the client. The client sends an
acknowledgment flag to the server after receiving the SYN-ACK flag from the server. With these three steps, a connection
between the client and the server is established, and data
transmission can now commence. In order to launch a TCP
SYN flood attack on a server, attackers take advantage of the
server's half-opened connection state. This is the state in which
the server is waiting for the client's ACK flag before
attempting to establish a connection. The server would have
already allocated memory resources to the client at this point.
To take advantage of this behavior, the attacker sends a large
number of SYN flags to the server for a number of spoofed IP
addresses. The server treats these requests as legitimate,
allocating memory and resources to these IP sources and
sending the client a SYN-ACK flag. The server would now
wait in a half-open state for the client to respond with an ACK
flag which it will never receive. The attacker's large number
of illegitimate SYN requests leads the TCP backlog queue to
overflow, resulting in half-opened connections until all system
resources are consumed. The legitimate user's request is not
accepted by the server due to an overflow of the TCP queue.
The primary objective of the TCP SYN flood attack is to
disrupt the system's availability.
UDP flood [3] is an exploitation-based DDoS
attack in which the attacker overflows random ports on the
targeted host with IP packets containing UDP datagrams. UDP
flood attack’s main objective is to saturate the Internet pipe. A
UDP flood operates by taking advantage of the steps taken by
a server when responding to UDP packets transmitted to one
of its ports. When a server receives a UDP packet at a specific
port, it goes through two steps under normal
circumstances: First, the server looks to determine if any
programs are currently listening for requests on the specified
port. If no programs are receiving packets on that port, the
server sends an ICMP (ping) message to the sender to alert
them that the destination is unavailable. When the server
receives a new UDP packet, it goes through a series of steps to
process the request, consuming server resources in the process.
When a huge flood of UDP packets is received from different
sources with spoof IP addresses, the target's resources can
quickly become exhausted as a result of the targeted server
using resources to check and then respond to each single UDP
packet.
The UDP-Lag attack [4] is an attempt to break the
connection between the client and the server. This attack is
most commonly used in online gaming to outsmart other
players by slowing down or interrupting their movement. This
attack can be carried out in two ways: using a hardware switch
known as a lag switch, or with a software program that runs on
the network and consumes other users' bandwidth.
According to research findings on DDoS attacks, due to their
distributed nature, fast detection, low computational cost, and
accuracy of detection are three key challenges in DDoS attack
detection. DDoS attacks have caused significant damage in all
aspects of business; hence, early detection is essential. As
computation is expensive, reducing the number of features is
essential to make the computation process more cost-effective,
and accurate detection is essential to avoid inconvenience to
legitimate users. This research proposes a method for selecting
an un-correlated feature subset using three correlation
techniques, building a fast and high-accuracy DDoS attack
detection approach with very few features.
This section introduces the TCP/UDP based Exploitation
DDoS attacks and the research motivation and objective of
detecting DDoS attacks. In section II of this paper, the
methodology is explained, including the proposed framework,
algorithm, preprocessing, and machine learning classification
algorithms. The results and discussion are explained with
experimental results in section III of this paper. The study's
conclusion is found in Section IV of this paper.
**2.** **METHODOLOGY**
The proposed model framework is depicted in Figure 1.
**Proposed Algorithm**
1. Start.
2. Read DDoS attack dataset.
3. Preprocessing:
3.1. Remove uninfluential socket features
3.2. Removing missing and infinity values
3.3. Encoding Benign and Attack labels
3.4. Removing constant features (Threshold==0)
3.5. Removing quasi-constant features
(Threshold==0.01)
4. Split the dataset into the train and test data in 80:20
5. Apply Pearson, Spearman and Kendall correlations
to test and train data.
6. Apply threshold >=80 and collect Pearson,
Spearman and Kendall un-correlated feature subsets.
7. Apply intersection of Pearson, Spearman and
Kendall un-correlated feature subsets and find
common un-correlated feature set.
8. Apply classification algorithms to train and test data
to classify Benign and Attack labels.
9. Stop.
**Figure 1. Proposed model framework**
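To make the proposed algorithm concrete, the following is a minimal Python sketch of steps 2-7. The file name `Syn.csv`, the ` Label` column name, and the greedy strategy of dropping one feature of each highly correlated pair are illustrative assumptions, not details specified by the paper.

```python
# Minimal sketch of steps 2-7, assuming a CICDDoS2019 CSV with a
# " Label" column and numeric feature columns (names are assumptions).
import pandas as pd
from sklearn.model_selection import train_test_split

def uncorrelated_subset(df, method, threshold=0.80):
    """Drop one feature of every pair with |correlation| >= threshold."""
    corr = df.corr(method=method).abs()
    dropped = set()
    for i, a in enumerate(corr.columns):
        for b in corr.columns[i + 1:]:
            if a not in dropped and b not in dropped and corr.loc[a, b] >= threshold:
                dropped.add(b)  # keep the first feature of the pair
    return set(corr.columns) - dropped

df = pd.read_csv("Syn.csv")                     # hypothetical dataset path
y = (df.pop(" Label") != "BENIGN").astype(int)  # step 3.3: Benign=0, Attack=1

# Steps 3.4-3.5: remove constant and quasi-constant features in one pass.
variances = df.var(numeric_only=True)
df = df[variances[variances > 0.01].index]

# Step 4: 80:20 train/test split.
X_train, X_test, y_train, y_test = train_test_split(df, y, test_size=0.20)

# Steps 5-7: intersect the un-correlated subsets of the three methods.
common = (uncorrelated_subset(X_train, "pearson")
          & uncorrelated_subset(X_train, "spearman")
          & uncorrelated_subset(X_train, "kendall"))
X_train, X_test = X_train[sorted(common)], X_test[sorted(common)]
```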
**Data set**
This study uses the CICDDoS2019 data set, which includes
a wide variety of DDoS attacks and fills gaps left by
previous data sets. Every DDoS attack dataset contains 87
features.
**Data preprocessing**
Data preprocessing [5] is the first and most important step
in building a classification model. It is the process of cleaning
and formatting data so that it is suitable for the classification
model, and it increases the accuracy and efficiency of
classification models. First, socket features that vary from
network to network are removed. Next, the data is cleaned by
removing missing and infinity values. The label string values
Benign and Attack are encoded to the binary values 0 and 1,
respectively. Finally, the independent feature values are
standardized. Initially each dataset contains 88 features; after
removing the uninfluential socket features, each dataset contains
81 features. Pre-processing results are shown statistically in
Figure 2 with a bar chart, in order of the number of records
processed.
**Feature selection**
Feature Selection [6] is a very critical component in
Machine learning algorithms. Machine learning algorithms
typically choke when provided with data with a large
dimensionality because the number of features raises the
training time exponentially and an increasing amount of
-----
, g
methods help in the resolution of these issues by reducing the
dimensions while preserving the overall information. It also
helps in identifying the features and their importance.
Variance threshold and correlation feature selection methods
are used in this study. A variance threshold is used to remove
constant and quasi-constant features. Correlation methods are
used to find uncorrelated features.
**Figure 2. Pre-processing results bar-chart**
**Variance threshold**
A simple baseline technique for feature selection is the
variance threshold. This method eliminates features that vary
below a specific threshold. It removes all zero-variance
features by default, that is, features that have the same value
throughout all samples. More useful information is contained
in features with a higher variance. The variance threshold
doesn’t consider the relationship of features with the target
variable.
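A minimal sketch of this step using scikit-learn's `VarianceThreshold` is shown below; the toy matrix is an assumption chosen so that one column is constant and one is quasi-constant.

```python
# Variance-threshold feature selection: drop columns whose variance is
# at or below the threshold (0 keeps only non-constant features).
import numpy as np
from sklearn.feature_selection import VarianceThreshold

X = np.array([[0, 2.0, 1],
              [0, 2.0, 3],
              [0, 2.1, 5]])  # col 0 constant, col 1 quasi-constant

selector = VarianceThreshold(threshold=0.01)  # threshold=0 drops only constants
X_reduced = selector.fit_transform(X)
print(selector.get_support())  # [False False True] -> only column 2 survives
```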
**Correlation**
Correlation [5] is a bivariate analysis that determines the
strength of association and the direction of the relationship
between two variables. The value of the correlation coefficient
varies between +1 and -1. A value of ±1 shows that the two
variables are perfectly associated, while a value of 0 shows that
the two variables are not associated. The sign of the coefficient
specifies the relationship's direction: a + sign indicates a
positive relationship, meaning that when one variable goes up the
second variable also goes up, while a - sign indicates a negative
relationship, meaning that when one variable increases the other
variable decreases.
We can predict one variable from the other using correlation.
When two features are correlated, the model only needs one of
them, as the other does not provide any extra information. This
study uses three types of correlations: Pearson correlation,
Spearman correlation, and Kendall rank correlation.
Pearson correlation
The Pearson correlation is the most generally used
correlation statistic for determining the degree of association
between linearly related variables. The Pearson correlation is
based on information about the mean and standard deviation.
The Pearson correlation coefficient is calculated by:

$$ r = \frac{\sum (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum (x_i - \bar{x})^2 \sum (y_i - \bar{y})^2}} \tag{1} $$

Here,
r is the correlation coefficient;
x_i is the value of the x-feature in a sample;
x̄ is the mean of the values of the x-feature;
y_i is the value of the y-feature in a sample;
ȳ is the mean of the values of the y-feature.

Spearman correlation

Spearman rank correlation is a non-parametric measure of
correlation used to determine the degree of relationship
between two variables. Non-parametric correlations rely
solely on ordinal data and pair scores. The Spearman correlation
between two variables is equivalent to the Pearson correlation
between the rank scores of those two variables. Spearman's
correlation evaluates monotonic relationships, whereas Pearson's
correlation evaluates linear relationships; the Spearman
correlation measures the strength of a monotonic relationship
between two variables on the same scale as the Pearson
correlation.

The Spearman correlation coefficient is calculated by:

$$ \rho = 1 - \frac{6 \sum d_i^2}{n(n^2 - 1)} \tag{2} $$
Here,
_ρ is the Spearman’s rank correlation coefficient;_
di is the difference between the two ranks of each
observation;
n is the number of observations.
Kendall rank correlation
Kendall rank correlation is a non-parametric test that
assesses the degree of association between two variables.
Non-parametric correlations rely solely on ordinal data and pair
scores. Kendall correlation outperforms Spearman correlation
in terms of robustness and efficiency, and it is preferred when
there are few samples or some outliers.

The Kendall correlation coefficient is calculated by:

$$ \tau = \frac{N_c - N_d}{n(n-1)/2} \tag{3} $$

Here,
τ is the Kendall rank correlation coefficient;
N_c is the number of concordant pairs;
N_d is the number of discordant pairs.
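The three coefficients can be compared on toy data with SciPy, which implements Eqs. (1)-(3). The data below are illustrative: y = x² is perfectly monotonic but not linear, so the rank-based measures reach 1 while Pearson does not.

```python
# Compare the three correlation measures on a monotonic, non-linear pair.
from scipy import stats

x = [1, 2, 3, 4, 5, 6]
y = [1, 4, 9, 16, 25, 36]  # y = x^2

print(stats.pearsonr(x, y)[0])    # < 1.0: association is not purely linear
print(stats.spearmanr(x, y)[0])   # 1.0: perfect monotonic relationship
print(stats.kendalltau(x, y)[0])  # 1.0: all pairs are concordant
```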
**Classification algorithms**
Machine learning is becoming more widely used to detect
and classify DDoS attacks [7]. One of the most important steps
in machine learning algorithms is feature selection. Feature
Selection is essential for reducing dimensionality and
removing redundant and irrelevant features.
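As a rough sketch, the eight classifiers used in this study can be instantiated and compared with scikit-learn as follows. Hyperparameters are not specified in the paper, so library defaults are assumed, and `X_train`/`y_train` refer to the reduced matrices from the selection sketch above.

```python
# Hypothetical side-by-side comparison of the eight classifiers on the
# common un-correlated feature subset (defaults are assumptions).
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import (RandomForestClassifier, AdaBoostClassifier,
                              GradientBoostingClassifier)
from sklearn.neural_network import MLPClassifier

classifiers = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Decision Tree": DecisionTreeClassifier(),
    "KNN": KNeighborsClassifier(),
    "Naive Bayes": GaussianNB(),
    "Random Forest": RandomForestClassifier(),
    "Ada Boost": AdaBoostClassifier(),
    "Gradient Boost": GradientBoostingClassifier(),
    "Multilayer Perceptron": MLPClassifier(max_iter=500),
}

for name, clf in classifiers.items():
    clf.fit(X_train, y_train)              # from the pipeline sketch above
    print(name, clf.score(X_test, y_test)) # accuracy on the held-out 20%
```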
Logistic regression
Logistic regression [8] is a machine learning classification
method borrowed from statistics to predict the target variable.
It uses the logistic function, also called the sigmoid function:

$$ \phi(z) = \frac{1}{1 + e^{-z}} \tag{4} $$
where z is the net input, a linear combination of the weights and
features:

$$ z = \mathbf{w}^{T}\mathbf{x} = w_0 + w_1 x_1 + w_2 x_2 + \dots + w_n x_n \tag{5} $$

ϕ(z) values are limited to the range [0,1]. This means that as z
goes to infinity the function approaches one, and as z goes to
minus infinity the function approaches zero.
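A toy numerical illustration of Eqs. (4)-(5), with made-up weights:

```python
# The sigmoid maps the net input z (a weighted sum of the features)
# to a probability in (0, 1); weights here are illustrative only.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w0, w = -1.0, np.array([0.8, 0.3])  # illustrative bias and weights
x = np.array([2.0, 1.0])            # one sample with two features
z = w0 + w @ x                      # z = w0 + w1*x1 + w2*x2 = 0.9
print(sigmoid(z))                   # ~0.711 -> classified as Attack (> 0.5)
```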
Decision tree
Decision Tree [9] is a supervised learning method that can
be used to display a model's visual representation. A decision
tree employs a hierarchical model resembling a flow chart with
multiple connected nodes. These nodes indicate tests on the
dataset's features, with a branch that leads to either another
node or a classification result. The prediction data is passed
through the nodes until it can be classified, with the training
data used to form the tree.
K-Nearest neighbor
One of the most basic machine learning classification
models is K-Nearest Neighbor (KNN) [10]. With KNN, there
is no training; the training data is used to make predictions in
order to classify the data. KNN works on the notion that
comparable data points would group together, and it uses the
K value, which can be any number, to locate the closest data
points.
Naive bayes classifier
A typical NB classifier [11] also relies on Bayes’ theorem
and applies probability density information to the training data.
It is used to calculate the probability of an event occurring
based on prior occurrences.
Random forest
The random forest [12] is based on the principle of bagging,
which is used to train a number of decision trees and enhance
them based on their attributes. Random attribute selection is
used in the random forest training process to improve the
relative independence of the generated decision tree and hence
improve performance. Assuming that there are n nodes, the
standard decision tree selects the best attribute based on all of
the n nodes' characteristics, but each node of the random
forest's decision tree is based on k attributes that are randomly
selected in advance. The magnitude of the k parameter, which
is commonly set to log2 d, determines the degree of
randomness. Furthermore, the k value can be 1 or d, which
reflects a random selection of an attribute and a selection
procedure utilizing a traditional decision tree, respectively.
Ada boost
AdaBoost, also known as Adaptive Boosting [13], is a
Machine Learning ensemble classification model. It is an
iterative ensemble classification algorithm, meaning that weak
learners are grown sequentially and combined into a strong one.
The classifier is iteratively trained using a variety of
weighted training instances. It tries to provide an excellent fit
for these instances in each iteration by minimizing training
errors.
Gradient boost

Gradient Boost [14] is an ensemble boosting classification
algorithm that combines a sequence of weak learners into a strong
one. The Gradient Boosting classification algorithm depends on
the loss function: the gradient descent optimization procedure is
used to determine the contribution of each weak learner to the
ensemble.

Multilayer perceptron

A multilayer perceptron (MLP) [15] is the most standard form of
feed-forward artificial neural network. An MLP consists of an
input layer that receives the input data, an output layer that
makes predictions about the input, and at least one hidden layer,
which makes it capable of approximating any continuous function.

**3.** **RESULTS AND DISCUSSION**

The objective of this study has been to reduce data computation
and execution time while improving the accuracy of TCP/UDP-based
exploitation DDoS attack detection. Data computation is reduced
by reducing the number of features in the input data sets, and
data computation is proportional to the model's execution time:
as data computation decreases, execution time decreases
significantly as well. The main objective of this paper is
therefore to reduce the number of features in the data sets
without decreasing the accuracy of exploitation-based DDoS attack
detection. This paper proposes a model that reduces the number of
features while improving DDoS attack detection accuracy; the
proposed model is depicted in Figure 1.

The TCP/UDP-based exploitation DDoS attack data sets used in this
study are collected from the CICDDoS2019 data set, which contains
various TCP/UDP-based DDoS attack data sets. Syn flood is a
TCP-based exploitation DDoS attack data set, while UDP flood and
UDP-Lag are UDP-based exploitation DDoS attack data sets.
Experiments have also been conducted on a customized exploitation
DDoS attack data set, created by concatenating 400,000 records
from each of the Syn flood, UDP flood, and UDP-Lag datasets.

In this section, the results are discussed in the following
order: removing constant and quasi-constant features using the
variance threshold; finding un-correlated feature subsets with
the Pearson, Spearman, and Kendall correlation methods; finding
the common un-correlated features from those subsets; the
performance evaluation metrics of the classification algorithms
with the common un-correlated feature subsets on the Syn flood,
UDP flood, UDP-Lag, and customized DDoS attack datasets; and,
finally, the performance evaluation metrics of the classification
algorithms on the customized dataset with the features that are
common to all of the datasets' common un-correlated feature sets.

After pre-processing, variance-threshold filter-based feature
selection is used to remove constant and quasi-constant features
from the data sets in order to reduce the number of features.
Features that are almost constant are known as quasi-constant
features. Constant features have a variance threshold value of 0,
whereas quasi-constant features have a variance threshold value
of 0.01. Constant features are those that have the same value
across all of the dataset's rows; these features are removed
because they provide no information to the classification
algorithms. Table 1 shows the number of constant and
quasi-constant feature counts for the Syn flood, UDP flood,
UDP-Lag, and customized exploitation data sets.
**Table 1.** Number of constant and quasi-constant features in the TCP/UDP exploitation-based DDoS attack data sets

| Data Set | Constant Features (Variance Threshold = 0) | Quasi-constant Features (Variance Threshold = 0.01) |
|---|---|---|
| Syn Flood attack | 12 | 7 |
| UDP flood attack | 12 | 8 |
| UDP-Lag attack | 12 | 5 |
| Customized Exploitation DDoS attack | 12 | 6 |
**Table 2.** Number of correlated features with a threshold value >= 80 by the Pearson, Spearman, and Kendall correlation methods for the TCP/UDP exploitation-based DDoS attack data sets

| Data Sets | Pearson | Spearman | Kendall |
|---|---|---|---|
| Syn Flood attack | 37 | 50 | 48 |
| UDP flood attack | 34 | 46 | 46 |
| UDP-Lag attack | 36 | 50 | 46 |
| Customized Exploitation DDoS attack | 39 | 48 | 47 |
**Table 3.** Number of common un-correlated features selected by the proposed feature selection method on the TCP/UDP exploitation-based DDoS attack data sets

| Data Set | Number of common un-correlated features |
|---|---|
| Syn Flood attack | 9 |
| UDP flood attack | 11 |
| UDP-Lag attack | 10 |
| Customized Exploitation DDoS attack | 12 |
The Pearson, Spearman, and Kendall correlations are applied
individually to the exploitation-based DDoS attack data sets
after deleting the constant and quasi-constant features, and the
un-correlated feature subset of each correlation method is
collected. Table 2 shows the number of correlated feature counts
for the Syn flood, UDP flood, UDP-Lag, and customized
exploitation data sets. To find the common un-correlated feature
subset, the intersection of the un-correlated feature subsets of
the Pearson, Spearman, and Kendall correlation methods is taken.
Table 3 shows the number of common un-correlated feature counts
for the Syn flood, UDP flood, UDP-Lag, and customized
exploitation data sets. Table 4 shows the common un-correlated
feature lists for the Syn flood, UDP flood, UDP-Lag, and
customized exploitation data sets. Unnamed: 0, Flow Duration,
Flow IAT Min, Total Length of Bwd Packets, and Protocol are
common to the lists of common un-correlated features of all four
data sets. Classification algorithms are applied to the Syn
flood, UDP flood, UDP-Lag, and customized exploitation DDoS
attack data sets with their common un-correlated feature subsets,
and the results are evaluated. Classification algorithms are also
applied to the customized exploitation DDoS attack data set with
the features common to the lists of common un-correlated features
of all four data sets, and those results are evaluated as well.
**Confusion matrix**
The actual and predicted values of label classes are
displayed in a confusion matrix. It shows the four key values
that are True Positive, False Negative, False Positive, and True
Negative. These values are used to calculate the evaluation
metrics.
TRUE POSITIVE (TP): The number of DDoS attack instances
correctly identified by the classifier.
TRUE NEGATIVE (TN): The number of BENIGN class
labels accurately detected by the classifier.
FALSE POSITIVE (FP): The number of BENIGN class
labels, classified as DDoS attacks by the classifier.
FALSE NEGATIVE (FN): The number of DDoS attack
labels, classified as BENIGN class labels by the classifier.
**Accuracy**
Accuracy is defined as the proportion of benign and attack
data in the right classification to the total data.
$$ Accuracy = \frac{TP + TN}{TP + TN + FP + FN} \tag{6} $$
**Precision**
Precision is the ratio of the number of instances correctly
classified as attacks to the total number of instances classified
as attacks, which indicates the model's capability to detect
attack data.

$$ Precision = \frac{TP}{TP + FP} \tag{7} $$
**Recall/TPR**
The recall or true positive rate (TPR) is the percentage of
accurately detected attack data instances among all attack data.
$$ Recall = \frac{TP}{TP + FN} \tag{8} $$
**F1-Score**
The F1 score is the harmonic mean of precision and recall.
Logistic Regression, Gradient Boost, and Naive Bayes provide the
best F1-score values, Ada Boost and KNN provide nearly the best
F1-score values, and Decision Tree provides a poor F1-score
value.

$$ F1\;Score = \frac{2 \times Precision \times Recall}{Precision + Recall} \tag{9} $$
**Specificity**
Specificity is the ratio of the truly classified BENIGN class
labels out of the total actual BENIGN class labels.
$$ Specificity = \frac{TN}{TN + FP} \tag{10} $$
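A worked example of Eqs. (6)-(10) from illustrative confusion-matrix counts (treating Attack as the positive class):

```python
# Illustrative counts only; these are not taken from the paper's results.
tp, tn, fp, fn = 950, 40, 5, 5

accuracy    = (tp + tn) / (tp + tn + fp + fn)              # 0.99
precision   = tp / (tp + fp)                               # ~0.9948
recall      = tp / (tp + fn)                               # ~0.9948
f1          = 2 * precision * recall / (precision + recall)
specificity = tn / (tn + fp)                               # ~0.8889
print(accuracy, precision, recall, f1, specificity)
```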
Table 5 shows performance evaluation metrics in terms of
accuracy, precision, recall, F-score, and specificity for the
different classification algorithms on Syn flood attack with
common un-correlated features. The classification methods of
multilayer perceptron and Ada boost produce the best
accuracy results compared to others. For attack classification,
all classification methods produce good results in terms of
precision, recall, and F-score. Multilayer perceptron produces
better precision and F-score values for benign classification,
while KNN produces better recall values. For attack
classification, logistic regression produces a better specificity
value. All classification methods produce better specificity
values for benign classification.
Table 6 shows performance evaluation metrics in terms of
accuracy, precision, recall, F-score, and specificity for the
different classification algorithms on UDP flood attacks with
common un-correlated features. The classification methods of
KNN, Ada boost, and multilayer perceptron produce the best
accuracy results compared to others. In terms of precision,
recall, and F-score, all classification algorithms produce good
results for attack classification. Multilayer perceptron
produces the best precision score, random forest produces the
best recall score, and KNN produces the best F-score value for
benign classification. Except for Naive Bayes, all
classification methods produce good specificity scores for
benign classification. For attack classification, logistic
regression and Naive Bayes produce a high specificity score.
Table 7 shows performance evaluation metrics in terms of
accuracy, precision, recall, F-score, and specificity for the
different classification algorithms on UDP-Lag attacks with
common un-correlated features. Random Forest and
multilayer perceptron produce the best accuracy results
compared to other classifiers. All classification algorithms
produce good results for attack classification in terms of
precision, recall, and F-score except Naive Bayes classifier.
Ada Boost and Multilayer perceptron produce the best
precision scores, while Logistic regression produces the best
recall value and KNN, Random forest, and Multilayer
perceptron produce the best F-score values for benign
classification. Logistic regression produces the best specificity
score for attack classification. All classification algorithms
produce good results for benign classification in terms of
specificity except the Naive Bayes classifier.
Table 8 shows performance evaluation metrics in terms of
accuracy, precision, recall, F-score, and specificity for the
different classification algorithms on Customized Exploitation
DDoS attacks with common un-correlated features. Multilayer
perceptron produces the best accuracy results compared to
other classifiers. All classification algorithms produce good
results for attack classification in terms of precision, recall,
and F-score. The random forest produces the best precision
score, while Logistic regression produces the best recall value
and KNN produces the best F-score value for benign
classification. All classification algorithms produce good
results for benign classification in terms of specificity.
Logistic regression produces the best specificity value for
attack classification.
**K-fold cross validation**
Cross-fold validation is a statistical method for evaluating
machine learning classification models. A test set should still
be kept aside for final evaluation when employing Crossvalidation, but the validation set is no longer required. The
training set is partitioned into k smaller sets in a k-Cross-fold
validation. The training data for a model is taken from k-1
folds. After that, the model is tested against the remaining data.
**Table 4.** Common un-correlated feature list selected by the proposed feature selection method on the TCP/UDP exploitation-based DDoS attack data sets

| # | Syn Flood attack | UDP flood attack | UDP-Lag attack | Customized Exploitation DDoS attack |
|---|---|---|---|---|
| 1 | Unnamed: 0 | Unnamed: 0 | Unnamed: 0 | Unnamed: 0 |
| 2 | Flow Duration | Flow Duration | Flow Duration | Flow Duration |
| 3 | Flow IAT Min | Flow IAT Min | Flow IAT Min | Flow IAT Min |
| 4 | Total Length of Bwd Packets | Total Length of Bwd Packets | Total Length of Bwd Packets | Total Length of Bwd Packets |
| 5 | Protocol | Protocol | Protocol | Protocol |
| 6 | min_seg_size_forward | min_seg_size_forward | Inbound | min_seg_size_forward |
| 7 | Fwd Packet Length Std | Fwd Packet Length Max | Fwd Packet Length Std | Fwd Packet Length Std |
| 8 | Total Backward Packets | Bwd Packet Length Min | Total Backward Packets | Total Backward Packets |
| 9 | Total Fwd Packets | Active Std | Active Std | Active Std |
| 10 | | Fwd Header Length | Fwd Header Length | Fwd Header Length |
| 11 | | Active Mean | | Active Mean |
| 12 | | | | Down/Up Ratio |
**Table 5.** Accuracy, precision, recall, F-score, and specificity values of the classification algorithms with the common un-correlated feature subset selected by the proposed model on the Syn flood attack dataset

| Classification algorithm | Precision (Attack) | Precision (Benign) | Recall (Attack) | Recall (Benign) | F-Score (Attack) | F-Score (Benign) | Specificity (Attack) | Specificity (Benign) | Accuracy (%) |
|---|---|---|---|---|---|---|---|---|---|
| Logistic Regression | 1.00 | 0.01 | 0.97 | 0.84 | 0.99 | 0.02 | 0.84 | 0.97 | 97.06 |
| Decision Tree | 1.00 | 0.21 | 1.00 | 0.72 | 1.00 | 0.32 | 0.72 | 1.00 | 99.89 |
| KNN | 1.00 | 0.03 | 0.99 | 0.82 | 0.99 | 0.05 | 0.82 | 0.99 | 98.91 |
| Naive Bayes | 1.00 | 0.33 | 1.00 | 0.80 | 1.00 | 0.47 | 0.80 | 1.00 | 99.93 |
| Random Forest | 1.00 | 0.24 | 1.00 | 0.78 | 1.00 | 0.36 | 0.78 | 1.00 | 99.90 |
| Ada Boost | 1.00 | 0.71 | 1.00 | 0.50 | 1.00 | 0.59 | 0.50 | 1.00 | 99.97 |
| Gradient Boost | 1.00 | 0.22 | 1.00 | 0.79 | 1.00 | 0.35 | 0.79 | 1.00 | 99.89 |
| Multilayer Perceptron | 1.00 | 1.00 | 1.00 | 0.48 | 1.00 | 0.64 | 0.48 | 1.00 | 99.98 |
**Table 6.** Accuracy, precision, recall, F-score, and specificity values of the classification algorithms with the common un-correlated feature subset selected by the proposed model for the UDP flood attack

| Classification algorithm | Precision (Attack) | Precision (Benign) | Recall (Attack) | Recall (Benign) | F-Score (Attack) | F-Score (Benign) | Specificity (Attack) | Specificity (Benign) | Accuracy (%) |
|---|---|---|---|---|---|---|---|---|---|
| Logistic Regression | 1.00 | 0.58 | 1.00 | 1.00 | 1.00 | 0.73 | 1.00 | 1.00 | 99.92 |
| Decision Tree | 1.00 | 0.46 | 1.00 | 0.77 | 1.00 | 0.57 | 0.77 | 1.00 | 99.88 |
| KNN | 1.00 | 0.93 | 1.00 | 0.83 | 1.00 | 0.88 | 0.83 | 1.00 | 99.98 |
| Naive Bayes | 1.00 | 0.00 | 0.04 | 1.00 | 0.07 | 0.00 | 1.00 | 0.04 | 3.87 |
| Random Forest | 1.00 | 0.64 | 1.00 | 0.94 | 1.00 | 0.76 | 0.94 | 1.00 | 99.94 |
| Ada Boost | 1.00 | 0.90 | 1.00 | 0.79 | 1.00 | 0.84 | 0.79 | 1.00 | 99.97 |
| Gradient Boost | 1.00 | 0.70 | 1.00 | 0.23 | 1.00 | 0.35 | 0.23 | 1.00 | 99.91 |
| Multilayer Perceptron | 1.00 | 0.95 | 1.00 | 0.67 | 1.00 | 0.78 | 0.67 | 1.00 | 99.96 |
**Table 7.** Accuracy, precision, recall, F-score, and specificity values of the classification algorithms with the common un-correlated feature subset selected by the proposed model for the UDP-Lag attack

| Classification algorithm | Precision (Attack) | Precision (Benign) | Recall (Attack) | Recall (Benign) | F-Score (Attack) | F-Score (Benign) | Specificity (Attack) | Specificity (Benign) | Accuracy (%) |
|---|---|---|---|---|---|---|---|---|---|
| Logistic Regression | 1.00 | 0.17 | 0.95 | 0.93 | 0.97 | 0.28 | 0.93 | 0.95 | 94.77 |
| Decision Tree | 1.00 | 0.28 | 0.97 | 0.86 | 0.99 | 0.42 | 0.86 | 0.97 | 97.37 |
| KNN | 1.00 | 0.93 | 1.00 | 0.89 | 1.00 | 0.91 | 0.89 | 1.00 | 99.80 |
| Naive Bayes | 1.00 | 0.01 | 0.01 | 1.00 | 0.01 | 0.02 | 1.00 | 0.01 | 1.63 |
| Random Forest | 1.00 | 0.94 | 1.00 | 0.88 | 1.00 | 0.91 | 0.88 | 1.00 | 99.81 |
| Ada Boost | 1.00 | 0.98 | 1.00 | 0.76 | 1.00 | 0.86 | 0.76 | 1.00 | 99.71 |
| Gradient Boost | 1.00 | 0.90 | 1.00 | 0.87 | 1.00 | 0.89 | 0.87 | 1.00 | 99.75 |
| Multilayer Perceptron | 1.00 | 0.98 | 1.00 | 0.85 | 1.00 | 0.91 | 0.85 | 1.00 | 99.81 |
**Table 8.** Accuracy, precision, recall, F-score, and specificity values of the classification algorithms with the common un-correlated feature subset selected by the proposed model for the Customized Exploitation DDoS attack

| Classification algorithm | Precision (Attack) | Precision (Benign) | Recall (Attack) | Recall (Benign) | F-Score (Attack) | F-Score (Benign) | Specificity (Attack) | Specificity (Benign) | Accuracy (%) |
|---|---|---|---|---|---|---|---|---|---|
| Logistic Regression | 1.00 | 0.15 | 0.98 | 0.92 | 0.99 | 0.25 | 0.92 | 0.98 | 97.55 |
| Decision Tree | 1.00 | 0.07 | 0.96 | 0.71 | 0.98 | 0.13 | 0.71 | 0.96 | 95.81 |
| KNN | 1.00 | 0.87 | 1.00 | 0.90 | 1.00 | 0.89 | 0.90 | 1.00 | 98.90 |
| Naive Bayes | 1.00 | 0.35 | 1.00 | 0.55 | 1.00 | 0.43 | 0.55 | 1.00 | 99.35 |
| Random Forest | 1.00 | 1.00 | 1.00 | 0.66 | 1.00 | 0.80 | 0.66 | 1.00 | 99.85 |
| Ada Boost | 1.00 | 0.96 | 1.00 | 0.06 | 1.00 | 0.11 | 0.06 | 1.00 | 99.58 |
| Gradient Boost | 1.00 | 0.86 | 1.00 | 0.74 | 1.00 | 0.79 | 0.74 | 1.00 | 99.83 |
| Multilayer Perceptron | 1.00 | 0.99 | 1.00 | 0.75 | 1.00 | 0.85 | 0.75 | 1.00 | 99.88 |
**Table 9.** K-fold cross-validation accuracy scores (with standard deviation) in % of the different classification algorithms with the common un-correlated feature subset selected by the proposed model

| Classification Algorithm | Syn flood attack | UDP flood attack | UDP-Lag attack | Customized Exploitation DDoS attack |
|---|---|---|---|---|
| Logistic Regression | 92.4385 (0.7917) | 99.9141 (0.0111) | 95.4938 (0.1625) | 97.0752 (0.0245) |
| Decision Tree | 99.9974 (0.0009) | 99.9831 (0.0021) | 99.9723 (0.0064) | 99.9558 (0.0036) |
| KNN | 99.9960 (0.0010) | 99.9774 (0.0035) | 99.8371 (0.0079) | 99.9342 (0.0033) |
| Naive Bayes | 99.9138 (0.0032) | 99.4898 (0.0181) | 99.3405 (0.0291) | 99.3236 (0.0594) |
| Random Forest | 99.9978 (0.0006) | 99.9630 (0.0061) | 99.9835 (0.0060) | 99.9316 (0.0042) |
| Ada Boost | 99.9868 (0.0022) | 99.9731 (0.0041) | 99.7678 (0.0190) | 99.8365 (0.0112) |
| Gradient Boost | 99.9925 (0.0024) | 99.9176 (0.0360) | 99.9165 (0.0169) | 99.8549 (0.0309) |
| Multilayer Perceptron | 99.9844 (0.0130) | 99.9609 (0.0059) | 99.7955 (0.0082) | 99.9104 (0.0138) |
**Table 10.** ROC-AUC scores of the different classification algorithms with the common un-correlated feature subset selected by the proposed model

| Classification Algorithm | Syn flood attack | UDP flood attack | UDP-Lag attack | Customized Exploitation DDoS attack |
|---|---|---|---|---|
| Logistic Regression | 0.9375 | 0.9998 | 0.9907 | 0.9892 |
| Decision Tree | 0.8593 | 0.8821 | 0.9167 | 0.8364 |
| KNN | 0.9070 | 0.9635 | 0.9529 | 0.9777 |
| Naive Bayes | 0.9154 | 0.9997 | 0.8921 | 0.9369 |
| Random Forest | 0.9566 | 0.9999 | 0.9950 | 0.8364 |
| Ada Boost | 0.9037 | 0.9999 | 0.9949 | 0.9950 |
| Gradient Boost | 0.9381 | 0.6153 | 0.9933 | 0.9204 |
| Multilayer Perceptron | 0.9681 | 0.9999 | 0.9941 | 0.9948 |
Table 9 shows the K-fold cross-validation accuracy scores
(with a standard deviation) in % of the different classification
algorithms with common un-correlated feature subset on Syn
flood, UDP flood, UDP-Lag, and Customized Exploitation
DDoS attacks. Random forest produces the best K-fold cross
validation accuracy score with less standard deviation while
logistic regression produces lowest value on Syn flood DDoS
attack dataset. On the UDP flood DDoS attack dataset,
decision tree produces the best K-fold cross validation
accuracy score with less standard deviation, whereas Naive
Bayes produces the lowest value. Random forest produces the
best K-fold cross validation accuracy score with less standard
deviation while logistic regression produces lowest value on
UDP-Lag DDoS attack dataset. On the customized
exploitation DDoS attack dataset, decision tree produces the
best K-fold cross validation accuracy score with less standard
deviation, whereas logistic regression produces the lowest
value.
**ROC-AUC score**
The Receiver Operating Characteristic (ROC) curve is used
to evaluate the model's accuracy. The ROC curve depicts the
relationship between True and False classes. The area
underneath the ROC Curve (AUC) measures separability
between the false positive and true positive rates. A ROC curve
is a graph that shows a classification model's performance over
all decision thresholds. A decision threshold is a value used to
translate a probabilistic prediction into a class label. ROC-AUC
scores lie between 0 and 1. When the ROC-AUC value is 1, the
classifier correctly classifies all labels. When the ROC-AUC
value is 0, the classifier classifies all labels incorrectly,
that is, it classifies TRUE labels as FALSE labels and FALSE
labels as TRUE labels.

The false-positive rate is the proportion of benign data that is
misclassified as attack data.
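A sketch of how the ROC-AUC scores in Table 10 could be computed with scikit-learn, assuming a fitted classifier `clf` that exposes `predict_proba` and the held-out test split from earlier:

```python
# ROC-AUC evaluation for one fitted classifier (clf is an assumption).
from sklearn.metrics import roc_auc_score, roc_curve

probs = clf.predict_proba(X_test)[:, 1]   # probability of the Attack class
print(roc_auc_score(y_test, probs))       # area under the ROC curve

fpr, tpr, thresholds = roc_curve(y_test, probs)  # points for plotting the ROC
```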
Table 10 shows ROC-AUC Scores of the different
classification algorithms with common un-correlated feature
subset on Syn flood, UDP flood, UDP-Lag, and Customized
Exploitation DDoS attacks. On a Syn flood attack, Multilayer
Perceptron produces the best ROC-AUC score, while Decision
Tree produces a lesser ROC-AUC score. Figure 3 shows the
Receiver Operating Curve (ROC) of the classification
algorithms with common un-correlated feature subset selected
by the proposed model for Syn flood attack. On UDP flood
attacks, Random forest, Ada boost, and Multilayer perceptron
produce the best ROC-AUC scores, whereas Gradient boost
produces the lowest ROC-AUC scores. Figure 4 shows the
Receiver Operating Curve (ROC) of the classification
algorithms with common un-correlated feature subset selected
by the proposed model for the UDP flood attack. Random
forest and Ada boost produce the best ROC-AUC scores for
UDP-Lag attacks, whereas Naïve Bayes classifier produces
the lowest ROC-AUC score. Figure 5 shows the Receiver
Operating Curve (ROC) of the classification algorithms with
common un-correlated feature subset selected by the proposed
model for the UDP-Lag attack. On customized exploitation
DDoS attacks, Ada boost and Multilayer perceptron produce
the best ROC-AUC scores, while Decision tree and Random
forest produce the lowest ROC-AUC scores. Figure 6 shows
the Receiver Operating Curve (ROC) of the classification
algorithms with common un-correlated feature subset selected
by the proposed model for the Customized Exploitation DDoS
attack. Even though Ada boost does not produce the best ROC-AUC
score on the Syn flood attack data set, it does so on the UDP
flood and UDP-Lag attack data sets, as well as on the customized
exploitation DDoS attack dataset. Multilayer perceptron produces
the best scores on the Syn flood and UDP flood DDoS attack
datasets, good scores on the UDP-Lag DDoS attack dataset, and
near-best scores on the customized exploitation DDoS attack
dataset in terms of ROC-AUC.
**Figure 3. Receiver Operating Curve (ROC) of the**
classification algorithms with common un-correlated feature
subset selected by the proposed model for Syn flood attack
**Figure 4. Receiver Operating Curve (ROC) of the**
classification algorithms with common uncorrelated features
subset selected by the proposed model on UDP flood attack
**Figure 5. Receiver Operating Curve (ROC) of the**
classification algorithms with common un-correlated features
subset selected by the proposed model on UDP-Lag attack
**Figure 6. Receiver Operating Curve (ROC) of the**
classification algorithms with common un-correlated feature
subset selected by the proposed model for the Customized
Exploitation DDoS attack
**Log loss**
The most important probability-based classification metric
is log loss. The lower the log-loss number, the better the
predictions; the log loss value is 0 for a perfect model.
$$ Log\text{-}loss = -\frac{1}{N}\sum_{i=1}^{N}\big[\,y_i \ln p_i + (1 - y_i)\ln(1 - p_i)\,\big] \tag{11} $$

where N is the number of observations, p_i is the predicted
probability for observation i, and y_i is the actual value.
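A worked example of Eq. (11) on made-up labels and probabilities; scikit-learn's `log_loss` implements the same expression:

```python
# Toy log-loss computation; labels and probabilities are illustrative.
from sklearn.metrics import log_loss

y_true = [1, 1, 0, 1]
p_pred = [0.9, 0.8, 0.2, 0.95]
print(log_loss(y_true, p_pred))  # ~0.151; 0 would indicate a perfect model
```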
Table 11 shows Log-loss values of the different
classification algorithms with common un-correlated features
subset on Syn flood, UDP flood, UDP-Lag, and Customized
Exploitation DDoS attacks. On the Syn flood DDoS attack
dataset, the multilayer perceptron classifier produces the best
log value, whereas logistic regression produces the poorest log
loss value. On a UDP flood DDoS attack dataset, the KNN
classifier produces the best log value, whereas the Naive
Bayes classifier produces the poorest log-loss value. On the UDP-
lag DDoS attack dataset, the multilayer perceptron classifier
produces the best log value, whereas the Naive Bayes
classifier produces the poorest log loss value. On a customized
exploitation DDoS attack dataset, the KNN classifier produces
the best log value, whereas the Decision tree classifier
produces the poorest log value. On all exploitation-based
DDoS attack datasets, boosting type classifiers perform well
in terms of log-loss evaluation metrics.
**Run time**
Run time means the execution time of the model. Table 12
shows Execution times (in seconds) of the different
classification algorithms with common un-correlated feature
subset on Syn flood, UDP flood, UDP-Lag, and Customized
Exploitation DDoS attacks. In terms of execution time, the
Naive Bayes classifier takes less time while the Gradient
boosting classifier takes more time on the Syn flood DDoS
attack dataset. On the UDP flood DDoS attack dataset, the
Naive Bayes classifier takes less time to run, whereas the
Gradient boosting classifier takes longer. The Naive Bayes
classifier takes less time to execute on the UDP-Lag DDoS
attack data set, whereas the multilayer perceptron takes longer.
The Naive Bayes classifier takes less time to execute on the
customized exploitation DDoS attack data set, whereas the
random forest classifier takes longer. Ada boost classifier
takes less time for execution than Gradient boost classifier,
random forest bagging classifier, and multilayer perceptron
neural network classifier.
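A sketch of how the execution times in Table 12 could be measured, reusing the hypothetical `classifiers` dictionary from the earlier sketch; the paper's exact timing methodology is not stated, so wall-clock timing of fit plus predict is an assumption:

```python
# Illustrative per-classifier timing (cf. Table 12).
import time

for name, clf in classifiers.items():
    start = time.perf_counter()
    clf.fit(X_train, y_train)
    clf.predict(X_test)
    print(f"{name}: {time.perf_counter() - start:.4f} s")
```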
**Results of classification algorithms on customized data set**
**with common feature subset**
Table 4 shows the common un-correlated feature list for the
Syn flood, UDP flood, UDP-Lag, and customized exploitation
data sets. Unnamed: 0, Flow Duration, Flow IAT Min, Total
Length of Bwd Packets, and Protocol are common in the lists
of common un-correlated features of Syn flood, UDP flood,
UDP-Lag, and customized exploitation DDoS attack data sets.
Classification algorithms are now applied to the customized DDoS
attack dataset with this common feature subset, and the results
are evaluated.
**Table 11.** Log-loss values of the different classification algorithms with the common un-correlated feature subset selected by the proposed model

| Classification Algorithm | Syn flood attack | UDP flood attack | UDP-Lag attack | Customized Exploitation DDoS attack |
|---|---|---|---|---|
| Logistic Regression | 1.0144 | 0.0273 | 1.8063 | 0.8448 |
| Decision Tree | 0.0375 | 0.0427 | 0.9096 | 1.4486 |
| KNN | 0.3775 | 0.0085 | 0.0693 | 0.0359 |
| Naive Bayes | 0.0228 | 33.2010 | 33.9747 | 0.2251 |
| Random Forest | 0.0342 | 0.0221 | 0.0662 | 0.0523 |
| Ada Boost | 0.0088 | 0.0109 | 0.0988 | 0.1466 |
| Gradient Boost | 0.0372 | 0.0326 | 0.0864 | 0.0590 |
| Multilayer Perceptron | 0.0065 | 0.0138 | 0.0642 | 0.0400 |
**Table 12.** Execution times (in seconds) of the different classification algorithms with the common un-correlated feature subset selected by the proposed model

| Classification Algorithm | Syn flood attack | UDP flood attack | UDP-Lag attack | Customized Exploitation DDoS attack |
|---|---|---|---|---|
| Logistic Regression | 19.9021 | 15.6982 | 3.3212 | 13.9782 |
| Decision Tree | 4.9124 | 1.6632 | 0.9543 | 3.7729 |
| KNN | 2.3210 | 2.7505 | 0.6286 | 2.3074 |
| Naive Bayes | 0.2645 | 0.2528 | 0.0674 | 0.2312 |
| Random Forest | 113.9609 | 59.7390 | 22.2232 | 148.4150 |
| Ada Boost | 34.0015 | 40.5446 | 9.8165 | 48.5320 |
| Gradient Boost | 121.1507 | 143.1666 | 31.9532 | 125.2996 |
| Multilayer Perceptron | 82.8440 | 78.7160 | 65.5630 | 138.0118 |
**Table 13.** Accuracy, precision, recall, F-score, and specificity values of the classification algorithms with the common feature subset selected for the Customized Exploitation DDoS attack

| Classification algorithm | Precision (Attack) | Precision (Benign) | Recall (Attack) | Recall (Benign) | F-Score (Attack) | F-Score (Benign) | Specificity (Attack) | Specificity (Benign) | Accuracy (%) |
|---|---|---|---|---|---|---|---|---|---|
| Logistic Regression | 1.00 | 0.09 | 0.96 | 0.84 | 0.98 | 0.16 | 0.84 | 0.96 | 95.96 |
| Decision Tree | 1.00 | 0.95 | 1.00 | 0.64 | 1.00 | 0.76 | 0.64 | 1.00 | 99.82 |
| KNN | 1.00 | 0.85 | 1.00 | 0.72 | 1.00 | 0.78 | 0.72 | 1.00 | 99.82 |
| Naive Bayes | 1.00 | 0.68 | 1.00 | 0.26 | 1.00 | 0.38 | 0.26 | 1.00 | 99.61 |
| Random Forest | 1.00 | 0.96 | 1.00 | 0.66 | 1.00 | 0.78 | 0.66 | 1.00 | 99.83 |
| Ada Boost | 1.00 | 0.82 | 1.00 | 0.13 | 1.00 | 0.23 | 0.13 | 1.00 | 99.60 |
| Gradient Boost | 1.00 | 0.99 | 1.00 | 0.50 | 1.00 | 0.67 | 0.50 | 1.00 | 99.77 |
| Multilayer Perceptron | 1.00 | 0.95 | 1.00 | 0.60 | 1.00 | 0.73 | 0.60 | 1.00 | 99.81 |
**Table 14.** K-fold cross-validation accuracy scores (with standard deviation) in %, ROC-AUC scores, log-loss values, and execution times (in seconds) of the different classification algorithms on the Customized Exploitation DDoS attack with the common feature subset

| Classification Algorithm | K-fold cross-validation accuracy | AUC Score | Log-loss value | Execution time (s) |
|---|---|---|---|---|
| Logistic Regression | 88.4218 (0.1191) | 0.9640 | 1.3963 | 7.3079 |
| Decision Tree | 99.8955 (0.0025) | 0.8202 | 0.0612 | 4.5834 |
| KNN | 99.8627 (0.0039) | 0.9205 | 0.0633 | 1.3982 |
| Naive Bayes | 99.5840 (0.0143) | 0.7968 | 0.1338 | 0.1801 |
| Random Forest | 99.8859 (0.0144) | 0.9372 | 0.0571 | 105.7972 |
| Ada Boost | 99.8089 (0.0088) | 0.9762 | 0.1389 | 37.1230 |
| Gradient Boost | 99.8200 (0.0087) | 0.8213 | 0.0778 | 117.3310 |
| Multilayer Perceptron | 99.8360 (0.0040) | 0.9768 | 0.0673 | 133.2027 |
**Figure 7. Receiver Operating Curve (ROC) of the**
classification algorithms with common feature subset on
Customized Exploitation DDoS attack
Table 13 shows performance evaluation metrics in terms of
accuracy, precision, recall, F-score, and specificity for the
different classification algorithms on Customized Exploitation
DDoS attacks with common features which are common in
four common un-correlated feature subsets. Decision tree,
KNN, and Multilayer perceptron provide better accuracy
scores. In terms of precision, recall, and F-score, all
classification methods produce good results for attack
classification. For benign classification, the Gradient boost
classifier has the highest precision score, the Logistic
regression classifier has the highest recall score, and the KNN
and Random forest classifiers have the highest F-scores. Except
for Logistic regression, all classification methods have a high
specificity score for benign classification, while Logistic
regression has the highest specificity score for attack
classification.
Table 14 shows K-fold cross-validation accuracy scores
(with a standard deviation) in %, ROC-AUC Scores, Log-loss,
value and execution times of the different classification
algorithms on Customized Exploitation DDoS attack with
common feature subset. Multilayer perceptron gives the best
ROC-AUC value while Naive Bayes provides the lowest
ROC-AUC score values in customized exploitation DDoS
attack dataset. Figure 7 shows the ROC curves of the
classification algorithms with common feature subset on the
Customized Exploitation DDoS attack. On a customized
exploitation DDoS attack dataset with the common feature set,
KNN provides the best log-loss value, whereas logistic
regression provides the poorest log-loss value. On the
customized exploitation DDoS attack dataset with common
features set, Naive Bayes takes less time for execution,
whereas multilayer perceptron takes more time for execution.
On the customized exploitation DDoS attack dataset with the
common feature set, the Decision tree provides the best K-fold
cross-validation accuracy value, whereas logistic regression
provides the lowest K-fold cross-validation accuracy score.
**4.** **CONCLUSIONS**
This research evaluates the effectiveness of the
classification algorithms for detecting exploitation DDoS
attacks on three CIC-DDoS2019 datasets and customized
exploitation DDoS attack dataset with common un-correlated
feature subset selected by Pearson, Spearman and Kendall
correlation methods. The classification methods of multilayer
perceptron and Ada boost produce the best accuracy results
compared to others on Syn flood DDoS attack dataset.
Decision tree, KNN, and Multilayer perceptron provide better
accuracy scores on UDP-flood attack dataset. Random Forest
and multilayer perceptron produce the best accuracy results
compared to other classifiers on UDP-lag attacks. Decision
tree, KNN, and Multilayer perceptron provide better accuracy
scores on customized exploitation DDoS attacks. Multilayer
perceptron produces the best accuracy results compared to
other classifiers on customized exploitation DDoS attacks
dataset with common features which are common features of
in un-correlated feature subsets. Overall, multilayer
-----
p p p y p
DDoS attacks datasets. It also provides good results in
remaining evaluation metrics.
**REFERENCES**
[1] Dasari, K.B., Nagaraju, D. (2018). Distributed denial of
service attacks, tools and defense mechanisms.
International Journal of Pure and Applied Mathematics,
120(6): 3423-3437.
[2] Ramkumar, B.N., Subbulakshmi, T. (2021). Tcp Syn
flood attack detection and prevention system using
adaptive thresholding method. ITM Web of Conferences,
37: 01016.
https://doi.org/10.1051/itmconf/20213701016
[3] Amaizu, G.C., Nwakanma, C.I., Bhardwaj, S., Lee, J.M.,
Kim, D.S. (2021). Composite and efficient DDoS attack
detection framework for B5G networks. Computer
Networks, 188: 107871.
https://doi.org/10.1016/j.comnet.2021.107871
[4] Moubayed, A., Aqeeli, E., Shami, A. (2020). Ensemble
based feature selection and classification model for DNS
typo-squatting detection. IEEE Canadian Conference on
Electrical and Computer Engineering (CCECE), pp. 1-6.
https://doi.org/10.1109/CCECE47787.2020.9255697
[5] Li, Z.L., Hu, G.M., Yang, D. (2008). Global abnormal
correlation analysis for DDoS attack detection. 2008
IEEE Symposium on Computers and Communications,
pp. 310-315.
http://dx.doi.org/10.1109/ISCC.2008.4625614
[6] Mekala, S., Rani, B.P. (2020). Kernel PCA based
dimensionality reduction techniques for preprocessing of
Telugu text documents for cluster analysis. International
Journal of Advanced Research in Engineering and
Technology, 8(12): 785-793.
[7] Dasari, K.B., Devarakonda, N. (2021). Detection of
different DDoS attacks using machine learning
classification algorithms. Ingénierie des Systèmes
d’Information, 26(5): 461-468.
http://dx.doi.org/10.18280/isi.260505
[8] Yan, Y.D., Tang, D., Zhan, S.J., Dai, R., Chen, J.W., Zhu.
(2019). Low-rate DoS attack detection based on improved
logistic regression. IEEE 21st International Conference on
High-Performance Computing and Communications, pp. 468-476.
http://dx.doi.org/10.1109/HPCC/SmartCity/DSS.2019.00076
[9] Lakshminarasimman, S., Ruswin, S., Sundarakandam, K.
(2017). Detecting DDoS attacks using decision tree
algorithm. Fourth International Conference on Signal
Processing, Communication and Networking (ICSCN),
pp. 1-6. http://dx.doi.org/10.1109/ICSCN.2017.8085703
[10] Dong, S., Sarem, M. (2019). DDoS attack detection
method based on improved KNN with the degree of
DDoS attack in software-defined networks. IEEE Access,
8: 5039-5048.
http://dx.doi.org/10.1109/ACCESS.2019.2963077
[11] Singh, N.A., Singh, K.J., De, T. (2016). Distributed
denial of service attack detection using Naive Bayes
classifier through info gain feature selection. ICIA-16:
Proceedings of the International Conference on
Informatics and Analytics, pp. 1-9.
https://doi.org/10.1145/2980258.2980379
[12] Chen, Y., Hou, J., Li, Q.M., Long, H.Q. (2020). DDoS
attack detection based on random forest. 2020 IEEE
International Conference on Progress in Informatics and
Computing (PIC), pp. 328-334.
https://doi.org/10.1109/PIC50277.2020.9350788
[13] Rachmadi, S., Mandala, S., Oktaria, D. (2021). Detection
of DoS attack using AdaBoost algorithm on IoT system.
2021 International Conference on Data Science and Its
Applications (ICoDSA), pp. 28-33.
http://dx.doi.org/10.1109/ICoDSA53588.2021.9617545
[14] Chen, Z., Jiang, F., Cheng, Y., Gu, X., Liu, W., Peng, J.
(2018). XGBoost classifier for DDoS attack detection
and analysis in SDN-based cloud. In 2018 IEEE
International Conference on Big Data and Smart
Computing (BigComp), pp. 251-256.
http://dx.doi.org/10.1109/BigComp.2018.00044
[15] Wang, M., Lu, Y., Qin, J. (2020). A dynamic MLP-based
DDoS attack detection method using feature selection
and feedback. Computers & Security, 88: 101645.
http://dx.doi.org/10.1016/j.cose.2019.101645
|
{
"disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.18280/ria.360107?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.18280/ria.360107, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "BRONZE",
"url": "https://www.iieta.org/download/file/fid/69435"
}
| 2,022
|
[] | true
| 2022-02-28T00:00:00
|
[] | 14,744
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/01157f7c700e92323a5933e00c71cf001a8bac88
|
[
"Computer Science"
] | 0.916781
|
Blockchain with Internet of Things: Benefits, Challenges, and Future Directions
|
01157f7c700e92323a5933e00c71cf001a8bac88
|
International Journal of Intelligent Systems and Applications
|
[
{
"authorId": "17746180",
"name": "Hany F. Atlam"
},
{
"authorId": "144901593",
"name": "A. Alenezi"
},
{
"authorId": "145568788",
"name": "M. Alassafi"
},
{
"authorId": "144118261",
"name": "G. Wills"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"Int J Intell Syst Appl"
],
"alternate_urls": null,
"id": "f9b6bddb-d88a-424d-a940-2e092886ca71",
"issn": "2074-904X",
"name": "International Journal of Intelligent Systems and Applications",
"type": "journal",
"url": "http://www.mecs-press.org/ijisa/"
}
|
The Internet of Things (IoT) has extended the internet connectivity to reach not just computers and humans, but most of our environment things. The IoT has the potential to connect billions of objects simultaneously which has the impact of improving information sharing needs that result in improving our life. Although the IoT benefits are unlimited, there are many challenges facing adopting the IoT in the real world due to its centralized server/client model. For instance, scalability and security issues that arise due to the excessive numbers of IoT objects in the network. The server/client model requires all devices to be connected and authenticated through the server, which creates a single point of failure. Therefore, moving the IoT system into the decentralized path may be the right decision. One of the popular decentralization systems is blockchain. The Blockchain is a powerful technology that decentralizes computation and management processes which can solve many of IoT issues, especially security. This paper provides an overview of the integration of the blockchain with the IoT with highlighting the integration benefits and challenges. The future research directions of blockchain with IoT are also discussed. We conclude that the combination of blockchain and IoT can provide a powerful approach which can significantly pave the way for new business models and distributed applications.
|
Published Online June 2018 in MECS (http://www.mecs-press.org/)
DOI: 10.5815/ijisa.2018.06.05
# Blockchain with Internet of Things: Benefits,
Challenges, and Future Directions
## Hany F. Atlam
Electronic and Computer Science Dept., University of Southampton, Southampton, UK
Computer Science and Engineering Dept., Faculty of Electronic Engineering, Menoufia University, Menoufia, Egypt
E-mail: hfa1g15@soton.ac.uk
## Ahmed Alenezi, Madini O. Alassafi, Gary B. Wills
Electronic and Computer Science Dept., University of Southampton, Southampton, UK
E-mail: {aa4e15, moa2g15, gbw}@soton.ac.uk
Received: 04 November 2017; Accepted: 09 February 2018; Published: 08 June 2018
**_Abstract—The Internet of Things (IoT) has extended the_**
internet connectivity to reach not just computers and
humans, but most of the things in our environment. The IoT has
the potential to connect billions of objects simultaneously
which has the impact of improving information sharing
needs that result in improving our life. Although the IoT
benefits are unlimited, there are many challenges facing the
adoption of the IoT in the real world due to its centralized
server/client model. For instance, scalability and security
issues that arise due to the excessive numbers of IoT
objects in the network. The server/client model requires
all devices to be connected and authenticated through the
server, which creates a single point of failure. Therefore,
moving the IoT system into the decentralized path may be
the right decision. One of the popular decentralization
systems is blockchain. The Blockchain is a powerful
technology that decentralizes computation and
management processes, which can solve many IoT
issues, especially security. This paper provides an
overview of the integration of the blockchain with the IoT
while highlighting the integration benefits and challenges.
The future research directions of blockchain with IoT are
also discussed. We conclude that the combination of
blockchain and IoT can provide a powerful approach
which can significantly pave the way for new business
models and distributed applications.
**_Index Terms—Blockchain, Blockchain with IoT, Internet of Things, Centralized, Decentralized._**
I. INTRODUCTION
The Internet of Things (IoT) has the ability to connect billions of things and enable them to communicate simultaneously. It provides various benefits to consumers and will change the way users interact with technology. Using a collection of cheap sensors and interconnected objects, information can be collected from our environment, allowing us to improve our way of living [1].
The IoT concept is not new. In 1999, Ashton, the founder of the MIT Auto-ID Center, said, “The Internet of Things has the potential to change the world, just as the Internet did. Maybe even more so” [2]. Later, in 2005, the ITU officially defined the IoT as “a global infrastructure for the information society, enabling advanced services by interconnecting (physical and virtual) things based on existing and evolving, interoperable information and communication technologies” [3].
Current IoT systems are built on a centralized server/client model, which requires all devices to be connected and authenticated through the server. This model will not be able to meet the needs of the IoT system as it expands in the future [4]. Therefore, moving the IoT system onto a decentralized path may be the right decision. One of the popular decentralization platforms is blockchain.
A blockchain is a distributed database of records containing all transactions that have been executed and shared among participating parties in the network. This distributed database is called a distributed ledger. Each transaction is stored in the distributed ledger and must be verified by the consent of the majority of participants in the network. All transactions that have ever been made are contained in the blockchain. Bitcoin, the decentralized peer-to-peer digital currency, is the most popular example that uses blockchain technology [5].
Integrating IoT with blockchain will have many benefits. The decentralized model of the blockchain will be able to handle the processing of billions of transactions between IoT devices, which will significantly reduce the costs associated with installing and maintaining large centralized data centers and will distribute computation and storage needs across the billions of devices that form IoT networks. In addition, working with blockchain technology will eliminate the single point of failure associated with the centralized IoT architecture [6]. Moreover, integrating blockchain with IoT will allow peer-to-peer messaging, file distribution, and autonomous coordination between IoT devices with no need for the centralized server/client model [4].
This paper provides an overview of integrating blockchain with the IoT system; this involves an examination of the benefits resulting from the integration process and the implementation challenges encountered. The ultimate goal of this work is to provide a detailed description of the benefits and challenges of combining blockchain with IoT, so that an informed decision can be made on whether or not to decentralize the IoT.
The remainder of this paper is organized as follows: Section II presents related work discussing the integration of blockchain with IoT applications; Section III discusses the centralized IoT architecture; Section IV presents blockchain technology and its structure; Section V introduces essential characteristics of the blockchain; Section VI discusses how the blockchain works; Section VII presents blockchain with IoT; Section VIII illustrates the benefits of integrating blockchain with IoT; Section IX discusses the challenges of blockchain with IoT; Section X discusses future research directions; and Section XI concludes the paper.
II. RELATED WORK
The integration of blockchain with IoT has been investigated in a few papers. For instance, the IBM Autonomous Decentralized Peer-to-Peer Telemetry (ADEPT) project [7] leverages the blockchain to build a distributed network of devices. Like the ADEPT project, many other approaches try to design a solution able to merge the different blockchain-based applications [8]. Also, Slock.it introduced the first implementation of IoT and blockchain using the Ethereum platform [9]. So-called Slocks are real-world physical objects that can be controlled by the blockchain. They use the Ethereum Computer, a piece of electronics that brings blockchain technology to the entire home, making it possible to rent access to any compatible smart object and accept payments without intermediaries.
In addition, Dorri et al. [10] proposed a new secure, private, and lightweight blockchain-based architecture for IoT that eliminates blockchain overhead while maintaining most of its security and privacy benefits; it was investigated on a smart home application as a representative case study for broader IoT applications. The proposed architecture is hierarchical and consists of smart homes, an overlay network, and cloud storage coordinating data transactions with the blockchain to provide privacy and security.
Blockchain has also been introduced in healthcare IoT applications as a solution to many challenges facing the healthcare sector. For example, Gupta et al. [11] proposed an approach explaining how blockchain could enable an interoperable and secure exchange of electronic health records in which health consumers are the ultimate owners. They proposed storing only the metadata about health and medical events on the blockchain; otherwise, the blockchain infrastructure would have to scale massively to support complete health records. Metadata such as patient identity, visit ID, provider ID, and payer ID can thus be kept on a blockchain, while the actual records are stored in a separate universal health cloud.
Another study of blockchain in healthcare utilizes Ethereum's smart contracts to create representations of existing medical records [12]. These contracts are stored directly within individual nodes on the network. The authors proposed a solution called “MedRec” to structure the large amount of data into three types of contracts. The first is the Registrar Contract, which stores participants' identities with all the needed details and, of course, their public keys; this kind of identity registration can be restricted to certified institutions only. The second is the Patient-Provider Relationship Contract, issued when one node stores or manages data for another node, chiefly when there is a smart contract between a care provider and a patient. The last is the Summary Contract, which helps the patient locate their medical history; as a result of this contract, all previous and current engagements with other nodes in the system are listed.
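To make the three contract types concrete, here is a minimal Python sketch; MedRec itself implements these as Ethereum smart contracts, and all class names, field names, and values below are simplifying assumptions rather than the actual MedRec interfaces:

```python
class RegistrarContract:
    """Maps participant identities to public keys; registration could
    be restricted to certified institutions."""
    def __init__(self):
        self.identities = {}  # participant name -> public key

    def register(self, name, public_key):
        self.identities[name] = public_key


class PatientProviderRelationship:
    """Issued when one node (a provider) stores or manages data for
    another node (a patient); points to the off-chain record."""
    def __init__(self, patient, provider, record_pointer):
        self.patient = patient
        self.provider = provider
        self.record_pointer = record_pointer  # actual record lives off-chain


class SummaryContract:
    """Helps a patient locate all past and current engagements."""
    def __init__(self, patient):
        self.patient = patient
        self.relationships = []

    def add(self, relationship):
        self.relationships.append(relationship)


registrar = RegistrarContract()
registrar.register("patient_1", "pubkey_p1")
summary = SummaryContract("patient_1")
summary.add(PatientProviderRelationship(
    "patient_1", "clinic_A", "cloud://records/p1/visit42"))
print(len(summary.relationships), "engagement(s) listed for patient_1")
```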
III. CENTRALIZED IOT ARCHITECTURE
Basically, the IoT is the connection and communication of different devices over the Internet. These devices are networking nodes, whether servers or computers, connected together to share their data. All devices are provided with sensors, which collect data that can be transmitted, stored, analyzed, and presented in a useful way [13].
There is no single, commonly approved architecture for the IoT; different researchers and organizations have proposed different architectures. According to the ITU, the IoT architecture is composed of four layers [3], as shown in Fig. 1:
- Application layer
- Service support and application support layer
- Network Layer
- Device layer
Fig.1. IoT reference model and architecture [3]
The application layer encompasses the many IoT applications, such as healthcare, smart cities, connected cars, smart energy, and smart agriculture. The service support and application support layer contains common capabilities that can be used by different IoT applications [14]. The network layer includes devices such as routers, switches, gateways, and firewalls, which are used to construct the local and wide-area networks that provide Internet connectivity; it enables devices to communicate with one another and with application platforms such as computers, remote-control devices, and smartphones [13]. The device layer is similar to the physical layer of the Open Systems Interconnection (OSI) model of network architecture. It is composed of the physical devices and the controllers that control objects. These objects represent things in the IoT and include a wide range of endpoint devices that send and receive a variety of information, for instance, sensors that collect information about the surrounding environment [15].
The current IoT architecture is built as a centralized model, known as the server/client model, in which devices cannot talk to each other directly but instead talk to a centralized gateway. The centralized model has been used to connect a wide range of computing devices for many years and will continue to support small-scale IoT networks; however, it will not be capable of meeting the needs of the IoT system as it expands in the future [4].
The number of IoT devices will increase dramatically, such that network capacity will need to be at least 1,000 times the level of 2016. Cisco has reported that the number of IoT devices is about to reach 20 billion in 2020 [16]. The amount of communication that needs to be handled will therefore increase costs substantially. Even if costs and communication challenges are managed, the server/client model will still have a single point of failure that can interrupt the entire network [6].

In addition, the centralized model is vulnerable to data manipulation, since collecting real-time data does not ensure that the information is put to good and appropriate use. For example, if energy companies found that smart meter data analysis could provide evidence resulting in high costs or lawsuits, they might edit or delete those data [17].
A decentralized approach for the IoT would solve
many of these issues. One of the popular decentralization
techniques is blockchain. The next section discusses the
blockchain technology.
IV. BLOCKCHAIN TECHNOLOGY
Blockchain technology provides an efficient way of recording transactions, or any digital interaction, that is secure, transparent, highly resistant to outages, and auditable. The technology is still new and changing very fast, and its adoption in the commercial market is still a few years off. However, decision-makers across industries and business functions should pay attention now and start to investigate applications of this technology to avoid disruptive surprises or missed opportunities [6].
In 2008, Satoshi Nakamoto introduced the concept of Bitcoin by releasing the well-known paper “Bitcoin: A Peer-to-Peer Electronic Cash System” [18]. The paper proposed distributing electronic transactions rather than keeping them dependent on centralized institutions for exchange [19].

There are many definitions of the blockchain. According to [5], the blockchain is defined as “a distributed database of records, or public ledger of all transactions or digital events that have been executed and shared among participating parties”. Each transaction in the public ledger is verified by the consensus of a majority of the participants in the system, and once entered, information can never be erased. The blockchain contains a certain and verifiable record of every single transaction ever made [5].
A blockchain consists of two main elements [6], as shown in Fig. 2:

- _Transactions_: the actions generated by the participants in the system.
- _Blocks_: records of the transactions that ensure they are in the correct sequence and have not been tampered with.
Fig.2. Structure of blockchain [13]
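As a concrete illustration of these two elements, the following minimal Python sketch models a block that records transactions and links to its parent block through a hash. All field names are illustrative assumptions, not part of any real blockchain implementation:

```python
import hashlib
import json
import time

def sha256(data: dict) -> str:
    """Hash a dictionary deterministically with SHA-256."""
    return hashlib.sha256(json.dumps(data, sort_keys=True).encode()).hexdigest()

class Block:
    """A block records transactions in sequence and points to its parent."""
    def __init__(self, transactions, previous_hash):
        self.timestamp = time.time()
        self.transactions = transactions    # actions generated by participants
        self.previous_hash = previous_hash  # link to the parent block
        self.hash = sha256({
            "timestamp": self.timestamp,
            "transactions": self.transactions,
            "previous_hash": self.previous_hash,
        })

# Chaining two blocks: tampering with the first block would change its
# hash and break the link stored in the second.
genesis = Block([{"from": "A", "to": "B", "amount": 5}], previous_hash="0" * 64)
second = Block([{"from": "B", "to": "D", "amount": 10}], genesis.hash)
print(second.previous_hash == genesis.hash)  # True
```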
V. CHARACTERISTICS OF BLOCKCHAIN
The blockchain has many features that make it very attractive to the IoT as a way to solve many of its issues. As shown in Fig. 3, according to [10], the characteristics of blockchain include:
1. **Immutability:** Building immutable ledgers is one of the key values of blockchain. Centralized databases can be corrupted and commonly require trust in a third party to keep the information intact. Once a transaction has been agreed upon and recorded, it can never be changed.
2. **Decentralization:** The lack of centralized control ensures scalability and robustness by using the resources of all participating nodes and eliminating many-to-one traffic flows, which in turn decreases latency and solves the problem of the single point of failure that exists in the centralized model.
3. **Anonymity:** Anonymity provides an efficient way of hiding the identities of users and keeping them private.
4. **Better security:** Blockchain provides better security because there is no single point of failure that can shut down the entire network.
5. **Increased capacity:** One of the significant things about blockchain technology is that it can increase the capacity of an entire network: thousands of computers working together as a whole can have greater power than a few centralized servers.
Fig.3. Characteristics of blockchain
VI. HOW BLOCKCHAIN WORKS
Although the blockchain is still new and in an experimental stage, it is being perceived as a revolutionary solution to modern technology issues such as decentralization, identity, trust, data ownership, and data-driven decisions [7].

The blockchain is generally a database that stores all the transactions in blocks. When a new transaction is created, the sender broadcasts it over the peer-to-peer communication channel to all other nodes in the network. At this point the transaction is still new and unverified; when the nodes receive it, they validate it and keep it in their transaction pool [20].
Transaction validation is performed by running predefined checks on the structure and actions of the transaction. Special nodes called miners create a new block and include all or some of the available transactions from their transaction pool. The block is then mined, which is the process of finding the proof of work by varying data in the new block's header [20]. Finding the proof of work means continuously calculating a cryptographic hash until it fits the defined difficulty target. Mining requires a lot of processing power, and miners use dedicated mining hardware. The miner that first finds a solution for its block is the winner, and its candidate block becomes the new block in the chain. Because transactions are added to the mining block as they arrive, the latest block in the blockchain contains the latest transactions [4].
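The mining step described above can be sketched in a few lines of Python. This is a simplified illustration, assuming a leading-zeros difficulty target over an arbitrary header string, not Bitcoin's actual block-header format:

```python
import hashlib

def mine(block_header: str, difficulty: int):
    """Search for a nonce so that SHA-256(header + nonce) starts with
    `difficulty` zero hex digits -- a simplified proof of work."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_header}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest  # the winning nonce and the qualifying hash
        nonce += 1

nonce, digest = mine("prev_hash|transactions|timestamp", difficulty=4)
print(f"nonce={nonce}, hash={digest}")
```

Raising `difficulty` by one multiplies the expected number of hash attempts by sixteen, which is why mining demands so much processing power.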
When a new block is created, it is time-stamped and propagated to all network nodes. Every node receives the block, validates it and its transactions, and adds it to its ledger. Once the majority of nodes have accepted the block, it becomes an authorized and non-reversible part of the blockchain. In addition to transactions, every block stores some metadata and the hash value of the previous block, so every block has a pointer to its parent block. That is how the blocks are linked, creating the chain of blocks called a blockchain [4].

The distributed ledger is available for everyone in the network to check the blocks and the transactions within them. However, the users stay anonymous; they are identified only by their public keys, which serve as addresses. Moreover, the transactions are encrypted. Invalid transactions are rejected and are not included in blocks. A malicious attempt to change a transaction would require recalculating the proof of work for the affected block and all the blocks after it. These calculations are infeasible unless the majority of the nodes in the network are malicious [21].
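The claim that altering an old transaction invalidates every later block can be demonstrated with a short validation routine. The sketch below is illustrative only: proof of work is omitted and each block carries a single payload string:

```python
import hashlib

def block_hash(prev_hash: str, payload: str) -> str:
    return hashlib.sha256(f"{prev_hash}|{payload}".encode()).hexdigest()

# Build a tiny chain: each block stores its parent's hash and its own.
payloads = ["A->B:5", "B->D:10", "C->D:20"]
chain, prev = [], "0" * 64
for p in payloads:
    h = block_hash(prev, p)
    chain.append({"prev": prev, "payload": p, "hash": h})
    prev = h

def valid(chain) -> bool:
    """A chain is valid only if every stored hash matches a recomputation
    and every block points at its predecessor's hash."""
    for i, blk in enumerate(chain):
        if blk["hash"] != block_hash(blk["prev"], blk["payload"]):
            return False
        if i > 0 and blk["prev"] != chain[i - 1]["hash"]:
            return False
    return True

print(valid(chain))               # True
chain[0]["payload"] = "A->B:500"  # malicious edit of an old transaction
print(valid(chain))               # False -- every later block must be redone
```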
Fig.4. Simple example of blockchain technology
This section discusses the blockchain with a simple example. Suppose that we have four nodes A, B, C, and D that want to use the blockchain to transfer money, as in Bitcoin. To transfer money from one node to another, there is no intermediary third party handling the transfer, which is the idea of decentralization; if node A wants to transfer money to node B, it is transferred directly. As shown in Fig. 4, suppose node A wants to send £5 to node B; a transaction is then created and verified by all other nodes in the network before being included in the ledger. Likewise, if node B wants to send £10 to node D, a transaction is created and verified by all other nodes in the network before being included in the ledger. The same happens when node C wants to send £20 to node D. All the transactions are chained together in what is called a ledger. This ledger is distributed across all nodes in the network to make sure that every node has the same copy, or version, of the ledger; that is why it is called a distributed ledger.
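The same four-node scenario can be paraphrased in a few lines of Python. This sketch shows only the idea that every node holds an identical copy of the ledger; verification and mining are deliberately omitted:

```python
# Each node keeps its own full copy of the same ledger (a distributed ledger).
nodes = {name: [] for name in "ABCD"}

def broadcast(tx):
    """Every node receives the transaction and appends it to its own copy."""
    for ledger in nodes.values():
        ledger.append(tx)

broadcast({"from": "A", "to": "B", "amount": 5})
broadcast({"from": "B", "to": "D", "amount": 10})
broadcast({"from": "C", "to": "D", "amount": 20})

# All copies are identical, so any node can audit the full history.
assert all(ledger == nodes["A"] for ledger in nodes.values())
print(len(nodes["D"]), "transactions in node D's copy")
```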
VII. BLOCKCHAIN WITH IOT
The IoT is an interesting, developing system that provides unlimited benefits, but there are many challenges with the current centralized IoT architecture, in which all devices are identified, authenticated, and connected through centralized servers [4]. This model has been used to connect a wide range of computing devices for many years and will continue to support small-scale IoT networks; however, it will not be capable of meeting the needs of the IoT system as it expands in the future [22].
Table 1 presents a comparison between blockchain and IoT. Both technologies have many advantages, which can be combined to achieve an improved outcome. The IoT has unlimited benefits, and adopting a decentralized approach for the IoT would solve many issues, especially security. Adopting a standardized peer-to-peer communication model to process the hundreds of billions of transactions between devices will significantly reduce the costs associated with installing and maintaining large centralized data centers and will distribute computation and storage needs across the billions of devices that form IoT networks. This will prevent a failure in any single node of a network from bringing the entire network to a halting collapse [17, 18].
Table 1. Comparison between blockchain and IoT

|Blockchain|IoT|
|---|---|
|Decentralized|Centralized|
|Resource consuming|Resource restricted|
|Block mining is time-consuming|Demands low latency|
|Scales poorly with large networks|Expected to contain a large number of devices|
|High bandwidth consumption|Devices have limited bandwidth and resources|
|Has better security|Security is one of its big challenges|
The decentralized, autonomous, and trustless capabilities of the blockchain make it an ideal component to become a foundational element of IoT solutions. It is no surprise that enterprise IoT technologies have quickly become one of the early adopters of blockchain technology. However, establishing peer-to-peer communications will present its own set of challenges, especially regarding security. IoT security is about much more than just protecting sensitive data. Blockchain solutions will therefore have to maintain privacy and security in IoT networks and use validation and the consent of participants for transactions to prevent spoofing and theft [6].
In addition, blockchain technology is considered a key solution to privacy and reliability issues in the IoT. It can be used to track billions of connected devices, enabling the processing of transactions and coordination between devices, which allows significant savings for IoT industry manufacturers [24]. Moreover, this decentralized approach would eliminate single points of failure, creating a more resilient system for devices to run on, and the cryptographic algorithms used by blockchains would make consumer data more private [25].
In an IoT network, the blockchain can keep an immutable record of the history of smart devices. This feature enables the autonomous functioning of smart devices without the need for a centralized authority [26]. As a result, the blockchain will open up a series of IoT scenarios that were difficult, or even impossible, to implement without it. For example, by leveraging the blockchain, IoT solutions can enable secure, trustless messaging between devices in an IoT network [27]. In this model, the blockchain treats message exchanges between devices similarly to financial transactions in the Bitcoin network. To enable message exchanges, devices leverage smart contracts, which model the agreement between the two parties [20].
One of the most exciting capabilities of the blockchain is the ability to maintain a truly decentralized, trusted ledger of all transactions occurring in a network. This capability is essential for meeting the many compliance and regulatory requirements of industrial IoT (IIoT) applications without the need to rely on a centralized model [6].
Many large organizations have started to adopt blockchain within IoT systems to reap the benefits of the blockchain. For instance, IBM, in partnership with Samsung, has developed the ADEPT (Autonomous Decentralized Peer-to-Peer Telemetry) platform, which uses elements of Bitcoin's underlying design to build a distributed network of devices, a decentralized IoT. ADEPT uses three protocols in the platform: BitTorrent (file sharing), Ethereum (smart contracts), and TeleHash (peer-to-peer messaging) [28].
VIII. BENEFITS OF INTEGRATING BLOCKCHAIN WITH IOT
There are many benefits to adopting blockchain with IoT, as shown in Fig. 5. These benefits can be summarized as follows:
1. **Publicity:** All participants can see all the transactions and all the blocks, as each participant has its own copy of the ledger. The content of a transaction is protected by the participant's private key [19], so even though all participants can see transactions, those transactions remain protected. The IoT is a dynamic system in which all connected devices can share information while at the same time protecting users' privacy.
2. **Decentralization:** The majority of participants must verify a transaction in order to approve it and add it to the distributed ledger. There is no single authority that can approve transactions or set specific rules for having transactions accepted; therefore, there is a massive amount of trust involved, since the majority of the participants in the network have to reach an agreement to validate transactions [28]. The blockchain will thus provide a secure platform for IoT devices, in addition to eliminating the centralized traffic flows and single point of failure of the current centralized IoT architecture.
3. **Resiliency:** Each node has its own copy of the ledger, containing all transactions ever made in the network, so the blockchain is better able to withstand attack: even if one node is compromised, the blockchain is maintained by every other node [29]. Having a copy of the data at each node in the IoT will improve information sharing, although it introduces new processing and storage issues.
4. **Security:** Blockchain has the ability to provide a secure network over untrusted parties, which is needed in the IoT with its numerous and heterogeneous devices [10]. In other words, an attack would require the majority of IoT network nodes to be malicious.
5. **Speed:** A blockchain transaction is distributed across the network in minutes and can be processed at any time of day [16].
6. **Cost saving:** Existing IoT solutions are expensive because of the high infrastructure and maintenance costs associated with centralized architecture, large server farms, and networking equipment; the total amount of communication that will have to be handled when there are tens of billions of IoT devices would increase those costs substantially [30]. A decentralized ledger avoids much of this expense.
7. **Immutability:** Having an immutable ledger is one of the main advantages of blockchain technology. Any change to the distributed ledger must be verified by the majority of the network nodes; therefore, transactions cannot easily be altered or deleted [14, 25]. Having an immutable ledger for IoT data will increase security and privacy, which are the major challenges for this and every new technology.
8. **Anonymity:** To process a transaction, both buyer and seller use anonymous, unique address numbers which keep their identities private. This feature has been criticised because it facilitates the use of cryptocurrencies in illegal online markets, but it can be seen as an advantage when used for other purposes, for example, electoral voting systems [14, 26].
Fig.5. Benefits of integrating blockchain with IoT
IX. CHALLENGES OF BLOCKCHAIN WITH IOT
There is no doubt that integrating blockchain with IoT would have many advantages. However, blockchain technology is not a perfect model; it has its own flaws and challenges, as shown in Fig. 6. These challenges can be summarized as follows:
1. **Scalability:** Scalability issues in the blockchain might lead to centralization, which casts a shadow over the future of the cryptocurrency. The blockchain scales poorly as the number of nodes in the network increases, and this issue is serious because IoT networks are expected to contain a large number of nodes [28].
2. **Processing power and time:** The processing power and time needed to perform encryption for all the objects included in a blockchain system is a concern. IoT systems contain many types of devices with very different computing capabilities, and not all of them will be able to run the same encryption algorithms at the required speed [14, 27].
3. **Storage:** One of the main benefits of blockchain is that it eliminates the need for a central server to store transactions and device IDs, but the ledger has to be stored on the nodes themselves [33]. The distributed ledger will increase in size as time passes and as the number of nodes in the network grows, whereas, as noted earlier, IoT devices have low computational resources and very low storage capacity [34]; a back-of-envelope growth estimate follows this list.
4. **Lack of skills:** Blockchain technology is still new, so few people have deep knowledge of and skills with the blockchain, and those mostly in banking; in other applications, there is a widespread lack of understanding of how the blockchain works [6]. IoT devices exist everywhere, so adopting the blockchain with IoT will be very difficult without public awareness of the blockchain.
5. **Legal and compliance:** The blockchain is a new technology that can connect different people from different countries without any legal or compliance code to follow, which is a serious issue for both manufacturers and service providers. This challenge will be a major barrier to adopting blockchain in many businesses and applications [35].
6. **Naming and discovery:** Blockchain technology was not designed for the IoT, meaning that nodes were not meant to find each other in the network. An example is the Bitcoin application, in which the IP addresses of some seed nodes are embedded within the Bitcoin client and used by nodes to build the network topology. This approach will not work for the IoT, as IoT devices keep moving all the time, continuously changing the topology [23].
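To give a feel for the storage challenge in item 3, here is a back-of-envelope estimate in Python. Every figure below is an assumption made purely for illustration; only the 20 billion device forecast comes from the text above:

```python
# Rough, illustrative estimate of distributed-ledger growth for IoT.
devices = 20_000_000_000        # ~20 billion devices (Cisco forecast cited above)
tx_per_device_per_day = 1       # assumed: one transaction per device per day
bytes_per_tx = 250              # assumed: ~250 bytes stored per transaction

daily_gb = devices * tx_per_device_per_day * bytes_per_tx / 1e9
print(f"Ledger growth: ~{daily_gb:,.0f} GB per day "
      f"(~{daily_gb * 365 / 1e3:,.0f} TB per year)")
# ~5,000 GB per day, ~1,825 TB per year -- far beyond what a
# storage-constrained IoT node could hold locally.
```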
Fig.6. Blockchain and IoT challenges
X. FUTURE RESEARCH DIRECTIONS
The blockchain has changed the concept of centralized authorities. The integration of blockchain with IoT will be the starting point for new businesses and applications. This section discusses future research directions for blockchain with IoT, which can be summarized as follows:
_A. Smart Contracts_
Smart contracts are scripts stored on the blockchain. They are powerful because of their flexibility: they can encrypt and store data securely, restrict access to data to only the desired parties, and be programmed to use the data within a self-executing logical workflow of operations between parties. Smart contracts translate business processes into computational processes, greatly improving operational efficiency [5].
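As a flavour of such a self-executing workflow in an IoT setting, the Python sketch below (not a real smart-contract platform API; all names and values are illustrative assumptions) releases a payment only when a sensor reading satisfies an agreed condition:

```python
class SmartContract:
    """A toy self-executing agreement: the logic and state that would
    live on-chain, triggered automatically by incoming events."""
    def __init__(self, buyer, seller, price, threshold):
        self.buyer, self.seller = buyer, seller
        self.price = price
        self.threshold = threshold   # agreed condition, e.g. max temperature
        self.settled = False

    def on_sensor_reading(self, reading: float) -> str:
        """Runs automatically when an IoT device reports a reading."""
        if self.settled:
            return "already settled"
        if reading <= self.threshold:
            self.settled = True
            return f"{self.buyer} pays {self.price} to {self.seller}"
        return "condition not met; no payment released"

contract = SmartContract("utility", "sensor_owner", price=10, threshold=8.0)
print(contract.on_sensor_reading(9.5))  # condition not met
print(contract.on_sensor_reading(7.2))  # payment released automatically
```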
Using smart contracts within IoT systems will provide an efficient way to improve the security and integrity of IoT data. The research questions that need to be addressed regarding running smart contracts within IoT systems are:

Q1: Are smart contracts able to execute all the event functions of IoT devices, which number in the billions?

Q2: How will a smart contract respond to the changing environmental conditions of the IoT, given that it is a dynamic system?

Q3: What is the appropriate platform for implementing smart contracts within IoT systems?
_B. Regulatory Laws_
Regulatory laws are the procedures created by authorities and local administrative agencies to define legal ways of working with a product or technology within a certain country or region. As said earlier, the blockchain is a new technology that does not yet have any legal or compliance code to follow. The research question that needs to be addressed regarding blockchain's legal and compliance issues is:

Q1: What regulatory rules would ensure best practice for blockchain in IoT globally?
_C. Security_
For all new technologies, security remains the most challenging topic, attracting the attention of researchers and organizations alike. Integrating blockchain with IoT can improve security, as blockchain uses the consent of the majority of participants to validate transactions, preventing spoofing and theft. However, IoT devices have low computational resources and little storage space, and may not be able to process cryptographic algorithms. The research questions that need to be addressed regarding security are:

Q1: What is the optimum platform for IoT to integrate with blockchain?

Q2: How can the low capabilities of IoT devices be overcome to provide a secure IoT system?
_D. IOTA_
[IOTA](https://iota.org/) is a new generation of public, distributed ledger that uses a concept called the “Tangle”. The Tangle is a new data structure based on a Directed Acyclic Graph (DAG). IOTA provides efficient, secure, lightweight, real-time transactions without fees. It is an open-source, decentralized cryptocurrency designed specifically for the IoT [36].
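A toy sketch of the Tangle idea follows, under the simplifying assumption that each new transaction approves up to two earlier transactions chosen at random; the real IOTA tip-selection algorithm is far more involved:

```python
import random

# transaction id -> ids of the earlier transactions it approves;
# the result is a directed acyclic graph (DAG) rather than a chain.
tangle = {"genesis": []}

def attach(tx_id: str):
    """Attach a new transaction by approving up to two earlier ones."""
    earlier = list(tangle)
    tangle[tx_id] = random.sample(earlier, k=min(2, len(earlier)))

for i in range(5):
    attach(f"tx{i}")

for tx, approved in tangle.items():
    print(tx, "approves", approved)
```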
As IOTA is designed specifically for the IoT, it may be better suited to various IoT applications; however, it is still under development. The research questions that need to be addressed regarding IOTA are:

Q1: What is the appropriate decentralization technology for the IoT, blockchain or IOTA?

Q2: What are the major challenges with IOTA?
XI. CONCLUSION
IoT technology has extended its reach to almost every home. It has the ability to connect everyday objects to the Internet; through cheap sensors, a lot of information can be collected from the surrounding environment, improving our lives. However, the current IoT architecture, based on the server/client model, has many issues that need to be addressed, especially scalability and security. One solution to these IoT issues is blockchain. Blockchain provides a distributed peer-to-peer communication network where non-trusting nodes can interact with each other, without a trusted intermediary, in a verifiable manner. In this paper, we provided an overview of integrating blockchain with IoT, highlighting the benefits and challenges, and the discussion also covered future research directions. We conclude that integrating blockchain with IoT can bring many advantages that alleviate many IoT issues; at the same time, it introduces new challenges that should be addressed. More research is still needed to investigate the implementation of blockchain with IoT in greater detail.
ACKNOWLEDGMENT
We acknowledge the Egyptian Cultural Affairs and Missions Sector and Menoufia University for the scholarship awarded to Hany Atlam, which allowed this research to be funded and undertaken.
REFERENCES
[1] H. F. Atlam, A. Alenezi, R. J. Walters, and G. B. Wills,
“An Overview of Risk Estimation Techniques in Risk-based Access Control for the Internet of Things,” in
_Proceedings of the 2nd International Conference on_
_Internet of Things, Big Data and Security (IoTBDS 2017),_
2017, pp. 254–260.
[2] K. Ashton, “That ‘Internet of Things’ Thing,” RFiD J., p.
4986, 2009.
[3] ITU, “Overview of the Internet of things,” Ser. Y Glob. Inf.
_infrastructure, internet Protoc. Asp. next-generation_
_networks - Fram. Funct. Archit. Model., p. 22, 2012._
[4] E. Karafiloski, “Blockchain Solutions for Big Data
Challenges A Literature Review,” in _IEEE EUROCON_
_2017_ _-17th_ _International_ _Conference_ _on_ _Smart_
_Technologies, 2017, no. July, pp. 6–8._
[5] A. Stanciu, “Blockchain based distributed control system
for Edge Computing,” in _21st International Conference_
_on Control Systems and Computer Science Blockchain,_
2017, pp. 667–671.
[6] A. Banafa, “IoT and Blockchain Convergence: Benefits
and Challenges,” _IEEE IoT Newsletter, 2017. [Online]._
Available: http://iot.ieee.org/newsletter/january-2017/iot-and-blockchain-convergence-benefits-and-challenges.html.
[7] IBM, “ADEPT: An IoT Practitioner Perspective,” 2015.
[8] J. H. Ziegeldorf, F. Grossmann, M. Henze, N. Inden, and
K. Wehrle, “CoinParty: Secure Multi-Party Mixing of
Bitcoins,” in Proceedings of the 5th ACM Conference on
_Data_ _and_ _Application_ _Security_ _and_ _Privacy_ _-_
_CODASPY ’15, 2015, no. August, pp. 75–86._
[9] C. Jentzsch, “Decentralized Autonomous Organization to
Automate Governance,” white Pap., pp. 1–30, 2016.
[10] A. Dorri, S. S. Kanhere, and R. Jurdak, “Blockchain in
internet of things: Challenges and Solutions,”
_arXiv1608.05187 [cs], no. August, 2016._
[11] N. Gupta, A. Jha, and P. Roy, “Adopting Blockchain
Technology for Electronic Health Record
Interoperability,” 2016.
[12] A. Ekblaw, A. Azaria, J. D. Halamka, and A. Lippman,
“A Case Study for Blockchain in Healthcare:‘MedRec’
prototype for electronic health records and medical
research data,” _Proc. IEEE Open Big Data Conf., pp. 1–_
13, 2016.
[13] W. Stallings, “The Internet of Things: Network and
Security Architecture,” Internet Protoc. J., vol. 18, no. 4,
pp. 2–24, 2015.
[14] A. Torkaman and M. A. Seyyedi, “Analyzing IoT
Reference Architecture Models,” _Int. J. Comput. Sci._
_Softw. Eng. ISSN, vol. 5, no. 8, pp. 2409–4285, 2016._
[15] Cisco, “The Internet of Things Reference Model,” _White_
Pap., pp. 1–12, 2014.
[16] N. Kshetri, “Can blockchain Strengthen the Internet of Things?,” _IEEE Computer Society_, no. August, pp. 68–72, 2017.
[17] M. Conoscenti, D. Torino, A. Vetr, D. Torino, and J. C.
De Martin, “Peer to Peer for Privacy and Decentralization
in the Internet of Things,” in 2017 IEEE/ACM 39th IEEE
_International Conference on Software Engineering_
_Companion Peer, 2017, pp. 288–290._
[18] S. Nakamoto, “Bitcoin: A Peer-to-Peer Electronic Cash
System,” Www.Bitcoin.Org, p. 9, 2008.
[19] T. Ahram, A. Sargolzaei, S. Sargolzaei, J. Daniels, and B.
Amaba, “Blockchain technology innovations,” 2017 IEEE
_Technol. Eng. Manag. Conf., no. 2016, pp. 137–141, 2017._
[20] M. Conoscenti, A. Vetro, and J. C. De Martin,
“Blockchain for the Internet of Things: A systematic
literature review,” _2016 IEEE/ACS 13th Int. Conf._
_Comput. Syst. Appl., pp. 1–6, 2016._
[21] A. M. Antonopoulos, _Mastering Bitcoin: Unlocking_
_Digital Cryptocurrencies., M 1st ed. Sebastopol, CA,_
USA: O’Reilly Media, Inc., 2014.
[22] A. Dorri, S. S. Kanhere, R. Jurdak, and P. Gauravaram,
“Blockchain for IoT security and privacy: The case study
of a smart home,” _2017 IEEE Int. Conf. Pervasive_
_Comput. Commun. Work. (PerCom Work., pp. 618–623,_
2017.
[23] V. Daza, R. Di Pietro, I. Klimek, and M. Signorini,
“CONNECT: CONtextual NamE disCovery for
blockchain-based services in the IoT,” _IEEE Int. Conf._
_Commun., 2017._
[24] H. F. Atlam, A. Alenezi, R. J. Walters, G. B. Wills, and J.
Daniel, “Developing an adaptive Risk-based access
control model for the Internet of Things,” in _2017 IEEE_
_International Conference on Internet of Things (iThings)_
_and IEEE Green Computing and Communications_
_(GreenCom) and IEEE Cyber, Physical and Social_
_Computing (CPSCom) and IEEE Smart Data (SmartData),_
2017, no. June, pp. 655–661.
[25] A. Boudguiga _et al., “Towards Better Availability and_
_Accountability for IoT Updates by means of a_
Blockchain,” in _2017 IEEE European Symposium on_
_Security and Privacy Workshops (EuroS&PW), 2017, pp._
50–58.
[26] H. F. Atlam, A. Alenezi, A. Alharthi, R. Walters, and G.
Wills, “Integration of cloud computing with internet of
things: challenges and open issues,” in _2017 IEEE_
_International Conference on Internet of Things (iThings)_
_and IEEE Green Computing and Communications_
_(GreenCom) and IEEE Cyber, Physical and Social_
_Computing (CPSCom) and IEEE Smart Data (SmartData),_
2017, no. June, pp. 670–675.
[27] H. F. Atlam, A. Alenezi, R. K. Hussein, and G. B. Wills,
“Validation of an Adaptive Risk-based Access Control
Model for the Internet of Things,” I.J. Comput. Netw. Inf.
_Secur., no. January, pp. 26–35, 2018._
[28] M. Samaniego and R. Deters, “Blockchain as a Service
for IoT,” _2016 IEEE Int. Conf. Internet Things IEEE_
_Green Comput. Commun. IEEE Cyber, Phys. Soc. Comput._
_IEEE Smart Data, pp. 433–436, 2016._
[29] D. Geist, “Using the Bitcoin Blockchain as a Botnet
Resilience Mechanism,” 2016.
[30] K. Christidis and G. S. Member, “Blockchains and Smart
Contracts for the Internet of Things,” IEEE Access, vol. 4,
pp. 2292–2303, 2016.
[31] S. Huh, S. Cho, and S. Kim, “Managing IoT Devices
using Blockchain Platform,” in _The_ _19th_ _IEEE_
_International Conference on Advanced Communications_
_Technology (ICACT 2017), 2017, pp. 464–467._
[32] T. Bocek, B. B. Rodrigues, T. Strasser, and B. Stiller,
“Blockchains Everywhere - A Use-case of Blockchains in
the Pharma Supply-Chain,” in _2017_ _IFIP/IEEE_
_International_ _Symposium_ _on_ _Integrated_ _Network_
_Management (IM2017):, 2017, pp. 772–777._
[33] A. Alenezi, N. H. N. Zulkipli, H. F. Atlam, R. J. Walters,
and G. B. Wills, “The Impact of Cloud Forensic
Readiness on Security,” in _Proceedings of the 7th_
_International Conference on Cloud Computing and_
_Services Science (CLOSER 2017), 2017, pp. 511–517._
[34] H. F. Atlam, G. Attiya, and N. El-Fishawy, “Integration of Color and Texture Features in CBIR System,” _Int. J. Comput. Appl._, vol. 164, no. April, pp. 23–28, 2017.
[35] D. Asatryan, “4 Challenges to Blockchain Adoption From Fidelity CEO,” 2017.
[36] H. F. Atlam, M. O. Alassafi, A. Alenezi, R. J. Walters,
and G. B. Wills, “XACML for Building Access Control
Policies in Internet of Things,” in _Proceedings of the_
_3rd International Conference on Internet of Things, Big_
_Data and Security (IoTBDS 2018), 2018, pp. 1–6._
**Authors’ Profiles**
**Hany F. Atlam** was born in Menoufia, Egypt, in 1988. He completed his Bachelor of Engineering in computer science at the Faculty of Electronic Engineering, Menoufia University, Egypt, in 2011, and a master's degree in computer science at the same university in 2014. He joined the University of Southampton as a Ph.D. student in January 2016. Hany is now a lecturer in the Faculty of Electronic Engineering, Menoufia University, Egypt, and a Ph.D. candidate at the University of Southampton, UK.

He has extensive experience in networking, holding international Cisco certifications, Cisco instructor certifications, and database certifications. He is also a member of the Institute for Systems and Technologies of Information, Control and Communication (INSTICC) and of the Institute of Electrical and Electronics Engineers (IEEE). Hany's research areas include IoT security and privacy, cloud computing security, blockchain, big data, digital forensics, computer networking, and image processing.
**Ahmed Alenezi** is a lecturer at Northern Border University, Saudi Arabia, and a Ph.D. candidate at the University of Southampton, UK. Ahmed is interested in multidisciplinary research topics related to computer science. His research interests include parallel computing, digital forensics, cloud forensics, cloud security, Internet of Things forensics, and Internet of Things security.
**Madini O. Alassafi** was born in Saudi Arabia. He received a Bachelor's degree in Computer Science from King Abdul-Aziz University, Saudi Arabia, in 2006, and a master's degree in Advanced Computer Science from California Lutheran University, Thousand Oaks, USA, in 2013. He works as a lecturer at King Abdul-Aziz University, Saudi Arabia, and is now a Ph.D. candidate at the University of Southampton, UK. His current research interests span multidisciplinary topics pertaining to computer science, including but not limited to cloud computing, security, risks, cloud migration project management, the Cloud of Things, and security threats.
**Gary B. Wills** is an Associate Professor in Computer Science at the University of Southampton. He graduated from the University of Southampton with an Honours degree in Electromechanical Engineering and then a PhD in Industrial Hypermedia systems. He is a Chartered Engineer, a member of the Institution of Engineering and Technology, and a Principal Fellow of the Higher Education Academy. He is also a visiting associate professor at the University of Cape Town and a research professor at [RLabs](http://www.rlabs.org/). Gary's research projects focus on secure systems engineering and applications for industry, medicine, and education.
**How to cite this paper:** Hany F. Atlam, Ahmed Alenezi,
Madini O. Alassafi, Gary B. Wills, "Blockchain with Internet of
Things: Benefits, Challenges, and Future Directions",
International Journal of Intelligent Systems and
Applications(IJISA), Vol.10, No.6, pp.40-48, 2018. DOI:
10.5815/ijisa.2018.06.05
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.5815/IJISA.2018.06.05?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.5815/IJISA.2018.06.05, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "GOLD",
"url": "http://www.mecs-press.org/ijisa/ijisa-v10-n6/IJISA-V10-N6-5.pdf"
}
| 2018
|
[
"Review"
] | true
| 2018-06-08T00:00:00
|
[
{
"paperId": "d723c3da0c3b55212092afe755a8d2a8bc4a2ba0",
"title": "Anonymity"
},
{
"paperId": "7b76cdb354e120f0473e0aaf61e4d36a932f493f",
"title": "XACML for Building Access Control Policies in Internet of Things"
},
{
"paperId": "4c6f3d83b1e3097c642931a9ea3bd1fedf447e75",
"title": "Validation of an adaptive risk-based access control model for the Internet of Things"
},
{
"paperId": "e8709e2906361ade9064cc605b9c7637bec474a0",
"title": "Can Blockchain Strengthen the Internet of Things?"
},
{
"paperId": "37d9283061bb8057adff53ff4033dd11ccdf2a0c",
"title": "Blockchain solutions for big data challenges: A literature review"
},
{
"paperId": "28924ee0429260d4c90e3c9c61760b974bc4b466",
"title": "Integration of Cloud Computing with Internet of Things: Challenges and Open Issues"
},
{
"paperId": "9092a7802f6e56dd5b6d1be30c8b5588a22e53fe",
"title": "Blockchain technology innovations"
},
{
"paperId": "9ee3c4be30d0e33ed9e43e53ebd7a1ab98e67217",
"title": "Developing an Adaptive Risk-Based Access Control Model for the Internet of Things"
},
{
"paperId": "45fd39f70614062cdfc59309b64e9d48892d993d",
"title": "Blockchain Based Distributed Control System for Edge Computing"
},
{
"paperId": "04b0873fdf91e4ef79090eda55181afe434456c6",
"title": "Peer to Peer for Privacy and Decentralization in the Internet of Things"
},
{
"paperId": "0356360ce4e31a901f5cc48b090af30f56bb3f2d",
"title": "Blockchains everywhere - a use-case of blockchains in the pharma supply-chain"
},
{
"paperId": "33d0d59b44a93a985140a746a9300ebcb843d4a9",
"title": "CONNECT: CONtextual NamE disCovery for blockchain-based services in the IoT"
},
{
"paperId": "7400152e6e475eaa2f88a91182df7991c9519156",
"title": "Towards Better Availability and Accountability for IoT Updates by Means of a Blockchain"
},
{
"paperId": "a2acc206b32850719a6d4903f7a649d84d365033",
"title": "Integration of Color and Texture Features in CBIR System"
},
{
"paperId": "ba022e0f1b3abbb3e74ef84c7f8e3be08c806e77",
"title": "An Overview of Risk Estimation Techniques in Risk-based Access Control for the Internet of Things"
},
{
"paperId": "28fe6a3fab2f2097a6f9aac5ae9799577badf883",
"title": "Blockchain for IoT security and privacy: The case study of a smart home"
},
{
"paperId": "c04cfb8074797c8fdb688da0b64f1dd19c49773a",
"title": "IoT and Blockchain Convergence: Benefits and Challenges"
},
{
"paperId": "631cc57858eb1a94522e0090c6640f6f39ab7e18",
"title": "Blockchain as a Service for IoT"
},
{
"paperId": "451729b3faedea24771ac4aadbd267146688db9b",
"title": "Blockchain in internet of things: Challenges and Solutions"
},
{
"paperId": "c998aeb12b78122ec4143b608b517aef0aa2c821",
"title": "Blockchains and Smart Contracts for the Internet of Things"
},
{
"paperId": "139cfb65d375bba4ca59acc19efb0b7ac99247dc",
"title": "CoinParty: Secure Multi-Party Mixing of Bitcoins"
},
{
"paperId": "148f044225ce7433e5fcf2c214b3bb48d94f37ef",
"title": "Mastering Bitcoin: Unlocking Digital Crypto-Currencies"
},
{
"paperId": "24711b2a7a4dc4d0dad74bbbfeea9140abab047b",
"title": "Managing IoT devices using blockchain platform"
},
{
"paperId": "447c7fb53a6528cabb445209e03e59114e8bfc0b",
"title": "The Impact of Cloud Forensic Readiness on Security"
},
{
"paperId": null,
"title": "“4 Challenges to Blockchain Adoption From Fidelity CEO,”"
},
{
"paperId": "7e07327ce9bdd65791968f59b508e2047fb7d1b9",
"title": "Analyzing IoT Reference Architecture Models"
},
{
"paperId": "3ed0db58a7aec7bafc2aa14ca550031b9f7021d5",
"title": "A Case Study for Blockchain in Healthcare : “ MedRec ” prototype for electronic health records and medical research data"
},
{
"paperId": null,
"title": "“Decentralized Autonomous Organization to Automate Governance,”"
},
{
"paperId": null,
"title": "“Adopting Blockchain Technology for Electronic Health Record Interoperability,”"
},
{
"paperId": null,
"title": "“Using the Bitcoin Blockchain as a Botnet Resilience Mechanism,”"
},
{
"paperId": null,
"title": "“The Internet of Things: Network and Security Architecture,”"
},
{
"paperId": null,
"title": "ADEPT: An IoT Practitioner Perspective"
},
{
"paperId": "539091726bef30c8837dc197e89ae7f5f0d6dbf8",
"title": "Overview of the Internet of Things"
},
{
"paperId": "4e9ec92a90c5d571d2f1d496f8df01f0a8f38596",
"title": "Bitcoin: A Peer-to-Peer Electronic Cash System"
},
{
"paperId": "4ea759dc35b564d4d795c554f407fdb8652b8bed",
"title": "That ‘Internet of Things’ Thing"
},
{
"paperId": null,
"title": "Speed : A blockchain transaction is distributed across the network in minutes and will be processed at any time throughout the day"
},
{
"paperId": null,
"title": "Security : Blockchain has the ability to provide a secure network over untrusted parties which is needed in IoT with numerous and heterogeneous devices"
},
{
"paperId": null,
"title": "Lack of skills : The blockchain technology is still new. Therefore, a few people have large knowledge and skills about the blockchain, especially in banking"
},
{
"paperId": null,
"title": "Processing Power and Time : The processing power and time needed to achieve encryption for all the objects included in a blockchain system"
}
] | 10,813
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Engineering",
"source": "s2-fos-model"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Environmental Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/0118bd5580dc6d23a0c0b67241c735b25e405e09
|
[
"Computer Science"
] | 0.886177
|
Communication Requirements and Deployment Challenges of Cloudlets in Smart Grid
|
0118bd5580dc6d23a0c0b67241c735b25e405e09
|
International Conference on Smart Communications and Networking
|
[
{
"authorId": "73771375",
"name": "Stephen Ugwuanyi"
},
{
"authorId": "9136586",
"name": "Jidapa Hansawangkit"
},
{
"authorId": "145678109",
"name": "Rabia Khan"
},
{
"authorId": "39158735",
"name": "Kinan Ghanem"
},
{
"authorId": "153758455",
"name": "Ross McPherson"
},
{
"authorId": "143667221",
"name": "J. Irvine"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"SmartNets",
"Int Conf Smart Commun Netw"
],
"alternate_urls": null,
"id": "661ae2a4-ae88-4ce1-a7a3-73bb04510f92",
"issn": null,
"name": "International Conference on Smart Communications and Networking",
"type": "conference",
"url": null
}
|
Intelligent and distributed power networks are becoming more complex with the addition of cloudlet infrastructure. This paper is the initial output of a Low Latency Edge Containerisation and Virtualisation project at PNDC, which investigates the performance and communication requirements of deploying edge computing devices to enable low-latency applications. The edge network illustrates how additional intelligent network resources can help realise better distributed automation and remote configuration in smart grids. With cloudlets, the performance of smart grid networks will be enhanced in terms of low latency, higher bandwidth, and other network requirements. This paper presents the current state of the art of cloudlet technology, including network design requirements, implementation techniques, and integration challenges with legacy power networks. It also explores the main challenges of processing smart grid data on constrained IoT devices. The feasibility of using edge containers to provide low-latency communication to several critical end applications in the smart grid could improve the performance of these networks. Furthermore, key use cases of cloudlets in critical end applications for power utility networks are identified.
|
# Communication Requirements and Deployment Challenges of Cloudlets in Smart Grid
1Stephen Ugwuanyi
_Electrical and Electronic Engineering_
_University of Strathclyde_
Glasgow, United Kingdom
Stephen.ugwuanyi@strath.ac.uk
2Kinan Ghanem
_Power Networks Demonstration Centre_
_University of Strathclyde_
Glasgow, United Kingdom
kinan.ghanem@strath.ac.uk
3Jidapa Hansawangkit
_Electrical and Electronic Engineering_
_University of Strathclyde_
Glasgow, United Kingdom
jidapa.hansawangki@strath.ac.uk
4Ross McPherson
_Electrical and Electronic Engineering_
_University of Strathclyde_
Glasgow, United Kingdom
ross.mcpherson@strath.ac.uk
5Rabia Khan
_Power Networks Demonstration Centre_
_University of Strathclyde_
Glasgow, United Kingdom
rabia.khan@strath.ac.uk
6James Irvine
_Electrical and Electronic Engineering_
_University of Strathclyde_
Glasgow, United Kingdom
j.m.irvine@strath.ac.uk
**_Abstract—Intelligent and distributed power networks are becoming more complex with the addition of cloudlet infrastructure. This paper is the initial output of a Low Latency Edge Containerisation and Virtualisation project at PNDC, which investigates the performance and communication requirements of deploying edge computing devices to enable low-latency applications. The edge network illustrates how additional intelligent network resources can help realise better distributed automation and remote configuration in smart grids. With cloudlets, the performance of smart grid networks will be enhanced in terms of low latency, higher bandwidth, and other network requirements. This paper presents the current state of the art of cloudlet technology, including network design requirements, implementation techniques, and integration challenges with legacy power networks. It also explores the main challenges of processing smart grid data on constrained IoT devices. The feasibility of using edge containers to provide low-latency communication to several critical end applications in the smart grid could improve the performance of these networks. Furthermore, key use cases of cloudlets in critical end applications for power utility networks are identified._**

**_Keywords—Cloudlets, Containerisation, Communication Requirements, Deployment, Security, Smart Grid, Virtualisation_**
I. INTRODUCTION
Overcoming service availability and delay issues is one of the challenges of deploying Internet of Things (IoT) technology in power utility networks without centralised cloud infrastructure. Cloud computing, as defined by the National Institute of Standards and Technology (NIST), is a model for facilitating ubiquitous and on-demand access to a shared pool of network resources through three different service models: Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS) [1]. While each service model can be provisioned with network characteristics such as broad network access, on-demand self-service, resource pooling, rapid elasticity, and measured service, the models can be used in private networks, public networks, or a combination of both. For instance, PaaS resources can be rented to install Supervisory Control and Data Acquisition (SCADA) systems to be shared by distributed generation sources and Distribution Network Operators (DNOs) [2].
IoT-based smart grid networks generate massive amounts of data from heterogeneous edge devices, and processing that data in the cloud while delivering optimal network performance requires substantial computing resources. Cloud resources placed strategically close to the edge devices help offload the massive dynamic traffic generated by the distributed IoT devices and guarantee a high quality of service, freeing the smart grid from Wide Area Network (WAN) related delays, jitter, congestion, and network failures [3]. The cloud processing and compute time required for such high numbers of IoT devices is a bottleneck in today's networks without a technology such as cloudlets. In IoT networks it is essential to move data processing points closer to the data sources to meet the communication requirements of real-time applications; this ensures fast response times and reduces the amount of unnecessary data migrated to the centralised data centre.

Moreover, power utilities avoid using public clouds for critical applications due to latency and security issues. Instead, they seek an alternative private cloud infrastructure to process the generated data locally and securely before the fine-grained data is sent to the centralised private cloud for further processing that may involve machine learning and artificial intelligence algorithms. This process will provide computing-power support for IoT devices when deployed correctly with adequate communication technology, such as 5G cellular networks with good coverage [4].
Reducing network response time in mobile utility IoT applications may be challenging, as traffic may not be offloaded optimally. Optimal traffic offloading depends on many factors, such as ensuring that cloudlets are optimally positioned within adequate coverage of reliable and bandwidth-efficient communication technology and that the edge devices have enough processing, storage, and memory capability. An entropy-weight-based proof-of-concept algorithm, found to be optimal and cost-effective and to meet network delay requirements, has been proposed to tackle the cloudlet placement problem [5]. Similarly, dynamic-clustering-based cloudlet deployment is another approach to solving the latency issues that edge device mobility causes in smart grids [4].
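As a hint of how an entropy-weight placement score works, consider the Python sketch below. It is a simplified interpretation of the general entropy-weight method, not the algorithm of [5]; the candidate sites, criteria, and values are invented purely for illustration:

```python
import math

# Candidate cloudlet sites scored on three benefit criteria
# (illustrative numbers): [1/latency so lower latency scores higher,
#  radio coverage fraction, spare compute fraction]
sites = {
    "substation_A": [1 / 12, 0.90, 0.40],
    "substation_B": [1 / 25, 0.75, 0.70],
    "depot_C":      [1 / 18, 0.85, 0.55],
}

names = list(sites)
matrix = [sites[n] for n in names]
m, n = len(matrix), len(matrix[0])

# Column-normalise so each criterion sums to 1 across candidates.
col_sums = [sum(row[j] for row in matrix) for j in range(n)]
p = [[row[j] / col_sums[j] for j in range(n)] for row in matrix]

# Entropy per criterion; criteria that vary more across candidates
# carry more information and therefore receive higher weight.
k = 1 / math.log(m)
entropy = [-k * sum(p[i][j] * math.log(p[i][j]) for i in range(m))
           for j in range(n)]
total = sum(1 - e for e in entropy)
weights = [(1 - e) / total for e in entropy]

scores = {names[i]: sum(weights[j] * p[i][j] for j in range(n))
          for i in range(m)}
print("best site:", max(scores, key=scores.get), scores)
```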
The cloudlet concept in the smart grid is seen as a data processing approach that moves cloud computing capabilities closer to intelligent field devices, serving limited and localised utility assets, such as Remote Terminal Units (RTUs) and smart transformers, rather than wider utility IoT resources. As a mini data centre designed to provide cloud computing services to IoT field devices within a close geographical area, a cloudlet will facilitate edge virtualisation and intelligence. Both enhancements require the proper hardware and software, with sufficient processing, memory, and real-time operating capability, to enable accurate grid data synchronisation for controlled functions, efficiency, and effectiveness.

Cloud infrastructure could be either on-premises or off-premises, but both play essential roles in seamlessly integrating utility assets for easy computation of smart grid data. Cloudlets are the intermediate layer between the cloud infrastructure and utility assets [6]. Introducing a data centre closer to the edge devices would facilitate the virtualisation of substation devices such that edge devices can easily instantiate Virtual Machine (VM) software on the cloudlets.
II. RELATED WORK
Cloud computing is an on-demand storage, data processing and information exchange model that enables global and continuous access to network resource management systems. A cloudlet, in contrast, is seen in [7] as a trusted cluster of computers connected to the Internet and designed to deliver cloud computing services to IoT devices within a specific geographical neighbourhood. Cloudlet computing is one solution to reducing the impact of resource-constrained IoT devices on network performance. This can be achieved via VMs, as each connecting user or end application is associated with VM instances created within cloudlets.
In smart grid ecosystems, the cloudlet is an evolving cloud computing infrastructure used to federate the processing and networking logic embedded in the edge and the wireless cloud [2], [8]. Its performance is mainly affected by communication and processing factors. Cloudlet performance has been investigated in rural areas with conditions similar to those of utility asset locations. That study [9] notes that cloudlets reduce the barrier posed by the lack of communication infrastructure in hard-to-reach locations and make power networks a more open, inexpensive, adaptable and extensible platform, but only when implemented with higher-compute devices. Cloudlets have been deployed in many sectors, including the smart grid [2].
Cloudlets have also been used to reduce the end-to-end execution time of workflow applications in metropolitan area networks [10].
Well-deployed cloudlets in the secondary or primary
substations will reduce the substation-to-substation
communication latency between substation devices in Figure
1. The intermediate mini cloud data centre will ensure efficient
and secure communication between the grid entities. As
shown in Figure 1, a resource-rich cloudlet will act as an intermediate layer between the cloud resources and utility assets to deliver time-critical applications with a minimal hop path and bandwidth. However, cloudlets have key deployment
challenges that must be tackled. These challenges will include
questions about how smart grid communication protocols
could support virtualisation and integrate with new and
existing network resources. It will also involve identifying
locations for cloudlets placement with adequate network
coverage, compatibility, interoperability, scalability, trust, and
security.
Notwithstanding that the use of cloudlets in industry continues to grow, especially for achieving low latency communication and reducing costs, there are still open research questions on using cloudlets in smart grids. Studies on cloudlets' performance in real smart grid networks are limited. The Navantia industrial AR (IAR) network architecture is designed to leverage cloud, cloudlet and fog computing to deliver traffic-efficient industrial IoT networks [11]. The findings indicate that the cloudlet's response rate outperformed cloud and fog computing at payloads greater than 128 KB. While this value is four times greater than that of fog computing when many applications were served, fog computing achieved the fastest response rate for small payloads; both cloudlets and fog computing can therefore support applications that require reduced network latency.
Similarly, in a cloudlets-based Wireless Local Area
Network (WLAN), offloading the MAC layer processing from
the access points to the cloudlets allowed flexibility in service
provisioning at reduced costs [12]. The capital and operational costs of running a cloudlet-based network in a smart grid will fall because network operators can implement new services without procuring expensive new equipment.
Network Function Virtualisation (NFV) is another aspect of
cloudlets that will simplify network management and remote
service provisioning, reduce access latency, and make
deployment easier to implement.
Figure 1. Proposed Cloudlet-Supported Smart Grid to Reduce Substation-to-Substation Latency.
Figure 1 shows a three-level smart grid infrastructure with
integrated IoT systems. The IoT devices provide the field
measurements transmitted to the cloud centre for processing
through the gateway. In the smart grid, collecting field data measurements and communicating with, monitoring, and controlling the distributed end devices are part of the IoT system's objectives. Field data are better processed in cloudlets with adequate resources because IoT devices have only limited computing, storage, and power capabilities to support connectivity solutions for data delivery and analysis. These
limitations are responsible for technologies that drive data
processing towards the edge to reduce communication latency,
especially for time-critical applications. Cloudlets reduce
network latency as a localised cloud/data processing point.
The unique nature of the power network as an Operational Technology (OT) infrastructure means that connectivity solutions for data transmission between smart grid IoT devices and the cloud centre are not straightforward; open communication frameworks such as OpenFMB are required for network integration. Hence, deploying cloudlets in the
smart grid will allow DNOs to have more network control and
quickly provision service functions such as security and
privacy.
Cloudlets also need to be coordinated during design and
implementation. Coordinated cloudlets are described as small
clouds in network infrastructures that are interconnected [13],
and each server is discoverable, localised and stateless with
one or more VM in operation. An end-to-end direct connection
through a Software-Defined Wide Area Network (SD-WAN)
could facilitate real-time applications for geographically
distributed cloudlets. Installing a cloudlet closer to the IoT field devices will open the door for complete local processing and reduce the time to migrate data to the central cloud or
data centre. The cloudlet will be able to host virtual access
points, which will also avoid the complexity of a physical
access point common in legacy networks [12].
III. PROPOSED CLOUDLET TEST NETWORK ARCHITECTURE
As shown in Figure 2, the proposed implementation architecture is a new test setup at PNDC, developed in collaboration with DNOs for investigating cloudlet performance
in smart grids. The RTDS simulator generates IEC 61850 GOOSE and Active Network Management (ANM) packets, whereas the SCADA simulator generates the Modbus and DNP3 traffic used to exercise the IT/OT convergence edge container, which converts field protocols to OPC UA. Both simulators exchange data with external hardware or software devices in real time through many different communication protocols. Input and output via Ethernet (through standard-compliant data packets) allow the closed-loop testing of digital substations and other non-wires alternatives. The edge container (cloudlet) has virtualisation and real-time operating system capabilities to process the IEC 61850, DNP3, and Modbus protocols while communicating with the ANM system. The development stage of the test platform is shown in Figure 2, along with the design requirements needed to develop the IEC 61850 protocol adaptor in the OT/IT convergence testbed.
Figure 2. High-Level Cloudlet Test Network Architecture to Facilitate IT/OT Convergence at PNDC.
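As a rough illustration of the kind of field-protocol conversion the edge container performs, the sketch below polls holding registers from a Modbus/TCP outstation and repackages them as OPC UA-style tag records. It is a minimal sketch assuming a pymodbus 3.x-style client API and a hypothetical outstation address and register map; the actual IEC 61850/DNP3 adaptor developed at PNDC is far more involved.

```python
# Minimal field-protocol adaptor sketch: Modbus registers -> OPC UA-style tags.
# Assumes a pymodbus 3.x-style API; the outstation at 192.0.2.10 and the
# register map below are hypothetical, not part of the PNDC testbed.
from datetime import datetime, timezone
from pymodbus.client import ModbusTcpClient

# Hypothetical register map: tag name -> (register address, scale factor).
REGISTER_MAP = {"busbar_voltage_v": (0, 0.1), "feeder_current_a": (1, 0.01)}

def poll_once(host="192.0.2.10", port=502, slave=1):
    client = ModbusTcpClient(host, port=port)
    if not client.connect():
        raise ConnectionError(f"cannot reach Modbus outstation at {host}:{port}")
    try:
        rr = client.read_holding_registers(0, count=len(REGISTER_MAP), slave=slave)
        if rr.isError():
            raise IOError(f"Modbus read failed: {rr}")
        ts = datetime.now(timezone.utc).isoformat()
        # Repackage raw registers as OPC UA-style tag records with timestamps.
        return [
            {"node_id": f"ns=2;s={name}",
             "value": rr.registers[addr] * scale,
             "source_timestamp": ts}
            for name, (addr, scale) in REGISTER_MAP.items()
        ]
    finally:
        client.close()

if __name__ == "__main__":
    for tag in poll_once():
        print(tag)
```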
IV. CLOUDLETS REQUIREMENTS IN SMART GRID
Power networks consist of generation, transmission, and
distribution subsystems that deliver electricity to consumers.
Their operation requires dedicated, secure and reliable
technologies for monitoring, communicating and controlling
grid assets in a two-way fashion. Cloudlets could help deliver
security [14], [15], computing [7], communication [16], and
data storage network requirements in smart grids. Some of
these performance specifications are better defined by the
DNOs, regulators, policy-makers and vendors. They include:
_A._ _Communication Technology_
Connectivity for cloudlets-based critical applications
such as teleprotection requires very low latency delivered by
LTE or 5G networks to maintain data synchronisation among
multiple devices and systems [17]. The frequent exchange of
Phasor Measurement Unit (PMU) data and control commands
between substations and the control centre requires a robust,
reliable, low-latency communication link across the network.
The same communication requirements are needed for critical
IEC 61850 Sampled Values (SV) and GOOSE messages.
Cloudlets could be seen as a proper technique for providing
low-latency communication if the field measurement data are
processed at the network edge [18]. For some critical applications, the required end-to-end latency is 100 ms to enable PMU synchrophasor functionality and around 1 ms for critical control commands, which can be challenging to satisfy.
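A back-of-the-envelope latency budget illustrates why these targets favour cloudlets. The sketch below compares a round trip to a distant cloud with a round trip to a local cloudlet against the 100 ms synchrophasor and 1 ms control-command targets; every delay component is an illustrative assumption, not a measurement.

```python
def round_trip_ms(distance_km, per_hop_ms, hops, processing_ms):
    """Crude end-to-end latency: fibre propagation (~5 us/km, both ways)
    plus per-hop queueing/switching and server processing."""
    propagation = 2 * distance_km * 0.005  # 0.005 ms per km, both directions
    return propagation + hops * per_hop_ms + processing_ms

# Illustrative assumptions: remote cloud 400 km / 8 hops, cloudlet 2 km / 1 hop.
cloud    = round_trip_ms(distance_km=400, per_hop_ms=0.5, hops=8, processing_ms=10)
cloudlet = round_trip_ms(distance_km=2,   per_hop_ms=0.5, hops=1, processing_ms=2)

for name, latency in (("cloud", cloud), ("cloudlet", cloudlet)):
    print(f"{name}: {latency:.2f} ms | PMU (100 ms): {'ok' if latency <= 100 else 'miss'}"
          f" | control (1 ms): {'ok' if latency <= 1 else 'miss'}")
```

Under these assumptions both paths meet the 100 ms synchrophasor target, but even the cloudlet path misses the 1 ms control target, consistent with the observation above that the tightest budgets are challenging to satisfy.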
_B._ _Redundancy and Network Virtualisation_
In the smart grid, fibre optics, LTE, and 5G will play an important role in completing the path to the full digitalisation of power networks. Critical end applications, such as synchrophasor data and any routable GOOSE messages, will require redundancy to ensure resilience in case of any problem in the connection to the data centre or with another cloudlet. As fully identified in the meshed cloudlets in Figure 5, smart grid resources must be integrated with robust, reliable, low latency communication technology. With cloudlets, smart grid assets must support network virtualisation technologies such as Network Function Virtualisation (NFV) and Software Defined Networking (SDN) to simplify such complex and meshed networks.
_C._ _Privacy and Security_
Cloudlets may face privacy and security requirements in
smart grids, especially when data is distributed and shared
across multiple entities with less computational and storage
capabilities. When moving the processing capabilities to the
edge, the security approach used at the main data centre could
be applied in the cloudlets. Implementing end-to-end
encryption at the edge without compromising security and
privacy has always been challenging. In [14], a proposed TLS-based secure protocol extension could allow edge functions to process encrypted traffic at the edge of an IoT network. However, our previous study identified that implementing encryption techniques such as TLS and IPsec within the utility assets significantly increases the data overhead [19]. Encrypting the data exchanged among different systems and devices using strong encryption keys raises an issue about the bandwidth requirements needed to enable low latency communication [20]. Cloudlets in the
OT environment could become challenging to manage from a
security perspective. Smart grid networks could be made less
secure due to the limited number of Information Technology
(IT) security layers and their interoperability issues with the OT
security layers in resource-constrained devices. Because
cloudlets are more accessible and interfaced with less secure
IoT devices, their use in smart grids will require more
protection to secure the local data storage systems and the
communication link between the cloud, cloudlets and the field
devices.
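The overhead problem is easy to see at utility packet sizes. The sketch below estimates how much a small IEC 60870-5-104 APDU grows when wrapped in a TLS record; the byte counts assume a TLS 1.2 CBC-mode cipher suite and are illustrative, not measurements from [19].

```python
def tls_record_size(payload, header=5, iv=16, mac=20, block=16):
    """Approximate TLS 1.2 CBC-mode record size for a payload of `payload` bytes:
    record header + explicit IV + (payload + MAC) padded to the next block."""
    body = payload + mac
    padded = ((body // block) + 1) * block  # CBC always adds 1..block padding bytes
    return header + iv + padded

for payload in (20, 255, 1400):  # e.g., a small IEC 104 APDU up to ~MTU-sized data
    total = tls_record_size(payload)
    print(f"payload {payload:4d} B -> record {total:4d} B "
          f"(overhead {100 * (total - payload) / payload:5.1f}%)")
```

For a 20-byte APDU the record more than triples in size, while for MTU-sized payloads the relative overhead falls to a few percent, which is why encryption weighs most heavily on the frequent small messages typical of SCADA traffic.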
Ensuring that cloudlet technology satisfies cyber security standards, such as NIST/ENA/IEC guidelines and policies, including securely, reliably and inexpensively supporting utility protocols such as Modbus, DNP3, IEC 60870-5-104, IEC 60870-5-101 and IEC 61850, is essential. Smart grid
communication protocols carry a large amount of sensitive
time-critical utility data prone to intrusion, subversion, or
spoofing attacks. Because cloudlets can create meshed networks with a shared, highly available medium when deployed in smart grids, their vulnerabilities, threats, and impacts need to be risk-assessed.
_D._ _Storage_
Smart grid network resources like PMUs and Intelligent
Electronic Devices (IEDs) generate crucial time-sensitive data
requiring appropriate data storage systems. According to the
Utility AMI Working Group, utility data storage systems must
be long-term and able to store data within the device securely.
In the case of a smart meter, the data must be stored for 45
days [21]. The data must be accessible remotely and with a
standard model that allows it to be exchanged between
multiple vendors' equipment. The rise of smart storage and distributed energy resources (DERs) increases the need for reliable storage systems. Cloudlets will enable fast
implementation of Virtual Power Plant (VPP) and microgrids
for controlling the DERs. It will also help DNOs identify and
locate DERs for energy usage measurements and analysis.
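A quick sizing calculation gives a feel for the 45-day retention requirement. The sketch below estimates per-meter and fleet-level storage; the 15-minute reading interval and 64-byte record size are assumptions, not figures from [21].

```python
def retention_bytes(meters, interval_min, record_bytes, days=45):
    """Storage needed to retain interval readings for the mandated 45 days."""
    readings_per_meter = days * 24 * 60 // interval_min
    return meters * readings_per_meter * record_bytes

# Illustrative assumptions: 15-minute readings, 64 bytes per stored record.
per_meter = retention_bytes(1, 15, 64)
fleet     = retention_bytes(50_000, 15, 64)
print(f"per meter: {per_meter / 1024:.0f} KiB over 45 days")
print(f"50,000-meter cloudlet: {fleet / 1024**3:.2f} GiB over 45 days")
```

At these assumed rates a single meter needs only a few hundred kibibytes, but a cloudlet aggregating tens of thousands of meters must budget for tens of gibibytes of secure, long-term storage.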
_E._ _Disaster Recovery_
Cloudlets introduce redundancy to the power networks as
critical processing at the edge can be continued through
cloudlets when the main cloud infrastructure fails [18]. The
capability of the smart grid to function as a standalone in
disaster and blackout scenarios is essential to enable basic
functions like communicating with the secondary data centre
when required. However, cloudlet functionalities, including processing, storage and power sources, may be lost in natural disasters. Cloudlet outages must be avoided, as critical services could be interrupted or frozen and QoS guarantees violated.
To analyse the impact of these network characteristics on smart grid performance, we discuss three different scenarios in which a cloudlet architecture could be configured. The first is the multi-access cloudlet architecture shown in Figure 3. Multi-access cloudlets enable the integration of several smart grid assets and deliver the services and computing functions needed by the end applications. With the massive number of devices at the edge and the high variation of data-intensive applications, 5G/6G networks have been noted as able to meet such demands in future cloud and communications networks [22]. This will improve application response, enhance outage control, and support new applications in smart grids.
Figure 3. Multi-Access Cloudlets Architecture
In this paper, we see cloudlet deployment in power
networks as a standalone edge processing box within the
secondary or primary substations with sufficient power
backup, where direct connectivity is provided to restore
service outages. As shown in Figure 4, connectivity to the
private cloud infrastructure is not always needed to provide
the required services, and data synchronisation with the
private cloud could be initiated when cloud connectivity
becomes available.
Figure 4. High-Level Standalone Cloudlet Topology at the
Edge.
The third scenario of a cloudlet architecture in a smart grid is a fully meshed network, as shown in Figure 5; it must be designed following the European Telecommunications Standards Institute (ETSI) Multi-Access Edge Computing (MEC) standard [22]. The
operation of fully meshed cloudlets in power networks
requires a private network fully operational and managed by
the power utilities. Accurate data synchronisation and the
number of hops between cloudlets and IoT field devices,
connectivity and security are a few challenges that might be
seen in a fully meshed scenario.
Figure 5. Fully Meshed Multi-Access Cloudlet Architecture
V. CLOUDLETS CHALLENGES IN POWER NETWORKS
Significant benefits can be obtained by deploying
cloudlets in critical infrastructures such as power networks, as
they can give the DNOs full control over their distributed
infrastructure in terms of implementation, management and
security. However, its deployment faces the following
challenges:
_A._ _Lack of Skilled Professionals_
Cloudlet is a cloud computing technology which can
deliver hosted services to IoT devices over a network. It is
advantageous to deploy cloudlets on the power network’s
edge compared to the public cloud, where the enterprise cloud
operators manage it. With cloudlet technology at the edge of
power networks, DNOs can easily deploy and manage their
infrastructure if adequate skilled network professionals are
available. The lack of skilled professionals is a challenge for many existing DNOs, and relying on a third party to operate the cloudlet may not be acceptable to some DNOs for security and privacy reasons. Nevertheless, skilled IT professionals
will be needed to define the most appropriate cloudlet
technology and its operational requirements in power
networks. Recent developments are expanding the use of
OpenFMB and OpenADR to meet the needs of utilities, but
their implementation requires skilled professionals.
_B._ _Communication Frameworks Integration_
One challenge facing utility operators is communicating
with more distributed energy resources (PVs, batteries, EVs,
etc.) without improving the network architecture. Open Field
Message Bus (OpenFMB) and Open Automated Demand
Response Communications Specification (OpenADR) are
two widely used open communication frameworks in smart
grids for network integration [23]. OpenFMB is a framework
that functions with new and existing standards, such as the
IEC 61968 for distributed edge intelligence designed to drive
interoperability and facilitate data exchange between field
devices. It has an agile and evolving architecture that is
flexible enough to handle data models and publish/subscribe
protocols like MQTT and AMQP. OpenADR is designed to automate demand response by communicating continuous dynamic price signals such as hourly day-ahead or day-of real-time pricing. Today, utilities worldwide are investigating OpenADR to manage the growing demand for electricity and the peak capacity of electric systems. Demand-side resource aggregation by Pearlstone Energy and National Grid [24] and Project ELBE [25] are good examples.
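Because OpenFMB rides on publish/subscribe transports such as MQTT, the basic integration pattern is straightforward to sketch. The example below shows a cloudlet-side subscriber and a field-device publish using a paho-mqtt 1.x-style API; the broker address, topic and JSON payload are hypothetical stand-ins, as real OpenFMB deployments use profile-specific topics and schema-defined payloads.

```python
import json
import time
import paho.mqtt.client as mqtt

# Hypothetical OpenFMB-style topic; real profiles define their own naming.
TOPIC = "openfmb/solarmodule/readingprofile/demo-pv-1"

def on_connect(client, userdata, flags, rc):
    client.subscribe(TOPIC)  # the cloudlet-side subscriber registers interest

def on_message(client, userdata, msg):
    print(f"cloudlet received {msg.topic}: {json.loads(msg.payload)}")

client = mqtt.Client()  # paho-mqtt 1.x-style constructor (assumption)
client.on_connect = on_connect
client.on_message = on_message
client.connect("localhost", 1883, keepalive=60)  # broker location is an assumption
client.loop_start()

# A field device publishes a periodic reading; QoS 1 gives at-least-once delivery.
client.publish(TOPIC, json.dumps({"w": 4200.0, "ts": "2022-01-01T00:00:00Z"}), qos=1)
time.sleep(1)  # let the network loop deliver the message before shutting down
client.loop_stop()
client.disconnect()
```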
_C._ _Data Synchronisation Challenges_
Smart grid networks use precision timing of grid assets’
information to manage and maintain the operation of the
power network. Connecting IEDs to a cloudlet requires
precise synchronisation of various data types and several
sources of measurement. Synchronising data measurements is
crucial for critical end applications such as synchrophasors
and protection systems. The required synchronisation
accuracy varies based on the criticality of the end
applications. Achieving local synchronisation is easier than
remote synchronisation, as it is easier to coordinate precisely
with other systems’ components in real time with fewer errors
and duplications.
Moreover, ensuring synchronisation among multiple
cloudlets is significant for any future development of the
cloudlets in the power networks. Losing such synchronisation
limits cloudlet usage in power networks and creates a less
efficient power network management system that performs
below expectations. Measurements such as synchrophasor outputs and IEC 61850 messages (Sampled Values (SV) and GOOSE) are synchronised with precise time stamps. Deviations from the
synchronisation could create disruption and instability in the
power networks. In today’s networks, DNOs need accurate
time synchronisation in phasor measurement units, merging
units and IEDs to coordinate the electrical grid and monitor
protection functions. Deploying cloudlets in smart grids
requires precise time synchronisation to utilise the cloudlet
capability fully.
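The four-timestamp exchange used by NTP/PTP-style protocols shows how a cloudlet can estimate its offset from a grid master clock. The sketch below applies the standard offset and delay formulas; the timestamp values are illustrative assumptions.

```python
def clock_offset(t1, t2, t3, t4):
    """Standard NTP-style estimates from one request/response exchange:
    t1 = client send, t2 = server receive, t3 = server send, t4 = client receive.
    Returns (offset, round_trip_delay) in the same units as the inputs."""
    offset = ((t2 - t1) + (t3 - t4)) / 2
    delay = (t4 - t1) - (t3 - t2)
    return offset, delay

# Illustrative timestamps in milliseconds since an arbitrary epoch.
offset, delay = clock_offset(t1=1000.0, t2=1012.5, t3=1012.7, t4=1005.4)
print(f"estimated offset: {offset:+.2f} ms, path delay: {delay:.2f} ms")
```

The offset estimate is only accurate when the forward and return paths are symmetric, which is one reason deterministic, low-jitter links matter for synchronising distributed cloudlets.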
_D._ _Systems Integration and Legacy Challenge_
Legacy systems in power networks may not directly
communicate with new utility IoT devices without protocol
conversions. These devices could create several issues
affecting the power grid’s digitalisation. Legacy hardware is
seen as a severe bottleneck to enhancing the future operating
capacity of power networks. Upgrading such field devices
may not be the right solution for digitalisation, where some
old devices may not be upgradeable. Assuming that some of
the distributed assets can be upgraded, the time and cost of
procuring and installing new devices will be saved. Systems
integration is another challenge for DNOs to overcome during the digital transition period. Dealing with too many different protocols and the lack of interoperability will add more complexity to the network architecture, and this complexity across the DNO's whole system could affect cloudlet integration and field implementations.
_E._ _Connectivity and Power_
A significant part of the distributed power networks is located
in hard-to-reach areas where affording simple connectivity
can be challenging. Lack of connectivity does not just limit
the ability to deploy cloudlets but also slows the transition into
a smarter digital grid. This is a common issue for many power
utilities across the world. Another point to consider is that cloudlets themselves depend on a reliable power supply and are vulnerable to power cuts, outages and blackouts. Ensuring a sufficient source of backup power (e.g., from battery storage or renewable energy sources) will allow the power networks to rely on the communication network to bring the electricity back in case of an unexpected power loss or black start scenario. Full backup power will be required to enable the full
functionality of cloudlets. Reliable backup power for
distributed cloudlets is a must to maintain any mission-critical
applications.
A real concern in the existing power networks is the
proprietary interfaces and protocols operated by legacy
software. This will significantly affect any future deployment
of intelligence at the edge. Converting several field protocols
into a unified platform transparent to the cloudlet can be seen as
a tool to mitigate such challenges. However, there is a need
to check the power system performance in terms of data
handling and data polling mechanisms.
_F._ _Security_
Security and the trust environment used for data storage are prime issues for cloudlet integration in the smart grid. This is because compromised cloudlets cannot attain the mission-critical requirements of power networks. An example of a
physical security challenge is protecting a cluster of digital
substations connected using cloudlets, where they could share
and exchange data locally without involving the centralised
data centre. Data storage in the distributed local cloudlets
could make them vulnerable to physical attack, as the location
of some cloudlet boxes in the rural areas will make it easier
for physical access. Cloudlet’s physical proximity to the edge
is also essential to achieving end-to-end response time,
low-latency, one-hop, high bandwidth wireless access to the
cloud [7]. Physical security is a less explored area in
cloudlets, according to a study that investigated cloudlets
deployment options in rural and remote areas to improve
service availability and support community-based local
services during network or power outages [9]. A collaborative intrusion detection system proposed for incorporating security, data sharing, and intrusion discovery in cloudlets, originally to protect network privacy in distributed cloud-based medical applications, could offer the needed security for the smart grid [15].
To deploy cloudlets in the smart grid, security is one factor
of high interest to the DNOs. The requirement is that cloudlets
have to be remotely managed and provisioned adequately to
enhance security and performance [13]. As new security
measures can be provided on-demand and customisable
through NFV, security functions like next-generation
firewalls, gateways, and access policies could easily be
implemented. A good example is the proof of concept
described in [26] that would allow device-to-device and
device-to-infrastructure communication in a cloudlets-supported network and can ensure the reliability, security and
privacy of peer-to-peer communication in an intelligent
transportation system.
VI. BENEFITS OF DEPLOYING CLOUDLETS IN POWER
NETWORKS
As an emerging technology, the use of cloudlets in smart
grids is accelerating daily and has several benefits, such as
rapid response, disaster recovery, outage control, and new
technologies [18]. The edge devices and the centralised cloud
introduce intelligence to support new utility technology. This intelligence can improve network performance, reliability and security. It can also open the door for more real-time
applications at the edge, where low latency distributed
functions and applications are needed.
Cloudlet technology benefits various sections of the
power network. In the primary substations, the data generated
from the distributed IoT devices (i.e., distributed IEDs such as
Merging Units (MUs) and protection relays) could be
processed locally, thereby helping to meet the strict latency
requirements for different real-time applications and at the
same time ensure that the generated data are kept on-premise
securely. It could also facilitate using real-time applications
such as Virtual Reality (VR), Augmented Reality (AR), and
live machine learning models at the edge. The cloudlets at the primary substation should have more processing capabilities than those installed at the secondary substation because the required processing capability is higher at the primary substation level. This will allow synchrophasor, SV and GOOSE messages to be processed more quickly than the MMS and SCADA messages generated at the secondary substation level.
Another key benefit for energy utilities from implementing
cloudlets in smart grids is horizontal and vertical network
extendibility. Cloudlets allow the smart grid utilities to respond in a timely manner to distribution, generation, market or regulatory changes across various services. Implementing cloudlets with
rich computational capability will help deploy several security
approaches and techniques between the cloudlet and the end
devices as larger key sizes will be supported.
VII. POSSIBLE FUTURE RESEARCH
The next phase of this low latency edge container and
virtualisation project at PNDC will be testing and data analysis
to demonstrate how such intelligence at the edge could help
improve the performance of the power networks
communications in terms of latency and bandwidth to ensure
the reliability and resilience of smart grid networks. The cost
implications of deploying cloudlets in a secondary substation
will also be considered.
VIII. CONCLUSION
This paper has evaluated how the intelligence at the edge
of a smart grid can help achieve network performance
improvements in distributed automation and remote
configuration of smart grid assets. We, therefore, conclude
that deploying cloudlets in a smart grid comes with challenges
such as systems integration, connectivity, security and the
absence of standards-based field bus protocol to enable the
interoperability of distributed field devices and data exchange.
The challenges are summarised as service description,
management and orchestration, monitoring and optimisation,
VM placement, and elasticity and scalability-related problems
similar to challenges identified in [12]. With the possibility of
implementing OpenFMB and OpenADR standards inside the
cloudlets, devices from different vendors and utilities will be
able to interoperate directly and exchange data. Additionally,
if configured correctly, the cloudlets will process data locally
and respond to any field requests, facilitating remote-oriented
functions that are very useful in an unexpected event or black
start scenario.
ACKNOWLEDGMENT
The authors acknowledge the contributions of PNDC tier
1 members (mainly Scottish Power Energy Networks, Scottish and Southern Electricity Networks and UK Power Networks).
REFERENCES
[1] P. Mell and T. Grance, “The NIST Definition of Cloud Computing
Recommendations of the National Institute of Standards and Technology,”
2011.
[2] M. Muzakkir Hussain, M. Saad Alam, and M. M. Sufyan Beg, “Fog
Computing for Smart Grid Transition: Requirements, Prospects, Status
Quos, and Challenges,” EAI/Springer Innov. Commun. Comput., pp. 47–61,
2021.
[3] S. Bouzefrane, A. F. B. Mostefa, F. Houacine, and H. Cagnon, “Cloudlets
authentication in nfc-based mobile computing,” Proc. - 2nd IEEE Int. Conf.
_Mob. Cloud Comput. Serv. Eng. MobileCloud 2014, pp. 267–272, 2014._
[4] X. Jin, F. Gao, Z. Wang, and Y. Chen, “Optimal deployment of mobile
cloudlets for mobile applications in edge computing,” J. Supercomput., vol.
78, no. 6, pp. 7888–7907, Apr. 2022.
[5] C. Guo et al., “Optimal Placement of Cloudlets Considering Electric Power
Communication Network and Renewable Energy Resource,” _Proc. - 4th_
_IEEE Int. Conf. Smart Cloud, SmartCloud 2019 3rd Int. Symp. Reinf. Learn._
_ISRL 2019, pp. 199–203, Dec. 2019._
[6] I. Stojmenovic, “Fog computing: A cloud to the ground support for smart
things and machine-to-machine networks,” _2014 Australas. Telecommun._
_Networks Appl. Conf. ATNAC 2014, pp. 117–122, Jan. 2015._
[7] M. Satyanarayanan, P. Bahl, R. Cáceres, and N. Davies, “The case for VM-based cloudlets in mobile computing,” IEEE Pervasive Comput., vol. 8, no.
4, pp. 14–23, Oct. 2009.
[8] S. Mehmi, H. K. Verma, and A. L. Sangal, “Comparative Analysis of
Cloudlet Completion Time in Time and Space Shared Allocation Policies
During Attack on Smart Grid Cloud,” Procedia Comput. Sci., vol. 94, pp.
435–440, Jan. 2016.
[9] S. Helmer, C. Pahl, J. Sanin, L. Miori, S. Brocanelli, F. Cardano, D. Gadler, D. Morandini, A. Piccoli, S. Salam, A. M. Sharear, A. Ventura, P. Abrahamsson, and T. D. Oyetoyan, “Bringing the Cloud to Rural and Remote Areas via Cloudlets,” 2016.
[10] X. Zhao, C. Lin, and J. Zhang, “Cloudlet deployment for workflow
applications in a mobile edge computing-wireless metropolitan area
network,” Peer-to-Peer Netw. Appl., vol. 15, no. 1, pp. 739–750, Jan. 2022.
[11] T. M. Fernández-Caramés, P. Fraga-Lamas, M. Suárez-Albela, and M.
Vilar-Montesinos, “A Fog Computing and Cloudlet Based Augmented
Reality System for the Industry 4.0 Shipyard,” 2018.
[12] F. Ben Jemaa, G. Pujolle, and M. Pariente, “Cloudlet- and NFV-based carrier
Wi-Fi architecture for a wider range of services,” _Ann. des Telecommun._
_Telecommun., vol. 71, no. 11–12, pp. 617–624, Dec. 2016._
[13] A. Alsaleh, “Can cloudlet coordination support cloud computing
infrastructure?,” J. Cloud Comput., vol. 7, no. 1, pp. 1–12, Dec. 2018.
[14] K. Bhardwaj, M.-W. Shih, A. Gavrilovska, T. Kim, and C. Song, “SPX:
Preserving End-to-End Security for Edge Computing,” Sep. 2018.
[15] M. P. Reddy, A. M. F. Anwar, A. Sahithi, and A. K. Shravani, “Data Security
and Vulnerability Prevention for Cloudlet-Based Medical Data Sharing,”
_Proc. 5th Int. Conf. Electron. Commun. Aerosp. Technol. ICECA 2021, pp._
1477–1481, 2021.
[16] M. Rihan and M. Rihan, “Applications and Requirements of Smart Grid,”
pp. 47–79, 2019.
[17] K. Ghanem, S. Ugwuanyi, R. Asif, and J. Irvine, “Challenges and Promises
of 5G for Smart Grid Teleprotection Applications,” _2021 Int. Symp._
_Networks, Comput. Commun. ISNCC 2021, 2021._
[18] M. Babar, M. S. Khan, F. Ali, M. Imran, and M. Shoaib, “Cloudlet
Computing: Recent Advances, Taxonomy, and Challenges,” IEEE Access,
vol. 9, pp. 29609–29622, 2021.
[19] K. Ghanem, J. Hansawangkit, R. Asif, S. Ugwuanyi, R. McPherson, and J.
Irvine, “Bandwidth efficient secure authentication and encryption
techniques on IEC-60870-5-104 for remote outstations,” _2021 Int. Conf._
_Smart Appl. Commun. Networking, SmartNets 2021, Sep. 2021._
[20] K. Ghanem, R. Asif, S. Ugwuanyi, and J. Irvine, “Bandwidth and security
requirements for smart grid,” in _IEEE PES Innovative Smart Grid_
_Technologies Conference Europe, 2020, vol. 2020-Octob._
[21] T. Sato _et al., “Smart Grid Standards: Specifications, Requirements, and_
Technologies,” Smart Grid Stand. Specif. Requir. Technol., pp. 1–463, Feb.
2015.
[22] T. K. Rodrigues, J. Liu, and N. Kato, “Offloading Decision for Mobile
Multi-Access Edge Computing in a Multi-Tiered 6G Network,” IEEE Trans.
_Emerg. Top. Comput., 2021._
[23] S. Laval, “Duke Energy Emerging Technology Office Open Field Message
Bus (OpenFMB): Enabling Distributed Intelligence,” 2019.
[24] OpenADR, “Demand-Side Resource Aggregation.” 2020.
[25] H. Energie and S. Hamburg, “OPENADR EUROPEAN CASE STUDY
PROJECT ELBE.” 2020.
[26] M. Gupta, J. Benson, F. Patwa, and R. Sandhu, “Secure V2V and V2I
Communication in Intelligent Transportation using Cloudlets,” IEEE Trans.
_Serv. Comput., pp. 1–1, Sep. 2020._
|
{
"disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1109/SmartNets55823.2022.9993993?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1109/SmartNets55823.2022.9993993, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "GREEN",
"url": "https://strathprints.strath.ac.uk/82542/1/Ugwuanyi_etal_ICSACN_2022_Communication_requirements_and_deployment_challenges_of_cloudlets_in_smart_grid.pdf"
}
| 2,022
|
[
"JournalArticle",
"Conference"
] | true
| 2022-11-29T00:00:00
|
[] | 8,932
|
en
|
[
{
"category": "Environmental Science",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/011c4c415a99465d1d643d4f3f5371f3d885637c
|
[] | 0.855318
|
Edge AI and Blockchain for Smart Sustainable Cities: Promise and Potential
|
011c4c415a99465d1d643d4f3f5371f3d885637c
|
Sustainability
|
[
{
"authorId": "48527054",
"name": "E. Badidi"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": [
"http://mdpi.com/journal/sustainability",
"http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:ch:bel-172127"
],
"id": "8775599f-4f9a-45f0-900e-7f4de68e6843",
"issn": "2071-1050",
"name": "Sustainability",
"type": "journal",
"url": "http://www.e-helvetica.nb.admin.ch/directAccess?callnumber=bel-172127"
}
|
Modern cities worldwide are undergoing radical changes to foster a clean, sustainable and secure environment, install smart infrastructures, deliver intelligent services to residents, and facilitate access for vulnerable groups. The adoption of new technologies is at the heart of implementing many initiatives to address critical concerns in urban mobility, healthcare, water management, clean energy production and consumption, energy saving, housing, safety, and accessibility. Given the advancements in sensing and communication technologies over the past few decades, exploring the adoption of recent and innovative technologies is critical to addressing these concerns and making cities more innovative, sustainable, and safer. This article provides a broad understanding of the current urban challenges faced by smart cities. It highlights two new technological advances, edge artificial intelligence (edge AI) and Blockchain, and analyzes their transformative potential to make our cities smarter. In addition, it explores the multiple uses of edge AI and Blockchain technologies in the fields of smart mobility and smart energy and reviews relevant research efforts in these two critical areas of modern smart cities. It highlights the various algorithms to handle vehicle detection, counting, speed identification to address the problem of traffic congestion and the different use-cases of Blockchain in terms of trustworthy communications and trading between vehicles and smart energy trading. This review paper is expected to serve as a guideline for future research on adopting edge AI and Blockchain in other smart city domains.
|
_Review_
# Edge AI and Blockchain for Smart Sustainable Cities: Promise and Potential
**Elarbi Badidi**
Department of Computer Science and Software Engineering, College of Information Technology, UAE University,
Al-Ain P.O. Box 15551, United Arab Emirates; ebadidi@uaeu.ac.ae; Tel.: +971-3-713-5552
**Abstract: Modern cities worldwide are undergoing radical changes to foster a clean, sustainable and**
secure environment, install smart infrastructures, deliver intelligent services to residents, and facilitate
access for vulnerable groups. The adoption of new technologies is at the heart of implementing
many initiatives to address critical concerns in urban mobility, healthcare, water management, clean
energy production and consumption, energy saving, housing, safety, and accessibility. Given the
advancements in sensing and communication technologies over the past few decades, exploring the
adoption of recent and innovative technologies is critical to addressing these concerns and making
cities more innovative, sustainable, and safer. This article provides a broad understanding of the
current urban challenges faced by smart cities. It highlights two new technological advances, edge
artificial intelligence (edge AI) and Blockchain, and analyzes their transformative potential to make
our cities smarter. In addition, it explores the multiple uses of edge AI and Blockchain technologies
in the fields of smart mobility and smart energy and reviews relevant research efforts in these two
critical areas of modern smart cities. It highlights the various algorithms to handle vehicle detection,
counting, speed identification to address the problem of traffic congestion and the different use-cases
of Blockchain in terms of trustworthy communications and trading between vehicles and smart
energy trading. This review paper is expected to serve as a guideline for future research on adopting
edge AI and Blockchain in other smart city domains.
**Keywords: edge computing; edge intelligence; Blockchain; smart grids; smart mobility; smart energy**
**1. Introduction**
Many countries have created strategies to transform their cities into smart cities
to exploit the opportunities arising from urbanization. Smart cities enable operational
efficiencies, maximize environmental sustainability, and develop new services for citizens.
For example, the United Arab Emirates has launched its initiative to transform its cities
into smart cities. The UAE government has also outlined its overall Blockchain strategy for
increased security, immutability, resilience, and transparency.
With the climate change issues that have surfaced in the last few years, cities and civil
society are increasingly demanding a more sustainable future for their citizens and communities [1,2]. The long-term sustainability of cities requires new, innovative, and disruptive
solutions and services that are good for people, the planet, and businesses [3]. Building
sustainable cities and environments will not be possible without the right technologies to
digitize all city and business processes and obtain and share insights from data [4].
The advancements in sensing and communication technologies, the proliferation of
mobile devices, and the widespread use of social media networks have resulted in an
exponential growth in the information generated and exchanged. The phenomenon of big
data refers to this exponential growth in data volume. It is made up of a set of technologies
and algorithms that allow processing massive amounts of data in real time to derive
insights from it. The processed information and the resulting insights are made available to
decision-makers. Therefore, the reliability of the data is of the utmost importance to permit
their exchange and facilitate transactions between businesses.
Billions of edge devices are connected to the Internet and generate zettabytes of data.
Extracting value from these massive volumes of data at the required speed of the applications remains the main problem to be solved [5]. For many applications, the processing
power offered by cloud computing is often used to process data. However, sending data to
cloud servers for processing reveals limitations due to increased communication delays and
network bandwidth consumption. Therefore, using cloud computing is not the best solution for real-time and latency-sensitive applications [6–8]. There is a growing trend towards
using edge and fog computing to process data and extract value for these latency-sensitive
applications. The use of streaming data analytics, machine learning, and deep learning for
data processing at the edge resulted in the emergence of a new interdisciplinary technology
known as edge AI that enables distributed intelligence with edge devices [9,10]. Research
on edge AI and commercial solutions of this new technology are still relatively new.
The execution of transactions generally depends on many intermediaries who authenticate the information exchanged to establish “trust” between the parties in the transaction.
A typical example is banking, where banks are responsible for validating financial transactions, and building trust between the parties in the transaction [11]. The essence of trusted
intermediaries, such as banks, notaries, lawyers, and the government, is to facilitate a
transaction that does not force the parties to trust each other. In today’s digital age, reliance
on these trusted intermediaries is just the result of a fundamental “lack of faith.”
The recent years have witnessed the emergence of Blockchain technology to address
this issue of trust [12–14]. A blockchain creates a source of truth that allows peer-to-peer (P2P) transactions without the need for trusted intermediaries. Its distributed ledger
securely stores transaction information across multiple computer systems on the blockchain.
Each block in the chain contains information concerning several transactions. Each time
a new transaction occurs between two peers on the blockchain network, the ledger of
each participant appends a record of that transaction with a hash, which is an immutable
cryptographic signature. A change in a block of a chain means tampering with the block.
To corrupt a blockchain system, hackers would have to change every block in the chain,
and in all versions of the chain distributed across the blockchain network [15].
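This tamper evidence follows directly from hash chaining, which the minimal sketch below illustrates: each block commits to its transactions and to the previous block's hash, so editing any block invalidates every later one. It deliberately omits consensus, signatures, and networking, so it is an illustration of the data structure rather than a working blockchain.

```python
import hashlib
import json

def block_hash(block):
    """Hash the block's contents, including the previous block's hash."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, transactions):
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "prev_hash": prev, "transactions": transactions}
    block["hash"] = block_hash({k: v for k, v in block.items() if k != "hash"})
    chain.append(block)

def is_valid(chain):
    """Recompute every hash and check each block points at its predecessor."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != block_hash(body):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
append_block(chain, [{"from": "A", "to": "B", "amount": 5}])
append_block(chain, [{"from": "B", "to": "C", "amount": 2}])
print(is_valid(chain))                       # True
chain[0]["transactions"][0]["amount"] = 500  # tamper with an early block
print(is_valid(chain))                       # False: the chain no longer verifies
```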
Blockchain is poised to revolutionize the way businesses, as well as governments, conduct all types of transactions [16]. It will significantly impact every sector (logistics, industry,
government, banking, real estate, health, education, and citizen services). Blockchain technology has the potential to improve government services, streamline government processes
and provide secure yet efficient information sharing [17,18]. Moreover, by using Blockchain
technology, governments can finally offer different services, eliminate bureaucracy and the
lack of transparency, prevent tax evasion and reduce waste.
_1.1. Contributions_
Although edge computing and blockchain have been extensively studied in the literature, very few works survey the integration of edge AI and blockchain in smart cities. This
article reviews recent research efforts on edge AI and blockchain for enabling intelligent
and secure edge applications and networks in two fundamental areas of smart cities—smart
mobility and smart energy. Beginning with an introduction to edge AI and blockchain, we
then review research efforts to integrate these two emerging technologies, including training learning models at the edge, security, privacy, scalability, and model sharing. Mainly,
we provide a survey on the use of edge AI in various applications in smart mobility, such
as traffic monitoring and management in intelligent transport systems, and smart energy,
such as optimized energy management in smart buildings, green energy management,
and energy efficiency in smart cities. Furthermore, we review recent research efforts on the
use of Blockchain in various applications in smart mobility, including distributed credential
management, reputation systems, key and trust management, and smart energy, including
distributed energy management and energy trading. Possible research challenges and
future directions are also outlined. The key contributions of this article are highlighted
as follows:
1. It provides an overview of edge AI and blockchain fundamentals.
2. It analyzes the opportunities brought by edge AI in smart mobility and smart energy.
3. It analyzes the opportunities brought by Blockchain in smart mobility and smart
energy.
4. It reviews some efforts to integrate these two emerging technologies in the context of
smart cities.
5. Finally, it outlines key open research issues and future directions toward the full
realization of edge AI and Blockchain in smart cities.
For the reader’s convenience, the studies discussed in this review are shown in
Figure 1.
**Figure 1. Classification of the studies of this review.**
_1.2. Structure of the Review_
The remainder of this review is organized as follows: Section 2 unfolds the challenges
facing smart cities. Sections 3 and 4 present the fundamentals of edge AI, federated learning,
and Blockchain technology, and describe their potential to support smart city operations.
The methodology used in this review is described in Section 5. Section 6 describes the
transformative potential and applications of edge AI and Blockchain in two vital areas of
smart cities, smart mobility and smart energy. Section 7 highlights some efforts showing
the convergence of these two technologies. The open research issues and future directions
are highlighted in Section 8. Finally, Section 9 concludes this review.
**2. Smart City Systems and Key Challenges**
As the world population grows, small and large cities are witnessing large migratory
waves that pressure local governments and officials to deal with many social issues. These
issues essentially concern ensuring a steady supply of water and electricity, providing appropriate healthcare services for all citizens, building and maintaining road infrastructure,
providing adequate public transportation, ensuring security and safety throughout the city,
and offering adequate education services [19].
The future of cities looks bright as many local governments start to build on smart
city initiatives and embrace new digital technologies and innovations to tackle all of these
issues, maximize the use of resources, provide a better quality of life for residents and a
favorable investment climate for business [20,21]. For companies, smart city initiatives offer
many innovation opportunities to develop new services and provide smart solutions for
the cities. The vast amounts of data obtained by smart city systems and advancements in
data stream processing, machine learning, and artificial intelligence enable entrepreneurs
to develop new smart solutions and new business models [22]. Smart cities such as Dubai,
Barcelona, Amsterdam, Singapore, New York, and Stockholm, to name a few, are enticing
other cities to jump on the bandwagon [23].
Smart cities are complex entities that integrate various systems to support the human
life cycle. These systems include smart healthcare, smart transportation, smart manufacturing, smart buildings, smart energy, and smart farming, among others.
_2.1. Smart Healthcare_
Smart healthcare is a set of technologies that are harnessed to actively manage healthcare data and respond to the needs of the medical ecosystem intelligently to increase
longevity and improve the quality of life for citizens. These technologies include mobile devices, Internet of Things (IoT) devices, and mobile Internet, which enable dynamic access to
information, connecting people, materials, and health-related institutions. Smart healthcare
aims to foster interaction between all entities in health care, including hospitals, pharmacies,
healthcare insurers, help them make informed decisions, ensure that participants have
access to the services they need, and facilitate the rational allocation of resources [24,25].
_2.2. Smart Transportation_
With the emergence of intelligent transportation systems, the proliferation of IoT-based
solutions, and advances in artificial intelligence, smart cities are entering a new era of development called smart transportation. Smart city traffic management and smart transportation are revolutionizing the way cities approach mobility and emergency response while
solving traffic problems by reducing congestion and the number of accidents on the streets
and roads of cities [26,27]. Smart transportation relies on the deployment and use of sensors,
advanced communication technologies, high-speed networks, and automation [28].
_2.3. Smart Manufacturing_
Smart manufacturing is a technology-driven approach for monitoring the production
process using machines connected to the Internet. Its main goal is to present opportunities for automating operations using data analytics to boost manufacturing and energy
efficiency, enhance labor security, and reduce environmental pollution levels [29]. Smart
manufacturing deployments involve integrating IoT devices into manufacturing machinery to collect operational status and performance data. In addition, many technologies
are being used to help enable smart manufacturing, including data streams processing,
edge and fog computing, artificial intelligence, robotics, driverless vehicles, blockchain,
and digital twins [30,31].
_2.4. Smart Buildings_
Smart buildings are buildings in the tertiary sector or residential buildings for which
high-tech tools, such as sensors and sophisticated control systems, make it possible to
adapt the settings according to the needs of the occupants [32]. The proliferation of
new information and communication technologies now makes it possible to considerably
improve our living environment by managing and controlling lighting, ventilation, and air
conditioning, in short, the entire infrastructure of a modern building. The implementation
of intelligent buildings brings more comfort and convenience to its residents, reduces
energy consumption, and mitigates our negative impact on the environment.
_2.5. Smart Energy Systems_
Smart energy systems represent one of the most attractive smart city opportunities.
Unlike smart grids, which primarily focus on the electricity sector, smart energy systems
focus on the comprehensive integration of more sectors, including electricity, cooling,
heating, buildings, manufacturing, and transportation. They aim to transform existing
solutions into future renewable and sustainable energy solutions [33].
_2.6. Smart Farming_
Smart farming is an emerging concept in modern agriculture that refers to managing
farms using digital technologies such as IoT, soil scanning, drones, robots, edge and cloud
data management solutions, and AI [34,35]. It aims to increase the quantity and improve
the quality of crops and agricultural products while optimizing the human labor required
for production. When equipped with these technologies, farmers can remotely monitor
crops and field conditions without going into the field. In addition, they will be able to
make strategic decisions for their farms based on data collected from various devices.
Despite the promising and potential benefits that digital technologies bring to smart
cities, there are many challenges in the way of a successful digital transformation [36].
These challenges mainly relate to the aging infrastructure, which hampers the development
of many cities, security and privacy concerns with the proliferation of digital technologies,
and social inclusion, which requires the design of solutions that address all categories
of citizens and not only tech-savvy people. Addressing these challenges and concerns
requires the use of new technologies and the development of new data-driven urban
planning methods that challenge traditional models of urban development. The technology
and innovative spirit of the new generation of entrepreneurs are the main catalysts for
smart cities to be sustainable, safer, and more livable. These technologies and innovations
are dramatically changing the way residents, businesses and government entities interact
with each other for the benefit of all. Two promising technologies that are starting to make
their way into several smart city projects are Blockchain and edge AI, which can potentially
disrupt many of the areas above related to smart cities. They can make the various smart
city operations and initiatives safer, transparent, efficient, smart, and resilient, resulting in
more efficient and productive cities.
**3. Edge AI and Federated Learning to Support Smart Cities**
_3.1. Edge AI Overview_
Edge computing, closely associated with the IoT, is proliferating and has become an essential component of most business strategies over the last few years [5,37–39]. IoT devices, sensors, and smartphones are transforming many businesses from top to bottom. Furthermore, the emergence of artificial intelligence has been remarkable in its ability
to impact the operations at the network edge. Increased computing power at the edge
combined with the light deployment of machine learning and deep learning help make
edge devices extremely smart [10,40]. Edge AI enables devices to deliver real-time insights
and predictive analytics without sending data to remote cloud servers. Many businesses
are now taking advantage of this by deploying intelligent solutions in production. With the
various industrial IoT devices deployed in modern factories, manufacturers can be alerted
with issues in their supply chain and proactively avoid unplanned downtime [41]. Additionally, a small device on a street radar can now instantly recognize a car that is speeding,
the passengers in the vehicle, and whether the driver has a license or not [42].
Artificial intelligence (AI) with pre-trained models has the potential to empower smart
cities by permitting decision-makers to make informed decisions, which will benefit both
the city and citizens [43]. For instance, many smart city sectors will benefit from two typical
vision-based image processing tasks, image classification and object detection, which arise
in many edge-based AI applications [44–46].
AI continues to enter new segments with great promise at a high rate. Currently,
digital industries such as finance, retail, advertising, and multimedia have been the sectors
that have exploited AI the most. AI has created real value in these fields. However,
the significant and vital problems in several other areas remain unresolved. The solution
to the problems of cities concerning transport, energy and water supply, citizen security,
healthcare, and many others is to replace or upgrade old and ineffective technologies.
New and AI-driven technologies have the potential to enable efficient transport systems,
clean energy, and efficient health systems and industry [47]. A critical element in these
areas is introducing and deploying intelligence “at the network edge” of high-speed and
broadband networks. The edge is the bulk of our world at present. Bringing intelligence
to the edge means that even the smallest devices deployed everywhere are capable of
detecting, learning from, and reacting to their environments. AI enables, for example,
devices on certain streets or public spaces in the city to make higher-level decisions, act
autonomously, and report significant flaws or improvements to affected users or the cloud.
Edge AI means that AI algorithms are executed locally on a hardware edge device [48,49].
The AI device can process its local data and make decisions independently without requiring a connection to function correctly. The device must have sensors connected to a
small microcontroller unit (MCU) to use edge AI. The MCU is loaded with specific machine
learning models that have been pre-trained on certain typical scenarios that the device will
encounter. The learning process can also be continuous, allowing the device to learn as
soon as it faces new situations. The AI reaction can be a physical actuation on the device’s
immediate environment or a notification to a specific user or the cloud for further analysis
and assistance.
Recently, special-purpose hardware has emerged to accelerate specific compute- or I/O-intensive operations at the edge. These edge hardware accelerators include Google's Edge Tensor Processing Unit (TPU) [50,51], Nvidia's Jetson Nano and TX2 edge Graphical Processing Units (GPUs) [52,53], Intel's Movidius Vision Processing Unit (VPU) [54], and Apple's Neural Engine. They are explicitly designed for edge computing to support edge AI applications such as visual and speech analytics, face recognition, object detection, and deep learning inference.
Edge computing and edge AI encompass operations such as data collection, parsing,
aggregation, and forwarding, as well as rich and advanced analytics that involve machine
learning and event processing and actions at the edge. Edge AI will enable real-time operation, with data creation, decision making, and reactions occurring within milliseconds. These capabilities are essential for monitoring public spaces with crowds of people, self-driving cars, robots, monitoring machines in a factory, and many other areas. Edge AI will reduce data communication costs and power consumption, as edge devices process data locally and transmit less data to the cloud, which also extends battery life.
Smart cities are ideal for the use of edge computing and edge AI. Indeed, sensors and
actuators can receive commands based on local decisions without waiting for decisions
made in another distant place. Cities can use edge computing for video surveillance applications and for obtaining up-to-date data on the conditions of roads, intersections, and buildings so that remedial actions can be taken before accidents occur. They can also use it to control lighting, energy and power management, water consumption, and much more. Municipalities and local governments can push the processing of urban IoT data streams from the cloud to the edge, reducing network traffic congestion and shortening end-to-end latency. By processing the data generated by edge devices locally, urban facilities can avoid streaming and storing large amounts of data in the cloud, which raises privacy concerns and increases their vulnerability.
_3.2. Federated Learning at the Edge_
Machine learning techniques typically rely on centrally managed training data, even
when the training process is performed on a cluster of machines. This process often takes
advantage of the characteristics of the overall training data set and the availability of
validation data sets to adjust several parameters. However, centralizing data management for training is often not feasible or practical because of data privacy, confidentiality,
and regulatory compliance.
Privacy regulatory frameworks require that data holders maintain the privacy of
personal information and limit how the data may be used. Examples of these frameworks include
the European Union’s General Data Protection Regulation (GDPR) [55] and the Health
Insurance Portability and Accountability Act 1996 (HIPAA) [56]. These restrictions make
the management of central data repositories very expensive and a burden for data holders.
Federated learning (FL) is a learning approach that aims to solve the issues mentioned
above of centralized training data management and data privacy. It allows collaboratively
building a learning model without having to move the data beyond the firewalls of the
participating organizations [57,58]. Instead, as shown in Figure 2, an initial AI model,
hosted in a central server, is transferred to multiple organizations. Each organization trains the AI model with its own data, obtaining new weight parameters that are sent back to the central server. The central server then uses the new weights from the participating organizations
to create an updated single model. Several iterations of this process may be necessary to
obtain an AI model good enough to be used in production. Several research efforts have
evaluated the performance of models trained by FL. They have found that they achieve
performance levels comparable to models trained on centrally hosted data sets and superior
to models that only use isolated data from a single organization [59,60].
**Figure 2. Federated learning architecture.**
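To make the process in Figure 2 concrete, the following is a minimal Python sketch of one federated averaging round repeated over several iterations. The organizations, their synthetic data, and the linear model are illustrative assumptions; real deployments use purpose-built FL frameworks.

```python
# Minimal sketch of federated averaging (FedAvg): private data stays with
# each organization, and only model weights travel to the central server.
import numpy as np

def local_train(global_weights, X, y, lr=0.05, epochs=5):
    """Each organization refines the global model on its own private data
    (here: a linear model fitted by gradient descent on the MSE loss)."""
    w = global_weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_round(global_weights, org_datasets):
    """The central server sends the model out, collects updated weights,
    and averages them, weighted by each organization's sample count."""
    updates, sizes = [], []
    for X, y in org_datasets:
        updates.append(local_train(global_weights, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=sizes)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
# Three organizations, each holding data that never leaves its firewall.
orgs = []
for _ in range(3):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    orgs.append((X, y))

w = np.zeros(2)               # initial model hosted on the central server
for _ in range(20):           # several iterations, as described above
    w = federated_round(w, orgs)
print("learned weights:", w)  # close to true_w, without pooling any data
```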
**4. Blockchain to Support Smart Cities’ Operations**
_4.1. Blockchain Overview_
Blockchains are essentially shared databases that enable the participants, called nodes
in a network, to confirm, reject, and view transactions. They facilitate recording transactions
and tracking asset movements in a business network. Assets can be tangible, such as
property, cars, and land, or intangible, such as patents and copyrights. Transaction data are
stored in a block-based structure, where blocks are linked to each other through a method
known as cryptographic hashing. Combined with the distributed and decentralized nature
of the blockchain ledger, this method makes each block of data virtually impossible to
change once it is added to the chain. Therefore, the blockchain distributed ledger is
cryptographically secure and immutable. It works in append-only mode and can only be
updated by consensus or peer-to-peer agreement. Blockchain is often viewed as a specific
subset of the larger universe of distributed ledger technology (DLT) [61]. The distributed
ledger makes Blockchain technology resilient since the network does not have a single
point of vulnerability. In addition, each block uniquely connects to previous blocks via a
digital signature. Making a change to a record without disrupting earlier records in the
chain is impossible, making the information tamper-proof. Allowing participants to
transfer assets over the Internet without a centralized third party is the essential innovation
in Blockchain technology.
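To make the hash-linking mechanism concrete, here is a minimal Python sketch of an append-only chain of blocks. The Block class and its fields are illustrative simplifications, not the structure of any particular blockchain, and real systems add consensus, digital signatures, and Merkle trees on top.

```python
# Minimal sketch of cryptographic hash linking between blocks.
import hashlib
import json
import time

class Block:
    def __init__(self, index, transactions, prev_hash):
        self.index = index
        self.timestamp = time.time()
        self.transactions = transactions
        self.prev_hash = prev_hash          # link to the previous block
        self.hash = self.compute_hash()

    def compute_hash(self):
        # Hash a canonical serialization of the block's contents.
        payload = json.dumps(
            {"index": self.index, "ts": self.timestamp,
             "tx": self.transactions, "prev": self.prev_hash},
            sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

# Append-only chain: each block commits to its predecessor's hash.
genesis = Block(0, ["genesis"], "0" * 64)
block1 = Block(1, ["A pays B 5"], genesis.hash)
block2 = Block(2, ["B pays C 2"], block1.hash)

# Tampering with an earlier record breaks every later link.
block1.transactions = ["A pays B 500"]
print(block2.prev_hash == block1.compute_hash())  # False -> tamper is evident
```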
Blockchain technology emerged over the last few years as the underlying technology for Bitcoin. The subprime crisis of 2008 reduced confidence in the existing financial system [62]. In the same year, Satoshi Nakamoto wrote a white paper describing the “bitcoin protocol”, which used a distributed ledger and consensus algorithms. The protocol was designed to facilitate direct P2P transactions and disintermediate traditional financial intermediaries [63].
Since the birth of the Internet, many attempts to create virtual currencies have failed
due to the double-spending problem. The current solution to eliminate the double-spending
problem is introducing “trusted intermediaries” such as banks. Blockchain technology
solves the double-spending problem without these trusted intermediaries, making it easier
to securely move assets such as virtual currencies over the Internet. Areas other than currencies could also benefit from this concept, making Blockchain technology very promising.
As illustrated in Figure 3, the blockchain architecture allows participants in a business
network, for example, to share an updated ledger using peer-to-peer replication each time
a transaction occurs. Each participant acts as a publisher and subscriber and can receive
or send transactions to other participants, and data are synchronized across the network.
The blockchain network eliminates duplication of effort and reduces the need to use the
services of intermediaries, making it economical and efficient. Using consensus models to
validate transaction information also makes the network less vulnerable. Transactions are
secure, authenticated, and verifiable.
**Figure 3. Network of business parties and intermediaries without and with Blockchain. (a) Trans-**
actions between Org. A, B, and C involve intermediaries. (b) Participants share an updated ledger
using P2P replication each time a transaction occurs.
_4.2. Blockchain Benefits_
The blockchain network stores data in a tamper-proof form and permits only valid users to append data to the blockchain. Understanding the primary attributes, depicted in
Figure 4, of Blockchain that make this technology unique is essential to comprehend its
full potential.
- **Distributed shared ledger: This is a distributed append-only system shared across**
the corporate or business network, making the system more resilient by eliminating
the centralized database, which is a single point of failure.
- **Consensus: A transaction is only committed and appended to the ledger when all**
validating parties consent to a network verified transaction.
- **Provenance: The entire history of an asset is available over a blockchain.**
- **Immutability: Records are indelible and cannot be tampered with once committed to**
the shared ledger, thereby making all information trustworthy.
- **Finality: Once a transaction is completed over a blockchain, it can never be reverted.**
- **Smart contracts: Code is built within a blockchain that computers/nodes execute based** on a triggering event. Essentially, an “if this then that” statement can be auto-executed (see the sketch after Figure 4).
**Figure 4. Blockchain benefits.**
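As referenced in the smart contracts item above, the following Python sketch illustrates the “if this then that” idea: code that auto-executes when a triggering event arrives. Real smart contracts run on-chain (e.g., written in Solidity on Ethereum); the escrow scenario, Event type, and all names here are purely illustrative.

```python
# Minimal sketch of smart-contract-style logic: an escrow that releases
# payment automatically once a delivery-confirmation event arrives.
from dataclasses import dataclass

@dataclass
class Event:
    kind: str
    payload: dict

class EscrowContract:
    """Holds a payment and settles it when the trigger condition is met."""
    def __init__(self, buyer, seller, amount):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.settled = False

    def on_event(self, event: Event):
        # Trigger: the "if this" condition coded into the contract.
        if event.kind == "delivery_confirmed" and not self.settled:
            self.settled = True
            # The "then that" action, executed without an intermediary.
            return f"transfer {self.amount} from {self.buyer} to {self.seller}"
        return None  # other events cause no action

contract = EscrowContract("Org.A", "Org.B", 100)
print(contract.on_event(Event("delivery_confirmed", {"order": 42})))
```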
Blockchain has the potential to disrupt any form of transaction that requires information to be trusted. With the advent of Blockchain technology, all trusted intermediaries
are subject to disruption in one form or another, and Blockchain technology solves the
problems associated with the way information-related transactions occur today. Blockchain
creates a permanent and unalterable ledger of information by validating transactions
through its distributed network of peers.
_4.3. Types of Blockchain Networks_
Blockchain networks are either public or private. A public blockchain network operates in a decentralized, open environment with no restriction on the number of people joining the network, while a private blockchain network functions within limits defined by a controlling entity. The intrinsic technology of both networks remains the same; however,
the dynamics and utility of closed and open networks are different. This difference plays
out based on the incentives for nodes to remain a part of the network. The key idea is that
in a public blockchain, the consensus mechanism rewards each participant for staying a
part of the network, and in a private blockchain, the need for creating this incentive does
not exist.
A genuinely transparent public registry’s democratized nature may not be helpful
to an organization or corporate network since the parties are known, and there is a level
of understanding of the members who can participate in the network and transactions.
The consensus is that while public blockchains work well for specific applications such as
cryptocurrency (bitcoin) based transactions, the most important application of Blockchain
technology as an enterprise solution would not be possible without the increased regulatory control associated with a private Blockchain ecosystem.
Blockchain technology is still emerging, and therefore its different applications evolve
continuously and iteratively. An ecosystem where multiple private blockchains interact
with each other on a publicly distributed network may address the issue of public vs.
private blockchain networks. In that shared ecosystem, public and private blockchains
work in symbiosis in the same way private networks interact with the Internet.
Blockchain technology is being applied in numerous domains of smart cities, such as
healthcare, power grid, transportation, supply chain management, education, manufacturing, the construction industry, and many others. Several works survey and describe the
application of Blockchain in these areas [64–66].
_4.4. Blockchain Suitability_
Blockchain technology is only suitable when multiple parties share data and need a
common information view. However, sharing data is not the only qualifying criteria for
Blockchain to be a viable solution. The following situations make Blockchain a viable and
efficient solution:
- A transaction depends on several intermediaries whose presence increases the transaction’s time, cost, and complexity.
- Reducing delays and speeding up a transaction is incredibly advantageous for the business.
- Transactions created by the business participants depend on each other.
- Actions undertaken by multiple participants must be recorded, and the validated data involved must be updated.
- Building trust between the participants is necessary for the business.
To sum up, Blockchain technology is certainly not a solution to all transaction issues.
**5. Methodology**
This review paper uses a qualitative research approach to synthesize the relevant literature on the article’s subject. Given the descriptive nature of the present study, the qualitative
approach allows for reviewing and synthesizing a large amount of pertinent literature.
A systematic review strategy was adopted without claiming to be exhaustive in pursuing
this objective.
_5.1. Search Criteria Formulation_
The search criteria used were:
- C1: (“Edge AI” OR “edge intelligence”) AND “Blockchain”;
- C2: (“Edge AI” OR “edge intelligence”) AND (“smart mobility” OR “smart
transportation”);
- C3: “Blockchain” AND (“smart mobility” OR “smart transportation”);
- C4: (“Edge AI” OR “edge intelligence”) AND “smart energy”;
- C5: “Blockchain” AND “smart energy”.
The purpose of this review paper is to answer the following research questions.
- RQ-1: What are the applications of edge AI and Blockchain regarding smart mobility
and smart energy? This research question intends to identify the state-of-the-art
research regarding the applications of edge AI and Blockchain technology in these
two key areas of a smart city.
- RQ-2: What are the potential open research issues and future directions in edge AI and
Blockchain implementation in these two vital areas of a smart city? This question aims
to define the open questions and research directions for the wide adoption of edge AI
and Blockchain to address the challenges in implementing smart mobility and smart
energy. Consequently, answering this question encourages researchers to understand
the current research findings and trends in edge intelligence and Blockchain.
_5.2. Source Selection and Approach_
The review included articles published between 2017 and 2021. A search for relevant
research on the topic of this review was conducted using the following databases and search
engines: (i) Scopus, (ii) ScienceDirect, and (iii) Google Scholar, which provide excellent
coverage of the study topics. The search used the search criteria above and revolved around
the terms “Edge AI” and “Blockchain” while including synonyms as additional terms such
as “edge intelligence” and “distributed ledger” to increase the search results.
Most of the papers reviewed are journal articles, with some conference papers also
included. Papers were selected based on the quality of the journal, relevance to the topic,
and filtered by date of publication. Edge intelligence and Blockchain are still in their
infancy and are evolving rapidly. Article selection was based on titles, keywords, abstracts,
and conclusions relevant to the topic. References cited in this review paper published
before 2017 mainly concern the background and literature review on smart city areas and
challenges, edge computing, and Blockchain.
The initial search for the five search criteria (C1–C5) found 417 references from Scopus,
533 from ScienceDirect, and 931 from Google Scholar (review articles). However, the total
number of papers was reduced to 150 after title and abstract screening, exclusion, and the elimination of duplicates. Afterwards, the papers were classified into four main classes:
background and fundamentals, edge AI and Blockchain convergence, applications of edge
AI in smart mobility and smart energy, and applications of Blockchain in smart mobility
and smart energy.
**6. Transformative Potential of Edge AI and Blockchain in Smart Cities**
Modern cities struggle to automate many of their processes and coordinate them with
various stakeholders. Citizens expect their governments and smart city entities to respond
quickly to their demands and needs while ensuring transparency, fairness, and accountability to the public. Success in these endeavors, especially in the digital age, requires that
up-to-date data be collected and processed in near real-time. Much of the challenge is in the
management and processing of data. Unfortunately, traditional centralized databases and
data management tools are not enough to meet the new challenges that smart cities face.
The data exchanged between the various city actors can be tampered with. The single point
of failure of the standard database client-server model compromises data security, making
transparency challenging to achieve when city databases are centralized. Additionally,
using centralized databases results in slow and inefficient operations such as registering
identifications (IDs) and electoral voting.
Smart cities and government entities can address the above issues by taking advantage
of the recent advances in edge AI and using an innovative data management structure.
This data structure uses distributed ledgers and cryptography. Furthermore, these technologies can offer citizens smart on-demand services while ensuring data privacy and security, unprecedented transparency, fairness, and accountability [67,68]. Here, we discuss the potential of these two technologies in two crucial subsystems of a smart city, smart mobility and smart energy management, and review relevant research works on their usage in these areas.
_6.1. Smart Mobility_
Modern cities suffer from major issues such as traffic congestion, emissions, and safety. Without innovative solutions, mobility problems will intensify due to the continued growth of the population, which increases the number of vehicles on the roads, the kilometers traveled, and, consequently, emissions. In response, the mobility industry is developing a fascinating range of innovations designed for urban roads, such as intelligent traffic and parking management systems, mobility as a service, and carpooling solutions. “Smart transport” often refers to the use of new digital technologies and data-driven management techniques in transport systems to address mobility problems [28,69]. The phenomenal technological developments in recent years, which have
brought about significant changes in all aspects of our life, promise to improve transport
in cities in all its forms. Smart transport, once a dream, is increasingly becoming a reality. We are seeing more and more applications that integrate live data and feedback
from multiple sources to gain a holistic and real-time view of the traffic status, helping
stakeholders better manage road traffic and deliver quality services to road users. Other
innovations that contribute to smart transport and mobility include:
- The development of new models of shared mobility;
- The development of more reliable and convenient public transport;
- The development of applications that quickly alert drivers to hazardous situations;
- The development of navigation applications that allow drivers to find the best possible route in real time;
- The ability to adjust road signals and speed limits in real-time based on current
traffic conditions;
- The development of new concepts of electric, connected, and autonomous vehicles.
Because traffic management systems involve costly computations, improving the real-time processing of data is one of the best ways to optimize them [27]. Transportation systems obtain traffic data from various sensors and IoT devices deployed on urban roads and in vehicles. Intelligent transport systems are
evolving towards intensive use of edge computing and edge AI technologies, especially
for traffic management processes [70]. Gigabytes of sensory data are analyzed, filtered,
and compressed locally before being transmitted through IoT edge gateways to multiple
systems for later use. Edge processing for traffic management solutions allows one to save
on storage, network expenses, and operating costs.
6.1.1. Edge AI for Traffic Monitoring and Management
Traffic management is an undeniable component of smart mobility, which combines
different measures to preserve traffic capacity, reduce congestion at roads and intersections,
and improve the safety and reliability of the overall road transport system. Modern traffic
management systems are composed of advanced sensing and monitoring technologies,
management tools, and a set of intelligent applications to achieve these goals. These technological solutions prepare smart cities for future cutting-edge technological developments,
in particular the proliferation of autonomous vehicles, connected vehicles, and the large-scale deployment of Fifth Generation (5G) cellular networks and edge AI systems [71].
Several works investigated edge computing-based solutions for traffic management in
smart cities. Barthélemy et al. [70] designed a visual sensor for monitoring the flow of
bicycles, vehicles, and pedestrians traffic. Their complete edge-computing-based solution
aims to deploy multiple visual sensors and collect data through a framework called Agnosticity. The visual sensor hardware uses the NVIDIA Jetson TX2 on-board computing
platform to perform all computations onboard. Its software pairs YOLOv3 [72], a popular
convolutional deep neural network, with Simple Online and Realtime Tracking (SORT) [73],
a real-time tracking algorithm. The metadata are then extracted and transmitted using
Ethernet or LoRaWAN protocols. The sensor provides a privacy-compliant tracking solution by transmitting only metadata instead of raw or processed images. Municipalities
can combine the sensors with the existing Closed-circuit television (CCTV) infrastructure,
and this integration helps optimize infrastructure usage and add value to the network by
leveraging the vast video data collected. In addition, the Long Range Wide Area Network
(LoRaWAN) protocol facilitates the deployment of additional cameras in areas where
conventional internet connectivity is not available.
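The following Python sketch illustrates the privacy-compliant pattern behind this sensor: all vision computation stays on the device, and only metadata, never raw or processed images, leave it. The stub detector and tracker stand in for YOLOv3 and SORT, and the topic name and uplink function are hypothetical.

```python
# Minimal sketch of edge-side detection + tracking with metadata-only uplink.
import json
import time

def detect(frame):
    # Stand-in for YOLOv3 inference: [(class_label, (x, y, w, h), score)].
    return [("car", (120, 80, 60, 40), 0.91)]

class NaiveTracker:
    # Stand-in for SORT: assigns an incrementing ID to each detection.
    def __init__(self):
        self.next_id = 0
    def update(self, detections):
        tracks = [(self.next_id + i, d) for i, d in enumerate(detections)]
        self.next_id += len(detections)
        return tracks

def publish(topic, payload):
    print(topic, payload)  # in practice: an Ethernet or LoRaWAN uplink

tracker = NaiveTracker()
for frame in range(3):                      # stand-in for a camera loop
    tracks = tracker.update(detect(frame))  # all computation stays on-device
    metadata = [{"id": tid, "class": c, "bbox": bbox, "score": s,
                 "ts": time.time()} for tid, (c, bbox, s) in tracks]
    publish("city/traffic/sensor42", json.dumps(metadata))  # metadata only
```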
Dinh et al. [74] proposed an inexpensive and efficient edge-based system integrating
object detection models to perform vehicle detection, tracking, and counting. They created
a Video Detection Dataset (VDD) in Vietnam and then examined it on two different types
of edge devices. They evaluated their proposed traffic counting system in a Coral Dev TPU
Board and then a Jetson Nano GPU Board and implemented several models in the two
boards. The MobileDet 320 × 320 SSD model implemented in the Coral Dev TPU Board
for the vehicle detection context achieves an accuracy of 92.1%, and the proposed method
achieves a maximum inference speed of around 26.8 Frames per second (FPS) on VDD.
Additionally, Kumar et al. [75] investigated how to detect and track vehicles effectively. Their proposed method detects, tracks, and extracts vehicle parameters for speed estimation using a single camera. They used an Automatic Number Plate Recognition (ANPR) system to select keyframes where a speed limit violation occurs. The proposed approach uses cropping operations to minimize the scope of false-positive detections on both sides of the road. The average detection accuracy obtained is approximately 87.7%. The approach tracks vehicles moving in one direction but fails to detect vehicles coming from opposite directions.
Likewise, Song et al. [76] proposed a vision-based vehicle detection and counting
system for highways. The proposed method is not expensive, is highly stable, and does not
require a significant investment in terms of monitoring equipment. They used a “Vehicle
dataset” to train a YOLOv3 network to obtain the vehicle object detection model. Image
segmentation and YOLOv3 allowed them to detect three types of vehicles: cars, buses,
and trucks. A convolutional neural network and the Oriented FAST and Rotated BRIEF
(ORB) algorithm [77] were used to extract the features of detected vehicles. The authors
stated that vehicle detection is fast and highly accurate. Traffic footage taken by highway surveillance video cameras adapts well to the YOLOv3 network. Multi-object tracking uses the object boxes produced by YOLOv3 during vehicle detection. The ORB algorithm uses the Features from Accelerated Segment Test (FAST) to detect feature points, and the Harris operator performs corner detection.
In many cities, a segment of a public or private road can be used to load and unload
goods at specific times or at any time. Parking signs and road markings are typically used
to warn drivers of parking regulations. These areas are known as loading bays. Parking
inspectors generally monitor these areas, and motorists found violating the rules can be
fined. These restrictions on urban freight deliveries require establishing a loading bay
system and dividing the last mile delivery into driving and walking segments. Loading
bays are sometimes occupied, requiring rerouting delivery vehicles and searching for an
alternative loading bay. The authors in [78] introduced a fuzzy clustering method to test
different optimization approaches and make the system flexible enough to accommodate
this problem. We believe that edge AI and computer vision can help address where and
how many loading bays should be used to perform this transshipment and execute last-mile
delivery most efficiently.
6.1.2. Blockchain for Smart Mobility
With the population growth of cities and the rapid increase in demand for smart
transport and mobility solutions, there is an urgent need for innovative solutions that
use existing infrastructure in cities and on external roads and highways between cities.
Smart mobility technologies aim to provide many new applications and perspectives for
efficient and safe movement on roads while reducing Carbon dioxide (CO2) emissions and
improving air quality [69]. Transportation systems management is a challenging endeavor
in many modern cities [79].
Blockchain technology can improve information sharing between different stakeholders in cities, improve the robustness of the overall transport system, and facilitate communication between vehicles, roadside units, and traffic control centers. In addition, Blockchain in the transport sector can also reduce the processing time
of transport-related transactions, approvals, and exchange of documents and speed up
customs clearance. This section summarizes relevant work on Blockchain-based solutions
for smart transportation and mobility. Figure 5 depicts the main areas where Blockchain
has been used to contribute to the smart mobility goals, and Table 1 summarizes the focus
area of each of the reviewed works and the Blockchain mechanisms they used.
**Figure 5. Blockchain for smart mobility.**
**Table 1. Summary of Blockchain-based smart mobility literature review.**

| Ref. | Focus | Blockchain Mechanisms Used |
| --- | --- | --- |
| [80] | Blockchain as the operating system of smart cities, with transportation management as one of the main focus areas | Ethereum-like Blockchain, smart contracts |
| [81] | Blockchain in vehicular communications, in particular a system for revocation and accountability in a Security Credential Management System | Distributed ledger, hierarchical consensus |
| [82] | A blockchain-based vehicular network architecture in a smart city | Distributed blockchain vehicular network, Miner Vehicular Node, revocation authority, Block Node Controller |
| [83] | Reputation systems in vehicular networks based on Blockchain technology | Vehicular blockchain, Miner Vehicle, Trusted Authority, distributed consensus |
| [84] | Blockchain-based key management scheme to transfer security keys between distributed security managers in heterogeneous Vehicular Communication Systems | Blockchain structure without third-party authorities; transaction format, mining, and proof algorithm |
| [85] | Blockchain-based decentralized Key Management Mechanism for VANET | Vehicular blockchain network, Ethereum-based smart contract, mining functions |
| [86] | Decentralized Trust Management system in vehicular networks based on Blockchain technology | Vehicular blockchain, Miner Vehicle, Trusted Authority, distributed consensus |
| [87] | Decentralized Trust Management system in vehicular networks based on Blockchain technology and the Tendermint consensus protocol | Vehicular blockchain, Tendermint (consensus without mining), BFT-based consensus |
| [88] | A location privacy protection system based on trust management in Blockchain-based VANET | Vehicular blockchain to record the trustworthiness of vehicles, PBFT consensus algorithm |
| [89] | A Blockchain-based system combined with auctions to enable BEVs to trade energy using day-ahead and real-time trading markets | Blockchain to record trading contracts, smart contract |
| [90] | Roaming charging process of electric vehicles and Blockchain technology to support user identity management and record energy transactions securely | Distributed ledger to record energy transactions |
| [91] | Blockchain to mitigate trust issues in Electric Vehicle charging | Hyperledger Fabric, smart contract |
Bagloee et al. [80] suggested that to reduce traffic congestion and achieve system
equilibrium, traffic authorities may issue a limited number of mobility permits, distributed
equally to all drivers, which may be tradable in an open market. Such a progressive
scheme is now possible in light of the ever-increasing use of various kinds of sensors,
cameras, RFIDs, radars, and lidars. Blockchain technology and smart contracts can be
used as a valid, promising, and feasible solution for implementing the tradable part of
this scheme. The authors also suggested that drivers and passengers use the Tradable
Mobility Pass (TMP) equally to pay parking fees, public transport tickets, car registration
fees, and highway tolls. An Ethereum-like blockchain and “smart contracts” can be used
to program their mobility credits for trading in the open market and spending against the
above payments and mileage. They can also be used to trade TMPs en-route by permitting
vehicles to communicate with each other and place bids for faster routes at higher prices.
Blockchain can also facilitate communication between connected vehicles and the road
infrastructure by considering data exchange requests as transactions to be stored and
retrieved from a blockchain database.
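As a rough illustration of how such tradable mobility permits could behave, here is a minimal Python sketch of a permit ledger. On-chain, this logic would live in an Ethereum-like smart contract as the authors suggest; the driver names, quantities, and prices are hypothetical.

```python
# Minimal sketch of a Tradable Mobility Pass (TMP) ledger: a fixed supply of
# permits distributed equally, which drivers may trade or spend.
class TMPLedger:
    def __init__(self, drivers, permits_each):
        self.balance = {d: permits_each for d in drivers}

    def trade(self, seller, buyer, n_permits, price):
        # Open-market trade of permits between two drivers.
        if self.balance[seller] < n_permits:
            raise ValueError("insufficient permits")
        self.balance[seller] -= n_permits
        self.balance[buyer] += n_permits
        return f"{buyer} pays {seller} {price}"

    def spend(self, driver, n_permits, purpose):
        # Permits double as payment for parking, tolls, tickets, etc.
        if self.balance[driver] < n_permits:
            raise ValueError("insufficient permits")
        self.balance[driver] -= n_permits
        return f"{driver} spent {n_permits} permits on {purpose}"

ledger = TMPLedger(["alice", "bob"], permits_each=10)
print(ledger.trade("alice", "bob", 3, price=4.5))
print(ledger.spend("bob", 2, "highway toll"))
print(ledger.balance)
```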
Additionally, Blockchain can provide safe, secure, and well-informed access to driving
behavior information for driving license agencies and insurance companies, which typically
know little about driving behavior. Insurance companies’ predictions are based on claims
history [92]. Access to data from connected vehicles can help them set insurance premiums
commensurate with drivers’ risk levels.
**Blockchain in vehicular communications. Some works proposed Blockchain-based** solutions to help create a secure, trusted, and distributed autonomous Intelligent Transportation System (ITS) capable of controlling and managing physical and digital assets, whereas most existing ITSs are centralized [93]. The authors in [81] described the design of a Blockchain-based decentralized alternative to existing security credential management systems, which eliminates the need for the services of a centralized trusted authority.
Vehicle-to-Everything (V2E) communications are an essential component in any
ITS. They help provide information on road accidents, road conditions, and traffic jams, allowing drivers to be aware of critical situations and thus enhancing transport safety.
Sharma et al. [82] proposed a distributed transport management system that allows vehicles to share their resources and create a network where value-added services, such
as automatic gas refill and ride-sharing, can be produced. Additionally, Yang et al. [83]
proposed reputation systems in vehicular networks based on Blockchain technology.
Lei et al. [84] proposed a Blockchain-based key management scheme to transfer
security keys between distributed security managers in heterogeneous Vehicular Communication Systems (VCS). The blockchain structure enables secure key transfer between
participating network security managers and eliminates the need for a central manager or
third-party authority.
Likewise, the authors in [85] proposed a decentralized key management mechanism
for Vehicular Ad-hoc Networks (VANETs) with Blockchain to automatically register, update,
and revoke the user’s public key. They also described a lightweight mutual authentication
and key agreement protocol based on the bivariate polynomial. Additionally, they analyzed
the security of their proposed mechanism for managing distributed keys and have shown
that it can prevent typical attacks, including insider attacks, public key tampering attacks,
Denial-of-Service (DoS) attacks, and collusion attacks.
Additionally, Yang et al. [86] proposed a decentralized Blockchain-based trust management system in vehicular networks. Vehicles can query the trust values of neighboring
vehicles and assess the credibility of received messages. The Roadside Units (RSUs) aggregate the trust values based on evaluations generated by the messages' recipients. Using Blockchain, all RSUs contribute to maintaining a reliable database.
Similarly, Arora et al. [87] proposed a Blockchain-based trust management system
for VANETs based on the Tendermint protocol to eliminate the possibility of malicious
nodes entering the network and reduce power consumption. Vehicles assess the messages
received from neighboring vehicles using the gradient boosting technique (GBT). Based
on the assessment results, the message source vehicle generates the ratings, uploads them
to RSUs, and calculates the trust offset value. All RSUs maintain the trust blockchain,
and each RSU adds its blocks to the trust blockchain.
In another work, Luo et al. [88] proposed a location privacy protection system based
on trust in Blockchain-based VANET. Their trust management approach uses Dirichlet
distribution to allow requesters to cooperate only with vehicles they trust. In addition,
they also developed the blockchain data structure to record the trustworthiness of vehicles on publicly accessible blocks promptly to allow any vehicle to access historical trust
information of counterparties whenever necessary.
**Blockchain for Electrical Vehicles. Battery Electric Vehicles (BEVs) are known for**
their low operating costs because they have fewer moving parts that require maintenance.
In addition, they are very environmentally friendly as they do not use fossil fuels. Modern
BEVs use rechargeable lithium-ion batteries, which have a longer life and retain energy
very well with a self-discharge rate of only 5% per month. In many cities around the
world, Charging Stations (CSs) are increasingly deployed in various geographic locations,
residential garages, and public/private parking lots to meet the energy needs of BEVs,
increasing the load on electrical distribution systems.
Intelligent car parking lots offer BEVs parking and recharging services during their
parking time for a fee. Customers of these parking lots want fast charging services at low
cost, while parking lot operators aim to maximize their profit. BEV owners increasingly
tend to purchase power from other electric vehicles to reduce recharging costs and reliance
on the primary electricity grid.
Huang et al. [89] proposed a Blockchain-based system to enable BEVs to trade energy
using day-ahead and real-time trading markets. Users of BEVs submit their price offers
to participate in a double auction. Then, the operator of the charging system performs
intelligent matching of the different offers to reduce the impact on the power grid by
programming the charging and discharging behavior of electric vehicles taking into account
the satisfaction of EV users and the social benefits. The operator of the charging system
uploads the trading contract to the blockchain once the trading results are cleared. Case
studies have demonstrated the effectiveness of the proposed model. Ferreira et al. [90]
studied the roaming charging process of electric vehicles and used Blockchain technologies
to support user identity management and record energy transactions securely. They used
off-chain cloud storage to record transaction details. Blockchain-based digital identity
management avoids charging cards used as an authentication process in charging systems.
It can achieve interoperability between different countries, allowing a roaming process of
BEV charging. In [91], Gorenflo et al. described a methodology for the design of Blockchain-based systems. They demonstrated its usefulness in creating a system for recharging
electric vehicles in a decentralized network of recharging stations. The proposed system
aims to solve the problem of trust between the different actors of the system, including
customers, providers of electric vehicle charging services, and property owners. Trust
problems arise from the potential for tampering with transaction data. The blockchain
ledger in the proposed solution contains a record of every transaction and acts as an
immutable audit trail.
_6.2. Smart Energy_
In recent years, the term “Smart Energy” has been used more and more to mean an
approach that goes beyond the concept of “Smart Grid.” While the smart grid concept
mainly focuses on the electricity sector, smart energy embodies a holistic approach that
includes many sectors (electricity, heating, cooling, buildings, industry, and transport). It
allows the development of affordable solutions for transforming existing systems into future
renewable and sustainable energy solutions [33]. Smart energy solutions typically use
various disruptive technologies, including artificial intelligence, deep learning, Blockchain
and distributed ledger technologies, distributed sensing and actuation technologies, and,
recently, edge computing and federated learning technologies.
6.2.1. Edge AI for Smart Energy Management
Several research efforts are increasingly studying and developing smart energy solutions. Shah et al. [94] reviewed several research works that use different energy optimization techniques in smart buildings and rely on IoT solutions. Their study aimed to identify
algorithms and methods for optimized energy use and edge and fog computing techniques
used in smart home environments. From an initial batch of 3800 papers, they found only
56 articles relevant to their study. The detailed analysis of these papers revealed that many
researchers had developed new optimization algorithms to optimize energy consumption
in smart homes.
Zhang et al. [95] proposed an IoT-based green energy management system to improve
the energy management of power grids in smart cities. With the implementation of IoT,
smart cities can control energy through ubiquitous monitoring and secure communications.
The proposed system uses deep reinforcement learning. The authors’ results show that IoT
sensors help detect energy consumption, predict energy demand in smart cities, and reduce
costs. Aided by a systematic learning process, the energy management system can balance
energy availability and demand by stably maintaining grid states.
Abdel-Basset et al. [96] proposed a smart edge computing framework to achieve efficient energy management in smart cities. They reviewed relevant work on data-driven load
forecasting (LF) techniques used in real-life scenarios such as smart buildings to predict the
day’s energy demand in advance and make appropriate energy demands on smart grids.
These short-term forecasts help to avoid energy shortages and promote fair consumption.
They classified these techniques into two classes: statistical or machine learning-based
techniques and deep learning-based techniques. They introduced a new deep learning architecture, called Energy-Net, to predict energy consumption by integrating the spatial and
temporal learning capability. They validated the robustness of their proposed architecture
through a comparative analysis of public datasets with recent cutting-edge approaches.
According to the authors, the trained Energy-Net system is deployable on resource-limited
edge devices to forecast potential energy needs sent as a request to the smart grid through
cloud-fog servers. As a result, the smart grid supplies the demanded energy to different
smart city sectors. Energy management is, therefore, performed efficiently.
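As a toy illustration of the short-term load forecasting task these systems perform, the following Python sketch fits a simple autoregressive model to synthetic hourly load data. It is a stand-in for trained deep architectures such as Energy-Net, not a reproduction of them; the data and lag length are assumptions.

```python
# Minimal sketch of short-term load forecasting: predict the next hour's
# demand from the previous 24 hours of readings.
import numpy as np

rng = np.random.default_rng(1)
hours = np.arange(24 * 60)  # 60 days of hourly readings (synthetic)
load = 50 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 2, hours.size)

lag = 24  # predict the next hour from the previous day of readings
X = np.stack([load[i:i + lag] for i in range(load.size - lag)])
y = load[lag:]

# Least-squares fit of an AR(24) model with intercept (a stand-in for a DNN).
A = np.c_[X, np.ones(len(X))]
w, *_ = np.linalg.lstsq(A, y, rcond=None)

forecast = np.r_[load[-lag:], 1.0] @ w  # next-hour prediction
print(f"next-hour load forecast: {forecast:.1f} kW")
```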
The authors in [97] studied and proposed an energy management framework based
on edge computing for a smart city. They developed an energy scheduling scheme based
on deep reinforcement learning to deal with the intermittency and uncertainty of energy
supplies and demands in cities for a long-term goal. They analyzed the efficiency of the
energy scheduling scheme in the cases with and without edge servers, respectively. Their
results demonstrate that the proposed model can achieve low energy costs while exhibiting
lower delays than traditional schemes.
6.2.2. Blockchain for Smart Energy Management
Blockchain technology in the energy sector is very promising. It can significantly
reduce energy trading costs, increase process efficiency, and deliver customer cost benefits.
It can establish direct interactions between all the actors involved, which guarantees the
optimal use of existing production capacities while offering energy at the best price. The
application of Blockchain in emerging smart energy systems in smart cities has recently
received a great deal of attention. In addition to the BEV charging we mentioned, there is
an increasing need for decentralized energy management, energy trading platforms development, and secure data and financial transactions between the different actors involved.
This need arises from the proliferation of new devices, technologies, renewable energy
resources, and electric vehicles. Additionally, there is a growing interest worldwide in using
Blockchain technologies to create a secure and more resilient environment for the smart
energy industry. Several research efforts investigated the opportunities, benefits, challenges,
as well as drawbacks of Blockchain technologies in the context of smart energy [98–100].
This section reviews some efforts regarding the use of Blockchain in smart energy
systems. We do not intend to provide a full survey. Andoni et al. [101] reviewed and ranked
about 140 Blockchain-based projects in the energy sector. Additionally, the authors in [102]
reviewed several research works regarding the applications of Blockchain technology in
smart grids. They categorized them into decentralized energy management, energy trading,
BEVs, financial transactions, cybersecurity, testbeds, environmental issues, and demand
response (DR). A common aspect of most of the efforts is the usage of Blockchain to address
decentralized energy management, energy trading, transparency, and its perceived benefits
to system security. However, system security and user privacy are typically dependent on
the type of blockchain used. Table 2 summarizes these efforts.
**Table 2. Summary of Blockchain-based smart energy literature review.**

| Ref. | Focus | Blockchain Mechanisms Used |
| --- | --- | --- |
| [98] | Distributed management of DR in smart grids | Smart contracts, consensus-based DR validation approach |
| [99] | Smart energy trading | Smart contracts |
| [100] | P2P energy and carbon trading | Pay-to-public-key-hash with multiple signatures to secure transactions |
| [101] | Review of challenges of Blockchain technology in the energy sector | |
| [102] | Review of blockchain in future smart grids | |
| [103] | Review of blockchain applications in different areas of a smart city, including smart energy | |
| [104] | Automated energy DR, P2P energy trading | Smart contracts, noncooperative game for consumption strategy to reach consensus |
| [105] | Distributed energy system (short review) | Smart contracts, consensus |
| [106] | Federated power plants with P2P energy trading | |
| [107] | Distributed energy management in a multi-energy market enhanced with blockchain | Smart contracts, consensus |
| [108] | Distributed energy exchange | Smart contracts |
| [109] | Microgrid energy market, P2P energy trading | |
| [110] | Electricity trading for neighborhood renewable energy | P2P Blockchain network |
| [111] | Smart homes energy trading | Ethereum's smart contracts, consensus |
| [112] | P2P solar energy market | Auction mechanism in the smart contracts |
| [113] | P2P energy trading | Ethereum-based blockchain, smart contracts, distributed consensus for verification and group management |
| [114] | Federated Learning-based P2P energy sharing assisted with Blockchain | Smart contracts for energy demand prediction |
| [115] | Electrical energy transaction ecosystem between smart home prosumers and consumers, P2P energy trading | Smart contracts (energy tags) |
| [116] | Review of applications of smart communities, including energy trading in ITS using blockchain | Smart contracts, miners, consensus |
**Decentralized Energy Management. The ever-growing deployment of renewable** energy systems in smart grids highlights the need to develop distributed energy management systems and triggers fundamental changes in energy trading [117,118]. A large body
of literature has investigated the usage of Blockchain technologies to ease decentralized
energy management according to the P2P model used by Blockchain [103–106,119].
Real-time energy management has the potential to resolve the impact of various uncertainties in the energy market, provide instant energy balance and improve business returns.
Wang et al. [107] proposed a bidding strategy for the energy market, with multiple participants, which uses an adaptive learning process that incorporates a reserve price adjustment
and a mechanism of dynamic compensation. Participants perform bid adjustments based
on adaptive learning leveraging real-time market information to increase transaction rate
and maximize profits. Blockchain technology guarantees the transparent and efficient
performance of the presented bidding strategy. A decentralized Blockchain application
showed that the system could achieve real-time energy management and dynamic trading
in practice.
**Energy trading. Recent years have seen the high penetration of renewable energy**
systems in smart grids and homes. However, complex energy trading and complicated
monitoring procedures are obstacles to developing renewable energies. Energy trading
involves various actors, including residential consumers, renewable energy producers,
BEVs, and energy storage, which can participate in a Blockchain-based market for energy
trading with the roles of prosumer and consumer. Actors propose their energy prices according to their resources and capabilities, which leads to a competitive energy market. Therefore,
the blockchain can facilitate energy trading and data transactions while guaranteeing
transaction security, improving transparency, and easing financial transactions. Data flow between prosumers and consumers without human involvement [108].
A significant body of research has studied and proposed Blockchain-based networks
to enable energy trading and related transactions. For example, the authors in [109,110]
have studied renewable energy developments, including wind and solar power, in smart
homes. They proposed to use Blockchain technology to trade energy between smart homes
and increase their financial benefits.
Additionally, Kang et al. [111] investigated energy trading between smart homes
using Blockchain technology. Smart homes store energy in energy storage, and consumer
nodes equipped with miners monitor energy consumption. Therefore, if the stored energy
is not sufficient to power the loads, the additional energy is purchased from the prosumer
nodes by having Ethereum smart contracts manage the energy trade according to the following rules (a matching sketch follows the list):
- Energy trading conditions should be specified to permit energy exchange between
prosumers and consumers.
- Prosumers and consumers should determine price and exchange procedures beforehand, and the prosumers should complete the proof-of-work.
- If a consumer’s stored energy falls below a certain level, her home miners should send
energy trading requests to appropriate prosumers.
- Energy trading takes place when consumer requirements match prosumer conditions.
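The matching sketch referenced above follows: a minimal Python illustration of how a consumer whose storage falls below a threshold could be matched to the cheapest prosumer whose pre-agreed conditions fit. In the system described, this logic would be encoded in Ethereum smart contracts; all names, quantities, and prices here are hypothetical.

```python
# Minimal sketch of consumer-prosumer matching under the rules above.
from dataclasses import dataclass

@dataclass
class Prosumer:
    name: str
    surplus_kwh: float
    price_per_kwh: float

def request_energy(stored_kwh, threshold, need_kwh, max_price, prosumers):
    """Trigger a trading request when storage drops below the threshold and
    match it to the cheapest prosumer meeting the agreed conditions."""
    if stored_kwh >= threshold:
        return None  # rule: trade only when stored energy is low
    candidates = [p for p in prosumers
                  if p.surplus_kwh >= need_kwh and p.price_per_kwh <= max_price]
    if not candidates:
        return None  # rule: trade only when conditions match
    best = min(candidates, key=lambda p: p.price_per_kwh)
    return {"seller": best.name, "kwh": need_kwh,
            "total_price": best.price_per_kwh * need_kwh}

market = [Prosumer("home-A", 12.0, 0.11), Prosumer("home-B", 5.0, 0.09)]
print(request_energy(stored_kwh=1.5, threshold=2.0, need_kwh=3.0,
                     max_price=0.12, prosumers=market))
```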
It is widely expected that the global demand for clean and stable energy sources will
continue to increase over the coming decades. With the recent penetration of distributed
resources into energy trading, communities can take advantage of cheaper electricity prices
while supporting green energy locally. However, this poses new challenges, mainly in designing an auction process that ensures individual rationality and economic efficiency; these challenges can be mitigated with the help of Blockchain technology. Lin et al. [112] studied the application of P2P energy trading
and Blockchain technology in the development of photovoltaic (PV) units. They proposed
a P2P energy trading model using a Discriminatory and Uniform k-Double Auction (k-DA).
They verified the financial benefits of the proposed model through simulation.
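To illustrate the auction mechanism, here is a minimal Python sketch of a uniform-price k-double auction in the spirit of the k-DA used in [112]: sorted bids and asks are matched, and all matched trades clear at a single price interpolated (by k) between the marginal ask and bid. The prices are illustrative, and the simplified clearing rule omits refinements a real market would need.

```python
# Minimal sketch of a uniform-price k-double auction (k-DA).
def k_double_auction(bids, asks, k=0.5):
    bids = sorted(bids, reverse=True)   # highest willingness-to-pay first
    asks = sorted(asks)                 # lowest asking price first
    trades = 0
    # Match while the next-best bid still covers the next-best ask.
    while trades < min(len(bids), len(asks)) and bids[trades] >= asks[trades]:
        trades += 1
    if trades == 0:
        return 0, None
    # All trades clear at one price between the marginal ask and bid:
    # p = k * bid + (1 - k) * ask.
    price = asks[trades - 1] + k * (bids[trades - 1] - asks[trades - 1])
    return trades, price

buy = [0.14, 0.12, 0.10, 0.08]   # $/kWh buyers will pay
sell = [0.07, 0.09, 0.11, 0.13]  # $/kWh sellers will accept
n, p = k_double_auction(buy, sell)
print(f"{n} trades clear at {p:.3f} $/kWh")  # 2 trades at 0.105 $/kWh
```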
The authors in [113] have exploited the opportunities offered by Blockchain in building
the prosumer group in the context of P2P energy trading. They proposed a Blockchain-assisted adaptive model, named SynergyChain, to improve the scalability and decentralization of the prosumer aggregation mechanism in the context of P2P energy trading.
The model showed that the coalition of multiple energy prosumers through aggregation
outperformed the case in which individual prosumers participated in the energy market.
They implemented a reinforcement learning module that decides whether the system
should act as a group or independently. The complete analysis using the hourly energy
consumption dataset showed a substantial improvement in system performance and scalability compared to centralized systems. Furthermore, their system worked better with
the learning module, in terms of cost-effectiveness and performance, than without it. In
another work [114], the authors proposed FederatedGrids, a platform that uses federated
learning and Blockchain for P2P energy trading and sharing. It creates a collaborative
environment that maintains a good balance between the participants of the different microgrids. The blockchain helps to ensure trust and privacy between all participants. Smart
contracts and federated learning allow the platform to predict future energy production
and system load, thus allowing prosumers to make optimal decisions related to their energy
sharing and exchange strategies.
Smart cities can significantly benefit from Blockchain capabilities to maximize energy
efficiency and improve energy resource planning and management. Blockchain-based
networks can directly connect multiple energy resources and household appliances, thereby
providing users with high-quality, inexpensive, and efficient energy [115]. They can help
regulate the distribution and transformation of energy in smart grids, bringing more
transparency to energy transactions [116].
**7. Edge AI and Blockchain Convergence**
Several research efforts studied the convergence between Blockchain and edge computing without considering or giving details about the AI component at the edge [68,120–126].
However, as AI techniques further proliferate at the edge in various smart city systems
(healthcare, transportation, power grid, etc.) and ensure huge benefits, they also introduce
increased privacy and security threats. Therefore, robust security measures are needed
to protect data and AI models at the edge. These measures include security features for
data storage, encryption, data dissemination, and key/certificate management. As we
discussed earlier, edge AI and Federated Learning are emerging technologies for building smart latency-sensitive services at the edge while protecting data privacy. On the
other hand, Blockchain technology shows significant possibilities with its immutable,
distributed, and auditable data recording for safeguarding against data breaches in a
distributed environment.
The convergence between Blockchain and AI is attracting much interest in academia and industry as a way to solve many problems that were challenging to manage effectively only a few years ago.
The characteristics of blockchain technology and its decentralized architecture, which we
discussed in Section 4.1, can help build robust and secure AI applications. Blockchain
attributes of immutability, provenance, consensus, and transparency enable secure sharing
of AI training data and pre-trained AI models using a permanent and unalterable record
of AI data and models. Secure sharing of AI data and models is associated with increased
trust in AI models and the data they work with.
More and more research efforts study the convergence of edge AI and Blockchain.
Table 3 summarizes those efforts. Jiang et al. [127] argued that conventional approaches
for object detection that rely on classic and connectionist AI models are not adequate to
support the large-scale deployment of the Visual Internet of Vehicles (V-IoV). On the other
hand, edge intelligence, which integrates edge computing and AI, demonstrated a balance
between efficiency and computational complexity. Edge AI involves training learning
models and analyzing V-IoV data, reducing latency, improving time to action, and minimizing network bandwidth usage. Object detection tasks can be offloaded and executed
on Roadside Units (RSUs) using the edge’s storage and computing power capabilities.
The authors proposed an edge AI framework for object detection in the V-IoV system
and a You Only Look Once (YOLO)-based abductive learning algorithm for robust and
interpretable AI. The abductive model combines symbolic and connectionist AI to learn
from data. Additionally, Blockchain complements edge AI with security, privacy, reliability,
scalability, and enables model sharing.
Lin et al. [128] consider that knowledge, such as classification models, detections, and predictions about physical environments, could be extracted from sensory data
by introducing edge computing and edge AI into the Internet of Things. Since multiple
nodes with heterogeneous Edge AI devices generate isolated knowledge, collaboration
and data exchange between nodes are essential to building intelligent applications and
services. The authors proposed a P2P knowledge marketplace to make knowledge tradable
in edge AI-enabled IoT and a knowledge consortium blockchain for secure and efficient
knowledge management and exchange in the market. The blockchain consortium includes
a cryptographic knowledge coin, smart contracts, and a consensus mechanism as proof
of trade.
Rahman et al. [129] addressed in their work the challenge of bringing intelligent and
cognitive processing to the edge where the massive amount of IoT data are generated and
processed by mobile edge computing (MEC) nodes. Key transactions are anonymized and securely recorded in the blockchain, while big data are securely stored in decentralized off-chain solutions with an immutable ledger. Qiu et al. [130] proposed AI-Chain,
a Blockchain-based edge intelligence for Beyond Fifth-Generation (B5G) networks. AIChain is an immutable and distributed record of local learning outcomes that can lay a new
foundation for sharing information between edge nodes. Leveraging the portability of deep
learning, each node at the edge trains neural network components and applies AI-Chain
to share its learning results. This process dramatically reduces the wastage of computing
power and improves the learning power of the edge node through the learning power of
other edge nodes. Du et al. [131] reviewed the existing literature on Blockchain-enabled
edge intelligence in the IoT domain, identified emerging trends, and suggested open issues
for further research, including transaction rejection, selfish learning, and fork issues. Fork
problems arise when edge nodes disagree on the same learning model and alternative
chains (i.e., forked chains) emerge.
As a use case of the convergence of Blockchain and edge AI, we consider in the
following some efforts in the context of smart mobility. IoV is an emerging technology
that has the potential to alleviate traffic problems in smart cities. In an IoV network,
the vehicles are equipped with modern communication and sensing technologies that
allow the sharing and exchanging of data between the vehicles and the RSUs. The massive
volume of data captured by vehicle sensors, including GPS and RADAR, favors data-driven
AI models. Attacks against vehicles using polymorphic viruses cannot be easily recognized
and predicted because their signatures continually change. The centralized ML paradigm is
therefore evolving towards a more decentralized and distributed learning framework, especially
federated learning, to address growing privacy and security concerns.
Several works have proposed federated learning-based solutions for the IoV [132–135].
Although federated learning adds considerable security to learning architectures, it still faces
security issues because it operates with a centralized aggregator. For model
training, federated learning relies on local workers, which may be vulnerable to cyber
intrusions: if a local model is attacked, it can mislead other models, corrupting the
global update. Because of the likelihood of such attacks, Blockchain
is combined with federated learning to provide a decentralized arrangement
for managing incentives and reliably ensuring security and privacy. Given the promising
capability of federated learning, especially for building an ITS, and the need to
mitigate potential attacks, several Blockchain-enabled federated learning schemes for the IoV
have been proposed over the last few years.
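To illustrate why validating local updates before aggregation matters, the sketch below rejects implausible updates before weighted federated averaging. The outlier test is a generic stand-in of our own, not the validation logic of any of the cited schemes.

```python
import statistics

def fedavg(updates: list[list[float]], sizes: list[int]) -> list[float]:
    """Weighted federated averaging of flat parameter vectors."""
    total = sum(sizes)
    return [
        sum(u[d] * n for u, n in zip(updates, sizes)) / total
        for d in range(len(updates[0]))
    ]

def is_plausible(update: list[float], peers: list[list[float]], k: float = 1.0) -> bool:
    """Toy stand-in for on-chain verification: reject updates whose
    magnitude is an extreme outlier relative to the other workers."""
    norms = [sum(abs(x) for x in u) for u in peers]
    mu, sigma = statistics.mean(norms), statistics.pstdev(norms) or 1e-9
    return abs(sum(abs(x) for x in update) - mu) <= k * sigma

# Three honest workers and one poisoned update.
updates = [[0.1, 0.2], [0.12, 0.18], [0.09, 0.21], [50.0, -40.0]]
sizes = [100, 120, 90, 80]
kept = [(u, n) for u, n in zip(updates, sizes) if is_plausible(u, updates)]
print(fedavg([u for u, _ in kept], [n for _, n in kept]))  # poison excluded
```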
The authors in [136] proposed a framework for knowledge sharing in IoV based
on a hierarchical federated learning algorithm and a hierarchical blockchain. Vehicles
and RSUs learn from surrounding data through machine learning methods and share the resulting
knowledge. The blockchain framework targets large-scale vehicular networks, and the
hierarchical federated learning algorithm aims to meet the distributed model and privacy
requirements of IoVs. They modeled knowledge sharing as a trading market process to
drive sharing behaviors and formulated the trading process as a multi-leader, multi-player
game. The authors stated that their simulation results showed that the proposed hierarchical
algorithm improves sharing efficiency and learning quality and achieves approximately
10% higher accuracy than conventional federated learning algorithms, with RSUs reaching optimal
utility during the sharing process. Moreover, the blockchain-enabled framework effectively
protects against malicious workers.
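A generic two-level aggregation loop, shown below, captures the intuition of hierarchical federated learning (this is an illustrative sketch, not the algorithm of [136]): vehicle updates are averaged at their RSU first, and only the per-RSU models are averaged globally.

```python
def weighted_mean(vectors: list[list[float]], weights: list[int]) -> list[float]:
    total = sum(weights)
    return [
        sum(v[d] * w for v, w in zip(vectors, weights)) / total
        for d in range(len(vectors[0]))
    ]

def hierarchical_round(clusters: list[list[tuple[list[float], int]]]) -> list[float]:
    """Two-level aggregation: vehicles -> RSU, then RSUs -> global.
    Aggregating locally first keeps most traffic inside a cluster, which
    is the scalability argument behind the hierarchical design."""
    rsu_models, rsu_sizes = [], []
    for vehicles in clusters:
        updates = [u for u, _ in vehicles]
        sizes = [n for _, n in vehicles]
        rsu_models.append(weighted_mean(updates, sizes))  # edge aggregation
        rsu_sizes.append(sum(sizes))
    return weighted_mean(rsu_models, rsu_sizes)           # global aggregation

clusters = [
    [([0.1, 0.2], 50), ([0.3, 0.0], 30)],  # vehicles under RSU 1
    [([0.2, 0.1], 40)],                    # vehicles under RSU 2
]
print(hierarchical_round(clusters))
```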
The authors in [137] proposed a blockchain-enabled federated learning framework
to improve the performance and privacy of autonomous vehicles. The framework facilitates the efficient communication of autonomous vehicles, where on-board local learning
modules exchange and verify their updates in a fully decentralized manner without any
centralized coordination by leveraging the blockchain consensus mechanism. The framework extends the reach of its federation to untrustworthy public network vehicles via
a validation process for local training modules. By offering rewards proportional to the
size of their data samples, the framework encourages vehicles holding large datasets
to join the federated learning.
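The incentive rule itself is simple to state. Below is a one-function sketch of size-proportional rewards; the pool size and vehicle identifiers are illustrative assumptions, not values from [137].

```python
def proportional_rewards(sample_counts: dict[str, int], pool: float) -> dict[str, float]:
    """Split a fixed reward pool among vehicles in proportion to the
    number of training samples each one contributed."""
    total = sum(sample_counts.values())
    return {vid: pool * n / total for vid, n in sample_counts.items()}

print(proportional_rewards({"veh-1": 500, "veh-2": 1500, "veh-3": 1000}, pool=30.0))
# {'veh-1': 5.0, 'veh-2': 15.0, 'veh-3': 10.0}
```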
In the IoV, exchanging messages between vehicles is essential to ensure road safety,
and broadcasting is generally used for emergencies. To address the low probability of receiving broadcast messages in high-density, high-mobility scenarios, the authors of [138]
proposed a blockchain-assisted federated learning solution for message broadcasting. Similar to the Proof-of-Work (PoW) consensus used in several blockchains, vehicles compete
to become a relay (miner) node by processing the proposed Proof-of-Federated-Learning
(PoFL) consensus embedded in the smart contract of the blockchain. A Stackelberg game
is further used to analyze the business model that incentivizes vehicles to take part in federated
learning and message delivery. The authors stated that their solution outperforms the same
solution without blockchain, allowing more vehicles to upload their local models and yield
a more accurate aggregated model in less time. It also outperforms other blockchain-based
approaches by reducing the consensus time by 65.2%, improving the message delivery rate
by at least 8.2%, and more effectively maintaining the privacy of neighboring vehicles.
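The spirit of PoFL is that useful learning work, rather than wasted hash puzzles, decides who mines. As a heavily simplified stand-in for the consensus in [138], the sketch below elects as relay the vehicle whose local model scores best on a shared validation set:

```python
def validation_accuracy(model: dict[str, float], val_set: list[tuple[float, int]]) -> float:
    """Score a toy linear classifier sign(w*x + b) on labeled points."""
    correct = sum(
        1 for x, y in val_set
        if (model["w"] * x + model["b"] >= 0) == (y == 1)
    )
    return correct / len(val_set)

def pofl_elect_relay(candidates: dict[str, dict[str, float]],
                     val_set: list[tuple[float, int]]) -> str:
    """Elect the vehicle with the most accurate model as relay/miner,
    so the mining competition itself improves the shared model."""
    return max(candidates, key=lambda v: validation_accuracy(candidates[v], val_set))

val_set = [(-2.0, 0), (-1.0, 0), (1.0, 1), (2.0, 1)]
candidates = {
    "veh-A": {"w": 1.0, "b": 0.0},   # separates the validation set perfectly
    "veh-B": {"w": -1.0, "b": 0.5},  # mostly wrong
}
print(pofl_elect_relay(candidates, val_set))  # -> veh-A
```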
Doku et al. [139] proposed a federated learning framework called iFLBC to bring
artificial intelligence to edge nodes through a shared machine learning model powered by
Blockchain technology. Their motivation is to filter relevant data from irrelevant data using
a mechanism called Proof of Common Interest (PoCI). The relevant data of an edge node
are used to train a model, which is then aggregated with models trained by other edge
nodes to generate a shared model stored on the blockchain. Network members download
the aggregated model to provide intelligent services to end-users.
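The exact PoCI mechanism is specific to [139]; as a generic stand-in for the idea of filtering relevant from irrelevant data, the sketch below keeps only feature vectors sufficiently aligned with a shared "common interest" vector (the threshold and vectors are assumptions for illustration):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def filter_relevant(samples: list[list[float]],
                    interest: list[float],
                    threshold: float = 0.8) -> list[list[float]]:
    """Keep only samples aligned with the network's agreed interest, so
    each node trains its local model on relevant data only."""
    return [s for s in samples if cosine(s, interest) >= threshold]

interest = [1.0, 0.0]                      # agreed topic of the shared model
samples = [[0.9, 0.1], [0.0, 1.0], [1.0, 0.2]]
print(filter_relevant(samples, interest))  # the off-topic sample is dropped
```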
**Table 3. Summary of edge AI and Blockchain convergence literature review.**

| Ref. | Focus Area | Edge AI Use Case | Blockchain Use Case |
|------|------------|------------------|---------------------|
| [127] | Knowledge management and exchange in the Internet of Vehicles (IoV) | Object detection in the V-IoV and a YOLO-based abductive learning algorithm for robust and interpretable AI. | Security, privacy, reliability, scalability, and model sharing. |
| [128] | Making knowledge tradable in edge AI-enabled IoT | Extracting knowledge, such as classification models, detections, and predictions about physical environments, from sensory data at the edge. A P2P knowledge marketplace to make knowledge tradable in edge AI-enabled IoT. | A knowledge consortium blockchain for secure and efficient knowledge management and exchange in the market, including a cryptographic knowledge coin, smart contracts, and a consensus mechanism as proof of trade. |
| [129] | Blockchain and IoT-based cognitive edge framework for sharing economy services in a smart city | Bringing intelligent and cognitive processing to the edge, where the massive amount of IoT data are generated and processed by mobile edge computing (MEC) nodes. | Key transactions are anonymized and securely recorded in the blockchain, while big data are securely stored in decentralized off-chain solutions with an immutable ledger. |
| [130] | Blockchain-energized edge intelligence for Beyond 5G networks | AI-Chain, a Blockchain-based edge intelligence for B5G networks. Each node at the edge trains neural network components and applies AI-Chain to share its learning results. | An immutable and distributed record of local learning outcomes that lays the foundation for sharing information between edge nodes. |
| [139] | Edge intelligence using a federated learning Blockchain network | iFLBC, a federated learning framework to bring AI to edge nodes through a shared machine learning model powered by Blockchain technology. The relevant data of an edge node are used to train a model, which is then aggregated with models trained by other edge nodes to generate a shared model. | The shared model is stored on the blockchain. Network members download the aggregated model to provide intelligent services to end-users. |
| [136] | Knowledge sharing in the IoV | Hierarchical federated learning. Vehicles and RSUs learn from surrounding data through machine learning methods and share learning knowledge. Aims to meet the distributed model and privacy requirements of IoVs. | Hierarchical blockchain. Knowledge sharing is modeled as a trading market process to drive sharing behaviors; the trading process is formulated as a multi-leader, multi-player game. |
| [137] | Federated learning with Blockchain for autonomous vehicles | Federated learning framework that extends the reach of its federation to untrustworthy public network vehicles via a validation process of local training modules. | On-board local learning modules exchange and verify their updates in a fully decentralized manner, without any centralized coordination, by leveraging the blockchain consensus mechanism. |
| [138] | Message dissemination in the IoV | Blockchain-assisted federated learning solution for message broadcasting. A Stackelberg game analyzes the business model that incentivizes vehicles to take part in federated learning and message delivery. | Vehicles compete to become a relay (miner) node by processing the Proof-of-Federated-Learning (PoFL) consensus embedded in the smart contract of the blockchain. |
**8. Open Research Issues**
The research initiatives reported above represent attempts to mitigate the challenges
of implementing edge AI and Blockchain in two key areas of smart cities, smart mobility
and smart energy. However, there remain unresolved challenges. This section examines
four potential prospective research trends for future implementation.
- **Collaboration and data exchange.** As we described earlier, since multiple nodes with
heterogeneous edge devices generate isolated knowledge, collaboration and data
exchange between nodes are essential to building intelligent applications and services
for smart mobility and smart energy. Storing, sharing, querying, and exchanging
data and training models requires additional security and privacy measures. Blockchain
technology helps meet these requirements. However, edge devices with limited
storage may not be able to store the training model or the blockchain structure that
grows as transaction blocks are added to the blockchain. Moreover, it is common
for edge devices to store distributed ledger data that are not even useful for their
transactions. Therefore, cutting-edge blockchain-specific equipment or platforms to
support decentralized blockchain data storage are required.
- **Impact of edge connections on Blockchain-enabled smart mobility.** In a smart mobility scenario, edge devices on connected vehicles, for example, are often connected
to other edge devices or cloud servers through unreliable wireless channels. As we
discussed earlier, Blockchain can facilitate communication between connected vehicles
and the road infrastructure by considering data exchange requests as transactions to
be stored and retrieved from a blockchain database. Due to the inevitable network
delays, a vehicle participating in the blockchain may not receive the most recent block.
It may then create an alternative chain that branches off the main chain. This problem
is known as the forking problem. It can also arise when edge nodes disagree on the
same learning model and forked chains emerge. Such forking reduces throughput
because ultimately only one chain survives, while all blocks in the discarded branches
are removed (see the first sketch after this list). Further research in this area is needed.
- **Prediction of future energy production and system load.** In P2P smart energy trading scenarios, the decentralization of prosumers raises trust and coordination issues among market participants. Blockchain helps
to ensure trust and privacy between all players in the energy market. Smart contracts
and learning models at participating nodes should help predict future energy production and system load, allowing prosumers to make optimal decisions about sharing
and pricing their energy. Further research on federated learning models for energy
trading and pricing is needed.
- **Energy efficiency.** Incorporating AI in edge devices is challenging because of the
power-hungry features of deep learning algorithms, such as convolutional neural
networks (CNNs). Therefore, energy efficiency is a critical issue for edge AI applications. Some research efforts investigated the usage of reservoir computing as an
alternative, which promises to provide good performance while exhibiting low-power
characteristics [140]. Additionally, with growing calls for strict
environmental standards and rapidly rising energy costs, smart cities are taking energy efficiency increasingly seriously. However, some Blockchain
consensus mechanisms, such as Proof of Work (PoW), are computationally expensive,
as blockchain nodes perform complex computations to mine the next block; PoW is
not an energy-efficient approach and consumes a large amount of electricity due to
computation redundancy (see the second sketch after this list). Researchers are developing alternative, less computationally
expensive consensus mechanisms for blockchain systems. Although highly promising,
these consensus mechanisms are still in their infancy and suffer from scalability issues,
and their security has not been rigorously investigated. Therefore, further research is
needed concerning the design of energy-efficient edge AI applications and consensus
mechanisms for blockchain systems.
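To make the forking issue tangible, the first sketch below applies the usual longest-chain rule and shows the orphaned blocks whose work is lost; it is a generic illustration under our own assumptions, not any specific protocol discussed above.

```python
def resolve_fork(chains: list[list[str]]) -> list[str]:
    """Longest-chain rule: the chain with the most blocks survives;
    blocks unique to shorter branches are orphaned (lost throughput)."""
    winner = max(chains, key=len)
    orphaned = {b for c in chains for b in c} - set(winner)
    print(f"orphaned blocks (wasted work): {sorted(orphaned)}")
    return winner

main = ["b0", "b1", "b2", "b3"]
fork = ["b0", "b1", "b2x"]         # a node missed b2 and branched off
print(resolve_fork([main, fork]))  # the fork's b2x is discarded
```

The second sketch shows why PoW is energy-hungry: every competing miner repeats the same brute-force nonce search, and every failed attempt is discarded work. The difficulty value is an assumption chosen so the example runs quickly.

```python
import hashlib

def mine(header: bytes, difficulty_bits: int) -> tuple[int, str]:
    """Brute-force a nonce until the block hash has enough leading zero
    bits; every failed attempt below is discarded computation."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(header + nonce.to_bytes(8, "big")).hexdigest()
        if int(digest, 16) < target:
            return nonce, digest
        nonce += 1

nonce, digest = mine(b"block-42", difficulty_bits=16)
print(f"tried {nonce + 1} hashes; winning digest {digest[:16]}...")
```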
**9. Conclusions**
Smart cities face several challenges due to population growth and migratory waves.
This article examines the current and potential contributions of edge AI and Blockchain
technology in coping with smart city challenges through the lens of sustainability in two
main areas: smart mobility and smart energy. It contributes to the sustainability
literature by identifying and bringing together recent research on edge AI and Blockchain,
highlighting their positive impacts and potential implications on smart cities.
This review highlights the existing and potential convergence of edge AI and Blockchain.
It shows that edge AI and Blockchain technology can help address the problem of traffic
congestion and management by automating vehicle detection, counting, and speed estimation. Furthermore, these technologies can help establish trustworthy communications and energy trading between vehicles, as well as reliable and secure distributed smart
energy management. Finally, this article discusses potential research trends for future implementations of edge AI and Blockchain to provide innovative solutions in smart mobility
and smart energy. It is expected that this review will serve as a guideline for future research
on the adoption of edge AI and Blockchain in other areas of smart cities.
**Funding: This work is supported by the UAEU Program for Advanced Research Grant N. G00003443.**
**Institutional Review Board Statement: Not applicable.**
**Informed Consent Statement: Not applicable.**
**Data Availability Statement: Not applicable.**
**Conflicts of Interest: The author declares no conflict of interest.**
**References**
1. Hugo Priemus, S.D. Climate Change and Sustainable Cities; Taylor & Francis: Andover, UK, 2016.
2. Grimmond, C.S.B.; Roth, M.; Oke, T.R.; Au, Y.C.; Best, M.; Betts, R.; Carmichael, G.; Cleugh, H.; Dabberdt, W.; Emmanuel, R.; et al. Climate and More Sustainable Cities: Climate Information for Improved Planning and Management of Cities (Producers/Capabilities Perspective). Procedia Environ. Sci. 2010, 1, 247–274. [[CrossRef]](http://doi.org/10.1016/j.proenv.2010.09.016)
3. Albert, S. Innovative Solutions for Creating Sustainable Cities; Cambridge Scholars Publishing: Newcastle upon Tyne, UK, 2019.
4. Mondejar, M.E.; Avtar, R.; Diaz, H.L.B.; Dubey, R.K.; Esteban, J.; Gómez-Morales, A.; Hallam, B.; Mbungu, N.T.; Okolo, C.C.;
Prasad, K.A.; et al. Digitalization to achieve sustainable development goals: Steps towards a Smart Green Planet. Sci. Total
_[Environ. 2021, 794, 148539. [CrossRef] [PubMed]](http://dx.doi.org/10.1016/j.scitotenv.2021.148539)_
5. [Shi, W.; Cao, J.; Zhang, Q.; Li, Y.; Xu, L. Edge Computing: Vision and Challenges. IEEE IoT J. 2016, 3, 637–646. [CrossRef]](http://dx.doi.org/10.1109/JIOT.2016.2579198)
6. [Shi, W.; Dustdar, S. The Promise of Edge Computing. Computer 2016, 49, 78–81. [CrossRef]](http://dx.doi.org/10.1109/MC.2016.145)
7. [Satyanarayanan, M. The Emergence of Edge Computing. Computer 2017, 50, 30–39. [CrossRef]](http://dx.doi.org/10.1109/MC.2017.9)
8. [Abbas, N.; Zhang, Y.; Taherkordi, A.; Skeie, T. Mobile Edge Computing: A Survey. IEEE IoT J. 2017, 5, 450–465. [CrossRef]](http://dx.doi.org/10.1109/JIOT.2017.2750180)
9. Xu, D.; Li, T.; Li, Y.; Su, X.; Tarkoma, S.; Jiang, T.; Crowcroft, J.; Hui, P. Edge Intelligence: Architectures, Challenges, and
Applications. arXiv 2020, arXiv:2003.12172.
10. Wang, X.; Han, Y.; Leung, V.C.M.; Niyato, D.; Yan, X.; Chen, X. Convergence of Edge Computing and Deep Learning:
[A Comprehensive Survey. IEEE Commun. Surv. Tutor. 2020, 22, 869–904. [CrossRef]](http://dx.doi.org/10.1109/COMST.2020.2970550)
11. Kowalski, M.; Lee, Z.W.Y.; Chan, T.K.H. Blockchain technology and trust relationships in trade finance. Technol. Forecast. Soc.
_[Chang. 2021, 166, 120641. [CrossRef]](http://dx.doi.org/10.1016/j.techfore.2021.120641)_
12. Werbach, K. The Blockchain and the New Architecture of Trust (Information Policy); The MIT Press: London, UK, 2018.
13. Gaggioli, A.; Eskandari, S.; Cipresso, P.; Lozza, E. The Middleman Is Dead, Long Live the Middleman: The “Trust Factor” and the
[Psycho-Social Implications of Blockchain. Front. Blockchain 2019, 2. [CrossRef]](http://dx.doi.org/10.3389/fbloc.2019.00020)
14. Shala, B.; Trick, U.; Lehmann, A.; Ghita, B.; Shiaeles, S. Blockchain and Trust for Secure, End-User-Based and Decentralized IoT
[Service Provision. IEEE Access 2020, 8, 119961–119979. [CrossRef]](http://dx.doi.org/10.1109/ACCESS.2020.3005541)
15. Bashir, I. Mastering Blockchain: Distributed Ledger Technology, Decentralization, and Smart Contracts Explained, 2nd ed.; Packt
Publishing: Mumbai, India, 2018.
16. Konstantinidis, I.; Siaminos, G.; Timplalexis, C.; Zervas, P.; Peristeras, V.; Decker, S. Blockchain for Business Applications: A
Systematic Literature Review. In Business Information Systems; Springer: Cham, Switzerland, 2018; pp. 384–399. [[CrossRef]](http://dx.doi.org/10.1007/978-3-319-93931-5_28)
17. Ølnes, S.; Ubacht, J.; Janssen, M. Blockchain in government: Benefits and implications of distributed ledger technology for
[information sharing. Gov. Inf. Q. 2017, 34, 355–364. [CrossRef]](http://dx.doi.org/10.1016/j.giq.2017.09.007)
18. [Shen, C.; Pena-Mora, F. Blockchain for Cities—A Systematic Literature Review. IEEE Access 2018, 6, 76787–76819. [CrossRef]](http://dx.doi.org/10.1109/ACCESS.2018.2880744)
19. Batty, M.J.; Axhausen, K.W.; Giannotti, F.; Pozdnoukhov, A.; Bazzani, A.; Wachowicz, M.; Ouzounis, G.K.; Portugali, Y. Smart
[cities of the future. Eur. Phys. J. Spec. Top. 2012, 214, 481–518. [CrossRef]](http://dx.doi.org/10.1140/epjst/e2012-01703-3)
20. [Law, K.H.; Lynch, J.P. Smart City: Technologies and Challenges. IT Prof. 2019, 21, 46–51. [CrossRef]](http://dx.doi.org/10.1109/MITP.2019.2935405)
21. Silva, B.N.; Khan, M.; Jung, C.; Seo, J.; Muhammad, D.; Han, J.; Yoon, Y.; Han, K. Urban Planning and Smart City Decision
[Management Empowered by Real-Time Data Processing Using Big Data Analytics. Sensors 2018, 18, 2994. [CrossRef]](http://dx.doi.org/10.3390/s18092994)
22. [Perätalo, S.; Ahokangas, P. Toward Smart City Business Models. J. Bus. Model. 2018, 6, 65–70. [CrossRef]](http://dx.doi.org/10.5278/ojs.jbm.v6i2.2466)
23. [Smart Cities Initiatives Around the World are Improving Citizens’ Lives. 2020. Available online: https://www.dronedek.com/](https://www.dronedek.com/news/smart-cities-initiatives-around-the-world-are-improving-citizens-lives/)
[news/smart-cities-initiatives-around-the-world-are-improving-citizens-lives/ (accessed on 4 January 2022).](https://www.dronedek.com/news/smart-cities-initiatives-around-the-world-are-improving-citizens-lives/)
24. Tian, S.; Yang, W.; Grange, J.M.L.; Wang, P.; Huang, W.; Ye, Z. Smart healthcare: Making medical care more intelligent. Glob.
_[Health J. 2019, 3, 62–65. [CrossRef]](http://dx.doi.org/10.1016/j.glohj.2019.07.001)_
25. Zhu, H.; Wu, C.K.; Koo, C.H.; Tsang, Y.T.; Liu, Y.; Chi, H.R.; Tsang, K.F. Smart Healthcare in the Era of Internet-of-Things. IEEE
_[Consum. Electron. Mag. 2019, 8, 26–30. [CrossRef]](http://dx.doi.org/10.1109/MCE.2019.2923929)_
26. [Bodhani, A. Smart transport. Eng. Technol. 2012, 7, 70–73. [CrossRef]](http://dx.doi.org/10.1049/et.2012.0611)
27. Elsagheer Mohamed, S.A.; AlShalfan, K.A. Intelligent Traffic Management System Based on the Internet of Vehicles (IoV). J. Adv.
_[Transp. 2021, 2021, 4037533. [CrossRef]](http://dx.doi.org/10.1155/2021/4037533)_
28. Jimenez, J.A. Smart Transportation Systems. In Smart Cities Applications, Technologies, Standards, and Driving Factors; Springer:
Cham, Switzerland, 2017; pp. 123–133. [[CrossRef]](http://dx.doi.org/10.1007/978-3-319-59381-4_8)
29. Tao, F.; Qi, Q.; Liu, A.; Kusiak, A. Data-driven smart manufacturing. _J. Manuf. Syst._ 2018, 48, 157–169. [[CrossRef]](http://dx.doi.org/10.1016/j.jmsy.2018.01.006)
30. Wang, J.; Ma, Y.; Zhang, L.; Gao, R.X.; Wu, D. Deep learning for smart manufacturing: Methods and applications. J. Manuf. Syst.
**[2018, 48, 144–156. [CrossRef]](http://dx.doi.org/10.1016/j.jmsy.2018.01.003)**
31. Huang, Z.; Shen, Y.; Li, J.; Fey, M.; Brecher, C. A Survey on AI-Driven Digital Twins in Industry 4.0: Smart Manufacturing and
[Advanced Robotics. Sensors 2021, 21, 6340. [CrossRef] [PubMed]](http://dx.doi.org/10.3390/s21196340)
32. Al Dakheel, J.; Del Pero, C.; Aste, N.; Leonforte, F. Smart buildings features and key performance indicators: A review. Sustain.
_[Cities Soc. 2020, 61, 102328. [CrossRef]](http://dx.doi.org/10.1016/j.scs.2020.102328)_
33. Lund, H.; Østergaard, P.A.; Connolly, D.; Mathiesen, B.V. Smart energy and smart energy systems. Energy 2017, 137, 556–565.
[[CrossRef]](http://dx.doi.org/10.1016/j.energy.2017.05.123)
34. Farooq, M.S.; Riaz, S.; Abid, A.; Abid, K.; Naeem, M.A. A Survey on the Role of IoT in Agriculture for the Implementation of
[Smart Farming. IEEE Access 2019, 7, 156237–156271. [CrossRef]](http://dx.doi.org/10.1109/ACCESS.2019.2949703)
35. Zamora-Izquierdo, M.A.; Santa, J.; Martínez, J.A.; Martínez, V.; Skarmeta, A.F. Smart farming IoT platform based on edge and
[cloud computing. Biosyst. Eng. 2019, 177, 4–17. [CrossRef]](http://dx.doi.org/10.1016/j.biosystemseng.2018.10.014)
36. Panori, A.; Kakderi, C.; Komninos, N.; Fellnhofer, K.; Reid, A.; Mora, L. Smart systems of innovation for smart places: Challenges
[in deploying digital platforms for co-creation and data-intelligence. Land Use Policy 2021, 111, 104631. [CrossRef]](http://dx.doi.org/10.1016/j.landusepol.2020.104631)
37. Ai, Y.; Peng, M.; Zhang, K. Edge computing technologies for Internet of Things: A primer. Digit. Commun. Netw. 2018, 4, 77–86.
[[CrossRef]](http://dx.doi.org/10.1016/j.dcan.2017.07.001)
38. [Sun, X.; Ansari, N. EdgeIoT: Mobile Edge Computing for the Internet of Things. IEEE Commun. Mag. 2016, 54, 22–29. [CrossRef]](http://dx.doi.org/10.1109/MCOM.2016.1600492CM)
39. Alnoman, A.; Sharma, S.K.; Ejaz, W.; Anpalagan, A. Emerging Edge Computing Technologies for Distributed IoT Systems. IEEE
_[Netw. 2019, 33, 140–147. [CrossRef]](http://dx.doi.org/10.1109/MNET.2019.1800543)_
40. Deng, S.; Zhao, H.; Fang, W.; Yin, J.; Dustdar, S.; Zomaya, A.Y. Edge Intelligence: The Confluence of Edge Computing and
[Artificial Intelligence. IEEE IoT J. 2020, 7, 7457–7469. [CrossRef]](http://dx.doi.org/10.1109/JIOT.2020.2984887)
41. Sun, W.; Liu, J.; Yue, Y. AI-Enhanced Offloading in Edge Computing: When Machine Learning Meets Industrial IoT. IEEE Netw.
**[2019, 33, 68–74. [CrossRef]](http://dx.doi.org/10.1109/MNET.001.1800510)**
42. Jung, S.; Kim, Y.; Hwang, E. Real-time car tracking system based on surveillance videos. J. Image Video Proc. 2018, 2018, 1–13.
[[CrossRef]](http://dx.doi.org/10.1186/s13640-018-0374-7)
43. Han, X.; Zhang, Z.; Ding, N.; Gu, Y.; Liu, X.; Huo, Y.; Qiu, J.; Zhang, L.; Han, W.; Huang, M.; et al. Pre-Trained Models: Past,
[Present and Future. AI Open 2021, 2, 225–250. [CrossRef]](http://dx.doi.org/10.1016/j.aiopen.2021.08.002)
44. González García, C.; Meana-Llorián, D.; Pelayo G.; Bustelo, B.C.; Cueva Lovelle, J.M.; Garcia-Fernandez, N. Midgar: Detection of
people through computer vision in the Internet of Things scenarios to improve the security in Smart Cities, Smart Towns, and
[Smart Homes. Future Gener. Comput. Syst. 2017, 76, 301–313. [CrossRef]](http://dx.doi.org/10.1016/j.future.2016.12.033)
45. Ho, G.T.S.; Tsang, Y.P.; Wu, C.H.; Wong, W.H.; Choy, K.L. A Computer Vision-Based Roadside Occupation Surveillance System
[for Intelligent Transport in Smart Cities. Sensors 2019, 19, 1796. [CrossRef]](http://dx.doi.org/10.3390/s19081796)
46. Mittal, V.; Bhushan, B. Accelerated Computer Vision Inference with AI on the Edge. In Proceedings of the 2020 IEEE 9th
International Conference on Communication Systems and Network Technologies (CSNT), Gwalior, India, 10–12 April 2020;
[pp. 55–60. [CrossRef]](http://dx.doi.org/10.1109/CSNT48778.2020.9115770)
47. Ullah, Z.; Al-Turjman, F.; Mostarda, L.; Gagliardi, R. Applications of Artificial Intelligence and Machine learning in smart cities.
_[Comput. Commun. 2020, 154, 313–323. [CrossRef]](http://dx.doi.org/10.1016/j.comcom.2020.02.069)_
48. Shi, Y.; Yang, K.; Jiang, T.; Zhang, J.; Letaief, K.B. Communication-Efficient Edge AI: Algorithms and Systems. IEEE Commun.
_[Surv. Tutor. 2020, 22, 2167–2191. [CrossRef]](http://dx.doi.org/10.1109/COMST.2020.3007787)_
49. Zhou, Z.; Chen, X.; Li, E.; Zeng, L.; Luo, K.; Zhang, J. Edge Intelligence: Paving the Last Mile of Artificial Intelligence With Edge
[Computing. Proc. IEEE 2019, 107, 1738–1762. [CrossRef]](http://dx.doi.org/10.1109/JPROC.2019.2918951)
50. Hsu, K.C.; Tseng, H.W. Accelerating applications using edge tensor processing units. In SC ’21: Proceedings of the International
_Conference for High Performance Computing, Networking, Storage and Analysis; Association for Computing Machinery: New York,_
[NY, USA, 2021; pp. 1–14. [CrossRef]](http://dx.doi.org/10.1145/3458817.3476177)
51. Edge TPU–Run Inference at the Edge|Google Cloud. 2021. Available online: [https://cloud.google.com/edge-tpu](https://cloud.google.com/edge-tpu)
(accessed on 5 January 2022).
52. NVIDIA Jetson Nano For Edge AI Applications and Education. 2022. Available online: [https://www.nvidia.com/en-us/](https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-nano)
[autonomous-machines/embedded-systems/jetson-nano (accessed on 5 January 2022).](https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-nano)
53. [NVIDIA Jetson TX2: High Performance AI at the Edge. 2022. Available online: https://www.nvidia.com/en-us/autonomous-](https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-tx2)
[machines/embedded-systems/jetson-tx2 (accessed on 5 January 2022).](https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-tx2)
54. Intel[®] Movidius™ [Vision Processing Units (VPUs). 2022. Available online: https://www.intel.com/content/www/us/en/](https://www.intel.com/content/www/us/en/products/details/processors/movidius-vpu.html)
[products/details/processors/movidius-vpu.html (accessed on 5 January 2022).](https://www.intel.com/content/www/us/en/products/details/processors/movidius-vpu.html)
55. [Voigt, P.; von dem Bussche, A. The EU General Data Protection Regulation (GDPR); Springer: Cham, Switzerland, 2017. [CrossRef]](http://dx.doi.org/10.1007/978-3-319-57959-7)
56. Annas, G.J. HIPAA Regulations: A New Era of Medical-Record Privacy? Sch. Commons Boston Univ. Sch. Law 2003, 348, 1486.
[[CrossRef] [PubMed]](http://dx.doi.org/10.1056/NEJMlim035027)
57. Yang, Q.; Liu, Y.; Chen, T.; Tong, Y. Federated Machine Learning: Concept and Applications. ACM Trans. Intell. Syst. Technol.
**[2019, 10, 1–19. [CrossRef]](http://dx.doi.org/10.1145/3298981)**
58. Li, T.; Sahu, A.K.; Talwalkar, A.; Smith, V. Federated Learning: Challenges, Methods, and Future Directions. IEEE Signal Process.
_[Mag. 2020, 37, 50–60. [CrossRef]](http://dx.doi.org/10.1109/MSP.2020.2975749)_
59. Nilsson, A.; Smith, S.; Ulm, G.; Gustavsson, E.; Jirstrand, M. A Performance Evaluation of Federated Learning Algorithms. In
_DIDL ’18: Proceedings of the Second Workshop on Distributed Infrastructures for Deep Learning; Association for Computing Machinery:_
[New York, NY, USA, 2018; pp. 1–8. [CrossRef]](http://dx.doi.org/10.1145/3286490.3286559)
60. Sarma, K.V.; Harmon, S.; Sanford, T.; Roth, H.R.; Xu, Z.; Tetreault, J.; Xu, D.; Flores, M.G.; Raman, A.G.; Kulkarni, R.; et al.
Federated learning improves site performance in multicenter deep learning without data sharing. J. Am. Med. Inform. Assoc. 2021,
_[28, 1259–1264. [CrossRef]](http://dx.doi.org/10.1093/jamia/ocaa341)_
61. Rauchs, M.; Glidden, A.; Gordon, B.; Pieters, G.C.; Recanatini, M.; Rostand, F.; Vagneur, K.; Zhang, B.Z. Distributed ledger technology systems: A conceptual framework. 2018. Available online: [https://ssrn.com/abstract=3230013](https://ssrn.com/abstract=3230013) (accessed on 15 January 2022). [[CrossRef]](http://dx.doi.org/10.2139/ssrn.3230013)
62. [Sanders, A. The subprime crisis and its role in the financial crisis. J. Hous. Econ. 2008, 17, 254–261. [CrossRef]](http://dx.doi.org/10.1016/j.jhe.2008.10.001)
63. Nakamoto, S. Bitcoin: A peer-to-peer electronic cash system. Decentralized Business Review 2008; p. 21260. Available online:
[https://www.debr.io/article/21260.pdf](https://www.debr.io/article/21260.pdf) (accessed on 15 January 2022).
64. Xie, J.; Tang, H.; Huang, T.; Yu, F.R.; Xie, R.; Liu, J.; Liu, Y. A survey of blockchain technology applied to smart cities: Research
[issues and challenges. IEEE Commun. Surv. Tutor. 2019, 21, 2794–2830. [CrossRef]](http://dx.doi.org/10.1109/COMST.2019.2899617)
65. Upadhyay, A.; Mukhuty, S.; Kumar, V.; Kazancoglu, Y. Blockchain technology and the circular economy: Implications for
[sustainability and social responsibility. J. Clean. Prod. 2021, 293, 126130. [CrossRef]](http://dx.doi.org/10.1016/j.jclepro.2021.126130)
66. Teisserenc, B.; Sepasgozar, S. Adoption of Blockchain Technology through Digital Twins in the Construction Industry 4.0: A
[PESTELS Approach. Buildings 2021, 11, 670. [CrossRef]](http://dx.doi.org/10.3390/buildings11120670)
67. Xiong, Z.; Zhang, Y.; Niyato, D.; Wang, P.; Han, Z. When Mobile Blockchain Meets Edge Computing. IEEE Commun. Mag. 2018,
_[56, 33–39. [CrossRef]](http://dx.doi.org/10.1109/MCOM.2018.1701095)_
68. Guo, S.; Hu, X.; Guo, S.; Qiu, X.; Qi, F. Blockchain Meets Edge Computing: A Distributed and Trusted Authentication System.
_[IEEE Trans. Ind. Inform. 2019, 16, 1972–1983. [CrossRef]](http://dx.doi.org/10.1109/TII.2019.2938001)_
69. Benevolo, C.; Dameri, R.P.; D’Auria, B. Smart Mobility in Smart City. In Empowering Organizations; Springer: Cham, Switzerland,
2015; pp. 13–28. [[CrossRef]](http://dx.doi.org/10.1007/978-3-319-23784-8_2)
70. Barthélemy, J.; Verstaevel, N.; Forehead, H.; Perez, P. Edge-Computing Video Analytics for Real-Time Traffic Monitoring in a
[Smart City. Sensors 2019, 19, 2048. [CrossRef] [PubMed]](http://dx.doi.org/10.3390/s19092048)
71. de Souza, A.M.; Brennand, C.A.; Yokoyama, R.S.; Donato, E.A.; Madeira, E.R.M.; Villas, L.A. Traffic management systems: A
[classification, review, challenges, and future perspectives. Int. J. Distrib. Sens. Netw. 2017, 13, 1550147716683612. [CrossRef]](http://dx.doi.org/10.1177/1550147716683612)
72. Redmon, J.; Farhadi, A. YOLOv3: An Incremental Improvement. arXiv 2018, arXiv:1804.02767.
73. Bewley, A.; Ge, Z.; Ott, L.; Ramos, F.; Upcroft, B. Simple online and realtime tracking. In Proceedings of the 2016 IEEE
[International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 3464–3468. [CrossRef]](http://dx.doi.org/10.1109/ICIP.2016.7533003)
74. Dinh, D.L.; Nguyen, H.N.; Thai, H.T.; Le, K.H. Towards AI-Based Traffic Counting System with Edge Computing. J. Adv. Transp.
**[2021, 2021, 1–15. [CrossRef]](http://dx.doi.org/10.1155/2021/5551976)**
75. Kumar, T.; Kushwaha, D.S. An Efficient Approach for Detection and Speed Estimation of Moving Vehicles. Procedia Comput. Sci.
**[2016, 89, 726–731. [CrossRef]](http://dx.doi.org/10.1016/j.procs.2016.06.045)**
76. Song, H.; Liang, H.; Li, H.; Dai, Z.; Yun, X. Vision-based vehicle detection and counting system using deep learning in highway
[scenes. Eur. Transp. Res. Rev. 2019, 11, 51. [CrossRef]](http://dx.doi.org/10.1186/s12544-019-0390-4)
77. Rublee, E.; Rabaud, V.; Konolige, K.; Bradski, G. ORB: An efficient alternative to SIFT or SURF. In Proceedings of the 2011
International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 2564–2571.
78. Letnik, T.; Mencinger, M.; Peruš, I. Flexible Assignment of Loading Bays for Efficient Vehicle Routing in Urban Last Mile Delivery.
_[Sustainability 2020, 12, 7500. [CrossRef]](http://dx.doi.org/10.3390/su12187500)_
79. Liao, R. Smart Mobility: Challenges and Trends. In Toward Sustainable and Economic Smart Mobility; World Scientific (Europe):
London, UK, 2019; pp. 1–11. [[CrossRef]](http://dx.doi.org/10.1142/9781786347862_0001)
80. Bagloee, S.A.; Heshmati, M.; Dia, H.; Ghaderi, H.; Pettit, C.; Asadi, M. Blockchain: The operating system of smart cities. Cities
**[2021, 112, 103104. [CrossRef]](http://dx.doi.org/10.1016/j.cities.2021.103104)**
81. van der Heijden, R.W.; Engelmann, F.; Mödinger, D.; Schönig, F.; Kargl, F. Blackchain: Scalability for resource-constrained
accountable vehicle-to-x communication. In SERIAL ’17: Proceedings of the 1st Workshop on Scalable and Resilient Infrastructures for
_[Distributed Ledgers; Association for Computing Machinery: New York, NY, USA, 2017; pp. 1–5. [CrossRef]](http://dx.doi.org/10.1145/3152824.3152828)_
82. Sharma, P.K.; Moon, S.Y.; Park, J.H. Block-VN: A distributed blockchain based vehicular network architecture in smart city. J. Inf.
_Process. Syst. 2017, 13, 184–195._
83. Yang, Z.; Zheng, K.; Yang, K.; Leung, V.C. A blockchain-based reputation system for data credibility assessment in vehicular
networks. In Proceedings of the 2017 IEEE 28th Annual International Symposium on Personal, Indoor, and Mobile Radio
Communications (PIMRC), Montreal, QC, Canada, 8–13 October 2017; pp. 1–5.
84. Lei, A.; Cruickshank, H.; Cao, Y.; Asuquo, P.; Ogah, C.P.A.; Sun, Z. Blockchain-Based Dynamic Key Management for Heterogeneous Intelligent Transportation Systems. IEEE Internet Things J. 2017, 4, 1832–1843. [[CrossRef]](http://dx.doi.org/10.1109/JIOT.2017.2740569)
85. Ma, Z.; Zhang, J.; Guo, Y.; Liu, Y.; Liu, X.; He, W. An Efficient Decentralized Key Management Mechanism for VANET With
[Blockchain. IEEE Trans. Veh. Technol. 2020, 69, 5836–5849. [CrossRef]](http://dx.doi.org/10.1109/TVT.2020.2972923)
86. Yang, Z.; Yang, K.; Lei, L.; Zheng, K.; Leung, V.C.M. Blockchain-Based Decentralized Trust Management in Vehicular Networks.
_[IEEE Internet Things J. 2019, 6, 1495–1505. [CrossRef]](http://dx.doi.org/10.1109/JIOT.2018.2836144)_
87. Arora, S.K.; Kumar, G.; Kim, T.H. Blockchain Based Trust Model Using Tendermint in Vehicular Adhoc Networks. Appl. Sci.
**[2021, 11, 1998. [CrossRef]](http://dx.doi.org/10.3390/app11051998)**
88. Luo, B.; Li, X.; Weng, J.; Guo, J.; Ma, J. Blockchain Enabled Trust-Based Location Privacy Protection Scheme in VANET. IEEE
_[Trans. Veh. Technol. 2020, 69, 2034–2048. [CrossRef]](http://dx.doi.org/10.1109/TVT.2019.2957744)_
89. Huang, Z.; Li, Z.; Lai, C.S.; Zhao, Z.; Wu, X.; Li, X.; Tong, N.; Lai, L.L. A Novel Power Market Mechanism Based on Blockchain
[for Electric Vehicle Charging Stations. Electronics 2021, 10, 307. [CrossRef]](http://dx.doi.org/10.3390/electronics10030307)
90. Ferreira, J.C.; Ferreira da Silva, C.; Martins, J.P. Roaming Service for Electric Vehicle Charging Using Blockchain-Based Digital
[Identity. Energies 2021, 14, 1686. [CrossRef]](http://dx.doi.org/10.3390/en14061686)
91. Gorenflo, C.; Golab, L.; Keshav, S. Mitigating Trust Issues in Electric Vehicle Charging using a Blockchain. In e-Energy ’19:
_Proceedings of the Tenth ACM International Conference on Future Energy Systems; Association for Computing Machinery: New York,_
[NY, USA, 2019; pp. 160–164. [CrossRef]](http://dx.doi.org/10.1145/3307772.3328283)
92. Löffler, M.; Mokwa, C.; Münstermann, B.; Wojciak, J. Shifting gears: Insurers adjust for connected-car ecosystems. In Digital
_Mckinsey; Mckinsey & Company: Austin, TX, USA, 2016; pp. 1–13._
93. Yuan, Y.; Wang, F.Y. Towards blockchain-based intelligent transportation systems. In Proceedings of the 2016 IEEE 19th
International Conference on Intelligent Transportation Systems (ITSC), Rio de Janeiro, Brazil, 1–4 November 2016; pp. 2663–2668.
[[CrossRef]](http://dx.doi.org/10.1109/ITSC.2016.7795984)
94. Shah, A.S.; Nasir, H.; Fayaz, M.; Lajis, A.; Shah, A. A Review on Energy Consumption Optimization Techniques in IoT Based
[Smart Building Environments. Information 2019, 10, 108. [CrossRef]](http://dx.doi.org/10.3390/info10030108)
95. Zhang, X.; Manogaran, G.; Muthu, B. IoT enabled integrated system for green energy into smart cities. Sustain. Energy Technol.
_[Assess. 2021, 46, 101208. [CrossRef]](http://dx.doi.org/10.1016/j.seta.2021.101208)_
96. Abdel-Basset, M.; Hawash, H.; Chakrabortty, R.K.; Ryan, M. Energy-Net: A Deep Learning Approach for Smart Energy
[Management in IoT-Based Smart Cities. IEEE Internet Things J. 2021, 8, 12422–12435. [CrossRef]](http://dx.doi.org/10.1109/JIOT.2021.3063677)
97. Liu, Y.; Yang, C.; Jiang, L.; Xie, S.; Zhang, Y. Intelligent Edge Computing for IoT-Based Energy Management in Smart Cities. IEEE
_[Netw. 2019, 33, 111–117. [CrossRef]](http://dx.doi.org/10.1109/MNET.2019.1800254)_
98. Pop, C.; Cioara, T.; Antal, M.; Anghel, I.; Salomie, I.; Bertoncini, M. Blockchain Based Decentralized Management of Demand
[Response Programs in Smart Energy Grids. Sensors 2018, 18, 162. [CrossRef] [PubMed]](http://dx.doi.org/10.3390/s18010162)
99. Pee, S.J.; Kang, E.S.; Song, J.G.; Jang, J.W. Blockchain based smart energy trading platform using smart contract. In Proceedings of
the 2019 International Conference on Artificial Intelligence in Information and Communication (ICAIIC), Okinawa, Japan, 11–13
[February 2019; pp. 322–325. [CrossRef]](http://dx.doi.org/10.1109/ICAIIC.2019.8668978)
100. Hua, W.; Sun, H. A Blockchain-Based Peer-to-Peer Trading Scheme Coupling Energy and Carbon Markets. In Proceedings of the
2019 International Conference on Smart Energy Systems and Technologies (SEST), Porto, Portugal, 9–11 September 2019; pp. 1–6.
[[CrossRef]](http://dx.doi.org/10.1109/SEST.2019.8849111)
101. Andoni, M.; Robu, V.; Flynn, D.; Abram, S.; Geach, D.; Jenkins, D.; McCallum, P.; Peacock, A. Blockchain technology in the energy
[sector: A systematic review of challenges and opportunities. Renew. Sustain. Energy Rev. 2019, 100, 143–174. [CrossRef]](http://dx.doi.org/10.1016/j.rser.2018.10.014)
102. Hasankhani, A.; Mehdi Hakimi, S.; Bisheh-Niasar, M.; Shafie-khah, M.; Asadolahi, H. Blockchain technology in the future smart
[grids: A comprehensive review and frameworks. Int. J. Electr. Power Energy Syst. 2021, 129, 106811. [CrossRef]](http://dx.doi.org/10.1016/j.ijepes.2021.106811)
103. Casino, F.; Dasaklis, T.K.; Patsakis, C. A systematic literature review of blockchain-based applications: Current status, classification
[and open issues. Telemat. Inform. 2019, 36, 55–81. [CrossRef]](http://dx.doi.org/10.1016/j.tele.2018.11.006)
104. Yang, X.; Wang, G.; He, H.; Lu, J.; Zhang, Y. Automated Demand Response Framework in ELNs: Decentralized Scheduling and
[Smart Contract. IEEE Trans. Syst. Man, Cybern. Syst. 2019, 50, 58–72. [CrossRef]](http://dx.doi.org/10.1109/TSMC.2019.2903485)
105. Kumar, N.M. Blockchain: Enabling wide range of services in distributed energy system. Beni-Suef Univ. J. Basic Appl. Sci. 2018,
_[7, 701–704. [CrossRef]](http://dx.doi.org/10.1016/j.bjbas.2018.08.003)_
106. Morstyn, T.; Farrell, N.; Darby, S.J.; McCulloch, M.D. Using peer-to-peer energy-trading platforms to incentivize prosumers to
[form federated power plants - Nature Energy. Nat. Energy 2018, 3, 94–101. [CrossRef]](http://dx.doi.org/10.1038/s41560-017-0075-y)
107. Wang, L.; Liu, J.; Yuan, R.; Wu, J.; Zhang, D.; Zhang, Y.; Li, M. Adaptive bidding strategy for real-time energy management in
[multi-energy market enhanced by blockchain. Appl. Energy 2020, 279, 115866. [CrossRef]](http://dx.doi.org/10.1016/j.apenergy.2020.115866)
108. Mylrea, M.; Gourisetti, S.N.G. Blockchain for smart grid resilience: Exchanging distributed energy at speed, scale and security. In
Proceedings of the 2017 Resilience Week (RWS), Wilmington, DE, USA, 18–22 September 2017; pp. 18–23.
109. Mengelkamp, E.; Gärttner, J.; Rock, K.; Kessler, S.; Orsini, L.; Weinhardt, C. Designing microgrid energy markets: A case study:
[The Brooklyn Microgrid. Appl. Energy 2018, 210, 870–880. [CrossRef]](http://dx.doi.org/10.1016/j.apenergy.2017.06.054)
110. Xie, P.; Yan, W.; Xuan, P.; Zhu, J.; Wu, Y.; Li, X.; Zou, J. Conceptual Framework of Blockchain-based Electricity Trading for
Neighborhood Renewable Energy. In Proceedings of the 2018 2nd IEEE Conference on Energy Internet and Energy System
[Integration (EI2), Beijing, China, 20–22 October 2018; pp. 1–5. [CrossRef]](http://dx.doi.org/10.1109/EI2.2018.8581887)
111. Kang, E.S.; Pee, S.J.; Song, J.G.; Jang, J.W. A Blockchain-Based Energy Trading Platform for Smart Homes in a Microgrid. In
Proceedings of the 2018 3rd International Conference on Computer and Communication Systems (ICCCS), Nagoya, Japan,
[27–30 April 2018; pp. 472–476. [CrossRef]](http://dx.doi.org/10.1109/CCOMS.2018.8463317)
112. Lin, J.; Pipattanasomporn, M.; Rahman, S. Comparative analysis of auction mechanisms and bidding strategies for P2P solar
[transactive energy markets. Appl. Energy 2019, 255, 113687. [CrossRef]](http://dx.doi.org/10.1016/j.apenergy.2019.113687)
113. Ali, F.S.; Bouachir, O.; Ozkasap, O.; Aloqaily, M. SynergyChain: Blockchain-Assisted Adaptive Cyber-Physical P2P Energy
[Trading. IEEE Trans. Ind. Inform. 2021, 17, 5769–5778. [CrossRef]](http://dx.doi.org/10.1109/TII.2020.3046744)
114. Bouachir, O.; Aloqaily, M.; Ozkasap, O.; Ali, F. FederatedGrids: Federated Learning and Blockchain-Assisted P2P Energy Sharing.
_[IEEE Trans. Green Commun. Netw. 2022, 6, 424–436. [CrossRef]](http://dx.doi.org/10.1109/TGCN.2022.3140978)_
115. Park, L.W.; Lee, S.; Chang, H. A Sustainable Home Energy Prosumer-Chain Methodology with Energy Tags over the Blockchain.
_[Sustainability 2018, 10, 658. [CrossRef]](http://dx.doi.org/10.3390/su10030658)_
116. Aggarwal, S.; Chaudhary, R.; Aujla, G.S.; Kumar, N.; Choo, K.K.R.; Zomaya, A.Y. Blockchain for smart communities: Applications,
[challenges and opportunities. J. Netw. Comput. Appl. 2019, 144, 13–48. [CrossRef]](http://dx.doi.org/10.1016/j.jnca.2019.06.018)
117. Zhu, S.; Song, M.; Lim, M.K.; Wang, J.; Zhao, J. The development of energy blockchain and its implications for China’s energy
[sector. Resour. Policy 2020, 66, 101595. [CrossRef]](http://dx.doi.org/10.1016/j.resourpol.2020.101595)
118. Mihaylov, M.; Razo-Zapata, I.; Nowé, A. NRGcoin—A Blockchain-based Reward Mechanism for Both Production and Consumption of Renewable Energy. In Transforming Climate Finance and Green Investment with Blockchains; Academic Press: Cambridge, MA,
[USA, 2018; pp. 111–131. [CrossRef]](http://dx.doi.org/10.1016/B978-0-12-814447-3.00009-4)
119. Zahed Benisi, N.; Aminian, M.; Javadi, B. Blockchain-based decentralized storage networks: A survey. J. Netw. Comput. Appl.
**[2020, 162, 102656. [CrossRef]](http://dx.doi.org/10.1016/j.jnca.2020.102656)**
120. Stanciu, A. Blockchain Based Distributed Control System for Edge Computing. In Proceedings of the 2017 21st International
[Conference on Control Systems and Computer Science (CSCS), Bucharest, Romania, 29–31 May 2017; pp. 667–671. [CrossRef]](http://dx.doi.org/10.1109/CSCS.2017.102)
121. Casado-Vara, R.; de la Prieta, F.; Prieto, J.; Corchado, J.M. Blockchain framework for IoT data quality via edge computing.
In Proceedings of the 1st Workshop on Blockchain-enabled Networked Sensor Systems, Shenzhen, China, 4 November 2018;
pp. 19–24.
122. Shafagh, H.; Burkhalter, L.; Hithnawi, A.; Duquennoy, S. Towards blockchain-based auditable storage and sharing of IoT data. In
Proceedings of the 2017 on Cloud Computing Security Workshop, Dallas, TX, USA, 3 November 2017; pp. 45–50.
123. Zhang, X.; Li, R.; Cui, B. A security architecture of VANET based on blockchain and mobile edge computing. In Proceedings of the
2018 1st IEEE International Conference on Hot Information-Centric Networking (HotICN), Shenzhen, China, 15–17 August 2018;
pp. 258–259.
124. Mendki, P. Blockchain enabled iot edge computing: Addressing privacy, security and other challenges. In Proceedings of the
2020 The 2nd International Conference on Blockchain Technology, Hilo, HI, USA, 12–14 March 2020; pp. 63–67.
125. Tuli, S.; Mahmud, R.; Tuli, S.; Buyya, R. Fogbus: A blockchain-based lightweight framework for edge and fog computing. J. Syst.
_[Softw. 2019, 154, 22–36. [CrossRef]](http://dx.doi.org/10.1016/j.jss.2019.04.050)_
126. Zhaofeng, M.; Xiaochang, W.; Jain, D.K.; Khan, H.; Hongmin, G.; Zhen, W. A blockchain-based trusted data management scheme
[in edge computing. IEEE Trans. Ind. Inform. 2019, 16, 2013–2021. [CrossRef]](http://dx.doi.org/10.1109/TII.2019.2933482)
127. Jiang, X.; Yu, F.R.; Song, T.; Leung, V.C. Edge Intelligence for Object Detection in Blockchain-Based Internet of Vehicles:
[Convergence of Symbolic and Connectionist AI. IEEE Wirel. Commun. 2021, 28, 49–55. [CrossRef]](http://dx.doi.org/10.1109/MWC.201.2000462)
128. Lin, X.; Li, J.; Wu, J.; Liang, H.; Yang, W. Making Knowledge Tradable in Edge-AI Enabled IoT: A Consortium Blockchain-Based
[Efficient and Incentive Approach. IEEE Trans. Ind. Inform. 2019, 15, 6367–6378. [CrossRef]](http://dx.doi.org/10.1109/TII.2019.2917307)
129. Rahman, M.A.; Rashid, M.M.; Hossain, M.S.; Hassanain, E.; Alhamid, M.F.; Guizani, M. Blockchain and IoT-Based Cognitive
[Edge Framework for Sharing Economy Services in a Smart City. IEEE Access 2019, 7, 18611–18621. [CrossRef]](http://dx.doi.org/10.1109/ACCESS.2019.2896065)
130. Qiu, C.; Yao, H.; Wang, X.; Zhang, N.; Yu, F.R.; Niyato, D. AI-Chain: Blockchain Energized Edge Intelligence for Beyond 5G
[Networks. IEEE Netw. 2020, 34, 62–69. [CrossRef]](http://dx.doi.org/10.1109/MNET.021.1900617)
131. Du, Y.; Wang, Z.; Leung, V.C.M. Blockchain-Enabled Edge Intelligence for IoT: Background, Emerging Trends and Open Issues.
_[Future Internet 2021, 13, 48. [CrossRef]](http://dx.doi.org/10.3390/fi13020048)_
132. Lim, W.Y.B.; Huang, J.; Xiong, Z.; Kang, J.; Niyato, D.; Hua, X.S.; Leung, C.; Miao, C. Towards federated learning in uav-enabled
internet of vehicles: A multi-dimensional contract-matching approach. IEEE Trans. Intell. Transp. Syst. 2021, 22, 5140–5154.
[[CrossRef]](http://dx.doi.org/10.1109/TITS.2021.3056341)
133. Pokhrel, S.R.; Choi, J. Improving TCP performance over WiFi for internet of vehicles: A federated learning approach. IEEE Trans.
_[Veh. Technol. 2020, 69, 6798–6802. [CrossRef]](http://dx.doi.org/10.1109/TVT.2020.2984369)_
134. Manias, D.M.; Shami, A. Making a Case for Federated Learning in the Internet of Vehicles and Intelligent Transportation Systems.
_[IEEE Netw. 2021, 35, 88–94. [CrossRef]](http://dx.doi.org/10.1109/MNET.011.2000552)_
135. Peng, Y.; Chen, Z.; Chen, Z.; Ou, W.; Han, W.; Ma, J. Bflp: An adaptive federated learning framework for internet of vehicles.
_[Mob. Inf. Syst. 2021, 2021, 6633332. [CrossRef]](http://dx.doi.org/10.1155/2021/6633332)_
136. Chai, H.; Leng, S.; Chen, Y.; Zhang, K. A hierarchical blockchain-enabled federated learning algorithm for knowledge sharing in
[internet of vehicles. IEEE Trans. Intell. Transp. Syst. 2020, 22, 3975–3986. [CrossRef]](http://dx.doi.org/10.1109/TITS.2020.3002712)
137. Pokhrel, S.R.; Choi, J. Federated learning with blockchain for autonomous vehicles: Analysis and design challenges. IEEE Trans.
_[Commun. 2020, 68, 4734–4746. [CrossRef]](http://dx.doi.org/10.1109/TCOMM.2020.2990686)_
138. Ayaz, F.; Sheng, Z.; Tian, D.; Guan, Y.L. A Blockchain Based Federated Learning for Message Dissemination in Vehicular
[Networks. IEEE Trans. Veh. Technol. 2022, 71, 1927–1940. [CrossRef]](http://dx.doi.org/10.1109/TVT.2021.3132226)
139. Doku, R.; Rawat, D.B. IFLBC: On the Edge Intelligence Using Federated Learning Blockchain Network. In Proceedings of
the 2020 IEEE 6th Intl Conference on Big Data Security on Cloud (BigDataSecurity), IEEE International Conference on High
Performance and Smart Computing, (HPSC) and IEEE International Conference on Intelligent Data and Security (IDS), Baltimore,
MD, USA, 25–27 May 2020; pp. 221–226. [[CrossRef]](http://dx.doi.org/10.1109/BigDataSecurity-HPSC-IDS49724.2020.00047)
140. Morán, A.; Canals, V.; Galan-Prado, F.; Frasser, C.F.; Radhakrishnan, D.; Safavi, S.; Rosselló, J.L. Hardware-Optimized Reservoir
[Computing System for Edge Intelligence Applications. Cogn. Comput. 2021, 1–9. [CrossRef]](http://dx.doi.org/10.1007/s12559-020-09798-2)
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Economics",
"source": "s2-fos-model"
},
{
"category": "Business",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/011d61920ae95bbe63193f2e73e7eab7bc116206
|
[
"Economics"
] | 0.865556
|
ANALYSIS OF BITCOIN MARKET EFFICIENCY BY USING MACHINE LEARNING
|
011d61920ae95bbe63193f2e73e7eab7bc116206
|
CBU International Conference Proceedings
|
[
{
"authorId": "119596549",
"name": "Yukikazu Hirano"
},
{
"authorId": "1763214",
"name": "L. Pichl"
},
{
"authorId": "144594961",
"name": "Cheoljun Eom"
},
{
"authorId": "2001544",
"name": "T. Kaizoji"
}
] |
{
"alternate_issns": [
"1805-997X"
],
"alternate_names": [
"CBU Int Conf Proc"
],
"alternate_urls": [
"https://ojs.journals.cz/index.php/CBUIC/issue/archive"
],
"id": "11318c27-eea7-4f77-842c-5fce350e9794",
"issn": "1805-9961",
"name": "CBU International Conference Proceedings",
"type": null,
"url": "http://ojs.journals.cz/index.php/CBUConference2013/issue/archive"
}
|
The issue of market efficiency for cryptocurrency exchanges has been largely unexplored. Here we put Bitcoin, the leading cryptocurrency, on a test by studying the applicability of the Efficient Market Hypothesis by Fama from two viewpoints: (1) the existence of profitable arbitrage spread among Bitcoin exchanges, and (2) the possibility to predict Bitcoin prices in EUR (time period 2013-2017) and the direction of price movement (up or down) on the daily trading scale. Our results show that the Bitcoin market in the time period studied is partially inefficient. Thus the market process is predictable to a degree, hence not a pure martingale. In particular, the F-measure for XBTEUR time series obtained by three major recurrent neural network based machine learning methods was about 67%, i.e. a way above the unbiased coin tossing odds of 50% equal chance.
|
CBU INTERNATIONAL CONFERENCE ON INNOVATIONS IN SCIENCE AND EDUCATION
MARCH 21-23, 2018, PRAGUE, CZECH REPUBLIC, WWW.CBUNI.CZ, WWW.JOURNALS.CZ
# **ANALYSIS OF BITCOIN MARKET EFFICIENCY BY USING MACHINE LEARNING**
Yuki Hirano [1], Lukáš Pichl [2], Cheoljun Eom [3], Taisei Kaizoji [4]
[1] International Christian University, Mitaka, Tokyo, Japan, kyabaria17@gmail.com
[2] International Christian University, Mitaka, Tokyo, Japan, lukas@icu.ac.jp
[3] Pusan National University, Busan, Republic of Korea, shunter@pusan.ac.kr
[4] International Christian University, Mitaka, Tokyo, Japan, kaizoji@icu.ac.jp
**Abstract:** The issue of market efficiency for cryptocurrency exchanges has been largely unexplored. Here we put Bitcoin, the
leading cryptocurrency, on a test by studying the applicability of the Efficient Market Hypothesis by Fama from two viewpoints:
(1) the existence of profitable arbitrage spread among Bitcoin exchanges, and (2) the possibility to predict Bitcoin prices in
EUR (time period 2013-2017) and the direction of price movement (up or down) on the daily trading scale. Our results show
that the Bitcoin market in the time period studied is partially inefficient. Thus the market process is predictable to a degree,
hence not a pure martingale. In particular, the F-measure for XBTEUR time series obtained by three major recurrent neural
network-based machine learning methods was about 67%, i.e. well above the unbiased coin-tossing odds of a 50% equal chance.
**UDC Classification:** 004.8, 33; **DOI:** http://dx.doi.org/10.12955/cbup.v6.1152
**Keywords** : Bitcoin, XBT, Neural Network, Gated Recurrent Unit, Long Short-Term Memory
**Introduction**
Bitcoin was the first open source distributed cryptocurrency released in 2009 after it was introduced in
a paper “Bitcoin: A Peer-to-Peer Electronic Cash System” by a developer under the pseudonym Satoshi
Nakamoto. It has been quickly followed by a number of alternative coins (altcoins), derivatives of the
original concept, and other block-chain based cryptocurrencies of more or less sophisticated design,
such as Ethereum. As of the writing of this article (Feb. 15, 2018), the market capitalization of all
cryptocurrencies is about USD 475 billion, with Bitcoin share being around USD 166 billion, followed
by Ethereum (USD 92 billion; Coinmarketcap 2018). Considering the fact that no cryptocurrency has
become a regular means of payment in any national economy or global sector yet, cryptocurrencies
present a remarkable speculative enterprise in cyberspace with a theoretical potential of disrupting
financial systems by the emergent digital commodity aspiring to function as a global means of payment
and value storage.
The future of Bitcoin and other cryptocurrencies nevertheless appears to be at stake because of the
following problems. First, the high price of Bitcoin has made micropayments impractical as transaction
fees rocketed, in spite of the original concept. Second, from the viewpoint of a stable currency, daily
price fluctuations as high as 10 percent up or down on the Bitfinex exchange are far too high; on 16
December 2017 the price of Bitcoin was more than 20 times higher than on the same date a year
earlier, only to fall to half of that maximum value five weeks later. Third, the Bitcoin mining process
that sustains the integrity of the blockchain has an enormous carbon footprint, consuming as much
electric power as the entire country of Nigeria, according to CBS News (November 27, 2017). It is
therefore quite questionable whether the Bitcoin payment system can be scaled up to take the role of a
national or even global currency. Consequently, some authorities maintain that cryptocurrencies,
including Bitcoin, are just a pyramid scheme, whereas others proclaim the emergence of a new global
monetary system.
Given the absence of a substantial economic sector behind Bitcoin, and the above mentioned volatility
with abundant bubbles and crashes, it is an open question to what extent the Bitcoin market system is
efficient. The prices of Bitcoin are very sensitive to market making news, such as the recognition of
Bitcoin as a legal payment method by Japan from April 1, 2017, or the ban of cryptocurrency exchanges
in China effective from November 1, 2017.
The central question addressed in this article is whether Bitcoin exchange markets are efficient. In an
efficient market, all the available information including the entire price history is fully reflected in the
current price of the asset. Thus the Efficient Market Hypothesis (EMH) introduced by Eugene Fama
(Fama, 1970 and 1991) implies that asset prices should follow a random walk which is impossible to
forecast; in general, the price dynamics is then a martingale process, in which the expectation of the next
value equals the current value of the asset, and the direction of price change is impossible to predict.
Since the EMH assumes complete information efficiency with regard to price formation, it rules out the
possibility of arbitrage transactions. In other words, if a profit-making arbitrage transaction is possible
among markets, then it is a certain manifestation of partial inefficiency of the market system. In what
follows we show that, in the case of Bitcoin, profitable arbitrage windows may open among Bitcoin
exchanges to various fiat currencies, and that the next-day price-change direction (the sign of the
logarithmic return) for a single time series may be predicted to a certain degree by machine learning
methods trained on past daily data, at a prediction level higher than the equal odds of fair coin tossing.
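For clarity, the martingale condition invoked above can be stated formally (the notation here is added for exposition and is not taken from the paper):

$$\mathbb{E}\left[ P_{t+1} \mid \mathcal{I}_t \right] = P_t ,$$

where $P_t$ denotes the asset price and $\mathcal{I}_t$ the information set available at time $t$; under the EMH, the conditional expectation of the next-day logarithmic return carries no exploitable information about the direction of the price change.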
Figure 1: Time series of XBT prices in EUR over a period of 927 trading days. Source: Authors (plot generated using the R package quantmod).
This paper is organized as follows. Following the literature review in the next section, in Section 3 we
explain the dataset and outline the methods of its analysis. Section 4 presents our results and
discussion, followed by the concluding section.
**Literature Review**
Scientific literature on Bitcoin has become abundant recently. Most of the papers are related to the
statistical analysis of Bitcoin and other cryptocurrencies, using methods from econometrics and general
data analysis. At present, we are not aware of any article that applies machine learning to the
estimation of cryptocurrency market efficiency.
In a recent research work, Gkillas & Katsiampa (2018) studied the behavior of returns of five
major cryptocurrencies using extreme value analysis, finding that "Bitcoin Cash is the riskiest, while
Bitcoin and Litecoin are the least risky cryptocurrencies”. In a statistical study by Phillip et al., (2018)
diverse stylized facts such as long memory and heteroscedasticity have been explored for 224 different
cryptocurrencies, which are found to "exhibit leverage effects and Student-t error distributions". The
design issues of Bitcoin are revisited by Ziegeldorf et al. (2018) in a study proposing a novel oblivious
shuffle protocol "to improve resilience against malicious attackers". Their method is claimed to be
"scalable, increasing anonymity and enabling deniability". In a study of market efficiency, Alvarez-Ramirez et al. (2018) analyzed Bitcoin to find that the "Bitcoin market is not uniformly efficient, and
asymmetries and inefficiency are replicated over different time scales”. In contrast to our work, their
method is based on the detrended fluctuation analysis estimating long-range correlations for price
returns, thus not covering the relation among Bitcoin exchanges and nonlinear dynamics patterns. Corbet
et al., (2018) applied a time and frequency domain analysis to estimate the relationships between 3 major
cryptocurrencies and a variety of financial assets, arriving at the conclusion that "cryptocurrencies may
offer diversification benefits for investors with short investment horizons”.
In a work motivated by market efficiency reasons related to ours, Lahmiri et al., (2018) analyzed the
Bitcoin time series in seven different exchanges, finding that “the values of measured entropy indicate
a high degree of randomness in the series". Contrary to this finding, however, they claim "strong
evidence against the EMH". Compared to the present approach, they do investigate nonlinear patterns
in volatility dynamics, but the work is limited by the broad assumption of the four diverse statistical
distributions employed. The interdependence of Bitcoin and altcoin markets was studied on short- and
long-term scales by Ciaian et al. (2018), who found the price relationship to be stronger in the short run.
Bariviera et al., (2018) studied the statistical features and long-range dependence of Bitcoin returns,
focusing on the behavior of the Hurst exponent computed in sliding windows, showing that it has a
similar behavior at different time scales. Luther & Salter, (2017) examined the relationship of possible
hedging in Bitcoin for countries with troubled financial systems, such as Cyprus, finding little significant
evidence that would support such transitions.
Price clustering of Bitcoin at round numbers is found in the work of Urquhart, (2017) who also studies
this effect in volume distributions and market liquidity. Hendrickson & Luther, (2017) employed a
monetary model with endogenous search and random consumption preferences, in which they show that
governments of sufficient size are capable of banning Bitcoin without serious consequences. The degree
of synchronization of prices of Bitcoin across exchanges is studied by Pieters & Vivanco, (2017) who
claim that the law of one price does not hold, for reasons ascribed to market efficiency failure,
especially for markets with anonymous trading accounts. In a search for the determinants of the Bitcoin
price, Hayes (2017) argues that it closely follows the cost of production, predominantly the
energy consumption, which drives the relative value formation at the cost margin.
In summary, the above reviewed literature deals directly with the issue of market efficiency of Bitcoin
only in two cases, Alvarez-Ramirez et al. (2018) and Lahmiri et al. (2018), neither of which considers
arbitrage opportunities among Bitcoin exchanges or uses machine learning algorithms to predict price
movement direction. Thus the present work provides a novel complementary insight into the issue of
Bitcoin market efficiency.
**Data and Methods**
The dataset for triangular arbitrage has been retrieved from Yahoo finance using the R-package
quantmod (R Core Team 2018; Ryan and Ulrich, 2017). It contains all 822 closing Bitcoin prices for the
selected fiat currencies of AUD, CAD, CNY, EUR, GBP, JPY, and USD between January 1, 2015 and
February 16, 2018. In order to analyze the triangular arbitrage of the type USD-XBT-CRC-USD, we
retrieved the closing values of the USDAUD, USDCAD, USDCNY, USDEUR, USDGBP, and USDJPY
exchange rates, with the number of data points correspondingly reduced by the holidays of each particular
foreign exchange market.
The profit rate of the triangular arbitrage transaction, in which USD is first used to buy one Bitcoin,
which is then sold for CRC and converted back to USD by the exchange rate CRCUSD = 1/USDCRC,
normalized to the initial expense for 1 XBT (i.e. the value of XBTUSD), reads

$$\rho = \frac{XBTCRC \cdot USDCRC^{-1} - XBTUSD}{XBTUSD} \qquad (1)$$
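As an illustration of how Eq. (1) translates into a computation, the following is a minimal Python sketch over aligned daily closing series; the pandas layout and the sample values are assumptions for exposition (the authors worked in R with quantmod), not their code:

```python
import pandas as pd

def triangular_arbitrage_profit(xbt_usd: pd.Series,
                                xbt_crc: pd.Series,
                                usd_crc: pd.Series) -> pd.Series:
    """Normalized profit rate of the USD -> XBT -> CRC -> USD round trip, Eq. (1).

    Buy 1 XBT for xbt_usd USD, sell it for xbt_crc CRC, convert back to USD
    at CRCUSD = 1/usd_crc, and normalize by the initial outlay xbt_usd.
    """
    return (xbt_crc / usd_crc - xbt_usd) / xbt_usd

# Hypothetical closing values for a single day (CRC = some non-USD fiat currency):
rho = triangular_arbitrage_profit(pd.Series([9000.0]),
                                  pd.Series([11000.0]),
                                  pd.Series([1.20]))
print(rho.iloc[0])  # ~0.0185; a positive value indicates a (pre-fee) arbitrage window
```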
The dataset used for machine learning prediction of price trend of Bitcoin is the XBTEUR time series
retrieved from Bloomberg, shown in Fig. 1. The three methods of machine learning applied for
prediction are (1) a Recurrent Neural Network in Elman configuration (Elman, 1990), depicted in Fig. 2,
Figure 2: Recurrent neural network in Elman's topology. Source: Authors.
177
-----
CBU I NTERNATIONAL C ONFERENCE ON I NNOVATIONS IN S CIENCE AND E DUCATION
M ARCH 21-23, 2018, P RAGUE, C ZECH R EPUBLIC WWW . CBUNI . CZ, WWW . JOURNALS . CZ
(2) a LSTM network depicted in Fig. 3, and (3) a GRU network shown in Fig. 4. Since these methods
are standard in deep learning libraries, such as TensorFlow, which we applied, we do not repeat the
equations, only briefly comment on the notation in the schematic figures. In particular, in Fig. 2, the
input vector (components *i* taken from the time series as a 20-element moving window) at time *t* is
fed to the hidden layer using the weight matrix *W(in)*. Then hidden unit values are computed, which
are fed back in a recurrent connection with parameter matrix *W* (recurrence shown by the bold arrow),
and also passed over to the output layer with the weights *W(out)* . For the machine learning example, we
assign 70% of the dataset to training, 15% of the dataset for validation (using early stopping criterion),
and 15% of the dataset to testing (our result data). Figure 3 shows the far more complicated design of
the LSTM network. The recurrent unit is shown as the black circle. In addition to the input and output
gates depicted on the right, there is an additional forget gate shown on the left, which regulates what
data will be remembered and for how long. The addition and multiplication symbols are shared with Fig.
4. Finally, in Fig. 4 a design of the GRU network is presented, which is a simplification of the LSTM
method that uses fewer parameters but is capable of producing results of similar accuracy to those of the
LSTM algorithm in most cases. Reset and update gates regulate the flow of the neural signal through
the network. Sigmoid and tangent-hyperbolic activation functions are used as shown in the legend.
Figure 3: Long Short-Term Memory (LSTM) neural network schematic. Source: Authors.
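For concreteness, a minimal sketch of the kind of setup described above (a 20-step input window, one recurrent layer, a binary up/down output, and a 70/15/15 split with early stopping) using the Keras API that ships with TensorFlow; the layer size, epochs, windowing helper, and placeholder series are illustrative assumptions, not the authors' configuration:

```python
import numpy as np
import tensorflow as tf

WINDOW = 20  # moving-window length used as the input vector, as in the paper

def make_windows(series: np.ndarray):
    """Turn a 1-D series into (20-step window, next-day up/down label) pairs."""
    X, y = [], []
    for t in range(len(series) - WINDOW):
        X.append(series[t:t + WINDOW])
        y.append(1.0 if series[t + WINDOW] > series[t + WINDOW - 1] else 0.0)
    return np.array(X)[..., None], np.array(y)  # add a feature axis for the RNN

# Placeholder random-walk series; the paper used XBTEUR closes from Bloomberg.
X, y = make_windows(np.cumsum(np.random.randn(1000)))

# 70% training, 15% validation (early stopping), 15% testing, as in the paper.
n = len(X)
i_tr, i_va = int(0.70 * n), int(0.85 * n)

model = tf.keras.Sequential([
    tf.keras.layers.GRU(32, input_shape=(WINDOW, 1)),  # or LSTM / SimpleRNN
    tf.keras.layers.Dense(1, activation="sigmoid"),    # P(next-day price up)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X[:i_tr], y[:i_tr],
          validation_data=(X[i_tr:i_va], y[i_tr:i_va]),
          epochs=50, verbose=0,
          callbacks=[tf.keras.callbacks.EarlyStopping(patience=5)])
print(model.evaluate(X[i_va:], y[i_va:], verbose=0))  # [test loss, test accuracy]
```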
**Results and Discussions**
Table 1 shows the results for the triangular arbitrage. The medians of the distributions are very small,
close to zero (i.e. never exceeding 1 percent, which, if transaction fees are considered, is probably not a
profitable value). The main results are the standard deviation values, measuring the width of the
distribution, which is often asymmetric and exhibits outliers. The minimum value, maximum value,
mean, median, skewness and kurtosis parameters complement the standard-deviation based analysis.
We can see that USD-EUR based Bitcoin arbitrage offers virtually no profit opportunities whereas the
arbitrage window widens to almost 6% for the Chinese currency. It can be said that the more minor the
currency, the broader the arbitrage window. Table 2 shows the information retrieval measures for
the trend prediction results of the three ML algorithms, using 2 different predictors (prices and log
returns).
Table 1: Summary of triangular arbitrage distributions (normalized profit rate) for the USD-XBT-CRC-USD scheme using 6 different currencies as CRC

| Currency | Min | Max | Mean | Median | St. Dev. | Skewness | Kurtosis |
|---|---|---|---|---|---|---|---|
| AUD | -0.2405 | 0.3389 | 0.0308 | 0.0162 | 0.0478 | 1.6583 | 10.0418 |
| CAD | -0.1655 | 0.3953 | 0.0232 | 0.0093 | 0.0510 | 2.4611 | 12.8804 |
| CNY | -0.4321 | 0.3998 | 0.0053 | 0.0059 | 0.0585 | 0.5499 | 16.0649 |
| EUR | -0.0817 | 0.0706 | 0.002 | 0.0021 | 0.0101 | 0.1192 | 14.3642 |
| GBP | -0.1616 | 0.6654 | 0.0085 | 0.0065 | 0.0298 | 13.3933 | 290.9942 |
| JPY | -0.1208 | 0.267 | 0.0219 | 0.0122 | 0.0407 | 3.3021 | 16.6164 |

Source: Authors
Figure 4: Gated Recurrent Unit (GRU) neural network schematic. Source: Authors.
Table 2: Machine learning algorithm results for binary trend prediction of the XBTEUR time series; (a) training by prices, (b) training by logarithmic returns

| Method | Accuracy (a) | Recall (a) | Precision (a) | F-measure (a) | Accuracy (b) | Recall (b) | Precision (b) | F-measure (b) |
|---|---|---|---|---|---|---|---|---|
| RNN | 0.58 | 0.69 | 0.66 | 0.67 | 0.60 | 0.95 | 0.60 | 0.73 |
| LSTM | 0.57 | 0.69 | 0.64 | 0.67 | 0.54 | 0.73 | 0.58 | 0.65 |
| GRU | 0.58 | 0.69 | 0.66 | 0.67 | 0.56 | 0.85 | 0.58 | 0.69 |

Source: Authors
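To fix the definitions behind Table 2, here is a small self-contained sketch of how precision, recall and the F-measure are computed from binary up/down predictions; the sample vectors are made up purely for illustration:

```python
def f_measure(y_true, y_pred):
    """Precision, recall and F-measure for binary up(1)/down(0) predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Illustrative only: an unbiased coin-tossing predictor would sit near 0.5,
# whereas Table 2 reports F-measures of roughly 0.67 to 0.73.
print(f_measure([1, 0, 1, 1, 0, 1], [1, 0, 0, 1, 1, 1]))  # (0.75, 0.75, 0.75)
```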
**Conclusion**
We have established partial information inefficiency of the Bitcoin market by means of triangular
arbitrage between USD-XBT-CRC-USD where CRC stands for one of 6 major currencies. Whereas on
the daily trading scale, the profit window is very narrow in the case of major currencies such as EUR,
but widens for currencies such as AUD, CAD and CNY, beyond standard transaction fee levels. In
addition, by using three machine learning algorithms, the RNN, LSTM and GRU methods, we have
shown that machine learning algorithms are capable of predicting the direction of the price change for
the next day based on the past data, with the F-measure in the range of 67% to 73%. When compared to
the USDEUR exchange rate values, the Bitcoin market shows a substantially greater degree of inefficiency.
These results present a significant argument to question the validity of the EMH in case of Bitcoin
exchanges.
**Acknowledgement**
This research was supported by JSPS Grants-in-Aid Nos. 2538404, 2628089.
**References**
Alvarez-Ramirez, J., Rodriguez, E., Ibarra-Valdez, C. (2018) Long-range correlations and asymmetry in the Bitcoin market,
Physica A: Statistical Mechanics and its Applications vol. 492, pp. 948-955.
Bariviera, A. F., M. J. Basgall, W. Hasperue, and M. Naiouf (2017) Some Stylized Facts of the Bitcoin Market, Physica A
vol. 484, pp. 82-90.
Ciaian, P., Rajcaniova, M., Kancs d'A (2018) Virtual relationships: Short- and long-run evidence from BitCoin and altcoin
markets, Journal of International Financial Markets, Institutions and Money vol. 52, pp. 173-195.
Coinmarketcap (2018) Cryptocurrency Market Capitalizations, https://coinmarketcap.com/, Accessed 2018/02/15.
Corbet, S., Meegan, A., Larkin, C., Lucey, B., Yarovaya, L. (2018) Exploring the dynamic relationships between
cryptocurrencies and other financial assets, Economics Letters, vol. 165 pp. 28-34.
Elman, J. L. (1990) Finding Structure in Time, Cognitive Science vol. 14 No. 2, pp. 179-211.
Fama, E.F. (1970) Efficient Capital Markets: A Review of Theory and Empirical Work, The Journal of Finance vol. 25, pp.
383-417.
Fama, E.F. (1991) Efficient Capital Markets: II, The Journal of Finance vol. 46, pp. 1575-1617.
Gkillas K., Katsiampa, P. (2018) An application of extreme value theory to cryptocurrencies, Economics Letters vol. 164, pp.
109-111.
Hayes, A. S. (2017) Cryptocurrency value formation: An empirical study leading to a cost of production model for valuing
bitcoin, Telematics and Informatics vol. 34 No. 7, pp. 1308-1321.
Hendrickson, J. R., Luther, W. J. (2017) Banning bitcoin, Journal of Economic Behavior & Organization vol. 141, pp. 188-195.
Lahmiri, S., Bekiros, S., Salvi, A. (2018) Long-range memory, distributional variation and randomness of bitcoin volatility,
Chaos, Solitons & Fractals vol. 107, pp. 43-48.
Luther, W. J., Salter, A. W. (2017) Bitcoin and the bailout, The Quarterly Review of Economics and Finance vol. 66, pp. 50-56.
Phillip, A., Chan, J. S. K., Peiris, S. (2018) A new look at Cryptocurrencies, Economics Letters vol. 163, pp. 6-9.
Pieters, G., Vivanco, S. (2017) Financial regulations and price inconsistencies across Bitcoin markets, Information
Economics and Policy vol. 39, pp. 1-14.
R Core Team (2018). R: A language and environment for statistical computing. R Foundation for Statistical Computing,
Vienna, Austria. URL https://www.R-project.org/.
Ryan, J. A., Ulrich, J. M. (2017). quantmod: Quantitative Financial Modelling Framework. R package version 0.4-12.
https://CRAN.R-project.org/package=quantmod
Urquhart, A. (2017) Price clustering in Bitcoin, Economics Letters vol. 159, pp. 145-148.
Ziegeldorf, J. H., Matzutt, R., Henze, M., Grossmann, F., Wehrle, K. (2018) Secure and anonymous decentralized Bitcoin
mixing, Future Generation Computer Systems vol. 80, pp. 448-466.
|
{
"disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.12955/CBUP.V6.1152?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.12955/CBUP.V6.1152, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "GREEN",
"url": "https://ojs.journals.cz/index.php/CBUIC/article/download/1152/pdf/"
}
| 2,018
|
[
"Conference"
] | true
| 2018-09-24T00:00:00
|
[] | 5,521
|
en
|
[
{
"category": "Economics",
"source": "external"
},
{
"category": "Business",
"source": "s2-fos-model"
},
{
"category": "Environmental Science",
"source": "s2-fos-model"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/011e06a3b06257e73ee37540df71cd6a595d3c03
|
[
"Economics"
] | 0.878632
|
Blockchain for good
|
011e06a3b06257e73ee37540df71cd6a595d3c03
|
[
{
"authorId": "49827510",
"name": "B. Kewell"
},
{
"authorId": "144118186",
"name": "R. Adams"
},
{
"authorId": "38464485",
"name": "G. Parry"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": null,
"id": null,
"issn": null,
"name": null,
"type": null,
"url": null
}
| null |
**Blockchain for Good?[1]**
Beth Kewell, University of Surrey, Surrey, UK.
Richard Adams, University of Surrey, Surrey, UK.
Glenn Parry, University of the West of England, UK.
- Explores key areas of Blockchain innovation that appear to represent viable catalysts for achieving global Sustainable Development targets.
- Projects and initiatives seeking to extend the reach of Distributed Ledger Technology (DLT) seem mostly intended for the benefit of for-profit businesses, governments, and consumers.
- DLT projects devised for the public good could aim, in theory, to fulfil the United Nations' current Sustainable Development Goals (SDGs).
- Our overview of these initiatives suggests that blockchain technology is being applied in ways that could transform this ambition for good into a practical reality.
Current examples of blockchain deployment are being specified within a value-creation
remit that is most likely to benefit for-profit businesses, governments, and consumers
(Ng, 2013; Bohme et al., 2015; Swan, 2015; Potts et al., 2016; McWaters et al., 2016;
Walport, 2016). Received ideas about what blockchain can and should be used for are
based on perceptions that the key role of this technology is to unlock cost savings and
secure efficiency gains, whilst also enabling widespread business model transformation
(Walport, 2016). Within this scenario, blockchain affordances (Gibson, 1978) are
principally seen to ‘do good’ by resolving longstanding obstacles to profitability and
value-capture (Walport, 2016).
The aim of this paper is to consider how blockchain solutions could be used to achieve
good outcomes for the sustainable development agenda by, for example, helping to
fulfil the UN’s Sustainable Development Goals (UN, 2015). Kranzberg’s first law of
1 JEL Codes: O38; O39; D20.
Acknowledgements: The authors gratefully acknowledge the support of a
BA/Leverhulme Small Research Grant (SG160335) in the preparation of this work.
technology avers that ‘Technology is neither good nor bad; nor is it neutral’
(Kranzberg, 1986, p.545). In doing so, Kranzberg reminds us that innovations are
morally and ethically instantiated. To date, research has tended to focus on the technical
characteristics, efficiency gains - and profits - to be yielded from blockchain projects
and experimental Distributed Ledger Technology (DLTs) and ‘permissioned ledgers’
being run by private consortia (Ng, 2013; Bohme et al., 2015; Swan, 2015; Potts et al.,
2016; McWaters et al., 2016; Walport, 2016). While initially fixed on the commercial
and consumer benefits to be drawn from blockchain innovation, attention is beginning
to shift toward the appropriation of socially and environmentally beneficial use cases
that aim to tackle global challenges such as, for example, financial exclusion (CTPM,
2016).
Drawing on affordance theory, this exploratory paper reflects on innovative applications
of blockchain projects that could help deliver socially and environmentally beneficial
outcomes by challenging existing business models and providing new opportunities for
value creation that also serve a philanthropic purpose (Botsman and Rogers, 2010). We
call this ‘Blockchain for Good’, where ‘Good’ can be framed in terms of the UN’s
Sustainable Development Goals (SDG) (UN, 2015). The SDGs provide a vision for
governmental, corporate and civic action leading the way towards ‘development that
meets the needs of the present without compromising the ability of future generations to
meet their own needs’ (WCED, 1987, para 27).
The paper proceeds as follows: First, we describe our approach to this exploratory
research. Second, we offer a brief overview of the technological characteristics of DLT.
Third, we examine the notion that DLTs have unique affordances rendering them
appropriate solutions to the SDGs. Consequently, in this article we begin to explore the
impact of DLTs on the UN’s Sustainable Development Goals which is the contribution
of the paper.
**Affordances**
The repositioning of blockchain technologies as a device for mobilising good causes,
including those positioned at a global level, represents a considerable departure from
their original remit as payments reconciliation systems which may be utilised without
the need for banks and clearing houses (Ng, 2013; Bohme et al., 2015; Swan, 2015;
Welch, 2015; Potts et al., 2016; McWaters et al., 2016; Walport, 2016). The
identification of such an important ‘change of use’ draws attention to a concomitant
shift in perceptions of blockchain affordances – that is to say, discernment of what the
software can do for sustainable development and environmental protection in parallel
with an appreciation of what novel deployment could realise for vulnerable and
impoverished communities (Seidel et al. 2013). Key organisations, such as the UN, are
actively focused on establishing blockchain’s capacity for achieving SDGs in, for
example, identity provision and financial inclusion (CTPM, 2016).
Scoping exercises, focused on pinpointing blockchain’s potential contribution to the
sustainability field may represent a first step towards developing a future ‘affordance
taxonomy’ (Conole and Dyke, 2004) to guide the deployment of blockchain-for-good in
the third sector and among social enterprises, including those already taking advantage
of crowd-funding and other charitable activities made possible by virtual platforms
(Choy and Schlagwein, 2016).
Affordances are bestowed upon artefacts – they are the qualities users perceive objects,
places, contexts, and constructs, uphold and encompass (Gaver, 1991; Zammuto et al.,
2007; Maier and Fadel, 2009; Faraj et al., 2011; Withagen, 2012; Majchrzak and
Markus, 2012; Xenakis and Arnellos, 2013; Lankton et al., 2015; Ciavola and
Gershenson, 2016; Choy and Schlagwein, 2016; Beynon-Davies and Lederman, 2017).
Affordances are bound to expectations of what artefacts can be or can do, and thus to
reputational information (Kewell, 2007) that tells us whether the actions they ought to
assist or facilitate (Ciavola and Gershenson, 2016, p.252) are worthwhile, valuable, risky
or unwise (Zammuto et al., 2007). Affordances can therefore be seen to possess an
implicit moral imperative (Dierksmeier and Seele, 2016).
Artefacts by themselves have no power; they do nothing (Geels, 2005). Affordance
theory suggests that an artefact is fundamentally perceived in terms of its ‘action
_possibilities’ (Withagen, 2012, p.521). Drawing on Gibson’s (1978) work on the_
ecology of perception, Pea (1993, p. 51) describes an ‘affordance’ as the perceived and
_actual properties of a thing, primarily those functional properties that determine just_
_how the thing could possibly be used. An affordance, then, is what an object or_
technology offers, provides or furnishes in the context of use: depending on the user, a
chair ‘affords’ sitting or an improvised ladder (Maier and Fadel, 2009); a bicycle
‘affords’ travel, or exercise or the delivery of health benefits (Conole and Dyke, 2004;
Maier and Fadel, 2009; Volkoff and Strong, 2013; Lankton et al., 2015; Ciavola and
Gershenson, 2016).
Affordance theory subsequently delineates between intended uses built into the design
process and consequential affordances, which avail themselves as prototypes are tested,
and end-products are evaluated by potential users and consumers leading to the
development of ‘sequential’ and ‘nested’ affordances (Gaver, 1991, p.4). Original
construals of affordance can change markedly by the end of this learning curve (Gaver,
1991). Dual affordances can also emerge over time, once an artefact, prototype or
design has acquired up-take (Beynon-Davies and Lederman, 2017). Thus, the recycling
movement has also recently shown how the original meaning of an artefact’s
affordances can be usurped or overturned by, for example, making objects with
established or traditional affordance perform tasks for which they were not originally
intended. A good illustration of the latter is provided by the current trend in cities
for ‘container living,’ whereby redundant sea freight containers are converted into
sustainable homes.
A relatively new area of technological affordance theory examines the development of
simulated computer technologies and the impact that perceptions of what they can do have on
human and organizational relations in the ‘real world’ (Zammuto et al., 2007; Boyd,
2010; Faraj et al., 2011). The affordances of software products can be altered by
multiple designers and mass users (Boyd, 2010, p.7). By itself, this capacity rewrites
existing conceptions of the interaction between designers, users, artefacts, and the
environment in which they are embedded (Zammuto et al., 2007; Faraj et al., 2011;
Ciavola and Gershenson, 2016). When positioned within social networks (Boyd, 2010),
the affordances of simulated technology (such as platforms and blockchains) are
perceived to foster new forms of communitarian action and social exchange (Faraj et
_al., 2011; Choy and Schlagwein, 2016). When considered within an organisational_
context, these technologies are said to change perceptions of what may be afforded by
systems, structures and processes, for example, those illustrated in workflow
visualisation software (Beynon-Davies and Lederman, 2017), allowing previously
hidden sources of value to become more self-evident (Zammuto et al., 2007; Faraj, et
_al., 2011; Ciavola and Gershenson, 2016)._
The discovery of affordances related to blockchain technology is following patterns
identified within inter and intraorganisational contexts. Blockchain is part of a thriving
ecosystem, populated, en masse, by designers and users, who are continually
improvising new affordances, as they tweak the technology for use in different settings.
The advent of use cases for intraorganizational and consortia-based blockchain
deployment (as DLTs and permissioned ledgers), suggests that it will not be very long
before companies begin to perceive blockchains as instruments of change. In what
follows, we consider how perceptions of blockchain affordances are likely to change
understandings of what can be achieved in the sustainable development field, as an
ecosystem that must address multiple requirements (from disaster relief to
microfinance), using extremely complex networks of interactions. Could these
interactions be placed on blockchains? Could this placement deliver better outcomes for
aspects of society and the environment that are most in need and generate new sources
of good?
**Affordances for good**
It is important to consider what we mean by good before addressing these questions.
The western philosophical tradition has, for millennia, distinguished between intrinsic
and extrinsic good (Smith, 1948): the former is good for its own sake, the latter a
derivative of the former – that is, an extrinsic good is good not for its own sake, but
because its enactment leads back to an intrinsic good. The debates about the ontological
status of intrinsic and extrinsic good, what constitutes them, the sorts of things that are
or have intrinsic or extrinsic good(ness) and how these might be assessed or computed
are beyond the scope of this paper. Frankena (1973) provides a comprehensive list of
those things which are intrinsically good – as deemed by other authors to be good or
rational to desire for their own sakes. Others, for example George Moore (1903), reject
the notion of intrinsic good and take a more consequentialist view that things are good
when they are perceived to be good, where their consequences are in some sense better
than those of alternatives.
There is a substantial literature on ethical issues surrounding ICTs, much of it framed
around what constitutes ‘better’ and how that might be evaluated, including: the impact
of technological progress on society (Lee, 2005) and the influence of technology on the
development of virtuous interactions (Benkler and Nissenbaum, 2006). Taddeo and
Vaccaro (2011) argue that ICT's beneficial impact can be evaluated by distinguishing
between local and systemic levels and between content and process; the implication of
their framing is that an ethical understanding of technologies can be gained
through an interrogation of how the ways in which they work enable new beneficial
actions and outcomes.
In describing a DLT initiative as good, we are undertaking an evaluation. Value is
said to be the measure of goodness (Ng, 2013) and pragmatically we seek to make a
judgment of what is good in our case. Our evaluation of DLTs is not based on
judgement of an intrinsic or extrinsic goodness. Rather the judgement is based on the
decisions made by the people who invent, develop, distribute and use them (Argandoña,
2003) in relation to the consequences of those technologies for the UN’s 17 SDGs and
169 targets which, on September 25th, 2015 the 193 Member States of the United
Nations unanimously adopted.
DLTs have, in some quarters, received an unfavourable press largely grounded in the
observation that DLT-enabled cryptocurrencies – notably Bitcoin – have been
associated with illicit and illegal activities such as drug dealing and arms trading (it
should be said, a critique that applies equally to cash). Leading financial institutions and
banking consortia are currently looking for ways to create their own permissioned or
private cryptocurrency ecosystems (as seen in the example of the ‘r3’ consortium[2]).
Blockchain use cases focus, typically, on mapping the affordances DLTs might convey
within large-scale financial services (European Central Bank, 2012; Ali et al., 2014;
McWaters et al., 2016). With little consensus about the potential impact of DLTs for
good or ill, it is clear that the subject requires serious analysis. To focus on a single
application or specific usage of the technology is to overlook its possible significance
2 http://r3members.com/
for ethical impacts at a global level. To ensure that the opportunities for ethical action
potentially engrained in new technologies such as DLTs may be realized, it is important
that the wider significance of the so-called ‘Blockchain for Good’ (B4G) is understood.
The blockchain first appeared, largely unheralded, in 2008. Attention, instead, was
directed toward the application whose existence blockchain technology made possible.
The focal application and the first to run on a blockchain was the crypto-currency
Bitcoin (Nakamoto, 2008; Lemieux, 2013).
The significance of the underlying DLT is that it enables the digital transfer of value
between two unknown entities without the need for a trusted third party. Simply put,
DLT allows anyone to transact with anyone anywhere on a P2P basis. DLTs enhance
the transparency of information exchanges (including payments and deposits), making
trust obligations much easier to discharge between transacting parties. The service of
value transfer is normally provided by intermediaries such as banks. DLT reallocates
the responsibilities of transfer management to computers and algorithms (Ali et al.,
2014; Welch, 2015; McWaters et al., 2016). Because of the way in which the
technology is configured to allow P2P digital exchange of value, the blockchain, to
many observers, represents a revolutionary and disruptive innovation (Swan, 2015;
Zuberi and Levin, 2016).
Fundamentally, a blockchain is a ledger of transactions of digital assets: of who owns
what, who transacts what, of what is transacted and when. Transactions are not recorded
on a single database but distributed on the computers of the network of users (nodes) of
the system. No single entity owns or controls the ledger, and so network members can
view the recorded transactions. Transactions are recorded and stored in ‘blocks,’ and
each block linked chronologically (hence chain) and cryptographically to those which
precede it to create an immutable, tamper-resistant record. All transactions are time
stamped to provide a record of when transactions occurred and in what order: this
assures against ‘double spending’ and tampering with previous transaction records
(Reber and Feuerstein, 2014). The ledger is ‘kept honest’ by network consensus, a
transaction validation process undertaken by network users, which includes checking
that digital signatures are correct through a process known as ‘mining’: mining is
incentivised by reward systems. Once a block is accepted by the network and added to
the chain, it cannot be changed: it is a permanent, transparent and immutable record.
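To make the chaining idea concrete, the following toy Python sketch illustrates hash-linked, time-stamped blocks; it deliberately omits networking, consensus, and digital signatures, so it is an expository illustration of the data structure described above, not any production design:

```python
import hashlib, json, time

def block_hash(block: dict) -> str:
    """Hash a block's contents deterministically (sorted keys)."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def new_block(transactions, prev_hash: str) -> dict:
    # Each block is time-stamped and cryptographically linked to its
    # predecessor, which is what makes retroactive tampering detectable.
    return {"timestamp": time.time(),
            "transactions": transactions,
            "prev_hash": prev_hash}

chain = [new_block(["genesis"], prev_hash="0" * 64)]
chain.append(new_block([{"from": "alice", "to": "bob", "amount": 5}],
                       prev_hash=block_hash(chain[-1])))

# Verifying the chain: every block must reference the hash of the one before it.
ok = all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
         for i in range(1, len(chain)))
print("chain valid:", ok)
```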
Consequently, DLTs may be characterised as globally distributed, P2P, open ledgers of
exchange providing an immutable and verifiable record and encrypting the identities of
users that is hard to tamper with. Davidson et al. (2016) describe DLTs as a new
general purpose technology which is, by definition, highly pervasive and can impact
entire economies, giving rise to creative destruction (Schumpeter, 1934; Jovanovic and
Rousseau, 2005) with the potential to disrupt any centralized system that coordinates
valuable information (Wright and De Filippi, 2015).
DLTs represent a fundamental change in the way in which humans can exchange value,
and two important implications follow. First, because the technology provides the
required trust to give peers the confidence to exchange value directly, the requirement
for socially-constructed institutional third-party providers of trust is significantly
reduced: they become disintermediated. The second implication is that the blockchain
presages a new functionality for the internet: it moves from an internet of information to
_an internet of value (Swan, 2015). It means, that for objects that can be expressed in_
code, multiple novel application possibilities are opened up, and raises the question,
how can blockchain technology that creates immutable, tamper-resistant distributed
records of transactions of digital assets be applied in the service of SDGs?
Mattila (2016) points out that the technology stack components of DLTs is diverse and
can be configured in a variety of ways, resulting in different DLT architectures,
implying the need for design decisions. Blockchains can be categorized as, for example,
Permissioned/Permissionless and Specific Purpose Blockchains optimized for the
management of assets and General Purpose Blockchains designed to allow users to
write their own programmes to be stored on the blockchain and automatically executed
in a distributed manner. Notwithstanding these divergences, DLTs share certain
characteristics which may be more or less attenuated depending on the context of the
application, in particular: the distributed (decentralized) consensus mechanism,
immutability, algorithmic trust, resilience against manipulation, and secure information
sharing.
Nakamoto’s (2008) white paper describes what might be considered to be a pure form
of DLT, that is to say a permissionless blockchain encompassing a network of
participants who are not known to one another; each of them can access the
blockchain with complete freedom to read from or write to it, no actor can prevent any other
actor from contributing content, nor can any actor remove any previously validated
contribution; and consensus is incentivised through economic mechanisms.
Permissionless Blockchains are therefore highly censorship resistant and can provide an
immutable[3], network-validated global record of transaction histories – right up to the
present moment.
On the other hand, anyone[4] may have a copy of the ledger in a permissioned blockchain,
but only certain authorised parties may write to it and the consensus process is
determined by the owner(s) of that blockchain, usually carried out by trusted actors in
the network (CPTM, 2016). Assuming that chosen actors honestly and disinterestedly
validate transactions, then permissioned blockchains can offer certain advantages, in at
least two respects: first, they can be designed with specific functionality in mind and,
second, alternatives to economically-incentivized validation mechanisms (proof-of-work)
can be incorporated. As a result, permissioned blockchains can be more efficient
and faster than unpermissioned versions (CPTM, 2016) but at the cost of reduced
security, immutability and censorship-resistance (Mattila, 2016).
A sub-category of the permissioned blockchain is the private blockchain in which only
certain authorised users have access to the database, whether for reading or writing.
Such blockchains tend to exist behind an organizational firewall but offer within-group
transparency, privacy, and control, for a defined set of users. Whether or not they truly
are DLTs continues to be debated, but the permissioned blockchain does have a role in
helping deliver the SDG agenda. In the following, we explore some of these further and
consider their affordance in terms of the SDGs.
3 Immutable to the extent that that particular blockchain continues to be maintained. It is not clear what
happens in the circumstance that a particular blockchain ceases to be maintained by a network.
4 Anyone, subject to, of course, the nature of the permissions.
_Blockchain mining_
In the Bitcoin blockchain, transactions are validated by network members (nodes) in a
process known as mining. This distributed, network-member-driven process, performs
the function of the centralized trusted third party intermediary model. Network
participants compete with each other using computer power (known as proof-of-work)
to validate blocks of transactions every 10 minutes or so. The proof-of-work is difficult
to produce but easy for other nodes to verify and so transaction validity is established by
majority consensus of network members. The miner that first successfully validates a
block is rewarded with newly minted Bitcoins[5].
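The "difficult to produce but easy to verify" property can be illustrated with a toy proof-of-work sketch; the leading-zeros difficulty rule below is a generic simplification for exposition, not Bitcoin's actual target mechanism:

```python
import hashlib

def mine(block_data: str, difficulty: int = 4) -> int:
    """Search for a nonce whose hash has `difficulty` leading zero hex digits."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce  # costly to find by brute-force search...
        nonce += 1

def verify(block_data: str, nonce: int, difficulty: int = 4) -> bool:
    # ...but any node can check the claim with a single hash.
    digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = mine("block #1: alice pays bob 5")
print(nonce, verify("block #1: alice pays bob 5", nonce))
```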
That network members commit resources to validating transactions in turn
contributes to the cryptographic security and fraud resilience of the Bitcoin blockchain.
The network is configured in such a way that it makes more sense for would-be
attackers to participate as miners (greater opportunity for reward at lesser cost), thus
increasing the resilience of the blockchain (Doguet, 2013; Fox-Brewster, 2015; Welch,
2015).
However, the computationally intensive method of proof-of-work has been described as
costly and wasteful (McWaters et al., 2016). As miners around the world competitively
dedicate resources to validate transactions, Aste (2016) estimates about a billion Watts
of electricity are consumed globally every second to produce a valid proof of work for
Bitcoin. In light of this, alternative validation mechanisms are being investigated, some
5 For more details on mining, see Antonopoulos, A.M. (2014). Mastering Bitcoin: unlocking digital
cryptocurrencies, O'Reilly Media, Inc.; Swan, M. (2015). Blockchain: Blueprint for a New Economy,
O'Reilly Media, Inc., and: http://www.coindesk.com/information/how-Bitcoin-mining-works/
of which resonate with the SDG agenda but also relax some of the communitarian
properties of the proof-of-work approach (such as openness to the whole community).
Dierksmeier and Seele (2016) argue that it should be possible to promote ethical goals
in society by, for example, hitching the ‘mining’ to the creation of ecological or social
benefits. Certainly, reducing energy consumption in the process would ameliorate
ecological harms and a small number of initiatives have emerged in this area.
SolarCoin[6], for example, rewards generators of solar energy with new coin; another,
GridCoin (Halford, 2014) introduces a novel algorithm based on work done in BOINC
(Berkeley Open Infrastructure for Network Computing) projects: miners are
incentivized to participate in scientific projects (as in healthcare and space exploration)
aiming to provide benefit to humanity. In the CureCoin blockchain, the Bitcoin
validation calculations are replaced by (useful) protein folding tasks: mining CureCoin
helps science through simulating protein behaviour and providing these data to research
scientists.
_The internet of value(s)_
The previous section describes how social or ecological benefit can be linked to the
production of alt-currencies. This section focuses on how these benefits can be related
to currency use. The notion of coloured coins (Bradbury, 2013) is used to denote a
small part of a coin with specific attributes which may represent anything from physical
assets to a community’s values. By moving coloured coins through the network, asset
ownership can be securely transferred. Similarly, coins coloured with values, in which
6 https://solarcoin.org/
morals, principles or ethics are embedded in the code, can allow individuals to align
their spending closely with their values.
Taghiyeva et al. (2016) describe a proof-of-concept pilot for a blockchain-based Islamic
crypto-currency in which transactions and Muslim values, including a blended
anti-radicalisation agenda, are aligned: a currency with a community’s desirable social
principles engineered-in. This resonates with Helbing’s (2013, 2014) concept of
_Qualified Money where values can be embedded in DLTs. CarbonCoin[7] claims to be the_
first digital currency with a conscience, designed to engage the environmentally
conscious community. Such possibilities raise important questions about whose values
are embedded into a currency and who does the engineering.
In terms of assets, DLTs provide a mechanism both for their registration and transfer. A
number of commentators have argued that this may prove a boon in developing or
politically unstable economies for the registration of individuals’ property rights. Where
there is a lack of trust in central authorities to maintain uncorrupted registers of assets,
such as property title, these may be recorded immutably, transparently, and verifiably
on a blockchain. A number of pilots and trial projects are underway: Bitland[8] uses DLT
to map land title in Ghana, providing a registry of ownership which subsequently
facilitates the mobilization of capital as well as a transparent property market. Similar
initiatives can be found in Honduras (Alejandro, 2016), Sweden (Rizzo, 2016) and
Georgia (Shin, 2016). Progress has been slow and success mixed (ODI, 2016), attesting
to the still emergent nature of the technology. Indeed, it is too easy to get carried away
7 http://carboncoin.cc/
8 http://bitlandglobal.com
by the theoretical potential of DLTs. While a blockchain based registry of assets may be
transparent and immutable, for it to be meaningful in terms of economic participation
and activity it must exist within a stable infrastructure: armed aggressors, for example,
may still unlawfully seize property regardless of whether or not it is recorded on the
blockchain. However, the existence and immutability of the record may act as a
deterrent against such behaviour.
_Supply chains_
Assets can be registered to the blockchain using unique keys. This provides a register of
ownership as well as tracking of the pattern of ownership over time. Initiatives that have
leveraged this affordance include Everledger[9], a permanent ledger for diamond
certification and related transaction history transparently recording ownership history
and reducing crime, and Provenance[10] who provide a system for tracking materials and
products in a manner that is public, secure and inclusive. For the SDGs, this means that
claims (for example, that diamonds are not blood diamonds, or that tuna is sustainably fished) can be
demonstrated to be authentic right through the supply chain, shifting the value system
towards origin and provenance (Greenspan, 2015).
DLT applications are also being explored in the energy market both as a system
enabling individuals to sell excess solar-generated electricity to each other without
going through third parties (such as PowerLedger[11] and TransActive[12]) as well as
developing a market infrastructure for carbon trading, an independent ledger of the
9 http://www.everledger.io/
10 https://www.provenance.org/
11 http://powerledger.io/
12 http://transactivegrid.net/
permits to emit Earth’s allowance of greenhouse gases (Casalotti, 2016). One scenario
is that, within a short time, every individual on the planet could, for example, be issued with
an annual carbon allocation that may be traced via the DLT network.
_Innovations in governance_
Within DLTs, code substitutes for trust and allows for new types of commerce.
Appropriately designed, these can be the building blocks of new forms of economic and
social governance that meet the objectives of the SDGs.
Smart contracts are computer protocols that facilitate, verify and enforce the
performance of a contract: self-executing code. They are the automation of the
performance of contracts which only execute when pre-specified conditions are met,
thus removing the need for third party resolution. This is an assured and low-cost
mechanism that can offer Bottom of the Pyramid economic actors increased speed,
efficiency, and trust that the contract will be executed as agreed, thus enabling arm’s
length transactions and payments triggered on receipt of goods. A further application is
in the realm of providing more secure and inclusive voting and elections. The danger, of
course, is that the contract performs no matter what: this raises questions about who
writes them (Quis custodiet ipsos custodies?), how to write-in flexibility to respond to
and incorporate external events, and individual’s free will in connecting with them.
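By way of illustration, the toy Python sketch below (ours; a production smart contract would be deployed on a blockchain platform such as Ethereum rather than run off-chain) captures the essential pattern of self-executing code: payment is released only once the pre-specified condition is met.

```python
class EscrowContract:
    """Toy escrow: funds are locked at creation and released to the seller
    only when the agreed condition (here, confirmed receipt) is met."""
    def __init__(self, buyer, seller, amount):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.delivered = False
        self.settled = False

    def confirm_delivery(self):
        self.delivered = True
        self.execute()              # the contract enforces itself

    def execute(self):
        if self.delivered and not self.settled:
            self.settled = True
            print(f"{self.amount} released to {self.seller}")

contract = EscrowContract("buyer-A", "seller-B", 100)
contract.execute()                  # nothing happens: condition not yet met
contract.confirm_delivery()         # payment triggered on receipt of goods
```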
It is a small step from smart contracts to Decentralized Autonomous Organizations
(DAOs) which are similarly executed by code but, unlike smart contracts, may include a
potentially unlimited number of participants (Buterin, 2014). DAOs remain largely
untested and use cases relating to SDGs are hard to find: nevertheless, indicative of the
infancy of the technology, one major DAO initiative fell victim to misappropriation of
approximately $80m (Price, 2016), indicating the need for further developmental work.
One area where the concept has been developed is in the creation of DLT-mediated
organisations made of people but where the governance structure is encoded directly
into the technical infrastructure, stipulating and enabling the rules and procedures of the
organisation that every member of the organisation will have to abide by. Such design
propositions may help to eliminate fraud and corruption.
_Sharing economy_
The sharing economy has been heralded as one solution to the challenges of
sustainability by promoting environmentally sensitive forms of consumption,
encouraging different models of ownership and addressing issues such as the under
utilisation of assets. However, some scholars recognise a Dark Side (Malhotra and Van
Alstyne, 2014): partly for its tendency to reinforce the contemporary unsustainable
economic paradigm (Martin, 2016); partly because some providers' business models
are argued to be as much about evading regulations as about sharing; partly for
spreading precarity throughout the workforce and for middlemen sucking profits out of
previously un-monetized interactions (Scholz, 2016); and for being unavailable to
disadvantaged groups, those of low socioeconomic status and users from emerging
regions (Thebault-Spieker et al., 2015).
DLTs address some of these criticisms by decentralising and disintermediating.
Embedding sensors into existing assets, our ‘things’ can collect and share data. By
integrating these data into the blockchain, we can keep an immutable ledger of shared
transactions without the need for middlemen (Huckle et al., 2016). La’Zooz[13] is a
decentralized transportation platform owned by the community and utilising vehicles’
unused space, enabling people with private cars to share their drive with others
travelling the same route: a decentralized Uber.
La'Zooz generates new tokens from 'Proof of Movement' not 'Proof of Work.' As they
drive, drivers earn Zooz; passengers pay using Zooz and can also earn Zooz by
providing route advice to drivers. La'Zooz thus offers a ride-sharing service that
is based on truer sharing economy principles, rather than monetary incentives
(Bheemaiah, 2015). The business model moves from rent extraction to value creation in
networks: value is distributed amongst those who created it, offering a greater reward
and opportunity for inclusion.
13 http://lazooz.net/
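In outline, this token flow can be pictured with the toy Python sketch below (ours; the actual La'Zooz protocol is more elaborate and its details are not specified here):

```python
from collections import defaultdict

balances = defaultdict(int)     # Zooz balance per participant

def earn(member, zooz):         # e.g., 'Proof of Movement' while driving
    balances[member] += zooz

def pay(passenger, driver, zooz):
    if balances[passenger] < zooz:
        raise ValueError("insufficient Zooz")
    balances[passenger] -= zooz
    balances[driver] += zooz

earn("driver-1", 10)            # a driver earns tokens by driving
earn("passenger-1", 5)          # passengers can earn, e.g., via route advice
pay("passenger-1", "driver-1", 3)
print(dict(balances))           # {'driver-1': 13, 'passenger-1': 2}
```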
_Financial inclusion_
The opportunity for wider financial inclusion is held up as one of the great promises for
SDGs of DLTs. Through automation, disintermediation, low cost and security of
transfer comes the opportunity for transactions involving low-value units and for
remote, disenfranchised, peripheral and marginal communities to connect in new ways
either amongst themselves or with activities in the wider world. DLTs allow the almost
instantaneous transfer of digital tokens, if not at zero cost then at a significantly cheaper
rate than established services. This makes the transfer of small amounts of currency
economically viable, enabling new actors to enter the field and new opportunities for
e-commerce (Athey, 2015). It might be anticipated, then, that reductions in the cost of
financial transactions through DLTs will result in widening financial inclusion.
One critical factor in enabling greater financial inclusion is identity which, it is argued
(Birch, 2014) will underpin future digital transactions and lies at the heart of realising
the potential of DLT. The question of what defines identity is challenging, not least
because it ‘does not lend itself easily to definition nor does it remain unchangeable’
(Ajana, 2010, p.5). Identities are made up of multiple attributes: date and place of birth,
parents’ names, school, criminal record, employment record, biometrics, papers
published, etc. These attributes reflect who we are and are configurable depending on
whom we need to identify ourselves to and for what purpose.
For most, it is relatively straightforward to assemble authenticated attributes of identity
(passport, utility bill, etc.), but approximately 1.8bn of the world’s population have no
legally recognised identity (Dahan and Gelb, 2015). The reasons are various, but the
consequence is that the ‘identityless’ exist on the margins of society unable formally to
participate in democratic, educative, healthcare and economic activity.
Part of the problem of identitylessness is the extent to which identity has been a
centralised phenomenon, something that, to a large extent, is given to people by some
authority. The affordances of DLTs offer an alternative approach to building identities
from the bottom up, as the gradual accretion of different attributes of identity. This way,
an individual’s identity is not under the control or the gift of any central authority, nor is
it vulnerable to tampering or theft from malicious third parties. Further, individuals are
able to control which attributes may/may not be made public depending on the
authentication need. This is currently an area of intense DLT development including
initiatives from ID2020[14], BitNation[15], BlockchainBorderBank[16], BanQu[17], and
NevTrace[18].
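Schematically (a sketch of ours, not drawn from any of the initiatives listed), such a bottom-up identity can be modelled as an accreting set of attested attributes whose disclosure the holder controls:

```python
import hashlib

class SelfSovereignIdentity:
    """Identity as a growing set of attested attributes; the holder chooses
    which attributes to disclose for a given authentication need."""
    def __init__(self):
        self.attributes = {}    # attribute name -> (value, attestation hash)

    def accrete(self, name, value, attester):
        proof = hashlib.sha256(f"{attester}:{name}:{value}".encode()).hexdigest()
        self.attributes[name] = (value, proof)

    def disclose(self, names):
        return {n: self.attributes[n] for n in names if n in self.attributes}

me = SelfSovereignIdentity()
me.accrete("date_of_birth", "1990-01-01", "registry-office")
me.accrete("school", "Village School", "head-teacher")
print(me.disclose(["date_of_birth"]))   # reveal only what the check requires
```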
**Conclusion**
In 2013, Nobel Prize-winning economist Paul Krugman declared that ‘Bitcoin is evil.’
Others, too, have been critical (Lemieux, 2013; Doguet, 2013; Fox-Brewster, 2015;
Welch, 2015; Böhme et al., 2015). Despite these criticisms, DLTs have also been
heralded as an incremental innovation with the potential for inducing efficiency gains
_and_ ethically empowering business, or as a disruptive innovation (triggering the emergence
of new economic systems) that may prove to be more socially and environmentally
responsible (Swan, 2015; Davidson et al., 2016; Walport, 2016).
This paper has explored, through affordance theory, how DLTs might contribute to the
sustainability agenda. On the face of it, the potential appears significant. DLTs provide
a technical basis for a degree of change that many observers have found exciting. The
way we relate to DLTs is not merely a technical matter but strongly relates to the ways
in which we configure our social world (Reijers and Coeckelbergh, 2016).
Consequently, we propose the notion of Blockchain for Good as an emergent
phenomenon or shared interpretative schema that is being co-constructed by a wide
ecosystem of actors as a means of giving direction and catalyzing actions, choices, and
behaviours (Ranson et al., 1980). Crucially, this approach unlocks the potential for
more detailed examinations of the moral and ethical impetus behind blockchain
projects.
14 http://id2020.org/
15 https://bitnation.co/
16 http://law.mit.edu/blockchainborderbank
17 http://www.banquap.com/
18 http://nevtrace.com/
Within this limited space, we have presented a rather one-sided perspective and are
aware that DLTs are not a universal panacea. The notion of Blockchain for Good
inevitably raises questions about its counter, ‘Blockchain for Bad’ and there exists,
beyond the scope of this paper, a body of cautionary literature. Analysing
cryptocurrencies through the lens of ethical impact, Dierksmeier and Seele (2016) also find
detrimental outcomes, such as the facilitation of nefarious consumption. Physicist
Stephen Hawking, Elon Musk and, as of 12 November 2016, 8,749 others have signed
an open letter counselling against the incautious application of artificial intelligence and
DAOs (Russell et al., 2015). DLTs feel no guilt, regret or remorse. This raises questions
about who will do the coding. As yet, there is little regulation specific to DLT. Still,
might DLTs yet be subsumed by incumbent organizations and authorities as another
tool of control and surveillance, or can they really deliver a more democratic,
egalitarian, collaborative and sustainable society?
DLTs are still at an early stage of development, and it remains unclear in which
direction they will go. The essential premise of technology affordance is that to
understand the uses and consequences of technologies, they must be considered in the
context of their dynamic interactions between people and organizations (Majchrzak and
Markus, 2012); DLTs are a case in point. Dozens of crypto-currencies now exist, each
optimized for different purposes, each idiosyncratic in terms of its operation, uptake,
exchange rate and convertibility. Similarly, others are exploring DLT applications that
are not currency-oriented. Given this variety, further research is required to understand
which type works best in which circumstances and why, as well as the extent to which
they can deliver on the sustainability agenda.
**References**
Ajana B. 2010. Recombinant identities: Biometrics and narrative bioethics. Journal of
_Bioethical Inquiry_ **7: 237-258.**
Alejandro J. 2016. Blockchain for good – a beginner’s guide. Available at:
http://tech.newstatesman.com/guest-opinion/blockchain-for-good-beginners-guide
Ali R, Barrdear J, Clews R, Southgate J. 2014. Innovations in payment technologies and
the emergence of digital currencies. Bank of England Quarterly Bulletin **54(3): 262-275.**
Argandoña A. 2003. The New Economy: Ethical issues. Journal of Business Ethics **44:**
3-22.
Aste T. 2016. The Fair Cost of Bitcoin Proof of Work. Available at:
https://ssrn.com/abstract=2801048
Athey S. 2015. 5 ways digital currencies will change the world. Available at:
https://www.weforum.org/agenda/2015/01/5-ways-digital-currencies-will-change-the-world/
Bateman A. 2015. Tracking the value of traceability. Supply Chain Management Review
(November): 8-10.
Benkler Y, Nissenbaum H. 2006. Commons‐based peer production and virtue. Journal
_of Political Philosophy_ **14: 394-419.**
Bheemaiah K. 2015. Why business schools need to teach about the blockchain.
Available at: http://ssrn.com/abstract=2596465.
Birch D. 2014. Identity Is The New Money. London: London Publishing Partnership.
Böhme R, Christin N, Edelman B, Moore T. 2015. Bitcoin: Economics, technology, and
governance. The Journal of Economic Perspectives **29: 213-238.**
Botsman R, Rogers R. 2010. What's Mine Is Yours: How Collaborative Consumption is
_Changing the Way We Live. HarperBusiness: London._
Beynon-Davies P, Lederman R. 2017. Making sense of visual management
through affordance theory. Production Planning & Control 28(2): 142-157.
Boyd D. 2010. Social network sites as networked publics: Affordances, dynamics
and implications. In: Z. Papacharissi, ed. Networked Self: Identity, Community, and
_Culture on Social Networking Sites. Routledge: New York; 39-58._
Bradbury D. 2013. Colored Coins Paint Sophisticated Future for Bitcoin. Available at:
http://www.coindesk.com/colored-coins-paint-sophisticated-future-for-Bitcoin/.
Buterin V. 2014. DAOs, DACs, DAs and more: An incomplete terminology guide.
Available at:
https://blog.ethereum.org/2014/05/06/daos-dacs-das-and-more-an-incomplete-terminology-guide/
Choy K, Schlagwein D. 2016. Crowdsourcing for a better world: On the relation
between IT affordances and donor motivations in charitable crowdsourcing. Information
_Technology and People 29(1): 221-247._
Ciavola B, Gershenson, J. 2016. Affordance theory for engineering design. Research
_Engineering Design 27: 251-263._
Conole G, Dyke M. 2004. What are the affordances of information and communication
technologies? ALT-J Research in Learning Technology **12(2): 113-124.**
Casalotti A. 2016. Global Carbon Trading on the Blockchain. Bitcoin and Blockchain
Leadership Forum. Available at:
http://cptm.org/documents/CFMM_Brief%202016.pdf
CPTM. 2016. Commonwealth Partnership for Technology Management (CPTM) Brief
on Adaptive Flexibility Approaches to Financial Inclusion in a Digital Age:
Recommendations and Proposals. London: CPTM Smart Partners’ Hub.
Dahan M, Gelb A. 2015. The Role of Identification in the Post-2015 Development
Agenda. World Bank Working Paper.
Davidson S, De Filippi P, Potts J. 2016. Economics of Blockchain. Available at:
http://ssrn.com/abstract=2744751.
Dierksmeier C, Seele P. 2016. Cryptocurrencies and Business Ethics. Journal of
_Business Ethics 1-14._
Doguet J. 2013. The nature of the form: Legal and regulatory issues surrounding Bitcoin
digital currency system. Louisiana Law Review **73(4):1118-1153.**
European Central Bank. 2012. Virtual Currency Schemes. Available at:
https://www.ecb.europa.eu/pub/.../virtualcurrencyschemes201210en.pdf
Fox-Brewster T. 2015. How hackers abused Tor to rob blockchain, steal Bitcoin, target
private email and get away with it. Forbes **24 (02):7.**
Frankena WK.1973. Ethics. Prentice Hall: Englewood Cliffs.
Faraj S, Jarvenpaa S, Majchrzak A. 2011. Knowledge collaboration in online
communities. Organization Science 22(5): 1224-1239.
Gaver W. 1991. Technology Affordances. ACM: New Orleans, Louisiana, New York:
79-84.
Geels FW. 2005. Technological Transitions and System Innovations: A Co
_Evolutionary and Socio-Technical Analysis. Edward Elgar Publishing: Cheltenham UK:_
Gibson JJ. 1978. The ecological approach to the visual perception of pictures. Leonardo
**11: 227-235.**
Greenspan G. 2015. MultiChain Private Blockchain — White Paper. Available at:
http://www.multichain.com/white-paper/: Multichain
Halford R. 2014. Gridcoin - Crypto-Currency using Berkeley Open Infrastructure
Network Computing Grid as a Proof Of Work. Available at:
http://www.gridcoin.us/images/gridcoin-white-paper.pdf
Helbing D. 2013. Economics 2.0: The natural step towards a self-regulating,
participatory market society. Evolutionary and Institutional Economics Review **10: 3-41.**
Helbing D. 2014. Qualified Money - A Better Financial System for the Future.
Available at: https://ssrn.com/abstract=2526022
Huckle S, Bhattacharya R, White M, Beloff N. 2016. Internet of things, blockchain and
shared economy applications. Procedia Computer Science **98: 461-466.**
Jovanovic B, Rousseau PL. 2005. General purpose technologies. Handbook of
_Economic Growth_ **1: 1181-1224.**
Kewell B. 2007. Linking risk and reputation: A research agenda and methodological
analysis. Risk Management: An International Journal **9(4): 238-254.**
Kranzberg M. 1986. Technology and History: ‘Kranzberg's Laws.’ Technology and
_Culture_ **27: 544-560.**
Krugman PR. 2013. Bitcoin is evil. Available at:
http://krugman.blogs.nytimes.com/2013/12/28/Bitcoin-is-evil/?_r=1: New York Times.
Lee E. 2005. The ethics of innovation: P2P software developers and designing
substantial noninfringing uses under the Sony Doctrine. Journal of Business Ethics
**62:147-162.**
Lemieux P. 2013. Who is Satoshi Nakamoto? Regulation (Fall):14-15.
Lankton N, McKnight D, Tripp J. 2015. Technology, humanness, and trust: rethinking
trust in technology. Journal of the Association for Information Systems 16(10): 880-918.
Maier J, Fadel G. 2009. Affordance-based design: A relational theory of design.
_Research Design Engineering 20: 13-27._
Majchrzak A, Markus L. 2012. Technology Affordances and Constraint Theory of MIS.
Thousand Oaks, CA: Sage Publishing.
Malhotra A, Van Alstyne M. 2014. The dark side of the sharing economy… and how to
lighten it. Communications of the ACM **57: 24-27.**
Martin CJ. 2016. The sharing economy: A pathway to sustainability or a nightmarish
form of neoliberal capitalism? Ecological Economics **121: 149-159.**
Martin, K.E, Freeman, R.E. 2004. The separation of technology and ethics in Business
Ethics. Journal of Business Ethics **53: 353-364.**
Mattila J. 2016. The Blockchain Phenomenon: The Disruptive Potential of Distributed
Consensus Architectures. Available at: http://brie.berkeley.edu/BRIE/:
Berkeley Roundtable on the International Economy (BRIE), University of California,
Berkeley.
McWaters R, Galaski R, Chaterjee S. 2016. The Future of Financial Infrastructure: An
Ambitious Look at How Blockchain can Reshape Financial Services. World Economic
_Forum Available at: https://www.weforum.org/reports/the-future-of-financial-_
infrastructure-an-ambitious-look-at-how-blockchain-can-reshape-financial-services/
Nakamoto S. 2008. Bitcoin: A peer-to-peer electronic cash system. Consulted, 1, 28.
Ng I. 2013. Value and Worth: Creating New Markets In The Digital Economy.
Innovorsa: Cambridge.
ODI. 2016. Applying Blockchain Technology in Global Data Infrastructure. Open Data
Institute. Available at:
http://theodi.org/technical-report-blockchain-technology-in-global-data-infrastructure
Pea RD. 1993. Practices for distributed intelligence and designs for education. In
Salomon G (ed) Distributed Cognitions: Psychological and Educational
_Considerations. Cambridge University Press: Cambridge, UK; 47-87._
Potts J, Davidson S, De Filippi P. 2016. Disrupting governance: The new institutional
_economics of Distributed Ledger Technology. Available at:_
http://ssrn.com/abstract=1295507
Price R. 2016. Digital currency Ethereum is cratering amid claims of a $50 million
hack. Business Insider (17 June). Available at:
http://uk.businessinsider.com/dao-hacked-ethereum-crashing-in-value-tens-of-millions-allegedly-stolen-2016-6
Ranson S, Hinings B, Greenwood R. 1980. The structuring of organizational structures.
_Administrative Science Quarterly_ **25: 1-17.**
Reber D, Feuerstein S. 2014. Bitcoins-hype or real alternative? Internet Economics VIII
(81).
Reijers W, Coeckelbergh, M. 2016. The blockchain as a narrative technology:
Investigating the social ontology and normative configurations of cryptocurrencies.
_Philosophy of Technology (October):1-28._
Rizzo P. 2016. Sweden tests blockchain smart contracts for Land Registry. CoinDesk
June 16.
Russell SV, Dewey D, Tegmark M. 2015. Research priorities for robust and beneficial
artificial intelligence: An open letter. Association for the Advancement of Artificial
Intelligence.
Scholz T. 2016. Platform Cooperativism: Challenging the Corporate Sharing Economy.
New York: Rosa Luxemburg Stiftung. Available at: www.rosalux-nyc.org
Schumpeter JA. 1934. The Theory of Economic Development: An Inquiry into Profits,
_Capital, Credit, Interest and the Business Cycle. London, UK: Oxford University Press._
Seidel S, Recker J, Vom Brocke J. 2013. Sensemaking and sustainable practicing:
Functional affordances of information systems in green transformations. MIS Quarterly
**37:** 1275-1299.
Shin L. 2016. Republic of Georgia to develop blockchain Land Registry system.
_Forbes, 21 April._
Smith JW. 1948. Intrinsic and extrinsic good. Ethics **58: 195-208.**
Swan M. 2015. Blockchain: Blueprint for a New Economy. O'Reilly Media, Inc.:
Beijing, Cambridge, Farnham, Köln, Sebastopol, Tokyo.
Taddeo M, Vaccaro A. 2011. Analyzing peer-to-peer technology using information
ethics. Information Society **27: 105-112.**
Taghiyeva M, Mellish B, Ta'eed O. 2016. Currency of Intangible Non-Financial Value.
Available at: https://github.com/seratio/whitepaper: IoV Blockchain Alliance for Good
(www.bisgit.org)
Thebault-Spieker J, Terveen LG, Hecht B. 2015. Avoiding the South Side and the
Suburbs: The Geography of Mobile Crowdsourcing Markets. Proceedings of the 18th
ACM Conference on Computer Supported Cooperative Work and Social Computing;
265-275.
UN. 2015. Transforming our world: The 2030 Agenda for Sustainable Development.
Available at: https://sustainabledevelopment.un.org/post2015/transformingourworld.
Volkoff O, Strong DM. 2013. Critical realism and affordances: Theorizing IT
associated organizational change processes. MIS Quarterly **37: 819-834.**
Xenakis I, Arnellos A. 2013. The relation between interaction aesthetics and
affordances. Design Studies, 34 (1): 57-73.
Walport M. 2016. Distributed Ledger Technology: Beyond Blockchain. Government
Office for Science. Available at:
https://www.gov.uk/government/publications/distributed-ledger-technology-blackett-review
WCED. 1987. Our Common Future. World Commission on Environment and
_Development. Oxford University Press: Oxford, UK._
Welch A. 2015. The Bitcoin blockchain as financial market infrastructure: A
consideration of operational risk. Legislation and Public Policy **8: 837-893.**
Withagen R, Chemero, A. 2012. Affordances and classification: On the significance of a
sidebar in James Gibson's last book. Philosophical Technology 25(4): 521-537.
Wright A, De Filippi P.2015. Decentralized Blockchain Technology and the Rise of Lex
_Cryptographia. Available at:_
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2580664
Zuberi M, Levin R. 2016. Schumpeter's Revenge: The gale of creative destruction.
_Banking and Financial Services Policy Report_ **35(5):1-8.**
Zammuto R, Griffith T, Majchrzak A, Dougherty D, Faraj S. 2007. Information
technology and the changing fabric of organization. Organization Science 18(5): 749-762.
**Biographical Notes**
Beth Kewell is a Research Fellow at Surrey University Business School’s Centre for the
Digital Economy (CoDE), where she specialises in interpretative research, positioned at
the boundary between innovation management, Science and Technology Studies (STS),
and risk analysis.
Correspondence to: Surrey Centre for the Digital Economy, University of Surrey,
Surrey GU2 7XH, UK.
[Email: e.kewell@surrey.ac.uk](mailto:e.kewell@surrey.ac.uk)
Richard Adams is a Senior Research Fellow at Surrey University Business School’s
Centre for the Digital Economy (CoDE). His research interests lie at the intersection of
(responsible) innovation, digital disruption, and sustainability and business models.
Correspondence to: Surrey Centre for the Digital Economy, University of Surrey,
Surrey GU2 7XH, UK.
[Email: r.adams@surrey.ac.uk](mailto:r.adams@surrey.ac.uk)
Glenn Parry is Professor of Strategy and Operations Management at Bristol Business
School, University of the West of England. He is primarily interested in what 'Good'
means for an organisation, exploring value as a measurement of 'goodness'. He uses
business models as a framework to understand value co-creation between provider and
client in context.
Correspondence to: Faculty of Business and Law, UWE Frenchay Campus,
Coldharbour Lane, Bristol, BS16 1QY, UK.
[Email: Glenn.Parry@uwe.ac.uk](mailto:Glenn.Parry@uwe.ac.uk)
|
{
"disclaimer": "Notice: The following paper fields have been elided by the publisher: {'abstract', 'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1002/JSC.2143?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1002/JSC.2143, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "GREEN",
"url": "https://openresearch.surrey.ac.uk/view/delivery/44SUR_INST/12139402000002346/13140709980002346"
}
| 2,017
|
[] | true
| 2017-09-01T00:00:00
|
[] | 13,312
|
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/011f7eb8584458d0676216165e63dbefce1d393a
|
[
"Computer Science"
] | 0.887264
|
MINERVA infinity : A Scalable Efficient Peer-to-Peer Search Engine.
|
011f7eb8584458d0676216165e63dbefce1d393a
|
[
{
"authorId": "2241164441",
"name": "Sebastian Michel"
},
{
"authorId": "47463435",
"name": "P. Triantafillou"
},
{
"authorId": "1751591",
"name": "G. Weikum"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": null,
"id": null,
"issn": null,
"name": null,
"type": null,
"url": null
}
| null |
# MINERVA∞: A Scalable Efficient Peer-to-Peer Search Engine
Sebastian Michel[1], Peter Triantafillou[2], and Gerhard Weikum[1]
1 Max-Planck-Institut für Informatik, 66123 Saarbrücken, Germany
_{smichel, weikum}@mpi-inf.mpg.de_
2 R.A. Computer Technology Institute and University of Patras, 26500 Greece
peter@ceid.upatras.gr
**Abstract. The promises inherent in users coming together to form data**
sharing network communities, bring to the foreground new problems formulated over such dynamic, ever growing, computing, storage, and networking infrastructures. A key open challenge is to harness these highly
distributed resources toward the development of an ultra scalable, efficient search engine. From a technical viewpoint, any acceptable solution
must fully exploit all available resources dictating the removal of any
centralized points of control, which can also readily lead to performance
bottlenecks and reliability/availability problems. Equally importantly,
however, a highly distributed solution can also facilitate pluralism in informing users about internet content, which is crucial in order to preclude
the formation of information-resource monopolies and the biased visibility of content from economically-powerful sources. To meet these challenges, the work described here puts forward MINERVA∞, a novel search
engine architecture, designed for scalability and efficiency. MINERVA∞
encompasses a suite of novel algorithms, including algorithms for creating
data networks of interest, placing data on network nodes, load balancing,
top-k algorithms for retrieving data at query time, and replication algorithms for expediting top-k query processing. We have implemented the
proposed architecture and we report on our extensive experiments with
real-world, web-crawled, and synthetic data and queries, showcasing the
scalability and efficiency traits of MINERVA∞.
## 1 Introduction
The peer-to-peer (P2P) approach facilitates the sharing of huge amounts of data
in a distributed and self-organizing way. These characteristics offer enormous
potential benefit for the development of internet-scale search engines, powerful in terms of scalability, efficiency, and resilience to failures and dynamics.
Additionally, such a search engine can potentially benefit from the intellectual
input (e.g., bookmarks, query logs, click streams, etc.) of a large user community participating in the sharing network. Finally, but perhaps even more
importantly, a P2P web search engine can also facilitate pluralism in informing
users about internet content, which is crucial in order to preclude the formation of information-resource monopolies and the biased visibility of content from
economically powerful sources.
Our challenge therefore was to exploit P2P technology’s powerful tools for
efficient, reliable, large-scale content sharing and delivery to build a P2P web
search engine. We wish to leverage DHT technology and build highly distributed
algorithms and data infrastructures that can render P2P web searching feasible.
The crucial challenge in developing successful P2P Web search engines is
based on reconciling the following high-level, conflicting goals: on the one hand,
to respond to user search queries with high quality results with respect to precision/recall, by employing an efficient distributed top-k query algorithm, and, on
the other hand, to provide an infrastructure ensuring scalability and efficiency
in the presence of a very large peer population and the very large amounts of
data that must be communicated in order to meet the first goal.
Achieving ultra scalability is based on precluding the formation of central
points of control during the processing of search queries. This dictates a solution
that is highly distributed in both the data and computational dimensions. Such a
solution leads to facilitating a large number of nodes pulling together their computational (storage, processing, and communication) resources, in essence increasing
the total resources available for processing queries. At the same time, great care
must be exercised in order to ensure efficiency of operation; that is, ensure that engaging greater numbers of peers does not lead to unnecessary high costs in terms
of query response times, bandwidth requirements, and local peer work.
With this work, we put forward MINERVA, a P2P web search engine
_∞_
architecture, detailing its key design features, algorithms, and implementation.
MINERVA features offer an infrastructure capable of attaining our scalability
_∞_
and efficiency goals. We report on a detailed experimental performance study
of our implemented engine using real-world, web-crawled data collections and
queries, which showcases our engine’s efficiency and scalability. To the authors’
knowledge, this is the first work that offers a highly distributed (in both the
data dimension and the computational dimension), scalable and efficient solution
toward the development of internet-scale search engines.
## 2 Related Work
Recent research on structured P2P systems, such as Chord [17], CAN [13], SkipNets [9] or Pastry [15] is typically based on various forms of distributed hash
tables (DHTs) and supports mappings from keys to locations in a decentralized
manner such that routing scales well with the number of peers in the system.
The original architectures of DHT-based P2P networks are typically limited to
exact-match queries on keys. More recently, the data management community
has focused on extending such architectures to support more complex queries
[10,8,7]. All this related work, however, is insufficient for text queries that consist of a variable number of keywords, and it is absolutely inappropriate for
full-fledged Web search where keyword queries should return a ranked result list
of the most relevant approximate matches [3].
Within the field of P2P Web search, the following work is highly related to our
efforts. Galanx [21] is a P2P search engine implemented using the Apache HTTP
server and BerkeleyDB. The Web site servers are the peers of this architecture;
pages are stored only where they originate from. In contrast, our approach leaves
it to the peers to what extent they want to crawl interesting fractions of the Web
and build their own local indexes, and defines appropriate networks, structures,
and algorithms for scalably and efficiently sharing this information.
PlanetP [4] is a pub/sub service for P2P communities, supporting content
ranking search. PlanetP distinguishes local indexes and a global index to describe
all peers and their shared information. The global index is replicated using a
gossiping algorithm. This system, however, appears to be limited to a relatively
small number of peers (e.g., a few thousand).
Odissea [18] assumes a two-layered search engine architecture with a global
index structure distributed over the nodes in the system. A single node holds the
complete, Web-scale, index for a given text term (i.e., keyword or word stem).
Query execution uses a distributed version of Fagin’s threshold algorithm [5].
The system appears to create scalability and performance bottlenecks at the
single-node where index lists are stored. Further, the presented query execution
method seems limited to queries with at most two keywords. The paper actually
advocates using a limited number of nodes, in the spirit of a server farm.
The system outlined in [14] uses a fully distributed inverted text index, in
which every participant is responsible for a specific subset of terms and manages the respective index structures. Particular emphasis is put on minimizing
the bandwidth used during multi-keyword searches. [11] considers content-based
retrieval in hybrid P2P networks where a peer can either be a simple node or a
directory node. Directory nodes serve as super-peers, which may possibly limit
the scalability and self-organization of the overall system. The peer selection for
forwarding queries is based on the Kullback-Leibler divergence between peer-specific statistical models of term distributions.
Complementary, recent research has also focused into distributed top-k query
algorithms [2,12] (and others mentioned in these papers which are straightforward distributed versions/extensions of traditional centralized top-k algorithms,
such as NRA [6]). Distributed top-k query algorithms are an important component of our P2P web search engine. All these algorithms are concerned with
the efficiency of top-k query processing in environments where the index lists
for terms are distributed over a number of nodes, with index lists for each term
being stored in a single node, and are based on a per-query coordinator which
collects progressively data from the index lists. The existence of a single node
storing a complete index list for a term undoubtedly creates scalability and efficiency bottlenecks, as our experiments have shown. The relevant algorithms
of MINERVA∞ ensure high degrees of distribution for index lists’ data and
distributed processing, avoiding central bottlenecks and boosting scalability.
## 3 The Model
In general, we envision a widely distributed system, comprised of great numbers
of peers, forming a collection with great aggregate computing, communication,
and storage capabilities. Our challenge is to fully exploit these resources in order
to develop an ultra scalable, efficient, internet-content search engine.
We expect that nodes will be conducting independent web crawls, discovering
documents and computing scores of documents, with each score reflecting
a document’s importance with respect to terms of interest. The result of such
activities is the formation of index lists, one for each term, containing relevant
documents and their score for a term. More formally, our network consists of a set
of nodes N, collectively storing a set D of documents, with each document having
a unique identifier docID, drawn from a sufficiently large name space (e.g., 160
bits long). Set T refers to the set of terms. The notation |S| denotes the cardinality of set S. The basic data items in our model are triplets of the form (term,
docID, score). In general, nodes employ some function score(d, t) : D → (0, 1],
which, for some term t, produces the score for document d. Typically, such a
scoring function utilizes tf*idf style statistical metadata.
The model is based on two fundamental operations. The Post(t, d, s) operation, with t ∈ T, d ∈ D, and s ∈ (0, 1], is responsible for identifying a
network node and storing there the (t, d, s) triplet. The operation Query(Ti, k) :
return(Lk), with Ti ⊆ T, k an integer, and Lk = {(d, TotalScore(d)) : d ∈ D,
TotalScore(d) ≥ RankKscore}, is a top-k query operation. TotalScore(d)
denotes the aggregate score for d with respect to terms in Ti. Although there
are several possibilities for the monotonic aggregate function to be used, we employ summation, for simplicity. Hence, TotalScore(d) = Σ_{t∈Ti} score(d, t). For a
given term, RankKscore refers to the k-th highest TotalScore; smin (smax) refers
to the minimum (maximum) score value; and, given a score s, next(s) (prev(s))
refers to the score value immediately following (preceding) s.
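For illustration, the following minimal Python sketch (ours, independent of the MINERVA∞ implementation) spells out these query semantics over per-term score maps, using summation as the monotonic aggregate:

```python
from collections import defaultdict

def total_scores(per_term_scores):
    """per_term_scores: dict term -> dict docID -> score in (0, 1]."""
    totals = defaultdict(float)
    for scores in per_term_scores.values():
        for doc_id, s in scores.items():
            totals[doc_id] += s       # TotalScore(d) = sum over query terms
    return totals

def query(per_term_scores, k):
    """Return the top-k (docID, TotalScore) pairs, highest first."""
    totals = total_scores(per_term_scores)
    return sorted(totals.items(), key=lambda x: -x[1])[:k]

# Example: a two-term query over three documents.
print(query({"t1": {"d1": 0.9, "d2": 0.4}, "t2": {"d1": 0.2, "d3": 0.8}}, 2))
```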
All nodes are connected on a global network G. G is an overlay network,
modeled as a graph G = (N, E), where E denotes the communication links
connecting the nodes. E is explicitly defined by the choice of the overlay network;
for instance, for Chord, E consists of the successor, predecessor, and finger table
(i.e., routing table) links of each node.
In addition to the global network G, encompassing all nodes, our model
employs term-specific overlays, coined Term Index Networks (TINs). I(t) denotes
the TIN for term t and is used to store and maintain all (t, d, s) items. TIN I(t)
is defined as I(t) = (N(t), E(t)), N(t) ⊆ N. Note that nodes in N(t) have,
in addition to the links for participating in G, links needed to connect them
to the I(t) network. The model itself is independent of any particular overlay
architecture.
I(t).n(si) defines the node responsible for storing all triplets (t, d, s) for which
score(d, t) = s = si. When the context is well understood, the same node is
simply denoted as n(s).
## 4 Design Overview and Rationale
The fundamental distinguishing feature of MINERVA∞ is its high distribution
both in the data and computational dimensions. MINERVA∞ goes far
beyond the state of the art in distributed top-k query processing algorithms,
which are based on having nodes storing complete index lists for terms and
running coordinator-based top-k algorithms [2,12]. From a data point of view,
the principle is that the data items needed by top-k queries are the triplets
(term, docID, score) for each queried term (and not the index lists containing
them). A proper distributed design for such systems then should appropriately
distribute these items controllably so as to meet the goals of scalability and efficiency. Thus, data distribution in MINERVA∞ is at the level of this much finer
data grain. From a system’s point of view, the design principle we follow is to
organize the key computations to engage several different nodes, with each node
having to perform small (sub)tasks, as opposed to assigning a single large task
to a single node. These design choices, we believe, will greatly boost scalability
(especially under skewed accesses).
Our approach to materializing this design relies on the employment of the
novel notion of Term Index Networks (TINs). TINs may be formed for every term
in our system, and they serve two roles: First, as an abstraction, encapsulating
the information specific to a term of interest, and second, as a physical manifestation of a distributed repository of the term-specific data items, facilitating
their efficient and scalable retrieval. A TIN can be conceptualized as a virtual
node storing a virtually global index list for a term, which is constructed by the
sorted merging of the separate complete index lists for the term computed at different nodes. Thus, TINs are comprised of nodes which collectively store different
horizontal partitions of this global index list. In practice, we expect TINs to be
employed only for the most popular terms (a few hundred to a few thousand)
whose accesses are expected to form scalability and performance bottlenecks.
We will exploit the underlying network G's architecture and related algo
rithms (e.g., for routing/lookup) to efficiently and scalably create and maintain
TINs and for retrieving TIN data items, from any node of G. In general, TINs
may form separate overlay networks, coexisting with the global overlay G[1].
The MINERVA∞ algorithms are heavily influenced by the way the well-known, efficient top-k query processing algorithms (e.g., [6]) operate, looking
for docIDs within certain ranges of score values. Thus, the networks’ lookup(s)
function will be used with scores s as input, to locate the nodes storing docIDs
with scores s.
A key point to stress here, however, is that top-k queries Q({t1, ..., tr}, k)
can originate from any peer node p of G, which in general is not a member of
any I(ti), i = 1, ..., r and thus p does not have, nor can it easily acquire, the
necessary routing state needed to forward the query to the TINs for the query
terms. Our infrastructure, solves this by utilizing for each TIN a fairly small
number (relative to the total number of data items for a term) of nodes of G
1 In practice, it may not always be necessary or advisable to form full-fledged separate
overlays for TINs; instead, TINs will be formed as straightforward extensions of G:
in this case, when a node n of G joins a TIN, only two additional links are added to
the state of n linking it to its successor and predecessor nodes in the TIN. In this
case, a TIN is simply a (circular) doubly-linked list.
which will be readily identifiable and accessible from any node of G and can act
as gateways between G and this TIN, being members of both networks.
Finally, in order for any highly distributed solution to be efficient, it is crucial
to keep as low as possible the time and bandwidth overheads involved in the
required communication between the various nodes. This is particularly challenging for solutions built over very large scale infrastructures. To achieve this, the
algorithms of MINERVA∞ follow the principles put forward by top-performing,
resource-efficient top-k query processing algorithms in traditional environments.
Specifically, the principles behind favoring sequential index-list accesses over random accesses (in order to avoid high-cost random disk IOs) have been adapted in
our distributed algorithms to ensure that: (i) sequential accesses of the items in
the global, virtual index list dominate, (ii) they require either no communication,
or at most one-hop communication between nodes, and (iii) random accesses
require at most O(log |N|) messages.
To ensure the at-most-one-hop communication requirement for successive sequential accesses of TIN data, the MINERVA∞ algorithms utilize an order-preserving
_hash function, first proposed for supporting range queries in DHT-based_
data networks in [20]. An order-preserving hash function hop() has the property
that for any two values v1, v2, if v1 > v2 then hop(v1) > hop(v2). This guarantees
that data items corresponding to successive score values of a term t are placed
either at the same or at neighboring nodes of I(t). Alternatively, similar functionality can be provided by employing for each I(t) an overlay based on skip
graphs or skip nets [1,9]. Since both order preserving hashing and skip graphs
incur the danger of load imbalances when assigning data items to TIN nodes,
given the expected data skew of scores, load balancing solutions are needed.
The design outlined so far leverages DHT technology to facilitate efficiency
and scalability in key aspects of the system’s operation. Specifically, posting (and
deleting) data items for a term from any node can be done in O(log |N|) time,
in terms of the number of messages. Similarly, during top-k query processing,
the TINs of the terms in the query can also be reached in O(log |N|) messages.
Furthermore, no single node is over-burdened with tasks which can either require
more resources than available, or exhaust its resources, or even stress its resources
for longer periods of time. In addition, as the top-k algorithm is processing
different data items for each queried term, this involves gradually different nodes
from each TIN, producing a highly distributed, scalable solution.
## 5 Term Index Networks
In this section we describe and analyze the algorithms for creating TINs and
populating them with data and nodes.
**5.1** **Beacons for Bootstrapping TINs**
The creation of a TIN has these basic elements: posting data items, inserting
nodes, and maintaining the connectivity of nodes to ensure the efficiency/scalability properties promised by the TIN overlay.
As mentioned, a key issue to note is that any node p in G may need to post
(t, d, s) items for a term t. Since, in general, p is not a member of I(t) and does
not necessarily know members of I(t), efficiently and scalably posting items to
_I(t) from any p becomes non-trivial. To overcome this, a bootstrapping process_
for I(t) is employed which initializes a TIN I(t) for term t. The basic novelty
lies in the special role to be played by nodes coined beacons, which in essence
become gateways, allowing the flow of data and requests between the G and I(t)
networks.
In the bootstrap algorithm, a predefined number of “dummy” items of the
form (t, ⋆, si) is generated in sequence for a set of predefined score values si,
_i = 1, ..., u. Each such item will be associated with a node n in G, where it_
will be stored. Finally, this node n of G will also be made a member of I(t) by
randomly choosing a previously inserted beacon node (i.e., for the one associated
with an already inserted score value sj, 1 ≤ j ≤ i − 1) as a gateway.
The following algorithm details the pseudocode for bootstrapping I(t). It
utilizes an order-preserving hash function hop() : T × (0, 1] → [m], where m is
the size of the identifiers in bits and [m] denotes the name space used for the
overlay (e.g., all 2^160 ids, for 160-bit identifiers). In addition, a standard hash
function h() : (0, 1] → [m] (e.g., SHA-1) is used. The particulars of the order-preserving hash function to be employed will be detailed after the presentation
of the query processing algorithms which they affect. The bootstrap algorithm
selects u “dummy” score values, i/u, i = 1, ..., u, finds for each such score value
the node n in G where it should be placed (using hop()), stores this score there
and inserts n into the I(t) network as well. At first, the I(t) network contains
only the node with the dummy item with score zero. At each iteration, another
node of G is added to I(t) using as gateway the node of G which was added
in the previous iteration to I(t). For simplicity of presentation, the latter node
**Algorithm 1. Bootstrap I(t)**
1: input: u: the number of “dummy” items (t, ⋆, si), i = 1, ..., u
2: input: t: the term for which the TIN is created
3: p = 1/u
4: for i = 1 to u do
5: _s = i × p_
6: _lookup(n.s) = hop(t, s) { n.s in G will become the next beacon node of I(t) }_
7: **if s = p then**
8: N(t) = {n.s}
9: E(t) = ∅ {Initialize I(t) with n.s holding the first dummy item}
10: **end if**
11: **if s ≠ p then**
12: n1 = hop(t, s − p) {insert n(s) into I(t) using node n(s − p) as gateway}
13: call join(I(t), n1, s)
14: **end if**
15: store (t, ⋆, s) at I(t).n(s)
16: end for
can be found by simply hashing for the previous dummy value. A better choice
for distributing the load among the beacons is to select at random one of the
previously-inserted beacons and use it as a gateway.
Obviously, a single beacon per TIN suffices. The number u of beacon scores
is intended to introduce a number of gateways between G and I(t) so to avoid
potential bottlenecks during TIN creation. u will typically be a fairly small
number so the total beacon-related overhead involved in the TIN creation will
be kept small. Further, we emphasize that beacons are utilized by the algorithm posting items to TINs. Post operations will in general be very rare compared to query operations and query processing does not involve the use of
beacons.
Finally, note that the algorithm uses a join() routine that adds a node n(s)
storing score s into I(t) using a node n1 known to be in I(t) and thus, has the
required state. The new node n(s) must occupy a position in I(t) specified by the
value of hop(t, s). Note that this is ensured by using h(nodeID), as is typically
done in DHTs, since these node IDs were selected from the order-preserving
hash function. Besides the side-effect of ensuring the order-preserving position
for the nodes added to a TIN, the join routine is otherwise straightforward: if the
TIN is a full-fledged DHT overlay, join() is updating the predecessor/successor
pointers, the O(log |N|) routing state of the new node, and the routing state of
each I(t) node pointing to it, as dictated by the relevant DHT algorithm. If the
TIN is simply a doubly-linked list, then only the predecessor/successor pointers of
the new node and its neighbors are adjusted.
**5.2** **Posting Data to TINs**
The posting of data items is now made possible using the bootstrapped TINs.
Any node n1 of G wishing to post an item (t, d, s) first locates an appropriate
node of G, n2, that will store this item. Subsequently, it inserts node n2 into I(t).
To do this, it randomly selects a beacon score and associated beacon node, from
all available beacons. This is straightforward given the predefined beacon score
values and the hashing functions used. The chosen beacon node has been made
a member of I(t) during bootstrapping. Thus, it can “escort” n2 into I(t).
The following provides the pseudocode for the posting algorithm. By design,
the post algorithm results in a data placement which introduces two characteristics that will be crucial in ensuring efficient query processing. First, (as the
bootstrap algorithm does) the post algorithm utilizes the order-preserving hash
function. As a result, any two data items with consecutive score values for the
same term will be placed by definition in nodes of G which will become one-hop
neighbors in the TIN for the term, using the join() function explained earlier.
Note that within each TIN there are no ‘holes’. A node n becomes a member
of a TIN network if and only if a data item was posted, with the score value
for this item hashing to n. It is instructing here to emphasize that if TINs were
not formed and instead only the global network was present, in general, any
two successive score values could be falling in nodes which in G could be many
hops apart. With TINs, following successor (or predecessor) links always leads to
**Algorithm 2. Posting Data to I(t)**
1: input: t, d, s: the item to be inserted by a node n1
2: n(s) = hop(t, s)
3: n1 sends (t, d, s) to n(s)
4: if n(s) ∉ N(t) then
5: _n(s) selects randomly a beacon score sb_
6: _lookup(nb) = hop(t, sb) { nb is the beacon node storing beacon score sb }_
7: _n(s) calls join(I(t), nb, s)_
8: end if
9: store (t, d, s) at I(t).n(s)
nodes where the next (or previous) segment of scores has been placed. This feature in essence ensures the at-most-one-hop communication requirement when
accessing items with successive scores in the global virtual index list for a term.
Second, the nodes of any I(t) become responsible for storing specific segments
(horizontal partitions) of the global virtual index list for t. In particular, an I(t)
node stores all items for t for a specific (range of) score value, posted by any
node of the underlying network G.
**5.3** **Complexity Analysis**
The bootstrapping I(t) algorithm is responsible for inserting u beacon items. For
each beacon item score, the node n.s is located by applying the hop() function
and routing the request to that node (step 5). This will be done using G’s lookup
algorithm in O(log |N|) messages. The next key step is to locate the previously
inserted beacon node (step 11) (or any beacon node at random) and sending
it the request to join the TIN. Step 11 again involves O(log |N|) messages. The
actual join() routine will cost O(log² |N(t)|) messages, which is the standard
join() message complexity for any DHT of size |N(t)|. Therefore, the total cost is
O(u × (log |N| + log² |N(t)|)) messages.
The analysis for the posting algorithm is very similar. For each post(t, d, s)
operation, the node n where this data item should be stored is located and
the request is routed to it, costing O(log |N|) messages (step 3). Then a random
beacon node is located, costing O(log |N|) messages, and then the join() routine is
called from this node, costing O(log² |N(t)|) messages. Thus, each post operation
has a complexity of O(log |N|) + O(log² |N(t)|) messages.
Note that both of the above analyses assumed that each I(t) is a full-blown
DHT overlay. This permits a node to randomly select any beacon node to use
to join the TIN. Alternatively, if each I(t) is simply a (circular) doubly-linked
list, then a node can join a TIN using the beacon storing the beacon value that
immediately precedes the posted score value. This requires O(log |N|) hops
to locate this beacon node. However, since in this case the routing state for each
node of a TIN consists of only the two (predecessor and successor) links, the cost
to join is in the worst case O(|N(t)|), since after locating the beacon node with
the previous beacon value, O(|N(t)|) successor pointers may need to be followed
in order to place the node in its proper order-preserving position. Thus, when
TINs are simple doubly-linked lists, the complexity of both the bootstrap and
post algorithms is O(log |N| + |N(t)|) messages.
## 6 Load Balancing
**6.1** **Order-Preserving Hashing**
The order preserving hash function to be employed is important for several reasons. First, for simplicity, the function can be based on a simple linear transform.
Consider hashing a value f(s) : (0, 1] → I, where f(s) transforms a score s into
an integer; for instance, f(s) = 10^6 × s. Function hop() can then be defined as

$$h_{op}(s) = \left( \frac{f(s) - f(s_{min})}{f(s_{max}) - f(s_{min})} \times 2^m \right) \bmod 2^m \qquad (1)$$
Although such a function is clearly order-preserving, it has the drawback that
it produces the same output for items of equal scores of different terms. This
leads to the same node storing for all terms all items having the same score. This
is undesirable since it cannot utilize all available resources (i.e., utilize different
sets of nodes to store items for different terms). To avoid this, hop() is refined
to take as input the term name, which provides the necessary functionality, as
follows.
$$h_{op}(t, s) = \left( h(t) + \frac{f(s) - f(s_{min})}{f(s_{max}) - f(s_{min})} \times 2^m \right) \bmod 2^m \qquad (2)$$
The term h(t) adds a different random offset for different terms, initiating the
search for positions of term score values at different, random, offsets within the
namespace. Thus, by using the h(t) term in hop(t, s) the result is that any data
items having equal scores but for different terms are expected to be stored at
different nodes of G.
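For illustration, a minimal Python sketch of hop(t, s) as given in Eq. (2) follows (ours; it assumes m = 160, f(s) = 10^6 × s, smin = 0, smax = 1, and SHA-1 in the role of h(t)):

```python
import hashlib

M = 160                        # identifier size in bits
SPACE = 2 ** M                 # the name space [m]
S_MIN, S_MAX = 0.0, 1.0        # assumed score bounds for (0, 1]

def f(s):
    return int(10**6 * s)      # linear transform of a score to an integer

def h(term):                   # fixed, pseudo-random per-term offset
    return int(hashlib.sha1(term.encode()).hexdigest(), 16) % SPACE

def hop(term, s):
    frac = (f(s) - f(S_MIN)) / (f(S_MAX) - f(S_MIN))
    return (h(term) + int(frac * SPACE)) % SPACE

# Within one term, higher scores map to higher positions relative to the
# term's random starting offset, so order is preserved.
assert (hop("music", 0.8) - h("music")) % SPACE > \
       (hop("music", 0.3) - h("music")) % SPACE
```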
Another benefit stems from ameliorating the storage load imbalances that
result from the non-uniform distribution of score values. Assuming a uniform
placement of nodes in G, the expected non-uniform distribution of scores will
result in a non-uniform assignment of scores to nodes. Thus, when viewed from
the perspective of a single term t, the nodes of I(t) will exhibit possibly severe
storage load imbalances. However, assuming the existence of large numbers of
terms (e.g., a few thousand), and thus data items being posted for all these
terms over the same set of nodes in G, given the randomly selected starting
offsets for the placement of items, it is expected that the severe load imbalances
will disappear. Intuitively, overburdened nodes for the items of one term are
expected to be less burdened for the items of other terms and vice versa.
But even with the above hash function, very skewed score distributions will
lead to storage load imbalances. Expecting that exponential-like distributions
of score values will appear frequently, we developed a hash function that is
order-preserving and handles load imbalances by assigning score segments of
exponentially decreasing sizes to an exponentially increasing number of nodes.
For instance, the sparse top 1/2 of the scores distribution is to be assigned to a
single node, the next 1/4 of scores is to be assigned to 2 nodes, the next 1/8 of
scores to 4 nodes, etc. The details of this are omitted for space reasons.
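Since those details are omitted, the Python sketch below (ours) shows one plausible realization: each successive level covers a score segment of half the width but is served by twice as many nodes.

```python
import math

def exp_segment_node(s, max_level=8):
    """Map a score s in (0, 1] to (level, node index within the level):
    level 0 = top 1/2 of scores on 1 node, level 1 = next 1/4 on 2 nodes, ..."""
    level = 0 if s >= 1.0 else min(int(math.floor(-math.log2(s))), max_level)
    lo, hi = 2.0 ** -(level + 1), 2.0 ** -level   # score segment of the level
    nodes = 2 ** level                            # nodes serving this segment
    idx = max(0, min(int((s - lo) / (hi - lo) * nodes), nodes - 1))
    return level, idx

print(exp_segment_node(0.9))    # (0, 0): sparse top half -> a single node
print(exp_segment_node(0.3))    # (1, 0): next quarter -> one of 2 nodes
print(exp_segment_node(0.45))   # (1, 1)
```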
**6.2** **TIN Data Migration**
Exploiting the key characteristics of our data, MINERVA∞ can ensure further
load balancing with small overheads. Specifically, index list data entries are
small in size and are very rarely posted and/or updated. In this subsection we
outline our approach for improved load balancing.
We require that each peer posting index list entries first computes an (equi-width) histogram of its data with respect to its score distribution. Assuming a targeted |N(t)| number of nodes for the TIN of term t, it can create |N(t)| equal-size partitions, with lowscore_i, highscore_i denoting the score ranges associated with partition i, i = 1, ..., |N(t)|. Then it can simply utilize the posting algorithm shown earlier, posting using the lowscore_i scores for each partition. The only exception to the previously shown post algorithm is that the posting peer now posts at each iteration a complete partition of its index list, instead of just a single entry.
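A minimal Java sketch of this partitioning step is given below, assuming the scores fit in memory and that there are at least as many scores as targeted nodes; the class and field names are illustrative, and the actual posting via hop(t, lowScore_i) is left out.

```java
import java.util.Arrays;

public class IndexListPartitioner {

    /** Score range and size of one equal-size partition of an index list. */
    static final class Partition {
        final double lowScore, highScore;
        final int size;
        Partition(double lowScore, double highScore, int size) {
            this.lowScore = lowScore; this.highScore = highScore; this.size = size;
        }
    }

    /**
     * Splits the scores of an index list into numNodes near-equal-size
     * partitions of consecutive scores, one per targeted TIN node.
     * Each partition would then be posted using hop(t, partition.lowScore).
     * Assumes scores.length >= numNodes.
     */
    static Partition[] partition(double[] scores, int numNodes) {
        double[] sorted = scores.clone();
        Arrays.sort(sorted);                            // ascending score order
        Partition[] parts = new Partition[numNodes];
        int base = sorted.length / numNodes;
        int remainder = sorted.length % numNodes;
        int from = 0;
        for (int i = 0; i < numNodes; i++) {
            int size = base + (i < remainder ? 1 : 0);  // spread the remainder evenly
            parts[i] = new Partition(sorted[from], sorted[from + size - 1], size);
            from += size;
        }
        return parts;
    }
}
```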
The above obviously can guarantee perfect load balancing. However, subsequent postings (typically by other peers) may create imbalances, since different index lists may have different score distributions. Additionally, when ensuring overall load balancing over multiple index lists being posted by several peers, the order-preserving property of the placement must be guaranteed. Our approach for solving these problems is as follows. First, the posting peer is again required to compute a histogram of its index list. Second, the histogram of the TIN data (that is, the entries already posted) is stored at easily identifiable nodes. Third, the posting peer is required to retrieve this histogram and 'merge' it with its own. Fourth, the same peer identifies how the total data must now be split into |N(t)| equal-size partitions of consecutive scores. Finally, it identifies all data movements (from TIN peer to TIN peer) necessary to redistribute the total TIN data so that load balancing and order preservation are ensured.
A detailed presentation of the possible algorithms for this last step and their respective comparison is beyond the scope of this paper. We simply mention that the total TIN data size is expected to be very small (in actual number of bytes stored and moved). For example, even with several dozen peers posting different, even large, multi-million-entry index lists, the complete TIN data size will total a few hundred MBs, creating a total data transfer movement equivalent to that of downloading a few dozen MP3 files. Further, posting index list data to TINs is expected to be a very infrequent operation (compared to search queries). As a result, ensuring load balancing across TIN nodes proves to be relatively inexpensive.
**6.3** **Discussion**
The approaches to index list data posting outlined in the previous two sections can be used competitively or even be combined. When posting index lists with exponential score distributions, by design the posting of data using the order-preserving hash function of Section 5.1 will be adequately load balanced, and nothing else is required. Conversely, when histogram information is available and can be computed by posting peers, the TIN data migration approach will yield load-balanced data placement.
A more subtle issue is that posting with the order-preserving hash function also facilitates random accesses of the TIN data, based on random score values. That is, by hashing for any score, we can find the TIN node holding the entries with this score. This becomes essential if the web search engine is to employ top-k query algorithms which are based on random accesses of scores. In this work, our top-k algorithms avoid random accesses by design. However, the above point should be kept in mind, since there are recently proposed distributed top-k algorithms that rely on random accesses, and more efficient algorithms may be proposed in the future.
## 7 Top-k Query Processing
The algorithms in this section focus on how to exploit the infrastructure presented previously in order to efficiently process top-k queries. The main efficiency
metrics are query response times and network bandwidth requirements.
**7.1** **The Basic Algorithm**
Consider a top-k query of the form Q({t1, ..., tr}, k) involving r terms that is generated at some node n_init of G. Query processing is based on the following ideas. It proceeds in phases, with each phase involving 'vertical' and 'horizontal' communication between the nodes within TINs and across TINs, respectively. The vertical communication between the nodes of a TIN occurs in parallel across all r TINs named in the query, gathering a threshold number of data items from each term. There is a moving coordinator node that gathers the data items from all r TINs, enabling it to compute estimates of the top-k result. Intermediate estimates of the top-k list are passed around as the coordinator role moves from node to node in the next phase, where more data items are gathered and the next top-k result estimate is computed.
The presentation shows separately the behavior of the query initiator, the
(moving) query coordinator, and the TIN nodes.
**Query Initiator**
The initiator calculates the set of start nodes, one for each term, where the
query processing will start within each TIN. Also, it randomly selects one of the
nodes (for one of the TINs) to be the initial coordinator. Finally, it passes on the
query and the coordinator ID to each of the start nodes, to initiate the parallel
vertical processing within TINs.
The following pseudocode (Algorithm 3) details the behavior of the initiator.

**Algorithm 3. Top-k QP: Query Initiation at node G.n_init**
1: input: given query Q = {t1, ..., tr}, k:
2: for i = 1 to r do
3:   startNode_i = I(t_i).n(s_max) = hop(t_i, s_max)
4: end for
5: randomly select c from [1, ..., r]
6: coordID = I(t_c).n(s_max)
7: for i = 1 to r do
8:   send to startNode_i the data (Q, coordID)
9: end for

**Processing Within Each TIN**

Processing within a TIN is always initiated by the start node. There is one start node per communication phase of the query processing. In the first phase, the
start node is the top node in the TIN which receives the query processing request
from the initiator. The start node then starts the gathering of data items for
the term by contacting enough nodes, following successor links, until a threshold
number γ (that is, a batch size) of items has been accumulated and sent to the
coordinator, along with an indication of the maximum score for this term which
has not been collected yet, which is actually either a locally stored score or the
maximum score of the next successor node. The latter information is critical for
the coordinator in order to intelligently decide when the top-k result list has
been computed and terminate the search. In addition, each start node sends to
the coordinator the ID of the node of this TIN to be the next start node, which is
simply the next successor node of the last accessed node of the TIN. Processing
within this TIN will be continued at the new start node when it receives the
next message from the coordinator starting the next data-gathering phase.
Algorithm 4 presents the pseudocode for TIN processing.
**Algorithm 4. Top-k QP: Processing by a start node within a TIN**
1: input: a message either from the initiator or the coordinator
2: tCollection_i = ∅
3: n = startNode_i
4: while |tCollection_i| < γ do
5:   while |tCollection_i| < γ AND more items exist locally do
6:     define the set of local items L = {(t_i, d, s) in n}
7:     send to coordID: L
8:     |tCollection_i| = |tCollection_i| + |L|
9:   end while
10:  n = succ(n)
11: end while
12: bound_i = max score stored at node n
13: send to coordID: n and bound_i
Recall that, because of the manner in which items and nodes have been placed in a TIN, by following succ() links items are collected starting from the item with the highest score posted for this term and proceeding in descending order of scores.
**Moving Query Coordinator**
Initially, the coordinator is randomly chosen by the initiator to be one of the original start nodes. First, the coordinator uses the received collections and runs a version of the NRA top-k processing algorithm, locally producing an estimate of the top-k result. As is also the case with classical top-k algorithms, the exact result is not available at this stage since only a portion of the required information is available. Specifically, some documents with a high enough TotalScore to qualify for the top-k result are still missing. Additionally, some documents may have been seen in only a subset of the collections received from the TINs so far, and thus some of their scores are missing, yielding only a partially known TotalScore.
A key to the efficiency of the overall query processing is the ability to prune the search and terminate the algorithm even in the presence of missing documents and missing scores. To do this, the coordinator first computes an estimate of the top-k result, which includes only documents whose TotalScores are completely known, defining the RankKscore value (i.e., the smallest score in the top-k list estimate). Then, it utilizes the bound_i values received from each start node. When a score for a document d is missing for term i, it can be replaced with bound_i to estimate TotalScore(d). This is done for all such d with missing scores. If RankKscore > TotalScore(d) for all d with missing scores, then there is no need to continue the process of finding the missing scores, since the associated documents could never belong to the top-k result. Similarly, if RankKscore > Σ_{i=1,...,r} bound_i, then there is no need to try to find any other documents, since they could never belong to the top-k result. When both of these conditions hold, the coordinator terminates the query processing and returns the top-k result to the initiator.
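These two stopping tests can be captured in a few lines. The following hedged Java sketch assumes the coordinator already holds the current RankKscore, the per-term bounds bound_i, and each candidate's partially known TotalScore; all names are illustrative, not the paper's actual code.

```java
import java.util.List;

public class CoordinatorTermination {

    /** bestScore(d): partial TotalScore plus bound_i for every missing term. */
    static double bestScore(double worstScore, boolean[] scoreMissing, double[] bounds) {
        double best = worstScore;
        for (int i = 0; i < bounds.length; i++) {
            if (scoreMissing[i]) best += bounds[i];
        }
        return best;
    }

    /**
     * Returns true when query processing may stop: neither a seen candidate
     * nor a completely unseen document can still exceed the rank-k score.
     */
    static boolean canTerminate(double rankKScore, double[] bounds,
                                List<Double> candidateBestScores) {
        double unseenBest = 0.0;
        for (double b : bounds) unseenBest += b;      // Σ_i bound_i
        if (rankKScore <= unseenBest) return false;   // an unseen doc could still qualify
        for (double best : candidateBestScores) {
            if (best >= rankKScore) return false;     // a seen candidate could still qualify
        }
        return true;
    }
}
```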
If the processing must continue, the coordinator starts the next phase, sending a message to the new start node for each term, whose ID was received in the message containing the previous data collections. In this message the coordinator also indicates the ID of the node which becomes the coordinator in the next phase. The next coordinator is defined to be the node in the same TIN as the previous coordinator whose data is to be collected next in the vertical processing of this TIN (i.e., the next start node at the coordinator's TIN). Alternatively, any other start node can be randomly chosen as the coordinator.
Algorithm 5 details the behavior of the coordinator.
**Algorithm 5. Top-k QP: Coordination**
1: input: for each i: tCollection_i and newstartNode_i and bound_i
2: tCollection = ∪_i tCollection_i
3: compute a (new) top-k list estimate using tCollection, and RankKscore
4: candidates = {d | d ∉ top-k list}
5: for all d ∈ candidates do
6:   worstScore(d) is the partial TotalScore of d
7:   bestScore(d) := worstScore(d) + Σ_{j ∈ MT} bound_j {where MT is the set of term ids with missing scores}
8:   if bestScore(d) < RankKscore then
9:     remove d from candidates
10:  end if
11: end for
12: if candidates is empty then
13:   exit()
14: end if
15: if candidates is not empty then
16:   coordID_new = pred(n)
17:   calculate new size threshold γ
18:   for i = 1 to r do
19:     send to startNode_i the data (coordID_new, γ)
20:   end for
21: end if

**7.2** **Complexity Analysis**

The overall complexity has three main components: the cost incurred for (i) the communication between the query initiator and the start nodes of the TINs, (ii) the vertical communication within a TIN, and (iii) the horizontal communication between the current coordinator and the current set of start nodes.

The query initiator needs to look up the identity of the initial start nodes for each one of the r query terms and route to them the query and the chosen coordinator ID. Using the G network, this incurs a communication complexity of O(r × log|N|) messages. Denoting with depth the average (or maximum) number of nodes accessed during the vertical processing of TINs, overall O(r × depth) messages are incurred due to TIN processing, since subsequent accesses within a TIN require, by design, one-hop communication. Each horizontal communication in each phase of query processing between the coordinator and the r start nodes requires O(r × log|N|) messages. Since such horizontal communication takes place at every phase, this yields a total of O(phases × r × log|N|) messages. Hence, the total communication cost complexity is

$$cost = O(phases \times r \times \log|N| + r \times \log|N| + r \times depth) \qquad (3)$$
This total cost is the worst case cost; we expect that the cost incurred in
most cases will be much smaller, since horizontal communication across TINs
can be much more efficient than O(log|N|), as follows. The query initiator can
first resolve the ID of the coordinator (by hashing and routing over G) and
then determine its actual physical address (i.e., its IP address), which is then
forwarded to each start node. In turn, each start node can forward this from
successor to successor in its TIN. In this way, at any phase of query processing,
the last node of a TIN visited during the vertical processing, can send the data
collection to the coordinator using the coordinator’s physical address. The current coordinator also knows the physical address of the next coordinator (since
this was the last node visited in its own TIN from which it received a message
with the data collection for its term) and of the next start node for all terms
(since these are the last nodes visited during vertical processing of the TINs,
from which it received a message). Thus, when sending the message to the next
start nodes to continue vertical processing, the physical addresses can be used.
The end result of this is that all horizontal communication requires one message, instead of O(log|N|) messages. Hence, the total communication cost complexity now becomes

$$cost = O(phases \times r + r \times \log|N| + r \times depth) \qquad (4)$$

As nodes are expected to be joining and leaving the underlying overlay network G occasionally, the physical addresses used to derive the cost of (4) will sometimes not be valid. In this case, the reported errors will lead to nodes using the high-level IDs instead of the physical addresses, in which case the cost is that given by (3).
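To put these formulas in perspective with the settings used later in Section 9 (|N| = 10,000 nodes, hence log|N| ≈ 14 overlay hops, and up to r = 4 query terms), and assuming for illustration phases = 5 and depth = 20 (these two values are assumptions, not measurements), Eq. (4) yields on the order of 5 × 4 + 4 × 14 + 4 × 20 = 156 messages per query, whereas Eq. (3) would pay the r × log|N| term in every phase, i.e., roughly 5 × 4 × 14 + 4 × 14 + 4 × 20 = 416 messages.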
## 8 Expediting Top-k Query Processing
In this section we develop optimizations that can further speedup the performance of top-k query processing. These optimizations are centered on: (i) the
‘vertical’ replication of term-specific data among the nodes of a TIN, and (ii)
the ‘horizontal’ replication of data across TINs.
**8.1** **TIN Data Replication**
There are two key characteristics of the data items in our model, which permit
their large-scale replication. First, data items are rarely posted and even more
rarely updated. Second, data items are very small in size (e.g. < 50 bytes each).
Hence, replication protocols will not cost significantly either in terms of replica
state maintenance, or in terms of storing the replicas.
**Vertical Data Replication.** The issue to be addressed here is how to appropriately replicate term data within TIN peers so as to gain in efficiency. The basic structure of the query processing algorithm presented earlier facilitates the easy incorporation of a replication protocol into it. Recall that in each TIN I(t), query processing proceeds in phases, and in each phase a TIN node (the current start node) is responsible for visiting a number of other TIN nodes, a successor at a time, so that enough (i.e., a batch size of) data items for t are collected. The last visited node in each phase, which has collected all data items, can initiate a 'reverse' vertical communication, in parallel to sending the collection to the coordinator. With this reverse vertical communication thread, each node in the reverse path sends to its predecessor only the data items it has not seen. In the end, all nodes in the path from the start node to the last node visited will eventually receive a copy of all items collected during this phase, each storing locally the pair (lowestscore, highestscore) marking its lowest and highest locally stored scores. Since this is straightforward, the pseudocode is omitted for space reasons.

Since a new posting involves all (or most) of the nodes in these paths, each node knows when to initiate a new replication to account for the new items.
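Since the pseudocode is omitted in the paper, the following is a minimal Java sketch of one plausible reading of the reverse-propagation idea, assuming each node can address its predecessor and test item membership locally; all names are illustrative.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class ReverseReplication {

    /** A TIN node with local item storage and a link to its predecessor. */
    static class TinNode {
        final Set<String> localItems = new HashSet<>();
        TinNode predecessor;

        /**
         * Receives the items collected in this phase, keeps the ones it has
         * not seen, and forwards those still-unseen items to its predecessor
         * (which, by construction of the path, has not seen them either).
         */
        void replicateBackwards(List<String> collected) {
            List<String> unseenUpstream = new ArrayList<>();
            for (String item : collected) {
                if (localItems.add(item)) {        // true iff newly stored here
                    unseenUpstream.add(item);
                }
            }
            if (predecessor != null && !unseenUpstream.isEmpty()) {
                predecessor.replicateBackwards(unseenUpstream);
            }
        }
    }
}
```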
**Exploiting Replicas.** The start node selected by the query initiator no longer needs to perform a successor-at-a-time traversal of the TIN in the first phase, since the needed data (replicas) are stored locally. However, vertical communication was also useful for producing the ID of the next start node for this TIN. A subtle point to note here is that the coordinator can itself determine the new start node for the next phase, even without explicitly receiving this ID at the end of vertical communication. This can simply be done using the minimum score value (bound_i) it has received for term t_i; the ID of the next start node is found by hashing for score prev(bound_i).
Additionally, the query initiator can select as start nodes the nodes responsible for storing a random (expectedly high) score, rather than always the maximum score as it does up to now. Similarly, when selecting the ID of the next start node for the next batch retrieval for a term, the coordinator can choose to hash for a score value that is lower than the score prev(bound_i). Thus, random start nodes within a TIN are selected at different phases, and these gather the next batch of data from the proper TIN nodes, using the TIN DHT infrastructure for efficiency. The details of how this is done are omitted for space reasons.
**Horizontal Data Replication.** TIN data may also be replicated horizontally. The simplest strategy is to create replicated TINs for popular terms. This involves posting data into all TIN replicas. The same algorithms can be used as before for posting, except that when hashing, instead of using the term t as input to the hash function, each replica of t must be specified (e.g., t.v, where v stands for a version/replica number). Again, the same algorithms can be used for processing queries, with the exception that each query can now select one of the replicas of I(t) at random.
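A hedged Java sketch of this replica-naming convention follows, reusing the hop(...) sketch shown earlier and assuming scores normalized to [0, 1]; the method names are illustrative assumptions.

```java
import java.util.concurrent.ThreadLocalRandom;

public class HorizontalReplicas {

    /** Posts an entry to every replica TIN of term t (named t.0, t.1, ..., t.v-1). */
    static void postToAllReplicas(String term, int replicas, double score) {
        for (int v = 0; v < replicas; v++) {
            long node = OrderPreservingHash.hop(term + "." + v, score, 0.0, 1.0);
            // ... send the (term, doc, score) entry to `node` over the overlay
        }
    }

    /** Queries pick one replica of I(t) uniformly at random. */
    static String replicaForQuery(String term, int replicas) {
        int v = ThreadLocalRandom.current().nextInt(replicas);
        return term + "." + v;
    }
}
```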
Overall, TIN data replication leads to savings in the number of messages and
response time speedups. Furthermore, several nodes are off-loaded since they
no longer have to partake in the query processing process. With replication,
therefore, the same number of nodes overall will be involved in processing a
number of user queries, except that each query will be employing a smaller set
of peers, yielding response time and bandwidth benefits. In essence, TIN data
replication increases the efficiency of the engine, without adversely affecting its
scalability. Finally, it should be stressed that such replication will also improve
the availability of data items and thus replication is imperative. Indirectly, for
the same reason the quality of the results with replication will be higher, since
lost items inevitably lead to errors in the top-k result.
## 9 Experimentation
**9.1** **Experimental Testbed**
Our implementation was written in Java. Experiments were performed on 3 GHz Pentium PCs. Since deploying full-blown, large networks is not an option, we opted for simulating large numbers of nodes as separate processes on the same PC, executing the real MINERVA∞ code. A 10,000-node network was simulated.
A real-world data collection was used in our experiments: GOV. The GOV collection consists of the data of the TREC-12 Web Track and contains roughly 1.25 million (mostly HTML and PDF) documents obtained from a crawl of the .gov Internet domain (with a total index list size of 8 GB). The original 50 queries from the Web Track's distillation task were used. These are term queries, with each query containing up to 4 terms. The index lists contained the original document scores computed as tf * log idf; tf and idf were normalized by the maximum tf value of each document and the maximum idf value in the corpus, respectively. In addition, we employed an extended GOV (XGOV) setup, with a larger number of query terms and associated index lists. The original 50 queries were expanded by adding new terms from synonyms and glosses taken from the WordNet thesaurus (http://www.cogsci.princeton.edu/~wn). The expansion yielded queries with, on average, twice as many terms, up to 18 terms.
**9.2** **Performance Tests and Metrics**
**Efficiency Experiments.** The data (index list entries) for the terms to be queried were first posted. Then, the GOV/XGOV benchmark queries were executed in sequence. For simplicity, the query initiator node assumed the role of a fixed coordinator. The experiments used the following metrics:
_Bandwidth. This shows the number of bytes transferred between all the nodes_
involved in processing the benchmarks’ queries. The benchmarks’ queries were
grouped based on the number of terms they involved. In essence, this grouping
created a number of smaller sub-benchmarks.
_Query Response Time._ This represents the elapsed, "wall-clock" time for running the benchmark queries. We report on the wall-clock times per sub-benchmark and for the whole GOV and XGOV benchmarks.
_Hops._ This reports the number of messages sent over our network infrastructures to process all queries. For communication over the global DHT G, the number of hops was set to log|N| (i.e., when the query initiator contacts the first set of start nodes for each TIN). Communication between peers within a TIN requires, by design, one hop at a time.
To avoid the overestimation of response times due to the competition between all processes for the PC's disk and network resources, and in order to produce reproducible and comparable results for tests run at different times, we opted for simulating disk IO latency and network latency. Specifically, each random disk IO was modeled to incur a disk seek and rotational latency of 9 ms, plus a transfer delay dictated by a transfer rate of 8 MB/s. For network latency we utilized typical round trip times (RTTs) of packets and transfer rates achieved for larger data transfers between widely distributed entities [16]. We assumed an RTT of 100 ms. When peers simply forward the query to a next peer, this is assumed to take roughly 1/3 of the RTT (since no ACKs are expected). When peers send more data, the additional latency is dictated by a "large" data transfer rate of 800 Kb/s, which includes the sender's uplink bandwidth, the receiver's downlink bandwidth, and the average internet bandwidth typically witnessed.[2]
**Scalability Experiments.** The tested scenarios varied the query load to the system, measuring the overall time required to complete the processing of all queries in a queue of requests. Our experiments used a queue of identical queries involving four terms, with varying index list characteristics. Two of these terms had small index lists (with over 22,000 and over 42,000 entries) and the other two lists had sizes of over 420,000 entries. For each query the (different) query-initiating peer played the role of the coordinator.
The key here is to measure contention for resources and its limits on the possible parallelization of query processing. Each TIN peer uses its disk and its uplink bandwidth to forward the query to its TIN successor and to send data to the coordinator. Uplink/downlink bandwidths were set to 256 Kbps/1 Mbps. Similarly, the query initiator utilizes its downlink bandwidth to receive the batches of data in each phase and its uplink bandwidth to send off the query to the next TIN start nodes. These delays define the possible parallelization of query execution. By involving the two terms with the largest index lists in the queries, we ensured the worst possible parallelization (for our input data), since they induced the largest batch size, requiring the most expensive disk reads and communication.
**9.3** **Performance Results**
Overall, each benchmark experiment required between 2 and 5 hours of real-time execution, a big portion of which was used up by the posting procedure. Figures 1 and 2 show the bandwidth, response time, and hops results for the GOV and XGOV group-query benchmarks. Note that different query groups have in general mutually incomparable results, since they involve different index lists with different characteristics (such as size, score distributions, etc.).
In XGOV the biggest overhead was introduced by the 8 7-term and 6 11-term queries. Table 1 shows the total benchmark execution times, network bandwidth consumption, as well as the number of hops for the GOV and XGOV benchmarks. Generally, for each query, the number of terms and the size of the corresponding index list data are the key factors. The central insight here is that the choice of the NRA algorithm was the most important contributor to the overhead. The adaptation of more efficient distributed top-k algorithms within MINERVA∞ (such as our own [12], which also disallow random accesses) can reduce this overhead by one to two orders of magnitude. This is due to the fact that the top-k result can be produced without needing to delve deeply into the index lists' data, resulting in drastically fewer messages, bandwidth, and time requirements.
2 This figure is the average throughput value measured (using one-stream, one-CPU machines) in experiments conducted for measuring wide area network throughput (sending 20 MB files between SLAC nodes (Stanford's Linear Accelerator Centre) and nodes in Lyon, France [16]) using NLANR's iPerf tool [19].
**Fig. 1. GOV Results: Bandwidth, Execution Time, and Hops** (per number of query terms, 2–4).
**Fig. 2. XGOV Results: Bandwidth, Execution Time, and Hops** (per number of query terms, 4–18).
**Table 1. Total GOV and XGOV Results**
The 2-term queries introduced the biggest overheads. There are 29 2-term, 7
3-term, and 4 4-term queries in GOV.
**Fig. 3. Scalability Results** (total execution time vs. query load/queue size, for MINERVA∞ and for execution without parallel processing).

Figure 3 shows the scalability experiment results. Query loads tested represent queue sizes of 10, 100, 1000, and 10000 identical queries simultaneously arriving into the system. This figure also shows what the corresponding time would be if the parallelization contributed by the MINERVA∞ architecture were not possible; this would be the case, for example, in all related-work P2P search architectures and also distributed top-k algorithms, where the complete index lists for at least one query term are stored completely at one peer. The scalability results show the high scalability achievable with MINERVA∞. It is due to the "pipelining" that is introduced within each TIN during query processing, where a query consumes small amounts of resources from each peer, pulling together the resources of all (or most) peers in the TIN for its processing. For comparison we also show the total execution time in an environment in which each complete index list was stored at a single peer. This is the case for most related work on P2P search engines and on distributed top-k query algorithms. In this case, the resources of the single peer storing a complete index list are required
for the processing of all communication phases and for all queries in the queue. In essence, this yields a total execution time that is equal to that of a sequential execution of all queries using the resources of the single peers storing the index lists for the query terms. Using this as a base comparison, MINERVA∞ is shown to enjoy approximately two orders of magnitude higher scalability. Since in our experiments there are approximately 100 nodes per TIN, this defines the maximum scalability gain.
## 10 Concluding Remarks
We have presented MINERVA∞, a novel architecture for a peer-to-peer web search engine. The key distinguishing feature of MINERVA∞ is its high level of distribution for both data and processing. The architecture consists of a suite of novel algorithms, which can be classified into algorithms for creating Term Index Networks (TINs), for placing index list data on TINs, and top-k algorithms. TIN creation is achieved using a bootstrapping algorithm and also depends on how nodes are selected when index list data is posted. The data posting algorithm employs an order-preserving hash function and, for higher levels of load balancing, MINERVA∞ engages data migration algorithms. Query processing consists of a framework for highly distributed versions of top-k algorithms, ranging from simple distributed top-k algorithms to those utilizing vertical and/or horizontal data replication. Collectively, these algorithms ensure efficiency and scalability. Efficiency is ensured through fast sequential accesses to index list data, which require at most one-hop communication, and by algorithms exploiting data replicas. Scalability is ensured by engaging a larger number of TIN peers in every query, with each peer being assigned much smaller subtasks, avoiding centralized points of control. We have implemented MINERVA∞ and conducted detailed performance studies showcasing its scalability and efficiency.

Ongoing work includes the adaptation of recent distributed top-k algorithms (e.g., [12]) into the MINERVA∞ architecture, which have proved one to two orders of magnitude more efficient than the NRA top-k algorithm currently employed, in terms of query response times, network bandwidth, and peer loads.
## References
1. J. Aspnes and G. Shah. Skip graphs. In Fourteenth Annual ACM-SIAM Symposium
_on Discrete Algorithms, pages 384–393, Jan. 2003._
2. P. Cao and Z. Wang. Efficient top-k query calculation in distributed networks. In PODC, 2004.
3. S. Chakrabarti. _Mining the Web: Discovering Knowledge from Hypertext Data._
Morgan Kaufmann, San Francisco, 2002.
4. F. M. Cuenca-Acuna, C. Peery, R. P. Martin, and T. D. Nguyen. PlanetP: Using
Gossiping to Build Content Addressable Peer-to-Peer Information Sharing Communities. Technical Report DCS-TR-487, Rutgers University, Sept. 2002.
5. R. Fagin. Combining fuzzy information from multiple systems. J. Comput. Syst.
_Sci., 58(1):83–99, 1999._
6. R. Fagin, A. Lotem, and M. Naor. Optimal aggregation algorithms for middleware.
_J. Comput. Syst. Sci., 66(4), 2003._
7. P. Ganesan, M. Bawa, and H. Garcia-Molina. Online balancing of range-partitioned
data with applications to peer-to-peer systems. In VLDB, pages 444–455, 2004.
8. A. Gupta, O. D. Sahin, D. Agrawal, and A. E. Abbadi. Meghdoot:
content-based publish/subscribe over p2p networks. In Proceedings of the 5th
_ACM/IFIP/USENIX international conference on Middleware, pages 254–273, New_
York, NY, USA, 2004. Springer-Verlag New York, Inc.
9. N. Harvey, M. Jones, S. Saroiu, M. Theimer, and A. Wolman. Skipnet: A scalable
overlay network with practical locality properties. In USITS, 2003.
10. R. Huebsch, J. M. Hellerstein, N. Lanham, B. T. Loo, S. Shenker, and I. Stoica.
Querying the internet with pier. In VLDB, pages 321–332, 2003.
11. J. Lu and J. Callan. Content-based retrieval in hybrid peer-to-peer networks. In
_Proceedings of CIKM03, pages 199–206. ACM Press, 2003._
12. S. Michel, P. Triantafillou, and G. Weikum. Klee: A framework for distributed
top-k query algorithms. In VLDB Conference, 2005.
13. S. Ratnasamy, P. Francis, M. Handley, R. Karp, and S. Schenker. A scalable
content-addressable network. In Proceedings of ACM SIGCOMM 2001, pages 161–
172. ACM Press, 2001.
14. P. Reynolds and A. Vahdat. Efficient peer-to-peer keyword searching. In Proceed
_ings of International Middleware Conference, pages 21–40, June 2003._
15. A. Rowstron and P. Druschel. Pastry: Scalable, decentralized object location, and
routing for large-scale peer-to-peer systems. In IFIP/ACM International Confer_ence on Distributed Systems Platforms (Middleware), pages 329–350, 2001._
16. D. Salomoni and S. Luitz. High performance throughput tuning/measurement. http://www.slac.stanford.edu/grp/scs/net/talk/High perf ppdg jul2000.ppt, 2000.
17. I. Stoica, R. Morris, D. Karger, M. F. Kaashoek, and H. Balakrishnan. Chord: A
scalable peer-to-peer lookup service for internet applications. In Proceedings of the
_ACM SIGCOMM 2001, pages 149–160. ACM Press, 2001._
18. T. Suel, C. Mathur, J. Wu, J. Zhang, A. Delis, M. Kharrazi, X. Long, and K. Shan
mugasunderam. Odissea: A peer-to-peer architecture for scalable web search and
information retrieval. Technical report, Polytechnic Univ., 2003.
19. A. Tirumala et al. iperf: Testing the limits of your network. http://dast.nlanr.net/projects/iperf/, 2003.
20. P. Triantafillou and T. Pitoura. Towards a unifying framework for complex query
processing over structured peer-to-peer data networks. In DBISP2P, 2003.
21. Y. Wang, L. Galanis, and D. J. DeWitt. Galanx: An efficient peer-to-peer search engine system. Available at http://www.cs.wisc.edu/~yuanwang.
|
{
"disclaimer": null,
"license": null,
"status": null,
"url": ""
}
| 2,005
|
[] | false
| null |
[] | 17,535
|
|
en
|
[
{
"category": "Engineering",
"source": "s2-fos-model"
},
{
"category": "Business",
"source": "s2-fos-model"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/012060a1b33dbb659214465af5ad64974ca35f8e
|
[] | 0.821707
|
Exploratory literature review of blockchain in the construction industry
|
012060a1b33dbb659214465af5ad64974ca35f8e
|
Automation in Construction
|
[
{
"authorId": "79423048",
"name": "D. Scott"
},
{
"authorId": "98555753",
"name": "Tim Broyd"
},
{
"authorId": "2115504054",
"name": "Ling Ma"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"Autom Constr"
],
"alternate_urls": [
"http://www.sciencedirect.com/science/journal/09265805",
"https://www.journals.elsevier.com/automation-in-construction"
],
"id": "cbe2e2e0-f4d3-4923-8b48-a02259e5f89c",
"issn": "0926-5805",
"name": "Automation in Construction",
"type": "journal",
"url": "http://www.elsevier.com/wps/find/journaldescription.cws_home/523112/description#description"
}
| null |
#### Review
## Exploratory literature review of blockchain in the construction industry
### Denis J. Scott [*], Tim Broyd, Ling Ma
_Bartlett School of Sustainable Construction, University College London (UCL), London, United Kingdom_
ARTICLE INFO
_Keywords:_
Blockchain
Smart contract
Decentralised applications
Construction industry
Built environment
Smart cities
ABSTRACT

The first academic publications on blockchain in construction appeared in 2017, with three documents. Over the course of several years, new literature emerged at an average annual growth rate of 184%, amounting to 121 documents at the time of writing this article in early 2021. All 121 publications were reviewed to investigate the expansion and progression of the topic. A mixed-methods approach was implemented to assess the existing environment through a literature review and scientometric analysis. Altogether, 33 application categories of blockchain in construction were identified and organised into seven subject areas: (1) procurement and supply chain, (2) design and construction, (3) operations and life cycle, (4) smart cities, (5) intelligent systems, (6) energy and carbon footprint, and (7) decentralised organisations. Limitations included using only one scientific database (Scopus); this was due to format inconsistencies when downloading and merging various bibliographic data sets for use in visual mapping software.

**1. Introduction**
Blockchain is the technology that enables triple entry accounting,
which allows multiple parties to transact across a shared synchronous
ledger. Each transaction is substantiated with a digital signature to
provide proof of its authenticity [1]. Blockchain includes several key features, such as decentralisation, distribution, and consensus [2]. A typical public blockchain comprises thousands of computer nodes connected through a decentralised network, and it does not require a central authority to manage the system [3]. Blockchain is a self-sustaining network that rewards users for participating in mining, which is the process of creating new blocks and distributing them across all nodes on the network [4]. Whenever transactions are sent to the network, they are placed in a pool of unverified transactions, where they are periodically collected and validated by miners before they are placed into a block [5]. Miners apply a consensus mechanism to check each other's results prior to the inclusion of new blocks; this is to ensure that there is only one version of the ledger in existence at any moment in time
[6]. Bitcoin was the first blockchain, which came into existence in 2009; since then, its protocol has proved resilient to hacks and has not suffered accounting errors such as double spending [7]. Ethereum was the second blockchain to come into existence; it emerged in 2015 and introduced smart contracts, which allow transacting parties to codify and deploy peer-to-peer agreements without reliance on a trusted third party [8].
that they cannot be changed once deployed, which mitigates against
users unfairly withdrawing from signed agreements [9]. Smart contracts
disallow external entities from interfering with peer-to-peer contracts
and enables atomic transactability. The codified terms of a smart con
tracts are transparent and open for auditing, which allows transacting
parties to verify agreements for consistency.
The timescale of this review spans from 2017 to 2021 and incorporates 121 academic documents. A bottom-up method was implemented to assess the existing environment through a literature review, which includes an exploratory investigation of the progression of the topic across a wide range of application categories. The document types used in the review comprise journal articles, conference papers, and book chapters. Non-academic sources such as company reports were not included in the review as they do not carry the same level of scientific rigour as peer-reviewed content; furthermore, the quantity of documents attainable from academic sources was sufficient.
Two search queries were conducted on the Scopus scientific database, which was used to obtain all of the reviewed documents. The _research method_ chapter displays the structure of these queries diagrammatically; furthermore, the search string for retrieving the results is available in the appendix, which allows users to replicate the search results. Other scientific databases that were considered include Web of Science (WoS), IEEE Xplore, Science Direct, the Directory of Open Access Journals (DOAJ), and JSTOR [10]. Based on the topic of blockchain in construction, Scopus and WoS included the largest quantity of results by
- Corresponding author.
_[E-mail addresses: Denis.scott.19@ucl.ac.uk (D.J. Scott), Tim.broyd@ucl.ac.uk (T. Broyd), L.ma@ucl.ac.uk (L. Ma).](mailto:Denis.scott.19@ucl.ac.uk)_
[https://doi.org/10.1016/j.autcon.2021.103914](https://doi.org/10.1016/j.autcon.2021.103914)
Received 8 June 2021; Received in revised form 22 July 2021; Accepted 19 August 2021
[0926-5805/© 2021 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).](http://creativecommons.org/licenses/by/4.0/)
a substantial margin. A comparison of these revealed Scopus with 53% more content, and with 85% of the WoS data already present in Scopus. Both databases included a balanced range of top-tier journals (top 25% based on the Scientific Journal Ranking indicator), while Scopus included a larger number of mid- to lower-tier journals.
The first academic literature on blockchain in construction emerged in 2017 within the categories of Building Information Modelling (BIM) [11], smart cities [12], and peer-to-peer energy markets [13]. The quantity of new publications on the topic increased at an annual growth rate of 184% each year since 2017. The quantitative aspect of this article provides data on the expansion of the topic through statistics and scientometrics. VOS-viewer was used to present scientometric data through visual mapping. The literature review chapter was structured around application categories of blockchain in construction. Each category was substantiated by a minimum of three documents to ensure a level of academic consensus was achieved. Altogether, 33 application categories were investigated and organised into seven subject areas, which are (1) procurement and supply chain, (2) design and construction, (3) operations and life cycle, (4) smart cities, (5) intelligent systems, (6) energy and carbon footprint, and (7) decentralised organisations. An exploratory method was implemented to encapsulate a wide range of categories to investigate the existing environment through a macro-orientated approach. This method aligns with the quantitative analysis that was conducted as part of this review.
_1.1. Related works_
From the 121 reviewed documents in this article, six included reviews of a similar nature and are displayed in Table 1. Of these, four delimited their results to academic documents, while two incorporated a combination of academic and non-academic sources. The non-academic material included company and organisation reports [14]. An expansive literature review of 121 documents on blockchain in construction from academic publications has only recently become feasible, since 2021, as there is now an established body of work on the topic. Blockchain is a fast-evolving technology, and this article builds upon the work displayed in Table 1 to provide an updated review of the contemporary state of the topic.
Bhushan et al., conducted a comparative literature review of block
chain in smart cities, published in Sustainable Cities and Society journal,
which outlined six subject areas and eight categories [15]. Hunhevicz &
Hall, produced a literature review of blockchain in construction, pub
lished in Advanced Engineering Informatics journal, which included
seven categories and 24 use-cases [16]. Kiu, et al., composed a sys
tematic review of blockchain in construction, published in the Interna
tional Journal of Construction Management, and outlined six subject
areas [14]. Li et al., composed a systematic literature review published
in Automation in Construction, which extrapolated seven built envi
ronment application categories; furthermore, three use-cases were sub
stantiated through interviews with academics and industry
practitioners, such as “automated project bank accounts”, “regulation
and compliance”, and “single shared-access BIM model” [17]. Perera,
**Table 1**
Related works.
Author Year Categories Use- Ref. count Review type
cases
Bhushan, et al., 2020 6 10 42 Literature
Hunhevicz & Hall 2020 7 24 15[a ] Literature
Kiu, et al., 2020 6 b 57[a ] Systematic
Li, et al., 2019 7 3 75 Systematic
Perera, et al., 2020 18 b 27[a ] Systematic
Yang, et al., 2020 4 b 83 Literature
Note:
a Includes content from non-peer reviewed sources (e.g., reports).
b Includes many use-cases that were not individually itemised by its author.
et al., produced a literature review article on blockchain in construction
published in the Journal of Industrial Information Integration, and
identified 18 categories, extracted from academic and non-academic
sources [7]. Yang et al., included a literature review in their block
chain proof of concept article published in Automation in Construction,
which summarised four subject categories for managing business pro
cesses [18].
**2. Research method**
Content was collected from journal articles, book chapters, and conference proceedings. Scopus was selected as the scientific database for extracting documents, as it contained the largest bibliographic index of academic literature on the topic and is reputably owned by publishing organisation Elsevier [19]. The reason for using only one scientific database is format inconsistencies when merging data sets from various databases. When conducting a parallel search on Scopus and Web of Science (WoS) (the top two largest academic indexes on the topic) [20], it revealed Scopus with 53% more content, and with 85% of WoS documents already existent in Scopus; thus Scopus was selected as the database of choice.
Fig. 1 displays the two search queries. Search one incorporated inputting the ISSN and ISBN numbers of journals and books within the subject categories of architecture, building and construction, and civil and structural engineering, followed by the key words shown in the search query column in Table 1. The ISSN or ISBN number is a unique identifier given to each journal and book, which can be downloaded from https://www.scimagojr.com. The SCImago web portal provides an index of academic publishers for each specific subject area [21]. The Scopus web portal allows users to search for documents according to a predefined list of subject areas; in this case SUBJAREA(engi) was implemented into query two, with key terms such as blockchain and construction. Two queries were used to increase the accuracy of results from Scopus, which returned a combined total of 412 documents. Upon removing duplicates and filtering content for suitability, the final result amounted to 121 publications.
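For illustration, a query of the second form could be expressed in Scopus advanced-search syntax roughly as follows; this is a plausible reconstruction based on the field codes named above, not the exact string from the paper's appendix:

```
SUBJAREA ( engi ) AND TITLE-ABS-KEY ( blockchain AND ( construction OR "built environment" ) )
```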
**3. Quantitative analysis**
Fig. 2 displays the quantity of documents published each year, document types, and Scientific Journal Rankings (SJR). SJR is the impact factor of each journal, which is calculated through a network analysis of citations [22]. SJR is measured in quartiles, whereby Q1 represents the top 25% of journals, while Q4 is the lowest 25% [22]. The statistics in Fig. 2 were obtained through conducting a search using the queries listed in Fig. 1. The results in Fig. 2 are based on full complete years, in this case 2017–2020. This article was written in 2021, thus results from that year were not included.
The subject areas and categories of the literature review are displayed in Fig. 3. Each category was substantiated by a minimum of three documents to ensure a level of academic consensus was achieved. These categories were further organised into seven subject areas for the purpose of adding structure when organising correlating categories together.
Fig. 4 displays a timeline showing when each of the reviewed categories emerged in the literature. The colours in Fig. 4 are assigned in conjunction with Fig. 3. The first publications on blockchain in construction appeared in 2017 with three documents and six categories; 2018 included 9 new publications (a 200% increase from the previous year) with nine new categories; 2019 displayed 33 new publications (a 267% increase) with 13 new categories; while 2020 included 69 new publications (a 109% increase) with five new categories, altogether totalling the 33 categories. At the time of writing this article in early 2021, there were no new additions to the category list.
The category with the highest number of publications is building information modelling (BIM), with 39 documents. Joint second, with 28 documents each, are internet of things (IoT), supply chain management, and smart grids, while peer-to-peer energy markets is third with 27 documents. The newest categories, which emerged in 2020, include machine learning, water management, physical waste management, geospatial, and Integrated Project Delivery (IPD).

**Fig. 1. Search query process that was used to obtain the results from Scopus.**

**Fig. 2. Quantity of published content each year, document types, and SJR rankings.**

**Fig. 3. The 33 reviewed categories organised into seven subject areas.**

**Fig. 4. Timeline showing the emergence of each category from 2017 to 2020.**
The topical coverage of each of the 121 reviewed documents was manually recorded and transferred into the visual mapping software VOS-viewer to produce the Fig. 6 visual map. VOS-viewer algorithmically maps data using natural language processing techniques [23]. Fig. 6 is broken down into three parts: categories (shown as circular nodes), colour clusters (shown as the groups of nodes displayed in blue, green, yellow, or red), and links (the lines that connect the nodes together). Each of the reviewed documents typically covered a range of categories; illustrating the overlap/co-occurrence of these categories is the purpose of the Fig. 6 co-occurrence map. Colour clusters are assigned when a group of categories frequently co-occur in the reviewed documents. Categories with a high number of shared links naturally gravitate to the centre, as a central position has greater equidistance with its shared links. However, categories also gravitate to each other based on their link strength, whereby, if two categories appear frequently together in the literature, they will be positioned close to each other on the Fig. 6 map. Blockchain was positioned most centrally as it shares links with all of the 33 categories. BIM was also positioned centrally as it shared links with 32 out of the 33 total categories, whereas _IPD, carbon accounting, fintech_ and _off-site construction_ were all positioned on the outskirts, due to their low number of shared links with the overall categories.
Table 2 displays the results from Fig. 6. The table is sorted from largest to smallest according to _links_, followed by _link strength_, then _occurrences_. The _link strength_ is calculated by the number of times each category co-occurs with another, while the _occurrences_ is calculated by the number of times each category appears in the literature regardless of its link strength. The results show that 89% of the reviewed documents included multiple categories in their paper, while 11% focused their attention solely on one category.
Fig. 7 displays which blockchain platforms were most utilised in the reviewed documents. 18 documents developed solutions for Ethereum [8], while 14 developed solutions for Hyperledger [24]; additionally, one publication investigated utilising both platforms [18]. Ethereum emerged in 2015 as a public blockchain platform; furthermore, it is currently the leading platform for decentralised applications and includes the largest population of blockchain developers [25]. Hyperledger, by the Linux Foundation, introduced its own variant in the same year (2015) using a private blockchain protocol [26]. Less popular platforms in the reviewed material include Multiledger [27], Bitcoin [28], Corda [29], and IOTA [30].
Fig. 8 displays the various types of data collection implemented in the reviewed documents. A conceptual framework was incorporated in 46% of documents, which was used as a foundation to formulate high-level ideas [31]. Case studies were also a popular method, used in 27% of documents, which included joint ventures between academia and industry [32]. Literature reviews were used in 26% of the documents, which were typically implemented as a prerequisite to support the development of conceptual frameworks [33], such as with the Brooklyn micro-grid project, which used a literature review to assess the existing environment prior to the implementation of a case study [34]. Statistics were incorporated in 23% of documents, such as with measuring the performance of blockchain-based network systems [35]. The types of data collection which appeared less frequently included systematic reviews (12%), proof of concepts (12%), interviews (7%), surveys (7%), and questionnaires (1%).

**Fig. 5. Quantity of publications published for each category from 2017 to 2020.**
Fig. 9 displays a visual map showing the co-occurrences of the data collection types shown in Fig. 8. Fig. 9 displays _links_ shown in red numerals and _link strength_ shown in blue numerals. From analysing the diagram, the top three data collection types which co-occurred most frequently in the reviewed literature included conceptual frameworks, statistics, and case studies, demonstrated through their high link strength count shown in blue numerals. The outer position of systematic reviews revealed that they co-occurred less frequently than literature reviews; however, this particular statistic can be misleading, as _systematic_ and _literature_ reviews are terms used interchangeably throughout research, and the author ensured not to interfere with the terminologies provided in the reviewed documents. 12 publications conducted a proof of concept (PoC), which amounts to 10% of the reviewed documents. The data collection types with the least number of co-occurrences included questionnaires, systematic reviews, and surveys. Altogether, 55% of the reviewed documents incorporated multiple data collection types in their research, while 45% included only one. Through conducting this review, the author noticed that papers which included higher numbers of data collection types were typically less technical overall, such as literature/systematic reviews, while papers which included only one data collection type were typically more in-depth, such as with a PoC.
Table 3 is to be read in conjunction with Fig. 9, and is organised according to _link count_, _total link strength_, and _occurrences_. _Link count_ refers to the quantity of times a particular type of data collection co-occurs with another; however, it does not take into account the weight of each link. _Link strength_ factors in the weight, which refers to the cumulative total of when each link co-occurred with another. The _occurrences_ column represents the quantity of times each data collection type occurred in the literature regardless of its links or link strength.
**4. Literature review**
The literature review is broken down into seven sections, represented by the seven subject areas listed in Fig. 3: (1) procurement and supply chain; (2) design and construction; (3) operations and lifecycle; (4) smart cities; (5) intelligent transport; (6) energy and carbon footprint; and (7) decentralised organisations. Each subject area includes several application categories, which were grouped according to their correlation. The subject areas and categories were selected following a bottom-up approach. This was conducted without a predefined or systematic strategy on which topics to cover, provided that it was in conjunction with the construction industry or built environment.
**Fig. 6. Co-occurrence map of the 33 reviewed categories.**
**Table 2**
Presents the values of the categories displayed in Fig. 6. The colours labelled in
the ’Clusters’ column is representative of the colour clusters shown in Fig. 6.
Categories Link Total link Occurrences Cluster
strength
BIM 32 146 37 Blue
Supply chain 29 132 31 Green
IoT & cyber-physical 27 131 27 Red
Intelligent transport 27 79 15 Red
Smart cities 25 73 15 Red
Cybersecurity 25 54 12 Red
Logistics & scheduling 24 81 16 Green
Cash flow & payments 24 56 12 Blue
Smart grids 23 84 29 Yellow
Digital contracts 22 70 14 Green
Cloud, ERP & EDMS 22 61 13 Red
FinTech 21 57 9 Green
Standards 21 40 9 Blue
Real estate 20 48 10 Green
AI 20 47 8 Red
Physical waste 20 28 3 Green
Water mgmt 20 28 3 Green
P2P energy 19 95 31 Yellow
Citizen participation 19 26 4 Red
ID & certificate 18 29 5 Green
Big data & analytics 17 39 6 Red
Smart homes 17 26 4 Blue
Facility mgmt 16 25 5 Green
Life cycle & circular 14 44 10 Green
Procurement 14 36 11 Yellow
Geospatial 14 25 4 Red
Machine learning 14 21 4 Red
Off-site const. 11 27 5 Red
Decentralised Autonomous Organisation 10 12 3 Blue
Carbon accounting 8 19 7 Yellow
IFC 8 16 5 Blue
Renewable energy 6 8 3 Yellow
IPD 4 6 3 Blue
The process followed an organic progression of manually noting on a spreadsheet the topical coverage of each of the reviewed documents, as shown in the shared Google spreadsheet at the link provided below.
[https://docs.google.com/spreadsheets/d/1V4UICRdoyWycaGENH9rnuxukRNQJFIArQ-feV7NM0a4/edit?usp=sharing](https://docs.google.com/spreadsheets/d/1V4UICRdoyWycaGENH9rnuxukRNQJFIArQ-feV7NM0a4/edit?usp=sharing)
_4.1. Procurement and supply chain_

This section is comprised of six application categories grouped into the procurement and supply chain subject area. Altogether, this subject area was discussed in 57 of the 121 documents and is focused on pre-construction activities.
_Procurement, bid, and tender_ (discussed in 12 documents). In a survey conducted by Kim, et al., based on the themes of lifecycle, project management, and blockchain, with respondents from the construction industry, the top three applications for blockchain emerged as bidding, procurement, and change management [36]. Lack of trust is particularly evident in procurement, and current management practices require innovation to improve the ability to track the provenance of faults and trace contract alterations and drawing revisions, while minimising information asymmetry during the tender process [37]. Based on a questionnaire and survey by Isikdag of 64 industry practitioners in the construction industry, consisting of architects, engineers, contractors, and subcontractors, e-procurement appeared to offer very few benefits compared to its non-electronic counterpart; furthermore, the primary barriers to e-procurement include a lack of trust in the supply chain, unsatisfactory legal infrastructure, and inadequate cybersecurity for storing confidential data [38]. Moreover, Isikdag stated that blockchain can potentially be used to provide the vital infrastructure required to support privacy without the risks associated with centralised storage; furthermore, he discussed how e-procurement lacks standardisation from regulatory bodies [38].
_Logistics, scheduling and programme_ (discussed in 16 documents). Logistics management has become increasingly complex due to globalisation [39]. Kifokeris, et al., performed a case study of seven Swedish logistics consultancies, which outlined that "delivery failure, imprecise data, delays in time, inefficient flows and data transfers between systems" are limitations in existing logistics processes, and discussed the lack of cyber-physical systems integration and analytics in managing on-site assets [39]. Moreover, they proposed a blockchain solution for logistics, using a crypto-economic model to incentivise collaboration [39].

**Fig. 7. Utilisation of blockchain platforms in the reviewed documents.**

**Fig. 8. Data collection types existent in the reviewed documents.**

**Fig. 9. Co-occurrence map of the data collection types.**
**Table 3**
Values of the data collection types displayed in Fig. 9. The numerals highlighted in bold in the 'Total link strength' column are the same values as the blue numerals shown in Fig. 9.
Data collection type  Link count  Total link strength  Occurrences
Conceptual framework 7 **48** 52
Case study 6 **43** 32
Interview 6 **9** 8
Survey 6 **7** 5
Statistics 5 **39** 27
Literature review 5 **16** 31
Proof of concept 4 **12** 13
Systematic review 2 **3** 12
Review 2 **2** 7
Questionnaire 1 **1** 1

Lanko, et al., considered that existing centralised computer systems are
susceptible to data manipulation, and proposed a framework which incorporated blockchain with RFID for managing the logistics of ready-mixed concrete on-site, whereby RFID tags are used to record stages of delivery, such as pouring, transportation, handling, quality inspections, and mould forming, with all data exchanges recorded on the blockchain [40]. Blockchain in logistics provides opportunities for offering improved service to clients by automating the process of storing and authenticating data with increased trust; furthermore, decentralised applications can potentially reduce the resource requirements of managing systems efficiently [41].
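To make the recording pattern in [40] concrete, the sketch below models delivery-stage events for a ready-mixed concrete batch as a hash-chained, append-only log, which is the essential property a blockchain contributes here. This is a minimal illustration under stated assumptions, not any of the cited systems; the batch identifier and stage names are hypothetical.

```python
import hashlib
import json
import time

def record_event(chain, batch_id, stage):
    """Append a delivery-stage event, chaining it to the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    event = {"batch": batch_id, "stage": stage,
             "timestamp": time.time(), "prev_hash": prev_hash}
    payload = json.dumps(event, sort_keys=True).encode()
    event["hash"] = hashlib.sha256(payload).hexdigest()
    chain.append(event)

def verify(chain):
    """Recompute every hash; any retroactive edit breaks the chain."""
    for i, event in enumerate(chain):
        body = {k: v for k, v in event.items() if k != "hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != event["hash"]:
            return False
        if i > 0 and event["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
for stage in ["mould forming", "pouring", "transportation",
              "handling", "quality inspection"]:
    record_event(chain, batch_id="RMC-042", stage=stage)
print(verify(chain))  # True; tampering with any event makes this False
```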
_Cash flow and payments_ (discussed in 14 documents). Chong & Diamantopoulos conducted a case study on a project in Melbourne, Australia, that used blockchain to automate payments; the works included the delivery of 5000 building façade panels tracked with Bluetooth sensors to monitor the live location of each panel from the factory in China to the site, with BIM used to monitor the installation of each panel, while smart contracts executed payments at delivery checkpoints [42]. Additionally, this integrated with a mobile phone application which allowed project participants to view the progress of installation in real time [42]. Ahmadisheykhsarmast developed an add-in for Microsoft Project using the programming language C# and Visual Studio, which allowed smart contracts to integrate with mainstream project management software; furthermore, the blockchain platform Ethereum, with its native programming language Solidity, was used to link the front- and back-end functions of the user application that connected the blockchain to Microsoft Project [43].
Late payment is a major problem in construction, caused by contractors performing cash farming, which is the process of withholding supply chain payments to sustain positive cash flow while aggressively investing in new work [44]. Das, et al., proposed a conceptual framework that enabled smart contracts to control the release of payments to the supply chain, including integration with banking systems; furthermore, they discussed the potential to integrate with strategies such as Project Bank Accounts (PBA) [45].
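The checkpoint-released payments in [42] and the controlled release proposed in [45] share one core mechanism: funds move only after a pre-agreed milestone is validated. The sketch below shows that logic as a minimal Python class; the payee name and amounts are hypothetical, and a production version would run as an on-chain smart contract (e.g., in Solidity) rather than as application code.

```python
class MilestoneEscrow:
    """Release funds only when a pre-agreed checkpoint is validated."""

    def __init__(self, payee, milestones):
        # milestones: mapping of checkpoint name -> payment amount,
        # agreed up front and treated as immutable thereafter
        self.payee = payee
        self.milestones = dict(milestones)
        self.validated = set()
        self.paid = set()

    def validate(self, checkpoint, validator_signed: bool):
        # In an on-chain version, validation would be a signed transaction
        # from an appointed authority (e.g., the client's certifier).
        if checkpoint in self.milestones and validator_signed:
            self.validated.add(checkpoint)

    def release(self, checkpoint):
        if checkpoint in self.validated and checkpoint not in self.paid:
            self.paid.add(checkpoint)
            amount = self.milestones[checkpoint]
            print(f"pay {amount} to {self.payee} for '{checkpoint}'")
            return amount
        return 0  # nothing released without validation, and never twice

escrow = MilestoneEscrow("facade-subcontractor",
                         {"panels delivered": 50_000,
                          "panels installed": 30_000})
escrow.release("panels delivered")          # returns 0: not yet validated
escrow.validate("panels delivered", validator_signed=True)
escrow.release("panels delivered")          # returns 50000
```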
_Digital and automated contracts_ (discussed in 14 documents). McNamara & Sepasgozar interviewed industry practitioners in the construction industry and found that trust, risk, and dispute management were ubiquitous concerns in almost all projects, with main contractors exerting dominance through unfair contract conditions [46]. In a survey conducted by Badi et al. of 104 respondents in the UK construction industry regarding the use of smart contracts, the main factors determining enterprise adoption were competitive edge and commercial value [47]. Hunhevicz, et al., proposed a digital contracting framework which simulated the decision points of a typical design-bid-build project in Switzerland, including the client, owner, planner, contractor, and supplier, all interacting with smart contracts to control the approval and validation of contract activities such as project definition, design coordination, tendering, supplier selection, and contract signing; furthermore, this was prototyped through a web-based application connected to the Ethereum blockchain [48].
_Supply chain management_ (discussed in 30 documents). Qian & Papadonikolaki conducted interviews of industry practitioners in the construction industry who are knowledgeable in supply chain and blockchain, and identified that blockchain can potentially be used to mitigate the trust problem in construction through data traceability, non-repudiation, and disintermediation; furthermore, it was projected that blockchain can save up to 70% of the costs associated with data processing and management by automating compliance checking, payments, and analytics on project performance [49]. Sheng, et al., proposed a framework which allowed project participants to assess compliance with standards and monitor information exchanges through a user application, where project participants would upload data associated with contract documents, the project schedule, and cost; furthermore, the application would autonomously notify users of their responsibilities to upload or approve works, which automated the processing of payments and completion certificates [50]. Dutta, et al., conducted a systematic review of blockchain in supply chains and identified several key attributes where blockchain can improve performance, such as an evidentiary trail of delivered works substantiated with immutable data, resilience to network disruption, improved data synchronicity, data trust in cyber-physical systems, business process automation through smart contracts, and improved tracking of product revisions [51].
_Standards, regulation, and compliance_ (discussed in 10 documents). The transparent and irrefutable properties of the blockchain make it a suitable technology for trialling whether smart contracts can be used to automate the compliance checking of objects in BIM models [52]. Nawari & Ravindran proposed an automated regulation and compliance checking framework for BIM, whereby modelling elements are scanned and cross-checked against client specifications, which autonomously notifies designers of their obligations to make design alterations [53]. Blockchain can also be used as a decentralised authority to provide BIM objects with copyright verification, through a lookup service that checks the intellectual property signature of a BIM object and cross-checks it with data stored in a distributed database; furthermore, designers and contractors working on a BIM model can be instantaneously notified of any copyright infringement of model objects [54].
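As a simple illustration of the rule-checking idea in [52,53], the sketch below scans hypothetical BIM elements against client-specified constraints and flags violations for designer notification. The element attributes and rule thresholds are invented for the example; a deployed system would encode such rules in smart contracts rather than local Python.

```python
# Hypothetical BIM elements with a type and a few numeric attributes.
elements = [
    {"id": "door-01", "type": "door", "clear_width_mm": 850},
    {"id": "door-02", "type": "door", "clear_width_mm": 720},
    {"id": "stair-01", "type": "stair", "riser_mm": 210},
]

# Client specification expressed as per-type constraints:
# (attribute, minimum, maximum); the values here are illustrative only.
rules = {
    "door": [("clear_width_mm", 775, None)],
    "stair": [("riser_mm", None, 190)],
}

def check(element):
    """Return a list of rule violations for one element."""
    violations = []
    for attr, lo, hi in rules.get(element["type"], []):
        value = element[attr]
        if (lo is not None and value < lo) or (hi is not None and value > hi):
            violations.append((element["id"], attr, value))
    return violations

for e in elements:
    for v in check(e):
        print("notify designer:", v)
# notify designer: ('door-02', 'clear_width_mm', 720)
# notify designer: ('stair-01', 'riser_mm', 210)
```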
_4.2. Design and construction_

The design and construction subject area consists of five application categories discussed in 44 of the reviewed documents. This section focuses on the capital expenditure stage of construction projects.
_Building Information Modelling (BIM)_ (discussed in 41 documents). One of the fundamental reasons for the slow adoption of BIM is a lack of traceability in model revisions, as current systems are based on manual data entry and rely on trust in designers to keep track of changes [55]. The ability for multiple users in a project to update a BIM model simultaneously is extremely challenging using existing centralised cloud systems; furthermore, coupling BIM with blockchain creates additional bandwidth limitations, due to blockchain's consensus properties, whereby the majority of the nodes on the network need to agree on changes before data can be revised [56]. Zheng, et al., proposed a mobile application which allowed users to verify on their portable computing device (e.g., phone, tablet, laptop) whether a BIM model is the most recent version, whereby a hash of the BIM model is stored on the blockchain, which allows a lookup service to cross-check the hash of a downloaded model against the hash stored on-chain; afterwards, the application provides users with a verification receipt stating the model's validity [57]. On another note, a case study by Mason, et al., discussed how the effective logging of geometry and volume in BIM models can transition effectively into computable code for smart contracts [58].
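The verification step in Zheng et al.'s scheme [57] reduces to comparing a locally computed file hash against a hash anchored on-chain. The sketch below shows that comparison with Python's standard hashlib; the on-chain registry is mocked with a dictionary, and the file name and contents are hypothetical stand-ins for a real BIM model.

```python
import hashlib

def file_hash(path):
    """SHA-256 digest of a file, computed in chunks to handle large models."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Mocked on-chain registry: model identifier -> hash of the latest revision.
on_chain_registry = {}

def register(path, model_id):
    """Anchor the hash of the approved model revision (mocked on-chain write)."""
    on_chain_registry[model_id] = file_hash(path)

def verify_model(path, model_id):
    """True only if the downloaded file matches the registered revision."""
    expected = on_chain_registry.get(model_id)
    return expected is not None and file_hash(path) == expected

# Demonstration with a stand-in file for a BIM model.
with open("tower-block.ifc", "wb") as f:
    f.write(b"IFC model contents")
register("tower-block.ifc", "tower-block")
print(verify_model("tower-block.ifc", "tower-block"))  # True
```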
_IFC-based interoperability_ (discussed in 6 documents). IFC is a data standard format registered with the International Organization for Standardization (ISO), which is used for saving BIM model files [59]. BuildingSmart is an organisation that promotes digital workflows through the utilisation of IFC, while OpenBIM is a set of commonly agreed workflow standards for BIM projects, for the purpose of increasing supply chain collaboration and standardising data exchange processes [59]. Hunhevicz, et al., produced a prototype which incentivised users to produce high-quality data sets following the OpenBIM standard; this incorporated the use of smart contracts to provide financial rewards based on the quality of data provided by its users [48]. Ye, et al., produced a prototype which incorporated an IFC model that interoperated with smart contracts, which executed payments autonomously based on elements quantified within the BIM model; furthermore, readable text was maintained as it transferred into smart contracts, which allowed users to intuitively cross-reference IFC data in blockchain code [60]. A study was conducted by Xue & Lu which investigated whether IFC semantics can be substantially minimised to allow for potential storage of IFC code on-chain, and whether small portions of the IFC code can be partitioned away from its original syntax while remaining readable for the purpose of isolating model revisions, which resulted in a semantic reduction of 99.98% of its original size; however, the consensus properties of blockchain proved to be problematic due to its low throughput with data processing, even when tested on a private blockchain network [56].
_Integrated Project Delivery (IPD)_ (discussed in 3 documents). IPD operates through onboarding the construction supply chain with a shared risk and reward contract for the purpose of promoting collaborative workflows [61]. Hunhevicz, et al., discussed how the characteristics of IPD integrate effectively with the ideologies of common pool resources (CPR) and the Ostrom principles for flat organisational structures, which incorporate mutual and economic benefit for project participants who work together to achieve a common goal, whereby projects which implement blockchain in IPD contracts have the potential to reward participants with tokenised and non-tokenised incentives, such as financial rewards for collaborative delivery, transparent agreements, and automated payments upon validated completion of works [62]. Elghaish, et al., conducted a simulated proof of concept which incorporated blockchain in an IPD contract for managing supply chain payments, using the private blockchain platform Hyperledger Fabric (HLF), whereby financial operations such as the reimbursed cost, profit pool, cost saving pool, and risk pool were programmed into smart contracts which automated the dispensation of funds according to pre-agreed terms, such as validated completion of works from appointed authorities and project milestone dates [61].
_Off-site construction_ (discussed in 4 documents). Off-site construction includes strong topical overlap with the Internet of Things, blockchain, BIM, AI, robotics, and 3-D printing [63]. According to Turk and Klinc, the primary application for blockchain in off-site construction is supply chain management, with a projected average saving of 70% through reduced processing costs, which is amassed through improved systems integration, automation through smart contracts, and real-time data traceability [63]. Wang et al., proposed a framework using the blockchain platform Hyperledger Fabric for the management of precast construction activities through a user interface, which allowed real-time querying of scheduling, production, and transportation [64]. Additive manufacturing, synonymously called 3-D printing, has the potential to integrate with off-site construction and blockchain for the production, cataloguing, and copyrighting of customised building components [65].
_Geospatial, 3-D scanning, and point cloud_ (discussed in 4 documents). Geospatial technologies such as "remote sensing, LiDAR, internet mapping, GPS and GIS" have strong implications for autonomous vehicles due to their rapid response in scanning geographical landscapes; furthermore, they interoperate effectively with BIM models, smart infrastructure, and cyber-physical systems [66]. 3-D scanning allows assets and geographical locations to be imported into BIM models; however, there is currently a lack of technological capacity for scanned objects to be autonomously cross-referenced with registered objects in a database [63]. Copeland and Bilec proposed a conceptual framework which integrated assets with geospatial sensors and blockchain to produce what they called "buildings as material banks", which utilises sensors affixed to building components to record metadata regarding their condition for reusability, using blockchain as the trusted system for authenticating components and materials within built assets [67].
_4.3. Operations and lifecycle_

The operations and lifecycle subject area is comprised of four categories and consists of 24 documents. This section is focused on the operational expenditure stage of an asset's lifecycle.
_Facilities management and maintenance_ (discussed in 6 documents). Li, et al., proposed a framework for the semi-automated procurement of replacement parts during the operations phase of a built asset, which includes the integration of Internet of Things (IoT) sensors and a computer-aided facilities management (CAFM) system for the automated identification of faulty parts; furthermore, a request for replacement parts is processed through a decentralised autonomous organisation, while an e-marketplace handles the bidding and appointment of prospective contractors [68]. Blockchain includes the ability to transact on- and off-chain for the purpose of increasing the performance of data exchanges in a decentralised network. Bai, et al., proposed a framework for managing the communications between IoT and blockchain for asset maintenance, which uses on-chain for immutable hash storage and smart contracts, while off-chain handles data storage, computational processing, and analytics [69]. Integrating off-chain applications with blockchain allows for greater transaction throughput, lower transaction fees, and greater control over system operations such as privacy controls.
_Life cycle and circular economy_ (discussed in 11 documents). Shojaei discussed how metadata recorded for raw materials extracted at source can be appended onto the blockchain for end-to-end lifecycle assessment, which allows for a complete and uninterrupted data stream from each handling merchant to the end user, providing proof of provenance from source to construction [70]. Asset data such as specifications, standards, and contract agreements have the potential to integrate with blockchain for post-occupancy evaluation, utilising BIM as the data repository for the built environment asset and blockchain as its corresponding data validator [71]. Copeland & Bilec proposed a framework which utilised RFID, BIM, and blockchain to provide components with an evidentiary trail of data throughout their lifecycle, through sensors periodically recording data at key stages, such as installation, decommissioning, and provenance, as well as metadata regarding the supplier, manufacturer, and handling checkpoints [67]. This has the potential to integrate with a crypto-economic incentive scheme for the recycling of assets, with data verified by blockchain.
_Construction waste management_ (discussed in 3 documents). Surplus waste generated by the construction industry is a global issue; furthermore, there is a lack of systems that can accurately account for material waste, which makes it an accepted by-product despite its carbon impact and incurred costs on projects [7]. However, blockchain has the potential to increase the accountability of waste through its ability to verify the waste lifecycle from source to disposal [7]. Despite this, the reviewed papers did not propose a solution for who would supply the systems that allow the supply chain to quantitatively account for unused material.
_Real estate and property registry_ (discussed in 10 documents). Dakhli, et al., conducted a case study of 56 residential properties and concluded that blockchain has the potential to achieve construction cost savings of 8.3%, which is higher than a typical property developer's net margin of 6%; furthermore, the projected cost savings were attributed to the use of smart contracts and a decentralised autonomous organisation (DAO) to manage and automate business processes [72].

The management of land registries in many developing countries is an unnecessarily complicated process which is prone to fraud and manipulation [73]. Land management was identified in the World Bank's Ease of Doing Business report as one of the main services that affect the economic growth of a country; furthermore, blockchain was discussed as having the potential to provide a single source of truth for land records, thus reducing administrative overheads in data processing and alleviating the risk of fraud [73].
_4.4. Smart cities_

The smart cities subject area is comprised of four categories and consists of 27 documents. This section is focused on how city infrastructure networks can interoperate to provide a data-rich ecosystem of connected devices for managing built environment assets.
_Smart cities_ (discussed in 16 documents). Ahad, et al., conducted a literature review on the topic of smart cities and suggested that they are driven by network-based technologies that integrate to support the delivery of industry 4.0 [66]. These technologies include the Internet of Things (IoT), big data, cyber-physical systems, 5-G technology, artificial intelligence (including machine learning and deep learning), blockchain, cloud/edge computing, and geospatial technologies [66]. The interconnected network of devices in a smart city increases the demand for trusted data; therefore, a new business model is required that is more resilient to hacks and central points of failure [74]. This can potentially be supported through the traceable, immutable, and decentralised properties of blockchain [74]. Fu & Zhu proposed a conceptual framework which integrated technologies such as cloud platforms, blockchain, and IoT to form a trusted platform for monitoring live data from infrastructure services, such as geographic information systems (GIS), safety devices, and weather monitoring systems that relay information to city infrastructure services such as transport, communication, and utilities [75].
_Smart homes and buildings_ (discussed in 4 documents). Moretti, et al., proposed a conceptual framework that incorporated the use of ultrasonic sensors for monitoring the indoor activity of a building, with sensors placed in rooms to monitor usage, occupancy, and maintenance, integrated with analytics to provide automated reporting of indoor activity; furthermore, the authors discussed the potential to incorporate a blockchain-based management system, using smart contracts to provide automated payments upon successful delivery of maintenance works [76]. Roy, et al., proposed a prototype for a smart home ecosystem, which included the aggregation of a home device network, a blockchain platform, and a maintenance service system; the home network was comprised of smart meters, IoT, and actuators; the blockchain was used to store and validate results received from the home devices; while the maintenance system provided facility management through identifying when replacement parts were required and providing the credentials of prospective suppliers [77].
_Intelligent transport_ (discussed in 15 documents). López & Farooq proposed a smart mobility blockchain framework for managing transportation data, comprised of five layers: (1) a privacy layer, which gives users control of their data when using location-revealing applications such as Google Maps; (2) a contract layer, which controls how smart contracts use user data; (3) a communication layer, which appends digital identifiers to communication channels between network nodes; (4) an incentive layer, which rewards users for participating in the blockchain network; and (5) a consensus layer, which allows nodes to upload data verified by its users [24]. The implications of this included privacy between users and a transportation system hosted on a decentralised network [24].

Supplying battery recharging to electric vehicles based on a fast-charge system is technologically challenging, as current recharge systems need to be designed for both intermittent and continual usage [78]. Zhang, et al., conducted a 15-month study at the University of California, Los Angeles (UCLA), which implemented a blockchain platform that incentivised users to charge their electric vehicles at specific timescales, mitigating the need for energy providers to store unused energy in batteries for extended periods of time; moreover, a user interface provided users with a ranking system based on their record of renewable energy consumption, which rewarded users with discounts and the ability to choose flexible recharge schedules [78].
_Water management_ (discussed in 3 documents). The infrastructure for wastewater management in cities is reaching the end of its lifespan in many countries, with old treatment plants and damaged pipes leaking sewage into environmentally sensitive areas, causing health and safety and wildlife concerns [79]. Berglund, et al., discussed how the construction of new water management systems can potentially benefit from innovations such as the Internet of Things (IoT), smart meters, and blockchain, providing a live data feed on the performance of water management systems, with implications for improving the lifecycle maintenance of infrastructure assets [79]. Perera, et al., discussed how WaterChain, a water utility blockchain network in the United States, allows participants to invest in water recycling plants and benefit through the dividends supplied by the service; furthermore, the management of the plant is transparent and can be investigated by the community at any time, and dividends are automated through smart contracts. This merges the boundary between consumer and producer and allows the opportunity for communities to self-sustain and self-own their utilities [7].
_4.5. Intelligent systems_

The intelligent systems subject area includes six categories and consists of 46 documents altogether. This section focuses on advanced computer systems, information processing, and the benefits of data-rich networks.
_Big data_ (discussed in 6 documents). The amount of new data produced each year is increasing exponentially; furthermore, the construction industry is under additional pressure to exploit the benefits of data-driven economies whilst in a resource deficit caused by poor margins in construction projects [80]. Blockchain offers a new type of data model which reduces the resource requirements for storing data securely, by bypassing the need to use heavily centralised systems to authenticate data [66]. Network systems such as the internet of things (IoT) and smart technologies have the potential to integrate with blockchain to provide increased trust in authenticating data, which is achieved without reliance on oversight from centralised technology companies [24]. Concerns regarding privacy are mitigated through private blockchain protocols such as Hyperledger Fabric, which uses an enterprise-centric model that provides platform developers with control over the privacy features on their network [81]. Alternatively, public blockchain protocols, such as Ethereum, include advanced cryptographic methods such as zero-knowledge proofs which allow private data exchanges to occur on a public network [82]. Big data integrated with blockchain includes practical applications in off-site construction and supply chain management, through improved contract management, compliance checking, traceability of data in project reports, and reliable data for use with analytics [63].
_Artificial intelligence (AI)_ (discussed in 8 documents). AI, alongside additive manufacturing (synonymously called 3-D printing), autonomous vehicles, blockchain, drones, and the Internet of Things, are the fundamental components that form the emerging industry 4.0, points first discussed in the 2011 report by Germany's economic development agency [65]. Car manufacturers use AI-powered robots that work alongside humans in production plants; furthermore, companies such as General Electric and Caterpillar are developing AI solutions to equip workers with robotic exoskeletons to assist with labour-intensive jobs [65]. AI is progressively being used in industries to streamline workflow and improve decision making, such as at JP Morgan, which developed a software algorithm called COIN that scans thousands of contract documents instantaneously to provide judgement on written agreements [83]. A practical use case for blockchain in AI is the ability to safeguard AI code by placing it in a smart contract, which mitigates the risk of unauthorised manipulation of the code without permission from authorised actors, effectively creating unbreakable codified laws which govern the functionality of the AI; simultaneously, AI can also be used to debug smart contracts and improve blockchain's protocol design [84].
_Cloud computing and electronic document management systems (EDMS)_ (discussed in 13 documents). EDMS allow companies to manage, store, and process documents electronically [30]. EDMS platforms are limited in their potential to interoperate with other technology suppliers due to their centralised systems architecture; conversely, a blockchain-based EDMS is built with interoperability at its core and is not financially driven by sales of its product [14].

Cloud computing is a fundamental driver of logistics 4.0 (a branch of industry 4.0), which encompasses global standards, the digitisation of business processes, and cyber-physical systems that interoperate with supply chain and logistics networks [14]. Blockchain-based decentralised cloud platforms provide the ability for users and enterprises to store data with greater privacy, achieved without risk of hacks or data mining by service providers; however, due to their nascency, decentralised storage solutions may lack the ability to modularise their functions to suit business workflows [85]. Singh, et al., proposed a framework for managing the data flows of cyber-physical systems in a smart city network, which integrates cloud computing, software-defined networking, and blockchain for trusted data exchanges [29].
_Cybersecurity_ (discussed in 12 documents). The decentralised characteristics of blockchain place the responsibility on users to manage their digital keys competently, which requires users to keep their private key secret and not reveal the personal identity behind their public key [86]. Xiong, et al., proposed a "secret-sharing-based key protection" protocol which allows users with compromised or lost private keys to retrieve access to their account, involving a step-by-step multiparty verification process, whereby each party anonymously and privately reveals a small portion of the key, which altogether combines to reproduce the entire lost private key [28].
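To illustrate the splitting idea behind such key-protection protocols, the sketch below uses simple XOR-based n-of-n secret sharing: the key is recoverable only when every share is combined, while any subset of shares is indistinguishable from random noise. This is a minimal illustration, not the protocol in [28], which would typically use a threshold scheme such as Shamir's secret sharing so that only a quorum of parties is required.

```python
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_secret(secret: bytes, n: int):
    """Split a secret into n XOR shares; all n are required to recover it."""
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    # the final share folds the secret in, so XOR-ing all shares cancels
    # out the randomness and leaves the original secret
    shares.append(reduce(xor_bytes, shares, secret))
    return shares

def recover_secret(shares):
    return reduce(xor_bytes, shares)

private_key = secrets.token_bytes(32)        # stand-in for a wallet key
shares = split_secret(private_key, n=4)      # distribute to four parties
assert recover_secret(shares) == private_key
```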
The immutable property of blockchain also comes at the cost of low scalability (measured in transactions per second) and a limited capacity to store large amounts of data on-chain [87]. To mitigate this, Bai, et al., proposed a framework consisting of on-chain and off-chain functionalities, which included a "smart predictive maintenance" model and a "sharing service of equipment status data" model, whereby the hashes (unique identifiers) of files are stored on-chain, while off-chain systems handle high-volume data storage and computational processing [69]. This includes the use of a lookup service which connects the hashes stored on-chain to data repositories off-chain, which amalgamates the immutable properties of blockchain with large-capacity data storage [69].
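The on-chain/off-chain split in [69] can be pictured as follows: only a fixed-size content hash is anchored on-chain, while the bulky payload lives in an ordinary store and is verified against the anchor on retrieval. The sketch below mocks both sides in plain Python; the store, record names, and sample reading are hypothetical.

```python
import hashlib

off_chain_store = {}     # mock high-volume storage: hash -> payload
on_chain_hashes = set()  # mock blockchain: holds only fixed-size hashes

def put(payload: bytes) -> str:
    """Store the payload off-chain and anchor its hash on-chain."""
    digest = hashlib.sha256(payload).hexdigest()
    off_chain_store[digest] = payload
    on_chain_hashes.add(digest)   # the only data the chain ever holds
    return digest

def get(digest: str) -> bytes:
    """Retrieve a payload and verify it against the on-chain anchor."""
    payload = off_chain_store[digest]
    if digest not in on_chain_hashes:
        raise ValueError("no on-chain anchor for this record")
    if hashlib.sha256(payload).hexdigest() != digest:
        raise ValueError("off-chain data does not match on-chain hash")
    return payload

ref = put(b"equipment status reading: pump-07, vibration 2.1 mm/s")
print(get(ref))  # verified round trip
```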
_Machine learning (ML)_ (discussed in 3 documents). The procurement and management process for road construction in India is challenged by political corruption and fraud, through a lack of compliance checks, material fraud, and unsupervised labour that leads to incomplete works [88]. Shinde, et al., discussed how ML can be used to forecast material quantities, labour requirements, and delivery schedules, while blockchain can be used as the trusted system to verify the authenticity of data sets without reliance on a trusted third party; furthermore, ML code can be stored in a smart contract or decentralised repository, which can be designed to allow authorised parties to jointly contribute to updating and verifying the code through consensus [88]. ML is used in construction for statistical decision making, irregularity detection, and deriving insight from historic records [89]. Woo, et al., identified five software applications that use ML in the construction industry: (1) GenMEP, by Building Systems Planning, which uses ML for the automation of mechanical, electrical, and plumbing data in a Revit model; (2) BIM 360 IQ, by Autodesk, which uses ML to forecast and calculate the impact of subcontractor risks in construction projects; (3) SmartTag, by Smartvid.io, which uses ML to automate the labelling/tagging of site assets from pictures and videos; (4) Smart Construction, by Komatsu and NVIDIA, which uses ML to simulate the construction process for health and safety and programme analysis; and (5) IBM Watson IoT, which uses ML for proposing energy efficiency and occupancy enhancing solutions in buildings [89].
_Internet of things (IoT)_ (discussed in 31 documents). Wang, et al., discussed how IoT and blockchain can potentially integrate with building information modelling (BIM) to provide a central hub for managing and authenticating data received from built environment sensors; furthermore, the BIM model can be used to map the position of each sensor in a digital model, which provides a 3-D map for maintenance suppliers to utilise [63]. IoT can also be fitted onto the wearables of personnel on construction sites to provide quantitative insight into the environmental conditions and geographic positioning of on-site workers, with blockchain used to hash and timestamp data received from the IoT [30]. Fu & Zhu proposed a smart city framework which incorporates the use of IoT to provide a system that integrates and monitors geographic, safety, and weather data, which altogether feed a user interface providing live analytics for use in construction and asset management [75].
_4.6. Energy and carbon footprint_

The energy and carbon footprint subject area includes four categories and consists of 38 documents altogether. This section focuses on how blockchain can integrate as part of a system to better manage energy, renewables, and carbon.
_Peer-to-peer (P2P) energy markets_ (discussed in 30 documents). P2P energy markets are designed around homeowners buying and selling excess renewable electricity through a local network, which provides neighbourhoods with self-sufficiency and promotes decarbonisation [90]. Esmat, et al., proposed a conceptual framework for a P2P energy marketplace hosted on blockchain, which includes automated uniform pricing and real-time settlements [91]. Ableitner, et al., conducted a 4-month field study of 37 households in Switzerland to assess the outcome of a micro-grid prototype, a joint effort between academia and industry; each of the households was supplied with renewable energy production technologies, smart meters, and a P2P energy trading application hosted on the blockchain [92]. Afterwards, the results were analysed through questionnaires, interviews, and statistics, which displayed active involvement from the participants with the blockchain application and an eagerness from the households to continue with the study after it concluded [92]. Energy trading can also occur machine-to-machine (M2M) for the purpose of achieving full automation without reliance on users to authorise each trade, as shown in a conceptual framework by Sikorski, et al., which included a study of two energy suppliers operating in tandem to provide consumers with the most economically priced electricity [13]. Despite the immutable property of blockchain, P2P markets are at potential risk from producers manipulating the power measurements recorded at connection points; however, to mitigate this, Saha, et al., proposed a blockchain-based distributed verification algorithm that penalises inconsistent measurements of current [93].
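The automated uniform pricing in [91] refers to market clearing in which all matched trades settle at a single price. The sketch below clears hypothetical buy and sell orders at the point where cumulative supply meets demand; the orders and prices are invented for the example, and real designs add settlement, fees, and grid constraints.

```python
# Hypothetical orders: (price in cents/kWh, quantity in kWh)
buy_orders = [(22, 5), (20, 8), (18, 4), (15, 6)]   # willingness to pay
sell_orders = [(12, 6), (16, 7), (19, 5), (24, 3)]  # asking prices

# Expand to per-kWh quotes and sort: highest bids against lowest asks.
bids = sorted([p for p, q in buy_orders for _ in range(q)], reverse=True)
asks = sorted([p for p, q in sell_orders for _ in range(q)])

# Match kWh while the marginal buyer still outbids the marginal seller.
traded = 0
while traded < min(len(bids), len(asks)) and bids[traded] >= asks[traded]:
    traded += 1

# Uniform clearing price: midpoint of the last matched bid/ask pair, so
# every traded kWh settles at the same price regardless of the quote.
price = (bids[traded - 1] + asks[traded - 1]) / 2
print(f"cleared {traded} kWh at {price} cents/kWh")
```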
_Smart grids_ (discussed in 29 documents). 'Peer-to-peer (P2P) energy' and 'smart grids' are discussed interchangeably; however, the former relates to trading markets, while the latter relates to energy infrastructure and smart meters. The integration between decentralised micro-grids and the main power grid is made possible through a demand-side management (DSM) application proposed by Noor et al., whereby consumers are able to supply their own smart energy appliances and battery storage and utilise the DSM application to connect their local grid to the main grid [94]. Christidis, et al., conducted a case study of 63 solar-panel-fitted homes, situated in Texas, United States, which compared the efficiency of a semi-centralised versus a decentralised energy grid market, with the former offering high transaction speeds and lower security, while the latter offered low transaction speeds and higher security; this resulted in the blockchain approach being less efficient due to its high latency in processing transactions [81]. A similar framework was proposed by Foti & Vavalis, which investigated how a blockchain-based smart grid would perform with 1000 participants transacting on the Ethereum test network; this resulted in the centralised grid being more efficient at providing lower-cost electricity due to the mining fees associated with blockchain. However, when factoring in the lifecycle cost of managing systems, the decentralised approach was discussed as potentially being more cost-effective and resilient to external threats such as cyber-attacks [95].
_Renewable energy solutions_ (discussed in 3 documents). The energy industry is experimenting with new business models that transition from centralised to decentralised, including the integration of smart devices, micro-grids, blockchain, and energy recycling technologies [96]. A combined heat and power (CHP) system provides energy recycling through combining electricity and heat generation into one system, which integrates fittingly with renewable production technologies such as photovoltaics and wind turbines for the purpose of reducing carbon footprints [97]. Furthermore, in the event of natural disasters such as flooding, high winds, earthquakes, wildfires, snow/ice, and extreme temperatures, CHP maintained performance most consistently in comparison with photovoltaics, wind turbines, standby generators, and biogas [97]. The demand for renewable energy increases with the depletion of oil and the rise in global warming. Perrons et al. stated that the geothermal energy sector has received pressure from stakeholders to innovate renewable production methods and management systems, with blockchain discussed as a potential candidate to improve the software aspect of this [98]. Keivanpour investigated two off-shore wind farms in the United Kingdom, called Robin Rigg and Walney Phase 1, and concluded that the current delivery method for industrial-scale renewables is unnecessarily expensive due to longstanding supply chain processes, and discussed the innovation potential of blockchain, the Internet of Things, and big data [99].
_Carbon accounting and decarbonisation_ (discussed in 7 documents). Khaqqi, et al., proposed a carbon emission trading framework, where a government organisation would issue construction companies with a limited number of carbon credits to expend on a construction project, whereby each credit represents a tonne of carbon emissions; furthermore, companies are able to buy or sell excess carbon credits through a decentralised online marketplace, which incentivises renewable companies while penalising non-renewable companies [27]. Rodrigo, et al., conducted interviews with three industry practitioners, each with over 13 years of experience in information technology, which concluded that the inherent properties of blockchain, such as auditability, security, and decentralisation, make it a suitable tool for embodied carbon estimating [100]. Hua, et al., proposed an energy trading framework that rewards carbon credits to prosumers in a micro-grid network, whereby energy-producing technologies are linked to the blockchain to record the carbon footprint at the time of production; furthermore, each prosumer is provided a set quantity of carbon credits which they are permitted to expend during production, which incentivises prosumers to act sustainably [101].
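A cap-and-trade scheme like the one in [27] reduces to a ledger that enforces a fixed allocation: emissions retire credits, and surplus credits can be transferred between companies. The sketch below shows that core invariant in Python; the company names and allocations are hypothetical, and an on-chain version would implement the same rules as a token contract.

```python
class CarbonLedger:
    """Each credit represents one tonne of CO2 the holder may emit."""

    def __init__(self, allocations):
        self.balances = dict(allocations)  # issued by the regulator

    def emit(self, company, tonnes):
        """Record emissions by retiring credits; refuse to exceed the cap."""
        if self.balances.get(company, 0) < tonnes:
            raise ValueError(f"{company} exceeds its carbon cap")
        self.balances[company] -= tonnes

    def transfer(self, seller, buyer, tonnes):
        """Trade surplus credits on the marketplace."""
        if self.balances.get(seller, 0) < tonnes:
            raise ValueError(f"{seller} has insufficient credits to sell")
        self.balances[seller] -= tonnes
        self.balances[buyer] = self.balances.get(buyer, 0) + tonnes

ledger = CarbonLedger({"GreenBuild": 100, "HeavyCiv": 100})
ledger.emit("HeavyCiv", 90)                    # near its cap
ledger.transfer("GreenBuild", "HeavyCiv", 30)  # buys surplus credits
ledger.emit("HeavyCiv", 35)                    # now affordable
print(ledger.balances)  # {'GreenBuild': 70, 'HeavyCiv': 5}
```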
_4.7. Decentralised organisations_

The decentralised organisations subject area is comprised of four categories and consists of 19 documents altogether. This section is focused on decentralised services and autonomous organisations. Some of the topics in this section are more general-purpose than those in the previous sections; nevertheless, they included strong overlap with the construction industry, and each category was discussed several times in the reviewed documents.
_Decentralised Autonomous Organisation (DAO)_ (discussed in 5 documents). A DAO is an autonomous blockchain entity with decentralised governance at its core, which rewards users with tokenised incentives for participating in the network and operates entirely through smart contracts [102]. The construction industry is particularly known for incurring change orders and programme alterations, which is problematic for smart contracts due to their unalterable properties once deployed; furthermore, translating written agreements into codified form creates linguistic challenges between contract managers and programmers, whereby each party may not understand the industry-specific cultural differences of the other, such as terminologies and processes [68]. Dounas, et al., produced a prototype which utilised a DAO and smart contracts to automate the awarding of works for architectural design submissions, involving a simulated study where stakeholders submit a request for a built environment asset through a DAO platform, followed by the submission of designs from prospective contractors or architects, and finally the autonomous calculation of the winning proposal through a predefined scoring system and the awarding of work through a smart contract [103]. Similarly, a DAO also has the potential to integrate with the construction or operations phase of a built asset, through semi-automating the procurement process for obtaining new materials or replacement parts, whereby the DAO is used as the medium for connecting prospective suppliers to new work, managing payments, cross-checking compliance certificates, and quantitatively assessing the risk of each supplier through their track record of delivered works [104].
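The "predefined scoring system" in Dounas et al.'s prototype [103] can be pictured as a deterministic function that any participant can re-run to verify the award. The sketch below scores hypothetical design submissions against weighted criteria and selects the winner; the criteria, weights, and submissions are invented for the example.

```python
# Weighted award criteria fixed in advance (i.e., encoded in the smart
# contract before any submissions arrive), so scoring is auditable.
weights = {"cost": 0.4, "sustainability": 0.35, "aesthetics": 0.25}

# Hypothetical submissions with criterion scores normalised to [0, 1].
submissions = {
    "studio-a": {"cost": 0.8, "sustainability": 0.6, "aesthetics": 0.9},
    "studio-b": {"cost": 0.9, "sustainability": 0.7, "aesthetics": 0.5},
    "studio-c": {"cost": 0.6, "sustainability": 0.9, "aesthetics": 0.7},
}

def score(entry):
    """Deterministic weighted sum; identical inputs always give the same award."""
    return sum(weights[c] * entry[c] for c in weights)

winner = max(submissions, key=lambda name: score(submissions[name]))
for name, entry in submissions.items():
    print(name, round(score(entry), 3))
print("award work to:", winner)  # studio-a in this example
```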
_Identity and certificate authentication_ (discussed in 5 documents). The fundamental properties of blockchain (traceability, transparency, and immutability) make it a suitable technology for incorporating identity authentication services, as centralised systems are prone to hacks and data manipulation [86]. Private blockchains include privacy controls as a fundamental feature of their protocol, whereas public blockchains include cryptographic functions such as zero-knowledge proofs which permit private transactions to occur on a public network; however, this incurs additional transaction fees on top of the existing mining fee [82]. Nawari & Ravindran discussed how the private blockchain Hyperledger is suited to identity management services in construction due to its modular architecture, which allows automated compliance checking of identities on the network [53]. Similarly, Shojaei, et al., discussed how Hyperledger's certificate authority can be used to maintain an active list of supply chain participants in a construction project, which can be reused across multiple projects [105].

Blockchain allows the creation of non-fungible tokens (NFTs), which can be used as digital certificates that represent the ownership of a physical asset; furthermore, an NFT can hold additional data such as title deeds, lifecycle data, building certificates, and any other associated data [106]. The implications include substantial reductions in data retrieval effort for insurers, estate agents, facility managers, and building inspectors [72]. Due to the immutable properties of blockchain, data stored in the NFT is append-only, thus leaving an intact evidentiary trail of data throughout its lifespan.
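The append-only property described above can be illustrated directly: new records for an asset certificate can only be added, never edited or removed. The sketch below models that behaviour in Python; the asset and record fields are hypothetical, and an on-chain NFT would enforce the same rule through the token contract rather than application code.

```python
import time

class AssetCertificate:
    """An append-only record of documents attached to one physical asset."""

    def __init__(self, asset_id, owner):
        self.asset_id = asset_id
        self._entries = [{"event": "minted", "owner": owner,
                          "timestamp": time.time()}]

    def append(self, event, **data):
        """Add a new entry; existing entries are never modified."""
        self._entries.append({"event": event, "timestamp": time.time(), **data})

    def history(self):
        # Return copies so callers cannot mutate the stored trail.
        return [dict(e) for e in self._entries]

cert = AssetCertificate("building-17", owner="Acme Estates")
cert.append("building certificate issued", reference="BC-2021-044")
cert.append("title transferred", owner="Beta Holdings")
for entry in cert.history():
    print(entry["event"])
```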
_Financial technology & banks_ (discussed in 7 documents). The emergence of decentralised finance in 2020 allows banks to extend their portfolios to include additional commercial products for customers [12]. Yao, et al., proposed a conceptual framework which discussed the viability of banks providing blockchain-based supply chain finance, using blockchain to verify the regulatory compliance of their customers, track signed agreements, and trace pending invoices [107]. Blockchain can be used to maintain an accurate and irrefutable record of transactions without risk of ledger inconsistencies, such as reconciliation errors and double spending; furthermore, banks can potentially provide escrow services through smart contracts, which allows transacting parties to formalise agreements amongst themselves while under oversight from regulatory controls, ensuring compliance with fair business terms and legal standards [15]. Smart contracts also include the potential to automate tax duties, such as with the legal movement of goods across international borders, whereby compliance certificates would be autonomously awarded upon payment of taxes [108].
_Crowdsourcing_ (discussed in 4 documents). Blockchain-based crowdsourcing is a decentralised alternative for acquiring project funding, with benefits such as providing opportunities for skilled talent in economically disadvantaged nations, reduced intermediaries, and codified agreements with auditable terms for the purpose of supporting fair contract executions [109]. Public blockchains provide free protocol infrastructure that allows users to develop platforms and raise funds through initial coin offerings (ICOs), which are comparable to the initial public offerings (IPOs) made on stock markets when private companies transition to public limited companies [110]. However, ICOs have been a target for criminal activity due to their ability to raise funds from anonymous users and a lack of regulation checks, such as know your customer (KYC) and anti-money laundering (AML). Hassija, et al., discussed how the crowdfunding platform BitFund allows investors to propose a problem to a public community of programmers together with project-specific parameters such as budget, timescale, and use case; afterwards, the awarding of works is conducted algorithmically through smart contracts to ensure a fair selection process for the development team [109].
**5. Discussion**

An exploratory approach was implemented in this review for the purpose of understanding which categories in construction are most influenced by blockchain. This review explored 33 application categories of blockchain in construction. Each category was substantiated by a minimum of three documents. These categories were further organised into seven subject areas: (1) procurement and supply chain, (2) design and construction, (3) operations and life cycle, (4) smart cities, (5) intelligent systems, (6) energy and carbon footprint, and (7) decentralised organisations. When assessing the types of data collection used in the reviewed documents (as shown in Fig. 8), synonymous data collection terminologies were merged for simplicity; for example, conceptual frameworks included conceptual models and theoretical frameworks, and similarly, proofs of concept (PoC) included pilot studies and prototypes. The first three subject areas of this review are sequential stages that occur in a construction project: subject area one, _procurement and supply chain_, includes implementing blockchain in the digital tendering process [111], contract and cash flow management [43], and automated checking of compliance with standards [68]. Subject area two, _design and construction_, incorporated using blockchain for trusted data exchanges [112] and traceability of deliverables throughout the supply chain [113]. Subject area three, _operations and life cycle_, included how blockchain can be used as part of the assessment and management of a built asset during its operational expenditure stage [114]. Subject area four, _smart cities_, and subject area five, _intelligent systems_, included a macro-orientated approach, assessing how multiple built environment assets and services interact through a smart city network, which includes the interoperability of various systems such as utilities [115], transport [116], the Internet of Things (IoT) [117], and smart technologies [88]. Subject area six, _energy and carbon footprint_, focused attention on peer-to-peer energy trading models [118], sustainable technologies for the built environment [119], and carbon accounting strategies [120]. Finally, subject area seven, _decentralised organisations_, incorporated decentralised autonomous organisations (DAO) and decentralised services [103]. A DAO is difficult to classify precisely in the current environment, as its definition is dynamic in translation and its development is constantly evolving; however, in construction, many of its activities (for now) overlap with the responsibilities of a main contractor, therefore, for simplicity, a DAO can be described as a decentralised contractor.
The aforementioned 33 categories and seven subject areas were not distinctly siloed and included substantial topical crossovers. For example, the supply chain management category overlapped with all of the subject areas; however, based on the scientometric analysis conducted (as per Fig. 6), it was positioned most quantitatively in the procurement subject area, due to its high number of shared links with the other categories in that area [121,122]. IoT also strongly overlapped with several subject areas, including smart cities [7], energy and carbon [123], design and construction [113], procurement [85], and decentralised organisations [79]; however, IoT was placed in the intelligent systems subject area due to its strong correlation with the other categories in this area. The categories electronic document management systems (EDMS) and digital/automated contracts were placed in separate subject areas despite their similarities, as the former is characterised by the digital management of documents on a centralised system, while the latter utilises smart contracts on a decentralised protocol, and thus they have dissimilar systems architectures [124].
A smart medical record system, which includes managing patient records and sharing healthcare data with hospitals, is a category supported by two authors [15,75]; however, blockchain for healthcare is an entirely different subject area and a vast topic suited to a separate literature review altogether [115]. Health and safety monitoring of site conditions and historic records of on-site accidents were discussed in two documents [79,102]; however, despite the practical applications in construction, this topic also lacked content for substantiation. Another topic that was excluded despite interest in two documents is smart governance, which incorporates governmental organisations implementing blockchain to automate the compliance checking and auditing of built environment assets [17,115]. Multi-category applications of blockchain in construction that were not included due to their general-purpose nature include transaction immutability, digital notarisation, decentralised applications (dApps), smart contracts, and information sharing, as these topics are effectively already integrated within all of the reviewed categories and do not require itemising [102].
As blockchain is a decentralised technology, appropriate incentivisation techniques must be applied to encourage platform interaction through a crypto-economic model [102]. The integration of blockchain in enterprise in the current environment is reliant on dApps harmonising with existing centralised systems; however, as blockchain matures, the transition to complete decentralisation is likely to increase. This assumption is based on assessing the growth and expansion of blockchain in construction since its emergence in academic literature, and the intensifying global interest in blockchain. In a report regarding the impact of blockchain, it was identified as potentially transforming 58 industries globally, including the construction industry [125].
Business operations are fundamentally based on risk management activities, which include economic risks through investments in new business models, social risks through job losses, legal risks through dispute resolution and corporate liability, environmental risks through sustainability and ecological sensitivity, and technical risks through increased pressure to integrate systems and provide data-driven solutions [85]. Blockchain mitigates against centralised hacks, data manipulation, and accounting errors, and provides a foundation for trusted data without reliance on a trusted third party [126]. An area which lacked discussion in the reviewed documents was the integration capability of blockchain with existing enterprise systems, as blockchain is considered a high-risk technology due to its decentralised design and lack of standards. Trust is the term that appeared most frequently in the reviewed literature when describing the characteristics of blockchain, such as "stakeholder trust" [122], "peer-to-peer trust" [127], "trust in collaboration" [128], "information trust" [26], "removal of trusted authority" [11], and "trusted distributed ledger" [129]. Other commonly used terms include transparency, traceability, immutability, security, automation, auditability, decentralisation, and disintermediation [9,118–120,123,130,131].
Over the course of 2017–2020, the number of new documents published on blockchain in construction grew at an average rate of 184% per year; however, the sample of years is small, and this level of growth cannot be maintained long term. A 10-year period would provide a more statistically comprehensive result. Fig. 4 documented the annual expansion of new categories on the topic since its emergence in 2017, which displayed six new categories in 2017, nine in 2018, 13 in 2019, followed by five in 2020. It is likely that the expansion of new categories on the topic has almost reached a plateau; therefore, over the next consecutive years, it is envisaged that existing categories will undergo maturity as more attention is focused on testing and developing earlier ideations.
**6. Conclusion**

New academic documents on blockchain in construction increased at an average of 184% each year since 2017, amounting to an accumulated total of 121 documents at the time of writing this article in 2021. An exploratory approach was implemented to investigate all 121 publications and examine the contemporary environment of the topic. This review identified 33 application categories, which were organised into seven subject areas: (1) procurement and supply chain; (2) design and construction; (3) operations and life cycle; (4) smart cities; (5) intelligent systems; (6) energy and carbon footprint; and (7) decentralised organisations. To support the literature review, statistics and scientometrics were incorporated to display the progression of the topical area. This includes visual maps that display the co-occurrences of the categories (as shown in Fig. 6) and the data collection types implemented in the reviewed documents (shown in Fig. 9). A complete list of the 121 reviewed documents, along with their category coverage, document type, data collection type, and impact factor, is provided in the shared Google spreadsheet at the link below.
[https://docs.google.com/spreadsheets/d/1V4UICRdoyWycaGENH9rnuxukRNQJFIArQ-feV7NM0a4/edit?usp=sharing](https://docs.google.com/spreadsheets/d/1V4UICRdoyWycaGENH9rnuxukRNQJFIArQ-feV7NM0a4/edit?usp=sharing)
Limitations included using only one scientific database, Scopus, due
to the inconsistencies that emerged when amalgamating information
from various scientific databases for use in visual mapping software. In a
comparison of the search results from seven scientific databases on the
topic of blockchain in construction, Scopus overshadowed
its competition by a large margin; furthermore, up to 85% of the
documents indexed in other scientific databases were already indexed in
Scopus. Another limitation was the restricted capacity to conduct an
in-depth investigation of any one particular subject area within the topic;
this was due to the exploratory nature of the study, which covered a
wide range of application categories. Despite this, the findings provide
a solid foundation for aggregating all of the research areas of blockchain
in construction in the contemporary environment.
Content for this exploratory review was obtained predominantly
from documents published from 2017 to 2020, as this article was written
in early 2021; further work includes an extended review following the
progression of the topic over the coming years.
**Declaration of Competing Interest**
The authors declare that they have no known competing financial
interests or personal relationships that could have appeared to influence
the work reported in this paper.
**Acknowledgments**
The authors would like to acknowledge the sponsors, University
College London (UCL) and Costain PLC, who are jointly funding the PhD of
the primary author.
**Appendix**
_Search query one_
Search query one restricted results to the ISSN numbers of all academic
journals in the subject areas of architecture, building & construction, and
civil & structural engineering, combined with specific keywords, i.e.
_TITLE-ABS-KEY("blockchain*" OR "block chain*" OR "distributed ledger*"_
_OR "smart contract*")_, which was entered into the Scopus advanced search to
obtain the results. A short programmatic sketch of how such a string can be
assembled follows the full query below.
The exact string of text for query one consists of:
_ISSN(08950563 or 17433509 or 23848898 or 23639075 or 20361602_
_or 20299990 or 23352000 or 17246768 or 22150900 or 22150897 or_
_24208213 or 23851546 or 20966717 or 08602395 or 25448870 or_
_19401507 or 19401493 or 02658135 or 0142694X or 23527102 or_
_15417808 or 15417794 or 23520124 or 14356066 or 09349839 or_
_15583066 or 15583058 or 17527589 or 17452007 or 1365232X or_
_09699988 or 15710882 or 17453755 or 14770857 or 14714175 or_
_17481317 or 20952635 or 09603182 or 15731529 or 23635150 or_
_23635142 or 23537396 or 20952430 or 20952449 or 20755309 or_
_1000131X or 10760431 or 19435568 or 07181299 or 07188358 or_
_02632772 or 00038628 or 17589622 or 26316862 or 19387806 or_
_02663511 or 09560599 or 20598033 or 20297947 or 20297955 or_
_22133038 or 2213302X or 19434618 or 15526100 or 21952701 or_
_18864805 or 18877052 or 13001884 or 15224600 or 13472852 or_
_13467581 or 23034521 or 14370980 or 01715445 or 18269745 or_
_22832998 or 1450569X or 22178066 or 22882987 or 12268046 or_
_2239267X or 21753369 or 18818153 or 13404202 or 19461194 or_
_19461186 or 24751448 or 2475143X or 00200883 or 19883234 or_
_22321500 or 18234208 or 22502157 or 22502149 or 18558399 or_
_03536483 or 10067930 or 13602365 or 14664410 or 18285961 or_
_01466518 or 22390243 or 07182309 or 16744764 or 01682601 or_
_22546103 or 11336137 or 18818188 or 13419463 or 00379808 or_
_20612710 or 03055477 or 17496292 or 19895313 or 16952731 or_
_22889930 or 22347224 or 23321091 or 23321121 or 00139661 or_
_15882764 or 23202661 or 00448680 or 13591355 or 14740516 or_
_07187262 or 0718204X or 13028324 or 19346026 or 15499715 or_
_20696469 or 20690509 or 18085741 or 22147233 or 22123202 or_
_00392553 or 00038504 or 20507836 or 20507828 or 10464883 or_
_1531314X or 14665123 or 07380895 or 20455895 or 20455909 or_
_2325159X or 23251581 or 23870346 or 23410531 or 0066622X or_
_13300652 or 0007473X or 20137087 or 14929600 or 02585316 or_
_23251395 or 23251379 or 25317644 or 26117487 or 07160852 or_
_07176996 or 20505833 or 11239247 or 23251662 or 23251670 or_
_23251638 or 2325162X or 21736723 or 20419112 or 20419120 or_
_0951001X or 23413050 or 23412747 or 23090103 or 19360886 or_
_19346832 or 02677768 or 20390491 or 18751490 or 18751504 or_
_15263819 or 16068238 or 01696238 or 23093072 or 23074485 or_
_12282472 or 19357001 or 00038520 or 0003858X or 00038695 or_
_03899160 or 10934421 or 11249064 or 22546332 or 11385596 or_
_21716897 or 21731616 or 23409711 or 23867027 or 23322578 or_
_23322551 or 00012505 or 08950563 or 01932527 or 17433509 or_
_01642006 or 7314906 or 03622479 or 10412336 or 20361602 or_
_20299990 or 23352000 or 24208213 or 23851546 or 20966717 or_
_00088846 or 03062619 or 09589465 or 18736785 or 03605442 or_
_03787788 or 01674730 or 08867798 or 03601323 or 17499518 or_
_17499526 or 09265805 or 07339445 or 01407007 or 0143974X or_
_10900268 or 19435614 or 09500618 or 15452263 or 15452255 or_
_02638231 or 13595997 or 18716873 or 10840702 or 19435592 or_
_19401507 or 19401493 or 09056947 or 16000668 or 09613218 or_
_14664321 or 1570761X or 03931420 or 17448980 or 15732479 or_
_07339364 or 19435533 or 08991561 or 13632469 or 1559808X or_
_12299367 or 19968744 or 19963599 or 1751763X or 00249831 or_
_23527102 or 14644177 or 17517648 or 08893241 or 01446193 or_
_1466433X or 15417808 or 15417794 or 19435509 or 08873828 or_
_23520124 or 16713664 or 1993503X or 02194554 or 09517197 or_
_17517605 or 0889325X or 17527589 or 17452007 or 13694332 or_
_1365232X or 09699988 or 20962754 or 24679674 or 14371006 or_
_00059900 or 12254568 or 23767642 or 00138029 or 14006529 or_
_14036835 or 19883226 or 04652746 or 20714726 or 20710305 or_
_14770857 or 14714175 or 2374474X or 23744731 or 2041420X or_
_20414196 or 00056650 or 2287531X or 22875301 or 15623599 or_
_20952635 or 13468014 or 13473913 or 22049029 or 14770849 or_
_01436244 or 1816112X or 10002383 or 23635150 or 23635142 or_
_23537396 or 20755309 or 17550750 or 17550769 or 22973362 or_
_1000131X or 09650911 or 17517702 or 19375247 or 19375255 or_
_10760431 or 19435568 or 23984708 or 17512549 or 17562201 or_
_24705322 or 24705314 or 10006869 or 17442591 or 02632772 or_
_21628246 or 01430750 or 12266116 or 20369913 or 2533168X or_
_18670520 or 18670539 or 23644176 or 23644184 or 02630923 or_
_20484046 or 14613484 or 08879672 or 15503984 or 14733315 or_
_14371049 or 00389145 or 10168664 or 22143998 or 03405044 or_
_09328351 or 14370999 or 17517664 or 14784637 or 22135812 or_
_22135820 or 17595916 or 17595908 or 18236499 or 21804222 or_
_1351010X or 20598025 or 10034722 or 1028365X or 19969015 or_
_21862990 or 21862982 or 17566932 or 17508975 or 18748368 or_
_2214398X or 02663511 or 09560599 or 20598033 or 15551369 or_
_1822427X or 18224288 or 09766308 or 09766316 or 14780771 or_
_15732487 or 17448999 or 22133038 or 2213302X or 19434618 or_
_15526100 or 22110844 or 22110852 or 20466102 or 20466099 or_
_22132031 or 2213204X or 10079629 or 24123811 or 14258129 or_
_13472852 or 13467581 or 20569459 or 20569467 or 10840680 or_
_14370980 or 01715445 or 21487847 or 20937628 or 2093761X or_
_16876261 or 1687627X or 07177925 or 18764037 or 18764029 or_
_15744078 or 00467316 or 00012491 or 03649962 or 07362501 or_
_21688710 or 19302991 or 19302983 or 17329353 or 18818153 or_
_13404202 or 19461194 or 19461186 or 16719379 or 22488723 or_
_01205609 or 13556207 or 00200883 or 19883234 or 22321500 or_
_18234208 or 16718879 or 22502157 or 22502149 or 10067930 or_
_12101389 or 01466518 or 07162952 or 07185073 or 22390243 or_
_12295515 or 22342842 or 19853807 or 16744764 or 15460118 or_
_01998595 or 00194565 or 18818188 or 13419463 or 26007959 or_
_21803242 or 09700137 or 22889930 or 22347224 or 07224397 or_
_09490205 or 07224400 or 03734331 or 20696469 or 20690509 or_
_22147233 or 22123202 or 00060208 or 02446014 or 00392553 or_
_08919976 or 14665123 or 03552705 or 13003453 or 03002721 or_
_03410552 or 11790776 or 20137087 or 00333840 or 03731995 or_
_00105317 or 09745904 or 15280187 or 09502289 or 15337316 or_
_00105333 or 02677768 or 00209384 or 23793244 or 23793252 or_
_10450343 or 09698213 or 08919526 or 09621784 or 0267825X or_
_01426168 or 15274055 or 0306400X or 25673742 or 13693999 or_
_20541236 or 02702932 or 18672442 or 08950563 or 10459065 or_
_01932527 or 17433509 or 08938717 or 01610589 or 13395629 or_
_21875103 or 20299990 or 23352000 or 25901982 or 20966717 or_
_03062619 or 0968090X or 00431354 or 18792448 or 01912615 or_
_15265447 or 00411655 or 13665545 or 08883270 or 10961216 or_
_18736785 or 03605442 or 07232632 or 09658564 or 03787788 or_
_01674730 or 12086010 or 00083674 or 14678667 or 10939687 or_
_03601323 or 00494488 or 15729435 or 17499518 or 17499526 or_
_02638223 or 09265805 or 0734743X or 13619209 or 07339445 or_
_18737323 or 01410296 or 00457949 or 0143974X or 10900268 or_
_19435614 or 15568334 or 15568318 or 09500618 or 15452263 or_
_15452255 or 00207403 or 02638231 or 13595997 or 18716873 or_
_22106707 or 10840702 or 19435592 or 20460430 or 20460449 or_
_02677261 or 09613218 or 14664321 or 1570761X or 13698478 or_
_02668920 or 18784275 or 22143912 or 09641726 or 1361665X or_
_07339496 or 17448980 or 15732479 or 00380806 or 07339364 or_
_19435533 or 08991561 or 13632469 or 1559808X or 09204741 or_
_15731650 or 12299367 or 01676105 or 14680629 or 21647402 or_
_18142079 or 00221686 or 08873801 or 16449665 or 21905479 or_
_21905452 or 1751763X or 00249831 or 07339429 or 19437900 or_
_23527102 or 07339453 or 14644177 or 17517648 or 08893241 or_
_15376532 or 15210596 or 15376494 or 00936413 or 10001964 or_
_15706443 or 15417808 or 15417794 or 05785634 or 17936292 or_
_19435509 or 08873828 or 23520124 or 19971400 or 19966814 or_
_15230406 or 14356066 or 09349839 or 1477268X or 10298436 or_
_19435460 or 0733950X or 23253444 or 23267186 or 22341315 or_
_19760485 or 11749857 or 22150986 or 16713664 or 1993503X or_
_21996679 or 21996687 or 10006915 or 07339437 or 19434774 or_
_02194554 or 0889325X or 13694332 or 19435584 or 10840699 or_
_1365232X or 09699988 or 24680672 or 20962754 or 24679674 or_
_07339488 or 00224502 or 12254568 or 10760342 or 1943555X or_
_15276988 or 19475411 or 1947542X or 23767642 or 10007598 or_
_00138029 or 14006529 or 14036835 or 20926219 or 2005307X or_
_14647141 or 14651734 or 21999260 or 21999279 or 16742370 or_
_22886605 or 22886613 or 20714726 or 20710305 or 10693629 or_
_14770857 or 14714175 or 23350164 or 03542025 or 17481317 or_
_20957564 or 00056650 or 21964386 or 21964378 or 2287531X or_
_22875301 or 09203796 or 13894420 or 21167214 or 19648189 or_
_20771312 or 18223605 or 13923730 or 03611981 or 14488353 or_
_1816112X or 10286608 or 10290249 or 17522706 or 00396265 or_
_23635150 or 23635142 or 07339402 or 0733947X or 23537396 or_
_20952430 or 20952449 or 20755309 or 12267988 or 19763808 or_
_24732893 or 24732907 or 15397742 or 15397734 or 1000131X or_
_15361055 or 10523928 or 16797825 or 16797817 or 09650911 or_
_17517702 or 19375247 or 19375255 or 10760431 or 19435568 or_
_20796439 or 23984708 or 08931321 or 17517710 or 0965092X or_
_19491190 or 19491204 or 10709622 or 18759203 or 20927614 or_
_20927622 or 13287982 or 24705322 or 24705314 or 10017372 or_
_17350522 or 10006869 or 22918752 or 22918744 or 07339372 or_
_19437870 or 17476526 or 17476534 or 23791357 or 21653984 or_
_05536626 or 15873773 or 25735438 or 19969465 or 19969457 or_
_12266116 or 20936311 or 15982351 or 2044124X or 20441258 or_
_20369913 or 2533168X or 18670520 or 18670539 or 20950349 or_
_23644176 or 23644184 or 09715010 or 21643040 or 02630923 or_
_20484046 or 14613484 or 08879672 or 2195268X or 21952698 or_
_14784629 or 17517680 or 14733315 or 03151468 or 12086029 or_
_18213197 or 14514117 or 14371049 or 00389145 or 10168664 or_
_22143998 or 21532648 or 03405044 or 09328351 or 14370999 or_
_17517664 or 14784637 or 13354205 or 25857878 or 10535381 or_
_10044523 or 22342184 or 22342192 or 17476518 or 1747650X or_
_10375783 or 17579872 or 17579864 or 18236499 or 21804222 or_
_03535320 or 16431618 or 2449769X or 20083556 or 20086695 or_
_21967202 or 21967210 or 1028365X or 19969015 or 02663511 or_
_09560599 or 20598033 or 15551369 or 1822427X or 18224288 or_
_18657362 or 18657389 or 16878086 or 16878094 or 09766308 or_
_09766316 or 22286160 or 12302945 or 16480627 or 18224202 or_
_22133038 or 2213302X or 2191916X or 21919151 or 17587328 or_
_17587336 or 10096582 or 19434618 or 15526100 or 01376365 or_
_16879724 or 16879732 or 22110844 or 22110852 or 17881994 or_
_16711637 or 20466102 or 20466099 or 22132031 or 2213204X or_
_10263098 or 23831359 or 23832525 or 10079629 or 00025968 or_
_03502465 or 13339095 or 23144912 or 23144904 or 17517672 or_
_0965089X or 14513749 or 18207863 or 18676944 or 18676936 or_
_24123811 or 18741495 or 20588305 or 20588313 or 13472852 or_
_13467581 or 17550807 or 17550793 or 19434162 or 19434170 or_
_15630854 or 10840680 or 13365835 or 21996512 or 17550785 or_
_17550777 or 09650903 or 17517699 or 21646457 or 21646473 or_
_18245463 or 23915439 or 18632351 or 17514312 or 17514304 or_
_18713033 or 20421338 or 20421346 or 2226809X or 22235329 or_
_23539003 or 17344492 or 07177925 or 15744078 or 00467316 or_
_19930461 or 2225157X or 2229838X or 19302991 or 19302983 or_
_01878336 or 20072422 or 17329353 or 17386225 or 22882235 or_
_16719379 or 10212019 or 00200883 or 19883234 or 17085284 or_
_1000582X or 23746793 or 22502157 or 22502149 or 1006754X or_
_21928253 or 07162952 or 07185073 or 17400694 or 18753507 or_
_17568404 or 13681494 or 12295515 or 22342842 or 18029876 or_
_16744764 or 00465828 or 19918747 or 22243429 or 14439255 or_
_00194565 or 14412713 or 18022308 or 26007959 or 21803242 or_
_09700137 or 22889930 or 22347224 or 23321091 or 23321121 or_
_22926062 or 23361182 or 1802680X or 16106199 or 13028324 or_
_00060208 or 12104027 or 18052576 or 00174653 or 01580728 or_
_00392553 or 09568700 or 20449283 or 02755823 or 14665123 or_
_02126389 or 0923666X or 10093443 or 10155856 or 13003453 or_
_03002721 or 00137308 or 12313726 or 21959870 or 21959862 or_
_0376723X or 20817738 or 23007591 or 18825974 or 13443755 or_
_00333840 or 17593433 or 23662565 or 23662557 or 03731995 or_
_00105317 or 09745904 or 17452058 or 00124419 or 22783075 or_
_23793244 or 23793252 or 00263982 or 01665766 or 00333735 or_
_00348619 or 16954408 or 0149337X or 09698213 or 00097853 or_
_08857024 or 03600556 or 08919526 or 0267825X or 01426168 or_
_00284939 or 22113444 or 13693999 or 20541236) AND TITLE-ABS-KEY_
_("blockchain*" OR "block chain*" OR "distributed ledger*" OR "smart_
_contract*")_
_Search query two_
Search query two used a simpler method: one of the predefined subject
areas available on Scopus, followed by specific keywords. The limitation
of this search query is the high number of irrelevant documents that
accompany the results.
The string of text for query two consists of:
_SUBJAREA(ENGI)_ _AND_ _TITLE-ABS-KEY("Blockchain"_ _AND_
_"Construction")._
**References**
[[1] I. Grigg, Triple Entry Accounting. https://nakamotoinstitute.org/triple-entry-acc](https://nakamotoinstitute.org/triple-entry-accounting/)
[ounting/, 2005 accessed 6th February 2020.](https://nakamotoinstitute.org/triple-entry-accounting/)
[2] R.N. Chen, Y.N. Li, Y. Yu, H.L. Li, X.F. Chen, W. Susilo, Blockchain-based dynamic
provable data possession for smart cities, Inst. Electr. Electron. Eng. Internet
[Things J. 7 (5) (2020) 4143–4154, https://doi.org/10.1109/jiot.2019.2963789.](https://doi.org/10.1109/jiot.2019.2963789)
[3] M. Foti, C. Mavromatis, M. Vavalis, Decentralized blockchain-based consensus for
[optimal power flow solutions, Appl. Energy 283 (2020), https://doi.org/](https://doi.org/10.1016/j.apenergy.2020.116100)
[10.1016/j.apenergy.2020.116100.](https://doi.org/10.1016/j.apenergy.2020.116100)
[4] Y. Wang, C.H. Chen, A. Zghari-Sales, Designing a blockchain enabled supply
[chain, Int. J. Prod. Res. 59 (5) (2020) 1450–1475, https://doi.org/10.1080/](https://doi.org/10.1080/00207543.2020.1824086)
[00207543.2020.1824086.](https://doi.org/10.1080/00207543.2020.1824086)
[5] S. Karale, V. Ranaware, Applications of blockchain technology in smart city
development: a research, Int. J. Innov. Technol. Explor. Eng. 8 (11) (2019)
[556–559, https://doi.org/10.35940/ijitee.K1093.09811S19.](https://doi.org/10.35940/ijitee.K1093.09811S19)
[6] M. Hribernik, K. Zero, S. Kummer, D.M. Herold, City logistics: towards a
blockchain decision framework for collaborative parcel deliveries in micro-hubs,
[Transp. Res. Interdisc. Perspect. 8 (2020), https://doi.org/10.1016/j.](https://doi.org/10.1016/j.trip.2020.100274)
[trip.2020.100274.](https://doi.org/10.1016/j.trip.2020.100274)
[7] S. Perera, S. Nanayakkara, M.N.N. Rodrigo, S. Senaratne, R. Weinand, Blockchain
technology: is it hype or real in the construction industry? J. Ind. Inf. Integr. 17
[(2020) https://doi.org/10.1016/j.jii.2020.100125.](https://doi.org/10.1016/j.jii.2020.100125)
[8] D. Han, C. Zhang, J. Ping, Z. Yan, Smart contract architecture for decentralized
energy trading and management based on blockchains, Energy 199 (2020),
[https://doi.org/10.1016/j.energy.2020.117417.](https://doi.org/10.1016/j.energy.2020.117417)
[9] X. Ye, M. König, Framework for automated billing in the construction industry
using BIM and smart contracts, in: Lecture Notes in Civil Engineering Vol. 98,
[Springer, São Paulo, Brazil, 2021, pp. 824–838, https://doi.org/10.1007/978-3-](https://doi.org/10.1007/978-3-030-51295-8_57)
[030-51295-8_57.](https://doi.org/10.1007/978-3-030-51295-8_57)
[[10] Paperpile, The Top List of Academic Research Databases. https://paperpile.](https://paperpile.com/g/academic-research-databases/)
[com/g/academic-research-databases/, 2021 accessed 23rd May 2021.](https://paperpile.com/g/academic-research-databases/)
[11] Ž. Turk, R. Klinc, Potentials of blockchain technology for construction
management, in: Creative Construction Conference (CCC) 2017 Vol. 196, Elsevier
[Ltd, Primosten, Croatia, 2017, pp. 638–645, https://doi.org/10.1016/j.](https://doi.org/10.1016/j.proeng.2017.08.052)
[proeng.2017.08.052.](https://doi.org/10.1016/j.proeng.2017.08.052)
[12] R. Coyne, T. Onabolu, Blockchain for architects: challenges from the sharing
[economy, Archit. Res. Q. 21 (4, 2017) 369–374, https://doi.org/10.1017/](https://doi.org/10.1017/S1359135518000167)
[S1359135518000167.](https://doi.org/10.1017/S1359135518000167)
[13] J.J. Sikorski, J. Haughton, M. Kraft, Blockchain technology in the chemical
industry: machine-to-machine electricity market, Appl. Energy 195 (2017)
[234–246, https://doi.org/10.1016/j.apenergy.2017.03.039.](https://doi.org/10.1016/j.apenergy.2017.03.039)
[14] M.S. Kiu, F.C. Chia, P.F. Wong, Exploring the potentials of blockchain application
in construction industry: a systematic review, Int. J. Constr. Manag. (2020),
[https://doi.org/10.1080/15623599.2020.1833436.](https://doi.org/10.1080/15623599.2020.1833436)
[15] B. Bhushan, A. Khamparia, K.M. Sagayam, S.K. Sharma, M.A. Ahad, N.
C. Debnath, Blockchain for smart cities: a review of architectures, integration
[trends and future research directions, Sustain. Cities Soc. 61 (2020), https://doi.](https://doi.org/10.1016/j.scs.2020.102360)
[org/10.1016/j.scs.2020.102360.](https://doi.org/10.1016/j.scs.2020.102360)
[16] J.J. Hunhevicz, D.M. Hall, Managing mistrust in construction using DLT: a review
of use-case categories for technical design decisions, in: European Conference on
[Computing in Construction, Crete, Greece, 2019, pp. 100–109, https://doi.org/](https://doi.org/10.35490/EC3.2019.171)
[10.35490/EC3.2019.171.](https://doi.org/10.35490/EC3.2019.171)
[17] J. Li, D. Greenwood, M. Kassem, Blockchain in the built environment and
construction industry: a systematic review, conceptual models and practical use
[cases, Autom. Constr. 102 (2019) 288–307, https://doi.org/10.1016/j.](https://doi.org/10.1016/j.autcon.2019.02.005)
[autcon.2019.02.005.](https://doi.org/10.1016/j.autcon.2019.02.005)
[18] R. Yang, R. Wakefield, S. Lyu, S. Jayasuriya, F. Han, X. Yi, X. Yang,
G. Amarasinghe, S. Chen, Public and private blockchain in construction business
[process and information integration, Autom. Constr. 118 (2020), https://doi.org/](https://doi.org/10.1016/j.autcon.2020.103276)
[10.1016/j.autcon.2020.103276.](https://doi.org/10.1016/j.autcon.2020.103276)
[19] J.F. Burnham, Scopus database: a review, Biomed. Digit. Libr. 3 (1, 2006) 1–8,
[https://doi.org/10.1186/1742-5581-3-1.](https://doi.org/10.1186/1742-5581-3-1)
[20] A. Aghaei Chadegani, H. Salehi, M.M. Md Yunus, H. Farhadi, M. Fooladi,
M. Farhadi, N. Ale Ebrahim, A comparison between two main academic literature
collections: Web of Science and Scopus databases, Asian Soc. Sci. 9 (5, 2013)
[18–26, https://doi.org/10.5539/ass.v9n5p18.](https://doi.org/10.5539/ass.v9n5p18)
[[21] SCImago, Scimago Journal & Country Rank. https://www.scimagojr.com/](https://www.scimagojr.com/)
accessed 24th May 2021.
[22] ScimagoResearchGroup, Description of Scimago Journal Rank Indicator, Scimago
[Research Group, 2007. https://www.scimagojr.com/SCImagoJournalRank.pdf.](https://www.scimagojr.com/SCImagoJournalRank.pdf)
[23] G.B. Ozturk, Interoperability in building information modeling for AECO/FM
[industry, Autom. Constr. 113 (2020), https://doi.org/10.1016/j.](https://doi.org/10.1016/j.autcon.2020.103122)
[autcon.2020.103122.](https://doi.org/10.1016/j.autcon.2020.103122)
[24] D. López, B. Farooq, A multi-layered blockchain framework for smart mobility
[data-markets, Transp. Res. C Emerg. Technol. 111 (2020) 588–615, https://doi.](https://doi.org/10.1016/j.trc.2020.01.002)
[org/10.1016/j.trc.2020.01.002.](https://doi.org/10.1016/j.trc.2020.01.002)
[25] H. Zhang, J. Wang, Y. Ding, Blockchain-based decentralized and secure keyless
[signature scheme for smart grid, Energy 180 (2019) 955–967, https://doi.org/](https://doi.org/10.1016/j.energy.2019.05.127)
[10.1016/j.energy.2019.05.127.](https://doi.org/10.1016/j.energy.2019.05.127)
[26] W.N. Suliyanti, R.F. Sari, Blockchain-based building information modeling, in:
2nd International Conference on Applied Engineering (ICAE), Institute of
[Electrical and Electronics Engineers Inc., Batam, Indonesia, 2019, https://doi.](https://doi.org/10.1109/ICAE47758.2019.9221744)
[org/10.1109/ICAE47758.2019.9221744.](https://doi.org/10.1109/ICAE47758.2019.9221744)
[27] K.N. Khaqqi, J.J. Sikorski, K. Hadinoto, M. Kraft, Incorporating seller/buyer
reputation-based system in blockchain-enabled emission trading application,
[Appl. Energy 209 (2018) 8–19, https://doi.org/10.1016/j.](https://doi.org/10.1016/j.apenergy.2017.10.070)
[apenergy.2017.10.070.](https://doi.org/10.1016/j.apenergy.2017.10.070)
[28] F. Xiong, R. Xiao, W. Ren, R. Zheng, J. Jiang, A key protection scheme based on
secret sharing for blockchain-based construction supply chain system, Inst. Electr.
[Electron. Eng. 7 (2019) 126773–126786, https://doi.org/10.1109/](https://doi.org/10.1109/ACCESS.2019.2937917)
[ACCESS.2019.2937917.](https://doi.org/10.1109/ACCESS.2019.2937917)
[29] S.K. Singh, Y.S. Jeong, J.H. Park, A deep learning-based IoT-oriented
[infrastructure for secure smart City, Sustain. Cities Soc. 60 (2020), https://doi.](https://doi.org/10.1016/j.scs.2020.102252)
[org/10.1016/j.scs.2020.102252.](https://doi.org/10.1016/j.scs.2020.102252)
[30] S. Sun, X. Zheng, J. Villalba-Díez, J. Ordieres-Meré, Data handling in industry 4.0:
interoperability based on distributed ledger technology, Sensors (Switzerland) 20
[(11) (2020), https://doi.org/10.3390/s20113046.](https://doi.org/10.3390/s20113046)
[31] R. Talat, M. Muzammal, Q. Qu, W. Zhou, M. Najam-Ul-Islam, S.M.H. Bamakan,
J. Qiu, A decentralized system for green energy distribution in a smart grid,
[J. Energy Eng. 146 (1) (2020), https://doi.org/10.1061/(ASCE)EY.1943-](https://doi.org/10.1061/(ASCE)EY.1943-7897.0000623)
[7897.0000623.](https://doi.org/10.1061/(ASCE)EY.1943-7897.0000623)
[32] W. Hu, Y.W. Hu, W.H. Yao, W.Q. Lu, H.H. Li, Z.W. Lv, A blockchain-based smart
contract trading mechanism for energy power supply and demand network, Adv.
[Prod. Eng. Manag. 14 (3) (2019) 284–296, https://doi.org/10.14743/](https://doi.org/10.14743/apem2019.3.328)
[apem2019.3.328.](https://doi.org/10.14743/apem2019.3.328)
[33] A. Shojaei, I. Flood, H.I. Moud, M. Hatami, X. Zhang, An implementation of smart
contracts by integrating BIM and blockchain, in: 4th Future Technologies
Conference (FTC) 2019 Vol. 1069, Springer, San Francisco, USA, 2020,
[pp. 519–527, https://doi.org/10.1007/978-3-030-32523-7_36.](https://doi.org/10.1007/978-3-030-32523-7_36)
[34] E. Mengelkamp, J. Gärttner, K. Rock, S. Kessler, L. Orsini, C. Weinhardt,
Designing microgrid energy markets: a case study: the Brooklyn Microgrid, Appl.
[Energy 210 (2018) 870–880, https://doi.org/10.1016/j.apenergy.2017.06.054.](https://doi.org/10.1016/j.apenergy.2017.06.054)
[35] L. Wang, J. Liu, R. Yuan, J. Wu, D. Zhang, Y. Zhang, M. Li, Adaptive bidding
strategy for real-time energy management in multi-energy market enhanced by
[blockchain, Appl. Energy 279 (2020), https://doi.org/10.1016/j.](https://doi.org/10.1016/j.apenergy.2020.115866)
[apenergy.2020.115866.](https://doi.org/10.1016/j.apenergy.2020.115866)
[36] K. Kim, G. Lee, S. Kim, A study on the application of blockchain technology in the
[construction industry, KSCE J. Civ. Eng. 24 (9, 2020) 2561–2571, https://doi.](https://doi.org/10.1007/s12205-020-0188-x)
[org/10.1007/s12205-020-0188-x.](https://doi.org/10.1007/s12205-020-0188-x)
[37] S. Fong Piew, S. Sarip, A.Y. AbdFatah, H.M. Kaidi, Business sustainability through
technology management approach for construction company, Int. J. Emerg.
[Trends Eng. Res. 8 (1) (2020) 22–26, https://doi.org/10.30534/ijeter/2020/](https://doi.org/10.30534/ijeter/2020/0481.12020)
[0481.12020.](https://doi.org/10.30534/ijeter/2020/0481.12020)
[38] U. Isikdag, An evaluation of barriers to E-Procurement in Turkish construction
[industry, Int. J. Innov. Technol. Explor. Eng. 8 (4, 2019) 252–259. https://www.](https://www.ijitee.org/wp-content/uploads/papers/v8i4/D2731028419.pdf)
[ijitee.org/wp-content/uploads/papers/v8i4/D2731028419.pdf.](https://www.ijitee.org/wp-content/uploads/papers/v8i4/D2731028419.pdf)
[39] D. Kifokeris, C. Koch, Blockchain in building logistics: emerging knowledge, and
related actors in Sweden, in: 35th Annual Conference on Association of
Researchers in Construction Management, Association of Researchers in
Construction Management (ARCOM), Gothenburg, Sweden, 2019, pp. 426–435.
[https://www.scopus.com/inward/record.uri?eid=2-s2.0-85077126273&partnerI](https://www.scopus.com/inward/record.uri?eid=2-s2.0-85077126273&partnerID=40&md5=e54aeb29511b92b64daa353918869523)
[D=40&md5=e54aeb29511b92b64daa353918869523.](https://www.scopus.com/inward/record.uri?eid=2-s2.0-85077126273&partnerID=40&md5=e54aeb29511b92b64daa353918869523)
[40] A. Lanko, N. Vatin, A. Kaklauskas, Application of RFID combined with blockchain
technology in logistics of construction materials, in: 2017 International Science
Conference on Business Technologies for Sustainable Urban Development Vol.
[170, EDP Sciences, St. Petersburg, Russia, 2018, https://doi.org/10.1051/](https://doi.org/10.1051/matecconf/201817003032)
[matecconf/201817003032.](https://doi.org/10.1051/matecconf/201817003032)
[41] A. Sivula, A. Shamsuzzoha, P. Helo, Blockchain in logistics: mapping the
opportunities in construction industry, in: 3rd North American Industrial
Engineering and Operations Management (IEOM) Conference, 2018, IEOM
[Society, Washington DC, USA, 2018, pp. 1954–1960. https://www.scopus.com/](https://www.scopus.com/record/display.uri?eid=2-s2.0-85065252883&origin=inward&txGid=bbac08638afd88dbff5f6ba202ab4a6e)
[record/display.uri?eid=2-s2.0-85065252883&origin=inward&txGid=bbac08](https://www.scopus.com/record/display.uri?eid=2-s2.0-85065252883&origin=inward&txGid=bbac08638afd88dbff5f6ba202ab4a6e)
[638afd88dbff5f6ba202ab4a6e.](https://www.scopus.com/record/display.uri?eid=2-s2.0-85065252883&origin=inward&txGid=bbac08638afd88dbff5f6ba202ab4a6e)
[42] H.Y. Chong, A. Diamantopoulos, Integrating advanced technologies to uphold
[security of payment: data flow diagram, Autom. Constr. 114 (2020), https://doi.](https://doi.org/10.1016/j.autcon.2020.103158)
[org/10.1016/j.autcon.2020.103158.](https://doi.org/10.1016/j.autcon.2020.103158)
[43] S. Ahmadisheykhsarmast, R. Sonmez, A smart contract system for security of
[payment of construction contracts, Autom. Constr. 120 (2020), https://doi.org/](https://doi.org/10.1016/j.autcon.2020.103401)
[10.1016/j.autcon.2020.103401.](https://doi.org/10.1016/j.autcon.2020.103401)
[44] S.L. Gruneberg, G.J. Ive, The Economics of the Modern Construction Firm,
[Macmillan Press Ltd, Hampshire, UK, 2000, ISBN 978-0-230-51043-2, https://](https://doi.org/10.1057/9780230510432)
[doi.org/10.1057/9780230510432.](https://doi.org/10.1057/9780230510432)
[45] M. Das, H. Luo, J.C.P. Cheng, Securing interim payments in construction projects
[through a blockchain-based framework, Autom. Constr. 118 (2020), https://doi.](https://doi.org/10.1016/j.autcon.2020.103284)
[org/10.1016/j.autcon.2020.103284.](https://doi.org/10.1016/j.autcon.2020.103284)
[46] A.J. McNamara, S.M.E. Sepasgozar, Developing a theoretical framework for
[intelligent contract acceptance, Constr. Innov. 20 (3) (2020) 421–445, https://](https://doi.org/10.1108/CI-07-2019-0061)
[doi.org/10.1108/CI-07-2019-0061.](https://doi.org/10.1108/CI-07-2019-0061)
[47] S. Badi, E. Ochieng, M. Nasaj, M. Papadaki, Technological, organisational and
environmental determinants of smart contracts adoption: UK construction sector
[viewpoint, Constr. Manag. Econ. 39 (1) (2021) 36–54, https://doi.org/10.1080/](https://doi.org/10.1080/01446193.2020.1819549)
[01446193.2020.1819549.](https://doi.org/10.1080/01446193.2020.1819549)
[48] J. Hunhevicz, T. Schraner, D. Hall, Incentivizing high-quality data sets in
construction using blockchain: a feasibility study in the Swiss industry, in: 37th
International Symposium on Automation and Robotics in Construction (ISARC),
[2020, https://doi.org/10.22260/ISARC2020/0177.](https://doi.org/10.22260/ISARC2020/0177)
[49] X. Qian, E. Papadonikolaki, Shifting trust in construction supply chains through
blockchain technology, Eng. Constr. Archit. Manag. 28 (2, 2020) 584–602,
[https://doi.org/10.1108/ECAM-12-2019-0676.](https://doi.org/10.1108/ECAM-12-2019-0676)
[50] D. Sheng, L. Ding, B. Zhong, P.E.D. Love, H. Luo, J. Chen, Construction quality
[information management with blockchains, Autom. Constr. 120 (2020), https://](https://doi.org/10.1016/j.autcon.2020.103373)
[doi.org/10.1016/j.autcon.2020.103373.](https://doi.org/10.1016/j.autcon.2020.103373)
[51] P. Dutta, T.M. Choi, S. Somani, R. Butala, Blockchain technology in supply chain
operations: applications, challenges and research opportunities, Transp. Res. E
[Logist. Transp. Rev. 142 (2020), https://doi.org/10.1016/j.tre.2020.102067.](https://doi.org/10.1016/j.tre.2020.102067)
[52] N.O. Nawari, S. Ravindran, Blockchain and the built environment: potentials and
[limitations, J. Build. Eng. 25 (2019), https://doi.org/10.1016/j.](https://doi.org/10.1016/j.jobe.2019.100832)
[jobe.2019.100832.](https://doi.org/10.1016/j.jobe.2019.100832)
[53] N.O. Nawari, S. Ravindran, Blockchain and Building Information Modeling (BIM):
[review and applications in post-disaster recovery, Buildings 9 (6) (2019), https://](https://doi.org/10.3390/BUILDINGS9060149)
[doi.org/10.3390/BUILDINGS9060149.](https://doi.org/10.3390/BUILDINGS9060149)
[54] A. Adibfar, A. Costin, R.R.A. Issa, Design copyright in architecture, engineering,
and construction industry: review of history, pitfalls, and lessons learned, J. Leg.
[Aff. Disput. Resolut. Eng. Constr. 12 (3) (2020), https://doi.org/10.1061/(ASCE)](https://doi.org/10.1061/(ASCE)LA.1943-4170.0000421)
[LA.1943-4170.0000421.](https://doi.org/10.1061/(ASCE)LA.1943-4170.0000421)
[55] P.K. Wan, L. Huang, H. Holtskog, Blockchain-enabled information sharing within
a supply chain: a systematic literature review, Inst. Electr. Electron. Eng. 8 (2020)
[49645–49656, https://doi.org/10.1109/ACCESS.2020.2980142.](https://doi.org/10.1109/ACCESS.2020.2980142)
[56] F. Xue, W. Lu, A semantic differential transaction approach to minimizing
information redundancy for BIM and blockchain integration, Autom. Constr. 118
[(2020), https://doi.org/10.1016/j.autcon.2020.103270.](https://doi.org/10.1016/j.autcon.2020.103270)
[57] R. Zheng, J. Jiang, X. Hao, W. Ren, F. Xiong, Y. Ren, BcBIM: a blockchain-based
big data model for BIM modification audit and provenance in mobile cloud, Math.
[Probl. Eng. (2019), https://doi.org/10.1155/2019/5349538.](https://doi.org/10.1155/2019/5349538)
[58] J. Mason, BIM Fork: are smart contracts in construction more likely to prosper
with or without BIM? J. Leg. Aff. Disput. Resolut. Eng. Constr. 11 (4) (2019)
[https://doi.org/10.1061/(ASCE)LA.1943-4170.0000316.](https://doi.org/10.1061/(ASCE)LA.1943-4170.0000316)
[59] O.V. Bukunova, A.S. Bukunov, Tools of data transmission at building information
modeling, in: 2019 International Science and Technology Conference Institute of
[Electrical and Electronics Engineers Inc., Vladivostok, Russia, 2019, https://doi.](https://doi.org/10.1109/Eastonf.2019.8725373)
[org/10.1109/Eastonf.2019.8725373.](https://doi.org/10.1109/Eastonf.2019.8725373)
[60] X. Ye, K. Sigalov, M. König, Integrating BIM- and cost-included information
container with Blockchain for construction automated payment using billing
model and smart contracts, in: Proceedings of 37th International Symposium on
Automation and Robotics in Construction (ISARC) 2020, International
Association for Automation and Robotics in Construction (IAARC), Kitakyushu,
[Japan, 2020, pp. 1388–1395, https://doi.org/10.22260/ISARC2020/0192.](https://doi.org/10.22260/ISARC2020/0192)
[61] F. Elghaish, S. Abrishami, M.R. Hosseini, Integrated project delivery with
[blockchain: an automated financial system, Autom. Constr. 114 (2020), https://](https://doi.org/10.1016/j.autcon.2020.103182)
[doi.org/10.1016/j.autcon.2020.103182.](https://doi.org/10.1016/j.autcon.2020.103182)
[62] J.J. Hunhevicz, B. Pierre-Antoine, M.M.M. Bonanomi, D.M. Hall, Blockchain and
smart contracts for Integrated Project Delivery: inspiration from the commons, in:
Proceedings of 18th Annual Engineering Project Organization Conference
(EPOC), Engineering Project Organization Society (EPOS), Virtual due to Covid-19, 2020, https://doi.org/10.3929/ethz-b-000452056.
[63] M. Wang, C.C. Wang, S. Sepasgozar, S. Zlatanova, A systematic review of digital
technology adoption in off-site construction: current status and future direction
[towards industry 4.0, Buildings 10 (11, 2020) 1–29, https://doi.org/10.3390/](https://doi.org/10.3390/buildings10110204)
[buildings10110204.](https://doi.org/10.3390/buildings10110204)
[64] Z. Wang, T. Wang, H. Hu, J. Gong, X. Ren, Q. Xiao, Blockchain-based framework
for improving supply chain traceability and information sharing in precast
[construction, Autom. Constr. 111 (2020), https://doi.org/10.1016/j.](https://doi.org/10.1016/j.autcon.2019.103063)
[autcon.2019.103063.](https://doi.org/10.1016/j.autcon.2019.103063)
[65] C.S. Tang, L.P. Veelenturf, The strategic role of logistics in the industry 4.0 era,
[Transp. Res. E Log. Transp. Rev. 129 (2019) 1–11, https://doi.org/10.1016/j.](https://doi.org/10.1016/j.tre.2019.06.004)
[tre.2019.06.004.](https://doi.org/10.1016/j.tre.2019.06.004)
[66] M.A. Ahad, S. Paiva, G. Tripathi, N. Feroz, Enabling technologies and sustainable
[smart cities, Sustain. Cities Soc. 61 (2020), https://doi.org/10.1016/j.](https://doi.org/10.1016/j.scs.2020.102301)
[scs.2020.102301.](https://doi.org/10.1016/j.scs.2020.102301)
[67] S. Copeland, M. Bilec, Buildings as material banks using RFID and building
information modeling in a circular economy, in: 27th CIRP Life Cycle Engineering
[Conference Vol. 90, Elsevier, 2020, pp. 143–147, https://doi.org/10.1016/j.](https://doi.org/10.1016/j.procir.2020.02.122)
[procir.2020.02.122.](https://doi.org/10.1016/j.procir.2020.02.122)
[68] J. Li, M. Kassem, R. Watson, A blockchain and smart contract-based framework to
increase traceability of built assets, in: Proc. 37th CIB W78 Information
Technology for Construction Conference, São Paulo, Brazil, 2020, pp. 347–362,
[https://doi.org/10.46421/2706-6568.37.2020.paper025.](https://doi.org/10.46421/2706-6568.37.2020.paper025)
[69] L. Bai, M. Hu, M. Liu, J. Wang, BPIIOT: a light-weighted blockchain-based
platform for industrial IoT, Inst. Electr. Electron. Eng. 7 (2019) 58381–58393,
[https://doi.org/10.1109/ACCESS.2019.2914223.](https://doi.org/10.1109/ACCESS.2019.2914223)
[70] A. Shojaei, Exploring applications of blockchain technology in the construction
industry, in: 10th International Structural Engineering and Construction
Conference (ISEC) 2019, ISEC Press, University of Illinois at Chicago, 2019,
[https://doi.org/10.14455/isec.res.2019.78.](https://doi.org/10.14455/isec.res.2019.78)
[71] G.M. Di Giuda, G. Pattini, E. Seghezzi, M. Schievano, F. Paleari, Digital
Transformation of the Design, Construction and Management Processes of the
[Built Environment, Springer, 2020, pp. 27–36. ISBN: 21987300, https://doi.org/](https://doi.org/10.1007/978-3-030-33570-0_3)
[10.1007/978-3-030-33570-0_3.](https://doi.org/10.1007/978-3-030-33570-0_3)
[72] Z. Dakhli, Z. Lafhaj, A. Mossman, The potential of blockchain in building
[construction, Buildings 9 (4) (2019), https://doi.org/10.3390/buildings9040077.](https://doi.org/10.3390/buildings9040077)
[73] R. Gupta, M.N. Shah, S.N. Mandal, Emerging paradigm for land records in India,
[Smart Sustain. Built Environ. (2020), https://doi.org/10.1108/SASBE-11-2019-](https://doi.org/10.1108/SASBE-11-2019-0152)
[0152.](https://doi.org/10.1108/SASBE-11-2019-0152)
[74] S. Dewan, L. Singh, Use of blockchain in designing smart city, Smart Sustain. Built
[Environ. 9 (4, 2020) 695–709, https://doi.org/10.1108/SASBE-06-2019-0078.](https://doi.org/10.1108/SASBE-06-2019-0078)
[75] Y. Fu, J. Zhu, Trusted data infrastructure for smart cities: a blockchain
[perspective, Build. Res. Inf. 49 (1, 2020) 21–37, https://doi.org/10.1080/](https://doi.org/10.1080/09613218.2020.1784703)
[09613218.2020.1784703.](https://doi.org/10.1080/09613218.2020.1784703)
[76] N. Moretti, J.D. Blanco Cadena, A. Mannino, T. Poli, F. Re Cecconi, Maintenance
service optimization in smart buildings through ultrasonic sensors network,
[Intell. Build. Int. 13 (1, 2020) 4–16, https://doi.org/10.1080/](https://doi.org/10.1080/17508975.2020.1765723)
[17508975.2020.1765723.](https://doi.org/10.1080/17508975.2020.1765723)
[77] G.G.R. Roy, S. Britto Ramesh Kumar, A security framework for a sustainable
smart ecosystem using permissioned blockchain: performance evaluation, Int. J.
[Innov. Technol. Explor. Eng. 8 (10) (2019) 4247–4250, https://doi.org/](https://doi.org/10.35940/ijitee.J9954.0881019)
[10.35940/ijitee.J9954.0881019.](https://doi.org/10.35940/ijitee.J9954.0881019)
[78] T. Zhang, H. Pota, C.C. Chu, R. Gadh, Real-time renewable energy incentive
system for electric vehicles using prioritization and cryptocurrency, Appl. Energy
[226 (2018) 582–594, https://doi.org/10.1016/j.apenergy.2018.06.025.](https://doi.org/10.1016/j.apenergy.2018.06.025)
[79] E.Z. Berglund, J.G. Monroe, I. Ahmed, M. Noghabaei, J. Do, J.E. Pesantez, M.
A. Khaksar Fasaee, E. Bardaka, K. Han, G.T. Proestos, J. Levis, Smart
infrastructure: a vision for the role of the civil engineering profession in smart
[cities, J. Infrastruct. Syst. 26 (2) (2020), https://doi.org/10.1061/(ASCE)IS.1943-](https://doi.org/10.1061/(ASCE)IS.1943-555X.0000549)
[555X.0000549.](https://doi.org/10.1061/(ASCE)IS.1943-555X.0000549)
[80] R. Woodhead, P. Stephenson, D. Morrey, Digital construction: from point
[solutions to IoT ecosystem, Autom. Constr. 93 (2018) 35–46, https://doi.org/](https://doi.org/10.1016/j.autcon.2018.05.004)
[10.1016/j.autcon.2018.05.004.](https://doi.org/10.1016/j.autcon.2018.05.004)
[81] K. Christidis, D. Sikeridis, Y. Wang, M. Devetsikiotis, A framework for designing
and evaluating realistic blockchain-based local energy markets, Appl. Energy 281
[(2021), https://doi.org/10.1016/j.apenergy.2020.115963.](https://doi.org/10.1016/j.apenergy.2020.115963)
[82] A. Banerjee, M. Clear, H. Tewari, Demystifying the role of zk-SNARKs in Zcash, in:
2020 IEEE Conference on Application, Information and Network Security (AINS),
Institute of Electrical and Electronics Engineers Inc., Kota Kinabalu, Malaysia,
[2020, pp. 12–19, https://doi.org/10.1109/AINS50155.2020.9315064.](https://doi.org/10.1109/AINS50155.2020.9315064)
[83] J. Veuger, Trust in a viable real estate economy with disruption and blockchain,
[Facilities 36 (1-2) (2018) 103–120, https://doi.org/10.1108/F-11-2017-0106.](https://doi.org/10.1108/F-11-2017-0106)
[84] S. Singh, P.K. Sharma, B. Yoon, M. Shojafar, G.H. Cho, I.H. Ra, Convergence of
blockchain and artificial intelligence in IoT network for the sustainable smart
[city, Sustain. Cities Soc. 63 (2020), https://doi.org/10.1016/j.scs.2020.102364.](https://doi.org/10.1016/j.scs.2020.102364)
[85] O. Kodym, L. Kubáč, L. Kavka, Risks associated with Logistics 4.0 and their
[minimization using Blockchain, Open Eng. 10 (1) (2020) 74–85, https://doi.org/](https://doi.org/10.1515/eng-2020-0017)
[10.1515/eng-2020-0017.](https://doi.org/10.1515/eng-2020-0017)
[86] G. Wang, X. Song, L. Liu, Application prospect of blockchain technology in power
field, in: 5th IEEE Information Technology and Mechatronics Engineering
Conference (ITOEC) 2020, Institute of Electrical and Electronics Engineers Inc.,
[Chongqing, China, 2020, pp. 407–412, https://doi.org/10.1109/](https://doi.org/10.1109/ITOEC49072.2020.9141546)
[ITOEC49072.2020.9141546.](https://doi.org/10.1109/ITOEC49072.2020.9141546)
[87] E.A. Parn, D. Edwards, Cyber threats confronting the digital built environment:
common data environment vulnerabilities and block chain deterrence, Eng.
[Constr. Archit. Manag. 26 (2, 2019) 245–266, https://doi.org/10.1108/ECAM-](https://doi.org/10.1108/ECAM-03-2018-0101)
[03-2018-0101.](https://doi.org/10.1108/ECAM-03-2018-0101)
[88] R. Shinde, O. Nilakhe, P. Pondkule, D. Karche, P. Shendage, Enhanced road
construction process with machine learning and blockchain technology, in: 2020
International Conference on Industry 4.0 Technology (I4Tech), Institute of
[Electrical and Electronics Engineers Inc., Pune, India, 2020, pp. 207–210, https://](https://doi.org/10.1109/I4Tech48345.2020.9102669)
[doi.org/10.1109/I4Tech48345.2020.9102669.](https://doi.org/10.1109/I4Tech48345.2020.9102669)
[89] J. Woo, S. Shin, A.T. Asutosh, J. Li, C.J. Kibert, An overview of state-of-the-art
technologies for data-driven construction, in: Lecture Notes in Civil Engineering
Vol. 98, Springer, São Paulo, Brazil, 2021, pp. 1323–1334, https://doi.org/10.1007/978-3-030-51295-8_94.
[90] Y. Li, W. Yang, P. He, C. Chen, X. Wang, Design and management of a distributed
hybrid energy system through smart contract and blockchain, Appl. Energy 248
[(2019) 390–405, https://doi.org/10.1016/j.apenergy.2019.04.132.](https://doi.org/10.1016/j.apenergy.2019.04.132)
[91] A. Esmat, M. de Vos, Y. Ghiassi-Farrokhfal, P. Palensky, D. Epema, A novel
decentralized platform for peer-to-peer energy trading market with blockchain
[technology, Appl. Energy 282 (2021), https://doi.org/10.1016/j.](https://doi.org/10.1016/j.apenergy.2020.116123)
[apenergy.2020.116123.](https://doi.org/10.1016/j.apenergy.2020.116123)
[92] L. Ableitner, V. Tiefenbeck, A. Meeuw, A. Wörner, E. Fleisch, F. Wortmann, User
behavior in a real-world peer-to-peer electricity market, Appl. Energy 270 (2020),
[https://doi.org/10.1016/j.apenergy.2020.115061.](https://doi.org/10.1016/j.apenergy.2020.115061)
[93] S. Saha, N. Ravi, K. Hreinsson, J. Baek, A. Scaglione, N.G. Johnson, A secure
distributed ledger for transactive energy: the Electron Volt Exchange (EVE)
[blockchain, Appl. Energy 282 (2021), https://doi.org/10.1016/j.](https://doi.org/10.1016/j.apenergy.2020.116208)
[apenergy.2020.116208.](https://doi.org/10.1016/j.apenergy.2020.116208)
[94] S. Noor, W. Yang, M. Guo, K.H. van Dam, X. Wang, Energy demand side
management within micro-grid networks enhanced by blockchain, Appl. Energy
[228 (2018) 1385–1398, https://doi.org/10.1016/j.apenergy.2018.07.012.](https://doi.org/10.1016/j.apenergy.2018.07.012)
[95] M. Foti, M. Vavalis, Blockchain based uniform price double auctions for energy
[markets, Appl. Energy 254 (2019), https://doi.org/10.1016/j.](https://doi.org/10.1016/j.apenergy.2019.113604)
[apenergy.2019.113604.](https://doi.org/10.1016/j.apenergy.2019.113604)
[96] Y. Jiang, K. Zhou, X. Lu, S. Yang, Electricity trading pricing among prosumers
with game theory-based model in energy blockchain environment, Appl. Energy
[271 (2020), https://doi.org/10.1016/j.apenergy.2020.115239.](https://doi.org/10.1016/j.apenergy.2020.115239)
[97] M.K. McGowan, Integrating CHP systems, renewable energy to increase
resilience, Am. Soc. Hea. Refrigerat. Air Condition. Eng. 61 (12) (2019) 38–43,
[https://doi.org/10.22260/ISARC2019/0010.](https://doi.org/10.22260/ISARC2019/0010)
[98] R.K. Perrons, T. Cosby, Applying blockchain in the geoenergy domain: the road to
[interoperability and standards, Appl. Energy 262 (2020), https://doi.org/](https://doi.org/10.1016/j.apenergy.2020.114545)
[10.1016/j.apenergy.2020.114545.](https://doi.org/10.1016/j.apenergy.2020.114545)
[99] S. Keivanpour, A. Ramudhin, D. Ait Kadi, An empirical analysis of complexity
management for offshore wind energy supply chains and the benefits of
[blockchain adoption, Civ. Eng. Environ. Syst. 37 (3, 2020) 117–142, https://doi.](https://doi.org/10.1080/10286608.2020.1810674)
[org/10.1080/10286608.2020.1810674.](https://doi.org/10.1080/10286608.2020.1810674)
[100] M.N.N. Rodrigo, S. Perera, S. Senaratne, X. Jin, Potential application of
blockchain technology for embodied carbon estimating in construction supply
[chains, Buildings 10 (8) (2020), https://doi.org/10.3390/BUILDINGS10080140.](https://doi.org/10.3390/BUILDINGS10080140)
[101] W. Hua, J. Jiang, H. Sun, J. Wu, A blockchain based peer-to-peer trading
framework integrating energy and carbon markets, Appl. Energy 279 (2020),
[https://doi.org/10.1016/j.apenergy.2020.115539.](https://doi.org/10.1016/j.apenergy.2020.115539)
[102] J.J. Hunhevicz, D.M. Hall, Do you need a blockchain in construction? Use case
categories and decision framework for DLT design options, Adv. Eng. Inform. 45
[(2020), https://doi.org/10.1016/j.aei.2020.101094.](https://doi.org/10.1016/j.aei.2020.101094)
[103] T. Dounas, D. Lombardi, W. Jabi, Framework for decentralised architectural
design BIM and Blockchain integration, Int. J. Archit. Comput. 19 (2, 2021)
[157–173, https://doi.org/10.1177/1478077120963376.](https://doi.org/10.1177/1478077120963376)
[104] Z. Ye, M. Yin, L. Tang, H. Jiang, Cup-of-Water theory: a review on the interaction
of BIM, IoT and blockchain during the whole building lifecycle, in: 35th
International Symposium on Automation and Robotics in Construction and
International AEC/FM Hackathon: The Future of Building Things (ISARC) 2018,
International Association for Automation and Robotics in Construction (IAARC),
[Berlin, Germany, 2018, pp. 478–486, https://doi.org/10.22260/isarc2018/0066.](https://doi.org/10.22260/isarc2018/0066)
[105] A. Shojaei, J. Wang, A. Fenner, Exploring the feasibility of blockchain technology
as an infrastructure for improving built asset sustainability, Built Environ. Project
[Asset Manag. 10 (2, 2019) 184–199, https://doi.org/10.1108/BEPAM-11-2018-](https://doi.org/10.1108/BEPAM-11-2018-0142)
[0142.](https://doi.org/10.1108/BEPAM-11-2018-0142)
[106] V. Hargaden, N. Papakostas, A. Newell, A. Khavia, A. Scanlon, The role of
blockchain technologies in construction engineering project management, in:
25th IEEE International Conference on Engineering, Technology and Innovation,
Institute of Electrical and Electronics Engineers Inc., Valbonne Sophia-Antipolis,
[France, 2019, https://doi.org/10.1109/ICE.2019.8792582.](https://doi.org/10.1109/ICE.2019.8792582)
[107] Y. Yao, L. Chu, L. Shan, Q. Lei, Supply chain financial model innovation based on
block-chain drive and construction of cloud computing credit system, in: 4th
Institute of Electrical and Electronics Engineers (IEEE) International Conference
on Smart Internet of Things 2020, IEEE Inc., Beijing, China, 2020, pp. 249–255,
[https://doi.org/10.1109/SmartIoT49966.2020.00044.](https://doi.org/10.1109/SmartIoT49966.2020.00044)
[108] W. Lu, Blockchain technology and its applications in FinTech, in: 2nd
International Conference on Intelligent, Secure and Dependable Systems in
Distributed and Cloud Environments (ISDDC) 2018, Springer Verlag, Vancouver,
[Canada, 2018, pp. 118–124, https://doi.org/10.1007/978-3-030-03712-3_10.](https://doi.org/10.1007/978-3-030-03712-3_10)
[109] V. Hassija, V. Chamola, S. Zeadally, BitFund: a blockchain-based crowd funding
platform for future smart and connected nation, Sustain. Cities Soc. 60 (2020),
[https://doi.org/10.1016/j.scs.2020.102145.](https://doi.org/10.1016/j.scs.2020.102145)
[110] K.M. San, C.F. Choy, W.P. Fung, The potentials and impacts of blockchain
technology in construction industry: a literature review, in: 11th Curtin
University Technology, Science and Engineering International Conference
(CUTSE) 2018 Vol. 495, Institute of Physics Publishing, Sarawak, Malaysia, 2019,
[https://doi.org/10.1088/1757-899X/495/1/012005.](https://doi.org/10.1088/1757-899X/495/1/012005)
[111] G. Pattini, G.M.D. Giuda, L.C. Tagliabue, Blockchain application for contract
schemes in the construction industry, in: 3rd European and Mediterranean
Structural Engineering and Construction Conference 2020, International
Structural Engineering and Construction Society (ISEC), Limassol, Cyprus, 2020,
[pp. 1–6, https://doi.org/10.14455/ISEC.res.2020.7(1).AAE-21.](https://doi.org/10.14455/ISEC.res.2020.7(1).AAE-21)
[112] E. Aleksandrova, V. Vinogradova, G. Tokunova, Integration of digital
technologies in the field of construction in the Russian Federation, Eng. Manag.
[Product. Serv. 11 (3, 2019) 38–47, https://doi.org/10.2478/emj-2019-0019.](https://doi.org/10.2478/emj-2019-0019)
[113] C.Z. Li, Z. Chen, F. Xue, X.T.R. Kong, B. Xiao, X. Lai, Y. Zhao, A blockchain- and
IoT-based smart product-service system for the sustainability of prefabricated
[housing construction, J. Clean. Prod. 286 (2021), https://doi.org/10.1016/j.](https://doi.org/10.1016/j.jclepro.2020.125391)
[jclepro.2020.125391.](https://doi.org/10.1016/j.jclepro.2020.125391)
[114] C.S. Götz, P. Karlsson, I. Yitmen, Exploring applicability, interoperability and
integrability of Blockchain-based digital twins for asset life cycle management,
[Smart Sustain. Built Environ. (2020), https://doi.org/10.1108/sasbe-08-2020-](https://doi.org/10.1108/sasbe-08-2020-0115)
[0115.](https://doi.org/10.1108/sasbe-08-2020-0115)
[115] P.F. Wong, F.C. Chia, M.S. Kiu, E.C.W. Lou, Potential integration of blockchain
technology into smart sustainable city (SSC) developments: a systematic review,
[Smart Sustain. Built Environ. (2020), https://doi.org/10.1108/SASBE-09-2020-](https://doi.org/10.1108/SASBE-09-2020-0140)
[0140.](https://doi.org/10.1108/SASBE-09-2020-0140)
[116] O. Aljabri, O. Aldhaheri, H. Mohammed, A. Alkaabi, J. Abdella, K. Shuaib,
Facilitating electric vehicle charging across the UAE using blockchain, in: 2019
International Conference on Electrical and Computing Technologies and
Applications (ICECTA), Institute of Electrical and Electronics Engineers, Ras Al
[Khaimah, United Arab Emirates, 2019, https://doi.org/10.1109/](https://doi.org/10.1109/ICECTA48151.2019.8959705)
[ICECTA48151.2019.8959705.](https://doi.org/10.1109/ICECTA48151.2019.8959705)
[117] A. Ghosh, D.J. Edwards, M.R. Hosseini, Patterns and trends in Internet of Things
(IoT) research: future applications in the construction industry, Eng. Constr.
[Archit. Manag. 28 (2) (2020), https://doi.org/10.1108/ecam-04-2020-0271.](https://doi.org/10.1108/ecam-04-2020-0271)
[118] M.E. Wainstein, Blockchains as enablers of participatory smart grids, Technol.
[Archit. Des. 3 (2, 2019) 131–136, https://doi.org/10.1080/](https://doi.org/10.1080/24751448.2019.1640521)
[24751448.2019.1640521.](https://doi.org/10.1080/24751448.2019.1640521)
[119] A. Lüth, J.M. Zepter, P. Crespo del Granado, R. Egging, Local electricity market
designs for peer-to-peer trading: the role of battery flexibility, Appl. Energy 229
[(2018) 1233–1243, https://doi.org/10.1016/j.apenergy.2018.08.004.](https://doi.org/10.1016/j.apenergy.2018.08.004)
[120] J. Woo, A.T. Asutosh, J. Li, W.D. Ryor, C.J. Kibert, A. Shojaei, Blockchain: a
theoretical framework for better application of carbon credit acquisition to the
building sector, in: Construction Research Congress 2020: Infrastructure Systems
and Sustainability, American Society of Civil Engineers (ASCE), Tempe, Arizona,
[2020, pp. 885–894, https://doi.org/10.1061/9780784482858.095.](https://doi.org/10.1061/9780784482858.095)
[121] H. Hamledari, M. Fischer, Role of blockchain-enabled smart contracts in
automating construction progress payments, J. Leg. Aff. Disput. Resolut. Eng.
[Constr. 13 (1) (2021), https://doi.org/10.1061/(ASCE)LA.1943-4170.0000442.](https://doi.org/10.1061/(ASCE)LA.1943-4170.0000442)
[122] D. Kifokeris, C. Koch, A conceptual digital business model for construction
logistics consultants, featuring a sociomaterial blockchain solution for integrated
economic, material and information flows, J. Inf. Technol. Constr. 25 (29) (2020)
[500–521, https://doi.org/10.36680/j.itcon.2020.029.](https://doi.org/10.36680/j.itcon.2020.029)
[123] T. Kobashi, T. Yoshida, Y. Yamagata, K. Naito, S. Pfenninger, K. Say, Y. Takeda,
A. Ahl, M. Yarime, K. Hara, On the potential of “Photovoltaics + Electric vehicles”
for deep decarbonization of Kyoto’s power systems: techno-economic-social
[considerations, Appl. Energy 275 (2020), https://doi.org/10.1016/j.](https://doi.org/10.1016/j.apenergy.2020.115419)
[apenergy.2020.115419.](https://doi.org/10.1016/j.apenergy.2020.115419)
[124] H. Luo, M. Das, J. Wang, J.C.P. Cheng, Construction payment automation through
smart contract-based blockchain framework, in: 36th International Symposium on
Automation and Robotics in Construction, ISARC 2019, International Association
for Automation and Robotics in Construction (IAARC), Banff Alberta, Canada,
[2019, pp. 1254–1260, https://doi.org/10.22260/isarc2019/0168.](https://doi.org/10.22260/isarc2019/0168)
[125] CB_Insights, Banking is Only the Beginning: 58 Big Industries Blockchain Could
[Transform. https://www.cbinsights.com/research/industries-disrupted-blockchai](https://www.cbinsights.com/research/industries-disrupted-blockchain/)
[n/, 2021 accessed 4th May 2021.](https://www.cbinsights.com/research/industries-disrupted-blockchain/)
[126] Z. Zhao, X. Song, Y. Xu, S. Tian, M. Gao, Y. Zhang, Research on the construction of
data pool and the management model of data pool integrating block chain
concept for ubiquitous electric power Internet of Things, in: 3rd Institute of
Electrical and Electronics Engineers (IEEE) Conference on Energy Internet and
Energy System Integration 2019, IEEE Inc., Changsha, China, 2019,
[pp. 1828–1833, https://doi.org/10.1109/EI247390.2019.9062040.](https://doi.org/10.1109/EI247390.2019.9062040)
[127] J. Lin, M. Pipattanasomporn, S. Rahman, Comparative analysis of auction
mechanisms and bidding strategies for P2P solar transactive energy markets,
[Appl. Energy 255 (2019), https://doi.org/10.1016/j.apenergy.2019.113687.](https://doi.org/10.1016/j.apenergy.2019.113687)
[128] Z. Liu, L. Jiang, M. Osmani, P. Demian, Building Information Management (BIM)
and blockchain (BC) for sustainable building design information management
[framework, Electronics 8 (7) (2019), https://doi.org/10.3390/](https://doi.org/10.3390/electronics8070724)
[electronics8070724.](https://doi.org/10.3390/electronics8070724)
[129] J. Yang, M. Guo, F. Fei, H. Wang, L. Zhang, J. Xie, Enabling technologies for
multinational interconnected smart grid, in: 3rd Institute of Electrical and
Electronics Engineers (IEEE) International Electrical and Energy Conference
[2019, IEEE Inc., Beijing, China, 2019, pp. 1106–1111, https://doi.org/10.1109/](https://doi.org/10.1109/CIEEC47146.2019.CIEEC-2019411)
[CIEEC47146.2019.CIEEC-2019411.](https://doi.org/10.1109/CIEEC47146.2019.CIEEC-2019411)
[130] G. van Leeuwen, T. AlSkaif, M. Gibescu, W. van Sark, An integrated blockchain-based energy management platform with bilateral trading for microgrid
[communities, Appl. Energy 263 (2020), https://doi.org/10.1016/j.](https://doi.org/10.1016/j.apenergy.2020.114613)
[apenergy.2020.114613.](https://doi.org/10.1016/j.apenergy.2020.114613)
[131] Z. Zeng, Y. Li, Y. Cao, Y. Zhao, J. Zhong, D. Sidorov, X. Zeng, Blockchain
technology for information security of the energy internet: fundamentals,
[features, strategy and application, Energies 13 (4) (2020), https://doi.org/](https://doi.org/10.3390/en13040881)
[10.3390/en13040881.](https://doi.org/10.3390/en13040881)
-----
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.1016/j.autcon.2021.103914?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1016/j.autcon.2021.103914, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "GREEN",
"url": "https://discovery.ucl.ac.uk/10134004/7/Scott_Exploratory%20literature%20review%20of%20blockchain%20in%20the%20construction%20industry.pdf"
}
| 2,021
|
[
"Review"
] | true
| 2021-12-01T00:00:00
|
[
{
"paperId": "954e2a53f706c2bf86819243eba2e65b5b620b3b",
"title": "A blockchain- and IoT-based smart product-service system for the sustainability of prefabricated housing construction"
},
{
"paperId": "45fe706be2ee2fcc64c4c69eb95d0e5ab6a81081",
"title": "Role of Blockchain-Enabled Smart Contracts in Automating Construction Progress Payments"
},
{
"paperId": "7e783f2105c295ee53049f8d281859d7f52e88f3",
"title": "A novel decentralized platform for peer-to-peer energy trading market with blockchain technology"
},
{
"paperId": "3b66139f31adc6fd6405ceac61f9ff1e663ea6ff",
"title": "A framework for designing and evaluating realistic blockchain-based local energy markets"
},
{
"paperId": "41f03b88fade44b6f9cbcf94d83f79d169a1be2f",
"title": "Potential integration of blockchain technology into smart sustainable city (SSC) developments: a systematic review"
},
{
"paperId": "ac0854da422d603e725b49f0d6006d41214cbf7e",
"title": "Convergence of blockchain and artificial intelligence in IoT network for the sustainable smart city"
},
{
"paperId": "b8e0bfc698f85c9fafd009bdf697d857dc40baeb",
"title": "A blockchain based peer-to-peer trading framework integrating energy and carbon markets"
},
{
"paperId": "149c810f988c1293675fdad36b4c5c09f60d1831",
"title": "Construction quality information management with blockchains"
},
{
"paperId": "20afdc74da7374c90c5c148343a01b3e998c7bf4",
"title": "Adaptive bidding strategy for real-time energy management in multi-energy market enhanced by blockchain"
},
{
"paperId": "e674bce4e2c03210fec8429756f5317a8a850b87",
"title": "A smart contract system for security of payment of construction contracts"
},
{
"paperId": "8971043be9d8433368db1561135e9745697746dc",
"title": "Exploring applicability, interoperability and integrability of Blockchain-based digital twins for asset life cycle management"
},
{
"paperId": "2a37ff0519bcc64e8354429f65dcb35fcca2d3a0",
"title": "Decentralized blockchain-based consensus for Optimal Power Flow solutions"
},
{
"paperId": "7ae0e6c84231f6466be466863c38c2604c9abf3a",
"title": "A Secure Distributed Ledger for Transactive Energy: The Electron Volt Exchange (EVE) Blockchain"
},
{
"paperId": "22a15985ac8eb2ee4ba92f17d2a76039a603d482",
"title": "A Systematic Review of Digital Technology Adoption in Off-Site Construction: Current Status and Future Direction towards Industry 4.0"
},
{
"paperId": "f20474a6a82384934d0786b4c1f578fec55919dc",
"title": "Blockchain: A Theoretical Framework for Better Application of Carbon Credit Acquisition to the Building Sector"
},
{
"paperId": "10cd5e487b9cd3c248f9c459f36e8bd8e92fa7f6",
"title": "City logistics: Towards a blockchain decision framework for collaborative parcel deliveries in micro-hubs"
},
{
"paperId": "1aee9eecac104f75d7581c0dfe78685968fcebbb",
"title": "A conceptual digital business model for construction logistics consultants, featuring a sociomaterial blockchain solution for integrated economic, material and information flows"
},
{
"paperId": "fa967c210277d738a71490aee0169859ac0e9cf3",
"title": "Exploring the potentials of blockchain application in construction industry: a systematic review"
},
{
"paperId": "a793ae647d8f895034ca139aa868b4cec9a4e775",
"title": "Integrating BIM- and Cost-included Information Container with Blockchain for Construction Automated Payment using Billing Model and Smart Contracts"
},
{
"paperId": "f518e180bc735ec3970547582f9e41a6d4d201e3",
"title": "Framework for decentralised architectural design BIM and Blockchain integration"
},
{
"paperId": "33fe20d7e2d68f02f9da8004c774132ece1d43d4",
"title": "Incentivizing High-Quality Data Sets in Construction Using Blockchain: A Feasibility Study in the Swiss Industry"
},
{
"paperId": "04b58585d0d5d707c3b3d9afd996a884ef387fde",
"title": "Enabling technologies and sustainable smart cities"
},
{
"paperId": "8d95dc5c1d78f54bd8ef442353fb203f6f35bddc",
"title": "A semantic differential transaction approach to minimizing information redundancy for BIM and blockchain integration"
},
{
"paperId": "1a415ec978c083c81148ad9f29a7a807c1acd9f4",
"title": "Blockchain for smart cities: A review of architectures, integration trends and future research directions"
},
{
"paperId": "8fc3594eef16f18f8e35f8e4387337fbf3675d8b",
"title": "On the potential of “Photovoltaics + Electric vehicles” for deep decarbonization of Kyoto’s power systems: Techno-economic-social considerations"
},
{
"paperId": "10a743a1c175e11218c13ddddddb488a40716e7c",
"title": "Public and private blockchain in construction business process and information integration"
},
{
"paperId": "cece44ce43909efcf5346c61da0dc184748c3e5a",
"title": "Securing interim payments in construction projects through a blockchain-based framework"
},
{
"paperId": "a9e212ef2e0abaca7878194ab831e23d5b5296df",
"title": "Blockchain technology in supply chain operations: Applications, challenges and research opportunities"
},
{
"paperId": "d99829f8ecf858e7d131527793d8062f3ec662c2",
"title": "Designing a blockchain enabled supply chain"
},
{
"paperId": "1ab288b6f82a90ca34321acc3e4a823323a67bf3",
"title": "Technological, organisational and environmental determinants of smart contracts adoption: UK construction sector viewpoint"
},
{
"paperId": "bbc71211666ff20eae2cdb6ef662de85a52eb5d9",
"title": "Business Sustainability through Technology Management Approach for Construction Company"
},
{
"paperId": "c1830c7b2454a6495cf54cbb1fda95fa5cd8a4f3",
"title": "BitFund: A blockchain-based crowd funding platform for future smart and connected nation"
},
{
"paperId": "10ed4a256d051166963970c97e8f8946df19fe82",
"title": "A deep learning-based IoT-oriented infrastructure for secure smart City"
},
{
"paperId": "efa27b5cacac592d1946fb807f8b27f35007a3ba",
"title": "Patterns and trends in Internet of Things (IoT) research: future applications in the construction industry"
},
{
"paperId": "14180128da9ce6884c72a4168d98cc77761d1000",
"title": "An Overview of State-of-the-Art Technologies for Data-Driven Construction"
},
{
"paperId": "90b00d8d6537c020f10379c699009f2980b3fd12",
"title": "Framework for Automated Billing in the Construction Industry Using BIM and Smart Contracts"
},
{
"paperId": "90b9a4b54d9eda92158109905e3f343c04a770fa",
"title": "Potential Application of Blockchain Technology for Embodied Carbon Estimating in Construction Supply Chains"
},
{
"paperId": "2532be5c8981a1c691e09b601f4ee180f8cfde63",
"title": "Demystifying the Role of zk-SNARKs in Zcash"
},
{
"paperId": "2748bf0b3d8efda356588884703d36fec6aebb16",
"title": "Supply Chain Financial Model Innovation Based on Block-chain Drive and Construction of Cloud Computing Credit System"
},
{
"paperId": "90536f5a48aaecaeced36b50f6b615c2572b69d4",
"title": "Electricity trading pricing among prosumers with game theory-based model in energy blockchain environment"
},
{
"paperId": "29a1007aa79bda5664064ea5e8893caa1eca390b",
"title": "Design Copyright in Architecture, Engineering, and Construction Industry: Review of History, Pitfalls, and Lessons Learned"
},
{
"paperId": "f28039807e149df3ec8704201347803e7a33fcd3",
"title": "BLOCKCHAIN APPLICATION FOR CONTRACT SCHEMES IN THE CONSTRUCTION INDUSTRY"
},
{
"paperId": "ca075c74994f5d7cdaafde22a5030a04e615fc2b",
"title": "A Study on the Application of Blockchain Technology in the Construction Industry"
},
{
"paperId": "7671d772320995511e118a5978cea2d615eb722b",
"title": "An empirical analysis of complexity management for offshore wind energy supply chains and the benefits of blockchain adoption"
},
{
"paperId": "407e78a3b55075a371f58737fbff387f9714907b",
"title": "Trusted data infrastructure for smart cities: a blockchain perspective"
},
{
"paperId": "cd66a67c19df4e4bb121360bbd794730de5b7b3b",
"title": "User behavior in a real-world peer-to-peer electricity market"
},
{
"paperId": "30b9cbc50765a3ef2830003fa23c461d26f1a86a",
"title": "Maintenance service optimization in smart buildings through ultrasonic sensors network"
},
{
"paperId": "088afe931b4dfd9d8517a4f2815417b1c6f2ead9",
"title": "Smart Infrastructure: A Vision for the Role of the Civil Engineering Profession in Smart Cities"
},
{
"paperId": "dd3eb01107beda6f3e0bf2838b3561f9f55c4d48",
"title": "Integrating advanced technologies to uphold security of payment: Data flow diagram"
},
{
"paperId": "e50b2476b0e00039faa64462ddce1ace17c611ed",
"title": "Application Prospect of Blockchain Technology in Power Field"
},
{
"paperId": "369a6957d0e9fcaba5ffe77183750c57feb3383f",
"title": "Integrated project delivery with blockchain: An automated financial system"
},
{
"paperId": "504ecf4f705430e1358c3ab009f07c718f3a7a8f",
"title": "Data Handling in Industry 4.0: Interoperability Based on Distributed Ledger Technology"
},
{
"paperId": "5c1fa5d2d7eaf93bf5023f3df57a06b160a00838",
"title": "Smart contract architecture for decentralized energy trading and management based on blockchains"
},
{
"paperId": "fbf9573f3a108028c773a8289ee8240a842b270e",
"title": "Interoperability in building information modeling for AECO/FM industry"
},
{
"paperId": "54592cde9a2891327ae533100d570ab10466c8f0",
"title": "Shifting trust in construction supply chains through blockchain technology"
},
{
"paperId": "15c8ec6c787605652ee8aaeb71e1607587b8f174",
"title": "A Blockchain and Smart Contract-Based Framework to Inrease Traceability of Built Assets"
},
{
"paperId": "dc7e25aaffb63642dd3ee0a5220bd6de2ca57110",
"title": "Emerging paradigm for land records in India"
},
{
"paperId": "4964f2dc4a999f212427428d421bb95f4e6427ef",
"title": "An integrated blockchain-based energy management platform with bilateral trading for microgrid communities"
},
{
"paperId": "91faadf3071a022907b4c73a68efacebded8f981",
"title": "Do you need a blockchain in construction? Use case categories and decision framework for DLT design options"
},
{
"paperId": "2a236bced8c548f991c3b338b12842570284278a",
"title": "Developing a theoretical framework for intelligent contract acceptance"
},
{
"paperId": "698087b9986b2651ac253345fcbaf492d7c57f6e",
"title": "Applying blockchain in the geoenergy domain: The road to interoperability and standards"
},
{
"paperId": "8d0b554012e3db32b825f3f7d213c211fe2cbdd1",
"title": "Blockchain-Enabled Information Sharing Within a Supply Chain: A Systematic Literature Review"
},
{
"paperId": "f6a925fbbc7494d76236465bb480b8da1ecaf0b9",
"title": "Use of blockchain in designing smart city"
},
{
"paperId": "9f7af9b09c2854704f65a8de044d8f224917c29f",
"title": "Blockchain technology: Is it hype or real in the construction industry?"
},
{
"paperId": "cf8dadbb09cc0c9997de4ab4977d5f0924839e9f",
"title": "Blockchain-based framework for improving supply chain traceability and information sharing in precast construction"
},
{
"paperId": "cf8967a7e11aad6cf4dfcce6410f248655fb9220",
"title": "Blockchain Technology for Information Security of the Energy Internet: Fundamentals, Features, Strategy and Application"
},
{
"paperId": "04655d5fdc4af929cf84ae4e2c583bd202b16789",
"title": "A Decentralized System for Green Energy Distribution in a Smart Grid"
},
{
"paperId": "7e9756034d74d7d46661b1c4121a8263a0ae9d8f",
"title": "Enhanced Road Construction Process with Machine Learning and Blockchain Technology"
},
{
"paperId": "8c333011616f05c449b1d8a7d938663636cf972c",
"title": "Blockchain-Based Dynamic Provable Data Possession for Smart Cities"
},
{
"paperId": "0af503e56b0966757388737db73b922880299e30",
"title": "Risks associated with Logistics 4.0 and their minimization using Blockchain"
},
{
"paperId": "3bc5e91d3331120e0bfe3e97d9bbf3c3ec72ad97",
"title": "Comparative analysis of auction mechanisms and bidding strategies for P2P solar transactive energy markets"
},
{
"paperId": "26da370edcdc698971e519358bdce50fba80fa31",
"title": "Blockchain based uniform price double auctions for energy markets"
},
{
"paperId": "e5cc84bf611b5eff714fe379234f0ae2fbf079fd",
"title": "Exploring the feasibility of blockchain technology as an infrastructure for improving built asset sustainability"
},
{
"paperId": "c451c411ec28594d7a11825cee40483dc78dfe72",
"title": "Research on the Construction of Data Pool and the Management Model of Data Pool Integrating Block Chain Concept for Ubiquitous Electric Power Internet of Things"
},
{
"paperId": "154f71b83da195fbeb8899397edd1b60b5dec8ca",
"title": "Facilitating Electric Vehicle Charging Across the UAE Using Blockchain"
},
{
"paperId": "57ef1675afb2e1c69173d2289ce88103ada39770",
"title": "BIM Fork: Are Smart Contracts in Construction More Likely to Prosper with or without BIM?"
},
{
"paperId": "51a0c86e7d156453a0cc19de99efa61d93798704",
"title": "An Implementation of Smart Contracts by Integrating BIM and Blockchain"
},
{
"paperId": "9b61095876485dc4771cfe8eaf072656bca38b02",
"title": "Applications of Blockchain Technology in Smart City Development: A Research"
},
{
"paperId": "1fe1761ee3ac07c6974efeacafc5d1ff48aa4f04",
"title": "Blockchain-based Building Information Modeling"
},
{
"paperId": "56f00126a7f95ad4f2f606abc0ffd00be756b02a",
"title": "The strategic role of logistics in the industry 4.0 era"
},
{
"paperId": "cad4018665a33dbbce206e4cb4e4dbda0af0c8d1",
"title": "Enabling Technologies for Multinational Interconnected Smart Grid"
},
{
"paperId": "15556a7cecf78ce0b7293791da1ec54bc7afaec3",
"title": "Integration of digital technologies in the field of construction in the Russian Federation"
},
{
"paperId": "e6d0ab0fd0ed94dca9f659f197597139cb11d276",
"title": "Blockchain and the built environment: Potentials and limitations"
},
{
"paperId": "e9abe56a9b4aad075a91e746af7f38e1b22f16f3",
"title": "A Key Protection Scheme Based on Secret Sharing for Blockchain-Based Construction Supply Chain System"
},
{
"paperId": "fdbd52bab0983cdae4c16c68984ce0a3d1ceb996",
"title": "Design and management of a distributed hybrid energy system through smart contract and blockchain"
},
{
"paperId": "a262adfcbaf56d77212a13c9dbfd896e457bdcaf",
"title": "A Security Framework for A Sustainable Smart Ecosystem using Permissioned Blockchain: Performance Evaluation"
},
{
"paperId": "60502ffacc366d5ab67706fea21a935294af0c10",
"title": "Blockchain-based decentralized and secure keyless signature scheme for smart grid"
},
{
"paperId": "38f6f0b1d21c0000bf03cb9e2254e8ba6736d2e7",
"title": "Managing mistrust in construction using DLT: a review of use-case categories for technical decisions"
},
{
"paperId": "2b19ff6cb3725e8959ec7b9e541c868ab856fb99",
"title": "Blockchains as Enablers of Participatory Smart Grids"
},
{
"paperId": "61d99bfe3589904598f9e6d00ff589cf47bcc137",
"title": "Building Information Management (BIM) and Blockchain (BC) for Sustainable Building Design Information Management Framework"
},
{
"paperId": "6d98b4786da46b50790ad5be5c89c80d0e7bde45",
"title": "Blockchain and Building Information Modeling (BIM): Review and Applications in Post-Disaster Recovery"
},
{
"paperId": "d693b7492b20fc2b6a2aed29950059c4102d93bc",
"title": "A multi-layered blockchain framework for smart mobility data-markets"
},
{
"paperId": "473a63d45bd5b73cb3d1b55d3b3ccf5e44378b3f",
"title": "Blockchain in the built environment and construction industry: A systematic review, conceptual models and practical use cases"
},
{
"paperId": "b18859babf3cb062440051bd5098fd9baa30705a",
"title": "The Role of Blockchain Technologies in Construction Engineering Project Management"
},
{
"paperId": "9ad2a3e08cfeec91d402cfda9eb8b5cb2390c0d0",
"title": "Construction Payment Automation through Smart Contract-based Blockchain Framework"
},
{
"paperId": "55a8d41c05a4fb5af720cad55f06832ec31597a9",
"title": "BPIIoT: A Light-Weighted Blockchain-Based Platform for Industrial IoT"
},
{
"paperId": "f726dfa8480cb4309b3f3bfb4f57e8d358f67776",
"title": "EXPLORING APPLICATIONS OF BLOCKCHAIN TECHNOLOGY IN THE CONSTRUCTION INDUSTRY"
},
{
"paperId": "21ef6e3b2a233393a4dae08ebdaae3712c4bc97e",
"title": "The Potential of Blockchain in Building Construction"
},
{
"paperId": "ea2be6a62e543f9316ea301d0b85d4922771ae60",
"title": "The Potentials and Impacts of Blockchain Technology in Construction Industry: A Literature Review"
},
{
"paperId": "74bc40c88738ee13e1f65d21f5e65d9a80f30450",
"title": "bcBIM: A Blockchain-Based Big Data Model for BIM Modification Audit and Provenance in Mobile Cloud"
},
{
"paperId": "a3aa399df2ace99ba521e372ac89d3c707d68002",
"title": "Tools of Data Transmission at Building Information Modeling"
},
{
"paperId": "4fb1ba8f667616b98ad2c23f45fb3ba5249410be",
"title": "Cyber threats confronting the digital built environment"
},
{
"paperId": "40eebcf479382c5269bbbee663eeeeb2331286e0",
"title": "Blockchain Technology and Its Applications in FinTech"
},
{
"paperId": "0623fccd7d9bed096539c5fc1d45cc69e4d46929",
"title": "Local electricity market designs for peer-to-peer trading: The role of battery flexibility"
},
{
"paperId": "f1abdb39568305750bf8682177c19079213d498f",
"title": "Energy Demand Side Management within micro-grid networks enhanced by blockchain"
},
{
"paperId": "f52da5de09381f55a39be3e2c308b1e0bb1ee630",
"title": "Digital construction: From point solutions to IoT ecosystem"
},
{
"paperId": "6535a6e20ad025d209f98a906c4413bcd0bf536e",
"title": "Real-time renewable energy incentive system for electric vehicles using prioritization and cryptocurrency"
},
{
"paperId": "3e64d0ea58070d3715aad218499660b043628173",
"title": "Cup-of-Water Theory: A Review on the Interaction of BIM, IoT and Blockchain During the Whole Building Lifecycle"
},
{
"paperId": "8d3f008403e73226955699e31361f5ae02628dc8",
"title": "Designing microgrid energy markets"
},
{
"paperId": "06e9bfcddee862c55971dc25b359ca4d5990d79e",
"title": "Blockchain for architects: challenges from the sharing economy"
},
{
"paperId": "70599dfc16ff4fbe3ce9bf1d77b74f870175614e",
"title": "Trust in a viable real estate economy with disruption and blockchain"
},
{
"paperId": "eabb07994b757d329a434a082e72b81fca9f6237",
"title": "Blockchain technology in the chemical industry: Machine-to-machine electricity market"
},
{
"paperId": "2689296a82d8aabcd477c21c2980b41d0cd04881",
"title": "A Comparison between Two Main Academic Literature Collections: Web of Science and Scopus Databases"
},
{
"paperId": "c9e151ba8e59422320013d64307a17a94e018a98",
"title": "Scopus database: a review"
},
{
"paperId": "cdaf426db242cd7f2b97a2dc79629b213420a8e0",
"title": "The Economics of the Modern Construction Firm"
},
{
"paperId": "7d2a2f737e5aa492bbc757c19cb7b4ecace88d4a",
"title": "Technological"
},
{
"paperId": "eed76365143d2e3f35d0cd37cb8f41fdebffe34d",
"title": "Country"
},
{
"paperId": null,
"title": "Paperpile, The Top List of Academic Research Databases"
},
{
"paperId": "612d9c441c72178b360adb60043c480c09d40b03",
"title": "Buildings as material banks using RFID and building information modeling in a circular economy"
},
{
"paperId": null,
"title": "Blockchain and smart contracts for Integrated Project Delivery: inspiration from the commons"
},
{
"paperId": "e9e89d1417cf008f3512ce34b291277fab0f0bcc",
"title": "Blockchain in building logistics: emerging knowledge, and related actors in Sweden"
},
{
"paperId": "ef7f7ac224beaec1ad26f21e204c84a19146bf84",
"title": "A blockchain-based smart contract trading mechanism for energy power supply and demand network"
},
{
"paperId": null,
"title": "An evaluation of barriers to E-Procurement in Turkish construction industry"
},
{
"paperId": "cb70bbd86f76b85a519cbe44ede044d71d32d2cb",
"title": "Digital Transformation Design"
},
{
"paperId": null,
"title": "Integrating CHP systems, renewable energy to increase resilience"
},
{
"paperId": "22a604ea60646ebd0e76caa49d1f8e01d2a3781a",
"title": "Application of RFID combined with blockchain technology in logistics of construction materials"
},
{
"paperId": "256553ef50899b0338398fdf90bcc7920f3a78ea",
"title": "Incorporating seller/buyer reputation-based system in blockchain-enabled emission trading application"
},
{
"paperId": "35a038a3d077b489d27eff403901373f2448eb4c",
"title": "Blockchain in Logistics: Mapping the Opportunities in Construction Industry"
},
{
"paperId": "392fe6f21d9b735c719a742ed987702b893824dd",
"title": "Designing microgrid energy markets A case study: The Brooklyn Microgrid"
},
{
"paperId": "921f24569bcf47c3743e5bfad6536371deb35fcd",
"title": "Potentials of Blockchain Technology for Construction Management"
},
{
"paperId": null,
"title": "Description of Scimago Journal Rank Indicator"
},
{
"paperId": null,
"title": "ScimagoResearchGroup , Description of Scimago Journal Rank Indicator , Scimago Research Group , 2007"
},
{
"paperId": null,
"title": "Banking is Only the Beginning: 58 Big Industries Blockchain Could Transform"
}
] | 38,524
|
en
|
[
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/012123313b7676f86c331a3e62bd39dec6fa9771
|
[] | 0.900354
|
Blockchain and Interplanetary File System (IPFS)-Based Data Storage System for Vehicular Networks with Keyword Search Capability
|
012123313b7676f86c331a3e62bd39dec6fa9771
|
Electronics
|
[
{
"authorId": "2212473323",
"name": "N. Sangeeta"
},
{
"authorId": "33509446",
"name": "S. Nam"
}
] |
{
"alternate_issns": [
"2079-9292",
"0883-4989"
],
"alternate_names": null,
"alternate_urls": [
"http://www.e-helvetica.nb.admin.ch/directAccess?callnumber=bel-247562",
"http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:ch:bel-247562",
"https://www.mdpi.com/journal/electronics"
],
"id": "ccd8e532-73c6-414f-bc91-271bbb2933e2",
"issn": "1450-5843",
"name": "Electronics",
"type": "journal",
"url": "http://www.electronics.etfbl.net/"
}
|
Closed-circuit television (CCTV) cameras and black boxes are indispensable for road safety and accident management. Visible highway surveillance cameras can promote safe driving habits while discouraging moving violations. According to CCTV laws, footage captured by roadside cameras must be securely stored, and only authorized persons may access it. Footage collected by CCTV cameras and black boxes is usually saved to the camera’s microSD card, the cloud, or local hard drives, but there are concerns about security and data integrity. These issues may be addressed by blockchain technology. The cost of storing data on the blockchain, on the other hand, is prohibitively expensive. We can have decentralized and cost-effective storage with the interplanetary file system (IPFS) project. It is a file-sharing protocol that stores and distributes data in a distributed file system. We propose a decentralized IPFS and blockchain-based application for distributed file storage. It is possible to upload various types of files into our decentralized application (DApp), and hashes of the uploaded files are permanently saved on the Ethereum blockchain with the help of smart contracts. Because these records cannot be removed or altered, they are immutable. By clicking on the file description, we can also view the file. The DApp also includes a keyword search feature to assist us in quickly locating sensitive information. We used Ethers.js’ smart contract event listener and contract.queryFilter to filter and read data from the blockchain. The smart contract events are then written to a text file for our DApp’s keyword search functionality. Our experiment demonstrates that our DApp is resilient to system failure while preserving the transparency and integrity of data due to the immutability of blockchain.
|
# electronics
_Article_
## Blockchain and Interplanetary File System (IPFS)-Based Data Storage System for Vehicular Networks with Keyword Search Capability
**N. Sangeeta and Seung Yeob Nam ***
Department of Information and Communication Engineering, Yeungnam University,
Gyeongsan 38541, Republic of Korea
*** Correspondence: synam@ynu.ac.kr**
**Citation: N. Sangeeta; Nam, S.Y.**
Blockchain and Interplanetary File
System (IPFS)-Based Data Storage
System for Vehicular Networks with
Keyword Search Capability.
_Electronics_ 2023, 12, 1545. https://doi.org/10.3390/electronics12071545
Academic Editor: Hamed Taherdoost
Received: 17 February 2023
Revised: 22 March 2023
Accepted: 22 March 2023
Published: 24 March 2023
**Copyright:** © 2023 by the authors.
Licensee MDPI, Basel, Switzerland.
This article is an open access article
distributed under the terms and
conditions of the Creative Commons
Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
**Abstract: Closed-circuit television (CCTV) cameras and black boxes are indispensable for road safety**
and accident management. Visible highway surveillance cameras can promote safe driving habits
while discouraging moving violations. According to CCTV laws, footage captured by roadside
cameras must be securely stored, and only authorized persons may access it. Footage collected by CCTV
cameras and black boxes is usually saved to the camera’s microSD card, the cloud, or local hard drives, but
there are concerns about security and data integrity. These issues may be addressed by blockchain
technology. The cost of storing data on the blockchain, on the other hand, is prohibitively expensive.
We can have decentralized and cost-effective storage with the interplanetary file system (IPFS) project.
It is a file-sharing protocol that stores and distributes data in a distributed file system. We propose
a decentralized IPFS and blockchain-based application for distributed file storage. It is possible to
upload various types of files into our decentralized application (DApp), and hashes of the uploaded
files are permanently saved on the Ethereum blockchain with the help of smart contracts. Because
these records cannot be removed or altered, they are immutable. By clicking on the file description, we can also view the file.
The DApp also includes a keyword search feature to assist us in quickly locating sensitive information.
We used Ethers.js’ smart contract event listener and contract.queryFilter to filter and read data from
the blockchain. The smart contract events are then written to a text file for our DApp’s keyword
search functionality. Our experiment demonstrates that our DApp is resilient to system failure while
preserving the transparency and integrity of data due to the immutability of blockchain.
**Keywords: blockchain; Ethereum blockchain; decentralized application (DApp); interplanetary file**
system (IPFS); smart contracts
**1. Introduction and Background**
CCTV camera images are a valuable source of traffic surveillance that supplements
other traffic control measures. CCTV is aimed at helping in the detection and prevention of
criminal activity. It can be helpful in protecting the citizens in the community. It is placed in
public areas to provide evidence to appropriate law enforcement agencies. CCTV cameras
can be found on busy roads, atop traffic lights, and at highway intersections. Operators
detect and monitor traffic incidents using images from CCTV cameras. It may be possible
to predict the duration of a traffic incident based on prior experience and traffic modeling
techniques. Cameras are used to observe and monitor traffic, as well as to record traffic
pattern data. Moving violation tickets are even issued using cameras.
The vehicle’s event data recorder is constantly recording information in a loop while
we are driving, at least until a collision occurs. Black boxes save data collected at the time
of impact, as well as 5 s before and after the event. The black boxes will record all human
contact with the vehicle. The data collected helps us understand the reasons for collisions
and prevent them from happening again.
CCTV footage is being used in crime investigations by police officers and insurance
companies [1] all over the world. Recorded footage is typically used by investigators to
locate or confirm the identity of a suspect. Real-time surveillance systems allow employees
or law enforcement officials to detect and monitor any threat in real time. Then, there’s the
archival footage record, which can be reviewed later if a crime or other issue is discovered.
In these cases, the recorded footage must be securely deposited and kept for future use,
making video storage a critical component of any video camera security system.
The vast majority of information collected by surveillance cameras and dashboard
cameras is securely kept on hard drives and memory cards. How long footage can be retained on
a security camera’s microSD card, however, depends on how much activity
the camera records. This type of storage necessitates a large amount of
storage space and exposes our data to risk if the device’s hard drive fails or is damaged. It
is critical to securely store CCTV and black box footage in order for it to be available and
unaltered at all times. In many cases, the introduction and popularity of IP camera cloud
storage have reduced the importance of local storage to a secondary option.
Cloud systems are powerful tools that offer many advantages and functions. Cloud storage systems, on the other hand, have flaws such as problems with
data safety [2,3], centralized data storage, and the requirement for trusted third parties.
Owners are relieved of the burden of maintaining local data storage, but they end up
losing direct control over storage reliability and protection. Every year, large database
hacks cost millions of dollars. Furthermore, because the data is kept on an external device,
the owners have no power over it; if the service provider disconnects or limits access, they
will not be able to access it.
Because cloud storage is centralized, an intruder who compromises the servers can
view and alter the data. Cloud data is untrustworthy and can be altered or removed at any time.
As a result, ensuring data security [4] and safeguarding users’ privacy [5] are critical.
Users usually must pay for the full storage plan they select, even if they only
use a portion of it.
Even the finest cloud service providers, despite strong maintenance standards, can face such
challenges. Centralized storage service providers occasionally fail to deliver
the security service as agreed. For example, a hack on Dropbox [6], one of the world’s
largest online storage companies, resulted in the leak of 68 million usernames and passwords
on the dark web. Well-known cloud services have also experienced outages and security
breaches. The mass email deletion incident at Gmail [7], a recent Amazon S3 outage [8], and the
post-launch interruption of Apple’s MobileMe [9] are other examples.
Blockchain technology may be able to address these issues. A blockchain is made up
of blocks, each containing a cryptographic hash of the preceding block, a timestamp, and
transaction data. As a result, every block links to the next, forming a “chain” of blocks
and producing safe and immutable records. However, the blockchain is not designed
for the purpose of file storage. The cost of keeping data on the blockchain is exorbitantly
high. We can have decentralized as well as low-cost storage with the IPFS project [10].
Peer-to-peer networks provide greater security than centralized networks. As a result,
they are well suited to protecting sensitive information from malicious actors.
In this paper, we propose an IPFS-based distributed and decentralized storage application
that offers more storage space than solely blockchain-based systems. Using
distributed storage, information is kept on different nodes or servers on the Internet. To
upload files, we use the Geth [11] software client to operate an Ethereum node and an IPFS
Daemon server to operate our own IPFS node. In our proposed scheme, users access the
DApp through their web browser and connect to the blockchain via a blockchain wallet,
Metamask [12]. Since it is powered by Ethereum smart contracts, the
decentralized application will interact with the blockchain, which will keep all the code
of the application in addition to the data. The smart contracts keep track of all sources of
information in IPFS files. A DApp can receive any kind of information. The hash value of
the uploaded file is permanently saved on the Ethereum blockchain via smart contracts and
it cannot be changed or deleted. Whenever a file is uploaded, the DApp hears the event
“File Upload” and updates the DApp’s user interface. We retrieve all of the smart contract
events and display them on our DApp, in what is called the “smart contract event log”. The
smart contract event log contains data such as the file name, file summary (including the
event and location of the file), file type, size of the file, time and date of upload, Ethereum
account information of the user, and the hash value of the file once it has been uploaded to
IPFS. Users can also view the file by clicking on its description. The user does not need to
remember and save the hash value independently, which could be dangerous if another
individual has access to it. Our DApp also includes a keyword search feature to assist you
in quickly locating sensitive information. Figure 1 shows an example scenario where our
proposed system can be applied. When an accident occurs, our proposed system might be
used to save the video taken by the dashboard camera on IPFS and the hash value of the
video on the blockchain to prevent the manipulation of the video using the immutability
property of blockchain.
**Figure 1. Example scenario to illustrate the application of the proposed system in the presence of**
accidents on the road.
The key contributions of our paper can be summarized as follows:
- Our proposed distributed storage application supports the storage of various file
types since uploaded files are stored on IPFS and their hash values are stored in smart
contracts on the Ethereum blockchain. Users need not remember the hash values since
they can be retrieved from the blockchain later.
- DApp provides a keyword search feature to help users quickly find the necessary files
based on Ethers.js’s smart contract event listener and contract.queryFilter.
- Our experiment shows that our DApp is resilient to system failure, and our system
provides better transparency than is possible with centrally managed applications.
The rest of our paper is structured as follows: Section 2 contains related work. Section 3
contains preliminary information. The proposed scheme is described in Section 4. Section 5
goes over implementation. The performance evaluation results are described in Section 6.
Finally, Section 7 brings the paper to a close.
**2. Related Work**
Hao, J. et al. [13] studied a blockchain and IPFS-based storage scheme for agricultural
product tracking. During the manufacturing, processing, and logistics processes, sensors
collect real-time data on product quality as well as video and picture data, according to
this study. The server parses and encapsulates the data before writing it to IPFS, and the
hash address is then stored in the blockchain to complete the data storage. The collected
data is not directly written to IPFS. The authors employ a private data server, and data
collected by sensors is first stored on the private data server before being directly stored
on the IPFS. If the server experiences problems, such as server failure, the collected data is
lost, and the server is unable to write data to IPFS. There is no keyword search function for
quickly finding agricultural product information.
Rajalakshmi et al. [14] proposed a framework for access control methods in research
records that combines blockchain, IPFS, and traditional encryption methods. The system stores the verification metadata acquired from the IPFS on the
blockchain network using Ethereum smart contracts, resulting in tamper-proof record-keeping
for further auditing. There is no keyword search functionality for searching information related to research records in this proposed scheme, which only stores PDF files.
Vimal, S. et al. [15] proposed a method to improve the efficiency of the P2P file-sharing
system by incorporating trustworthiness and proximity awareness during file transfer
using IPFS and blockchain. Any of these hashed files can be retrieved by simply calling the
hash of the file. Miners who collaborate to ensure the successful transfer of resources are
compensated. This study discusses the file transfer service, as well as the security strength
and some of the IPFS-based incentives.
Yongle Chen et al. [16] proposed a more efficient P2P file system scheme built around
IPFS and blockchain. The authors addressed the high-throughput
problem for individual IPFS users by incorporating the role of content service
providers. A novel zigzag-based storage model is utilized to improve the IPFS block storage
model by taking data reliability and availability, storage overhead, and other issues for
service providers into account.
Rong Wang et al. proposed a video surveillance system relying on permissioned
blockchains (BCs) and edge computing in their paper [17]. Convolutional neural networks
(CNN), edge computing, and permissioned BCs, as well as IPFS technology, were used in
this system. Edge computing was utilized to collect and process large amounts of wireless
sensor data, the IPFS storage solution was utilized to enable huge video data storage, and
CNN technology was applied to real-time monitoring.
Sun, J. et al. [18] proposed a blockchain-based secure storage and access scheme for
electronic medical records in IPFS, which ensures necessary access to electronic medical data
while preserving retrieval efficiency. IPFS is a file system used in order to store encrypted
electronic medical data. After receiving the hash address from IPFS, the physician encrypts it with a random
number, hashes the health information and its index with the SHA256 hash function, and broadcasts
the hash value and the encrypted hash address to the blockchain. Furthermore, the system offers
targeted defense against relevant keyword attacks. Medical data is not directly stored on IPFS;
electronic health data is encrypted before being stored on IPFS, and encrypting the IPFS
hash value before it is kept on the blockchain adds further delay.
Most of the previous works lack a keyword search functionality for quickly locating relevant information. They do not mention how to retrieve the metadata from the
blockchain. It is not possible to retrieve data from IPFS without the hash value of the file.
Table 1 compares our proposed system with existing approaches.
**Table 1. Comparison of existing approaches with the proposed scheme.**
| Constraints | Hao, J. et al. [13] | Rajalakshmi A. [14] | Sun, J. et al. [18] | Our Proposed Scheme |
|---|---|---|---|---|
| Delay | High delay: collected data is not directly written to IPFS | Low delay | High delay: encryption of medical data | Low delay in uploading files to IPFS; the file hash is automatically stored on the blockchain with the help of a smart contract |
| Tampering on the stored data | Possibilities of data tampering | No tampering | No tampering | No tampering of data, as data is stored on IPFS and its hash on the blockchain |
| Storage capacity | Less storage capacity: stored on a data server | More storage capacity | More storage capacity, as the data is stored on IPFS | More storage capacity, as the data is stored on IPFS |
| Heterogeneous data | Uploading only videos and images on IPFS | Uploading only PDFs | Only electronic medical record upload | Heterogeneous data |
| Keyword search function | No keyword search function | No keyword search function | No keyword search function | Supports keyword search function |
**3. Preliminaries**
_3.1. IPFS_
The interplanetary file system is a distributed file system protocol developed by Juan
Benet in 2015 and managed by Protocol Labs. The IPFS network consists of computers
running the IPFS client software. Anyone can join the IPFS network, either as an IPFS node
running the IPFS client or as a network user storing and retrieving files. Any type of file can
be stored, including text, music, video, and images, which is especially useful for non-fungible
tokens (NFTs). In contrast to HTTP, data in IPFS is identified by content rather than location.
When we upload a file to IPFS, a hash of the content is generated. This hash identifies the
content uniquely and can be used to retrieve the file. If we upload a different file, the hash
will be completely different, but we can always recompute the file’s hash locally to ensure
it matches the original IPFS hash. We selected the IPFS protocol in our proposed scheme
because it is a well-known and working decentralized file storage protocol.
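As a rough illustration of this content addressing, the sketch below adds a small payload to a local IPFS node and reads it back by its content identifier; the ipfs-http-client package and the default daemon API port are assumptions made for the example, not details taken from this paper.

```js
// Minimal sketch of IPFS content addressing (assumes a local IPFS daemon
// listening on its default API port and the ipfs-http-client npm package).
const { create } = require('ipfs-http-client');

async function demo() {
  const ipfs = create({ url: 'http://127.0.0.1:5001/api/v0' });

  // Adding the same bytes always yields the same content identifier (CID).
  const { cid } = await ipfs.add(Buffer.from('hello, IPFS'));
  console.log('CID:', cid.toString());

  // The CID alone is enough to retrieve the original bytes.
  const chunks = [];
  for await (const chunk of ipfs.cat(cid)) chunks.push(chunk);
  console.log('content:', Buffer.concat(chunks).toString());
}

demo().catch(console.error);
```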
_3.2. Ethereum_
Ethereum [19] is, at its core, a decentralized global software platform that utilizes
blockchain technology. It is most well-known for its native cryptocurrency, ether, abbreviated as ETH. Anyone can use Ethereum to create secure digital applications. Its
token is intended to be utilized by the blockchain network, and it may also be employed
to pay participants for blockchain work. It is a platform for various DApps that can be
deployed through smart contracts. An Ethereum Private Network is a blockchain that is
completely separate from the main Ethereum network. The Ethereum Private Network is
primarily used by organizations to limit blockchain read permissions.
_3.3. Web3.js_
Web3.js [20] is a set of libraries that allows developers to communicate with a remote
or local Ethereum node via HTTP, IPC, or WebSocket. You can use this library to create
websites or clients that communicate with the blockchain.
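A minimal sketch of such a connection, assuming a local node exposing HTTP-RPC on its default port (the endpoint is an assumption, not a detail from this paper):

```js
// Connect Web3.js to a local Ethereum node over HTTP and read chain state.
const Web3 = require('web3');
const web3 = new Web3('http://127.0.0.1:8545');

web3.eth.getBlockNumber()
  .then((n) => console.log('latest block:', n))
  .catch(console.error);
```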
_3.4. Ethers.js_
Ethers.js [21] connects to Ethereum nodes using Alchemy, JSON-RPC, Etherscan,
Infura, Metamask, or Cloudflare. Developers can use Ethers.js to take advantage of its full
functionality for their various Ethereum needs.
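For instance, a provider for the same local node can be created as in the following sketch (written against Ethers.js v5; v6 renames the class to ethers.JsonRpcProvider):

```js
// Create an Ethers.js (v5) provider for a local node and query the network.
const { ethers } = require('ethers');
const provider = new ethers.providers.JsonRpcProvider('http://127.0.0.1:8545');

provider.getNetwork()
  .then((net) => console.log('connected to chain id', net.chainId))
  .catch(console.error);
```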
_3.5. Smart Contract_
Smart contracts are programs stored on a blockchain that run when predetermined
conditions are met. They are frequently used to automate agreement execution so that all parties have immediate certainty of the outcome without the involvement of an
intermediary. They can also automate a workflow by automatically performing the next
action when certain requirements are fulfilled.
_3.6. Smart Contract Events_
When a transaction is mined, smart contracts can also emit events and logs to the
blockchain, which the front end can then process. Events are essential on any blockchain
because they connect smart contracts, which are self-executing software
programs whose terms are integrated directly into
lines of code, with user interfaces. To use a smart contract, a user must first
manually sign a transaction and interact with the blockchain. This is where automation
can help users by simplifying things. Event-driven automation initiates processes without
requiring human intervention. An automation tool can start a predefined process or
workflow of smart contracts after detecting an event.
_3.7. Decentralized Applications (DApp)_
A decentralized application [22] is an application that can run autonomously, typically
using smart contracts and running on a decentralized computing, blockchain, or other
distributed ledger system. DApps, like traditional applications, provide some function or
utility to their users.
_3.8. React.js_
React.js [23], also known simply as React, is a free and open-source JavaScript library. It is
used to build user interfaces by combining code sections (components) into complete websites.
We can use React as much or as little as we want. React enables developers to use separate
software components across the client and server sides, which also speeds up development.
_3.9. Dependencies_
3.9.1. Node Package Manager (NPM)
The node package manager (NPM) is a command-line tool for installing, updating,
and removing Node.js packages from our application. It also serves as a repository for
open-source Node.js packages. A package manager is essentially a set of software tools
that can be used by a developer to automate and standardize package management.
3.9.2. Node.js
Node.js is a JavaScript runtime environment that can be used for prototyping and agile
development, as well as to create extremely fast and scalable services.
3.9.3. MetaMask
MetaMask is a non-custodial Ethereum-based decentralized wallet that also lets users
save, buy, send, transform, and swap crypto tokens, as well as sign transactions. Using
Metamask in conjunction with Web3.js in a web interface simplifies communication with
the Ethereum network.
3.9.4. Truffle Framework
Truffle is a set of tools that allows us to create smart contracts, write tests against
them, and deploy them to blockchains. It also provides a development console and allows
us to create client-side applications within our project. Truffle is the most widely used
framework for creating smart contracts. It supports Solidity and Viper as smart contract
languages. Truffle has three main functions: it compiles, deploys, and tests smart contracts.
**4. Proposed Data Storing Scheme**
Our proposed scheme divides data storage, retrieval, and searching into four steps.
The system uploads a file, stores the file hash on the blockchain, monitors smart contract
events, and searches for relevant information.
_4.1. File Uploading_
The main concept of the file uploading process is depicted in Figure 2. The file is
selected from the DApp (browser) (1), and when the DApp form’s submit button is clicked,
the uploaded file is stored on IPFS (2). The hash of the file uploaded is returned to the
DApp (3); this hash is the file’s location. The file’s hash is saved to a smart contract (4),
which is subsequently kept on the blockchain (5), and the hash and other information of
the uploaded file were also listed on the DApp (6), from which we can obtain all of the files
we have uploaded to IPFS.
**Figure 2. File Upload.**
We used a web browser as a front end, connected to the Ethereum wallet Metamask,
to communicate with the blockchain on which the smart contract is stored.
We will upload the file directly to an IPFS, and then IPFS will return to us a hash. We
will then store this hash on the smart contract, and it will store that hash on the blockchain,
allowing us to access all of the files we have created when we list them on the DApp.
A smart contract stores the hash value on the blockchain, and another smart contract
lists the uploaded files on the DApp. The smart contract handles file uploading, file storage,
and file listing.
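A condensed sketch of this two-step flow is given below; the contract address, ABI fragment, account, and gas limit are placeholders, and the argument names and order of fileUpload follow the description given later in this section rather than the paper’s exact Solidity code.

```js
// Sketch of the upload flow: the file bytes go to IPFS, and only the
// returned hash plus metadata go into the fileUpload contract call.
const Web3 = require('web3');
const { create } = require('ipfs-http-client');

const web3 = new Web3('http://127.0.0.1:8545');
const ipfs = create({ url: 'http://127.0.0.1:5001/api/v0' });

// Assumed JSON ABI fragment for fileUpload; the exact types are a guess.
const fileUploadAbi = [{
  name: 'fileUpload',
  type: 'function',
  stateMutability: 'nonpayable',
  inputs: [
    { name: '_fileHash', type: 'string' },
    { name: '_fileSize', type: 'uint256' },
    { name: '_fileType', type: 'string' },
    { name: '_fileName', type: 'string' },
    { name: '_fileDescription', type: 'string' },
  ],
  outputs: [],
}];
const contract = new web3.eth.Contract(fileUploadAbi, '0xYourContractAddress');

async function uploadFile(buffer, fileName, fileType, description, from) {
  // 1. Store the raw bytes on IPFS and keep only the content hash.
  const { cid, size } = await ipfs.add(buffer);

  // 2. Record the hash and metadata immutably via the smart contract.
  return contract.methods
    .fileUpload(cid.toString(), size, fileType, fileName, description)
    .send({ from, gas: 500000 });
}
```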
Figures 3 and 4 show our smart contract. Our project’s smart contract is responsible for
four tasks. Define a data structure for file management, upload the files, store the file hash
in the blockchain, and display the uploaded files on the DApp. We use a struct to manage
the files inside Solidity. Solidity structs allow us to create more complex data types with
multiple properties. By creating a struct, we can define our own type. They are useful for
organizing related data. Structures can be declared outside of one contract and imported
into another.
**Figure 3. Solidity code for creation of a blockchain register and events to facilitate interoperability (1/2).**
**Figure 4. Solidity code for creation of a blockchain register and events to facilitate interoperability (2/2).**
The following steps show the tasks of a smart contract:
(i) Define data structure for the management of files:
Figure 3 shows step one in modeling the file (6). We created a file object, and inside we
defined a uint id, which will be the unique identifier for the file inside our smart contract.
One string will hold the hash of the file, which is its location on IPFS, and another a description
of the file, which records the location of the footage and the events related to the uploaded file.
The address-payable uploader is the person who uploads the file; it is the Ethereum
address of that person’s wallet as they are connected to the blockchain, and it acts like
their username on the blockchain.
(ii) Store and list the files:
Step two is to store the file on IPFS, and step three is to list the event logs on the DApp.
We used mapping inside of Solidity to store the files, as shown in Figure 3. Mapping is
another data structure. It can be utilized to store data as key-value pairs, with the key
being any built-in data type except reference types, and the value being
any type. We created mapping (5) as shown in Figure 3. A mapping inside of Solidity is
just a key-value store. We can give it a key and a value. The data type of the key in our
smart contract is an unsigned integer, and the return value is the file struct (6), as shown in
Figure 3. When we place a file with an id within this mapping, it will write and store it on
the blockchain. Mapping is also going to give us the ability to list the files because mapping
is public, and thus it gives us a function called “files” (5) that we can call, pass in the id,
and fetch out each individual file. We can get back a file with all the data, such as the id,
hash, file name, description, and uploader.
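Under these assumptions, listing the files reduces to reading the public fileCount getter and then calling the generated files(id) getter for each id, as in this sketch (reusing the Web3 contract object from the earlier upload sketch, assuming its ABI also includes the public fileCount and files getters):

```js
// Sketch of listing uploaded files via the auto-generated public getters.
async function listFiles(contract) {
  const count = await contract.methods.fileCount().call();
  const files = [];
  // Ids start at 1, since fileCount is incremented before each file is stored.
  for (let id = 1; id <= count; id++) {
    files.push(await contract.methods.files(id).call());
  }
  return files; // each entry carries id, hash, name, description, uploader, ...
}
```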
(iii) Upload File:
The Solidity code has a function called fileUpload (8). “fileUpload” takes the following
arguments: fileHash, fileSize, fileType, fileName, fileDescription. Whenever we upload a new
file, we will just add a new file to the mapping. We created a new file (6) and put it inside the
file’s mapping (5). We are going to store the file based on the id inside the mapping, as shown
in (5). We stored the file onto the blockchain as shown in (11).
Inside the smart contract, Solidity has a global variable called “msg” or “message”
that has many different attributes, one of which is the person calling the function, “message
sender” is the Ethereum address of the person uploading the file. We created a file struct
and saved it inside the “files mapping”, which we simply call “files”, pass in the id, and it
will be equal to a new file (11).
**fileCount (4) is a variable that stores the number of files that have been created.**
When the smart contract is created, the counter value is zero, but the value is updated
inside the function anytime the function is called. We could
write fileCount ++ (10) and then pass in fileCount in (11). fileCount keeps track of all the
files; it is basically our ID management system, and we save it inside the file mapping,
which acts like our database.
(iv) Creating an Event:
The event allows us to know when the file was uploaded. We can create events
from the Solidity code. We define an event called “fileUpload” and we pass in the same
arguments as the struct (7); this is going to allow us to subscribe to the event whenever it
is triggered from our application. We can trigger the upload event (12). We use the emit
keyword, then FileUploaded which has the same name as the event (7) and we pass in
the arguments file count, fileHash, fileSize, fileType, filename, file description, and now,
msg.sender.
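On the DApp side, one way to subscribe to this event is with an Ethers.js listener, as in the sketch below (reusing the ethers import and provider from the earlier Ethers.js sketch); the human-readable ABI fragment mirrors the argument list just described, but the exact Solidity types are an assumption.

```js
// Sketch of reacting to FileUploaded in real time (Ethers.js v5). The event
// signature is inferred from the arguments listed in the text above.
const abi = [
  'event FileUploaded(uint256 id, string fileHash, uint256 fileSize, ' +
    'string fileType, string fileName, string fileDescription, address uploader)'
];
const fileStorage = new ethers.Contract('0xYourContractAddress', abi, provider);

fileStorage.on('FileUploaded', (id, hash, size, type, name, desc, uploader) => {
  // In the DApp, this callback is where the user interface would be refreshed.
  console.log(`file #${id} (${name}) by ${uploader}: ipfs hash ${hash}`);
});
```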
Next, we added some requirements to the function to make it robust. We can use Solidity’s
**require function (9). The require function checks that a set of parameters is true before the rest**
of the function executes. Table 2 shows the list of variables used in our smart contract.
**Table 2. Smart Contract variables.**
| Variables | Why It Is Used |
|---|---|
| fileCount | Keeps track of how many files have been added to the current smart contract |
| mapping | File key-value store; lists the files |
| struct | Manages the files |
| event FileUploaded | Allows us to know when the file was uploaded |
| function fileUpload | Uploads a new file |
| emit FileUploaded | Triggers an event |
Recently, diverse types of formal methods have been investigated to enhance the security
of smart contracts, since the compromise of smart contracts can lead to a catastrophic
monetary loss [24]. However, our smart contract codes have not been analyzed using those
formal methods yet, and we will verify our codes in our future work.
Our first project element is a private Ethereum blockchain that will act as the back end
for our DApp. Ethereum nodes maintain an archive of the blockchain’s code and data. This information is dispersed throughout the network. Geth is utilized to run an Ethereum node.
By running a node on the Ethereum network, we can perform transactions as well
as communicate with smart contracts. The uploaded file’s hash is saved in a smart contract
and then immutably stored on the Ethereum blockchain.
The next component is IPFS, which enables us to keep files in a distributed fashion.
Because files are large, storing megabytes and gigabytes of files on the blockchain may not
be feasible. This is where IPFS comes into play. It has nodes, just like Ethereum, and we
distribute files that cannot be tampered with across the network. IPFS uses hashes. When
you upload a file to IPFS, it will be stored somewhere and identified by its hash. We run
our own IPFS node, which supports an IPFS gateway for file retrieval and storage and runs
the IPFS Daemon server. We cannot store or retrieve data unless the Daemon server is up
and running, or unless we link to public gateways such as Infura [25].
When a user uploads CCTV footage to our DApp, they can specify the location as well
as event details such as whether it was an accident or a traffic violation. This information is
fed into the DApp as a file description. This information is critical when uploading a video
to the DApp because users can quickly search for location and event information using the
DApp’s keyword search function.
We first must import and link our Ethereum blockchain account to Metamask before
we can use the DApp. Our web browser now supports blockchain networks, and we can
upload files to IPFS using our custom-designed DApp user interface (UI). First, we must
select the file, enter its description (such as file event and location), and then click the
submit button. When we click the submit button, the file is sent to IPFS and we receive
the IPFS result, which contains the hash value and path of the file. Metamask directs us to
accept the transaction, save the hash in a smart contract, and store the smart contract on
the blockchain via a confirmation pop-up. To store the hash on the blockchain, we must
pay some gas in the form of ether. When we confirm the Metamask transaction, the
hash of the uploaded file is preserved on the Ethereum blockchain.
The DApp monitors the “file upload” event and updates the DApp’s User interface
automatically. The event log of the smart contract is generated by retrieving and displaying
all events from the smart contract within our DApp. The smart contract event log includes
the file number, file description, type of file, file size, timestamp, Ethereum account information of the
uploader, and the hash value of the file after it has been stored in IPFS. By clicking
on the file description, individuals may view the uploaded files in their web browser. The
hash value does not need to be remembered or stored separately by the user.
_4.2. Keyword Searching_
Users of the blockchain network can view transaction details but cannot identify the
individuals who made the transactions. On our DApp, we can see the transactions and use
the data for keyword searching.
(i) Read information from the blockchain:
When state changes occur in a smart contract, it emits events in order to
communicate with DApps and other smart contracts. When we invoke a smart contract
function, it has the ability to generate an event. It is critical for us to be able to listen to
these events in real time when developing DApps.
To listen for smart contract events, we used the Ethers.js smart contract event listener. To
communicate with a smart contract using Ethers.js, we must first create a new contract
object with Ethers, as shown in step (1) of Figure 5.
**Figure 5. Ethers.js filter to read events from blockchain.**
As shown in steps (2), (3), and (4) of Figure 5, we need the blockchain address of
the smart contract, the ABI of the smart contract, and a signer or provider. The ABI
is a JSON object that describes the smart contract's interface: which functions the contract
exposes, what arguments they accept, and what they return when we read data from the
contract. Ethers.js allows us to store ABIs as an array and pull in only the parts we need
when setting up a contract object. Because our project requires file upload information,
we included only the ABI fragment related to the file upload event. We then need a
provider or a signer; in our project, we use a provider. A provider is an abstraction of a
connection to the Ethereum network that offers a concise, consistent interface to standard
Ethereum node functionality. We create a new contract object from the contract address,
the ABI, and the provider, as shown in step (5) of Figure 5.
We used contract.queryFilter to filter the information, as shown in step (6) of Figure 5.
This call examines every FileUploaded event that has ever occurred on our blockchain;
the filter reduces the search space inside the Ethereum blockchain. Ethers.js also lets us
specify which blocks we want to examine, as shown in step (7) of Figure 5.
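Putting the steps of Figure 5 together, a hedged Ethers.js sketch might look as follows; the event signature, contract address, and RPC endpoint are assumptions rather than the exact values used in our deployment.

```js
// Minimal sketch of the Figure 5 flow: contract object from address + ABI +
// provider, then a query over all past FileUploaded events.
import { ethers } from "ethers";

const provider = new ethers.providers.JsonRpcProvider("http://localhost:8545"); // (4)
const abi = [                                                                    // (3)
  "event FileUploaded(uint256 fileNo, string fileHash, string description, uint256 fileSize, uint256 timestamp, address uploader)"
];
const contract = new ethers.Contract("0x...deployed address...", abi, provider); // (5)

async function readUploads() {
  const filter = contract.filters.FileUploaded();                 // (6) event filter
  const events = await contract.queryFilter(filter, 0, "latest"); // (7) block range
  return events.map((e) => e.args);                               // decoded event data
}
```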
(ii) Keyword search text file creation:
Once the events are retrieved from the blockchain, we can create a text file for keyword
searches. The smart contract events are written into the text file, which stores only the
necessary information: file name, event type, location, Ethereum account number, and
smart contract address.
Figure 6 shows how to retrieve data from the blockchain and conduct keyword
searches. To listen to smart contract events, we used a command prompt to send requests to the blockchain (1). The blockchain responded with a filtered smart contract event
log containing all of the information about the uploaded file, including the smart contract
address, file name, file hash and description, the uploader's Ethereum address, and so
on (2). When we received a smart contract event log, we saved part of it in a
text file (3). We wrote code in React.js to filter the results and search for keywords on the
DApp. When a user searches for a keyword on the DApp, the request is sent to the text file
containing the smart contract events, which is then filtered, and the result is returned to the
DApp (4). Users can look up a whole word or a single letter.
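A minimal sketch of step (4), assuming the text file has already been loaded into an array of event-log lines: the filter is a case-insensitive substring match, which is why both a full keyword and a single letter work.

```js
// Minimal sketch of the DApp's keyword filter over the event-log lines.
function searchEvents(eventLines, keyword) {
  const needle = keyword.toLowerCase();
  return eventLines.filter((line) => line.toLowerCase().includes(needle));
}

// Example: searchEvents(lines, "accident") returns all accident events,
// while searchEvents(lines, "a") returns every line containing the letter "a".
```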
**Figure 6. Keyword Search Function of the proposed DApp.**
**5. Implementation**
We implemented the proposed scheme on the Windows 10 operating system using a private
Ethereum blockchain. A private Ethereum network is not connected to the Ethereum main
network; organizations primarily use private networks to limit blockchain read permissions.
Installing geth (or parity) allows the current node to join an Ethereum network and download
the blockchain to local storage. We used Go Ethereum (Geth) to create our private Ethereum blockchain.
_5.1. Steps to Create Private Ethereum Network_
The following steps show how we built our private Ethereum network:
5.1.1. Download “Geth”
Go Ethereum (Geth) can be downloaded and installed directly from geth.ethereum.org
(accessed on 16 February 2023). Because Geth is a command line interface, we execute
all commands from the command line. After installing Geth on our system, we typed geth
and pressed enter in a command prompt and obtained the output shown in Figure 7.
**Figure 7. Geth command.**
We used the geth command to connect to a blockchain; geth runs in fast sync mode,
which is Geth's current default sync mode. Fast sync nodes
download the headers of each block and retrieve all the nodes beneath them until they
reach the leaves. Instead of reprocessing all transactions that have ever taken place
(which could take weeks), fast sync downloads the blocks and only validates the associated
proof-of-work. When we stop and restart geth, it operates in full sync mode. Full sync
downloads all blocks and incrementally generates the blockchain state by executing
each block since genesis. The data size of the Ethereum blockchain is currently around
800–1000 gigabytes, and we do not need to download the entire Ethereum blockchain on
our system.
5.1.2. Make a Folder for Our Private Ethereum Network
For the private Ethereum network, we created a separate folder called “Private Ethereum”.
This folder separates the Ethereum private network files from the public files.
5.1.3. Construct a Genesis Block
In a blockchain, all transactions are recorded in the form of blocks in sequential order.
The chain can contain any number of blocks, but there is always one distinct block that
gives rise to the entire chain, known as the genesis block.
The genesis block, also known as Block 0 or Block 1, is the first block ever recorded
on its respective blockchain network and contains no transactions. The genesis block is used
to initialize the blockchain, as shown in Figure 8. A genesis block is required to create a
private blockchain; it can be created with any text editor and saved with
the JSON extension in the Private Ethereum folder. Figure 9 shows the genesis block file.
**Figure 8. Genesis block in a blockchain.**
**Figure 9. Genesis block file.**
5.1.4. Run the Genesis File
To initialize the chain from the genesis file, we open the Private Ethereum folder in
Visual Studio Code and run the command geth init ./genesis.json --datadir eth, where
eth is the name of the data directory folder. After running this command, Geth is connected
to the genesis file.
5.1.5. Set Up the Private Network
We created a private network in which multiple nodes can add new blocks. To accomplish
this, we use the command geth --datadir ./eth/ --nodiscover. Starting a geth node with
--nodiscover prevents the node from being discovered by the network's bootnodes.
Every time the private network chain is needed, these commands must be executed in the
console to connect to the genesis file and the private network. A private Ethereum
network and a personal blockchain are now available. Figure 10 shows the running status
of a private Ethereum network.
**Figure 10. Private Ethereum network.**
5.1.6. Make Externally Owned Account (EOA)
EOAs are controlled by users who have access to the account’s private keys. These
accounts, which can both send transactions and trigger contract accounts, are typically used
in conjunction with a wallet. An EOA is required to manage the blockchain network. To
create one, we launched Geth in two windows: one terminal to run Geth, as shown in
Figure 10, and another terminal to create the EOA. We entered the command geth attach \\.\pipe\geth.ipc in
the second terminal (console window). This will connect the second terminal to the private
Ethereum network established in Figure 10. We used the command personal.newAccount()
to create a new account. After executing this command, we entered our password to obtain
our account number and saved it for future use as shown in Figure 11.
**Figure 11. Externally owned account, Mining Start and Stop.**
5.1.7. Ethereum Mining on Our Private Chain
If we mine on the Ethereum main chain, we need expensive equipment with
powerful graphics processors; ASICs are typically used for this. However, high performance is not
required in our private network, and we can begin mining with the command miner.start()
as shown in Figure 11.
After a few seconds, some ether appears in the default account when the balance
is checked, as shown in Figure 11. To check the balance, we used the command
**eth.getBalance(eth.accounts[0])**. Figure 12 shows the mining process. We used the
command miner.stop() to stop mining, as shown in Figure 11.
**Figure 12. Mining Process.**
5.1.8. Connecting the Private Ethereum Network to Metamask
We closed the terminal in which our private network was running, opened a new
terminal, and typed the command **geth --datadir ./eth/ --nodiscover --http --http.addr
"localhost" --http.port "8545" --http.corsdomain="*" --http.api web3,eth,debug,personal,net
--ws.api web3,eth,debug,personal,net --networkid 7777 --allow-insecure-unlock**, as shown
in Figure 13; our private Ethereum network can now be connected to Metamask.
The flags used are explained as follows:
- --http.addr value: listening interface for the HTTP-RPC server (default: "localhost").
- --http.port value: listening port for the HTTP-RPC server (default: 8545).
- --http.corsdomain value: a comma-separated list of domains that will accept cross-origin queries (browser enforced). Because the HTTP server can be accessed from any local application, the server includes additional safeguards to prevent API abuse from web pages. To allow API access from a web page, the server must be configured to accept cross-origin requests, which is done with the --http.corsdomain flag. The flag accepts wildcards, allowing access to the RPC from any location: --http.corsdomain '*'.
- --http.api value: APIs accessible via the HTTP-RPC protocol.
- --ws.api value: APIs accessible via the WS-RPC interface.
- --nodiscover: disables the peer discovery mechanism.
- --networkid value: sets the network id explicitly.
- --allow-insecure-unlock: allows insecure account unlocking when account-related RPCs are exposed via HTTP.
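As a quick sanity check that the node is now reachable over HTTP-RPC, a small script can query the endpoint with Ethers.js; the port and the chain id shown below are the values assumed in our setup (2022 is the chain ID from our genesis file).

```js
// Minimal sketch: verify that the private node's HTTP-RPC endpoint responds.
import { ethers } from "ethers";

async function checkNode() {
  const provider = new ethers.providers.JsonRpcProvider("http://localhost:8545");
  const network = await provider.getNetwork(); // e.g., { chainId: 2022, ... }
  console.log("connected, chain id:", network.chainId);
}

checkNode();
```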
**Figure 13. Importing Ethereum account in Metamask.**
We launched Metamask and added the Network “Local Host 8545” with the Chain
ID “2022”. It is the chain ID we specified in our private Ethereum network’s genesis
block. By importing a JSON file from our private Ethereum folder, we imported a private
Ethereum account; the JSON file can be found in the keystore folder of the private network's
data directory. Figure 13 depicts how to add a private Ethereum account to Metamask.
_5.2. Running Our Own IPFS Node_
To store information on IPFS, we must run an IPFS Daemon server on our own IPFS
node. To use IPFS, we first download and install the Go language from the golang
website, then go to the IPFS command line install page and download "install go-ipfs". We
navigate to the download path, extract the files to the C drive, and then run ipfs.exe to start
the Daemon server, as shown in Figure 14.
**Figure 14. IPFS Execution and Daemon server.**
_5.3. Deploying Smart Contract_
A smart contract stores the hash of the uploaded file. We use the Truffle framework to
write smart contracts in the Solidity programming language. The Truffle Suite is a
collection of tools specifically designed for Ethereum blockchain development and
includes three pieces of software. Truffle helps compile and deploy smart contracts,
inject them into web apps, and build DApp front ends; it
is now a popular Ethereum blockchain IDE.
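For illustration, a Truffle deployment is driven by a small migration script; `FileStorage` is a hypothetical name for the contract that stores the file hashes.

```js
// migrations/2_deploy_contracts.js — a minimal Truffle migration sketch.
// "FileStorage" is a placeholder name, not necessarily the contract's real name.
const FileStorage = artifacts.require("FileStorage");

module.exports = function (deployer) {
  deployer.deploy(FileStorage); // deploys the compiled contract to the configured network
};
```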
_5.4. File Uploading and Retrieving_
After writing the smart contract and deploying and publishing it to our Ethereum
blockchain, we use Metamask to connect our DApp to the blockchain; Metamask is
required to communicate with the blockchain. The client-side application,
which also communicates with IPFS, was built with React.
Figures 15 and 16 show how we initially deployed the smart contract to Ethereum,
then launched the DApp with the command npm run start, imported an Ethereum account
into Metamask, and linked Metamask to our DApp. Figures 17 and 18 show how to submit
a file to IPFS, deposit the file’s hash in a smart contract, record the smart contract on the
Ethereum blockchain, and successfully retrieve the file using our DApp.
**Figure 15. Smart contract deploy.**
**Figure 16. Connecting Metamask to the DApp.**
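A minimal sketch of the connection step, using the `window.ethereum` object that the Metamask extension injects into the browser:

```js
// Minimal sketch: ask Metamask for account access and wrap the connection
// with Ethers.js so the DApp can sign transactions.
import { ethers } from "ethers";

async function connectWallet() {
  const accounts = await window.ethereum.request({ method: "eth_requestAccounts" });
  const provider = new ethers.providers.Web3Provider(window.ethereum);
  return { account: accounts[0], signer: provider.getSigner() };
}
```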
After logging into Metamask, we chose the file and entered its description and location in the
user interface of the DApp, then clicked the submit button and
confirmed the Metamask transaction, as shown in Figure 17. To deploy the smart contract,
upload files, and store hash values on the blockchain, we start and keep mining running.
**Figure 17. Choosing a file and confirming Metamask transaction.**
As the transaction is confirmed, the DApp listens for the event “File Upload” and
updates the DApp’s user interface automatically. Whenever a transaction has been mined,
smart contracts generate events and logs to the blockchain, which can then be processed by
the front end. Our DApp retrieves and displays all smart contract events. It is referred to as
a “smart contract event log”. The event log of the smart contract contains the file number,
file description (which includes an event and location of the file), type of the file, file size,
date and time, the uploader's Ethereum account details, and the hash of the file. By clicking
on a file's details, users can view the uploaded files through their web
browser. Figure 18 depicts a smart contract event log and various retrieved file types.
**Figure 18. Event log and file retrieve.**
_5.5. Keyword Searching_
Our DApp supports keyword searching. To conduct keyword
searches, we obtain event information from the blockchain. Smart contracts emit
events and logs to the blockchain whenever an Ethereum transaction is mined, which
the front end can then process. An event broadcasts information about a file upload;
we can listen to these events in real time, or simply query them to obtain all of the
file uploads recorded on the blockchain.
We can read smart contract events outside of the DApp’s user interface by using
Web3.js or Ethers.js. In our implementation, Ethers.js is used to read smart contract events.
We only have one event in our smart contract, so we use a filter to retrieve information
from that event, which is File Upload. A smart contract event log is shown in Figure 19.
**Figure 19. Smart contract events.**
The smart contract events are then written to a text file, allowing our DApp to conduct
keyword searches. We store only the necessary information in the text file: the file name,
event type, location, Ethereum account number, and smart contract address.
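A minimal Node.js sketch of this step, assuming the events have already been decoded into objects; the field names below are placeholders for whatever the real event log contains.

```js
// Minimal sketch: flatten decoded events into one line each and write them to
// the keyword-search text file. Field names are assumptions.
import fs from "fs";

function writeEventLog(events, path = "./event-log.txt") {
  const lines = events.map(
    (e) => `${e.fileName} | ${e.eventType} | ${e.location} | ${e.account} | ${e.contractAddress}`
  );
  fs.writeFileSync(path, lines.join("\n")); // overwrite with the latest log
}
```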
Keyword searching is essential when looking for specific information on the DApp.
Entering a letter or a keyword filters the results to show only entries that contain the
entered text. In the case of a single-letter search, the DApp displays all events that include
the letter typed into the search box. This method makes navigating the events easier and
more efficient. Keyword searching is shown in Figure 20.
**Figure 20. Keyword Searching.**
**6. Performance Evaluation**
The majority of applications we use today are centralized, which means they are
managed by a single authority. Google [26] and Facebook [27], for example, retain complete
ownership of their respective products, running their apps and storing user data on private
servers and databases. While this gives Google and Facebook control over their applications
and user experiences, it can also be discouraging to users. Users of centralized apps have
little control over their data or experience within the app. They must have faith in the
app’s developer to listen to their feedback, provide product services, and treat them and
their data with dignity. However, with other centralized applications facing backlash over
privacy and the monetization of user data, many users are wary of relying on them.
Centralized applications run programs and store critical user information on centralized servers. The entire application may fail if a single, central server is compromised.
DApps enable users to complete transactions, verify claims, and collaborate in real time
without relying on a centralized intermediary.
Our DApp operates on a peer-to-peer network, similar to a distributed ledger, with each
network member contributing to the program. Each of the roles that a central server would
normally provide, from computing power to storage, is distributed across the network. We
do not need to keep and secure a central server, and users can directly participate in the app’s
operation. Our system is robust to failure: there is no single point of failure in our DApp,
which is distributed across a network of public nodes with copies of critical information
shared among them. The application is unaffected if one or more IPFS nodes are compromised. Even if
there is a virus attack, a hardware failure, or the system is turned off, the user can still retrieve
the uploaded files and perform keyword searches.
When a user uploads data to IPFS, it is split into smaller chunks, hashed, and
assigned a unique content identifier (CID), which serves as a fingerprint. This makes it
faster and easier to store and retrieve data on the network. Because a cryptographic hash
(CID) is generated for each piece of data, each upload to the network is unique and
resistant to tampering.
The experiments we conducted demonstrate that our DApp is resistant to system
failure, robust, and transparent. They are described below.
**Scenario 1:**
In Scenario 1, the system unexpectedly shuts down, and when it is restarted, the
DApp’s event log vanishes, as illustrated in Figure 21. We can retrieve the event log outside
of the DApp using smart contract event listeners. In Figure 19, we used Ethers.js to retrieve
the event log. The data associated with the uploaded file is included in the event log. As a
result, system failure has no effect on the uploaded data.
**Figure 21. No event log listed on the DApp.**
**Scenario 2:**
The information in the keyword search text file was accidentally deleted in Scenario 2
as shown in Figure 22, and we were unable to perform the keyword search on the DApp.
As demonstrated in Scenario 1, we recreated the keyword search text file using information
retrieved from the smart contract event log and performed a keyword search as illustrated
in Figure 23. Table 3 summarizes the scenarios of performance evaluation.
**Table 3. Performance evaluation scenario summary.**

| Scenario # | Description |
| --- | --- |
| 1 | The system unexpectedly shuts down; when it restarts, the DApp's event log has vanished. We used Ethers.js to retrieve the event log. |
| 2 | The information in the keyword search text file was accidentally deleted. We recreated the keyword search text file using information retrieved from the smart contract event log and performed a keyword search. |
**Figure 22. Text file with no data.**
**Figure 23. Keyword Search.**
If a malicious actor manages to compromise the blockchain network, any changes
are visible on a public network, allowing both users and developers to respond quickly.
Our DApp operates on a public ledger, which means that anyone with internet access can
participate in the application and network. As a result, anyone can view the transaction
record and any changes made to those records. Therefore, this system provides better
transparency than centralized applications can. On a publicly distributed ledger,
no central entity can revoke transparency, limit viewership, or censor participation.
**7. Conclusions**
In this paper, we present the design and implementation of a decentralized application
that uses Ethereum blockchain and IPFS to store CCTV and black box footage securely and
efficiently. The DApp allows users to easily manage their storage. For scalability, only hashes
of the files are stored on the blockchain via smart contracts. Our proposed scheme works in
a decentralized manner. When a file is uploaded, the DApp listens for the event File Upload
and automatically updates the DApp’s user interface. All smart contract events are fetched and
displayed on our DApp. The extracted information is called a smart contract event log, and it
includes information about the file, timestamp, the uploader’s account information, and the
hash of the IPFS file returned. By clicking on the file’s description, users can gain access to it.
The selected file is then displayed in the web browser. DApp also includes a keyword search
feature to help us find any information quickly. To filter and read data from the blockchain,
we used Ethers.js' smart contract event listener and contract.queryFilter. We used the smart
contract address as well as the smart contract’s ABI. The smart contract events are then written
into a text file. The text file only contained necessary information, such as the file name, event
type, location, Ethereum account number, and smart contract. Our experiment shows that our
DApp is not affected by system failure. We can secure an application by managing the data in a
decentralized manner. Because our DApp runs on a public ledger, anyone with internet access
can participate in the application and network. As a result, anyone can view the transaction
record and any modifications made to it. Unlike centrally managed applications, this system
therefore provides greater transparency. We anticipate that our DApp can be used in a variety of fields, such as for
keeping records of student research securely at universities, the medical information of patients
at hospitals, and customer information at banks due to its ability to store various file types.
In our current system, the access control function is not included in the smart contract
yet, and thus, the hash values of one’s files can be exposed to anyone who knows his or her
smart contract address. We will investigate the access control scheme for the smart contract
to resolve this issue in our future work. In addition, we will also verify the source code of
our smart contract using well-known formal methods.
Recently, Ethereum has been upgraded by changing its consensus mechanism from
proof-of-work (PoW) to proof-of-stake (PoS), and this new version is also known as
Ethereum 2.0. However, this new consensus mechanism has not been verified intensively
compared to the PoW mechanism, and thus, we used an old version of Ethereum and
its corresponding Ethereum Virtual Machine (EVM) environment in this paper. We will
implement and investigate our proposed system on the new version of Ethereum in our
future work.
**Author Contributions: Conceptualization, N.S. and S.Y.N.; data curation, N.S.; formal analysis, N.S.;**
methodology, N.S.; project administration, N.S. and S.Y.N.; resources, N.S.; software, N.S.; supervision,
S.Y.N.; validation, N.S.; visualization, N.S.; writing—original draft, N.S.; writing—editing and review, N.S.
and S.Y.N. All authors have read and agreed to the published version of the manuscript.
**Funding: This research was supported in part by the National Research Foundation of Korea (NRF),**
with a grant funded by the Korean government (MSIT) (2020R1A2C1010366). This research was
supported in part by Basic Science Research Program through the National Research Foundation of
Korea (NRF) funded by the Ministry of Education (2021R1A6A1A03039493).
**Data Availability Statement: No new data were created or analyzed in this study. Data sharing is**
not applicable to this article.
**Conflicts of Interest: The authors declare no conflict of interest.**
**References**
1. Mateen, A.; Khalid, A.; Nam, S.Y. Management of Traffic Accident Videos using IPFS and Blockchain Technology. KICS Summer Conf. 2022, 1, 1366–1368.
2. Singh, A.; Chatterjee, K. Cloud security issues and challenges: A survey. J. Netw. Comput. Appl. 2017, 79, 88–115.
3. Shin, Y.; Koo, D.; Hur, J. A survey of secure data deduplication schemes for cloud storage systems. ACM Comput. Surv. 2017, 49, 1–38.
4. Yinghui, Z.; Dong, Z.; Deng, R.H. Security and privacy in smart health: Efficient policy-hiding attribute-based access control. IEEE Internet Things J. 2018, 5, 2130–2145.
5. Zhang, Y.; Chen, X.; Li, J.; Wong, D.S.; Li, H.; You, I. Ensuring attribute privacy protection and fast decryption for outsourced data security in mobile cloud computing. Inf. Sci. 2017, 379, 42–61.
6. Dropbox hack. Available online: https://www.theguardian.com/technology/2016/aug/31/dropbox-hack-passwords-68m-data-breach (accessed on 17 February 2023).
7. Arrington, M. Gmail Disaster: Reports of Mass Email Deletions. December 2006. Available online: https://techcrunch.com/2006/12/28/gmail-disaster-reportsof-mass-email-deletions/ (accessed on 17 February 2023).
8. Amazon. Amazon S3 Availability Event: 20 July 2008. Available online: https://simonwillison.net/2008/Jul/27/aws/ (accessed on 17 February 2023).
9. Krigsman, M. Apple's MobileMe Experiences Post-Launch Pain. July 2008. Available online: https://www.zdnet.com/article/apples-mobileme-experiences-post-launch-pain/ (accessed on 17 February 2023).
10. Benet, J. IPFS—Content Addressed, Versioned, P2P File System. 2014. Available online: https://arxiv.org/abs/1407.3561 (accessed on 17 February 2023).
11. Geth. Available online: https://geth.ethereum.org/ (accessed on 17 February 2023).
12. Metamask. Available online: https://metamask.io/ (accessed on 17 February 2023).
13. Hao, J.; Sun, Y.; Luo, H. A Safe and Efficient Storage Scheme Based on BlockChain and IPFS for Agricultural Products Tracking. J. Comput. 2018, 29, 158–167.
14. Rajalakshmi, A.; Lakshmy, K.V.; Sindhu, M.; Amritha, P. A blockchain and IPFS based framework for secure Research record keeping. Int. J. Pure Appl. Math. 2018, 119, 1437–1442.
15. Vimal, S.; Srivatsa, S.K. A new cluster P2P file sharing system based on IPFS and blockchain technology. J. Ambient Intell. Hum. Comput. 2019, 1–8.
16. Chen, Y.; Li, H.; Li, K.; Zhang, J. An improved P2P file system scheme based on IPFS and Blockchain. In Proceedings of the 2017 IEEE International Conference on Big Data (Big Data), Boston, MA, USA, 11–14 December 2017; pp. 2652–2657.
17. Wang, R.; Tsai, W.-T.; He, J.; Liu, C.; Li, Q.; Deng, E. A Video Surveillance System Based on Permissioned Blockchains and Edge Computing. In Proceedings of the 2019 IEEE International Conference on Big Data and Smart Computing (BigComp), Kyoto, Japan, 27 February–2 March 2019; pp. 1–6.
18. Sun, J.; Yao, X.; Wang, S.; Wu, Y. Blockchain-Based Secure Storage and Access Scheme For Electronic Medical Records in IPFS. IEEE Access 2020, 8, 59389–59401.
19. Ethereum. Available online: https://ethereum.org/ (accessed on 17 February 2023).
20. Web3.js. Available online: https://web3js.readthedocs.io/en/v1.8.0/ (accessed on 17 February 2023).
21. Ethers.js. Available online: https://docs.ethers.io/v5/ (accessed on 17 February 2023).
22. Cai, W.; Wang, Z.; Ernst, J.B.; Hong, Z.; Feng, C.; Leung, V.C.M. Decentralized Applications: The Blockchain-Empowered Software System. IEEE Access 2018, 6, 53019–53033.
23. React. Available online: https://reactjs.org/ (accessed on 17 February 2023).
24. Krichen, M.; Lahami, M.; Al-Haija, Q.A. Formal Methods for the Verification of Smart Contracts: A Review. In Proceedings of the 15th International Conference on Security of Information and Networks (SIN), Sousse, Tunisia, 11–13 November 2022; pp. 1–8.
25. Infura. Available online: https://infura.io/ (accessed on 17 February 2023).
26. Google. Available online: https://www.google.com/ (accessed on 17 February 2023).
27. Facebook. Available online: https://www.facebook.com/ (accessed on 17 February 2023).
**Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual**
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.
Edited by: Alessandra Tanda, University of Pavia, Italy. Reviewed by: Jürgen Hakala, Leonteq Securities AG, Switzerland; Marika Vezzoli, University of Brescia, Italy. *Correspondence: Cinzia Baldan, cinzia.baldan@unipd.it. This article was submitted to Artificial Intelligence in Finance, a section of the journal Frontiers in Artificial Intelligence. Received: 19 November 2019; Accepted: 20 March 2020; Published: 30 April 2020. Citation: Baldan C and Zen F (2020) The Bitcoin as a Virtual Commodity: Empirical Evidence and Implications. Front. Artif. Intell. 3:21. doi: 10.3389/frai.2020.00021
# The Bitcoin as a Virtual Commodity: Empirical Evidence and Implications
[Cinzia Baldan* and Francesco Zen](http://loop.frontiersin.org/people/770598/overview)
Department of Economics and Management, University of Padova, Padua, Italy
**Abstract:** The present work investigates the impact on financial intermediation of distributed ledger technology (DLT), which is usually associated with the blockchain technology and is at the base of the cryptocurrencies' development. "Bitcoin" is the expression of its main application since it was the first new currency that gained popularity some years after its release date and it is still the major cryptocurrency in the market. For this reason, the present analysis is focused on studying its price determination, which seems to be still almost unpredictable. We carry out an empirical analysis based on a cost of production model, trying to detect whether the Bitcoin price could be justified by and connected to the profits and costs associated with the mining effort. We construct a sample model, composed of the hardware devices employed in the mining process. After collecting the technical information required and computing a cost and a profit function for each period, an implied price for the Bitcoin value is derived. The interconnection between this price and the historical one is analyzed, adopting a Vector Autoregression (VAR) model. Our main results put on evidence that there aren't ultimate drivers for Bitcoin price; probably many factors should be expressed and studied at the same time, taking into account their variability and different relevance over time. It seems that the historical price fluctuated around the model (or implied) price until 2017, when the Bitcoin price significantly increased. During the last months of 2018, the prices seem to converge again, following a common path. In detail, we focus on the time window in which Bitcoin experienced its higher price volatility; the results suggest that it is disconnected from the one predicted by the model. These findings may depend on the particular features of the new cryptocurrencies, which have not been completely understood yet. In our opinion, there is not enough knowledge on cryptocurrencies to assert that Bitcoin price is (or is not) based on the profit and cost derived by the mining process, but these intrinsic characteristics must be considered, including other possible Bitcoin price drivers.
Keywords: Bitcoin, FinTech, Vector Autoregression model, distributed ledger technology, cryptocurrencies price determination
JEL Codes: G12, C52, D40
## INTRODUCTION
A strict definition of FinTech seems to be missing since the term embraces different
companies and technologies, but a broader one could assert that FinTech includes those
companies that are developing new business models, applications, products, or processes
based on digital technologies applied in finance.
Financial Stability Board (FSB) (2017) defines FinTech as
“technology-enabled innovation in financial services that could
result in new business models, applications, processes, or
products with an associated material effect on the provision of
financial services.”
OECD (2018) analyzes instead various definitions from
different sources, concluding that none of them is complete
since “FinTech involves not only the application of new digital
technologies to financial services but also the development of
business models and products which rely on these technologies
and more generally on digital platform and processes.”
The services offered by these companies are indeed various:
some are providing financial intermediation services (FinTech
companies), while others offer ancillary services relating to
the financial intermediation activity (TechFin companies).
Technology is, for FinTech firms, an instrument, a productive
factor, an input, while for TechFin firms, it is the final
product, the output. The latter are already familiar with different
technologies and innovation; hence, they could easily diversify
their production by adding some digital and financial services
to the products they already offer. They enjoy a situation of
privileged competition because they are already known in the
market due to their previous non-financial services and thus
could take advantage of their customers’ information to enlarge
their supply of financial services. TechFin firms are the main
competitors for FinTech companies (Schena et al., 2018). Indeed
FinTech, or financial technology, is changing the way in which
financial operations are carried out by introducing new ways to
save, borrow, and invest, without dealing with traditional banks.
FinTech platforms, firms, and startups rose after the global
financial crisis in 2008 as a consequence of the loss of trust in
the traditional financial sector. In addition, digital natives (or
millennials, born between 1980 and 2000) seemed interested
in this new approach proposed by FinTech entrepreneurs.
Millennials were old enough to be potential customers and felt
much more drawn to these new, fresh services offered
through mobile platforms and apps than to bankers. The
strength of these new technologies lies in their transparent and
easy-to-use interfaces, which were seen as an answer to the trust crisis
toward banks (Chishti and Barberis, 2016).
After the first Bitcoin (Nakamoto, 2008) was sent in
January 2009, hundreds of new cryptocurrencies started being
traded in the market; their common element is reliance on a
public ledger (or blockchain technology; Hileman and Rauchs,
2017). In fact, in addition to Bitcoin, other cryptocurrencies
gained popularity, such as: Ethereum (ETH), Dash, Monero
(XMR), Ripple (XRP), and Litecoin (LTC). Ethereum (ETH) was
officially launched in 2015 and is a decentralized computing
platform characterized by its own programming language. Dash
was introduced in 2014, but its market value rose in 2016.
The peculiarity of this digital coin is that, in contrast with
other cryptocurrencies, block rewards are equally shared among
community participants and a revenue percentage (equal to
10%) is stored in the “treasury” to fund further improvements,
marketing, and network operations. Monero (XMR), launched
in 2014, is a system that guarantees anonymous digital cash by
hiding the features of the transacted coins. Its market value rose
in 2016. Ripple (XRP) has the unique feature of being based on a
“global consensus ledger” rather than on blockchain technology.
Its protocol is adopted by large institutions like banks and money
service businesses. Litecoin (LTC) appeared for the first time in
2011 and is characterized by a large supply of 84 million LTC.
Its functioning is based on that of Bitcoin, but some parameters
were altered (the mining algorithm is based on Scrypt rather than
Bitcoin's SHA-256).
Despite the creation of these new cryptocurrencies, Bitcoin
remains the main coin in terms of turnover. The main advantage
of this new digital currency seems to be the low cost of transaction
(even if this is actually a myth, since BTC transactions topped
out at 50 USD per transaction in 2017–2018, while private banks
charge less these days) and, contrary to what many people think,
anonymity was not one of its main features when this network
was designed. An individual could attempt to make his identity
less obvious, but the evidence available to date does not support
the claim that it can be hidden easily; it may well be
impossible. For this purpose, physical fiat currencies remain the
best option.
Hayes (2015, 2017, 2019) analyzes the Bitcoin price formation.
In particular, he treats the cryptocurrency as a virtual
commodity, starting from the different ways by which an
individual could obtain it. A person could buy Bitcoins
directly in an online marketplace by giving in exchange fiat
currencies or other types of cryptocurrencies. Alternatively,
he can accept them as payment and finally an individual
can decide to “mine” Bitcoins, which consists in producing
new units, by using computer hardware designed for this
purpose. This latter case involves electricity consumption,
and a rational agent would not engage in the mining
process if the marginal cost of this operation exceeded its
marginal profit. The relation between these values determines
a price based on the cost of production, the theoretical
value underlying the market price, around which the market price is
supposed to gravitate. Abbatemarco et al. (2018) extend Hayes'
studies, introducing further elements missing from the previous
formulation. The final result confirms Hayes' findings: the
marginal cost model provides a good proxy for Bitcoin market
price, but the development of a speculative bubble is not
ruled out.
We study the evolution of Bitcoin price by considering a
cost of production model introduced by Hayes (2015, 2017,
2019). Adding to his analysis some adjustment proposed by
Abbatemarco et al. (2018), we recover a series for the hypothetical
underlying price; then, we study the relationship between this
price and the historical one using a Vector Autoregression
(VAR) model.
The remainder of the paper proceeds as follows: in
section Literature Review, we provide a literature overview,
presenting those papers that investigate other drivers of Bitcoin
price formation and develop alternative approaches. In section
Materials and Methods, we set out the research question,
describing the methodology behind the implemented cost of
production model, the sources accessed to collect data, the
hardware sample composition, and the formula derivations. In
section Main Outcomes, we analyze and comment on the main
findings of the analysis; section Conclusions concludes the work
with our comments on main findings and their implications.
## LITERATURE REVIEW
Researchers detect a number of economic determinants for
Bitcoin price; it seems that given the new and particular
features of this cryptocurrency, price drivers will change over
time. For this reason, several authors analyze various potential
factors, which encompass technical aspects (such as the hashrate
and output volume), user-based growth, Internet components
(as Google Trends, Wikipedia queries, and Tweets), market
supply and demand, financial indexes (like S&P500, Dow Jones,
FTSE100, Nikkei225), gold and oil prices, monetary velocity, and
exchange rate of Bitcoin expressed in US dollar, euro, and yen.
Among others, Kristoufek (2015) focuses on different sources
of price movements by examining their interconnection during
time. He considers different categories: economic drivers, as
potential fundamental influences, followed by transaction and
technical drivers, as influences on the interest in Bitcoin.
The results show how Bitcoin’s fundamental factors, such as
usage, money supply and price level, drive its price over the
long term. With regard to the technical drivers, a rising price
encourages individuals to become miners but this effect eclipses
over time, since always more specialized mining hardware have
increased the difficulty. Evidences show that price is even driven
by investors’ interest. According to previous studies (Kristoufek,
2013; Garcia et al., 2014), the relationship appears as most
evident in the long run, but during episodes of explosive
prices, this interest drives prices further up, while during rapid
declines, it pushes them further down. He then concludes that
Bitcoin is a unique asset with properties of both a speculative financial asset and a standard one; because of its dynamic
nature and volatility, its price
drivers can be expected to change over time. The interest element seems to
be particularly relevant when analyzing the behavior of Bitcoin
price, leading many researchers to study its interconnection with
Internet components, such as Google Trends, Wikipedia queries,
and Tweets.
Matta et al. (2015) also investigate whether information
searches and social media activity could predict the Bitcoin price
by comparing its historical price to Google Trends data and the volume
of tweets. They used a dataset covering only 60 days but, in
contrast to other papers on this topic, they implement
an automated sentiment analysis technique that allows one to
automatically identify users’ opinions, evaluations, sentiments,
and attitudes on a particular topic. They use a tool called
“SentiStrength,” which is based on a dictionary only made by
sentiment words, where each of them is linked to a weight
representing a sentiment strength. Its aim is to evaluate the
strength of sentiments in short messages that are analyzed
separately, and the result is summed up in a single value: a
positive, negative, or neutral sentiment. The study reveals a
significant relationship between Bitcoin price and volumes of
both tweets and Google queries.
Garcia et al. (2014) study the evolution of Bitcoin price based
on the interplay between different elements: historical price,
volume of word-of-mouth communication in on-line social
media (information sharing, measured by tweets, and posts on
Facebook), volume of information search (Google searches and
Wikipedia queries), and user base growth. The results identify an
interdependence between Bitcoin price and two signals that could
form potential price bubbles: the first concerns the word-of-mouth effect, while the other is based on the number of adopters.
The first feedback loop is a reinforcement cycle: Bitcoin interest
increases, leading to a higher search volume and social media
activity. This new popularity encourages users to purchase the
cryptocurrency driving the price further up. Again, this effect
would raise the search volume. The second loop is the user
adoption cycle: after acquiring information, new users join the
network, growing the user base. Demand rises but since supply
cannot adjust immediately but changes linearly with time, Bitcoin
price would increase.
Ciaian et al. (2016) adopt a different approach to identify
the factors behind the Bitcoin price formation by studying both
the digital and traditional ones. The authors point out the
relevance of analyzing these factors simultaneously; otherwise,
the econometric outputs could be biased. To do so, they specify
three categories of determinants: market forces of supply and
demand; attractiveness indicators (views on Wikipedia and
number of new members and posts on a dedicated blog), and
global macro-financial development. The results show that the
relevant impact on price is driven by the first category and it tends
to increase over time. Regarding the second category, they assert that
the short-run price changes in the first period after
Bitcoin's introduction are attributable to investor interest, which
is measured by online information searches. This impact eases off
over time, having no impact in the long run, which may be due
to an increased trust among users who become more willing to
adopt the digital currency. On the other hand, the results suggest
that investor speculations can also affect Bitcoin price, leading to
a higher volatility that may cause price bubbles. To conclude, the
study does not detect any correspondences between Bitcoin price
and macroeconomics and financial factors.
Kjærland et al. (2018) try to identify the factors that have an
impact on Bitcoin price formation. They argue that the hashrate,
CBOE volatility index (VIX), oil, gold, and Bitcoin transaction
volume do not affect Bitcoin price. The study shows that price
depends on the returns on the S&P500, past price performance,
optimism, and Google searches.
Bouoiyour and Selmi (2015) examine the links between
Bitcoin price and its potential drivers by considering investors’
attractiveness (measured by Google search queries); exchange–
trade ratio; monetary velocity; estimated output volume;
hashrate; gold price; and Shanghai market index. The latter
value is due to the fact that the Shanghai market is seen as the
biggest player in Bitcoin economy, which could also drive its
volatility. The evaluation period is the one from 5th December
2010 to 14th July 2014 and it is investigated through the adoption
of an ARDL Bounds Testing method and a VEC Granger
causality test. The results highlight the speculative nature of
this cryptocurrency stating that there are poor chances that it
becomes internationally recognized.
Giudici and Abu-Hashish (2019) propose a model to explain
the dynamics of bitcoin prices, based on a correlation network
VAR process that models the interconnections between different
crypto and classic asset price. In particular, they try to assess
whether bitcoin prices in different exchange markets are
correlated with each other, thus exhibiting “endogenous” price
variations. They select eight exchange markets, representative
of different geographic locations, which represent about 60% of
the total daily volume trades. For each exchange market, they
collect daily data for the time period May 18th, 2016 to April
30th, 2018. The authors also try to understand whether bitcoin
price variations can also be explained by exogenous classical
market prices. Hence, they use daily data (market closing price)
on some of the most important asset prices: gold, oil, and SP500,
as well as on the exchange rates USD/Yuan and USD/Eur. Their
main empirical findings show that bitcoin prices from different
exchanges are highly interrelated, as in an efficiently integrated
market, with prices from larger and/or more connected trading
exchanges driving the others. The results also seem to confirm
that bitcoin prices are typically unrelated with classical market
prices, thus bringing further support to the “diversification
benefit” property of crypto assets.
Katsiampa (2017) uses an Autoregressive model for the
conditional mean and a first-order GARCH-type model for
the conditional variance in order to analyze the Bitcoin price
volatility. The study collects daily closing prices for the Bitcoin
Coindesk Index from 18th July 2010 to 1st October 2016 (2,267
observations); the returns are then calculated by taking the
natural logarithm of the ratio of two consecutive prices. The
main findings show that the optimal model in terms
of goodness of fit to the data is the AR-CGARCH, a result
that suggests the importance of having both a short-run and a
long-run component of conditional variance.
Chevallier et al. (2019) investigate the Bitcoin price
fluctuations by combining Markov-switching models with
Lévy jump-diffusion to match the empirical characteristics of
financial and commodity markets. They introduce a Markov chain
with a Lévy jump in order to disentangle normal economic regimes
(e.g., with a Gaussian distribution) from agitated ones (e.g., crisis
periods with stochastic jumps). By combining these two features,
they obtain a model in which the various crashes and rallies over
the business cycle are captured by jumps, whereas the trend is
modeled under a Gaussian framework. The regime-switching
Lévy model allows identifying the presence of discontinuities for
each market regime, and this feature constitutes the objective of
the proposed model.
## MATERIALS AND METHODS
We study the evolution of Bitcoin price by considering a cost
of production model introduced by Hayes (2015, 2017). Adding
to his analysis some adjustment proposed by Abbatemarco et al.
(2018), we recover a series for the hypothetical underlying
price, and we study the relationship between this price and
the historical one using a VAR model. In detail, Hayes back-tests the pricing model against the historical market price to
corroborate the validity of his theory. The findings show how
Bitcoin price is significantly described by the cryptocurrency’s
marginal cost of production and suggest that it does not depend
on other exogenous factors. The conclusion is that during periods
in which price bubbles happen, there will be a convergence
between the market price and the model price to shrink the
discrepancy. Abbatemarco et al. (2018) build on Hayes' studies,
introducing further elements missing from the previous formulation.
The final result confirms Hayes’ findings: the marginal cost
model provides a good proxy for Bitcoin market price, but the
development of a speculative bubble is not ruled out. Since
these studies were published before the Bitcoin price rise reached
its peak on 19th December 2017 (when the value was $19,270), the
aim of our work is to extend the analysis considering a larger
time frame and to verify whether, even in this case, the results are
unchanged. In particular, we consider the period from 9th April
2014 to 31st December 2018. We start with some unit root
tests to verify if the series are stationary in level or need to
be integrated and then we identify the proper number of lags
to be included in the model. We then check for the presence
of a cointegrating relationship to verify whether we should
adopt a Vector Error Correction Model (VECM) or a VAR
model; the results suggest that a VAR model is the best suited
for our data [1] . We thus collect the final results of the analysis
and we improve them by correcting the heteroscedasticity in
the regressions.
The marginal cost function, which estimates the electrical
costs of the devices used in the mining process, is presented as
Equation (1):

$$\text{COST}_{\$/day} = H_{hash/s} \cdot \text{Eff}_{J/hash} \cdot CE_{\$/kWh} \cdot 24\,h/day \qquad (1)$$

Where:
$H_{hash/s}$ is the hashrate (measured in hash/second);
$\text{Eff}_{J/hash}$ is the energy efficiency of the devices involved in the process, measured in Joule/hash;
$CE_{\$/kWh}$ is the electricity cost expressed in US dollars per kilowatt-hour;
24 is the number of hours in a day.
A marginal profit function, which estimates the reward of the mining activity, is instead depicted as Equation (2):

$$\text{PROFIT}_{BTC/day} = BR_{BTC} \cdot \frac{3600\,s/h \cdot 24\,h/day}{BT_s} \qquad (2)$$
Where:
1 According to Abbatemarco et al. (2018), the nature of the variables considered
suggests that they are probably mutually interdependent. Lütkepohl and Krätzig
(2004) state that the analysis of interdependencies between time series is subject
to an endogeneity problem; part of the literature proposes specifying a Vector
Autoregressive (VAR) model that analyzes the causality between the two
series estimated by the model. Engle and Granger (1987), instead, demonstrated
that the estimate of such a model in the presence of non-stationary variables
(i.e., with mean and variance non-constant over time) can lead to erroneous
model specification and hence to unconditional regressions (spurious regressions).
Scholars’ intuition suggests that the price trend of a cryptocurrency and that of its
estimated equilibrium prices are non-stationary time series, as there is a constant
increase in their values over time.
$BR_{BTC}$ is the block reward, i.e., the new Bitcoins distributed to miners who successfully solve a block (hence it is measured in BTC); it is given by a geometric progression (Equation 3):

$$BR_{BTC} = BR_1 \cdot \left(\frac{1}{2}\right)^{n-1} \qquad (3)$$

where $n$ increases by 1 every 210,000 blocks. At the beginning, it was $BR_1 = 50$, but over time it halved twice: on 29th November 2012 and on 10th July 2016.
3,600 is the number of seconds in an hour;
24 is again the number of hours in a day;
$BT_s$ is the block time, expressed as the seconds needed to generate a block (around 600 s = 10 min), and it is computed as Equation (4):

$$BT_s = \frac{D \cdot 2^{32}}{H} \qquad (4)$$

Where $H$ is the hashrate and $D$ is the difficulty. The latter variable specifies how hard it is to generate a new block in terms of computational power given a specific hashrate. This is the value that changes frequently to ensure a $BT_s$ close to 10 min [2] .
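As a minimal illustration (ours, not the authors' code), Equations (3) and (4) can be computed as follows; the function names are our own:

```python
def block_reward(n, br_1=50.0):
    """Equation (3): BR_BTC = BR_1 * (1/2)**(n-1); n steps up every 210,000 blocks."""
    return br_1 * 0.5 ** (n - 1)

def block_time_seconds(difficulty, hashrate):
    """Equation (4): BT_s = D * 2**32 / H, the expected seconds per block."""
    return difficulty * 2.0**32 / hashrate

# Example: after the second halving (n = 3) the reward is 12.5 BTC
assert block_reward(3) == 12.5
```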
In addition to the variables already considered, we introduce
some adjustments proposed by Abbatemarco et al. (2018), who
thought there were two elements missing in Hayes’ formulations.
They add, on the cost side, the one required to maintain and
update miners’ hardware (MAN, expressed in US dollar), and
on the profit side, the fees (FEES) received by miners who place
transactions in a block [3] .
Maintenance costs are computed as the ratio between the weighted devices' price and their weighted lifespan (Equation 5), while fees, expressed in BTC, are measured as the ratio between the daily total transaction fees and the number of daily transactions [4] (Equation 6).

$$MAN_{\$} = \frac{\text{Weighted Devices' Price}_{\$}}{\text{Weighted Lifespan}} \qquad (5)$$

$$FEES_{BTC} = \frac{\text{Total Transaction Fees}_{BTC}}{\text{Number of Daily Transactions}} \qquad (6)$$
The new equations become:

$$\text{COST}_{\$/day} = H_{hash/s} \cdot \text{Eff}_{J/hash} \cdot CE_{\$/kWh} \cdot 24\,h/day + MAN_{\$} \qquad (7)$$

$$\text{PROFIT}_{BTC/day} = BR_{BTC} \cdot \frac{3600\,s/h \cdot 24\,h/day}{BT_s} + FEES_{BTC} \qquad (8)$$

Moreover, due to the equality 1 joule = 1 watt $\cdot$ second, Equation (7) can be expressed as follows:

$$\text{COST}_{\$/day} = H_{hash/s} \cdot \text{Eff}_{W \cdot s/hash} \cdot CE_{\$/kWh} \cdot 24\,h/day + MAN_{\$} \qquad (9)$$
2 Results are shown in Table A.1 ( **Supplementary Material** ). In order to simplify
the presentation, we display only the values for the last day of each month.
3 Bitcoin could be obtained through both the mining process and the registration
of transactions but, since Bitcoin supply is limited to 21 million, once it is reached,
fees become the only remuneration source in the future.
4 Fees computation results are displayed in Table A.1 ( **Supplementary Material** ).
TABLE 1 | Sources.

| Variable | Description | Source |
|---|---|---|
| P_hist,$ | Historical price in US dollar | https://Bitcoinvisuals.com |
| H_hash/s | Hashrate | https://Bitcoinvisuals.com |
| BR_BTC | Block reward | https://Bitcoinvisuals.com |
| D | Difficulty | https://Bitcoinvisuals.com |
| BT_s | Block time | Computed using D and H_hash/s |
| FEES_BTC | Transaction fees | https://charts.Bitcoin.com/bch/ |
| CE_$/kWh | Cost of energy | Computed using data from en.Bitcoin.it/wiki/Mining_hardware_comparison and https://archive.org/web/ |
| MAN_$ | Hardware maintaining cost | Computed using data from en.Bitcoin.it/wiki/Mining_hardware_comparison and https://archive.org/web/ |
| EFF_J/hash | Hardware energy efficiency | Computed using data from en.Bitcoin.it/wiki/Mining_hardware_comparison and https://archive.org/web/ |

Source: Authors' elaboration.
By converting watts into kilowatts, Equation (9) can be written as:

$$\text{COST}_{\$/day} = H_{hash/s} \cdot \frac{\text{Eff}_{W \cdot s/hash}}{1000} \cdot CE_{\$/kWh} \cdot 24\,h/day + MAN_{\$} \qquad (10)$$

$$\text{COST}_{\$/day} = H_{hash/s} \cdot \text{Eff}_{kW \cdot s/hash} \cdot CE_{\$/kWh} \cdot 24\,h/day + MAN_{\$} \qquad (11)$$

According to competitive market economic theory, the ratio between the cost and profit functions must lead to the price under the equilibrium condition (Equation 12):

$$P_{\$/BTC} = \frac{\text{COST}_{\$/day}}{\text{PROFIT}_{BTC/day}} \qquad (12)$$
A historical price below the one predicted by the model would
force a miner out of the market, since he would be operating at a loss;
at the same time, the removal of his devices from the network
increases the others' marginal profits (competition decreases), and
in the end the system would return to equilibrium. On the
other hand, a historical price higher than the one predicted by
the model attracts more miners, thus increasing the number of
devices operating in the network and decreasing the others' marginal
profits (competition increases). Again, the system would return
to balance (Hayes, 2015).
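A minimal sketch of the full pricing model (Equations 8, 11, and 12) follows; it is our own illustration, with hypothetical argument names, not the authors' code:

```python
def daily_cost_usd(hashrate, eff_j_per_hash, ce_usd_per_kwh, man_usd):
    """Equation (11): hashrate (hash/s) * efficiency (J/hash) = watts;
    /1000 gives kW, * price ($/kWh) * 24 h gives $/day, plus maintenance."""
    kilowatts = hashrate * eff_j_per_hash / 1000.0
    return kilowatts * ce_usd_per_kwh * 24.0 + man_usd

def daily_profit_btc(block_reward_btc, block_time_s, fees_btc):
    """Equation (8): blocks mined per day times the reward, plus daily fees."""
    return block_reward_btc * (3600.0 * 24.0) / block_time_s + fees_btc

def implied_price_usd_per_btc(cost_usd, profit_btc):
    """Equation (12): the equilibrium (model) price."""
    return cost_usd / profit_btc
```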
We must remark that the assumption of an energy price per
hemisphere is not very realistic. In fact, for large consumers,
energy price is contractually set differently for peak times and
less busy times. There is a lot of variation in the energy price of
mines in different countries and circumstances (see, for example,
Iceland with its cheap geothermal energy; Soltani et al., 2019).
Taking more variation in energy prices into account would probably
yield a wider range of BTC prices (de Vries, 2016); due to the
difficulty of collecting comparable data, we adopted a simplified
proxy for the cost of energy.
**Table 1** presents the sources used to collect and compute the
required information.
We start the analysis by constructing a hardware sample that
evolves over the chosen time window (2010–2018), which is
divided into semesters, each associated with the introduction of a
specific device ( **Table 2** ).
Since the first Bitcoin was traded, there has been an
evolution of the devices used by miners. The first ones
adopted were GPU (Graphical Processing Unit) and later
FPGA (Field-Programmable Gate Array), but these days, only
ASIC (Application-Specific Integrated Circuit) is suitable for
mining purposes.
For each device model, we collect the efficiency, expressed in
Mhash/J, and the dollar price at the release day.
Technical data were collected from the Wikipedia pages
[https://en.Bitcoin.it/wiki/Mining__hardware__comparison](https://en.Bitcoin.it/wiki/Mining__hardware__comparison)
and [https://en.Bitcoin.it/wiki/Non-specialized__hardware__](https://en.Bitcoin.it/wiki/Non-specialized__hardware__comparison)
[comparison by using in addition the online archive https://](https://en.Bitcoin.it/wiki/Non-specialized__hardware__comparison)
[archive.org/web/,](https://archive.org/web/) which allows the recovery of different
webpages at the date in which they were modified, enabling the
comparison before and after reviews [5] .
Since only ASIC devices were created specifically for mining
purposes, there is homogeneity among FPGA and especially among
GPU hardware. Due to this fact, and considering the difficulty of
recovering the release prices, we make some simplifying assumptions
about them based on the information available online. This means
that, given the same computational power, we assume price
homogeneity among devices when prices were not available for
specific models [6] .
Given the hardware sample, we construct a weights
distribution matrix (Table A.3 in **Supplementary Material** )
that represents the evolution of the devices used during each
semester of the time window selected, which are replaced
following a substitution rate that increases over time. In fact,
until 2012, before FPGA took root, it is equal to 0.05; until 2016,
we set it equal to 0.1; and in the last 2 years of the analysis, it is
equal to 0.15 [7] .
All computations are based on this matrix; indeed,
we multiplied it by a specific column of the hardware
sample table to obtain the biannual Efficiency (Table A.4 in
**Supplementary Material** ) (J/Hash), Weighted Devices’ Prices
($) (Table A.5 in **Supplementary Material** ), and Weighted
Lifespans (Table A.6 in **Supplementary Material** ). Regarding
this latter matrix, we made further assumptions on the
device lifespans by implementing Abbatemarco et al. (2018)
assumptions. Hence, we set a lifespan equal to 2,880 days for
5 When possible, we double check Wikipedia prices with those on the websites of
the companies producing mining hardware, and if they are not identical, we choose
the latter.
6 In detail, we approximate the prices of ATI FirePro M5800, Sapphire Radeon
5750 Vapor-X, GTX460, FireProV5800, Avnet Spartan-6 LX150T, and AMD
Radeon 7900.
7 Although ASIC devices were first released in 2013, they became the main
devices used in the mining process only in 2015–2016. In the last 2 years of the
analysis, we increase the substitution rate up to 0.15 because competition among
miners was driven up as more sophisticated hardware was developed with
increasing frequency.
FIGURE 1 | Historical market price vs. implied model price (July
2010–December 2018). Source: Authors’ elaboration.
GPU, 1,010 days for FPGA, and 540 days for ASIC, but after
2017, due to a supposed market growth phase, we halved these
numbers ( **Table 2** ).
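The biannual weighted averages can be illustrated with a small sketch (ours, with made-up weights; the real weights are in Table A.3 of the Supplementary Material):

```python
import numpy as np

# Hypothetical weights for one semester over three devices (a row of Table A.3)
weights_2s2010 = np.array([0.4, 0.3, 0.3])

# Device prices (USD) and lifespans (days) for those devices, from Table 2
prices = np.array([175.0, 160.0, 200.0])
lifespans = np.array([2880.0, 2880.0, 2880.0])

weighted_price = weights_2s2010 @ prices        # weighted devices' price ($)
weighted_lifespan = weights_2s2010 @ lifespans  # weighted lifespan (days)
man_usd = weighted_price / weighted_lifespan    # Equation (5)
```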
To evaluate the cost of energy, we follow the assumptions
suggested by the cited researchers and divide the world into two
parts relative to Europe, East and West, with fixed electricity
prices equal to 0.04 and 0.175 $/kWh, respectively. The weights
of the mining pool are set in 2010 equal to 0.7 for the West and
0.3 for the East, and they change progressively until reaching, in
2018, 0.2 for the West and 0.8 for the East. We obtained a biannual
cost of energy evolution, measured in $/kWh, by multiplying the
biannual weights by the electricity costs and summing the values
for the West and the East (Table A.7 in **Supplementary Material** ).
At this point, to smooth the values across the time window, we take the differences between $MAN_{\$}$, $\text{EFF}_{J/hash}$, and $CE_{\$/kWh}$ at the biannual dates $t$ and $t-1$ and divide these values by the number of days in each semester, obtaining DeltaMAN, DeltaEFF, and DeltaCE (Table A.8 in **Supplementary Material** ). Starting the first day of the analysis with the first value of the biannual matrixes, we compute the final variables as follows:

$$MAN_{\$}(t) = MAN_{\$}(t-1) + \text{DeltaMAN} \qquad (13)$$

$$\text{EFF}_{J/hash}(t) = \text{EFF}_{J/hash}(t-1) + \text{DeltaEFF} \qquad (14)$$

$$CE_{\$/kWh}(t) = CE_{\$/kWh}(t-1) + \text{DeltaCE} \qquad (15)$$
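Equations (13)-(15) amount to a piecewise-linear interpolation of the biannual values into a daily path. A minimal sketch (ours, not the authors' code):

```python
import numpy as np

def daily_path(biannual_values, days_per_semester):
    """Walk from each semester value to the next in equal daily steps (Eqs. 13-15)."""
    path = [biannual_values[0]]
    for prev, curr, n_days in zip(biannual_values, biannual_values[1:], days_per_semester):
        delta = (curr - prev) / n_days  # DeltaMAN / DeltaEFF / DeltaCE
        for _ in range(n_days):
            path.append(path[-1] + delta)
    return np.array(path)

# Illustrative example: cost of energy drifting from 0.135 to 0.12 $/kWh
# over a 181-day semester (values are hypothetical)
ce_daily = daily_path([0.135, 0.12], [181])
```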
## MAIN OUTCOMES
By applying Equations (8), (11), and (12), we obtain the model
price [8] and compare its evolution to the historical one ( **Figure 1** ).
The evolution of the model (or implied) price shows a spike
during the second semester of 2016, probably because on 10th
July 2016, the Block Reward halved from 25 to 12.5, leading to a
reduction on the profit side and a consequent price increase.
Despite this episode, the historical price seems to fluctuate
around the implied one until the beginning of 2017, the period
8 Table A.2 ( **Supplementary Material** ) displays all the variables required to
compute the model price and compares it with the historical price. Since our time
window involves 3,107 observation days, for the sake of simplicity, we present only
the results for the last day of each month.
TABLE 2 | Hardware sample.

| TYPE | MODEL | TIME | EFF. (Mhash/J) | PRICE (USD) | LIFESPAN before '17 | LIFESPAN after '17 |
|---|---|---|---|---|---|---|
| GPU | ATI FirePro M5800 | 2 s. 2010 | 1.45 | 175 | 2,880 | 1,440 |
| GPU | Sapphire Radeon 5750 Vapor-X | 2 s. 2010 | 1.35 | 160 | 2,880 | 1,440 |
| GPU | GTX460 | 2 s. 2010 | 1.73 | 200 | 2,880 | 1,440 |
| GPU | FirePro V5800 | 1 s. 2011 | 2.08 | 469 | 2,880 | 1,440 |
| FPGA | Avnet Spartan-6 LX150T | 2 s. 2011 | 6.25 | 995 | 1,010 | 505 |
| FPGA | AMD Radeon 7900 | 1 s. 2012 | 10.40 | 680 | 1,010 | 505 |
| FPGA | Bitcoin Dominator X5000 | 2 s. 2012 | 14.70 | 750 | 1,010 | 505 |
| FPGA | X6500 | 1 s. 2013 | 23.25 | 989 | 1,010 | 505 |
| ASIC | Avalon 1 | 2 s. 2013 | 107.00 | 1,299 | 540 | 270 |
| ASIC | Bitmain AntMiner S1 | 1 s. 2014 | 500.00 | 1,685 | 540 | 270 |
| ASIC | Bitmain AntMiner S2 | 2 s. 2014 | 900.00 | 2,259 | 540 | 270 |
| ASIC | Bitmain AntMiner S3 | 1 s. 2015 | 1,300.00 | 1,350 | 540 | 270 |
| ASIC | Bitmain AntMiner S4 | 2 s. 2015 | 1,429.00 | 1,400 | 540 | 270 |
| ASIC | Bitmain AntMiner S5 | 1 s. 2016 | 1,957.00 | 1,350 | 540 | 270 |
| ASIC | Bitmain AntMiner S5+ | 2 s. 2016 | 2,257.00 | 2,307 | 540 | 270 |
| ASIC | Bitmain AntMiner S7 | 1 s. 2017 | 4,000.00 | 1,832 | 540 | 270 |
| ASIC | Bitmain AntMiner S9 | 2 s. 2017 | 10,182.00 | 2,400 | 540 | 270 |
| ASIC | Ebit E9++ | 1 s. 2018 | 10,500.00 | 3,880 | 540 | 270 |
| ASIC | Ebit E10 | 2 s. 2018 | 11,100.00 | 5,230 | 540 | 270 |

Source: Authors' elaboration.
in which the Bitcoin price started rising exponentially, reaching its
peak value of $19,270 on 19th December 2017. It declined during
2018, converging again to the model price. Another divergence was
detected at the end of 2013, but it was of a smaller magnitude and
resolved quickly.
Given the historical and implied price series, we go a step
further than Hayes (2019) and Abbatemarco et al. (2018) by
including the divergence phase in the analyzed time frame.
Therefore, we consider the period from 9th April 2014 to 31st
December 2018. We also select this time window to base the
analysis on solid data. Because of the difficulty of obtaining
reliable information on the hardware used in the mining process,
we make some simplifying assumptions on its features. By choosing
this time window, we include the hardware sample whose data are
more precise.
## Unit Root Tests
We first try to determine with different unit root tests whether
the time series is stationary or not. The presence of a unit
root indicates that a process is characterized by time-dependent
variance and violates the weak stationarity condition [9] . We test
the presence of a unit root with three procedures: the augmented
Dickey–Fuller test (Dickey and Fuller, 1979), the Phillips–Perron
test (Phillips and Perron, 1988), and the Zivot–Andrews test
(Zivot and Andrews, 1992).
Given a time series {y t }, both the augmented Dickey–Fuller
test (Dickey and Fuller, 1979) and the Phillips–Perron test are
9 The condition of weak stationarity asserts that $Var(r_t) = \gamma_0$, which means that
the variance of the process is time invariant and equal to a finite constant.
based on the general regression (Equation 16):

$$\Delta y_t = \alpha + \beta t + \theta y_{t-1} + \delta_1 \Delta y_{t-1} + \dots + \delta_{p-1} \Delta y_{t-p+1} + \varepsilon_t \qquad (16)$$

Where $\Delta y_t$ indicates changes in the time series, $\alpha$ is the constant, $t$ is the time trend, $p$ is the order of the autoregressive process, and $\varepsilon_t$ is the error term (Boffelli and Urga, 2016).
For both tests, the null hypothesis is that the time series contains a unit root, and thus is not stationary ($H_0: \theta = 0$), while the alternative hypothesis asserts stationarity ($H_1: \theta < 0$).
Considering only the augmented Dickey–Fuller test, its basic idea is that if a series $\{y_t\}$ is stationary, then $\{\Delta y_t\}$ can be explained only by the information included in its lagged values ($\Delta y_{t-1}, \dots, \Delta y_{t-p+1}$) and not by that in $y_{t-1}$.
For each variable, we conduct this test first with a constant term only and then also including a trend [10] .
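Outside Gretl and Stata, the same tests can be reproduced, for example, with Python's statsmodels (and the arch package for Phillips–Perron); the series names `ln_price`, `ln_model_price`, and `p_max` below are placeholders:

```python
from statsmodels.tsa.stattools import adfuller
from arch.unitroot import PhillipsPerron  # assumes the arch package is installed

for name, series in {"lnPrice": ln_price, "lnModelPrice": ln_model_price}.items():
    for reg in ("c", "ct"):  # constant only, then constant + trend
        stat, pvalue, *_ = adfuller(series, maxlag=p_max, regression=reg, autolag=None)
        print(f"ADF {name} ({reg}): t = {stat:.3f}, p = {pvalue:.4f}")
    pp = PhillipsPerron(series, trend="c")
    print(f"PP  {name}: t = {pp.stat:.3f}, p = {pp.pvalue:.4f}")
```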
**Table 3** presents the main findings of the test.
The Phillips–Perron test points out that the process generating $y_t$ might have a higher order of autocorrelation than the one admitted in the test equation. This test corrects the issue, and it is
10 In order to select the proper number of lags to include in this test, we used, only
for this part of the analysis, the open-source software Gretl. Its advantage is that it
clearly applies the Schwert criterion for the estimation of the maximum lag ($p_{max}$),
which is given by $p_{max} = \left\lfloor 12 \cdot (T/100)^{1/4} \right\rfloor$, where $T$ is the number of
observations. The test is conducted first with the suggested value of $p_{max}$, but if
the absolute value of the t statistic for testing the significance of the last lagged value
is below the threshold 1.6, $p_{max}$ is reduced by 1 and the analysis is recomputed. The
process stops at the first maximum lag that returns a value > 1.6. When this value
is found, the augmented Dickey–Fuller test is estimated.
robust in case of unspecified autocorrelation or heteroscedasticity
in the disturbance term of the equation. **Table 4** displays the
test results.
The main difference between these tests is that the
latter applies Newey–West standard errors to consider serial
correlation, while the augmented Dickey–Fuller test introduces
additional lags of the first difference.
Since the previous tests do not allow for the possibility of a structural break in the series, Zivot and Andrews (1992) propose to test for the presence of a unit root while allowing for a break-point at an unknown date. They elaborate three models to test for the presence of a unit root considering a one-time structural break:
a) permits a one-time change in the intercept of the series:

$$\Delta y_t = \alpha + \beta t + \theta y_{t-1} + \gamma DU_t + \delta_1 \Delta y_{t-1} + \dots + \delta_{p-1} \Delta y_{t-p+1} + \varepsilon_t \qquad (17)$$

b) permits a one-time change in the slope of the trend function:

$$\Delta y_t = \alpha + \beta t + \theta y_{t-1} + \vartheta DT_t + \delta_1 \Delta y_{t-1} + \dots + \delta_{p-1} \Delta y_{t-p+1} + \varepsilon_t \qquad (18)$$

c) combines the previous models:

$$\Delta y_t = \alpha + \beta t + \theta y_{t-1} + \gamma DU_t + \vartheta DT_t + \delta_1 \Delta y_{t-1} + \dots + \delta_{p-1} \Delta y_{t-p+1} + \varepsilon_t \qquad (19)$$
Where $DU_t$ is a dummy variable that captures a mean shift at a given break date, while $DT_t$ is a trend shift variable.
The null hypothesis, which is the same for all three models, states that the series contains a unit root ($H_0: \theta = 0$), while the alternative hypothesis asserts that the series is a stationary process with a one-time break occurring at an unknown point in time ($H_1: \theta < 0$) (Waheed et al., 2006).
The results in **Table 5** confirm what the other tests predict:
both series are integrated of order 1. Since this last test identifies
for $\Delta$lnPrice the presence of a structural break on 18th December
2017, and after this date the Bitcoin price reaches its highest value
and then starts declining, we add to the analysis a dummy variable
related to this observation, in order to take into account a broken
linear trend in the series.
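The Zivot–Andrews test is also available in statsmodels; a sketch (the series name `dln_price` is a placeholder):

```python
from statsmodels.tsa.stattools import zivot_andrews

# regression='c' breaks the intercept, 't' the trend, 'ct' both (models a-c above)
for reg in ("c", "t", "ct"):
    stat, pvalue, crit_values, baselag, break_idx = zivot_andrews(dln_price, regression=reg)
    print(f"ZA ({reg}): t = {stat:.3f}, break at observation {break_idx}")
```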
## Identifying the Number of Lags
The preferred lag length is the one that generates the lowest value
of the information statistic considered. We follow Lütkepohl’s
intuition that “the SBIC and HQIC provide consistent estimates
of the true lag order, while the FPE and AIC overestimate the lag
order with positive probability” (Becketti, 2013). Therefore, for
our analysis, we select 1 lag ( **Table 6** ) [11] .
11 To identify the proper lag length to be included in the VAR model, we use the
"varsoc" command in Stata, which displays a table of test statistics reporting, for
each lag length, the log of the likelihood function (LL), a likelihood-ratio test
statistic with the related degrees of freedom and p value (LR, df, and p), and four
information criteria: Akaike's final prediction error (FPE), Akaike's information
criterion (AIC), Hannan and Quinn's information criterion (HQIC), and Schwarz's
Bayesian information criterion (SBIC). Every information criterion provides a
trade-off between the complexity (e.g., the number of parameters) and the goodness
of fit (based on the likelihood function) of a model. Since the output is sensitive to
the maximum lag considered, we try different options by changing the maximum lag
included in the command. We tried 4, 8, 12, 16, 20, and 24 lags. After selecting a
maximum lag length equal to 16, the suggested optimal number of lags changes:
while the previous results agree in recommending 1 lag for each information
criterion, now the FPE and AIC diverge and propose 13 lags.
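For replication outside Stata, statsmodels offers an analogue of "varsoc" (the data frame name `data` is a placeholder):

```python
from statsmodels.tsa.api import VAR

sel = VAR(data[["dlnPrice", "dlnModelPrice"]]).select_order(maxlags=16)
print(sel.summary())         # AIC, BIC (SBIC), FPE, and HQIC for each lag
print(sel.selected_orders)   # the lag each criterion picks
```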
## Identifying the Number of Cointegrating Relationships
A cointegrating relationship describes the long-term link among
the levels of a set of non-stationary variables. Given K non-stationary
variables, there can be at most K − 1 cointegrating relationships.
Since we have only two non-stationary variables (lnPrice and
lnModelPrice), we can obtain at most one cointegrating relationship.
If the series show cointegration, a VAR model is no longer the best suited for the analysis; it is better to implement a Vector Error-Correction Model (VECM), which can be written as Equation (20):

$$\Delta y_t = \mu + \delta t + \alpha \beta' u_{t-1} + \sum_{i=1}^{p-1} \Gamma_i \Delta y_{t-i} + \varepsilon_t \qquad (20)$$

Where the deterministic components $\mu + \delta t$ are, respectively, the linear and the quadratic trend in $y_t$, which can be separated into the proper trends of $y_t$ and those of the cointegrating relationship. This depends on the fact that, in a first-difference equation, a constant term corresponds to a linear trend in the level of the variables ($y_t = \kappa + \lambda t \rightarrow \Delta y_t = \lambda$), while a linear trend derives from a quadratic one in the regression in levels ($y_t = \kappa + \lambda t + \omega t^2 \rightarrow \Delta y_t = \lambda + 2\omega t - \omega$). Therefore, $\mu \equiv \alpha\nu + \gamma$ and $\delta t \equiv \alpha\rho t + \tau t$. By substituting in the previous expression, the VECM can be expressed as Equation (21):

$$\Delta y_t = \alpha\left(\beta' y_{t-1} + \nu + \rho t\right) + \sum_{i=1}^{p-1} \Gamma_i \Delta y_{t-i} + \gamma + \tau t + \varepsilon_t \qquad (21)$$

Where the first part, $\alpha\left(\beta' y_{t-1} + \nu + \rho t\right)$, represents the cointegrating equations, while the second part, $\sum_{i=1}^{p-1} \Gamma_i \Delta y_{t-i} + \gamma + \tau t + \varepsilon_t$, refers to the variables in levels.
This representation allows specifying five cases that Stata tests:
1) Unrestricted trend: allows for a quadratic trend in the level of $y_t$ ($\tau t$ appears in the equation) and states that the cointegrating equations are trend stationary, which means they are stationary around time trends.
2) Restricted trend ($\tau = 0$): excludes quadratic trends but includes linear trends ($\rho t$). As in the previous case, it allows the cointegrating equations to be trend stationary.
3) Unrestricted constant ($\tau = 0$, $\rho = 0$): allows the levels of $y_t$ to present a linear trend ($\gamma$), but the cointegrating equations are stationary around a constant mean ($\nu$).
4) Restricted constant ($\tau = 0$, $\rho = 0$, $\gamma = 0$): rules out any trends in the levels of the data, but the cointegrating relationships are stationary around a constant mean ($\nu$).
5) No trend ($\tau = 0$, $\rho = 0$, $\gamma = 0$, $\nu = 0$): considers no non-zero means or trends.
TABLE 3 | Augmented Dickey–Fuller test.

| Variable | Constant: t stat | Constant: p-value | Constant + trend: t stat | Constant + trend: p-value | Result |
|---|---|---|---|---|---|
| lnPrice | −0.606 | 0.8696 | −1.839 | 0.6856 | Not stationary |
| lnModelPrice | −0.467 | 0.8982 | −1.669 | 0.7644 | Not stationary |
| ΔlnPrice | −7.694 | 0.0000 | −7.697 | 0.0000 | Stationary |
| ΔlnModelPrice | −8.041 | 0.0000 | −8.038 | 0.0000 | Stationary |

Critical values (constant): 1% −3.430, 5% −2.860, 10% −2.570. Critical values (constant + trend): 1% −3.960, 5% −3.410, 10% −3.120.
Source: Authors' elaboration.
TABLE 4 | Phillips–Perron test.

| Variable | Constant: t stat | Constant: p-value | Constant + trend: t stat | Constant + trend: p-value | Result |
|---|---|---|---|---|---|
| lnPrice | −0.437 | 0.9037 | −1.546 | 0.8130 | Not stationary |
| lnModelPrice | −0.637 | 0.8624 | −1.805 | 0.7021 | Not stationary |
| ΔlnPrice | −34.394 | 0.0000 | −34.385 | 0.0000 | Stationary |
| ΔlnModelPrice | −42.972 | 0.0000 | −42.959 | 0.0000 | Stationary |

Critical values (constant): 1% −3.430, 5% −2.860, 10% −2.570. Critical values (constant + trend): 1% −3.960, 5% −3.410, 10% −3.120.
Source: Authors' elaboration.
Starting from these different specifications, the Johansen test can
detect the presence of a cointegrating relationship in the analysis.
The null hypothesis states, again, that there are no cointegrating
relationships, against the alternative that the null is not true. H0 is
rejected if the trace statistic is higher than the 5% critical value.
We run the test with each case specification, and the results
agree in detecting zero cointegrating equations (a maximum rank
of zero). Only the unrestricted-trend specification is inconclusive
but, since the other results match, we consider rank = 0 the right
solution. This implies that the two time series can be fitted into a
VAR model.
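The Johansen test can be replicated with statsmodels; `det_order=0` corresponds to one of the constant cases above, and the data frame name is a placeholder:

```python
from statsmodels.tsa.vector_ar.vecm import coint_johansen

# k_ar_diff=0 because one lag in levels implies zero lagged differences
res = coint_johansen(data[["lnPrice", "lnModelPrice"]], det_order=0, k_ar_diff=0)
print("trace statistics:", res.lr1)  # for rank r = 0 and r <= 1
print("critical values:", res.cvt)   # 90%, 95%, 99%
```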
## VAR Model
The VAR model allows investigating the interaction of several
endogenous time series that mutually influence each other. We
not only want to detect whether the Bitcoin price could be determined
by the one suggested by the cost of production model; we also
want to check whether the price has an influence on the model price.
This latter relation can occur if, for example, a price increase
leads to a higher cost for the mining hardware. In fact, a rise
in the price also represents a higher reward if the mining process
is successfully conducted, with the risk of pushing hardware prices
up, which in turn could boost the model price.
To explain how a VAR model is constructed, we present
a simple univariate AR(p) model, disregarding any possible
exogenous variables, which can be written as (22):
$$y_t = \mu + \phi_1 y_{t-1} + \dots + \phi_p y_{t-p} + \varepsilon_t \qquad (22)$$

Or, in concise form (Equation 23):

$$\phi(L) y_t = \mu + \varepsilon_t \qquad (23)$$

where $y_t$ depends on its $p$ prior values, a constant ($\mu$), and a random disturbance ($\varepsilon_t$).
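Before turning to the multivariate notation below, a minimal sketch of how the bivariate VAR used here can be estimated in Python's statsmodels (column names are placeholders; the authors work in Stata, and the dummy equals 1 from the 18th December 2017 break onward):

```python
from statsmodels.tsa.api import VAR

endog = data[["dlnPrice", "dlnModelPrice"]]
var_res = VAR(endog, exog=data[["dummy"]]).fit(1)  # 1 lag, as selected above
print(var_res.summary())
```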
TABLE 5 | Zivot–Andrews test.

| Variable | Intercept: t stat | Intercept: break (obs., date) | Trend: t stat | Trend: break (obs., date) | Intercept + trend: t stat | Intercept + trend: break (obs., date) | Result |
|---|---|---|---|---|---|---|---|
| lnPrice | −2.964 | 1,083, 26/03/2017 | −2.049 | 261, 25/12/2014 | −2.562 | 1,196, 17/07/2017 | Non-stationary |
| lnModelPrice | −3.221 | 281, 14/01/2015 | −3.357 | 408, 21/05/2015 | −3.914 | 620, 19/12/2015 | Non-stationary |
| ΔlnPrice | −34.905 | 1,350, 18/12/2017 | −34.626 | 1,285, 14/10/2017 | −34.895 | 1,350, 18/12/2017 | Stationary |
| ΔlnModelPrice | −42.848 | 582, 11/11/2015 | −42.781 | 1,469, 16/04/2018 | −42.858 | 582, 11/11/2015 | Stationary |

Critical values (intercept): 1% −5.34, 5% −4.80, 10% −4.58. Critical values (trend): 1% −4.93, 5% −4.42, 10% −4.11. Critical values (intercept + trend): 1% −5.57, 5% −5.08, 10% −4.82.
Source: Authors' elaboration.
TABLE 6 | Proper number of lags.

| Lag | LL | LR | df | p | FPE | AIC | HQIC | SBIC |
|---|---|---|---|---|---|---|---|---|
| 0 | 7160.95 | | | | 8.0e−07 | −8.36581 | −8.3611 | −8.35308 |
| 1 | 7190.57 | 59.237 | 4 | 0.000 | 7.7e−07 | −8.39575 | −8.38633* | −8.37029* |
| 2 | 7192.42 | 3.7134 | 4 | 0.446 | 7.8e−07 | −8.39325 | −8.37911 | −8.35506 |
| 3 | 7194.48 | 4.1059 | 4 | 0.392 | 7.8e−07 | −8.39097 | −8.37231 | −8.34005 |
| 4 | 7195.74 | 2.5346 | 4 | 0.638 | 7.8e−07 | −8.38778 | −8.36422 | −8.32413 |
| 5 | 7197.81 | 4.1319 | 4 | 0.388 | 7.8e−07 | −8.38552 | −8.35725 | −8.30914 |
| 6 | 7199.73 | 3.8486 | 4 | 0.427 | 7.8e−07 | −8.38309 | −8.35011 | −8.29399 |
| 7 | 7201.63 | 3.8014 | 4 | 0.434 | 7.9e−07 | −8.38064 | −8.34295 | −8.2788 |
| 8 | 7204.56 | 5.8468 | 4 | 0.211 | 7.9e−07 | −8.37938 | −8.33698 | −8.26482 |
| 9 | 7208.36 | 7.6003 | 4 | 0.107 | 7.9e−07 | −8.37914 | −8.33204 | −8.25185 |
| 10 | 7212.23 | 7.7429 | 4 | 0.101 | 7.9e−07 | −8.37899 | −8.32717 | −8.23897 |
| 11 | 7213.48 | 2.5086 | 4 | 0.643 | 7.9e−07 | −8.37578 | −8.31925 | −8.22304 |
| 12 | 7225.63 | 24.303 | 4 | 0.000 | 7.8e−07 | −8.38531 | −8.32407 | −8.21983 |
| 13 | 7243.57 | 35.872* | 4 | 0.000 | 7.7e−07* | −8.4016* | −8.33565 | −8.2234 |
| 14 | 7244.29 | 1.4495 | 4 | 0.836 | 7.7e−07 | −8.39777 | −8.32711 | −8.20684 |
| 15 | 7246.50 | 4.4025 | 4 | 0.354 | 7.7e−07 | −8.39567 | −8.3203 | −8.19201 |
| 16 | 7248.86 | 4.7357 | 4 | 0.316 | 7.8e−07 | −8.39376 | −8.31368 | −8.17737 |

Source: Authors' elaboration.
A vector of n jointly endogenous variables is expressed as Equation (24):

$$y_t = \begin{pmatrix} y_{1,t} \\ y_{2,t} \\ \vdots \\ y_{n,t} \end{pmatrix} \qquad (24)$$

This n-element vector can be rearranged as a function (Equation 25) of n constants, p prior values of $y_t$, and a vector of n random disturbances, $\epsilon_t$:

$$y_t = \mu + \Phi_1 y_{t-1} + \dots + \Phi_p y_{t-p} + \epsilon_t \qquad (25)$$

Where $\mu$ is a vector (Equation 26) of the n constants:

$$\mu = \begin{pmatrix} \mu_1 \\ \mu_2 \\ \vdots \\ \mu_n \end{pmatrix} \qquad (26)$$

the matrix of coefficients $\Phi_i$ is Equation (27):

$$\Phi_i = \begin{pmatrix} \phi_{i,11} & \phi_{i,12} & \cdots & \phi_{i,1n} \\ \phi_{i,21} & \phi_{i,22} & \cdots & \phi_{i,2n} \\ \vdots & \vdots & \ddots & \vdots \\ \phi_{i,n1} & \phi_{i,n2} & \cdots & \phi_{i,nn} \end{pmatrix} \qquad (27)$$

and $\epsilon_t$ consists in Equation (28):

$$\epsilon_t = \begin{pmatrix} \varepsilon_{1,t} \\ \varepsilon_{2,t} \\ \vdots \\ \varepsilon_{n,t} \end{pmatrix} \qquad (28)$$

With $E\epsilon_t = 0$ and $E\epsilon_t \epsilon_s' = \Sigma$ if $t = s$ and $0$ if $t \neq s$; the elements of $\epsilon_t$ can be contemporaneously correlated.
Given these specifications, a pth-order VAR can be presented as Equation (29):

$$\Phi(L)\, y_t = \mu + \epsilon_t \qquad (29)$$

To clarify this expression, the ith endogenous time series can be extracted from this basic VAR and represented as Equation (30):

$$y_{i,t} = \mu_i + \phi_{1,i1} y_{1,t-1} + \dots + \phi_{1,in} y_{n,t-1} + \phi_{2,i1} y_{1,t-2} + \dots + \phi_{2,in} y_{n,t-2} + \dots + \phi_{p,i1} y_{1,t-p} + \dots + \phi_{p,in} y_{n,t-p} + \varepsilon_{i,t} \qquad (30)$$

The result of the VAR model considering the dummy variable is presented in **Table 7** .

TABLE 7 | Regressions of the Vector Autoregression model.

| Variables | (1) dlnPrice | (2) dlnModelPrice |
|---|---|---|
| L.dlnPrice | 0.18330223*** (0.02359822) | 0.00799770 (0.02055802) |
| L.dlnModelPrice | −0.00655017 (0.02762476) | −0.02899205 (0.02406582) |
| Dummy | −0.00588960*** (0.00185465) | 0.00027999 (0.00161571) |
| Constant | 0.00236755*** (0.00086910) | 0.00149779** (0.00075713) |
| Observations | 1,726 | 1,726 |
| R² | 0.04178812 | 0.00092579 |

Standard errors in parentheses. ***p < 0.01, **p < 0.05, and *p < 0.1.
Source: Authors' elaboration.

As expected, the dummy is significant in the dlnPrice equation but not in the dlnModelPrice one.
Looking at the significance of the parameters, we can see how dlnPrice depends on its lagged value, on the dummy, and on the constant term, but it does not seem to be linked with the lagged value of dlnModelPrice. The regression of dlnModelPrice appears not to be explained by any variable considered in the model. We then check the stability of the model. The results confirm that the model is stable and there is no residual autocorrelation (Table A.9 in **Supplementary Material** ).

## Heteroscedasticity Correction

Given the series' path and the daily frequency of the data, the variables included in the model are probably heteroskedastic. This feature does not compromise the unbiasedness or the consistency of the OLS coefficients, but it invalidates the usual standard errors. In time series analysis, heteroscedasticity is often neglected, as the autocorrelation of the error terms is seen as the main problem due to its ability to invalidate the analysis. Since it is not possible to check and correct heteroscedasticity while performing the VAR model, we run each VAR regression separately and check for the presence of heteroscedasticity with the Breusch–Pagan test, whose null hypothesis states that the error variances are all equal (homoscedasticity), against the alternative hypothesis that the error variances change over time (heteroscedasticity):

$$H_0: \sigma_1^2 = \sigma_2^2 = \dots = \sigma^2 \qquad (31)$$
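A sketch of the Breusch–Pagan check and the robust re-estimation in Python (column names for the lagged regressors are placeholders; the authors' own estimation was done in Stata):

```python
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

X = sm.add_constant(data[["L_dlnPrice", "L_dlnModelPrice", "dummy"]])
ols = sm.OLS(data["dlnPrice"], X, missing="drop").fit()

lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(ols.resid, ols.model.exog)
print("Breusch-Pagan p-value:", lm_pvalue)  # < 0.05 rejects homoscedasticity

robust = ols.get_robustcov_results(cov_type="HC1")  # same coefficients, robust SEs
print(robust.summary())
```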
The null hypothesis is rejected if the probability value of the chi-square statistic (Prob > chi2) is below 0.05. The results of the test for both regressions show that the null hypothesis is always rejected, implying the presence of heteroscedasticity in the residuals (Table A.10 in **Supplementary Material** ).
We try to correct the issue using heteroscedasticity-robust standard errors. The results are displayed in **Table 8** .

TABLE 8 | Regressions with robust standard errors.

| Variables | (1) dlnPrice | (2) dlnModelPrice |
|---|---|---|
| L.dlnPrice | 0.18330223*** (0.04306718) | 0.00799770 (0.01592745) |
| L.dlnModelPrice | −0.00655017 (0.02681078) | −0.02899205*** (0.00979148) |
| Dummy | −0.00588960*** (0.00225058) | 0.00027999 (0.00142356) |
| Constant | 0.00236755*** (0.00078480) | 0.00149779* (0.00078942) |
| Observations | 1,726 | 1,726 |
| R² | 0.04178812 | 0.00092579 |

Robust standard errors in parentheses. ***p < 0.01, **p < 0.05, and *p < 0.1.
Source: Authors' elaboration.

These new robust standard errors differ from the standard errors estimated with the VAR model, while the coefficients are unchanged. The first difference of lnPrice still depends on its own lag but, contrary to the VAR results, the first difference of lnModelPrice is now no longer independent of its own previous values. This new specification confirms the previous finding that neither variable depends on the lagged value of the other. Therefore, it seems that, during the time window considered, the Bitcoin historical price is not connected with the price derived from Hayes' formulation, and vice versa.
Recalling **Figure 1**, it seems that the historical price fluctuated around the model (or implied) price until 2017, the year in which the Bitcoin price significantly increased. During the last months of 2018, the prices seem to converge again, following a common path. In our analysis, we focus on the time window in which Bitcoin experienced its highest price volatility (Figure
A.1 in **Supplementary Material** ) and the results suggest that it
is disconnected from the one predicted by the model. These
findings may depend on the features of the new cryptocurrencies,
which have not been completely understood yet.
The previous analyses by Hayes (2019) and Abbatemarco et al. (2018),
conducted on different time periods, assert that the Bitcoin price
can be justified by the costs and revenues of its blockchain network,
leading to a result opposite to ours. We suggest that the difference
may lie in the time window analyzed, since we go a step further by
also evaluating the months in which the Bitcoin price was pushed to
its peak and did not follow a stable path. We think that there is not
enough knowledge on cryptocurrencies to assert that the Bitcoin
price is (or is not) based on the profits and costs derived from the
mining process, but these intrinsic characteristics must be considered
and checked in further analyses that include other possible Bitcoin
price drivers suggested by the literature.
## CONCLUSIONS
The main findings of the analysis presented show how, in the
considered time frame, the Bitcoin historical prices are not
connected with the price derived from the model, and vice versa.
This result is different from the one obtained by Hayes
(2019) and Abbatemarco et al. (2018), who conclude
that the Bitcoin price could be explained by the cost of
production model.
The reason behind these opposite outcomes could be the
considered time window. In fact, our analysis also includes the
months in which the Bitcoin price surged, reaching a peak of
$19,270 on 19th December 2017, without following a seasonal
path (Figure A.1 in **Supplementary Material** ). This has a relevant
impact on the results, even though the historical price started
declining in 2018, converging again to the model price. Looking at
the overall time frame, it seems that the rise of the historical price
from the beginning of 2017 and its subsequent decline through the
end of 2018 constitute a unique episode that required some months
to return to more standard behavior (Caporale et al., 2019).
It now seems possible to assert that Bitcoin cannot be seen as a
virtual commodity, or at least not only as one. According to
Abbatemarco et al. (2018), the implemented approach does not
rule out the possibility of a bubble developing and, given the
current time frame, this is why it would be more precise to explain
the Bitcoin price not only with the one implied by the model but
also with other explanatory variables that the literature identifies
as meaningful. Therefore, to avoid misleading results, Bitcoin's
intrinsic characteristics must be considered and checked by adding
to the profit and cost functions these suggested parameters, which
range from technical aspects and Internet components to financial
indexes, commodity prices, and exchange rates. This could open
new horizons for research, which, besides the traditional drivers,
should also consider new factors such as Google Trends, Wikipedia
queries, and tweets. These elements are related to the Internet
component and appear to be particularly relevant given Bitcoin's
social and digital nature.
Kristoufek's (2013) intuition seems to be confirmed: Bitcoin is a
unique asset that presents properties of both a speculative financial
asset and a standard one, and its price drivers will change over time
given its dynamic nature and volatility.
The explanatory power of the VAR specification we
implemented to inspect fundamental vs. market price dynamics
could be quite low, which is to be ascribed to missing factors and
volatility. Further research could include more tests on the
VAR specification, also including other controls/factors to
check whether, for example, the VIX is another important
explanatory factor. More involved analyses should also explore
latent factors and/or time-varying relationships with
stochastic and jump components.
Despite the highlighted elements of uncertainty,
Bitcoin has undoubtedly introduced to the market a new
way of thinking about money transfers and exchanges. The
distributed ledger technology could be a disruptive innovation
for the financial sector, since it can ease communication
without the need of a central authority. Moreover, the
spread of private cryptocurrencies, which enter into
competition with the public forms of money, could affect
the monetary policy and the financial stability pursued by
official institutions. For these reasons, central banks all
over the world are seeking to understand if it is possible
to adopt this technology in their daily operations, with the
aim of including it in the financial system and controlling
its implementations, enhancing its benefits, and reducing its
risks (Gouveia et al., 2017; Bank for International Settlements,
2018).
## DATA AVAILABILITY STATEMENT
All datasets generated for this study are included in the
article/ **Supplementary Material** .
## AUTHOR CONTRIBUTIONS
FZ: Introduction, Literature Review, and Conclusions.
CB: Materials and Methods, Main Outcomes,
and Conclusions.
## ACKNOWLEDGMENTS
We acknowledge useful comments and suggestions
from two anonymous referees that have helped to
substantially improve the paper. We are also grateful
to Alessia Rossi, who has helped us in collecting and
processing data.
## SUPPLEMENTARY MATERIAL
The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/frai.2020.00021/full#supplementary-material
## REFERENCES
Abbatemarco, N., De Rossi, L., and Salviotti, G. (2018). An econometric
model to estimate the value of a cryptocurrency network. The Bitcoin case.
Association for Information Systems. AIS Electronic Library (AISeL) ECIS.
Available online at: https://aisel.aisnet.org/ecis2018_rp/164 (accessed February 20, 2020).
Bank for International Settlements (2018). Central bank digital currencies.
Committee on Payments and market Infrastructures. Markets Committee.
Becketti, S. (2013). Introduction to Time Series Using Stata. College Station, TX:
Stata Press.
Boffelli, S., and Urga, G. (2016). Financial Econometrics Using Stata. College
Station, TX: Stata Press, 17–30.
Bouoiyour, J., and Selmi, R. (2015). What does bitcoin look like? Ann. Econ.
[Financ. 16, 449–492. Available online at: http://aeconf.com/Articles/Nov2015/](http://aeconf.com/Articles/Nov2015/aef160211.pdf)
[aef160211.pdf](http://aeconf.com/Articles/Nov2015/aef160211.pdf)
Caporale, G. M., Plastun, A., and Oliinyk, V. (2019). Bitcoin fluctuations and the
frequency of price overreactions, Financ. Mark. Portf. Manag. 33, 109–131.
[doi: 10.1007/s11408-019-00332-5](https://doi.org/10.1007/s11408-019-00332-5)
Chevallier, J., Goutte, S., Guesmi, K., and Saadi, S. (2019). Study of the Dynamic
[of Bitcoin’s Price, HAL. Available online at: https://halshs.archives-ouvertes.fr/](https://halshs.archives-ouvertes.fr/halshs-02175669)
[halshs-02175669 (accessed February 20, 2020).](https://halshs.archives-ouvertes.fr/halshs-02175669)
Chishti, S., and Barberis, J. (2016). The FinTech Book: The Financial Technology
Handbook for Investors, Entrepreneurs and Visionaries. Chichester: Wiley.
Ciaian, P., Rajcaniova, M., and Kancs, D. A. (2016). The economics of Bitcoin price
[formation, Appl. Econ. 48, 1799–1815. doi: 10.1080/00036846.2015.1109038](https://doi.org/10.1080/00036846.2015.1109038)
de Vries, A. (2016). Bitcoin’s growing energy problem, Joule 2, 801–809.
[doi: 10.1016/j.joule.2018.04.016](https://doi.org/10.1016/j.joule.2018.04.016)
Dickey, D. A., and Fuller, W. A. (1979). Distribution of the estimators for
autoregressive time series with a unit root, J. Am. Stat. Assoc. 74, 427–431.
Engle, R. F., and Granger, C. W. (1987). Co-integration and error
correction: representation, estimation, and testing, Econometrica 55,
251–276.
Financial Stability Board (FSB) (2017). Financial Stability Implications from
FinTech. Supervisory and Regulatory Issues that Merit Authorities’ Attention.
Available online at: [http://www.fsb.org/wp-content/uploads/R270617.pdf](http://www.fsb.org/wp-content/uploads/R270617.pdf)
(accessed February 20, 2020).
Garcia, D., Tassone, C. J., Mavrodiev, P., and Perony, N. (2014). The digital traces
of bubbles: feedback cycles between socio-economic signals in the Bitcoin
[economy. J. R. Soc. Interface 11, 1–9. doi: 10.1098/rsif.2014.0623](https://doi.org/10.1098/rsif.2014.0623)
Giudici, P., and Abu-Hashish, I. (2019). What determines bitcoin exchange
prices? A network VAR approach. Financ. Res. Lett. 28, 309–318.
[doi: 10.1016/j.frl.2018.05.013](https://doi.org/10.1016/j.frl.2018.05.013)
Gouveia, O. C., Dos Santos, E., de Lis, S. F., Neut, A., and Sebastián, J. (2017).
Central Bank Digital Currencies: Assessing Implementation Possibilities and
Impacts. BBVA Working Paper, No. 17/04.
Hayes, A. (2015). A Cost of Production Model for Bitcoin. Department of
Economics. The New School for Social Research, Working Paper No. 5.
Hayes, A. (2017). Cryptocurrency value formation: an empirical analysis leading to
a cost of production model for valuing bitcoin, Telemat. Inform. 34, 1308–1321.
[doi: 10.1016/j.tele.2016.05.005](https://doi.org/10.1016/j.tele.2016.05.005)
Hayes, A. (2019). Bitcoin price and its marginal cost of production:
support for a fundamental value, Appl. Econ. Lett. 26, 554–560.
[doi: 10.1080/13504851.2018.1488040](https://doi.org/10.1080/13504851.2018.1488040)
Hileman, G., and Rauchs, M. (2017). Global Cryptocurrency Benchmarking Study.
Cambridge Centre for Alternative Finance, University of Cambridge, Judge
Business School.
Katsiampa, P. (2017). Volatility estimation for Bitcoin: a comparison of GARCH
[models, Econ. Lett. 158, 3–6. doi: 10.1016/j.econlet.2017.06.023](https://doi.org/10.1016/j.econlet.2017.06.023)
Kjærland, F., Khazal, A., Krogstad, E. A., Nordstrøm, F. G. B., and Oust, A.
(2018). An analysis of Bitcoin’s price dynamics. J. Risk Financ Manag. 11:63.
[doi: 10.3390/jrfm11040063](https://doi.org/10.3390/jrfm11040063)
Kristoufek, L. (2013). Bitcoin meets Google Trends and Wikipedia: quantifying
the relationship between phenomena of the Internet era. Sci. Rep. 3:3415.
[doi: 10.1038/srep03415](https://doi.org/10.1038/srep03415)
Kristoufek, L. (2015). What are the main drivers of the bitcoin price?
evidence from wavelet coherence analysis. PLoS ONE 10:e0123923.
[doi: 10.1371/journal.pone.0123923](https://doi.org/10.1371/journal.pone.0123923)
Lütkepohl, H., and Krätzig, M. (2004). (Eds.). Applied Time Series Econometrics.
Cambridge University Press.
Matta, M., Marchesi, M., and Lunesu, M. I. (2015). “Bitcoin spread prediction
using social and web search media,” in Proceedings of Conference on Workshop
Deep Content Analytics Techniques for Personalized & Intelligent Services.
(Dublin).
Nakamoto, S. (2008). Bitcoin: A Peer-to-Peer Electronic Cash System. Available
[online at: https://Bitcoin.org/Bitcoin.pdf (accessed February 20, 2020).](https://Bitcoin.org/Bitcoin.pdf)
OECD (2018). Financial Markets, Insurance and Pensions. Digitalisation
and Finance.
Phillips, P. C. B., and Perron, P. (1988). Testing for a unit root in time series
[regression. Biometrika 75, 335–346. doi: 10.1093/biomet/75.2.335](https://doi.org/10.1093/biomet/75.2.335)
Schena, C., Tanda, A., Arlotta, C., and Potenza, G. (2018). Lo sviluppo del FinTech.
CONSOB. Quaderni Fintech.
Soltani, M., Kashkooli, F. M., Dehghani-Sanij, A. R., Kazemia, A. R., Bordbar, N.,
Farshchi, M. J., et al. (2019). A comprehensive study of geothermal heating
and cooling systems. Sustain. Cities Soc. 44, 793–818. doi: 10.1016/j.scs.2018.
09.036
Waheed, M., Alam, T., and Ghauri, S. P. (2006). Structural Breaks and Unit
Root: Evidence from Pakistani Macroeconomic Time Series. Available Online
[at: https://ssrn.com/abstract=963958 (accessed February 20, 2020).](https://ssrn.com/abstract=963958)
Zivot, E., and Andrews, D. (1992). Further evidence on the great crash, the oil-price
shock, and the unit-root hypothesis. J. Bus. Econ. Stat. 10, 251–270.
[doi: 10.1080/07350015.1992.10509904](https://doi.org/10.1080/07350015.1992.10509904)
**Conflict of Interest:** The authors declare that the research was conducted in the
absence of any commercial or financial relationships that could be construed as a
potential conflict of interest.
Copyright © 2020 Baldan and Zen. This is an open-access article distributed
[under the terms of the Creative Commons Attribution License (CC BY). The use,](http://creativecommons.org/licenses/by/4.0/)
distribution or reproduction in other forums is permitted, provided the original
author(s) and the copyright owner(s) are credited and that the original publication
in this journal is cited, in accordance with accepted academic practice. No use,
distribution or reproduction is permitted which does not comply with these terms.
_Foundation of Computer Science FCS, New York, USA_
_Volume 3– No.5, November 2015 – www.caeaccess.org_
# FOG Computing: The new Paradigm
## Hathal Salamah A. Alwageed
#### Department of Computer Engineering and Network
Aljouf University, Saudi Arabia
## ABSTRACT
As the Internet of Everything (IoE) heats up, Cisco engineers have put forward a new networking, compute, and storage paradigm that extends to the edge of the network [http://newsroom.cisco]. Fog Computing is a paradigm that extends Cloud Computing and its services to the edge of the network. Like the Cloud, the Fog provides data, compute, storage, and application services to end users. The distinguishing Fog attributes are its proximity to end users, its dense geographical distribution, and its support for mobility. Services are hosted at the network edge or even on end devices such as set-top boxes or access points. In this way, the Fog reduces service latency and improves QoS, resulting in a superior user experience. Fog Computing supports emerging Internet of Everything (IoE) applications that demand real-time, predictable latency (industrial automation, transportation, networks of sensors and actuators). Thanks to its geographical distribution, the Fog paradigm is well positioned for real-time big data and analytics. The Fog supports densely distributed data collection points, thereby adding a fourth axis to the often-mentioned big data dimensions of volume, variety, and velocity.
## Keywords
Cloud Computing, Distributed Computing, Networking, IoT
## 1. INTRODUCTION
"The Internet of Everything is changing how we interact with the real world," Milito says. "Things that were totally isolated from the Internet until recently, for instance cars, are now moving onto it. However, as we go from one billion endpoints to one trillion endpoints around the globe, that creates a real scalability issue and the challenge of managing complex clusters of endpoints, what we call 'rich systems', as opposed to managing individual endpoints. Fog's hardware infrastructure and software platform handle that" [http://newsroom.cisco].

The information and communication technology (ICT) community routinely takes time to agree on the real meaning, scope, and context of the new terms that appear alongside new technology trends and their associated hype. Web services, big data, and cloud computing are a few examples of established terms that were confusing when first coined. The term Fog Computing is currently going through this initial turmoil. Unlike the examples above, 'the fog' is not confined to a particular technological area. We can therefore expect the initial confusion about 'what the fog is' to reach unusual levels. As often happens with new technologies, a consensus definition will have to be agreed upon by the community to tone down hype and confusion. Early definitions tend to focus on just a few aspects, like elasticity in the cloud or interoperability in web services. The fact that the Fog glues together several converging technology trends makes this problem considerably harder. Indeed, looking at any of the technologies related to the fog from a single viewpoint may give the false impression that there is little new in it. For example, recent definition attempts have presented it as just an evolution of our current cloud model; see, for instance, Cisco's view of the fog [Flavio Bonomi et al]. "Fog is an extension of the Cloud paradigm," says Technical Leader Rodolfo Milito, one of Cisco's thought leaders in fog computing. "It's similar to cloud but closer to the ground. Fog computing architecture extends the cloud out into the real world, the physical world of things."

The Fog complements the Cloud, addressing emerging IoT applications that are geographically distributed and require low latency or fast mobility support. Fog computing would support sensors (which typically measure, detect, and collect data) and actuators, that is, devices that can perform a physical action, for instance closing a valve, moving the arms of a robot, or applying the brakes in a car [http://newsroom.cisco]. Unlike conventional data centers, Fog devices are geographically distributed over heterogeneous platforms, spanning multiple management domains. Cisco is interested in innovative proposals that facilitate service mobility across platforms, and in technologies that protect end-user and content security and confidentiality across domains. The Fog provides unique benefits across several verticals such as IT, entertainment, advertising, and personal computing. Cisco is especially interested in proposals that focus on Fog Computing scenarios related to the Internet of Everything (IoE), sensor networks, data analytics, and other data-intensive services, in order to demonstrate the advantages of this new paradigm, to assess the trade-offs in both experimental and production deployments, and to identify potential research issues for those deployments.
## 2. THE FOG DESCRIPTION
The Fog takes data and workload placement to another level. We are now talking about edge computing, the home of the Fog. While the Fog naturally extends Cloud computing and builds on the Cloud's core technologies, the Fog, by definition, spans broader geographic areas than the Cloud, and in a denser way. Likewise, Fog devices are significantly more heterogeneous in nature, ranging from end-user devices and access points to edge routers and switches. To accommodate this heterogeneity, Fog services are wrapped inside a container for ease of deployment. Example container technologies are Linux containers and the Java Virtual Machine (JVM). Proposals that address service mobility across the Fog platform are particularly compelling. Specifically:

- Technologies that support workload mobility between Cloud and Fog platforms, driven by policies and the underlying infrastructure.
- Technologies that enhance various aspects of service mobility.
- Fog services will be orchestrated across management domains; services will be provisioned, monitored, and tracked across these domains.

Proposals addressing security and privacy in the context of Fog Computing are encouraged. Specifically:

- Privacy and security risk analyses for the different Fog actors (e.g., service provider, end user, content provider) in the context of particular Fog service verticals (e.g., IoE, sensor networks, data analytics, IT, entertainment, personal computing).
- Technologies that ensure security and privacy of users and content across domains.
- Technologies that seamlessly integrate with and extend existing Cloud security and privacy solutions in the context of the Fog.

While the Fog provides compelling advantages across several verticals such as IT, entertainment, advertising, and personal computing, Cisco is especially interested in the Fog's advantages for big data services in several verticals, including the IoE. Specifically, improvements in compute and storage offerings for data-intensive services such as the following (a small code sketch of the fog-to-cloud filtering pattern appears at the end of this section):

- Interaction between the Fog and the Cloud. Typically, the Fog platform supports real-time, latency-critical analytics, processes and filters the data, and pushes to the Cloud only data that is global in time and geographical scope.
- Collection of data and analytics (pulled from access devices, pushed to the Cloud).
- Data storage for redistribution (pushed from the Cloud, pulled by downstream devices).
- Technologies that enable data fusion in the above settings.
- Analytics relevant to local communities across different verticals (e.g., video analytics, health care, sensing and performance monitoring).
- Methodologies, models, and algorithms to optimize cost and performance through workload mobility between the Fog and the Cloud.

Computer networks can be classified into different types based on their scale of operation. They include:

- **LAN**: Local Area Networks cover a small physical area, such as a home, an office, or a small group of buildings, for instance a school or university campus.
- **WLAN**: Wireless Local Area Networks enable users to move around within a larger coverage area while remaining wirelessly connected to the network.
- **WAN**: Wide Area Networks cover a broad area, with communication links that cross metropolitan, regional, or national boundaries. The Internet is the best example of a WAN.
- **MAN**: Metropolitan Area Networks are large networks that cover an entire city.
- **SAN**: Storage Area Networks attach remote computer storage devices, such as disk arrays, optical jukeboxes, and tape libraries, to servers in such a way that they appear to the operating system to be locally attached.

Considering all this, we put forward the following definition of the Fog:

Fog computing is a scenario in which a huge number of heterogeneous (wireless and often autonomous), ubiquitous, and decentralized devices communicate and potentially cooperate among themselves and with the network to perform storage and processing tasks without the intervention of third parties. These tasks can support basic network functions or new services and applications that run in a sandboxed environment. Users who lease part of their devices to host these services receive incentives for doing so. This definition captures what we consider to be the key components of the fog: ubiquity, improved network capabilities acting as a hosting environment, and better support for cooperation among devices. Because the boundaries of the terms are still blurry, the differences between fog and cloud computing can be hard to grasp for some users; some may consider the fog just an "extension" of the cloud.
## 3. APPLICATIONS & USAGE CASES
[datacenterknowledge.com] The expression "Fog computing" has been embraced by Cisco Systems as a new paradigm to support wireless data transfer to distributed devices in the "Internet of Things." Various other distributed computing and storage services are also adopting the term. It builds on earlier concepts in distributed computing, such as content delivery networks, but allows the delivery of more complex services using cloud technologies. Before dismissing it as yet another technology buzzword, it is important to understand where Fog Computing plays a part. Although the terminology is new, this technology already has a place within the modern data center and the cloud.

Delivering data close to the user. The volume of data being delivered via the cloud creates a pressing need to cache data and other services closer to the user. These services would be located nearest to the end user to ease latency concerns and improve data access. Rather than housing information at data center sites far from the end point, the Fog aims to place the data close to the end user.

Creating dense geographical distribution. Fog computing extends core cloud services to an edge network that sits at many points of presence. This dense, geographically dispersed infrastructure helps in several ways. First of all, big data processing and analytics can be done faster and with better results. Second, administrators are able to support location-based mobility demands without having to traverse the entire WAN. Finally, these edge (Fog) systems would be built in such a way that real-time data analytics becomes a reality on a truly massive scale.

True support for mobility and the IoE. As mentioned before, there is a rapid increase in the number of devices and the amount of data that we bring into play. Administrators are able to leverage the Fog and control where users come in and how they access this information. Not only does this improve user performance, it also helps with security and privacy issues. By managing data at several edge points, Fog computing integrates core cloud services with those of a truly distributed data center platform. As more services are created to benefit the end user, edge and Fog networks will become more prevalent.

Seamless integration with the cloud and other services. The idea is not to replace the cloud. With Fog services, we are able to enhance the cloud experience by isolating user data that needs to live on the edge. From there, administrators are able to tie analytics, security, or other services directly into their cloud model. This infrastructure still maintains the concept of the cloud while incorporating the power of Fog Computing at the edge.
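As a rough illustration of the "keep latency-critical work at the edge, send the rest to the cloud" placement logic described above, here is a minimal sketch. The tier names, RTT figures, and the greedy policy are assumptions of ours, not part of the original text.

```python
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    rtt_ms: float        # measured round-trip time to this tier
    capacity_left: int   # remaining request slots

def place_request(latency_budget_ms, tiers):
    """Pick a tier that can still meet the latency budget.

    Toy policy: prefer distant, large tiers for latency-tolerant work,
    fall back to the fog edge only when the budget is tight.
    """
    # Try the most distant (biggest, cheapest) tiers first.
    for tier in sorted(tiers, key=lambda t: -t.rtt_ms):
        if tier.rtt_ms <= latency_budget_ms and tier.capacity_left > 0:
            tier.capacity_left -= 1
            return tier.name
    raise RuntimeError("no tier can satisfy this request")

tiers = [Tier("edge-fog", rtt_ms=5, capacity_left=10),
         Tier("regional-dc", rtt_ms=40, capacity_left=1000),
         Tier("central-cloud", rtt_ms=120, capacity_left=10**6)]

print(place_request(200, tiers))  # -> central-cloud (batch analytics)
print(place_request(10, tiers))   # -> edge-fog (actuator control loop)
```

The design choice the sketch encodes is the one argued in this section: the scarce edge capacity is reserved for requests that genuinely need it, while everything else still flows to the cloud.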
## 4 TECHNOLOGIES
## 4.1 The Ubiquity of Devices
There is a tremendous increase in the number of devices being connected to the network. This increase is driven by two sources: user devices and sensors/actuators. Cisco conservatively estimates that there will be 50 billion connected devices by 2020 [D. Evans]. This explosion in the number of devices per person is explained by the proliferation of mobile devices, e.g., smartphones and tablets, notably in developing countries. Yet these impressive numbers will soon be surpassed by the crowd of sensing/acting devices placed virtually everywhere in the so-called Internet of Things (IoT) and in pervasive sensor networks. Wearable computing devices (smart watches, glasses, etc.), smart cities [Taewoo Nam et al], smart metering devices deployed by energy suppliers to monitor usage at the household level [Beth Plale et al], self-driving vehicles, sensor networks, and so on will be major drivers of the ubiquity of connected devices. All these applications are increasing the presence of devices everywhere around us. This ubiquity has prompted intensive research, leading to new kinds of technical achievements that aim to overcome today's limitations in device size and battery lifespan. This may itself encourage the deployment of even more devices, creating a virtuous circle.
## 4.1.1 Battery Size and Lifetime
Cost is a key concern driving devices to be as small as possible. Small size also increases device portability and reduces power consumption, which may be crucial in some deployments, e.g., portable phones or durable fire sensors in remote backcountry. Packaging and power management technologies aim to make smaller and more self-sufficient devices that can run longer at minimal cost. System on Chip (SoC) technologies integrate components such as the CPU, memory (e.g., HP's memristor [Duncan R et al]), clocks, and external interfaces in a single chip. They require less room and consume less power than conventional multi-chip systems. System in Package (SiP) is a solution somewhere between SoCs and multi-chip systems: it packages circuits in a single unit, or 'package', and is used today for small devices such as smartphones. Even where better packaging improves power consumption, this alone may not be enough for a device to last longer. The IoT is calling for long-life sensors which in some cases will not be able to connect to any power supply. Today's lithium-ion batteries (LiB) are used in mobile devices of all kinds; solid-state LiB designs are expected to replace them in the medium term, offering up to three times today's energy density. Still, batteries based on chemical power sources can become a limiting factor in future developments, since higher power is required within a modest fraction of the size of current batteries. Research efforts are focused on 3D micro-batteries. "3D" is a term that covers the efforts to arrange the anode and cathode of batteries in 3D layouts (beyond the usual 2D arrangements) to enhance both power density and energy density. Using those 3D structures at a minute scale is producing batteries of modest size and considerable power. Moreover, we should watch the emergence of RF-powered computing [Shyamnath Gollakota et al], which shows that energy can be harvested from ambient radio-frequency signals (for instance, TV or cellular) to power low-end devices that sense, compute, and communicate. Devices powered by renewable energy are also now available.
## 4.2 Network Management and Administration
Having many devices can be very helpful to improve our systems at all levels, from our homes to the planet as a whole, and to help us understand them better. These devices must be configured and maintained once they are deployed, e.g., a future phone hosting a service sold to a third-party customer, or a remote sensor on the seabed. Managing networks of billions of heterogeneous devices that run one or more services is immensely challenging and complex. A few Fog technologies have been emerging to help tame this complexity: "softwareisation" (programmability) of network and service management for better flexibility; declarative techniques for scaling management; "small" edge clouds to host services close to the endpoints or on the endpoints themselves; and distributed (P2P) and sensor-network-like approaches for application self-organization.
## 4.2.1 Network Management Softwareisation (Programmability)
Configuring and keeping fog networks, services, and devices updated and secure is today done independently: switches, servers, services, and devices are separately administered by different tenants. These tasks are labour-intensive and error-prone. For example, well-known Internet companies make sure a single administrator handles a large number of machines running a single service type. Configuring and maintaining many different types of services running on billions of heterogeneous devices will only aggravate our current management problems. The Fog needs heterogeneous devices and their running services to be handled in a more homogeneous manner, ideally fully automated by software. Network Function Virtualization (NFV) is arguably the most remarkable development in this respect. NFV is the reaction of telco operators to their lack of agility and their constant requirement for reliable systems. NFV aims to provide the capability of dynamically deploying on-demand network services (e.g., a firewall, a router, a WAN accelerator, a new LAN, or a VPN) or customer services (e.g., a database) where and when desired. Software Defined Networks (SDNs) are one of the building blocks needed for NFV, since some network services, like creating new "virtual" networks on top of the physical one, can be realized by software alone. For instance, several gateways can be deployed as virtual machines and their traffic can be tightly controlled thanks to SDN capabilities in a local edge cloud. The softwareisation of a traditionally hardware-driven business built around switches and servers, where services used to be deployed, will result in cheaper and more agile operations. A comparable approach is proposed by Cisco with its first software-only version of the IOS, bundled with a Linux distribution (IOx). The router itself becomes an SDN-enabled virtualization infrastructure where NFV and application services are deployed close to the place where they are actually going to be used. On the other hand, IOx's computing capabilities will still be limited. [Arati Baliga et al] and [Arijit Banerjee et al] go further, although NFV capabilities do not reach end-user devices or sensors yet. Likewise, NFV and IOx only consider the requirements of vendors and telco operators. Network equipment is only a small fraction of the Fog's devices. Billions of handheld user devices and potentially trillions of sensors need a similar automation framework that can cope with the required scale.
## 4.2.2 Declarative and Asymptotic Methods
At fog scale, only declarative and asymptotic methods appear to be feasible [G. Pollock et al]. These techniques involve the components in their own management tasks so that: (a) the manager only specifies the final desired state (declarative) rather than individual commands; and (b) the manager accepts that the configuration may never fully materialize, because by the time it is rolled out the system may have changed, e.g., fog nodes are gone or fresh nodes have appeared. As an illustration of these approaches, see the work on declarative and asymptotic management carried out by HP Labs in the past [G. Pollock et al]. Other vendors are also starting to use declarative frameworks to manage scale and complexity; see, for instance, Cisco's approach with OpFlex (a kind of Cisco OpenFlow, supported by IBM and Midokura) for SDNs. A toy sketch of this declarative, converge-over-time style of management follows.
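The sketch below is our own minimal reconciliation loop, assuming the "desired state" is just a service-to-replica-count map; the node and service names are made up, and a real fog-scale system would distribute such a loop rather than run it in one place.

```python
def reconcile(desired, nodes):
    """One asymptotic reconciliation pass.

    `desired` maps service name -> replica count; `nodes` maps node id ->
    set of services it currently runs. Nodes may appear or vanish between
    passes, so the manager never assumes convergence; it just re-declares
    the goal and lets each pass move the system closer to it.
    """
    for service, want in desired.items():
        running = [n for n, svcs in nodes.items() if service in svcs]
        # Start missing replicas on the least-loaded live nodes.
        for node in sorted(nodes, key=lambda n: len(nodes[n])):
            if len(running) >= want:
                break
            if node not in running:
                nodes[node].add(service)
                running.append(node)
        # Stop surplus replicas.
        for node in running[want:]:
            nodes[node].discard(service)

desired = {"cache": 2, "firewall": 1}
nodes = {"fog-a": set(), "fog-b": {"cache"}, "fog-c": set()}
reconcile(desired, nodes)
del nodes["fog-b"]           # churn: a fog node disappears
nodes["fog-d"] = set()       # ...and a fresh one joins
reconcile(desired, nodes)    # next pass re-approaches the desired state
print(nodes)
```

Note that the manager issues no per-node commands: it only restates the goal, which is exactly what makes this style tolerant of churn at fog scale.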
## 4.2.3 Clouds at the Edge
Mini-clouds are being deployed closer to the edge, nearer to the user, in the form of private clouds. Telcos and equipment vendors are moving in that direction as well. LTE's Evolved Packet Core (EPC) can easily be extended to host its own mini-clouds. Having a small cloud at the EPC can help deliver services close to users (at the edge) and confine traffic there, while reducing "trombone" routes with the help of SDNs. Likewise, IOx is just an evolution of the current cloud model in which routers become the virtualization infrastructure, given that their pervasiveness and hierarchical position help them serve a domain. The fog enables user devices to become the virtualization platform themselves. In this way, they can lease some of their computing and storage capacity for applications to run on them. In the Fog, both the network and the services running on top of it can be deployed on demand over a fog of edge devices. Service delivery to specific regions of the network is greatly simplified. For example, [S. Sae Lor et al] gives an example of storage functions being dynamically deployed in different mini-fogs at selected network locations so that bulky data transfers are accelerated.
## 4.2.4 Distributed Management
The management approaches discussed so far rely on a provider, e.g., the telco operator, as the only entity aware of network and service operation. However, there are also P2P and sensor-network-like techniques that allow endpoints to cooperate in order to achieve comparable results, yet scale better. P2P technologies have been around for a long time and are mature enough to help deliver the fog's vision. They can exploit locality while removing the need for a central management point. Applications like Popcorn Time have shown the benefits of a P2P model to deliver global services at scale. Many of the ideas behind P2P content distribution networks (CDNs) are applicable to the fog as well; a fog application could be seen as a content distribution network where some sort of data is exchanged between peers. In this way, in the fog, a subset of network and user device/sensor components can act as a mini-cloud, in other words a miniature fog. Hence the fog becomes an environment where applications and data no longer need to stay in centralized data centers. This improves scalability and enables users to retain control of, and responsibility for, their own data and applications. Applications will then be executed by bringing into play "droplets", little pieces of code that can securely run on devices at the edge with minimal communication with central components, reducing undesired transfers of data to central servers in corporate data centers.
## 4.3 The Connectivity at Fog Scale
The presence of possibly unobtrusive devices everywhere is only one of the fog's components. As indicated above, all these devices must be connected. The sheer volume of devices (50 billion handheld user devices in 2020, together with many other sensing/acting IoT devices working around the clock) will likely soon create serious bandwidth and connectivity issues. A noteworthy report in The Economist, titled "Augmented Business", described how cows can be monitored to ensure a healthier, more plentiful supply of meat for people to consume. By current estimates, each cow produces around 200 megabytes of information per year.
## 4.3.1 The Physical Connectivity
A consequence of having many billions of devices producing and consuming data at the network's edge is that these networks become an enormous bottleneck [Metro network traffic growth]. Network operators have been investing heavily in a varied mix of new wireless access technologies to cope with the sudden increase in devices per user; however, these LAN, Personal Network (PN), WAN, and MAN investments may fall short in an IoT world. Most efforts in WAN/MAN are focused on LTE; LTE release 12 will be the first specification that fulfils all the requirements of the International Telecommunication Union to be labelled 4G. 4G LTE/EPC should be fully rolled out by 2017 [Metro network traffic growth] and it will increase the available bandwidth of edge networks [D. Astely et al]. LAN technology has improved to reduce congestion and boost the available bandwidth at lower power consumption; see, for instance, the latest Wi-Fi specification, 802.11ac. Finally, there have been massive improvements in PNs. These short-range technologies require nodes to organize themselves, as no central access point may be available. Bluetooth Low Energy, ANT+, ZigBee, and RF4CE are the most notable.
### 4.3.2 Network Connectivity
Beyond the improvements in wireless networks, further advances are expected to enable communication in scenarios where connecting all endpoints to some LAN or WAN is not possible, because of costs or the lack of enough link points such as base stations. In the fog, each node must be able to act as a router for its neighbours and must be resilient to nodes entering and leaving the network, as well as to mobility. Mobile Ad-hoc Networks (MANETs), which have been a prominent research topic for a long time now [S. K. Sarkar et al], could be the basis of future fog networks, as they will enable the deployment of densely populated networks without requiring fixed and costly infrastructure to be available beforehand. In fact, Bluetooth LE, ANT+, ZigBee, and RF4CE all allow the creation of MANETs, at least at local range. However, most of the work to enable MANETs in MAN and WAN networks is still to be done. Wireless Mesh Networks (WMNs) are solutions close to MANETs. A WMN can employ mesh routers at its core, which are static; nodes use those routers to get connectivity, or other nodes if no direct association with the routers can be established. Routers provide access to other networks, such as cellular, Wi-Fi, etc. There is still intense research activity on WMNs and MANETs. On top of WMNs and MANETs, or directly on top of the wireless network if feasible, we find the protocols that have been developed for the IoT, such as MQTT [MQTT Protocol] and CoAP [CoAP Protocol]. All are designed with two goals in mind: low resource consumption and resilience to failure; and they tend to follow a publish/subscribe (pub/sub) communication model. Both the IoT protocols and the network can benefit from data locality: they no longer need to send all the data around the world continuously. Only aggregates may be sent, or a pub/sub model can be adopted, which can greatly ease our connectivity needs, containing potential congestion troubles at the network's edge, all the more with the advent of edge-switch-, handheld-, and sensor-enabled miniature fogs. In addition to confining traffic at the edge, this has a very positive effect on confidentiality.
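The pub/sub model mentioned above can be tried out with an off-the-shelf MQTT client. The sketch below uses the paho-mqtt library (1.x callback signatures, `pip install paho-mqtt`); the broker host and topic are placeholders, and a real fog deployment would point them at a broker running on a nearby edge node rather than a distant data center.

```python
import paho.mqtt.client as mqtt

BROKER = "broker.example.local"   # placeholder: a fog-local MQTT broker
TOPIC = "plant1/line3/temperature"

def on_connect(client, userdata, flags, rc):
    # (Re)subscribe on every connect so the subscription survives reconnects.
    client.subscribe(TOPIC, qos=1)

def on_message(client, userdata, msg):
    # Handle the reading at the edge; nothing is forwarded upstream here.
    print(f"{msg.topic}: {msg.payload.decode()}")

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883, keepalive=60)

# A sensor-side publisher would simply do:
#   client.publish(TOPIC, "21.7", qos=1)

client.loop_forever()
```

Because publishers and subscribers only share a topic name, neither side needs to know or reach the other directly, which is exactly what lets traffic stay confined to the broker at the edge.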
## 4.4 Confidentiality and Privacy
Today, we continuously give away personal information by using various products, services, and platforms. Albrecht et al. picture a blunt, yet realistic, state of affairs: we may think we are in charge of our loyalty cards and our mobile applications and our smart fridges, but we should not deceive ourselves. The information is not our own. It belongs to Google, and IBM, and Cisco Systems, and the global MegaCorp that owns your local store. If you do not believe us, just try removing your data from their databases [K. Albrecht et al]. Users are becoming increasingly worried about the risk of having their private data exposed. Accordingly, beyond the technical challenges posed by the ubiquity of devices, another trend will push for a fog scenario where data is not sent to a few centralized services but is instead kept in the network for better protection. Data ownership will be a critical foundation of the fog, where some applications will be able to use the network to run applications and manage data without relying on centralized services. Storing encrypted sensitive data in standard fogs is one option for preserving privacy. However, this makes it genuinely hard to perform any processing over such data. There is substantial research work on this subject, for instance using crypto-processors or applying special encryption schemes that compute over data while preserving some of its properties, thus allowing certain restricted tasks to be performed on it [Raluca Ada Popa et al]. Still, such options have limited applicability. Hence, users will demand innovative ways to protect their privacy from any potential Big Brother-like entity. This will be a strong incentive to adopt fog technologies, as they will enable the network to replace centralized services.
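As a minimal sketch of the "encrypt before the data leaves the owner" option discussed above (and of its limitation: the storage layer cannot compute over what it holds), the following uses the Fernet recipe from the Python `cryptography` package; the meter name and the dictionary standing in for fog storage are our own illustrative assumptions.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# The key never leaves the data owner's device; storage nodes in the
# fog (or cloud) only ever see opaque ciphertext.
key = Fernet.generate_key()
box = Fernet(key)

reading = b'{"meter": "home-42", "kwh": 3.7}'
ciphertext = box.encrypt(reading)

store_in_fog = {}                      # stand-in for a DHT or edge store put()
store_in_fog["home-42/latest"] = ciphertext

# Later, only the key holder can make sense of what was stored; this is
# also why no third party can run analytics over it, the trade-off
# discussed above.
assert box.decrypt(store_in_fog["home-42/latest"]) == reading
```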
## 5 PROSPECTIVE CHALLENGES
Although the research efforts and user trends described in the previous sections are pushing to bring about the Fog, the way is far from clear. There are many open issues that must be addressed to make the fog a reality. It is essential to identify these issues explicitly so that future research work can focus on them. The set of open challenges for the fog to become reality is:

- _Limitations of compute/storage:_ current trends are improving this with smaller, more energy-efficient, and ever more powerful devices; e.g., one of today's phones is more powerful than many high-end desktops from fifteen years ago. Still, further advances are needed for non-consumer devices.
- _Administration or management:_ besides setting up the communication routes across end nodes, IoT/general-purpose computing nodes and the applications running on top of them must be properly set up and configured to work as needed. With potentially billions of small devices to be configured, the fog will strongly rely on decentralized (scalable) management methods that are yet to be tried at this unprecedented scale. One thing that can be foreseen with a certain level of confidence is that there will be no centralized control of the complete fog, and asymptotic declarative configuration methods will become more essential.
- _Discovery or sync:_ applications running on devices may still need some agreed centralized points, e.g., to set up an upstream backup if there are too few peers in our storage application.
- _Standardization:_ at the moment, no standardized mechanisms are available by which every member of the network can announce its availability to host the software components of others, and by which others can send it their software to be run.
- _Accountability:_ enabling users to share their spare resources to host applications is critical to enable new business models around the fog concept. A genuine set of incentives must be created. The incentives can be monetary or otherwise, e.g., free data allowances. On the other hand, the absence of a central controlling entity in the fog makes it difficult to verify whether a given device is indeed hosting a component droplet or not.
- _Programmability:_ controlling the application lifecycle is already a challenge in cloud environments [19]. The existence of minimal functional units ("droplets") in more locations (devices) requires the right abstractions to be put in place so that software developers do not have to deal with these difficult issues [12]. Easy-to-use APIs for developers will heavily depend on underlying management components that provide them with the right abstractions to hide the enormous complexity of the Fog. A few vendors like Microsoft have already taken steps to position themselves in this space.
- _Protection/security:_ the same security concerns that apply to contemporary virtualized environments can be expected to affect Fog devices hosting applications. The presence of secure sandboxes for the execution of droplet applications raises brand-new challenges (privacy and trust). Before using other devices or miniature fogs in the network to run some software, isolation and sandboxing mechanisms must be in place to ensure bidirectional trust among the cooperating parties. The fog will allow applications to process users' data on third-party hardware and software. This clearly raises strong concerns about data security and its visibility to those third parties.
## 6 CONCLUSION
Fog Computing [dataversity.net] represents a very important evolution in Cloud Computing and in computing in general. Its emergence emphasizes the ascendance of a decentralized model for computing that is more flexible and distributed than the conventional centralized paradigm. Such agility and flexibility are essential as Big Data applications take the form of the IoT, with its low- or no-latency requirements. Fog Computing may not be a panacea for the extraordinary demands of the IoT and the relentless shift towards mobile computing. However, it at least recognizes and attempts to address many of the limitations of centralized models, which simply attract more traffic, with less and less transmission capacity and network management capability, as Big Data continues to grow. It provides a sensible architectural response to these concerns, one which may become even more compelling in the near future.
## 7 REFERENCES
[1] http://newsroom.cisco.com/featurecontent?type=webcontent&articleId=1365576
[2] http://www.datacenterknowledge.com/archives/2013/08/23/welcome-to-the-fog-a-new-type-of-distributed-computing/
[3] CoAP Protocol, IETF Draft. http://tools.ietf.org/html/draft-ietf-core-coap-18
[4] MQTT Protocol, OASIS Specification. http://www.oasis-open.org/committees/mqtt/
[5] Metro network traffic growth: An architecture impact study. Technical report, Bell Labs Alcatel-Lucent, December 2013.
[6] K. Albrecht et al., Connected: To everyone and everything [guest editorial: special section on sensors]. Technology and Society Magazine, IEEE, 32(4):31–34, Winter 2013.
[7] D. Astely et al., LTE: The evolution of mobile broadband. Communications Magazine, IEEE, 47(4):44–51, April 2009.
[8] Arati Baliga et al., Virtual private mobile network towards mobility-as-a-service. In Proceedings of the Second International Workshop on Mobile Cloud Computing and Services, MCS '11, pages 7–12, New York, NY, USA, 2011. ACM.
[9] Arijit Banerjee et al., A lightweight mobile cloud offloading architecture. In Proceedings of the Eighth ACM International Workshop on Mobility in the Evolving Internet Architecture, MobiArch '13, pages 11–16, New York, NY, USA, 2013. ACM.
[10] Flavio Bonomi et al., Fog computing and its role in the internet of things. In Proceedings of the First Edition of the MCC Workshop on Mobile Cloud Computing, MCC '12, pages 13–16, New York, NY, USA, 2012. ACM.
[11] Duncan R. Stewart et al., The missing memristor found. Nature, 453(7191):80–83, 2008.
[12] D. Evans, "The Internet of Things: How the next evolution of the Internet is changing everything". Technical report, Cisco IBSG, April 2011.
[13] Shyamnath Gollakota et al., The emergence of RF-powered computing. Computer, 99(PrePrints):1, 2013.
[14] Kirak Hong, David Lillethun, Umakishore Ramachandran, Beate Ottenwälder, and Boris Koldehofe, Mobile fog: A programming model for large-scale applications on the internet of things. In Proceedings of the Second ACM SIGCOMM Workshop on Mobile Cloud Computing, MCC '13, pages 15–20, New York, NY, USA, 2013. ACM.
[15] Taewoo Nam et al., Smart city as urban innovation: Focusing on management, policy, and context. In Proceedings of the 5th International Conference on Theory and Practice of Electronic Governance, ICEGOV '11, pages 185–194, New York, NY, USA, 2011.
[16] Beth Plale et al., Adaptive cyberinfrastructure for real-time multiscale weather forecasting. Computer, 39(11):56–64, November 2006.
[17] G. Pollock et al., The asymptotic configuration of application components in a distributed system. Technical report, University of Glasgow, Glasgow, UK.
[18] Raluca Ada Popa et al., Processing queries on an encrypted database. Communications of the ACM, 55(9):103–111, September 2012.
[19] S. Sae Lor et al., The cloud in core networks. Communications Letters, IEEE, 16(10):1703–1706, October 2012.
[20] S. K. Sarkar et al., Ad Hoc Mobile Wireless Networks: Principles, Protocols, and Applications. CRC Press, 2007.
[21] Luis M. Vaquero et al., Towards runtime reconfiguration of application control policies in the cloud. J. Netw. Syst. Manage., 20(4):489–512, December 2012.
[22] http://www.dataversity.net/the-future-of-cloud-computing-fog-computing-and-the-internet-of-things/
|
{
"disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references', 'abstract'}. Paper or abstract available at https://api.unpaywall.org/v2/10.5120/cae2015651946?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.5120/cae2015651946, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "BRONZE",
"url": "https://doi.org/10.5120/cae2015651946"
}
| 2,015
|
[] | true
| null |
[] | 9,553
|
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/01290c447f2304ff3c347c3ac0dfa22f053e281b
|
[
"Computer Science"
] | 0.910503
|
A Balanced Trust-Based Method to Counter Sybil and Spartacus Attacks in Chord
|
01290c447f2304ff3c347c3ac0dfa22f053e281b
|
Secur. Commun. Networks
|
[
{
"authorId": "2998809",
"name": "R. Pecori"
},
{
"authorId": "1685596",
"name": "L. Veltri"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": null,
"id": null,
"issn": null,
"name": null,
"type": null,
"url": null
}
|
A Sybil attack is one of the main challenges to be addressed when securing peer-to-peer networks, especially those based on Distributed Hash Tables (DHTs). Tampering routing tables by means of multiple fake identities can make routing, storing, and retrieving operations significantly more difficult and time-consuming. Countermeasures based on trust and reputation have already proven to be effective in some contexts, but one variant of the Sybil attack, the Spartacus attack, is emerging as a new threat and its effects are even riskier and more difficult to stymie. In this paper, we first improve a well-known and deployed DHT (Chord) through a solution mixing trust with standard operations, for facing a Sybil attack affecting either routing or storage and retrieval operations. This is done by maintaining the least possible overhead for peers. Moreover, we extend the solution we propose in order for it to be resilient also against a Spartacus attack, both for an iterative and for a recursive lookup procedure. Finally, we validate our findings by showing that the proposed techniques outperform other trust-based solutions already known in the literature as well.
|
Hindawi
Security and Communication Networks
Volume 2018, Article ID 4963932, 16 pages
[https://doi.org/10.1155/2018/4963932](https://doi.org/10.1155/2018/4963932)
#### Research Article
# A Balanced Trust-Based Method to Counter Sybil and Spartacus Attacks in Chord
###### Riccardo Pecori 1 and Luca Veltri 2
_1SMARTEST Research Centre, eCAMPUS University, Novedrate, CO 22060, Italy_
_2Department of Engineering and Architecture, University of Parma, Parma, PR 43124, Italy_
Correspondence should be addressed to Riccardo Pecori; riccardo.pecori@uniecampus.it
Received 23 February 2018; Revised 27 September 2018; Accepted 14 October 2018; Published 12 November 2018
Academic Editor: Carmen Fernandez-Gago
[Copyright © 2018 Riccardo Pecori and Luca Veltri. This is an open access article distributed under the Creative Commons](https://creativecommons.org/licenses/by/4.0/)
[Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is](https://creativecommons.org/licenses/by/4.0/)
properly cited.
A Sybil attack is one of the main challenges to be addressed when securing peer-to-peer networks, especially those based on
Distributed Hash Tables (DHTs). Tampering routing tables by means of multiple fake identities can make routing, storing, and
retrieving operations significantly more difficult and time-consuming. Countermeasures based on trust and reputation have already
proven to be effective in some contexts, but one variant of the Sybil attack, the Spartacus attack, is emerging as a new threat and
its effects are even riskier and more difficult to stymie. In this paper, we first improve a well-known and deployed DHT (Chord)
through a solution mixing trust with standard operations, for facing a Sybil attack affecting either routing or storage and retrieval
operations. This is done by maintaining the least possible overhead for peers. Moreover, we extend the solution we propose in order
for it to be resilient also against a Spartacus attack, both for an iterative and for a recursive lookup procedure. Finally, we validate our
findings by showing that the proposed techniques outperform other trust-based solutions already known in the literature as well.
###### 1. Introduction
By now, peer-to-peer (P2P) networks, be they structured or not, account for a significant and mature share of overall Internet traffic. Their usage ranges from file sharing to VoIP applications [1], from online alerting systems to intrusion detection, and so forth. Moreover, their deployment is experiencing a new surge in scenarios involving the Internet of Things (IoT) [2]. The most important type of current structured P2P network is the one relying on Distributed Hash Table (DHT) algorithms. Among these, we can mention the well-known Kademlia [3], Chord [4], CAN [5], and Pastry [6], but others are emerging. A DHT is a distributed structure that maps identifiers to values, similar to a hash table. The lookup is performed through an efficient routing mechanism leading to the peer that actually maintains the mapping. In a DHT-based P2P network, the identifiers generally refer to both the peers (peer IDs) and the resources (resource IDs). The logical overlay and the routing tables of DHTs feature a rigid structure that allows them to operate quickly and simply but, at the same time, makes them vulnerable to various malicious attacks. The so-called Sybil attack [7] is a prominent example of these attacks. It usually involves an attacker managing a large number of false IDs, called sybils, able to taint the routing tables and, as a consequence, capable of disrupting or degrading the main operations of the DHT.
_1.1. Novelty, Contribution, and Motivation._ In this work we consider a scenario that is implicitly infected by sybils, but we also address Spartacus-behaving nodes (spartaci), i.e., nodes that steal the IDs of other nodes. This is one of the main novelties of this work, since the Spartacus attack, a variant of the Sybil attack, has not yet been widely studied in the relevant literature. Differently from [8], which provides an admission control system, we do not focus on limiting the access of malicious nodes to the P2P network; rather, similarly to the contribution in [9], which involves a clean routing process, we devise a trusted lookup and storage mechanism involving only those nodes that turn out to be the most trustworthy. As a consequence, the querying node is able to decide by itself which nodes to trust, by considering, in a dynamic and evolving way, the interactions it has experienced.
In the aforementioned framework, we investigate and compare some reasonable trust-based techniques to avoid, or at least moderate, the misconduct of malicious peers, be they sybils or spartaci. In particular, we focus on the well-known Chord DHT, modifying its lookup as well as its storage and retrieval procedures in a way similar to what has already been done for Kademlia in [10]. Since Chord computes the distance between pairs of nodes differently, and its lookup and storage and retrieval procedures differ from those of Kademlia, we needed to devise proper modifications with respect to the solution presented in [10].
The contribution of this work is multifaceted and can be summarized in the following points:

(1) We adapted a trust-based mixed strategy, which has already proved effective in Kademlia both for routing [11] and for storage and retrieval procedures, to Chord, verifying its effectiveness during lookup as well as storage and retrieval procedures in a network infected by sybils.

(2) We then tested the proposed solution, called S-Chord, in a Spartacus attack scenario, considering both iterative and recursive lookup procedures, and we propose some improvements in order to make S-Chord resilient also to a Spartacus attack.

(3) Finally, we compare the proposed strategy with other trust-based solutions published in the literature over the years.
_1.2. Structure of the Paper._ The remainder of this article is structured as follows: in Section 2 we summarize the main concepts of the Sybil and Spartacus attacks and describe some possible solutions already analyzed in the literature; in Section 3 we investigate the effects of a Sybil attack in Chord, while we present in detail the extension of the technique in [10] to Chord in Section 4. In Section 5 we study the proposed strategy in a Spartacus attack scenario, showing how the aforementioned procedure may prove effective also in the presence of spartaci through some fine-tuning. Finally, Section 6 concludes the work and provides some suggestions on possible future follow-ups.
###### 2. Background and Related Works
The Sybil attack, first described in 2002 by Douceur in [7], exploits the redundancy of DHT networks, and it is usually launched by a malicious physical entity owning multiple virtual and logical identities. The trick is to introduce into the P2P network many fake identities (the so-called _sybils_), which are controlled by a single attacker at the physical layer. This allows attackers to monitor the traffic, to partition the network, e.g., through an eclipse attack, or to misuse the DHT in different ways, e.g., by performing a Distributed Denial of Service (DDoS) attack [12].

The different types of threats that sybils can generate can be classified into the following categories: (i) routing table invasions, (ii) storage and retrieval malfunctions, and (iii) miscellaneous attacks [13]. The first category encompasses incorrect lookups and wrong routing table updates; the second concerns refusing to store resources or pretending to own resources the nodes actually do not have; while the third may involve inconsistent behaviors, overloading of specific nodes, rapidly joining and leaving the network, and so on.
Many solutions have been devised over the years for eliminating or mitigating the effects of a Sybil attack: they range from the introduction of trusted certification and computational puzzle approaches to costs and fees and trusted devices [10].

Some renowned solutions are Whanau [14], X-Vine [15], and Persea [16]. However, these are protocols that leverage an additional social network of trusted relationships between the users of the P2P network, and they need proper datasets to be evaluated. These are important limitations and overheads that, if not met, do not allow one to enact any countermeasure against the sybils. Moreover, these solutions take for granted that friends in a social network are trusted users in a P2P network, which is not always true. Furthermore, they restrict the access of the sybils to the P2P system by constraining the number of possible paths between good and malicious peers. Conversely, we resort to a solution that applies to an open scenario where sybils may freely join the network: we do not place limitations on the capabilities of an attacker.
The Sybil attack is still being investigated nowadays, as demonstrated by recent works such as [17, 18]. In [17] the authors focus on overcoming a limitation of Persea through lookup inspection for detecting sybils, while the latter presents VoteTrust, a system to detect sybils through the friend invitation graph of social networks. However, both of these systems focus on sybil detection, whereas the scenario of our work envisages a P2P network inherently infected by malicious nodes. Moreover, both works require active collaboration among good nodes for sybil detection, and this may introduce a further overhead that our solution carefully avoids through an automatic trust computation.
In this work, we analyze routing as well as storage and retrieval attacks in a Chord DHT and provide a solution, already proven effective in Kademlia [10], that uses trust metrics in a balanced way. We chose to focus on Chord, as it is one of the most well-known, relevant, and representative DHTs, and, even though it was introduced in the early 2000s, it is still studied and considered a reference DHT for its simplicity and efficiency. This is shown, for example, by a recent work that uses Chord in large-scale P2P networks in conjunction with a dynamic trust computing model [19], as well as by recent studies on further improving Chord's inherent efficiency [20] and correctness [21], or on reducing the joining time of Chord nodes through the usage of anchor peers in an educational scenario [22]. Further recent contributions have concerned the usage of Chord in mobile networks [23] and for location-based services [24]; therefore, Chord's security and reliability are still important aspects to be carefully assessed and studied.
Some solutions for improving Chord's security are already present in the relevant literature, such as [25], where the authors deploy certificates and signatures. Nevertheless, an increased number of messages, compared with standard Chord, is required; moreover, the authors try to remove malicious nodes from the network, which leads to a scenario that differs from the one considered in this work. We preferred another approach: accepting sybils in the network and trying to overcome their malicious activity through direct trust, in such a way that an attacker cannot recognize that a countermeasure has been enacted. This follows some successful solutions mainly studied in [10, 26].
Concerning trust and reputation in general, in [27] a method for comparing trust models based on a hierarchical fuzzy inference model is proposed. However, that contribution focuses only on a file downloading scenario, like the mechanism proposed in [28], which is moreover applied only to unstructured P2P networks. Conversely, the mechanism proposed in [29] can be applied in P2P networks based on DHTs, but it is difficult to implement in a real-world scenario.
Concerning the application of trust and reputation directly to Chord, some ideas can be found in [30–33]. In [31] the authors apply some of the strategies proposed by Koyanagi in [30], extending them to security purposes rather than limiting them to maintenance aims. More precisely, the authors of [31] exploit Bayesian networks in order to enrich Chord finger tables with a further column, called "trust," whose value is used to finalize trustworthy lookups. They take advantage of a sort of variable central entity. In contrast, in our work, we prefer to keep a kind of decentralized web of trust, as is the case with [34], and we consider a more general trust score, not limited to features like age or downloads and uploads that could be specific to certain P2P network usages.
In [32], Koyanagi et al. propose a solution similar to ours, where a general trust score computation and its weighting are used. However, our work differs in that we consider a different balanced strategy, a mix between standard and trusted Chord using a tunable parameter. We also take into account the so-called environmental risk, analyzing the outcomes of a growing number of sybils rather than only monitoring a constant percentage of them. Moreover, unlike [32], (i) we consider churn a normal action of peers, (ii) our solution does not require a look-ahead phase, and (iii) the computation of trust is simpler, as we do not consider trust propagation but only direct experiences.
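To make "direct experiences only" concrete, the sketch below keeps per-peer success/failure counts and mixes the resulting score with ring closeness through a tunable weight, in the spirit of the balanced strategy described above. The exact metric, the Laplace smoothing, and the utility function are our illustrative assumptions, not necessarily those of S-Chord, and the hop selection deliberately ignores Chord's constraint that a finger must not overshoot the key.

```python
ID_BITS = 8                     # toy ring with 2**8 identifiers
RING = 2 ** ID_BITS

class DirectTrust:
    """Trust from direct interactions only: no propagation, no gossip."""

    def __init__(self):
        self.ok = {}    # peer id -> successful interactions
        self.bad = {}   # peer id -> failed/malicious interactions

    def record(self, peer, success):
        table = self.ok if success else self.bad
        table[peer] = table.get(peer, 0) + 1

    def score(self, peer):
        # Laplace-smoothed success ratio in (0, 1); unknown peers get 0.5.
        s, f = self.ok.get(peer, 0), self.bad.get(peer, 0)
        return (s + 1) / (s + f + 2)

def chord_distance(a, b):
    return (b - a) % RING       # clockwise distance on the Chord ring

def next_hop(key, candidates, trust, alpha=0.5):
    """Mix of standard Chord (alpha=0) and purely trusted routing (alpha=1)."""
    def utility(peer):
        closeness = 1 - chord_distance(peer, key) / RING
        return (1 - alpha) * closeness + alpha * trust.score(peer)
    return max(candidates, key=utility)

trust = DirectTrust()
trust.record(200, success=False)     # a suspected sybil misbehaved once
trust.record(190, success=True)
print(next_hop(key=205, candidates=[190, 200], trust=trust, alpha=0.7))
```

With these numbers the slightly more distant but better-behaved peer 190 wins over peer 200, illustrating how the tunable weight lets trust override pure ring proximity.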
Finally, the work in [33] is the most recent attempt to apply a trust model to Chord, by leveraging _guaranteeing_ and _archive_ peers, whose reputation is evaluated together with that of service peers. The proposal also provides an incentive system and an anonymous reputation management strategy; nevertheless, the authors consider at least 4 different roles for a peer and envision a complex system for establishing guarantee relationships (at least 10 messages) and for a lookup transaction (at least 14 messages). This is in contrast with our idea of keeping the system as simple as possible, featuring a minimum number of changes and a minimum overhead with respect to standard Chord; moreover, we do not set up different roles for the peers, which would lead to extra overhead, but prefer a distributed approach where all peers are at the same level.
The Spartacus attack, a variation of the Sybil attack,
has not yet been thoroughly examined within the relevant
literature. In this attack, which is especially dangerous in an environment focusing on trust and reputation, the malicious physical entity does not generate a great amount of pseudonymous logical IDs but simply steals them from other real
nodes, inheriting their trust value. Such an action takes place
in the bootstrap phase, when a spartacus, intending to enter
the network, looks for a possible bootstrap node, and later for
nodes with high trust score. The spartacus pings these nodes
and, when they appear to be disconnected, it replaces them by
copying the hash of their IDs. If no way exists to bind a node to its virtual ID (and such is the scenario we consider), the node replaced by the spartacus needs to choose another virtual ID and must start again to create its own trust score from the beginning. This malicious activity allows spartaci
to acquire a higher trust score than sybils, at least initially,
and if churn is enacted, this attack is much more feasible and
dangerous. An example of possibly disruptive consequences can be found in a Fog computing IoT scenario, where smart
gateways, belonging to the fog layer, may be part of a virtual
peer-to-peer overlay. If one of these fog gateways is attacked
by a spartacus node, all the information coming from the
sensors managed by the attacked node may be easily lost or
counterfeited.
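To make the attack mechanics concrete, the following is a minimal Python sketch of the bootstrap-phase behavior just described; `network`, `ping`, `trust_score`, and the node fields are illustrative stand-ins for this sketch, not part of any real Chord implementation.

```python
# Hypothetical sketch of the Spartacus bootstrap behavior described above;
# all names are illustrative stand-ins, not a real Chord API.

def spartacus_join(network, own_node):
    # Rank known peers by the trust score the attacker would inherit.
    candidates = sorted(network.known_peers(),
                        key=network.trust_score, reverse=True)
    for victim in candidates:
        # Wait for a moment in which a high-trust victim looks offline
        # (e.g., during churn)...
        if not network.ping(victim):
            # ...and simply copy the hash of the victim's ID, thereby
            # inheriting the trust other peers had assigned to that ID.
            own_node.node_id = victim.node_id
            network.join(own_node)
            return True
    return False  # no disconnected high-trust victim found; retry later
```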
Some solutions against the Spartacus attack exist, but they are based on public cryptography (as is the case with the “kad-spartacus” extension of the “kadtool” package, http://www.npmjs.com/package/kad-spartacus), on a central server, on the binding between IP address and virtual ID, or on an active verification of the node about to join
the network, carried out by those nodes having already
joined the network. However, these are not fully decentralized
solutions. They require certification authorities and a public
key infrastructure, leading to possible bottlenecks due to
the presence of central servers or further operations on
the joining procedure, such as the binding with network
or physical addresses or other verification activities, which
limit the entering of the peer-to-peer network and can
cause additional overheads. Conversely, in our scenario we
consider the joining procedure free from further constraints
or limitations, and we do not use central authorities or
servers.
In the last part of this work we propose a new adaptation of the solution first proposed for a Sybil attack, called S-Chord, so that it works effectively also against a Spartacus attack. Furthermore, we also provide a comparison between the mixed trust-based technique and those described in [32, 33] and demonstrate that they reach worse results when considering successful lookups and the average number of hops.
###### 3. Sybil Attack in Chord
In this part of the paper, after briefly recalling Chord [4],
we study the degradation of the performances of the DHT
in presence of a Sybil attack by means of simulations,
considering both lookups and storage and retrieval operations. Some distinctions are made also between iterative and
recursive lookup procedures.
_3.1. A Brief Overview of Chord._ Peers and resources are associated with a unique $m$-bit identifier ($m$ is the ID length), calculated by using consistent hashing of the peer addresses. In the following, we indifferently name Chord entities as “peers” or as “nodes.” Chord logical IDs are ordered following a numerical circle called the Chord ring, where distances are computed modulo $2^m$. The owner of a resource key $k$ is defined as the first node with an ID matching or following $k$ in the key ring modulo $2^m$. This node is named $successor(k)$.
The distance between a node $n_1$ and a node $n_2$ is defined as $(n_2 - n_1) \bmod 2^m$; therefore it is asymmetric. Often, for reliability reasons, redundancy is added and more than one node may be in charge of a certain key. Hereafter we
separately describe the procedures for (i) lookup and (ii) join.
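As an illustration only (not the authors' code), the successor relation and the modular distance just defined can be sketched in Python over a sorted list of live node IDs, which is an assumption of this sketch:

```python
# Minimal sketch of the successor relation on the Chord ring; `nodes` is
# a sorted list of live node IDs (an assumption for this illustration).

import bisect

M = 128  # ID length in bits, as in the scenario considered in this paper

def successor(nodes, key):
    """First node whose ID matches or follows `key`, modulo 2**M."""
    i = bisect.bisect_left(nodes, key)
    return nodes[i % len(nodes)]  # wrap around the ring

def distance(a, b):
    """Asymmetric Chord distance (b - a) mod 2**M."""
    return (b - a) % (2 ** M)
```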
_3.1.1. Chord Lookup._ A lookup operation is used to find out what node is in charge of a specific identifier. Chord features two lookup procedures, called, respectively, basic and accelerated. In the basic lookup, each node simply contacts its current successor: the requests are transferred around the ring through successor pointers until they find a pair of node identifiers spanning the requested key identifier; the second in the pair is the target of the lookup query.
The accelerated lookup takes advantage of further routing information: a so-called finger table made up of $m$ entries. The $i$-th entry in the table of node $n$ is the first node $s_i$ following $n$ by a distance of at least $d_i = 2^{i-1}$. Essentially, $s_i$ is defined as $successor((n + 2^{i-1}) \bmod 2^m)$, with $1 \le i \le m$. $s_i$ is termed the $i$-th finger of peer $n$, and it can be denoted by $finger(n, i)$. An entry of any finger table includes both the identifier and the peer address.
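A hedged sketch of how a finger table follows from this definition; `successor_of` is assumed to be a callable mapping any ring point to the first live node at or after it (for instance, the `successor` helper sketched earlier with the node list bound):

```python
# Sketch of finger-table construction from the definition above.
M = 128

def build_finger_table(n, successor_of):
    """Entries s_i = successor((n + 2**(i-1)) mod 2**M), for 1 <= i <= M."""
    ring = 2 ** M
    return [successor_of((n + 2 ** (i - 1)) % ring) for i in range(1, M + 1)]
```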
It is possible to summarize the accelerated lookup procedure for any given key through the following steps:
(i) A node is requested for a certain key.
(ii) Should the node be in charge of the key being
requested, the lookup ends.
(iii) Otherwise, should the node’s successor be in charge
for that key, the requested node responds with the
successor node.
(iv) Else the requested node returns the highest entry in
the finger table that comes before the key.
This version of the lookup algorithm assures that lookups, in an $N$-node network, require at most $O(\log_2 N)$ hops in order to succeed. This is because the topological distance between the current requested node and the target node is divided by two at each repetition of the algorithm. Both the basic and the accelerated lookup may be performed in an iterative or recursive way [35]; in the former the querying peer has full control of the lookup process, while in the latter case at each step a different peer is in charge of forwarding the request.
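The four steps above can be condensed into the following illustrative iterative lookup; `is_responsible`, `successor`, and `fingers` are assumed helpers rather than an actual API, and `distance` is the modular distance sketched earlier:

```python
# Illustrative iterative version of the accelerated lookup summarized above.

def iterative_lookup(start, key, distance):
    node = start
    while True:
        if node.is_responsible(key):            # step (ii)
            return node
        if node.successor.is_responsible(key):  # step (iii)
            return node.successor
        # step (iv): highest finger-table entry that still precedes the key
        node = max(
            (f for f in node.fingers
             if distance(node.node_id, f.node_id) <= distance(node.node_id, key)),
            key=lambda f: distance(node.node_id, f.node_id),
            default=node.successor,  # fall back to the plain successor
        )
```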
_3.1.2. Chord Join._ A new node joins a Chord network by computing its ID and contacting a bootstrap node. This is performed by executing a find successor procedure, using the joining node's ID as an argument. Once the joining node has discovered its successor, it sets the admitting peer as its successor node and sends it a notify message that informs the admitting peer that a new predecessor has joined the network.
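For illustration, the join procedure can be sketched as follows; `find_successor` and `notify` are assumed node primitives, and the specific hash function is an assumption of this sketch (the text only requires consistent hashing to an m-bit ID):

```python
# Hedged sketch of the Chord join described above.

import hashlib

M = 128

def consistent_hash(address):
    # Assumption: truncate a SHA-256 digest of the address to the m-bit ring.
    digest = hashlib.sha256(address.encode()).digest()
    return int.from_bytes(digest, "big") % (2 ** M)

def join(new_node, bootstrap):
    new_node.node_id = consistent_hash(new_node.address)
    # Locate the peer currently in charge of the joining node's ID...
    admitting = bootstrap.find_successor(new_node.node_id)
    # ...adopt it as successor and announce ourselves as its new predecessor.
    new_node.successor = admitting
    admitting.notify(new_node)
```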
_3.2. Simulation Conditions._ In order to perform numerical simulations, we employed DEUS [36] together with its inherent Chord implementation. This simulator is based on discrete events and uses virtual seconds (V𝑠) as simulation time. For the sake of simplicity, we only considered
stationary conditions so that, regardless of peers churning, a
fixed number of them are constantly active in the considered
scenario. The number of peers that behave well, i.e., according
to the standard Chord algorithm, is constant and set to 10,000,
while the number of malicious peers, i.e., the sybils, varies across the different simulations and can take the following values: 0, 2,000, 4,000, 6,000, 8,000, 10,000, 15,000, and 20,000. Values above 10,000 are considered cases of an overwhelming presence of malicious nodes. The simulation
time is set to 30,000 V𝑠. Good nodes are subject to a churning
Poisson process of parameter 𝜆 (equaling 10 joins/leaves every
1 V𝑠); malicious nodes, instead, join and leave following a
random period process. The ID length 𝑚 is 128 bits, while
the observation cycle amounts to 1,000 V𝑠, allowing for 30
observations for each simulation.
The DEUS implementation of Chord does also feature a
parameter, accounting for a maximum waiting time, named
𝑀𝑊, which is set to 1,000 V𝑠. This parameter is used to
simulate those nodes responding too slowly to a request.
Indeed, we only concentrate on lookup procedures that
correctly terminated, that begin only when the network is in
a stationary condition, and that do not cause 𝑀𝑊 to expire.
Should 𝑀𝑊 expire, we label these lookups as “unended.”
As a final remark, for all parameter sets, we considered the
averages of the results computed over the 30 datasets coming
from each observation cycle and over various simulation
seeds. This was done in order to achieve a confidence interval
amounting at least to 95% for the obtained results.
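The churn model can be reproduced outside DEUS as well; the following Python fragment is only a sketch of the stationary Poisson churn described above (exponential inter-event times are the event-based view of a Poisson process), not the actual DEUS configuration:

```python
# Sketch of the good-node churn process used in the simulations.

import random

LAMBDA = 10        # good-node joins/leaves per virtual second
SIM_TIME = 30_000  # total simulation time in virtual seconds
OBS_CYCLE = 1_000  # observation interval: 30 observations per run

def churn_events():
    t, events = 0.0, []
    while t < SIM_TIME:
        t += random.expovariate(LAMBDA)  # next churn instant
        # each event swaps one good node out and another in, so the
        # number of active good peers stays fixed at 10,000
        events.append(t)
    return events
```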
_3.3. How the Sybils Affect Standard Chord._ We study both lookup attacks as well as storage and retrieval attacks, and both
iterative and recursive accelerated lookups. In particular two
malicious basic behaviors have been considered for lookups.
_Sybils asked for a next hop could either_
(1) refuse to answer or
(2) respond with a randomly chosen peer ID.
Concerning storage and retrieval, we analyze different behaviors:
(3) refusing to store a resource,
(4) accepting a resource and then refusing the relative
retrieval queries,
-----
Security and Communication Networks 5
(5) returning a null resource in response to a retrieval
request,
(6) blocking storage and retrieval operations for certain
specific resources chosen randomly.
In particular, since just the first and third behaviors are able to effectively disrupt standard Chord operations, the second and the fourth behaviors are studied only against our trust-based proposal in the following sections, in order to stress its effectiveness.
Although various metrics to study a Sybil attack are
present in the literature ([37, 38]), we mainly focused on the
following:
(i) for lookups, the average amount of successful requests
and the mean hops per lookup;
(ii) for storage and retrieval operations, the mean of successful store (PUT) and retrieval (GET) procedures.
In Figure 1 we show lookups in a scenario where sybils
refuse to provide an answer to the querying peer (case (1) of the aforementioned list). We refer to successful operations
when lookups reached the peer correctly in charge of the
searched key, regardless of whether the lookup procedure has
involved sybils or not. The outcomes demonstrate a sharp decrease in successful lookup procedures as malicious nodes grow. Unsuccessful operations, which overtake successful lookups once the sybils exceed the 8,000 threshold, refer to lookups that have not yet finished once the 𝑀𝑊 timeout has expired; therefore they are labeled as “unended.”
Furthermore, in the same figure, we show that a small fraction (2,000 or 4,000) of malicious nodes does not influence the overall performances too much, so there should be a minimum amount of malicious nodes that makes a Sybil attack successful. Conversely, unsuccessful lookup requests are somewhat bounded when malicious peers are fewer than 6,000, while they increase greatly and exceed successful lookups from 8,000 on. Also relevant is the saturation behavior experienced by the curve when the sybils are more than 10,000. This could be due to the fact that the sybils have reached and overtaken good nodes, so their negative effect tends to stabilize.
In Figure 2 we consider the other malicious behavior: a
random response (case (2)). In this case, the sybils respond
to lookup queries with the ID of a random peer, regardless of
whether it is malicious or not. In these simulation conditions,
we point out a different assessment for genuine and nongenuine lookups, as this could be useful to analyze the
behavior of iterative and recursive lookups. As a matter of
fact, when we consider a no response attack, the iterative and
recursive lookups are similarly affected; that is, the procedure
is unended, whereas when we consider a random response,
this could lead to different outcomes according to the number
of sybils encountered during the procedure. More specifically,
there are three cases: (i) successful genuine lookups, (ii)
successful non-genuine lookups, and (iii) unended lookups.
Genuine lookups refer to those lookups not affected by sybils
at all, whereas non-genuine lookups are the lookups in which
at least a sybil has been contacted during the procedure. It
Figure 1: Successful and unended lookups versus the number of sybils, whenever sybils do not respond.
is evident from these definitions that in case of no response
from the sybils this different behavior could not be detected.
Returning to Figure 2, while the number of genuine
lookups decreases gently in a way similar to the one depicted
in Figure 1, the number of non-genuine ended lookups
experiences a peak in correspondence to a number of sybils
equaling 8,000 and then it sharply decreases reaching the
performances of genuine successful operations. The opposite
behavior can be observed for unended lookups that, when the
_sybils number more than 8,000, undergo a sharp increase._
While non-genuine lookups have the chance of traversing good nodes leading to the correct target when sybils are not overwhelming, this cannot take place anymore when the quantity of sybils surpasses a given threshold, which, looking at the trends in the graph, can be set to 8,000.
Figure 2: Successful genuine, successful non-genuine, and unended lookups versus the number of sybils, whenever sybils respond with random IDs.
On the other hand, when this threshold is passed, the probability for the
lookups to encompass a great majority of sybils increases
and, as a consequence, the lookup operations may be stalled
into never-ending cycles. Figure 2 shows aggregated data for
both iterative and recursive lookup; however, we noticed that
successful lookups belong more to the iterative procedures,
rather than to the recursive ones. This is because the Chord
requesting node can check whether a lookup is moving
towards the queried key and, in the case of the iterative
procedure, can decide not to follow the wrong suggestion
of a sybil and to query the previous peer again. This is not possible in a recursive lookup procedure. We will return to the difference between recursive and iterative lookups later,
as a difference will emerge when we apply our trust-based
extension to Chord.
In Figure 3 we compare the two malicious behaviors
considering the average of hops per lookup, regardless of the
fact that the lookup is genuine or not, and not considering
if it succeeds or it is unended after the 𝑀𝑊 timeout has
been reached. As it may be noticed, the curve concerning the
_sybils not responding, experiences a first increase and then a_
decrease, whereas the curve of the second malicious behavior
undergoes a constant growth of the hops per lookup. This
could be due to the fact that in the second case an increasing
number of sybils may lead to a never-ending lookup process,
while in the first case, above a certain threshold of sybils in the
network, the chance to reach good behaving nodes decreases
and so lookups experience a longer waiting time without
increasing the number of hops that tends to decrease on
average. It must be highlighted that in the first case the mean quantity of hops does not surpass the typical Chord threshold ($\log_2 N$, with $N$ the overall number of nodes in the network: good nodes plus sybils), ranging from 13.28 to 14.87 depending on the variable amount of sybils in the considered scenario. In the second case this happens whenever the sybils equal or exceed 10,000.
In Figure 4 we show the trend of successful storage and
retrieval operations when the amount of sybils in the network
increases. We encompass both PUT and GET procedures
together and the basic malicious behaviors related to these
two operations: refusing to accept a legitimate PUT and
responding with a null value when receiving a GET request.
However, it is not useful to consider whether the procedure is iterative or recursive or whether it is genuine or not, because
we do not regard maliciousness in the lookups: the nature
of the last peer reached in the lookup process is the only
thing that counts in this case. In the figure, we report also
the curve of successful lookups in presence of no response
from the sybils for the sake of comparison, since all these
attacks are quite similar, regarding the refusal of performing
the requested operation.
As can be inferred from the figure, the decrease of the
successful storage or retrieval operations is more evident
than that of lookups (already shown in Figure 1), especially
when the sybils number more than 4,000. We can explain
this trend considering that, in this case, we consider two
malicious behaviors at the same time, i.e., both for PUTs and for GETs, while the lookups were affected by only one form of attack, i.e., no response.
Figure 3: Average number of hops versus the number of sybils, when comparing two different malicious behaviors.
Figure 4: Comparison between successful lookups and successful storage and retrieval operations versus an increasing number of sybils in standard Chord.
This trend is not so marked
in those cases where there are few sybils in the network,
confirming again that a small number of malicious nodes
does not influence the performances very much. Moreover, we
can notice that the number of unended storage and retrieval
procedures is almost constant and very limited, and this is
correct since this percentage is only dependent on the 𝑀𝑊
parameter rather than on malicious behaviors in the storage
and retrieval procedures.
###### 4. S-Chord
In this section, we investigate some possible countermeasures to limit the impact of sybils on lookups, as well as on storage and retrieval procedures, by integrating a simple but effective trust-based mechanism, called S-Chord, into the standard Chord environment and procedures. The proposed approach is similar to, yet different from, the one proposed for Kademlia in [10].
We propose improving the resilience of Chord to a Sybil
attack, using trust information in the lookup as well as in
storage and retrieval operations, but with a differentiated
trust management for the two types of operations. The
proposed solution considers a trust metric when sorting local
finger tables. We suppose, contrary to Koyanagi [32], that sorting the entries of such tables based only on trust would not be effective in some cases, since the best solution is obviously the one of standard Chord. We introduce a novel metric for computing the distance of peers and call it “new distance” ($nd_i$). This new distance takes trust into account as well, and it is computed, for each peer $i$ in the finger table, according to

$$nd_i = b \cdot 2^{i-1} + (1 - b)\, T_i \cdot 2^{i-1} \quad (1)$$
where $nd_i$ is the new distance, $b$ is a balancing term, whose values may range from 0 to 1, while $T_i$ is the trust factor, with values from 0 to 1, of the $i$-th entry of the finger table. The formula in (1) guarantees that the actual successor is not skipped, as the base-2 exponentiations are always multiplied by factors not greater than 1. The farthest node according to the new distance is chosen as the candidate for the next step of the iterative or recursive lookup process. This is mandatory also in case a recursive lookup is considered and the peer currently in charge of the lookup is a sybil.
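A minimal sketch of the selection rule in (1) follows, assuming per-entry trust values computed as in Section 4.1; the function and parameter names are illustrative, not part of any actual implementation:

```python
# Sketch of the trust-weighted "new distance" of (1) and of the
# resulting next-hop choice; names are assumptions for this illustration.

def new_distance(i, trust_i, b):
    """nd_i = b * 2**(i-1) + (1 - b) * T_i * 2**(i-1), with 0 <= b, T_i <= 1."""
    return (b + (1 - b) * trust_i) * 2 ** (i - 1)

def next_hop(finger_table, trusts, b):
    # The farthest entry according to the new distance is the candidate
    # for the next step of the (iterative or recursive) lookup.
    best_i = max(range(1, len(finger_table) + 1),
                 key=lambda i: new_distance(i, trusts[i - 1], b))
    return finger_table[best_i - 1]
```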
_4.1. Trust Score._ The trust score is calculated following a definition coming from the PET model [26], and not from
the model in [32] or in [33] as regards Chord, or from the
model in [9, 39] for Kademlia. In this way, we try to maintain
the procedures as simple as possible and, at the same time, to introduce as little overhead as possible. In case there are no
previous interactions with other peers, we account for a global
risk factor, which allows us to escape the grace phase used in
[9] and the indirect trust employed in [32]. Conversely, we
propose a proper mixture of direct trust and risk by means
of two numerical weights. The trust score (𝑇) is defined as
follows:
$$T = W_{R_e} \cdot R_e + W_{R_r} \cdot (1 - R_r) \quad (2)$$

where $R_e$ and $R_r$ are the direct trust and the risk, respectively, whereas $W_{R_e}$ and $W_{R_r}$ are the weights assigned to the direct trust and to the global risk of the network, respectively (their values range between 0 and 1, and one is the complement to 1 of the other). We simplify the model further with the introduction of a unique negative level of interaction (L = low grade). This implies that the risk $R_r$ is calculated with the subsequent equation:

$$R_r = \frac{N_L}{N_T}. \quad (3)$$
𝑁𝐿 accounts for all interactions, with a low grade of service,
provided by a certain peer in the whole network. It may
concern various behaviors, depending on the particular
scenario:
(i) considering only the trust in lookups, 𝑁𝐿 may encompass timeouts of lookups, no responses, and the like;
(ii) considering only the trust in PUT and GET procedures, 𝑁𝐿 encompasses storage refusals, retrieval of
null resources, and so on.
(iii) considering trust in all possible operations, 𝑁𝐿 is
computed considering all the previously mentioned
malicious responses.
𝑁𝑇 represents all the requests generated by the nodes of the
considered scenario towards the peer under evaluation, be
they lookups or PUT or GET operations.
The risk of a certain peer is a global measure that in
a realistic implementation could be maintained through
mechanisms similar to those of blockchain [40].
Conversely, the value of the direct trust depends on
the particular trustor node, which stores various values of
trust, one for each trustee peer, in a local table. If no communications with a particular peer have taken place yet (e.g., a new joining node), $T$ is defined only through $W_{R_r} \cdot (1 - R_r)$; in the opposite situation it comprises the direct trust ($R_e$) contribution as well, which is computed and updated according to the following steps:
(i) when only lookups are examined: +1/𝑀 to all peers
involved in a lookup that leads to a correct resource,
or −1.5/𝑀 to the peers leading to dead-points or
wrong targets;
(ii) when only PUT and GET procedures are of interest:
+1/𝑀 to all nodes that accept to store a legitimate
PUT or that return a correct resource, or −2/𝑀 to all
nodes refusing a storage request or providing a null
value as a response to a legitimate GET;
(iii) in case both lookups and storage and retrieval operations are considered: the individual direct trusts are updated according to the previously mentioned rules for the respective procedures.
𝑀 represents the total number of queries, launched by the
trustor node, in which the trustees have been involved; they
can be lookups, PUTs, or GETs.
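The following fragment sketches equations (2)-(3) and the direct-trust updates just listed; the record fields, the exact update order, and the weight values (anticipated from Section 4.2.1: 0/1 for unknown peers, 0.7/0.3 afterwards) are assumptions of this illustration, not the authors' implementation.

```python
# Illustrative trust computation per equations (2)-(3).

class TrustRecord:
    def __init__(self):
        self.Re = 0.0     # direct trust towards the trustee
        self.M = 0        # queries the trustee has been involved in
        self.known = False

    def update(self, good, kind):
        self.known = True
        self.M += 1
        if good:
            self.Re += 1.0 / self.M   # reward correct behavior
        elif kind == "lookup":
            self.Re -= 1.5 / self.M   # dead point or wrong target
        else:
            self.Re -= 2.0 / self.M   # refused PUT or null GET

def trust(record, n_low, n_total):
    Rr = n_low / n_total if n_total else 0.0   # global risk, eq. (3)
    if not record.known:                       # no direct experience yet
        W_Re, W_Rr = 0.0, 1.0
    else:
        W_Re, W_Rr = 0.7, 0.3
    return W_Re * record.Re + W_Rr * (1.0 - Rr)  # eq. (2)
```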
This asymmetry in rewarding or punishing the involved
peers is devised to foster the correct behavior of nodes
and, at the same time, to discourage malicious behaviors by
punishing a bogus node more than it could recover with a
single good interaction. The greater punishment in storage
and retrieval operations (−2/𝑀 in place of −1.5/𝑀) is given
by the fact that we consider jointly a malicious behavior in
both PUTs and GETs and this leads to worse performances
than in the lookup case as shown in Figure 4.
Similarly to what was done in [9], trust values expire after
a certain time (we set this time to 24 virtual hours in our
simulations) in order to avoid temporal attacks. This prevents a bogus peer, colluding with other malicious nodes, from obtaining from them a high trust score before starting to behave badly. The negative direct trust update is
a significant difference from [32], and it could be tuned in a
more fine-grained way to involve only the last in charge peer,
or a certain subgroup of peers, for a single lookup path. This
is done with the aim of considering the chance that not every
peer of a wrong path acted in a bad way.
_4.2. Evaluation of S-Chord_
_4.2.1. Parameter Tuning._ In order to evaluate the effectiveness of S-Chord, we consider the same simulation conditions as
those in Section 3.2, and we analyze its performances in a
network scenario with a growing number of sybils. The tuning
of parameter 𝑏 is very important since it may determine
outcomes that may be worse than those obtained with
traditional Chord. Therefore, a great simulation campaign
was carried out for optimizing parameter 𝑏, obtaining a final
value of 0.68 when considering only lookups and of 0.60 when
considering only storage and retrieval operations. This may
be explained by the fact that more importance should be
given to the standard procedure for an actual convergence of
the lookups rather than for the successfulness of storage and
retrieval operations, something depending only on the nature
(malicious or not) of the last queried peer.
Concerning the other parameters of the model, $W_{R_r}$ is 1 for a node joining for the first time and 0.3 for those nodes having already joined the network, while $W_{R_e}$ is, by definition, 0 for new joining nodes (this accounts for the lack of direct trust) and 0.7 for already joined ones (this allows having a greater impact of direct trust over the environmental risk, whenever such data are available).
In Figure 5 we compare the performances of different
values of the 𝑏 parameter in case of no response from the sybils
during a lookup procedure: it is evident that if the balance
factor 𝑏 is not properly set, it can lead to performances
worse than in the case of standard Chord. This happens
when 𝑏 approaches zero and trust becomes preponderant, to
the detriment of the optimum strategy of standard Chord. The standard Chord strategy is obviously the best one and would be based only on a distance given by $2^{i-1}$, where the base-2 exponentiation is not affected by the trust factor. On the other hand, whenever $b$ gets close to 1, the outcomes appear to be similar to standard Chord, and this is correct, as the formula in (1) reduces to $2^{i-1}$, the standard form of the distance in classic Chord. From the same figure, it may also be inferred
that S-Chord, whenever 𝑏 is properly set, can limit or mitigate,
at least partly, the negative effects of a Sybil attack. This is
especially marked whenever malicious nodes are more than
6,000, since the percentage increase in the performances,
compared to standard Chord, ranges from 28% (6,000 sybils)
to 184% (10,000 sybils).
In Figure 6 we compare standard Chord and S-Chord in
case of sybils responding with a random node in a lookup
procedure and variable values for 𝑏. In this case, we do not
consider genuine and non-genuine lookups separately, but
they equally contribute to successfully ended lookups as the
objective is to tune parameter 𝑏. As one can see, whenever
the balancing factor is correctly set ($b$ equal to 0.68) the performances of S-Chord are far better than standard Chord, with an improvement ranging from +4.27% (4,000 sybils) to +94.84% (20,000 sybils).
Figure 5: A comparison between standard Chord and S-Chord for different values of $b$, considering the percentage of successful lookups versus the number of sybils in the network; “no response” is considered as malicious behavior.
Figure 6: Comparison of successful lookups versus sybils number for standard Chord and S-Chord for different values of $b$. The considered malicious behavior is random response.
On the contrary, when $b$ is not
correctly set to its optimum value, two main cases are possible
and they are described in the following.
The first one of such cases is when 𝑏 is close to zero: in
this situation, represented in Figure 6 by the case with 𝑏 equal
to 0.2, the performances are far worse than standard Chord,
even worse than in the case of no response attack shown in
Figure 5. This could be explained considering that, in this situation, the new distance is mainly driven by the trust score and the distance metric reduces, more or less, to $T_i \cdot 2^{i-1}$. The ultimate effect is to reinforce the randomness in the responses given by the sybils themselves.
The second one is when 𝑏 is close to 1 and this is depicted
in Figure 6 by the case of 𝑏 equal to 0.9. In this case, whenever
the number of sybils is small, the performances seem to
follow the ones of standard Chord, like in Figure 5. This is
because the distance metric reduces to the classic one: $2^{i-1}$.
However, this is not true anymore when malicious peers
are more than or equal to 10,000. As a matter of fact, in
these conditions the curve undergoes a saturation behavior
across the 20% value. This is a very interesting behavior, and a tentative explanation is that there could be a threshold on the overall number of network nodes, be they sybils or not, after which the random response behavior should always lead to the same results. This threshold should depend on the balancing factor (as it is not present for lower values of $b$), on the $MW$ parameter (since it determines the timeout and the classification of a lookup as “unended”), and on the relative ratio of malicious nodes versus good nodes.
_4.2.2. Effectiveness in the Routing Process._ In this subsection, we carry out a comparison between S-Chord, standard Chord, Koyanagi’s solution [32], Kohnen’s solution
[9] readapted to Chord, and GeTrust [33] both in terms
of successfulness of the lookups and considering overhead
complexity. In these simulations, we set the parameter 𝑏 of
S-Chord to its optimum value.
In Figure 7 we make a comparison in terms of successful
lookups with an increasing number of sybils. The outcomes
prove that a “balanced” strategy is better than a solution based
only on trust, particularly whenever malicious nodes are not
preponderant. It must be stressed that when the sybils number more than or equal to 10,000, i.e., at least as many as the good nodes, the chance to obtain a successful lookup is always less than 50%. From the same figure, it can be inferred
that our solution reaches similar performances as GeTrust;
however, S-Chord is less computationally intensive in terms
of simulation time, in comparison both to Koyanagi’s solution
and especially to GeTrust. This is shown in Figure 8, where we
make a graph of the average time spent by each trust model
after 30 simulation cycles. The best performances, in terms
of temporal overhead, reached by our solution are due to
the fact that Koyanagi’s method requires trust propagation,
while in GeTrust there is an overhead of messages both to
establish warranted relationships and in the lookup transactions themselves. The solution of Kohnen is the worst in
terms of temporal overhead and this is probably due to the
usage of certificates and public key cryptography. Because
of this temporal overhead, and since its performances are
only slightly better than Koyanagi’s ones (see Figure 7), we
will not take it into account anymore in the rest of the
paper.
In Figure 9 we compare standard Chord, S-Chord, and
GeTrust solutions, in case of sybils responding with a random ID and performing different analyses for genuine and
non-genuine lookups as well as for iterative and recursive
lookups. Genuine lookups are of no interest, as these decrease with the same trend for both our solution and GeTrust; therefore, they are not shown here. What
is interesting, instead, is the analysis of the behavior of
successful non-genuine lookups and of iterative and recursive
procedures separately.
As per the figure, our solution is better than both standard
Chord and GeTrust, especially when we look at the recursive
lookup procedure. While the iterative lookup procedure
undergoes a similar improvement as GeTrust compared with
standard Chord (more or less the same curve, and this is
why they are not shown), the recursive implementation of
S-Chord outperforms them both, particularly whenever the
sybils number more or less like the good nodes (8,000-10,000). This can be explained by a better spreading of the trust information since, differently from the iterative procedure, more than one good node is involved in the recursive lookup and can thus be aware of the right or wrong responses received by other peers contacted in the procedure.
Figure 7: Comparison between standard Chord, the method employed by Koyanagi [32], the method employed by Kohnen [9], GeTrust [33], and S-Chord considering successful lookups versus sybils number. The considered malicious behavior is no response.
Figure 8: Comparing different strategies using trust in Chord, in terms of computational time of 30 cycles in a simulation.
Figure 9: Comparison of standard Chord, GeTrust [33], and S-Chord, considering successful non-genuine lookups versus sybils number. The considered malicious behavior is random response.
Figure 10: Average number of hops per lookup with standard Chord and the S-Chord algorithm when considering two different malicious behaviors.

Finally, in Figure 10 we analyze S-Chord concerning the mean number of hops per lookup. Under these conditions, when the sybils perform a random response behavior, our solution seems to lead to better performances (fewer hops), provided that the number of malicious nodes is greater than 4,000. In this case, the increase in the performances provided by S-Chord ranges from +2.35% (6,000 sybils) to +14.62% (8,000 sybils).
On the other hand, a particular situation takes place in the case of no response from the sybils: by using our solution the curve does not experience a maximum anymore; rather, it grows constantly from a certain point on, even in those cases when the sybils are more than 8,000. Such a behavior is quite different from what happens for standard Chord. However, the threshold of standard Chord performances ($\log_2 N$) is reached and surpassed only from 15,000 sybils on, confirming again the goodness of the proposed strategy in presence of a preponderant number of malicious nodes.
Obviously, if our solution is compared with the standard Chord algorithm, it leads to worse performances (a greater number of hops) in some circumstances. This can be mainly inferred from the first parts of the curves, when the number of sybils is limited and thus the performances can be assimilated, with an adequate degree of precision, to those of standard Chord. As one can notice, in this situation, when bogus peers are fewer than or equal to 4,000, the lines of S-Chord are always above the corresponding ones of standard Chord, meaning that it requires more hops. This is correct, since S-Chord is based on a distance metric that is not optimum but encompasses also the fading value of the trust score.
_4.2.3. Effectiveness in the Storage and Retrieval Processes._
In this subsection, we perform an evaluation of S-Chord
considering storage and retrieval procedures. For the sake of brevity, we do not report the same comparisons on the fine-tuning of parameter $b$ reported previously for lookups, but we take for granted that it has been optimized for these procedures to the above-stated value of 0.60. Moreover, this value does
not depend on the type of lookup (iterative or recursive),
since it obviously depends only on the choice of the last
peer in the query. Considering successful PUT and GET
operations in presence of basic malicious behaviors leads to
graphs similar to the ones provided before for the lookup
process, thus in this part we concentrate only on the study of
the sophisticated behaviors described in Section 3.3, which
consider a good behavior immediately followed by a bad
behavior or a selective bad behavior. They are summarized
in the following:
(i) accepting a resource and then refusing the relative
retrieval queries;
(ii) blocking storage and retrieval operations for certain
specific resources chosen randomly.
The first of these malicious behaviors does not influence
the effectiveness of storing operations; therefore we focus
only on retrieval procedures. In such a situation, the outcomes of the previously proposed solution can decline, as
Figure 11 demonstrates; however, this is linked to the quantity
of GETs compared to the PUTs, to the number of sybils
and to the lookup type (iterative or recursive) used to reach
the queried node. Since also in this case the GET operations preceded by a recursive lookup reach the same performances as under the basic malicious behaviors, we decided to employ the relative curve as a benchmark for the following considerations.
Defining $r$ as the number of PUTs divided by the number of GETs within a certain period of time (30,000 V𝑠 in the considered scenario), one observes, according to the following evaluations, that its fluctuation can worsen the outcomes of our proposal, even if they are always better than standard Chord’s.
Figure 11: Successful GETs versus sybils number with standard Chord and S-Chord, considering two different ways of performing the lookup and a variable ratio ($r$) between PUTs and GETs. The malicious behavior consists in accepting all PUTs and refusing to answer retrieval requests.
This happens especially if $r$ is higher than 1, when malicious nodes exceed 6,000, and with an iterative lookup.
Conversely, when 𝑟 is less than 1, the performances of
the proposed solution degrade later in case of GETs preceded
by an iterative lookup: when malicious peers become more
than 8,000. On the contrary, as already stated, when the GET
is preceded by a recursive lookup, the performances are not
influenced by 𝑟 and they mirror the ones experienced when
the basic malicious behaviors are enacted.
In conclusion, one can notice that when the sybils are not overwhelming in number, the role of $r$ is no longer crucial and the outcomes are more or less the same as in those situations without these more advanced attacks.
The second sophisticated malicious behavior, concerning,
like the first one, storages and retrievals, could influence both
PUT and GET procedures. Such an attack is to be evaluated
concerning the spreading of a content or resource. In case the resource is very popular, S-Chord (be it iterative or recursive) can be employed with no changes, while if the searched contents are not so popular, the outcomes of S-Chord fluctuate according to a spreading factor $s$, which measures the popularity of the searched resource and can be determined as the number of storages for a certain content divided by all PUT procedures.
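For clarity, the two ratios used in this evaluation reduce to simple counters; the following is a hypothetical sketch with assumed counter names:

```python
# The ratios r and s defined above, over a fixed observation window.

def put_get_ratio(num_puts, num_gets):
    return num_puts / num_gets              # r: PUTs over GETs

def spreading_factor(stores_of_content, total_puts):
    return stores_of_content / total_puts   # s: popularity of one content
```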
This may be easily inferred from Figure 12, where successful storages are analyzed for standard Chord, iterative S-Chord with a varying $s$ parameter, and recursive S-Chord. Also in this case, the curve of recursive S-Chord is used as a benchmark, since it is very similar to the one obtained under basic malicious behaviors for PUTs and GETs using S-Chord. As it may be inferred, if $s$ is higher than 0.5 (widespread content) the trend of iterative S-Chord is very similar to the one of recursive S-Chord, whereas when $s$ is lower than 0.5 the trend becomes irregular, resulting sometimes below and sometimes above standard Chord, according to the number of malicious nodes.
Figure 12: Successful PUTs versus sybils number with standard Chord and S-Chord, considering two different ways of performing the lookup and a variable spreading parameter ($s$). The malicious behavior consists in blocking storage and retrieval operations for certain specific resources chosen randomly.
In this attack scenario, the
trust management could be readapted to execute storages or
retrievals in a correct way. The adaptation may influence both
the evaluation of the risk, introducing another parameter in
addition to 𝑁𝐿, and the update of the trust score. However,
such a focused attack could influence the final outcome of
the whole attack strategy, since not all storage procedures are
compromised and executing various attacks for every existing
content could be computationally very heavy.
As we have shown, recursive S-Chord usually obtains
better results and features fewer problems compared to the
iterative one. This is due to the fact that, in the considered
implementation of recursive S-Chord, the last searched peer
transmits back, to every previously contacted peer, some data
about the outcome of storages or retrievals, while, in the
considered implementation of iterative S-Chord, these data
are available only to the peer that originated a storage or
retrieval query. As a consequence, such an implementation
could foster the diffusion of trust data across nodes.
###### 5. Spartacus Attack in S-Chord
In this section, we study S-Chord in further detail, trying
to understand its effectiveness and resilience under more
complex attack scenarios. For this purpose, we consider
only the recursive way of performing lookups, as it proved
to be the best solution (see Section 4.2). Furthermore, we
assess the effects of a network infected by spartaci against S-Chord, and we provide some slight modifications to the proposed algorithms to mitigate the negative consequences of a Spartacus attacker. We focus solely on routing attacks
coming from spartaci and particularly the already studied
cases of (i) no response or of (ii) random response, since
the results for storage and retrieval operations are very
similar. Moreover, the balancing factor corresponds to the optimum value we found, i.e., 0.68 for S-Chord routing,
unless otherwise stated. The two main differences in the
following analysis, compared to the previous ones, regard
the presence of the temporal dimension on the abscissas of
the graphs and the constancy of the number of peers in the
system; i.e., we do not consider a variable amount of bogus
peers. The reasons for these changes are the following: (i) first of all, the effectiveness of a Spartacus attack varies in time, being reasonably at its maximum in the first instants and then decreasing; (ii) secondly, the Spartacus attack provides the replacement of already valid IDs rather than the addition of new nodes. In order to properly assess the
temporal dimension, in the simulation sets of this section the
churn period of malicious nodes is no more random but it
is established to 150 V𝑠, and the observation interval is set
to 50 V𝑠, in order to achieve more granular observations.
Obviously, in a real network malicious nodes could have
different times in which they enter the network; however,
the simplified assumption of a constant and common churn
period is reasonable for an attacker characterized by bounded
computational resources. Despite the churning, the average
amount of bogus peers is constant during a simulation cycle.
_5.1. The Effects of a Spartacus Attack onto S-Chord._ In this subsection, we analyze the effects of a Spartacus attack directly on the proposed S-Chord. We focus our attention
on the performance degradation concerning both the number of successful lookups and the average hops per lookup.
As we have already explained in advance, the analyses are
performed in time, limiting to the first 2,000 V𝑠, but the results
are still an average over various simulation seeds to get a
confidence of 90%. The limitation to the first 2,000 V𝑠 is due to
the inherent nature of a Spartacus attack: this is more effective
in its beginning and its effects fade afterward. In particular, we
report only this time interval as in the following the curves do
not experience other significant trends.
Concerning the first metric, we focus only on the case of
no response, since this was the case of better improvement
of S-Chord compared to standard Chord, for both 8,000 and
10,000 malicious nodes (see Figure 7). As it may be inferred
from Figure 13, when we consider a Spartacus attack the
performances decrease over time, something happening, to a lesser degree, also to the standard Chord algorithm. This
may be caused by the fact that Chord, unlike other DHTs, e.g., Kademlia, has no inherent proximity routing procedures
that reinforce the whole algorithm during the joining phase
(see Section 3.1.2). The performances of S-Chord dramatically
drop, especially in the case of 10,000 spartaci and this is
quite obvious because the good nodes have to face the
same number of malicious counterparts. Particularly, the
performances in the case of 8,000 spartaci decrease by about 26%, while in the case of 10,000 spartaci they fall by about 36%. The worsening of the performances of S-Chord under a
Spartacus attack may be motivated by the combined effect,
caused by malicious nodes, of both inheriting high trust
scores and replacing of good behaving nodes. That is, not
only are malicious nodes trusted by good nodes in the
beginning, but their presence also decreases the number of
good behaving peers and, therefore, the overall performances
drop accordingly.
Also to be highlighted is the presence of a periodic trend in the curves, with a period roughly equal to the churning period of malicious nodes, i.e., 150 V𝑠.
In Figure 14 we assess the effects of spartaci on Chord and
S-Chord whenever the mean number of hops per lookup is
concerned. In this circumstance, we consider only the case
of 8,000 malicious nodes, since this is one of the two points
of the graph in Figure 10 where our trust-based solution
obtains better results than the classic algorithm. As the performances are very similar for both the no response and the random response cases, we consider an average value and, as a consequence, only one combined malicious behavior.
Figure 13: Successful lookups versus time in standard Chord and S-Chord with a different quantity of spartaci.
Figure 14: Average hops per lookup versus time in standard Chord and S-Chord with 8,000 spartaci in the network.
The graphs depicted in Figure 14 show an almost constant behavior for classic Chord, whereas the performances of S-Chord quickly degrade, overtaking the standard algorithm and even the $\log_2 N$ threshold ($\log_2(18{,}000) = 14.13$) by 1,500 V𝑠. Furthermore, in this situation, the periodic tendency, following the churn of malicious nodes, appears again, especially in the curve of S-Chord.
_5.2. Improving S-Chord._ In this subsection, we propose some improvements to S-Chord to combat a Spartacus attack. We focus the improvement efforts mainly on the percentage of successful lookups since, also in the presence of sybils, the advantage of our balanced trust-based method in terms of average hops is visible only in some limited circumstances, namely, when malicious peers are limited to 6,000 or 8,000 (see Figure 10).
As a consequence, we vary S-Chord mainly using the
following two countermeasures:
(i) the first one regards the opportune tuning of the weight $W_{R_r}$ of the environmental risk;
(ii) the second one concerns the utilization of an opportune time decay function to be applied to the trust score.
The first improvement we devised involves augmenting the weight of the network scenario. This seems a straightforward consequence, since the Spartacus attack appears more dangerous than the Sybil-based one. However, we do not simply increase the environment weight to a higher value for those peers having already joined the network since, following our first tests, this static countermeasure would have affected only the first moments of the simulation. Therefore, we make $W_{R_r}$ vary according to a period equal to the one of the churn of the spartaci, increasing up to 0.5 and decreasing back to 0.3. This obviously implies the knowledge of such a period; however, it could be easily detected through a cooperation between those peers that are going to admit possible malicious nodes.

The second aforementioned countermeasure encompasses the introduction of a time decay function. More in detail, we multiply the trust score by a negative exponential function of time, with mean value corresponding to the churning period of the spartaci, i.e., 150 V𝑠. This is better explained in (4), where $\tilde{T}$ stands for the new trust, $T$ for the old trust, and $D_f$ for the decay function:

$$\tilde{T} = T \cdot D_f \quad (4)$$

The decay function is, instead, expressed by the following formula:

$$D_f(t) = \frac{1}{150\,\mathrm{V}s} \cdot e^{-(1/150\,\mathrm{V}s)\,t} \quad (5)$$

Therefore, $\tilde{T}$ in (4) also becomes a function of time, and $\tilde{T}$ replaces $T$ in the formulas of (2) and (1).
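A sketch of both countermeasures follows; the decay in (5) is transcribed directly, while the exact waveform with which $W_{R_r}$ oscillates between 0.3 and 0.5 is not specified in the text, so the triangular wave below is an assumption.

```python
# Sketch of the two improvements: trust decay of (4)-(5) and a
# periodically varying risk weight (waveform assumed, see lead-in).

import math

TAU = 150.0  # churn period of the spartaci, in virtual seconds

def decayed_trust(T, t):
    Df = (1.0 / TAU) * math.exp(-t / TAU)  # eq. (5)
    return T * Df                          # eq. (4): replaces T in (1)-(2)

def risk_weight(t):
    # W_Rr rises to 0.5 and falls back to 0.3 with period TAU.
    phase = (t % TAU) / TAU
    return 0.3 + 0.2 * (1.0 - abs(2.0 * phase - 1.0))
```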
Figure 15: Successful lookups versus time in S-Chord and enhanced S-Chord featuring different quantities of spartaci in the network.

What can be inferred from Figure 15 is a sort of
improvement in the performances of improved S-Chord: the
curves, both for 8,000 and for 10,000 spartaci, maintain an
overall constant trend. This does not take place with standard
S-Chord. As a consequence, we manage to get a stable behavior in time and thus mitigate the drop in the performances somewhat. This is more evident in the case of 10,000
spartaci than in that of 8,000 malicious nodes; as a matter of fact, in the first case the decrease of improved S-Chord is only about 1.89%, greatly better compared with the 33% of S-Chord, while in the second case the drop is more than three times larger, i.e., 4.35%, still better than the 15% of S-Chord. Moreover, the
periodic trend, following the churning period of malicious
nodes, tends to disappear both in the case of 8,000 and in the
case of 10,000 spartaci.
_5.3. Comparisons with Other Techniques._ Finally, in this section, we assess the effectiveness of our proposal in comparison with other solutions already known in the literature, under a Spartacus attack. We address the comparison
between improved S-Chord, Koyanagi’s solution [32], and
GeTrust both in terms of successful lookups and in terms
of average hops. The simulation conditions, as well as the
utilized metrics, are the same as the ones of the previous
subsections.
In Figure 16 we analyze the performances of S-Chord,
GeTrust, and Koyanagi’s solution with the ones of the
improved version of S-Chord, in terms of successful lookups
with a growing simulation time and a combination of malicious behaviors regarding the routing process (no response
and random responses). The number of malicious nodes is
8,000. We can see that GeTrust performs more or less like S-Chord, especially for a growing simulation time, in accordance with the analyses shown before in the paper, but its curve is always under the one of improved S-Chord, which tends to be almost constant.
Table 1: Comparison between different trust-based solutions in terms of the decrease of successful lookups with a variable amount of spartaci in the network.

| | **S-Chord** | **improved S-Chord** | **GeTrust** | **Koyanagi’s** |
|---|---|---|---|---|
| **8,000 spartaci** | -14.93% | -4.49% | -15.77% | -26.96% |
| **10,000 spartaci** | -33.07% | -2.00% | -32.81% | -58.14% |
Figure 16: Successful lookups versus time in S-Chord, enhanced S-Chord, GeTrust, and Koyanagi’s solution with 8,000 spartaci in the network.
Figure 17: Successful lookups versus time in S-Chord, enhanced S-Chord, GeTrust, and Koyanagi’s solution with 10,000 spartaci in the network.
The decreasing performances of GeTrust may be due to the chance that spartaci could assume the role of guarantors: this does not lead to a complete worsening of the overall performances, since the direct trust towards the guarantors decreases, but they, however, tend to follow the decreasing trend of S-Chord, where no guarantors are present.
Figure 18: Average number of hops versus time in S-Chord, enhanced S-Chord, GeTrust, and Koyanagi’s solution with 8,000 spartaci in the network.
The worst performances are those of Koyanagi’s
solution. This may be due to its reliance on trust aggregation
and propagation that may boost the collusion activity of
_spartaci._
In Figure 17 we show the performances of S-Chord,
GeTrust, and Koyanagi’s solution with the ones of the
improved version of S-Chord, when bogus peers are 10,000
and considering successful lookups. The same conditions
used for simulations of Figure 16 apply and we can draw
more or less the same considerations. The curves of S-Chord
and GeTrust seem to overlap much more and the decrease
in the performances is more marked as reported in Table 1.
From that table, it can be inferred that the improved S-Chord solution is better than both S-Chord and GeTrust and much better than Koyanagi’s strategy.
In Figure 18 we show a comparison between the aforementioned solutions in terms of average hops per lookup
and considering 8,000 spartaci, the most critical situation
as seen previously. As we can see, Koyanagi’s solution is the worst, since it is based solely on trust and its propagation and aggregation and is thus much more vulnerable to a Spartacus attack, even if its hop number is on average already double that of standard Chord by default [32].
In order to compare GeTrust and the other solutions
effectively, we do not consider the messages exchanged with
guarantor nodes and archive nodes, but only those used to
actually perform the lookup. GeTrust performs similarly to
S-Chord, with a slight growth in the hops as the simulation time increases, while the improved version of S-Chord succeeds in maintaining an almost constant trend, similarly to standard Chord, and in remaining under the $O(\log_2 N)$ threshold.
###### 6. Conclusions
In this article a deep analysis of some trust-based countermeasures for Chord, under a Sybil or a Spartacus attack,
has been presented. We have numerically studied the consequences of a Sybil attack in routing as well as in storage and
retrieval operations, and we introduced a solution (namely,
S-Chord), based on direct trust, to make Chord procedures
more resilient. Moreover, we evaluated the still not deeply
investigated Spartacus attack, both in Chord and in S-Chord,
proposing some effective improvements to S-Chord itself.
The results of our simulations are encouraging compared to
standard Chord and to existing methods using exclusively
trust, or other complex trust management systems. In conclusion, our approach may be regarded as a good candidate for a
security solution applied to P2P networks. Using simple trust metrics is, as a matter of fact, far less power consuming than other cryptography-based solutions, opening its applicability to new emerging scenarios featuring low-power devices, like the Internet of Things. This is a matter for possible
future research work, along with considering other DHTs,
focusing on the application of trust to lookups and PUT
and GET operations jointly, analyzing the spreading of trust
information across the peers, or varying the optimum value
of the balancing factor according to various scenarios.
###### Data Availability
The data used to support the findings of this study are
available from the corresponding author upon request.
###### Conflicts of Interest
Dr. Riccardo Pecori and Dr. Luca Veltri declare that no
conflicts of interest, regarding the publication of this paper,
are present at the moment of submission.
###### Acknowledgments
Dr. Riccardo Pecori would like to thank Mr. Antonio Enrico Buonocore for carefully proofreading the paper and polishing its English, and Stefano Marmani for introducing him to the Spartacus attack.
###### References
[1] R. Pecori and L. Veltri, “3AKEP: Triple-authenticated key
exchange protocol for peer-to-peer VoIP applications,” _Computer Communications_, vol. 85, pp. 28–40, 2016.
[2] C. Liao, S. Cheng, and M. Domb, “On Designing Energy
Efficient Wi-Fi P2P Connections for Internet of Things,” in
_Proceedings of the IEEE 85th Vehicular Technology Conference_
_(VTC Spring ’17), pp. 1–5, Sydney, NSW, June 2017._
[3] P. Maymounkov and D. Mazières, “Kademlia: a peer-to-peer
information system based on the XOR metric,” in Peer-to-Peer
_Systems, vol. 2429 of Lecture Notes in Computer Science, pp. 53–_
65, Springer-Verlag, 2002.
[4] I. Stoica, R. Morris, D. Karger, M. F. Kaashoek, and H.
Balakrishnan, “Chord: a scalable peer-to-peer lookup service
for internet applications,” in Proceedings of the Conference
_on Applications, Technologies, Architectures, and Protocols for_
_Computer Communications (SIGCOMM ’01), pp. 149–160, San_
Diego, Calif, USA, August 2001.
[5] S. Ratnasamy, P. Francis, M. Handley, R. Karp, and S. Schenker,
“A scalable content-addressable network,” in Proceedings of
_the Conference on Applications, Technologies, Architectures, and_
_Protocols for Computer Communications, ser. SIGCOMM ’01, pp._
161–172, San Diego, California, USA, 2001.
[6] A. Rowstron and P. Druschel, “Pastry: Scalable, Decentralized
Object Location, and Routing for Large-Scale Peer-to-Peer
Systems,” in Middleware 2001, vol. 2218 of Lecture Notes in
_Computer Science, pp. 329–350, Springer Berlin Heidelberg,_
Berlin, Heidelberg, 2001.
[7] J. R. Douceur, “The sybil attack,” in Peer-to-Peer Systems, P.
Druschel, F. Kaashoek, and A. Rowstron, Eds., vol. 2429 of ser.
_Lecture Notes in Computer Science, pp. 251–260, Springer Berlin_
Heidelberg, 2002.
[8] H. Rowaihy, W. Enck, P. McDaniel, and T. La Porta, “Limiting
sybil attacks in structured P2P networks,” in Proceedings of the
_26th IEEE International Conference on Computer Communica-_
_tions (INFOCOM ’07), pp. 2596–2600, May 2007._
[9] M. Kohnen, “Analysis and optimization of routing trust values
in a Kademlia-based distributed hash table in a malicious environment,” in Proceedings of the 2nd Baltic Congress on Future
_Internet Communications, BCFIC ’12, pp. 252–259, Lithuania,_
April 2012.
[10] R. Pecori, “S-Kademlia: A trust and reputation method to
mitigate a Sybil attack in Kademlia,” Computer Networks, vol.
94, pp. 205–218, 2016.
[11] R. Pecori and L. Veltri, “Trust-based routing for Kademlia
in a sybil scenario,” in Proceedings of the 22nd International
_Conference on Software, Telecommunications and Computer_
_Networks, SoftCOM ’14, pp. 279–283, Croatia, September 2014._
[12] M. De Donno, N. Dragoni, A. Giaretta, and A. Spognardi,
“DDoS-Capable IoT Malwares: Comparative Analysis and
Mirai Investigation,” Security and Communication Networks,
vol. 2018, pp. 1–30, 2018.
[13] S. Delgado-Segura, C. Pérez-Solà, J. Herrera-Joancomartí, G.
Navarro-Arribas, and J. Borrell, “Cryptocurrency Networks: A
New P2P Paradigm,” Mobile Information Systems, vol. 2018,
Article ID 2159082, 16 pages, 2018.
[14] C. Lesniewski-Laas and M. F. Kaashoek, “Whanau: A Sybil-proof distributed hash table,” in Proceedings of the 7th USENIX
_Conference on Networked Systems Design and Implementation,_
_ser. NSDI’10, 2010._
[15] P. Mittal, M. Caesar, and N. Borisov, “X-vine: Secure and
pseudonymous routing in dhts using social networks,” in
_Proceedings of the in 19th Annual Network and Distributed_
_System Security Symposium, (NDSS ’12), San Diego, California,_
USA, 2012.
[16] M. N. Al-Ameen and M. Wright, “Persea: A sybil-resistant social DHT,” in Proceedings of the Third ACM Conference on Data and _Application Security and Privacy_, ACM, p. 169, New York, NY, USA, February 2013.
-----
16 Security and Communication Networks
[17] M. N. Al-Ameen and M. Wright, “IPersea: Towards improving
the Sybil-resilience of social DHT,” Journal of Network and
_Computer Applications, vol. 71, pp. 1–10, 2016._
[18] Z. Yang, J. Xue, X. Yang, X. Wang, and Y. Dai, “VoteTrust:
Leveraging friend invitation graph to defend against social
network sybils,” IEEE Transactions on Dependable and Secure
_Computing, vol. 13, no. 4, pp. 488–501, 2016._
[19] Z. Tan, X. Wang, and X. Wang, “A Novel Iterative and Dynamic
Trust Computing Model for Large Scaled P2P Networks,”
_Mobile Information Systems, vol. 2016, Article ID 3610157, 12_
pages, 2016.
[20] L. Shi, J. Zhou, Q. Huang, and W. Yan, “A modification on
the Chord finger table for improving search efficiency,” in
_Proceedings of the 13th IEEE/ACIS International Conference on_
_Computer and Information Science, ICIS ’14, pp. 395–398, China,_
June 2014.
[21] P. Zave, “Reasoning About Identifier Spaces: How to Make
Chord Correct,” IEEE Transactions on Software Engineering, vol.
43, no. 12, pp. 1144–1156, 2017.
[22] C. T. Min and L. T. Ming, “Investigate SPRON Convergence
Time Using Aggressive Chord and Aggressive AP-Chord,” in
_Proceedings of the 12th International Conference on Information_
_Technology: New Generations, ITNG ’15, pp. 61–66, USA, April_
2015.
[23] I. Woungang, F.-H. Tseng, Y.-H. Lin, L.-D. Chou, H.-C. Chao,
and M. S. Obaidat, “MR-Chord: Improved Chord Lookup Performance in Structured Mobile P2P Networks,” IEEE Systems
_Journal, vol. 9, no. 3, pp. 743–751, 2015._
[24] T. Amft and K. Graffi, “Moving peers in distributed, location-based peer-to-peer overlays,” in Proceedings of the International
_Conference on Computing, Networking and Communications,_
_ICNC ’17, pp. 906–911, USA, January 2017._
[25] W. Zhang, B. Sun, and Y. Sun, “Trustchord: chord protocol
based on the trust management mechanism,” in Proceedings
_of the International Conference on Advanced Intelligence and_
_Awareness Internet (AIAI ’10), pp. 64–67, Beijing, China._
[26] Z. Liang and W. Shi, “PET: A PErsonalized Trust Model with
Reputation and Risk Evaluation for P2P Resource Sharing,” in
_Proceedings of the 38th Annual Hawaii International Conference_
_on System Sciences, HICSS ’05, pp. 201b–201b, Big Island, HI,_
USA.
[27] J. Wang and J. Liu, “The comparison of distributed P2P trust
models based on quantitative parameters in the file downloading scenarios,” Journal of Electrical and Computer Engineering,
vol. 2016, Article ID 4361719, pp. 1–10, 2016.
[28] L. Mekouar, Y. Iraqi, and R. Boutaba, “Reputation-based trust
management in peer-to-peer systems: Taxonomy and anatomy,”
in Handbook of Peer-to-Peer Networking, X. Shen, H. Yu, J.
Buford, and M. Akon, Eds., pp. 689–732, Springer US, 2010.
[29] X. L. Xie, “Creditability assessment of dealers in P2P e-commerce,” in Proceedings of the 2016 IEEE Advanced _Information Management, Communicates, Electronic and Automation_
_Control Conference, IMCEC ’16, pp. 1326–1333, China, October_
2016.
[30] X. Ding and K. Koyanagi, “Study on trust-based maintenance
of overlays in structured P2P systems,” in Proceedings of the
_International Conference on Computational Problem-Solving,_
_ICCP ’11, pp. 598–603, China, October 2011._
[31] R. R. Rout and D. Talreja, “Trust-based decentralized service
discovery in structured Peer-to-Peer networks,” in Proceedings
_of the 11th IEEE India Conference, INDICON ’14, India, Decem-_
ber 2014.
[32] Y. Han, K. Koyanagi, T. Tsuchiya, T. Miyosawa, and H. Hirose,
“A trust-based routing strategy in structured P2P overlay
networks,” in Proceedings of the 27th International Conference
_on Information Networking, ICOIN ’13, pp. 77–82, Thailand,_
January 2013.
[33] X. Meng and D. Liu, “GeTrust: A Guarantee-Based Trust
Model in Chord-Based P2P Networks,” IEEE Transactions on
_Dependable and Secure Computing, vol. 15, no. 1, pp. 54–68, 2018._
[34] R. Pecori, “A comparison analysis of trust-adaptive approaches
to deliver signed public keys in P2P systems,” in Proceedings of
_the 7th International Conference on New Technologies, Mobility_
_and Security, NTMS ’15, pp. 1–5, France, July 2015._
[35] F. Tomonori, N. Yoshitaka, Y. Shiraishi, and T. Osamu, “An
effective lookup strategy for recursive and iterative lookup on
hierarchical dht,” International Journal of Informatics Society
_(IJIS), vol. 4, no. 3, pp. 143–152, 2012._
[36] M. Amoretti, M. Picone, F. Zanichelli, and G. Ferrari, “Simulating mobile and distributed systems with DEUS and ns-3,” in Proceedings of the 11th International Conference on High
_Performance Computing and Simulation, HPCS ’13, pp. 107–114,_
Finland, July 2013.
[37] J. Zhang, R. Zhang, J. Sun, Y. Zhang, and C. Zhang, “TrueTop:
A Sybil-Resilient System for User Influence Measurement on
Twitter,” IEEE/ACM Transactions on Networking, vol. 24, no. 5,
pp. 2834–2846, 2016.
[38] M. S. Khan and N. M. Khan, “Low Complexity Signed Response
Based Sybil Attack Detection Mechanism in Wireless Sensor
Networks,” Journal of Sensors, vol. 2016, pp. 1–9, 2016.
[39] M. Kohnen, “Applying trust and reputation mechanisms to a
Kademlia-based Distributed Hash Table,” in Proceedings of the
_IEEE International Conference on Communications, ICC ’12, pp._
1036–1041, Canada, June 2012.
[40] A. Anjum, M. Sporny, and A. Sill, “Blockchain Standards for
Compliance and Trust,” IEEE Cloud Computing, vol. 4, no. 4,
pp. 84–90, 2017.
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.1155/2018/4963932?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1155/2018/4963932, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "GOLD",
"url": "http://downloads.hindawi.com/journals/scn/2018/4963932.pdf"
}
| 2,018
|
[
"JournalArticle"
] | true
| 2018-11-12T00:00:00
|
[
{
"paperId": "660bed056b20b9bc54093d78b7b61b0947cf4739",
"title": "Cryptocurrency Networks: A New P2P Paradigm"
},
{
"paperId": "ece4cc22924a45619416d0ef3e7fb20d8ea4ece7",
"title": "Vulnerability Analysis of Network Scanning on SCADA Systems"
},
{
"paperId": "658bd6ad713bd86e5a2add7dcab4847c0eb0a353",
"title": "DDoS-Capable IoT Malwares: Comparative Analysis and Mirai Investigation"
},
{
"paperId": "17dafb105068b17b3eb4970fd80dd7e6b05931cf",
"title": "Blockchain Standards for Compliance and Trust"
},
{
"paperId": "cc0c287a05adf5cfed3c65982e68c69800d23319",
"title": "On Designing Energy Efficient Wi-Fi P2P Connections for Internet of Things"
},
{
"paperId": "e5c541ac892b182a9597a06f0b6e1e0e6b8a637e",
"title": "Reasoning About Identifier Spaces: How to Make Chord Correct"
},
{
"paperId": "3555b40711a2b0d5790430f8f848977a0cea4f1d",
"title": "Creditability assessment of dealers in P2P e-commerce"
},
{
"paperId": "f6796627ee20187d71dc2d97f8ba71cedcb43b6c",
"title": "iPersea: Towards improving the Sybil-resilience of social DHT"
},
{
"paperId": "395c08b916fab4d39f376c1cbcedc1ed087e2d34",
"title": "3AKEP: Triple-authenticated key exchange protocol for peer-to-peer VoIP applications"
},
{
"paperId": "60b2708c90f9ea0a86d6970336fad1c7e220d405",
"title": "The Comparison of Distributed P2P Trust Models Based on Quantitative Parameters in the File Downloading Scenarios"
},
{
"paperId": "d26c0ffb387829262196980b4ed6550bac001f55",
"title": "A Novel Iterative and Dynamic Trust Computing Model for Large Scaled P2P Networks"
},
{
"paperId": "5bfce77ea7799d826e6faca26426f601def52b7f",
"title": "S-Kademlia: A trust and reputation method to mitigate a Sybil attack in Kademlia"
},
{
"paperId": "b2e66961cab5f00e2873b291f9c31382d29e64ea",
"title": "Autonomous Gait Event Detection with Portable Single-Camera Gait Kinematics Analysis System"
},
{
"paperId": "e15e8f17300255c8ce95258f4fb2df58365ce82f",
"title": "MR-Chord: Improved Chord Lookup Performance in Structured Mobile P2P Networks"
},
{
"paperId": "336629800964489fa2cf1d78a0180696a557e93f",
"title": "A comparison analysis of trust-adaptive approaches to deliver signed public keys in P2P systems"
},
{
"paperId": "5a715e2fe02bddd83c7be8e91ca40a6546a5203f",
"title": "TrueTop: A Sybil-Resilient System for User Influence Measurement on Twitter"
},
{
"paperId": "cfff11f0c072ebac5d506254b52b8228d475d1b5",
"title": "Investigate SPRON Convergence Time Using Aggressive Chord and Aggressive AP-Chord"
},
{
"paperId": "690c74bf1448fe9faf1e3246be69b3581068acf0",
"title": "Trust-based decentralized service discovery in structured Peer-to-Peer networks"
},
{
"paperId": "0f98863c8ba3069863aaea5da7839c4caaae1614",
"title": "Trust-based routing for Kademlia in a sybil scenario"
},
{
"paperId": "a9632676d019a8f00b6b8994472561d882b1d086",
"title": "A modification on the Chord finger table for improving search efficiency"
},
{
"paperId": "47faf047a4157073d42e0988e4ebc1a660b7e4e4",
"title": "Simulating mobile and distributed systems with DEUS and ns-3"
},
{
"paperId": "56adc6a44b1d7397c80d219321b055b2ccea6c22",
"title": "VoteTrust: Leveraging friend invitation graph to defend against social network Sybils"
},
{
"paperId": "7499e266887d1b5b070e486e694591697929fe7f",
"title": "Persea: a sybil-resistant social DHT"
},
{
"paperId": "3d9434f1da75b88a1a950443768e1bdc600b857d",
"title": "A trust-based routing strategy in structured P2P overlay networks"
},
{
"paperId": "fc159c799be60d6bcafa5a4744e089828c9af9ca",
"title": "An Effective Lookup Strategy for Recursive and Iterative Lookup on Hierarchical DHT"
},
{
"paperId": "81ae41d0caa85b0c2d2e423fa8e7f41b94c86145",
"title": "Applying trust and reputation mechanisms to a Kademlia-based Distributed Hash Table"
},
{
"paperId": "ee0d55b58d006d783437c458704bbd4c4dadfb18",
"title": "Analysis and optimization of routing trust values in a Kademlia-based Distributed Hash Table in a malicious environment"
},
{
"paperId": "04434837cb5487cf4b487ef328c450c40c375a73",
"title": "Study on trust-based maintenance of overlays in structured P2P systems"
},
{
"paperId": "d072cf541b29e9aad70ebf9340bfe0171ff9fc02",
"title": "Whanau: A Sybil-proof Distributed Hash Table"
},
{
"paperId": "d0dd4fee9d433d16428e8530da5742da6543e50f",
"title": "Handbook of Peer-to-Peer Networking"
},
{
"paperId": "067b124d5635b12bdd868d71fcbb27e4f53aa470",
"title": "Limiting Sybil Attacks in Structured P2P Networks"
},
{
"paperId": "a0811379b8d2ea7d17c9a4aed00f76d20d855cf9",
"title": "PET: A PErsonalized Trust Model with Reputation and Risk Evaluation for P2P Resource Sharing"
},
{
"paperId": "eb51cb223fb17995085af86ac70f765077720504",
"title": "Kademlia: A Peer-to-Peer Information System Based on the XOR Metric"
},
{
"paperId": "35516916cd8840566acc05d0226f711bee1b563b",
"title": "The Sybil Attack"
},
{
"paperId": "cf025469b2d7e4b37c7f2d2bf0d46c6776f48fd4",
"title": "Pastry: Scalable, Decentralized Object Location, and Routing for Large-Scale Peer-to-Peer Systems"
},
{
"paperId": "680ba806b8a651e8cb2e2d64a9d6bc2325a8eea1",
"title": "A scalable content-addressable network"
},
{
"paperId": "f03db79dc2922af3ec712592c8a3f69182ec5d65",
"title": "Chord: A scalable peer-to-peer lookup service for internet applications"
},
{
"paperId": "eb452b225ab5a047e7d1ede136a3ca8a39c31dc8",
"title": "GeTrust: A Guarantee-Based Trust Model in Chord-Based P2P Networks"
},
{
"paperId": "b812ff1ff57b0beededb72bf8b9e0470b64cdc49",
"title": "Moving peers in distributed, location-based peer-to-peer overlays"
},
{
"paperId": "66be23d2d7e120e4d727fbfb7d188b4a68e010e0",
"title": "X-Vine: Secure and Pseudonymous Routing in DHTs Using Social Networks"
},
{
"paperId": "a1e9cf0348b873370461cf6f02772792a70898aa",
"title": "Reputation-Based Trust Management in Peer-to-Peer Systems: Taxonomy and Anatomy"
},
{
"paperId": "e0a9226104a8469c9b077033e3ff8f0da50cb0d6",
"title": "Trustchord: chord protocol based on the trust management mechanism"
},
{
"paperId": null,
"title": "LowComplexity Signed Response Based Sybil Attack Detection Mechanism in Wireless Sensor Networks"
}
] | 22,017
|
en
|
[
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/012b5aa479f62062944ab4190a60d6f940f10c37
|
[] | 0.919339
|
A Sponge-Based Key Expansion Scheme for Modern Block Ciphers
|
012b5aa479f62062944ab4190a60d6f940f10c37
|
Energies
|
[
{
"authorId": "2185802696",
"name": "Maciej Sawka"
},
{
"authorId": "144009429",
"name": "Marcin Niemiec"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": [
"http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:ch:bel-155563",
"https://www.mdpi.com/journal/energies",
"http://www.mdpi.com/journal/energies"
],
"id": "1cd505d9-195d-4f99-b91c-169e872644d4",
"issn": "1996-1073",
"name": "Energies",
"type": "journal",
"url": "http://www.e-helvetica.nb.admin.ch/directAccess?callnumber=bel-155563"
}
|
Many systems in use today require strong cryptographic primitives to ensure confidentiality and integrity of data. This is especially true for energy systems, such as smart grids, as their proper operation is crucial for the existence of a functioning society. Because of this, we observe new developments in the field of cryptography every year. Among the developed primitives, one of the most important and widely used are iterated block ciphers. From AES (Advanced Encryption Standard) to LEA (Lightweight Encryption Algorithm), these ciphers are omnipresent in our world. While security of the encryption process of these ciphers is often meticulously tested and verified, an important part of them is neglected—the key expansion. Many modern ciphers use key expansion algorithms which produce reversible sub-key sequences. This means that, if the attacker finds out a large-enough part of this sequence, he/she will be able to either calculate the rest of the sequence, or even the original key. This could completely compromise the cipher. This is especially concerning due to research done into side-channel attacks, which attempt to leak secret information from memory. In this paper, we propose a novel scheme which can be used to create key expansion algorithms for modern ciphers. We define two important properties that a sequence produced by such algorithm should have and ensure that our construction fulfills them, based on the research on hashing functions. In order to explain the scheme, we describe an example algorithm constructed this way, as well as a cipher called IJON which utilizes it. In addition to this, we provide results of statistical tests which show the unpredictability of the sub-key sequence produced this way. The tests were performed using a test suite standardized by NIST (National Institute for Standards and Technology). The methodology of our tests is also explained. Finally, the reference implementation of the IJON cipher is published, ready to be used in software. Based on the results of tests, we conclude that, while more research and more testing of the algorithm is advised, the proposed key expansion scheme provides a very good generation of unpredictable bits and could possibly be used in practice.
|
# energies
_Article_
## A Sponge-Based Key Expansion Scheme for Modern Block Ciphers
**Maciej Sawka *[,†]** **and Marcin Niemiec** **[†]**
Department of Telecommunications, AGH University of Science and Technology, Mickiewicza 30,
30-059 Krakow, Poland
*** Correspondence: maciejsawka@gmail.com**
† These authors contributed equally to this work.
**Citation:** Sawka, M.; Niemiec, M. A Sponge-Based Key Expansion Scheme for Modern Block Ciphers. _Energies_ 2022, 15, 6864. https://doi.org/10.3390/en15196864
Academic Editor: Wei-Hsin Chen
Received: 8 August 2022; Accepted: 15 September 2022; Published: 20 September 2022
**Publisher’s Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
**Abstract: Many systems in use today require strong cryptographic primitives to ensure confidentiality**
and integrity of data. This is especially true for energy systems, such as smart grids, as their proper
operation is crucial for the existence of a functioning society. Because of this, we observe new developments in the field of cryptography every year. Among the developed primitives, one of the most
important and widely used are iterated block ciphers. From AES (Advanced Encryption Standard) to
LEA (Lightweight Encryption Algorithm), these ciphers are omnipresent in our world. While security
of the encryption process of these ciphers is often meticulously tested and verified, an important part
of them is neglected—the key expansion. Many modern ciphers use key expansion algorithms which
produce reversible sub-key sequences. This means that, if the attacker finds out a large-enough part
of this sequence, he/she will be able to either calculate the rest of the sequence, or even the original
key. This could completely compromise the cipher. This is especially concerning due to research done
into side-channel attacks, which attempt to leak secret information from memory. In this paper, we
propose a novel scheme which can be used to create key expansion algorithms for modern ciphers.
We define two important properties that a sequence produced by such algorithm should have and
ensure that our construction fulfills them, based on the research on hashing functions. In order to
explain the scheme, we describe an example algorithm constructed this way, as well as a cipher
called IJON which utilizes it. In addition to this, we provide results of statistical tests which show the
unpredictability of the sub-key sequence produced this way. The tests were performed using a test
suite standardized by NIST (National Institute for Standards and Technology). The methodology
of our tests is also explained. Finally, the reference implementation of the IJON cipher is published,
ready to be used in software. Based on the results of tests, we conclude that, while more research and
more testing of the algorithm is advised, the proposed key expansion scheme provides a very good
generation of unpredictable bits and could possibly be used in practice.
**Keywords: cybersecurity; cryptography; block ciphers; symmetric key; iterated ciphers; smart grids**
**1. Introduction**
Data integrity and confidentiality are the crucial security requirements of information systems and communication networks, including smart grids [1,2]. Deployment of
protection methods allows for secure data transmission in cyberspace. However, the security services need efficient cryptography algorithms, such as symmetric block ciphers [3].
These kinds of algorithms are building blocks of modern security services—from privacy
protection to authentication of smart meters [4].
Although most discussion on security nowadays focuses on modern algorithms and protocols, cryptography itself is a very old art. One of the oldest known examples of encryption is Caesar’s cipher, allegedly used by the ruler of ancient Rome. It was a simple cipher operating on text written in the Latin alphabet. As centuries passed, other ciphers improved on its design, with a notable example being the Vigenère cipher. It was also based on the Latin alphabet but used a secret key in a way similar to modern constructions. Much later, in the 20th century, the Enigma machine was used by the German army during World War II to
encrypt classified military information. The next step in the evolution of cryptography came with the invention of the computer, which surpassed human computing ability. This made it necessary to create more complex encryption algorithms that would be able to withstand the newfound computing power without breaking. Work in the fields of telecommunications and information theory, based on the research of Claude Shannon (among others), played an important role in this development. This led to the invention of the Data Encryption Standard (DES), which could be considered the beginning of the era of modern cryptography, based on ciphers designed for digital computers. Moving into the 1990s and early 2000s,
improvements made in the area of processing units and other integrated circuits caused
DES to become obsolete. Its key sizes were deemed too small and attempts to improve its
security through multiple iterations (Triple DES) were thwarted by new attacks. A new
standard for cryptography became a necessity. Out of this necessity, Advanced Encryption
Standard (AES) was introduced. Despite the time passed since its inception, AES remains in
widespread use and is still considered a secure choice for data confidentiality. Nonetheless,
new algorithms are still being created in order to improve—if not in terms of security then in
terms of performance and ease of implementation [5,6]. Both DES and AES, as well as many
modern ciphers, follow a structure called an iterated cipher in which the encryption process
is split into a number of rounds. Every round requires a separate secret sub-key. To fulfill
this requirement, each cipher of this type defines its own key expansion algorithm. The role
of the algorithm is to derive a sequence of sub-keys from the main key.
_1.1. Motivation_
Over the years, a great deal of effort has been put into making modern symmetric ciphers secure. Much of this effort has focused on the encryption process itself, and less on the process of key expansion. This may be observed in the fact that many currently used and upcoming ciphers have fully reversible key expansion algorithms. This means that, given a long enough sequence of sub-keys, an attacker is able to recover not only the rest of the sequence, but often the main key itself. Example ciphers which exhibit this behavior include
AES and LEA. AES has been the standard for symmetric encryption for around 20 years,
and no practical attacks against it have been found. Specifically, no attacks were found
that would target its key expansion. Despite this, we cannot be certain that a new attack will not be developed in 1, 5, or perhaps 10 years. Modern constructions should not ignore this possibility. This is especially true when one takes into account the research into side-channel attacks, which do not attack the cipher directly. Instead, they attack the environment in which the algorithm is executed. The goal of such attacks is to leak secret values from memory. If
a sufficiently large part of the sub-key sequence was to be leaked this way, any cipher with
reversible key expansion would be instantly compromised. To prevent this from happening,
the sub-key sequence should have two important properties:
- The main key should not be directly used as part of the sub-key sequence;
- Every sub-key should be sufficiently difficult to derive from any other sub-key, including the ones immediately before and after it in the sequence.
The word “sufficiently” in this context means a varying degree of security, but, in general, it should be practically impossible to guess one sub-key based on the knowledge of another. This would mean that, in order to break the encryption, an attacker would have to leak every sub-key in the sequence. The cost, complexity, and low reliability of side-channel attacks should then render this attack vector impractical.
_1.2. Contribution_
The authors of this paper propose a key expansion scheme based on the sponge
construction. This solution can be used to create key expansion algorithms for modern block
ciphers. The scheme produces a sequence of sub-keys which is difficult to reverse thanks
to the properties of the sponge construction. This makes it difficult to retrieve the original
key or other sub-keys from part of the sequence. This is achieved by using the excess bits
of the state of the sponge as a variable unknown to the attacker, as well as by increasing the work performed between absorbing the input and squeezing the output. This protects ciphers from attacks which aim to retrieve original key material from individual sub-keys, e.g., slide attacks or attacks based on side-channel data extraction.
In addition to the scheme itself, a cipher called IJON (pronounced “e-yon”) is proposed.
It serves as an example application of the scheme. It is a block cipher with 128 bits of block
and key size optimized for processing units capable of operating on 32-bit words. The
sequence of sub-keys used in the cipher is generated using a key expansion algorithm
based on the sponge construction, with a 96-bit sponge state and a 32-bit bitrate.
Finally, the test results are described. The sequence of sub-keys produced by the key
expansion algorithm of IJON was tested using a suite of tests for cryptographically secure
pseudo-random number generators (CSPRNGs) standardized by the National Institute of
Standards and Technology (NIST) [7]. The suite checks whether a sequence behaves like
a truly random, unpredictable stream of bits. The specific methodology assumed during testing is described as well.
The remainder of the paper proceeds as follows: Section 2 provides an introduction to
cryptography techniques applied in modern block ciphers. Sponge construction is explained
in Section 3. In Section 4, a new sponge-based key expansion scheme is proposed. The IJON
cipher is explained in Section 5 in detail, including both the key expansion as well as the
encryption processes. Section 6 describes security considerations, testing methodology and
results of statistical tests of the cipher. Finally, Section 7 concludes the paper.
The paper is intended for cryptographers working on new symmetric block ciphers.
The authors hope it provides them with tools necessary to create secure key expansion
algorithms for their constructions. Additionally, any readers interested in developments of
cryptography should find the paper interesting.
_1.3. Related Works_
The problems arising from the use of reversible sub-key sequences have been noticed before. Recent developments meant to create a secure key expansion scheme have mostly focused on advanced mathematics, specifically chaos maps [8–10]. While these approaches may well result in a solution, they are also difficult to follow for readers unfamiliar
with the topic. When new ciphers are created, it is not only important that they are safe, but
also that it is relatively easy to prove that they are safe, or to approximate their level of security.
That is why we propose a solution that is based on less complex concepts and constructions—
specifically, the sponge construction. Instead of placing the trust in mathematics, we place
it in the research previously performed in the field of hashing functions. This way, the
resulting solution is much easier to understand for people not acquainted with advanced
mathematics—for example software developers, project managers, and smart grid engineers.
We believe that, since those are the people who will benefit from results of our work, it is
important that they are able to comprehend it. At the same time, we strongly believe that
the increase in simplicity will not negatively impact the practical application of our solution.
In fact, we provide a reference implementation of the proposed cipher which is ready to be
used in software which needs symmetric ciphers with strong key expansion algorithms.
_1.4. Acronyms_
The acronyms used in the paper are listed and expanded in Table 1 below. The name
“IJON” is not an acronym.
**Table 1. Acronyms used in the paper.**
**Acronym** **Meaning**
AES Advanced Encryption Standard
ARX Add-Rotate-XOR
ASCII American Standard Code for Information Interchange
CPU Central Processing Unit
CSPRNG Cryptographically Secure Pseudo-Random Number Generator
DES Data Encryption Standard
LEA Lightweight Encryption Algorithm
LTS Long Trail Strategy
NIST National Institute for Standards and Technology
P-BOX Permutation box—the permutation layer of an SPN
PRNG Pseudo-Random Number Generator
S-BOX Substitution box—the substitution layer of an SPN
SHA-3 Secure Hashing Algorithm 3
SPN Substitution-Permutation Network
WTS Wide Trail Strategy
**2. Symmetric Block Ciphers**
An algorithm is a set of instructions meant to be performed in a specific order with a certain purpose behind it. Therefore, a cipher can be viewed as a set of algorithms. Every cipher defines at least two algorithms: encryption and decryption. The purpose of encryption is to transform secret data (called plaintext) in a way that prevents anyone without knowledge of the special secret key from recovering it. At the same time, anyone who knows the key can easily recover the plaintext from the encrypted data (called ciphertext) using
the decryption algorithm. Additional algorithms may also be a part of the cipher if they
are necessary.
Ciphers are divided into two categories as seen in Figure 1: symmetric ciphers and
asymmetric ciphers. Symmetric ciphers use the same key during encryption and decryption,
while asymmetric ciphers use two distinct, albeit related keys for each operation. The
difference does not have any security implications—neither type of cipher is inherently
“more secure”. Instead, it necessitates different assumptions about the privacy of the key.
This in turn gives each type of cipher different use cases. In practice, both types are often used together, complementing each other. Symmetric
ciphers can further be divided into stream and block ciphers. Stream ciphers encrypt and
decrypt one bit at a time. Block ciphers operate on blocks of data, which have fixed length
usually defined in bits or bytes. Encryption and decryption algorithms of such ciphers take
a block of data of specified length as input and produce a different block of data of the
same length.
**Figure 1. Types of ciphers.**
A popular design choice for block ciphers is a construction called an iterated cipher,
shown in Figure 2. Instead of creating one large algorithm for encryption, a round function
is defined. It modifies the internal state of the cipher during execution and is applied to the plaintext a certain number of times, called the number of rounds. The output of the last round is the
ciphertext. Each iteration of the round function usually uses a sub-key. It is a smaller piece
of data generated from the original key. An inverse round function must also be defined. It
is used during decryption, to undo the work performed during encryption. Decryption
usually applies sub-keys in reverse order. This approach not only makes creating a cipher
simpler, but also minimizes code size. However, it makes it necessary to define an additional algorithm within the cipher, called the key expansion algorithm. Its role is to generate a sequence of
sub-keys from the main key.
**Figure 2. Encryption in an iterated cipher with r number of rounds.**
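To make this structure concrete, the following is a minimal C sketch of the encryption loop of an iterated cipher. The round function and the operations inside it are toy placeholders invented for illustration, not part of any cipher discussed in this paper.

```c
#include <stdint.h>

#define ROUNDS 10

/* Toy round function: XOR in the sub-key, then rotate for diffusion.
   Purely illustrative -- a real round function is far more involved. */
static uint32_t toy_round(uint32_t state, uint32_t subkey) {
    state ^= subkey;
    return (state << 3) | (state >> 29);
}

/* Encryption in an iterated cipher: the round function is applied
   ROUNDS times, consuming one sub-key per round; the output of the
   final round is the ciphertext. */
static uint32_t toy_encrypt(uint32_t plaintext,
                            const uint32_t subkeys[ROUNDS]) {
    uint32_t state = plaintext;
    for (int r = 0; r < ROUNDS; r++)
        state = toy_round(state, subkeys[r]);
    return state;
}
```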
Iterated ciphers may be implemented in a multitude of ways. One of them is a
substitution–permutation network (SPN), as seen in Figure 3. This type of construction is
divided into two layers: the substitution layer and the permutation layer. The role of the first
one is to achieve nonlinearity. This means that the cipher is more difficult to approximate
with linear functions. This mitigates attacks based on linear cryptanalysis [11]. Nonlinearity
is achieved by using a function usually called an S-BOX. Substitution is the act of replacing
one value with another. It is often implemented using a lookup-table (LUT) to allow any
possible mapping from input to output. The other part of an SPN is a permutation layer. Its
role is to mix the bits of the state together. This layer might only swap bits around, or also
mix them using XOR, matrix multiplication or other operations. In the end, the purpose of
the P-BOX is to increase diffusion. Making bits of state change positions with each round
causes each bit of the output to depend on multiple input bits.
**Figure 3. Full round of SPN.**
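As a small illustration of a lookup-table S-BOX, the sketch below substitutes the two 4-bit nibbles of a byte through a table. The table values are arbitrary placeholders chosen for the example, not the S-BOX of any cipher discussed here.

```c
#include <stdint.h>

/* An illustrative 4-bit S-BOX implemented as a lookup table (LUT);
   the mapping below is a placeholder, not taken from any real cipher. */
static const uint8_t SBOX[16] = {
    0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
    0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2
};

/* Substitute each 4-bit nibble of a byte through the table. */
static uint8_t sbox_byte(uint8_t x) {
    return (uint8_t)((SBOX[x >> 4] << 4) | SBOX[x & 0x0F]);
}
```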
An alternative to the SPN is a construction called the Feistel network. Assume that the desired block size of the cipher is N bits. To create a Feistel network, an F function needs to be
defined. The function has to accept two inputs: half of the block (N/2 bits long) and a
sub-key. The Feistel network begins by splitting the input block into two halves. During
each round, the left half is combined with a sub-key by the F function. The output of
the function is then combined with the right half using the XOR operation. As the last
step in a round, the halves are reordered. The left half becomes the right and vice versa.
This may continue for any number of rounds. After the last round, the two halves are
concatenated into an N-bit block of ciphertext. The Feistel network is a simple construction
with great potential. It was proven to be secure even with a small number of rounds as
long as the input of the F function is sufficiently hard to predict based on its output [12].
Additionally, the function F does not need to be reversible as the decryption algorithm uses
the function itself rather than its inverse. The only requirement is that the sub-keys have to
be supplied to the round function in reverse order during decryption.
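The round structure described above fits in a few lines of C. In this sketch the F function body is a deliberately trivial placeholder (the construction accepts any keyed transformation), and the block is split into two 32-bit halves for a toy 64-bit block size.

```c
#include <stdint.h>

/* Placeholder F function: any keyed transformation of a half-block
   works here, since a Feistel network never needs its inverse. */
static uint32_t feistel_f(uint32_t half, uint32_t subkey) {
    uint32_t x = half + subkey;
    return ((x << 5) | (x >> 27)) ^ subkey;
}

/* One Feistel encryption pass over a 64-bit block (two 32-bit
   halves): XOR F(left, k) into the right half, then swap the halves.
   Decryption is the same loop with sub-keys in reverse order. */
static void feistel_encrypt(uint32_t *left, uint32_t *right,
                            const uint32_t *subkeys, int rounds) {
    for (int i = 0; i < rounds; i++) {
        uint32_t tmp = *right ^ feistel_f(*left, subkeys[i]);
        *right = *left;   /* the left half becomes the right... */
        *left  = tmp;     /* ...and vice versa                  */
    }
}
```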
In contrast to a Feistel network or an SPN, Add-Rotate-XOR (ARX) does not directly
refer to the construction of a block cipher. Instead, ARX can be thought of as a special
category of ciphers. Ciphers of this category are built entirely out of three operations:
addition modulo $2^n$, XOR of n-bit words, and n-bit rotations by a constant amount. These
operations are very simple and easy to approximate in various ways by potential attackers.
Because of this, creation of a secure cipher based solely on them is not a trivial task.
However, if used properly, the ARX operations provide the resulting cipher with important
advantages, listed below.
- All three operations are very fast, usually taking a small number of cycles on various CPU architectures. This causes software implementations of such ciphers to be
very efficient.
- Not only is the time of their execution low but also constant. This means that ciphers
built out of them are naturally immune to side-channel attacks based on the time of
execution of certain parts of code [13].
- Since the ciphers use only basic operations, they are often very easy to implement
and analyze.
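The three ARX primitives are easy to express in C; the fragment below is a generic illustration (the operand choices are arbitrary), showing that each operation maps to a single constant-time machine instruction on most CPUs.

```c
#include <stdint.h>

/* Rotate a 32-bit word left by n bits (0 < n < 32); most compilers
   turn this pattern into a single rotate instruction. */
static uint32_t rotl32(uint32_t x, unsigned n) {
    return (x << n) | (x >> (32 - n));
}

/* A toy ARX mixing step built from the three primitives: addition
   modulo 2^32, rotation by a constant, and XOR. Every operation runs
   in constant time, which underlies the timing-attack resistance
   mentioned above. Operand choices are illustrative only. */
static void arx_mix(uint32_t *a, uint32_t *b) {
    *a += *b;             /* addition modulo 2^32          */
    *b  = rotl32(*b, 7);  /* rotation by a constant amount */
    *b ^= *a;             /* XOR                           */
}
```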
**3. Sponge Construction**
The sponge construction [14] is a scheme most often considered the core of hashing
algorithms rather than block ciphers. It was introduced as a part of the Keccak hash
algorithm, the winner of the SHA-3 (Secure Hashing Algorithm 3) competition [15]. Since
it is intended to be used in hashing functions, a sequence of bits generated using a sponge
construction is usually difficult to reverse. By “reversing” a sequence, we mean calculating
one part of the sequence knowing another part of it, or finding the “seed” it was based on.
This property also makes it useful for creation of key expansion algorithms.
The sponge construction requires three elements. The first is a function f which takes a b-bit data block as input and outputs another b-bit data block (b is the size of the internal state of the sponge construction). The second is the bitrate r, which defines the size of the chunks in which the sponge consumes input and returns output (r should always be smaller than b).
Finally, the third element is a pad function, which makes sure that input to the sponge is
always a multiple of r. In this paper, we can ignore the pad function and assume that input
always has correct length.
The sponge construction allows the creation of a function which generates output of any size (limited to multiples of the bitrate) from input of any size. Sponge functions work in the
way visualized in Figure 4. In the figure, the vertical dashed line divides the absorbing and
squeezing stages. The horizontal lines mark the part of the state directly modified by the
input and directly copied to output. Other bits of the state have to be populated by the f
function. The steps of the function are presented below.
1. Set all b bits of internal state to 0.
2. Divide input data into chunks of r bits: $I_0, I_1$, up to $I_k$ for selected k.
3. For each chunk of input data perform the absorbing procedure:
(a) Apply the input chunk to the first r bits of internal state through the XOR
operation,
(b) Apply the f function to the internal state.
4. After all input has been absorbed by the sponge, start squeezing out the output:
(a) Append first r bits of the state to the output,
(b) Apply the f function to the internal state.
5. Stop after all necessary output has been squeezed out.
**Figure 4. Sponge function.**
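The absorbing/squeezing procedure above translates almost line-for-line into C. In the sketch below, the state size and bitrate are arbitrary example values (three 32-bit words of state, one-word bitrate), the permutation f is a throwaway placeholder, and padding is omitted as in the text.

```c
#include <stddef.h>
#include <stdint.h>

#define B_WORDS 3   /* example state size b = 96 bits (3 words) */
                    /* example bitrate r = 32 bits (1 word)     */

/* Placeholder permutation f: any b-bit -> b-bit function fits the
   construction; this body exists only so the sketch compiles. */
static void f(uint32_t state[B_WORDS]) {
    for (int i = 0; i < B_WORDS; i++)
        state[i] = ((state[i] << 5) | (state[i] >> 27))
                   + 0x9E3779B9u * (uint32_t)(i + 1);
}

/* Generic sponge function: absorb every input word, then squeeze
   out the requested number of output words. */
static void sponge(const uint32_t *in, size_t in_words,
                   uint32_t *out, size_t out_words) {
    uint32_t state[B_WORDS] = {0};            /* 1. zero the state */
    for (size_t i = 0; i < in_words; i++) {   /* 2-3. absorbing    */
        state[0] ^= in[i];    /* XOR chunk into the rate part */
        f(state);
    }
    for (size_t j = 0; j < out_words; j++) {  /* 4-5. squeezing    */
        out[j] = state[0];    /* copy the rate part to output */
        f(state);
    }
}
```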
**4. Sponge-Based Key Expansion**
The role of the key expansion algorithm in an iterated cipher is to generate sub-keys
for all the rounds of encryption and decryption. The sub-keys have to depend on the value
of the main key. The bits of the main key are also collectively known as key material. At the
same time, we suggest that it should be difficult to guess one part of the sequence of
sub-keys from another. It should also be difficult to guess the seed that the sequence was
generated from. To make this possible, we propose using the sponge construction as the
framework for key expansion algorithms. In order to explain how one would use the
sponge this way, we propose a novel key expansion algorithm to serve as an example. Our
key expansion algorithm is based entirely on 32-bit ARX operations, so it should be easy to implement and well optimized for 32-bit processing units. The size of its internal
state is 96 bits, which can be easily implemented as three 32-bit words. The input to the key
expansion algorithm is 128 bits (16 bytes) of key material, divided into four input words.
In terms of the sponge construction, the bitrate parameter is equal to 32 bits [14]. Output is
also split into 32-bit words, and each output word is a single sub-key.
Key expansion is split into four stages, as seen in Figure 5. The stages, in order, are:
initialization, absorbing, mixing and squeezing. Initialization, absorbing and squeezing are
all standard phases of the sponge construction. The mixing stage might be thought of as
part of the squeezing stage, with output discarded. It is added to increase the amount of
work performed on the internal state before output words are collected. The function f is
defined later in this section.
**Figure 5. Sponge-based key expansion algorithm.**
The algorithm is described in steps below.
1. Initialization: Set all 96 bits of the internal state to 0.
2. Absorbing:
(a) Absorb a word of input K[i] through an XOR operation with the first 32 bits of the state;
(b) Apply 4 iterations of the f function to the internal state.
Repeat until all of the key material has been absorbed (4 times).
3. Mixing: Apply 24 iterations of the f function.
4. Squeezing:
(a) Apply 12 iterations of the f function to the internal state;
(b) Squeeze an output word Sk[j] by saving the first 32 bits of the internal state.
Repeat until all of the sub-keys have been squeezed out.
The number of iterations of each stage of the algorithm, as well as the number of applications of the f function, is based on the length of the input and the required length of the
output. They also take into account the results of security analysis, to find a good trade-off
between security and performance of the algorithm.
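A minimal C sketch of the four stages listed above follows. The f permutation body here is a placeholder so the fragment compiles on its own (an illustrative fot is sketched later in this section); the iteration counts and the 80-sub-key total (10 steps times 8 sub-keys, per Section 5) follow the text.

```c
#include <stdint.h>

#define N_SUBKEYS 80   /* 10 steps x 8 sub-keys, see Section 5 */

/* Placeholder for the paper's 96-bit permutation f (three fot
   iterations); this body is illustrative only. */
static void f_perm(uint32_t s[3]) {
    for (int i = 0; i < 3; i++)
        s[i] = ((s[i] << 7) | (s[i] >> 25)) ^ (s[(i + 1) % 3] + 0xA5A5A5A5u);
}

/* Sponge-based key expansion as listed above: initialization,
   absorbing (4 f-iterations per key word), mixing (24 f-iterations),
   squeezing (12 f-iterations before each output sub-key). */
static void key_expansion(const uint32_t key[4],
                          uint32_t subkeys[N_SUBKEYS]) {
    uint32_t state[3] = {0, 0, 0};                /* 1. initialization */
    for (int i = 0; i < 4; i++) {                 /* 2. absorbing      */
        state[0] ^= key[i];
        for (int k = 0; k < 4; k++) f_perm(state);
    }
    for (int k = 0; k < 24; k++) f_perm(state);   /* 3. mixing         */
    for (int j = 0; j < N_SUBKEYS; j++) {         /* 4. squeezing      */
        for (int k = 0; k < 12; k++) f_perm(state);
        subkeys[j] = state[0];
    }
}
```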
_The f Function_
The f function used in the key expansion algorithm transforms 96-bit input into 96-bit
output. Its purpose is to fill the bits of state that are not modified by input data and to mix
all the bits of the state. The security of the key expansion algorithm relies heavily on the
excess bits of the state. Because of this, the design of the f function is very important.
The function is designed as three applications of a function fot (“f one-third”), which
is presented in Figure 6. It splits input state into three 32-bit words a, b and c. Then, it
applies constants C1, C2 and C3 to the state using XOR. This is followed by a series of ARX
operations between the words. In the end, the state is rotated one position to the right to
produce the output words, $a'$, $b'$ and $c'$.
**Figure 6. The fot function (one third of the entire f function).**
The application of constants as the first step of the function ensures that some bits are set to 1 prior to the additions, XORs, and rotations being performed. This mitigates a
fundamental weakness of ARX operations. Every ARX operation will produce an all-zero
word as output if given an all-zero input. Thanks to the constants, if the initial state of a, b
and c is set to 0, some of the bits will change value to 1 before any other operations take
place. In case the input state happens to be equal to the constants, applying the constants
will actually have the opposite effect and clear all bits. However, those bits will be set back
to 1 in the next iteration of the fot function. Usage of constants at the beginning removes an
entire class of trivial weak keys with all bits set to 0 or with a very small number of bits set
to 1.
The values of the constants proposed by the authors are given below:
```
C1 = 0x1763af12, C2 = 0xd1bb5770, C3 = 0x2b3a55bb
```
These are so-called nothing-up-my-sleeve numbers. A nothing-up-my-sleeve number
is a type of a constant generated in a complicated manner, based on values which are
hard to control. Oftentimes, fractional parts of mathematical constants are used or values
derived from the name of the algorithm. This is done to eliminate any suspicion—a skilled
cryptanalyst could theoretically develop a cipher with meticulously chosen constants that
allow a backdoor into the algorithm. By using nothing-up-my-sleeve numbers, this is made
significantly harder. This, in turn, makes the algorithm more trustworthy.
The fot function constants have been generated from the name “IJON” (The algorithm
was developed in the year 2021, which also marked the hundredth anniversary of the birth
of Stanisław Lem—a Polish writer of science fiction and futurologist. Ijon Tichy is the main
character in many of his novels and the cipher was named after him.) in the way described
in steps below:
1. Four bytes which form the string IJON (in ASCII encoding) were interpreted as a
32-bit floating point number. In addition to the number itself, its square root and
second power were calculated. This resulted in a total of three floating point values.
2. All three values from previous step were reinterpreted as unsigned integers. The
following procedure was performed on each of them;
(a) The integer was multiplied by itself, generating a 64-bit value;
(b) The upper and lower halves of the result from previous step were XORed
together to make a 32-bit number;
(c) This number then served as input to the next iteration of the procedure, for a
total of 128 iterations;
3. The result of the last iteration of the procedure became the output of the entire algorithm. Since procedure was performed on three integers, it resulted in three constants:
_C1, C2 and C3._
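The generation procedure is concrete enough to sketch in C. Note that the paper does not specify the byte order or floating-point format used, so the sketch below (which assumes IEEE 754 floats and takes the four ASCII bytes in memory order on a little-endian machine) may not reproduce the published constants bit-for-bit.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <math.h>

/* Step 2: square the integer as a 64-bit value, XOR the upper half
   into the lower, and repeat for a total of 128 iterations. */
static uint32_t fold128(uint32_t x) {
    for (int i = 0; i < 128; i++) {
        uint64_t sq = (uint64_t)x * (uint64_t)x;
        x = (uint32_t)(sq >> 32) ^ (uint32_t)sq;
    }
    return x;
}

int main(void) {
    /* Step 1: the four ASCII bytes of "IJON" reinterpreted as a
       32-bit float; the byte order is an assumption, as it is not
       stated in the paper. */
    const uint8_t name[4] = { 'I', 'J', 'O', 'N' };
    float v;
    memcpy(&v, name, sizeof v);

    float seeds[3] = { v, sqrtf(v), v * v };
    for (int i = 0; i < 3; i++) {
        uint32_t u;
        memcpy(&u, &seeds[i], sizeof u);  /* reinterpret float bits */
        printf("C%d = 0x%08x\n", i + 1, (unsigned)fold128(u));
    }
    return 0;
}
```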
The ARX operations performed after the application of constants are the core of the
function. The order of operations was decided by attempting to obtain the highest possible
diffusion between all three words of the state to utilize the state to its full potential. Both
order and rotation amounts were determined by trial and error, with resulting sub-key
sequences rated by statistical tests which measure randomness [7]. Another consideration
was performance on various CPUs—some architectures support shifts by any amount, but
some 8-bit architectures might only support rotations through arithmetic shifts, which are
limited to 8 bits at most. In those cases, amounts chosen are a multiple of 8, with a potential
additional shift by 1 (for example 17 = 8 + 8 + 1, which results in only three rotations).
The last part of the fot function is a swap of the 32-bit words. During each of the three iterations,
every word enters the fot function at a different position to perform different operations.
This allows for chaining of three fot iterations into one full f iteration and results in a larger
and more complex procedure being constructed out of smaller steps. The full f function
is presented in Figure 7. The first word of the state and its path through the function are
highlighted. This construction both increases the diffusion and allows for memory/time
trade-off during implementation. If performance is more important than space, loop unrolling might be performed as is usually done. However, if the implementation targets the
embedded environment, space might be more important than speed of execution. In such
case, the key expansion algorithm may be implemented as a loop which repeatedly executes
the fot function. Since fot is relatively small—consisting of only 5 XORs, 3 additions and
4 rotations—it would result in very small code size, at the expense of performance.
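A C sketch of fot, and of f as three chained fot iterations, is given below. The constants are the ones defined above; however, the exact ARX sequence and rotation amounts are specified only in Figure 6, so the chain below is an illustrative stand-in that merely matches the stated shape: constants XORed in first, 5 XORs, 3 additions, and 4 rotations in total, finishing with a one-position word rotation.

```c
#include <stdint.h>

static const uint32_t C1 = 0x1763af12, C2 = 0xd1bb5770, C3 = 0x2b3a55bb;

static uint32_t rotr32(uint32_t x, unsigned n) {
    return (x >> n) | (x << (32 - n));
}

/* One fot iteration. The constants are the paper's; the ARX chain
   and rotation amounts below are illustrative placeholders for the
   exact sequence shown in Figure 6. */
static void fot(uint32_t s[3]) {
    uint32_t a = s[0] ^ C1, b = s[1] ^ C2, c = s[2] ^ C3;  /* 3 XORs */
    a += b;  a = rotr32(a, 8);
    b += c;  b = rotr32(b, 17);  /* 17 = 8 + 8 + 1, cheap on 8-bit CPUs */
    c += a;  c = rotr32(c, 24);
    b ^= a;                      /* 4th XOR     */
    c ^= b;                      /* 5th XOR     */
    a  = rotr32(a, 9);           /* 4th rotation */
    s[0] = c;  s[1] = a;  s[2] = b;  /* rotate words one position right */
}

/* The full f function chains three fot iterations, as in Figure 7. */
static void f(uint32_t s[3]) {
    fot(s); fot(s); fot(s);
}
```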
**Figure 7. The f function made of three iterations of fot.**
**5. IJON Cipher**
The proposed sponge-based key expansion algorithm was used in a new iterated cipher, which the authors decided to call IJON. The IJON cipher
encrypts data over the span of 10 rounds. Each round consumes eight sub-keys. This
results in a total of 80 sub-keys in a sequence generated by the key expansion algorithm.
The design of IJON was based on the research into the Long Trail Strategy (LTS) [16].
LTS is a cipher design strategy applicable to ARX ciphers. It was inspired by previous
work on AES and the Wide Trail Strategy (WTS) [17]. Its aim is to allow bounding of the
possible probabilities of differential trails within the ciphers. To achieve that, the ciphers
are constructed in a way similar to traditional substitution–permutation networks, except
with S-BOXes implemented as series of ARX operations (called ARX-BOX).
LTS focuses on the substitution layer, by introducing multiple applications of the
S-BOX intertwined with sub-key applications before the permutation layer. This is done
to place the primary burden of achieving diffusion on the substitution layer. This allows
differential probabilities to be bounded, approximating complexity of an attack. All of this
is reflected in the architecture of IJON encryption algorithm. To match the nomenclature of
the LTS research, in this section, applications of the S-BOX will be referred to as rounds,
while what is usually called a round in iterated ciphers will be referred to as a step.
_5.1. Encryption Algorithm_
The encryption algorithm is visualized in Figure 8 and described below.
1. Plaintext Pt contains 128 bits of data and serves as an input to the algorithm.
2. Split Pt into 4 words of 32 bits each.
3. Perform ten steps on the words of the state. Each step has 8 sub-keys K assigned from
the sequence generated by the key expansion.
(a) Combine the first four sub-keys with the words of the state using XOR.
(b) Apply the S-BOX S twice in parallel to the state.
(c) Combine the last four sub-keys with the state using XOR.
(d) Perform the second application of S-BOXes.
(e) Apply the P-BOX P.
4. The output of the last step is the resulting ciphertext Ct.
As the core of its substitution layer, IJON can use any selected S-BOX. The authors decided
to use a 64-bit ARX-BOX called Alzette [18]. It offers great statistical properties even at just
two applications while being very efficient in both software and hardware. Since the IJON block size is 128 bits and Alzette operates on 64 bits, two parallel applications of this S-BOX are performed each time the substitution layer is applied.
**Figure 8. The encryption algorithm of IJON.**
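The step structure above maps directly to C. In this sketch the S-BOX layer and the P-BOX are placeholder bodies (the real cipher uses two parallel Alzette applications and the Feistel-like P-BOX of Figures 9 and 10); only the control flow and the sub-key schedule follow the text.

```c
#include <stdint.h>

/* Illustrative stand-ins for the two parallel 64-bit Alzette
   applications and for the Feistel-like P-BOX of Figures 9-10;
   these bodies are placeholders so the sketch compiles. */
static void sbox_layer(uint32_t st[4]) {
    for (int i = 0; i < 4; i++)
        st[i] = ((st[i] << 11) | (st[i] >> 21)) + 0x243F6A88u;
}
static void pbox_layer(uint32_t st[4]) {
    uint32_t t = st[0]; st[0] = st[2]; st[2] = t;
    t = st[1]; st[1] = st[3]; st[3] = t;
}

/* IJON encryption skeleton: ten steps, each consuming eight
   sub-keys in the key-add / S-BOX / key-add / S-BOX / P-BOX
   pattern listed above. */
static void ijon_encrypt(uint32_t st[4], const uint32_t subkeys[80]) {
    for (int step = 0; step < 10; step++) {
        const uint32_t *k = &subkeys[step * 8];
        for (int i = 0; i < 4; i++) st[i] ^= k[i];      /* (a) */
        sbox_layer(st);                                 /* (b) */
        for (int i = 0; i < 4; i++) st[i] ^= k[4 + i];  /* (c) */
        sbox_layer(st);                                 /* (d) */
        pbox_layer(st);                                 /* (e) */
    }
}
```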
The permutation layer, while deemphasized in the LTS, still remains a vital part of
the SPN construction. It ensures that all the bits of the input are mixed together. A construction
similar to a Feistel network is used in IJON as the permutation layer. The F function is
inspired by the one used in the SPARX family of ciphers, introduced as part of the LTS
research [16]. The entire Feistel-like P-BOX is shown in Figure 9, while the F function itself
is shown in Figure 10.
**Figure 9. Feistel-like P-BOX of IJON.**
**Figure 10. F function used in P-BOX.**
_5.2. Decryption Algorithm_
The role of the decryption algorithm is to reverse the work performed by encryption—
a given ciphertext and sub-key sequence should return the original plaintext. Because of
this, its design is closely related to the encryption function through inverse operations.
Luckily, all ARX operations have trivial inverses: the inverse of XOR is XOR itself, the inverse of rotation right by $N$ bits is either rotation left by the same amount or rotation right by $(32 - N)$ bits (when working with 32-bit words), and the inverse of addition modulo $2^{32}$ is subtraction. Thanks to this, many complex functions built out of those three operations
are easily invertible. When defining the decryption algorithm, the goal is to find those
inversions and perform them in reverse order.
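The inversion rule is easy to demonstrate: undo each ARX operation with its inverse, in reverse order. The tiny pair below (using an arbitrary toy sequence) round-trips any input.

```c
#include <stdint.h>

static uint32_t rotl32(uint32_t x, unsigned n) { return (x << n) | (x >> (32 - n)); }
static uint32_t rotr32(uint32_t x, unsigned n) { return (x >> n) | (x << (32 - n)); }

/* An arbitrary toy ARX sequence... */
static uint32_t forward(uint32_t x, uint32_t k) {
    x += k;             /* addition modulo 2^32 */
    x  = rotr32(x, 7);  /* rotate right by 7    */
    x ^= k;             /* XOR                  */
    return x;
}

/* ...and its inverse: the same operations, inverted, in reverse
   order, so inverse(forward(x, k), k) == x for all x and k. */
static uint32_t inverse(uint32_t x, uint32_t k) {
    x ^= k;             /* XOR is its own inverse          */
    x  = rotl32(x, 7);  /* rotate left undoes rotate right */
    x -= k;             /* subtraction undoes addition     */
    return x;
}
```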
The decryption algorithm is listed in steps below and is presented in Figure 11.
1. Ciphertext Ct contains 128 bits of encrypted data and serves as the input to the
algorithm.
2. Split Ct into 4 words of 32 bits each.
3. Perform ten reverse steps on the words of the state. Each step has 8 sub-keys K assigned
from the sequence generated by the key expansion.
(a) Apply the inverse P-BOX $P^{-1}$ to reverse the mixing of bits.
(b) Apply the inverse S-BOX $S^{-1}$ twice in parallel to the state.
(c) Combine the last four sub-keys with the words of the state using XOR.
(d) Apply the inverse S-BOX again.
(e) Combine the first four sub-keys with the state.
4. The output of the last decryption step is the resulting plaintext Pt.
**Figure 11. Decryption algorithm.**
It is worth mentioning that the inverse of the P-BOX must be found. This structure
is presented in Figure 12. It is used to reverse the order of operations: the application of
the F function and the reordering of halves. What is important is that the F function itself
stays the same. It is one of the advantages of the Feistel-like structure on which the P-BOX
is based.
An inverse of the S-BOX is harder to find, although still trivial. This is due to the fact
that ARX operations are easily invertible. In fact, most of the operations in the S-BOX do
not even have to be replaced—they are their own inverse. The only operation that has to be
replaced by its inverse is addition modulo $2^{32}$, which becomes subtraction.
**Figure 12. Inverse P-BOX.**
**6. Security Considerations**
The functionality of the IJON cipher, which contains a sponge-based key expansion algorithm, was verified. Additionally, basic diffusion and confusion properties have been tested. This means that a single bit changed in the key or plaintext results in massive changes
in the ciphertext, with around 50% of bits changing value. This is important to verify, but it
is not enough to fully evaluate the security of the cipher. IJON, like many iterated ciphers,
has parameters—for example, the number of steps, the number of rounds within a step
(S-BOX applications), block size, key size, etc. All of them should have values that are
justified either by tests and experiments or by common practice and logic. This section
contains security considerations and explains the choices made in design of the cipher.
_6.1. Key Size and Block Size_
Minimal values for these two parameters are slowly growing as hardware evolves. Ciphers with a 64-bit block or key size, albeit sometimes still used in memory-constrained environments, are usually deemed not secure enough. This is mostly due to the key space being too small and prone to a brute-force attack. The block size and shortest key length of the AES cipher are 128 bits, which still holds up very well today after more than 20 years in use. Therefore, 128 bits seems to be the perfect middle ground between 'too small to be secure' and 'too big to be practical'. For those reasons, this key size was chosen.
The introduced key expansion algorithm may theoretically use keys of any length, as long as the length is a multiple of 32 bits. This can be done by performing the absorbing stage of the algorithm a different number of times. In the future, if 128 bits of key material proves to be insufficient, the algorithm could easily be expanded for longer keys.
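To make the variable-length absorbing concrete, here is a deliberately generic sponge-style sketch in Python. The state size, the rate of one 32-bit word per iteration, and the toy permutation below are illustrative assumptions only and do not reproduce IJON's actual key expansion:

```python
MASK32 = 0xFFFFFFFF

def toy_permutation(state):
    # Stand-in mixing function for illustration only (NOT IJON's permutation).
    out = state[1:] + state[:1]                            # rotate the word order
    out[0] = (out[0] + 0x9E3779B9) & MASK32                # inject a constant
    out[0] ^= ((out[-1] << 5) | (out[-1] >> 27)) & MASK32  # mix in a rotated word
    return out

def expand_key(key_words, n_subkeys, f=toy_permutation, state_size=8):
    """Sponge-style key expansion sketch: absorb the key, then squeeze sub-keys."""
    state = [0] * state_size
    for w in key_words:          # absorbing stage: one iteration per 32-bit key word,
        state[0] ^= w            # so a longer key simply absorbs more times
        state = f(state)
    subkeys = []
    for _ in range(n_subkeys):   # squeezing stage: read out one sub-key per iteration
        subkeys.append(state[0])
        state = f(state)
    return subkeys

# 128-bit key -> 80 sub-keys of 32 bits each, i.e. the 320-byte buffer discussed later
print(len(expand_key([1, 2, 3, 4], 80)) * 4, "bytes of sub-key material")
```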
_6.2. Side-Channel Attacks_
The cipher was consciously designed with resistance against time-based side-channel
attacks [13] in mind. Thanks to the exclusive usage of constant-time ARX operations, it is
easier to implement the cipher in a way that is immune to this type of attack. This includes
the reference implementation created to verify functionality of the proposed algorithm [19].
_6.3. Slide Attack_
A slide attack is a technique which targets iterated ciphers that base their security on a large number of rounds and re-use a single sub-key between multiple rounds [20]. Since IJON is an iterated cipher, it may be susceptible to a slide attack. To counter that, a lot of effort was directed toward creating a strong, secure key expansion algorithm. Every sub-key is used only once, in order to make a potential attack as complicated as possible. Additionally, even if an attacker comes into possession of a single sub-key, it should be difficult to retrieve the main key or other sub-keys from that information. This is due to the non-invertible sponge construction used as the framework for the key expansion algorithm.
_6.4. Construction of Encryption_
The most important component of the encryption algorithm is the selected S-BOX (in IJON, the Alzette S-BOX was chosen), due to the Long Trail Strategy [16] guiding the design of the cipher. Because of that, the security approximation was based entirely on its properties. The authors of the Alzette S-BOX [18] claim that two iterations of the ARX-box have a Maximum Expected Differential Characteristic Probability (MEDCP) bounded at around $2^{-32}$. This was decided to be reasonably low for a single step of encryption, and so the number of rounds within a single step was set to 2. The number of steps was set to 10, which allows an approximation of the maximum differential trail probability at around $2^{-320}$, under the assumption that no other characteristics arise from the repeated use of the S-BOX or from its connection with the P-BOX within the encryption algorithm. This probability is very small, which means that the number of steps could possibly have been lowered to 8 or 7. However, a security margin was assumed and the number of steps during encryption was chosen to be 10.
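For clarity, the quoted bound is simply the per-step MEDCP compounded over the number of steps:

$$\left(2^{-32}\right)^{10} = 2^{-320}, \qquad \text{and for comparison} \qquad \left(2^{-32}\right)^{8} = 2^{-256},$$

so even the reduced step counts mentioned above would still leave a very small bound.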
_6.5. Randomness of Key Expansion_
The key expansion of IJON has been tested as a Cryptographically Secure Pseudo-Random Number Generator (CSPRNG). The tests were performed according to a specification published by the National Institute of Standards and Technology (NIST) [7].
6.5.1. Methodology
The suite defines a number of tests meant to measure the unpredictability of a random
sequence of bits. Each test generates a p-value as an output, which characterizes the randomness in a quantitative way. The sequence should result in a p-value greater than 0.01 to pass a test.
The tests are listed below:
1. Frequency (monobit) test
2. Frequency test within a block
3. Runs test
4. Test for the longest run of ones in a block
5. Binary matrix rank test
6. Discrete Fourier transform (spectral) test
7. Non-overlapping template matching test
8. Overlapping template matching test
9. Maurer’s “Universal Statistical” test
10. Linear complexity test
11. Serial test
12. Approximate entropy test
13. Cumulative sums (cusum) test
14. Random excursions test
15. Random excursions variant test
All tests put constraints on the input given to them, one of the constraints being the minimal length of the input sequence. When testing IJON, the tests numbered 5, 8, 9, 10, 14 and 15 were omitted due to their requirements on length. Additionally, in tests where an arbitrary block length was to be chosen, a value of 256 bits was used: the total length of the 8 sub-keys used during a single step of encryption.
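As an illustration of how each test turns a bit sequence into a p-value, the frequency (monobit) test reduces to a few lines. The sketch below follows the formula given in the NIST specification [7], though the implementation itself is ours:

```python
import math

def monobit_p_value(bits):
    """NIST frequency (monobit) test: p = erfc(|S_n| / sqrt(2*n)).

    bits is a sequence of 0/1 values; the test is passed if p > 0.01.
    """
    n = len(bits)
    s_n = sum(1 if b else -1 for b in bits)   # +1 per one, -1 per zero
    return math.erfc(abs(s_n) / math.sqrt(2 * n))

# A perfectly balanced 2560-bit sequence gives the maximal p-value of 1.0.
print(monobit_p_value([0, 1] * 1280))
```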
Sequences generated from the expansion of a set of keys have been tested using the test suite. Of all the keys, most attention was given to the theoretical weak keys. This is a group of keys with only 0, 1 or 2 bits set to 1. Those keys could become trivial weak keys if the algorithm were not properly designed and tested, due to the nature of the ARX operations used. Additionally, two variants of pseudorandom keys have been tested. In the first variant, 128 bits of the main key were generated and then expanded using the IJON key expansion algorithm. In the second variant, the entire sequence of 320 bytes was generated directly and no expansion was used. All pseudorandom data in both cases were generated using /dev/urandom, the Pseudo-Random Number Generator (PRNG) present in the Linux environment. The second variant was performed to compare the results of the IJON key expansion algorithm to a well-known PRNG.
6.5.2. Test Results
Test results for the three sets of keys are presented in Tables 2–4. The meaning of the values in the columns is described below:
- Max Diff: applicable only to the monobit test; the maximal absolute difference between the expected and actual number of ones in the sequence. The percentage in brackets is given relative to the expected value (100% = 1280 bits).
- Success rate: the number of samples from the given set that successfully passed a given test. The percentage in brackets is relative to the number of samples in the set.
- Min/Max/Avg P: respectively the minimal, maximal and average p-value among all samples, both successful and failing. The p-value has to be greater than 0.01 to pass a test.
The results for the second pseudorandom variant should be treated as a benchmark: they were generated using an industry-standard PRNG and are independent of the IJON cipher structure. Overall, the results for all three groups are very similar. Success rates in all cases are above 98%, and average p-values are relatively high, 0.40 and more. In some rare cases, the IJON-based sequences achieve better results than those generated through the Linux PRNG (an example is the Min P for the longest run of ones test). The differences are so small that they can be attributed to randomness during bit generation. Therefore, the IJON key expansion algorithm can be considered comparable to the Linux /dev/urandom PRNG in terms of generating an unpredictable sequence of bits. What is worth emphasizing is that the difference between the results for the potential weak keys and the first-variant random keys is also very small. This means that the potential weak keys are no weaker than random keys generated using a PRNG and, in turn, should not be considered weak at all.
**Table 2. Test results for potential weak keys (8257 samples total).**

| Test Number/Name | Max Diff | Success Rate | Min P | Max P | Avg P |
|---|---|---|---|---|---|
| 1. Monobit | 109 (8.52%) | 8182 (99.09%) | 0.000016 | 1.000000 | 0.500998 |
| 2. Frequency within block | – | 8172 (98.97%) | 0.000023 | 0.999526 | 0.499081 |
| 3. Runs | – | 8162 (98.85%) | 0.000000 | 1.000000 | 0.501611 |
| 4. Longest run of ones | – | 8178 (99.04%) | 0.000100 | 0.993439 | 0.496900 |
| 6. DFT | – | 8166 (98.90%) | 0.000066 | 1.000000 | 0.494753 |
| 7. Non-overlapping template match | – | 8242 (99.82%) | 0.000188 | 1.000000 | 0.923600 |
| 11. Serial | – | 8135 (98.52%) | 0.000182 | 0.997537 | 0.414052 |
| 12. Approximate entropy | – | 8200 (99.31%) | 0.000075 | 0.999842 | 0.502494 |
| 13. Cusum | – | 8157 (98.79%) | 0.000016 | 0.999526 | 0.422093 |
**Table 3. Test results for the first variant random (10,000 samples total).**

| Test Number/Name | Max Diff | Success Rate | Min P | Max P | Avg P |
|---|---|---|---|---|---|
| 1. Monobit | 97 (7.58%) | 9907 (99.07%) | 0.000126 | 1.000000 | 0.499881 |
| 2. Frequency within block | – | 9902 (99.02%) | 0.000082 | 0.999928 | 0.499588 |
| 3. Runs | – | 9895 (98.95%) | 0.000124 | 1.000000 | 0.497403 |
| 4. Longest run of ones | – | 9920 (99.20%) | 0.000004 | 0.993439 | 0.498367 |
| 6. DFT | – | 9901 (99.01%) | 0.000006 | 1.000000 | 0.490965 |
| 7. Non-overlapping template match | – | 9981 (99.81%) | 0.000236 | 1.000000 | 0.920993 |
| 11. Serial | – | 9824 (98.24%) | 0.000022 | 0.998638 | 0.403821 |
| 12. Approximate entropy | – | 9893 (98.93%) | 0.000106 | 0.999955 | 0.493109 |
| 13. Cusum | – | 9884 (98.84%) | 0.000111 | 0.999798 | 0.423216 |
**Table 4. Test results for the second variant random (10,000 samples total).**

| Test Number/Name | Max Diff | Success Rate | Min P | Max P | Avg P |
|---|---|---|---|---|---|
| 1. Monobit | 98 (7.66%) | 9918 (99.18%) | 0.000065 | 1.000000 | 0.502647 |
| 2. Frequency within block | – | 9910 (99.10%) | 0.000021 | 0.999950 | 0.503439 |
| 3. Runs | – | 9894 (98.94%) | 0.000122 | 1.000000 | 0.501850 |
| 4. Longest run of ones | – | 9907 (99.07%) | 0.000000 | 0.993439 | 0.497037 |
| 6. DFT | – | 9912 (99.12%) | 0.000066 | 1.000000 | 0.489584 |
| 7. Non-overlapping template match | – | 9985 (99.85%) | 0.000094 | 1.000000 | 0.922027 |
| 11. Serial | – | 9832 (98.32%) | 0.000012 | 0.999070 | 0.409102 |
| 12. Approximate entropy | – | 9882 (98.82%) | 0.000026 | 0.999896 | 0.497468 |
| 13. Cusum | – | 9883 (98.83%) | 0.000014 | 0.999526 | 0.426680 |
**7. Conclusions**
The development of modern ciphers leads us towards higher security of smart grids. This paper describes a novel key expansion scheme based on the sponge construction. This construction is able to produce a sequence of sub-keys which is difficult to reverse and can be used in iterated modern ciphers. The authors also introduced a new block cipher called IJON. The design of this solution was described in detail and a reference implementation was developed. The cipher is able to encrypt and decrypt data as long as the same key is used in both operations, which is the standard mode of operation of symmetric ciphers. The new construction is well suited to execution on 32-bit CPUs. The algorithm is based on fast operations on 32-bit words that fit perfectly into the registers of such processing units. At the same time, it does not require a hardware implementation to perform well, due to the low number of clock cycles required by the operations used.

Tests and approximations of security bounds were performed. The results indicate that the algorithm is safe to use. The sequence generated by the key expansion algorithm is very unpredictable and hard to reverse. It is worth mentioning that the security margin left by the number of steps is very high. It was calculated based on the differential probabilities of the Alzette S-BOX used. However, this does not give a guarantee of the cipher's security, and further tests and analysis are still required in this area. On the other hand, research might also conclude that the number of steps is actually too large. In such a case, it may be reduced in future versions of the algorithm to increase efficiency. This could help to increase performance and resolve the potential memory requirement problem. Nevertheless, the results of the security analysis indicate that the cipher has great potential.
IJON with the sponge-based key expansion algorithm is a fully functional cipher; however, there are still many possibilities for improvement. First of all, further tests are needed to fully evaluate the encryption process in terms of security. In addition, further performance tests and benchmarks are needed: the speed of execution should be tested and compared to other modern ciphers. Beyond that, further optimizations in assembly may be possible using vector processing instructions on various architectures, for instance, the SIMD extensions to the x86 instruction set. Furthermore, while the cipher should execute very well on 32-bit microcontrollers, its memory requirements may be a problem in embedded environments. The length of the buffer necessary to hold all sub-keys after key expansion is 320 bytes, which is by no means a small amount in constrained environments. This problem could be mitigated by generating sub-keys as needed during encryption, but that solution wastes many cycles by duplicating work. It also does not work for decryption, as sub-keys are used in reverse order. In summary, further research on the cipher's security as well as performance optimizations are required before the algorithm can be widely used. However, thanks to the reference implementation, the cipher is essentially ready to be incorporated into software.
**Author Contributions: Conceptualization, M.S. and M.N.; methodology, M.S. and M.N.; software,**
M.S.; validation, M.S.; formal analysis, M.S. and M.N.; investigation, M.S. and M.N.; writing—original
draft preparation, M.S. and M.N.; writing—review and editing, M.S. and M.N.; visualization, M.S.;
supervision, M.N.; project administration, M.N.; funding acquisition, M.N. All authors have read
and agreed to the published version of the manuscript.
**Funding:** This work has been funded by the European Union's Horizon 2020 Research and Innovation Programme, under Grant Agreement No. 830943, project ECHO (European network of Cybersecurity centres and competence Hub for innovation and Operations). The research was also partially supported by the National Centre for Research and Development, Grant No. CYBERSECIDENT/381319/II/NCBR/2018, "The federal cyberspace threat detection and response system" (acronym DET-RES), as part of the second competition of the CyberSecIdent Research and Development Program: Cybersecurity and e-Identity.
**Institutional Review Board Statement: Not applicable.**
**Informed Consent Statement: Not applicable.**
**Data Availability Statement: The data presented in this study—reference implementation of the IJON**
[block cipher—are available online in: https://github.com/msaw328/ijon (accessed on 8 September 2022).](https://github.com/msaw328/ijon)
**Conflicts of Interest: The authors declare no conflict of interest.**
**References**
1. Tufail, S.; Parvez, I.; Batool, S.; Sarwat, A. A Survey on Cybersecurity Challenges, Detection, and Mitigation Techniques for the Smart Grid. Energies 2021, 14, 5894.
2. Alghassab, M. Analyzing the Impact of Cybersecurity on Monitoring and Control Systems in the Energy Sector. Energies 2022, 15, 218.
3. Jain, N.; Chauhan, S.S. Novel Approach Transforming Stream Cipher to Block Cipher. In Proceedings of the 2021 International Conference on Technological Advancements and Innovations (ICTAI), Tashkent, Uzbekistan, 10–12 November 2021; pp. 182–187.
4. Di Matteo, S.; Baldanzi, L.; Crocetti, L.; Nannipieri, P.; Fanucci, L.; Saponara, S. Secure Elliptic Curve Crypto-Processor for Real-Time IoT Applications. Energies 2021, 14, 4676.
5. Rodinko, M.; Oliynykov, R. Comparing Performances of Cypress Block Cipher and Modern Lighweight Block Ciphers on Different Platforms. In Proceedings of the 2019 IEEE International Scientific-Practical Conference Problems of Infocommunications, Science and Technology (PIC S&T), Kyiv, Ukraine, 8–11 October 2019; pp. 113–116.
6. Alasaad, A.; Alghafis, A. Key-Dependent S-box Scheme for Enhancing the Security of Block Ciphers. In Proceedings of the 2019 2nd International Conference on Signal Processing and Information Security (ICSPIS), Dubai, United Arab Emirates, 30–31 October 2019; pp. 1–4.
7. Rukhin, A.; Soto, J.; Nechvatal, J.; Smid, M.; Barker, E.; Leigh, S.; Levenson, M.; Vangel, M.; Banks, D.; Heckert, A.; et al. A Statistical Test Suite for Random and Pseudorandom Number Generators for Cryptographic Applications; National Institute of Standards & Technology: Gaithersburg, MD, USA, 2010.
8. Xu, Y.; Zhao, M.; Liu, H. Design an irreversible key expansion algorithm based on 4D memristor chaotic system. Eur. Phys. J. Spec. Top. 2022.
9. Liu, H.; Wang, X.; Li, Y. Cryptanalyze and design strong S-Box using 2D chaotic map and apply to irreversible key expansion. arXiv 2021, arXiv:2111.05015.
10. Zhao, M.; Liu, H. Construction of a Nondegenerate 2D Chaotic Map with Application to Irreversible Parallel Key Expansion Algorithm. Int. J. Bifurc. Chaos 2022, 32, 2250081.
11. Matsui, M. Linear Cryptanalysis Method for DES Cipher. In Proceedings of Advances in Cryptology—EUROCRYPT '93; Helleseth, T., Ed.; Springer: Berlin/Heidelberg, Germany, 1994.
12. Luby, M.; Rackoff, C. How to Construct Pseudorandom Permutations from Pseudorandom Functions. SIAM J. Comput. 1988, 17, 373–386.
13. Kocher, P.C. Timing Attacks on Implementations of Diffie-Hellman, RSA, DSS, and Other Systems. In Proceedings of Advances in Cryptology—CRYPTO '96; Koblitz, N., Ed.; Springer: Berlin/Heidelberg, Germany, 1996; pp. 104–113.
14. Bertoni, G.; Daemen, J.; Peeters, M.; Van Assche, G. Cryptographic Sponge Functions. 2011. Available online: https://keccak.team/files/CSF-0.1.pdf (accessed on 8 September 2022).
15. Dworkin, M. SHA-3 Standard: Permutation-Based Hash and Extendable-Output Functions; Federal Inf. Process. Stds. (NIST FIPS); National Institute of Standards and Technology: Gaithersburg, MD, USA, 2015.
16. Dinu, D.; Perrin, L.; Udovenko, A.; Velichkov, V.; Großschädl, J.; Biryukov, A. Design Strategies for ARX with Provable Bounds: Sparx and LAX. In Proceedings of Advances in Cryptology—ASIACRYPT 2016; Cheon, J.H., Takagi, T., Eds.; Springer: Berlin/Heidelberg, Germany, 2016; pp. 484–513.
17. Daemen, J.; Rijmen, V. The Wide Trail Design Strategy. In Proceedings of Cryptography and Coding; Honary, B., Ed.; Springer: Berlin/Heidelberg, Germany, 2001; pp. 222–238.
18. Beierle, C.; Biryukov, A.; Cardoso dos Santos, L.; Großschädl, J.; Perrin, L.; Udovenko, A.; Velichkov, V.; Wang, Q. Alzette: A 64-Bit ARX-box. In Proceedings of Advances in Cryptology—CRYPTO 2020; Micciancio, D., Ristenpart, T., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 419–448.
19. Sawka, M. Reference Implementation of the IJON Block Cipher. 2021. Available online: https://github.com/msaw328/ijon (accessed on 8 September 2022).
20. Biryukov, A.; Wagner, D. Slide Attacks. In Proceedings of Fast Software Encryption; Knudsen, L., Ed.; Springer: Berlin/Heidelberg, Germany, 1999; pp. 245–259.
# Safe Fusion Compared to Established Distributed Fusion Methods
##### Jonas Nygårds[∗], Viktor Deleskog[∗], and Gustaf Hendeby[†]
_∗_ Div. of C4ISR, Swedish Defence Research Agency (FOI), Linköping, Sweden
e-mail: firstname.lastname@foi.se
_† Dept. of Electrical Engineering, Linköping University, Linköping, Sweden_
e-mail: hendeby@isy.liu.se
**_Abstract—The safe fusion algorithm is benchmarked against_**
**three other methods in distributed target tracking scenarios. Safe**
**fusion is a fairly unknown method similarly to, e.g., covariance**
**intersection, that can be used to fuse potentially dependent**
**estimates without double counting data. This makes it suitable**
**for distributed target tracking, where dependencies are often**
**unknown or difficult to derive. The results show that safe fusion**
**is a very competitive alternative in five evaluated scenarios, while**
**at the same time easy to implement and compute compared to**
**the other evaluated methods. Hence, safe fusion is an attractive**
**alternative in track to track fusion systems.**
I. INTRODUCTION
Methods for track-to-track fusion (T2TF) are important
in distributed tracking systems. T2TF enables tracks from
multiple sources to be fused in a close to optimal way.
Nowadays it is also common that sensors contain some sort of tracking functionality, which limits access to the “raw” sensor output. To integrate such a sensor into a tracking network of different sensors, T2TF is necessary.
In the literature, different methods for T2TF have been presented and analyzed. In this paper we implement and compare four different methods, with focus on robustness and tracking accuracy and on how they perform against centralized measurement fusion (CMF), which is the optimal choice if possible. The methods studied in this paper are: (i) naïve information matrix fusion (naïve IMF); (ii) covariance intersection (CI) fusion [5, 11]; (iii) generalized information matrix filter (GIMF) fusion [19]; and (iv) safe fusion (SF) [9]. The authors have not managed to find any publication where SF (or equivalent methods), contrary to the other three methods, has been applied to T2TF problems. However, the ellipsoidal intersection (EI, [15, 17]) method, mentioned below to produce the same result as SF, has been evaluated in other contexts, e.g., vehicle platooning in [16]. One of the contributions of this paper is hence to compare SF to other methods in the context of T2TF.
To get optimal T2TF performance the cross-correlations between tracks must be taken into account [18]; one unavoidable source of cross-correlations is the shared process noise in tracks that describe the same target. The naïve fusion method assumes that tracks are uncorrelated, which leads to overconfident error covariance matrices and overuse of data. Both CI (and variations thereof, [3, 14]) and SF assume that there exists an unknown cross-correlation between tracks and provide a conservative solution, which is a sounder approach than naïve fusion. Exact methods such as those in [18], on the other hand, calculate the cross-correlations between tracks and attain optimal performance, with the drawback that a lot of information must be transferred between sensors. The ellipsoidal intersection method combines the two approaches, approximating the cross-correlation with a worst-case scenario and then compensating for it. Though derived from different fundamental principles, it turns out that SF and EI produce identical estimates. The GIMF method also uses an information-theoretic approach to handle the cross-correlation: information is additive, hence information can be subtracted to avoid double counting data.
In the real world, communication between nodes in a network is not ideal, especially in wireless network configurations with limited communication rate and possible communication delays. Such issues are a current focus in the T2TF community. Here, we consider communication to be mostly synchronous, in the sense that data is current when fusing, but not necessarily at full rate. In some cases delayed measurements are also considered.

In recent research, issues regarding communication rate and transmission load are highlighted in pursuit of CMF performance, for example the distributed version of the augmented state density (ASD) filter called DASD [13] and the recently developed distributed Kalman filter (DKF) [7]. In [6], the two previous methods are compared in terms of fusion performance, process noise sensitivity, and level of global knowledge about sensor parameters.
The main contribution of this paper is to compare the SF algorithm to other T2TF methods in a security setting, thereby bringing attention to the SF method. To evaluate the chosen T2TF methods we have picked three different datasets to accentuate the differences between the fusion methods. The first dataset is based on data used in previous T2TF evaluations [19]; the second dataset consists of recorded target trajectories [20]; and the third dataset comes from a real-world field trial suitable for target tracking in surveillance applications, highlighting security problems.

The paper is organized as follows. In Sec. II the general notation and a description of the methods are presented. How the evaluation of the methods was performed is described in Sec. III, and the results and discussion follow in Sec. IV. Sec. V
concludes the paper.

Fig. 1. Sampling sequence for asynchronous fusion of two sensors.
II. METHOD DESCRIPTION
It is assumed that bandwidth restrictions in combination with communication delays lead to an architecture where not every sample is communicated between the sensor locations. Consider a generic asynchronous case of decentralized fusion, as depicted in Fig. 1. We have two sensors and a fusion center that could be co-located with one of the sensors (or both if fully decentralized). At time $t_2$ we have new information from the first sensor, $z_1(t_2)$, which is brought forward (under communication delays) to the fusion center as the prediction $x_1(t_f|t_2)$, and since we assume asynchronous reports we also have the current and predicted values for the second sensor, i.e., information at the time of the most current sample and at the previous communication time, $x_2(t_f|t_2)$ and $x_2(t_f|t_1)$, respectively. To keep the notation minimal we will in the following drop the argument for the most recent estimates from the two sensors and only keep it for the previous information for the second sensor. We assume that the estimates can be represented by their first two moments; thus $\{\hat{x}_1, P_1\}$, $\{\hat{x}_2, P_2\}$, $\{\hat{x}_f, P_f\}$ and $\{\hat{x}_2(t_f|t_1), P_2(t_f|t_1)\}$ is the notation for the variables considered in the track fusion steps. In the following sections brief descriptions of the studied methods are given; for more information the reader is referred to the provided main references of the methods.

All methods are assumed to have some sort of fusion memory, i.e., a state that keeps the predicted first two moments from the last fusion at the fusion center. For GIMF this is already included, but for CIF and SF this memory is also used to enable the use of asynchronous sensor information.
_A. Local and Centralized Filters_
As a reference, a centralized extended Kalman filter (EKF) [12] is run on all measurements from the sensors. The EKF is also used as the local filter in the sensor nodes for most scenarios. For one scenario an interacting multiple models (IMM, [4]) filter is used with a bank of two EKFs tuned with a low and a high process noise level. The model transition probability matrix in the IMM is tuned for sojourn times of 30 s for the low noise case and 8 s for the high noise case, following the model in [1], giving the transition probability matrix

$$\Pi = \begin{pmatrix} 0.9983 & 0.0017 \\ 0.0062 & 0.9938 \end{pmatrix}. \qquad (1)$$
_B. Generalized Information Matrix Filter_
The GIMF is a generalization of the information filter for asynchronous tracklets, following [19]. At the time of fusion the previous information is redacted, giving the resulting equations:

$$P_f^{-1} = P_1^{-1} + \left[ P_2^{-1} - P_2(t_f|t_1)^{-1} \right] \qquad (2a)$$
$$P_f^{-1}\hat{x}_f = P_1^{-1}\hat{x}_1 + \left[ P_2^{-1}\hat{x}_2 - P_2(t_f|t_1)^{-1}\hat{x}_2(t_f|t_1) \right]. \qquad (2b)$$

The decorrelation by removing the previous information can also be performed at the local estimates, as in the channel filter [8].
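As a minimal illustration, the fusion rule (2) in mean/covariance form translates directly to numpy. This sketch is ours rather than the authors' implementation and assumes all covariance matrices are invertible and the estimates are 1-D arrays:

```python
import numpy as np

def gimf_fuse(x1, P1, x2, P2, x2_prev, P2_prev):
    """Generalized information matrix fusion, Eq. (2).

    (x2_prev, P2_prev) is sensor 2's estimate predicted from the previous
    communication time; its information is subtracted so that data already
    fused earlier is not counted twice.
    """
    I1, I2, I2p = (np.linalg.inv(P) for P in (P1, P2, P2_prev))
    Pf = np.linalg.inv(I1 + (I2 - I2p))                  # Eq. (2a)
    xf = Pf @ (I1 @ x1 + (I2 @ x2 - I2p @ x2_prev))      # Eq. (2b)
    return xf, Pf
```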
_C. Covariance Intersection_
The CI fusion rule [11] was explicitly developed to handle the problem of fusing two (Gaussian) estimates with unknown correlations. The fusion equations are

$$P_f^{-1} = \omega P_1^{-1} + (1 - \omega) P_2^{-1} \qquad (3a)$$
$$P_f^{-1}\hat{x}_f = \omega P_1^{-1}\hat{x}_1 + (1 - \omega) P_2^{-1}\hat{x}_2, \qquad (3b)$$

where

$$\omega = \arg\min_{\omega} \det(P_f). \qquad (4)$$

The choice to use the determinant in the criterion can be seen as minimizing the Shannon information [10].
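The weight ω in (4) generally has no closed form, so a simple numerical sketch is to search a grid; a scalar optimizer would work equally well. The following is an illustrative implementation, not code from the paper:

```python
import numpy as np

def ci_fuse(x1, P1, x2, P2, n_grid=1000):
    """Covariance intersection, Eqs. (3)-(4), with omega found by grid search."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    omegas = np.linspace(0.0, 1.0, n_grid + 1)
    # det(Pf) = 1 / det(Pf^{-1}), so maximizing det(Pf^{-1}) minimizes det(Pf)
    dets = [np.linalg.det(w * I1 + (1 - w) * I2) for w in omegas]
    w = omegas[int(np.argmax(dets))]                     # Eq. (4)
    Pf = np.linalg.inv(w * I1 + (1 - w) * I2)            # Eq. (3a)
    xf = Pf @ (w * I1 @ x1 + (1 - w) * I2 @ x2)          # Eq. (3b)
    return xf, Pf
```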
_D. Naïve Independence Assumption_
For comparison, the result of applying naïve fusion, i.e., fusion of the track reports ignoring the correlation introduced by the common process noise in the target trajectory, is also presented. The corresponding fusion equations for the naïve information matrix filter (naïve IMF) are:

$$P_f^{-1} = P_1^{-1} + P_2^{-1} \qquad (5a)$$
$$P_f^{-1}\hat{x}_f = P_1^{-1}\hat{x}_1 + P_2^{-1}\hat{x}_2. \qquad (5b)$$

This filter is thus the information form of the Kalman filter [12] under the (for track-to-track fusion naïve) assumption of uncorrelated errors between the local tracks.
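Note that (5) is the GIMF rule (2) with the subtracted previous-information terms dropped; comparing the two makes the double counting of shared data in the naïve filter explicit.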
_E. Safe Fusion_
Similar to the covariance intersection method, safe fusion (SF, [9]) avoids double counting information from two possibly dependent estimates by decoupling the components in the estimates and using the most informative one from each estimate. This is achieved by repeatedly applying the singular value decomposition (SVD). The largest ellipsoid method [3] utilizes an eigenvector basis factorization to obtain the same covariance matrix as SF, but differs in how the mean of the estimate is computed. Contrary to CI, SF has not been shown to provide consistent estimates.
Fig. 2. Illustration of the important steps of safe fusion between the two possibly correlated estimates $\hat{x}_1$ and $\hat{x}_2$.
This is illustrated in Fig. 2. To the very left in the figure, the two estimates, $\hat{x}_1$ and $\hat{x}_2$, to be fused are illustrated by their covariance ellipses. In step 1 of the algorithm, a linear transformation is applied (obtained from an SVD) to transform the covariance matrix of $\hat{x}_1$ into a unit matrix (the middle of the illustration). The components in $\hat{x}_1$ are now independent. In step 2, a rotation is applied (obtained by an SVD) to make the components in $\hat{x}_2$ independent (the right part of the illustration). Note that the components in $\hat{x}_1$ remain independent under the rotation, as they are equally uncertain in all directions. Next, the most informative estimate is used in each direction, resulting in the grayed ellipsoid to the right. It is allowed to treat the components independently, as the different directions have been decoupled by the two transformations. Double counting information is hence avoided by using only information from one of the two estimates to be fused in each direction. Finally, the fused estimate is obtained by simply applying the inverse of the two transformations (not illustrated).
For completeness, the algorithm is provided in Algorithm 1,
explicitly stating how to derive the necessary transformations.
It should be noted that SF can be implemented using standard linear algebra functions, without the requirement of the
optimization step found in CI. Hence, as SF also does not
need to store or compute correlations it can be efficiently
implemented with predictable execution time. The interested
reader is referred to [9] for details and a motivation.
III. METHOD EVALUATION
The methods will be evaluated using three different datasets. The first two datasets are chosen from the literature to allow for comparisons. The first set is inspired by [19] to provide direct comparisons for the GIMF. The second set is an adaptation of one of the air scenarios in [20] (for brevity called Blair in the figures), used as a ground person tracking scenario. Finally, results from field trials of a security scenario are presented. For the second dataset, permutations in the parametrizations of the process models or the introduction of IMM models give two additional scenarios, for a total of five scenarios described below.
**Algorithm 1 Safe Fusion [9]**
Given two possibly correlated estimates of $x$, $\hat{x}_1$ and $\hat{x}_2$, such that $P_1 = \operatorname{cov}(\hat{x}_1)$ and $P_2 = \operatorname{cov}(\hat{x}_2)$:
1) Compute $U_1$ and $D_1$, using an SVD of the positive definite matrix $P_1$, such that
$$P_1 = U_1 D_1 U_1^T. \qquad (6)$$
2) Similarly, derive $U_2$ and $D_2$ using an SVD, such that
$$D_1^{-1/2} U_1^T P_2 U_1 D_1^{-1/2} = U_2 D_2 U_2^T. \qquad (7)$$
3) Let
$$T = U_2^T D_1^{-1/2} U_1^T, \qquad (8a)$$
$$\bar{\hat{x}}_1 = T \hat{x}_1, \qquad \bar{\hat{x}}_2 = T \hat{x}_2, \qquad (8b)$$
where by construction $\operatorname{cov}(\bar{\hat{x}}_1) = I$ and $\operatorname{cov}(\bar{\hat{x}}_2) = D_2$.
4) Select the most informative source for each component $i = 1, 2, \ldots, \dim(x)$; let
$$[\bar{\hat{x}}]_i = [\bar{\hat{x}}_1]_i, \quad [D]_{ii} = 1, \qquad \text{if } [D_2]_{ii} \geq 1, \qquad (9a)$$
$$[\bar{\hat{x}}]_i = [\bar{\hat{x}}_2]_i, \quad [D]_{ii} = [D_2]_{ii}^{-1}, \qquad \text{if } [D_2]_{ii} < 1. \qquad (9b)$$
5) The final estimate is given by
$$\hat{x}_f = T^{-1} \bar{\hat{x}}, \qquad (10a)$$
$$P_f = T^{-1} D^{-1} T^{-T}. \qquad (10b)$$
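Algorithm 1 translates almost line by line into numpy. The following is our own illustrative transcription, not the authors' code, assuming 1-D estimate vectors and positive definite covariances:

```python
import numpy as np

def safe_fuse(x1, P1, x2, P2):
    """Safe fusion of two possibly correlated estimates (Algorithm 1)."""
    U1, d1, _ = np.linalg.svd(P1)              # step 1: P1 = U1 D1 U1^T
    D1_isqrt = np.diag(d1 ** -0.5)
    M = D1_isqrt @ U1.T @ P2 @ U1 @ D1_isqrt   # step 2: whiten P2 ...
    U2, d2, _ = np.linalg.svd(M)               # ... and diagonalize it
    T = U2.T @ D1_isqrt @ U1.T                 # step 3: cov(T x1)=I, cov(T x2)=D2
    xb1, xb2 = T @ x1, T @ x2
    use1 = d2 >= 1.0                           # step 4: keep most informative source
    xb = np.where(use1, xb1, xb2)
    d = np.where(use1, 1.0, 1.0 / d2)          # information of each kept component
    Tinv = np.linalg.inv(T)
    xf = Tinv @ xb                             # step 5, Eq. (10a)
    Pf = Tinv @ np.diag(1.0 / d) @ Tinv.T      # Eq. (10b)
    return xf, Pf
```

Note that only standard linear algebra routines appear, which is exactly the implementation advantage over CI claimed in the text.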
_A. Scenario 1_
The first setup is basically the same as Scenario 3 in [19], but with the same sample time for both sensors and without initial delay. The scenario consists of two sensors, one at the origin and one at (5000 m, 0 m). They sample the position with a 2 s interval ([19] uses 2 and 2.5 s). The range-bearing uncertainty is modeled as white noise with standard deviation $\sigma_r = 10$ m in range and $\sigma_\theta = 1°$ in bearing. The fusion is only performed at a rate of 8 s and with a delay of 8 s. The motion model used is a continuous white noise acceleration model (CWNA) with process noise $\sigma_w = 0.1\ \mathrm{m\,s^{-3/2}}$.

In the first scenario the process model used for the motion is the same as the one used in the filter, hence the centralized Kalman filter is optimal. However, for a typical security scenario the human motion model is more like the aircraft flight of [20].
_B. Scenario 2_
For the second scenario we modify flight Scenario 6 of [20] to a ground scenario by scaling both positions and velocities by 1/1000, giving a small velocity but reasonable motion for a ground scenario. (See Fig. 3.) The sensors are placed at (0 m, 5 m) and (0 m, 45 m), respectively, and sampled at 5 Hz. The scenario runs without any delay but with fusion at a subsampled rate of 2.5 s. The motion model is still a CWNA with process noise $\sigma_w = 0.1\ \mathrm{m\,s^{-3/2}}$.
Fig. 3. Trajectory of Scenario 6 in [20] adapted to a ground scenario.
_C. Scenario 3_
The third scenario is identical to Scenario 2, except for an
additional delay of 2.5 s.
_D. Scenario 4_
In Scenario 4 we return to Scenario 2 but introduce the IMM filter for the local sensors, where the high/low (H/L) process models are chosen as $\sigma_{w,H} = 4 \times 0.1\ \mathrm{m\,s^{-3/2}}$ and $\sigma_{w,L} = 0.1/4\ \mathrm{m\,s^{-3/2}}$, respectively.

_E. Scenario 5_
The final scenario illustrates a real-world example of tracking a person with surveillance cameras. There are two cameras pointing at the same area from different angles, where one person walks through, as illustrated in Fig. 4. The person is assumed to move according to a constant velocity model with process noise $\sigma_w = 0.2\ \mathrm{m\,s^{-3/2}}$. The fusion runs at a rate of 0.5 s.
_F. Evaluation_
Fig. 4. Map overview of Scenario 5 where the optimal track is marked in red. The two sensors, S1 and S2, are illustrated as black boxes. The field of view of each sensor is illustrated as lines for S1 and dashed lines for S2. The area considered for evaluation is the common area seen by both sensors. The target moves from left to right.
Fig. 5. 100 Monte Carlo simulations on a continuous white noise acceleration scenario similar to [19].
The results are evaluated against ground truth when available. Monte Carlo simulations are performed with 100 samples for Scenario 1 and 40 samples for Scenarios 2–4. The performance is evaluated as the root mean square error (RMSE) averaged over the Monte Carlo evaluations in Scenarios 1–4. In Scenario 5 no ground truth is available, so the results are instead evaluated as the root mean square deviation (RMSD) from the optimal track, i.e., the CKF, evaluated as a mean over the Monte Carlo evaluations. The optimal track and each sensor track are generated using a multi-sensor multi-target tracker which associates visual detections from each sensor to tracks in a world coordinate frame. To allow easy comparison with [19], the solutions of Scenario 1 are additionally tested for consistency using the normalized state estimation error squared (NEES, [2])

$$\mathrm{NEES}(t) = \big(x(t) - \hat{x}_f(t|t)\big)^T P_f^{-1}(t|t) \big(x(t) - \hat{x}_f(t|t)\big). \qquad (11)$$

In the project both performance measures were studied for all scenarios, but for brevity only the most interesting plots are reproduced here.
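As an illustrative sketch, (11) is a one-line computation given an estimate and its covariance:

```python
import numpy as np

def nees(x_true, x_est, P_est):
    """Normalized state estimation error squared, Eq. (11)."""
    e = np.asarray(x_true) - np.asarray(x_est)
    return float(e @ np.linalg.inv(P_est) @ e)
```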
IV. RESULTS AND DISCUSSION
Scenario 1 was chosen to relate the obtained results to the results in [19]. The results in Fig. 5 compare favorably with the results in [19], but note that the initialization here was not as advanced as in the initial reference, causing larger errors for the first two updates in our implementation. The SF filter gives results comparable with the GIMF, while the CIF performance varies along the trajectory. In Fig. 6 the consistency of the fusion is tested by the NEES. Apart from the inconsistent initialization, both the GIMF and SF filters
perform consistently, while, as expected, the CI shows an overly conservative covariance.

Fig. 6. Mean NEES of 100 Monte Carlo simulations on a continuous white noise acceleration scenario similar to [19].
The grouping of estimates four by four is due to the reduced
rate of information used for the fusion. New information only
arrives every 8 s, i.e., every fourth sample.
For the second dataset, in Scenarios 2–4, with the trajectory illustrated in Fig. 3, the true motion model is nonlinear and thus the baseline CKF is no longer the optimal filter but can only aspire to be the best linear filter. In this case the local filters will not necessarily provide consistent estimates at all times either. In Fig. 7 the performance for Scenario 2, without delay, is illustrated. Here the SF performs better than the CKF, which actually cannot be optimal for this scenario. Since the path consists of basically linear motion with maneuvers, the use of a CWNA has to be a compromise between good tracking in the corners or along the straight lines. It is interesting to note that for the more linear parts of the path the SF is actually better than the CKF. However, in the corners, when the local filters lag, the SF has worse performance, but still at the same level as the GIMF. When delays are introduced, as in Fig. 8 for Scenario 3, the SF can no longer beat the CKF but actually reaches the level of the CKF on the straight lines. In the corners the decentralized filters give worse performance than the individual local filters; this is probably due to the predictions of the local states being inconsistent because of the small process variance in the linear model during corners.

The naïve IMF actually has performance worse than the local filters in the corners.
The results suggest that an IMM would actually perform better; hence, in Fig. 9 IMM filters have been applied both on the local and the central level. On the central level the IMM does not improve the situation as much as expected on the straight lines, probably due to relatively uncertain sensors. For the local filters, especially for sensor 1, an improvement can be seen, which also translates into an improvement for the fused filters. Now, at the update times, the fused filters are better than the local estimates. Again, the SF filter is better than the central IMM filter on the straight-line parts. Using local IMM filters complicates the picture, and no straightforward way to implement the exact methods [18] was seen; even the GIMF poses the problem of which model to use for the predictions. In the results presented, the model with the large process noise was used to predict the previous information forward. Trials using the smaller variance made the filter unstable during shifts between the large and small process noise modes. Here the SF and CI filters were the only ones that were straightforward to implement, and since the SF filter is more consistent than the CI, with a less conservative covariance, it would be the better choice.
In Fig. 10 the performance for Scenario 5 is illustrated, with the fusion rate set to 2 Hz. Here the GIMF filter performs best, especially at the fusion points, with the smallest deviation from the CKF. This was the expected result, since GIMF is optimal at full rate. SF, CIF, and the naïve IMF all show almost equal performance. The naïve assumption shows almost none of the performance degradation seen in the previous scenarios, which could be an effect of the simple target movement in the scenario. When the estimated error covariance is inspected, however, the naïve IMF turns out to be overconfident, contrary to the other methods, due to the double counting of information. The scenario does not cover advanced target trajectories as the previous scenarios do, but it shows once again that SF is an applicable method for T2TF.
V. CONCLUSION

In this paper a less known alternative to the covariance intersection (CI) method for fusion of correlated estimates, safe fusion (SF), was evaluated using simulated data and real-world experimental data. It provides a less conservative covariance than the CI method, hence it provides estimates with better consistency. In the scenarios of interest, motivated by camera surveillance scenarios, the SF performed well, on a level comparable to established methods such as generalized information matrix fusion (GIMF). The algorithm works well in conjunction with local interacting multiple models (IMM) filters, on a level with the GIMF, and where the exact methods [18] are intractable. SF can be implemented using standard linear algebra methods and has predictable computation time, making it an attractive alternative to the other described methods. When local IMM filters are used, careful consideration needs to be given to the choice of process model for the prediction used in the GIMF, making SF the more robust alternative.
ACKNOWLEDGMENTS

This paper was supported by research projects at the Swedish Defence Research Agency (FOI) funded by the Swedish Armed Forces. G. Hendeby was supported by the Swedish Research Council through the grant Scalable Kalman Filters. The authors would also like to thank the anonymous reviewers for insightful comments and for pointing to the ellipsoidal intersection method and its relations to safe fusion.

Fig. 7. Root mean square error for the trajectory of Fig. 3.
REFERENCES
[1] Y. Bar-Shalom and H. Chen. Covariance reconstruction for track fusion with legacy track sources. Journal of Advances in Information Fusion, 2008.
[2] Y. Bar-Shalom, X. R. Li, and T. Kirubarajan. Estimation with Applications to Tracking and Navigation: Theory, Algorithms and Software. John Wiley & Sons, 2004.
[3] A. R. Benaskeur. Consistent fusion of correlated data sources. In 28th Annual Conference of the Industrial Electronics Society, volume 4, pages 2652–2656, Nov. 2002.
[4] S. Blackman and R. Popoli. Modern Tracking Systems. Artech House, 1999.
Fig. 8. Mean square error for delayed local EKF filters for the trajectory of Fig. 3. The legend is the same as in Fig. 7.
[5] L. Chen, P. O. Arambel, and R. K. Mehra. Fusion under unknown correlation: covariance intersection as a special case. In Proceedings of 7th IEEE International Conference on Information Fusion, pages 905–912, Annapolis, MD, USA, July 2002.
[6] C.-Y. Chong, W. Koch, and F. Govaers. Comparison of tracklet fusion and distributed Kalman filter for track fusion. In Proceedings of 17th IEEE International Conference on Information Fusion, 2014.
[7] F. Govaers and W. Koch. Distributed Kalman filter fusion at arbitrary instants of time. In Proceedings of 13th IEEE International Conference on Information Fusion, 2010.
[8] S. Grime and H. F. Durrant-Whyte. Data fusion in decentralized sensor networks. Control Engineering Practice, 2(5):849–863, 1994.
[9] F. Gustafsson. Statistical Sensor Fusion. Studentlitteratur, 2010.
[10] M. B. Hurley. An information theoretic justification for covariance intersection and its generalization. In Proceedings of 7th IEEE International Conference on Information Fusion, volume 1, Annapolis, MD, USA, July 2002.
[11] S. J. Julier and J. K. Uhlmann. A non-divergent estimation algorithm in the presence of unknown correlations. In Proceedings of the American Control Conference, pages 2369–2373, Albuquerque, NM, USA, June 1997.
[12] T. Kailath, A. H. Sayed, and B. Hassibi. Linear Estimation. Prentice-Hall, Inc., 2000. ISBN 0-13-022464-2.
[13] W. Koch and F. Govaers. On decorrelated track-to-track fusion based on accumulated state densities. In Proceedings of 17th IEEE International Conference on Information Fusion, 2014.
[14] W. Niehsen. Information fusion based on fast covariance intersection filtering. In Proceedings of 7th IEEE International Conference on Information Fusion, volume 2, pages 901–904, Annapolis, MD, USA, July 2002.
[15] B. Noack, J. Sijs, M. Reinhardt, and U. D. Hanebeck. Treatment of dependent information in multisensor Kalman filtering and data fusion. In H. Fourati, editor, Multisensor Data Fusion: From Algorithms and Architectural Design to Applications, pages 169–192. CRC Press, 2015.

Fig. 9. The results of applying IMM filters on the trajectory of Fig. 3.
[16] J. Sijs and M. Lazar. Empirical case-study of state fusion via ellipsoidal intersection. In Proceedings of 14th IEEE International Conference on Information Fusion, Chicago, IL, July 2011.
[17] J. Sijs and M. Lazar. State fusion with unknown correlation: Ellipsoidal intersection. Automatica, 48:1874–1878, Aug. 2012.
[18] X. Tian and Y. Bar-Shalom. Exact algorithms for four track-to-track fusion configurations: All you wanted to know but were afraid to ask. In Proceedings of 12th IEEE International Conference on Information Fusion, pages 537–544, 2009.
[19] X. Tian and Y. Bar-Shalom. On algorithms for asynchronous track-to-track fusion. In Proceedings of 13th IEEE International Conference on Information Fusion, pages 1–8, 2010.
[20] G. Watson and W. Blair. Benchmark problem for radar resource allocation and tracking maneuvering targets in the presence of ECM. Technical Report NSWCDD/TR-96/10, 1996.
Fig. 10. RMSD from the optimal generated CKF track for the trajectory in Fig. 4.
Hindawi
Mobile Information Systems
Volume 2022, Article ID 8930472, 17 pages
[https://doi.org/10.1155/2022/8930472](https://doi.org/10.1155/2022/8930472)
# Review Article Self-Sovereign Identity Solution for Blockchain-Based Land Registry System: A Comparison
## Mohammed Shuaib,[1,2] Noor Hafizah Hassan,[1] Sahnius Usman,[1] Shadab Alam,[2]
Surbhi Bhatia,[3] Arwa Mashat,[4] Adarsh Kumar,[5] and Manoj Kumar[5]
1Razak Faculty of Technology and Informatics (RFTI), Universiti Teknologi Malaysia (UTM), Kuala Lumpur, Malaysia
2College of Computer Science & IT, Jazan University, Saudi Arabia
3Department of Information Systems, College of Computer Science and Information Technology, King Faisal University, Al Hasa, 36362, Saudi Arabia
4Faculty of Computing & Information Technology, King Abdulaziz University, P.O. Box 344, Rabigh 21911, Saudi Arabia
5Department of Systemics, School of Computer Science, University of Petroleum and Energy Studies, Dehradun, India
Correspondence should be addressed to Adarsh Kumar; adarsh.kumar@ddn.upes.ac.in
and Manoj Kumar; wss.manojkumar@gmail.com
Received 20 January 2022; Accepted 17 March 2022; Published 4 April 2022
Academic Editor: Sebastian Podda
[Copyright © 2022 Mohammed Shuaib et al. This is an open access article distributed under the Creative Commons Attribution](https://creativecommons.org/licenses/by/4.0/)
[License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is](https://creativecommons.org/licenses/by/4.0/)
properly cited.
Providing an identity solution is essential for a reliable blockchain-based land registry system. A secure, privacy-preserving, and
efficient identity solution is essential but challenging. This paper examines the current literature and provides a systematic
literature review in three stages based on the three research questions (RQ) that show the assessment and interpretation
process step by step. Based on the parameters and RQ specified in the research methodology section, a total of 43 primary
articles have been selected from the 251 articles extracted from various scientific databases. The majority of these articles are
concerned with evaluating the existing self-sovereign identity (SSI) solutions and their role in the blockchain-based land
registry system to address the compliance issues in the existing SSI solutions with SSI principles and find the best possible SSI
solution to address the identity problems in the land registry. The existing digital identity solutions cannot handle the
requirements of the identity principle and are prone to various limitations like centralization and dependency on third parties
that further augment the chance of security threats. SSI has been designed to overcome these limitations and provide a secure,
reliable, and efficient identity solution that gives complete control to the users over their personal identity information (PII).
This paper reviews the existing SSI solutions, evaluates them based on the SSI principles, and comes up with the best possible
SSI solution for a blockchain-based land registry system. It further provides a detailed investigation of each SSI solution to
present its functionalities and limitations for further improvement.
## 1. Introduction
The land registry is an important economic pillar for any
country in nation-building. Blockchain technology can
improve the security and transparency in the land registry
by recording land-related details on the blockchain. Blockchain technology also hastens property identification and
enhances trust and accuracy in transactions by enabling digital monitoring by stakeholders. In an increasingly digital world, robust, useful, and flexible digital identity management systems are critical for electronically identifying and authenticating ourselves and for knowing with whom we communicate. As per McKinsey, a "Good Digital ID" contains a high level of digital channel protection, verification, and authenticated identity, specially created with the user's consent [1]. In 2005, Cameron, then an Identity and Access Architect at Microsoft Corporation, wrote "The Laws of Identity" [2]. These laws consist of 7 principles that provide guidelines on managing and disclosing a user's identity and on identifying various entities with different types of identification. These
principles describe digital identity systems' success and failure. Digital identity solutions are therefore needed to enable the users of the land registry system to initiate a transaction [3]. However, many researchers [4, 5] working in the field of applying digital identity solutions to blockchain-based land registry systems confirmed the issue of noncompliance with the digital identity principles given by Cameron [2]. So while developing an identity solution for a blockchain-based land registry system, these issues need attention [6, 7].
A digital identity is a collection of credentials and identifiers expressed in an appropriate context, for instance, the
name, ID, and other relevant attributes [8, 9]. Digital identity describes the attribute of an entity digitally in providing
access to systems and application of identity management
process [10]. Traditionally, digital identities are mediums
to validate users at the workplace. Existing digital identities
are controlled by identity providers, not by the users themselves. Identity providers have complete ownership over an
individual’s identity, making it vulnerable to identity misuse.
Identity owners often share their credentials for registering
or accessing a service with no standard or guidelines on what
data they need to share and store on the Internet. In addition, oversharing of data contributes to privacy issues for
the identity owner [11]. Since the challenges of current digital identity are severe and damaging, a new concept of digital identity is required, one that can offer users complete control over their identities, reduce management costs, increase efficiency, and improve overall online identity [12].
In [13], the author presented a privacy-preserving blockchain-based identity management system for remote healthcare. The author evaluated the proposed system on parameters such as transaction gas cost, transactions per second, number of blocks lost, and block propagation time. The developed identity system can be applied to cancer patients and can be further extended by integrating the blockchain with IPFS. Additionally, [14] proposed a digital coupon scheme and explained the desired properties and features of a couponing system, which can be utilized to verify the non-repudiation property against malicious issuers. Further, in [15], the author presented a privacy-preserving blockchain architecture for IoT using Hierarchical Identity-Based Encryption (HIBE), suitable for IoT devices and mobile edge and cloudlet environments. The presented architecture was evaluated in a simulation environment named Contiki OS and provides confidentiality, integrity, and availability of the data for the mobile edge nodes.
SSI provides users with a decentralized identity and full control over their identity and personal data. It shares only the necessary information with a third party, a practice known as selective disclosure [16]. Issuing identity credentials built on a trusted network between two parties is the main objective of self-sovereign identity. Blockchain technology utilizes a distributed ledger to achieve consensus using a cryptographic protocol, fulfilling the requirement of providing a decentralized system for self-sovereign identity [17, 18]. While several blockchain-based SSI frameworks are available, no SSI model is available specifically for land registry systems. SSI used in the land registry will provide individuals with identities that can be used for communication with land management services. SSI can also allow individuals to create evidence of their property, such as a certified survey plan or a notarized declaration. SSI offers an opportunity to design a progressively more secure and trustworthy identity in lieu of a government-approved identity document by collecting certificates issued by reliable third parties, such as a land registry and financial institutions [19]. SSI can provide a framework for transforming data into credentials; for example, a person can use their verified location history from a mobile provider and land registry certificates to provide proof of an ownership claim
[20]. SSI may directly connect individuals to land plots and
provide a mechanism for recording land claims and related
data.
An SSI holder can use a verifiable claim issued for land
ownership to access other services such as banking, loans,
and government benefits. Individuals could submit a digital
title to obtain financial assistance or agricultural subsidies. A verifiable claim serves as a permanent record by a government authority acknowledging the rights of a property owner at a certain stage. If property certificates are lost or the owners are relocated, the verifiable claim will remain [21]. SSI development is still at an initial stage. Many governments and
enterprises are currently involved in developing SSI solutions
that are mainly based on blockchain technology. Some of the
prominent SSI solutions are Sovrin [22], UPort [23], Civic
[24], Blockstack [25], Selfkey [26], and ShoCard [27]. These
SSI solutions are being used in different domains. These SSI
solutions should satisfy the principles of digital identity given by Cameron [2], who examined identity solutions to determine the causes of their failure and market adaptability. He also established compliance with these principles as a requirement for building a successful SSI solution [27]. Therefore, every SSI solution should comply with the SSI principles [28].
This study is aimed at identifying how the self-sovereign
identity solves the issues of noncompliance with the digital identity principles in a blockchain-based land registry system. This paper tries to identify the role of SSI in a blockchain-based land registry system. It further aims to review the various SSI principles proposed by different researchers and to derive evaluation criteria for assessing the existing SSI solutions. Finally, it evaluates the existing SSI solutions to identify the most suitable one for application in the blockchain-based land registry system. Various classifications of the SSI principles are given by [29] and [11]. None of these classifications is complete, since several properties are still missing; moreover, some principles under one group can be irrelevant, as described in [29]. We identified criteria based on the classifications given by [11, 30] to compare the SSI solutions on the SSI compliance principles, which should be taken care of while designing an SSI-based identity model
for a blockchain-based land registry. A systematic analytical
study of existing SSI solutions has been conducted based on
the defined SSI principles and finalized evaluation criteria.
This article is divided into five sections. Section 2 provides a detailed background study. Section 3 presents the research methodology, including the identified research questions (RQ), the data sources used, the search mechanism, and
inclusion/exclusion criteria to shortlist the study sources.
Section 4 presents a detailed analysis of the outcomes
extracted from the literature based on each research question. Finally, Section 5 concludes the findings and reviews.
## 2. Background and Literature Review
This section provides a detailed study of the background literature required for this study, covering the concept of self-sovereign identity (SSI), the roles of SSI in information flow, blockchain technology and its application in SSI, and applications of SSI in the land registry system.
2.1. Concept of Self-Sovereign Identity. Self-sovereign identity
(SSI) is a revolutionary way to address identity. In the early
days, centralized organizations controlled digital identities,
while in the real world, people stored their issued identity
information in a decentralized manner using a physical wallet.
SSI’s objective is to connect online identity systems to the
actual world and give users control over their identities. In
the actual world, after the birth of a child, identity credentials
like birth certificates, identification numbers, etc., are provided
by the government authorities [16]. The person utilizes these
credentials on several occasions to identify themselves or
establish a relationship throughout life.
The self-sovereign identity is a well-developed concept in
the academic and industry fields. However, there is still no
consensus on its exact definition. Generally, the SSI is defined
by considering the principles of self-sovereign by de Marneffe
[31] and descriptions of identity by [32]. Self-sovereign identity is a digitalized form of personal features, details, and attributes. No entity can breach the right to choose a level of
privacy or reputation of identity attribute. While working as
an identity and access architect in Microsoft Corporation,
Cameron wrote identity laws in 2005. The identity law [2] follows a distributed ledger [33], which first explains the concept
of SSI [34]. Although Cameron was unaware of the advancement of distributed ledgers in the upcoming years, proposing
the Microsoft Passport is an unnecessary reliance on a single
organization without user control and can lead to identity failures. The necessity of user access, minimal disclosure, and a
portable, interoperable structure is required. The first occurrence of sovereign identity happened in 2019 [35].
In 2016, Allen presented ten principles of self-sovereign identity (SSI) [34], building upon the laws of identity by describing how identity could work, why systems and algorithms need to be transparent, and how identity can be persistent while remaining portable and interoperable. The details required for the concept of self-sovereign identity were proposed in [36]. The definition provided by Abraham is congruent with the ten principles provided by Allen [34]. Abraham extends the control concept and adds, "All user identity information will be recorded for further authentication." This is a trade-off between security and privacy, which should be based on the user's choice. SSI is considered a long-lasting identity possessed and controlled by the individual, without any external authority and without the possibility of identity removal. It requires user consent for the interoperability of user identity across several locations, and ownership over the identity, to provide user autonomy. SSI may prove to be the new normal in the evolution of identity management.
2.2. Roles of Self-Sovereign Identity. The self-sovereign identity (SSI) environment is structured as a peer-to-peer model where independent identities act as peers and communicate with each other. Communication is done so that people and organizations can affirm information about individuals by assigning claims or credentials [12, 16]. The significant entities in SSI are the identity owner, the credential issuer, and the verifier. The functions of each entity of SSI are represented in Figure 1.
Figure 1 illustrates the roles in SSI in the order of the credential flow. The issuer provides the credential making a statement; it is typically delivered off-chain. The credentials and self-attested data of the identity owner are available in the wallet. Issuers may revoke credentials if requirements are not fulfilled. The identity owner stores the credentials provided in a digital wallet, which functions as an agent in the SSI environment. The entire identity credential is held in the digital wallet as proof for verification, disclosed selectively. The identity owner has complete control over data sharing and usage. The consent of the identity owner is required for verifier services to access information. Publicly accessible records in registries, such as the issuer's identification key (DID), are checked to confirm that these credentials were issued by the actual issuer. When the identity owner's information meets the criteria, access is granted; the presented credentials are checked without contacting the issuer.
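To make the flow above concrete, the following minimal sketch simulates the three roles with an Ed25519 signature: the issuer signs a credential, the holder keeps it in a wallet, and the verifier checks it against a public registry of issuer keys. The `issue`/`verify` helpers, the `did:example` identifiers, and the in-memory registry are illustrative assumptions for this sketch, not part of any SSI standard or of the solutions reviewed here.

```python
# Minimal sketch of the issuer -> holder -> verifier credential flow.
# Requires the 'cryptography' package; all names are illustrative.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

issuer_key = Ed25519PrivateKey.generate()                      # held by the issuer
registry = {"did:example:issuer-1": issuer_key.public_key()}   # public key registry

def issue(claims: dict) -> dict:
    """Issuer signs the claims; the holder stores the result in a wallet."""
    payload = json.dumps(claims, sort_keys=True).encode()
    return {"claims": claims,
            "issuer": "did:example:issuer-1",
            "signature": issuer_key.sign(payload).hex()}

def verify(credential: dict) -> bool:
    """Verifier checks the signature via the registry, never the issuer."""
    payload = json.dumps(credential["claims"], sort_keys=True).encode()
    public_key = registry[credential["issuer"]]
    try:
        public_key.verify(bytes.fromhex(credential["signature"]), payload)
        return True
    except InvalidSignature:
        return False

wallet = issue({"parcel": "LR-1042", "owner": "did:example:alice"})
print(verify(wallet))  # True: verified without contacting the issuer
```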
Similarly, offering alternative credentials like a student ID
does not require the university’s permission in the actual
world. The blockchain uses a distributed ledger technology
which allows the creation of identity without a central authority where the ledger acts as a basis of trust. An essential feature
of SSI is off-ledger backend data storage. Most DID methods use a public or private repository, such as a private database or IPFS (InterPlanetary File System), to store off-ledger information. IPFS generates content-based hashes from the stored data itself. Wallet files are stored as a backup in the off-ledger backend, making them easy to recover if lost.
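The following minimal sketch illustrates the content-addressing idea behind such off-ledger backups, in the spirit of IPFS: the storage address is the hash of the content itself, so integrity can be re-checked on recovery. The in-memory `store` dictionary and the `put`/`get` helpers are illustrative stand-ins, not the IPFS API.

```python
# Minimal sketch of content-addressed off-ledger backup storage.
import hashlib
import json

store = {}  # illustrative stand-in for an off-ledger repository

def put(wallet_file: bytes) -> str:
    """Store a wallet backup under its content hash and return the address."""
    address = hashlib.sha256(wallet_file).hexdigest()
    store[address] = wallet_file
    return address

def get(address: str) -> bytes:
    """Recover a backup; the address doubles as an integrity check."""
    data = store[address]
    assert hashlib.sha256(data).hexdigest() == address, "content was altered"
    return data

backup = json.dumps({"keys": ["..."], "credentials": []}).encode()
address = put(backup)
print(get(address) == backup)  # True: backup recovered and verified
```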
2.3. Blockchain Technology and Self-Sovereign Identity. Self-sovereign identity systems are based on blockchain technology. The blockchain is an evolving technology, popularized by cryptocurrency, that provides a decentralized, open, shared ledger [37] and can be used for electronic voting [38] and land registration [39, 40]. It is evident that cryptocurrency is not the only feasible use case for blockchain [41, 42]. Blockchain technology is well placed, due to its technical features, to facilitate a notable change in digital identity [43]. Self-sovereign identity is based on sharing and storing verifiable claims held off-ledger [44]. The authenticity of these signed data objects is assured by storing a hash of the object on a blockchain. Once subjects submit a verifiable claim to a relying party, the hash of the claim can be compared with the available blockchain record and verified through the attached signature, so the relying party can quickly and precisely ascertain the claim's validity. A blockchain provides a way to revoke claims, store an auditable record of consent behavior, and maintain the security of data
[Figure 1 shows the issuer, identity owner, and verifier connected through the SSI ledger: the issuer issues a credential to the identity owner, who (2) stores the credential in a wallet and (3) presents the credential or creates a proof for the verifier.]
Figure 1: Roles of SSI with information flow [12].
objects to assure the integrity of the data object. Blockchain is
built on a decentralized public-key infrastructure and provides
robust methods that can be used for encryption and authentication beyond self-sovereign identity [45, 46]. Additionally, as noted in [47], blockchain offers several key features that present ample opportunities for identity systems, including immutability, usability, and low transaction cost [48, 49].
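A minimal sketch of the hash-anchoring pattern described above follows: only the claim's hash is recorded on the ledger, and a relying party recomputes the hash to validate the off-ledger claim. The `ledger` list and the `anchor`/`check` helpers are illustrative assumptions, not a real blockchain client.

```python
# Minimal sketch of anchoring a claim hash on a ledger; the claim itself
# stays off-ledger. The plain list is a stand-in for a real blockchain.
import hashlib
import json

ledger = []  # append-only stand-in for the blockchain

def anchor(claim: dict) -> int:
    """Record the claim's hash on the ledger and return its position."""
    digest = hashlib.sha256(json.dumps(claim, sort_keys=True).encode()).hexdigest()
    ledger.append(digest)
    return len(ledger) - 1

def check(claim: dict, position: int) -> bool:
    """Relying party recomputes the hash and compares with the on-chain record."""
    digest = hashlib.sha256(json.dumps(claim, sort_keys=True).encode()).hexdigest()
    return ledger[position] == digest

claim = {"subject": "did:example:alice", "parcel": "LR-1042", "right": "ownership"}
pos = anchor(claim)
print(check(claim, pos))                        # True: claim matches the anchor
print(check({**claim, "right": "lease"}, pos))  # False: tampered claim detected
```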
2.4. Self-Sovereign Identity and Land Registry. The fundamental application of self-sovereign identity (SSI) to the land registry is to provide individuals with identities that can be used for communication with land management services. One billion people across the world have no identification record. SSI offers an opportunity to design a progressively more secure and trustworthy identity in lieu of a
government-approved identity document by collecting certificates issued by reliable third parties, such as a land registry
and financial institutions [19]. In the absence of legal documentation, SSI can also allow individuals to create evidence
of their property, such as a certified survey plan or a notarized declaration. SSI credentials are robust and should not
be limited to a digital version of traditional paper documents [50]. SSI can provide a framework for transforming data into credentials that administrative agencies trust. For
example, a person can use their verified location history from
a mobile provider and land registry certificates to provide
proof of ownership claim [20].
In the absence of land registries, self-sovereign identity may directly connect individuals to land plots and provide a mechanism for recording land claims and related data. An SSI holder can use a verifiable claim issued for land ownership to access other services such as banking, loans, and government benefits. Individuals could submit a digital title to obtain financial assistance or agricultural subsidies. A verifiable claim serves as a permanent record by a government authority acknowledging the rights of a property owner at a particular stage. If the property certificate is lost or the owner relocates, the verifiable claim will remain [51].
(i) User Control. Self-sovereign identity solutions, using cryptographic signatures, pairwise connections, and digital identities, provide the user with complete control over their identity information. Users or groups are attached to assets through self-sovereign identity, which improves the functions and scope of the land registry. Moreover, verifying and exchanging identity information will evolve to provide validated credentials and to manage the remaining registry components that do not directly benefit from self-sovereign identity
(ii) Facilitate Access to Finance. Self-sovereign identity-based land registers can also provide more detailed and trusted information about potential borrowers in developing countries. The financial-market specialists at the Inter-American Development Bank, Juan Antonio Ketterer and Gabriela Andrade, acknowledged that transparent and more accurate asset registers used as collateral could mitigate knowledge-related asymmetry constraints and provide financial access [52]. As shown by recent initiatives in the United States of America, the expansion of mobile assets can have a major impact on economic growth for small and medium-scale enterprises [53]
(iii) Efficiency in Real Estate Markets. To reduce the possibility of fraud in real estate markets, a high degree of due diligence is required on the identity of the involved parties, leading to inefficiency and higher transaction fees. A self-sovereign identity solution will securely associate owners with their properties and legally bind the digital signature, providing trusted and transparent online operation
(iv) Land Ownership in Postconflict Situations. Legal reestablishment of land rights for refugees and internally displaced persons (IDPs) helps postconflict restoration. However, the restoration process is complicated, as many refugees do not have essential land records or fear consequences [54]. An SSI secures land ownership records, and verifiable credentials received from an NGO can help record a claim in lieu of a proper land registry [1]
(v) Natural Disaster Resilience. Land ownership is important for disaster preparedness and can improve the restoration process. New programs for disaster preparedness use innovative technologies. An SSI solution will give users a safer and more accessible tool to show their land ownership and to submit requests for assistance and restoration grants. Decentralized record management will guarantee the preservation of land ownership records. The use of biometrics in SSI allows people to prove their identities and access authorized services, even when documents are destroyed or lost
## 3. Research Methodology
This paper performs a systematic literature review to explore the latest state-of-the-art academic research on self-sovereign identity and blockchain, and additionally to examine the role of self-sovereign identity in the land registry system. To achieve the most comprehensive coverage of the published literature, our systematic review methods were carefully planned using the guidelines of Kitchenham and Charters [55] to identify the need for the review and create a review plan. Our systematic review method includes the research questions, the data sources used for retrieving papers, the search strategy, the inclusion and exclusion criteria, and the screening and final selection, as summarized in Figure 2.
3.1. Research Questions. The first stage of the systematic literature review was to identify research questions (RQs) for a
detailed review of available topics. The main research question
addressed in this study is as follows.
(RQ): How can the most appropriate self-sovereign identity solution be selected for the blockchain-based land registry?
To answer the main research question of this study, we outlined three guiding questions.
RQ1: How does self-sovereign identity solve the issue of noncompliance with digital identity principles in the blockchain-based land registry?
RQ2: Which criteria can be used to compare and select the most appropriate blockchain-based self-sovereign identity solution?
RQ3: What is the evaluation result of the various blockchain-based self-sovereign identity solutions?
To address the above guiding questions, we used the
guidelines given by Kitchenham and Charters for a systematic review [55] and the standard procedure for selecting
the literature for our research.
3.2. Data Sources. In this systematic research, material collection was performed through various scholarly databases such
as Scopus, Web of Science, ACM Digital Library, and IEEE
Xplore to collect more articles. These databases were chosen
as they contain peer-reviewed papers and enable logical
expressions (keywords, names, and/or abstracts) to be
searched. Grey literature, such as reports on government projects, working papers, and assessment documents, was also included. Blockchain-based self-sovereign identity implementation is a new study area, and various blockchain-based firms are currently working on it. Including
grey literature extends state-of-the-art research sources by
using a broader research source. Each selected database was
checked separately by the specified search words, and the
results were combined after removing duplicates using Mendeley software. Table 1 shows the number of articles generated
by search string in each database. Some found publications are
available in more than one database. The total number of articles with duplication is 251.
3.3. Search Strategy. The search covers the period between 2008 and 2021. This systematic review study took 2008 as its starting point, when the first actual research on blockchain was published. The grey literature includes magazines,
company whitepapers, and books. To identify different
blockchain-based self-sovereign identity solutions, and to be
as generic as possible, the search string used to retrieve the
articles from databases is (“self-sovereign identity” AND
“Blockchain”) OR (“self-sovereign identity” AND “identity
management”) OR (“self-sovereign identity” AND “Blockchain” AND “identity management”). Additionally, semantically related search terms from the fields of digital identity, self-sovereign identity, and blockchain were also searched in the databases. Moreover, our search string was restricted to the article’s title, abstract, and subject terms, so as to exclude irrelevant articles that reference the search words only in the body text.
The next step was to search for all related papers. A final
search was carried out on 17 November 2020, covering years
from 2008 to 2022. The search consists of conferences, journals, workshops, government project reports, working papers,
review documents, and book sections. The searched terms are
“blockchain”, “land registry”, “Identity model”, and “Law of
identity”, checked against the titles, keywords, and abstracts of academic papers. Some research papers use “real estate” in place of “land registry”, so we modified the search strategy and used the “real estate” and “blockchain” keywords as well.
Additionally, some researchers use identity management
in place of identity model. As a result, we finally decided to retrieve all papers based on the strings (“land registry” AND
“Blockchain” or “real estate” AND “Blockchain” or “Identity
model” AND “Law of identity”, “identity principle” or “Identity management” AND “Law of identity”, “identity principle”). Table 2 displays the search string and the results from
scholarly databases.
3.4. Inclusion/Exclusion Criteria. Not all of the articles found were relevant to the subject, and thus the next step was to identify the articles that satisfy the scope of our study. We did this by specifying criteria for inclusion and exclusion, as shown in Table 3. These criteria were applied to the titles, abstracts, and keywords of the identified articles to classify them according to the scope of our study. In some cases, the titles and abstracts were not sufficient; therefore, the whole paper was examined to ensure strict application of the inclusion and exclusion criteria.
3.5. Screening and Final Selection. The initial screening process was carried out on collected papers to verify compliance
with our scope of the study. In this Systematic Literature
Review, 251 articles were collected mainly from the scholarly
databases (grey literature has been omitted from the
[Figure 2 summarizes the selection flow: records from database searching (Scopus, Web of Science, ACM DL, and IEEE Xplore; n = 251 in total) were imported into the citation manager; 37 duplicates were removed; 214 records underwent title/abstract screening, with 88 excluded on title and 62 on abstract; 64 full-text articles were assessed for eligibility, with 33 excluded, leaving 31 selected articles. From the grey literature, 65 records were retrieved and screened, 51 were excluded based on the criteria, and 12 reports were selected. In total, 43 studies (31 articles and 12 reports) were selected.]
Figure 2: Procedural steps for the selection process.
Table 1: Search string and results for scholarly databases.

| Search string | Scopus | Web of Science | ACM Digital Library | IEEE Xplore |
|---|---|---|---|---|
| (“self-sovereign identity” AND “Blockchain”) | 48 | 19 | 19 | 25 |
| (“self-sovereign identity” AND “identity management”) | 27 | 14 | 11 | 18 |
| (“self-sovereign identity” AND “Blockchain” AND “identity management”) | 22 | 11 | 11 | 26 |
| Total with duplicates | 97 | 44 | 41 | 69 |
Table 2: Search terms and results from different scholarly databases.

| Search terms | IEEE Xplore | Scopus | ACM | Science Direct | Web of Science |
|---|---|---|---|---|---|
| “Land registry” AND “Block chain” | 7 | 28 | 19 | 36 | 14 |
| “Real estate” AND “Blockchain” | 20 | 77 | 67 | 77 | 33 |
| “Identity model”, “Identity” AND “Law of identity”, “identity principle” | 7 | 9 | 5 | 8 | 2 |
| “Identity management” AND “Law of identity”, “identity principle” | 6 | 21 | 8 | 22 | 11 |
| Total with duplicates | 40 | 135 | 99 | 143 | 60 |
Table 3: Inclusion/exclusion criteria.

| Inclusion criteria | Exclusion criteria |
|---|---|
| (i) Publication between 2008 and 2022 | (i) Duplicates |
| (ii) Papers whose research scope is blockchain technology, with the subscope of applying that technology to domains related to self-sovereign identity and identity management | (ii) Papers not in the English language |
| (iii) Original research papers instead of review/survey papers | (iii) Papers where the terms have a meaning other than the one relevant to blockchain-based self-sovereign identity |
| | (iv) Articles addressing technical aspects of blockchain technology |
descriptive analyses for conformity). The number of articles chosen as primary studies was reduced to 214 after eliminating 37 duplicate papers. Subsequently, we read each publication’s title, abstract, and keywords to assess its relevance for the next stage of screening. We also carefully reviewed whether each paper was inside or outside the scope, applying the inclusion and exclusion criteria to the abstract, conclusion, and discussion sections. Eighty-eight articles were excluded based on the title, and 62 articles were excluded based on the abstract. A limited number of publications passed the primary screening stage, for several reasons; the first screening ended with 64 articles. In the final screening, the remaining 64 articles were read in detail, removing publications with little significance to the scope of our study. Finally, 31 papers and 12 reports were selected for our study.
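The screening arithmetic above can be double-checked with a few lines; the counts are taken directly from the text, and the snippet only verifies that they are consistent.

```python
# Consistency check of the screening counts reported above.
retrieved, duplicates = 251, 37
after_dedup = retrieved - duplicates                         # 214 for screening
excluded_title, excluded_abstract = 88, 62
full_text = after_dedup - excluded_title - excluded_abstract  # 64 assessed in full
excluded_full_text = 33
articles = full_text - excluded_full_text                    # 31 primary articles
reports = 12                                                 # from grey literature
assert (after_dedup, full_text, articles) == (214, 64, 31)
print(articles + reports)                                    # 43 selected studies
```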
## 4. Research Questions and Analysis
This section is further divided into three subsections (A, B, and C). Subsection A presents the issues of noncompliance with identity principles in the blockchain-based land registry system and how SSI solves them. Subsection B describes the criteria for evaluating the blockchain-based SSI solutions. Subsection C shows the evaluation results of the various blockchain-based SSI solutions based on the defined criteria.
4.1. RQ1: How Does Self-Sovereign Identity Solve the Issue of Noncompliance with Digital Identity Principles in the Blockchain-Based Land Registry? In an increasingly digital world, robust, useful, and flexible digital identity management systems are critical for electronically identifying and authenticating ourselves and for knowing with whom we communicate. As per McKinsey, a “Good Digital ID” contains a high level of digital channel protection, verification, and authenticated identity, specially created with the user's consent [56]. It helps us to decide with whom and for what reasons we choose to exchange data, ensuring the user's privacy and control over personal data. This would unlock value by encouraging inclusion, formalization, and digitalization.
For instance:
(i) 45% of females aged 15 and above in low-income countries lack an ID, compared with only 30% of males
(ii) Digital ID could unlock value equivalent to 3-13 percent of GDP in 2030
In 2005, Cameron, then an Identity and Access Architect at Microsoft Corporation, wrote The Laws of Identity [2]. A basic definition of identity requires concepts on which the involved parties can focus the design of additional services. The principles can also be used as a goal to build trust and interoperability between services in the environment. These laws consist of 7 principles that provide guidelines on managing and disclosing a user's identity and on identifying various entities with different types of identification. These principles describe digital identity systems' success and failure. They are briefly explained below.
(i) Law 1: User Control and Consent. “Identity systems
only disclose user identification with user consent”
(ii) Law 2: Minimum Disclosure. The most successful
long-term solution is one that discloses the lowest
quantity of information and limits its use
(iii) Law 3: Justifiable Parties. Digital identity systems
should be established to limit information disclosure to parties with the necessary, justifiable position in a particular identity relationship
(iv) Law 4: Directed Identification. The universal identity scheme must recognize omnidirectional identifiers for public entities and unidirectional
identifications for private entities, simplifying discovery and preventing unnecessary correlation
disclosures
(v) Law 5: Pluralism of Operators and Technology. The
identity system should manage multiple identity
technologies run by different providers and allow
them to communicate
(vi) Law 6: Human Integration. The human user must
be represented as part of the distributed system that
can be integrated into communication mechanisms
between people and machines to safeguard from
identity attacks
(vii) Law 7: Consistent Experience across Contexts. A
unifying identity metasystem must ensure that its
users have a clear and consistent experience,
enabling operators and technologies to differentiate
between different contexts
The explanations of these digital identity principles are extensive, and some of them could be made more specific. For example, the first concept can be divided into user control and consent; some identity solutions may satisfy one but not the other. Given that there was no self-sovereign identity at the time these principles were written, it is all the more remarkable that the majority of them were adopted among the guiding principles of “The Evolution of Digital Identity Concepts” by Allen [34]. In the well-known post “The Path to Self-Sovereign Identity,” Allen outlined the SSI principles, including specific guidelines from other sources such as Cameron and the W3C Verifiable Claims Task Force [57]. The following ten principles are taken from Allen's paper [34] and serve as guidelines for SSI adoption.
(1) Control: Users Must Control Their Identities. The user is the ultimate authority over their identity, subject to well-understood and safe algorithms that ensure that the identity and its claims remain valid. They should be able to refer to it, update it, or even hide it, and are free to choose celebrity or privacy as they wish. The user does not regulate all identity claims: other users can make claims about a user, but those claims should not be central to the identity itself
(2) Access: Users Must Have Access to Their Own Data.
A user must always be able to easily access and
recover all the claims and other identification
details. There must be no hidden data and no
gatekeepers
(3) Transparency: Transparent Systems and Algorithms.
The systems for managing and running an identity
network must be transparent in terms of their functioning, management, and updating. The algorithms
should be open source, well-documented, and autonomous from any particular architecture
(4) Persistence: Identities Must Be Long-Lived. Only the user should be able to remove an identity. Claims can be updated and removed, but the identity to which these claims belong should be long-lived. Identities should ideally remain forever, or at least as long as the user wants. Although private keys may have to be rotated and data may need to be changed, the identity remains. In the rapidly evolving world of the Internet, this goal may not be entirely feasible, but identities should at least remain until new identity systems outdate them
(5) Portability: Identity Information and Services Must Be Transportable. Identities should not be held by a single trusted third-party entity, even one that behaves in the user's best interests; they should be transportable. Transportable identities ensure that the individual stays in charge of their identity and can increase identity persistence over time
(6) Interoperability: Identities Should Be Used as Widely
as Possible. Identity is of little benefit if used only in
small niches. A modern-day digital identity system
aims to access identity information widely and across
international borders to create global identities without relinquishing user control
(7) Consent: Users Must Agree to the Use of Their Identity. Any identity system is designed to share identity and claims, and an interoperable system increases the amount of sharing that occurs. However, data sharing must only occur with user consent. While other users such as an employer, a credit bureau, or a spouse can present claims, the user must still give consent
(8) Existence: Users Must Have an Independent Existence. An SSI fundamentally depends on the ineffable “I” at the core of identity, which can never fully exist in digital form; the digital identity must be supported by this self-existing kernel
(9) Minimalization: Disclosure of Claims Must Be Minimized. When data is shared, it should include the least amount of data required to perform the task. This is supported by selective disclosure and zero-knowledge proofs. Full non-correlatability is nevertheless difficult to achieve; the best possible approach is to use minimalization to promote privacy
(10) Protection: The Rights of Users Must Be Protected. If the priorities of the identity network conflict with the rights of individual users, the network should commit to protecting the rights and freedoms of users over its own needs
SSI is considered a long-lasting identity possessed and controlled by the individual, without any external authority and without the possibility of identity removal. It requires user consent for the interoperability of user identity across several locations, and ownership over the identity, to provide user autonomy. SSI may prove to be the new normal in the era of digital identity. Self-sovereign identity is a potential solution since it provides people, organizations, and companies sovereignty over their identifiers and full control over how and with whom information is shared or utilized. Only the necessary information is revealed to third parties, in what is known as selective disclosure [12, 16]. Issuing identity credentials built on a trusted network between two parties is the main objective of self-sovereign identity. Through an easy, automated process and standard formats, SSI can create a convenient communication method.
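As a rough illustration of selective disclosure, the sketch below uses salted hash commitments: the issuer commits to each attribute, and the holder reveals a single attribute together with its salt so the verifier can check it in isolation. This is a simplified stand-in for the richer mechanisms (such as zero-knowledge proofs) used by real SSI stacks; all names and helpers here are illustrative.

```python
# Minimal sketch of selective disclosure via salted hash commitments.
import hashlib
import os

def commit(value: str, salt: bytes) -> str:
    """Commitment to one attribute: hash of salt plus value."""
    return hashlib.sha256(salt + value.encode()).hexdigest()

attributes = {"name": "Alice", "parcel": "LR-1042", "national_id": "X-99"}
salts = {k: os.urandom(16) for k in attributes}
commitments = {k: commit(v, salts[k]) for k, v in attributes.items()}  # issued

# Holder discloses only the parcel, keeping name and national_id private.
key, value, salt = "parcel", attributes["parcel"], salts["parcel"]

# Verifier checks the single disclosed attribute against its commitment.
print(commit(value, salt) == commitments[key])  # True: verified in isolation
```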
4.2. RQ2: What Are the Criteria for Evaluating Blockchain-Based Self-Sovereign Identity Solutions?
4.2.1. Related Work. The various evaluation criteria adopted by multiple researchers to evaluate self-sovereign identity, as well as comparative studies of blockchain-based self-sovereign solutions, are discussed below.
Cameron (2005) explained the seven laws discussed in
the earlier section, where he outlined the strengths and
weaknesses of digital identity concepts [2]. These laws are
vital to prevent repercussions, and they describe in detail the laws of identity and the requirements of self-sovereign identity. Certain blockchain-based solutions may not satisfy
certain properties of self-sovereign identity. Based on these
seven laws, Allen (2016) outlined ten principles to consider when implementing SSI solutions [34]. In the self-sovereign identity solution, these principles are aimed at user control, extending beyond the seven laws. Stokkink and Pouwelse (2018) used these ten self-sovereign identity principles to test the blockchain-based SSI solutions Sovrin and uPort, and included an additional property in the evaluation list requiring claims to be provable [58].
The problem with current identity solutions is that the individual is not the real owner of their identity. This problem can be overcome with the growth of SSI solutions. DNS-Idm, a blockchain-based identity management system, was developed using smart contracts to improve protection and privacy features [59]. In [43], the author compares various blockchain identity management systems and identifies challenges like trust, security, and privacy issues. He also discusses various trust-, security-, and privacy-based schemes that can be utilized to improve blockchain-based identity management systems. Shuaib et al.
(2022) compare the identity models, namely centralized, federated, user-centric, and SSI, based on the laws of digital identity, and suggest the SSI solution for use in a blockchain-based land registry system [60, 61]. Finally, a comparison of the available SSI solutions, i.e., uPort, Sovrin, and ShoCard, is made with the developed DNS-Idm using security and privacy criteria like ownership, user control and consent, human integration, privacy-friendliness, and directed precise identity.
Dunphy and Petitcolas (2018) made a comparison between blockchain solutions that use SSI based on the seven laws of identity [62], employing a trusted, decentralized identity where identity proofing relies on trustable existing credentials. They concluded that the usability (human integration) feature needs further improvement [63].
Similarly, Panait et al. (2020) evaluated ten current blockchain identity management solutions using SSI, focusing on platform implementation and long-term validity [64]. They emphasize the need to improve the cryptography and usability aspects of current SSI identity management solutions. On the other hand, Van Bokkem et al. (2019) evaluated seven blockchain-based self-sovereign identity solutions based on the eleven identity principles outlined by [34] alongside the provable-property notion [65]. In [66], a comparative study of popular identity management systems using SSI, like ShoCard, uPort, and Sovrin, is carried out based on the seven laws of identity [62].
Liu et al. (2020) compared blockchain identity systems that use SSI, namely uPort, Sovrin, and ShoCard, on aspects like control, security, and privacy [43], using the principles of self-sovereign identity given by [34].
Three self-sovereign identity solutions, namely Everest, Evernym, and uPort, were analyzed against the SSI principles using desk research and interviews with company blockchain experts [19]. As the “consent” principle is difficult to adopt in developing countries, it was replaced with the “Inclusion” principle.
4.2.2. Our Evaluation Criteria. SSI requires the basic principles of identity given by [2]. The principles of SSI in the article by Allen are examined, providing an additional view on digital identity linked to the seven identity principles given by Cameron. The ten essential principles for SSI are existence, control, access, transparency, persistence, portability, interoperability, consent, minimalization, and protection [34]. A similar classification of principles for SSI is given in (Ferdous et al., 2019), containing three additional properties: acceptance, zero cost, and controllability. Further, these principles were classified in [29], where the SSI principles are divided into three main groups: controllability, security, and portability. Additionally, principles such as availability, approval, tenacity, authority, autonomy, and confidentiality were used to compare SSI solutions [11]. None of these classifications is complete, since several properties are still missing. Moreover, some principles under one group can be irrelevant, as described in [29], which highlighted that the principles of persistence and existence are mismatched in the context of controllability. This study introduces the principle of “Inclusion” and eliminates “Existence,” as inclusion is essential for implementation in developing countries. The “usability” principle was also incorporated into the assessment model, as the user experience plays a crucial role in creating a better digital identity system. Therefore, a new taxonomy is categorized based on the classifications given by [11, 30] to compare the SSI solutions on the SSI compliance principles. Figure 3 gives a mapping of the laws of identity to the SSI principles. Based on all these classifications, new criteria for evaluating the SSI solutions have been proposed. The proposed principles used to compare the SSI-based solutions in our study are described as follows (a small scoring sketch follows the list):
(1) Inclusion. Everyone possesses an individual identity
and should have an identity from birth to death
(2) User Control and Consent. Users must have ownership over their identity and must be able to refer to, update, trace, and access their personal data. Online sharing of personal data should only occur with user consent
(3) Privacy and Protection. The user’s “right to privacy”
should be secured on the protocol level
(4) Portability. The identities should be available as long
as the identity owner desires. The identity information will be portable, allowing users to access and
control their identity, increasing identity persistence
over time
(5) Persistence. The identity system will be long-lasting,
where identity owners can recover private keys and
passwords if their primary device is damaged or
stolen
(6) Transparency. The system used to manage the identity network must be transparent in its processes,
management, and updates
(7) Interoperability. User identities are universally
acceptable across various international boundaries
and systems
(8) Human Integration. The system interface should meet the user's needs, providing identity owners with a good user experience across upcoming technologies and services
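The sketch below shows one simple way such a criteria-based comparison can be tabulated and ranked. The 0/1 scores are hypothetical placeholders for illustration only and are not the evaluation results of this study (those are reported in the comparison tables).

```python
# Illustrative tabulation of a criteria-based SSI comparison.
# NOTE: the scores below are hypothetical placeholders, not this paper's results.
CRITERIA = ["inclusion", "control_consent", "privacy", "portability",
            "persistence", "transparency", "interoperability", "human_integration"]

# 1 = satisfies the criterion, 0 = does not (example values only).
scores = {
    "Solution A": [1, 1, 1, 1, 1, 1, 0, 0],
    "Solution B": [1, 1, 0, 1, 1, 1, 1, 0],
    "Solution C": [1, 1, 0, 0, 0, 1, 0, 1],
}

# Rank solutions by how many of the eight criteria they satisfy.
for name, row in sorted(scores.items(), key=lambda kv: -sum(kv[1])):
    met = [c for c, s in zip(CRITERIA, row) if s]
    print(f"{name}: {sum(row)}/8 -> {', '.join(met)}")
```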
4.3. RQ3: What Is the Evaluation Result of the Various Blockchain-Based Self-Sovereign Identity Solutions? Secure user authentication and authorization are significant challenges that a reliable identity solution needs to address. SSI is a possible solution for resolving the issues of current identity models while providing a permanent identity with full user control. Blockchain is an innovative technology for implementing SSI solutions. The use of blockchain technology in an identity management system presents a possible solution for storing data on the blockchain; the stored data is secured using cryptographic tools, which make it immutable. Blockchain-based SSI solutions foster trust among participants within the network without disclosing
[Figure 3 maps the laws of identity (Cameron, 2005): 1. User control & consent, 2. Minimal disclosure, 3. Justifiable parties, 4. Directed entity, 5. Pluralism of operators, 6. Human integration, 7. Consistent experience, onto the principles of self-sovereign identity (Christopher Allen, 2016): 1. Existence, 2. Control, 3. Access, 4. Transparency, 5. Persistence, 6. Portability, 7. Interoperability, 8. Consent, 9. Minimalization, 10. Protection.]
Figure 3: Mapping of principles of identity with the SSI principles.
the actual data. Various blockchain-based SSI solutions will be
discussed and compared in this section based on the criteria
defined in Section 4.2.2. The comparative analysis of the existing SSI solutions has been given in Table 4.
(1) Selfkey [26]. Selfkey has been created as an SSI network where users can store data on personal devices [67]. Selfkey is a self-sovereign digital identity network [68] in which user information is stored on a user-operated device, providing user ownership. If a third party needs to access identity data, the user presents the information anchored in the blockchain. SelfKey uses zero-knowledge proofs to gather only a minimal amount of data, meeting the acceptance and minimization requirements. It uses censorship-resistant and force-resistant algorithms to verify identity, through which individuals can ascertain the identity claims of a customer. Portability in Selfkey is achieved using uPort. A significant weakness of Selfkey is its third-party dependency, where no specific information about the trusted third party is available. Other inadequacies of Selfkey include a lack of human integration and identifier attributes that persist only for a particular time [69]
(2) Shocard [27]. The ShoCard offers a digital identity
authentication platform designed based on the public blockchain. Identity owner authentication is
achieved using a centralized database containing
cryptographic hashes of digital identity users. The
individual is responsible for initiating interaction
with third parties to check identity. Data is
ultimately stored in a protected data envelope that only the intended receivers can decrypt. ShoCard was founded
in 2015 and can include five million records within
30 minutes in a verified public blockchain [27, 63]
It enables users and organizations to create secure and verified identities, where end-users control access to personal information and its sharing with third parties. Neither a third party nor ShoCard can access the data unless the user first shares the relevant information. The blockchain network is used to anchor the identities, but it does not hold the user's identity data. Additionally, ShoCard's login data storage is not decentralized and is a target for hacking, since central servers act as intermediaries between users and trusted third parties [63]. The partially centralized status of ShoCard creates instability in the existence of ShoCard IDs: if ShoCard servers stop running, identity holders will not be able to use their own digital IDs and credentials [70].
Additionally, the cryptographic key management does not support users, since ShoCard stores the identities on the public blockchain, which provides open access and transparency. Users secure the private key on their personal device, and the service provider uses the public key to verify the ShoCard ID. Organizations may use a software development kit to integrate ShoCard technology with their current application or website. ShoCard supports multiple authentication and verification functions, such as KYC, encryption, traceable authorization, and credential certification, besides offering an authentication mechanism using a phone app. The authentication process involves downloading the application to establish a ShoCard ID. It requires the user to take a snapshot of a legitimate government-issued identity document, through which ShoCard gathers personal information. The user can then validate the details, create a password, or opt for a biometric scan.
(3) Civic [24]. Civic is a blockchain-based identity authentication ecosystem in which a third-party wallet creates key pairs, storing identification information on the user's device. Civic and the blockchain only accept hashes of the data, stored on the Ethereum network as ERC20 tokens. Civic supports three independent groups in the network: consumers, validators, and service providers; it is based on the Ethereum blockchain and uses smart contracts to track the proof of attestation
The Civic identity utilizes validated identity for websites and mobile development without requiring usernames and passwords for multiparty authentication. Users monitor their protected data and share only the information they are willing to. The Civic app is used to store identity information on a mobile device in encrypted form. The hash values of the attached identity information are stored in a Merkle tree and anchored in the blockchain. Sections of the Merkle tree can be exposed selectively, increasing user control by enabling identity owners to disclose personal details selectively. Civic allows trustworthy identity authentication providers, known as validators, to participate and sign transactions on public blockchain nodes. It reconfigures the centralization function and provides an interactive open system for validators, but it is not entirely decentralized.
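The following minimal sketch illustrates the Merkle-tree mechanism described above: each attribute is hashed into a leaf, the leaves are combined into a single root (the value that would be anchored on-chain), and one attribute can be proven with only its sibling hashes. The attribute encoding and helper names are illustrative; Civic's actual data format is not shown here.

```python
# Minimal sketch of a Merkle tree over identity attributes with a single-leaf proof.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Hash each leaf, then pair hashes upward to a single root."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

attributes = [b"name=Alice", b"dob=1990-01-01", b"parcel=LR-1042", b"id=X-99"]
root = merkle_root(attributes)             # this hash is what goes on-chain

# Proving 'parcel=LR-1042' needs only its sibling hash and the other branch:
proof = [h(attributes[3]), h(h(attributes[0]) + h(attributes[1]))]
recomputed = h(proof[1] + h(h(attributes[2]) + proof[0]))
print(recomputed == root)                  # True: one leaf proven, others hidden
```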
Nevertheless, it has the same consensus mechanism as Sovrin. The authenticator can revoke identity records. For instance, when a user changes their last name, the authenticating agency cancels the previous, now invalid, last name on the blockchain. Therefore, Civic users depend on authentication authorities to establish a protected digital identity, resulting in a lack of portability [71]. Civic is a transparent system that utilizes a permissionless blockchain and does not have its own software or infrastructure for its network [72].
The benefits of the Civic ecosystem include strong relationships among financial institutions, public agencies, and utilities, as it intends to build a market among banks, utility organizations, and local, state, or federal governments, verifying individual or business identity attributes on a blockchain. The validators can price identity authentication and sell the identity verification to stakeholders using smart contracts. The Civic system remains effective, as it plays a vital role in its ecosystem and uses validators to verify identity data accessible through mobile apps. Civic also plans to launch the Civic wallet. By integrating identification with other applications, users can interact more securely and efficiently using standard cryptocurrency applications compared to other wallets. However, the development of this project is at an early stage.
(4) Sovrin [22]. The Sovrin Foundation started using blockchain to store distributed identities in order to formalize and create an SSI network. Theoretically, anyone can verify or issue an identity. Sovrin is used to build identities, using a centralized CA to create a trust-model network, and using a permissioned blockchain and steward nodes to achieve consensus. The Sovrin Foundation is a nonprofit organization with a board of twelve trustees, including the governance council
Sovrin allows a user to have complete control over digital identities, where the user can choose which information is shared and with whom. This selective disclosure uses a unique technique, ZKPs (zero-knowledge proofs). Additionally, Sovrin provides pairwise DIDs [73] and public keys to protect user privacy without compromising functionality. Since the Sovrin network has only a central authority, users rely on agencies and stewards, where trust and accountability are managed through the stewards' confidence, integrity, and noncollusion system. User data is stored on the user's personal device and is not stored in the network service provider's database. Sovrin aims to establish a market for customers to incorporate data portability and to restore private key loss using cryptographic accumulators. Semantic graphs, like JSON-LD, are often used to provide portability among providers.
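The pairwise-DID idea can be sketched as follows: a fresh keypair, and hence a fresh identifier, is generated per relationship, so two relying parties cannot correlate the same holder. The `did:example` method and the hash-based suffix are illustrative assumptions, not Sovrin's actual DID method.

```python
# Minimal sketch of pairwise DIDs: one fresh identifier per relationship.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives import serialization

def pairwise_did(peer: str) -> tuple[str, Ed25519PrivateKey]:
    """Generate a new keypair and derive an identifier for this peer only."""
    key = Ed25519PrivateKey.generate()               # new key per relationship
    raw = key.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw)
    suffix = hashlib.sha256(raw).hexdigest()[:16]
    return f"did:example:{suffix}", key

did_for_bank, _ = pairwise_did("bank")
did_for_registry, _ = pairwise_did("land-registry")
print(did_for_bank != did_for_registry)              # True: no correlation handle
```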
The Sovrin protocol uses open-source software licenses and is built on Hyperledger Indy [74]. The Sovrin trust framework regulates the Sovrin network of digital identity, security, policies, and stewards [75]. The Sovrin network contains stewards worldwide, including various financial institutions, start-ups, charities, and authorities for personal information. The Sovrin Foundation requires systems to comply with other digital identity systems, although the user's interaction is not clearly defined. Since Sovrin is at an early stage of development, developers and providers entering the identity ecosystem need to discuss the user experience extensively [74].
(5) uPort [23]. uPort uses an open identity system that enables customers to securely register their identities, sign transactions, send and request identity keys, and access keys and data [76]. The identity owner appoints trustees to restore a public key through a controller for key recovery purposes. Controller consensus is achieved by replacing the missing public key with a newly created key when executing a proxy. Built on the Ethereum blockchain, uPort connects attributes and stores them as a basic JSON structure [77]. The identity owner obtains the ecosystem's credentials without performing identity proofing when using the uPort framework. Users control their uPort IDs and share personal information with third parties; users' personal data is always available and is stored on-chain or off-chain using IPFS. In uPort, the user has greater responsibilities and authority over uPort IDs
uPort identifiers can be created without disclosing personal information, and the absence of any inherent connection between uPort identities contributes to system robustness. However, the registry user's JSON information is publicly available, which may violate user privacy. Users can claim ownership of uPort IDs without depending on a centralized entity, although uPort also contains several centralized components, such as the transfer messaging service, the push notification centre, and program manager attributes, which serve as means for machine control or compliance. uPort allows users to store identity data, credentials, and keys in the self-sovereign wallet, while the personal user key is stored on the user's device. The key recovery protocol gives users a persistent digital identity in case of mobile loss or theft. The software also supports faster authentication and single-sign-on for DApps and other apps, besides helping to establish the Decentralized Identity Foundation for a uniform user experience. Furthermore, QR code-scanning functionality allows communication with the other party [78]. Nevertheless, users consider uPort's protocol for recovering keys and preserving personal data complex and lacking in comprehensibility [77].
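Since uPort attributes are described as a basic JSON structure, the following sketch shows what such a record might look like. The field names and addresses are hypothetical illustrations, not uPort's actual schema.

```python
import json
import time

# Hypothetical uPort-style attribute record: the identity (a proxy contract
# address on Ethereum) maps to a plain JSON document of self- or third-party-
# attested attributes. All field names here are assumptions for illustration.
attribute_record = {
    "uportId": "0xAb5801a7D398351b8bE11C439e05C5B3259aeC9B",  # proxy address
    "attributes": {
        "name": "A. Example",
        "email": "a.example@example.org",
    },
    "attestations": [
        {
            "claim": {"over18": True},
            "issuer": "0x1111111111111111111111111111111111111111",
            "issuedAt": int(time.time()),
        }
    ],
}

# Off-chain storage (e.g., IPFS) would hold this document; only its hash is
# anchored on-chain, keeping personal data out of the public ledger.
print(json.dumps(attribute_record, indent=2))
```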
(6) Blockstack [25]. Blockstack is a decentralized network of computers that handles identity and, potentially, users' data. A Blockstack ID is a decentralized user ID that connects decentralized applications (DApps). The Blockstack public benefit corporation (PBC) is an open-source organization that develops the core Blockstack protocols and applications [79].
Applications developed on Blockstack give users control over their own identities and eliminate single points of failure. Users' data credentials cannot be stored on a centralized server, and content sharing is carried out using encryption. However, the collection of profiles can be seen and tracked globally through the blockchain, which may leak information and endanger users' privacy. Blockstack business logic and data processing run on the user's computer rather than on centralized servers hosted by service providers. The current decentralized storage scheme, Gaia [80], ensures that users own and operate private data lockers; cloud users may use these lockers as additional data storage platforms.
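A rough sketch of the Gaia idea follows: content is encrypted on the client before being written to a user-controlled locker, so the storage host never sees plaintext. This is a toy model assuming the third-party `cryptography` package; paths, function names, and the storage layout are assumptions, not Gaia's real protocol.

```python
from pathlib import Path

from cryptography.fernet import Fernet  # pip install cryptography

# Toy model of a Gaia-style data locker: the client encrypts content before
# writing it to a storage backend the user controls, so the backend (or any
# cloud host) only ever sees ciphertext. Names and paths are illustrative.

locker = Path("gaia_locker")
locker.mkdir(exist_ok=True)

user_key = Fernet.generate_key()   # stays with the user, never with the host
cipher = Fernet(user_key)

def put_file(name: str, content: bytes) -> None:
    (locker / name).write_bytes(cipher.encrypt(content))

def get_file(name: str) -> bytes:
    return cipher.decrypt((locker / name).read_bytes())

put_file("profile.json", b'{"name": "A. Example"}')
assert get_file("profile.json") == b'{"name": "A. Example"}'
```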
Blockstack lacks a key recovery protocol; users cannot reset their keys in the event of failure or a stolen ID, which violates the persistence principle. Conceptually, Blockstack operates on top of the Bitcoin network and is an open-source project offering programming libraries on a variety of platforms. The portable nature of Blockstack allows developers to adapt and integrate other technologies. Blockstack takes a full-stack approach, providing all the layers required to build decentralized applications and allowing customers with a single username to operate across all applications without passwords. Nevertheless, the Blockstack environment is at an initial development stage and offers only desktop versions of the Blockstack browser [81].
(7) LifeID [82]. LifeID is an open digital identity platform that allows users to create a personal online identity. Users verify every online and real-world transaction requiring authentication without third-party companies or government organizations. LifeID is often used in combination with a biometric-capable smartphone and app [83]. Only the user accepts information requests from third parties, which require user consent. LifeID uses zero-knowledge proofs: data are recorded on the user's computer, and only the necessary information is released whenever identity verification is needed. A LifeID identity is backed up and recovered using three different methods: cold storage backup, trusted relatives or associates, and a reputable organization; theft is combated by temporarily disabling or restoring identities.
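The three-way backup described above behaves like a threshold scheme: the key is reconstructable once enough designated parties cooperate. The sketch below shows a minimal 2-of-3 replicated-XOR split for illustration only; a production system would use a proper scheme such as Shamir's secret sharing, and the trustee names are assumptions.

```python
import secrets

# Minimal 2-of-3 "social recovery" sketch in the spirit of LifeID's backup via
# trusted relatives/associates. A real system would use Shamir's secret
# sharing; this replicated-XOR construction is only for illustration.

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_2_of_3(secret: bytes):
    p1 = secrets.token_bytes(len(secret))
    p2 = secrets.token_bytes(len(secret))
    p3 = xor(xor(secret, p1), p2)          # p1 ^ p2 ^ p3 == secret
    # Each trustee holds two of the three pieces; any two trustees together
    # hold all three pieces and can therefore reconstruct the key.
    return {"relative": (p1, p2), "friend": (p2, p3), "notary": (p3, p1)}

def recover(share_a, share_b) -> bytes:
    pieces = {bytes(p) for p in share_a + share_b}   # union of held pieces
    p1, p2, p3 = pieces                               # order is irrelevant
    return xor(xor(p1, p2), p3)

key = secrets.token_bytes(32)
shares = split_2_of_3(key)
assert recover(shares["relative"], shares["friend"]) == key
```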
(8) Evernym [84]. Evernym, established in 2013 by Jason Law and Timothy Ruff, is a well-known player that aims to facilitate the introduction of SSI within various industries [84]. Sovrin was explicitly designed for identity, and the company describes itself as the world's first professionally authenticated and verifiable public service provider. The mobile application, the Connect.Me wallet, enables users to create private, peer-to-peer communications with other people. It also allows users to control the digital keys and verifiable credentials of their digital identity.
Evernym achieves universal accessibility by using Sovrin to claim that SSI is a global public utility that meets everyone's identity needs.

Table 5: Comparative study of the blockchain-based self-sovereign identity solutions.

| SSI principles (evaluation criteria) | Sovrin | ShoCard | Selfkey | uPort | Civic | Blockstack | LifeID | Evernym | EverID |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Inclusion | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| User control and consent | ✓ | ✓ | x | ✓ | ✓ | ✓ | ✓ | x | ✓ |
| Privacy and protection | ✓ | x | ✓ | x | ✓ | x | x | ✓ | x |
| Portability | ✓ | x | ✓ | ✓ | x | ✓ | ✓ | ✓ | ✓ |
| Persistence | ✓ | x | x | ✓ | x | x | ✓ | ✓ | x |
| Transparency | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | x |
| Interoperability | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | x | ✓ |
| Human integration | x | ✓ | x | ✓ | ✓ | x | x | x | ✓ |

The firm will handle an identity on behalf of a vulnerable person or anyone else incapable of managing their digital wallet. Evernym can store all personal information on the customer's smartphone, while control in an Evernym solution is enabled by biometry, using the default
biometrics on a particular device. The Evernym solution provides an easy way to import/export a private key and handle an SSI; an individual may usually import a private key into a digital wallet through a text file or by QR code scanning. Using Sovrin, Evernym has the concept of a "guardian," a trusted third party who protects an exposed individual's identity. Evernym uses a hybrid open-source framework that provides access to a permissioned ledger, where guardian organizations must behave according to the criteria set out in the Sovrin Trust Framework. Management by the Sovrin Foundation and the secure implementation of the blockchain may reduce the abuse of digital identity and personal identity information. Evernym observed that the Sovrin network architecture, management, and operation could provide members with portability of their public and private data in compliance with the other principles. Evernym connections within the Sovrin network are established by comparing a "fairly-pseudonymous identification," i.e., a single DID per relationship. The Evernym system is unable to provide flexibility, which results in a lack of interoperability. Also, only a small amount of information is available about user control of the issuer's credential [72].
In Evernym's Connect.Me DApp, user biometrics are necessary to access a given identity and the related details in all situations. Individuals may also be expected to provide biometric information to establish peer-to-peer contact networks with other individuals and organizations, and when accepting credentials from an issuer or exchanging credentials.
(9) EverID [85]. EverID is a user-centric SSI and transitional solution built on blockchain [85]. The decentralized framework of EverID uses data, documents, and biometrics to store and validate user identities. EverID provides multiple third-party user verification and enables the secure transfer of value between network members [86]. The decentralized architecture provides ownership of personal data, which can be accessed only by the user. The individual's personal details are stored so that the individual controls how, with whom, and for how long these details are shared (persistence). The EverID system is operated on a number of network supernodes, which host the blockchain.
Additionally, the system hosts a bridge service that allows data transfer to an API server, where SDK-enabled devices perform these transactions, making it portable. EverID differs from other approaches in that the user does not need a device, because the digital identity (a combination of biometrics, government identification, and third-party confirmations) is saved in the cloud. However, EverID does not comply with the minimization property: when data are required for a claim to be checked, the user must reveal them fully. For example, to prove being over 18, the user can only show the complete birth date or nothing at all. EverID is also not open-source; thus, the statements in its whitepapers cannot be proven. Its implementation details are also not available in the public domain, raising concerns about compliance with transparency [65].
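To illustrate the minimization gap, the sketch below contrasts full-disclosure verification (the EverID behavior described above) with a predicate-only check in the spirit of zero-knowledge proofs. It is purely conceptual; the proof step is stubbed out, and neither function reflects a vendor's actual API.

```python
from datetime import date

birth_date = date(1990, 5, 8)

def full_disclosure_check(birth: date, min_age: int) -> bool:
    # EverID-style verification (as described above): the verifier receives
    # the complete birth date and computes the predicate itself.
    print(f"verifier saw: {birth.isoformat()}")       # privacy leak
    today = date.today()
    age = today.year - birth.year - ((today.month, today.day) < (birth.month, birth.day))
    return age >= min_age

def predicate_only_check(holder_claims_over: int) -> bool:
    # ZKP-style verification (conceptual): the verifier learns only the
    # boolean answer, backed by a proof instead of the raw attribute.
    print(f"verifier saw: over_{holder_claims_over} = True (plus a proof)")
    return True  # stands in for cryptographic proof verification

full_disclosure_check(birth_date, 18)   # reveals the whole birth date
predicate_only_check(18)                # reveals only the predicate
```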
## 5. Discussion
Based on the detailed analysis of available SSI solutions that can be used in the land registry environment, Table 5 provides a review of the selected SSI solutions. It shows that ShoCard does not comply with the principles of privacy, portability, and persistence due to its partial dependence on a centralized server for attribute validation. Selfkey lacks user control and consent, which is a significant weakness, as well as persistence and human integration. Civic does not comply with portability and persistence due to its reliance on a third party. Evernym does not comply with the principles of user control and interoperability. EverID does not comply with the principles of privacy, persistence, and transparency. LifeID has significant issues with privacy and security. Blockstack does not comply with the privacy, persistence, and human integration principles. Among the available solutions, Sovrin and uPort comply with the most SSI principles, but they fail to satisfy the human integration and privacy principles, respectively. The above assessment shows that none of the available SSI solutions fully complies with the SSI principles.
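The comparison in Table 5 can be summarized by counting the principles each solution satisfies. The short script below simply tallies the table (1 = ✓, 0 = x) and confirms the conclusion drawn here: Sovrin and uPort lead with seven of the eight principles, and no solution satisfies all eight.

```python
# Tally of Table 5: 1 = complies with the principle, 0 = does not.
# Row order: inclusion, user control and consent, privacy and protection,
# portability, persistence, transparency, interoperability, human integration.
table5 = {
    "Sovrin":     [1, 1, 1, 1, 1, 1, 1, 0],
    "ShoCard":    [1, 1, 0, 0, 0, 1, 1, 1],
    "Selfkey":    [1, 0, 1, 1, 0, 1, 1, 0],
    "uPort":      [1, 1, 0, 1, 1, 1, 1, 1],
    "Civic":      [1, 1, 1, 0, 0, 1, 1, 1],
    "Blockstack": [1, 1, 0, 1, 0, 1, 1, 0],
    "LifeID":     [1, 1, 0, 1, 1, 1, 1, 0],
    "Evernym":    [1, 0, 1, 1, 1, 1, 0, 0],
    "EverID":     [1, 1, 0, 1, 0, 0, 1, 1],
}

for solution, row in sorted(table5.items(), key=lambda kv: -sum(kv[1])):
    print(f"{solution:<11} {sum(row)}/8 principles")
# Sovrin and uPort lead with 7/8; no solution satisfies all eight.
```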
## 6. Conclusion
This paper highlights the limitations of existing identity solutions, the advantages of SSI, and its application in a blockchain-based land registry system. The paper uses a systematic literature review (SLR) based on three defined research questions, highlighting the role of SSI in solving the issue of noncompliance with identity principles, the evaluation criteria for assessing existing SSI solutions, and the best possible SSI solution for a blockchain-based land registry system. The SLR selected 251 papers based on the inclusion criteria and 65 articles from grey literature, and finally used a total of 43 articles for review. A detailed study of SSI principles and the evaluation criteria for existing SSI solutions has been presented. Based on the defined evaluation criteria, an extensive review of the existing SSI solutions has been carried out. This study highlights the strengths, limitations, and functioning of each SSI solution and concludes that none of the existing SSI solutions complies with all the SSI principles. Based on the defined evaluation mechanism, Sovrin is the best of the existing SSI solutions: it complies with most of the SSI principles but lacks human integration, and it is the best available SSI solution for a blockchain-based land registry system. As Sovrin lacks the human integration factor that is essential for ease of use and high adaptability, there is scope for further improvement and future research.
## Data Availability
Data is available on reasonable request.
## Conflicts of Interest
The authors declare that they have no conflicts of interest.
## References
[1] J. Dempsey and M. Graglia, Case study: property rights and stability in Afghanistan, New America, 2017.
[2] K. Cameron, "The laws of identity," August 2005, https://www.identityblog.com/stories/2005/05/13/TheLawsOfIdentity.pdf.
[3] K. Mintah, K. T. Baako, G. Kavaarpuo, and G. K. Otchere,
“Skin lands in Ghana and application of blockchain technology for acquisition and title registration,” Journal of Property,
Planning and Environmental Law, vol. 12, no. 2, pp. 147–
169, 2020.
[4] B. Bundesverband, Blockchain Opportunities and Challenges of a New Digital Infrastructure for Germany, Blockchain Bundesverband, Berlin, Germany, 2017, https://jolocom.io/wp-content/uploads/2018/07/Blockchain-Opportunities-and-challenges-of-a-new-digital-infrastructure-for-Germany-_-Blockchain-Bundesverband-2018.pdf.
[5] M. Kaczorowska, “Blockchain-based land registration: possibilities and challenges,” Masaryk University Journal of Law
and Technology, vol. 13, no. 2, pp. 339–360, 2019.
[6] N. Mehdi, "Blockchain: an emerging opportunity for surveyors?," 2020, https://www.rics.org/globalassets/blockchain_insight-paper.pdf.
[7] G. Sylvester, E-Agriculture in Action: Blockchain for Agriculture Opportunities and Challenges, Food and Agriculture Organization of the United Nations and the International Telecommunication Union, Bangkok, 2019.
[8] J. Torres, M. Nogueira, and G. Pujolle, “A survey on identity
management for the future network,” IEEE Communication
Surveys and Tutorials, vol. 15, no. 2, pp. 787–802, 2013.
[9] H. Ning, X. Liu, X. Ye, J. He, W. Zhang, and
M. Daneshmand, “Edge computing-based ID and nID combined identification and resolution scheme in IoT,” IEEE
Internet of Things Journal, vol. 6, no. 4, pp. 6811–6821,
2019.
[10] M. Laurent, J. Denouël, C. Levallois-Barth, and P. Waelbroeck,
“Digital identity,” Digital Identity Management, pp. 1–45,
2015.
[11] M. A. Bouras, Q. Lu, F. Zhang, Y. Wan, T. Zhang, and H. Ning,
“Distributed ledger technology for eHealth identity privacy:
state of the art and future perspective,” Sensors, vol. 20, no. 2,
p. 483, 2020.
[12] M. Schaffner, Analysis and Evaluation of Blockchain-Based
Self-Sovereign Identity Systems, Technical University of
Munich, Munich, Germany, 2020.
[13] I. T. Javed, F. Alharbi, B. Bellaj, T. Margaria, N. Crespi, and
K. N. Qureshi, “Health-ID: a blockchain-based decentralized
identity management for remote healthcare,” Health, vol. 9,
no. 6, p. 712, 2021.
[14] A. S. Podda and L. Pompianu, “An overview of blockchainbased systems and smart contracts for digital coupons,” in
ICSEW'20: Proceedings of the IEEE/ACM 42nd International
Conference on Software Engineering Workshops, pp. 770–778,
Jun 2020.
[15] D. Pavithran, J. N. Al-Karaki, and K. Shaalan, “Edge-based
blockchain architecture for event-driven IoT using hierarchical identity based encryption,” Information Processing and
Management, vol. 58, no. 3, p. 102528, 2021.
[16] Y. Panfil and C. Mellon, The credential highway: how selfsovereign identity unlocks property rights for the bottom billion,
New America Weekly, 2019.
[17] K. Panetta, 5 Trends Drive the Gartner Hype Cycle for Emerging Technologies, 2020, Gartner, 2020, https://www.gartner.com/smarterwithgartner/5-trends-drive-the-gartner-hype-cycle-for-emerging-technologies-2020/.
[18] M. Van Wingerde, Blockchain-Enabled Self-Sovereign Identity,
Tilburg University, Tilburg, Netherland, 2017.
[19] M. Graglia, C. Mellon, and T. Robustelli, “The nail finds a hammer self-sovereign identity, design principles, and property
rights in the developing world,” New America Weekly, 2018.
[20] S. Senturk, Future of Property Rights: Self-Sovereign Identity and Property Rights, New America Weekly, 2019.
[21] Q. Shang and A. Price, “A blockchain-based land titling project for the Republic of Georgia,” Innovations, vol. 12, no. 3/
4, pp. 72–78, 2019.
[22] Sovrin Foundation, Sovrin™: A Protocol and Token for SelfSovereign Identity and Decentralized Trust, Sovrin Foundation,
2018.
[23] C. Lundkvist, R. Heck, J. Torstensson, and Z. Mitton, Uport: A
Platform for Self-Sovereign Identity, ConsenSys, 2016.
[24] Civic Technologies Inc., "Civic whitepaper," 2017, https://tokensale.civic.com/CivicTokenSaleWhitePaper.pdf.
[25] M. Ali, R. Shea, J. Nelson, and M. J. Freedman, "Blockstack: a new internet for decentralized applications," 2017, http://blockstack.org.
[26] SelfKey Foundation, "Self-sovereign identity for more freedom and privacy - SelfKey," 2017, https://selfkey.org/.
[27] ShoCard, "ShoCard whitepaper," 2020, https://shocard.com/wp-content/uploads/2019/02/ShoCard-Whitepaper-2019.pdf.
[28] A. Nagy, K. A. Nyante, A. Peter, and Z. Hattyasy, Eds., Secure
Identity Management on the Blockchain, University of Twente,
2018.
[29] A. Tobin and D. Reed, The Inevitable Rise of Self-Sovereign Identity: A White Paper from the Sovrin Foundation, Sovrin Foundation, 2017, https://sovrin.org/wp-content/uploads/2017/06/The-Inevitable-Rise-of-Self-Sovereign-Identity.pdf.
[30] M. S. Ferdous, F. Chowdhury, and M. O. Alassafi, "In search of self-sovereign identity leveraging blockchain technology," IEEE Access, vol. 7, pp. 103059-103079, 2019.
[31] P. de Marneffe, “Vice laws and self-sovereignty,” Criminal Law
and Philosophy, vol. 7, no. 1, pp. 29–41, 2013.
[32] M. H. Weik, Computer Science and Communications Dictionary, Springer US, Boston, MA, 2001.
[33] C. S. Wright, “Bitcoin: a peer-to-peer electronic cash system,”
SSRN Electronic Journal, 2008.
[34] C. Allen, The Path to Self-Sovereign Identity, CoinDesk, 2016, http://www.lifewithalacrity.com/2016/04/the-path-to-self-soverereign-identity.html.
[35] M. Marlinspike, What Is 'Sovereign Source Authority'?, Moxy Tongue, 2019, https://www.moxytongue.com/2012/02/what-is-sovereign-source-authority.html.
[36] A. Abraham, Whitepaper: Self-Sovereign Identity, Graz, Austria, 2017, http://www.egiz.gv.at.
[37] Q. Stokkink, D. Epema, and J. Pouwelse, "A truly self-sovereign identity system," 2020, http://arxiv.org/abs/2007.00415.
[38] F. P. Hjalmarsson, G. K. Hreioarsson, M. Hamdaqa, and
G. Hjalmtysson, “Blockchain-based E-voting system,” in
2018 IEEE 11th International Conference on Cloud Computing
(CLOUD), pp. 983–986, San Francisco, CA, USA.
[39] M. Shuaib, S. M. Daud, S. Alam, and W. Z. Khan, “Blockchainbased framework for secure and reliable land registry system,”
TELKOMNIKA Telecommunication, Computing, Electronics
and Control, vol. 18, no. 5, p. 2560, 2020.
[40] M. Shuaib, S. Alam, and S. M. Daud, Improving the Authenticity of Real Estate Land Transaction Data Using Blockchain-Based Security Scheme, Springer, Singapore, 2021.
[41] R. Joosten, Self-Sovereign Identity Framework and Blockchain, ERCIM News, 2017, https://ercim-news.ercim.eu/en110/special/self-sovereign-identity-framework-and-blockchain.
[42] Medici, “22 companies leveraging blockchain for identity
management and authentication,” MEDICI, vol. 13, no. 21,
pp. 2–5, 2017.
[43] Y. Liu, D. He, M. S. Obaidat, N. Kumar, M. K. Khan, and K.K. Raymond Choo, “Blockchain-based identity management
systems: a review,” Journal of Network and Computer Applications, vol. 166, p. 102731, 2020.
[44] G. Kondova and J. Erbguth, “Self-sovereign identity on public
blockchains and the GDPR,” in SAC '20: Proceedings of the
35th Annual ACM Symposium on Applied Computing,
pp. 342–345, Brno, Czech Republic, Mar 2020.
[45] R. Saia, S. Carta, D. Recupero, and G. Fenu, "Internet of entities (IoE): a blockchain-based distributed paradigm for data exchange between wireless-based devices," in Proceedings of the 8th International Conference on Sensor Networks, pp. 77-84, 2019.
[46] S. T. Siddiqui, R. Ahmad, M. Shuaib, and S. Alam, “Blockchain
security threats, attacks and countermeasures,” Adv. Intell.
Syst. Comput., vol. 1097, pp. 51–62, 2020.
[47] R. Jesse McWaters, "A blueprint for digital identity: the role of financial institutions in building digital identity," 2016, http://www3.weforum.org/docs/WEF_A_Blueprint_for_Digital_Identity.pdf.
[48] D. Sikeridis, A. Bidram, M. Devetsikiotis, and M. J. Reno, “A
blockchain-based mechanism for secure data exchange in
smart grid protection systems,” in 2020 IEEE 17th Annual
Consumer Communications & Networking Conference
(CCNC), pp. 1–6, Las Vegas, NV, USA, Jan 2020.
[49] T. Mitani and A. Otsuka, “Traceability in permissioned blockchain,” IEEE Access, vol. 8, pp. 21573–21588, 2020.
[50] M. Shuaib, S. Alam, M. Shahnawaz Nasir, and M. Shabbir
Alam, “Immunity credentials using self-sovereign identity for
combating COVID-19 pandemic,” Materials Today : Proceedings, 2021.
[51] A. Piore, "Can blockchain finally give us the digital privacy we deserve?," 2019, https://www.newsweek.com/2019/03/08/can-blockchain-finally-give-us-digital-privacy-we-deserve-1340689.html.
[52] J. A. Ketterer and G. Andrade, "Blockchain asset registries: approaching enlightenment?," CoinDesk, Dec 2017, https://www.coindesk.com/blockchain-asset-registries-entering-slope-enlightenment.
[53] International Finance Corporation, Secured Transactions Systems and Collateral Registries, World Bank, 2010.
[54] M. Hendow, "Bridging refugee protection and development," 2019, https://www.researchgate.net/publication/331530630_Bridging_refugee_protection_and_development_Policy_Recommendations_for_Applying_a_Development-Displacement_Nexus_Approach.
[55] B. Kitchenham and S. Charters, Guidelines for Performing Systematic Literature Reviews in Software Engineering, EBSE, Durham, UK, 2007, https://www.elsevier.com/__data/promis_misc/525444systematicreviewsguide.pdf.
[56] McKinsey, Infographic: What Is Good Digital ID?, McKinsey & Company, 2019, https://www.mckinsey.com/business-functions/mckinsey-digital/our-insights/infographic-what-is-good-digital-id.
[57] W3C Credentials Community Group, "Verifiable claims task force," May 2017, https://w3c.github.io/vctf/.
[58] Q. Stokkink and J. Pouwelse, "Deployment of a blockchain-based self-sovereign identity," in 2018 IEEE International Conference on Internet of Things (iThings) and IEEE Green Computing and Communications (GreenCom) and IEEE Cyber, Physical and Social Computing (CPSCom) and IEEE Smart Data (SmartData), pp. 1336-1342, Halifax, NS, Canada, 2018.
[59] J. Alsayed Kassem, S. Sayeed, H. Marco-Gisbert, Z. Pervez, and
K. Dahal, “DNS-IdM: a blockchain identity management system to secure personal data sharing in a network,” Applied Sciences, vol. 9, no. 15, p. 2953, 2019.
[60] M. Shuaib, N. Hafizah Hassan, S. Usman et al., “Identity model
for blockchain-based land registry system: a comparison,”
Wireless Communications and Mobile Computing, vol. 2022,
Article ID 5670714, 17 pages, 2022.
[61] M. Shuaib, S. Alam, M. Shabbir Alam, and M. Shahnawaz
Nasir, “Self-sovereign identity for healthcare using blockchain,” Materials Today: Proceedings, 2021.
[62] K. Cameron, The Laws of Identity, 2005.
[63] P. Dunphy and F. A. P. Petitcolas, “A first look at identity
management schemes on the blockchain,” IEEE Security and
Privacy, vol. 16, no. 4, pp. 20–29, 2018.
[64] A. E. Panait, R. F. Olimid, and A. Stefanescu, "Identity management on blockchain – privacy and security aspects," 2020, https://arxiv.org/abs/2004.13107.
[65] D. van Bokkem, R. Hageman, G. Koning, L. Nguyen, and N. Zarin, "Self-sovereign identity solutions: the necessity of blockchain technology," 2019, https://arxiv.org/abs/1904.12816.
[66] S. El Haddouti and M. D. Ech-Cherif El Kettani, “Analysis of
identity management systems using blockchain technology,”
in 2019 International Conference on Advanced Communication Technologies and Networking (Comm Net), pp. 1–7, Rabat,
Morocco, Apr 2019.
[67] The SelfKey Foundation, SelfKey, 2017, https://selfkey.org/.
[68] SelfKey Foundation, "SelfKey: the SelfKey foundation," 2017, https://selfkey.org/wp-content/uploads/2019/03/selfkey-whitepaper-en.pdf.
[69] T. Koens and S. Meijer, "Matching identity management solutions to self-sovereign identity principles," vol. 2, p. 32, 2018.
[70] A.-E. Panait, R. F. Olimid, and A. Stefanescu, “Identity management on the blockchain,” in Proceedings of the Romanian
Academy Series A-Mathematics Physics Technical Sciences
Information Science, pp. 45–52, 2020.
[71] M. Kuperberg, “Blockchain-based identity management: a survey from the enterprise and ecosystem perspective,” IEEE
Transactions on Engineering Management, vol. 67, no. 4,
pp. 1008–1027, 2020.
[72] A. G. Nabi, Comparative Study on Identity Management
Methods Using Blockchain, University of Zurich, 2017.
[73] D. Reed, M. Sporny, D. Longley, C. Allen, R. Grant, and
M. Sabadello, “Decentralized identifiers (DIDs): data model
and syntaxes for decentralized identifiers," 2019, https://w3c-ccg.github.io/did-spec/.
[74] Hyperledger Contributors, Hyperledger Indy Documentation, Hyperledger, 2018, https://www.hyperledger.org/use/hyperledger-indy.
[75] M. Ali, R. Shea, and M. J. Freedman, Blockstack: A New Decentralized Internet, Whitepaper, 2017, http://blockstack.org.
[76] uPort, 2019.
[77] C. Lundkvist, R. Heck, J. Torstensson, Z. Mitton, and M. Sena,
Uport: A Platform for Self-Sovereign Identity, Sovrin Foundation, 2016.
[78] N. Nizamuddin, H. R. Hasan, and K. Salah, “IPFS-blockchainbased authenticity of online publications,” in Lecture Notes in
Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics),
vol. 10974 LNCS, pp. 199–212, 2018.
[79] O. Labazova, T. Dehling, and A. Sunyaev, “From hype to reality: a taxonomy of blockchain applications,” Proceedings of the
52nd Hawaii International Conference on System Sciences,
2019, pp. 4555–4564, Grand Wailea, Maui, 2019.
[80] Vercel, A Decentralized Storage Architecture, Stacks Docs, 2018, https://docs.blockstack.org/storage/overview.html.
[81] M. Ali, J. Nelson, R. Shea, B. Labs, and M. J. Freedman, "Blockstack: a global naming and storage system secured by blockchains," in USENIX Annual Technical Conference, pp. 181-194, Denver, Colorado, 2016.
[82] lifeID, "Digital identity simple & secure," 2018, https://lifeid.io/.
[83] lifeID Foundation, "lifeID: an open-source, blockchain-based platform for self-sovereign identity," p. 34, 2019.
[84] M. Nijjar, "Evernym," 2019, https://www.evernym.com/.
[85] B. Reid, B. Witteman, and W. Brad, "EverID whitepaper," May 2018, https://neironix.io/documents/whitepaper/6176/EverID_Whitepaper_v1.0.2_July2018.pdf.
[86] A. Aloraini and M. Hammoudeh, "A survey on data confidentiality and privacy in cloud computing," in Proceedings of the International Conference on Future Networks and Distributed Systems, pp. 1-7, Cambridge, United Kingdom, Jul 2017.
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.1155/2022/8930472?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1155/2022/8930472, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "GOLD",
"url": "https://downloads.hindawi.com/journals/misy/2022/8930472.pdf"
}
| 2,022
|
[
"Review"
] | true
| 2022-04-04T00:00:00
|
[
{
"paperId": "ac6f4643e97a50fea7520ee5ec5b396bdf8ba46b",
"title": "The credential highway: how self-sovereign identity unlocks property rights for the bottom billion"
},
{
"paperId": "e63a9d5738e4c3f9d6ea71f290b8d72099bf8be7",
"title": "Health-ID: A Blockchain-Based Decentralized Identity Management for Remote Healthcare"
},
{
"paperId": "9e5a1a024db905c47d89b60c526c8b43cd06aa20",
"title": "Edge-Based Blockchain Architecture for Event-Driven IoT using Hierarchical Identity Based Encryption"
},
{
"paperId": "5a54034b2509ec4e32c822fc5df1f532f66e5e26",
"title": "Immunity credentials using self-sovereign identity for combating COVID-19 pandemic"
},
{
"paperId": "4fe8862cbde84d4a91e968a7b75b5cea3f805bda",
"title": "Self-sovereign identity for healthcare using blockchain"
},
{
"paperId": "17e4f79fa03cc0d9bfd5928a98c38f7f00f41d46",
"title": "Blockchain-Based Identity Management: A Survey From the Enterprise and Ecosystem Perspective"
},
{
"paperId": "368f1d90ae7a5d92dfca814b84d54de8068b492a",
"title": "Blockchain-based framework for secure and reliable land registry system"
},
{
"paperId": "10a65ae7d8ea1ffa96148e1d36c9a69aa7285b26",
"title": "A Truly Self-Sovereign Identity System"
},
{
"paperId": "6105c1c53f866864fabfd27c98f03192da24dbee",
"title": "An overview of blockchain-based systems and smart contracts for digital coupons"
},
{
"paperId": "1c5cc9f976fb58be1de112b95cffd81a743a1453",
"title": "Skin lands in Ghana and application of blockchain technology for acquisition and title registration"
},
{
"paperId": "496a450d5921c13a369fb885c8c19400d5d27c00",
"title": "Identity Management on Blockchain - Privacy and Security Aspects"
},
{
"paperId": "83906464c306290dfd399e744f0ff1888e7f245d",
"title": "Self-sovereign identity on public blockchains and the GDPR"
},
{
"paperId": "d324c7d451061edb5f2d8e04afb051058bd5b4da",
"title": "Distributed Ledger Technology for eHealth Identity Privacy: State of The Art and Future Perspective"
},
{
"paperId": "cfbffdbd298a57cd99c52fc8c308e3e6b2f5fce9",
"title": "A blockchain-based mechanism for secure data exchange in smart grid protection systems"
},
{
"paperId": "084813a5b1faa4d66f9cb3818994b9e680be3a2c",
"title": "Blockchain-based Land Registration: Possibilities and Challenges"
},
{
"paperId": "f74c0d01b0481ed21ec5e9dfc8f7a76d8e988a5d",
"title": "In Search of Self-Sovereign Identity Leveraging Blockchain Technology"
},
{
"paperId": "e9747eace33602d3f4b5f5c0fd35e33c31b022cc",
"title": "DNS-IdM: A Blockchain Identity Management System to Secure Personal Data Sharing in a Network"
},
{
"paperId": "9a94f839b960382f3c4dc7da4c50d17a5cb2d0fa",
"title": "Traceability in Permissioned Blockchain"
},
{
"paperId": "77460667e38f21092b42addefcd4049a965dd559",
"title": "Self-Sovereign Identity Solutions: The Necessity of Blockchain Technology"
},
{
"paperId": "632b76ec1d55f7b13dafe8b4ce7b34eaab96afb4",
"title": "Edge Computing-Based ID and nID Combined Identification and Resolution Scheme in IoT"
},
{
"paperId": "621975a39f59d555fbe7f517cf1486479435edc9",
"title": "Analysis of Identity Management Systems Using Blockchain Technology"
},
{
"paperId": "4964c80f15fc7347aa82481fd29e1b33983b6e9f",
"title": "Internet of Entities (IoE): A Blockchain-based Distributed Paradigm for Data Exchange between Wireless-based Devices"
},
{
"paperId": "3c22ae83d1f6f029835f7a529a79ec3016592d4e",
"title": "From Hype to Reality: A Taxonomy of Blockchain Applications"
},
{
"paperId": "80a80be07321fb6354b7cc64b4765d56a338e8c5",
"title": "A Blockchain-Based Land Titling Project in the Republic of Georgia: Rebuilding Public Trust and Lessons for Future Pilot Projects"
},
{
"paperId": "54d50269928dafc6a0744e46044c17d973fdb01c",
"title": "Blockchain-Based E-Voting System"
},
{
"paperId": "866306df18c325ccea7d9cfae027e407489f5e3b",
"title": "IPFS-Blockchain-Based Authenticity of Online Publications"
},
{
"paperId": "9de7f1057a235570be559205ce5203f09e09af81",
"title": "Deployment of a Blockchain-Based Self-Sovereign Identity"
},
{
"paperId": "bb46aeb975545b581c7301c786de3075bcceb471",
"title": "Identity Management with Blockchain"
},
{
"paperId": "d4cf27fc7484eac5069d647a734d26df67b05c41",
"title": "A First Look at Identity Management Schemes on the Blockchain"
},
{
"paperId": "2158248201cbd43a23fb5134208bcec4150e0c9a",
"title": "A Survey on Data Confidentiality and Privacy in Cloud Computing"
},
{
"paperId": "1f42fdecd70a7d72f0f108e80511320f7204316c",
"title": "Blockstack: A Global Naming and Storage System Secured by Blockchains"
},
{
"paperId": "042e3227bf4bb39d15aadcbce00d051663dea30e",
"title": "A Survey on Identity Management for the Future Network"
},
{
"paperId": "55bdaa9d27ed595e2ccf34b3a7847020cc9c946c",
"title": "Performing systematic literature reviews in software engineering"
},
{
"paperId": "c7916abb8b13c54ba532edd3edcde745f2d722f3",
"title": "Digital identity"
},
{
"paperId": "ddde481484b674f451b3c5bdbda407e16a02503a",
"title": "Open Source Biotechnology"
},
{
"paperId": "1bfd09796a2cc89b09053e79510c6de6b995dac1",
"title": "Consent"
},
{
"paperId": "eddd1a2ff14e4a90ee138533e323d12f24e23284",
"title": "International Finance Corporation"
},
{
"paperId": null,
"title": "Identity model for blockchain-based land registry system: a comparison"
},
{
"paperId": "0e438c9f0686d9d56e2cb822cf77be0e0e7871da",
"title": "Improving the Authenticity of Real Estate Land Transaction Data Using Blockchain-Based Security Scheme"
},
{
"paperId": "877db6de2b97d491482f118374eb9f8be8010697",
"title": "Blockchain Security Threats, Attacks and Countermeasures"
},
{
"paperId": null,
"title": "5 trends drive the gartner hype cycle for emerging technologies, 2020"
},
{
"paperId": null,
"title": "“ Blockchain: an emerging opportunity for sur-veyors?, ”"
},
{
"paperId": null,
"title": "uport"
},
{
"paperId": null,
"title": "“ Decentralized identi fi ers (DIDs): data model and syntaxes for decentralized identi fi ers, ”"
},
{
"paperId": null,
"title": "Infographic: what is good digital ID?"
},
{
"paperId": null,
"title": "Future of property rights: self-sovereign identity and property rights"
},
{
"paperId": null,
"title": "E-Agriculture in action: blockchain for agriculture opportunities and challanges, Food and agriculture organization of the united nations and the international telecommunication union"
},
{
"paperId": "f5a77c5a4dd43e76ea11dd7f92793a98aff24153",
"title": "Secure Identity Management on the Blockchain"
},
{
"paperId": null,
"title": "2018 IEEE international conference on internet of things (iThings) and IEEE green computing and communications (green com) and IEEE cyber, physical and social computing (CPSCom) and IEEE smart data"
},
{
"paperId": null,
"title": "Hyperledger Contributors"
},
{
"paperId": null,
"title": "“ The nail fi nds a hammer self-sovereign identity, design principles, and property rights in the developing world, ”"
},
{
"paperId": null,
"title": "A Decentralized Storage Architecture"
},
{
"paperId": null,
"title": "Matching identity management solutions to self-sovereign identity principles.pdf"
},
{
"paperId": null,
"title": "Sovrin™: A Protocol and Token for Self-Sovereign Identity and Decentralized Trust"
},
{
"paperId": "03a396becd6c730c6142204e9429ce4503649bf7",
"title": "The Path to Self-Sovereign Identity"
},
{
"paperId": "8dc1c83c80e628a6ba6d1dfd3f22972c82ede0c5",
"title": "Blockstack : A New Internet for Decentralized Applications"
},
{
"paperId": "5817acc7ab5181c78893283ce1dc48b2294781b1",
"title": "Self-Sovereign Identity Framework and Blockchain"
},
{
"paperId": "606b2c57cfed7328dedf88556ac657e9e1608311",
"title": "Blockstack : A New Decentralized Internet"
},
{
"paperId": null,
"title": "Civic Technologies Inc"
},
{
"paperId": null,
"title": "The inevitable rise of self-sovereign identity A white paper from the Sovrin Foundation, Sovrin Foundation"
},
{
"paperId": null,
"title": "22 companies leveraging blockchain for identity management and authentication"
},
{
"paperId": null,
"title": "Blockchain Opportunities and Challenges of a New Digital Infrastructure for Germany"
},
{
"paperId": null,
"title": "The Self Key Foundation"
},
{
"paperId": null,
"title": "Wingerde, Blockchain-Enabled Self-Sovereign Identity, Tilburg University, Tilburg, Netherland, 2017"
},
{
"paperId": null,
"title": "Graglia, Case study: property rights and stability in Afghanistan"
},
{
"paperId": "2a4c9e9e396f2be969b3af6bf8fa23afc5d017ae",
"title": "Vice Laws and Self-Sovereignty"
},
{
"paperId": null,
"title": "What Is ‘Sovereign Source Authority’"
},
{
"paperId": "213475e88c019a2bb8afd673f84d0a62285efbea",
"title": "Secured transactions systems and collateral registries"
},
{
"paperId": "4e9ec92a90c5d571d2f1d496f8df01f0a8f38596",
"title": "Bitcoin: A Peer-to-Peer Electronic Cash System"
},
{
"paperId": "3b46aef1526699c2f79ab5fb13a0a2e4afde5c48",
"title": "The Laws of Identity"
},
{
"paperId": "9f54677fc97e2d51a7e4f6c9c783dc53055811c8",
"title": "Privacy"
},
{
"paperId": "f0f05e01c828536afa2c3f2fd0a7bb3b3df8c00a",
"title": "Computer Science and Communications Dictionary"
},
{
"paperId": "3240ee72999cb51b82b5820c36cae18cbc171d9a",
"title": "Mobile information systems"
},
{
"paperId": "667cc1ac858c058b6f55797d427bb5174f712dba",
"title": "Journal of Network and Computer Applications"
},
{
"paperId": null,
"title": "Sho Card"
},
{
"paperId": null,
"title": "Facilitate Access to Finance . Self-sovereign identity-based land registers can also provide more detailed and trusted information about potential borrowers in developing countries"
},
{
"paperId": null,
"title": "Existence: Users Must Have an Independent Exis-tence . An SSI fundamentally depends on the ine ff able “ I ” at the core of identity"
},
{
"paperId": null,
"title": "Transparency . The system used to manage the identity network must be transparent in its processes, management, and updates"
},
{
"paperId": null,
"title": "v) Natural Disaster Resilience . Land ownership is important for preparing for disasters and can improve the restoration process. New programs for disaster preparedness"
},
{
"paperId": null,
"title": "Blockchain asset registries: approaching enlightenment?- Coin Desk"
},
{
"paperId": null,
"title": "improve protection and privacy features [ author compares various blockchain identity systems and identi fi es challenges like trust, privacy issues"
},
{
"paperId": null,
"title": "Bridging refugee protection and development"
},
{
"paperId": null,
"title": "Persistence . The identity system will be long-lasting, where identity owners can recover private keys and passwords if their primary device is damaged or stolen"
},
{
"paperId": null,
"title": "Portability . The identities should be available as long as the identity owner desires"
},
{
"paperId": null,
"title": "Persistence: Identities Must Be Long-Lived"
},
{
"paperId": null,
"title": "Minimalization"
},
{
"paperId": null,
"title": "Interoperabilit y"
},
{
"paperId": null,
"title": "Evernym"
},
{
"paperId": null,
"title": "Ever ID whitepaper"
},
{
"paperId": null,
"title": "Can blockchain finally give us the digital privacy we deserve?"
},
{
"paperId": null,
"title": "Verifiable claims task force"
}
] | 22,070
|
en
|
[
{
"category": "Business",
"source": "external"
},
{
"category": "Medicine",
"source": "s2-fos-model"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Business",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/012dbb6ffa748446949f36b1e1551bc593a440bb
|
[
"Business"
] | 0.847333
|
Critical Dimensions of Blockchain Technology Implementation in the Healthcare Industry: An Integrated Systems Management Approach
|
012dbb6ffa748446949f36b1e1551bc593a440bb
|
Sustainability
|
[
{
"authorId": "30797251",
"name": "S. Aich"
},
{
"authorId": "13379591",
"name": "S. Tripathy"
},
{
"authorId": "66938976",
"name": "Moon-Il Joo"
},
{
"authorId": "40478718",
"name": "Hee-Cheol Kim"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": [
"http://mdpi.com/journal/sustainability",
"http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:ch:bel-172127"
],
"id": "8775599f-4f9a-45f0-900e-7f4de68e6843",
"issn": "2071-1050",
"name": "Sustainability",
"type": "journal",
"url": "http://www.e-helvetica.nb.admin.ch/directAccess?callnumber=bel-172127"
}
|
In the digital era, almost every system is connected to a digital platform to enhance efficiency. Although life is thus improved, security issues remain important, especially in the healthcare sector. The privacy and security of healthcare records is paramount; data leakage is socially unacceptable. Therefore, technology that protects data but does not compromise efficiency is essential. Blockchain technology has gained increasing attention as it ensures transparency, trust, privacy, and security. However, the critical factors affecting efficiency require further study. Here, we define the critical factors that affect blockchain implementation in the healthcare industry. We extracted such factors from the literature and from experts, then used interpretive structural modeling to define the interrelationships among these factors and classify them according to driving and dependence forces. This identified key drivers of the desired objectives. Regulatory clarity and governance (F2), immature technology (F3), high investment cost (F6), blockchain developers (F9), and trust among stakeholders (F12) are key factors to consider when seeking to implement blockchain technology in healthcare. Our analysis will allow managers to understand the requirements for successful implementation.
|
_Article_
# Critical Dimensions of Blockchain Technology Implementation in the Healthcare Industry: An Integrated Systems Management Approach
**Satyabrata Aich** 1, **Sushanta Tripathy** 2, **Moon-Il Joo** 1 and **Hee-Cheol Kim** 3,*
1 Institute of Digital Anti-Aging Healthcare, Inje University, Gimhae 50834, Korea;
satyabrataaich@gmail.com (S.A.); joomi@inje.ac.kr (M.-I.J.)
2 School of Mechanical Engineering, KIIT Deemed to be University, Bhubaneswar 751024, Odisha, India;
sushant.tripathy@gmail.com
3 College of AI Convergence/Institute of Digital Anti-aging Healthcare/u-AHRC, Inje University,
Gimhae 50834, Korea
***** Correspondence: heeki@inje.ac.kr; Tel.: +82-55-320-3720
**Citation:** Aich, S.; Tripathy, S.; Joo, M.-I.; Kim, H.-C. Critical Dimensions of Blockchain Technology Implementation in the Healthcare Industry: An Integrated Systems Management Approach. Sustainability 2021, 13, 5269. https://doi.org/10.3390/su13095269

Academic Editor: Nicu Bizon

Received: 28 February 2021; Accepted: 15 April 2021; Published: 8 May 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
**Abstract:** In the digital era, almost every system is connected to a digital platform to enhance
efficiency. Although life is thus improved, security issues remain important, especially in the
healthcare sector. The privacy and security of healthcare records is paramount; data leakage is
socially unacceptable. Therefore, technology that protects data but does not compromise efficiency
is essential. Blockchain technology has gained increasing attention as it ensures transparency,
trust, privacy, and security. However, the critical factors affecting efficiency require further study.
Here, we define the critical factors that affect blockchain implementation in the healthcare industry.
We extracted such factors from the literature and from experts, then used interpretive structural
modeling to define the interrelationships among these factors and classify them according to driving
and dependence forces. This identified key drivers of the desired objectives. Regulatory clarity and
governance (F2), immature technology (F3), high investment cost (F6), blockchain developers (F9),
and trust among stakeholders (F12) are key factors to consider when seeking to implement blockchain
technology in healthcare. Our analysis will allow managers to understand the requirements for
successful implementation.
**Keywords: blockchain; healthcare; critical factors; digital healthcare; interpretive structural modeling**
**1. Introduction**
Recently, blockchain (BC) technology has attracted increasing attention from industry
and academia. BC technology allows users to preserve, certify, and synchronize the contents
of a transaction ledger, which are available to multiple users. Transactions are decentralized;
the data are not controlled by a third party. Within the system, transactions are timestamped
in a ledger; data modifications/alterations are generally impossible without changing the
ledger. Figure 1 shows the key components of a BC.
BC technology ensures that trust and security are maintained during any transaction [1,2]. The healthcare, financial, and educational industries perceive the advantages
afforded. Figure 2 describes the working principles of a BC.
As BC technology reduces fraudulent activity and protects privacy, healthcare providers
would like to implement it [3]. Breaches of healthcare data are increasing rapidly; in 2017,
the number of people affected exceeded 300 records; from 2010 to 2017, this number rose to
37 million records [4,5]. There are growing concerns regarding healthcare data sharing, secure data storage, and data ownership, as digitization becomes the norm [3]. A BC ensures
transparency, security, and speed during data storage and distribution; it also solves the
security, privacy, and integrity issues that arise in the field of healthcare technology [6–9].
A BC is decentralized, thus eliminating the accuracy and security concerns associated
associated with dependence on a central authority. BC technology is inter-operator-based, ensuring
_Sustainability Sustainability 20212021,, 1313, x FOR PEER REVIEW, x FOR PEER REVIEW_ 2 of 17 2 of 17
a high standard of data exchange among healthcare associates. This boosts innovation,
coordination among associates, market competition, and care quality [10–13].
**Figure 1. Components of a BC.**
**Figure 2. Principles of a BC: a step-by-step flowchart.**
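As a concrete complement to Figures 1 and 2, the toy sketch below chains timestamped records by hash, so altering any stored transaction invalidates every later block. It illustrates only the tamper-evidence property; consensus, signatures, and networking are omitted.

```python
import hashlib
import json
import time

# Toy hash chain illustrating the tamper evidence sketched in Figures 1 and 2:
# each block commits to its predecessor's hash, so modifying any earlier
# record changes every subsequent hash and is immediately detectable.

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, transaction: str) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"ts": time.time(), "tx": transaction, "prev": prev})

def chain_is_valid(chain: list) -> bool:
    for i in range(1, len(chain)):
        if chain[i]["prev"] != block_hash(chain[i - 1]):
            return False
    return True

ledger: list = []
append_block(ledger, "patient record created")
append_block(ledger, "record shared with insurer")

assert chain_is_valid(ledger)
ledger[0]["tx"] = "tampered"        # any retroactive edit...
assert not chain_is_valid(ledger)   # ...breaks the chain
```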
In the past (when no BC was available), healthcare data interoperability among different institutions was categorized as push, pull, and view. In the push model, data transfer is possible between two providers; no third provider has access. For example, data transfer is possible between departments within the same hospital; however, data cannot be accessed by a different hospital, regardless of patient transfer to the other hospital. In the push model, it is very difficult to ensure data integrity during transfer. During a pull, one provider informally seeks data from another provider; there is no standardized audit trail. For example, an orthopedic surgeon can informally ask a cardiologist for information. During a view, one provider sees the record of another provider. For example, a surgeon can access an X-ray taken in the emergency department. The security approaches are not based on the relationship that exists between a patient and a provider; thus, they are
largely ad hoc. The relevant policies are also subject to the laws of the local and federal
governments.
A BC-based model for a healthcare market creates a new dimension by considering
the safety of data integrity and the use of standardized formal contracts for data accession.
When an electronic health record (EHR) (which stores data from multiple workers) is
accessed, it is difficult to determine the identity of the person who performed a task and
when the work was performed. BC timestamps all work and identify the worker; the
data are also distributed to all participating nodes. If a modification or update appears in
any node, this is distributed to all nodes and is thus visible systemwide. Data integrity
is maintained without the need for human intervention [14]. Although BC affords many
benefits, it has never been implemented in real-time healthcare. Adoption is inevitable.
Our literature review revealed only limited empirical evidence for BC use, despite
its many possible benefits [15]. Very few studies have investigated the benefits, deficits,
and functionalities of BC technology [16–19]. Most studies have sought to explain how BC
works and to determine its current real-world implementation status [20]. However, critical
factors affecting BC implementation in healthcare have not been addressed; knowledge
of these factors is essential. Thus, we sought to identify these factors. Our findings
can remove the confusion associated with real-time BC implementation. We offer a better
understanding of the challenges imposed by implementation of BC technology in healthcare
and the factors affecting such implementation. Our objectives are:
(1) To identify factors that critically impact the implementation of BC technology in the
healthcare industry;
(2) To build a structured framework that depicts the interrelationships among such
factors;
(3) To define the motivation and reliance powers of such factors.
Based on past works and the opinions of experts in BC technology, we define 13 factors that greatly affect the implementation of such technology in healthcare. We used
interpretive structural modeling (ISM) to explore the relationships among such factors. We
performed Matrice d’Impact Croise’s Multiplication Appliquée a UN Classement (MICMAC) analysis to define the motivation and reliance powers of the factors. We sought to
encourage industries that wish to implement BC technology.
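For readers unfamiliar with MICMAC, a factor's driving power is its row sum in the final reachability matrix, and its dependence power is the corresponding column sum. The sketch below computes both for a small, invented 4-factor matrix; the values are illustrative assumptions, not the 13-factor data analyzed in this study.

```python
# MICMAC bookkeeping on a final reachability matrix R, where R[i][j] = 1 means
# factor i leads to (reaches) factor j. Driving power of factor i is the sum
# of row i; dependence power is the sum of column i. This 4-factor matrix is
# invented purely for illustration.
R = [
    [1, 1, 1, 1],
    [0, 1, 0, 1],
    [0, 1, 1, 1],
    [0, 0, 0, 1],
]

n = len(R)
driving = [sum(R[i]) for i in range(n)]
dependence = [sum(R[i][j] for i in range(n)) for j in range(n)]

for i in range(n):
    print(f"F{i + 1}: driving = {driving[i]}, dependence = {dependence[i]}")
# High-driving / low-dependence factors are the independent "key drivers";
# high-dependence / low-driving factors are outcomes of the other factors.
```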
The remainder of this paper is organized as follows. The literature regarding applications of BC technology in healthcare is reviewed in Section 2. Section 3 describes
the methods used to achieve our research objectives. The research approach is discussed
in Section 4. Managerial implications are discussed in Section 5. Practical implications
are discussed in Section 6. The outcomes are summarized and conclusions are drawn
in Section 7.
**2. Related Works**
This section is divided into two subsections. Past works regarding applications of BC
in healthcare are covered in the first subsection. The second subsection discusses critical
factors influencing BC implementation in healthcare.
_2.1. Past Studies Regarding Applications of BC in Healthcare_
BC use in healthcare scenarios has focused on smart healthcare management, user-oriented medical research, and prevention of drug counterfeiting. In terms of healthcare
management, health networks allow medical experts to obtain detailed information regarding current patient status (described in the reports of physicians or healthcare centers, as
well as various studies). Analysis of medical records creates an ecosystem that transparently reduces the merit costs of patient records. Moreover, medical experts can monitor the
treatment activities of stakeholders, such as physicians and healthcare centers. These systems facilitate insurance claim settling if insurance companies are permitted (by patients)
to access data. As healthcare records are increasingly stored digitally, security elements
must be incorporated in such digital systems. Any digital platform must be scalable and
adaptable, thus capable of handling large numbers of records and adaptable to many types
of changes [21]. One practical solution is Live Interactive FramE (LIFE), which ensures that
all media streaming in a healthcare domain are appropriately secured with minimal video
quality loss during immersive applications [22].
In the context of user-oriented medical research, several authors have focused on the
workings and structures of health banks; these studies have led to major breakthroughs
in medical research. A company may accumulate data from wearable devices and may
provide a user platform for data storage and management. The stored medical data facilitate
high-quality research by trusted organizations. Patients are not financially compensated
for their data.
In terms of preventing drug counterfeiting, various authors have shown that the
typical counterfeiting level today ranges from approximately 10% to 30% in developing
countries. Counterfeiting is not confined to lifestyle medicines or drugs; it can include
drugs that treat major medical issues, such as cardiovascular diseases. Recently, the
Hyperledger research network described how drug counterfeiting could be reduced by
timestamping. This problem could be addressed using a BC: all data have an address based
on the stored information, thereby preventing drug counterfeiting [23].
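A minimal sketch of this content-addressing idea follows; the function and field names are hypothetical, and a real system would anchor the ledger on an actual BC rather than an in-memory dictionary.

```python
import hashlib
from datetime import datetime, timezone

def register_package(ledger: dict, serial: str, batch: str, producer: str) -> str:
    """Derive a content-based address for a drug package and timestamp its
    registration; a package whose data do not hash to a registered address
    can be flagged as suspect anywhere in the supply chain."""
    address = hashlib.sha256(f"{serial}|{batch}|{producer}".encode()).hexdigest()
    ledger[address] = datetime.now(timezone.utc).isoformat()
    return address

ledger = {}
addr = register_package(ledger, serial="SN-001", batch="B42", producer="AcmePharma")
assert addr in ledger  # verification step at any point downstream
```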
Other authors [24] identified a simple but very robust BC system for patient data
storage by the healthcare sector. The system encourages the storage of all data (beginning
at birth), including lipid profile data yielded by wearable devices and magnetic resonance
imaging information. All information is stored in a data lake, which facilitates simple
querying, advanced analytics, and machine learning. This is a simple form of data warehousing; the stored materials include documents, images, .pdf files, and key values. The
BC of each user serves as an index catalog containing a unique identification number, an
encrypted link to the health record stored in the lake, and a timestamp that shows all data
modifications. The user enjoys robust access control and can allow or restrict access using
an audit log that shows every visit to the data repository. Such systems will greatly aid
medical professionals (ranging from students to doctors) and governmental agencies. The
data shared from the lake are tagged with the unique patient identification numbers, are
up-to-date and accurate, and can be used for longitudinal healthcare research.
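The index-catalog structure described above might look roughly as follows; the field names are our illustrative assumptions rather than the specification of [24].

```python
from dataclasses import dataclass

@dataclass
class CatalogEntry:
    """One entry in a patient's on-chain index catalog: the record itself
    stays in the data lake, while the chain stores only an identifier, an
    encrypted link to the record, and a timestamp of the modification."""
    patient_id: str        # unique identification number
    encrypted_link: bytes  # encrypted pointer to the record in the data lake
    timestamp: float       # marks every data modification
    prev_hash: str         # chains entries so the audit log is tamper-evident

entry = CatalogEntry(
    patient_id="PAT-0001",
    encrypted_link=b"<ciphertext>",  # placeholder ciphertext
    timestamp=1_620_000_000.0,
    prev_hash="0" * 64,
)
```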
Electronic records contain data from wearable devices, medical records, and/or medical imaging; all are private. Cloud storage in a central regulatory database may be
associated with data leakage; unauthorized data transfer can be avoided by enabling a BC
architecture wherein a user or a patient has full control over data access by third parties (expert consultants or medical researchers). BC use in healthcare will aid in the development
of a consensus system that verifies the appropriateness of prescribed medication [25].
As healthcare data are very sensitive, the type of BC to be used is critical. One
report [26] discussed the various types of BC available, their bases, and their uses in terms
of maintenance, validation, and storage of medical data. Out-of-the-box consortium BCs
were considered optimal; both the node owner and miners control access. A consensus
can be determined regarding the optimal number of validations imparted by a healthcare
provider, a clinic, or an insurance company; the stored data are thus very accurate.
Although BC implementation in the healthcare sector is vital, this has not yet happened. It is essential to define factors that would aid implementation. This is the topic of
the next subsection.
_2.2. Crucial Factors Affecting Implementation of BC in Healthcare_
Here, we review works regarding crucial factors that affect BC implementation in
healthcare. Ekblaw et al. [14] regarded data security and privacy assurance as a major
concern. Although BC data are safely stored, Shi et al. [27] presumed that healthcare data
in a public database would be vulnerable to statistical attack. Thus, frequent encryption
key changes would be required, increasing key storage costs. To address this problem,
Zhao et al. [28] developed a smart key recovery scheme based on a body sensor network.
In this system, there is no need for a stored encryption key. If a key must be recovered, it is
retrieved by the body sensor network.
Khan et al. [29] stated that medical data interoperability was another key barrier;
medical data must be communicated, shared, and used by employing various information
communication technologies. Although creation of data interoperability has been discussed
by several authors [11,30], its implementation remains a major concern.
Agrawal and Prabakaran [31] presumed that data fragmentation could compromise
EHRs. Thus, there is a need to meticulously maintain EHRs [32]. Errors compromise
diagnosis and patient safety [33]. Data unavailability is unacceptable, thus posing a problem
when implementing BC technology. The compatibility of existing information with BC
technology is also critical; incompatibility reduces system performance. Incompatible
information technology interfaces [23] greatly slow the system. The deployment and
maintenance of dedicated technology are essential components [34].
Regulation plays a major role in healthcare; lives are at stake [1]. Schwerin [35]
questioned the compatibility of BC with general data protection regulations. BC must
protect consumer rights according to law [36]. Trust is a critical consideration when using
BC; data are decentralized, transparent, and uploaded [37]. Illegal bitcoin activities have
compromised trust in the technology [38]. However, in the healthcare sector, the technology
can never be completely trusted due to the demand for healthcare data privacy [39–42].
Regulators have stated that a key issue preventing large-scale implementation of BC
technology is the lack of standardization. Only a few well-known infrastructures are
available [38]. Different countries seek to apply BC in various manners. For example,
Estonia wishes to use BC to award e-residency to new citizens [43]. Other countries
seek to use BC for transparent taxation and other social needs [44]. Standardization is
urgently required [45].
The creativity and intricacy of BC increase adoption costs. Furthermore, there are few
service providers, which leads to increasing costs [38]. As BC implementation reduces
transaction costs [46], the initial costs may be worthwhile. However, the technology may
not be suitable for small- and medium-size industries that lack workers with skills to
manage the system [38]. There is a need for more skilled operators.
BC became well-known due to bitcoin, but disagreements are increasing in the bitcoin
community due to frequent code forking, supporting the notion that the technology is
immature [47]. Many technical limitations remain [38].
Scalability is essential for robust system performance. In its present form, the network
of a public BC is comparatively expensive and slow and thus difficult to adopt at a large
scale. However, new scaling techniques such as plasma chains, lightning networks, zk-SNARKs, and state channels enhance performance and scalability (both in and out of the
healthcare context) [1].
Although BC is newer than the Internet of Things (IoT), cloud computing, and artificial
intelligence (AI), the IoT exhibits many security issues. BC has greatly improved IoT
applications, restoring trust [48]. The use of AI in healthcare affords excellent accuracy
and precision, facilitating rapid decision-making. In the current coronavirus disease
2019 era, AI-based decisions categorize patients within a few minutes, greatly aiding
clinicians. However, privacy remains important; patients fear that their data may be
misplaced during AI analysis in the cloud. Prior to BC, cloud security was enhanced
using specific applications [49–51]. After BC integration, data can be stored securely. As
each transaction is recorded, patients can easily identify where the data are used. All
involved groups experience benefits [52]. The integration of BC into the cloud resolves
issues regarding location-based storage and analysis, while guaranteeing security [53].
BC must be integrated with more recent cutting-edge technologies. All industries accept
that modern technology will render them smarter, but safety concerns remain. BC is
already safe; the addition of other technologies can render it safer, faster, energy-saving,
and cheaper [54–56].
**3. Solutions**
We first identified critical factors affecting BC introduction; we reviewed past works
and sought 15 expert opinions (inputs to structural self-interaction matrices (SSIMs)).
These opinions were collected during a workshop concerning digital technology in the
healthcare sector held at KIIT University, Bhubaneswar, India, in 2020. The 15 experts
included nine senior medical practitioners with at least 10 years of experience in reputable
hospitals with digital platforms hosting patient records and managing medicine supplies,
as well as six academics with at least 10 years of research experience in BC (all academics
were at or above professor/associate professor level in their medical colleges/universities).
There is no fixed limit on the number of experts: comparable panels completed SSIMs exploring remanufacturing and green campus operations (Singhal et al., 2020 [57]; Gholami et al., 2020 [58]), and 10 experts completed SSIMs concerning researcher selection (Nilashi et al., 2019 [59]). Our 15 experts
were thus adequate. Next, ISM was used to develop a baseline model of associations among
critical factors, and MICMAC analysis was performed to group the factors. ISM seeks
to determine relationships among factors identified through literature review or expert
opinion as an issue or a problem (Jharkharia and Shankar 2005 [60], Ravi and Shankar
2005 [61], Raj and Attri 2011 [62]). ISM techniques include brainstorming, nominal group
techniques, and face-to-face interviews, yielding expert views regarding how to develop
a contextual relationship among selected key factors (Ravi et al., 2005 [63], Barve et al.,
2007 [64], Hasan et al., 2007 [65], Raj et al., 2007 [66]). Here, we addressed the complex barriers to BC implementation in healthcare. Factors determined through a literature review
were reviewed by experts. No limit was imposed on the number of factors (Singhal et al.,
2020 [57], Nayak et al., 2019 [67]). Table 1 lists the 13 factors identified and Table 2 lists the
ISM steps. The flowchart of the solution (i.e., research framework and sequential steps)
is shown in Figure 3. The analysis of the critical dimensions of BC in healthcare commences with SSIM
completion and concludes with MICMAC policy recommendations. A strong correlation is
evident between the ISM model and the critical factors identified.
**Table 1. Numbers of factors evaluated in various reports.**
**Source** **No. of Factors Used** **Research Objective**
Singhal et al. 2020 [57] 15 Factors affecting electronic remanufacturing
Nayak et al. 2019 [67] 14 Factors affecting rail safety performance
Ahmad et al. 2019 [68] 15 Benchmarking of significant factors in seismic soil liquefaction
Nayak et al., 2018 [69] 17 Factors affecting nontechnical human skills in engineering
Aich and Tripathy, 2014 [70] 13 Factors affecting green supply chain management
Tripathy et al., 2013 [71] 14 Factors affecting manufacturing R&D
**Table 2. ISM steps.**
**Steps** **Focus**
1: Establishment of a structural self-interaction matrix (SSIM). Define pairwise relationships among identified critical dimensions of healthcare BC technology.
2: Create a reachability matrix. Determine driving and dependent factors.
3: Level partitioning. Define structural levels (factor level partitioning).
4: ISM modeling. Develop an ISM model using the reachability matrix and level partitioning.
5: MICMAC analysis. Classify critical dimensions of healthcare BC technology into four categories (drivers, dependents, autonomous factors, and linked factors) via MICMAC analysis.
**Figure 3.** Flowchart of solution methodology.
_3.1. Data Collection_
We reviewed all BC papers in Web of Science and Scopus in terms of critical factors
influencing the adoption of BC in healthcare. With assistance from experts, we selected the
13 factors listed in Table 3.
**Table 3. Factors affecting the implementation of BC in healthcare.**
**Code** **Factor** **References**
F1 Data unavailability (DU) [33,34,36]
F2 Regulatory clarity and governance (RCG) [38–40]
F3 Immature technology (IMT) [42,51]
F4 Safer and smarter organization (SSO) [52–57]
F5 Compatibility with other IT systems (CIT) [36,37]
F6 High investment cost (HIC) [42,50]
F7 Privacy and security of stored data (PSD) [27–29]
F8 Scalability and accessibility (SA) [1,42]
F9 Blockchain developers (BD) [42]
F10 Interoperability of electronic health records (IEH) [30,32,34]
F11 Data standardization (DS) [42,47–49]
F12 Trust among stakeholders (TAS) [41–46]
F13 Encouragement of integration (EI) [50,53–57]
_3.2. ISM_
ISM is a long-established technique widely used by researchers in knowledge management, energy conservation, supplier selection, and green supply chain management; it is also used by strategic
decisionmakers in various organizations [72–74]. ISM seeks to recognize/construct associations between factors affecting decision-making when a particular problem arises,
then to solve the problem by considering the driving and dependency powers of each
factor [75]. The framework features associations among factors, as identified by experts [76].
Fewer experts are required, compared with structural equation modeling or the Delphi
method. ISM nonetheless builds models that solve decision-making problems [77,78].
Table 4 lists the various applications of ISM. Modeling proceeds as follows: (1) recognition
of relevant factors based on past studies and expert opinion; (2) development of an SSIM
and then a reachability matrix; (3) creation of a partition level table using a reachability
matrix; (4) characterization of relationships among various factors; and (5) identification of
uncertainties and consequent modifications.
**Table 4. Applications of ISM.**
**Techniques** **Application**
ISM Adoption of IoT services [70]
ISM Challenges posed by BC adoption within the Indian public sector [79]
ISM BC as a disruptive technology in the construction industry [80]
ISM and DEMATEL Modeling BC-enabled traceability in an agriculture supply chain [81]
ISM Factors influencing lean implementation in healthcare organizations [82]
3.2.1. The SSIM
The SSIMs completed by experts served as the ISM inputs. The contextual relationships among the 13 factors were determined by the majority opinions of the 15 experts
expressed in a brainstorming session conducted during a 2020 workshop. The contextual
relationships were finalized after considering the nature of each problem, the objective,
and the majority opinion concerning the relationships between factors. The contextual
association between two elements (i and j) is represented in one of four manners: (a) if i
influences j, this is represented by “V”; (b) if j influences i, this is represented by “A”; (c) if i
and j influence each other, this is represented by “X”; and (d) if i and j are independent, this
is represented by "O". For example, BC developers F9 (BD) influence the interoperability of electronic health records F10 (IEH); the symbol used is V. Compatibility with other IT systems F5 (CIT) is influenced by high investment cost F6 (HIC); the symbol used is A. The interoperability of electronic health records F10 (IEH) and privacy and security of storage data F7 (PSD) interact; the symbol used is X. Scalability and accessibility F8 (SA) has no
relationship with data unavailability F1 (DU); the symbol used is O. The SSIM summary is
presented in Table 5. The reachability matrix associated with the SSIMs is addressed below.
**Table 5. SSIM summary. The matrix is upper-triangular: the row for factor Fi lists its relationships with the higher-numbered factors Fi+1 through F13.**
F1 A A V O A V O A V O A V
F2 V V V V V V V V V V V
F3 V V A V V A V V A V
F4 A A A A A A A A A
F5 A V O A V O A V
F6 V V O V V X V
F7 A A X A A X
F8 A V O A V
F9 V V O V
F10 A A X
F11 A V
F12 V
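The V/A/X/O encoding can be sketched in a few lines of Python (the 0-based indices and the use of NumPy are our assumptions):

```python
import numpy as np

# Symbol -> (m[i, j], m[j, i]); see the substitution rules in Section 3.2.2.
SYMBOL = {"V": (1, 0), "A": (0, 1), "X": (1, 1), "O": (0, 0)}

def ssim_to_matrix(pairs: dict, n: int) -> np.ndarray:
    """Turn the upper-triangular SSIM symbols into the initial binary
    reachability matrix; the diagonal is 1 (each factor reaches itself)."""
    m = np.eye(n, dtype=int)
    for (i, j), sym in pairs.items():  # 0-based factor indices, i < j
        m[i, j], m[j, i] = SYMBOL[sym]
    return m

# Three relations from Table 5: (F9, F10) = V, (F5, F6) = A, (F7, F10) = X.
pairs = {(8, 9): "V", (4, 5): "A", (6, 9): "X"}
m = ssim_to_matrix(pairs, n=13)
```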
3.2.2. Reachability Matrix
The four SSIM representations, V, A, X, and O, were replaced by 1 or 0 in a reachability
matrix, as follows: (a) the symbol “V” in the (i, j) position of the SSIM matrix is substituted
by 1 and 0 in the (i, j) and (j, i) positions of the reachability matrix; (b) the symbol “A” in
the (i, j) position of the SSIM matrix is substituted by 0 and 1 in the (i, j) and (j, i) positions
of the reachability matrix; (c) the symbol “X” in the (i, j) position of the SSIM matrix is
substituted by 1 in both the (i, j) and (j, i) positions of the reachability matrix; and (d) the
symbol “O” in the (i, j) position in the SSIM matrix is substituted by 0 in both the (i, j) and
(j, i) positions of the reachability matrix. Next, the transitivity of the reachability matrix
was checked. Transitivity means that if factor F1 influences F2 and F2 influences F3, then F1
impacts F3. If the position (i, j) of F1 impacts F3, the value becomes 1. The driving power
(DVP) of a factor is calculated by adding all values in the accommodating row and the
dependence power (DNP) is calculated by adding all values in the accommodating column.
After considering transitivity, the final version of the reachability matrix is shown in Table
6. The subsequent step (i.e., partition of different levels) uses the reachability matrix.
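The transitivity check amounts to computing the transitive closure of the matrix; a Warshall-style sketch (one standard way to compute it, assumed here rather than taken from the cited works) together with the DVP/DNP sums is:

```python
import numpy as np

def transitive_closure(m: np.ndarray) -> np.ndarray:
    """If factor i reaches k and k reaches j, set m[i, j] = 1."""
    m = m.copy()
    n = len(m)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if m[i, k] and m[k, j]:
                    m[i, j] = 1
    return m

def powers(m: np.ndarray):
    """Driving power (DVP) = row sums; dependence power (DNP) = column sums."""
    return m.sum(axis=1), m.sum(axis=0)

# Toy check: F1 -> F2 and F2 -> F3 imply F1 -> F3 after the closure.
m0 = np.array([[1, 1, 0],
               [0, 1, 1],
               [0, 0, 1]])
closed = transitive_closure(m0)
dvp, dnp = powers(closed)  # dvp = [3, 2, 1], dnp = [1, 2, 3]
```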
**Table 6. Reachability matrix.**
**F1** **F2** **F3** **F4** **F5** **F6** **F7** **F8** **F9** **F10** **F11** **F12** **F13** **DVP**
F1 1 0 0 1 0 0 1 0 0 1 0 0 1 5
F2 1 1 1 1 1 1 1 1 1 1 1 1 1 13
F3 1 0 1 1 1 0 1 1 0 1 1 0 1 9
F4 0 0 0 1 0 0 0 0 0 0 0 0 0 1
F5 0 0 0 1 1 0 1 0 0 1 0 0 1 5
F6 1 0 1 1 1 1 1 1 0 1 1 1 1 11
F7 0 0 0 1 0 0 1 0 0 1 0 0 1 4
F8 0 0 0 1 0 0 1 1 0 1 0 0 1 5
F9 1 0 1 1 1 0 1 1 1 1 1 0 1 10
F10 0 0 0 1 0 0 1 0 0 1 0 0 1 4
F11 0 0 0 1 0 0 1 0 0 1 1 0 1 5
F12 1 0 1 1 1 1 1 1 0 1 1 1 1 11
F13 0 0 0 1 0 0 1 0 0 1 0 0 1 4
DNP 6 1 5 13 6 3 12 6 2 12 6 3 12
3.2.3. Level Partition
The antecedent and reachability sets for each element were developed based on the
reachability matrix [83]. The reachability set contains the factors themselves and factors
impacted by other factors, and the antecedent set consists of the factors themselves and
factors impacting those factors. The intersection set is the group of elements common to
the antecedent and reachability sets. The procedure was iterated; when the reachability and intersection sets of a factor were equal, that factor was assigned to the current level. For example, level I is occupied by F4 because its reachability and intersection sets are equal. Five iterations were
performed when identifying the level of a factor. The level partition is shown in Table 7.
All 13 factors are split into six levels. F2 occupies the sixth level and F4 occupies the first
level; the other factors lie between these levels.
**Table 7. Level partition.**
**Factors** **Reachability Set** **Antecedent Set** **Intersection Set** **Level**
F1 1,4,7,10,13 1,2,3,6,9,12 1 III
F2 1,2,3,4,5,6,7,8,9,10,11,12,13 2 2 VI
F3 1,3,4,5,7,8,10,11,13 2,3,6,9,12 3 IV
F4 4 1,2,3,4,5,6,7,8,9,10,11,12,13 4 I
F5 4,5,7,10,13 2,3,5,6,9,12 5 III
F6 1,3,4,5,6,7,8,10,11,12,13 2,6,12 6,12 V
F7 4,7,10,13 1,2,3,5,6,7,8,9,10,11,12,13 7,10,13 II
F8 4,7,8,10,13 2,3,6,8,9,12 8 III
F9 1,3,4,5,7,8,9,10,11,13 2,9 9 V
F10 4,7,10,13 1,2,3,5,6,7,8,9,10,11,12,13 7,10,13 II
F11 4,7,10,11,13 2,3,6,9,11,12 11 III
F12 1,3,4,5,6,7,8,10,11,12,13 2,6,12 6,12 V
F13 4,7,10,13 1,2,3,5,6,7,8,9,10,11,12,13 7,10,13 II
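The iterative peeling that produces Table 7 can be sketched with Python sets; the helper name and the three-factor toy system are illustrative.

```python
def level_partition(reach: dict) -> list:
    """Repeatedly assign to the next level every remaining factor whose
    reachability set equals its intersection set, then remove it."""
    levels = []
    remaining = set(reach)
    while remaining:
        antecedent = {f: {g for g in remaining if f in reach[g]} for f in remaining}
        level = {f for f in remaining
                 if (reach[f] & remaining) == (reach[f] & antecedent[f] & remaining)}
        levels.append(sorted(level))
        remaining -= level
    return levels

# Toy system in which F1 drives F2, which drives F3:
reach = {1: {1, 2, 3}, 2: {2, 3}, 3: {3}}
print(level_partition(reach))  # [[3], [2], [1]]: F3 is level I, F1 is level III
```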
3.2.4. ISM
The ISM of Figure 4 was developed based on the digraph and level partition table. A digraph exemplifies the interrelationships among elements via edges and nodes. Digraphs remove the transitive relationships between elements. The ISM is extracted from the combinative information of the digraph [84].

**Figure 4.** The ISM.

_3.3. MICMAC Analysis_
MICMAC requires factor dependence and driving powers as inputs [85] and then categorizes the factors into four types (Figure 5). Autonomous variables (factors with weak dependence and driving powers) are shown in the first quadrant. Dependent variables (factors with strong dependence but weak driving powers) are shown in the second quadrant. Linkage variables (factors with strong dependence and driving powers) are shown in the third quadrant. Driving variables (factors with weak dependence powers but strong driving powers) appear in the fourth quadrant [66].
**Figure 5.** MICMAC analysis.
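A sketch of this classification follows; the midpoint threshold of 6.5 on the 1 to 13 power scale is our assumption, as the paper does not state the cut-off explicitly.

```python
def micmac_class(dvp: int, dnp: int, mid: float = 6.5) -> str:
    """Place a factor in a MICMAC quadrant from its driving power (DVP)
    and dependence power (DNP); 'mid' is an assumed midpoint threshold."""
    if dvp <= mid and dnp <= mid:
        return "autonomous"  # quadrant I
    if dvp <= mid:
        return "dependent"   # quadrant II (strong dependence only)
    if dnp > mid:
        return "linkage"     # quadrant III (strong on both)
    return "driver"          # quadrant IV (strong driving only)

# (DVP, DNP) pairs taken from Table 6:
print(micmac_class(13, 1))  # F2 -> 'driver'
print(micmac_class(1, 13))  # F4 -> 'dependent'
print(micmac_class(5, 6))   # F1 -> 'autonomous'
```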
**4. Results and Discussion**
We systematically analyzed and constructed the relationships among factors affecting the adoption of BC in healthcare. We derived factor dependence and driving powers through MICMAC analysis. We first identified 13 critical factors affecting the adoption of BC in healthcare (Table 3). Experts from academia and industry chose these factors. We
used ISM to construct the model. The ISM split all factors into six levels. Regulatory clarity
and governance (F2) (at the bottom of the hierarchy) was the key driver of BC adoption in
healthcare. Daluwathumullagamage and Sims found that BC would ensure better corporate
governance if development was accompanied by changes in regulatory frameworks [86].
Healthcare industries must encourage governments to regulate appropriately; BC use
is essential. Level V included trust among stakeholders (F12), high investment cost (F6), and BC developers (F9); all were strong drivers of adoption. Senior healthcare
managers must enthusiastically adopt BC and consumers must understand the great
benefits afforded by BC use. Gomez-Trujillo et al. emphasized that BC guarantees trust
and transparency; if all individuals, industries, and other stakeholders maintain confidence
in BC, long-term success is ensured [87]. Koster and Borgman found that BC adoption
required the support of senior authorities and trust among partners [88]. Level IV contained
only immature technology (F3), which strongly influenced BC adoption. BC is new, not
standardized, and has seldom been implemented in governmental agencies. Compatibility
issues affecting performance may arise [48,89]. Level III included data standardization
(F11), compatibility with other IT systems (F5), data unavailability (F1), and scalability and accessibility (F8); all strongly influenced BC adoption. The technology remains immature,
and therefore the above factors must all be upgraded to enhance performance in the
healthcare sector [90]. Level II comprises interoperability of electronic health records
(F10), privacy and security of storage (F7), and encouragement of integration (F13); all
dynamically influence adoption. Finally, the factor at level I (safer and smarter organization, F4) is affected by all other factors
and thus exhibits the highest dependence power. Encouragement of integration is the key
driver; BC can be combined with many cutting-edge technologies that render organizations
more efficient and smarter [51–53]. Secure data storage makes organizations safer; this is
one of our long-term healthcare objectives.
After MICMAC analysis, quadrant 4 hosted five factors with strong driving powers
and quadrant 2 hosted four factors with strong dependence powers. Quadrant 3 was
empty; there was no linkage variable. Quadrant 1 hosted four autonomous variables.
Dependent variables comprised privacy and security of storage data (F7), interoperability
of EHRs (F10), encouragement of integration (F13), and safer and smarter organizations
(F4). MICMAC analysis revealed that regulatory clarity and governance (F2), immature
technology (F3), high investment cost (F6), BC developers (F9), and trust among stakeholders (F12) exhibited strong driving powers and were thus the most important factors in
terms of BC adoption in healthcare. Data unavailability (F1), compatibility with other IT
systems (F5), scalability and accessibility (F8), and data standardization (F11) (autonomous
variables) exhibited weaker dependence and driving powers, suggesting that they were
less important than other factors. However, all identified factors affect the adoption of BC
in healthcare.
We shared our analysis with stakeholders in healthcare industries. Surprisingly, many
managers were unaware of many factors. We hope that our analysis will help them to
prepare for successful BC adoption.
**5. Managerial Implications**
Our work will allow regulators, policymakers, governments, healthcare industrialists,
and consumers to recognize the critical factors that affect BC incorporation in healthcare.
Managers and decisionmakers should focus on the inputs and outputs of the ISM model.
The inputs were based on a literature review and expert opinions. The outputs identify the
interdependencies and the short- and long-term importance of the various factors. The model
will be implemented and tested in a cross-sectional manner in multiple industries.
Managers will be interested in the outcomes; they should prepare the resources for
successful implementation. Managers must offer staff workshops and training regarding
BC and its benefits. Existing educational institutes and special training schools may be
involved. Managers must be careful when sharing information; a competitive advantage
must not be lost. Petersson and Baur [91] emphasized that an organization is not required
to reorganize its business model during BC integration. Furthermore, BC is possible in
a traditional system; a new system is unnecessary. During organizational preparation,
knowledge of the technical aspects will be helpful.
All organizations must now adopt cutting-edge BC technology. Its basic features
include smart contracts, privacy, and data security; it is easy to switch to new (improved)
future platforms. Existing open-source platforms are expensive if they are expected to
serve as proprietorial infrastructure [92]. Organizations should implement BC technology
immediately; the “wait-and-see” period is over. Early acceptance of the technology will
afford competitive advantages [93].
**6. Practical Implications**
Healthcare decisionmakers must implement BC to protect the privacy of healthcare
data. Such privacy supports the implementation of AI and federated learning, which
enhance organizational efficiency. Kumar et al. [52] used BC technology for data authentication, allowing efficient use of AI and federated learning. In the era of coronavirus disease
2019, cutting-edge AI can rapidly identify an infection and is applicable worldwide; this
could be combined with BC technology. With increasing data digitization, the need for
privacy increases, along with the desire for societal betterment. BC technology can serve as
the foundation of the required systems.
**7. Conclusions**
In our digital age, it is essential to protect healthcare data, but appropriate technology
is lacking. BC technology can achieve the desired objectives. AI and federated learning
enhance efficiency. BC systems would improve greatly if organizations were to successfully
implement the technology. Large organizations (e.g., NVIDIA) have commenced research
regarding AI and federated learning, motivated by societal betterment.
Here, we recognized 13 factors that influence successful BC implementation in the
healthcare industry. We used ISM to divide these 13 factors into six levels. An inappropriate
regulatory environment greatly hinders BC adoption in the healthcare industry. Firms
are reluctant to adopt this intricate and immature technology. Compatibility, investment
cost, and security concerns are equally important. Our work has the following strengths.
First, no similar formal study has appeared. Second, we have highlighted the key obstacles
hindering the implementation of BC technology and have proposed methods to eliminate
them. We offer useful tips for specialists in cutting-edge technology. BC will greatly advance
organizations in our digital era. Nonetheless, this study had the following limitations. First,
we evaluated only a few critical factors emphasized in the literature. Second, as this
technology is emerging, there are few skilled experts; we canvassed only 15.
In the future, we plan to validate the results obtained after implementing BC technology and to combine the findings with AI and federated learning to create a useful, real-time
generalized model. As previously suggested, and as reinforced by current demand, reliable
security solutions must be integrated into all digital platforms and must be capable of
adaptation to new environments [94,95]. We will seek BC technology that is secure across
all applications. We will share the corresponding implementations in future articles. We
will also perform cross-sectional studies to identify factors that can enhance the impact
(i.e., strength) of BC implementation. Finally, we suggest that others could implement
our approach in their diverse sectors by combining longitudinal and cross-sectional studies. We hope that our work may serve as a reference. It should be shared and may aid
other industries.
**Author Contributions: Conceptualization, S.A. and S.T.; methodology, S.A. and S.T.; validation,**
S.A., M.-I.J. and H.-C.K.; formal analysis, S.A., M.-I.J. and H.-C.K.; data curation, S.A. and S.T.;
writing—original draft preparation, S.A.; writing—review and editing, S.A. and S.T.; supervision,
H.-C.K.; project administration, H.-C.K.; funding acquisition, H.-C.K. All authors have read and
agreed to the published version of the manuscript.
**Funding: This research was supported by Basic Science Research Program through the National**
Research Foundation of Korea (NRF), supported by the Ministry of Science, ICT & Future Planning
(NRF2017R1D1A3B04032905).
**Informed Consent Statement: All the participants gave their consent to participate in this study.**
**Conflicts of Interest: The authors declare no conflict of interest.**
**References**
1. Katuwal, G.J.; Pandey, S.; Hennessey, M.; Lamichhane, B. Applications of blockchain in healthcare: Current landscape &
challenges. arXiv 2018, arXiv:1812.02776.
2. Zhang, P.; Schmidt, D.C.; White, J.; Lenz, G. Blockchain technology use cases in healthcare. In Advances in Computers; Elsevier:
Amsterdam, The Netherlands, 2018; Volume 111, pp. 1–41.
3. Meinert, E.; Alturkistani, A.; Foley, K.A.; Osama, T.; Car, J.; Majeed, A.; Van Velthoven, M.; Wells, G.; Brindley, D. Blockchain
[implementation in health care: Protocol for a systematic review. JMIR Res. Protoc. 2019, 8, e10994. [CrossRef]](http://doi.org/10.2196/10994)
4. Talesh, S.A. Data breach, privacy, and cyber insurance: How insurance companies act as “compliance managers” for businesses.
_[Law Soc. Inq. 2018, 43, 417–440. [CrossRef]](http://doi.org/10.1111/lsi.12303)_
5. McCoy, T.H.; Perlis, R.H. Temporal trends and characteristics of reportable health data breaches, 2010–2017. JAMA 2018, 320,
[1282–1284. [CrossRef] [PubMed]](http://doi.org/10.1001/jama.2018.9222)
6. Yaeger, K.; Martini, M.; Rasouli, J.; Costa, A. Emerging blockchain technology solutions for modern healthcare infrastructure. J.
_[Sci. Innov. Med. 2019, 2. [CrossRef]](http://doi.org/10.29024/jsim.7)_
7. Khezr, S.; Moniruzzaman, M.; Yassine, A.; Benlamri, R. Blockchain technology in healthcare: A comprehensive review and
[directions for future research. Appl. Sci. 2019, 9, 1736. [CrossRef]](http://doi.org/10.3390/app9091736)
8. McGhin, T.; Choo, K.K.R.; Liu, C.Z.; He, D. Blockchain in healthcare applications: Research challenges and opportunities. J. Netw.
_[Comput. Appl. 2019, 135, 62–75. [CrossRef]](http://doi.org/10.1016/j.jnca.2019.02.027)_
9. Casino, F.; Dasaklis, T.K.; Patsakis, C. A systematic literature review of blockchain-based applications: Current status, classification
[and open issues. Telemat. Inform. 2019, 36, 55–81. [CrossRef]](http://doi.org/10.1016/j.tele.2018.11.006)
10. Park, J.H.; Park, J.H. Blockchain security in cloud computing: Use cases, challenges, and solutions. Symmetry 2017, 9, 164.
[[CrossRef]](http://doi.org/10.3390/sym9080164)
11. Zhang, P.; Walker, M.A.; White, J.; Schmidt, D.C.; Lenz, G. Metrics for assessing blockchain-based healthcare decentralized apps.
In Proceedings of the 2017 IEEE 19th International Conference on e-Health Networking, Applications and Services (Healthcom),
Dalian, China, 12–15 October 2017; pp. 1–4.
12. Ahram, T.; Sargolzaei, A.; Sargolzaei, S.; Daniels, J.; Amaba, B. Blockchain technology innovations. In Proceedings of the 2017
IEEE Technology & Engineering Management Conference (TEMSCON), Santa Clara, CA, USA, 8–10 June 2017; pp. 137–141.
13. Kuo, T.T.; Zavaleta Rojas, H.; Ohno-Machado, L. Comparison of blockchain platforms: A systematic review and healthcare
[examples. J. Am. Med. Inform. Assoc. 2019, 26, 462–478. [CrossRef]](http://doi.org/10.1093/jamia/ocy185)
14. Ekblaw, A.; Azaria, A.; Halamka, J.D.; Lippman, A. A Case Study for Blockchain in Healthcare: “MedRec” prototype for electronic
health records and medical research data. In Proceedings of the IEEE Open & Big Data Conference, Vienna, Austria, 22–24 August
2016; Volume 13, p. 13.
15. Batubara, F.R.; Ubacht, J.; Janssen, M. Challenges of blockchain technology adoption for e-government: A systematic literature
review. In Proceedings of the 19th Annual International Conference on Digital Government Research: Governance in the Data
Age, Delft, The Netherlands, 30 May–1 June 2018; pp. 1–9.
16. Ali, O.; Ally, M.; Dwivedi, Y. The state of play of blockchain technology in the financial services sector: A systematic literature
[review. Int. J. Inf. Manag. 2020, 54, 102199. [CrossRef]](http://doi.org/10.1016/j.ijinfomgt.2020.102199)
17. Alketbi, A.; Nasir, Q.; Talib, M.A. Blockchain for government services—Use cases, security benefits and challenges. In Proceedings
of the 2018 15th Learning and Technology Conference (L&T), Jeddah, Saudi Arabia, 25–26 February 2018; pp. 112–119.
18. Lindman, J.; Rossi, M.; Tuunainen, V.K. Opportunities and risks of blockchain technologies in payments—A research agenda. In
Proceedings of the 50th Hawaii International Conference on System Sciences, Hilton Waikoloa Village, HI, USA, 4–7 January
2017; pp. 1533–1542.
19. Swan, M. Blockchain: Blueprint for a New Economy; O’Reilly Media, Inc.: Sebastopol, CA, USA, 2015.
20. Tama, B.A.; Kweka, B.J.; Park, Y.; Rhee, K.H. A critical review of blockchain and its current applications. In Proceedings of the
2017 International Conference on Electrical Engineering and Computer Science (ICECOS), Palembang, Indonesia, 22–23 August
2017; pp. 109–113.
21. Antón, P.; Munoz, A.; Mana, A.; Koshutanski, H. Security-enhanced ambient assisted living supporting school activities during
[hospitalisation. J. Ambient Intell. Humaniz. Comput. 2012, 3, 177–192. [CrossRef]](http://doi.org/10.1007/s12652-010-0039-6)
22. Antón, P.; Maña, A.; Muñoz, A.; Koshutanski, H. An immersive view approach by secure interactive multimedia proof-of-concept
[implementation. Multimed. Tools Appl. 2015, 74, 8401–8420. [CrossRef]](http://doi.org/10.1007/s11042-013-1682-7)
23. Mettler, M. Blockchain technology in healthcare: The revolution starts here. In Proceedings of the 2016 IEEE 18th International
Conference on E-Health Networking, Applications and Services (Healthcom), Munich, Germany, 14–16 September 2016; pp. 1–3.
24. Linn, L.A.; Koo, M.B. Blockchain for health data and its potential use in health it and health care related research. In Proceedings of the ONC/NIST Use of Blockchain for Healthcare and Research Workshop, Gaithersburg, MD, USA, 2016; pp. 1–10.
25. Joshi, A.P.; Han, M.; Wang, Y. A survey on security and privacy issues of blockchain technology. Math. Found. Comput. 2018, 1,
[121. [CrossRef]](http://doi.org/10.3934/mfc.2018007)
26. Alhadhrami, Z.; Alghfeli, S.; Alghfeli, M.; Abedlla, J.A.; Shuaib, K. Introducing blockchains for healthcare. In Proceedings of the
2017 International Conference on Electrical and Computing Technologies and Applications (ICECTA), Ras Al Khaimah, United
Arab Emirates, 21–23 November 2017; pp. 1–4.
27. Shi, S.; He, D.; Li, L.; Kumar, N.; Khan, M.K.; Choo, K.K.R. Applications of blockchain in ensuring the security and privacy of
[electronic health record systems: A survey. Comput. Secur. 2020, 97, 101966. [CrossRef] [PubMed]](http://doi.org/10.1016/j.cose.2020.101966)
28. Zhao, H.; Zhang, Y.; Peng, Y.; Xu, R. Lightweight backup and efficient recovery scheme for health blockchain keys. In Proceedings
of the 2017 IEEE 13th International Symposium on Autonomous Decentralized System (ISADS), Bangkok, Thailand, 22–24 March
2017; pp. 229–234.
29. Khan, F.A.; Asif, M.; Ahmad, A.; Alharbi, M.; Aljuaid, H. Blockchain technology, improvement suggestions, security challenges
[on smart grid and its application in healthcare for sustainable development. Sustain. Cities Soc. 2020, 55, 102018. [CrossRef]](http://doi.org/10.1016/j.scs.2020.102018)
30. Zhang, P.; White, J.; Schmidt, D.C.; Lenz, G. Applying software patterns to address interoperability in blockchain-based healthcare
apps. arXiv 2017, arXiv:1706.03700.
31. Agrawal, R.; Prabakaran, S. Big data in digital healthcare: Lessons learnt and recommendations for general practice. Heredity
**[2020, 124, 525–534. [CrossRef]](http://doi.org/10.1038/s41437-020-0303-2)**
32. Kruse, C.S.; Kothman, K.; Anerobi, K.; Abanaka, L. Adoption factors of the electronic health record: A systematic review. JMIR
_[Med. Inform. 2016, 4, e19. [CrossRef]](http://doi.org/10.2196/medinform.5525)_
33. Tanner, C.; Gans, D.; White, J.; Nath, R.; Pohl, J. Electronic health records and patient safety: Co-occurrence of early EHR
implementation with patient safety practices in primary care settings. Appl. Clin. Inform. 2015, 6, 136.
34. Randall, D.; Goel, P.; Abujamra, R. Blockchain applications and use cases in health information technology. J. Health Med. Inform.
**[2017, 8, 8–11. [CrossRef]](http://doi.org/10.4172/2157-7420.1000276)**
35. Schwerin, S. Blockchain and privacy protection in the case of the european general data protection regulation (GDPR): A delphi
[study. J. Br. Blockchain Assoc. 2018, 1, 3554. [CrossRef]](http://doi.org/10.31585/jbba-1-1-(4)2018)
36. Abramova, S.; Böhme, R. Perceived Benefit and Risk as Multidimensional Determinants of Bitcoin Use: A Quantitative Exploratory Study;
ICIS: Dublin, Ireland, 2016.
37. Sas, C.; Khairuddin, I.E. Design for trust: An exploration of the challenges and opportunities of bitcoin users. In Proceedings of
the 2017 CHI Conference on Human Factors in Computing Systems, Denver, CO, USA, 6–11 May 2017; pp. 6499–6510.
38. [Sadhya, V.; Sadhya, H. Barriers to Adoption of Blockchain Technology. Available online: https://aisel.aisnet.org/amcis2018/](https://aisel.aisnet.org/amcis2018/AdoptionDiff/Presentations/20/)
[AdoptionDiff/Presentations/20/ (accessed on 27 February 2021).](https://aisel.aisnet.org/amcis2018/AdoptionDiff/Presentations/20/)
39. Tanwar, S.; Parekh, K.; Evans, R. Blockchain-based electronic healthcare record system for healthcare 4.0 applications. J. Inf. Secur.
_[Appl. 2020, 50, 102407. [CrossRef]](http://doi.org/10.1016/j.jisa.2019.102407)_
40. Yaqoob, S.; Khan, M.M.; Talib, R.; Butt, A.D.; Saleem, S.; Arif, F.; Nadeem, A. Use of blockchain in healthcare: A systematic
[literature review. Int. J. Adv. Comput. Sci. Appl. 2019, 10, 644–653. [CrossRef]](http://doi.org/10.14569/IJACSA.2019.0100581)
41. Esmaeilzadeh, P.; Mirzaei, T. The potential of blockchain technology for health information exchange: Experimental study from
[patients’ perspectives. J. Med. Internet Res. 2019, 21, e14184. [CrossRef]](http://doi.org/10.2196/14184)
42. Pandey, P.; Litoriya, R. Implementing healthcare services on a large scale: Challenges and remedies based on blockchain
[technology. Health Policy Technol. 2020, 9, 69–78. [CrossRef]](http://doi.org/10.1016/j.hlpt.2020.01.004)
43. [Sullivan, C.; Burger, E. E-residency and blockchain. Comput. Law Secur. Rev. 2017, 33, 470–481. [CrossRef]](http://doi.org/10.1016/j.clsr.2017.03.016)
44. Pokrovskaia, N.N. Tax, financial and social regulatory mechanisms within the knowledge-driven economy. Blockchain algorithms
and fog computing for the efficient regulation. In Proceedings of the 2017 XX IEEE International Conference on Soft Computing
and Measurements (SCM), IEEE, St. Petersburg, Russia, 24–26 May 2017; pp. 709–712.
45. Ølnes, S.; Ubacht, J.; Janssen, M. Blockchain in government: Benefits and implications of distributed ledger technology for
[information sharing. Gov. Inf. Q. 2017, 34, 355–364. [CrossRef]](http://doi.org/10.1016/j.giq.2017.09.007)
46. Thakre, A.; Thabtah, F.; Shahamiri, S.R.; Hammoud, S. A novel block chain technology publication model proposal. Appl. Comput.
_[Inform. 2019. [CrossRef]](http://doi.org/10.1016/j.aci.2019.10.003)_
47. Andersen, J.V.; Bogusz, C.I. Patterns of Self-Organising in the Bitcoin Online Community: Code Forking as Organising in Digital
Infrastructure. In Proceedings of the International Conference on Information Systems (ICIS 2017), Seoul, Korea, 10–13 December
2017.
48. Reyna, A.; Martín, C.; Chen, J.; Soler, E.; Díaz, M. On blockchain and its integration with IoT. Challenges and opportunities.
_[Future Gener. Comput. Syst. 2018, 88, 173–190. [CrossRef]](http://doi.org/10.1016/j.future.2018.05.046)_
49. Waller, A.; Sandy, I.; Power, E.; Aivaloglou, E.; Skianis, C.; Muñoz, A.; Maña, A. Policy based management for security in
cloud computing. In FTRA International Conference on Secure and Trust Computing, Data Management, and Application; Springer:
Berlin/Heidelberg, Germany, 2011; pp. 130–137.
50. Muñoz, A.; Maña, A.; González, J. Dynamic Security Properties Monitoring Architecture for Cloud Computing. In Security
_Engineering for Cloud Computing: Approaches and Tools; IGI Global: Hershey, PA, USA, 2013; pp. 1–18._
51. Muñoz, A.; Gonzalez, J.; Maña, A. A performance-oriented monitoring system for security properties in cloud computing
[applications. Comput. J. 2012, 55, 979–994. [CrossRef]](http://doi.org/10.1093/comjnl/bxs042)
52. Kumar, R.; Khan, A.A.; Zhang, S.; Wang, W.; Abuidris, Y.; Amin, W.; Kumar, J. Blockchain-federated-learning and deep learning
models for covid-19 detection using ct imaging. arXiv 2020, arXiv:2007.06537.
53. Nguyen, D.C.; Pathirana, P.N.; Ding, M.; Seneviratne, A. Integration of blockchain and cloud of things: Architecture, applications
[and challenges. IEEE Commun. Surv. Tutor. 2020, 22, 2521–2549. [CrossRef]](http://doi.org/10.1109/COMST.2020.3020092)
54. Kshetri, N. Blockchain’s roles in strengthening cybersecurity and protecting privacy. Telecommun. Policy 2017, 41, 1027–1038.
[[CrossRef]](http://doi.org/10.1016/j.telpol.2017.09.003)
55. Sun, M.; Zhang, J. Research on the application of block chain big data platform in the construction of new smart city for low
[carbon emission and green environment. Comput. Commun. 2020, 149, 332–342. [CrossRef]](http://doi.org/10.1016/j.comcom.2019.10.031)
56. [Shen, C.; Pena-Mora, F. Blockchain for cities—a systematic literature review. IEEE Access 2018, 6, 76787–76819. [CrossRef]](http://doi.org/10.1109/ACCESS.2018.2880744)
57. Singhal, D.; Tripathy, S.; Jena, S.K. Remanufacturing for the circular economy: Study and evaluation of critical factors. Resour.
_[Conserv. Recycl. 2020, 156, 104681. [CrossRef]](http://doi.org/10.1016/j.resconrec.2020.104681)_
58. Gholami, H.; Bachok, M.F.; Saman, M.Z.M.; Streimikiene, D.; Sharif, S.; Zakuan, N. An ISM Approach for the Barrier Analysis in
[Implementing Green Campus Operations: Towards Higher Education Sustainability. Sustainability 2020, 12, 363. [CrossRef]](http://doi.org/10.3390/su12010363)
59. Nilashi, M.; Dalvi, M.; Ibrahim, O.; Zamani, M.; Ramayah, T. An interpretive structural modelling of the features influencing
[researchers’ selection of reference management software. J. Librariansh. Inf. Sci. 2019, 51, 34–46. [CrossRef]](http://doi.org/10.1177/0961000616668961)
60. Jharkharia, S.; Shankar, R. IT-enablement of supply chains: Understanding the barriers. J. Enterp. Inf. Manag. 2005, 18, 11–27.
[[CrossRef]](http://doi.org/10.1108/17410390510571466)
61. Ravi, V.; Shankar, R. Analysis of interactions among the barriers of reverse logistics. Technol. Forecast. Soc. Chang. 2005, 72,
[1011–1029. [CrossRef]](http://doi.org/10.1016/j.techfore.2004.07.002)
62. Raj, T.; Attri, R. Identification and modelling of barriers in the implementation of TQM. Int. J. Product. Qual. Manag. 2011, 8,
[153–179. [CrossRef]](http://doi.org/10.1504/IJPQM.2011.041844)
63. Ravi, V.; Shankar, R.; Tiwari, M.K. Productivity improvement of a computer hardware supply chain. Int. J. Product. Perform.
_[Manag. 2005, 54, 239–255. [CrossRef]](http://doi.org/10.1108/17410400510593802)_
64. Barve, A.; Kanda, A.; Shankar, R. Analysis of interaction among the barriers of third party logistics. Int. J. Agil. Syst. Manag. 2007,
_[2, 109–129. [CrossRef]](http://doi.org/10.1504/IJASM.2007.015684)_
65. [Hasan, M.A.; Shankar, R.; Sarkis, J. A study of barriers to agile manufacturing. Int. J. Agil. Syst. Manag. 2007, 2, 1–22. [CrossRef]](http://doi.org/10.1504/IJASM.2007.015679)
66. Raj, T.; Shankar, R.; Suhaib, M. An ISM approach for modelling the enablers of flexible manufacturing system: The case for India.
_[Int. J. Prod. Res. 2008, 46, 6883–6912. [CrossRef]](http://doi.org/10.1080/00207540701429926)_
67. Nayak, S.; Tripathy, S.; Dash, A. Non-technical skill development strategy to enhance safety performance of railway system: An
[interpretive structural modelling approach. Int. J. Bus. Excell. 2019, 19, 168–188. [CrossRef]](http://doi.org/10.1504/IJBEX.2019.102233)
68. Ahmad, M.; Tang, X.W.; Qiu, J.N.; Ahmad, F. Interpretive structural modeling and MICMAC analysis for identifying and
[benchmarking significant factors of seismic soil liquefaction. Appl. Sci. 2019, 9, 233. [CrossRef]](http://doi.org/10.3390/app9020233)
69. Nayak, S.; Tripathy, S.; Dash, A. Role of non technical skill in human factor engineering: A crucial safety issue in Indian Railway.
_[Int. J. Syst. Assur. Eng. Manag. 2018, 9, 1120–1136. [CrossRef]](http://doi.org/10.1007/s13198-018-0715-z)_
70. Aich, S.; Tripathy, S. An interpretive structural model of green supply chain management in Indian computer and its peripheral
[industries. Int. J. Procure. Manag. 2014, 7, 239–256. [CrossRef]](http://doi.org/10.1504/IJPM.2014.060774)
71. Tripathy, S.; Sahu, S.; Ray, P.K. Interpretive structural modelling for critical success factors of R&D performance in Indian
manufacturing firms. J. Model. Manag. 2013, 8, 212–240.
72. Lim, M.K.; Tseng, M.L.; Tan, K.H.; Bui, T.D. Knowledge management in sustainable supply chain management: Improving
[performance through an interpretive structural modelling approach. J. Clean. Prod. 2017, 162, 806–816. [CrossRef]](http://doi.org/10.1016/j.jclepro.2017.06.056)
73. Saxena, J.P.; Vrat, P. Scenario building: A critical study of energy conservation in the Indian cement industry. Technol. Forecast. Soc.
_[Chang. 1992, 41, 121–146. [CrossRef]](http://doi.org/10.1016/0040-1625(92)90059-3)_
74. Girubha, J.; Vinodh, S.; Vimal, K.E.K. Application of interpretative structural modelling integrated multi criteria decision making
[methods for sustainable supplier selection. J. Model. Manag. 2016, 11, 358–388. [CrossRef]](http://doi.org/10.1108/JM2-02-2014-0012)
75. Diabat, A.; Govindan, K. An analysis of the drivers affecting the implementation of green supply chain management. Resour.
_[Conserv. Recycl. 2011, 55, 659–667. [CrossRef]](http://doi.org/10.1016/j.resconrec.2010.12.002)_
76. Mangla, S.K.; Luthra, S.; Mishra, N.; Singh, A.; Rana, N.P.; Dora, M.; Dwivedi, Y. Barriers to effective circular supply chain
[management in a developing country context. Prod. Plan. Control 2018, 29, 551–569. [CrossRef]](http://doi.org/10.1080/09537287.2018.1449265)
77. Chakraborty, K.; Mondal, S.; Mukherjee, K. Critical analysis of enablers and barriers in extension of useful life of automotive
products through remanufacturing. J. Clean. Prod. 2019, 227, 1117–1135.
78. Kim, Y.; Park, Y.; Song, G. Interpretive Structural Modeling in the Adoption of IoT Services. KSII Trans. Internet Inf. Syst. 2019, 13.
[[CrossRef]](http://doi.org/10.3837/tiis.2019.03.004)
79. Rana, N.P.; Dwivedi, Y.K.; Hughes, D.L. Analysis of Challenges for Blockchain Adoption within the Indian Public Sector: An
[Interpretive Structural Modelling Approach. Inf. Technol. People 2021. [CrossRef]](http://doi.org/10.1108/ITP-07-2020-0460)
80. Sharma, M.G.; Kumar, S. The Implication of Blockchain as a Disruptive Technology for Construction Industry. IIM Kozhikode Soc.
_[Manag. Rev. 2020, 9, 177–188. [CrossRef]](http://doi.org/10.1177/2277975220932343)_
81. Kamble, S.S.; Gunasekaran, A.; Sharma, R. Modeling the blockchain enabled traceability in agriculture supply chain. Int. J. Inf.
_[Manag. 2020, 52, 101967. [CrossRef]](http://doi.org/10.1016/j.ijinfomgt.2019.05.023)_
82. Patri, R.; Suresh, M. Factors influencing lean implementation in healthcare organizations: An ISM approach. Int. J. Healthc. Manag.
**[2018, 11, 25–37. [CrossRef]](http://doi.org/10.1080/20479700.2017.1300380)**
83. Bhosale, V.A.; Kant, R. An integrated ISM fuzzy MICMAC approach for modelling the supply chain knowledge flow enablers.
_[Int. J. Prod. Res. 2016, 54, 7374–7399. [CrossRef]](http://doi.org/10.1080/00207543.2016.1189102)_
84. [Sushil, S. Interpreting the interpretive structural model. Glob. J. Flex. Syst. Manag. 2012, 13, 87–106. [CrossRef]](http://doi.org/10.1007/s40171-012-0008-3)
85. Sivaprakasam, R.; Selladurai, V.; Sasikumar, P. Implementation of interpretive structural modelling methodology as a strategic
[decision making tool in a Green Supply Chain Context. Ann. Oper. Res. 2015, 233, 423–448. [CrossRef]](http://doi.org/10.1007/s10479-013-1516-z)
86. Jayasuriya Daluwathumullagamage, D.; Sims, A. Blockchain-Enabled Corporate Governance and Regulation. Int. J. Financ. Stud.
**[2020, 8, 36. [CrossRef]](http://doi.org/10.3390/ijfs8020036)**
87. Gomez-Trujillo, A.M.; Velez-Ocampo, J.; Gonzalez-Perez, M.A. Trust, Transparency, and Technology: Blockchain and Its Relevance
in the Context of the 2030 Agenda. In The Palgrave Handbook of Corporate Sustainability in the Digital Era; Palgrave Macmillan:
Cham, Switzerland, 2021; pp. 561–580.
88. Koster, F.; Borgman, H. New Kid on The Block! Understanding Blockchain Adoption in the Public Sector. In Proceedings of the
53rd Hawaii International Conference on System Sciences, Maui, HI, USA, 7–10 January 2020.
89. Monrat, A.A.; Schelén, O.; Andersson, K. A survey of blockchain from the perspectives of applications, challenges, and
[opportunities. IEEE Access 2019, 7, 117134–117151. [CrossRef]](http://doi.org/10.1109/ACCESS.2019.2936094)
90. Hasselgren, A.; Kralevska, K.; Gligoroski, D.; Pedersen, S.A.; Faxvaag, A. Blockchain in healthcare and health sciences—A scoping
[review. Int. J. Med. Inform. 2020, 134, 104040. [CrossRef] [PubMed]](http://doi.org/10.1016/j.ijmedinf.2019.104040)
91. Petersson, E.; Baur, K. Impacts of Blockchain Technology on Supply Chain Collaboration: A Study on the Use of Blockchain
[Technology in Supply Chains and How It Influences Supply Chain Collaboration. Available online: https://www.diva-portal.](https://www.diva-portal.org/smash/get/diva2:1215210/FULLTEXT01.pdf)
[org/smash/get/diva2:1215210/FULLTEXT01.pdf (accessed on 19 March 2019).](https://www.diva-portal.org/smash/get/diva2:1215210/FULLTEXT01.pdf)
92. Satyavolu, P.; Herridge, M. Blockchain in Manufacturing: Enhancing Trust, Cutting . . . (n.d.). [Available online: https:](https://www.cognizant.com/whitepapers/blockchain-in-manufacturing-enhancing-trust-cuttingcosts-and-lubricating-processes-across-the-value-chain-codex3239.pdf)
[//www.cognizant.com/whitepapers/blockchain-in-manufacturing-enhancing-trust-cuttingcosts-and-lubricating-processes-](https://www.cognizant.com/whitepapers/blockchain-in-manufacturing-enhancing-trust-cuttingcosts-and-lubricating-processes-across-the-value-chain-codex3239.pdf)
[across-the-value-chain-codex3239.pdf (accessed on 18 January 2021).](https://www.cognizant.com/whitepapers/blockchain-in-manufacturing-enhancing-trust-cuttingcosts-and-lubricating-processes-across-the-value-chain-codex3239.pdf)
93. Choi, D.; Chung, C.Y.; Seyha, T.; Young, J. Factors Affecting Organizations’ Resistance to the Adoption of Blockchain Technology
[in Supply Networks. Sustainability 2020, 12, 8882. [CrossRef]](http://doi.org/10.3390/su12218882)
94. Sánchez-Cid, F.; Mana, A.; Spanoudakis, G.; Kloukinas, C.; Serrano, D.; Munoz, A. Representation of security and dependability
solutions. In Security and Dependability for Ambient Intelligence; Springer: Boston, MA, USA, 2009; pp. 69–95.
95. Serrano, D.; Ruíz, J.F.; Muñoz, A.; Maña, A.; Armenteros, A.; Crespo, B.G.N. Development of applications based on security
patterns. In Proceedings of the 2009 Second International Conference on Dependability, IEEE, Athens, Greece, 18–23 June 2009;
pp. 111–116.
},
{
"paperId": "04b805999f4055e9087f7d927cd0af44866a1022",
"title": "Blockchain Technology in Healthcare: A Comprehensive Review and Directions for Future Research"
},
{
"paperId": "c098d87e026092fe0c0e941505eebb79122a6815",
"title": "Interpretive Structural Modeling in the Adoption of IoT Services"
},
{
"paperId": "6c6b4d43df1529b969910f7e112e831360f12132",
"title": "The Potential of Blockchain Technology for Health Information Exchange: Experimental Study From Patients’ Perspectives"
},
{
"paperId": "6934fddef7a37cec01e595394541abd5f2bced0a",
"title": "Comparison of blockchain platforms: a systematic review and healthcare examples"
},
{
"paperId": "e4eac826a4afce44ea1de42f78d9dde755ccfef7",
"title": "An interpretive structural modelling of the features influencing researchers’ selection of reference management software"
},
{
"paperId": "4c0945cb52d0734b25ecea49e3ae1c1b243fca66",
"title": "A systematic literature review of blockchain-based applications: Current status, classification and open issues"
},
{
"paperId": "8bb40227ce2981bb9f462f70f81815021c3f801b",
"title": "Blockchain Implementation in Health Care: Protocol for a Systematic Review"
},
{
"paperId": "be05eb406a5040efe19d244170a5e7c6f2d86529",
"title": "Emerging Blockchain Technology Solutions for Modern Healthcare Infrastructure"
},
{
"paperId": "d50a82091e30c81f3d80c5d3d1db36b496535aa4",
"title": "Interpretive Structural Modeling and MICMAC Analysis for Identifying and Benchmarking Significant Factors of Seismic Soil Liquefaction"
},
{
"paperId": "62f1da816679c1a0500906b97b8aeaafc5fa50b5",
"title": "Applications of Blockchain in Healthcare: Current Landscape & Challenges"
},
{
"paperId": "e036da854e1fa5fbe9de6a8b25406e6af3900bae",
"title": "Blockchain for Cities—A Systematic Literature Review"
},
{
"paperId": "f5ecc63b6d74a58413ecf8bc98c835d5d1488ea1",
"title": "Temporal Trends and Characteristics of Reportable Health Data Breaches, 2010-2017"
},
{
"paperId": "d3fd9e741f161696363171ef683123dd1823251e",
"title": "Challenges of blockchain technology adoption for e-government: a systematic literature review"
},
{
"paperId": "e68053f7e09e4d0a665fd03729f4a71c80d42538",
"title": "A survey on security and privacy issues of blockchain technology"
},
{
"paperId": "107d0efeb55738282180ae3c7408e9be6b2b89a4",
"title": "Barriers to effective circular supply chain management in a developing country context"
},
{
"paperId": "89debf2d924199c8d8147f6f365447bd4b1f81eb",
"title": "Blockchain and Privacy Protection in the Case of the European General Data Protection Regulation (GDPR): A Delphi Study"
},
{
"paperId": "d8bfcdc081d5c95cfe128c8dfab0117ede9aac11",
"title": "Role of non technical skill in human factor engineering: a crucial safety issue in Indian Railway"
},
{
"paperId": "2d09a943e9ea24803dcfea50c7c003a57a964d38",
"title": "Data Breach, Privacy, and Cyber Insurance: How Insurance Companies Act as “Compliance Managers” for Businesses"
},
{
"paperId": "4196cdac0e779ea8a79e1192e32f7c8115123517",
"title": "Blockchain for government services — Use cases, security benefits and challenges"
},
{
"paperId": "9fc4af5757abd1465578855417c2a7760212c263",
"title": "Factors influencing lean implementation in healthcare organizations: An ISM approach"
},
{
"paperId": "a45b58c66c90ec4d3435e358fd46ee509dfea05b",
"title": "Introducing blockchains for healthcare"
},
{
"paperId": "7f1c3c97a639a93796e935add3665c3ef329c0c8",
"title": "Blockchain's roles in strengthening cybersecurity and protecting privacy"
},
{
"paperId": "f99878f6f3df724ffaf648f7515be053347d19de",
"title": "Metrics for assessing blockchain-based healthcare decentralized apps"
},
{
"paperId": "1967549a611e7dd34a875aa220e534a0075521c1",
"title": "Knowledge management in sustainable supply chain management: Improving performance through an interpretive structural modelling approach"
},
{
"paperId": "488ebe4db7190efe445c225aa67a10f70bc46d8d",
"title": "Blockchain in government: Benefits and implications of distributed ledger technology for information sharing"
},
{
"paperId": "5d7bf180157709f2515ea7b596bb0bf231e83559",
"title": "Blockchain Security in Cloud Computing: Use Cases, Challenges, and Solutions"
},
{
"paperId": "c4f78541eff05e539927d17ece67f239603b18a1",
"title": "A critical review of blockchain and its current applications"
},
{
"paperId": "a89287356a6d16997157ec9f91a35fb59360d903",
"title": "E-residency and blockchain"
},
{
"paperId": "54c6cb0a6ebdbc349e08b9c2e7c6bda3b72e1607",
"title": "Blockchain Applications and Use Cases in Health Information Technology"
},
{
"paperId": "9092a7802f6e56dd5b6d1be30c8b5588a22e53fe",
"title": "Blockchain technology innovations"
},
{
"paperId": "d73d60fb82318e91a8a66a1f8f2a5a1578d31f02",
"title": "Applying Software Patterns to Address Interoperability in Blockchain-based Healthcare Apps"
},
{
"paperId": "0e7be5a5e9814db82ade63b9aac80de5d7da8476",
"title": "Tax, financial and social regulatory mechanisms within the knowledge-driven economy. Blockchain algorithms and fog computing for the efficient regulation"
},
{
"paperId": "20bf5c9d575f19eaf465c43555b72ffd092d68fe",
"title": "Design for Trust: An Exploration of the Challenges and Opportunities of Bitcoin Users"
},
{
"paperId": "10268e4e102ce9c3635708b9242079c26414ba90",
"title": "Lightweight Backup and Efficient Recovery Scheme for Health Blockchain Keys"
},
{
"paperId": "310e677ce23004fdf0a549c2cfda2ef15420d6ec",
"title": "Blockchain technology in healthcare: The revolution starts here"
},
{
"paperId": "5a76824d7ddc85e2f49b86b6575cde8d3806247e",
"title": "Application of interpretative structural modelling integrated multi criteria decision making methods for sustainable supplier selection"
},
{
"paperId": "de8f0cec6e709ae7c0dc499343c6f0f815517460",
"title": "Adoption Factors of the Electronic Health Record: A Systematic Review"
},
{
"paperId": "cd17afa7165fc0a39e8d8f5b46c9b6c897f94f52",
"title": "An integrated ISM fuzzy MICMAC approach for modelling the supply chain knowledge flow enablers"
},
{
"paperId": "1b5d1095dcb43c8d6b3c1457a1e8b9fbaa62eed7",
"title": "Implementation of interpretive structural modelling methodology as a strategic decision making tool in a Green Supply Chain Context"
},
{
"paperId": "a381a43e9345028b52211f65f003dfebbf25d43d",
"title": "Electronic Health Records and Patient Safety"
},
{
"paperId": "97fddbbfd681bce9eeb8e0a013353b4d5b2ba0db",
"title": "Blockchain: Blueprint for a New Economy"
},
{
"paperId": "c0dd8f735f44ed53c9c6fc28b30ae784181a3dc6",
"title": "An interpretive structural model of green supply chain management in Indian computer and its peripheral industries"
},
{
"paperId": "ede1d4debfb33f626af2b611973fc1f74184449f",
"title": "Interpretive structural modelling for critical success factors of R&D performance in Indian manufacturing firms"
},
{
"paperId": "2b0f9aa8f60148e9f8786f3f1dfe10fd8eab8a0d",
"title": "An immersive view approach by secure interactive multimedia proof-of-concept implementation"
},
{
"paperId": "f49b834d24a521314c6a9f2f302cbe0dc70820b0",
"title": "Security Engineering for Cloud Computing: Approaches and Tools"
},
{
"paperId": "f84c059fb5660d6ea1e67f6f40327d2d2b4c355f",
"title": "Security-enhanced ambient assisted living supporting school activities during hospitalisation"
},
{
"paperId": "4985bfae6a1b69b8ef40bcb14bb9967ebe597a5f",
"title": "A Performance-Oriented Monitoring System for Security Properties in Cloud Computing Applications"
},
{
"paperId": "dfe48a870bc2238fc47916599c2aa30fd9c4c999",
"title": "Interpreting the Interpretive Structural Model"
},
{
"paperId": "4e72b82ab96b24defe9e21913657579a3518669a",
"title": "Identification and modelling of barriers in the implementation of TQM"
},
{
"paperId": "b9fd67f0a90ab547e3a946cf95d77fe6562ec5a4",
"title": "Policy Based Management for Security in Cloud Computing"
},
{
"paperId": "1d40975a335f302acbe015d41e183501dc655717",
"title": "An analysis of the drivers affecting the implementation of green supply chain management"
},
{
"paperId": "0ba0c4a9e2eb1329dd8412323c9f0efe7eebbf6b",
"title": "Development of Applications Based on Security Patterns"
},
{
"paperId": "88fd825a1b2f732835d55d95da3b9ffedf25f17b",
"title": "An ISM approach for modelling the enablers of flexible manufacturing system: the case for India"
},
{
"paperId": "132e301b035d7bd046a6a2a56e2be66546ea0231",
"title": "Analysis of interaction among the barriers of Third Party Logistics"
},
{
"paperId": "9abf8076e1639f12c304ffda8562bdaeb7f447f0",
"title": "A study of barriers to agile manufacturing"
},
{
"paperId": "787fc0b665ef8b696286202e34a5ae4de067a3db",
"title": "ANALYSIS OF INTERACTIONS AMONG THE BARRIERS OF REVERSE LOGISTICS"
},
{
"paperId": "53842e647f5e65df8edc3c8d71509ac07653a01f",
"title": "Productivity improvement of a computer hardware supply chain"
},
{
"paperId": "dc63bb8d856edd9ce0a575664348a30ae047aa7a",
"title": "IT-enablement of supply chains: understanding the barriers"
},
{
"paperId": "05445c3defa10aa44e2ecd49a91008e9241cb892",
"title": "Scenario building: A critical study of energy conservation in the Indian cement industry"
},
{
"paperId": null,
"title": "Blockchain in Manufacturing: Enhancing Trust, Cutting . ."
},
{
"paperId": "2b0befc4714c376c1d7e1d2d21d3d123c9fb59ec",
"title": "Trust, Transparency, and Technology: Blockchain and Its Relevance in the Context of the 2030 Agenda"
},
{
"paperId": "79a33b6b7d2ae17d55659faa084d37d6e7f930f4",
"title": "Use of Blockchain in Healthcare: A Systematic Literature Review"
},
{
"paperId": "b3bca8039c72847c2fe5f7f848c0955c30662273",
"title": "Chapter One - Blockchain Technology Use Cases in Healthcare"
},
{
"paperId": "3006545e79f50126dc421a5dfe1f118c05dd82e9",
"title": "Impacts of Blockchain technology on Supply Chain Collaboration : A study on the use of blockchain technology in supply chains and how it influences supply chain collaboration"
},
{
"paperId": "b43ecf4409b56ab5786ed12fac711ae6249dd39d",
"title": "Barriers to Adoption of Blockchain Technology"
},
{
"paperId": "508b1c18ac074428c33b0e023d3e3110e718991c",
"title": "Patterns of Self-Organising in the Bitcoin Online Community: Code Forking as Organising in Digital Infrastructure"
},
{
"paperId": "3ed0db58a7aec7bafc2aa14ca550031b9f7021d5",
"title": "A Case Study for Blockchain in Healthcare : “ MedRec ” prototype for electronic health records and medical research data"
},
{
"paperId": "f15ddad79b91c084b64dc920344e23fdbbe9e0ce",
"title": "Perceived Benefit and Risk as Multidimensional Determinants of Bitcoin Use: A Quantitative Exploratory Study"
},
{
"paperId": "cab41a7bec3b928cc13ca920d893f449cabb1599",
"title": "Opportunities and risks of Blockchain Technologies in payments – a research agenda"
},
{
"paperId": null,
"title": "Blockchain for health data and its potential use in health it and health care related research"
},
{
"paperId": "b88ca6bf59f16c8e13e21ba2f37012b027f7253b",
"title": "Dynamic Security Properties Monitoring Architecture for Cloud Computing"
},
{
"paperId": "94afa897bd6daad207942fff481d922c1d9cbf12",
"title": "Representation of Security and Dependability Solutions"
},
{
"paperId": "ac5a34da896e64c113369ea49d3e6691523e06a0",
"title": "Future Generation Computer Systems"
}
] | 18,225
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/012e174c148901bedf28ae161518f3fa5c2eee4c
|
[
"Computer Science"
] | 0.87742
|
An overview of the OMNeT++ simulation environment
|
012e174c148901bedf28ae161518f3fa5c2eee4c
|
International ICST Conference on Simulation Tools and Techniques
|
[
{
"authorId": "145381357",
"name": "A. Varga"
},
{
"authorId": "102064564",
"name": "R. Hornig"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"Simul Tool Tech Commun Netw Syst",
"Simulation Tools and Techniques for Communications, Networks and System",
"SIMUTools",
"SimuTools",
"Int ICST Conf Simul Tool Tech"
],
"alternate_urls": [
"https://eudl.eu/proceedings"
],
"id": "dbfba120-41ed-4e2b-966a-442cb212437c",
"issn": "2310-9831",
"name": "International ICST Conference on Simulation Tools and Techniques",
"type": "conference",
"url": "http://www.icst.org/"
}
| null |
# AN OVERVIEW OF THE OMNeT++ SIMULATION ENVIRONMENT
## András Varga and Rudolf Hornig
### OpenSim Ltd.
Szőlő köz 11, 1032 Budapest, Hungary
## andras.varga@omnest.com, rudolf.hornig@omnest.com
ABSTRACT
The OMNeT++ discrete event simulation environment has been
publicly available since 1997. It has been created with the
simulation of communication networks, multiprocessors and other
distributed systems in mind as application area, but instead of
building a specialized simulator, OMNeT++ was designed to be as
general as possible. Since then, the idea has proven to work, and
OMNeT++ has been used in numerous domains from queuing
network simulations to wireless and ad-hoc network simulations,
from business process simulation to peer-to-peer network, optical
switch and storage area network simulations. This paper presents
an overview of the OMNeT++ framework, recent challenges
brought about by the growing amount and complexity of third-party simulation models, and the solutions we introduce in the next major revision of the simulation framework.¹

¹ The 4.0 release is scheduled to appear in Q1 2008.
## KEYWORDS
discrete simulation, network simulation, simulation tools,
performance analysis, computer systems, telecommunications,
hierarchical, integrated development environment
## 1. INTRODUCTION
OMNeT++[1][2] is a C++-based discrete event simulator for
modeling communication networks, multiprocessors and other
distributed or parallel systems. OMNeT++ is public-source, and
can be used under the Academic Public License that makes the
software free for non-profit use. The motivation of developing
OMNeT++ was to produce a powerful open-source discrete event
simulation tool that can be used by academic, educational and
research-oriented commercial institutions for the simulation of
computer networks and distributed or parallel systems. OMNeT++
attempts to fill the gap between open-source, research-oriented
simulation software such as NS-2 [11] and expensive commercial
alternatives like OPNET [16]. A later section of this paper
presents a comparison with other simulation packages. OMNeT++
is available on all common platforms including Linux, Mac OS/X
and Windows, using the GCC tool chain or the Microsoft Visual
C++ compiler.
OMNeT++ represents a framework approach. Instead of directly
providing simulation components for computer networks, queuing
networks or other domains, it provides the basic machinery and
tools to write such simulations. Specific application areas are
supported by various simulation models and frameworks such as
the Mobility Framework or the INET Framework. These models
are developed completely independently of OMNeT++, and
follow their own release cycles.
Since its first release, simulation models have been developed by
various individuals and research groups for several areas
including: wireless and ad-hoc networks, sensor networks, IP and
IPv6 networks, MPLS, wireless channels, peer-to-peer networks,
storage area networks (SANs), optical networks, queuing
networks, file systems, high-speed interconnections (InfiniBand),
and others. Some of the simulation models are ports of real-life
protocol implementations like the Quagga Linux routing daemon
or the BSD TCP/IP stack, others have been written directly for
OMNeT++. A later section of this paper will discuss these
projects in more detail. In addition to university research groups
and non-profit research institutions, companies like IBM, Intel,
Cisco, Thales and Broadcom are also using OMNeT++
successfully in commercial projects or for in-house research.
## 2. THE DESIGN OF OMNeT++
OMNeT++ was designed from the beginning to support network
simulation on a large scale. This objective led to the following
main design requirements:
- To enable large-scale simulation, simulation models
need to be hierarchical, and built from reusable
components as much as possible.
- The simulation software should facilitate visualizing
and debugging of simulation models in order to reduce
debugging time, which traditionally takes up a large
percentage of simulation projects. (The same feature set
is also useful for educational use of the software.)
- The simulation software itself should be modular,
customizable and should allow embedding simulations
into larger applications such as network planning
software. (Embedding brings additional requirements
about the memory management, restartability, etc. of the
simulation).
- Data interfaces should be open: it should be possible to
generate and process input and output files with
commonly available software tools.
- The software should provide an Integrated Development Environment that largely facilitates model development and result analysis.
The following sections go through the most important aspects of
OMNeT++, highlighting the design decisions that helped achieve
the above main goals.
## 2.1 Model Structure
An OMNeT++ model consists of modules that communicate with
message passing. The active modules are termed simple modules;
they are written in C++, using the simulation class library. Simple
modules can be grouped into compound modules and so forth; the
number of hierarchy levels is not limited. Messages can be sent
either via connections that span between modules or directly to
their destination modules. The concept of simple and compound
modules is similar to DEVS [46][47] atomic and coupled models.
Both simple and compound modules are instances of module
types. While describing the model, the user defines module types;
instances of these module types serve as components for more
complex module types. Finally, the user creates the system
module as a network module which is a special compound module
type without gates to the external world. When a module type is
used as a building block, there is no distinction whether it is a
simple or a compound module. This allows the user to
transparently split a module into several simple modules within a
compound module, or do the opposite, re-implement the
functionality of a compound module in one simple module,
without affecting existing users of the module type. The feasibility
of model reuse is proven by the model frameworks like INET
Framework [1] and Mobility Framework [17][18], and their
extensions.
Figure 1. Model Structure in OMNeT++. Boxes represent simple modules (thick border) and compound modules (thin border); arrows connecting small boxes represent connections and gates.
Modules communicate with messages which – in addition to usual
attributes such as timestamp – may contain arbitrary data. Simple
modules typically send messages via gates, but it is also possible
to send them directly to their destination modules. Gates are the
input and output interfaces of modules: messages are sent out
through output gates and arrive through input gates. An input and
an output gate can be linked with a connection. Connections are
created within a single level of module hierarchy: within a
compound module, corresponding gates of two submodules, or a
gate of one submodule and a gate of the compound module can be
connected. Connections spanning across hierarchy levels are not
permitted, as it would hinder model reuse. Due to the hierarchical
structure of the model, messages typically travel through a chain
of connections, to start and arrive in simple modules. Compound
modules act as 'cardboard boxes' in the model, transparently
relaying messages between their inside and the outside world.
Properties such as propagation delay, data rate and bit error rate,
can be assigned to connections. One can also define connection
types with specific properties (termed channels) and reuse them in
several places.
Modules can have parameters. Parameters are mainly used to pass
configuration data to simple modules, and to help define model
topology. Parameters may take string, numeric or boolean values.
Because parameters are represented as objects in the program,
parameters – in addition to holding constants – may transparently
act as sources of random numbers with the actual distributions
provided with the model configuration, they may interactively
prompt the user for the value, and they might also hold
expressions referencing other parameters. Compound modules
may pass parameters or expressions of parameters to their
submodules.
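To make the module, gate and connection concepts above concrete, the following is a minimal NED-like sketch of a compound module. The module type Node, its gates in and out, and the 10ms delay value are illustrative assumptions rather than parts of any shipped model, and the syntax follows the NED revision described in the next section.

```
// A minimal sketch (illustrative names): a compound module containing two
// submodules of an assumed type Node, connected by channels with a delay.
module TwoNodeNet
{
    submodules:
        nodeA: Node;
        nodeB: Node;
    connections:
        nodeA.out --> { delay = 10ms; } --> nodeB.in;
        nodeB.out --> { delay = 10ms; } --> nodeA.in;
}
```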
## 2.2 The Design of the NED Language
The user defines the structure of the model (the modules and their
interconnection) in OMNeT++'s topology description language,
NED. Typical ingredients of a NED description are simple module
declarations, compound module definitions and network
definitions. Simple module declarations describe the interface of
the module: gates and parameters. Compound module definitions
consist of the declaration of the module's external interface (gates
and parameters), and the definition of submodules and their
interconnection. Network definitions are compound modules that
qualify as self-contained simulation models.
The NED language has been designed to scale well; however,
recent growth in the amount and complexity of OMNeT++-based
simulation models and model frameworks made it necessary to
improve the NED language as well. In addition to a number of
smaller improvements, the following major features have been
introduced:
**Inheritance. Modules and channels can now be subclassed.**
Derived modules and channels may add new parameters, gates,
and (in the case of compound modules) new submodules and
connections. They may set existing parameters to a specific value,
and also set the gate size of a gate vector. This makes it possible,
for example, to take a GenericTCPClientApp module and
derive an FTPApp from it by setting certain parameters to a fixed
value; or derive a WebClientHost compound module from a
BaseHost compound module by adding a WebClientApp
submodule and connecting it to the inherited TCP submodule.
**Interfaces. Module and channel interfaces can be used as a**
placeholder where normally a module or channel type would be
used, and the concrete module or channel type is determined at
network setup time by a parameter. Concrete module types have
to “implement” the interface they can substitute. For example, the
module types `ConstSpeedMobility` and `RandomWayPointMobility` need to implement `IMobility` to be able to be plugged into a `MobileHost` that contains an `IMobility` submodule.
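As a hedged illustration of these two features, the sketch below uses the module names from the text; the parameter and gate lists are invented for the example, and the syntax follows the NED revision described here, so it may differ in detail from any given OMNeT++ release.

```
// Inheritance: a derived simple module fixes an inherited parameter.
simple GenericTCPClientApp
{
    parameters:
        int serverPort;        // illustrative parameter
    gates:
        input tcpIn;
        output tcpOut;
}

simple FTPApp extends GenericTCPClientApp
{
    parameters:
        serverPort = 21;       // specialize the generic client for FTP
}

// Interfaces: a concrete mobility module declares that it implements
// IMobility, so it can be substituted for an IMobility submodule.
moduleinterface IMobility
{
}

simple ConstSpeedMobility like IMobility
{
    parameters:
        double speed;          // illustrative parameter
}
```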
**Packages. To address name clashes between different models and**
to simplify specifying which NED files are needed by a specific
simulation model, a Java-like package structure was introduced
into the NED language.
**Inner types. Channel types and module types used locally by a**
compound module can now be defined within the compound
module, in order to reduce namespace pollution.
**Metadata annotations. It is possible to annotate module or**
channel types, parameters, gates and submodules by adding
properties. Metadata are not used by the simulation kernel
directly, but they can carry extra information for various tools, the
runtime environment, or even for other modules in the model. For
example, a module's graphical representation (icon, etc) or the
prompt string and unit (milliwatt, etc) of a parameter are specified
using properties.
The NED language has an equivalent XML representation, that is,
NED files can be converted to XML and back without loss of
data, including comments. This lowers the barrier for
programmatic manipulation of NED files, for example extracting
information, refactoring and transforming NED, generating NED
from information stored in other systems like SQL databases, and
so on.
## 2.3 Graphical Editor
The OMNeT++ package includes an Integrated Development
Environment which contains a graphical editor using NED as its
native file format; moreover, the editor can work with arbitrary,
even hand-written NED code. The editor is a fully two-way tool,
i.e. the user can edit the network topology either graphically or in
NED source view, and switch between the two views at any time.
This is made possible by design decisions about the NED
language itself. First, NED is a declarative language, and as such,
it does not use an imperative programming language for defining
the internal structure of a compound module. Allowing arbitrary
programming constructs would make it practically infeasible to
write two-way graphical editors which could work directly with
both generated and hand-made NED files. (Generally, the editor
would need AI capability to understand the code.)
Most graphical editors only allow the creation of fixed topologies.
However, NED contains declarative constructs (resembling loops
and conditionals in imperative languages), which enable
parametric topologies: it is possible to create common regular
topologies such as ring, grid, star, tree, hypercube, or random
interconnection whose parameters (size, etc.) are passed in
numeric-valued parameters. The potential of parametric
topologies and associated design patterns have been investigated
in [7][9]. With parametric topologies, NED holds an advantage in
many simulation scenarios both over OPNET where only fixed
model topologies can be designed, and over NS-2 where building
model topology is programmed in Tcl and often intermixed with
simulation logic, so it is generally impossible to write graphical
editors which could work with existing, hand-written code.
## 2.4 Separation of Model and Experiments
It is always a good practice to try to separate the different aspects
of a simulation as much as possible. Model behavior is captured
in C++ files as code, while model topology (and of course the
parameters defining this topology) is defined by the NED files.
This approach allows the user to keep the different aspects of the
model in different places which in turn allows having a cleaner
model and better tooling support. In a generic simulation scenario,
one usually wants to know how the simulation behaves with
different inputs. These variables neither belong to the behavior
(code) nor the topology (NED files) as they can change from run
to run. INI files are used to store these values. INI files provide a
great way to specify how these parameters change and enable us
to run our simulation for each parameter combination we are
interested in. The generated simulation results can be easily
harvested and processed by the built-in analysis tool. We will
explore later, in the Result Analysis paragraph, how the INI files
are organized and how they can make experimenting with our
model a lot easier.
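A minimal sketch of such an INI file is shown below, assuming a network named AdhocNetwork with numHosts and load parameters; the network and parameter names are illustrative, not taken from a concrete model.

```
[General]
network = AdhocNetwork        # which NED network to set up
sim-time-limit = 100s         # a simulation kernel setting

# run-to-run variables live here, not in the C++ code or the NED files:
**.numHosts = 20
**.load = 3.8
```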
## 2.5 Simple Module Programming Model
Simple modules are the active elements in a model. They are
atomic elements in the module hierarchy: they cannot be divided
any further. Simple modules are programmed in C++, using the
OMNeT++ simulation class library. OMNeT++ provides an
Integrated C++ Development Environment so it is possible to
write, run and debug the code without leaving the OMNeT++
IDE. The simulation kernel does not distinguish between
messages and events – events are also represented as messages.
Simple modules are programmed using the process-interaction
method. The user implements the functionality of a simple module
by subclassing the cSimpleModule class. Functionality is
added via one of two alternative programming models: (1) coroutine-based, and (2) event-processing function. When using coroutine-based programming, the module code runs in its own (non-preemptively scheduled) thread, which receives control from the simulation kernel each time the module receives an event (i.e., a message). The function containing the coroutine code will typically never return: usually it contains an infinite loop with send and receive calls.
When using the event-processing function, the simulation kernel simply calls the given function of the module object with the message as argument; the function has to return immediately after processing the message. An important difference between the coroutine-based and event-processing function programming models is that with the former, every simple module needs its own CPU stack, which means larger memory requirements for the simulation program. This is of interest when the model contains a large number of modules (over a few tens of thousands).
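The following is a minimal sketch of the event-processing style, under the assumption of a module with a delay parameter and an out gate (both names are illustrative); the class and method names follow the simulation class library of recent OMNeT++ versions.

```
#include <omnetpp.h>
// (OMNeT++ 5 and later additionally require: using namespace omnetpp;)

// A sketch of an event-processing simple module: each arriving message is
// held for `delay` seconds (via a self-message acting as a timer), then
// forwarded through the "out" gate. All names here are illustrative.
class DelayNode : public cSimpleModule
{
  protected:
    virtual void handleMessage(cMessage *msg)
    {
        if (msg->isSelfMessage()) {
            // the timer fired: forward the message it carries, then
            // dispose of the timer itself
            send((cMessage *)msg->getContextPointer(), "out");
            delete msg;
        }
        else {
            // an ordinary message arrived: start a timer for it
            cMessage *timer = new cMessage("timer");
            timer->setContextPointer(msg);
            scheduleAt(simTime() + par("delay").doubleValue(), timer);
        }
    }
};

Define_Module(DelayNode);
```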
It is possible to write code which executes on module
initialization and finalization: the latter takes place on successful simulation termination, and finalization code is mostly used to save scalar results into a file. OMNeT++ also supports multi-stage initialization: situations where model initialization needs to be done in several "waves". Multi-stage initialization support is
missing from most simulation packages, and it is usually emulated
with broadcast events scheduled at zero simulation time, which is
a less clean solution.
Message sending and receiving are the most frequent tasks in
simple modules. Messages can be sent either via output gates, or
directly to another module. Modules receive messages either via
one of the several variations of the receive call (when using
coroutine-based programming), or messages are delivered to the
module in an invocation from the simulation kernel (when using
the event-processing function). Messages can be defined by
specifying their content in an MSG file. OMNeT++ takes care of
creating the necessary C++ classes. MSG files allow the
OMNeT++ kernel to generate reflection code which enables us to
peek into messages and explore their content at runtime.
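For illustration, a message definition might look like the sketch below; the message and field names are invented for the example, and the exact MSG syntax varies slightly between OMNeT++ versions.

```
// Sketch of a .msg file: OMNeT++ generates a C++ message class
// (with getter/setter methods and reflection support) from this.
message DataPacket
{
    int sourceAddress;
    int destAddress;
    int hopCount = 0;   // fields may have default values
}
```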
It is possible to modify the topology of the network dynamically:
one can create and delete modules and rearrange connections
while the simulation is executing. Even compound modules with
parametric internal topology can be created on the fly.
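A hedged sketch of dynamic module creation is given below; it follows the cModuleType API of recent OMNeT++ versions, and the NED type name, parameter name and helper function are illustrative.

```
#include <omnetpp.h>
// (OMNeT++ 5 and later additionally require: using namespace omnetpp;)

// Sketch: instantiate a module of an assumed NED type at runtime.
void createHost(cModule *parent)
{
    cModuleType *type = cModuleType::get("mynet.WirelessHost");
    cModule *host = type->create("host", parent);
    host->par("txPower") = 2.0;       // parameters can be set before building
    host->finalizeParameters();
    host->buildInside();              // creates submodules and connections
    host->scheduleStart(simTime());   // needed for activity()-based modules
    host->callInitialize();
}
```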
## 2.6 Design of the Simulation Library
OMNeT++ provides a rich object library for simple module
implementers. There are several distinguishing factors between
this library and other general-purpose or simulation libraries. The
OMNeT++ class library provides reflection functionality which
makes it possible to implement high-level debugging and tracing
capability, as well as automatic animation on top of it (as
exemplified by the Tkenv user interface, see later). Memory leaks,
pointer aliasing and other memory allocation problems are
common in C++ programs not written by specialists; OMNeT++
alleviates this problem by tracking object ownership and detecting
bugs caused by aliased pointers and misuse of shared objects. The
requirements for ease of use, modularity, open data interfaces and
support of embedding also heavily influenced the design of the
class library. The consistent use of object-oriented techniques
makes the simulation kernel compact and slim. This makes it
relatively easy to understand its internals, which is a useful
property for both debugging and educational use.
Recently it has become more common to do large scale network
simulations with OMNeT++, with several tens of thousands or more
network nodes. To address this requirement, aggressive memory
optimization has been implemented in the simulation kernel,
based on shared objects and copy-on-write semantics.
Until recently, simulation time has been represented with C's `double` type (IEEE double precision). Well-known precision problems with floating point calculations, however, have caused
problems in simulations from time to time. To address this issue,
simulation time has been recently changed to 64-bit integer-based
fixed-point representation. One of the major problems that had to
be solved here was how to detect numeric overflows, as the C and
C++ languages, despite their explicit goals of being “close to the
hardware”, lack any support to detect integer overflows.
## 2.7 Contents of the Simulation Library
This section provides a very brief catalog of the classes in the
OMNeT++ simulation class library. The classes were designed to
cover most of the common simulation tasks.
OMNeT++ has the ability to generate random numbers from
several independent streams. The common distributions are
supported, and it is possible to add new distributions programmed
by the user. It is also possible to load user distributions defined by
histograms.
The class library offers queues and various other container
classes. Queues can also operate as priority queues.
Messages are objects which may hold arbitrary data structures and
other objects (through aggregation or inheritance), and can also
embed other messages.
OMNeT++ supports routing traffic in the network. This feature
provides the ability to explore actual network topology, extract it
into a graph data structure, then navigate the graph or apply
algorithms such as Dijkstra to find shortest paths.
There are several statistical classes, from simple ones which
collect the mean and the standard deviation of the samples to a
number of distribution estimation classes. The latter include three
highly configurable histogram classes and the implementations of
the P² [10] and the k-split [8] algorithms. It is also possible to write time-series result data into an output file during simulation execution, and there are tools for post-processing the results.
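To illustrate typical use of these classes, the sketch below collects an end-to-end delay statistic in a sink module; the class and method names follow the simulation library (with minor naming differences across versions), and everything else, including the module itself, is illustrative.

```
#include <omnetpp.h>
// (OMNeT++ 5 and later additionally require: using namespace omnetpp;)

// Sketch of result collection: a running statistic plus a time series.
class Sink : public cSimpleModule
{
  protected:
    cStdDev delayStats;       // mean/stddev of end-to-end delay
    cOutVector delayVector;   // time series, written to the .vec file

    virtual void initialize()
    {
        delayStats.setName("endToEndDelay");
        delayVector.setName("endToEndDelay");
    }
    virtual void handleMessage(cMessage *msg)
    {
        double delay = (simTime() - msg->getCreationTime()).dbl();
        delayStats.collect(delay);
        delayVector.record(delay);
        delete msg;
    }
    virtual void finish()
    {
        // scalars are saved on successful termination (see above)
        recordScalar("mean endToEndDelay", delayStats.getMean());
    }
};

Define_Module(Sink);
```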
## 2.8 Parallel Simulation Support
OMNeT++ also has support for parallel simulation execution.
Very large simulations may benefit from the parallel distributed
simulation (PDES) feature, either by getting speedup, or by
distributing memory requirements. If the simulation requires
several Gigabytes of memory, distributing it over a cluster may be
the only way to run it. For getting speedup (and not actually
slowdown, which is also easily possible), the hardware or cluster
should have low latency and the model should have inherent
parallelism. Partitioning and other configuration can be
configured in the INI file, the simulation model itself doesn't need
to be changed (unless, of course, it contains global variables that
prevents distributed execution in the first place.) The
communication layer is MPI, but it's actually configurable, so if
the user does not have MPI it is still possible to run some basic
tests over named pipes. The figure below explains the logical
architecture of the parallel simulation kernel:
Figure 2. Logical Architecture of the OMNeT++ Parallel Simulation Kernel.
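As an illustration of such configuration, a partitioning setup might look like the sketch below; the option names follow recent OMNeT++ versions, and the module paths are invented for the example.

```
[General]
parallel-simulation = true
parsim-communications-class = "cMPICommunications"
parsim-synchronization-class = "cNullMessageProtocol"

# assign modules to logical processes (partitions):
*.hostA*.partition-id = 0
*.hostB*.partition-id = 1
```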
## 2.9 Internal Architecture
Figure 3. Logical Architecture of an OMNeT++ Simulation Program.
OMNeT++ simulation programs possess a modular structure. The
logical architecture is shown on Figure 3.
The Model Component Library consists of the code of compiled
simple and compound modules. Modules are instantiated and the
concrete simulation model is built by the simulation kernel and
class library (Sim) at the beginning of the simulation execution.
The simulation executes in an environment provided by the user
interface libraries (Envir, Cmdenv and Tkenv) – this environment
defines where input data come from, where simulation results go
to, what happens to debugging output arriving from the simulation
model, controls the simulation execution, determines how the
simulation model is visualized and (possibly) animated, etc.
Figure 4. Embedding OMNeT++.
By replacing the user interface libraries, one can customize the
full environment in which the simulation runs, and even embed an
OMNeT++ simulation into a larger application (Figure 4). This is
made possible by the existence of a generic interface between Sim
and the user interface libraries, as well as the fact that all Sim,
Envir, Cmdenv and Tkenv are physically separate libraries. It is
also possible for the embedding application to assemble models
from the available module types on the fly – in such cases, model
topology will often come from a database.
## 2.10 Real-Time Simulation, Network Emulation
Network emulation, together with real-time simulation and
hardware-in-the-loop like functionality, is available because the
event scheduler in the simulation kernel is pluggable too. The
OMNeT++ distribution contains a demo of real-time simulation
and a simplistic example of network emulation. Interfacing
OMNeT++ with other simulators (hybrid operation) or HLA is
also largely a matter of implementing one's own scheduler class.
## 2.11 Animation and Tracing Facility
An important requirement for OMNeT++ was easy debugging and
traceability of simulation models. Associated features are
implemented in Tkenv, the GUI user interface of OMNeT++.
Tkenv uses three methods: automatic animation, module output
windows and object inspectors. Automatic animation (i.e.
animation without any programming) in OMNeT++ is capable of
animating the flow of messages on network charts and reflecting
state changes of the nodes in the display. Automatic animation
perfectly fits the application area, as network simulation
applications rarely need fully customizable, programmable
animation capabilities.
Figure 5. Screenshot of the Tkenv User Interface of OMNeT++
Simple modules may write textual debugging or tracing
information to a special output stream. Such debug output appears
in module output windows. It is possible to open separate
windows for the output of individual modules or module groups,
so compared to the traditional printf()-style debugging, module
output windows make it easier to follow the execution of the
simulation program.
Further introspection into the simulation model is provided by
object inspectors. An object inspector is a GUI window
associated with a simulation object. Object inspectors can be used
to display the state or contents of an object in the most
appropriate way (i.e. a histogram object is displayed graphically,
with a histogram chart), as well as to manually modify the object.
In OMNeT++, it is automatically possible to inspect every
simulation object; there is no need to write additional code in the
simple modules to make use of inspectors.
It is also possible to turn off the graphical user interface
altogether, and run the simulation as a pure command-line
program. This feature is useful for batched simulation runs.
## 2.12 Visualizing Dynamic Behavior
The behavior of large and complex models is usually hard to
understand because of the complex interaction between different
modules. OMNeT++ helps to reduce complexity by mandating the
communication between modules using predefined connections.
The graphical runtime environment allows the user to follow
module interactions to a certain extent: one can animate, slow
down or single-step the simulation, but sometimes it is still hard
to see the exact sequence of the events, or to grasp the timing
relationships (as, for practical reasons, simulation time is not
proportional to real time; also, when single-stepping through
events, events with the same timestamp get animated
sequentially).
OMNeT++ helps the user to visualize the interaction by logging
interactions between modules to a file. This log file can be
processed after (or even during) the simulation run and can be
used to draw interaction diagrams. The OMNeT++ IDE has a
sequence chart diagramming tool which provides a sophisticated
view of how the events follow each other. One can focus on all, or
just selected modules, and display the interaction between them.
The tool can analyze and display the causes or consequences of an
event, and display all of them (using a non-linear time axis) on a
single screen even if time intervals between events are of different
magnitudes. One can go back and forth in time and filter for
modules and events.
Figure 6. Screenshot of a Sequence Chart from the OMNeT++ IDE
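Under the assumption of a recent OMNeT++ version, enabling this event log is a one-line configuration matter, sketched below.

```
[General]
record-eventlog = true   # produce the event log consumed by the
                         # Sequence Chart tool in the IDE
```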
## 2.13 Organizing and Performing Experiments
The ultimate goal of running a simulation is to obtain results and
to get some insight into the system by analyzing the results.
Thorough simulation studies very often produce large amounts of
data, which are nontrivial to organize in a meaningful way.
OMNeT++ organizes simulation runs (and the results they
generate) around the following concepts:
**model** – the executable (C++ model files, external libraries, etc.) and NED files. (INI files are considered to be part of the study and experiment rather than the model.) Model files are considered to be invariant for the purposes of experimentation, meaning that if a C++ source or NED file gets modified, then it will count as a different model.

**study** – a series of experiments to study some phenomenon on one or more models; e.g. “handover optimization for mobile IPv6”. For a study one usually performs a number of experiments from which conclusions can be drawn. One study may contain experiments on different models, but one experiment is always performed on one specific model.

**experiment** – exploration of a parameter space on a model, e.g. “the `adhocNetwork` model’s behavior with numhosts=5,10,20,50,100 and load=2..5 step 0.1 (Cartesian product)”; consists of several measurements.

**measurement** – a set of simulation runs on the same model with the same parameters (e.g. “numhosts=10, load=3.8”), but potentially different seeds. May consist of several replications whose results get averaged to supply one data point for the experiment. A measurement can be characterized by the parameter settings and simulation kernel settings in the INI file, minus the seeds.

**replication** – one repetition of a measurement. Very often, one would perform several replications, all with different seeds. A replication can be characterized by the seed values it uses.

**run** – or actual run: one instance of running the simulation; that is, a run can be characterized by an exact time/date and the computer (e.g. the host name).
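These concepts map naturally onto INI files. The sketch below, with illustrative names and option syntax following recent OMNeT++ versions, defines a small parameter study whose measurements are each replicated five times with different seeds.

```
[Config AdhocExperiment]
network = adhocNetwork
**.numHosts = ${N=5,10,20,50,100}      # one measurement axis
**.load = ${load=2..4 step 1}          # another measurement axis
repeat = 5                             # five replications per measurement
seed-set = ${repetition}               # a different seed set for each one
```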
OMNeT++ supports the execution of whole (or partial)
experiments as a single batch. After specifying the model
(executable file + NED files) and the experiment parameters (in
the INI file) one can further refine which measurements one is
interested in. The simulation batch can be executed and its
progress monitored from the IDE. Multiple CPUs or CPU cores
can be exploited by letting the launcher run more than one
simulation at a time. The significance of running multiple
independent simulations concurrently is often overlooked, but it is
not only a significantly easier way of reducing overall execution
time of an experiment than distributed parallel simulation (PDES)
but also more efficient (as it guarantees linear speedup which is
not possible with PDES).
## 2.14 Result Analysis
Analyzing the simulation result is a lengthy and time consuming
process. In most cases the user wants to see the same type of data
for each run of the simulation or display the same graphs for
different modules in the model, so automation is very important.
(The user does not want to repeat the steps of re-creating charts
every time simulations have to be re-run for some reason.) The
lack of automation support drives many users away from existing
GUI analysis tools, and forces them to write scripts.
OMNeT++ solves this by making result analysis rule-based.
Simulations and series of simulations produce various result files.
The user selects the input of the analysis by specifying file names
or file name patterns (e.g. "adhoc-*.vec"). Data of interest can be
selected into datasets by further pattern rules. The user completes
datasets by adding various processing, filtering and charting steps,
all using the GUI (Figure 7). Whenever the underlying files or
their contents change, dataset contents and charts are recalculated.
The editor only saves the "recipe" and not the actual numbers, so
when simulations are re-run and so result files get replaced, charts
are automatically up-to-date. Data in result files are tagged with
meta information: experiment, measurement and replication labels
are added to the result files to make the filtering process easy. It is
possible to create very sophisticated filtering rules, for example,
“all 802.11 retry counts of host[5..10] in experiment X, averaged
over replications”. In addition, datasets can use other datasets as their input, so datasets can build on each other.
Figure 7. Rule-based processing
OMNeT++ supports several fully customizable chart and graph
types which are rendered directly from datasets (Figure 8). The
visual properties of the charts are also stored in the “recipe”.
Figure 8. Charts in the OMNeT++ IDE
## 3. CONTRIBUTIONS TO OMNeT++
Currently there are two major network simulation model
frameworks for OMNeT++: the Mobility Framework [17][18] and
the INET Framework [1].
The Mobility Framework was designed at TU Berlin to provide
solid foundations for creating wireless and mobile networks
within OMNeT++. It provides a detailed radio model, several
mobility models, MAC models including IEEE 802.11b, and
several other components. Other model frameworks for mobile,
ad-hoc and sensor simulations [26][33][13] have also been
published (LSU SenSim [25][26] and Castalia [19][20], for
example), but they have so far failed to make significant impact.
Further related simulation models are NesCT for TinyOS [21]
simulations, MACSimulator and Positif [13], which are continued
in the MiXiM [5] project, EWsnSim, SolarLEACH, ChSim [27],
AdHocSim, AntNet, etc.
The INET Framework has evolved from the IPSuite originally
developed at the University of Karlsruhe. It provides detailed
protocol models for TCP, IPv4, IPv6, Ethernet, IEEE 802.11b/g, MPLS, OSPFv2, and several other protocols. INET also includes
the Quagga routing daemon directly ported from Linux code base.
Several authors have developed various extensions for the INET
Framework. OverSim [22][23][24] is used to model P2P
protocols on top of the INET Framework. AODV-UU and DSR are also available as add-ons for the INET Framework. IPv6Suite
[45] (discontinued by 2007) supported MIPv6 and HMIPv6
simulations over wired and wireless networks.
The OppBSD [44] model allows using the FreeBSD kernel
TCP/IP protocol stack directly inside an OMNeT++ simulation.
Other published simulation models include Infiniband [28],
FieldBus [14] and SimSANs [43].
A very interesting application area of OMNeT++ is the modeling
of dynamic behavior of software systems based on the UML
standard, by translating annotated UML diagrams into OMNeT++
models. A representative of this idea is the SYNTONY project
[30][31][32]; similar approach have been reported in [35] where
the authors used UML-RT, and in [34] where performance
characteristics of web applications running on the JBoss
Application Server were studied.
The Simulation Library API can be mapped to programming
languages other than C++. There is already third-party support for Java and C#, which makes it possible to write simple module
behavior in these languages.
## 4. COMPARISON WITH OTHER SIMULATION TOOLS
The network simulation scene has changed a lot in the past ten
years, simulation tools coming and going. This section presents an
overview of various commercial and noncommercial network
simulation tools in wide use today, and compares them to
OMNeT++. Specialized network simulators (like TOSSIM, for
TinyOS simulations), and simulation packages not or rarely used
for network simulations (such as Ptolemy or Ptolemy II) are not
considered. Also, the discussion only covers the features and
services of the simulation environments themselves, but not the
availability or characteristics of specific simulation models like
IPv6 or QoS (the reason being that they do not form part of the
OMNeT++ simulation package.)
## 4.1 NS
NS-2 [11] is currently the most widely used network simulator in
academic and research circles. NS-2 does not follow the same
clear separation of simulation kernel and models as OMNeT++:
the NS-2 distribution contains the models together with their
supporting infrastructure, as one inseparable unit. This is a key
difference: the NS-2 project goal is to build a network simulator,
while OMNeT++ intends to provide a simulation _platform, on_
which various research groups can build their own simulation
frameworks. The latter approach is what called the abundance of
OMNeT++-based simulation models and model frameworks into
existence, and turned OMNeT++ into a kind of an “ecosystem”.
NS-2 lacks many tools and infrastructure components that
OMNeT++ provides: support for hierarchical models, a graphical
editor, GUI-based execution environment (except for nam),
separation of models from experiments, graphical analysis tools,
simulation library features such as multiple RNG streams with
arbitrary mapping and result collection, seamlessly integrated
parallel simulation support, etc. This is because the NS-2 project
concentrates on developing the simulation models, and much less
on simulation infrastructure.
NS-2 is a dual-language simulator: simulation models are Tcl
scripts², while the simulation kernel and various components
(protocols, channels, agents, etc) are implemented in C++ and are
made accessible from the Tcl language. Network topology is
expressed as part of the Tcl script, which usually deals with
several other things as well, from setting parameters to adding
application behavior and recording statistics. This architecture
makes it practically impossible to create graphical editors for
NS-2 models³.
NS-3 is an ongoing effort to consolidate all patches and recently
developed models into a new version of NS. Although work
includes refactoring of the simulation core as well, the concepts
are essentially unchanged. The NS-3 project goals [36] include some features (e.g. parallel simulation, use of real-life protocol implementations as simulation models) that have already proven to be useful with OMNeT++.

² In fact, OTcl, which is an object-oriented extension to Tcl.
³ Generating a Tcl script from a graphical representation is of course possible, but not the other way round: no graphical editor will ever be able to understand an arbitrary NS-2 script, and let the user edit it graphically.
## 4.2 J-Sim
J-Sim [37][38] (formerly known as JavaSim) is a component-based, compositional simulation environment, implemented in
Java. J-Sim is similar to OMNeT++ in that simulation models are
hierarchical and built from self-contained components, but the
approach of assembling components into models is more like
NS-2: J-Sim is also a dual-language simulation environment, in
which classes are written in Java, and glued together using Tcl (or
Java). The use of Tcl in J-Sim has the same drawback as with
NS-2: it makes implementing graphical editors impossible. In fact,
J-Sim does provide a graphical editor (gEditor), but its native
format is XML. Although gEditor can export Tcl scripts,
developers recommend that XML files are directly loaded into the
simulator, bypassing Tcl. This way, XML becomes the equivalent
of OMNeT++ NED. However, the problem with XML as native
file format is that it is hard for humans to read and write.
Simulation models are provided in the Inet package, which
contains IPv4, TCP, MPLS and other protocol models.
The fact that J-Sim is Java-based has some implications. On the one hand, model development and debugging can be significantly faster than in C++, due to the existence of excellent Java development tools. On the other hand, simulation performance is significantly weaker than with C++, and it is also not possible to reuse existing real-life protocol implementations written in C as simulation models. (The feasibility and usefulness of the latter have been demonstrated with OMNeT++, whose simulation models include a port of the Quagga Linux routing daemon, the TCP stack from the FreeBSD kernel, a port of the UU-AODV routing package, etc. The NS-3 team has similar plans as well.)
Development of the J-Sim core and simulation models seems to have stalled after 2004, when version 1.3 was published; later
entries on the web site are patches and contributed documents
only. There are no independent (3rd party) simulation models for
J-Sim.
## 4.3 SSFNet
SSFNet [39] (Scalable Simulation Framework) is defined as a
“public-domain standard for discrete-event simulation of large,
complex systems in Java and C++.” The SSFNet standard defines
a minimalist API (which, however, was designed with parallel
simulation in mind). The topology and configuration of SSFNet
simulations are given in DML files. DML is a text-based format
comparable to XML, but with its own syntax. DML can be considered the SSFNet equivalent of NED; however, it lacks the expressive power and features needed to scale up to support large model frameworks built from reusable components. SSFNet also lacks an equivalent of OMNeT++'s INI files: all parameters need to be given in the DML.
SSFNet has four implementations: DaSSF and CSSF in C++, and
two Java implementations (Renesys Raceway and JSSF). There
were significantly more simulation models developed for the Java
versions than for DaSSF. Advantages and disadvantages of using
Java in SSFNet are the same as discussed with J-Sim.
As with J-Sim, development of the SSFNet simulation framework and models seems to have stalled after 2004 (the date of the SSFNet
for Java 2.20 release), and little activity can be detected outside
the main web site as well.
## 4.4 JiST and SWANS
JiST [42][6] represents a very interesting approach to building a
high-performance Java-based simulation environment. It modifies
the Java Virtual Machine to run the programs in simulation time
instead of real time. JiST is basically just a simulation kernel, and
as such, it lacks most of the features present in the OMNeT++
package.
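To make the notion of running programs "in simulation time" concrete, the generic sketch below (plain C++, explicitly not JiST code, since JiST itself is Java-based and works by rewriting the JVM) shows the mechanism that every kernel discussed in this section provides in some form: a future event list ordered by timestamp, with a clock that jumps from event to event instead of following wall-clock time.

```cpp
// Generic discrete-event loop, for illustration only.
#include <cstdio>
#include <functional>
#include <queue>
#include <vector>

struct Event {
    double time;                       // simulation timestamp
    std::function<void()> action;
    bool operator>(const Event& other) const { return time > other.time; }
};

int main() {
    // Future event list: a min-priority queue ordered by timestamp.
    std::priority_queue<Event, std::vector<Event>, std::greater<Event>> fel;
    double now = 0;

    fel.push({2.5, [] { std::puts("packet arrives"); }});
    fel.push({1.0, [] { std::puts("timer fires"); }});

    while (!fel.empty()) {
        Event e = fel.top();
        fel.pop();
        now = e.time;                  // the clock advances in jumps
        std::printf("t=%.1f: ", now);
        e.action();
    }
}
```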
SWANS is a scalable wireless network simulator built atop the JiST platform as a proof-of-concept model, to demonstrate the efficiency of the virtual-machine-based approach. It appears that no further
simulation models have been created by the JiST team or
independent groups, and development of JiST/SWANS seems to have halted after 2005.
## 4.5 OPNET Modeler
OPNET Modeler is the flagship product of OPNET Technologies
Inc. [16]. OPNET Modeler is a commercial product which is
freely available worldwide to qualifying universities. OPNET has
probably the largest selection of ready-made protocol models
(including IPv6, MIPv6, WiMAX, QoS, Ethernet, MPLS,
OSPFv3 and many others).
OPNET and OMNeT++ provide rich simulation libraries of
roughly comparable functionalities. The OPNET simulation
library is based on C, while the one in OMNeT++ is a C++ class
library. OPNET's architecture is similar to that of OMNeT++ in that it allows hierarchical models with arbitrarily deep nesting, but with some restrictions (namely, the "node" level cannot be hierarchical). A significant difference from OMNeT++
is that OPNET models are always of fixed topology, while
OMNeT++'s NED and its graphical editor allow parametric
topologies. In OPNET, the preferred way of defining network
topology is by using the graphical editor. The editor stores models
in a proprietary binary file format, which means in practice that
OPNET models are usually difficult to generate by program (it
requires writing a C program that uses an OPNET API, while
OMNeT++ models are simple text files which can be generated
e.g. with Perl).
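To illustrate this point with a hedged sketch: because NED is plain text, any small program can emit a parametric topology. The paper mentions Perl; the C++ equivalent below generates a ring network, where the module type Node, the gate names, the output file name, and the NED dialect (modern, OMNeT++ 4-style syntax) are all chosen for illustration only.

```cpp
// Illustrative generator for a parametric NED topology (a ring of n nodes).
#include <fstream>

int main() {
    const int n = 8;  // ring size, chosen arbitrarily
    std::ofstream ned("ring.ned");
    ned << "network Ring\n{\n    submodules:\n";
    for (int i = 0; i < n; i++)
        ned << "        node" << i << ": Node;\n";
    ned << "    connections:\n";
    for (int i = 0; i < n; i++)
        ned << "        node" << i << ".out --> node" << (i + 1) % n << ".in;\n";
    ned << "}\n";
}
```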
Both OPNET and OMNeT++ provide a graphical debugger and
some form of automatic animation which is essential for easy
model development.
OPNET does not provide source code to the simulation kernel
(although it ships with the sources of the protocol models).
OMNeT++ – like NS-2 and most other non-commercial tools –
is fully public-source, allowing much easier source-level debugging.
OPNET's main advantage over OMNeT++ is definitely its large
protocol model library, while its closed nature (proprietary binary
file formats and the lack of source code) makes development and
problem solving harder.
## 4.6 Qualnet
Qualnet [41] is a commercial simulation environment, mainly for wireless networks, which has a significant client base in the military. Qualnet has evolved from the Parsec parallel simulation "language"⁴ [12] developed at the UCLA Parallel Computing Laboratory (PCL), and from the GloMoSim (Global Mobile system Simulation) model library written on top of Parsec. The Parsec language divides the simulation model into entities and provides a minimalistic simulation API (timers, etc.) for them. Entities are implemented with coroutines. Because coroutine CPU stacks require relatively large amounts of memory (the manual recommends reserving 200 KBytes each), it is rarely feasible to map the natural units of the simulation (say, hosts and routers, or protocols) one-to-one onto entities. What GloMoSim and Qualnet models do instead is implement the equivalent of the OMNeT++ model structure in model space, above the Parsec runtime. The Parsec kernel is only used to provide event scheduling and parallel simulation services.

Parsec provides a very efficient parallel simulation infrastructure, and the models (the GloMoSim and Qualnet simulation models) have been written with parallel execution in mind⁵, resulting in excellent parallel performance for wireless network simulations.

⁴ It extends the C language with some constructs; Parsec programs are translated into C before compilation.

⁵ Lookahead annotations, avoiding central components, etc.
## 4.7 Summary
In this section we have examined the simulation packages most
relevant for analysis of telecommunication networks, and
compared them to OMNeT++. NS-2 is still the most widely used network simulator in academia, but it lacks much of the infrastructure provided by OMNeT++. The other three open-source network simulation packages examined (J-Sim, SSFNet and JiST/SWANS) have failed to gain significant acceptance, and their project web pages indicate near inactivity since 2004.
We have examined two commercial products as well. Qualnet
emphasizes wireless simulations. OPNET has foundations similar to those of OMNeT++, but ships with an extensive model library and
provides several additional programs and GUI tools.
## 5. CONCLUSIONS
In this paper we presented an overview of the OMNeT++ discrete
event simulation platform, designed to support the simulation of
telecommunication networks and other parallel and distributed
systems. The OMNeT++ approach significantly differs from that
of NS-2, the most widely used network simulator in academic and
research circles: while the NS-2 (and NS-3) project goal is to
build a network simulator, OMNeT++ aims at providing a rich
simulation platform, and leaves creating simulation models to
independent research groups. The last ten years have shown that
the OMNeT++ approach is viable, and several OMNeT++-based
open-source simulation models and model frameworks have been
published by various research groups and individuals.
## 6. REFERENCES
[1] OMNeT++ Home Page. http://www.omnetpp.org [accessed
on September, 2007]
[2] Varga, A. 2001. The OMNeT++ Discrete Event Simulation System. In Proceedings of the European Simulation Multiconference (ESM 2001), June 6-9, 2001, Prague, Czech Republic.
[3] Kaage, U., V. Kahmann, F. Jondral. 2001. An OMNeT++ TCP Model. To appear in Proceedings of the European Simulation Multiconference (ESM 2001), June 7-9, Prague.
[4] Wehrle, K, J. Reber, V. Kahmann. 2001. “A Simulation
Suite for Internet Nodes with the Ability to Integrate
Arbitrary Quality of Service Behavior”. In Proceedings of
_the Communication Networks and Distributed Systems_
_Modeling and Simulation Conference 2001, Phoenix (AZ),_
USA, January 7-11.
[5] [MiXiM home page. http://sourceforge.net/projects/mixim/](http://sourceforge.net/projects/mixim/)
[accessed on September, 2007]
[6] [JiST home page. http://jist.ece.cornell.edu [accessed on](http://jist.ece.cornell.edu/)
September, 2007]
[7] Varga, A. and Gy. Pongor. 1997. Flexible Topology
Description Language for Simulation Programs. In
_Proceedings of the 9th European Simulation Symposium_
_(ESS'97), pp.225-229, Passau, Germany, October 19-22._
[8] Varga, A. and B. Fakhamzadeh. 1997. The K-Split Algorithm for the PDF Approximation of Multi-Dimensional Empirical Distributions without Storing Observations. In Proc. of the 9th European Simulation Symposium (ESS'97), pp. 94-98, October 19-22, Passau, Germany.
[9] Varga, A. 1998. Parameterized Topologies for Simulation
Programs. In Proceedings of the Western Multiconference
_on Simulation (WMC'98), Communication Networks and_
_Distributed Systems (CNDS'98). San Diego, CA, January_
11-14.
[10] Jain, R. and I. Chlamtac. 1985. The P² Algorithm for Dynamic Calculation of Quantiles and Histograms Without Storing Observations. Communications of the ACM, 28, no. 10 (Oct.): 1076-1085.
[11] Bajaj, S., L. Breslau, D. Estrin, K. Fall, S. Floyd, P. Haldar,
M. Handley, A. Helmy, J. Heidemann, P. Huang, S. Kumar,
S. McCanne, R. Rejaie, P. Sharma, K. Varadhan, Y. Xu, H.
Yu and D. Zappala. 2000. Improving simulation for network
research. _IEEE Computer. (to appear, a preliminary draft is_
currently available as USC technical report 99-702)
[12] Bagrodia, R, R. Meyer, M. Takai, Y. Chen, X. Zeng, J.
Martin, B. Park, H. Song. 1998. Parsec: A Parallel
Simulation Environment for Complex Systems. Computer,
Vol. 31(10), October, pp. 77-85.
[13] Consensus home page.
[http://www.consensus.tudelft.nl/software.html [accessed on](http://www.consensus.tudelft.nl/software.html)
September, 2007]
[14] FieldBus home page.
[http://developer.berlios.de/projects/fieldbus [accessed on](http://developer.berlios.de/projects/fieldbus)
September, 2007]
[15] Davis, J, M. Goel, C. Hylands, B. Kienhuis, E.A. Lee, J. Liu,
X. Liu, L. Muliadi, S. Neuendorffer, J. Reekie, N. Smyth, J.
Tsay and Y. Xiong. 1999. Overview of the Ptolemy Project.
ERL Technical Report UCB/ERL No. M99/37, Dept. EECS,
University of California, Berkeley, CA 94720, July.
[16] OPNET Technologies, Inc. OPNET Modeler.
[http://www.opnet.com [accessed on September, 2007]](http://www.opnet.com/)
[[17] Mobility Framework. http://mobility-fw.sourceforge.net](http://mobility-fw.sourceforge.net/)
[accessed on September, 2007]
[18] W. Drytkiewicz, S. Sroka, V. Handziski, A. Koepke, and H. Karl. 2003. A Mobility Framework for OMNeT++. 3rd International OMNeT++ Workshop (Budapest University of Technology and Economics, Department of Telecommunications, Budapest, Hungary, January 2003). [http://www.tkn.tu-berlin.de/~koepke/](http://www.tkn.tu-berlin.de/~koepke/)
[19] D. Pediaditakis, S. H. Mohajerani, and A. Boulis. 2007.
Poster Abstract: Castalia: the Difference of Accurate
Simulation in WSN. 4th European Conference on Wireless
Sensor Networks, (Delft, The Netherlands, 29-31 January
2007).
[20] Castalia: A Simulator for WSN.
[http://castalia.npc.nicta.com.au. [accessed on September,](http://castalia.npc.nicta.com.au/)
2007]
[[21] NesCT: A language translator. http://nesct.sourceforge.net](http://nesct.sourceforge.net/)
[accessed on September, 2007]
[22] OverSim:The Overlay Simulation Framework
[http://www.oversim.org [accessed on September, 2007]](http://www.oversim.org/)
[23] Ingmar Baumgart and Bernhard Heep and Stephan Krause.
2007. OverSim: A Flexible Overlay Network Simulation
Framework. Proceedings of 10th IEEE Global Internet
Symposium (May, 2007). p.79-84.
[24] Ingmar Baumgart and Bernhard Heep and Stephan Krause. 2007. A P2PSIP Demonstrator Powered by OverSim. Proceedings of 7th IEEE International Conference on Peer-to-Peer Computing (P2P2007), Galway, Ireland, Sep 2007, pp. 243-244.
[25] C. Mallanda, A. Suri, V. Kunchakarra, S.S. Iyengar, R.
Kannan, A. Durresi, and S. Sastry. 2005. Simulating
Wireless Sensor Networks with OMNeT++, submitted to
IEEE Computer, 2005
[http://csc.lsu.edu/sensor_web/publications.html](http://csc.lsu.edu/sensor_web/publications.html)
[[26] Sensor Simulator. http://csc.lsu.edu/sensor_web [accessed](http://csc.lsu.edu/sensor_web)
on September, 2007]
[27] S. Valentin. 2006. ChSim - A wireless channel simulator for
OMNeT++, (TKN TU Berlin Simulation workshop, Sep.
[2006) http://www.cs.uni-paderborn.de/en/research-](http://www.cs.uni-paderborn.de/en/research-group/research-group-computer-networks/projects/chsim.html)
[group/research-group-computer-](http://www.cs.uni-paderborn.de/en/research-group/research-group-computer-networks/projects/chsim.html)
[networks/projects/chsim.html](http://www.cs.uni-paderborn.de/en/research-group/research-group-computer-networks/projects/chsim.html)
[28] Mellanox Technologies: InfiniBand model:
[http://www.omnetpp.org/filemgmt/singlefile.php?lid=133](http://www.omnetpp.org/filemgmt/singlefile.php?lid=133)
[29] I. Dietrich, C. Sommer, F. Dressler. Simulating DYMO in OMNeT++. Friedrich-Alexander-Universität Erlangen-Nürnberg, 2007. Internal report.
[30] Isabel Dietrich, Volker Schmitt, Falko Dressler and
Reinhard German, 2007. "SYNTONY: Network Protocol
Simulation based on Standard-conform UML 2 Models,"
Proceedings of 1st ACM International Workshop on
Network Simulation Tools (NSTools 2007), Nantes, France,
October 2007.
[31] I. Dietrich, C. Sommer, F. Dressler, and R. German. 2007.
Automated Simulation of Communication Protocols
Modeled in UML 2 with Syntony. Proceedings of GI/ITG
Workshop Leistungs-, Zuverlässigkeits- und
Verlässlichkeitsbewertung von Kommunikationsnetzen und
verteilten Systemen (MMBnet 2007), Hamburg, Germany,
September 2007. pp. 104-115.
[[32] Syntony home page. http://www7.informatik.uni-](http://www7.informatik.uni-erlangen.de/syntony)
[erlangen.de/syntony](http://www7.informatik.uni-erlangen.de/syntony) [accessed on September, 2007]
[33] Feng Chen, Nan Wang, Reinhard German and Falko
Dressler, 2008. "Performance Evaluation of IEEE 802.15.4
LR-WPAN for Industrial Applications," Proceedings of 5th
IEEE/IFIP Conference on Wireless On demand Network
Systems and Services (IEEE/IFIP WONS 2008), Garmisch-Partenkirchen, Germany, January 2008.
[34] A. Hennig, D. Revill and M. Pönitsch. 2003. From UML to
Performance Measures - Simulative Performance Predictions
of IT-Systems using the JBoss Application Server with
OMNET++. Proceedings of ESS2003 conference. Siemens
AG, Corporate Technology, CT SE 1.
[35] Michael, J. B., Shing, M., Miklaski, M. H., and Babbitt, J.
D. 2004. Modeling and Simulation of System-of-Systems
Timing Constraints with UML-RT and OMNeT++. In
Proceedings of the 15th IEEE International Workshop on Rapid System Prototyping (RSP'04) - Volume 00 (June 28-30, 2004). IEEE Computer Society, Washington, DC, 202-209. DOI: http://dx.doi.org/10.1109/RSP.2004.30.
[36] T. R. Henderson, S. Roy, S. Floyd, G. F. Riley. ns3 Project
Goals. WNS2 ns-2: The IP Network Simulator, Pisa, Italy, Oct. 10, 2006.
[http://www.nsnam.org/docs/meetings/wns2/wns2-ns3.pdf](http://www.nsnam.org/docs/meetings/wns2/wns2-ns3.pdf)
[37] Ahmed Sobeih, Wei-Peng Chen, Jennifer C. Hou, Lu-Chuan
Kung, Ning Li, Hyuk Lim, Hung-Ying Tyan, and Honghai
Zhang. J-Sim: a simulation and emulation environment for
wireless sensor networks. IEEE Wireless Communications
Magazine, Vol. 13, No. 4, pp. 104--119, August 2006.
[[38] J-SIM home page: http://www.j-sim.org [accessed on](http://www.j-sim.org/)
[September, 2007]](http://www.j-sim.org/)
[39] Cowie, J. H., Nicol, D. M., and Ogielski, A. T. 1999.
Modeling the Global Internet. Computing in Science and
Engg. 1, 1 (Jan. 1999), 42-50.
[DOI=http://dx.doi.org/10.1109/5992.743621](http://dx.doi.org/10.1109/5992.743621)
[40] X. Zeng, R. Bagrodia, M. Gerla. GloMoSim: a Library for
Parallel Simulation of Large-scale Wireless Networks.
PADS '98, May 26-29, 1998 in Banff, Alberta, Canada.
[[41] Qualnet home page: http://www.qualnet.com [accessed on](http://www.qualnet.com/)
September, 2007]
[42] R. Barr, Z. J. Haas, R. van Renesse. 2004. JiST: Embedding
Simulation Time into a Virtual Machine. Proceedings of
EuroSim Congress on Modelling and Simulation, September
2004. Computer Science and Electrical Engineering, Cornell
University, Ithaca NY 14853.
[[43] SimSAN home page. http://simsan.storwav.com/ [accessed](http://simsan.storwav.com/)
on September, 2007]
[44] OppBSD home page.
[https://projekte.tm.uka.de/trac/OppBSD [accessed on](https://projekte.tm.uka.de/trac/OppBSD)
September, 2007]
[45] E. Wu, S. Woon, J. Lai and Y. A. Sekercioglu, 2005.
"IPv6Suite: A Simulation Tool for Modeling Protocols of
the Next Generation Internet", In Proceedings of the Third
International Conference on Information Technology:
Research and Education (ITRE 2005), June 2005, Taiwan.
[46] Zeigler, B. 1990. Object-oriented Simulation with
Hierarchical, Modular Models. Academic Press, 1990.
[47] Chow, A and Zeigler, B. 1994. Revised DEVS: A Parallel,
Hierarchical, Modular Modeling Formalism. In Proceedings
_of the Winter Simulation conference 1994._
#### ECONOMICS OF ENTERPRISES:
ECONOMICS AND MANAGEMENT OF ENTERPRISE
ISSN 2664-9969
### UDC 336.02:336.64
JEL Classification: G32
DOI: 10.15587/2706-5448.2023.285749
## Eugine Nkwinika, Segun Akinola
# THE IMPORTANCE OF FINANCIAL MANAGEMENT IN SMALL AND MEDIUM-SIZED ENTERPRISES (SMEs): AN ANALYSIS OF CHALLENGES AND BEST PRACTICES
#### The object of research is the importance of financial management in Small and Medium-sized Enterprises (SMEs),
focusing on challenges, best practices, and future trends. Financial management in SMEs is an important aspect that influences their growth, sustainability, and competitiveness. The paper begins by defining SMEs and highlighting the significance of financial management for their success. It emphasizes the need for SME owners to understand financial concepts, make informed decisions, and prioritize financial planning to ensure sound business operations. Insights from real-world case studies showcase successful financial management practices adopted by SMEs.
Government policies and support for SME financial management are also explored, with a focus on initiatives, tax incentives, and access to financial advisory services. These government interventions play a crucial role in empowering SMEs with the necessary resources and guidance for effective financial management.
Moreover, the review delves into future trends, such as emerging technologies (AI, blockchain, IoT) and regulatory changes, and their potential impact on financial management for SMEs. It discusses the challenges and opportunities in financial forecasting, highlighting the use of data analytics and predictive modeling for improved accuracy.
In conclusion, this review underscores the significance of financial management for SMEs, emphasizing the need for financial literacy, technology adoption, and compliance with regulatory changes. By embracing best practices and government support, SMEs can achieve long-term financial stability and thrive in dynamic business environments. As SMEs continue to evolve in the digital era, effective financial management remains vital for their sustainable growth and success.
Keywords: financial management, financial literacy, cash flow management, financial risk management, financial technology, financial resources.
_Received date: 08.08.2023_
_Accepted date: 22.09.2023_
_Published date: 28.09.2023_
**_How to cite_**
_© The Author(s) 2023_
_This is an open access article_
_under the Creative Commons CC BY license_
_Nkwinika, E., Akinola, S. (2023). The importance of financial management in small and medium-sized enterprises (SMEs): an analysis of challenges and_
_best practices. Technology Audit and Production Reserves, 5 (4 (73)), 12–20. doi: https://doi.org/10.15587/2706-5448.2023.285749_
### 1. Introduction
Small and Medium-sized Enterprises (SMEs) play a crucial role within the worldwide economy, driving innovation, employment, and economic growth [1]. The definition of SMEs varies across countries; in general, however, they are characterized by their relatively small scale and constrained resources compared to larger companies. SMEs frequently face unique challenges in managing their financial affairs, making financial management a vital aspect of their sustainability and success [2]. The importance of financial management in SMEs cannot be overstated. Efficient financial management is crucial for optimizing resource allocation, ensuring liquidity, and improving the overall financial performance of these firms [3]. Effective financial management practices enable SMEs to make informed decisions, manage risks, and seize growth opportunities [4]. It also contributes to building investor confidence, attracting external funding, and preserving a competitive edge in the market [5]. The aim _of this study is to explore the challenges and best_ practices associated with financial management in SMEs. By identifying and understanding these key elements, researchers, policymakers, and business practitioners can gain valuable insights into the dynamics of financial management in SMEs and formulate strategies to enhance their financial health and sustainability.
### 2. Materials and Methods
This review paper focuses on the financial management practices of SMEs across various industries and geographical areas. Its goals are to analyze the challenges that SMEs encounter in managing their finances and to provide an in-depth examination of the best practices followed by successful SMEs.
The review encompasses a comprehensive evaluation of relevant academic literature, empirical research, and reports from reputable sources. The examination covers subjects related to financial planning, budgeting, cash flow management, financing options, risk management, and financial reporting, among others.
Furthermore, the review assesses the effect of macroeconomic factors and government policies on the financial management of SMEs. Understanding the external influences that affect SMEs' financial decision-making offers a broader context for the challenges and opportunities they face.
The insights derived from this evaluation can serve as a foundation for future research on the subject of SME financial management.
Moreover, policymakers and practitioners can benefit from the evidence-based recommendations to devise effective support mechanisms and policies that foster the financial growth and stability of SMEs.
In sum, this review paper aims to shed light on the importance of financial management in SMEs and offer a comprehensive assessment of the challenges and best practices in this area. By understanding the intricacies of financial management in SMEs, stakeholders can contribute to enhancing their financial resilience, thereby promoting economic development and prosperity.
### 3. Results and Discussion
3.1. The role of financial management in SMEs
Financial management is a fundamental part of SMEs' operations, encompassing the processes of planning, organizing, controlling, and directing financial resources to attain the company's objectives [6]. It involves analyzing financial information, making informed choices, and implementing strategies to ensure the efficient utilization of funds. SMEs face particular challenges because of their limited assets and exposure to market uncertainties, which makes financial management all the more important for their sustenance and growth [7].
3.1.1. Financial management defined. As one of the few primary functional areas of management, financial management is often regarded as the foundation for the growth and success of any enterprise. Financial management comprises the activities aimed at managing a business's finances in order to meet its financial goals [8]. The definition of financial management is based on how funding sources are mobilized and used. Financial management is important for obtaining the cash needed to finance an enterprise's assets and commercial operations, for distributing funds among competing uses, and for ensuring the effective and efficient use of funds in order to accomplish the organization's main aim and goal. Although it is only one of several functional areas of management, financial management is crucial to the success of SMEs. This definition emphasizes the central function and position of financial management in relation to other specialized areas of business management [9].
Fig. 1 illustrates the central function and position of financial management in relation to specific domains of company management.
Fig. 1. The central position and role of financial management [10]
3.1.2. Significance of financial management for SMEs. Financial management has a favorable impact on competitiveness, the adequacy of business records, and the survival of SMEs. Business records are crucial to SMEs' existence and to their ability to acquire funding from investors and/or financial institutions. Poor cash management harms the financial position and liquidity of businesses, and financial management helps to prevent it. Financial management is crucial for SMEs to maintain liquidity and continue operating. It gives SME owners the information and foresight necessary to anticipate future cash flow issues, which are essential for the sustainability of the company [11]. Allocating financial resources with the aid of financial management increases the likelihood that a business will survive.
3.2. Challenges in financial management for SMEs
3.2.1. Limited financial resources and capital. Resource limitations make it difficult for SMEs to innovate because they cannot afford to experiment, which is essential for the creation of new products. The shortage of resources faced by SMEs also prevents the creation of routines and organizational structures that would be helpful in attracting and developing human potential as well as improving corporate operations. Compared to large companies, SMEs are often regarded as more financially constrained and as having less access to formal financing (long-term loans) [12]. SMEs frequently view access to finance as one of the major obstacles keeping them from operating effectively, in both developed and developing nations. A key factor in the growth and development of African businesses is having sufficient access to financing. The lack of financial assets and capital is one of the biggest problems SMEs encounter while managing their finances. Unlike large businesses, SMEs frequently operate on a smaller scale with limited access to resources. This obstacle makes it difficult for them to expand their operations, invest in research and development, and explore new opportunities. As a result, SMEs must carefully prioritize their financial obligations and use cost-effective strategies to make the most of their limited resources [13].
3.2.2. Access to financing and credit. Giving SMEs credit encourages and boosts economic growth; more credit encourages entrepreneurship in the form of increased firm formation and expansion [14]. Comparing Middle Eastern and Central Asian countries to other nations at similar levels of economic development, the barriers to SME access to funding are greatest in these regions. Related problems include the failure of SMEs to recognize the value of having appropriate finance and their inadequate control of net working capital [15]. Despite the crucial contribution of SMEs to the socioeconomic development of a nation, it has become difficult for them to secure sources of short- to long-term, flexible finance. There are a number of reasons why SMEs do not have access to credit and finance, including the high risk of lending to SMEs, information asymmetry in SME lending, the high administrative transaction costs associated with SME financing, and weak institutional and legal structures [16].
3.2.3. Cash flow management. Most SMEs struggle with inadequate cash management, since there are no cash budgets in place to anticipate future cash flow issues and other financial concerns [17]. The majority of SMEs are unable to distinguish between cash flow and profit. SMEs prioritize profit over cash flow, which is essential for the long-term viability of the company. When a company's cash flow balance is found to be positive, it immediately makes substantial purchases without thinking about the post-dated cheques that were issued or the payments that would need to be made later. Businesses only realize afterward that they lack the money to meet their obligations. SMEs find it difficult to track inflows and outflows of cash because of inadequate cash budgeting and the lack of a business bank account [18]. SMEs concentrate on increasing sales and lowering inventories without taking into account the fact that rising sales are also accompanied by rising debtor levels. If payments are not received as specified in the invoice, the increase in debtors may result in liquidity restrictions. SMEs face a number of difficulties, including bad debts and difficulty paying creditors, which affect the cash flows, profit margins, and liquidity of the company [19]. Operating costs will increase as a result of the difficulties in collecting money from debtors, and the increase in operating costs will affect cash flow. The difficulty of collecting payment from debtors causes an increase in written-off revenue, which places financial limits on the business because of variable expenses and unrecoverable inventories [20].
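Purely as an illustration (neither code nor figures appear in the cited sources), the cash-budget logic described above reduces to a few lines: net the expected inflows against the outflows month by month and flag the periods in which the closing balance turns negative. All amounts below are invented.

```cpp
// Toy cash budget: flags months where the closing balance goes negative.
// All figures are invented for illustration.
#include <cstdio>

int main() {
    double inflows[6]  = {12000, 6000, 15000, 8000, 11000, 14000};
    double outflows[6] = {10000, 13000, 9000, 12000, 10000, 9000};
    double balance = 3000;  // opening cash balance

    for (int m = 0; m < 6; m++) {
        balance += inflows[m] - outflows[m];
        std::printf("month %d: closing balance %.0f%s\n",
                    m + 1, balance, balance < 0 ? "  <-- shortfall" : "");
    }
}
```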
3.2.4. Financial risk management. A key component of financial management for SMEs is managing financial risk. Among these risks is market risk, which is influenced by a number of factors, chief among them the overall level of market competition. Market risk is the strategic risk that SMEs face when it comes to the long-term retention of current consumers, the acquisition and retention of new clients, the creation of novel products, or the provision of novel services [21]. Only a sufficient number of clients enables SMEs to achieve a sales volume that allows them to sustain their market position. The two primary components of the competitive environment, customers and competitors, have a significant impact on how competitive a company is. The ability to manage finite resources that are difficult to replace gives organizations a competitive advantage, which SMEs must develop if they are to thrive. Among these resources is human capital [22]. To succeed and develop, as well as meet customer demand, businesses must be able to innovate new products. The principal objective of any sort of company is to maximize business performance, and managers place high importance on this goal. Because risk is viewed as a crucial component of a company's financial management, the amount of financial risk must be evaluated in terms of how well a firm manages risk in order to make successful financial risk management decisions. Financial risk is one of the main threats facing SMEs [23]. The main indicators of SME financial risk are challenges in business financing and a lack of funds, because the majority of a company's operations are funded by the capital of the owners or managers themselves. This can result in a rise in operational costs and corporate debt due to worries about debt repayment and the ensuing high financial risk. Access to capital is anticipated to raise the bar for a business environment by motivating organizations to pursue more fruitful business prospects.
Competitive advantage and internal SME capabilities have a positive and significant correlation [24]. For SMEs that have been in operation for under five years, this relationship is weaker than it is for more seasoned companies. Innovation has a significant and lasting effect on a company's competitiveness by increasing productivity. As levels of customer satisfaction and loyalty climb, so does support for the purchasing processes. One of the key signs that could affect a tighter business environment is support from suppliers and customers in the commercial sector. In a more constricted business climate, the help of corporate clients has a greater impact. Long-term relationships between SMEs and their suppliers do not always result in improved risk management. Some of the main reasons for business failure include a lack of management planning activities, a lack of working capital, offering customers too much credit, failure to implement rapid outsourcing, market competition, and insufficient monitoring of corporate finances [25]. Other factors may also contribute to an organization's failure. The risk of failure decreases as managers age and as managerial ownership becomes clear. However, where larger management boards and more managers are present, the likelihood of failure increases.
3.3. Best practices in financial management for SMEs
Financial management provides SMEs with essential financial competencies, namely knowledge, attitude, and awareness [26]. The ability to make financial decisions grows with an understanding of the financial markets. The knowledge competency helps the owners of SMEs to successfully balance their assets and liabilities, which is a requirement for business liquidity, and to build an appropriate financial history, which is essential for obtaining external finance. Owners of SMEs must consider their attitude when deciding whether to take risks. Owners of small and medium-sized businesses can then efficiently allocate financial resources to initiatives with higher risks and develop expertise in how to use those resources. Financial management is crucial for risk management and risk diversification, and for achieving an adequate financial mix. Awareness makes it easier to analyze a firm's financial circumstances and to manage its financial resources. Financial management gives SME owners the knowledge and foresight they need to anticipate future cash flow issues, which is essential to their ability to stay in business [27]. Poor cash management harms the financial position and liquidity of the organization, and financial management helps to prevent it. Financial management serves as an analytical tool for future sales estimates, the assets needed to meet future demand, and operational costs. Financial management gives SME owners in-depth knowledge of the relationship between the supply chain, the production process, and operational costs. Owners of SMEs who are trained in financial management are able to create a connection between costs and the appropriate activity, as well as to manage cash flow effectively. Monitoring cash inflows and outflows is crucial to the management of SMEs with constrained financial resources [28]. Tracking cash inflows and outflows provides SMEs with the necessary data to assess their company's competitiveness and provides a means of ensuring their survival during the first year of operation.
3.4. Technology and financial management for SMEs
The application of technology helps SMEs remain competitive and plays a vital role in financial management and sustainability. The financial management of SMEs is related to many new developments in different ways [10]. An association between SMEs' financial management and innovation has been found in earlier studies. The impact of innovation on an SME's financial management can be demonstrated using both financial and non-financial metrics. Some advantages of innovation include the capacity for competition, financial accessibility, connectivity, communication, marketing, and export success. However, other commentators hold a different point of view. It has also been suggested that ignoring innovation's potential downsides can ultimately have a negative effect on the environment and lead to uncontrollable firm expansion. Despite worries about possible negative effects, there is a wealth of research showing that innovation has a positive impact on SMEs' financial management [29].
3.4.1. Role of financial software and tools. Financial software and tools are essential for optimizing financial management procedures in SMEs [30]. These technologies encompass a wide range of applications, such as accounting software, budgeting tools, financial analytics platforms, and cash flow management systems. Financial software saves time and effort by automating repetitive operations, enabling SMEs to concentrate on making strategic decisions. Real-time access to financial records made possible by financial software allows for better financial reporting and analysis [31]. Giving SMEs the knowledge they need to make wise business decisions, it provides insights into important financial indicators including revenue, costs, and profitability. Additionally, these systems usually come with capabilities that can be tailored to the particular demands of SMEs, offering flexibility and scalability as organizations grow.
3.4.2. Fintech solutions for SMEs. FinTech makes it simple for SMEs to obtain finance that will enable them to expand their businesses [32]. FinTech firms are essential for providing SMEs with financial support. FinTech today provides more than simply capital finance; it also includes a wide range of other services, including digital payments and financial mechanisms. FinTech is essential in enhancing SMEs' success since it increases operational efficiency. By providing services like non-cash transactions via applications, fintech lowers operating expenses by relieving firms of bank administrative fees. Furthermore, non-collateral loans give business owners easier access to finance. Financial technology considerably and favorably affects the asset value and capital growth of SMEs. FinTech, however, has no appreciable effects on financial inclusion and stability [33].
3.4.3. Benefits and challenges of adopting financial technology. For SMEs, using financial technology has a number of advantages. First, by automating manual financial operations, it improves efficiency and productivity. By streamlining processes, errors are less likely to occur and SMEs are given more time to concentrate on their main business operations. Second, the accuracy of financial data and financial transparency are improved by financial technology. It is simpler to spot potential financial hazards and opportunities when SMEs have access to real-time data that keeps them informed about their financial performance [34]. Third, fintech solutions offer cost savings to SMEs. Paperless operations and the use of digital platforms lower administrative expenses and improve overall cost-effectiveness. Adopting financial technology, though, also poses difficulties for SMEs. The initial cost associated with implementing and integrating these technologies is one of the main obstacles. SMEs may find it challenging to invest in new systems and receive the requisite training to operate them efficiently [35]. The digitization of financial data and transactions also raises cybersecurity issues. To safeguard confidential financial information from potential online dangers, SMEs must give data security first priority. Lastly, some SMEs may experience resistance to change from staff members accustomed to conventional finance procedures. Successful technology adoption depends on appropriate training and change management techniques.
Finally, as technology and financial management merge more and more, SMEs have access to a wide range of tools and solutions to improve their financial management procedures. With the automation, accessibility, and cost-effectiveness that financial software and fintech solutions offer, SMEs may make better decisions, increase transparency, and enhance their financial outcomes [5]. Although adopting new technology might be difficult, the advantages outweigh the disadvantages, setting up SMEs for greater financial success and growth in the digital age.
3.5. Financial literacy for SME owners
Financial literacy is regarded globally as a basic determinant of business performance, growth, and financial efficiency for SMEs. Financial strategies are goals, patterns, or other approaches designed to improve and optimize financial management in order to achieve corporate objectives [36]. Financial management is made up of these strategies. Without financial knowledge, SME owners find it challenging to differentiate between profit and money in the bank. SMEs find it challenging to build their businesses because of a lack of financial literacy and inadequate financial planning, which makes it difficult for them to manage their cash inflows and outflows. Owners of SMEs generally lack a thorough understanding of financial accounts and of the amount of money required to raise financing. SMEs often lack the managerial skills necessary to run and grow their operations [37]. The failure rate among new business owners is high because there are few organizations that train and assist SME owners in managing and growing their businesses. It is difficult for SMEs to acquire these skills because there are few such institutions and little available capacity.
In the context of the current corporate environment, financial literacy is the capacity to effectively manage financial resources across their life cycles and to interact with financial products and services. By learning more about financial products and about how to evaluate risks and opportunities, both investors and SMEs can gain from improved financial literacy [38]. SMEs that are financially literate and have sufficient resources typically gain access to the loan markets. Because they are financially literate, such SMEs can increase their market share, increase profits and sales, and retain more employees. Financial literacy may make it possible to overcome the financial barriers that prevent SMEs from succeeding. The existence of SMEs depends on providing owners with the knowledge needed to perform financial forecasting and to utilize resources efficiently [39]. With the correct financial mix and an understanding of how to lower risks, for example through asset diversification and gearing, financial challenges are simpler to solve. Financial literacy aids a company's liquidity by maintaining the right ratio of assets to liabilities. Financial literacy's primary objectives are to increase an SME's assets, decrease its obligations, and increase its net profit. SMEs that are financially literate are better equipped to comprehend how finances and operational success are related. These skills help SMEs better manage their debt, make timely payments to creditors, and maintain correct financial records [40]. Financially literate SMEs are better equipped to manage their debt, improve their credit status with current and/or potential creditors, and benefit from paying off their loans early. Business owners who are financially literate may be better able to maintain adequate financial accounts and correct accounting records, which is advantageous when trying to access the credit markets. The level of SMEs' literacy in terms of their understanding of all of their financial possibilities has some bearing on their aspirations.
Financial literacy is a crucial component in boosting an SME's performance. The ability of SMEs to set appropriate goals and plans is what defines their performance [41]. The relationship between financial literacy and company resources has a direct impact on how well SMEs perform. Internal assets known as strategic resources are employed to take advantage of opportunities that develop outside of the organization and give it a competitive edge. These resources, which come in both tangible and intangible forms, can be acquired with the help of financial resources. As opposed to business expertise, which is regarded as an intangible resource, financial and physical resources are considered tangible resources. The performance of SMEs is greatly affected by financial resources [14]. SMEs often lack the financial resources necessary to compete in the market for the newest technologies. Business performance is also affected by the management and allocation of financial resources. The growth and performance of SMEs depend on the availability of financial resources, which has a direct impact on how well the business employs its strategic resources [42]. For a company to succeed and thrive, competitive human resources are crucial. Human capital comes in two forms: knowledge and experience. SMEs perform well when their human capital includes financial knowledge. SMEs with a grasp of finance have the skill set necessary to communicate with a variety of investors, including angel investors, capital investors, and financial institutions. Human resources play an important role in the growth and success of SMEs. Efficient strategic, financial, and human resource management is essential for SMEs to be innovative and productive [43].
3.6. Government policies and support for SMEs
A business often cannot survive its early stages, for a variety of reasons, including a lack of experience, being new to the market, and having a small customer base. A main determinant of the growth of newly founded businesses is government support. It is hardly surprising that governments all across the world have expressed a keen interest in funding such projects. Additionally, SMEs may rely on government assistance at different points in their business cycles, including for starting up, for ongoing operational activities, and even for process innovation [44]. The effectiveness of SMEs in terms of innovation and new technology is affected by government incentives, both directly and indirectly. As a result, it is frequently advised that SMEs strengthen their networking by forming positive relationships with political and governmental institutions in order to gain access to valuable resources or to avoid the effects of government policies that can harm SMEs [45]. According to social network theory, a company with close relationships with suppliers, political organizations, and customers may be able to access rare resources more affordably, improving its performance. Although historically government support for industrial growth was disregarded, the government has recently shown a keen interest in industrial development by investing in R&D and technology. Recent research has demonstrated, from this perspective, the importance of government support for SMEs' performance. For instance, it has been thoroughly explored how SMEs operating in developing markets like Pakistan are encouraged to retain their financial performance and competitive position. Entrepreneurial characteristics do not directly influence how well a firm performs; government support does. Although financial assistance from the Korean government helped Korean SMEs survive over the long term, it was not always beneficial for increased productivity and profitability [46]. Government assistance also enables businesses to take advantage of entrepreneurial prospects on a local and global scale, greatly improving business success. In transition economies, government support has a strong beneficial impact on SME performance and a significant negative impact on new venture performance. The effectiveness of a corporation is directly correlated with government backing, and this link has increased the advantages of various market entry techniques. In addition, a number of other studies have addressed how crucial government assistance is to the development and profitability of newly founded businesses [47].
3.7. Case studies: successful financial management in SMEs
3.7.1. Real-world examples of SMEs with effective financial management practices
3.7.1.1. Case study 1: Fanella (a software development startup). Fanella is a promising software development startup that exemplifies effective financial management practices. The company, founded by three young entrepreneurs, commenced as a small-scale operation with limited financial resources. However, through prudent financial planning and strategic decision-making, Fanella successfully navigated the competitive tech industry and achieved significant growth. One key aspect of their financial management success was their emphasis on cash flow management. They implemented rigorous invoicing and payment tracking systems to ensure timely collections and payments. This allowed them to maintain a healthy cash flow, enabling them to fund their operations and investments without relying heavily on external financing. Additionally, Fanella adopted a conservative debt management approach. Rather than burdening themselves with high-interest loans, they utilized bootstrapping and reinvested profits into their business expansion. By maintaining a low debt-to-equity ratio, they safeguarded their financial stability and minimized financial risks [48].
3.7.1.2. Case study 2: Wandegeya business centre, Kampala (a family-owned SME). Wandegeya business centre, Kampala, a family-owned SME specializing in custom metal fabrication, illustrates the significance of financial planning and budgeting in achieving sustainable growth. Despite facing market fluctuations and economic downturns, Wandegeya business centre, Kampala maintained steady financial performance over the years. The company's success can be attributed to its rigorous financial planning process. They formulated detailed budgets, setting revenue and expense targets for each quarter and year. Regular monitoring and analysis of financial performance against budgeted figures allowed them to identify cost overruns and revenue shortfalls promptly. This enabled Wandegeya business centre, Kampala to implement corrective measures, such as cost-cutting initiatives or diversifying revenue streams, to ensure financial stability [49].
3.7.2. Lessons learned from these case studies
1. _Emphasize Cash Flow Management:_ The case studies demonstrate the critical importance of effective cash flow management. SMEs should prioritize timely invoicing, efficient payment collection, and prudent cash flow forecasting to ensure sufficient liquidity for daily operations and funding opportunities.
2. _Conservative Debt Management:_ SMEs need to be cautious about taking on excessive debt, particularly in the early stages of their operations. Maintaining a healthy debt-to-equity ratio and exploring alternative funding sources can mitigate financial risks and enhance long-term financial stability.
3. _Rigorous Financial Planning and Budgeting:_ The case studies emphasize the importance of thorough financial planning and budgeting. SMEs should set clear financial goals, create detailed budgets, and regularly monitor their financial performance to make data-driven decisions and adapt to market dynamics.
4. _Proactive Decision-Making:_ Successful SMEs take a proactive approach to financial management. They identify potential risks, seize growth opportunities, and implement strategic measures to optimize their financial resources effectively.
5. _Long-Term Focus:_ Both case studies exemplify the value of a long-term financial focus. SMEs with a vision for sustainable growth and financial stability are more likely to endure challenges and achieve success over time.
In conclusion, the case studies of Fanella and Wandegeya business centre, Kampala provide valuable insights into effective financial management practices for SMEs. The lessons learned underscore the significance of cash flow management, conservative debt management, rigorous financial planning, proactive decision-making, and a long-term financial focus. By adopting these practices, SMEs can improve their financial performance, achieve sustainable growth, and navigate the complexities of the business landscape with confidence.
3.8. Future trends in financial management for SMEs
3.8.1. Emerging technologies and their impact. The future of financial management for SMEs will be significantly influenced by emerging technologies that are transforming the financial landscape. One such technology is Artificial Intelligence (AI), which is revolutionizing various financial processes. AI-powered financial management software can automate bookkeeping, financial analysis, and budgeting tasks, enabling SMEs to streamline their financial operations and improve accuracy.
Another crucial technology is blockchain, offering secure and transparent transactional systems. Blockchain can enhance financial data integrity, facilitate faster cross-border transactions, and reduce the need for intermediaries. This technology holds potential for transforming payment systems and improving supply chain finance, benefiting SMEs with improved efficiency and reduced costs.
Moreover, the Internet of Things (IoT) can impact finan
cial control by offering actual-time statistics from connected
gadgets, including stock control systems or manufacturing
equipment. These records can enhance stock forecasting,
maintenance planning, and operational performance, main
to better economic making plans and useful resource allo
cation for SMEs.
3.8.2. Regulatory changes affecting SMEs. The future of financial management for SMEs will also be shaped by evolving regulatory environments. Governments worldwide are focusing on strengthening financial regulations and promoting transparency, particularly after the global financial crisis. SMEs need to stay abreast of changing compliance requirements, tax regulations, and reporting standards to ensure full compliance and avoid potential penalties.

Furthermore, environmental, social, and governance (ESG) reporting is gaining prominence. Regulators and investors are increasingly emphasizing sustainability and responsible business practices. SMEs that incorporate ESG principles into their financial management and reporting will not only enhance their reputation but also attract socially responsible investors.
3.8.3. Forecasting challenges and opportunities. Future trends in financial management for SMEs will present both challenges and opportunities in forecasting. As markets become more complex and volatile, forecasting becomes challenging due to the uncertainty of economic conditions and consumer behavior.

However, advanced data analytics and predictive modeling tools offer SMEs opportunities to improve their forecasting accuracy. Big data analytics can help SMEs identify patterns and trends, making better-informed predictions about market demand, customer preferences, and revenue projections.

Moreover, the integration of AI and machine learning algorithms into financial forecasting can provide SMEs with more sophisticated insights. These technologies can analyze historical financial data, market trends, and external factors to generate accurate forecasts and scenario planning.
Additionally, the rise of alternative data sources, such as social media data or satellite imagery, provides new avenues for gathering insights and improving forecasting models. SMEs that embrace these technologies and data sources can gain a competitive advantage in their financial planning and decision-making.

In conclusion, future trends in financial management for SMEs will be shaped by emerging technologies, regulatory changes, and advancements in forecasting techniques. Embracing AI, blockchain, IoT, and other transformative technologies can enhance financial efficiency and decision-making for SMEs. Staying updated with regulatory changes and adopting ESG principles will ensure compliance and reputation enhancement. Despite forecasting challenges, SMEs can leverage data analytics and predictive modeling tools to make better-informed financial projections. By embracing these future trends, SMEs can position themselves for success in an ever-evolving financial landscape.
3.9. Discussion
3.9.1. Recap of the importance of financial management in SMEs. Financial management plays a pivotal role in the success and sustainability of Small and Medium-sized Enterprises (SMEs). Effective financial management allows SMEs to allocate their limited resources efficiently, make informed business decisions, and navigate financial challenges with confidence. It enables SME owners to assess their financial health, set clear financial goals, and monitor progress toward achieving them. By maintaining sound financial practices, SMEs can enhance their credibility among stakeholders, attract external financing, and seize growth opportunities. Ultimately, financial management empowers SMEs to achieve long-term financial stability and thrive in a competitive business environment.
3.9.2. Key challenges and best practices highlighted. Throughout this review paper, several key challenges and best practices in financial management for SMEs were highlighted.
_Challenges:_
1. _Limited financial resources and capital:_ SMEs face constraints in accessing sufficient funds for growth and development.
2. _Access to financing and credit:_ SMEs encounter difficulties in securing loans and financing from traditional institutions.
3. _Cash flow management:_ Maintaining a steady cash flow is challenging for SMEs, affecting day-to-day operations.
4. _Debt management:_ Managing debt responsibly while balancing growth ambitions is a delicate challenge for SMEs.
5. _Financial risk management:_ SMEs must identify and mitigate various financial risks to safeguard their financial stability.
_Best Practices:_
1. _Budgeting and forecasting: Creating comprehensive_
budgets and forecasts aids SMEs in resource allocation
and strategic planning.
2. _Financial reporting and analysis: Regular financial_
reporting and analysis enable data-driven decision-making.
3. _Working capital management: Efficiently managing_
working capital ensures smooth business operations.
4. _Investment appraisal and decision-making: Rigorous_
evaluation of investment opportunities helps prioritize growth
initiatives.
5. _Financial planning and strategy:_ Aligning financial planning with business strategy enhances financial performance and competitiveness.
3.9.3. Recommendations for improving financial management in SMEs. To further improve financial management in SMEs, several recommendations can be implemented:
1. _Enhance Financial Literacy:_ Governments and industry associations should invest in financial education programs for SME owners to improve their financial literacy and decision-making abilities.
2. _Leverage Technology:_ SMEs should embrace financial software, fintech solutions, and data analytics to streamline financial processes, access real-time data, and enhance forecasting accuracy.
3. _Seek Financial Advisory Services:_ SMEs should seek professional financial advisory services to gain expert insights and guidance on financial planning, risk management, and growth strategies.
4. _Foster Collaboration:_ Governments, financial institutions, and industry associations should collaborate to establish mentorship programs and financing initiatives that benefit SMEs.
5. _Monitor Regulatory Changes:_ SMEs should remain informed about regulatory changes and adapt their financial practices to ensure compliance and take advantage of incentives.
6. _Strengthen Financial Controls:_ Implementing robust financial controls, such as regular audits and segregation of duties, ensures transparency and reduces the risk of financial mismanagement.
7. _Prioritize Long-Term Planning:_ SMEs should focus on long-term financial planning and sustainable growth strategies, avoiding short-term decision-making that could compromise their financial stability.
3.9.4. Limitations of this study
1. _Limited Generalizability:_ The paper may focus primarily on a specific region or industry, which can limit the generalizability of its findings. SMEs in different geographical locations or industries may face unique challenges and opportunities not addressed in the paper.
2. Data Sources: The paper relies on case studies and
insights from real-world examples. These sources may not
provide a comprehensive and representative view of all
SMEs, as successful practices and government support
can vary widely across different contexts.
3. _Future Trends Speculation: While the paper discusses_
future trends, such as emerging technologies and regulatory
changes, it does not provide concrete evidence or data on
how these trends currently impact financial management
in SMEs. It may benefit from more empirical research in
this regard.
4. Lack of Quantitative Analysis: The paper primarily
focuses on qualitative aspects of financial management in
SMEs and lacks quantitative analysis or statistical data.
Quantitative data could provide a more robust foundation
for the paper’s conclusions and recommendations.
### 4. Conclusions
So, it is possible to conclude that financial management is a critical aspect of SMEs' success. By addressing challenges and implementing best practices, SMEs can
optimize their financial performance, make informed decisions, and position themselves for long-term growth and prosperity. Embracing technological advancements,
seeking financial advice, and staying updated on regulatory
changes will further enhance SMEs’ financial management
practices. As governments and industry stakeholders con
tinue to support SMEs with resources and programs, the
future looks promising for SME financial management,
driving economic growth and innovation.
### Conflict of interest
The authors declare that they have no conflict of interest in relation to this study, including financial, personal, authorship, or any other, that could affect the study and its results presented in this article.
### Financing
The research was performed without financial support.
### Data availability
The manuscript has no associated data.
References
**1.** Olowofela, O., Kuforiji, O., Odekeye, O., Olaiya, K. I. (2022).
Financial Inclusion and Growth of Small and Medium Sized
Enterprises: Evidence from Nigeria. _Izvestiya Journal of the_
_University of Economics – Varna, 66 (3-4), 198–212. doi: https://_
doi.org/10.56065/ijuev2022.66.3-4.198
**2.** Dewi, G. C., Yulianah, Y., Alimbudiono, R. S., Kurniawan, D.
(2023). Application of Business Strategy to Create Competitive
Advantage in Indonesian Micro, Small and Medium Enterprises.
_Jurnal Manajemen Bisnis, 10 (1), 77–83._
**3.** Adjei-Boateng, E. S. (2023). A Literature Review on Management
Practices among Small and Medium-Sized Enterprises. _Journal_
_of Engineering Applied Science and Humanities, 8 (1),_ 1–23.
**4.** Mbaye, M. H. (2023). _Effective working Capital Management_
_Practice and SMEs’ financial performance: The case of SMEs_
_operating in the service and construction sectors in Senegal. Uni_
versity of Wales Trinity Saint David.
**5.** Hendayani, N., Muzakir, M., Yuliana, Y., Asir, M., Wahab, A.
(2022). Best Practice of Financial Management in SMEs Ope
ration in Digital times. _Budapest International Research and_
_Critics Institute-Journal (BIRCI-Journal), 5 (1), 3350–3361._
**6.** Nunden, N., Abbana, S., Marimuthu, F., Sentoo, N. (2022). An
assessment of management skills on capital budgeting planning
and practices: evidence from the small and medium enterprise
sector. Cogent Business & Management, 9 (1). doi: https://
doi.org/10.1080/23311975.2022.2136481
**7.** Casagranda, F. (2020). _The Chinese online market: opportunities_
_and challenges for Italian SMEs. Universit Ca’ Foscari Venezia._
**8.** Helmold, M., Samara, W. (2019). _Progress in performance_
_management._ Springer International Publishing. doi: https://
doi.org/10.1007/978-3-030-20534-8
**9.** Lu, J., Shon, J., Zhang, P. (2019). Understanding the Disso
lution of Nonprofit Organizations: A Financial Management
Perspective. _Nonprofit and Voluntary Sector Quarterly, 49 (1),_
29–52. doi: https://doi.org/10.1177/0899764019872006
**10.** Zada, M., Yukun, C., Zada, S. (2019). Effect of financial mana
gement practices on the development of small-to-medium size
forest enterprises: insight from Pakistan. _GeoJournal, 86 (3),_
1073–1088. doi: https://doi.org/10.1007/s10708-019-10111-4
**11.** Halim, H. A., Zainal, S. R. M., Ahmad, N. H. (2022). Strategic
Foresight and Agility: Upholding Sustainable Competitive
ness Among SMEs During COVID-19 Pandemic. International
_Journal of Economics & Management, 16, 81–97. doi: https://_
doi.org/10.47836/ijeamsi.16.1.006
**12.** Teka, B. M. (2022). Determinants of the sustainability and
growth of micro and small enterprises (MSEs) in Ethiopia:
literature review. _Journal of Innovation and Entrepreneurship,_
_11 (1)._ doi: https://doi.org/10.1186/s13731-022-00261-0
**13.** Haji Karimian, S. (2023). _Productivity in road pavement main_
_tenance & rehabilitation projects: perspectives of New Zealand_
_roading contractors on the constraints and improvement measures._
Massey University.
**14.** Zarrouk, H., Sherif, M., Galloway, L., Ghak, T. E. (2020). En
trepreneurial Orientation, Access to Financial Resources and
SMEs’ Business Performance: The Case of the United Arab
Emirates. _The Journal of Asian Finance, Economics and Busi_
_ness, 7 (12), 465–474. doi: https://doi.org/10.13106/jafeb.2020._
vol7.no12.465
**15.** Simon-Oke, O. O. (2020). Working Capital Management –
Performance Relationship: A Study of Small and Medium En
terprises in Akure, Nigeria. International Journal of Small Busi
_ness and Entrepreneurship Research, 8 (2), 32–42. doi: https://_
doi.org/10.37745/ijsber.vol8.no.2p32-42.2020
**16.** Baloyi, F., Khanyile, M. B. (2022). Innovative mechanisms to
improve access to funding for the black-owned small and me
dium enterprises in South Africa. The Southern African Journal
_of Entrepreneurship and Small Business Management, 14 (1)._
doi: https://doi.org/10.4102/sajesbm.v14i1.578
**17.** Ramli, A., Yekini, L. S. (2022). Cash Flow Management among
Micro-Traders: Responses to the COVID-19 Pandemic. Sustain
_ability, 14 (17), 10931. doi: https://doi.org/10.3390/su141710931_
**18.** Wadesango, N., Tinarwo, N., Sitcha, L., Machingambi, S. (2019).
The impact of cash flow management on the profitability and
sustainability of small to medium sized enterprises. International
_Journal of Entrepreneurship, 23 (3), 1–19._
**19.** Nkwinika, S. E. R., Mashau, P. (2020). Evaluating the financial
challenges affecting the competitiveness of small businesses in
South Africa. _Gender and Behaviour, 18 (1),_ 15151–15162.
**20.** Mugarura, F. (2021). _Effects of accounts re_
_ceivable management on the financial performance of construction_
_companies in Rwanda: A case of NPD Ltd. University of Rwanda._
**21.** Yeon, G., Hong, P. C., Elangovan, N., Divakar, G. M. (2022).
Implementing strategic responses in the COVID-19 market
crisis: a study of small and medium enterprises (SMEs) in
India. _Journal of Indian Business Research, 14 (3), 319–338._
doi: https://doi.org/10.1108/jibr-04-2021-0137
**22.** Dwikat, S. Y., Arshad, D., Mohd Shariff, M. N. (2023). Ef
fect of Competent Human Capital, Strategic Flexibility and
Turbulent Environment on Sustainable Performance of SMEs
in Manufacturing Industries in Palestine. Sustainability, 15 (6),
4781. doi: https://doi.org/10.3390/su15064781
**23.** Grondys, K., Ślusarczyk, O., Hussain, H. I., Androniceanu, A.
(2021). Risk Assessment of the SME Sector Operations during
the COVID-19 Pandemic. International Journal of Environmental
_Research and Public Health, 18 (8), 4183. doi: https://doi.org/_
10.3390/ijerph18084183
**24.** Farida, I., Setiawan, D. (2022). Business Strategies and Com
petitive Advantage: The Role of Performance and Innovation.
_Journal of Open Innovation: Technology, Market, and Complexity,_
_8 (3), 163. doi: https://doi.org/10.3390/joitmc8030163_
**25.** Cantú, A., Aguiñaga, E., Scheel, C. (2021). Learning from Failure
and Success: The Challenges for Circular Economy Implementa
tion in SMEs in an Emerging Economy. _Sustainability, 13 (3),_
1529. doi: https://doi.org/10.3390/su13031529
**26.** Rosyadah, K. (2020). The influence of financial knowledge,
financial attitudes and personality to financial management
behavior for micro, small and medium enterprises typical food of
coto makassar. JHSS (Journal of Humanities and Social Studies),
_4 (2),_ 152–156. doi: https://doi.org/10.33751/jhss.v4i2.2468
**27.** Morales, R. P. (2023). Financial Literacy of Micro Entrepre
neurs in Daet, Camarines Norte. Iconic research and engineering
_journals, 6 (12),_ 114–141.
**28.** Islam, A., Mansoor, A., Rahman, M., Abd Wahab, S. (2020). Аdjust
ing a strategic cash-flow model for bangladeshi small and medium
enterprises: the art of surviving COVID-19 emergency. Business
_Excellence and Management, S.I. (1), 194–213. doi: https://_
doi.org/10.24818/beman/2020.s.i.1-16
**29.** Du, L., Razzaq, A., Waqas, M. (2022). The impact of COVID-19
on small- and medium-sized enterprises (SMEs): empirical
evidence for green economic implications. _Environmental Sci_
_ence and Pollution Research, 30 (1), 1540–1561. doi: https://_
doi.org/10.1007/s11356-022-22221-7
**30.** Gao, J. (2022). Research on Financial Management Informa
tization Mode of SME under Cloud Computing. International
_Journal of Science and Research (IJSR), 11 (7),_ 793–796.
doi: https://doi.org/10.21275/sr22712093816
**31.** Kotios, D., Makridis, G., Walser, S., Kyriazis, D., Monferrino, V.
(2022). Personalized finance management for smes. _Big Data_
_and Artificial Intelligence in Digital Finance. Springer, 215–232._
doi: https://doi.org/10.1007/978-3-030-94590-9_12
**32.** Utami, N., Sitanggang, M. L. (2021). The Effect of Fintech
Implementation on The Performance of SMEs. Journal of Inter
_national Conference Proceedings, 4 (3), 407–417. doi: https://_
doi.org/10.32535/jicp.v4i3.1342
**33.** Muthuswamy, V. V., Sharma, A. (2023). Role of Emerging
Financial Technology on Environmental and Social Governance
of Textile Companies in Saudi Arabia. _Cuadernos de Economía,_
_46 (130), 64–72._
**34.** Alkhawaldeh, B. Y., Alhawamdeh, H., Al-Afeef, M. A. M.,
Al-Smadi, A. W., Almarshad, M., Fraihat, B. A. M., Sou
madi, M. M., Nawasra, M., Alaa, A. A. (2023). The effect of
financial technology on financial performance in Jordanian
SMEs: The role of financial satisfaction. Uncertain Supply Chain
_Management, 11 (3), 1019–1030. doi: https://doi.org/10.5267/_
j.uscm.2023.4.020
**35.** Masood, T., Sonntag, P. (2020). Industry 4.0: Adoption chal
lenges and benefits for SMEs. _Computers in Industry, 121,_
103261. doi: https://doi.org/10.1016/j.compind.2020.103261
**36.** Tuffour, J. K., Amoako, A. A., Amartey, E. O. (2020). Assessing
the Effect of Financial Literacy Among Managers on the Perfor
mance of Small-Scale Enterprises. Global Business Review, 23 (5),
1200–1217. doi: https://doi.org/10.1177/0972150919899753
**37.** Utami, E. S., Aprilia, M. R., Putra, I. C. A. (2021). Financial
literacy of micro, small, and medium enterprises of consumption
sector in Probolinggo city. _Jurnal Manajemen Dan Kewirausa_
_haan, 23 (1), 10–17. doi: https://doi.org/10.9744/jmk.23.1.10-17_
**38.** Oppong, C., Salifu Atchulo, A., Akwaa-Sekyi, E. K., Grant, D. D.,
Kpegba, S. A. (2023). Financial literacy, investment and per
sonal financial management nexus: Empirical evidence on pri
vate sector employees. _Cogent Business & Management, 10 (2)._
doi: https://doi.org/10.1080/23311975.2023.2229106
**39.** Balen, J., Nojeem, L., Bitala, W., Junta, U., Browndi, I. (2023).
Essential Determinants for Assessing the Strategic Agility Frame
work in Small and Medium-sized Enterprises (SMEs). European
_Journal of Scientific and Applied Sciences, 10 (2023), 2124–2129._
**40.** Mutamimah, M., Tholib, M., Robiyanto, R. (2021). Corporate
governance, credit risk, and financial literacy for small medium
enterprise in Indonesia. Business: Theory and Practice, 22 (2),
406–413. doi: https://doi.org/10.3846/btp.2021.13063
**41.** Kitsios, F., Kamariotou, M. (2019). Strategizing information
systems: An empirical analysis of IT alignment and success
in SMEs. Computers, 8 (4), 74. doi: https://doi.org/10.3390/
computers8040074
**42.** Memon, A., Yong An, Z., Memon, M. Q. (2019). Does financial
availability sustain financial, innovative, and environmental
performance? Relation via opportunity recognition. _Corporate_
_Social Responsibility and Environmental Management, 27 (2),_
562–575. Portico. doi: https://doi.org/10.1002/csr.1820
**43.** Rodrigues, M., Franco, M., Silva, R., Oliveira, C. (2021). Suc
cess Factors of SMEs: Empirical Study Guided by Dynamic
Capabilities and Resources-Based View. Sustainability, 13 (21),
12301. doi: https://doi.org/10.3390/su132112301
**44.** Bakhtiari, S., Breunig, R., Magnani, L., Zhang, J. (2020). Fi
nancial Constraints and Small and Medium Enterprises: A Re
view. Economic Record, 96 (315), 506–523. doi: https://doi.org/
10.1111/1475-4932.12560
**45.** García-Pérez-de-Lema, D., Madrid-Guijarro, A., Duréndez, A.
(2022). Operating, financial and investment impacts of Covid-19
in SMEs: Public policy demands to sustainable recovery con
sidering the economic sector moderating effect. _International_
_Journal of Disaster Risk Reduction, 75, 102951. doi: https://_
doi.org/10.1016/j.ijdrr.2022.102951
**46.** Agwaniru, A. (2023). _ICT as a Strategy for Sustainable Small_
_and Medium Enterprises in Nigeria. California Baptist University._
**47.** Balaji, M., Dinesh, S. N., Raja, S., Subbiah, R., Manoj Ku
mar, P. (2022). Lead time reduction and process enhancement
for a low volume product. _Materials Today: Proceedings, 62,_
1722–1728. doi: https://doi.org/10.1016/j.matpr.2021.12.240
**48.** Aruho, A. (2021). _Impact of financial management practices_
_on the performance of small and medium enterprises (SMEs)_
_in Uganda: case study of Wandegeya business centre, Kampala._
Makerere University.
**49.** Nurmadewi, D., Mahendrawathi, E. R. (2019). Analyzing Link
age Between Business Process Management (BPM) Capability
and Information Technology: A Case Study in Garment SMEs.
_Procedia Computer Science, 161, 935–942. doi: https://doi.org/_
10.1016/j.procs.2019.11.202
*Eugine Nkwinika, _Doctor of Business Administration, Johan_
_nesburg Business School, University of Johannesburg, Johannesburg,_
_South Africa, e-mail: sthembison@uj.ac.za, ORCID: https://orcid.org/_
_0000-0001-7626-4051_
**_Segun Akinola,_** _PhD in Electrical/Electronic Engineering, Johan_
_nesburg Business School, University of Johannesburg, Johannesburg,_
_South Africa, ORCID: https://orcid.org/0000-0003-1565-7825_
*Corresponding author
-----
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.15587/2706-5448.2023.285749?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.15587/2706-5448.2023.285749, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "GOLD",
"url": "https://journals.uran.ua/tarp/article/download/285749/282012"
}
| 2,023
|
[
"JournalArticle",
"Review"
] | true
| 2023-09-28T00:00:00
|
[
{
"paperId": "38eaf7a0a7a9333527511ac2e3c08bcdfaf899cc",
"title": "Financial literacy, investment and personal financial management nexus: Empirical evidence on private sector employees"
},
{
"paperId": "c3eef88ac3fee5d00d5233d43ea820c2bd421bf4",
"title": "Application of Business Strategy to Create Competitive Advantage in Indonesian Micro, Small and Medium Enterprises"
},
{
"paperId": "8699dcce8a4d996c73fabf9c0723f9cb47987f79",
"title": "Effect of Competent Human Capital, Strategic Flexibility and Turbulent Environment on Sustainable Performance of SMEs in Manufacturing Industries in Palestine"
},
{
"paperId": "8d3bd064fd7cc513d6be74e96836beed306996b7",
"title": "Financial Inclusion and Growth of Small and Medium Sized Enterprises: Evidence from Nigeria"
},
{
"paperId": "d7ac66fad800b71f42af33d998724fb967df6730",
"title": "Determinants of the sustainability and growth of micro and small enterprises (MSEs) in Ethiopia: literature review"
},
{
"paperId": "8f04be6eb4e410ef04d7c1901509e8799db77f9f",
"title": "Strategic Foresight and Agility: Upholding Sustainable Competitiveness Among SMEs During COVID-19 Pandemic"
},
{
"paperId": "0fabb5684d5c1de42f0dc50434b0edaa98c3385c",
"title": "Innovative mechanisms to improve access to funding for the black-owned small and medium enterprises in South Africa"
},
{
"paperId": "c0ad4d585649898c249386b322396550a742d217",
"title": "An assessment of management skills on capital budgeting planning and practices: evidence from the small and medium enterprise sector"
},
{
"paperId": "f7282b2d7647150b7f8f1a7494cda2aee43e0e90",
"title": "Business Strategies and Competitive Advantage: The Role of Performance and Innovation"
},
{
"paperId": "06c347e709ad9612565e78079d6494147b35895c",
"title": "Cash Flow Management among Micro-Traders: Responses to the COVID-19 Pandemic"
},
{
"paperId": "525ff01657557ce76ed63e194fdff665876e275f",
"title": "The impact of COVID-19 on small- and medium-sized enterprises (SMEs): empirical evidence for green economic implications"
},
{
"paperId": "09a9e760d4d9fef188d737800c0024546c54626a",
"title": "Research on Financial Management Informatization Mode of SME under Cloud Computing"
},
{
"paperId": "39c407cb34fa9bd7599f7c6021ec13c6888457e3",
"title": "Operating, financial and investment impacts of Covid-19 in SMEs: Public policy demands to sustainable recovery considering the economic sector moderating effect"
},
{
"paperId": "ad61b081dbfb99d299fc9099c074b7757ad52dbe",
"title": "Implementing strategic responses in the COVID-19 market crisis: a study of small and medium enterprises (SMEs) in India"
},
{
"paperId": "0f4e9f4071b0087c5e015e20bc884ba7dddbfef5",
"title": "The Effect of Fintech Implementation on The Performance of SMEs"
},
{
"paperId": "67af2390a482aea6d81ee5f442fa85021730cb03",
"title": "Lead time reduction and process enhancement for a low volume product"
},
{
"paperId": "901b364ad8ac58d3e11028f92941cac9ae614c9a",
"title": "Success Factors of SMEs: Empirical Study Guided by Dynamic Capabilities and Resources-Based View"
},
{
"paperId": "4efe10a28a5ad68d326ec11cca6946cdb3fd8260",
"title": "CORPORATE GOVERNANCE, CREDIT RISK, AND FINANCIAL LITERACY FOR SMALL MEDIUM ENTERPRISE IN INDONESIA"
},
{
"paperId": "9c9e99cbfd0e1b6ed8777e39dc7988e57934b805",
"title": "Risk Assessment of the SME Sector Operations during the COVID-19 Pandemic"
},
{
"paperId": "4afc6b0404d8baf24370390bf79789c9dfae66d3",
"title": "FINANCIAL LITERACY OF MICRO, SMALL, AND MEDIUM ENTERPRISES OF CONSUMPTION SECTOR IN PROBOLINGGO CITY"
},
{
"paperId": "293f6f63dc7405b0ea1540d872344e034afa5879",
"title": "Learning from Failure and Success: The Challenges for Circular Economy Implementation in SMEs in an Emerging Economy"
},
{
"paperId": "c011a7029032bb935793c60dbcbf3371c5b95bdb",
"title": "ADJUSTING A STRATEGIC CASH-FLOW MODEL FOR BANGLADESHI SMALL AND MEDIUM ENTERPRISES: THE ART OF SURVIVING COVID-19 EMERGENCY"
},
{
"paperId": "d5c37a19ad8b21099e2c4549ceb39420107d04fb",
"title": "Industry 4.0: Adoption challenges and benefits for SMEs"
},
{
"paperId": "5f8d95a628f3812b4030ced35549a5bf0683efe8",
"title": "THE INFLUENCE OF FINANCIAL KNOWLEDGE, FINANCIAL ATTITUDES AND PERSONALITY TO FINANCIAL MANAGEMENT BEHAVIOR FOR MICRO, SMALL AND MEDIUM ENTERPRISES TYPICAL FOOD OF COTO MAKASSAR"
},
{
"paperId": "0df3ea830575baa7de553f12330f8fed976162ec",
"title": "Financial Constraints and Small and Medium Enterprises: A Review"
},
{
"paperId": "6a9bbf497eff7aca040562b1f3f0dfa415a41f2b",
"title": "Working Capital Management –Performance Relationship: A Study of Small and Medium Enterprises in Akure, Nigeria"
},
{
"paperId": "73a820ea21bd5a1aa13362f593aac5876aab5aa8",
"title": "Does financial availability sustain financial, innovative, and environmental performance? Relation via opportunity recognition"
},
{
"paperId": "1b9b30b77c38cbb64c141ca1eaaf8454986b1ac5",
"title": "Assessing the Effect of Financial Literacy Among Managers on the Performance of Small-Scale Enterprises"
},
{
"paperId": "291e1aea5106955e5c467a85ab4aeea2af71dafa",
"title": "Understanding the Dissolution of Nonprofit Organizations: A Financial Management Perspective"
},
{
"paperId": "99e8623cbb4843e4ca22bea514c99cd91c84691e",
"title": "Effect of financial management practices on the development of small-to-medium size forest enterprises: insight from Pakistan"
},
{
"paperId": "4b66ac98df1b6f1ca03b00fc769f9209f6d63fb2",
"title": "Strategizing Information Systems: An Empirical Analysis of IT Alignment and Success in SMEs"
},
{
"paperId": "f2e36efee4f8f5641b8424ded1ec7038eec47bf9",
"title": "The effect of financial technology on financial performance in Jordanian SMEs: The role of financial satisfaction"
},
{
"paperId": null,
"title": "Productivity in road pavement maintenance & rehabilitation projects: perspectives of New Zealand roading contractors on the constraints and improvement measures"
},
{
"paperId": null,
"title": "Essential Determinants for Assessing the Strategic Agility Framework in Small and Medium-sized Enterprises (SMEs)"
},
{
"paperId": null,
"title": "Role of Emerging Financial Technology on Environmental and Social Governance of Textile Companies in Saudi Arabia"
},
{
"paperId": null,
"title": "Effective working Capital Management Practice and SMEs’ financial performance: The case of SMEs operating in the service and construction sectors in Senegal"
},
{
"paperId": null,
"title": "A Literature Review on Management Practices among Small and Medium-Sized Enterprises"
},
{
"paperId": null,
"title": "Personalized finance management for smes"
},
{
"paperId": null,
"title": "Best Practice of Financial Management in SMEs Operation in Digital times"
},
{
"paperId": null,
"title": "TECHNOLOGY AUDIT AND PRODUCTION RESERVES — № 5/4(73), 11"
},
{
"paperId": null,
"title": "Effects of accounts re-ceivable management on the financial performance of construction companies in Rwanda: A case of NPD"
},
{
"paperId": null,
"title": "The Chinese online market: opportunities"
},
{
"paperId": null,
"title": "Evaluating the financial challenges affecting the competitiveness of small businesses in South Africa"
},
{
"paperId": null,
"title": "En-trepreneurial Orientation, Access to Financial Resources and SMEs’ Business Performance: The Case of the United Arab Emirates"
},
{
"paperId": "8b196510ed0750af56aa3dd80279b1a91284dd15",
"title": "Analyzing Linkage Between Business Process Management (BPM) Capability and Information Technology: A Case Study in Garment SMEs"
},
{
"paperId": "646ac926411ce211dbf6ce36b18eab6eae576171",
"title": "The Impact of Cash Flow Management on the Profitability and Sustainability of Small to Medium Sized Enterprises"
},
{
"paperId": null,
"title": "and challenges for Italian SMEs"
},
{
"paperId": "385104d888b02b57c6fcb9177ea9bb0b91c7757b",
"title": "Financial Literacy of Micro Entrepreneurs in Daet, Camarines Norte"
},
{
"paperId": null,
"title": "working capital ensures smooth business operations. 4. Investment appraisal and decision-making: Rigorous evaluation of investment opportunities helps prioritize growth initiatives"
}
] | 14,821
|
en
|
[
{
"category": "Engineering",
"source": "s2-fos-model"
},
{
"category": "Environmental Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/0130f7d0c517b61a63e1018d5a1ba30c13ab16ab
|
[] | 0.821035
|
Optimal Locating and Sizing of BESSs in Distribution Network Based on Multi-Objective Memetic Salp Swarm Algorithm
|
0130f7d0c517b61a63e1018d5a1ba30c13ab16ab
|
Frontiers in Energy Research
|
[
{
"authorId": "2072713343",
"name": "Sui Peng"
},
{
"authorId": "121139968",
"name": "Xianfu Gong"
},
{
"authorId": "2110656546",
"name": "Xinmiao Liu"
},
{
"authorId": "2116676328",
"name": "Xun Lu"
},
{
"authorId": "2849453",
"name": "X. Ai"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"Front Energy Res"
],
"alternate_urls": [
"https://www.frontiersin.org/journals/energy-research",
"http://www.frontiersin.org/Energy_Research/archive"
],
"id": "757ec547-4fbf-4010-b6ae-c6b833ccd3a4",
"issn": "2296-598X",
"name": "Frontiers in Energy Research",
"type": "journal",
"url": "http://www.frontiersin.org/Energy_Research"
}
|
Battery energy storage systems (BESSs) are a key technology to accommodate the uncertainties of RESs and load demand. However, BESSs at an improper location and size may result in no-reasonable investment costs and even unsafe system operation. To realize the economic and reliable operation of BESSs in the distribution network (DN), this paper establishes a multi-objective optimization model for the optimal locating and sizing of BESSs, which aims at minimizing the total investment cost of BESSs, the power loss cost of DN and the power fluctuation of the grid connection point. Firstly, a multi-objective memetic salp swarm algorithm (MMSSA) was designed to derive a set of uniformly distributed non-dominated Pareto solutions of the BESSs allocation scheme, and accumulate them in a retention called a repository. Next, the best compromised Pareto solution was objectively selected from the repository via the ideal-point decision method (IPDM), where the best trade-off among different objectives was achieved. Finally, the effectiveness of the proposed algorithm was verified based on the extended IEEE 33-bus test system. Simulation results demonstrate that the proposed method not only effectively improves the economy of BESSs investment but also significantly reduces power loss and power fluctuation.
|
Edited by:
Bo Yang,
Kunming University of Science and
Technology, China
Reviewed by:
Yixuan Chen,
The University of Hong Kong,
Hong Kong, SAR China
Yang Li,
Northeast Electric Power University,
China
*Correspondence:
Xinmiao Liu
[lxm2021@foxmail.com](mailto:lxm2021@foxmail.com)
Specialty section:
This article was submitted to
Smart Grids,
a section of the journal
Frontiers in Energy Research
Received: 10 May 2021
Accepted: 07 June 2021
Published: 27 July 2021
Citation:
Peng S, Gong X, Liu X, Lu X and Ai X
(2021) Optimal Locating and Sizing of
BESSs in Distribution Network Based
on Multi-Objective Memetic Salp
Swarm Algorithm.
Front. Energy Res. 9:707718.
[doi: 10.3389/fenrg.2021.707718](https://doi.org/10.3389/fenrg.2021.707718)
# Optimal Locating and Sizing of BESSs in Distribution Network Based on Multi-Objective Memetic Salp Swarm Algorithm
Sui Peng [1], Xianfu Gong [1], Xinmiao Liu [2]*, Xun Lu [2] and Xiaomeng Ai [3]
1Grid Planning and Research Center, Guangdong Power Grid Corporation, China Southern Power Grid Company Limited,
Guangzhou, China, [2]Guangdong Power Grid Corporation, China Southern Power Grid Company Limited, Guangzhou, China,
3State Key Laboratory of Advanced Electromagnetic Engineering and Technology, School of Electrical and Electronic
Engineering, Huazhong University of Science and Technology, Wuhan, China
### Battery energy storage systems (BESSs) are a key technology to accommodate the uncertainties of RESs and load demand. However, BESSs at an improper location and size may result in no-reasonable investment costs and even unsafe system operation. To realize the economic and reliable operation of BESSs in the distribution network (DN), this paper establishes a multi-objective optimization model for the optimal locating and sizing of BESSs, which aims at minimizing the total investment cost of BESSs, the power loss cost of DN and the power fluctuation of the grid connection point. Firstly, a multi-objective memetic salp swarm algorithm (MMSSA) was designed to derive a set of uniformly distributed non-dominated Pareto solutions of the BESSs allocation scheme, and accumulate them in a retention called a repository. Next, the best compromised Pareto solution was objectively selected from the repository via the ideal-point decision method (IPDM), where the best trade-off among different objectives was achieved. Finally, the effectiveness of the proposed algorithm was verified based on the extended IEEE 33- bus test system. Simulation results demonstrate that the proposed method not only effectively improves the economy of BESSs investment but also significantly reduces power loss and power fluctuation.
Keywords: distribution networks, battery energy storage systems, optimal locating and sizing, multi-objective
memetic salp swarm algorithm, ideal-point decision method
## INTRODUCTION
In recent years, distributed generators (DGs) and controllable load in the distribution network (DN)
have continued to increase, meaning that the traditional DN faces many challenges (Sepulveda
Rangel et al., 2018; Liu et al., 2020; Peng et al., 2020). At present, one obvious tendency is that the
rapid-developed photovoltaic (PV) and wind turbine (WT) power generation technologies make the
permeability of distributed PV and WT in the DN higher. A series of problems ensue, such as voltage
quality declination and power supply reliability reduction, etc (Wang et al., 2014; Yu et al., 2016; Sun
et al., 2020). The active power through the line increases at the peak of power load, the loss increases,
and a large voltage offset appears at the end of the line (Kerdphol et al., 2016a; Zhou et al., 2021).
Battery energy storage systems (BESSs) have the characteristics of flexibility and fast response and
are an effective way to solve the above problems. The application of BESSs can greatly improve the
connection of renewable energy sources (RESs) (Kerdphol et al., 2016b; Gan et al., 2019; Hlal et al., 2019). BESSs can effectively reduce the peak-to-off-peak load difference, delay power grid upgrades, alleviate power supply capacity shortages in the transition phase of the power grid, improve the reliability and stability of the power grid, and optimize the power flow of the grid, as well as improving the economic benefits of system operation (Chong et al., 2016; Chong et al., 2018; Murty and Kumar, 2020). BESSs could provide a new direction for large-scale RESs integration, which is one of the most effective ways to enable renewable energy grid access (Trovão and Antunes, 2015; Liu et al., 2018; Wu et al., 2019).
However, prudent BESSs allocation and sizing in DN
determine the satisfactory performance of BESSs applications.
The optimal allocation and sizing of BESSs are crucial for the
power quality improvement of DN and transmission system
protection settings. Once BESSs are connected to the DN, the dispatching system of the DN sends dispatching instructions to the BESSs according to the real-time running state of the system load, and the BESSs then absorb power from or inject power into the network through their two-way energy flow (He et al., 2017; Jia et al., 2017; He et al., 2020). This two-way power regulation can save investment and improve the reliability and economy of BESSs.
If the location and sizing of BESSs are not set reasonably, or the
operation strategy adopted fails to efficiently play the role of
BESSs, the voltage quality may deteriorate, and further increase
investment and operation costs (Li et al., 2020). To enable us to
take full advantage of distributed BESSs and make their access to
the DN have a positive impact, it is important to select the
appropriate location and sizing of BESSs based on the appropriate
operation strategy (Li et al., 2018).
Recently, a large number of scholars have performed studies in this field (Yang et al., 2020). One line of work (Oudalov et al., 2007) optimizes the location and power capacity of BESSs by calculating the sensitivity of network losses, thereby reducing the power loss of the DN. In one study (Pang et al., 2019), a semi-definite relaxation method was proposed to solve the optimal BESSs allocation problem. Another study (Wong et al., 2019) introduces a whale optimization algorithm for the optimal location and sizing of BESSs, although the optimization results do not achieve a significant breakthrough.
not achieve a significant breakthrough.
This paper devises a multi-objective optimization model
considering total investment cost, power loss cost, and power
fluctuation for optimal BESSs locating and sizing. For the sake of
solving this model, a multi-objective memetic salp swarm
algorithm (MMSSA) is proposed to search the non-dominated
solutions of BESSs allocation strategy, which reach significant
improvement and better balance on the global exploration and
local exploitation abilities compared with the salp swarm
algorithm (SSA). Furthermore, the ideal-point decision method
(IPDM) is adapted to objectively determine the optimal weight
coefficients of each objective function and then select the best
compromised solution. To verify the effectiveness, the proposed
model and algorithm are implemented in the extended IEEE-33
bus test system.
The rest of this paper is organized as follows: Problem Formulation develops the multi-objective optimization model. In Multi-Objective Memetic Salp Swarm Algorithm Based on Pareto, MMSSA based on IPDM is introduced. Case studies are undertaken in Case Studies. Finally, Conclusion summarizes the main contributions of this study.

TABLE 1 | The economic parameters of BESSs.

| Parameters | Values |
| --- | --- |
| Installation cost | 1,470,000 ($ per BESS) |
| Equipment cost | 175,000 ($/MW); 225,000 ($/(MW·h)) |
| O&M cost | 4,000 ($/(MW·year)); 2,000 ($/((MW·h)·year)) |
| Lifetime | 20 (years) |
## PROBLEM FORMULATION
Objective Functions
The optimal allocation of BESSs is a multi-objective optimization problem with multiple variables and constraints. To realize the economic and reliable operation of BESSs in the DN, a multi-objective optimization model is established based on the Pareto principle, where minimizing the total investment cost of BESSs, the power loss cost, and the power fluctuation are the main objectives.
### Total Investment Cost
This paper focuses on the DN that has been built and operated, so
the investment and construction costs of DN other than BESSs
are not included in the cost model. The economic parameters of
BESSs are provided in Table 1, extracted from a previous study
(Behnam and Sanna, 2015). The total investment cost is
considered as the annual costs of BESSs, which can be
mathematically formulated as follows
$$\min F_1 = C_{ins} + C_{equ} + C_{om} \quad (1)$$
where F1 is the annual total investment cost of BESSs; Cins, Cequ,
and Com represent the annual installation cost, equipment cost,
and operation and maintenance (O&M) cost, respectively.
The annual installation cost of BESSs is expressed as
$$C_{ins} = C_{cap} \cdot N_{BESS} \cdot \mu_{CRF} \quad (2)$$

where $C_{cap}$ means the installation cost per BESS; $N_{BESS}$ is the number of BESSs deployed in the DN; $\mu_{CRF}$ denotes the capital recovery factor (CRF), which translates the costs throughout the useful life of BESSs to the initial moment of the investment and is obtained by

$$\mu_{CRF} = \frac{r(1+r)^y}{(1+r)^y - 1} \quad (3)$$
where y is the economic life cycle of BESSs; r means the discount
rate, which is calculated by the weighted average cost of capital as
follows (Harvey, 2020)
$$r = f_d \cdot i_d + (1 - f_d) \cdot i_e \quad (4)$$

where $f_d$ and $i_e$ represent the debt ratio and the return on equity, which are 80% and 50%, respectively; $i_d$ denotes the interest rate of 4.165%.
The annual equipment cost of BESSs is calculated by
$$C_{equ} = \sum_{i=1}^{N_{BESS}} \left(\alpha \cdot P_{BESS,i} + \beta \cdot E_{BESS,i}\right) \cdot \mu_{CRF} \quad (5)$$

where $\alpha$ and $\beta$ mean the costs per unit power and per unit capacity, respectively; $P_{BESS,i}$ and $E_{BESS,i}$ are the power capacity and energy capacity of the ith BESS.
The annual O&M cost of BESSs is expressed as
$$C_{OM} = \sum_{i=1}^{N_{BESS}} \left(\lambda \cdot P_{BESS,i} + c \cdot E_{BESS,i}\right) \cdot \mu_{CRF} \quad (6)$$

where $\lambda$ and $c$ are respectively the O&M costs per unit power and per unit energy of BESSs. Note that the O&M costs of the rectifier, inverter, and charge regulator are neglected.
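As a concreteness check, the following is a minimal Python sketch (my own, not code from the paper) of the annual investment cost model in Eqs. (1)-(6), using the Table 1 parameters; the BESS fleet passed to `annual_cost` is a hypothetical example.

```python
# Minimal sketch of Eqs. (1)-(6) with Table 1 parameters; names are illustrative.

def crf(r: float, y: int) -> float:
    """Capital recovery factor, Eq. (3): r(1+r)^y / ((1+r)^y - 1)."""
    return r * (1 + r) ** y / ((1 + r) ** y - 1)

# Discount rate, Eq. (4): weighted average cost of capital.
f_d, i_d, i_e = 0.80, 0.04165, 0.50   # debt ratio, interest rate, return on equity
r = f_d * i_d + (1 - f_d) * i_e

C_cap = 1_470_000.0                    # installation cost per BESS ($)
alpha, beta = 175_000.0, 225_000.0     # equipment cost, $/MW and $/MWh
lam, c = 4_000.0, 2_000.0              # O&M cost, $/MW/year and $/MWh/year
mu = crf(r, y=20)                      # 20-year lifetime from Table 1

def annual_cost(bess):
    """bess: list of (P_MW, E_MWh) pairs, one per installed BESS."""
    C_ins = C_cap * len(bess) * mu                              # Eq. (2)
    C_equ = sum(alpha * P + beta * E for P, E in bess) * mu     # Eq. (5)
    C_om  = sum(lam * P + c * E for P, E in bess) * mu          # Eq. (6), CRF
    return C_ins + C_equ + C_om                                 # applied as written

print(annual_cost([(1.0, 2.0), (0.5, 1.0)]))   # hypothetical two-BESS fleet
```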
### Power Loss Cost
Grid-connected BESSs will change the power flow of the DN (Injeti and Thunuguntla, 2020). Furthermore, different locations and sizes of BESSs will have different influences on power losses. For the sake of minimizing the total active power losses, a power loss index is established in the optimization model, as follows
$$\min F_2 = \sum_{t=1}^{T} \rho_{loss}(t) \cdot P_{loss}(t) \quad (7)$$

$$P_{loss}(t) = \sum_{j=1}^{L} R_j I_j^2(t) \quad (8)$$

where $F_2$ is the daily cost of power losses; $\rho_{loss}(t)$ and $P_{loss}(t)$ represent the time-of-use (TOU) electricity price and the power losses at time t; L is the total number of lines in the DN; $R_j$ means the resistance of the jth line; $I_j(t)$ denotes the current on the jth line at time t. A lower $F_2$ indicates a greater positive effect of BESS deployment in reducing power losses.
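A small sketch of Eqs. (7)-(8) may help make the index concrete; the price, resistance, and current profiles below are placeholder data, not values from the test system.

```python
# Sketch (mine) of the daily power-loss cost index, Eqs. (7)-(8).
import numpy as np

T, L = 24, 32                        # hours in a day, lines in the feeder
rho = np.full(T, 0.1)                # assumed flat TOU price ($/kWh)
R = np.random.rand(L) * 0.5          # line resistances (ohm), placeholder
I = np.random.rand(T, L) * 100.0     # line current magnitudes |I_j(t)| (A)

P_loss = (R * I**2).sum(axis=1)      # Eq. (8): sum_j R_j I_j(t)^2, in W
F2 = float(rho @ (P_loss / 1000.0))  # Eq. (7): hourly kWh priced over the day
print(F2)
```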
### Power Fluctuation
Owing to the intermittent nature of RESs, their integration into power grids poses significant power fluctuation at the grid connection point. However, BESSs can provide an effective supplement for RESs in smoothing power fluctuation to improve power quality. The power quality index can be expressed as

$$\min F_3 = \sqrt{\sum_{t=1}^{T} \left(P_{grid}(t) - \bar{P}_{grid}\right)^2} \quad (9)$$

where $F_3$ is the daily total power fluctuation of the grid connection point; $P_{grid}(t)$ represents the power at the grid connection point at time t; $\bar{P}_{grid}$ means its mean value over a day.
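Eq. (9) is a one-liner in code; in the sketch below (mine, not the authors') the exchange-power profile is a random placeholder.

```python
# Sketch of the power-fluctuation index, Eq. (9).
import numpy as np

P_grid = np.random.rand(24) * 2.0   # hourly exchange power at the PCC (MW), placeholder
F3 = float(np.sqrt(((P_grid - P_grid.mean()) ** 2).sum()))   # Eq. (9)
print(F3)
```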
## Constraints
### Power Balance

$$\begin{cases} P_i(t) = V_i(t) \sum_{j=1}^{N} V_j(t)\left(G_{ij}\cos\theta_{ij}(t) + B_{ij}\sin\theta_{ij}(t)\right) \\ Q_i(t) = V_i(t) \sum_{j=1}^{N} V_j(t)\left(G_{ij}\sin\theta_{ij}(t) - B_{ij}\cos\theta_{ij}(t)\right) \end{cases} \quad (10)$$

where $P_i(t)$ and $Q_i(t)$ represent the injected active and reactive power at the ith node of the DN at time t, respectively; $V_i(t)$ is the voltage of the ith node at time t; $G_{ij}$ and $B_{ij}$ represent the conductance and susceptance between the ith node and the jth node; $\theta_{ij}(t)$ is the voltage angle difference between the ith node and the jth node at time t.
### Range of Node Voltages

$$V_i^{min} < V_i < V_i^{max} \quad (11)$$

where $V_i^{min}$ and $V_i^{max}$ represent the lower and upper limits of the voltage of the ith node.
### Charging and Discharging Power Limits of BESSs

$$\begin{cases} 0 \le P_{cha,i}(t) \cdot \eta_{cha} \le P_{BESS,i} \\ -P_{BESS,i} \le P_{dis,i}(t) \cdot \eta_{dis} \le 0 \end{cases} \quad (12)$$

where $P_{cha,i}(t)$ and $P_{dis,i}(t)$ represent the charging and discharging power of the ith BESS at time t, respectively; $\eta_{cha}$ and $\eta_{dis}$ are respectively the charging and discharging efficiencies of BESSs.
### State of Charge Limits

$$SOC^{min} < SOC(t) < SOC^{max} \quad (13)$$

where $SOC^{min}$ and $SOC^{max}$ mean the lower and upper limits of the state of charge, which are 20% and 90%, respectively.
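The operating constraints of Eqs. (11)-(13) can be checked pointwise; in the sketch below (my own) the voltage band and the charging/discharging efficiencies are assumed values, since the paper does not state them in this section.

```python
# Feasibility check for Eqs. (11)-(13); limits beyond the stated SOC band are assumed.

def feasible(V, P_cha, P_dis, SOC, P_bess,
             V_min=0.95, V_max=1.05,        # assumed p.u. voltage band, Eq. (11)
             eta_cha=0.95, eta_dis=0.95,    # assumed efficiencies, Eq. (12)
             SOC_min=0.20, SOC_max=0.90):   # stated SOC band, Eq. (13)
    ok_v   = all(V_min < v < V_max for v in V)                   # Eq. (11)
    ok_cha = all(0 <= p * eta_cha <= P_bess for p in P_cha)      # Eq. (12), charge
    ok_dis = all(-P_bess <= p * eta_dis <= 0 for p in P_dis)     # Eq. (12), discharge
    ok_soc = all(SOC_min < s < SOC_max for s in SOC)             # Eq. (13)
    return ok_v and ok_cha and ok_dis and ok_soc
```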
## Multi-Objective Optimization Model
### Establishment of the Optimization Model
In terms of multi-objective optimization problems such as BESSs allocation, all objectives generally conflict with each other, and optimizing one of the objectives leads to the deterioration of other objectives in most cases. It is difficult to objectively evaluate the superiority or inferiority of all solutions because there is no single solution that is optimal for all objectives simultaneously (Huang et al., 2020). Nevertheless, there exists an optimal solution set, the elements of which are named Pareto optimal solutions, realizing the optimum matching among objectives (Fonseca and Fleming, 1993). In this paper, the multi-objective optimization model of BESSs locating and sizing is designed to simultaneously meet investment economy and operation reliability requirements, as follows

$$\begin{cases} \min F(x) = [F_1(x), F_2(x), F_3(x)]^T \\ \text{s.t. } E(x) = 0 \\ \quad\;\;\; I(x) \le 0 \end{cases} \quad (14)$$

where $F(x)$ represents the target space consisting of all objective functions; x denotes the decision vector constituted by all optimization variables; $E(x)$ and $I(x)$ are, respectively, the equality and inequality constraints that need to be satisfied in the multi-objective optimization model.
### Design of Optimization Variables
Optimization variables include the installation locations, power capacities, and energy capacities of two BESSs, all of which need to be constrained to a reasonable range; otherwise, negative effects on the power flow, relay protection, voltage, and waveform of the original power grid arise. In this paper, nodes in the range of (Mirjalili et al., 2017; Liu et al., 2020) were selected as the installation locations, in which environmental and geographical factors need to be considered in engineering practice. The limits of power and energy capacities are determined considering the topology of the DN, the power limit of the interconnection point, and especially the total load power. Therefore, the power capacity of a BESS allowed to access the power grid is determined as 90% of the total active power load of the power grid, and the numerical value of the energy capacity limit is set equal to the power capacity limit, as follows

$$\begin{cases} P_{BESS,i} \le P_{BESS}^{max} \\ E_{BESS,i} \le E_{BESS}^{max} \end{cases} \quad (15)$$

where $P_{BESS,i}$ and $E_{BESS,i}$ are the power capacity and energy capacity of the ith BESS; $P_{BESS}^{max}$ and $E_{BESS}^{max}$ denote the upper limits of the power capacity and energy capacity of BESSs, which are 3,375 kW and 3,375 kW·h, respectively.

Note that the power and energy capacities of the two BESSs are continuous variables, while the installation locations are discrete. In this paper, continuous variables can converge to the optimal value in the iteration process, while the optimal values of the discrete variables need to be rounded from the continuous space (Zhang et al., 2017).
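A sketch of this mixed discrete/continuous encoding, with the rounding step for locations, is given below; the admissible node range (2-33) is my assumption, since the printed range was lost in extraction.

```python
# Sketch of the decision-vector handling: discrete locations recovered by
# rounding, continuous capacities clipped to the Eq. (15) bounds.
import numpy as np

P_MAX = E_MAX = 3375.0          # kW / kWh limits from the text
NODE_LO, NODE_HI = 2, 33        # assumed admissible node range (IEEE-33 buses)

def decode(x):
    """x = [loc1, loc2, P1, P2, E1, E2] in continuous search space."""
    locs = np.clip(np.rint(x[:2]), NODE_LO, NODE_HI).astype(int)
    P = np.clip(x[2:4], 0.0, P_MAX)
    E = np.clip(x[4:6], 0.0, E_MAX)
    return locs, P, E
```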
## MULTI-OBJECTIVE MEMETIC SALP SWARM ALGORITHM BASED ON PARETO
Memetic Salp Swarm Algorithm
### Optimization Framework
SSA is inspired by the swarming motility and foraging behavior of salps, and it has successfully solved a variety of optimization problems since it has a simple search mechanism and high optimization efficiency (Mirjalili et al., 2017). In recent years, the memetic algorithm has developed into a broad class of algorithms that can properly combine global search and local search mechanisms (Moscato, 1989; Neri and Cotta, 2012). In this paper, the memetic computing framework first proposed by Moscato (Moscato, 1989) is adopted in the memetic salp swarm algorithm (MSSA) to improve the searching ability of SSA. Then, multiple salp chains are employed to better balance global exploration and local exploitation abilities. Therefore, there are two important search mechanisms in MSSA, namely the local search in a single chain and the global coordination in the whole population. In MSSA, multiple salp chains are arranged in parallel, where each salp chain is regarded as a swarm of salps that independently performs local searches similar to SSA. Meanwhile, all salp chains are regrouped by information communication among all salps for the improvement of convergence stability. The optimization framework of MSSA is illustrated in Figure 1.

### Mathematical Model
In a single chain, the salps can be divided into two roles, the leaders and the followers. As illustrated in Figure 1, the leader is the salp at the front of each salp chain, while the rest of the salps are followers. In each iteration, the leading salp seeks the food source, while the follower salps follow each other in a row. Note that the best salp with the best fitness is considered to be the food source and will be chased by the whole salp chain. The positions of the leading salp and the follower salps can be updated as follows (Mirjalili et al., 2017)

$$x_{m1}^j = \begin{cases} F_m^j + c_1\left(c_2\left(ub^j - lb^j\right) + lb^j\right), & \text{if } c_3 \ge 0 \\ F_m^j - c_1\left(c_2\left(ub^j - lb^j\right) + lb^j\right), & \text{if } c_3 < 0 \end{cases} \quad (16)$$

$$x_{mi}^j = \frac{1}{2}\left(x_{mi}^j + x_{m,i-1}^j\right), \quad i = 2, 3, \ldots, n; \; m = 1, 2, \ldots, M \quad (17)$$

where j means the jth dimension of the search space; $x_{m1}^j$ and $x_{mi}^j$ respectively denote the positions of the leading salp and the ith follower salp in the mth salp chain; $F_m^j$ is the position of the food source; $ub^j$ and $lb^j$ are respectively the upper and lower limits of the jth dimension variables; n and M represent the population size of a single salp chain and the number of salp chains, respectively; $c_2$ and $c_3$ are both uniform random numbers from 0 to 1; $c_1$ is a coefficient related to the iteration number, as follows (Mirjalili et al., 2017)

$$c_1 = 2e^{-\left(\frac{4k}{k_{max}}\right)^2} \quad (18)$$

where k and $k_{max}$ are the current iteration number and the maximum iteration number, respectively.

In the salp population, each salp is taken as an individual of the virtual salp population. At each iteration, the population can be regrouped into multiple new salp chains based on the descending order of all salps' fitness values. In the regroup operation, global coordination among different salp swarms is achieved, as shown in Figure 2. It can be seen that the best solution is assigned to salp chain #1, the second-best solution is assigned to salp chain #2, and so on. Therefore, the mth salp chain can be updated by (Eusuff and Lansey, 2015)

$$Y^m = \left\{x_{mi}, f_{mi} \mid x_{mi} = X(m + M(i-1)),\; f_{mi} = F(m + M(i-1)),\; i = 1, 2, \ldots, n\right\}, \quad m = 1, 2, \ldots, M \quad (19)$$

where $x_{mi}$ and $f_{mi}$ are the position vector and fitness value of the ith salp in the mth chain, respectively; X and F denote the position vector set and the fitness value set of all the salps, respectively.
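The per-iteration update of Eqs. (16)-(19) could look as follows; this is my reading of the equations rather than the authors' implementation, and the 0.5 threshold on $c_3$ is an implementation choice, since the printed branch condition is garbled in extraction.

```python
# Sketch of the MSSA position update and regroup, Eqs. (16)-(19), for minimization.
import numpy as np

def mssa_step(chains, food, lb, ub, k, k_max, rng):
    """chains: (M, n, d) positions; food: (M, d) per-chain food sources."""
    M, n, _ = chains.shape
    c1 = 2.0 * np.exp(-(4.0 * k / k_max) ** 2)                  # Eq. (18)
    for m in range(M):
        c2, c3 = rng.random(lb.size), rng.random(lb.size)
        step = c1 * (c2 * (ub - lb) + lb)
        chains[m, 0] = np.where(c3 >= 0.5,                      # Eq. (16), assumed
                                food[m] + step, food[m] - step) # 0.5 threshold
        for i in range(1, n):                                   # Eq. (17)
            chains[m, i] = 0.5 * (chains[m, i] + chains[m, i - 1])
    return np.clip(chains, lb, ub)

def regroup(chains, fit):
    """Eq. (19): sort all salps by fitness, deal them round-robin to the chains."""
    M, n, d = chains.shape
    flat = chains.reshape(M * n, d)        # chain-major flattening (m*n + i)
    order = np.argsort(fit.ravel())        # best (lowest) fitness first
    return flat[order].reshape(n, M, d).swapaxes(0, 1).copy()
```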
## Multi-Objective Memetic Salp Swarm Algorithm
As discussed in Problem Formulation, the solutions of a multi-objective problem should be a set of Pareto optimal solutions. MSSA can drive salps towards the food source with the best solution for the optimization problem and update it at each iteration. The design of MMSSA first equips the food sources with a repository to store the non-dominated solutions obtained by MSSA so far (Coello et al., 2004). In the optimization process, every new non-dominated solution needs to be compared against all residents in the repository using the Pareto dominance operators, as follows (Faramarzi et al., 2020):

- If a new solution dominates a set of solutions in the repository, they have to be swapped;
- If at least one of the solutions in the repository dominates the new solution, this new solution should be discarded straight away;
- If a new solution is non-dominated in comparison with all repository residents, this new solution will be added to the repository.

The repository can only store a limited number of solutions. Therefore, a method is adopted to remove similar non-dominated solutions from the repository: the one located in the most populated region is identified as the best candidate to be removed, which improves the distribution diversity of the Pareto optimal solution set. The solutions that are removed from the repository need to satisfy the following equation

$$\begin{cases} |F_h(x_m) - F_h(x_n)| < D_h, & h = 1, 2, 3 \\ D_h = \dfrac{F_h^{max} - F_h^{min}}{N_r} \end{cases} \quad (20)$$

where $F_h(x_m)$ and $F_h(x_n)$ denote the hth objective value of the mth salp and the nth salp, respectively; $D_h$ is the distance threshold of the Pareto solution set; $F_h^{max}$ and $F_h^{min}$, respectively, represent the maximum and minimum of the hth objective function obtained so far; $N_r$ is the maximum size of the repository to store the non-dominated solutions.
In this paper, the IPDM is adopted to filter out the best compromised solution from the Pareto non-dominated solution set, a method that is often used in multiple-attribute decision making. Firstly, the objective functions of all Pareto non-dominated solutions obtained by MMSSA are normalized as follows

$$f_h(x_m) = \frac{y_h(x_m) - y_h^{min}}{y_h^{max} - y_h^{min}} \quad (21)$$

where $y_h(x_m)$ is the hth objective function value of the non-dominated solution $x_m$; $f_h(x_m)$ represents the normalized value of the hth objective function; $y_h^{min}$ and $y_h^{max}$ mean the minimum and maximum of the hth objective function.

Thus, an ideal point can be selected in the target decision-making region formed by all Pareto non-dominated solutions. It is worth mentioning that the objective functions of the ideal point can be normalized to (0, 0, 0) in terms of the minimization problem. Crucially, the squared Euclidean distance between different solutions and the ideal point is taken as an important basis for ranking all non-dominated solutions and then deciding the best compromised solution among them. The squared Euclidean distance can be calculated by

$$EU_m = \sum_{h=1}^{3} \left(f_h(x_m) - 0\right)^2 \cdot \omega_h^2 \quad (22)$$

where $\omega_h$ means the weight of the hth objective function, as follows

$$\omega_h = \frac{1 \,/\, \sum_{m=1}^{N_r} \left[f_h(x_m) - 0\right]^2}{\sum_{h=1}^{3} \left(1 \,/\, \sum_{m=1}^{N_r} \left[f_h(x_m) - 0\right]^2\right)} \quad (23)$$

Since the weights of each objective function obtained by IPDM do not rely on the evaluation and preferences of experts, the decision is credible. In the end, the best compromised solution is expressed as

$$x_{best} = \arg\min_{m=1,2,\ldots,N_r} \sum_{h=1}^{3} \left(f_h(x_m) - 0\right)^2 \cdot \omega_h^2 \quad (24)$$

To sum up, the flowchart of MMSSA for solving the optimal locating and sizing of BESSs is shown in Figure 3.
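Putting Eqs. (21)-(24) together, the best compromise can be selected as below; note that the weight formula follows my reconstruction of the garbled Eq. (23) and should be checked against the original source.

```python
# Sketch of IPDM best-compromise selection, Eqs. (21)-(24); ideal point = origin.
import numpy as np

def best_compromise(Y):
    """Y: (N_r, 3) raw objective values of the repository members (minimization)."""
    f = (Y - Y.min(axis=0)) / (Y.max(axis=0) - Y.min(axis=0) + 1e-12)  # Eq. (21)
    inv = 1.0 / (f ** 2).sum(axis=0)    # Eq. (23): inverse aggregate distances
    w = inv / inv.sum()                 # normalized objective weights
    EU = ((f ** 2) * w ** 2).sum(axis=1)   # Eq. (22): weighted squared distance
    return int(np.argmin(EU))              # Eq. (24): index of the best solution
```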
## CASE STUDIES
Test System
In this section, the optimal locating and sizing of BESSs based on
MMSSA is implemented in the extended IEEE-33 bus system for
verifying the effectiveness of the proposed algorithm. The
topology structure of the test system with a total load of 3,715
+ j2300 kVA is depicted in Figure 4. It is assumed that the
resource units include one PV and three WT, where the
maximum generation limits of PV and WT both are 0.2 MW.
The typical daily curves of load, wind and PV power are
demonstrated in Figure 5. In addition, multi-objective particle
swarm optimization (MOPSO) (Hlal et al., 2019) is used for
-----
comparison. For the sake of a relatively fair comparison, the population sizes of MMSSA and the other algorithms are all set to 100, and the maximum number of iterations is set to 500. The size of the repository was likewise chosen as 100 for the multi-objective optimization. The remaining parameters of all comparison algorithms were set to their default values; if the parameters are not chosen properly, the convergence time becomes too long or the search becomes trapped in a local optimum. It is worth mentioning that c1 is the key parameter of the MMSSA algorithm, since it directly influences the trade-off between exploration and exploitation; to achieve a proper balance, it was designed to vary with the iteration number, as sketched below.
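For orientation, the coefficient c1 in the underlying salp swarm algorithm decays with the iteration count, so that early iterations favour exploration and later ones exploitation. A minimal sketch, following the schedule given in the original SSA formulation (Mirjalili et al., 2017); whether MMSSA uses exactly this schedule is an assumption:

```python
import math

def c1_schedule(iteration: int, max_iterations: int) -> float:
    """Leader-update coefficient of SSA: c1 = 2 * exp(-(4 * l / L)^2),
    large at the start (exploration) and decaying towards 0 (exploitation)."""
    return 2.0 * math.exp(-((4.0 * iteration / max_iterations) ** 2))
```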
## Simulation Results
Figures 6 and 7 exhibit the bi-objective Pareto front curves obtained by the two algorithms: the total investment cost versus the power loss cost, the total investment cost versus the power fluctuation, and the power loss cost versus the power fluctuation. All three bi-objective Pareto fronts obtained by MMSSA are more uniformly distributed than those of MOPSO.
Figure 8 shows the three-objective Pareto front obtained by two
algorithms. As can be seen from Figure 8, MMSSA can acquire
the Pareto solution set with higher quality compared with
MOPSO. Moreover, Figure 9 illustrates the schematic diagram of the IPDM based on MMSSA: it shows the normalized objective function curves as well as the decision-making schematic for the best compromise solution of the BESSs allocation. The IPDM based on MMSSA obtains the objective weight coefficients and selects the best compromise solution by means of the weighted squared Euclidean distance.
To better compare the convergence and diversity of the Pareto solution sets obtained by the two algorithms, the performance indexes are evaluated in Table 2, including the coverage over the Pareto front (CPF) (Tian et al., 2019), spread (Wang et al., 2010), spacing (Schott, 1995), and execution time. It is worth mentioning that CPF defines the diversity of a Pareto solution set as its coverage over the Pareto front in an (M−1)-dimensional hypercube (Wang et al., 2010), while spread and spacing respectively denote the diversity and the evenness of the Pareto solution set; all of these are negative indexes, i.e., smaller values are better (a short code sketch of the spacing metric is given after the list below). In addition, Table 3 shows the best compromise decision schemes of the BESSs allocation from the two algorithms, along with the objective function values.

TABLE 2 | Comparison of performance metrics of the two algorithms.

| Algorithm | CPF | Spread | Spacing | Time (s) |
| --- | --- | --- | --- | --- |
| MOPSO | 0.4996 | 0.4753 | 9,075.45 | 1.5428e+04 |
| MMSSA | 0.1636 | 0.4481 | 3.3858 | 1.4676e+04 |

TABLE 3 | Optimization results of the two algorithms. The first three data columns give the best compromise allocation scheme of BESSs; the last three give the objective function values under that scheme.

| Algorithm | Bus location | Power capacity (MW) | Energy capacity (MW·h) | Total investment cost ($/year) | Power loss cost ($/year) | Power fluctuation (MW/year) |
| --- | --- | --- | --- | --- | --- | --- |
| MOPSO | Trovão and Antunes (2015); Mirjalili et al. (2017) | [0.1972, 0.2786] | [1.6203, 1.6204] | 2.0873e+05 | 1.3698e+05 | 292.9054 |
| MMSSA | Pang et al. (2019); Injeti and Thunuguntla (2020) | [0.0849, 0.0618] | [0.6535, 0.3943] | 1.7417e+05 | 1.3679e+05 | 32.1682 |

It is evident that the MMSSA outperforms the MOPSO in the multi-objective optimization model for the optimal locating and sizing of BESSs:
- It has the smallest CPF value, indicating that MMSSA achieves better diversity;
- It attains the smallest spread and spacing values, which indicates that the Pareto solutions obtained by MMSSA are evenly and widely distributed on the Pareto front;
- It also has the shortest execution time, which means that MMSSA can converge to the Pareto front much faster than the conventional MOPSO;
- It has the lowest investment cost, meaning that MMSSA can improve the economy of the BESSs investment;
- It slightly reduces the power loss cost, ensuring a higher operating economy of the DN;
- It achieves a significantly lower power fluctuation at the grid connection point, which means MMSSA can contribute to power supply reliability.
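For reference, the spacing metric of Schott (1995) reported in Table 2 measures how evenly the non-dominated points are spread. A minimal NumPy sketch (the function name is illustrative, and a front with at least two points is assumed):

```python
import numpy as np

def spacing(front: np.ndarray) -> float:
    """Schott's spacing: the standard deviation of nearest-neighbour
    Manhattan distances within a Pareto front of shape (n, n_objectives).
    Smaller values indicate a more evenly distributed front."""
    n = front.shape[0]
    dists = np.abs(front[:, None, :] - front[None, :, :]).sum(axis=2)  # pairwise L1
    np.fill_diagonal(dists, np.inf)       # ignore self-distances
    d = dists.min(axis=1)                 # nearest-neighbour distance per point
    return float(np.sqrt(((d.mean() - d) ** 2).sum() / (n - 1)))
```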
## CONCLUSION
In this paper, a multi-objective optimization model based on the
Pareto principle was established. This study proposes MMSSA as
a method for solving the optimal location and size of BESSs in
DN. The contributions of the proposed approach are as follows:
- The multi-objective optimization model combines economic criteria, incorporating the time value of money into the costs, with technical criteria related to system reliability, aiming to make BESSs more cost-effective and to ensure the reliable operation of the DN;
- The proposed MMSSA has strong global search and convergence ability under complex multi-objective functions: it can quickly find high-quality non-dominated solutions and then objectively select the best compromised solution with the help of the IPDM;
- The simulation results based on the extended IEEE-33 bus test system verify that the best-compromised BESSs allocation scheme obtained by MMSSA has the lowest investment cost, power loss cost, and power fluctuation, which helps the DN to increase economic efficiency and improve system reliability.
However, there are several limitations to this work, including the inapplicability of the proposed MMSSA to high-dimensional optimization problems and the limited scenario design used to validate its effectiveness. Therefore, the MMSSA can be further enhanced to improve its accuracy for high-dimensional objective optimization. Meanwhile, a multi-scenario design that combines different typical daily data across a year should be conducted to capture the time-variable nature of, and uncertainties related to, RESs and load demand.

## DATA AVAILABILITY STATEMENT

The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author.

## AUTHOR CONTRIBUTIONS

SP and XG contributed to the conception and design of the study. XiL and XuL performed the case analysis. SP wrote the first draft of the manuscript. XG, XiL, XuL, and XA wrote sections of the article. All authors contributed to article revision and read and approved the submitted version.

## FUNDING

This work was funded by the China Southern Power Grid Science and Technology Project under Project 037700KK52190011 (GDKJXM20198273).

## ACKNOWLEDGMENTS

The authors gratefully acknowledge that this project was supported by the China Southern Power Grid Science and Technology Project under Project 037700KK52190011 (GDKJXM20198273).

## REFERENCES

Behnam, Z., and Sanna, S. (2015). Electrical Energy Storage Systems: A Comparative Life Cycle Cost Analysis. Renew. Sust. Energ. Rev. 42, 569–596. [doi:10.1016/j.rser.2014.10.011](https://doi.org/10.1016/j.rser.2014.10.011)
Chong, L. W., Wong, Y. W., Rajkumar, R. K., and Isa, D. (2018). An Adaptive Learning Control Strategy for Standalone PV System with Battery-Supercapacitor Hybrid Energy Storage System. J. Power Sourc. 394, 35–49. [doi:10.1016/j.jpowsour.2018.05.041](https://doi.org/10.1016/j.jpowsour.2018.05.041)
Chong, L. W., Wong, Y. W., Rajkumar, R. K., and Isa, D. (2016). An Optimal Control Strategy for Standalone PV System with Battery-Supercapacitor Hybrid Energy Storage System. J. Power Sourc. 331, 553–565. [doi:10.1016/j.jpowsour.2016.09.061](https://doi.org/10.1016/j.jpowsour.2016.09.061)
Coello, C. A. C., Pulido, G. T., and Lechuga, M. S. (2004). Handling Multiple Objectives with Particle Swarm Optimization. IEEE Trans. Evol. Comput. 8, 256–279. [doi:10.1109/tevc.2004.826067](https://doi.org/10.1109/tevc.2004.826067)
Eusuff, M. M., and Lansey, K. E. (2003). Optimization of Water Distribution Network Design Using the Shuffled Frog Leaping Algorithm. J. Water Resour. Plann. Manag. 129 (3), 210–225. [doi:10.1061/40569(2001)412](https://doi.org/10.1061/40569(2001)412)
Faramarzi, A., Heidarinejad, M., and Stephens, B. (2020). Equilibrium Optimizer: A Novel Optimization Algorithm. Knowledge-Based Syst. 191, 105190. [doi:10.1016/j.knosys.2019.105190](https://doi.org/10.1016/j.knosys.2019.105190)
Fonseca, C. M., and Fleming, P. J. (1993). Genetic Algorithms for Multiobjective Optimization: Formulation, Discussion and Generalization. ICGA 93, 416–423.
Gan, W., Ai, X., Fang, J., Yan, M., Yao, W., Zuo, W., et al. (2019). Security
Constrained Co-planning of Transmission Expansion and Energy Storage.
[Appl. Energ. 239, 383–394. doi:10.1016/j.apenergy.2019.01.192](https://doi.org/10.1016/j.apenergy.2019.01.192)
Harvey, H. L. D. (2020). Clarifications of and Improvements to the Equations Used to Calculate the Levelized Cost of Electricity (LCOE), and Comments on the Weighted Average Cost of Capital (WACC). Energy 207, 118340. [doi:10.1016/j.energy.2020.118340](https://doi.org/10.1016/j.energy.2020.118340)
He, X., Ai, Q., Qiu, R. C., Huang, W. T., Piao, L. J., and Liu, H. C. (2017). A Big Data
Architecture Design for Smart Grids Based on Random Matrix Theory. IEEE
Trans. Smart Grid. 8 (2), 674–686.
He, X., Qiu, R. C., Ai, Q., and Zhu, T. Y. (2020). A Hybrid Framework for Topology
Identification of Distribution Grid with Renewables Integration. IEEE Trans.
[Power Syst. 36 (2), 1493–1503. doi:10.1109/tpwrs.2020.3024955](https://doi.org/10.1109/tpwrs.2020.3024955)
Hlal, M. I., Ramachandaramurthy, V. K., Padmanaban, S., Kaboli, H. R., Pouryekta, A., and Tuan Abdullah, T. A. R. b. (2019). NSGA-II and MOPSO Based Optimization for Sizing of Hybrid PV/Wind/Battery Energy Storage System. Ijpeds 10 (1), 463–478. [doi:10.11591/ijpeds.v10.i1.pp463-478](https://doi.org/10.11591/ijpeds.v10.i1.pp463-478)
Huang, Z., Fang, B. L., and Deng, J. (2020). Multi-objective Optimization Strategy
for Distribution Network Considering V2G Enabled Electric Vehicles in
Building Integrated Energy System. Prot. Control. Mod. Power Syst. 5 (1),
[48–55. doi:10.1186/s41601-020-0154-0](https://doi.org/10.1186/s41601-020-0154-0)
Injeti, S. K., and Thunuguntla, V. K. (2020). Optimal Integration of DGs into Radial
Distribution Network in the Presence of Plug-In Electric Vehicles to Minimize
Daily Active Power Losses and to Improve the Voltage Profile of the System
Using Bioinspired Optimization Algorithms. Prot. Control. Mod. Power Syst. 5
[(1), 21–35. doi:10.1186/s41601-019-0149-x](https://doi.org/10.1186/s41601-019-0149-x)
Jia, K., Chen, Y., Bi, T., Lin, Y., Thomas, D., and Sumner, M. (2017). Historical-Data-Based Energy Management in a Microgrid with a Hybrid Energy Storage System. IEEE Trans. Ind. Inf. 13 (5), 2597–2605. [doi:10.1109/tii.2017.2700463](https://doi.org/10.1109/tii.2017.2700463)
Kerdphol, T., Fuji, K., Mitani, Y., Watanabe, M., and Qudaih, Y. (2016).
Optimization of a Battery Energy Storage System Using Particle Swarm
Optimization for Stand-Alone Microgrids. Int. J. Electr. Power Energ. Syst.
[81, 32–39. doi:10.1016/j.ijepes.2016.02.006](https://doi.org/10.1016/j.ijepes.2016.02.006)
Kerdphol, T., Qudaih, Y., and Mitani, Y. (2016). Optimum Battery Energy Storage
System Using PSO Considering Dynamic Demand Response for Microgrids.
[Int. J. Electr. Power Energ. Syst. 83, 58–66. doi:10.1016/j.ijepes.2016.03.064](https://doi.org/10.1016/j.ijepes.2016.03.064)
Li, R., Wang, W., Chen, Z., and Wu, X. (2018). Optimal Planning of Energy Storage System in Active Distribution System Based on Fuzzy Multi-Objective Bi-level Optimization. J. Mod. Power Syst. Clean. Energ. 6 (2), 342–355. [doi:10.1007/s40565-017-0332-x](https://doi.org/10.1007/s40565-017-0332-x)
Li, Y., Vilathgamuwa, M., Choi, S. S., Xiong, B., Tang, J., Su, Y., et al. (2020). Design
of Minimum Cost Degradation-Conscious Lithium-Ion Battery Energy Storage
System to Achieve Renewable Power Dispatchability. Appl. Energ. 260, 114282.
[doi:10.1016/j.apenergy.2019.114282](https://doi.org/10.1016/j.apenergy.2019.114282)
Liu, J., Yao, W., Wen, J., Fang, J., Jiang, L., He, H., et al. (2020). Impact of Power Grid Strength and PLL Parameters on Stability of Grid-Connected DFIG Wind Farm. IEEE Trans. Sustain. Energ. 11 (1), 545–557. [doi:10.1109/tste.2019.2897596](https://doi.org/10.1109/tste.2019.2897596)
Liu, Z., Chen, Y., Zhuo, R., and Jia, H. (2018). Energy Storage Capacity
Optimization for Autonomy Microgrid Considering CHP and EV
[Scheduling. Appl. Energ. 210, 1113–1125. doi:10.1016/j.apenergy.2017.07.002](https://doi.org/10.1016/j.apenergy.2017.07.002)
Mirjalili, S., Gandomi, A. H., Mirjalili, S. Z., Saremi, S., Faris, H., and Mirjalili, S. M. (2017). Salp Swarm Algorithm: A Bio-Inspired Optimizer for Engineering Design Problems. Adv. Eng. Softw. 114, 163–191. [doi:10.1016/j.advengsoft.2017.07.002](https://doi.org/10.1016/j.advengsoft.2017.07.002)
Moscato, P. (1989). On Evolution, Search, Optimization, Genetic Algorithms and
Martial Arts: Towards Memetic Algorithms. Caltech Concurrent Computation
Program, Technical Reports 826.
Murty, V. V. S. N., and Kumar, A. (2020). Multi-objective Energy Management in Microgrids with Hybrid Energy Sources and Battery Energy Storage Systems. Prot. Control. Mod. Power Syst. 5 (1), 1–20. [doi:10.1186/s41601-019-0147-z](https://doi.org/10.1186/s41601-019-0147-z)
Neri, F., and Cotta, C. (2012). Memetic Algorithms and Memetic Computing Optimization: A Literature Review. Swarm Evol. Comput. 2, 1–14. [doi:10.1016/j.swevo.2011.11.003](https://doi.org/10.1016/j.swevo.2011.11.003)
Oudalov, A., Chartouni, D., and Ohler, C. (2007). Optimizing a Battery Energy
Storage System for Primary Frequency Control. IEEE Trans. Power Syst. 22 (3),
[1259–1266. doi:10.1109/tpwrs.2007.901459](https://doi.org/10.1109/tpwrs.2007.901459)
Pang, M., Shi, Y., Wang, W., and Pang, S. (2019). Optimal Sizing and Control of Hybrid Energy Storage System for Wind Power Using Hybrid Parallel PSO-GA Algorithm. Energy Explor. Exploit. 37 (1), 558–578. [doi:10.1177/0144598718784036](https://doi.org/10.1177/0144598718784036)
Peng, X., Yao, W., Yan, C., Wen, J., and Cheng, S. (2020). Two-stage Variable Proportion Coefficient Based Frequency Support of Grid-Connected DFIG-WTs. IEEE Trans. Power Syst. 35 (2), 962–974. [doi:10.1109/tpwrs.2019.2943520](https://doi.org/10.1109/tpwrs.2019.2943520)
Schott, J. R. (1995). Fault Tolerant Design Using Single and Multicriteria Genetic
Algorithm Optimization. Cambridge, MA: Massachusetts Institute of
Technology.
Sepulveda Rangel, C. A., Canha, L., Sperandio, M., and Severiano, R. (2018).
Methodology for ESS-type Selection and Optimal Energy Management in
Distribution System with DG Considering Reverse Flow Limitations and
Cost Penalties. IET Generation, Transm. Distribution. 12 (5), 1164–1170.
[doi:10.1049/iet-gtd.2017.1027](https://doi.org/10.1049/iet-gtd.2017.1027)
Sun, K., Yao, W., Fang, J., Ai, X., Wen, J., and Cheng, S. (2020). Impedance
Modeling and Stability Analysis of Grid-Connected DFIG-Based Wind Farm
with a VSC-HVDC. IEEE J. Emerg. Sel. Top. Power Electron. 8 (2), 1375–1390.
[doi:10.1109/jestpe.2019.2901747](https://doi.org/10.1109/jestpe.2019.2901747)
Tian, Y., Cheng, R., Zhang, X., Li, M., and Jin, Y. (2019). Diversity Assessment of Multi-Objective Evolutionary Algorithms: Performance Metric and Benchmark Problems. IEEE Comput. Intell. Mag. 14 (3), 61–74. [doi:10.1109/mci.2019.2919398](https://doi.org/10.1109/mci.2019.2919398)
Trovão, J. P., and Antunes, C. H. (2015). A Comparative Analysis of Meta-Heuristic Methods for Power Management of a Dual Energy Storage System for Electric Vehicles. Energ. Convers. Manag. 95, 281–296. [doi:10.1016/j.enconman.2015.02.030](https://doi.org/10.1016/j.enconman.2015.02.030)
Wang, B., Yang, Z., Lin, F., and Zhao, W. (2014). An Improved Genetic Algorithm
for Optimal Stationary Energy Storage System Locating and Sizing. Energies 7
[(10), 6434–6458. doi:10.3390/en7106434](https://doi.org/10.3390/en7106434)
Wang, Y., Wu, L., and Yuan, X. (2010). Multi-objective Self-Adaptive Differential
Evolution with Elitist Archive and Crowding Entropy-Based Diversity Measure.
[Soft Comput. 14 (3), 193–209. doi:10.1007/s00500-008-0394-9](https://doi.org/10.1007/s00500-008-0394-9)
Wong, L. A., Ramachandaramurthy, V. K., Walker, S. L., Taylor, P., and Sanjari, M.
J. (2019). Optimal Placement and Sizing of Battery Energy Storage System for
Losses Reduction Using Whale Optimization Algorithm. J. Energ. Storage. 26,
[100892. doi:10.1016/j.est.2019.100892](https://doi.org/10.1016/j.est.2019.100892)
Wu, T., Shi, X., Liao, L., Zhou, C., Zhou, H., and Su, Y. (2019). A Capacity
Configuration Control Strategy to Alleviate Power Fluctuation of Hybrid
Energy Storage System Based on Improved Particle Swarm Optimization.
[Energies 12 (4), 642. doi:10.3390/en12040642](https://doi.org/10.3390/en12040642)
Yang, B., Wang, J., Chen, Y., Li, D., Zeng, C., Guo, Z., et al. (2020). Optimal Sizing and Placement of Energy Storage System in Power Grids: A State-Of-The-Art One-Stop Handbook. J. Energ. Storage. 32, 101814. [doi:10.1016/j.est.2020.101814](https://doi.org/10.1016/j.est.2020.101814)
Yu, H., Tarsitano, D., Hu, X., and Cheli, F. (2016). Real Time Energy Management Strategy for a Fast Charging Electric Urban Bus Powered by Hybrid Energy Storage System. Energy 112, 322–331. [doi:10.1016/j.energy.2016.06.084](https://doi.org/10.1016/j.energy.2016.06.084)
Zhang, X. S., Yu, T., Yang, B., and Cheng, L. F. (2017). Accelerating Bio-Inspired
Optimizer with Transfer Reinforcement Learning for Reactive Power
[Optimization. Knowledge-Based Syst. 116, 26–38. doi:10.1016/j.knosys.2016.10.024](https://doi.org/10.1016/j.knosys.2016.10.024)
Zhou, B., Fang, J. K., Ai, X. M., Yang, C. X., Yao, W., and Wen, J. Y. (2021). Dynamic Var Reserve-Constrained Coordinated Scheduling of LCC-HVDC Receiving-End System Considering Contingencies and Wind Uncertainties. IEEE Trans. Sust. Energ. 12 (01), 469–481. [doi:10.1109/tste.2020.3006984](https://doi.org/10.1109/tste.2020.3006984)
Conflict of Interest: Authors SP, XG, XiL, and XuL were employed by the
company China Southern Power Grid Company Limited.
The remaining author declares that the research was conducted in the absence of
any commercial or financial relationships that could be construed as a potential
conflict of interest.
Publisher’s Note: All claims expressed in this article are solely those of the authors
and do not necessarily represent those of their affiliated organizations, or those of
the publisher, the editors and the reviewers. Any product that may be evaluated in
this article, or claim that may be made by its manufacturer, is not guaranteed or
endorsed by the publisher.
Copyright © 2021 Peng, Gong, Liu, Lu and Ai. This is an open-access article
[distributed under the terms of the Creative Commons Attribution License (CC BY).](https://creativecommons.org/licenses/by/4.0/)
The use, distribution or reproduction in other forums is permitted, provided the
original author(s) and the copyright owner(s) are credited and that the original
publication in this journal is cited, in accordance with accepted academic practice.
No use, distribution or reproduction is permitted which does not comply with
these terms.
-----
## GLOSSARY

BESSs: battery energy storage systems
CRF: capital recovery factor
DN: distribution network
IPDM: ideal-point decision method
MMSSA: multi-objective memetic salp swarm algorithm
MOPSO: multi-objective particle swarm optimization
O&M: operation and maintenance
PV: photovoltaic
RESs: renewable energy sources
SOC: state of charge
SSA: salp swarm algorithm
TOU: time of use
WT: wind turbines

### Variables

$P_{BESS,i}$: power capacity of the $i$th BESSs
$E_{BESS,i}$: energy capacity of the $i$th BESSs
$P_{cha,i}(t)$: charging power of the $i$th BESSs at time $t$
$P_{dis,i}(t)$: discharging power of the $i$th BESSs at time $t$
$\rho_{loss}$: TOU electricity prices
$P_{loss}(t)$: power loss at time $t$
$P_{grid}(t)$: power fluctuation of the grid connection point at time $t$
$x_{mi}^{j}$: position of the $i$th follower salp in the $m$th salp chain
$F_{m}^{j}$: position of the food source
$\omega_h$: weight of the $h$th objective function
$n$: population size of a single salp chain
$M$: the number of salp chains
$N_r$: the maximum size of the repository
-----
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.3389/fenrg.2021.707718?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.3389/fenrg.2021.707718, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "GOLD",
"url": "https://www.frontiersin.org/articles/10.3389/fenrg.2021.707718/pdf"
}
| 2021
|
[] | true
| 2021-07-27T00:00:00
|
[
{
"paperId": "db5e3e4a61e799fe2b75f4d2aac434a64673c15c",
"title": "Dynamic Var Reserve-Constrained Coordinated Scheduling of LCC-HVDC Receiving-End System Considering Contingencies and Wind Uncertainties"
},
{
"paperId": "cfb29ed8f75329bd3aae7da8ee9bf70a97f88870",
"title": "Optimal sizing and placement of energy storage system in power grids: A state-of-the-art one-stop handbook"
},
{
"paperId": "4ffcd25e146e022c0e0c6694e30625d3069d1464",
"title": "Clarifications of and improvements to the equations used to calculate the levelized cost of electricity (LCOE), and comments on the weighted average cost of capital (WACC)"
},
{
"paperId": "84f8ea26638bbbb95faaf13043745fbda78d99e2",
"title": "A Hybrid Framework for Topology Identification of Distribution Grid With Renewables Integration"
},
{
"paperId": "0cd6ccae312339957f27b63eba409755e7243b7b",
"title": "Impedance Modeling and Stability Analysis of Grid-Connected DFIG-Based Wind Farm With a VSC-HVDC"
},
{
"paperId": "76275f466a409a87d050a0ce9cdeb3827b3647f6",
"title": "Two-Stage Variable Proportion Coefficient Based Frequency Support of Grid-Connected DFIG-WTs"
},
{
"paperId": "65b178fcf2fa537755a40db47bb2a5652b4d1e2f",
"title": "Equilibrium optimizer: A novel optimization algorithm"
},
{
"paperId": "7658a0511537f14941f5f23b444b0e3896357b69",
"title": "Design of minimum cost degradation-conscious lithium-ion battery energy storage system to achieve renewable power dispatchability"
},
{
"paperId": "bc65f033fd96777959e04f489c13057642e5f772",
"title": "Multi-objective optimization strategy for distribution network considering V2G-enabled electric vehicles in building integrated energy system"
},
{
"paperId": "04a86f467bde4ff182d32be80431a72fa53a9f2a",
"title": "Optimal integration of DGs into radial distribution network in the presence of plug-in electric vehicles to minimize daily active power losses and to improve the voltage profile of the system using bio-inspired optimization algorithms"
},
{
"paperId": "616998c81f1fe8fa26bc6273e7af7e56bcc64736",
"title": "RETRACTED ARTICLE: Multi-objective energy management in microgrids with hybrid energy sources and battery energy storage systems"
},
{
"paperId": "b9ba90834b61e8e378bcbb36ea5abf0585da73a4",
"title": "Impact of Power Grid Strength and PLL Parameters on Stability of Grid-Connected DFIG Wind Farm"
},
{
"paperId": "b12945cd80dbcfa593093ba292ebb1649dc6ced8",
"title": "Optimal placement and sizing of battery energy storage system for losses reduction using whale optimization algorithm"
},
{
"paperId": "7cc01fd5ff07da3688e8cdd2236a0788e33b2f7f",
"title": "Diversity Assessment of Multi-Objective Evolutionary Algorithms: Performance Metric and Benchmark Problems [Research Frontier]"
},
{
"paperId": "ef82a3ce236bb81aed19cdc2bcb74f2ce347e347",
"title": "Security constrained co-planning of transmission expansion and energy storage"
},
{
"paperId": "7ef3ed9198de3ddd5f207485f63e39781ffbf670",
"title": "NSGA-II and MOPSO based optimization for sizing of hybrid PV/wind/battery energy storage system"
},
{
"paperId": "3e781240b4ecaf8f1fdb45be7be9e6ffa1ec9837",
"title": "A Capacity Configuration Control Strategy to Alleviate Power Fluctuation of Hybrid Energy Storage System Based on Improved Particle Swarm Optimization"
},
{
"paperId": "f8ae130acf9b858bf3101c0e56ac76c4be720974",
"title": "An adaptive learning control strategy for standalone PV system with battery-supercapacitor hybrid energy storage system"
},
{
"paperId": "7d4d83894ca153fb070390839783feb9c36099ba",
"title": "Optimal sizing and control of hybrid energy storage system for wind power using hybrid Parallel PSO-GA algorithm"
},
{
"paperId": "f4402fb740019d901dd81a8ac7e6ac840dafab5e",
"title": "Optimal planning of energy storage system in active distribution system based on fuzzy multi-objective bi-level optimization"
},
{
"paperId": "21aef7c41cd90a84f8e2e47b83923a218cfe4a23",
"title": "Energy storage capacity optimization for autonomy microgrid considering CHP and EV scheduling"
},
{
"paperId": "2096fc288ef32942b52602fc671a8fc1bca5c001",
"title": "Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems"
},
{
"paperId": "9d6504270f62b47976b99ea8597a096a602957fc",
"title": "Methodology for ESS-type selection and optimal energy management in distribution system with DG considering reverse flow limitations and cost penalties"
},
{
"paperId": "3397acfb90c9058ad41df3d2e065c00ee2f8dd1c",
"title": "Historical-Data-Based Energy Management in a Microgrid With a Hybrid Energy Storage System"
},
{
"paperId": "45199bf25e7041cabbbcf7b6efe34f99e724c7ef",
"title": "Accelerating bio-inspired optimizer with transfer reinforcement learning for reactive power optimization"
},
{
"paperId": "382bc5f2e4b9d934bd277edab3f24c41eef42088",
"title": "Optimum battery energy storage system using PSO considering dynamic demand response for microgrids"
},
{
"paperId": "60e3cebdc25a0ae06e1e5680d23f3b2537818e03",
"title": "An optimal control strategy for standalone PV system with Battery-Supercapacitor Hybrid Energy Storage System"
},
{
"paperId": "f95e4e3458530eea5d1e48b6add13346e96f7908",
"title": "Optimization of a battery energy storage system using particle swarm optimization for stand-alone microgrids"
},
{
"paperId": "71db2286f08c02c645b124f63b78599ec61fa7ef",
"title": "Real time energy management strategy for a fast charging electric urban bus powered by hybrid energy storage system"
},
{
"paperId": "cf4fbd38465721cfa3d4c43a3291620ab87d6423",
"title": "A comparative analysis of meta-heuristic methods for power management of a dual energy storage system for electric vehicles"
},
{
"paperId": "e3315a2156b2f27484f8df4d722d638328d84847",
"title": "Electrical energy storage systems: A comparative life cycle cost analysis"
},
{
"paperId": "aec372fd2af3de28e896165e04ec86a26d0ae61d",
"title": "A Big Data Architecture Design for Smart Grids Based on Random Matrix Theory"
},
{
"paperId": "075ef9a314d99c57d2c7fa89cd6237da432f150d",
"title": "An Improved Genetic Algorithm for Optimal Stationary Energy Storage System Locating and Sizing"
},
{
"paperId": "df37beb5bbda42f039a059e571de253b043a221c",
"title": "Memetic algorithms and memetic computing optimization: A literature review"
},
{
"paperId": "9618c8c9debef6f3c8e613641df49b68f47b542d",
"title": "Multi-objective self-adaptive differential evolution with elitist archive and crowding entropy-based diversity measure"
},
{
"paperId": "16494c8936b6d8c2916aeaa7e0f84ccdecfeff4f",
"title": "Optimizing a Battery Energy Storage System for Primary Frequency Control"
},
{
"paperId": "73ee298302fab0d92ec445266b46ad3b57dde858",
"title": "Handling multiple objectives with particle swarm optimization"
},
{
"paperId": "461f133dce9728aa66029efb0b233ab31ba7edb8",
"title": "Water distribution network design using the shuffled frog leaping algorithm"
},
{
"paperId": "339a43d4eee1452921b51ffe6ecf9b8224cd2a82",
"title": "Fault Tolerant Design Using Single and Multicriteria Genetic Algorithm Optimization."
},
{
"paperId": "7cc2bc3b1c6a673d3751e26c3c6b0e3feebc1cdb",
"title": "Genetic Algorithms for Multiobjective Optimization: FormulationDiscussion and Generalization"
},
{
"paperId": "8b9a748ae77f9235396e04301b82143feb1167fe",
"title": "On Evolution, Search, Optimization, Genetic Algorithms and Martial Arts : Towards Memetic Algorithms"
}
] | 13,344
|
en
|
[
{
"category": "Medicine",
"source": "external"
},
{
"category": "Medicine",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/01324b0a808e3440c60626f6cacec48ecc261d44
|
[
"Medicine"
] | 0.915568
|
A pay for performance scheme in primary care: Meta-synthesis of qualitative studies on the provider experiences of the quality and outcomes framework in the UK
|
01324b0a808e3440c60626f6cacec48ecc261d44
|
BMC Family Practice
|
[
{
"authorId": "7268962",
"name": "N. Khan"
},
{
"authorId": "2272463225",
"name": "David Rudoler"
},
{
"authorId": "40581660",
"name": "Mary McDiarmid"
},
{
"authorId": "2249555",
"name": "S. Peckham"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"BMC Fam Pract"
],
"alternate_urls": [
"https://link.springer.com/journal/12875",
"http://www.pubmedcentral.nih.gov/tocrender.fcgi?journal=29",
"https://bmcfampract.biomedcentral.com/"
],
"id": "7faf8088-9241-4d06-8503-9d72da732aab",
"issn": "1471-2296",
"name": "BMC Family Practice",
"type": "journal",
"url": "http://www.biomedcentral.com/bmcfampract/"
}
|
Background The Quality and Outcomes Framework (QOF) is an incentive scheme for general practice, which was introduced across the UK in 2004. The Quality and Outcomes Framework is one of the biggest pay for performance (P4P) schemes in the world, worth £691 million in 2016/17. We now know that P4P is good at driving some kinds of improvement but not others. In some areas, it also generated moral controversy, which in turn created conflicts of interest for providers. We aimed to undertake a meta-synthesis of 18 qualitative studies of the QOF to identify themes on the impact of the QOF on individual practitioners and other staff. Methods We searched 5 electronic databases, Medline, Embase, Healthstar, CINAHL and Web of Science, for qualitative studies of the QOF from the providers’ perspective in primary care, published in the UK between 2004 and 2018. Data were analysed using the Schwartz Value Theory as a theoretical framework to analyse the published papers through the conceptual lens of Professionalism. A line of argument synthesis was undertaken to express the synthesis. Results We included 18 qualitative studies that were on the providers’ perspective. Four themes were identified: 1) Loss of autonomy, control and ownership; 2) Incentivised conformity; 3) Continuity of care, holism and the caring role of practitioners in primary care; and 4) Structural and organisational changes. Our synthesis found that the Values enhanced by the QOF were power, achievement, conformity, security, and tradition. The findings indicated that P4P schemes should aim to support Values such as benevolence, self-direction, stimulation, hedonism and universalism, which professionals ranked highly and which have been shown to have positive implications for Professionalism and the efficiency of health systems. Conclusions Understanding how practitioners experience the complexities of P4P is crucial to designing and delivering schemes that enhance and do not compromise the values of professionals. Future P4P schemes should aim to permit professionals with competing high-priority values to be part of P4P or other quality improvement initiatives and to take on an ‘influencer role’ rather than being ‘responsive agents’. Understanding the underlying Values, and not just the explicit concerns of professionals, may ensure higher levels of acceptance and enduring success for P4P schemes.
|
## RESEARCH ARTICLE Open Access
# A pay for performance scheme in primary care: Meta-synthesis of qualitative studies on the provider experiences of the quality and outcomes framework in the UK
### Nagina Khan[1*], David Rudoler[2], Mary McDiarmid[3] and Stephen Peckham[4]
Abstract
Background: The Quality and Outcomes Framework (QOF) is an incentive scheme for general practice, which was
introduced across the UK in 2004. The Quality and Outcomes Framework is one of the biggest pay for performance
(P4P) schemes in the world, worth £691 million in 2016/17. We now know that P4P is good at driving some kinds of
improvement but not others. In some areas, it also generated moral controversy, which in turn created conflicts of
interest for providers. We aimed to undertake a meta-synthesis of 18 qualitative studies of the QOF to identify
themes on the impact of the QOF on individual practitioners and other staff.
Methods: We searched 5 electronic databases, Medline, Embase, Healthstar, CINAHL and Web of Science, for
qualitative studies of the QOF from the providers’ perspective in primary care, published in the UK between 2004 and 2018. Data were analysed using the Schwartz Value Theory as a theoretical framework to analyse the published
papers through the conceptual lens of Professionalism. A line of argument synthesis was undertaken to express the
synthesis.
Results: We included 18 qualitative studies that were on the providers’ perspective. Four themes were identified: 1)
Loss of autonomy, control and ownership; 2) Incentivised conformity; 3) Continuity of care, holism and the caring role
of practitioners in primary care; and 4) Structural and organisational changes. Our synthesis found that the Values that were
enhanced by the QOF were power, achievement, conformity, security, and tradition. The findings indicated that P4P
schemes should aim to support Values such as benevolence, self-direction, stimulation, hedonism and universalism,
which professionals ranked highly and which have been shown to have positive implications for Professionalism and efficiency of
health systems.
(Continued on next page)
[* Correspondence: nkhan786can@gmail.com](mailto:nkhan786can@gmail.com)
1Independent Researcher, Ontario, Canada
Full list of author information is available at the end of the article
© The Author(s). 2020 Open Access This article is licensed under a Creative Commons Attribution 4.0 International License,
which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give
appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if
changes were made. The images or other third party material in this article are included in the article's Creative Commons
licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons
licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain
[permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.](http://creativecommons.org/licenses/by/4.0/)
[The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the](http://creativecommons.org/publicdomain/zero/1.0/)
data made available in this article, unless otherwise stated in a credit line to the data.
-----
Background
Internationally, there has been substantial interest in the
use of pay for performance (P4P) schemes for primary
care in high, medium and low-income countries. The
longest standing and most comprehensive scheme, is the
Quality and Outcomes Framework (QOF) for United
Kingdom (UK) general practice. However, in the UK
there have been increasing calls for the QOF to be abolished and in 2016 Scotland ended the scheme. The QOF
now continues only in England, Wales and Northern
Ireland [1, 2]. In early 2017, the British Medical Association (BMA) called for the QOF to be suspended to reduce bureaucratic pressures and free up clinical time [3].
In April 2016, National Health Service (NHS) England
commenced a review of the QOF, acknowledging that it
may have ‘served its purpose’ and may be ‘a barrier to
holistic management’ [4, 5]. Published in July 2018 the
Review of the Quality and Outcomes Framework in England [6], concluded that the scheme should be revised
with a greater emphasis on an approach that would “…
increase the likelihood of improved patient outcomes,
decrease the likelihood of harm from over-treatment
and improve personalisation of care” (p11). Among the
recommended changes, the report outlined an approach
that included supporting practices to undertake quality
improvement activities set out in the GP contract for
2019/20 [6]. It also supported the development of pooled
incentive schemes or shared savings programs for networks of practices [6]. In England the proposal for
shared savings and financial incentive schemes signals a
shift from the focus on individual practices with new incentive schemes seeking to influence primary care professional behaviour through more collective and quality
improvement approaches to “… facilitate achievement of
system efficiencies and increase income for reinvestment
to primary care networks” [6].
While QOF has predominantly had a clinical practice
focus (some process and organisational criteria were dropped
after just a few years), it has always had a practice-wide impact and studies suggest it has had a significant influence on
the functioning and organisation of practices [6, 7]. Though
the QOF has had an impact on clinical practice, it has also
had some unintended consequences. Understanding the importance and impact of these consequences is useful for
decision-makers designing P4P schemes [7].
To date, most studies of the QOF have used quantitative methods to evaluate the impact of QOF on clinical performance [8–10], and the universally high QOF achievement means that practices have little motivation to improve achievement further. However, ‘high performance’ does not necessarily mean ‘high quality’ [6]. Motivation to deliver high quality care among health professionals is complex, and it is likely that motivational factors other than financial rewards may be effective [6]. Therefore, it is important to consider other ways
of motivating health professionals to deliver high quality
care [6]. MacDonald and others have argued that it is possible to avoid unintended consequences of P4P systems if
they are designed with the involvement of clinicians and
aligned with their underlying values [11, 12]. As governments are developing schemes for quality improvement,
they need relevant and context-sensitive evidence to support policy interventions, which means that there is significant ambiguity over the optimal design of such
schemes to maximise efficiency and tolerability. Decisionmakers are increasingly using qualitative evidence to
understand various socioeconomic contexts, health systems and communities [13]. Furthermore, this type of evidence is useful to assess the needs, values, perceptions and
experiences of stakeholders, including policymakers, providers, communities and patients, and is thus crucial for
complex health decision-making [7, 13].
For this paper, we conducted a meta-synthesis of the
available qualitative research on QOF to identify lessons
that will be useful for decision-makers in designing and
implementing new incentive schemes. Drawing on evidence from the UK provides the widest range of studies
on one scheme from which to develop clear lessons for
those factors that might support or hinder particular behaviours and outcomes within P4P schemes.
Method
For this review, we sought to understand the impacts of QOF on individual clinicians and other groups of
-----
professionals in primary care, using a lines-of-argument (LOA) synthesis. The LOA synthesis involves building up a picture of the whole from studies of its parts [14]; it assists knowledge synthesis through a process of re-conceptualisation of themes across several published qualitative papers [14, 15] and is an interpretative approach.
We then applied the Schwartz Value Theory as a theoretical framework to our synthesis. Schwartz proposes
that there are ten broad Value Domains that are universal and fairly comprehensive [16]. The theory defines
these ten broad Values according to the motivation that
underlies each of them (described in Table 1) [17]. Although the theory discriminates ten Values, it postulates
that, Values form a continuum of related motivations
(the circular structure in Fig. 1 portrays the total pattern
of relations of conflict and congruity among Values, the
closer the Values are on the circular structure then that
indicates that they are more congruent and the further
away they are, indicates that they are more conflicting
[20]. The theory explains that among some Values there
is conflict with one another (e.g., benevolence and
power) whereas others are congruent (e.g., conformity
and security) [18]. One basis of the Value structure is
the fact that actions in pursuit of any Value have consequences that conflict with some Values but are congruent with others. Also actions in pursuit of Values have
practical, psychological, and social consequences for professionals [17] and their profession [21].
Professionalism is fundamental to good medical practice; Professor Dame Judy Dacre states that Medical Professionalism has changed and must keep up to date with the demands of modern-day clinical practice [22].
Table 1 Schwartz Value Theory: The Ten Basic Values
Openness to change
Self-Direction: Independent thought and action—choosing, creating,
exploring.
Stimulation: Excitement, novelty and challenge in life.
Hedonism: Pleasure or sensuous gratification for oneself.
Self-enhancement
Achievement: Personal success through demonstrating competence
according to social standards.
Power: Social status and prestige, control or dominance over people
and resources.
Conservation
Security: Safety, harmony, and stability of society, of relationships, and
of self.
Conformity: Restraint of actions, inclinations, and impulses likely to
upset or harm others and violate social expectations or norms.
Tradition: Respect, commitment, and acceptance of the customs and
ideas that one’s culture or religion provides.
Self-transcendence
Benevolence: Preserving and enhancing the welfare of those with
whom one is in frequent personal contact (the ‘in-group’).
Universalism: Understanding, appreciation, tolerance, and protection
for the welfare of all people and for nature.
It has been postulated that the professional organisation of medical work no longer reflects the changing
health needs caused by the growing number of complex
and chronically ill patients [21]. The Royal College of
Physicians (RCP) redefined Professionalism in 2018, advising that it benefits patients, increases the job satisfaction of doctors, makes for superior organisations, and improves the productivity of health systems. The
RCP defined Professionalism as ‘a set of values, behaviours and relationships that underpin the trust the public
has in doctors’ [22]. They described seven professional
roles: doctor as healer, patient partner, team worker, manager and leader, advocate, learner and teacher, and innovator (Table 3). The importance of Medical
Professionalism has been well documented in the literature [31], together with its effects on the doctors’ relationships with their patients, quality of care, and
ultimately health and illness outcomes [32]. For that reason, we further include Professionalism as a conceptual
lens to contextualise our analysis in this review [33].
Search strategy and data extraction
To identify relevant studies, we searched for peer-reviewed empirical research on QOF using electronic database searching, hand-searching, and web-based searching. The following databases were initially searched: Medline, Embase, Healthstar, CINAHL, and Web of Science.
We also searched the reference lists of obtained papers.
The details of our electronic search are included in the
Additional file 1.
We included studies that reported primary qualitative
research (in-depth interviews, focus groups, ethnography, observation, reflective diaries, case-studies and reviews containing qualitative analysis) of the QOF
published in English between 2004 (when QOF was introduced) and 2018. We excluded studies that did not
specifically focus on the QOF in the UK or that did not involve primary qualitative research methods.
The search of electronic databases identified 33 relevant papers (see Fig. 2, PRISMA flowchart, including
reasons for exclusion). We excluded 15 papers and the
18 papers included were independently reviewed by two
researchers (N.K and D.R) and any disagreements discussed. We erred on the side of caution and endeavoured to keep all the 18 studies in until the researcher
(N.K.) had independently extracted data from these papers and applied the exclusion criteria.
The researcher (N.K.) extracted data and assessed the
eligibility criteria for all retrieved papers, which were
then appraised by a second researcher (D.R.). Disagreements between researchers were resolved through discussion with S.P. Differences between researchers
tended to arise because of different understandings of
some of the study questions and because of different
-----
interpretations of what authors of the papers had written
and generally related to the qualitative research methods
used. The qualitative papers were initially quality
assessed by N.K. using the British Sociological Association criteria for the evaluation of qualitative research papers [34], and if any discrepancies arose they were discussed with S.P. The scale comprises 20 questions about
the relevance of the study question, appropriateness of
qualitative method, transparency of procedures, and ethics. In order to make judgements about the quality of
papers, we dichotomised each question to yes or no, in a
separate table. All the qualitative papers included in this
synthesis were published in peer reviewed journals and
adhered to transparency of high quality work.
Following the systematic steps of the meta-ethnography approach, we included 18 qualitative research studies in the final qualitative synthesis.
Data analysis and interpretation
Meta-ethnography is a systematic but interpretative approach to analysis that begins with noting verbatim and
coded text in terms of first-order and second-order constructs. Translations of these constructs are then synthesised across papers to form third-order constructs, before finally constructing the synthesis using reciprocal, refutational, or line-of-argument approaches [15,
35]. Our data analysis was undertaken using a ‘line of argument’ synthesis which serves to reveal what is hidden
in individual studies and to discover a ‘whole’ among a
set of parts [15]. This method has previously been
adapted for utility in the syntheses of qualitative data in
healthcare research [35, 36]. We placed the 18 papers
identified in a table that included relevant details of the
study setting and research design (see Additional file 2
Table. 4).
Our first-order constructs represented the primary
data reported in each paper (see Additional File 3 [A3],
Table. 5). The emergent themes from the papers represented our second-order constructs (A3, Table. 5). They
were extracted utilising a more fine-grained approach, in
which the researcher (N.K.) went through each paper in
a detailed and line-by-line manner and the papers were
reviewed for common and recurring concepts. As a way
of remaining faithful to the meanings and concepts of
each study; we preserved the terminology used in the
original papers in the grids. We then combined and synthesised these themes (taken from the published papers)
to create our third-order constructs (see Additional file 4,
Table 6). Each cell of the table was considered in turn; from this, we identified our key concepts and consequent themes, and once these were identified (see Additional file 5, Table 7), we mapped the concepts against the ten Values, using them as our theoretical framework (Fig. 3). These were then compared with Professionalism as defined by the seven professional roles in Table 3.
-----
Results
Main findings from the synthesis
The 18 papers were published between 2008 and 2018
in the UK. The 18 papers included were of the providers’ perspective: general practitioners (GPs), including
GP leads, principals, partners and salaried [23–29, 37–
42], nurses [25, 26, 37, 38, 43, 44] (practice and condition specialist) [28–30, 37, 40–42, 44, 45], healthcare assistants [25, 37, 45] and administrative staff [25, 26, 30,
37, 38, 41, 45] (practice managers, IT) on their views
and experiences of the QOF. The majority of papers utilised one-to-one retrospective semi-structured interviews
[23, 24, 26, 27, 30, 40–44, 46, 47], focus groups [6, 28,
37, 45, 48], observations [25, 39], using thematic analysis
[27, 37, 38, 41], framework approach [26, 44], constant
comparison [29, 43] (additionally, see supplementary
material 2 for the summary of the sample size, research
questions and individual participant characteristics, including the findings from the studies).
The synthesis identified four themes (Table 2): 1) Loss
of autonomy, control and ownership; 2) Incentivised conformity; 3) Continuity of care, holism and the caring role
of practitioners in primary care; and 4) Structural and organisational changes. In the next section we present the
thematic analysis (summarised in Table. 3) which includes
the application of the synthesis to the ten Values with implications for aspects of Professionalism.
Loss of autonomy, control and ownership
We found that this theme, identified from the published papers [6, 25–28, 30, 42], included professionals’ submission to the QOF targets despite their applied concerns, such as the ethical distress caused by a reductionist approach to managing markers of chronic disease and its incompatibility with the humanitarian values of general practice [49]. For instance, most health professionals
believed that they needed to place biomedical care in the
context of their patients’ concerns and life experience
[50]. We also found that professionals wanted to retain
control and clinical autonomy; however on closer examination and within the context of the QOF this took the
form of modifying the way structured tools of the QOF
were utilised by the professionals.
“The more templates that get introduced, it takes
away the clinicians freedom and that sort of rapport
that you can build with a patient is much more difficult when you have to go through set (depression
score) questions.” (p. 413) [42]
“…but I don’t particularly like them... because I tend
to write my notes and then do everything on the
computer when the patients gone.” (p. 57) [45]
Both, professionals and patients were aware of the
QOF targets acting as an independent mechanism of
control, which essentially changed the nature of the
discussion between patient and professionals [26, 29,
44, 45].
“Some patients will come to you and they’ll plead
with you, ‘Please don’t give me any tablets, I’ll bring
my blood pressure down, I’ll do anything. I’ll bring
it down’, and again they’re not horrendously high,
they’re like say 140/90 or whatever … but we’re saying to them ‘well, look we’ve checked it three times
now and it remains raised, you’re clinically classed
as hypertensive, we follow these guidelines and this
is what we should be doing with you.” (p. 143) [25]
Nearly all the published papers showed that the main
motivation for practice staff to follow the QOF targets
was the link with income loss [23, 24, 26, 29, 40–45].
“So if you deviate from that [QOF] because of the
individual need. You have complete autonomy to,
but there are financial implications to you because
of that…So you still have autonomy, but you lose income.” (p. 57) [24]
This created a conflict for practice staff and suggested a decreasing sense of clinical autonomy, especially in areas that were clinical and easy to measure, bound by templates, or driven through the use of IT tools [24, 42].
-----
Table 2 The Impact of the QOF Mapped to the Ten Motivational Values

(a) Loss of autonomy, control and ownership
QOF modifications: templates; guidelines; indicators; governmental goals.
Synthesis of the main findings: Most papers described a sense of decreased clinical autonomy and loss of professionalism [39]. They also described a sense of micromanagement from above [28] and frequently cited the late communication about changes to the wider QOF and the year-on-year variability in the occurrence and timing of changes to indicators as politically motivated [28, 39].
Influence on the ten basic values [18]: congruent with power, conformity, security and achievement; in conflict with self-direction, stimulation, benevolence, universalism, hedonism and tradition.

(b) Incentivised conformity
QOF modifications: raised standards in basic care; drove provider care; systemized and standardised care.
Synthesis of the main findings: In the papers reviewed, professionals recognized that QOF had led to considerable extra income at the practice level [29]. As the owners of their organizations, economic factors were more salient and apparent in principals’ accounts. Subsequently, finance and achieving maximum income became an increasingly key issue in participants’ beliefs about QOF and their adherence to QOF work [28].
Influence on the ten basic values [18]: congruent with achievement, conformity, security, power and tradition; in conflict with self-direction, stimulation, benevolence, hedonism and universalism.

(c) Continuity of care, holism and the caring role of clinicians in primary care
QOF modifications: neglected areas of care targeted; focus on chronic disease management; certain aspects of professionalism threatened; indicators in conflict with the patient-advocate role.
Synthesis of the main findings: Although participants in the papers reviewed emphasised the importance of traditional general practice values, such as holism and continuity, the majority felt that the 2004 changes had negatively impacted on these values. Participants related that patients now experienced less continuity with their GPs [41].
Influence on the ten basic values [18]: congruent with conformity, power, security and achievement; in conflict with benevolence, universalism, self-direction, stimulation and tradition.

(d) Structural and organisational changes
QOF modifications: information technology (IT); practice managers; increased skill mix; monitoring systems; recording performance; surveillance.
Synthesis of the main findings: All the practices studied in the papers included in the review had changed their modes of operation in response to the QOF [27, 29, 43, 45]. The role of monitoring compliance with the coding regime, which feeds into the contract monitoring system and highlights deficient coding and recording performance amongst staff, contributed on one hand to increased surveillance and on the other to the doctors’ sense of self-worth [45].
Influence on the ten basic values [18]: congruent with power, conformity, achievement, security, stimulation, self-direction and universalism; in conflict with tradition, benevolence and hedonism.
Respondents in one of the papers suggested that most of the internationally agreed attributes of medical professionalism were not perceived or described as being threatened by the introduction of the QOF [42]. However, on further analysis we found that acquiring a say in the development of indicators was important to GPs and was linked to the freedom to practise in the patient’s best interest, indicating that aspects of Professionalism were being affected [22].
Incentivised conformity
The papers indicated that the extensive improvement in QOF scores was perceived as a result of the consistency and recording of incentivised activities, with the associated outcomes and new protocols being introduced within practices and now connected to wider governmental objectives through the mechanism of the QOF [23, 24, 26, 29, 40–45].
“…There are lots of systems in operation here that
other people are operating.” (p. 53) [45]
"It's raised standards, narrowed health inequalities, and introduced evidence-based medicine and err the rest of the world look up on err us and our implementation of QOF with a degree of envy. It's evidence-based medicine, standardised care." (p. 412) [42]
Table 3 Application of the Synthesis to the Values and Implications for Professionalism

(a) Loss of autonomy, control and ownership
Application of the findings to the Values: Activated values. When values are stimulated, they become infused with feeling. Therefore, GPs for whom independence is an important value may experience provocation if their independence (self-direction) seems to be threatened, discouragement when they are helpless to keep their professional autonomy (power), and happiness when they can enjoy their freedom as self-regulated practitioners (security). Control and ownership: professionals appeared preoccupied by their lack of control in achieving indicator targets (achievement), especially if dependent upon patient cooperation, quality of care (security), and the implementation of changes perceived as imposed from outside (power) [23, 24].
Implications for aspects of professionalism [22]: Doctor as manager and leader. Loss of autonomy impacts clinical engagement and leadership, which is pivotal to the success of health systems. Doctors make decisions that determine where resources flow, yet there is a conflict experienced between doctors as employees of huge complex systems and the autonomy of individual doctors. Autonomy is crucial for the delivery of care, but modern autonomy is more complex and nuanced and needs greater judgement. Doctor as team worker. Relinquishing control is important to allow effective teamwork, as professional satisfaction, engagement, and effective teamwork improve patient outcomes and satisfaction, as well as organisational performance and productivity. Teamwork has become more important because of the growing complexity of patients' problems and health systems, and the increasing range of possible interventions.

(b) Incentivised conformity
Application of the findings to the Values: Motivating actions. Those GPs for whom social order, justice, and medical superiority (power, achievement, and security) are important values are motivated to pursue these incentivised goals (self-satisfaction) in the context of pay-for-performance schemes. GPs' values form an ordered system of priorities that characterise them as individuals and as general practitioners (professionalism), with a specialist set of values, behaviours and relationships that underpin the trust the public has in doctors [22] (tradition, benevolence, universalism). GPs who hold expert positions as generalist medical practitioners are seen as the first point of contact for patients in healthcare services (power, security). They offer a doctor-patient relationship with mutual understanding of the problems that are brought into the practice (tradition, benevolence, universalism).
Implications for aspects of professionalism [22]: Doctor as advocate. Professionalism requires that doctors advocate on behalf of their patients, all patients and future patients, yet incentivised conformity and indicators conflicted with this aspect. However, advocacy on patient safety is one concern that should be given the highest priority. Raising concerns about poor care, or the potential for poor care, is a professional duty for all doctors but is not easy; such advocacy needs training, practice, and mentorship.

(c) Continuity of care, holism and the caring role of clinicians in primary care
Application of the findings to the Values: Consequences of cherished values. Holism and continuity of care (benevolence), for example, are relevant in the workplace for GPs (universalism). There was a tension between standardised, QOF-driven care and being 'patient-centred', with clinicians reporting that "it's not always easy to deal with disregarding, or setting aside, a patient's perceived need, or to move onto a more pressing practice target (conformity) during personal discussions" [23, 25–30]. The trade-off between relevant, competing values guides attitudes and behaviours. When values are shown to be in conflict, not corresponding to the cherished value, practitioners must decide whether to attribute more importance to achievement (completing QOF targets, case finding, etc.) or justice (working in the best interest of others; benevolence, universalism), and to novelty or tradition (the medical model). Any attitude or behaviour typically has implications for more than one value. For example, a 'tick box' approach to medicine encouraged by pay-for-performance indicators might express and promote EBM and conformity values at the expense of hedonism and stimulation values for GPs. Values influence action when they are relevant in the specific context, such as pay for performance (hence likely to be activated), and important to the GPs (status, professional progression, and EBM: achievement, power, security) and to bureaucrats (a focus on GPs' performance against QOF targets: conformity).
Implications for aspects of professionalism [22]: Doctor as patient partner. The patient-doctor relationship is at the core of the doctor's work. The traditional relationship of patient deference to doctors has been replaced by an equal partnership. Values including integrity, respect, and compassion must underpin the partnership with patients. Integrity involves staying up to date, but also being willing to admit one's limitations. Doctors can show respect for patients by listening to them actively, involving them in decisions, and respecting their choices (patient-centredness). Compassion means not just recognising the suffering of the patient but acting to reduce it.

(d) Structural and organisational changes
Application of the findings to the Values: Multiple values. Values guide the selection or evaluation of actions, policies, people, and events in practice organisations. Hence GPs, in self-regulated disciplines (self-direction), decide what is good or bad, justified or illegitimate, worth doing or avoiding, based on possible consequences for their cherished values. But the impact of values in everyday decisions is rarely conscious, and a decision activates multiple values at once. The results show that GP values entered awareness when the QOF actions or judgments GPs were considering had antagonistic or conflicting implications for multiple values they also cherished, such as the use of templates (IT) during consultations. GPs are guided by professional practice, which is regulated by guidelines agreed by GPs; they work, to a degree, autonomously, although subject to audit and some monitoring. The QOF impinges by directing activity in a standardised way (conformity, power).
Implications for aspects of professionalism [22]: Doctor as innovator. The challenge for doctors is how to innovate amid the innovation happening all around them. There was a fear that the use of the machine (in this context, the template) could lead to diluted face-to-face patient-doctor consultations and a collaboration in which the machine (template) effectively becomes an independent actor. It is doctors, rather than machines, who can provide solidarity, understanding, and compassion to patients.
Respondents in the papers reviewed also stated that the incentive payments attached to the QOF did drive provider behaviour and encouraged them to work towards performance targets [23–26, 29, 40–45].
“They’re trying to control our income and we’re trying to get as much money out of them as we can.”
(p. 412) [42]
Financial rewards in return for extra work were felt to have increased morale for some within the profession [23, 25, 30, 40].
“We’re so hard up at the moment, so desperate for
income wherever we can get it, you can’t afford to
pass up a chance of income, so that’s probably as
much a driver . . . even if we didn’t necessarily buy
into the clinical benefit, it was worth doing to try and
earn the money because we needed to.” (p. 7) [26]
Practices had experienced rising practice income, and our synthesis findings indicated that certain Values were enhanced by this, particularly power, achievement, conformity, security, and tradition values (Fig. 4). Future P4P schemes should aim to support Values such as benevolence, self-direction, stimulation, hedonism and universalism, which professionals ranked highly and which have been shown to have positive implications for Professionalism and the efficiency of health systems (Fig. 5). Correspondingly, lower job satisfaction was associated with intention to leave general practice [51]. The papers in the synthesis suggest that the rising income was also linked to practices' adherence to the QOF, a factor that led to the gradual routinisation of the scheme into everyday practice, increasing systematised and standardised care [25, 26, 29, 30]. It was also acknowledged that some aspects of neglected clinical activity were appropriately targeted by the QOF.
“Patient care has definitely improved because we’ve
been doing that, and so I think some people believe
we’re number crunching but I don’t think we are in
this practice, I think we are actually meeting targets
the patients’ care is benefiting.” (p. 52) [45]
Therefore, any changes to the QOF are, and will be, controversial, mainly because they represent a substantial proportion of general practitioners' incomes [52]. Setting the political machinations to one side (the Department of Health has been clawing back from the original settlement since 2004), Gillam and Steel believe that the incentive payments in the QOF also comprise too large a proportion of general practice income. They suggest that money should be taken out of the QOF and redirected to supporting general practice in other ways [52]. However, there is no link between the size of the financial incentive and the likely health gain from the activity incentivised [53].
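To make the payment mechanics discussed above concrete, the sketch below shows how a points-based P4P scheme of this kind translates indicator achievement into practice income. The thresholds, point allocations, and price per point are hypothetical illustrations, not actual QOF figures, though the linear scaling of points between lower and upper achievement thresholds, and the removal of exception-reported patients from the denominator, mirror the QOF's general design.

```python
# Illustrative sketch of a points-based pay-for-performance calculation.
# All thresholds, point allocations, and the price per point are hypothetical
# examples, not actual QOF values.

def indicator_points(achieved, eligible, exceptions, lower, upper, max_points):
    """Points for one indicator: achievement is measured on eligible patients
    after exception-reported patients are removed from the denominator, and
    points scale linearly between the lower and upper thresholds."""
    denominator = eligible - exceptions
    if denominator <= 0:
        return 0.0
    rate = achieved / denominator
    if rate <= lower:
        return 0.0
    if rate >= upper:
        return float(max_points)
    return max_points * (rate - lower) / (upper - lower)

# Hypothetical practice data:
# (achieved, eligible, exceptions, lower, upper, max_points)
indicators = [
    (380, 520, 40, 0.40, 0.90, 17),  # e.g. blood pressure controlled
    (200, 260, 10, 0.45, 0.80, 10),  # e.g. annual chronic disease review
]

price_per_point = 100.0  # hypothetical payment per point
total_points = sum(indicator_points(*row) for row in indicators)
print(f"Total points: {total_points:.1f}, "
      f"payment: £{total_points * price_per_point:.2f}")
```

A design feature visible even in this toy version is that once an indicator reaches its upper threshold, or once a patient is exception-reported, further effort earns nothing, which is consistent with the scepticism about "maximum points" and the selective prioritisation reported below.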
There was also a greater acceptance of standardised approaches [23, 25, 29, 30, 40], which may have restricted personalised care for the individual patient, complicated the management of multiple conditions over time [52], and narrowed the focus of the consultation, reducing the time to deal with the wider context of the illness [37]. This was further confounded by very limited access to specialist input for patients with more complex, treatment-resistant or recurrent mental health problems [37].
"We developed this zero tolerance to blood pressure a while ago, no one is allowed to say it's a little bit up leave it, it's not acceptable so it has to be if it's up do something about it, if you're not doing something about it because if we go and find they're not on target and you look and they've seen somebody and they've not acted on it yeh, I'll have a little word." (p. 55) [45]
“…the interesting thing for me is that since the
introduction of PHQ-9 I find in terms of material
I’m treating the score, not the patient. Because, you
know, it’s such a short barrier in the consultation.”
(p. 282) [37]
Yet this does not diminish the ethical imperative to practise in the light of the best evidence; the challenge is to deliver good-quality technical care for medical conditions while simultaneously considering what is in the best interests of the whole person [52].
Some of the QOF’s design flaws are inherent to all pay
for performance schemes [54]. As such, areas of high
performance will continue to elicit negative feelings,
arising from scepticism about achieving maximum
points [23, 25, 29, 30, 40].
“I think it’s anyone who gets maximum points is
probably bent, I think it’s almost impossible to get
maximum points without some kind of fudge. That
maybe unkind but we haven’t got maximum points.
. . I think its easy just to tick the boxes when you
haven’t done it.” (p. 137) [44]
Time pressures were reported to be the motivating
factor for prioritising areas of care that were financially
incentivised [30].
“I think because there is limited time and if you have
to focus on something in order to get the money, obviously if you don’t have the time, then it’s going to
be ignored automatically.” (p. 1059) [30]
Continuity of care, holism and the caring role of
practitioners in primary care
Continuity of care was a central feature of both doctor and practice nurse roles. Organisational and structural changes were attributed to the loss of continuity of care; consequently, accessing the same GP was difficult for patients.
“Increased staff numbers and changed working patterns had contributed to a loss in continuity of care
and choice of who to see. The appointment targets
paradoxically seemed to have made access worse in
many practices, due to requirements to book on the
same day . . . We’ve had to have increased staff and
then you very quickly lose continuity if you’ve got a
lot of people waiting.” (p. 136) [44]
For most nurses, interpersonal continuity was described as a relatively new feature of their role as they
assumed further responsibility for patients with chronic
conditions [40].
“…with asthma, the patients are beginning to see
the same nurse, you know, rather than a different
GP… I will see the diabetics and they know that I’ve
been trying to say to them, ‘Can you come, you
know you can always come back,’ and I always try
and make it so that there is open access for them if
they have got a problem.” (p. 230) [40]
Holistic care and the caring role of GP practitioners were not recognised in the QOF, despite this being seen as a core component of clinical professional roles [22]. Patient-centredness was deemed to be of pronounced significance in the papers reviewed [28, 29, 39, 43]. However, there was a tension between standardised, QOF-driven care and being 'patient-centred', with practitioners reporting that 'it's not always easy to deal with disregarding, or setting aside, a patient's perceived need or to move onto a more pressing practice target during these personal discussions' [24–26, 28–30, 40].
“I tend to deal with the problem patients come with
first. And then if it’s appropriate to ask questions,
you know, ticking the boxes, I will do that at the
end of the consultation.” (p. 231) [40]
“We spend a lot of time visiting... and yet frequency
of home visits doesn’t get QOF points ... Caring,
that’s what doctors do.” (p. 136) [43]
Papers showed that GPs were more likely to exception-report indicators they perceived as having had relatively little systematic evaluation or as not proven to work. They felt such indicators were contrary to their role as a patient advocate and, in their clinical judgement, not relevant to individual patient-centred care [46]. Patient-centredness was defended by professionals in 'everyday practice', given its relevance to patient care and the patient-doctor relationship [52, 55–58].
"…Well I think it has put a lot of strain on the partners and practices to get all the QOF points … I mean when it came to get all these points just to get more money, I think it's put more strain on doctors and it has lost the … just normal care for patients, taking them as a patient rather than as another … object to get points." (p. 283) [23]
“I think that the art of the job has declined and, I
don’t know, the sense of feeling that you could be
with people rather than be doing. It’s quite hard to
define but there’s more to general practice than
doing ... clinical things.” (p. 136) [43]
The synthesis indicates that the QOF embodied an approach to achieving evidence-based medicine (EBM), yet
we found no evidence in the papers that linked the compatibility of EBM with a more holistic approach to
patient-centred care, as perceived by the professionals
and as linked to achieving aspects of Professionalism.
Structural and organisational changes
QOF was viewed as increasing the responsibility of lead
partners (doctors) in most areas of their practice. This
included supervising the work of nursing colleagues,
which was seen as an increase in their workloads [26, 28,
40, 44].
“There is an environment and ethos of increased surveillance and performance monitoring.” (p. 232) [40]
“I suppose it feels like I’m being watched. It’s a bit
like big brother – you’ve not ticked these boxes.”
(p. 232) [40]
For some, this has come at the expense of work-life balance, which manifested as astonishment at the way their chosen profession grasped such issues [24, 43, 44].
“My practice does not understand the concept
[work life balance]. And I, we’ve two or three away
days a year, I’m often talking about it. And they
don’t understand. They’ll take me aside and ‘what
do you mean?’ I just find that astonishing you
know…, if you have a bereavement of this or that,
you just get on with it basically and you don’t expect to be sick for anything… So I mean its just life
I’ve chosen, it’s very busy but I do manage to stay
sane through it.” (p. 54) [45]
Salaried GPs carried less responsibility for QOF activity than the QOF leaders in areas such as the surveillance of others, meeting targets on time, and the business side of the practice [23].
“I think the balance of, of that is [partners] have a lot
more responsibility...you have to take a lot more responsibility for the practice and more leadership. And
I quite enjoy ... coming in doing the job and, and not
having to worry about that so much. And you get
paid more money but I think the balance of the hours
you’d be spending and their stress of the job would
probably be higher as a partner.” (p. 284) [23]
Those who eventually wanted to succeed to GP principal status took greater responsibility for QOF activity than those who wanted to remain salaried [24, 40, 45].
“But sometimes you do feel that you are not really
involved in decision making. That’s fine for some
people, but for me, I do like a bit of control. So I
think at the moment its fine, but I think eventually I
would want part of the decision making process.”
(p. 285) [23]
We found that the QOF also impacted the role of nurses, but not entirely in the same way as it did their GP colleagues [43, 44]. Nurses initially perceived the changes to their role to be beneficial, leading to professional progression (related to achievement values), but not to any greater authority or any increase in status, which for their GP colleagues were achieved through alignment with QOF income.
"I'm not comparing it [GP salary] to what the papers say they were walking off with, but (they got) financial rewards for a lot of the work that has been done by nurses." (p. 714) [43]
P4P schemes have been focused on certain medical professionals within the healthcare workforce, and the incentives were focused on rewarding those professionals. Our analysis indicates that QOF work was distributed throughout primary care practice, involving nurses, managerial staff and healthcare assistants, but without monetary reward for these groups [28, 43, 44], and this was experienced by other practice staff as an injustice in the reward system. Yet the effect of income inequality on population health status continues to be described, and the link between population health status and socioeconomic status has long been recognised [57]; however, this link was discounted by the scheme.
Our analysis also showed that, except for certain medical professionals, all other groups that made up the primary care staff adhered to the targets without the incentivised reward. As such, monetary gain was not the only powerful determinant of employee motivation or of positive returns in terms of QOF performance and success. We also found that the QOF changes for nurses were experienced in isolation from their self-interest and power values, or formal rank (specific to the nursing discipline), suggesting a feeling of continued inequity in primary care practice and healthcare systems.
“Because the workload had increased particularly
monitoring wise. We needed to do an awful lot more
monitoring of the routine measures. So the combination of that, plus the fact that our nurse had done
the diabetes course and asthma course and a prescribing course, we felt that she could move on to
something a bit more senior and someone else [the
new healthcare assistant (HCA)] could do the routine
blood pressures and bloods.” (p. 56) [24]
As a consequence of this achievement and the increase in workload, the papers in the synthesis revealed an increased blurring of the boundaries between different medical tasks and between different practitioners [25, 28, 43, 44].
“I do in fact do most of the work for the contract
and in many ways that’s not a good thing as it is
supposed to be team work.” (p. 714) [43]
“... We do the work, the doctor gets the rewards and
it is up to him whether he decides to pass it on or
not because he gets the global sum now. So that is a
bit of a conflict with a lot of the nurses at the moment. So our role and responsibility has expanded
but at the same time the wages are staying much
the same.” (p. 714) [43]
IT systems were seen as a beneficial tool that helped professionals as a form of reminder and as a means to manage, record and collect relevant patient illness-related data. On the other hand, they made the performance of professional work visible against what was increasingly experienced as 'outsider-implemented targets'. This was not perceived well by professionals [23, 40, 44, 45], as there was little scope for professionals to retain personal beliefs or to include patient agendas during reviews [26, 29, 43, 44].
“The more templates that get introduced, it takes
away the clinicians freedom and that sort of rapport
that you can build with a patient is much more difficult when you have to go through set [depression
scores] questions.” (p. 413) [42]
Application of the findings to the Schwartz value theory
and implications for professionalism
In addition to identifying ten basic Values, the Values Theory explicates a structure of dynamic relations among them (see Fig. 1). One basis of the value structure is the fact that actions in pursuit of any Value have consequences that conflict with some Values but are congruent with others. Essentially, choosing an action alternative that promotes one Value (e.g., following template work: conformity) may literally contravene or violate a competing Value (disregarding a patient's concerns: benevolence). When we think of our values, we think of what is important to us; each of us holds numerous values (e.g., achievement, security, benevolence) with varying degrees of importance [18]. Furthermore, actions in pursuit of some values alone had practical, psychological, and social consequences for professionals. Participants in some papers stated that most of the internationally agreed attributes of Medical Professionalism were not perceived or described as being threatened by the introduction of pay for performance [42]. Contrariwise, the findings of the synthesis revealed some discord experienced by practitioners with some aspects of Professionalism, which we present in this section (see Table 2).
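As an illustration of this oppositional structure, the sketch below encodes the ten basic values into Schwartz's four higher-order dimensions and classifies pairs of values as congruent or conflicting. The grouping of values into dimensions follows Schwartz [18], but the encoding itself is purely illustrative; in particular, hedonism, which Schwartz places between openness to change and self-enhancement, is simplified here into the former.

```python
# Minimal sketch of the oppositional structure in Schwartz's value theory,
# as summarised in this synthesis: values under one higher-order dimension
# tend to conflict with values under the opposing dimension. The grouping
# follows Schwartz [18]; the encoding is an illustrative simplification.

HIGHER_ORDER = {
    "self-direction": "openness to change",
    "stimulation": "openness to change",
    "hedonism": "openness to change",  # simplification: shares elements of both
    "achievement": "self-enhancement",
    "power": "self-enhancement",
    "security": "conservation",
    "conformity": "conservation",
    "tradition": "conservation",
    "benevolence": "self-transcendence",
    "universalism": "self-transcendence",
}

OPPOSES = {
    "openness to change": "conservation",
    "conservation": "openness to change",
    "self-enhancement": "self-transcendence",
    "self-transcendence": "self-enhancement",
}

def relation(value_a, value_b):
    """Classify two of the ten basic values as congruent or conflicting."""
    dim_a, dim_b = HIGHER_ORDER[value_a], HIGHER_ORDER[value_b]
    if dim_a == dim_b:
        return "congruent (same higher-order dimension)"
    if OPPOSES[dim_a] == dim_b:
        return "conflicting (opposing dimensions)"
    return "mixed (neither shared nor directly opposed)"

# The near-universal conflicts named later in the text:
print(relation("power", "universalism"))   # conflicting
print(relation("tradition", "hedonism"))   # conflicting
print(relation("security", "conformity"))  # congruent
```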
Triggered values: relinquishing control and retaining
independence
Complexity of both patient problems and health systems now requires professionals to work as an interrelated team within the newer hierarchies, and hence to relinquish some control in achieving QOF targets. Initially, the issue of retaining control over decisions in clinical practice was seen as contentious. The concerns were especially about who, and where, the body of evidence influencing 'everyday clinical decisions' was originating from [24, 45]. Other concerns were about government regulation and its influence on the process of care and on protecting the welfare of patients and their treatment [24, 45]. Schwartz argued that when Values are triggered, they become infused with feeling [16]. For instance, Schwartz posited four steps in the activation of personal norms that apply equally to basic values [17]: awareness of need, awareness of viable actions, perceiving oneself as able to help, and a triggered sense of responsibility to become involved. The synthesis indicates that the introduction of QOF targets influenced the behaviour of professionals, and that it was the operative feature of the targets that triggered the Value of independence linked to the welfare of patients and the care they received (a self-transcendence value). Consequently, it was the tension experienced by GPs in routine practice, between their accountability and their role requirements under QOF conditions, that indicated a decrease and loss of professional autonomy. It is important to acknowledge that professional autonomy is recognised by the Royal College of Physicians as a core professional value (Table 2) [22]. Our analysis proposes that GPs further experienced self-restriction, hierarchical struggle, and outsider control due to the tension imposed by the QOF's influence on the development of indicators. In particular, the Values that were aligned to Professionalism, such as self-direction and stimulation, were seemingly experienced as opposing the security, conformity and tradition values supported by the QOF (Fig. 3). As a result, the restrictiveness of the self-direction Value may have led to the triggering of these conflicts. There were other aspects of the indicators over which medical professionals themselves had limited influence (e.g. patient cooperation and access), which further challenged their confidence in achieving the QOF targets [26, 42, 46], causing concern.
Incentivised conformity in driving the required actions
Typically, people adapt their values to their circumstances [59]: they successively upgrade the importance they attribute to values they can readily attain and downgrade the importance of values whose pursuit is blocked [59]. When the QOF was first announced, primary care had been underfunded, there were large variations in quality between doctors, and there was general demoralisation within the primary care workforce [60]. Studies in our review suggest that QOF-related behaviours raised the profile of general practice (achievement, power, status). This pre-existing context may also have contributed to the high voluntary opt-in rates for this P4P scheme in general practice. However, upgrading attainable values and downgrading thwarted values applies to most, but not all, values [55]. We found that Values concerning material well-being, achievement, power and security were particularly aligned to the QOF. We also found evidence that when such Values were obstructed their importance increased, and when they were easily attained their importance dropped [61].
“Well it’s certainly improved my income. Probably increased my workload, not to the same degree as it increased my income. But I’m a bit worried that we’ve
sold our soul to the devil to some degree, because
they can change the goal posts later.” (p. 230) [40]
The presence of the QOF was requisite and binding, so despite having the choice to opt in, 'no way out' of the QOF was experienced by those in specific QOF leadership roles [24]. Those GPs for whom social order, justice, and helpfulness in the specific context of QOF work were important values would ideally be the target individuals, and would therefore most likely be motivated to pursue the incentivised goals in the context of this P4P scheme. This, however, was experienced by others as confusing in relation to the role of the professional as a patient advocate. For example, following a form of prescriptive QOF work was experienced as taking time away from listening to patient concerns [29], which was perceived as participating in a form of 'poor' or 'low value' patient care, impacting the patient-doctor relationship. The RCP suggests this aspect of Professionalism requires training, practice, and mentorship to highlight such antagonisms in patient care [22]. Marcotte et al. propose that physicians can and should embrace professionalism as the motivation for redesigning care. Payment reform incentives that align with their professional values should follow and encourage these efforts; that is, payment reform should not be the impetus for redesigning care [62].
Significance of cherished values: continuity of care, holism and the caring role of clinicians in primary care
Values guide the selection or evaluation of actions, policies, people, and events. Medical professionals work in self-regulated disciplines, where the profession sets out the parameters of what is good or bad, justified or illegitimate, worth doing or avoiding, based on possible consequences for the cherished values [22] related to their profession. However, the impact of Values in everyday decisions is rarely conscious. Power values can conflict with universalism and benevolence, and these conflicts were evident in the accounts of professionals, resulting in high arousal to maintain professional behaviours linked to their role as patient partners and aligned especially to Professionalism (Fig. 5).
“It distracts from the consultations and it can leave
you know feeling a bit confused and perhaps as
though that, the thing the patients regard as the
problem hasn’t been addressed properly.” (p. 8) [26]
The conflicts in Values, or the changes that were occurring, would not have been at the forefront of every professional's awareness until they had started to operate under QOF conditions or, for example, when they experienced or became aware of a discontinuity of care for the patients in their daily practice.
“In a sense that it’s still a patient presenting to a
doctor with a problem, yes it is the same as it always
was. The difference is that it’s more likely that the
patient and the doctor won’t know each other.”
(p.230) [40]
This highlights the importance of intrinsic motivations [6, 23] for professionals in their day-to-day work, which, if thwarted, deepen any individually held disappointment with the profession (satisfaction, stimulation). Recent GP career intention data have shown that morale reduced over the past 2 years and that intention to leave or retire in the next 2 years increased from 13% in the 2014 survey to 18% in 2017 [51]. As a result, the theme of personal congruence carried the message that the internal values of a doctor should match their external behaviour and actions [63].
We found that the QOF work was more amenable to
the values under conservation and self-enhancement dimensions, and hence directly opposed to the values
under self-transcendence and openness to change dimensions (see Fig. 4). As a result, practitioners who were
self-directed and worked for the welfare of patients were
constrained in their ability to use knowledge attained
from previous interactions (patient agendas) with patients in guiding future consultations. This may have led
professionals to view standardised care as a ‘box-ticking’
exercise, and at odds with their professional training and
their caring role [30]. Holism and patient-centred care
were significant values that were particularly vulnerable
to QOF changes.
“I thought that you were supposed to tailor this care
to every individual patient and meet patient needs...I
think it takes away patient, you know, centred care
really...I don’t think people appreciate being phoned
up all the time and reminding them to come in and
things...rightly or wrongly [lead partner] strives for
perfection and I think sometimes you have to acknowledge you don’t get perfection all the time and
whenever you’re dealing with patients and people
you’ll never get perfection anyway.” (p. 56) [45]
Some of the papers described the need of professionals to defend efforts to continue to deliver non-incentivised care as part of their professional role [25, 44, 45].
Initially, some GPs were apprehensive about the consequences of implementing indicators in 'everyday clinical practice' [26, 29, 30]. Furthermore, there seemed to be insufficient governmental, organisational, administrative, executive, and managerial recognition of the link between the 'doctors on the ground floor' working in 'everyday clinical practice' and the consequences for 'routine clinical practice' and for professional-patient relationships [24, 25, 45]. Acquiring a say in the development of indicators through negotiations between the BMA and the NHS was an important aspect for professionals, linked to freedom to practise 'in patients' best interest' [6, 24, 42, 45].
“I’d like to see performance measures that really reflect the care.” (p. 553) [46]
“...Some things are ...within the control of the providers, but some things really aren’t, even done ...
with good intent.” (p. 553) [46]
"...Often what happens with physicians is things are mandated to us and we don't have any input in...the process of how things come to us." (p. 553) [46]
Valderas et al. recommend that person-centred care should be a guiding principle for the development of assessment frameworks and quality indicators. People-centredness is a core value of health systems, acknowledging that individual service users should be the key stakeholders and that their values, goals and priorities should shape care delivery [64].
Structural & organisational changes: the trade-off
between multiple values
The synthesis showed that all practices had changed their styles of operation in response to the QOF [24, 25, 44, 65]. This involved an increase in the number of administrative staff, including those with responsibility for information technology (IT) [25, 65], and the new managerial stratum worked to align clinical activities to the wider organisational goals [24].
The findings from the synthesis also propose that the QOF targets aligned to the conservation and self-enhancement values of GPs led to extra income and sizeable pay differentials at the practice level, which were the enabling factor that allowed for the vast organisational and structural changes that took place. These changes were described as a success (achievement) for practices and patients.
“…it’s benefitting the patients, that they don’t get
missed, they don’t slip through the net, they get
their medicines reviewed, they get their blood tests,
they’re kept on optimum treatment.” (p. 135) [44]
Subsequently, the threat to status through competition
(stimulation) was seen as a motivator [26].
“It does feel a bit like competition with other surgeries, I don’t know how others feel but I wouldn’t
like to come last in our locality.” (p. 7) [26]
Yet those professionals who were motivated to remain self-directed, and who aligned their behaviours and attitudes to the welfare of patients, experienced restriction in their ability to use the knowledge attained from patient interactions to guide their future consultations.
"So it's made the two agendas a little bit clearer and I guess you've always had a health agenda and mine is probably never been the same, but now that mine is encapsulated by QOF…it's a bit more blatantly not the same. So I think there is an intrusion there and it's not an entirely patient-led agenda, because you've got things that you want to do that you think are more important." (p. 231) [40]
Professionals were making trade-offs among relevant opposing values based on the QOF targets, and these trade-offs were guiding the attitudes and behaviours of health care providers in their practice. When Values are in conflict, practitioners will often attribute more importance to the achievement of one set of values at the expense of the others. This hierarchical relationship between values also distinguishes values from norms and attitudes, which can be followed unfeelingly. Any attitude or behaviour typically has implications for more than one value; for example, a 'tick box' approach to medicine encouraged by P4P indicators promoted EBM and conformity values, leading to success, achievement and status at the expense of self-direction, hedonism and stimulation values.
Discussion
This study involved a meta-synthesis of qualitative studies of provider views of the QOF programme. We analysed the literature through the lens of Schwartz's Theory of Values as a theoretical framework and, to contextualise our analysis, we also used Professionalism as a conceptual lens. Using this theoretical framework, we found that QOF-related work was experienced by providers as incongruent with their self-direction and benevolence values, which are pivotal to professionalism as defined by the Royal College of Physicians [22]. This understanding is likely the result of the QOF being experienced as a mechanism of value activation for only certain values (see Table 2). Values affect behaviour only if they are activated [61]. Activation may or may not entail conscious thought about a value, and much information processing occurs outside of awareness [61]. The more accessible a value (the more easily it comes to mind), the more likely it is to be activated; and because more important values are more accessible, they relate more strongly to behaviour [18, 66]. For policy and decision makers such insights are valuable in terms of designing P4P schemes. In a report on designing incentive payments for quality care, the Conference Board of Canada identified three key guiding concepts: getting the right blend of incentives; alignment with health care goals; and global experience and human motivation [67]. Also recognising the importance of values, the NHS England review of the QOF argued that the scheme needed to be repositioned "… as a scheme which recognises and supports the professional values of GPs and their teams in the delivery of first contact, comprehensive, coordinated and person-centred care" [65].
Our analysis suggests that the pursuit of achievement Values in QOF-related work was experienced as compatible with the pursuit of the wealth, authority, success, and ambition values linked to GPs seeking personal success. This was likely to reinforce, and be supported by, QOF actions aimed at enhancing GPs' social position and status. This also included expanding practice activity, size and overall income, which may be considered organisational success factors by some GPs. Values such as creativity, social justice, equality and benevolence were experienced as restricted as a result of the QOF targets. Accordingly, when Values are activated, they become infused with feeling, whether positive or negative [18]. Our synthesis has shown that the definition of 'high quality care' must be accepted by general practitioners for it to be integrated into practice behaviour. If it is merely derived from 'outside regulation' of clinical practice and assembled by an 'outside agency', it will not achieve enduring behaviour change [24, 25, 45, 65].
The direct involvement of providers in the definition of 'high quality care' could be one mechanism to balance the discord that was experienced with QOF work. Correspondingly, quality improvement initiatives that are constructed and implemented for patients' benefit should both be compatible with EBM and encompass a 'patient-centred' approach. Embedding concepts of high-quality primary care in quality improvement initiatives, such as those highlighted by Mead and Bower, which include a biopsychosocial perspective, 'patient-as-person', sharing power and responsibility, therapeutic alliance, and 'doctor-as-person', may alleviate some of the tensions that have created unease in general practice as a result of the QOF [68]. A recent systematic review has shown that four of Mead and Bower's dimensions are still relevant today, and that 'coordinated care' is a new dimension, reflecting the increasing complexity of the health care system [69]. This will likely become more significant as integrated care is planned as a more efficient, client-oriented health model [70].
The papers in our analysis described how the caring role, which encompassed softer values such as the pursuit of novelty, change and stimulation, was likely to be seen as undermining the safeguarding of the older customs/tradition values of medicine, such as the more biomedical care model. Our analysis also demonstrates that the pursuit of traditional values (clinician-centred care, EBM, and templates) is essentially congruent with the pursuit of conformity values, as both motivate actions of submission to external expectations (QOF targets). The Values Theory suggests that everyone experiences conflict between pursuing openness-to-change values or conservation values, and between pursuing self-transcendence or self-enhancement values. Conflicts between specific Values (e.g., power vs. universalism, tradition vs. hedonism) are also near-universal [18].
Values serve as standards or criteria, and they tend to guide the selection or evaluation of actions, policies, and events. For example, individuals decide what is good or bad, justified or illegitimate, worth doing or avoiding, based on possible consequences for their cherished values [18]. Achieving some kind of balance now appears to have been crucial, and the evidence suggests that embracing a more complementary way of working between the two, with more focus on combined efforts, is more likely to drive successful complex initiatives. Historically, practices have been autonomous in managerial terms, and GPs have traditionally been independently minded [71]. They possess a wide range of norms and values, many of which are desirable but some of which may not be suited to the changes required in complex health systems. For this reason, there are obvious tensions within this relationship with regard to the changes that are 'softly required' by health system managers [72]. The synthesis suggests allowing practitioners with competing high-priority values to be part of quality improvement initiatives and to take on an influencer role within those initiatives, instead of being 'responsive agents' [73]. Initiatives need to consider and engage with the concerns of professionals as changes occur in health systems, with timely consultation and piloting prior to implementation [42].
Strengths and limitations
Meta-ethnography offers considerable potential for preserving the interpretive properties of primary data [74]. We acknowledge that qualitative synthesis cannot be reduced to a set of mechanistic tasks, which raises issues about the transparency of the process [75], a process we have tried to make transparent. The goal is to increase understanding, leading to greater explanatory effect, rather than to aggregate and merge findings in a kind of averaging process [76]. We did not have the added benefit of access to any raw data (including transcriptions, reflective notes, and author insight about the context of the studies), as some other meta-syntheses have done [76]. Yet Estabrooks and Field (1994) suggest that the recurrence of themes between compared studies adds to validity, similarly to triangulation, another technique said to ensure soundness in analysis [77]. Pielstick (1998) understands this as using multiple studies, which meta-synthesis does by definition [78]. Undertaking a meta-synthesis is a demanding and laborious process, and it would benefit from the development of suitable software [73]. However, we feel that such software would help manage the large amounts of data that emerge from the papers but would not add anything to the process of analysis itself.
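As a minimal sketch of the kind of bookkeeping such software might automate, the following tallies how often each theme recurs across included papers, the recurrence signal noted by Estabrooks and Field. The paper identifiers and theme labels are hypothetical placeholders, not data from this review.

```python
# Minimal sketch of theme-recurrence bookkeeping for a meta-synthesis:
# counting how many included papers contain each second-order construct.
# Paper IDs and theme labels below are hypothetical placeholders.

from collections import Counter

paper_themes = {
    "paper_A": ["loss of autonomy", "incentivised conformity"],
    "paper_B": ["incentivised conformity", "continuity of care"],
    "paper_C": ["loss of autonomy", "continuity of care", "structural change"],
    "paper_D": ["loss of autonomy", "structural change"],
}

recurrence = Counter(theme for themes in paper_themes.values() for theme in themes)
for theme, count in recurrence.most_common():
    print(f"{theme}: appears in {count} of {len(paper_themes)} papers")
```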
Conclusion
The QOF was instrumental in bringing fundamental changes to general practice organisations. Furthermore, these changes have endured and been embedded into general practice institutions, despite ongoing proposed changes to the QOF. As a mechanism for activating and triggering a select set of Values, the QOF is compatible with the pursuit of wealth, authority, success, ambition and achievement, which have implications for Professionalism. In its implementation, the QOF also created a 'standardised success model' for GPs, motivating 'actions of submission' to achieve QOF targets. While the QOF was aligned with traditional medical values, influenced by clinician-centred care, EBM, and clinical guidelines, our analysis suggests that, despite this conformity to core medical values, there were still dilemmas regarding whether to pursue income and organisational goals above patient-centred practice.
This analysis of the impact of the QOF suggests that, for quality improvement initiatives such as P4P schemes to be endurable, they need to be compatible with provider values. P4P schemes need to be designed to integrate the personal and professional values that professionals find essential to their practice. Professionals have shown that they are driven by their views, beliefs, and experiences, and not just by hierarchy and externally imposed constructs. Our review indicates that policy makers and health service planners need to carefully construct schemes that prioritise intrinsic professional values rather than rely on extrinsic motivators, which show more limited alignment with Professionalism and its core values. Research on the QOF has identified that the use of performance targets has a limited impact on the quality of care and caused some internal conflicts during the process of carrying out QOF work. In the UK, the shift towards quality improvement approaches that are framed by national priorities and allow professionals to design their own improvement approach may provide a way of harnessing the values of professional autonomy and control, as well as building on the motivation to develop patient-centred care. Moves to more network-based schemes (groups of practices) may require further thinking, as they will present a more complex context with potentially differing, and possibly competing, motivations between practices and practitioners. Our review of the QOF offers valuable insights for those designing P4P systems. It also identifies the need for more qualitative research on the implementation of P4P schemes to fully understand their individual and organisational impact. Further research is also needed to more fully understand how schemes can influence practitioners and support high-quality care. In particular, it is clear that context, in terms of the wider organisational structure, payment systems and health system design, needs to be more fully considered to understand the link between financial incentives, behaviour (both individual and organisational), and quality of care.
Supplementary information
Supplementary information accompanies this paper at https://doi.org/10.1186/s12875-020-01208-8.
Additional file 1. Search Strategy
Additional file 2. Table 4. Contextual Information for the 18 Published Papers
Additional file 3. Table 5. First and Second Order Constructs from the Published Papers
Additional file 4. Table 6. Identifying Third Order Constructs
Additional file 5. Table 7. Application of the Third Order Constructs to the Ten Motivational Values
Abbreviations
QOF: Quality and Outcomes Framework; P4P: Pay for Performance;
LOA: Lines-of-argument synthesis; BMA: British Medical Association;
RCP: Royal College of Physicians; NHS: National Health Service; QI: Quality
Improvement; UK: United Kingdom; EBM: Evidence Based Medicine;
GP: General Practitioner; ICT: Information and Communication Technology;
NICE: National Institute for Health and Care Excellence.
Acknowledgements
We would like to thank Professor Martin Roland, Emeritus Professor of Health Services Research and Fellow of Murray Edwards College, for detailed comments on the relevant papers at the inclusion stage of our research.
Authors’ contributions
N.K. designed and managed the review and wrote the first draft of the manuscript. N.K. led the analysis of the data and led the writing. D.R. assisted with the second-level constructs in the analyses and the writing of the early drafts. S.P. contributed to the analysis of the third-order constructs and the writing of later drafts. The literature searches were carried out with the assistance of a specialist librarian, M.M., with input from N.K. and D.R. Authors N.K. and S.P. participated equally in the editing of this manuscript.
Funding
No funding was received for this study.
Availability of data and materials
Data were generated from the published papers that were included in this synthesis. The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request. N.K., D.R. and S.P. had access to all relevant papers included, customised tables, and data necessary for verifying the integrity of the data and the accuracy of the analysis. All authors read and approved the final manuscript.
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Competing interests
None declared. All authors received no personal or financial gain from
carrying out this work.
Author details
1 Independent Researcher, Ontario, Canada. 2 Faculty of Health Sciences, University of Ontario Institute of Technology, 2000 Simcoe Street North, Unit UA3000, Oshawa, ON L1H 7K4, Canada. 3 Ontario Shores Centre for Mental Health Sciences, 700 Gordon Street, Whitby, ON L1N 5S9, Canada. 4 Centre for Health Services Studies, University of Kent, Canterbury CT2 7NF, UK.
Received: 24 July 2019 Accepted: 23 June 2020
References
1. Roland M, Guthrie B. Quality and Outcomes Framework: what have we learnt? BMJ. 2016 Aug 4;354:i4060. Available from: http://www.bmj.com/lookup/doi/10.1136/bmj.i4060 [cited 2018 Nov 22].
2. Waqar S, Snow-Miller R, Weaver R. Building the Workforce - the New Deal for General Practice: GP Induction and Refresher Scheme, 2015-2018. NHS England; 2015. Available from: https://heeoe.hee.nhs.uk/sites/default/files/gp_induction_and_refresher_scheme_2015-18.pdf.
3. Bostock N. Exclusive: NHS England rejected QOF suspension because of opposition from GPs. GPonline; 2017. Available from: https://www.gponline.com/exclusivenhs-england-rejected-qof-suspension-because-opposition-gps/article/1423168.
4. NHS England. Delivering the Forward View: NHS planning guidance. 2016 [cited 2018 Nov 22]. Available from: https://www.england.nhs.uk/wp-content/uploads/2015/12/planning-guid-16-17-20-21.pdf.
5. Forbes LJ, Marchand C, Doran T, Peckham S. The role of the Quality and Outcomes Framework in the care of long-term conditions: a systematic review. Br J Gen Pract. 2017 Nov 25;67(664):e775–e784. Available from: http://www.ncbi.nlm.nih.gov/pubmed/28947621 [cited 2018 Sep 5].
6. NHS England. Report of the Review of the Quality and Outcomes Framework in England. 2018. Available from: https://www.england.nhs.uk/wp-content/uploads/2018/07/quality-outcome-framework-report-of-the-review.pdf [cited 2019 May 22].
7. Glenton C, Lewin S, Norris S. Using evidence from qualitative research to develop WHO guidelines. In: WHO handbook for guideline development. 2014 [cited 2018 Sep 16]. Available from: http://www.who.int/publications/guidelines/Chp15_May2016.pdf.
8. Roland M. Does pay-for-performance in primary care save lives? Lancet. 2016 Jul 16;388(10041):217–218. Available from: https://linkinghub.elsevier.com/retrieve/pii/S014067361600550X [cited 2019 Jun 19].
9. McNeil R, Guirguis-Younger M, Dilley LB. Recommendations for improving the end-of-life care system for homeless populations: a qualitative study of the views of Canadian health and social services professionals. BMC Palliat Care. 2012.
10. Harrison MJ, Dusheiko M, Sutton M, Gravelle H, Doran T, Roland M. Effect of a national primary care pay for performance scheme on emergency hospital admissions for ambulatory care sensitive conditions: controlled longitudinal study. BMJ. 2014 Nov 11;349:g6423. Available from: http://www.ncbi.nlm.nih.gov/pubmed/25389120 [cited 2019 Jun 19].
11. McDonald R, Roland M. Pay for performance in primary care in England and California: comparison of unintended consequences. Ann Fam Med. 2009;7(2):121–7.
12. Marshall M, Harrison S. It's about more than money: financial incentives and internal motivation. Qual Saf Health Care. 2005;14(1):4–5.
13. Langlois EV, Tunçalp Ö, Norris SL, Askew I, Ghaffar A. Qualitative evidence to improve guidelines and health decision-making. Bull World Health Organ. 2018;96(2):79–79A. Available from: http://www.who.int/entity/bulletin/volumes/96/2/17-206540.pdf [cited 2018 Sep 16].
14. Barnett-Page E, Thomas J. Methods for the synthesis of qualitative research: a critical review. BMC Med Res Methodol. 2009;9(1):59. Available from: http://bmcmedresmethodol.biomedcentral.com/articles/10.1186/1471-2288-9-59 [cited 2018 Jun 25].
15. Noblit GW, Hare RD. Meta-ethnography: synthesizing qualitative studies (Qualitative Research Methods). Vol. 44, Counterpoints. Peter Lang AG; 1988. 88 p. Available from: https://www.jstor.org/stable/42975557 [cited 2018 Oct 7].
16. Schwartz SH. Are there universal aspects in the structure and contents of human values? J Soc Issues. 1994;50(4):19–45.
17. Schwartz SH. An overview of the Schwartz theory of basic values. Online Readings Psychol Cult. 2012;2(1). Available from: http://scholarworks.gvsu.edu/orpc/vol2/iss1/11 [cited 2018 Nov 23].
18. Schwartz SH. Universals in the content and structure of values: theoretical advances and empirical tests in 20 countries. In: Zanna M, editor. Advances in experimental social psychology. New York: Academic Press; 1992. p. 1–65.
19. Estabrooks CA, Field PA, Morse JM. Aggregating qualitative findings: an approach to theory development. Qual Health Res. 1994;4(4):503–511. Available from: http://journals.sagepub.com/doi/10.1177/104973239400400410 [cited 2019 Jan 21].
20. Schwartz SH. Universals in the content and structure of values: theoretical advances and empirical tests in 20 countries. Adv Exp Soc Psychol. 1992;25:1–65. Available from: https://plato.stanford.edu/entries/value-theory/supplement2.html [cited 2020 Mar 11].
21. Plochg T, Klazinga NS, Starfield B. Transforming medical professionalism to fit changing health needs. BMC Med. 2009;7(1):1–7.
22. Tweedie J, Hordern J, Dacre J. Advancing medical professionalism. London: Royal College of Physicians; 2018. 1–123 p. Available from: http://www.rcplondon.ac.uk [cited 2020 Mar 10].
23. Cheraghi-Sohi S, McDonald R, Harrison S, Sanders C. Experience of contractual change in UK general practice: a qualitative study of salaried GPs. Br J Gen Pract. 2012;62(597):e282–e287. Available from: http://bjgp.org/cgi/doi/10.3399/bjgp12X636128 [cited 2018 Apr 19].
24. Cheraghi-Sohi S, Calnan M. Discretion or discretions? Delineating professional discretion: the case of English medical practice. Soc Sci Med. 2013;96:52–59. Available from: http://www.ncbi.nlm.nih.gov/pubmed/24034951 [cited 2018 Apr 19].
25. Checkland K, Harrison S. The impact of the Quality and Outcomes Framework on practice organisation and service delivery: summary of evidence from two qualitative studies. Qual Prim Care. 2010;18(2):139–46. Available from: http://www.ncbi.nlm.nih.gov/pubmed/20529476 [cited 2018 Apr 19].
26. Hackett J, Glidewell L, West R, Carder P, Doran T, Foy R. 'Just another incentive scheme': a qualitative interview study of a local pay-for-performance scheme for primary care. BMC Fam Pract. 2014;15(1):168. Available from: http://www.ncbi.nlm.nih.gov/pubmed/25344735 [cited 2018 Jul 24].
27. Gill PJ, Hislop J, Mant D, Harnden A. General practitioners' views on quality markers for children in UK primary care: a qualitative study. BMC Fam Pract. 2012;13(1):92. Available from: http://bmcfampract.biomedcentral.com/articles/10.1186/1471-2296-13-92 [cited 2018 Jul 24].
28. Maxwell M, Harris F, Hibberd C, Donaghy E, Pratt R, Williams C, et al. A qualitative study of primary care professionals' views of case finding for depression in patients with diabetes or coronary heart disease in the UK. BMC Fam Pract. 2013;14(1):46. Available from: http://www.ncbi.nlm.nih.gov/pubmed/23557512 [cited 2018 Jul 24].
29. Chew-Graham CA, Hunter C, Langer S, Stenhoff A, Drinkwater J, Guthrie EA, et al. How QOF is shaping primary care review consultations: a longitudinal qualitative study. BMC Fam Pract. 2013;14(1):103. Available from: http://www.ncbi.nlm.nih.gov/pubmed/23870537 [cited 2016 Apr 27].
30. Lester HE, Hannon KL, Campbell SM. Identifying unintended consequences of quality indicators: a qualitative study. BMJ Qual Saf. 2011;20(12):1057–61. Available from: http://www.ncbi.nlm.nih.gov/pubmed/21693464 [cited 2018 Apr 19].
31. Fong W, Kwan YH, Yoon S, Phang JK, Thumboo J, Leung YY, et al. Assessment of medical professionalism: preliminary results of a qualitative study. BMC Med Educ. 2020 Jan 30;20(1):27.
32. Di Blasi Z, Harkness E, Ernst E, Georgiou A, Kleijnen J. Influence of context effects on health outcomes: a systematic review. Lancet. 2001;357(9258):757–62.
33. Maxwell JA. Qualitative research design: an interactive approach. 2012 [cited 2020 May 11]. 218 p. Available from: http://books.google.com/books?hl=en&lr=&id=DFZc28cayiUC&pgis=1.
34. British Sociological Association. Criteria for the evaluation of qualitative research papers. Med Sociol News. 1996;22:34–7.
35. Coventry PA, Small N, Panagioti M, Adeyemi I, Bee P. Living with
complexity; Marshalling resources: A systematic review and qualitative
meta-synthesis of lived experience of mental and physical multimorbidity
Multimorbidity in Primary Care Knowledge, attitudes, behaviors, education,
[and communication. BMC Fam Pract. 2015;16(1):171 Available from: http://](http://www.biomedcentral.com/1471-2296/16/171)
[www.biomedcentral.com/1471-2296/16/171 [cited 2020 25 Mar].](http://www.biomedcentral.com/1471-2296/16/171)
36. Campbell R, Pound P, Morgan M, Daker-White G, Britten N, Pill R, et al.
Evaluating meta-ethnography: systematic analysis and synthesis of
qualitative research. Health Technol Assess (Rockv). 2011;15(43):1–164.
37. Mitchell C, Dwyer R, Hagan T, Mathers N. Impact of the QOF and the NICE
guideline in the diagnosis and management of depression: a qualitative
[study. Br J Gen Pract. 2011;61(586):e279–e289. Available from: http://bjgp.](http://bjgp.org/cgi/doi/10.3399/bjgp11X572472)
[org/cgi/doi/10.3399/bjgp11X572472 [cited 2018 19 Apr].](http://bjgp.org/cgi/doi/10.3399/bjgp11X572472)
38. Hannon KL, Lester HE, Campbell SM. Recording patient preferences for endof-life care as an incentivized quality indicator: What do general practice
[staff think? Palliat Med. 2012;26(4):336–341. Available from: http://www.ncbi.](http://www.ncbi.nlm.nih.gov/pubmed/21680749)
[nlm.nih.gov/pubmed/21680749 [cited 2018 24 Jul].](http://www.ncbi.nlm.nih.gov/pubmed/21680749)
39. Alderson SL, Russell AM, McLintock K, Potrata B, House A, Foy R. Incentivised
case finding for depression in patients with chronic heart disease and
diabetes in primary care: an ethnographic study. BMJ Open. 2014;4(8):
[e005146–e005146. Available from: http://www.ncbi.nlm.nih.gov/](http://www.ncbi.nlm.nih.gov/pubmed/25138803)
[pubmed/25138803 [cited 2018 24 Jul].](http://www.ncbi.nlm.nih.gov/pubmed/25138803)
40. Campbell SM, McDonald R, Lester H. The experience of pay for performance
in english family practice: A qualitative study. Ann Fam Med. 2008;6(3):228–
[34 Available from: http://www.ncbi.nlm.nih.gov/pubmed/18474885 [cited](http://www.ncbi.nlm.nih.gov/pubmed/18474885)
2018 22 Jul].
41. Campbell S, Hannon K, Lester H. Exception reporting in the quality and
outcomes framework: Views of practice staff - A qualitative study. Br J Gen
[Pract. 2011;61(585):e183–e189. Available from: http://www.ncbi.nlm.nih.gov/](http://www.ncbi.nlm.nih.gov/pubmed/21439176)
[pubmed/21439176 [cited 2018 24 Jul].](http://www.ncbi.nlm.nih.gov/pubmed/21439176)
42. Lester H, Matharu T, Mohammed MA, Lester D, Foskett-Tharby R.
Implementation of pay for performance in primary care: A qualitative study 8
years after introduction. Br J Gen Pract. 2013;63(611):e408–15 Available from:
[http://bjgp.org/lookup/doi/10.3399/bjgp13X668203 [cited 2018 19 Apr].](http://bjgp.org/lookup/doi/10.3399/bjgp13X668203)
43. McGregor W, Jabareen H, O’Donnell CA, Mercer SW, Watt GC. Impact of the
2004 GMS contract on practice nurses: a qualitative study. Br J Gen Pract.
[2008;58(555):711–719. Available from: http://bjgp.org/cgi/doi/10.3399/](http://bjgp.org/cgi/doi/10.3399/bjgp08X342183)
[bjgp08X342183 [cited 2018 19 Apr].](http://bjgp.org/cgi/doi/10.3399/bjgp08X342183)
44. Maisey S, Steel N, Marsh R, Gillam S, Fleetcroft R, Howe A. Effects of
payment for performance in primary care: qualitative interview study. J Heal
Serv Res Policy. 2008;13(3):133–9.
45. McDonald R, Harrison S, Checkland K. Incentives and control in primary
health care: findings from English pay-for-performance case studies. J
[Health Organ Manag. 2008;22(1):48–62 Available from: http://www.ncbi.nlm.](http://www.ncbi.nlm.nih.gov/pubmed/18488519)
[nih.gov/pubmed/18488519 [cited 2018 19 Apr].](http://www.ncbi.nlm.nih.gov/pubmed/18488519)
46. Arbaje AI, Newcomer AR, Maynor KA, Duhaney RL, Eubank KJ, Carrese JA.
Excellence in Transitional Care of Older Adults and Pay-for-Performance:
Perspectives of Health Care Professionals. Jt Comm J Qual patient Saf. 2014;
[40(12):550–551. Available from: http://www.ncbi.nlm.nih.gov/](http://www.ncbi.nlm.nih.gov/pubmed/26111380%20)
[pubmed/26111380 [cited 2018 19 Apr].](http://www.ncbi.nlm.nih.gov/pubmed/26111380%20)
47. Hannon KL, Lester HE, Campbell SM. Patients’ views of pay for performance in
primary care: A qualitative study. Br J Gen Pract. 2012;62(598):e322–8 Available
[from: http://www.ncbi.nlm.nih.gov/pubmed/22546591 [cited 2018 1 may].](http://www.ncbi.nlm.nih.gov/pubmed/22546591)
48. Corti L, Backhouse G. Acquiring qualitative data for secondary analysis.
Forum Qual Sozialforsch. 2005;.
49. GILLAM S, Steel N. QOF points: valuable to whom? [Internet]. Vol. 346, BMJ
(Overseas and retired doctors ed.). 2013 [cited 2020 Jun 5]. Available from:
[www.nice.org.uk/media/E19/EA/.](http://www.nice.org.uk/media/E19/EA/)
50. Heath I. Divided we fail: the Harveian oration 2011. Nurs Stand. 2011;12(20):16.
51. Owen K, Hopkins T, Shortland T, Dale J. GP retention in the UK: A worsening
crisis. Findings from a cross-sectional survey. BMJ Open. 2019;1, 9(2).
52. Gillam S, Steel N. QOF points: valuable to whom? Source BMJ Br med J. 2013;.
53. Fleetcroft R, Steel N, Cookson R, Walker S, Howe A. Incentive payments are
not related to expected health gain in the pay for performance scheme for
UK primary care: cross-sectional analysis. BMC Health Serv Res. 2012;12(1):94.
54. Woolhandler S, Ariely D, Himmelstein DU. Why pay for performance may be
incompatible with quality improvement. BMJ. 2012;345(7870).
55. Schwartz SH. Unit 2 Theoretical and Methodological Issues Subunit 1
Conceptual Issues in. Psychol Cult Artic [Internet]. 2012 [cited 2018 Nov 22];
[11:12–3. Available from: https://doi.org/10.9707/2307-0919.1116.](https://doi.org/10.9707/2307-0919.1116)
56. Lester H, Schmittdiel J, Selby J, Fireman B, Campbell S, Lee J, et al. The
impact of removing financial incentives from clinical quality indicators:
longitudinal analysis of four Kaiser Permanente indicators. BMJ. 2010;340:
[c1898 Available from: http://www.ncbi.nlm.nih.gov/pubmed/20460330](http://www.ncbi.nlm.nih.gov/pubmed/20460330)
[cited 2019 21 Jan].
57. Durrheim D. Income inequality and health status: a nursing issue [Internet].
Vol. 25, AUSTRALIAN JOURNAL OF ADVANCED NURSING. 2007 [cited 2020
[May 12]. Available from: https://s3.amazonaws.com/academia.edu.](https://s3.amazonaws.com/academia.edu.documents/56474455/PROPUES_TPM.pdf?response-content-disposition=inline%3B)
[documents/56474455/PROPUES_TPM.pdf?response-content-disposition=](https://s3.amazonaws.com/academia.edu.documents/56474455/PROPUES_TPM.pdf?response-content-disposition=inline%3B)
[inline%3B filename%3DDiseno_de_un_plan_de_Mantenimiento_Prod.](https://s3.amazonaws.com/academia.edu.documents/56474455/PROPUES_TPM.pdf?response-content-disposition=inline%3B)
pdf&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=
AKIAIWOWYYGZ2Y53UL3A%2F20200314%2Fus-e.
-----
58. Corti Bishop L. L. Strategies in Teaching Secondary Analysis of Qualitative
Data. FQS (Forum Qual Soc Res. 2005;.
59. Schwartz SH, Bardi A. Influences of Adaptation to Communist Rule on Value
Priorities in Eastern Europe. Polit Psychol. 1997;18(2):385–410. Available from:
[http://doi.wiley.com/10.1111/0162-895X.00062 [cited 2018 28 Nov].](http://doi.wiley.com/10.1111/0162-895X.00062)
60. Lester H, Campbell S. Developing Quality and Outcomes Framework (QOF)
indicators and the concept of “QOFability”. Qual Prim Care. 2010;18(2):103–9
[Available from: http://www.ncbi.nlm.nih.gov/pubmed/20529471 [cited 2019](http://www.ncbi.nlm.nih.gov/pubmed/20529471)
15 Apr].
61. Inglehart R. Modernization and postmodernization : cultural, economic, and
political change in 43 societies [Internet]. Princeton University Press; 1997
[[cited 2018 Nov 28]. 453 p. Available from: https://press.princeton.edu/](https://press.princeton.edu/titles/5981.html)
[titles/5981.html.](https://press.princeton.edu/titles/5981.html)
62. Marcotte LM, Moriates C, Wolfson DB, Frankel RM. Professionalism as the
bedrock of high-value care. Acad Med. 2019;1.
63. Wagner P, Hendrich J, Moseley G, Hudson V. Defining medical professionalism:
[a qualitative study. Med Educ. 2007;41(3):288–94 Available from: http://doi.](http://doi.wiley.com/10.1111/j.1365-2929.2006.02695.x)
[wiley.com/10.1111/j.1365-2929.2006.02695.x [cited 2020 5 Jun].](http://doi.wiley.com/10.1111/j.1365-2929.2006.02695.x)
64. Valderas JM, Gangannagaripalli J, Nolte E, Boyd CM, Roland M, SarriaSantamera A, et al. Quality of care assessment for people with
[multimorbidity. J Intern Med. 2019;285(3):289–300 Available from: https://](https://onlinelibrary.wiley.com/doi/abs/10.1111/joim.12881)
[onlinelibrary.wiley.com/doi/abs/10.1111/joim.12881 [cited 2020 25 Mar].](https://onlinelibrary.wiley.com/doi/abs/10.1111/joim.12881)
65. Marmot M. Report of the Review of the Quality and Outcomes
Framework in England [Internet]. 2010 [cited 2019 Feb 20]. Available
[from: https://www.england.nhs.uk/wp-content/uploads/2018/07/quality-](https://www.england.nhs.uk/wp-content/uploads/2018/07/quality-outcome-framework-report-of-the-review.pdf%0Ahttps:/www.england.nhs.uk/publication/report-of-the-review-of-the-quality-and-outcomes-framework-in-england)
[outcome-framework-report-of-the-review.pdf%0Ahttps://www.england.](https://www.england.nhs.uk/wp-content/uploads/2018/07/quality-outcome-framework-report-of-the-review.pdf%0Ahttps:/www.england.nhs.uk/publication/report-of-the-review-of-the-quality-and-outcomes-framework-in-england)
[nhs.uk/publication/report-of-the-review-of-the-quality-and-outcomes-](https://www.england.nhs.uk/wp-content/uploads/2018/07/quality-outcome-framework-report-of-the-review.pdf%0Ahttps:/www.england.nhs.uk/publication/report-of-the-review-of-the-quality-and-outcomes-framework-in-england)
[framework-in-england.](https://www.england.nhs.uk/wp-content/uploads/2018/07/quality-outcome-framework-report-of-the-review.pdf%0Ahttps:/www.england.nhs.uk/publication/report-of-the-review-of-the-quality-and-outcomes-framework-in-england)
66. Bardi A. Relations of values to behavior in everyday situations. The Hebrew
University; 2000.
67. Goldfarb D. Family Doctor Incentives: Getting closer to the Sweet Spot. The
[Conference Board of Canada; 2014;22. https://www.newswire.ca/](https://www.newswire.ca/newsreleases/doctors-pay-should-be-based-on-the-right-blend-of-incentives-514340331.html)
[newsreleases/doctors-pay-should-be-based-on-the-right-blend-of-](https://www.newswire.ca/newsreleases/doctors-pay-should-be-based-on-the-right-blend-of-incentives-514340331.html)
[incentives-514340331.html.](https://www.newswire.ca/newsreleases/doctors-pay-should-be-based-on-the-right-blend-of-incentives-514340331.html)
68. Mead N, Bower P. Measuring patient-centredness: A comparison of three
observation-based instruments. Patient Educ Couns 2000;.
69. Langberg EM, Dyhr L, Davidsen AS. Development of the concept of patientcentredness – A systematic review. Patient Educ Couns [Internet]. 2019 Feb
[27 [cited 2019 Mar 18]; Available from: https://www.sciencedirect.com/](https://www.sciencedirect.com/science/article/pii/S073839911830733X)
[science/article/pii/S073839911830733X.](https://www.sciencedirect.com/science/article/pii/S073839911830733X)
70. Tracy DK, Hanson K, Brown T, James AJB, Paulsen H, Mulliez Z, et al.
Integrated care in mental health: Next steps after the NHS Long Term Plan.
Vol. 214, British Journal of Psychiatry. Cambridge University Press; 2019. p.
315–7.
71. Marshall M, Sheaff R, Rogers A, Campbell S, Halliwell S, Pickard S, et al. A
qualitative study of the cultural changes in primary care organisations
needed to implement clinical governance. British Journal of General
Practice. 2002.
72. Marshall M, Sheaff R, Rogers A, Campbell S, Halliwell S, Pickard S, et al. A
qualitative study of the cultural changes in primary care organisations
needed to implement clinical governance. Br J Gen Pract. 2002;52(481):641–
[645. Available from: http://www.ncbi.nlm.nih.gov/pubmed/12171222%5](http://www.ncbi.nlm.nih.gov/pubmed/12171222/nhttp:/www.pubmedcentral.nih.gov/articlerender.fcgi?artid=PMC1314382)
[Cnhttp://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=PMC1314382](http://www.ncbi.nlm.nih.gov/pubmed/12171222/nhttp:/www.pubmedcentral.nih.gov/articlerender.fcgi?artid=PMC1314382)
[cited 2018 7 Oct].
73. Rogers A, Pilgrim D. A sociology of mental health and illness [Internet].
Primary Care. Open University Press; 2005 [cited 2018 Oct 4]. 269 p.
[Available from: http://psycnet.apa.org/psycinfo/1994-97819-000.](http://psycnet.apa.org/psycinfo/1994-97819-000)
74. Dixon-woods M, Agarwal S, Young B, Jones D, Sutton A. Integrative
approaches to qualitative and quantitative evidence [Internet]. Vol. 181,
[Health Development Agency. 2004 [cited 2018 Oct 2]. Available from: www.](http://www.hda.nhs.uk)
[hda.nhs.uk.](http://www.hda.nhs.uk)
75. Khan N, Bower P, Rogers A. Guided self-help in primary care mental health:
Meta-synthesis of qualitative studies of patient experience. Br J Psychiatry.
[2007;191(3):206–11 Available from: http://www.ncbi.nlm.nih.gov/pubmed/1](http://www.ncbi.nlm.nih.gov/pubmed/17766759)
[7766759 [cited 2016 27 Apr].](http://www.ncbi.nlm.nih.gov/pubmed/17766759)
76. Walsh D, Downe S. Meta-synthesis method for qualitative research: A
[literature review. J Adv Nurs. 2005;50(2):204–11 Available from: http://www.](http://www.ncbi.nlm.nih.gov/pubmed/15788085)
[ncbi.nlm.nih.gov/pubmed/15788085 [cited 2019 25 Feb].](http://www.ncbi.nlm.nih.gov/pubmed/15788085)
77. Protheroe J, Rogers A, Kennedy AP, Macdonald W, Lee V. Promoting patient
engagement with self-management support information: A qualitative
meta-synthesis of processes influencing uptake. Implement Sci. 2008;3(1):44
[Available from: http://www.ncbi.nlm.nih.gov/pubmed/18851743 [cited 2016](http://www.ncbi.nlm.nih.gov/pubmed/18851743)
27 Apr].
78. Pielstick, C. D., The transforming leader: A meta-ethnographic analysis,
[Community College Review, 1998:26(3):15–34 https://doi.org/10.1177/](https://doi.org/10.1177/009155219802600302)
[009155219802600302.](https://doi.org/10.1177/009155219802600302)
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in
published maps and institutional affiliations.
**_Journal of Sport & Exercise Psychology, 2015, 37, 626–636_**
http://dx.doi.org/10.1123/jsep.2015-0166
© 2015 Human Kinetics, Inc.
ORIGINAL RESEARCH
# Sensorimotor Rhythm Neurofeedback Enhances Golf Putting Performance
#### Ming-Yang Cheng,¹ Chung-Ju Huang,² Yu-Kai Chang,³ Dirk Koester,¹ Thomas Schack,¹ and Tsung-Min Hung⁴

¹Bielefeld University; ²University of Taipei; ³National Taiwan Sport University; ⁴National Taiwan Normal University
Sensorimotor rhythm (SMR) activity has been related to automaticity during skilled action execution. However,
few studies have bridged the causal link between SMR activity and sports performance. This study investigated
the effect of SMR neurofeedback training (SMR NFT) on golf putting performance. We hypothesized that
preelite golfers would exhibit enhanced putting performance after SMR NFT. Sixteen preelite golfers were
recruited and randomly assigned into either an SMR or a control group. Participants were asked to perform
putting while electroencephalogram (EEG) was recorded, both before and after intervention. Our results
showed that the SMR group performed more accurately when putting and exhibited greater SMR power than
the control group after 8 intervention sessions. This study concludes that SMR NFT is effective for increasing
SMR during action preparation and for enhancing golf putting performance. Moreover, greater SMR activity
might be an EEG signature of improved attention processing, which induces superior putting performance.
**_Keywords: precision sports, attention, EEG, sensorimotor rhythm, automaticity_**
The quality of mental regulation can differentiate
superior from inferior performance in precision sports
activities such as golf putting. In golf, the putt is considered one of the most important parts of the game, representing on average 43% of all shots taken during a single round
(Pelz & Frank, 2000). From a technical perspective, putting
is the simplest skill used in golf. However, mentally, putting is the most stressful and demanding activity in the
game (Nicholls, 2007). The mental challenge of putting
is reflected by previous psychophysiological studies
showing complex brain processes during putting performance (Babiloni et al., 2008). Hence, the maintenance
of a mental state conducive to skilled execution is critical
for ideal precision sports performance.
Superior performance in precision sports can be
characterized as an automatic process as opposed to a
controlled process, which is typically observed in less
skilled performers (Fitts & Posner, 1967). An automatic
process is by nature reflexive, whereas a controlled process is an intentionally initiated sequence of cognitive
activity (Schneider & Shiffrin, 1977). Achieving automatic processing in action execution is the primary goal of
mastery (Logan, Hockley, & Lewandowsky, 1991). Differences between these two levels of cognitive processing
are reflected at the neurophysiological level: participants
who were in the automatic stage exhibited weaker activity
of the bilateral cerebellum, presupplementary motor area,
premotor cortex, parietal cortex, and prefrontal cortex
compared with novices (Wu, Chan, & Hallett, 2008). In
addition, the somatosensory cortex has been related to
conscious perception of somatosensory stimuli (Nierhaus
et al., 2015), such that lower activity in the somatosensory
cortex might be a signature of reduced conscious involvement in movement execution, as is frequently observed
in highly skilled performers.
Although previous studies of the brain function
underlying superior golf putting performance have
provided insights into adaptive mental states and their
cortical processes, few studies have examined the
cortical processes that are more directly associated
with somatosensory activity. For example, Babiloni
et al. (2008) demonstrated that successful putting was
preceded by higher high-frequency alpha (10–12 Hz)
event-related desynchronization over the frontal midline
and the right primary sensorimotor area compared with
unsuccessful putting performance. Similarly, studies
found that reduced (Kao, Huang, & Hung, 2013) and
stable (Chuang, Huang, & Hung, 2013) frontal midline
theta power was the precursor of superior performance
in precision sports. Since high-frequency alpha power
in these cortical areas reflects only task-related attention
(Klimesch, Doppelmayr, Pachinger, & Ripper, 1997)
whereas frontal midline theta power indicates top-down
sustained attention (Sauseng, Hoppe, Klimesch, Gerloff,
& Hummel, 2007), these findings support the importance
of specialized task-related attention on superior motor
performance. However, the information encoded during
automatic somatosensory processing during skilled
precision sport performance remains unexamined.
Sensorimotor rhythm (SMR), the 12- to 15-Hz oscillation of the sensorimotor cortex, has shown promise
as a link between adaptive mental states (e.g., automatic
process-related attention) and skilled visuomotor performance. Sensorimotor rhythm is considered an indicator of cortical activation, which is inversely related to
somatosensory processing (Mann, Sterman, & Kaiser,
1996). A recent study showed that skilled dart-throwing
players demonstrated higher SMR power before dart
release than novices in a dart-throwing task (Cheng
et al., 2015). This result suggests that lower cognitive
involvement in processing somatosensory information
as reflected by higher SMR power is characteristic of
skilled performance. Furthermore, several lines of studies
pertaining to SMR power tuning for enhancing adaptive
cortical processing in motor performance have shown
promising results. Augmented SMR power resulting from
neurofeedback training (NFT) has been identified as a
relaxed focus state without somatosensory intervention
(Gruzelier, Foks, Steffert, Chen, & Ros, 2014). Similarly,
a reduced trait anxiety score and task-processing time
during microsurgery were observed after augmented
SMR NFT (Ros et al., 2009). Moreover, a facilitative
sense of control, confidence, and feeling at-one with a
role was demonstrated after augmented SMR NFT before
acting performance (Gruzelier, Inoue, Smart, Steed, &
Steffert, 2010). Thus, increased SMR activity implies
the maintenance of a relaxed, focused state by reducing
motor perception (e.g., somatosensory processing) by
the sensorimotor cortex (Vernon et al., 2003). This interpretation is similar to the mental characteristics of peak
performance in skilled athletes (Krane & Williams, 2006)
and is in agreement with the concept of automaticity
proposed by Fitts and Posner (1967). Hence, SMR power
not only might be a sensitive indicator of the activity of
the sensorimotor cortex (Mann et al., 1996) but also shows
potential for a performance-enhancing intervention.
Although there is no direct evidence to support the
effectiveness of SMR NFT on performance enhancement
in precision sport, two lines of research lend support to
its potential use in sports. First, previous studies have
demonstrated the effectiveness of NFT on performance
enhancement in precision sports. For example, Landers et al. (1991) demonstrated that “correct” NFT (i.e., augmented slow cortical potential at the left temporal lobe) led to superior performance, whereas “incorrect”
NFT (i.e., augmented slow cortical potential at the right
temporal lobe) impaired performance in skilled archers.
Similarly, Kao, Huang, and Hung (2014) reported that
NFT aimed at reducing frontal midline theta resulted
in improved performance in skilled golfers. These findings support the feasibility of tuning EEG to improve
behavioral outcome in precision sports. The second line
of evidence is the finding that SMR NFT has a beneficial
effect on attention-related performance in various attentional tasks. For example, after augmented SMR NFT, an increased P300b amplitude at frontal, central, and parietal sites during the auditory oddball task, reduced commission errors, and reduced reaction time variability during the Test of Variables of Attention (TOVA) were observed (Egner, Zech, & Gruzelier, 2004).
These findings suggest that augmenting SMR power
might improve attention-related processes by improving
impulse control and the ability to integrate relevant environmental stimuli. Similarly, Ros et al. (2009) reported
that a shorter operation time and reduced trait anxiety
score were observed in surgeons following augmented
SMR NFT, suggesting that augmented SMR enhanced
the learning of a complex medical specialty by developing sustained attention and a relaxed attentional focus
as well as increasing working memory (Vernon et al.,
2003). Furthermore, Doppelmayr and Weber (2011) revealed that augmented SMR NFT not only resulted in a significant SMR amplitude increase accompanied by a significant increase in reward threshold, but also facilitated the performance of spatial-rotation, simple, and choice-reaction time tasks. These
results indicate that visuospatial processing, semantic
memory regulation, and the integration of relevant stimuli
can be improved following augmented SMR NFT. Collectively, the benefits of augmented SMR NFT can be
attributed to an improved regulation of somatosensory
and sensorimotor pathways, which in turn leads to more
efficient attention allocation (Kober et al., 2014) that
results in an improved processing of task-relevant stimuli.
To the best of our knowledge, no study has directly
examined the effect of SMR NFT on precision sport
performance. Thus, this study investigated the effect of
SMR NFT on a golf putting task. We predicted that golfers would be able to increase SMR power before putting
execution following augmented SMR NFT. More importantly, we predicted that increased SMR power improves
putting performance as a result of augmented SMR NFT.
## Methods
#### Participants
Fourteen male and two female preelite and elite golfers were recruited (mean handicap = 0, SD = 3.90).
Participants were matched based on performance history
supplemented by the assessment of a professional coach and
then randomly assigned into either an SMR neurofeedback
group (SMR NFT) or a control group (seven male and one
female for each group). The mean ages of the SMR NFT and control groups were 20.6 (1.59) and 22.3 (2.07) years, respectively.
The years of experience in golf were 9.5 (2.67) for the SMR
NFT group and 9.2 (1.83) for the control group. An independent t test showed no difference in age [t(14) = 1.895,
p = .079] or years of experience in golf [t(14) = 0.273, p
= .789] between the two groups. None of the participants
reported psychiatric or neurological disorders, and none had ever been hospitalized for brain injury.
#### Procedures
For the pretest and posttest, we used the same procedure
to collect data. At pretest, after being informed of the
general purpose of the study, all participants were asked
to read and sign an informed consent form approved by
our institutional review board. They were then given the
opportunity to ask questions about the experiment. The
participants were individually tested in a sound-proof indoor
artificial golf green, where they were initially required to
stand 3 m from a hole 10.8 cm in diameter to obtain an
individual putting distance (Arns, Kleinnijenhuis, Fallahpour, & Breteler, 2008). Participants performed a series of
10 putts, which were scored as successfully holed or not
holed. The percentage of successful putts in a series was
determined after each series. This process was repeated
until each participant achieved 50% accuracy.
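To make this calibration protocol concrete, here is a minimal Python sketch of the loop described above. Everything beyond the 3-m start and the 50% criterion is an assumption: `attempt_putt` is a hypothetical stand-in for one real putt, and the 0.25-m adjustment step is illustrative.

```python
def calibrate_putting_distance(attempt_putt, distance_m=3.0, step_m=0.25):
    """Repeat 10-putt series until a series is holed at exactly 50%.

    `attempt_putt(distance_m) -> bool` is a placeholder for one real putt;
    the 0.25-m step and the adjustment rule are assumptions, since the
    protocol above only specifies the 3-m start and the 50% criterion.
    """
    while True:
        holed = sum(attempt_putt(distance_m) for _ in range(10))  # putts holed in this series
        if holed == 5:  # 50% of a 10-putt series
            return distance_m
        # Assumed adjustment: move farther out if too easy, closer if too hard.
        distance_m += step_m if holed > 5 else -step_m
```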
After the individual putting distance was determined,
participants were fitted with a Lycra electrode cap (Neuroscan, Charlotte, NC, USA). After a 10-min warm-up,
participants were first asked to undergo a resting EEG
recording, including eyes-closed and eyes-open conditions, while assuming a normal putting stance for 1 min
each. Then, all participants performed golf putting tasks
consisting of 40 self-paced putting trials in four separate
recording blocks while EEGs were recorded. The participants performed the putting task in the standing position and
were allowed to take a brief rest between each putt. They
were also allowed to sit briefly after each block of 10 putts.
The score was calculated based on the linear distance
from the edge of the hole to the edge of the ball (cm). A putt holed successfully was scored as 0.
Putting trials in which the ball was deflected by contacting the edge of the hole were excluded, and participants
were asked to perform extra putting trials to complete the
forty trials. The experiment lasted approximately 2 hr in
total. After completing the pretest, all participants were
scheduled to go through 8 sessions of neurofeedback
training. Then the posttest, which was identical to the
pretest, followed the neurofeedback intervention.
#### Instrumentation
**_Electroencephalography._** For the pretest and posttest,
EEGs were recorded at 32 electrode sites (FP1, FP2, F7,
F8, F3, F4, FZ, FT7, FT8, FC3, FC4, C3, C4, CZ, T3, T4,
T5, T6, TP7, TP8, CP3, CP4, CPZ, A1, A2, P3, P4, PZ,
O1, O2, OZ) corresponding to the International 10–10
system (Chatrian, Lettich, & Nelson, 1985). In addition,
four electrodes were attached to acquire horizontal and
vertical oculography (HEOL, HEOR, VEOU and VEOL).
All sites were initially referenced to A1 and then rereferenced to linked ears offline. A frontal midline site (FPz)
served as the ground. EEG data were collected and amplified using a Neuroscan Nuamps amplifier (Neuroscan,
Charlotte, NC, USA) with a band-pass filter setting of
1–100 Hz and a 60-Hz notch filter. The EEG and EOG
signals were sampled at 500 Hz and recorded online with
NeuroScan 4.5 (Neuroscan, Charlotte, NC, USA) software installed on a Lenovo R400 laptop (Lenovo, Taipei
City, R.O.C). Vertical and horizontal eye movement
artifacts were recorded via bipolar electro-oculographic
activity (EOG), in which vertical EOG was assessed by
electrodes placed above and below the left eye (VEOU
and VEOL), whereas horizontal EOG was assessed by
electrodes located at the outer canthi (HEOL, HEOR).
Impedance values for all electrode sites were maintained
below 5 kΩ. An infrared sensor was set to detect the swing for each putt. Once the backswing movement was
detected, an event mark was sent to the EEG data, which
served as the time point for analyzing the EEG activity
before putting. The 12- to 15-Hz band at Cz was extracted as the SMR (Babiloni et al., 2008).
**_Neurofeedback._** Neurofeedback training was completed with a NeuroTek Peak Achievement Trainer (NeuroTek, Goshen, KY). The EEG data from the assessment
were band-pass filtered using the BioReview software
(NeuroTek, Goshen, KY). The active scalp electrode was
placed at Cz for SMR training, with the reference placed
on both mastoids. Signal was acquired at 256 Hz and
then A/D converted and band filtered to extract the SMR
(12–15 Hz). The amplitude of the SMR was transformed
online into graphical and auditory feedback representations, including a low-frequency audio-feedback tone rendered as an acoustic bass (No. 33) in the BioReview software.
#### Neurofeedback Training Procedure
Participants underwent an eight-session training program
lasting 5 weeks. Each session was composed of neurofeedback training lasting from 30 to 45 min. On average,
a total of 12 training trials were performed in a single session. Each training trial lasted 30 s. The total duration
of a single session was approximately 30 min. The SMR
NFT group aimed to increase absolute SMR amplitude
over the designated threshold, which was individually
determined by averaging 1.5 s of each participant’s successful putting trials during the pretest. To enhance the
participants’ efficacy during NFT, a progressive adjustment of the training threshold difficulty was employed.
The standard for adjusting the training threshold was
based on the individualized standard deviation which
derived from the SMR power of the final three 0.5-s
time windows before putting during the pretest. When
participants’ SMR power was higher than the threshold,
the acoustic bass sound was played. Participants were
instructed to perform based on their own putting routine
while receiving the auditory feedback. The successful
training ratio, defined as the time spent above threshold
divided by the total time of a single training trial (30 s),
was reported to participants following every training trial.
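As a rough illustration of this feedback loop, the sketch below runs one 30-s trial and returns the successful training ratio. `read_smr_amplitude` and `play_bass_tone` are hypothetical callbacks standing in for the NeuroTek acquisition and the BioReview audio output, and the 10-Hz update rate is an assumption.

```python
def run_training_trial(read_smr_amplitude, play_bass_tone, threshold,
                       trial_s=30.0, update_hz=10.0):
    """One 30-s SMR NFT trial (a sketch, not the NeuroTek internals).

    The bass tone plays whenever the current 12-15 Hz amplitude at Cz
    exceeds the individualized threshold; the return value is the
    successful training ratio (time above threshold / trial length),
    which was reported to participants after every trial.
    """
    n_updates = int(trial_s * update_hz)
    above = 0
    for _ in range(n_updates):
        if read_smr_amplitude() > threshold:
            play_bass_tone()
            above += 1
    return above / n_updates
```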
In the control group, the training protocol was similar
to that used by Egner, Strawson, and Gruzelier (2002)
to establish a mock feedback condition. This protocol
was designed to prevent participants from learning to regulate SMR: the control group heard a prerecorded feedback tone taken from the SMR NFT group's training trials. This mock feedback tone was 4 min long and was derived from a randomly chosen participant in the SMR NFT group during the Session 1 training. Researchers played the mock feedback tone from a random starting point to ensure that a randomized feedback tone was received by participants in the control
group. On average, a total of seven training trials were
performed in a single session and the total duration of a
single session was approximately 30 min.
To evaluate the neurofeedback learning effect,
the mean successful training ratio of each session was
recorded and computed for subsequent analysis. To
reduce the number of sessions necessary for statistical
evaluation of the learning efficiency between the two
groups, we combined two consecutive sessions into one
section [e.g., Section 1 = (Session 1 + Session 2) / 2].
#### Data Reduction
The EEG data reduction was conducted offline using the
Scan 4.5 software (Neuroscan, Charlotte, NC, USA).
EEG data were sampled 1.5 s before putting execution
and were triggered by the event-related marker from
infrared ray sensors. Trial preparation periods of less than
1.5 s were excluded to establish the common structure of
artifact-free data across trials and participants. EOG correction (Semlitsch, Anderer, Schuster, & Presslich, 1986)
was carried out on continuous EEG data to eliminate blink
artifacts. EEG segments with amplitudes exceeding ±100
μV from baseline were excluded from subsequent analysis.
After artifact-free EEG data were acquired, fast Fourier
transforms were calculated at 50% overlap on 256-sample
Hanning windows for all artifact-free segments to transform
to spectral power (μV²). Sensorimotor rhythm power was
computed as the mean of 12–15 Hz from Cz and then natural
log transformed (Davidson, 1988). To compute a normalized EEG power for each golfer, the relative power was
used, for which the ratio of power at 12–15 Hz to 1–30
Hz was computed (Niemarkt et al., 2011).
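The reduction above can be summarized in a short Python sketch. SciPy's Welch estimator is used here as a stand-in for the Scan 4.5 FFT pipeline, keeping the 256-sample Hanning windows with 50% overlap; the exact averaging details of the original software are assumptions.

```python
import numpy as np
from scipy.signal import welch

FS = 500  # Hz, sampling rate of the pre/post putting EEG

def smr_powers(epoch_cz):
    """Ln-transformed and relative SMR power for one artifact-free
    1.5-s pre-putt epoch at Cz (a sketch, not the Scan 4.5 code).
    """
    freqs, psd = welch(epoch_cz, fs=FS, window='hann',
                       nperseg=256, noverlap=128)
    smr = psd[(freqs >= 12) & (freqs <= 15)].mean()   # mean 12-15 Hz power
    broad = psd[(freqs >= 1) & (freqs <= 30)].mean()  # 1-30 Hz reference power
    return np.log(smr), smr / broad

# Example with a synthetic 1.5-s epoch (750 samples at 500 Hz).
ln_smr, relative_smr = smr_powers(np.random.randn(750))
```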
#### Statistical Analyses
The average putting score and standard deviation between
the two groups were analyzed by a 2 (Group: SMR
NFT, Control) × 2 (Test: pretest, posttest) ANOVA with
repeated measures on the test factor.
The difference score (posttest to pretest) for the
relative power of SMR was subjected to a 2 (Group:
SMR NFT, Control) × 3 [Time window: –1.5 to –1.0 s
(T1), –1.0 to –0.5 s (T2), –0.5 to 0 s (T3)] ANOVA with
repeated measures on the time window factor.
In addition, we ran several control analyses to
provide additional evidence to support our conclusions.
The successful training ratio was tested with a 2
(Group: SMR NFT, Control) × 4 (Training section: Section 1: sessions 1–2; Section 2: sessions 3–4; Section
3: sessions 5–6; Section 4: sessions 7–8) ANOVA with
repeated measures on the training section.
To characterize the within-session learning effect,
we compared the successful training ratio of the first and
last trials of each session across all eight sessions. A 2
(Group: SMR NFT, Control) × 8 (Session: session 1, 2,
3, 4, 5, 6, 7, 8) × 2 (Trial: first trial, last trial) three-way
ANOVA with repeated measures on session and trial
was used to examine this issue.
To ensure control of neurofeedback in the SMR
NFT group within the training program, we employed a
one-way ANOVA with training section (Training section:
Section 1: sessions 1–2; Section 2: sessions 3–4; Section
3: sessions 5–6; Section 4: sessions 7–8) as a variable to
detect the threshold fluctuation within the four training
sections.
To examine the regional fluctuation of 12–15 Hz
power before and after training, we carried out a 2 (Group:
SMR NFT, Control) × 4 (Region: frontal, central, parietal,
occipital) two-way ANOVA with repeated measures on
the region.
The examination of concurrent changes in neighboring frequency bands was conducted by analyzing the
pre-to-post difference scores for theta (4–7 Hz), alpha
(8–12 Hz), low beta (13–20 Hz), high beta (21–30 Hz),
and broad beta (13–30 Hz) frequency bands with a 2
(Group: SMR NFT, Control) × 3 [Time window: –1.5 to
–1.0 s (T1), –1.0 to –0.5 s (T2), –0.5 to 0 s (T3)] two-way ANOVA.
Mauchly’s test was used to assess the validity of
the ANOVA sphericity assumption whenever necessary. The degrees of freedom were corrected using the
Greenhouse–Geisser procedure, and least significant
difference analysis was used for post hoc comparisons
(p < .05). Partial eta squared (η²p) was used to estimate the
effect size, with values of .02, .12, and .26 suggesting
relatively small, medium, and large effect sizes, respectively (Cohen, 1992).
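For orientation, a minimal sketch of the central 2 (Group) × 2 (Test) mixed-model ANOVA in Python follows. The `pingouin` call is one reasonable way to run this design; the toy long-format data frame and its column names are illustrative, not the study's raw data.

```python
import pandas as pd
import pingouin as pg

# Toy long-format data: one row per golfer per test occasion.
df = pd.DataFrame({
    'id':    [1, 1, 2, 2, 3, 3, 4, 4],
    'group': ['SMR', 'SMR', 'SMR', 'SMR',
              'Control', 'Control', 'Control', 'Control'],
    'test':  ['pre', 'post'] * 4,
    'score': [29.1, 16.2, 30.0, 17.1, 20.5, 18.9, 19.8, 18.7],
})

# Group is the between-subjects factor, Test the repeated measure;
# the output table reports F, p, and partial eta squared (np2).
aov = pg.mixed_anova(data=df, dv='score', within='test',
                     subject='id', between='group')
print(aov.round(3))
```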
## Results
#### Putting Performance
The mean distance of the SMR group in the pretest
and posttest was 29.62 cm (5.59) and 16.59 cm (8.92),
respectively. The control group distance was 20.17 cm
(12.07) and 18.80 cm (5.58), respectively. An independent t test showed no difference in the mean distance in
the pretest between the two groups [t(14) = 2.008, p = .073, η²p = .224]. The 2 (Group: SMR NFT, Control) × 2 (Test: pretest, posttest) mixed-model ANOVA revealed a significant interaction effect on putting performance [F(1, 14) = 5.029, p = .042, η²p = .264]. The SMR neurofeedback group exhibited a shorter distance from the hole in the posttest than in the pretest [t(7) = 3.417, p = .011, η²p = .625]. No significant difference was observed for other
comparisons.
#### Putting Performance in Standard Deviation
A marginal interaction effect was observed in the 2 (Group: SMR NFT, Control) × 2 (Test: pretest, posttest) ANOVA [F(1, 14) = 4.121, p = .062, η²p = .227]. We did not observe a main effect of the Group factor [F(1, 14) = 0.136, p = .717, η²p = .010]. The SMR group exhibited a significantly lower SD in the posttest (16.11 cm) than in the pretest (24.70 cm) [t(7) = 4.408, p = .003, η²p = .735], whereas the control group showed no significant variation in SD (21.03 cm to 18.38 cm) [t(7) = 1.208, p = .266, η²p = .173].
#### SMR Relative Power
The difference scores of the SMR group members for T1, T2, and T3 were 0.481 (0.588), 0.186 (0.378), and 0.040 (0.268), respectively. For the control group, the difference scores were –0.200 (0.424), –0.143 (0.440), and 0.009 (0.444), respectively. We compared the difference scores with a 2 (Group: SMR NFT, Control) × 3 [Time window: –1.5 to –1.0 s (T1), –1.0 to –0.5 s (T2), –0.5 to 0 s (T3)] two-way ANOVA and observed a marginally significant two-way interaction effect [F(2, 28) = 3.315, p = .051, η²p = .191]. To explore this marginal interaction effect and examine the training effect before and after NFT, a subsequent simple main effect analysis was performed and revealed a marginal Time effect [F(2, 14) = 3.470, p = .060, η²p = .331] in the SMR NFT group. Post hoc analysis showed that the SMR power was significantly greater in T1 than in T3 [t(7) = 2.925, p = .022, η²p = .550]. No significant simple main effect was observed in the control group [F(2, 14) = 0.671, p = .567, η²p = .141]. In addition, a simple main effect analysis revealed that the SMR NFT group exhibited a relatively higher SMR power than the control group at T1 [t(14) = 2.657, p = .019, η²p = .335]. A significant group main effect revealed that the SMR NFT group had a higher SMR power than the control group overall [F(1, 14) = 4.665, p = .049, η²p = .250]. The difference scores between the two groups are depicted in Figure 1.
**Figure 1 — The difference scores of SMR relative power between the SMR NFT and control groups at T1 (–1.5 to –1.0 s), T2 (–1.0 to –0.5 s), and T3 (–0.5 to 0 s).**

#### Control Analyses

**_Successful Training Ratio._** The overall mean successful training ratio was 62.39 (8.88)% for the SMR training group and 22.27 (22.28)% for the control group. The 2 (Group: SMR NFT, Control) × 4 (Training section: Section 1: sessions 1–2; Section 2: sessions 3–4; Section 3: sessions 5–6; Section 4: sessions 7–8) ANOVA showed no interaction effect [F(3, 42) = 0.694, p = .497, η²p = .047], but a significant group main effect was observed [F(1, 14) = 22.188, p = .001, η²p = .613]: the SMR group showed a significantly higher successful training ratio than did the control group. Table 1 lists the successful training ratio for each group across the training sections.
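A hypothetical sketch of how a successful training ratio of this kind could be computed is shown below. Following the note to Table 1, the ratio is taken as the percentage of feedback samples in which moment-to-moment SMR power exceeds the session threshold; the sample array and threshold value are illustrative assumptions.

```python
# Hypothetical computation of a per-trial successful training ratio: the
# percentage of samples in which SMR power is held above the threshold.
import numpy as np

def successful_training_ratio(smr_power_samples, threshold):
    """Percentage of samples with SMR power above the training threshold."""
    above = np.asarray(smr_power_samples) > threshold
    return 100.0 * above.mean()

# Fake 60-s trial at 30 feedback updates per second (values are made up).
trial_power = np.random.default_rng(0).normal(loc=6.0, scale=2.0, size=1800)
print(f"ratio = {successful_training_ratio(trial_power, threshold=5.86):.1f}%")
```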
**_Within-Session Learning._** The results of NFT can be affected by day-to-day fluctuations in arousal level (Gruzelier et al., 2014). Thus, in addition to comparing the average successful training ratios of the eight sessions between the two groups, we compared the successful training ratios of the first and last trials of each session for all eight sessions to determine whether participants in the NFT group improved within each training session. We hypothesized that the successful training ratio would be greater in the last trial than in the first trial for the SMR NFT group but not for the control group. A 2 (Group: SMR NFT, Control) × 8 (Session: sessions 1–8) × 2 (Trial: first trial, last trial) three-way ANOVA was employed to test this hypothesis. Although the three-way interaction effect was not significant [F(7, 98) = 2.063, p = .082, η²p = .128], a 2 (Group: SMR NFT, Control) × 2 (Trial: first trial, last trial) interaction effect [F(1, 14) = 33.192, p = .001, η²p = .703] was revealed. Post hoc analysis was consistent with our prediction: only the SMR NFT group demonstrated a greater successful training ratio in the last trial (M = 77.65, SD = 7.84) than in the first trial (M = 50.58, SD = 10.65) across all sessions [t(7) = 8.344, p = .001, η²p = .909]. The control group did not show a significant difference between the first trial (M = 12.19, SD = 11.86) and last trial (M = 16.32, SD = 17.00) [t(7) = 1.784, p = .118, η²p = .313]. In addition, the SMR NFT group demonstrated a significantly higher training ratio on both the first trial [t(7) = 6.810, p = .001, η²p = .768] and the last trial [t(7) = 9.267, p = .001, η²p = .860] than did the control group (Figure 2).
**_Threshold Increments Within SMR Training Sessions._**
Although our control analyses provided supportive evidence for the learning progress made by the SMR NFT
group, we further analyzed the change in threshold during
each session of SMR NFT. In our study, threshold level
was used as a difficulty index in the SMR NFT group, in
which golfers were instructed to increase the SMR above
a designated level to meet our training demand. Thus, an improvement in the successful training ratio from the two previous control analyses was meaningful only when the threshold for each session was also examined.
Previous studies evaluated the threshold variation within
day-to-day sessions and suggested that the increased
threshold could serve as a marker for improvement of
the controllability due to neurofeedback training (Doppelmayr & Weber, 2011). Thus, we converted the eight
training sessions into four sections as described in the
methods section and examined the training threshold
variation by employing a one-way ANOVA to examine
the effect of Training section (Section 1: sessions 1–2;
Section 2: sessions 3–4; Section 3: sessions 5–6; Section
4: sessions 7–8) in the SMR group. We hypothesized that
the threshold value would increase after the first training
section, which supports an improvement in controllability
due to SMR neurofeedback training. The average training thresholds for sections one to four in the SMR NFT
group were 5.862 (2.781), 7.636 (3.368), 8.214 (3.718),
and 7.750 (3.816), respectively. As predicted, a significant difference was detected by the one-way ANOVA [F(3, 18) = 9.945, p = .001, η²p = .624]. Post hoc analysis demonstrated that the training thresholds in the second, third, and fourth sections were significantly higher than that of the first section.
**_Electrode Specificity._** Although the current study demonstrated that the relative SMR power of the SMR NFT group was significantly higher than that of the control group following SMR NFT, it remained unknown whether the greater 12–15 Hz EEG relative power after training was limited to the sensorimotor cortex or whether it spilled over to other regions, such as the frontal, parietal, and occipital cortices. Thus, we compared the difference
scores at 12–15 Hz EEG relative power among Fz, Cz, Pz,
and Oz between pre- and posttest sessions. Previous work
has shown that the SMR originated in the centro-parietal
region (Grosse-Wentrup, Schölkopf, & Hill, 2011). Thus,
we hypothesized that the difference score of 12–15 Hz at
Cz would be greater than that of the frontal and occipital
regions for SMR group participants after training. A 2
(Group: SMR NFT, Control) × 4 (Region: Frontal, Central, Parietal, Occipital) two-way ANOVA between the
two groups was performed to test this hypothesis.
The difference scores at Fz, Cz, Pz, and Oz were
0.035 (0.200), 0.212 (0.178), 0.135 (0.298), and 0.003
(0.241), respectively, for the SMR NFT group.
**Table 1. The Successful Training Ratios Between the SMR NFT and Control Groups Across the Four Training Sections (Every Two Consecutive Sessions Were Folded, Resulting in Four Sections)**

|         | Section 1     | Section 2     | Section 3     | Section 4     | Total        |
|---------|---------------|---------------|---------------|---------------|--------------|
| SMR     | 53.82 (19.71) | 63.85 (12.53) | 65.63 (9.52)  | 66.27 (17.91) | 62.39 (5.08) |
| Control | 20.51 (24.11) | 23.02 (26.31) | 22.94 (21.58) | 22.62 (19.61) | 22.27 (1.09) |

_Note. Values are the percentage of time during which SMR power was successfully increased above threshold (i.e., successful control of SMR power)._
-----
**Figure 2 — The mean successful training ratio for the first and last trial between the SMR NFT and control groups across the eight training sessions.**

For the control group, the difference scores at Fz, Cz, Pz, and Oz
were –0.056 (0.309), –0.438 (0.169), –0.150 (0.268),
and –0.168 (0.640), respectively. This analysis yielded a marginally significant interaction effect [F(3, 42) = 2.680, p = .089, η²p = .161]. Because of the exploratory nature of this study, we conducted a follow-up analysis of this interaction effect. Independent t tests of the four regions between the two groups showed that significance was observed only for the difference score at Cz [t(14) = 5.159, p = .001, η²p = .655], for which the SMR NFT group exhibited a significantly higher difference score than the control group. Moreover, a one-way ANOVA of the four regions in the SMR NFT group reached marginal significance [F(3, 21) = 2.644, p = .076, η²p = .274]. Follow-up pairwise t tests found that the difference score at Cz was higher than that at Fz [t(7) = 3.740, p = .007, η²p = .666] and Oz [t(7) = 2.530, p = .039, η²p = .478]. These lines of evidence provide preliminary support for the electrode specificity of SMR NFT in this study.
**_Frequency Specificity._** Previous studies have shown
that neurofeedback training may generate concurrent
changes in flanking frequency bands (Enriquez-Geppert
et al., 2014). The aim of this analysis was to investigate
whether SMR NFT resulted in a change in frequency
bands close to SMR. We compared the relative power
difference scores of theta (4–7 Hz), alpha (8–12 Hz),
low beta (13–20 Hz), high beta (21–30 Hz), and broad
beta (13–30 Hz) frequency bands before golf putting
from the pretest and posttest between the two groups. The 2 (Group: SMR NFT, Control) × 3 [Time window: –1.5 to –1.0 s (T1), –1.0 to –0.5 s (T2), –0.5 to 0 s (T3)] two-way ANOVA showed neither interaction effects on theta power [F(2, 28) = 0.550, p = .583, η²p = .038], alpha power [F(2, 28) = 0.113, p = .802, η²p = .011], low beta power [F(2, 28) = 0.052, p = .949, η²p = .004], high beta power [F(2, 28) = 0.503, p = .496, η²p = .035], or broad beta power [F(2, 28) = 0.883, p = .425, η²p = .059], nor group main effects on theta power [F(1, 14) = 0.032, p = .860, η²p = .002], alpha power [F(1, 14) = 0.070, p = .795, η²p = .005], low beta power [F(1, 14) = 0.764, p = .397, η²p = .052], high beta power [F(1, 14) = 0.677, p = .424, η²p = .046], or broad beta power [F(1, 14) = 0.023, p = .881, η²p = .002].
The difference scores among these five frequency bands
are listed in Table 2.
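The band-wise difference scores compared above can be sketched as follows: relative power per band (band power divided by 1–30 Hz power) is computed for a pretest and a posttest epoch and subtracted. The 500 Hz sampling rate and helper names are again illustrative assumptions rather than the authors' code.

```python
# Illustrative band-wise difference scores (posttest minus pretest) for the
# frequency bands of Table 2, assuming a 500 Hz sampling rate.
from scipy.signal import welch

BANDS = {"theta": (4, 7), "alpha": (8, 12), "low beta": (13, 20),
         "high beta": (21, 30), "beta": (13, 30)}

def band_relative_power(segment, fs=500.0):
    freqs, psd = welch(segment, fs=fs, nperseg=int(fs))
    total = psd[(freqs >= 1) & (freqs <= 30)].sum()
    return {name: psd[(freqs >= lo) & (freqs <= hi)].sum() / total
            for name, (lo, hi) in BANDS.items()}

def difference_scores(pre_segment, post_segment, fs=500.0):
    pre = band_relative_power(pre_segment, fs)
    post = band_relative_power(post_segment, fs)
    return {name: post[name] - pre[name] for name in BANDS}
```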
## Discussion
The aim of this study was to investigate the effect of SMR
neurofeedback training on golf putting performance. Our
results showed that golfers receiving SMR neurofeedback
training demonstrated enhanced SMR activity during the
final 1.5 s before golf putting, resulting in better putting
performance compared with the control group. This finding lends preliminary support to the hypothesis that SMR
NFT is effective for increasing SMR power, and leads to
superior putting performance.
-----
**Table 2. Difference Scores (%) of Relative Power for Theta, Alpha, Low Beta, High Beta, and Beta Frequency Bands in Three Time Windows Between the Two Groups, SMR and Control**

| Relative Power | SMR, T1 | Control, T1 | SMR, T2 | Control, T2 | SMR, T3 | Control, T3 |
|----------------|---------|-------------|---------|-------------|---------|-------------|
| Theta     | .025 (.621) | .338 (.493) | –.234 (.172) | –.186 (.528) | .311 (1.071) | .085 (.452) |
| Alpha     | .006 (.134) | .048 (.221) | .052 (.177)  | .017 (.216)  | –.006 (.465) | –.058 (.223) |
| Low beta  | .035 (.258) | .014 (.135) | –.069 (.124) | –.029 (.164) | –.097 (.183) | –.033 (.082) |
| High beta | .014 (.190) | .015 (.109) | –.046 (.085) | .128 (.593)  | –.047 (.152) | –.030 (.070) |
| Beta      | .034 (.164) | .053 (.010) | –.064 (.094) | –.029 (.077) | –.050 (.137) | –.082 (.112) |

_Note. T1 = –1.5 to –1.0 s; T2 = –1.0 to –0.5 s; T3 = –0.5 to 0 s before putting._
Increased SMR power by NFT results in better visuomotor performance. For behavioral data, we observed that
SMR neurofeedback training improved skilled golfers’
putting performance, as indicated by the reduced average
distance from the hole and the variability of the score. No
significant change in putting performance was observed
in the control group. Previous studies have demonstrated
that augmenting SMR by NFT improved visual motor
performance (Ros et al., 2009) and increased self-rating
scores of subjective flow state in dancers (Gruzelier
et al., 2010). Furthermore, augmenting SMR by NFT
was related to an improved attention-related mental
state (Vernon et al., 2003) and memory performance
(Hoedlmoser et al., 2008). In addition, converging lines
of evidence support the effectiveness of NFT based
on non-SMR variables enhancing performance in the
sport domain (Arns et al., 2008; Gruzelier et al., 2010;
Kao et al., 2014; Landers et al., 1991; Raymond, Sajid,
Parkinson, & Gruzelier, 2005; Ring, Cooke, Kavussanu,
McIntyre, & Masters, 2015). Nevertheless, the current
study is, to the best of our knowledge, the first to use the SMR protocol to investigate the effectiveness of NFT on sport performance. Our results support the finding that augmented SMR power is linked with more adaptive fine-motor performance (Cheng et al., 2015) and extend
the potential facilitation effects of SMR training to the
sport domain.
Less task-irrelevant interference of somatosensory
and sensorimotor processing, as reflected in augmented
SMR power after training, leads to improved putting
performance. A previous study has indicated that participants in the automatic stage showed weaker activity
in the presupplementary motor area, premotor cortex,
parietal cortex, and prefrontal cortex compared with
novices in a self-paced sequential finger movement task
(Wu et al., 2008). A negative relationship between SMR
power and sensorimotor activity has been suggested
(Mann et al., 1996). The drop in sensorimotor activity,
as reflected by increased SMR power, may indicate a
greater adaptive task-related attention allocation that
facilitates the execution of sport performance (Gruzelier
et al., 2010). Increasing SMR power through NFT is
also related to more efficient and modulated visuomotor
performance (Gruzelier et al., 2010; Ros et al., 2009).
These results suggest that augmenting SMR power led
to an improved adjustment of somatosensory and sensorimotor pathways (Kober et al., 2014), which resulted
in increased task-related attention toward specific tasks
(Egner & Gruzelier, 2001). Moreover, previous studies
have suggested that enhanced SMR power leads to a
relatively higher flow state (Gruzelier et al., 2010) and
calming mood (Gruzelier, 2014a). Based on the functional
role of SMR, these findings imply that a reduction in
sensorimotor activity may lessen the conscious processing involved in motor execution, which would lead to a
more conceptual automatic process (Cheng et al., 2015).
This interpretation is in line with converging evidence
supporting a beneficial effect of augmented SMR on
focusing and sustaining attention, working memory, and
psychomotor skills (Egner & Gruzelier, 2001; Ros et al.,
2009). Collectively, the superior golf putting performance
observed in the present SMR NFT group might be the
result of reduced somatosensory information processing
before the back swing, which leads to refined golf putting performance. The interpretation that a reduction in
conscious interference facilitates motor operation is in
line with the concept of automatic processing proposed
by Fitts and Posner (1967). However, given the relatively
small sample size, future research should verify the causal
relationship between augmented SMR power and fine-motor performance.
Reduced cortical activity in the sensorimotor area, as
reflected by the higher power of 12–15 Hz, is sensitive to
superior putting performance. First, the electrode specificity of SMR NFT was demonstrated. Although electrode
specificity has been suggested to be an important step in
support of the NFT training effect on the corresponding
EEG component at a specific brain region (Gruzelier,
2014b), this is the first study in the area of NFT and
sport performance to provide such preliminary evidence
for the localized training effects. The lack of difference
between Cz and Pz might suggest that this region is
also part of a network associated with SMR activity in
motor performance. This speculation is in line with the
evidence that the parietal region is involved in processing
visual-spatial information during motor performance (Del
Percio et al., 2011).
Second, frequency specificity was analyzed. One
might argue that enhanced putting performance was
caused by variation in another frequency band at the Cz
site, but this explanation is inconsistent with the lack
of significant changes on difference scores in the theta,
alpha, low beta and high beta frequency bands. These
results suggest that it is primarily SMR power that
accounts for the facilitating effect of SMR NFT on putting performance rather than other neighboring frequency
bands. Our demonstration of electrode and frequency
specificity strengthens the hypothesis that improved putting performance was the result of reduced sensorimotor
activity before putting execution.
The SMR NFT group improved putting performance through a refined strategy for controlling SMR power, reaching the training goal as a result of the training program. First, our data showed that the
SMR group demonstrated a higher successful training
ratio than did the control group. Second, previous studies
proposed that the training effect would emphasize daily
training improvement (Gruzelier et al., 2014). In our
control analysis, we compared the successful training
ratio of the first and the last trial within eight sessions. A
significantly higher successful training ratio for the last
trial than for the first trial was observed, suggesting that
golfers in the SMR NFT group learned the tuning strategy
successfully after the initial trials and that the strategies
were effective in the subsequent trials of the remaining
sessions. This result lends support to the concept of neurofeedback trainability and further confirms the possibility
of EEG tuning within a single training session (Kao et
al., 2014; López-Larraz, Escolano, & Minguez, 2012).
Furthermore, we found a significant threshold increase after the first training section only in the SMR NFT group, suggesting that our training protocol is facilitative for golfers.
This evidence was in line with previous work in which
the SMR amplitude increased above the daily adjusted
threshold (Weber, Köberl, Frank, & Doppelmayr, 2011).
We have several suggestions with regard to future
neurofeedback studies. First, combining these studies
with neuroimaging tools is necessary. Although we have
provided evidence that the regulation of SMR power
can enhance putting performance, this result would benefit from experiments conducted with high-spatial-resolution neuroimaging tools, such as fMRI, to provide
a more precise anatomical description of the NFT effect.
Second, the phenomenological report of neurofeedback
learning and its effects is often overlooked (Gruzelier,
2014b). A sophisticated measurement of subjective
mental state, such as an in-depth questionnaire or scale,
is needed to further elucidate the mental state associated with NFT (Gruzelier, 2014a). Third, the retention
of learning driven by NFT must be examined. Thus far,
this issue has received little attention, but it is critical
from a practical viewpoint to determine how long the
performance enhancement due to NFT lasts. Fourth, to
explore the effect of SMR NFT on anticipative motor
planning is needed. Future study should investigate the
link between neurophysiological and cognitive processes
by using the priming tests to further understand the neurocognitive architecture of golf performance. Last but
not least, the changes in network dynamics after NFT
should be further examined to fill the knowledge gap
of cortical interaction caused by NFT. For example, the
parietal and sensorimotor cortex networks are thought
to be functionally relevant during motor performance
(Baumeister et al., 2013).
Our findings should be interpreted with caution due
to the limitations of the study. First, the sample size was
limited. Some of our statistical analyses reached only
marginal significance, likely due to the small sample size.
Furthermore, given the exploratory nature of the study, it is reasonable to speculate on the implications of the marginally significant effects. Second, although the
neurophysiological source of the SMR could not be precisely located due to limited spatial resolution by surface
EEG, the finding of a marginally significant larger SMR difference score at the Cz site compared with the Fz and Oz sites, as well as the finding that, in the SMR group, the largest difference at the Cz site occurred in the 12–15 Hz band rather than in other frequency bands, provides indirect evidence supporting the impact of somatosensory activity on superior putting performance after SMR NFT.
Third, putting is only one of many fundamental motor
skills involved in golf performance. Our results may be
difficult to generalize to other golf motor skills (e.g., the
drive shot and tee shot). Future studies should, therefore,
examine different skills involved in golf performance to
determine the generalizability of the present findings.
Fourth, the skill levels of the participants may impact
the effect of NFT, and caution should be exercised when
generalizing these findings to golfers at other skill levels.
In conclusion, eight sessions of SMR NFT enhanced putting performance and increased SMR power in the SMR NFT group compared with the control group, suggesting that SMR NFT is an effective protocol for enhancing putting performance through the fine-tuning of somatosensory interference, as reflected by augmented SMR.
**Acknowledgments**
The work of Tsung-Min Hung was supported in part by the
National Science Council (Taiwan) under grant NSC 98-2410-H-003-124-MY2.
## References
Arns, M., Kleinnijenhuis, M., Fallahpour, K., & Breteler, R. (2008). Golf performance enhancement and real-life neurofeedback training using personalized event-locked EEG profiles. Journal of Neurotherapy, 11(4), 11–18. doi:10.1080/10874200802149656

Babiloni, C., Del Percio, C., Iacoboni, M., Infarinato, F., Lizio, R., Marzano, N., . . . Eusebi, F. (2008). Golf putt outcomes are predicted by sensorimotor cerebral EEG rhythms. The Journal of Physiology, 586(1), 131–139. doi:10.1113/jphysiol.2007.141630

Baumeister, J., Von Detten, S., Van Niekerk, S.M., Schubert, M., Ageberg, E., & Louw, Q.A. (2013). Brain activity in predictive sensorimotor control for landings: An EEG pilot study. International Journal of Sports Medicine, 34(12), 1106–1111. doi:10.1055/s-0033-1341437

Chatrian, G.E., Lettich, E., & Nelson, P.L. (1985). Ten percent electrode system for topographic studies of spontaneous and evoked EEG activity. The American Journal of EEG Technology, 25, 83–92.

Cheng, M.Y., Hung, C.L., Huang, C.J., Chang, Y.K., Lo, L.C., Shen, C., & Hung, T.M. (2015). Expert-novice differences in SMR activity during dart throwing. Biological Psychology, 110, 212–218. doi:10.1016/j.biopsycho.2015.08.003

Chuang, L.Y., Huang, C.J., & Hung, T.M. (2013). The differences in frontal midline theta power between successful and unsuccessful basketball free throws of elite basketball players. International Journal of Psychophysiology, 90(3), 321–328. doi:10.1016/j.ijpsycho.2013.10.002

Cohen, J. (1992). A power primer. Psychological Bulletin, 112(1), 155–159. doi:10.1037/0033-2909.112.1.155

Davidson, R.J. (1988). EEG measures of cerebral asymmetry: Conceptual and methodological issues. The International Journal of Neuroscience, 39(1–2), 71–89. doi:10.3109/00207458808985694

Del Percio, C., Iacoboni, M., Lizio, R., Marzano, N., Infarinato, F., Vecchio, F., . . . Babiloni, C. (2011). Functional coupling of parietal alpha rhythms is enhanced in athletes before visuomotor performance: A coherence electroencephalographic study. Neuroscience, 175, 198–211. doi:10.1016/j.neuroscience.2010.11.031

Doppelmayr, M., & Weber, E. (2011). Effects of SMR and theta/beta neurofeedback on reaction times, spatial abilities, and creativity. Journal of Neurotherapy, 15(2), 115–129. doi:10.1080/10874208.2011.570689

Egner, T., & Gruzelier, J. (2001). Learned self-regulation of EEG frequency components affects attention and event-related brain potentials in humans. Neuroreport, 12(18), 4155–4159. doi:10.1097/00001756-200112210-00058

Egner, T., Strawson, E., & Gruzelier, J. (2002). EEG signature and phenomenology of alpha/theta neurofeedback training versus mock feedback. Applied Psychophysiology and Biofeedback, 27(4), 261–270. doi:10.1023/A:1021063416558

Egner, T., Zech, T.F., & Gruzelier, J. (2004). The effects of neurofeedback training on the spectral topography of the electroencephalogram. Clinical Neurophysiology, 115(11), 2452–2460. doi:10.1016/j.clinph.2004.05.033

Enriquez-Geppert, S., Huster, R.J., Scharfenort, R., Mokom, Z.N., Zimmermann, J., & Herrmann, C.S. (2014). Modulation of frontal-midline theta by neurofeedback. Biological Psychology, 95(1), 59–69. doi:10.1016/j.biopsycho.2013.02.019

Fitts, P.M., & Posner, M.I. (1967). Human performance. Basic concepts in psychology. Retrieved from http://lccn.loc.gov/67011662

Grosse-Wentrup, M., Schölkopf, B., & Hill, J. (2011). Causal influence of gamma oscillations on the sensorimotor rhythm. NeuroImage, 56(2), 837–842. doi:10.1016/j.neuroimage.2010.04.265

Gruzelier, J. (2014a). Differential effects on mood of 12–15 (SMR) and 15–18 (beta1) Hz neurofeedback. International Journal of Psychophysiology, 93(1), 112–115. doi:10.1016/j.ijpsycho.2012.11.007

Gruzelier, J. (2014b). EEG-neurofeedback for optimising performance. III: A review of methodological and theoretical considerations. Neuroscience and Biobehavioral Reviews, 44, 159–182. doi:10.1016/j.neubiorev.2014.03.015

Gruzelier, J., Foks, M., Steffert, T., Chen, M.J.L., & Ros, T. (2014). Beneficial outcome from EEG-neurofeedback on creative music performance, attention and well-being in school children. Biological Psychology, 95(1), 86–95. doi:10.1016/j.biopsycho.2013.04.005

Gruzelier, J., Inoue, A., Smart, R., Steed, A., & Steffert, T. (2010). Acting performance and flow state enhanced with sensory-motor rhythm neurofeedback comparing ecologically valid immersive VR and training screen scenarios. Neuroscience Letters, 480(2), 112–116. doi:10.1016/j.neulet.2010.06.019

Hoedlmoser, K., Pecherstorfer, T., Gruber, G., Anderer, P., Doppelmayr, M., Klimesch, W., & Schabus, M. (2008). Instrumental conditioning of human sensorimotor rhythm (12–15 Hz) and its impact on sleep as well as declarative learning. Sleep, 31(10), 1401–1408.

Kao, S.C., Huang, C.J., & Hung, T.M. (2013). Frontal midline theta is a specific indicator of optimal attentional engagement during skilled putting performance. Journal of Sport & Exercise Psychology, 35(5), 470–478.

Kao, S.C., Huang, C.J., & Hung, T.M. (2014). Neurofeedback training reduces frontal midline theta and improves putting performance in expert golfers. Journal of Applied Sport Psychology, 26(3), 271–286. doi:10.1080/10413200.2013.855682

Klimesch, W., Doppelmayr, M., Pachinger, T., & Ripper, B. (1997). Brain oscillations and human memory: EEG correlates in the upper alpha and theta band. Neuroscience Letters, 238(1–2), 9–12. doi:10.1016/S0304-3940(97)00771-4

Kober, S.E., Witte, M., Stangl, M., Väljamäe, A., Neuper, C., & Wood, G. (2014). Shutting down sensorimotor interference unblocks the networks for stimulus processing: An SMR neurofeedback training study. Clinical Neurophysiology.

Krane, V., & Williams, J.M. (2006). Psychological characteristics of peak performance. In J.M. Williams (Ed.), Applied sport psychology: Personal growth to peak performance. New York: McGraw-Hill.

Landers, D.M., Petruzzello, S.J., Salazar, W., Crews, D.J., Kubitz, K.A., Gannon, T.L., & Han, M. (1991). The influence of electrocortical biofeedback on performance in pre-elite archers. Medicine and Science in Sports and Exercise, 23(1), 123–129. doi:10.1249/00005768-199101000-00018

Logan, G.D., Hockley, W.E., & Lewandowsky, S. (1991). Automaticity and memory. In W.E. Hockley & S. Lewandowsky (Eds.), Relating theory and data: Essays on human memory in honor of Bennet B. Murdock (pp. 347–366). East Sussex, UK: Psychology Press.

López-Larraz, E., Escolano, C., & Minguez, J. (2012). Upper alpha neurofeedback training over the motor cortex increases SMR desynchronization in motor tasks. Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS, 2012, 4635–4638. doi:10.1109/EMBC.2012.6347000

Mann, C.A., Sterman, M.B., & Kaiser, D.A. (1996). Suppression of EEG rhythmic frequencies during somato-motor and visuo-motor behavior. International Journal of Psychophysiology, 23(1–2), 1–7. doi:10.1016/0167-8760(96)00036-0

Nicholls, A.R. (2007). A longitudinal phenomenological analysis of coping effectiveness among Scottish international adolescent golfers. European Journal of Sport Science, 7(3), 169–178. doi:10.1080/17461390701643034

Niemarkt, H.J., Jennekens, W., Pasman, J.W., Katgert, T., Van Pul, C., Gavilanes, A.W.D., . . . Andriessen, P. (2011). Maturational changes in automated EEG spectral power analysis in preterm infants. Pediatric Research, 70(5), 529–534. doi:10.1203/PDR.0b013e31822d748b

Nierhaus, T., Forschack, N., Piper, S.K., Holtze, S., Krause, T., Taskin, B., . . . Villringer, A. (2015). Imperceptible somatosensory stimulation alters sensorimotor background rhythm and connectivity. The Journal of Neuroscience, 35(15), 5917–5925. doi:10.1523/JNEUROSCI.3806-14.2015

Pelz, D., & Frank, J.A. (2000). Dave Pelz's putting bible: The complete guide to mastering the green. New York, NY: Doubleday.

Raymond, J., Sajid, I., Parkinson, L.A., & Gruzelier, J. (2005). Biofeedback and dance performance: A preliminary investigation. Applied Psychophysiology and Biofeedback, 30(1), 65–73. doi:10.1007/s10484-005-2175-x

Ring, C., Cooke, A., Kavussanu, M., McIntyre, D., & Masters, R. (2015). Investigating the efficacy of neurofeedback training for expediting expertise and excellence in sport. Psychology of Sport and Exercise, 16, 118–127. doi:10.1016/j.psychsport.2014.08.005

Ros, T., Moseley, M.J., Bloom, P.A., Benjamin, L., Parkinson, L.A., & Gruzelier, J. (2009). Optimizing microsurgical skills with EEG neurofeedback. BMC Neuroscience, 10(1), 87. doi:10.1186/1471-2202-10-87

Sauseng, P., Hoppe, J., Klimesch, W., Gerloff, C., & Hummel, F.C. (2007). Dissociation of sustained attention from central executive functions: Local activity and interregional connectivity in the theta range. The European Journal of Neuroscience, 25(2), 587–593. doi:10.1111/j.1460-9568.2006.05286.x

Schneider, W., & Shiffrin, R.M. (1977). Controlled and automatic human information processing: I. Detection, search, and attention. Psychological Review, 84(1), 1–66. doi:10.1037/0033-295X.84.1.1

Semlitsch, H.V., Anderer, P., Schuster, P., & Presslich, O. (1986). A solution for reliable and valid reduction of ocular artifacts, applied to the P300 ERP. Psychophysiology, 23(6), 695–703. doi:10.1111/j.1469-8986.1986.tb00696.x

Vernon, D., Egner, T., Cooper, N., Compton, T., Neilands, C., Sheri, A., & Gruzelier, J. (2003). The effect of training distinct neurofeedback protocols on aspects of cognitive performance. International Journal of Psychophysiology, 47(1), 75–85. doi:10.1016/S0167-8760(02)00091-0

Weber, E., Köberl, A., Frank, S., & Doppelmayr, M. (2011). Predicting successful learning of SMR neurofeedback in healthy participants: Methodological considerations. Applied Psychophysiology and Biofeedback, 36(1), 37–45. doi:10.1007/s10484-010-9142-x

Wu, T., Chan, P., & Hallett, M. (2008). Modifications of the interactions in the motor networks when a movement becomes automatic. The Journal of Physiology, 586(Pt 17), 4295–4304. doi:10.1113/jphysiol.2008.153445
_Manuscript submitted: June 30, 2015_
_Revision accepted: October 08, 2015_
-----
|
{
"disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references', 'abstract'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1123/jsep.2015-0166?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1123/jsep.2015-0166, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "other-oa",
"status": "GREEN",
"url": "https://pub.uni-bielefeld.de/download/2901056/2901057/Cheng%20et%20al.%20-%202015%20-%20Sensorimotor%20rhythm%20neurofeedback%20enhances%20golf%20putting%20performance.pdf"
}
| 2,015
|
[
"JournalArticle"
] | true
| 2015-12-01T00:00:00
|
[] | 18,046
|
en
|
[
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Law",
"source": "s2-fos-model"
},
{
"category": "Economics",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/013973151bd1db3552ea8d8278c6dece1bbdd352
|
[] | 0.934056
|
How to issue a privacy-preserving central bank digital currency
|
013973151bd1db3552ea8d8278c6dece1bbdd352
|
Social Science Research Network
|
[
{
"authorId": "3170736",
"name": "Christian Grothoff"
},
{
"authorId": "145140500",
"name": "T. Moser"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"SSRN, Social Science Research Network (SSRN) home page",
"SSRN Electronic Journal",
"Soc Sci Res Netw",
"SSRN",
"SSRN Home Page",
"SSRN Electron J",
"Social Science Electronic Publishing presents Social Science Research Network"
],
"alternate_urls": [
"www.ssrn.com/",
"https://fatcat.wiki/container/tol7woxlqjeg5bmzadeg6qrg3e",
"https://www.wikidata.org/wiki/Q53949192",
"www.ssrn.com/en",
"http://www.ssrn.com/en/",
"http://umlib.nl/ssrn",
"umlib.nl/ssrn"
],
"id": "75d7a8c1-d871-42db-a8e4-7cf5146fdb62",
"issn": "1556-5068",
"name": "Social Science Research Network",
"type": "journal",
"url": "http://www.ssrn.com/"
}
|
JEL
|
### SUERF Policy Briefs
### No 114, June 2021
# How to issue a privacy-preserving central bank digital currency
## By Christian Grothoff and Thomas Moser[1]
_JEL codes: E42, E51, E52, E58, G2._
_Keywords: Central Bank Digital Currency, privacy, blind signatures._
_Many central banks are currently investigating Central Bank Digital Currency (CBDC) and possible designs. A_
_recent survey conducted by the European Central Bank has found that both citizens and professionals consider_
_privacy the most important feature of a CBDC. We show how a central bank could issue a CBDC that would be_
_easily scalable and allow the preservation of a key feature of physical cash: transaction privacy. At the same_
_time the proposed design would meet regulatory requirements and thus offer an appropriate balance between_
_privacy and legal compliance._
##### Introduction
A central bank digital currency (CBDC) for the general public would be a new type of money issued by
central banks, alongside banknotes and reserve accounts for selected financial market participants. Despite
initial skepticism, the number of central banks investigating CBDC has grown steadily over the past three
years. However, there is currently no consensus on how a CBDC should be designed and what features it
should have. These questions are being intensively debated and researched.
1 Christian Grothoff, Bern University of Applied Sciences and Taler Systems SA.
Thomas Moser, Swiss National Bank.
-----
A recent survey conducted by the European Central Bank has found that both citizens and professionals consider
privacy the most important feature of a digital Euro (ECB 2021). This may be surprising, but the fact that citizens
place a high value on privacy is a consistent finding of many surveys. Skeptics sometimes counter that citizens
express just the opposite in their behavior; they consistently choose convenience, speed, and financial savings
over privacy. However, in doing so they are often not fully aware of the extent to which technological advances
have improved the ability to track, aggregate, and disseminate personal information. They also often do not
expect their data to be shared and used in a context other than the one in which they disclose the data
(Nissenbaum 2010).
Over the past decade, the public has become increasingly aware and concerned about the vast scale of data
collected and stored by governments and corporations. Goldfarb and Tucker (2012) provide behavior-based
evidence of increasing consumer privacy concerns. Payments data are particularly revealing, and a CBDC could
potentially provide a great deal of data on citizens, making them vulnerable to malicious use for political or
commercial purposes. We thus believe that a successful CBDC would need to provide credible transaction
protections in order to gain broad public acceptance. Moreover, privacy is not just an individual value, it also has
a social value. Privacy is essential for a free society and democracy.
At the same time, a CBDC should not provide protection for illegal transactions and tax evasion. A privacy-preserving CBDC must ensure legal compliance, particularly compliance with anti-money laundering (AML) and
combating the financing of terrorism (CFT) regimes. It is thus crucial to find the right balance between privacy
[and legal compliance. We believe that our proposal recently published as a Swiss National Bank Working Paper](https://www.snb.ch/n/mmr/reference/working_paper_2021_03/source/working_paper_2021_03.n.pdf)
promises to do just that (Chaum et al. 2021). It builds on and improves the eCash technology (Chaum, 1983, and
Chaum et al. 1990) and uses GNU Taler (Dold, 2019). Taler is part of the GNU project, which is a collaborative
project for the development of “Free/Libre and Open Source Software” (FLOSS).[2] With FLOSS, all interested
parties have access to the source code and the right to tailor the software to their needs. The patent-free, open
standard protocol improves interoperability and competition among service providers. Instead of using
proprietary secrets or hardware security modules, Taler exclusively uses cryptographic software with public
specifications to provide privacy and security.
##### Privacy in payments: accounts versus tokens
Payment systems can be account-based or token-based. In an account-based system, a payment is made by
debiting the payer's account and crediting the payee's account. This implies that the transaction must be
recorded and involved parties identified. In a token-based system, a payment is made by transferring a token that
represents monetary value. The prime example is cash – coins or banknotes. Paying with cash means handing
over a coin or banknote. There is no need to record the transfer or identify the parties involved, possession of the
token is sufficient. However, the payee must be able to verify the token’s authenticity.
It has been suggested that the distinction between account- and token-based systems is not applicable to digital
currencies (Garratt et al. 2020). We believe that the distinction is also useful for digital currencies. The critical
distinction is the information carried by the information asset. In an account-based system, the assets (accounts)
are associated with transaction histories that include all of the credit and debit operations involving the accounts.
In a token-based system, the assets (tokens) carry information about their value and the entity that issued the
2 For more information about GNU, see [https://www.gnu.org. GNU Taler is released free of charge under the GNU](https://www.gnu.org)
Affero General Public License by the GNU Project. GNU projects popular among economists are the software packages
«R» and “GNU Regression, Econometrics and Time-series Library” (GRETL).
-----
token. The only possibility of attaining the transaction privacy property of cash, therefore, lies in token-based
systems.
We propose a token-based, software-only CBDC, where the CBDC token is issued and distributed just like
banknotes. Consequently, we will simply refer to these CBDC tokens as “coins.” Customers withdraw coins by
withdrawing money from their bank account; that is, they load coins onto their smartphone or computer and
their bank debits their account for the corresponding amount. The proposed CBDC is a genuine digital bearer
instrument; it is stored locally on the computer or smartphone; there is no account or ledger involved. There is
also no record linking the CBDC and the owner.
Privacy is achieved with a cryptographic technique called blind signatures. Before the user interacts with the
central bank to obtain a digitally signed coin, a blinding operation performed locally on the user's device hides
the numeric value representing a coin from the central bank before requesting the signature. In GNU Taler, this
numeric value is a public key, with the associated private key only being known to the owner of the coin. The coin
derives its value from the central bank’s signature on the coin’s public key. The central bank makes the signature
with its own private key. A merchant or payee can use the central bank’s corresponding “public key” to verify the
central bank’s signature and thereby the authenticity of the CBDC.
Because the blind signatures are carried out under the control of the users themselves, users do not have to trust
the central bank or the commercial bank to safeguard their private spending history. The central bank only learns
the total amounts of digital cash withdrawn and the total amount spent. Commercial banks learn how much
digital cash their customers withdraw, but not how much an individual customer has spent or where they are
spending it. Privacy in this design is thus not a question of confidentiality; it is cryptographically guaranteed.
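The toy Python sketch below illustrates the Chaum-style RSA blind-signature flow just described: the user blinds the coin, the central bank signs without seeing it, the user unblinds, and any payee verifies with the bank's public key. The tiny key, the variable names, and the unpadded "textbook" RSA are deliberate simplifications of our own; a real deployment such as GNU Taler uses full-size keys and proper hashing and padding.

```python
# Toy Chaum-style RSA blind signature. All parameters are illustrative;
# production systems use 2048+ bit keys and padded message hashes.
import math

p, q = 999983, 1000003                  # toy primes standing in for a real key
n = p * q                               # central bank's public modulus
e = 65537                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))       # bank's private exponent (Python 3.8+)

coin = 424242                           # stand-in for the coin's public-key hash
r = 123457                              # user's secret blinding factor
assert math.gcd(r, n) == 1

blinded = (coin * pow(r, e, n)) % n          # user blinds the coin locally
blind_sig = pow(blinded, d, n)               # bank signs without seeing `coin`
signature = (blind_sig * pow(r, -1, n)) % n  # user removes the blinding factor

assert pow(signature, e, n) == coin          # payee verifies with (n, e)
print("valid coin signature")
```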
##### The benefits of NOT using Distributed Ledger Technology (DLT) for CBDC
Most central banks experiment with distributed ledger technology (DLT). DLT is an interesting design if no
central party is available or desired: the purpose of a blockchain or DLT is to establish an immutable consensus
across multiple parties. However, this is not required in the case of a retail CBDC issued by a trusted central
bank. Distributing the central bank’s ledger merely increases transaction costs; it does not provide tangible
benefits in a central bank deployment.
A critical benefit of not using DLT is improved scalability. Our proposed scheme would be easily scalable and as
cost-effective as modern RTGS systems currently used by central banks. GNU Taler can easily handle tens of
thousands of transactions per second. The main cost of the system would be the secure storage of 1-10 kilobytes
per transaction. Using Amazon Web Services pricing, experiments with an earlier prototype of GNU Taler showed
that the cost of the system (memory, bandwidth, and computation) at scale would be less than USD 0.0001 per
transaction.
Furthermore, achieving privacy with DLT is a challenge, because DLT is essentially an account-based system. The
only difference from a traditional account-based system is that the accounts are not kept in a central database but in
a decentralized append-only database.
Cryptographic privacy-enhancing technologies such as zero-knowledge proofs are possible but computationally
demanding in a DLT-context, so that the high resource requirements make their use on mobile devices
impractical. This does not apply to the Chaum-style blind signature protocol used in GNU Taler, which can be
executed efficiently and quickly.
-----
##### How to prevent double-spending in a token-based system
Money has value only when it is scarce, which means, among other things, that double-spending of a monetary
asset is prevented. In a token-based system, one way to prevent double spending is to make it difficult to copy the
token. This is the approach that central banks take with banknotes. With digital currencies, however, preventing
copying is a challenge. Two potential technologies to prevent digital copying are unclonable functions and secure
zones in hardware. However, physically unclonable functions cannot be exchanged over the Internet (eliminating
the main use case of CBDCs), and security features in copy-prevention hardware have been repeatedly
compromised.
Our proposal, which consists only of software, does not even attempt to prevent token copying. Rather, double-spending is prevented by the fact that each coin can be spent exactly once. Once a coin has been spent, the number of the corresponding coin goes on a list of spent coins managed by the central bank. This list contains only the number of the spent coin but no transaction history. The coins also cannot be linked to the payers because the coins were blinded when the CBDC was withdrawn. When a payee receives a coin, the payee consults this list to see whether the coin has already been spent. If it has, the payment is rejected as invalid.
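A minimal sketch of this online double-spending check, with a hypothetical in-memory set standing in for the central bank's durable spent-coin database:

```python
# Hypothetical spent-coin check: the list stores only coin numbers, never
# payer identities, so rejecting a double spend reveals no transaction history.
spent_coins: set[str] = set()

def deposit(coin_number: str) -> bool:
    """Accept a coin only if its number is not already on the spent list."""
    if coin_number in spent_coins:
        return False              # already spent: payment rejected as invalid
    spent_coins.add(coin_number)  # record the coin number once it is spent
    return True

assert deposit("7f3a9c")          # first spend succeeds
assert not deposit("7f3a9c")      # second spend of the same coin is rejected
```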
Because our proposal requires online checks to prevent double-spending it does not enable offline payments.
While this could be considered a disadvantage, Grothoff and Dold (2021) point out that any offline payment
system has inherent and severe risks and thus its own drawbacks. Given that central banks do not intend to
replace physical cash with CBDC, but rather to issue CBDC in addition to physical cash, physical cash can be used
as the secure offline fallback in the event of power outages or cyber attacks.
##### Regulatory and policy consideration
In the proposed scheme, central banks do not learn the identities of consumers or merchants or transaction
amounts. Central banks only see when electronic coins are withdrawn and when they are redeemed. Commercial
banks continue to provide crucial customer and merchant authentication and, in particular, remain the guardians
of know-your customer information. Commercial banks observe when merchants receive funds and can limit the
amount of CBDC per transaction that an individual merchant may receive, if required. Additionally, transactions
are associated with the relevant customer contracts. The resulting income transparency enables the system to be
compliant with the AML/CFT regulations.
The proposed scheme thus offers one-sided privacy, allowing the buyer to remain anonymous while making the
seller's incoming transactions and underlying contractual obligations available upon request by competent
authorities. If unusual patterns of merchant income are detected, the commercial bank, tax authorities, or law
enforcement can obtain and inspect the business contracts underlying the payments to determine whether the
suspicious activity is nefarious. Overall, the system implements privacy-by-design and privacy-by-default
approaches (as required by the EU’s General Data Protection Regulation). Merchants do not inherently learn the
identity of their customers, banks have only necessary insights into their own customers’ activities, and central
banks are blissfully divorced from detailed knowledge of citizens’ activities.
A potential financial stability concern often raised with retail CBDCs is banking sector disintermediation. While
this would be a serious concern with an account-based CBDC, it should be less of a concern with a token-based
CBDC. Hoarding a token-based CBDC entails risks of theft or loss similar to those of hoarding cash. However,
should hoarding or massive conversions of money from bank deposits to CBDC become a problem, the proposed
-----
design would give central banks several options, including imposing per-account withdrawal limits or negative
interest rates.
Imposing limits could also be a requirement of the AML/CFT regime. While GNU Taler by design allows its users
to transact any amount in any currency, legislation could impose an enforceable ceiling on individual
transactions, requiring merchants that receive transactions exceeding the transaction limit to determine the
identity of the buyer. However, since there are no accounts, it would not be possible to impose holding limits. But
this is a good thing. Technologically enforced restrictions on holding or receiving CBDC should be avoided
anyway, as such restrictions would result in failures where users are unable to execute transactions despite
sufficient liquidity.
With the proposed design, central banks, commerce and citizens could reap the full benefits of the digital
economy. The efficiency and cost effectiveness, along with the improved consumer usability that comes from
shifting from authentication to authorization, make this system likely the first to support the long-envisioned goal
of online micropayments. In addition, the use of coins to cryptographically sign electronic contracts would enable
the use of smart contracts. This could lead to the emergence of entirely new applications for payment systems.
A recently designed extension for GNU Taler integrates privacy-preserving age verification that allows legal
guardians to impose age restrictions on digital purchases made with coins given to wards. Merchants would only
learn that the buyer meets the age requirement for the goods sold, while the identity and exact age of the child
would remain private. This is just one example of how central banks could use this system to issue programmable
money. ∎
-----
##### About the authors
**_Christian Grothoff_** _is a Professor for Computer Network Security at the Bern University of Applied Sciences,_
_researching future Internet architectures. His research interests include compilers, programming languages,_
_software engineering, networking, security and privacy. Before, he was leading the Décentralisé research team at_
_INRIA and an Emmy Noeter research group leader at TU Munich. He earned his PhD in computer science from UCLA,_
_an M.S. in computer science from Purdue University, and a Diplom in mathematics from the University of Wuppertal._
_He is an Ashoka Fellow and co-founder of Taler Systems SA and Anastasis SARL. He also served as an expert court_
_witness, and has reported on technology and national security as a freelance journalist._
**_Thomas Moser_** _is an Alternate Member of the Governing Board of the Swiss National Bank. Before joining the Swiss_
_National Bank, he was an Executive Director at the International Monetary Fund (IMF), and earlier in his career an_
_Economist at the Swiss Institute for Business Cycle Research (KOF) at the Swiss Federal Institute of Technology_
_(ETH), Zurich. Thomas Moser is also a member of the Managing Committee of the Swiss Institute of Banking and_
_Finance at the University of St. Gallen, a Member of the Board of Directors of Orell Füssli Ltd., and a Member of the_
_Advisory Board of the NZZ Swiss International Finance Forum. Thomas Moser holds a Master and a Doctorate in_
_Economics from the University of Zurich._
|
{
"disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.2139/ssrn.3965050?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.2139/ssrn.3965050, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "GREEN",
"url": "https://arbor.bfh.ch/15002/1/suerf.pdf"
}
| 2,021
|
[] | true
| null |
[] | 3,944
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Medicine",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/01398bbd4ecc589fecef134641d1016fdc1c044c
|
[
"Computer Science",
"Medicine"
] | 0.83742
|
Design and Implementation of High-Performance ECC Processor with Unified Point Addition on Twisted Edwards Curve
|
01398bbd4ecc589fecef134641d1016fdc1c044c
|
Italian National Conference on Sensors
|
[
{
"authorId": "46977722",
"name": "Md. Mainul Islam"
},
{
"authorId": "144778068",
"name": "Md. Selim Hossain"
},
{
"authorId": "82670319",
"name": "Moh. Khalid Hasan"
},
{
"authorId": "153087089",
"name": "M. Shahjalal"
},
{
"authorId": "1816029",
"name": "Y. Jang"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"SENSORS",
"IEEE Sens",
"Ital National Conf Sens",
"IEEE Sensors",
"Sensors"
],
"alternate_urls": [
"http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:ch:bel-142001",
"http://www.mdpi.com/journal/sensors",
"https://www.mdpi.com/journal/sensors"
],
"id": "3dbf084c-ef47-4b74-9919-047b40704538",
"issn": "1424-8220",
"name": "Italian National Conference on Sensors",
"type": "conference",
"url": "http://www.e-helvetica.nb.admin.ch/directAccess?callnumber=bel-142001"
}
|
With the swift evolution of wireless technologies, the demand for the Internet of Things (IoT) security is rising immensely. Elliptic curve cryptography (ECC) provides an attractive solution to fulfill this demand. In recent years, Edwards curves have gained widespread acceptance in digital signatures and ECC due to their faster group operations and higher resistance against side-channel attacks (SCAs) than that of the Weierstrass form of elliptic curves. In this paper, we propose a high-speed, low-area, simple power analysis (SPA)-resistant field-programmable gate array (FPGA) implementation of ECC processor with unified point addition on a twisted Edwards curve, namely Edwards25519. Efficient hardware architectures for modular multiplication, modular inversion, unified point addition, and elliptic curve point multiplication (ECPM) are proposed. To reduce the computational complexity of ECPM, the ECPM scheme is designed in projective coordinates instead of affine coordinates. The proposed ECC processor performs 256-bit point multiplication over a prime field in 198,715 clock cycles and takes 1.9 ms with a throughput of 134.5 kbps, occupying only 6543 slices on Xilinx Virtex-7 FPGA platform. It supports high-speed public-key generation using fewer hardware resources without compromising the security level, which is a challenging requirement for IoT security.
|
_Article_
## Design and Implementation of High-Performance ECC Processor with Unified Point Addition on Twisted Edwards Curve
**Md. Mainul Islam** **[1]** **, Md. Selim Hossain** **[2]** **, Moh. Khalid Hasan** **[1]** **, Md. Shahjalal** **[1]**
**and Yeong Min Jang** **[1,]***
1 Department of Electronics Engineering, Kookmin University, Seoul 02707, Korea;
mainul.islam@ieee.org (M.M.I.); khalidrahman45@ieee.org (M.K.H.); mdshahjalal26@ieee.org (M.S.)
2 Department of Electrical and Electronic Engineering, Khulna University of Engineering & Technology
(KUET), Khulna 9203, Bangladesh; selim@eee.kuet.ac.bd
***** Correspondence: yjang@kookmin.ac.kr; Tel.:+82-02-910-5068
Received: 23 August 2020; Accepted: 4 September 2020; Published: 10 September 2020
**Abstract: With the swift evolution of wireless technologies, the demand for the Internet of Things**
(IoT) security is rising immensely. Elliptic curve cryptography (ECC) provides an attractive solution
to fulfill this demand. In recent years, Edwards curves have gained widespread acceptance in
digital signatures and ECC due to their faster group operations and higher resistance against
side-channel attacks (SCAs) than that of the Weierstrass form of elliptic curves. In this paper,
we propose a high-speed, low-area, simple power analysis (SPA)-resistant field-programmable
gate array (FPGA) implementation of ECC processor with unified point addition on a twisted
Edwards curve, namely Edwards25519. Efficient hardware architectures for modular multiplication,
modular inversion, unified point addition, and elliptic curve point multiplication (ECPM) are
proposed. To reduce the computational complexity of ECPM, the ECPM scheme is designed in
projective coordinates instead of affine coordinates. The proposed ECC processor performs 256-bit
point multiplication over a prime field in 198,715 clock cycles and takes 1.9 ms with a throughput
of 134.5 kbps, occupying only 6543 slices on Xilinx Virtex-7 FPGA platform. It supports high-speed
public-key generation using fewer hardware resources without compromising the security level,
which is a challenging requirement for IoT security.
**Keywords: elliptic curve cryptography (ECC); elliptic curve point multiplication (ECPM); twisted**
Edwards curve; unified point addition; simple power analysis (SPA) attacks
**1. Introduction**
The Internet of Things (IoT) refers to a global network, where billions of devices are connected
through the Internet and share data with each other. Since most of these devices have constrained
resources, data are usually stored in the cloud, where people can continuously upload and download
data from anywhere via the Internet [1]. Security concerns arise as data owners have no control over
the data management in the cloud-computing environment. The importance of data security and the
limited resources of IoT devices motivate us to install lightweight cryptographic schemes that can
satisfy the security, low-energy, and low-memory requirements of the existing IoT applications.
Elliptic curve cryptography (ECC), a public-key cryptography (PKC), has become a promising
approach to the IoT security, smart card security, and digital signatures as it provides high levels of
security with smaller key sizes. Compared with traditional Rivest–Shamir–Adleman (RSA) algorithm,
ECC provides an equal level of security but with a shorter key length [2–4]. ECC can be implemented
with low hardware resource usage and low energy consumption without degrading its security
level. Owing to low hardware use, it is well suited for the security of low-power, low-memory,
and resource-constrained IoT devices. ECC implemented in a small chip can provide high-speed
data encryption and decryption facilities. In addition, it prevents unauthorized devices from
gaining access to wireless sensor networks (WSNs) by providing a key agreement protocol for the
wireless sensor nodes connected to the IoT infrastructures in the networks [5–8]. An elliptic curve
cryptosystem would be one of the best candidates to meet the privacy and security challenges that have emerged
in radio-frequency identification (RFID) technologies [9–11]. Presently, ECC-based untraceable
RFID authentication protocols are used in smart healthcare environments to enhance medical data
security [12–14]. Elliptic curve-based digital signature schemes such as elliptic curve digital signature
algorithm (ECDSA) [2] and Edwards curve digital signature algorithm (EdDSA) [15,16] are adopted
in wireless body area networks (WBANs) to fulfill the security requirements for real-time health
data (e.g., blood pressure, heart rate, and pulse) management [17–19]. Modern security protocols
such as transport layer security (TLS) and datagram transport layer security (DTLS) deploy these
signature schemes for the energy efficient mutual authentication of the servers and clients in IoT
platforms [20–22].
An ECC hierarchy is equipped with four consecutive levels as shown in Figure 1. The first level
contains finite field arithmetic, such as addition, subtraction, multiplication, squaring, and inversion,
which can be performed in both the Galois binary field GF(2^n) and the Galois prime field GF(p). The second
level incorporates elliptic curve group operations, such as point addition (PA) and point doubling
(PD). In the third level, elliptic curve point multiplication (ECPM) is accomplished by combining the
elliptic curve group operations in a sequential manner. The top level includes ECC protocols such as
ECDSA and EdDSA. The central and the most time-consuming operation in an elliptic curve-based
cryptographic system is ECPM. The principle of ECPM can be specified as Q = kP, where P is a base
point on an elliptic curve, k is a nonzero positive integer, and Q is another point on the curve [23]. Q and
_k are the public key and the private key, respectively, and P is regarded as the public-key_
generator. Recovering k from the points P and Q is known as the elliptic curve discrete logarithm
problem (ECDLP) [2]; its hardness determines the security strength of the ECPM operation and exposes the
weaknesses of the system. The simplest technique to accomplish ECPM is the binary/double-and-add
(DAA) algorithm [2], which requires fewer hardware resources than other available methods.
Therefore, ECC schemes adopting the DAA-based ECPM are suited for IoT applications because of
their lower hardware resource requirements and lower power consumption. The major disadvantage
of the DAA method is that the DAA-based ECPM is vulnerable to simple power analysis (SPA)
attacks [24,25] unless it uses unified point operations.
**Figure 1. Hierarchy of elliptic curve cryptography.**
Edwards curves, a family of elliptic curves, are gaining enormous attention among security
researchers because of their simplicity and high resistance against SCAs [26]. ECPM on Edwards curves
is faster and more secure than that on the Weierstrass form of elliptic curves [27,28]. Edwards curves
have the advantage of providing strongly unified addition formulas [28], which cover both PA and
PD. Separate hardware architectures for PA and PD are not required to perform ECPM. Moreover,
unified PA prevents probable SPA attacks by making the secret key indistinguishable from power
tracing. When ECPM adopts the same module for PA and PD, the binary bit pattern of the secret key
cannot be retrieved by SPA. The twisted Edwards curves are a generalization of Edwards curves [29],
which are mainly used in the digital signature scheme EdDSA. One of the most compatible twisted
Edwards curves in digital signature systems is Edwards25519, which is the Edwards form of the
elliptic curve Curve25519 [23,30]. In modern times, Edwards25519 curve is used for a high-speed,
high-security digital signature scheme called Ed25519 [15,16]. ECPM using a unified twisted Edwards
curve formula not only provides high resistance against SPA but also reduces the area of ECC processors.
ECC can be accomplished with both hardware and software approaches. Although the software
implementation is simple and cost-effective, it cannot provide high-speed computation as the hardware
implementation can. Indeed, the hardware implementation of ECC with limited resources is a highly
challenging task because low hardware use leads to a lower computational speed. From this point of
view, Edwards curves are more effective than classical elliptic curves, as they can be implemented in a
smaller area with higher processing speed. Most of the hardware implementations of ECC reported
in the literature are based on the Weierstrass form of elliptic curves. Few hardware implementations
based on twisted Edwards curves over GF(p) have been reported. Baldwin et al. [31] first documented
hardware implementation of a reconfigurable 192-bit ECC processor adopting twisted Edwards curve
over GF(p). They provide a comparison between the FPGA implementation of an elliptic curve-based
point multiplication and that of a twisted Edwards curve for different number of arithmetic logic
units (ALUs) operated in parallel, which shows the Edwards curve as more efficient. Additionally,
the twisted Edwards curve point operations are compared with the unified version of these operations.
Although the unified version shows slightly worse performance, it provides a higher resistance
against SPA. Liu et al. [21] present a computable endomorphism on twisted Edwards curves to
boost the speed of ECDSA verification process. They provide area-efficient hardware architecture for
signature verification with its FPGA implementation. Application specific integrated circuit (ASIC)
implementation of the architecture is also provided for low-cost applications. The implementation
results show that the design reduces approximately 50% of the number of PD operations required.
Parallel architectures for ECPM on extended twisted Edwards are proposed by Abdulrahman et al. [32].
The authors present a new radix-8 ECPM algorithm to cope with SCAs and speed up computations.
However, no hardware implementation of these architectures is reported.
In this paper, a lightweight FPGA-based hardware implementation of ECC over GF(p) is proposed
for IoT appliances. The major contributions of this paper are summarized as follows:
- An efficient radix-4 interleaved modular multiplier is proposed to perform 256-bit modular
multiplication over a prime field.
- A novel hardware architecture for strongly unified PA on the Edwards25519 curve is proposed.
- An efficient ECPM scheme is proposed to perform high-speed point multiplication on the
Edwards25519 curve. The same module is used for PA and PD to prevent probable SPA attacks.
The area required by the scheme is significantly lower than other available designs for ECPM.
- ECPM is performed in projective coordinates to avoid the most expensive (in terms of computational
complexity) modular division operation. In addition, a projective-to-affine (P2A) converter is
proposed to transform the projective output into its affine form. This type of transformation reduces
the computation time additionally required for the modular division operation performed in affine
coordinate-based PA.
- An ECC processor is designed by combining the ECPM scheme and the P2A converter in such
a manner as to reduce the number of modular inversion operations required. The area-delay
product of the proposed ECC processor is considerably small that ensures a better performance of
our processor.
The rest of this paper is organized as follows:
Section 2 presents the mathematical background of the twisted Edwards curve and unified PA formula.
Section 3 presents the proposed hardware architectures for field operations (modular multiplication and
modular inversion), unified PA, ECPM, and ECC processor. Section 4 presents the implementation results
of the proposed designs. Section 5 shows a performance comparison of our proposed ECC processor with
other related processors. Finally, Section 6 concludes this research study.
**2. Mathematical Background**
This section presents the twisted Edwards curve with its affine and projective representations as
well as the unified PA formula for the curve.
_2.1. Twisted Edwards Curve_
The affine representation of a twisted Edwards curve over a prime field Fp of characteristic not equal
to 2 is given by the equation [23,29]:

t_{a,d} : ax^2 + y^2 = 1 + dx^2y^2, (1)

where a, d ∈ Fp \ {0} with a ≠ d. When a = 1, the curve is called an untwisted Edwards curve or,
formally, an Edwards curve. In the case of a = −1, the curve becomes

t_d : −x^2 + y^2 = 1 + dx^2y^2. (2)

When a = −1, d = −121665/121666, and p = 2^255 − 19, the curve is called Edwards25519, which is the
Edwards form of the elliptic curve Curve25519 [23].
In a projective or Jacobian coordinate system, each point (x, y) on t_{a,d} is represented by a triplet
(X, Y, Z). The affine point P(x, y) corresponds to the projective point P(X = x, Y = y, Z = 1).
The projective point P(X, Y, Z) corresponds to the affine point P(x = X/Z, y = Y/Z) with Z ≠ 0.
The projective representation of the curve t_{a,d} is given by the equation [23,29]:

T_{a,d} : (aX^2 + Y^2)Z^2 = Z^4 + dX^2Y^2. (3)

The projective form of the curve t_d is given by the equation:

T_d : (−X^2 + Y^2)Z^2 = Z^4 + dX^2Y^2. (4)
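To make the affine-to-projective correspondence concrete, the following minimal Python sketch (ours, not part of the paper) enumerates the affine points of a toy twisted Edwards curve and checks that every lifted triplet (X, Y, Z) = (x, y, 1) satisfies the projective form; the parameters p = 13, a = −1, d = 2 are illustrative stand-ins, not the Edwards25519 constants.

```python
p, a, d = 13, -1, 2  # toy parameters, chosen for illustration only

def on_affine_curve(x, y):
    # Equation (1): a*x^2 + y^2 = 1 + d*x^2*y^2 (mod p)
    return (a * x * x + y * y) % p == (1 + d * x * x * y * y) % p

def on_projective_curve(X, Y, Z):
    # Equation (3): (a*X^2 + Y^2)*Z^2 = Z^4 + d*X^2*Y^2 (mod p)
    return ((a * X * X + Y * Y) * Z * Z) % p == (Z**4 + d * X * X * Y * Y) % p

points = [(x, y) for x in range(p) for y in range(p) if on_affine_curve(x, y)]
# Every affine point (x, y) lifts to the projective triplet (x, y, 1).
assert all(on_projective_curve(x, y, 1) for x, y in points)
print(len(points), "affine points found; all satisfy the projective form (3)")
```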
_2.2. Unified Point-Addition Formula_
PA on the curve T_d in projective coordinates is given by the equation:

P1(X1, Y1, Z1) + P2(X2, Y2, Z2) = P3(X3, Y3, Z3), (5)

where P1 and P2 are two points on the curve and P3 is the resultant point.
The unified PA formula [29] for T_d can be given as follows:

A = X1X2, B = Y1Y2, C = Z1Z2, D = X1Y2,
E = X2Y1, F = AB, G = A + B,
H = C^2, I = D + E, J = dF, K = CG,
L = IC, M = H + J, N = H − J,
X3 = LN, Y3 = MK, Z3 = MN. (6)
The above formula is applicable for both PA and PD. PD can be performed considering that the
points P1 and P2 are identical.
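As an executable illustration, here is a minimal Python sketch (ours, not the paper's hardware description) of formula (6) on a toy curve T_d over p = 13 with an illustrative d = 2 (a nonsquare mod p, so the unified formula applies to every pair of points); it checks that the same routine covers both PA (P1 ≠ P2) and PD (P1 = P2) and that every result satisfies the projective form (4).

```python
p, d = 13, 2  # toy parameters; Edwards25519 uses p = 2^255 - 19

def unified_add(P1, P2):
    # Direct transcription of formula (6); also doubles when P1 == P2.
    X1, Y1, Z1 = P1
    X2, Y2, Z2 = P2
    A = X1 * X2 % p; B = Y1 * Y2 % p; C = Z1 * Z2 % p
    D = X1 * Y2 % p; E = X2 * Y1 % p
    F = A * B % p;   G = (A + B) % p
    H = C * C % p;   I = (D + E) % p
    J = d * F % p;   K = C * G % p;   L = I * C % p
    M = (H + J) % p; N = (H - J) % p
    return (L * N % p, M * K % p, M * N % p)

def on_curve(X, Y, Z):
    # Projective form (4): (-X^2 + Y^2) Z^2 = Z^4 + d X^2 Y^2 (mod p)
    return ((-X * X + Y * Y) * Z * Z) % p == (Z**4 + d * X * X * Y * Y) % p

pts = [(x, y, 1) for x in range(p) for y in range(p)
       if (-x * x + y * y) % p == (1 + d * x * x * y * y) % p]
assert all(on_curve(*unified_add(P, Q)) for P in pts for Q in pts)
print("unified PA is closed on the curve for all", len(pts)**2, "pairs")
```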
**3. Proposed Hardware Architectures**
This Section presents the proposed hardware architectures for ECC operations and the final
ECC processor.
_3.1. Modular Multiplication_
Modular multiplication is the most important arithmetic operation of an ECC processor. The speed
and occupied area of the processor depend heavily on it. Although a radix-2 multiplier consumes
fewer hardware resources than higher-radix (e.g., radix-4 and radix-8) multipliers [33], it is not
suitable for high-speed multiplication because of its high latency. To reduce the latency, an efficient
radix-4 interleaved modular multiplication algorithm is proposed, as demonstrated in Algorithm 1.
It requires n/2 + 1 clock cycles (CCs) to multiply two n-bit integers A and B over the prime field
GF(p), where p is an n-bit prime number. Figure 2 illustrates the proposed modular multiplier based
on this algorithm.
**Algorithm 1 Proposed Radix-4 Interleaved Modular Multiplication**
**Input: A = ∑_{i=0}^{n−1} a_i 2^i, B = ∑_{i=0}^{n−1} b_i 2^i; a_i, b_i ∈ {0, 1}**
**Output: C = (A · B) mod p**
1: C ← 0;
2: T ← B || "01";
3: while T(n − 1 downto 0) ≠ 0 do
4:   D ← 4C;
5:   if T(n + 1 downto n) = "01" then
6:     E ← D + A;
7:   else if T(n + 1 downto n) = "10" then
8:     E ← D + 2A;
9:   else if T(n + 1 downto n) = "11" then
10:    E ← D + 3A;
11:  else
12:    E ← D;
13:  end if;
14:  C ← E mod p;
15:  T ← T(n − 1 downto 0) || "00"; \\ left-shift operation
16: end while;
17: return C;
Modular multiplication is obtained by performing iterative addition of its interim partial products,
reducing modulo p. A shift-left register “Reg T” is used to perform left-to-right bitwise multiplication
and for a synthesizable loop operation. T[(n + 1) : 2] is precomputed as the multiplier B and T[1 : 0]
is precomputed as “01”. These two extra bits are added at the rightmost position of the register T to
determine the appropriate end of the loop in the case of b0 = 0. At the beginning of each iteration,
the accumulator C is quadrupled and computed as D. For the bitwise multiplication, A, 2A, and 3A are
separately added to D. MUX1 is used to select one of the four outputs D, D + A, D + 2A, and D + 3A
as E based on the two bits T[(n + 1) : n]. If T(n + 1) and T(n) are both zero, D remains unchanged and E
becomes D. At the end of each iteration, E is reduced modulo p and T is shifted to the left by 2 bits.
The modulo operation (E mod p) is performed by subtracting the prime multiples p to (j − 1)p from E,
where E is always less than jp (j = 3, 4, 5, ...). In this module, (E mod p) is obtained by subtracting the
multiples p to 6p from E, as E is always less than 7p. These subtractions are executed using the 2’s
complement method. MUX2 selects one of the seven outputs E, E − p, E − 2p, E − 3p, E − 4p, E − 5p,
and E − 6p as C for the next iteration based on the comparisons E ≥ p, E ≥ 2p, E ≥ 3p, E ≥ 4p, E ≥ 5p,
and E ≥ 6p. These comparisons are obtained by checking the three bits E[(n + 1) : (n − 1)]. After n/2
number of iterations, B, as well as T[(n − 1) : 0], is shifted to zero and the execution is stopped.
The final content of the register “Reg C” is the modular multiplication of A and B.
**Figure 2. Proposed modular multiplier.**
A total of n/2 + 1 CCs are required to perform the modular multiplication operation, where n/2
CCs correspond to n/2 number of iterations and one extra CC is required for the initialization.
To perform modular squaring, the inputs A and B are taken as identical.
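For reference, the following Python model (ours; the actual design is in VHDL) mirrors the digit-serial control flow of Algorithm 1 and can be checked against ordinary modular multiplication. The sentinel bits "01" guarantee exactly n/2 loop iterations, matching the n/2 + 1 CC count with one extra cycle for initialization.

```python
import random

def radix4_modmul(a, b, p, n):
    """Software model of Algorithm 1: scan the multiplier two bits per
    iteration, MSB first, with an interim reduction after every digit."""
    mask = (1 << n) - 1
    t = (b << 2) | 0b01               # Reg T = B || "01" (loop sentinel)
    c = 0
    while t & mask:                   # while T(n-1 downto 0) != 0
        digit = (t >> n) & 0b11       # T(n+1 downto n): selects 0, A, 2A, 3A
        c = (4 * c + digit * a) % p   # D = 4C; E = D + digit*A; C = E mod p
        t = (t & mask) << 2           # shift T left by two bits
    return c

p = 2**255 - 19  # the Edwards25519 prime
for _ in range(100):
    a, b = random.randrange(p), random.randrange(p)
    assert radix4_modmul(a, b, p, 256) == (a * b) % p
```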
_3.2. Modular Inversion_
Modular inversion is the costliest (in terms of the hardware resource requirements) arithmetic
operation in finite fields. In affine representations, PA and PD require modular inversion operation
to perform modular division. In this study, although our ECC processor is designed in projective
coordinates, modular inversion is required for P2A conversion. Algorithm 2 [2] demonstrates the
binary modular inversion for the P2A conversion module proposed in this paper. The hardware
architecture of this module is depicted in Figure 3.
**Figure 3. Proposed hardware architecture for modular inversion.**
**Algorithm 2 Binary Modular Inversion [2]**
**Input: B = ∑_{i=0}^{n−1} b_i 2^i; b_i ∈ {0, 1}**
**Output: C = B^{−1} mod p**
1: C ← 0, q ← B, r ← p, s ← 1, t ← 0;
2: while q ≠ 1 do
3:   while q(0) = 0 do
4:     q ← q/2;
5:     if s(0) = 0 then
6:       s ← s/2;
7:     else
8:       s ← (s + p)/2;
9:     end if;
10:  end while;
11:  while r(0) = 0 do
12:    r ← r/2;
13:    if t(0) = 0 then
14:      t ← t/2;
15:    else
16:      t ← (t + p)/2;
17:    end if;
18:  end while;
19:  if q > r then
20:    q ← q − r;
21:    if s > t then
22:      s ← s − t;
23:    else
24:      s ← s + p − t;
25:    end if;
26:  else
27:    r ← r − q;
28:    if t > s then
29:      t ← t − s;
30:    else
31:      t ← t + p − s;
32:    end if;
33:  end if;
34: end while;
35: return s mod p;
The contents of the registers “Reg Q”, “Reg R”, “Reg S”, and “Reg T” are updated in every
iteration. Five multiplexers, MUX1, MUX2, MUX3, MUX4, and MUX5, are used to select the
corresponding outputs, satisfying different conditions through their select lines. In the case of q being
even, MUX1 selects q/2 and MUX3 selects s/2 if s is even or (s + p)/2 if s is odd. In the case of q
being odd and greater than r, MUX1 selects q − r and MUX3 selects s − t if s > t or s + p − t if s < t.
The comparisons q > r and s > t are obtained by checking the sign bits of the subtractions q − r and
s − t, respectively. If q is odd and less than r, q and r both remain unchanged. Similarly, MUX2 selects
one of the three outputs r, r/2, and r − q based on the conditions r(0) = 0 and r > q. MUX4 selects
one of the five outputs t, t/2, (t + p)/2, t − s, and t + p − s based on the conditions r(0) = 0, t(0) = 0,
r > q, and t > s. MUX5 is used to select the final result as (s mod p) if q = 1. In this regard, 2 is
subtracted from q to check whether q < 2 at the end of each iteration. When the sign bit of the subtraction
q − 2 is 1, (s mod p) is stored in the register “Reg C”, which is the modular inversion of B.
In this architecture, on average n + n/4 CCs are required to perform the modular inversion
operation: n iterations reduce the n-bit variable q to 1 in the regular manner, and an additional n/4
iterations, on average, account for the cases in which q is odd. The clock cycles
required for the modular inversion operation may vary from this estimate depending on the binary
bit pattern of B.
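A software model of Algorithm 2 (ours, not the VHDL) can be validated against Fermat's little theorem, since B^(p−2) mod p is also the inverse of B modulo a prime p:

```python
import random

def binary_modinv(b, p):
    """Software model of Algorithm 2; assumes p is an odd prime and
    1 <= b < p. Invariants: b*s = q (mod p) and b*t = r (mod p)."""
    q, r, s, t = b, p, 1, 0
    while q != 1:
        while q % 2 == 0:
            q //= 2
            s = s // 2 if s % 2 == 0 else (s + p) // 2
        while r % 2 == 0:
            r //= 2
            t = t // 2 if t % 2 == 0 else (t + p) // 2
        if q > r:
            q, s = q - r, (s - t if s > t else s + p - t)
        else:
            r, t = r - q, (t - s if t > s else t + p - s)
    return s % p

p = 2**255 - 19
for _ in range(100):
    b = random.randrange(1, p)
    assert binary_modinv(b, p) == pow(b, p - 2, p)
```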
_3.3. Unified Point Addition_
Unified PA is required to perform both PA and PD by the same module so as to prevent possible
SPA attacks in ECPM. The proposed hardware architecture for the unified PA formula described
in (6) is depicted in Figure 4. The architecture includes 12 multiplications, 1 squaring, 3 additions,
and 1 subtraction, which can be denoted as (12M+1S+4A). The proposed design consists of four
consecutive levels, in which the arithmetic modules are connected in a sequential manner. Within each
level, the modules are arranged horizontally in parallel to achieve the shortest data path. The whole
architecture is carefully balanced to reduce the area required. Start signals are used to start the
arithmetic operations and Done signals are used to confirm the end of the operations. The Done
signals of the modules at each level serve as the Start signals of the modules at the
subsequent level. AND blocks are used to synchronize the horizontal modules in time (e.g., if the
Done signals d1, d2, d3, d4, and d5 are all 1, the Start signal s1 will be 1; otherwise, it will be 0).
The modular multiplier and the squarer require n/2 + 1 CCs to perform modular multiplication
and squaring. Modular addition and subtraction are completed in only one CC. The level that
contains any multiplication or squaring operation requires n/2 + 1 CCs and the level that contains no
multiplication or squaring requires one CC to jump to the next level. In this design, a total of 2n + 5
CCs are required to complete the unified PA operation.
**Figure 4. Proposed hardware architecture for unified PA.**
_3.4. Elliptic Curve Point Multiplication_
ECPM is the ultimate operation of an ECC processor. It multiplies a point on an elliptic curve
by a scalar. The execution time of ECC schemes is dominated by ECPM. Let P(X, Y, Z) be a point on
the curve T_d and k be a scalar that serves as the secret key. A public key Q(X, Y, Z) is generated
from the known base point P and the secret key k by performing ECPM as follows:
_Q = kP,_ (7)
where Q is also a point on the curve. It can be obtained by adding P to itself k − 1 times, i.e.,

Q = P + P + ... + P (k − 1 additions). (8)

If k is expressible as a power of 2, Q can be obtained by doubling P repeatedly, log2(k) times:

Q = 2(2(2(...(P)))) (log2(k) doublings). (9)
In the binary/DAA method, ECPM is performed by a combination of PD and PA following the
binary bit pattern of the secret key, as shown in Algorithm 3. In this algorithm, separate modules are
required to perform PA and PD, and the power consumption of the two modules differs.
By monitoring these two power levels with SPA, the bit pattern of k can be retrieved, as shown in Figure 5.
Moreover, k can be inferred by timing analysis; hence, ECPM based on this algorithm is vulnerable to
SPA attacks. To cope with SPA, Algorithm 3 is modified to Algorithm 4, where PD is replaced by unified
PA. According to this algorithm, power is only consumed for PA with a fixed power consumption,
which is independent of the bit pattern of k as shown in Figure 6. Since the power consumption is
the same across all the iterations, this algorithm is free from SPA. Figure 7 illustrates the proposed
hardware architecture for ECPM based on Algorithm 4. Two point-addition blocks PA1, PA2 and
three multiplexers MUX1, MUX2, MUX3 are used in this processor. Initially, Q1 is precomputed as
_P. PA1 adds the point Q1 to itself and the output Q2 goes to the input of PA2. Identical inputs are_
inserted in PA1 to perform PD by means of PA. One of the two inputs of PA2 is the output of PA1 and
the other one is P or 0. If ki = 1, PA2 adds the point P to the point Q2 and the output Q3 goes to the
input of the PA1 via the register Rg. On the contrary, if ki = 0, PA2 remains idle and the output of
PA1 directly goes back to its input via Rg. MUX1 is used to select the ith bit of k via log2(l) select
lines, where l is the bit length of k. Based on ki, MUX2 selects P or 0 as one of the two inputs of PA2;
MUX3 selects Q2 or Q3 as the input Q1 for the subsequent iteration.
**Algorithm 3 DAA ECPM without Unified PA [2]**
**Input: P(X, Y, Z), k = ∑_{i=0}^{l−1} k_i 2^i; k_i ∈ {0, 1}, k_{l−1} = 1**
**Output: Q(X, Y, Z)**
1: Q ← P;
2: for i from l − 2 to 0 do
3:   Q ← 2Q; \\ PD
4:   if k_i = 1 then
5:     Q ← Q + P; \\ PA
6:   end if;
7: end for;
8: return Q;
**Algorithm 4 Proposed Unified PA-based ECPM**
**Input: P(X, Y, Z), k = ∑_{i=0}^{l−1} k_i 2^i; k_i ∈ {0, 1}, k_{l−1} = 1**
**Output: Q(X, Y, Z)**
1: Q1 ← P;
2: for i from l − 2 to 0 do
3:   Q2 ← Q1 + Q1; \\ PA
4:   if k_i = 1 then
5:     Q3 ← Q2 + P; \\ PA
6:     Q1 ← Q3;
7:   else
8:     Q1 ← Q2;
9:   end if;
10: end for;
11: return Q1;
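The control flow of Algorithm 4 can be sketched independently of the curve arithmetic. The Python model below (ours) uses integer addition modulo m as a stand-in group operation so that the example is self-contained and checkable; in the processor, point_add is the unified PA module on Edwards25519.

```python
def unified_daa(point_add, P, k):
    """Left-to-right double-and-add in which doubling is performed by the
    same (unified) addition routine, as in Algorithm 4."""
    bits = bin(k)[2:]              # k_{l-1} = 1; scan i = l-2 down to 0
    Q1 = P
    for ki in bits[1:]:
        Q2 = point_add(Q1, Q1)     # "PD" executed as a unified PA
        Q1 = point_add(Q2, P) if ki == "1" else Q2
    return Q1

m = 101
add = lambda u, v: (u + v) % m     # stand-in for unified PA on the curve
P, k = 7, 0b10110101
assert unified_daa(add, P, k) == (k * P) % m
```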
**Figure 5. Simplistic representation of SPA in conventional DAA ECPM.**
**Figure 6. Simplistic representation of SPA in proposed unified PA-based ECPM.**
**Figure 7. Proposed hardware architecture for ECPM.**
For the l-bit k, the register stores kP as the final result after l − 1 iterations. The average
CCs required to perform the ECPM can be calculated as

ECPM_CC = (l − 1) × (PA1_CC + Rg_CC) + (l/2) × PA2_CC
        = (l − 1) × (2n + 5 + 1) + (l/2) × (2n + 5) (10)
        = 3nl − 2n + 8.5l − 6.

For l = n,

ECPM_CC = 3n^2 + 6.5n − 6. (11)
PA1 and Rg remain active in every iteration, whereas PA2 goes idle in the case of ki = 0. In every
iteration, a total of 2n + 6 CCs are spent by PA1 and Rg. An additional 2n + 5 CCs are spent by PA2 if ki = 1.
On average, l(n + 2.5) CCs are spent by PA2 across the ECPM. For the n-bit k, the latency of the ECPM
is approximately 3n^2 + 6.5n − 6 CCs. This latency may vary depending on the bit pattern of the key;
it increases with the number of 1s and decreases with the number of 0s present in the bit pattern. In this
study, an average case is considered, meaning that the key has an equal number of 1s and 0s in its bit
pattern, although this is not always the case.
_3.5. Proposed ECC Processor_
A time-area-efficient ECC processor is designed for public-key generation using the proposed
projective coordinate-based ECPM along with a P2A converter, as shown in Figure 8. This processor
generates a public key from a private key and a base point on T_d. Initially, the affine base point P(x, y)
is transformed into its projective form such as P(X, Y, Z) by an affine-to-projective (A2P) converter.
The public key Q(X, Y, Z) is obtained by performing ECPM of the projective point P(X, Y, Z) with the
secret key k. Finally, Q(X, Y, Z) is transformed into its affine form such as Q(x, y) by the P2A converter.
For the P2A conversion, Z is inverted by the proposed modular inversion module and separately
multiplied by X and Y. The latency required by the processor to process the ECPM operation along
with the coordinate conversions is 3n^2 + 8.25n − 5 CCs, which is the sum of the latencies of ECPM,
modular inversion, and modular multiplication.
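As a quick arithmetic cross-check (ours), these latency formulas reproduce the cycle counts reported for n = l = 256 in Table 1 of the next section:

```python
n = 256
assert n // 2 + 1 == 129                   # modular multiplication
assert n + n // 4 == 320                   # modular inversion (average)
assert 2 * n + 5 == 517                    # unified PA
assert 3 * n**2 + 6.5 * n - 6 == 198_266   # ECPM, Equation (11)
assert 3 * n**2 + 8.25 * n - 5 == 198_715  # ECPM plus P2A conversion
```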
**Figure 8. Proposed ECC processor for public-key generation.**
**4. Implementation Results**
The proposed ECC processor was programmed in VHDL and implemented using the Xilinx
ISE 14.7 Design Suite software. Xilinx ISim simulator was used to simulate the ECC operations.
The simulation results were verified by the Maple 18 software. Synthesizing, mapping, placing,
and routing of the proposed ECC modules were performed on Xilinx Virtex-7 and Virtex-6 FPGA
platforms, separately. The details of these FPGA platforms and settings are as follows:
- Platform 1: Virtex-7 (XC7VX690T)
- Platform 2: Virtex-6 (XC6VHX380T)
- Design Goal: Balanced
- Design Strategy: Xilinx Default
- Optimization Goal: Speed
- Optimization Effort: Normal
The implementation results of the proposed ECC modules are summarized in Table 1.
On Platform 1, all the modules run at a maximum frequency of 104.39 MHz. The proposed ECC
processor occupies 6543 slices (25,898 LUTs) and generates a public key from a given 256-bit private
key in 1.9 ms with a throughput of 134.5 kbps. On Platform 2, the modules operate at a maximum
frequency of 93.23 MHz. The numbers of slices and LUTs used by the processor are 6579 and 25,968,
respectively, the delay of the public-key generation is 2.13 ms, and the throughput is 120.1 kbps.
**Table 1.** Implementation results of the proposed ECC modules on different FPGA platforms over Fp-256.

| Operation | Platform | CCs | Area (Slices) | Area (LUTs) | Maximum Frequency (MHz) | Time (µs) | Throughput (Mbps) |
|---|---|---|---|---|---|---|---|
| Modular multiplication | Virtex-7 | 129 | 416 | 1451 | 104.39 | 1.24 | 207.2 |
| Modular multiplication | Virtex-6 | 129 | 420 | 1460 | 93.23 | 1.38 | 185 |
| Modular inversion | Virtex-7 | 320 | 1197 | 4155 | 110.65 | 2.89 | 83.5 |
| Modular inversion | Virtex-6 | 320 | 1209 | 4156 | 97.94 | 3.27 | 74.6 |
| Unified PA | Virtex-7 | 517 | 4159 | 15,594 | 104.39 | 4.95 | 51.7 |
| Unified PA | Virtex-6 | 517 | 4292 | 15,593 | 93.23 | 5.55 | 46.2 |
| ECPM (projective) | Virtex-7 | 198,266 | 5457 | 21,194 | 104.39 | 1899 | 134.8 × 10⁻³ |
| ECPM (projective) | Virtex-6 | 198,266 | 5541 | 21,224 | 93.23 | 2126 | 120.4 × 10⁻³ |
| Public-key generation | Virtex-7 | 198,715 | 6543 | 25,898 | 104.39 | 1903 | 134.5 × 10⁻³ |
| Public-key generation | Virtex-6 | 198,715 | 6579 | 25,968 | 93.23 | 2131 | 120.1 × 10⁻³ |
The performance of the ECC modules on the Virtex-6 FPGA platform is slightly worse than on
the Virtex-7 FPGA platform in terms of speed. However, the area use of the different modules on
these platforms is almost the same. It should be noted that no digital signal processing (DSP) slice
is used to implement our processor. Although DSP slices increase processing speed, they increase
the processor’s cost as well.
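These figures are internally consistent: here is a quick sanity check (ours) of the Virtex-7 public-key generation row, using Throughput = (Maximum frequency ÷ CCs) × 256:

```python
f_max, ccs, keylen = 104.39e6, 198_715, 256
print(f"{ccs / f_max * 1e3:.2f} ms")             # ~1.90 ms
print(f"{f_max / ccs * keylen / 1e3:.1f} kbps")  # ~134.5 kbps
```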
**5. Performance Comparison**
Several hardware implementations of ECC have been reported in [34–53], where some authors
aimed to minimize the area use while others tried to reduce the computation time. Achieving a
higher processing speed with low-area use is technically challenging. We tried to maintain a balance
between area and time as they are two important performance criteria of a cryptographic processor.
A performance comparison of our proposed ECC processor with other related designs is presented
in Table 2.
**Table 2.** Performance comparison of the proposed ECC processor with other related designs over Fp-256.

| Design | Platform | Area (Slices) | CCs (×10³) | Frequency (MHz) | Time (ms) | Throughput (kbps) | Area × Time |
|---|---|---|---|---|---|---|---|
| Ours (a) | Virtex-7 | 6.5k | 198.7 | 104.39 | 1.9 | 134.49 ᵃ | 12.35 |
| Ours (b) | Virtex-6 | 6.6k | 198.7 | 93.23 | 2.13 | 120.12 ᵃ | 14.05 |
| [34] | Virtex-7 | 24.2k, 2.8k DSPs | 215.9 | 72.9 | 2.96 | 1816.2 | 71.63 |
| [35] | Kintex-7 | 11.3k | 397.3 | 121.5 | 3.27 | 78.28 | 36.95 |
| [36] | Virtex-6 | 65.6k | 153.2 | 327 | 0.47 | 546.42 ᵃ | 30.83 |
| [37] | Virtex-5 | 8.7k | 361.6 | 160 | 2.26 | 113.27 ᵃ | 19.66 |
| [38] | Virtex-5 | 10.2k | 442.2 | 66.7 | 6.63 | 38.61 ᵃ | 67.63 |
| [39] | Virtex-4 | 12k | 459.9 | 36.5 | 12.6 | 20.32 ᵃ | 151.20 |
| [40] | Virtex-4 | 9.4k, 14 DSPs | 610 | 20.44 | 29.84 | 8.58 ᵃ | 280.50 |
| [41] | Virtex-4 | 35.7k | 207.1 | 70 | 2.96 | 86.53 ᵃ | 105.67 |
| [42] | Virtex-4 | 13.2k | 200 | 40 | 5 | 51 | 66.00 |
| [43] | Virtex-4 | 20.6k | 191.6 | 49 | 3.91 | 65.47 | 80.55 |
| [44] | Virtex-4 | 20.1k | 331.1 | 43 | 7.7 | 33.25 ᵃ | 154.77 |
| [45] | Virtex-4 | 20.8k, 32 DSPs | 414 | 60 | 6.9 | 37.10 ᵃ | 143.52 |
| [46] | Virtex-4 | 7k, 8 DSPs | 993.7 | 182 | 5.46 | 46.88 ᵃ | 38.22 |
| [47] | Spartan-3 | 27.6k | 708 | 40 | 17.7 | 14.46 ᵃ | 488.52 |
| [48] | Virtex-II Pro | 12k | 337.7 | 36 | 9.38 | 27.29 ᵃ | 112.56 |
| [49] | Virtex-II Pro | 8.3k | 163.2 | 37 | 4.41 | 58.04 ᵃ | 36.60 |
| [50] | Virtex-II Pro | 15.8k, 256 DSPs | 151.4 | 39.5 | 3.86 | 66.74 | 60.98 |
| [51] | Virtex-II Pro | 41.6k | 252.1 | 94.7 | 2.66 | 96.17 ᵃ | 110.66 |
| [52] | Virtex-E | 16.4k | 156.8 | 39.7 | 3.95 | 64.82 ᵃ | 64.78 |
| [53] | Virtex-E | 14.2k | 118.3 | 34.7 | 3.41 | 75.09 ᵃ | 48.42 |

_ᵃ Estimated by the authors of this paper as Throughput = (Maximum frequency ÷ CCs) × 256._
The residue number system (RNS)-based ECC design reported in [34] provides a higher
throughput (1816.2 kbps) by performing ECPM on 21 keys in parallel. The conventional DAA method
is adopted for ECPM, where PA and PD are executed by separate modules, carrying a high risk of
SPA attacks. On Virtex-7 FPGA, the design consumes 96,867 LUTs (approx. 24,216 slices) with
2799 additional DSP slices. Although the throughput of this design is higher than that of our
design, it costs 3.7 times more hardware resources. The novelty of this design is that it processes
21 keys simultaneously, which prevents template-based attacks by increasing the computation
complexity. In [35], the authors propose a high-performance ECC processor with its ASIC and
FPGA implementations. A novel hardware architecture for combined PA-PD operation in Jacobian
coordinates is proposed to achieve high-speed ECPM with low hardware use. On Kintex-7 FPGA,
the processor separately designed in affine and Jacobian coordinates performs ECPM in 4.7 ms and
3.27 ms, occupying 9.3k and 11.3k slices, respectively. Our processor implemented on 7-series FPGA
is 1.72 times faster and uses 1.73 times fewer slices compared with this processor designed in
Jacobian coordinates. The throughput of our design is 1.76 times higher. A high-speed ECC processor
is proposed in [36] providing redundant signed digit (RSD)-based carry free modular arithmetic.
The processor performs high-speed ECPM with a higher throughput. However, it occupies 10 times
more slices on Virtex-6 FPGA than our processor. Although RSD representation offers fast computation,
it consumes a vast amount of hardware resources, which makes the processor bulky and hence unsuitable
for low-power IoT devices. The high-speed RSD-based modular multiplier proposed in that paper
performs single multiplication in only 0.34 µs, consuming 22k LUTs. In comparison with this multiplier,
our proposed modular multiplier performs single multiplication in 1.45 µs and consumes only 1.3k
LUTs with almost 4 times better efficiency in terms of area-time (AT) product. The RSD-based ECC
processors reported in [37,38] present comprehensive pipelining technique for Karatsuba–Ofman
multiplication to achieve high throughput. Our processor has smaller AT product compared with
these processors.
Liu et al. [39] propose a hardware-software approach for a flexible dual-field ECC processor with
its ASIC and FPGA implementations. The traditional DAA method for ECPM is replaced by the
double-and-add-always (DAAA) method to protect the processor from SPA attacks. Although the
DAAA method for ECPM provides high resistance against SPA, it increases the computational
complexity and hence reduces the frequency and throughput. In addition, it consumes more power
than the conventional DAA method as PA and PD are performed in every iteration. Our processor
is protected against SPA attacks by implementing the cost-effective DAA algorithm with unified
PA. When compared to our processor, the main advantage of this processor is that it is flexible and
reconfigurable over different field orders. In addition, it can perform ECPM over both GF(2^n) and
GF(p), whereas our processor performs ECPM over GF(p) only.
Hu et al. [40] propose an SPA-resistant ECC design over GF(p), providing its ASIC and FPGA
implementations. The design uses 9370 slices with 14 additional DSP slices on Virtex-4 FPGA.
Despite employing additional DSP slices, the speed of this design is considerably low. It takes 29.84 ms
with a frequency of 20.44 MHz to perform single ECPM over a 256-bit prime field. The advantage
of this design that makes it well suited for embedded applications is its reconfigurable computing
capability. A low latency ECPM design is proposed in [41] exploiting parallel multiplication over
GF(p). Protection against timing and SPA attacks is provided by using the DAAA method for
ECPM. The latency of this design is 3n^2 + 37n + 4n CCs, whereas the latency of our design is
3n^2 + 8.25n − 5 CCs. Therefore, the computational complexity of ECPM in this design is higher
than that in our design. The radix-4 parallel interleaved modular multiplier proposed in that paper
performs multiplication in 0.79 µs, consuming 6.3k LUTs. Four multiplier units are operated in parallel
to speed up the multiplication process. The main feature of this design is its capability to perform
ECPM over GF(p) with any arbitrary value of p less than or equal to 256 bits in size.
The design reported in [42] exploits the Montgomery ladder algorithm for SPA-resistant ECPM.
Although the Montgomery ladder algorithm offers lower-latency ECPM and higher resistance against
SPA than the general DAA method [23], it involves around 50% additional PA operations, which results
in a higher power consumption. Hence, the DAA method is more efficient than the Montgomery
ladder technique in terms of energy consumption. The advantage of this design is that it supports any
prime number p ≤ 256-bit. In [43], the authors present a high-performance hardware design for ECPM
adopting non-adjacent form (NAF) method. Although NAF method has the advantage of reducing the
latency of ECPM, the computational complexity and its vulnerability to SCAs are high in this method.
Moreover, additional point subtraction operation is required for NAF scalar multiplication. Like the
designs reported in [40,41], this design is programmable for any prime p ≤ 256-bit. Parallel crypto
design is proposed in [44] using the DAAA method to perform SCA-resistant ECPM over different
field orders. The design is represented in affine coordinates, where PA and PD require modular
division operations. Modular division is the most time-consuming arithmetic operation in finite
fields. Therefore, this design is not convenient for high-speed computation. However, it provides high
resistance against timing and SPA attacks by parallel computation of PA and PD.
Ananyi et al. [45] propose a flexible hardware ECC processor that supports five National Institute
of Standard and Technology (NIST) recommended prime curves. They provide a comparison between
the binary and NAF ECPM over all five NIST prime fields such as p192, p224, p256, p384, and p521,
where the NAF ECPM is found to be more time-efficient. Their processor consumes 20,793 slices
(31,946 LUTs) with 32 additional DSP blocks on Virtex-4 FPGA and performs the binary ECPM in
6.9 ms and the NAF ECPM in 6.1 ms over p256. The modular inverter designed in this paper operates
at a frequency of 58.6 MHz costing 10,921 slices with 32 DSP blocks, whereas our modular inverter
implemented on Virtex-7 FPGA runs at 110.65 MHz consuming 1197 slices without any DSP block.
A scalable ECC processor developed by Loi et al. [46] can perform ECPM on five NIST suggested
prime curves such as P-192, P-224, P-256, P-384, and P-521 without hardware reconfiguring. On Virtex-4
FPGA, this processor performs ECPM in 5.46 ms, occupying 7020 slices along with 8 additional DSP
slices. Despite using DSP slices, the computational speeds of the processors reported in [45,46] are
low. The main significance of these processors is that they are flexible over the five NIST prime fields
and hence they can be programmed to perform ECPM for variable prime numbers ranging from
192 to 521 bits in size without being architecturally reconfigured. The processors reported in [47–53]
are implemented on older FPGA platforms, which are now obsolete.
Performance comparison in terms of AT product is shown in Figure 9. The AT product of our
design is lower than that of the other designs tabulated in Table 2. Figure 10 shows performance
comparison in terms of throughput per slice. The per slice throughput of our design is higher than that
of the other designs except [34]. The RNS-based design reported in [34] provides a higher throughput
by performing ECPM on 21 keys concurrently. Our processor’s low value of AT product and high value
of throughput ensure a better performance in IoT platforms. However, a fair comparison is not possible
because the compared processors are implemented on different FPGA platforms. Our proposed ECC
processor is implemented only on the Virtex-7 and Virtex-6 FPGAs because the number of input/output
blocks (IOBs) is limited in earlier FPGAs. Furthermore, the earlier FPGAs such as Virtex-II-Pro, Virtex-4,
and Virtex-5 are not compatible with low-power devices because of their high power consumption.
**Figure 9. Performance comparison in terms of AT product** (bar chart over the designs listed in Table 2).
**Figure 10. Performance comparison in terms of throughput per slice** (bar chart over the designs listed in Table 2, grouped by FPGA platform: Spartan-3, Virtex-E, Virtex-II-Pro, Virtex-4, Virtex-5, Virtex-6, Virtex-7, Kintex-7).
**6. Conclusions**
In this paper, a high-performance ECC processor has been proposed exploiting unified PA on
Edwards25519 curve to perform SPA-resistant point multiplication. An efficient ECPM module has
been designed in projective coordinates, which supports 256-bit point multiplication over a prime field.
Unified PA is adopted for the ECPM module to provide strong protection against SPA attacks and
reduce the area required by an additional PD module. To perform high-speed modular multiplication,
an efficient radix-4 interleaved modular multiplier has been proposed. The proposed ECC processor
performs fast point multiplication with considerably lower area use, providing high resistance against
SPA. Because of its low hardware resource requirements and high computation speed, it is well suited
for resource-constrained IoT devices. Since it provides a faster ECPM, a rising demand of elliptic
curve-based digital signature schemes, it could be employed in Bitcoin-like cryptocurrencies for
high-speed digital signature generation and verification, which would reduce latency in transaction
confirmation. Based on the overall performance analyses, it can be concluded that the proposed ECC
processor could be a good choice for IoT security as well as the emerging technology “Blockchain”.
**Author Contributions: All the authors contributed to this paper. Specifically, M.M.I. and M.S.H. proposed the**
idea and programmed the design; M.M.I. analyzed and verified the implementation results and wrote the paper;
M.K.H. and M.S. reviewed and edited the paper; and Y.M.J. supervised the work and provided funding support.
All authors have read and agreed to the published version of the manuscript.
**Funding: This research was supported by the Ministry of Science and ICT (MSIT), Korea, under the Information**
Technology Research Center (ITRC) support program (IITP-2018-0-01396) supervised by the Institute for
Information & communication Technology Promotion (IITP).
**Acknowledgments: As the first author of this paper, I would like to thank my beloved mother and brother who**
always supported me in every success and failure in my life.
**Conflicts of Interest: This manuscript has not been published elsewhere and is not under consideration by**
another journal. We have approved the manuscript and agree with its submission to Sensors. There are no conflicts of
interest to declare.
**References**
1. Ding, S.; Li, C.; Li, H. A novel efficient pairing-free CP-ABE based on elliptic curve cryptography for IoT.
_[IEEE Access 2018, 6, 27336–27345. [CrossRef]](http://dx.doi.org/10.1109/ACCESS.2018.2836350)_
2. Hankerson, D.; Menezes, A.; Vanstone, S. Guide to Elliptic Curve Cryptography; Springer: New York, NY,
USA, 2004.
3. ElGamal, T. A public key cryptosystem and a signature scheme based on discrete logarithms. IEEE Trans.
_[Inf. Theory 1985, 31, 469–472. [CrossRef]](http://dx.doi.org/10.1109/TIT.1985.1057074)_
4. Rivest, R.L.; Shamir, A.; Adleman, L. A method for obtaining digital signatures and public-key cryptosystems.
_[Commun. ACM 1978, 21, 120–126. [CrossRef]](http://dx.doi.org/10.1145/359340.359342)_
5. [Diffie, W.; Hellman, M. New directions in cryptography. IEEE Trans. Inf. Theory 1976, 22, 644–654. [CrossRef]](http://dx.doi.org/10.1109/TIT.1976.1055638)
6. Liu, Z.; Huang, X.; Hu, Z.; Khan, M.K.; Seo, H.; Zhou, L. On emerging family of elliptic curves to secure
[internet of things: ECC comes of age. IEEE Trans. Dependable Secur. Comput. 2017, 14, 237–248. [CrossRef]](http://dx.doi.org/10.1109/TDSC.2016.2577022)
7. Challa, S.; Wazid, M.; Das, A.K.; Kumar, N.; Reddy, A.G.; Yoon, E.-J.; Yoo, K.-Y. Secure signature-based
authenticated key establishment scheme for future IoT applications. _IEEE Access 2017, 5, 3028–3043._
[[CrossRef]](http://dx.doi.org/10.1109/ACCESS.2017.2676119)
8. Lara-Nino, C.A.; Diaz-Perez, A.; Morales-Sandoval, M. Elliptic curve lightweight cryptography: A survey.
_[IEEE Access 2018, 6, 72514–72550. [CrossRef]](http://dx.doi.org/10.1109/ACCESS.2018.2881444)_
9. Lee, Y.; Sakiyama, K.; Batina, L. Elliptic-curve-based security processor for RFID. IEEE Trans. Comput. 2008,
_[57, 1514–1527. [CrossRef]](http://dx.doi.org/10.1109/TC.2008.148)_
10. Liao, Y.; Hsiao, C. A secure ECC-based RFID authentication scheme integrated with ID-verifier transfer
[protocol. Ad Hoc Netw. 2014, 18, 133–146. [CrossRef]](http://dx.doi.org/10.1016/j.adhoc.2013.02.004)
11. Chou, J. An efficient mutual authentication RFID scheme based on elliptic curve cryptography. J. Supercomput.
**[2014, 70, 75–94. [CrossRef]](http://dx.doi.org/10.1007/s11227-013-1073-x)**
12. Zhang, Z.; Qi, Q. An efficient RFID authentication protocol to enhance patient medication safety using
[elliptic curve cryptography. J. Med. Syst. 2014, 38, 5. [CrossRef] [PubMed]](http://dx.doi.org/10.1007/s10916-014-0047-8)
13. Zhao, Z. A secure RFID authentication protocol for healthcare environments using elliptic curve
[cryptosystem. J. Med. Syst. 2014, 38, 5. [CrossRef] [PubMed]](http://dx.doi.org/10.1007/s10916-014-0046-9)
14. He, D.; Zeadally, S. An analysis of RFID authentication schemes for internet of things in healthcare
[environment using elliptic curve cryptography. IEEE Internet Things J. 2015, 2, 72–83. [CrossRef]](http://dx.doi.org/10.1109/JIOT.2014.2360121)
15. Bernstein, D.J.; Duif, N.; Lange, T.; Schwabe, P.; Yang, B.Y. High-speed high-security signatures.
_[J. Cryptogr. Eng. 2012, 2, 77–89. [CrossRef]](http://dx.doi.org/10.1007/s13389-012-0027-1)_
16. Liusvaara, I.; Josefsson, S. Edwards Curve Digital Signature Algorithm (EdDSA). In Internet-Draft:
[Draft-irtf-cfrg-eddsa-05, Internet Engineering Task Force, 2017. Available online: https://tools.ietf.org/](https://tools.ietf.org/html/rfc8032)
[html/rfc8032 (accessed on 1 January 2017).](https://tools.ietf.org/html/rfc8032)
17. Liu, J.; Zhang, Z.; Chen, X.; Kwak, K. Certificateless remote anonymous authentication schemes for wireless
[body area networks. IEEE Trans. Parallel Distrib. Syst. 2014, 25, 332–342. [CrossRef]](http://dx.doi.org/10.1109/TPDS.2013.145)
18. He, D.; Zeadally, S.; Kumar, N.; Lee, J.-H. Anonymous authentication for wireless body area networks with
[provable security. IEEE Syst. J. 2017, 11, 2590–2601. [CrossRef]](http://dx.doi.org/10.1109/JSYST.2016.2544805)
19. Saeed, M.E.S.; Liu, Q.-Y.; Tian, G.; Gao, B.; Li, F. Remote authentication schemes for wireless body area
[networks based on the Internet of Things. IEEE Internet Things J. 2018, 5, 4926–4944. [CrossRef]](http://dx.doi.org/10.1109/JIOT.2018.2876133)
20. Keoh, S.L.; Kumar, S.S.; Tschofenig, H. Securing the Internet of Things: A standardization perspective.
_[IEEE Internet Things J. 2014, 1, 265–275. [CrossRef]](http://dx.doi.org/10.1109/JIOT.2014.2323395)_
21. Liu, Z.; Grosschadl, J.; Hu, Z.; Jarvinen, K.; Wang, H.; Verbauwhede, I. Elliptic curve cryptography
with efficiently computable endomorphisms and its hardware implementations for the Internet of Things.
_[IEEE Trans. Comput. 2017, 66, 773–785. [CrossRef]](http://dx.doi.org/10.1109/TC.2016.2623609)_
22. Banerjee, U.; Wright, A.; Juvekar, C.; Waller, M.; Arvind, A.; Chandrakasan, A.P. An Energy-Efficient
Reconfigurable DTLS Cryptographic Engine for Securing Internet-of-Things Applications. IEEE J. Comput.
**[2019, 54, 2339–2352. [CrossRef]](http://dx.doi.org/10.1109/JSSC.2019.2915203)**
23. Islam, M.M.; Hossain, M.S.; Hasan, M.K.; Shahjalal, M.; Jang, Y.M. FPGA implementation of high-speed
area-efficient processor for elliptic curve point multiplication over prime field. _IEEE Access 2019, 7,_
[178811–178826. [CrossRef]](http://dx.doi.org/10.1109/ACCESS.2019.2958491)
24. Brier, E.; Joye, M. Weierstraß elliptic curves and side-channel attacks. In Public Key Cryptography (LNCS);
Springer: Heidelberg, Germany, 2002; Volume 2274, pp. 335–345.
25. Joye, M. Elliptic curves and side-channel analysis. ST J. Syst. Res. 2003, 4, 17–21.
26. [Edwards, H.M. A normal form for elliptic curves. Bull. Am. Math. Soc. 2007, 44, 393–422. [CrossRef]](http://dx.doi.org/10.1090/S0273-0979-07-01153-6)
27. Bernstein, D.J.; Lange, T. Faster addition and doubling on elliptic curves. In Proceedings of the Advances in
_Cryptology (LNCS); Springer: Heidelberg, Germany, 2007; Volume 4833, pp. 29–50._
28. Hisil, H.; Wong, K.K.H.; Carter G.; Dawson, E. Twisted edwards curves revisited. In Proceedings of the
_Advances in Cryptology (LNCS); Springer: Heidelberg, Germany, 2008; Volume 5350, pp. 326–343._
29. Bernstein, D.J.; Birkner, P.; Lange, T.; Peters, C. Twisted edwards curves. In Proceedings of the Advances in
_Cryptology (LNCS); Springer: Heidelberg, Germany, 2008; Volume 5023, pp. 389–405._
30. Bernstein, D.J. Curve25519: New Diffie-Hellman speed records. In Proceedings of the Public Key Cryptography
_(LNCS); Springer: Heidelberg, Germany, 2006; Volume 3958, pp. 207–228._
31. Baldwin, B.; Moloney, R.; Byrne, A.; McGuire, G.; Marnane, W.P. A hardware analysis of twisted Edwards
curves for an elliptic curve cryptosystem. In Proceedings of the Reconfigurable Computing: Architectures Tools
_and Applications (LNCS); Springer: Heidelberg, Germany, 2009; Volume 5453, pp. 355–361._
32. Abdulrahman, E.A.H.; Masoleh, A.R. New regular radix-8 scheme for elliptic curve scalar multiplication
[without pre-computation. IEEE Trans. Comput. 2015, 64, 438–451. [CrossRef]](http://dx.doi.org/10.1109/TC.2013.213)
33. Islam, M.M.; Hossain, M.S.; Shahjalal, M.; Hasan, M.K.; Jang, Y.M. Area-time efficient hardware
implementation of modular multiplication for elliptic curve cryptography. IEEE Access 2020, 8, 73898–73906.
[[CrossRef]](http://dx.doi.org/10.1109/ACCESS.2020.2988379)
34. Asif, S.; Hossain, M.S.; Kong, Y. High-throughput multi-key elliptic curve cryptosystem based on residue
[number system. IET Comput. Digit. Tech. 2017, 11, 165–172. [CrossRef]](http://dx.doi.org/10.1049/iet-cdt.2016.0141)
35. Hossain, M.S.; Kong, Y.; Saeedi, E.; Vayalil, N. High-performance elliptic curve cryptography processor over
[NIST prime fields. IET Comput. Digit. Tech. 2016, 11, 33–42. [CrossRef]](http://dx.doi.org/10.1049/iet-cdt.2016.0033)
36. Shah, Y.A.; Javeed, K.; Azmat, S.; Wang, X. Redundant signed digit based high-speed elliptic curve
[cryptographic processor. J. Circuits Syst. Comput. 2018, 28, 1950081. [CrossRef]](http://dx.doi.org/10.1142/S0218126619500816)
37. Marzouqi, H. ; Al-Qutayri, M.; Salah, K.; Schinianakis, D.; Stouraitis, T. A high-speed FPGA implementation
[of an RSD-based ECC processor. IEEE Trans. Very Large Scale Integr. (VLSI) Syst. 2016, 24, 151–164. [CrossRef]](http://dx.doi.org/10.1109/TVLSI.2015.2391274)
38. Marzouqi, H.; Al-Qutayri, M.; Salah, K. An FPGA implementation of NIST 256 prime field ECC processor.
In Proceedings of the IEEE International Conference on Electronics, Circuits, and Systems (ICECS),
Abu Dhabi, UAE, 8–11 December 2013; pp. 493–496.
39. Liu, Z.; Liu, D.; Zou, X. An efficient and flexible hardware implementation of the dual-field elliptic curve
[cryptographic processor. IEEE Trans. Ind. Electron. 2017, 64, 2353–2362. [CrossRef]](http://dx.doi.org/10.1109/TIE.2016.2625241)
40. Hu, X.; Zheng, X.; Zhang, S.; Cai, S.; Xiong, X. A low hardware consumption elliptic curve cryptographic
[architecture over GF(p) in embedded application. Electronics 2018, 7, 104. [CrossRef]](http://dx.doi.org/10.3390/electronics7070104)
41. Javeed, K.; Wang, X. Low latency flexible FPGA implementation of point multiplication on elliptic curves
[over GF(p). Int. J. Circuit Theory Appl. 2016, 45, 214–228. [CrossRef]](http://dx.doi.org/10.1002/cta.2295)
42. Javeed, K.; Wang, X. FPGA based high-speed SPA-resistant elliptic curve scalar multiplier architecture. Int. J.
_[Reconfigurable Comput. 2016, 2016, 1–10. [CrossRef]](http://dx.doi.org/10.1155/2016/6371403)_
43. Javeed, K.; Wang, X.; Scott, M. High performance hardware support for elliptic curve cryptography over
[general prime field. Microprocess. Microsyst. 2017, 51, 331–342. [CrossRef]](http://dx.doi.org/10.1016/j.micpro.2016.12.005)
44. Ghosh, S.; Alam, M.; Chowdhury, D.R.; Gupta, I.S. Parallel crypto-devices for GF(p) elliptic curve
[multiplication resistant against side-channel attacks. Comput. Electr. Eng. 2009, 35, 329–338. [CrossRef]](http://dx.doi.org/10.1016/j.compeleceng.2008.06.009)
45. Ananyi, K.; Alrimeih, H.; Rakhmatov, D. Flexible hardware processor for elliptic curve cryptography over
[NIST prime fields. IEEE Trans. Very Large Scale Integr. (VLSI) Syst. 2009, 17, 1099–1112. [CrossRef]](http://dx.doi.org/10.1109/TVLSI.2009.2019415)
46. Loi, K.C.C.; Ko, S.B. Scalable elliptic curve cryptosystem FPGA processor for NIST prime curves. IEEE Trans.
_[Very Large Scale Integr. (VLSI) Syst. 2015, 23, 2753–27. [CrossRef]](http://dx.doi.org/10.1109/TVLSI.2014.2375640)_
47. Sakiyama, K.; Mentas, N.; Batina, L.; Preneel, B.; Verbauwhede, I. Reconfigurable modular arithmetic
logic unit for high-performance public-key cryptosystems. In Proceedings of the Reconfigurable Computing:
_Architectures and Applications (LNCS); Springer: Heidelberg, Germany, 2006; Volume 3985, pp. 347–357._
48. Ghosh, S.; Mukhopadhyay, D.; Roychowdhury, D. Petrel: Power and timing attack resistant elliptic curve
scalar multiplier based on programmable GF(p) arithmetic unit. IEEE Trans. Circuits Syst. I-Regul. Pap. 2011,
_[58, 1798–1812. [CrossRef]](http://dx.doi.org/10.1109/TCSI.2010.2103190)_
49. Lee, J.-W.; Chung, S.C.; Chang, H.C.; Lee, C.Y. Efficient power-analysis-resistant dual-field elliptic curve
cryptographic processor using heterogeneous dual-processing-element architecture. IEEE Trans. Very Large
_[Scale Integr. (VLSI) Syst. 2014, 22, 49–61. [CrossRef]](http://dx.doi.org/10.1109/TVLSI.2013.2237930)_
50. Mcivor, C.J.; Mcloone, M.; Mccanny, J.V. Hardware elliptic curve cryptographic processor over GF(p).
_[IEEE Trans. Circuits Syst. I-Fundam. Theor. Appl. 2006, 53, 1946–1957. [CrossRef]](http://dx.doi.org/10.1109/TCSI.2006.880184)_
-----
_Sensors 2020, 20, 5148_ 19 of 19
51. Lai, J.Y.; Huang, C.-T. High-throughput cost-effective dual-field processors and the design framework for
elliptic curve cryptography. IEEE Trans. Very Large Scale Integr. (VLSI) Syst. 2008, 16, 1567–1580.
52. Schinianakis, D.M.; Fournaris, A.P.L.; Michail, H.E.; Kakarountas, A.P.; Stouraitis, T. An RNS implementation
[of an Fp elliptic curve point multiplier. IEEE Tran. Circuits Syst. I-Regul. Pap. 2009, 56, 1202–1213. [CrossRef]](http://dx.doi.org/10.1109/TCSI.2008.2008507)
53. Esmaeildoust, M.; Schinianakis, D.; Javashi, H.; Stouraitis, T.; Navi, K. Efficient RNS implementation of
elliptic curve point multiplication over GF(p). IEEE Trans. Very Large Scale Integr. (VLSI) Syst. 2013, 21,
[1545–1549. [CrossRef]](http://dx.doi.org/10.1109/TVLSI.2012.2210916)
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
[(CC BY) license (http://creativecommons.org/licenses/by/4.0/).](http://creativecommons.org/licenses/by/4.0/.)
-----
|
{
"disclaimer": "Notice: Paper or abstract available at https://pmc.ncbi.nlm.nih.gov/articles/PMC7571177, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "GOLD",
"url": "https://www.mdpi.com/1424-8220/20/18/5148/pdf?version=1599703165"
}
| 2020
|
[
"JournalArticle"
] | true
| 2020-09-01T00:00:00
|
[
{
"paperId": "716702e14dbe6b4d6e09af07061cb9c33fc6f847",
"title": "A High-Speed FPGA Implementation of an RSD-Based ECC Processor"
},
{
"paperId": "db2de3062b4b056f9b881b20abd60f2e8b34d171",
"title": "FPGA Implementation of High-Speed Area-Efficient Processor for Elliptic Curve Point Multiplication Over Prime Field"
},
{
"paperId": "83b1a49c0e8dbe9dd062e65077761fd02f4a13be",
"title": "An Energy-Efficient Reconfigurable DTLS Cryptographic Engine for Securing Internet-of-Things Applications"
},
{
"paperId": "f6b33f8c49227a6e71c6f6688d3c0172901054bc",
"title": "Redundant-Signed-Digit-Based High Speed Elliptic Curve Cryptographic Processor"
},
{
"paperId": "4adc45abf706cb05da2a596cba8dc79e26b48285",
"title": "Elliptic Curve Lightweight Cryptography: A Survey"
},
{
"paperId": "9e480c26aec4d5ede4cc50abd0ceeade28b8e7ae",
"title": "Remote Authentication Schemes for Wireless Body Area Networks Based on the Internet of Things"
},
{
"paperId": "c0a0830b27e1fbf48690176cd8c86ca30a1394ec",
"title": "A Low Hardware Consumption Elliptic Curve Cryptographic Architecture over GF(p) in Embedded Application"
},
{
"paperId": "b15b766569553f9cf55eb2af7c54cc7d20763682",
"title": "High-throughput multi-key elliptic curve cryptosystem based on residue number system"
},
{
"paperId": "904826e8b0f0ad6abe046ebbee57e4c22b9487fd",
"title": "High performance hardware support for elliptic curve cryptography over general prime field"
},
{
"paperId": "b9ea0de9f99b0e7c61403fd6591ad08d388da0e5",
"title": "On Emerging Family of Elliptic Curves to Secure Internet of Things: ECC Comes of Age"
},
{
"paperId": "8a063e91e3e563a63c2df23910860cf33e8bf8b6",
"title": "Elliptic Curve Cryptography with Efficiently Computable Endomorphisms and Its Hardware Implementations for the Internet of Things"
},
{
"paperId": "e5ed11fcb1b41d952d7e099b60c3d0bc9a90c2c7",
"title": "An Efficient and Flexible Hardware Implementation of the Dual-Field Elliptic Curve Cryptographic Processor"
},
{
"paperId": "e53d92bd1f2ddf080fd6088b32703225c04ff64f",
"title": "Secure Signature-Based Authenticated Key Establishment Scheme for Future IoT Applications"
},
{
"paperId": "f32d4448498e05aac45c0d2dcdd23db80e61cc42",
"title": "Low latency flexible FPGA implementation of point multiplication on elliptic curves over GF(p)"
},
{
"paperId": "c12a1fc6572b5cf2352bad74b7b6894a6aa00820",
"title": "FPGA Based High Speed SPA Resistant Elliptic Curve Scalar Multiplier Architecture"
},
{
"paperId": "96578f7284c3d1898b9d822f8c7b82509ca39616",
"title": "An Analysis of RFID Authentication Schemes for Internet of Things in Healthcare Environment Using Elliptic Curve Cryptography"
},
{
"paperId": "55ba62233d96b80e72f237858b1afa91e3827fe9",
"title": "New Regular Radix-8 Scheme for Elliptic Curve Scalar Multiplication without Pre-Computation"
},
{
"paperId": "76d39926a82e57d4ad5596027245fdd1ea186414",
"title": "Scalable Elliptic Curve Cryptosystem FPGA Processor for NIST Prime Curves"
},
{
"paperId": "a1c9b7885bdf5b2ba4f569982207ccf361d7765b",
"title": "A secure ECC-based RFID authentication scheme integrated with ID-verifier transfer protocol"
},
{
"paperId": "c00c9d300d41246574e586176ec7a7b7224a4d1a",
"title": "Securing the Internet of Things: A Standardization Perspective"
},
{
"paperId": "28121dee3de03f92d93ca4e6a5fb016e2de0bfb0",
"title": "A Secure RFID Authentication Protocol for Healthcare Environments Using Elliptic Curve Cryptosystem"
},
{
"paperId": "363f8798b2eb393e393404e53a2c3311246e11b3",
"title": "An Efficient RFID Authentication Protocol to Enhance Patient Medication Safety Using Elliptic Curve Cryptography"
},
{
"paperId": "591cc520dedf349550a274083797fd4435ba5a0f",
"title": "Certificateless Remote Anonymous Authentication Schemes for WirelessBody Area Networks"
},
{
"paperId": "84b05e48ef50bab129b9170f5906616a8fcec917",
"title": "An efficient mutual authentication RFID scheme based on elliptic curve cryptography"
},
{
"paperId": "47a42a726dc33098e1b44b220a9e5ca50913dc01",
"title": "An FPGA implementation of NIST 256 prime field ECC processor"
},
{
"paperId": "b9ae6599edb62a858baaa0674e741509132245f3",
"title": "Efficient RNS Implementation of Elliptic Curve Point Multiplication Over ${\\rm GF}(p)$"
},
{
"paperId": "218b4099a98a7c015ff5d9ad15af61de53be8fd6",
"title": "High-speed high-security signatures"
},
{
"paperId": "45eda384030511576961120aeb85d393fbce15c8",
"title": "Petrel: Power and Timing Attack Resistant Elliptic Curve Scalar Multiplier Based on Programmable ${\\rm GF}(p)$ Arithmetic Unit"
},
{
"paperId": "22204f2c9331275fdca046a6ca071580bdaa237b",
"title": "Flexible Hardware Processor for Elliptic Curve Cryptography Over NIST Prime Fields"
},
{
"paperId": "7e9c64b2ef9b7ea35400cd1c7320475a612f9a0c",
"title": "An RNS Implementation of an $F_{p}$ Elliptic Curve Point Multiplier"
},
{
"paperId": "b150b027371bca7e821aaa3abdfe23ff62237022",
"title": "A Hardware Analysis of Twisted Edwards Curves for an Elliptic Curve Cryptosystem"
},
{
"paperId": "e14584885c368290bf057676d28323c94cda1421",
"title": "Parallel crypto-devices for GF(p) elliptic curve multiplication resistant against side channel attacks"
},
{
"paperId": "3866633f00639437b8c776659b46f7884a8aa0d0",
"title": "Twisted Edwards Curves Revisited"
},
{
"paperId": "a7697f50a24574d741deae1cf733fe8664db4e4e",
"title": "Elliptic-Curve-Based Security Processor for RFID"
},
{
"paperId": "4dec5540a6d58d5ed694ba8820fd498d98839e0b",
"title": "Elixir: High-Throughput Cost-Effective Dual-Field Processors and the Design Framework for Elliptic Curve Cryptography"
},
{
"paperId": "80b388f07313e609a0fbd7dadfbadc69ae3b653e",
"title": "Protection"
},
{
"paperId": "6bd787497fb26ba4633f1e0ffc06d9b814234e6a",
"title": "Twisted Edwards Curves"
},
{
"paperId": "7d366a6f74f2046ebddaf9baf664f2715dad92f9",
"title": "Faster Addition and Doubling on Elliptic Curves"
},
{
"paperId": "93471564302ad9ac9fbe64e345e9888b50e096a5",
"title": "A normal form for elliptic curves"
},
{
"paperId": "42938ecab8f3d7c5448098c016b99fa43d69502e",
"title": "Hardware Elliptic Curve Cryptographic Processor Over$rm GF(p)$"
},
{
"paperId": "48febafd4e799a576235e75e6d1a9049426f5235",
"title": "Curve25519: New Diffie-Hellman Speed Records"
},
{
"paperId": "eb7d79719f621bbb9fff85b08472de18a9ccf090",
"title": "Reconfigurable Modular Arithmetic Logic Unit for High-Performance Public-Key Cryptosystems"
},
{
"paperId": "aba3474052dff485ca23c7262b078d110570b065",
"title": "Weierstraß Elliptic Curves and Side-Channel Attacks"
},
{
"paperId": "bc533d2f27381d81d8e0cd3f445c54556e938816",
"title": "The State of Elliptic Curve Cryptography"
},
{
"paperId": "ba624ccbb66c93f57a811695ef377419484243e0",
"title": "New Directions in Cryptography"
},
{
"paperId": "48518658d5120d73518baabe74c549b1c50447b7",
"title": "Area-Time Efficient Hardware Implementation of Modular Multiplication for Elliptic Curve Cryptography"
},
{
"paperId": "61cc545915203331c8bb67b603b3d20bc1dfefee",
"title": "Cryptanalysis and Improvement of Anonymous Authentication for Wireless Body Area Networks with Provable Security"
},
{
"paperId": "dab5f9d4105074d2d587e40c6919715f8449a2d1",
"title": "A Novel Efficient Pairing-Free CP-ABE Based on Elliptic Curve Cryptography for IoT"
},
{
"paperId": "1d152337a4b3126bf22d1648e64deea3481acde2",
"title": "Edwards-Curve Digital Signature Algorithm (EdDSA)"
},
{
"paperId": "f3558c00bcbdddbcb7967b4318247ac2ab34d1a6",
"title": "High-performance elliptic curve cryptography processor over NIST prime fields"
},
{
"paperId": "ae4ac89d4a347e5830ab4078f99f96d414b3947b",
"title": "Efficient Power-Analysis-Resistant Dual-Field Elliptic Curve Cryptographic Processor Using Heterogeneous Dual-Processing-Element Architecture"
},
{
"paperId": "7fc91d3684f1ab63b97d125161daf57af60f2ad9",
"title": "Elliptic Curves and Side-Channel Analysis"
},
{
"paperId": "a1cd437a924849d19e0713f042e45e79dc8b95a1",
"title": "A public key cyryptosystem and signature scheme based on discrete logarithms"
},
{
"paperId": null,
"title": "This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license"
},
{
"paperId": null,
"title": "Draft-irtf-cfrg-eddsa-05, Internet Engineering Task Force, 2017"
}
] | 17,516
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/013d1b03a0f7dfb6675f903e52da4f3ae809f719
|
[
"Computer Science"
] | 0.879323
|
Model for Simulation of Heterogeneous High-Performance Computing Environments
|
013d1b03a0f7dfb6675f903e52da4f3ae809f719
|
International Conference on High Performance Computing for Computational Science
|
[
{
"authorId": "145921064",
"name": "R. Mello"
},
{
"authorId": "1966773",
"name": "Luciano José Senger"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"High Performance Computing for Computational Science (Vector and Parallel Processing)",
"High Perform Comput Comput Sci (vector Parallel Process",
"Int Conf High Perform Comput Comput Sci",
"VECPAR"
],
"alternate_urls": null,
"id": "1eed7a6b-9e3b-431b-a1a1-c61ba1a420d6",
"issn": null,
"name": "International Conference on High Performance Computing for Computational Science",
"type": "conference",
"url": "https://link.springer.com/conference/vecpar"
}
| null |
# Model for Simulation of Heterogeneous High-Performance Computing Environments
Rodrigo Fernandes de Mello¹ and Luciano José Senger²,⋆
1 Universidade de São Paulo – Departamento de Computação
Instituto de Ciências Matematicas e de Computação
Av. Trabalhador Saocarlense, 400 Caixa Postal 668
CEP 13560-970 São Carlos, SP, Brazil
mello@icmc.usp.br
2 Universidade Estadual de Ponta Grossa – Departamento de Informatica
Av. Carlos Cavalcanti, 4748
CEP 84030-900 Ponta Grossa, PR, Brazil
ljsenger@icmc.usp.br
**Abstract.** This paper proposes a new model to predict process execution behavior on heterogeneous multicomputing environments. The model considers process execution costs such as processing, hard disk accesses, message transmission and memory allocation. A simulator of this model was developed, which helps to predict the execution behavior of processes on distributed environments under different scheduling techniques. Besides the simulator, a suite of benchmark tools was developed in order to parameterize the proposed model with data collected from real environments. Experiments were conducted to evaluate the proposed model using a parallel application executing on a heterogeneous system. The obtained results show the model's ability to predict the actual system performance, providing a useful model for developing and evaluating techniques for scheduling and resource allocation over heterogeneous and distributed systems.
## 1 Introduction

The evaluation of a computing system allows the analysis of its technical and economic feasibility, safety, performance and the correct execution of processes. In order to evaluate a system, techniques that estimate its behavior in different situations are used. Such techniques provide numerical results which allow the comparison among different solutions for the same problem [1]. The evaluation of a computing system may use elementary or indirect techniques. The elementary ones are directly applied over the system, so it is necessary to have it previously implemented. The indirect ones allow the system evaluation before its implementation, which is relevant at the project phase [2–6].

The indirect techniques use mathematical models to represent the behavior of the main system components. Such models should be as similar as possible to the real problems, generating results for a good evaluation without it being necessary to implement them [6]. Several models have been proposed for the evaluation of the execution time and the process delay. They consider the CPU consumption, the performance slowdown due to the use of the virtual memory [7] and the time spent with messages transmitted through the communication network [8].

⋆ The authors thank William Voorsluys for improving the source code of the benchmark memo and the funding from the Capes and Fapesp Brazilian Foundations (under process number 04/02411-9).
Amir et al. [7] have proposed a method for job assignment and reassignment on cluster computing. This method uses a queuing network model to represent the slowdown caused by virtual memory usage. In such a model, the static memory m(j) used by the process is known. This model defines the load of each computer in accordance with equation 1, where: L(t, i) is the load of computer i at the instant t; lc(t, i) is the CPU occupation; lw(t, i) is the amount of main memory used; rw(i) is the maximum capacity of the main memory; and β is the slowdown factor due to the use of virtual memory. Such a factor increases the process response time, which consequently results in a lower final performance. This work attempts to minimize the slowdown by means of scheduling operations.
$$L(t, i) = \begin{cases} l_c(t, i) & \text{if } l_w(t, i) \le r_w(i) \\ l_c(t, i) \cdot \beta & \text{otherwise} \end{cases} \qquad (1)$$
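As an illustration, a minimal Python sketch of this load model follows; the function and parameter names, and the example β value, are choices of this sketch rather than part of the original model.

```python
def load(cpu_occupation: float, used_memory: float,
         main_memory: float, beta: float = 4.0) -> float:
    """Load L(t, i) of equation 1 for one computer at one instant.

    cpu_occupation -- lc(t, i), the CPU occupation
    used_memory    -- lw(t, i), the main memory currently in use
    main_memory    -- rw(i), the main memory capacity
    beta           -- virtual-memory slowdown factor (illustrative
                      value; Amir et al. treat it as a constant)
    """
    if used_memory <= main_memory:
        return cpu_occupation
    return cpu_occupation * beta
```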
Mello et al. [9] have proposed improvements to the slowdown model by Amir et al. [7]. This work includes new parameters which allow a better modelling of process slowdown, namely the capacity of CPU and memory, the throughput for reading and writing on hard disk, and the delays generated by the use of the communication network. However, this model presents limitations similar to the work by Amir et al. [7], as it does not offer any resource to model, through equations, the delay caused by the use of virtual memory (represented in equation 1 by the parameter β), nor does it consider other delays in the process execution time generated by message transmission, hard disk access and other input/output operations. The modeling of message transmission delays is covered by other works [8,10].
Culler et al. [8] have proposed the LogP model to quantify the overhead and the network communication latency among processes. The overhead and latency cause delays among processes which communicate. This model is composed of the following parameters: L, which represents the upper latency limit or delay incurred in transmitting a message containing a word (or a small number of words) from the source computer to a destination; o, which represents the overhead, i.e. the time spent by the processor to prepare a message for sending or receiving; g, the minimum time interval between consecutive message transmissions (sending or receiving); and P, the number of processors. The LogP model assumes a finite-capacity network, holding at most L/g messages in transit at any time.
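The following sketch restates these two LogP quantities in Python; the helper names are this sketch's own, and the ceiling on in-flight messages follows the finite-capacity assumption stated above.

```python
import math

def logp_small_message_time(o: float, latency: float) -> float:
    """One small message end to end: sender overhead, wire latency,
    receiver overhead (all parameters in the same time unit)."""
    return o + latency + o

def logp_capacity(latency: float, g: float) -> int:
    """Finite-capacity assumption: at most about L/g messages can
    be in transit at any time."""
    return math.ceil(latency / g)
```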
Sivasubramaniam [10] used the LogP model to propose a framework to quantify the overhead of parallel applications. In such a framework, aspects such as the processing capacity and the communication system usage are considered. This framework joins efforts of actual experiments and simulations to refine and define analytic models. The major limitation of this work is that it does not present a complete case study.

The LogP model can be aggregated with the models by Amir et al. [7] and Mello et al. [9], permitting the evaluation of the process execution time and slowdowns considering the resources of CPU, memory and transmitted messages on the network. Even when unified, however, these models are still incomplete because they do not consider the spatial and message generation probability distributions. Motivated by such limitations, some studies have been proposed [11,12].
Chodnekar et al. [11] have presented a study to characterize the probability distribution of messages on communication systems. In that work, the 1D-FFT and IS [13], Cholesky and Nbody [14], Maxflow [15], and 3D-FFT and MG [16] parallel applications are evaluated executing on real environments. In the experiments, information such as the message sending and receiving moments, the size of messages and their destination has been captured. This information was analyzed with statistical tools, and the spatial and message generation probability distributions were obtained. The spatial distribution defines the frequency with which each process communicates with the others. The message generation distribution defines the probability that each process sends messages to the others.

They have concluded that the most usual message generation probability distributions for parallel applications are the exponential, hyperexponential and Weibull. It has also been concluded that the spatial distribution is not uniform and that there are different traffic patterns during the applications’ execution. In most applications there is a process which receives and sends a large number of messages to the remaining processes (like a master in PVM – Parallel Virtual Machine – and MPI – Message Passing Interface – applications). The work also presents some features of the message volume distribution, but there is no precise analysis of the message size, overhead and latency.
Vetter and Mueller [12] have studied the communication behavior of scientific applications using MPI (Message Passing Interface). This study quantifies the average volume of transmitted messages and their size. It has been concluded that in peer-to-peer communication 99% of the transmitted messages vary from 4 to 16384 bytes; in collective calls this number varies from 2 to 256 bytes. This was combined with the studies on spatial and message generation distributions by Chodnekar et al. [11] and with the LogP model [8], which allow the identification of overhead and communication latency in computing systems. By unifying these studies with the previously described slowdown models, it is possible to evaluate the process behavior considering CPU, virtual memory and message transmission. However, it is still not possible to model voluntary delays in the execution of processes (generated by sleep calls) and accesses to hard disks.
Motivated by the unification of the previously presented models and by the aggregation of the applications’ voluntary delays and hard disk accesses, this paper presents the UniMPP (Unified Modeling for Predicting Performance) model. This model unifies the CPU consumption considered in the models by Amir et al. [7] and Mello et al. [9], the time spent to transmit messages modeled by Culler et al. [8] and Sivasubramaniam [13], and the message volume and the spatial and message generation probability distributions by Chodnekar et al. [11] and Vetter and Mueller [12]. Experiments confirmed that this model can be used to predict the behavior of process execution on heterogeneous environments, since it generates process response times very similar to those observed in real executions.

This model was implemented in a simulator which is parameterized with system configurations (CPUs, main and virtual memories, hard disk throughput and network capacity) and receives processes for execution. Distribution functions are used to characterize the process CPU, memory, hard disk and network occupations. The simulator also generates new processes according to a probability distribution function, allowing the evaluation of different scheduling and load balancing policies without needing the real execution.
As presented before, the simulator needs to be parameterized with the actual system configurations. For this purpose, a suite of benchmark tools was developed to collect information such as the capacity of CPUs in MIPS (millions of instructions per second), the main and virtual memory behavior under a progressive occupation (this generates delay functions), the hard disk throughput in reading and writing operations (in MBytes per second) and the network delay (considering the overhead and latency, in seconds).

The main contribution of this work is the UniMPP model, which can be used with the simulator, allied to the benchmark tools, to predict the process execution time on heterogeneous environments. The simulator is prepared to receive new scheduling and load balancing policies and evaluate them using different workload models [17].
This paper is divided into the following sections: 2 The model; 3 Parameterization;
4 Model Validation; 5 Conclusions and References.
## 2 The Model
Motivated by the unification of the virtual memory slowdown models [7,9], by the models of delays in process execution caused by message transmission [8,10], by studies on spatial and message generation probability distributions [11], by the slowdown caused by main and virtual memory occupation, and by the definition of voluntary delays and accesses to hard disks, the UniMPP (Unified Modeling for Predicting Performance) model has been designed. These models are presented in the previous section. Unifying the ideas of each model and adding voluntary delays and hard disk accesses, we have defined a new model to predict the execution behavior of processes running on heterogeneous computers. By using this model, researchers can evaluate different techniques, such as scheduling and load balancing, without it being necessary to run an application on a real environment.
In this model, a process pj arrives at the system, following a probability distribution function, at the instant aj. Such a process is started by the computer ci. Each computer maintains its queue qi,t of processes at the instant t. In this model, every computer ci is composed of the sextuple {pci, mmi, vmi, dri, dwi, loi}, where: pci is the total computing capacity of each computer, measured in instructions per unit of time; mmi is the total main memory; vmi is the total virtual memory capacity; dri is the hard disk reading throughput; dwi is the hard disk writing throughput; and loi is the time spent in sending and receiving messages.

In the UniMPP, each process is represented by the sextuple {mpj, smj, pdfdmj, pdfdrj, pdfdwj, pdfnetj}, where: mpj represents the processing consumption; smj is the amount of static memory allocated by the process; pdfdmj is the probability distribution function used to represent the dynamic memory occupation; pdfdrj is the probability distribution function used to represent hard disk reading; pdfdwj is the probability distribution function used to represent hard disk writing; and pdfnetj is the probability distribution function used to represent the sending and receiving operations on the communication system.
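A schematic transcription of the two sextuples into Python dataclasses may help fix the notation; representing each pdf as a zero-argument callable that draws one sample is an assumption of this sketch, not something prescribed by the model.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Computer:   # the sextuple {pc, mm, vm, dr, dw, lo}
    pc: float     # computing capacity (instructions per time unit)
    mm: float     # total main memory
    vm: float     # total virtual memory
    dr: float     # hard disk reading throughput
    dw: float     # hard disk writing throughput
    lo: float     # time spent sending/receiving messages

@dataclass
class Process:    # the sextuple {mp, sm, pdfdm, pdfdr, pdfdw, pdfnet}
    mp: float                     # processing consumption
    sm: float                     # static memory allocated
    pdfdm: Callable[[], float]    # dynamic memory occupation sampler
    pdfdr: Callable[[], float]    # hard disk reading sampler
    pdfdw: Callable[[], float]    # hard disk writing sampler
    pdfnet: Callable[[], float]   # send/receive operations sampler
```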
Having formally defined computers and processes, equations were defined to obtain the process response time and delay. The first equation (equation 2) presents the response time TE_{p_j,c_i} of a process pj being executed on a computer ci, where the total computing capacity pci of ci and the processing consumption of pj should be represented by the same metric, such as MI (millions of instructions, when the capacity of processors was obtained in MIPS – millions of instructions per second) or MF (millions of floating-point instructions, when the capacity of processors was obtained in Mflops – millions of floating-point instructions per second).

$$TE_{p_j,c_i} = \frac{mp_j}{pc_i} \qquad (2)$$
Equation 2 presents a calculation method for the execution time of a process under ideal conditions, in which there is neither competition nor delays caused by memory and input/output usage. The work by Amir et al. [7] presents a more adequate equation in which, from the moment the virtual memory starts to be used, there is a delay in the process execution. Those authors use a constant delay in their equations. However, by using the benchmark tools described in Section 3, it was observed that there are limitations in their model, since the performance slowdown is linear during main memory usage and exponential from the moment the virtual memory starts to be used.

$$TEM_{p_j,c_i} = TE_{p_j,c_i} \cdot (1 + \alpha) \qquad (3)$$
Amir's performance model does not consider this linear performance slowdown caused by the use of the main memory, and it considers a constant factor for the performance slowdown caused by the use of the virtual memory when, in fact, this slowdown is exponential. The UniMPP models the process performance slowdown generated by the use of main and virtual memories through equation 3, where α represents a percentage obtained from a delay function and TE is presented in equation 2. This delay function is generated by a benchmark tool (Section 3), where the x-axis is the memory occupation up to the point at which all the virtual memory is used, and the y-axis is the α value (the slowdown imposed on the process execution by the memory occupation).
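A hedged sketch of how such a delay function and equation 3 could be coded is shown below; the coefficient names (a, b, c, d) stand for values fitted by the memo benchmark and are placeholders here.

```python
import math

def alpha(mem_occupation: float, main_memory: float,
          a: float, b: float, c: float, d: float) -> float:
    """Benchmark-fitted delay function: linear while only the main
    memory is occupied, exponential once virtual memory is used."""
    if mem_occupation <= main_memory:
        return a * mem_occupation + b
    return c * math.exp(d * mem_occupation)

def tem(te: float, alpha_value: float) -> float:
    """Equation 3: execution time corrected by the memory slowdown."""
    return te * (1.0 + alpha_value)
```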
A model which considers the process execution slowdown caused by the use of main and virtual memories is more adequate; however, it does not allow the precise quantification of the total execution time of processes which perform input and output operations to the hard disk. For this reason, experiments have been conducted and equations developed to measure the delays generated by accesses to the hard disk. Equation 4 models the process delay generated by reading operations from the hard disk, where: nr represents the number of reading accesses; bsize_k represents the data buffer size; dri represents the throughput capacity for reading accesses from the hard disk; and wtdr_k represents the waiting time for using the resource.
$$SLDR_{p_j,c_i} = \sum_{k=1}^{nr} \left( \frac{bsize_k}{dr_i} + wtdr_k \right) \qquad (4)$$
The hard disk writing delay is defined by equation 5, where: nw represents the number of writing accesses; bsize_k represents the data buffer size to be written; dwi is the throughput capacity for writing accesses to the hard disk; and wtdw_k is the waiting time for using the resource.
$$SLDW_{p_j,c_i} = \sum_{k=1}^{nw} \left( \frac{bsize_k}{dw_i} + wtdw_k \right) \qquad (5)$$
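Both equations share the same shape, so one hedged helper covers reading (with dr_i) and writing (with dw_i); the list-of-pairs input format is an assumption of this sketch.

```python
def disk_slowdown(accesses, throughput: float) -> float:
    """Equations 4 and 5: each access adds its transfer time
    (buffer size divided by throughput) plus its waiting time.

    accesses   -- iterable of (bsize_k, wt_k) pairs
    throughput -- dr_i for reads or dw_i for writes
    """
    return sum(bsize / throughput + wt for bsize, wt in accesses)

# e.g. two reads of 64 and 8 MBytes at 76.28 MBytes/s, the second
# one queued for 2 ms:
# disk_slowdown([(64.0, 0.0), (8.0, 0.002)], throughput=76.28)
```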
In addition to the delays caused by memory usage and input/output to hard disks, there are delays generated by sending and receiving messages on communication systems. Such delays vary according to the network bandwidth, latency and overhead of communication protocols [18–20]. The protocol latency involves the transmission time on the communication system, which varies in accordance with the message size and the control messages generated by the protocol [18–20]. The protocol overhead is the time involved in packing and unpacking messages for transmission; this time also varies according to the message size [18–20]. The delay for sending and receiving messages is defined by equation 6, where: nm represents the number of sent and received messages; θ_{s,k}, described in equation 7, is the time used for sending and receiving messages on the communication system, not considering the wait for resources; and wtn_k represents the wait time, i.e. the queue time, to send or receive a message when the resource is busy. The components of equation 7 are: o_{s,k}, the overhead, which when multiplied by two quantifies the packing time (by the sender) and the unpacking time (by the receiver) of a message; and l_{s,k}, the latency to transmit a message.
$$SLN_{p_j,c_i} = \sum_{k=1}^{nm} \left( \theta_{s,k} + wtn_k \right) \qquad (6)$$

$$\theta_{s,k} = 2 \cdot o_{s,k} + l_{s,k} \qquad (7)$$
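Equations 6 and 7 translate directly into code; the per-message tuple format below is an assumption of this sketch.

```python
def theta(o: float, latency: float) -> float:
    """Equation 7: packing plus unpacking overhead (2*o) plus the
    transmission latency of one message."""
    return 2.0 * o + latency

def network_slowdown(messages) -> float:
    """Equation 6: messages is an iterable of (o_k, l_k, wtn_k)."""
    return sum(theta(o, latency) + wtn for o, latency, wtn in messages)
```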
Aiming at the unification of all previously described delay models, equation 8 is proposed, which allows the definition of the response time (the prediction of this time in a real environment) of a process pj on a computer ci, where lz is the process voluntary delay generated by sleep system calls. In the case of load transference (that is, process migration), the communication channels may modify their behaviors and perform a higher or lower number of input/output operations (a process migrating to a computer hosting others with which it communicates reduces latency and overhead, because it does not use the communication system, although it can overload the CPU). Equation 9 gives the response time of a process pj transferred among n computers.
$$SL_{p_j} = SL_{p_j,c_i} = TEM_{p_j,c_i} + SLDR_{p_j,c_i} + SLDW_{p_j,c_i} + SLN_{p_j,c_i} + lz \qquad (8)$$

$$SL_{p_j} = \sum_{k=1}^{n} SL_{p_j,c_k} \qquad (9)$$
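Putting the pieces together, a minimal sketch of equations 8 and 9; the function names are this sketch's own.

```python
def response_time(tem_value: float, sldr: float, sldw: float,
                  sln: float, lz: float = 0.0) -> float:
    """Equation 8: predicted response time of a process on one
    computer, including its voluntary delay lz (sleep calls)."""
    return tem_value + sldr + sldw + sln + lz

def migrated_response_time(per_computer_times) -> float:
    """Equation 9: a process migrated among n computers accumulates
    the time it spent on each of them."""
    return sum(per_computer_times)
```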
The UniMPP model unifies the concepts from the models by Amir et al. [7], Mello et al. [9] and Culler et al. [8] and extends them by adding voluntary delay equations and the time for reading and writing accesses to hard disks. In addition, based on experiments, this work proposes new equations to define the main and virtual memory slowdown. Through these equations, it was observed that the slowdown is linear when using the main memory and exponential when using the virtual memory. Such experiments were carried out using the benchmark tools from Section 3. This model allows studies of scheduling and load balancing algorithms and the prediction of process response times on heterogeneous environments.

The proposed model has been implemented in a simulator, named SchedSim[3], which allows other researchers to conduct related studies. Such a simulator is implemented in the Java language and uses object-oriented concepts that simplify its extension and the addition of functionality. The simulator is parameterized with system configurations (CPUs, main and virtual memories, hard disk throughput and network capacity) and receives processes for execution. It generates new processes according to a probability distribution function, allowing the evaluation of different scheduling and load balancing policies without needing the real execution.
## 3 Parameterization

In order to parameterize the SchedSim simulator using real environment characteristics, a suite of benchmark tools[4] was developed. These tools measure the capacity of the CPU, the reading and writing hard disk throughput and the message transmission delays. Such tools repeat each measurement until they reach a minimum sample size based on the central limit theorem, allowing the application of statistical summary measures such as the confidence interval, standard deviation and average [21] (see the sketch below). The suite is composed of the following tools:

1. mips: measures the capacity of a processor, in millions of instructions per second. This tool uses a bench() function implemented by Kerrigan [22];
2. memo: creates child processes until all main and virtual memory is filled up, measuring the delays of the context switches among processes. The child processes only allocate the memory and then sleep for some seconds, thus processor usage is not considered;
3. discio: measures the average writing throughput (buffered and unbuffered) and the average reading throughput on local storage devices (hard disks) or remote storage devices (via network file systems);
4. net: composed of two applications, a client and a server, which allow the evaluation of the time spent to send and receive messages over communication networks (based on equation 7).
3 Source code available at http://www.icmc.usp.br/˜mello/outr.html
4 Benchmark – source code available at http://www.icmc.usp.br/˜mello/outr.html
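As a hedged illustration of the sampling discipline shared by these tools, the fragment below repeats a measurement and then summarizes it; the 30-sample minimum and the normal-approximation interval are assumptions of this sketch, not details taken from the paper.

```python
import statistics

def measure(run_once, min_samples: int = 30, z: float = 1.96):
    """Run a benchmark `min_samples` times and report the average,
    standard deviation and an approximate 95% confidence interval.
    `run_once` is any zero-argument callable returning one value."""
    samples = [run_once() for _ in range(min_samples)]
    mean = statistics.mean(samples)
    stdev = statistics.stdev(samples)
    half_width = z * stdev / (len(samples) ** 0.5)
    return mean, stdev, (mean - half_width, mean + half_width)
```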
## 4 Validation
In order to validate the proposed model, executions of a parallel application developed in PVM (Parallel Virtual Machine) [23] in a scenario composed of two homogeneous computers have been considered. The adopted application is composed of a master process and worker processes. The master process launches one worker on each computer and defines three parameters: the problem size, that is, the number of mathematical operations executed to solve an integral (eq. 10) defined between two points a and b using the trapezium rule [24,25]; the number of bytes that will be transferred over the network; and the number of bytes recorded on the hard disk. The workers are composed of four stages: message receiving, processing, writing to the hard disk and message sending. The message exchange happens between master and worker at the beginning and at the end of the workers’ execution. The workers are instrumented to account for the time consumed in each operation.

$$\int_{a}^{b} \left( 2 \cdot \sin x + e^{x} \right) dx \qquad (10)$$
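For concreteness, a small sketch of the workload of equation 10 under the composite trapezium rule; the integration limits and the number of sub-intervals below are illustrative, since the paper only fixes them per experiment through the problem-size parameter.

```python
import math

def trapezium(f, a: float, b: float, n: int) -> float:
    """Composite trapezium rule with n sub-intervals."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, n))
    return h * total

# Larger n means a larger problem size for the worker processes.
value = trapezium(lambda x: 2.0 * math.sin(x) + math.exp(x),
                  a=0.0, b=1.0, n=10_000)
```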
Scenario details are presented in Table 1; they have been obtained with the benchmark suite. A message size of 32 bytes has been considered for the benchmark net. Table 2 presents the slowdown equations generated by using the main and virtual memories on the computers c1 and c2. Such equations have been obtained through the experiments with the benchmark memo. The linear form of the equations is used when the main memory is not completely filled up, that is, while computers c1 and c2 do not exceed their 1 GByte of main memory capacity. After exceeding such a limit, the virtual memory is used and the delay is represented by the exponential function.
**Table 1. System details**

| Resource | c1 | c2 |
| --- | --- | --- |
| CPU (MIPS) | 1145.86 | 1148.65 |
| Main memory | 1 GByte | 1 GByte |
| Virtual memory | 1 GByte | 1 GByte |
| Disk writing throughput (MBytes/s) | 65.55 | 66.56 |
| Disk reading throughput (MBytes/s) | 76.28 | 75.21 |
| Overhead + latency (seconds) | 0.000040 | 0.000040 |
The experiment results are presented in Table 3. It may be observed that the error between the curves is low, close to zero. Ten experiments have been conducted for different numbers of applications, each one composed of two workers executing on the two computers. Such an experiment was used to saturate the capacity of all computing resources of the environment. Figure 1 shows the experiment and simulation results.
**Table 2. Memory slowdown functions for computers c1 and c2**

| Memory | Regression | Equation | R² |
| --- | --- | --- | --- |
| Main memory | Linear | y = 0.0012x − 0.0065 | 0.991 |
| Main and virtual memory | Exponential | y = 0.0938 · e^(0.0039x) | 0.8898 |
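Read as code, the two fitted curves for c1 give a piecewise α function. Since the paper does not state the unit of x, the sketch below assumes memory occupation in MBytes with the 1 GByte main-memory threshold; this is an assumption to verify against the memo benchmark output.

```python
import math

def alpha_c1(mem_mbytes: float) -> float:
    """Piecewise slowdown for computer c1 from Table 2 (x assumed
    to be the memory occupation in MBytes; placeholder unit)."""
    if mem_mbytes <= 1024.0:                        # main memory only
        return 0.0012 * mem_mbytes - 0.0065
    return 0.0938 * math.exp(0.0039 * mem_mbytes)   # virtual memory
```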
**Table 3. Simulation results for computers c1 and c2**

| Processes | Actual average | Predicted | Relative error |
| --- | --- | --- | --- |
| 10 | 151.40 | 149.51 | 0.012 |
| 20 | 301.05 | 293.47 | 0.025 |
| 30 | 447.70 | 437.46 | 0.022 |
| 40 | 578.29 | 573.58 | 0.008 |
| 50 | 730.84 | 714.92 | 0.021 |
| 60 | 856.76 | 862.52 | 0.006 |
| 70 | 1002.10 | 1012.17 | 0.009 |
| 80 | 1147.44 | 1165.24 | 0.015 |
| 90 | 1245.40 | 1318.37 | 0.055 |
| 100 | 1396.80 | 1471.88 | 0.051 |
**Fig. 1. Actual and predicted average response times for computers c1 and c2** (average response time versus the number of processes, 10–100).
The simulation results show the model's ability to reproduce the real system behavior. It is important to notice the increase in the prediction errors when the system runs between 90 and 100 processes.

The real executions, using 90 and 100 processes, overloaded the computers, and some processes were killed by the PVM system. The premature stopping of processes (about 5 processes were killed) decreases the computers' load, justifying the model prediction error. The simulator was used with the aim of predicting the system behavior considering a number of processes greater than the number of processes actually executed by PVM.
After the experiments in a homogeneous system, a new environment composed of heterogeneous computers was parameterized using the benchmark tools. In this environment, the same application was executed, which computes an integral function between two points using the trapezium rule. The features of the heterogeneous computers are presented in Table 4.
**Table 4. System details**

| Resource | c3 | c4 |
| --- | --- | --- |
| CPU (MIPS) | 927.55 | 1600.40 |
| Main memory (MBytes) | 256 | 512 |
| Virtual memory (MBytes) | 400 | 512 |
| Disk writing throughput (MBytes/s) | 47.64 | 15.99 |
| Disk reading throughput (MBytes/s) | 41.34 | 32.55 |
| Overhead + latency (seconds) | 0.000056924 | 0.000056924 |
Tables 5 and 6 present the slowdown equations, obtained with the memo benchmark, considering the main and virtual memory usage.
**Table 5. Memory slowdown functions for computer c3**

| Memory | Regression | Equation | R² |
| --- | --- | --- | --- |
| Main memory | Linear | y = 0.0018x − 0.0007 | 0.9998 |
| Main and virtual memory | Exponential | y = 0.7335 · e^(0.0097x) | 0.8856 |

**Table 6. Memory slowdown functions for computer c4**

| Memory | Regression | Equation | R² |
| --- | --- | --- | --- |
| Main memory | Linear | y = 0.0018x − 0.0035 | 0.9821 |
| Main and virtual memory | Exponential | y = 0.0924 · e^(0.0095x) | 0.8912 |
The experiment results are presented in Table 7. The error values obtained by comparing the simulated and the actual execution time values are close to 0, confirming the model's ability to predict real executions. Figure 2 shows the experiment and simulation results.
**Table 7. Simulation results for computers c3 and c4**

| Processes | Actual average | Predicted | Relative error |
| --- | --- | --- | --- |
| 10 | 153.29 | 152.38 | 0.0059 |
| 20 | 306.63 | 304.66 | 0.0064 |
| 30 | 457.93 | 457.46 | 0.0010 |
| 40 | 593.66 | 610.78 | 0.0280 |
| 50 | 760.02 | 764.65 | 0.0060 |
| 60 | 892.29 | 918.97 | 0.0290 |
| 70 | 1040.21 | 1074.18 | 0.0316 |
| 80 | 1188.14 | 1230.75 | 0.0346 |
| 90 | 1333.70 | 1388.14 | 0.0392 |
| 100 | 1488.97 | 1572.22 | 0.0529 |
**Fig. 2. Actual and predicted average response times for computers c3 and c4** (average response time versus the number of processes, 10–100).
When about 60 or more processes are running, some problems were observed due to PVM process management. It was observed that, with computers of lower processing power, PVM started to kill processes earlier, when running more than 60 processes. These problems explain the difference between the actual and the simulated time values and the increase in prediction errors.

The experiments presented in this section validate the model used by the simulator. The model and the simulator are able to predict the behavior of a real and dynamic system, modelling distinct parallel applications which solve problems from different areas, such as aeronautics, fluid dynamics and geoprocessing. Thus, the system behavior can be predicted early, in the project phase, minimizing development costs.
## 5 Conclusions
Several models have been proposed to measure the response time of processes in computing systems [7,9]. Such models have presented some contributions, considering that the virtual memory occupation causes delays in process executions [7,9], as well as the delays generated by message transmissions on communication systems [8,10]. Nevertheless, such models do not unify all possible delays of a process execution.

Motivated by such limitations, this work has presented a new unified model to predict the execution of applications running on heterogeneous distributed environments. This model considers the process execution time in accordance with the processing, accesses to hard disk, message transmissions on communication networks, and main and virtual memory slowdowns.

This work has contributed by modeling the delays in reading and writing accesses to hard disks and by presenting a new technique which uses equations to represent the delays generated by main and virtual memory usage. This complements the studies by Amir et al. [7] and Mello et al. [9], which consider a constant delay.
In addition, a simulator of the proposed model was developed, which can be used to predict the execution of applications on heterogeneous multicomputing environments. Such a simulator has been developed with extensions in mind, such as the design of new scheduling and load balancing policies. This simulator is licensed under the GNU/GPL, which allows its broad use by researchers interested in developing and evaluating resource allocation techniques. In order to complement this simulator and allow its parameterization using real environment information, a suite of benchmark tools was developed and is also available under the GNU/GPL license.

In order to validate the simulator, a parallel application was implemented, simulated and executed on a real environment. It was observed that the percentage error obtained between the actual and the predicted execution times was lower than 1%, which confirms the accuracy of the proposed model in predicting the application execution on heterogeneous multicomputing environments.
## References

1. Mello, R.F.: Proposta e Avaliação de Desempenho de um Algoritmo de Balanceamento de Carga para Ambientes Distribuídos Heterogêneos Escaláveis. PhD thesis, SEL-EESC-USP (2003)
2. Lazowska, E.D., et al.: Quantitative System Performance: Computer System Analysis Using Queueing Network Models. Prentice Hall (1984)
3. Bratley, P., et al.: A Guide to Simulation. Springer-Verlag (1987)
4. Kleinrock, L.: Queueing Systems – Volume II: Computer Applications. John Wiley & Sons (1976)
5. Lavenberg, S.S.: Computer Performance Modeling Handbook. Academic Press (1983)
6. Jain, R.: The Art of Computer Systems Performance Analysis: Techniques for Experimental Design, Measurements, Simulation and Modeling. John Wiley & Sons (1991)
7. Amir, Y.: An opportunity cost approach for job assignment in a scalable computing cluster. IEEE Transactions on Parallel and Distributed Systems 11(7) (2000) 760–768
8. Culler, D.E., Karp, R.M., Patterson, D.A., Sahay, A., Schauser, K.E., Santos, E., Subramonian, R., von Eicken, T.: LogP: Towards a realistic model of parallel computation. In: Principles and Practice of Parallel Programming (1993) 1–12
9. Mello, R.F., et al.: Analysis on the significant information to update the tables on occupation of resources by using a peer-to-peer protocol. In: 16th Annual International Symposium on High Performance Computing Systems and Applications, Moncton, New Brunswick, Canada (2002)
10. Sivasubramaniam, A.: Execution-driven simulators for parallel systems design. In: Winter Simulation Conference (1997) 1021–1028
11. Chodnekar, S., et al.: Towards a communication characterization methodology for parallel applications. In: Proceedings of the 3rd IEEE Symposium on High-Performance Computer Architecture (HPCA ’97), IEEE Computer Society (1997) 310
12. Vetter, J.S., Mueller, F.: Communication characteristics of large-scale scientific applications for contemporary cluster architectures. J. Parallel Distrib. Comput. 63(9) (2003) 853–865
13. Sivasubramaniam, A., Singla, A., Ramachandran, U., Venkateswaran, H.: An approach to scalability study of shared memory parallel systems. In: Measurement and Modeling of Computer Systems (1994) 171–180
14. Singh, J.P., Weber, W., Gupta, A.: Splash: Stanford parallel applications for shared-memory. Technical report (1991)
15. Anderson, R.J., Setubal, J.C.: On the parallel implementation of Goldberg's maximum flow algorithm. In: Proceedings of the Fourth Annual ACM Symposium on Parallel Algorithms and Architectures, San Diego, California, United States, ACM Press (1992) 168–177
16. Bailey, D.H., Barszcz, E., Barton, J.T., Browning, D.S., Carter, R.L., Dagum, D., Fatoohi, R.A., Frederickson, P.O., Lasinski, T.A., Schreiber, R.S., Simon, H.D., Venkatakrishnan, V., Weeratunga, S.K.: The NAS Parallel Benchmarks. The International Journal of Supercomputer Applications 5(3) (1991) 63–73
17. Feitelson, D.G., Rudolph, L., Schwiegelshohn, U., Sevcik, K.C., Wong, P.: Theory and Practice in Parallel Job Scheduling. In: Job Scheduling Strategies for Parallel Processing. Lect. Notes Comput. Sci., Vol. 1291, Springer (1997) 1–34
18. Chiola, G., Ciaccio, G.: A performance-oriented operating system approach to fast communications in a cluster of personal computers. In: Proc. 1998 International Conference on Parallel and Distributed Processing, Techniques and Applications (PDPTA’98), Vol. 1, Las Vegas, Nevada (1998) 259–266
19. Chiola, G., Ciaccio, G.: GAMMA: Architecture, programming interface and preliminary benchmarking
20. Chiola, G., Ciaccio, G.: GAMMA: a low cost network of workstations based on active messages. In: Proc. Euromicro PDP’97, London, UK, January 1997, IEEE Computer Society (1997)
21. Shefler, W.C.: Statistics: Concepts and Applications. The Benjamin/Cummings (1988)
22. Kerrigan, T.: Tscp benchmark (2004)
23. Geist, A., Beguelin, A., Dongarra, J., Jiang, W., Manchek, R., Sunderam, V.: PVM: Parallel Virtual Machine: A Users' Guide and Tutorial for Networked Parallel Computing. MIT Press (1994)
24. Pacheco, P.S.: Parallel Programming with MPI. Morgan Kaufmann Publishers (1997)
25. Burden, R.L., Faires, J.D.: Análise Numérica. Thomson (2001)
|
{
"disclaimer": "Notice: The following paper fields have been elided by the publisher: {'abstract', 'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1007/978-3-540-71351-7_9?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1007/978-3-540-71351-7_9, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "GREEN",
"url": "http://vecpar.fe.up.pt/2006/programme/papers/5.pdf"
}
| 2006
|
[
"JournalArticle"
] | true
| 2006-06-10T00:00:00
|
[] | 8,167
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/013f3dbde05a9be858502c6cc5e85ea5ebae5ab6
|
[
"Computer Science"
] | 0.879892
|
Collective hybrid intelligence: towards a conceptual framework
|
013f3dbde05a9be858502c6cc5e85ea5ebae5ab6
|
International Journal of Crowd Science
|
[
{
"authorId": "145660977",
"name": "Morteza Moradi"
},
{
"authorId": "2056161260",
"name": "Mohammad Moradi"
},
{
"authorId": "2065706784",
"name": "Farhad Bayat"
},
{
"authorId": "2130483840",
"name": "Adel Nadjaran Toosi"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"Int J Crowd Sci"
],
"alternate_urls": [
"http://www.emeraldinsight.com/loi/ijcs"
],
"id": "1c3fd765-9228-4d18-ba06-e2927f0d352d",
"issn": "2398-7294",
"name": "International Journal of Crowd Science",
"type": "journal",
"url": "http://www.emeraldgrouppublishing.com/services/publishing/ijcs/index.htm"
}
|
PurposeHuman or machine, which one is more intelligent and powerful for performing computing and processing tasks? Over the years, researchers and scientists have spent significant amounts of money and effort to answer this question. Nonetheless, despite some outstanding achievements, replacing humans in the intellectual tasks is not yet a reality. Instead, to compensate for the weakness of machines in some (mostly cognitive) tasks, the idea of putting human in the loop has been introduced and widely accepted. In this paper, the notion of collective hybrid intelligence is introduced as a new computing framework.Design/methodology/approachAccording to the extensive acceptance and efficiency of crowdsourcing, hybrid intelligence and distributed computing concepts, the authors have come up with the (complementary) idea of collective hybrid intelligence. In this regard, besides providing a brief review of the efforts made in the related contexts, conceptual foundations and building blocks of the proposed framework are delineated. Moreover, some discussion on architectural and realization issues is presented.FindingsThe paper describes the conceptual architecture, workflow and schematic representation of a new hybrid computing concept. Moreover, by introducing three sample scenarios, its benefits, requirements, practical roadmap and architectural notes are explained.Originality/valueThe major contribution of this work is introducing the conceptual foundations to combine and integrate collective intelligence of humans and machines to achieve higher efficiency and (computing) performance. To the best of the authors’ knowledge, this is the first study in which such a beneficial integration is considered. Therefore, it is believed that the proposed computing concept could inspire researchers toward realizing such unprecedented possibilities in practical and theoretical contexts.
|
Received 26 March 2019; revised 3 June 2019; accepted 11 July 2019.

International Journal of Crowd Science, Vol. 3 No. 2, 2019, pp. 198-220. Emerald Publishing Limited, ISSN 2398-7294.
[DOI 10.1108/IJCS-03-2019-0012](http://dx.doi.org/10.1108/IJCS-03-2019-0012)
www.emeraldinsight.com/2398-7294.htm
# Collective hybrid intelligence: towards a conceptual framework
## Morteza Moradi
### Department of Electrical Engineering, University of Zanjan, Zanjan, Iran
## Mohammad Moradi
### Young Researchers and Elite Club, Qazvin, Islamic Republic of Iran
## Farhad Bayat
### Department of Electrical Engineering, University of Zanjan, Zanjan, Iran, and
## Adel Nadjaran Toosi
### Faculty of Information Technology, Monash University, Melbourne, Australia
Abstract
Purpose – Human or machine, which one is more intelligent and powerful for performing computing and
processing tasks? Over the years, researchers and scientists have spent significant amounts of money and
effort to answer this question. Nonetheless, despite some outstanding achievements, replacing humans in the
intellectual tasks is not yet a reality. Instead, to compensate for the weakness of machines in some (mostly
cognitive) tasks, the idea of putting human in the loop has been introduced and widely accepted. In this paper, the notion of collective hybrid intelligence is introduced as a new computing framework.
Design/methodology/approach – According to the extensive acceptance and efficiency of
crowdsourcing, hybrid intelligence and distributed computing concepts, the authors have come up with the
(complementary) idea of collective hybrid intelligence. In this regard, besides providing a brief review of the
efforts made in the related contexts, conceptual foundations and building blocks of the proposed framework
are delineated. Moreover, some discussion on architectural and realization issues is presented.
Findings – The paper describes the conceptual architecture, workflow and schematic representation of a
new hybrid computing concept. Moreover, by introducing three sample scenarios, its benefits, requirements,
practical roadmap and architectural notes are explained.
Originality/value – The major contribution of this work is introducing the conceptual foundations to
combine and integrate collective intelligence of humans and machines to achieve higher efficiency and
(computing) performance. To the best of the authors’ knowledge, this is the first study in which such a beneficial integration is considered. Therefore, it is believed that the proposed computing concept could
inspire researchers toward realizing such unprecedented possibilities in practical and theoretical
contexts.
Keywords Crowdsourcing, Human computation, Autonomous control, Collective machine intelligence,
Human–machine collaboration, Hybrid intelligence
Paper type Conceptual paper
© Morteza Moradi, Mohammad Moradi, Farhad Bayat and Adel Nadjaran Toosi. Published in
International Journal of Crowd Science. Published by Emerald Publishing Limited. This article is
published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce,
distribute, translate and create derivative works of this article (for both commercial and
non-commercial purposes), subject to full attribution to the original publication and authors. The full
terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode
1. Introduction
The concept of computation has evolved over the years with respect to real-world
requirements and technological advancements (Mahoney, 1988; Copeland, 2000). In this
regard, many computing paradigms have been introduced so far, such as those of Kephart and Chess (2003), Bargiela and Pedrycz (2016) and Shi et al. (2016). In addition to the infrastructural
necessities of any computing process, an old dream in this context is the realization of full
autonomy in computing, decision making and similar intellectual processes. Achieving this
level of automation, in essence, needs to add intelligence to the process in some way. In other
words, to be able to come up with (super) human-level decisions, an autonomous
(computing/control) system should be equipped with adequate infrastructural facilities,
computing power and intelligence (Feigenbaum, 2003; Nilsson, 2005; Cassimatis, 2006).
Nowadays, thanks to the availability of powerful hardware, advanced processing
components, inexpensive data storage equipment, sophisticated algorithms and so on, the
major challenge in achieving such dreamy machines is the lack of sufficient human-level
intelligence. Although many efforts have been spent in this direction (Decker, 2000; Hibbard, 2001; Zadeh, 2008; Bundy, 2017), replacing human intelligence with machines’ has not yet been fully realized. On the other hand, leveraging humans’ brainpower to improve machines’ performance has become an efficient approach in recent years (Weyer et al., 2015; Ofli et al., 2016; Chang et al., 2017). Therefore, one may think that instead of trying to
build machines to take the place of humans, it would be better to establish a foundation to
facilitate joint work of humans and machines to tackle large-scale problems. Although
the hybrid intelligence paradigm introduces some opportunities to benefit from human and machine intelligence (Huang et al., 2017), the lack of a reference model or general architecture to adhere to its principles causes some non-uniformity. Moreover, adhering to this approach does not guarantee taking full advantage of the available possibilities. On the other hand, volunteer computing (Beberg et al., 2009), as an interesting and working idea, mainly focuses on leveraging the computing resources of the participants, e.g. their PCs and browsers (Fabisiak and Danilecki, 2017).
One can readily observe that, despite the huge available opportunities to synthesize the various capabilities of humans and machines, the absence of a comprehensive approach to make the most of them is an obvious drawback. In other words, any framework or mechanism that could integrate the intelligence and computational resources of human agents and machine entities at different levels could deliver the best of both worlds. In this respect, with the aim of studying previous efforts and the current status of similar research, a brief overview is conducted. Then, to take the efficiency of such human–machine cooperation and collaboration to an unprecedented level, the conceptual architecture of a new evolutionary computing/automation framework, entitled collective hybrid intelligence (CHI), is proposed, and its related issues and considerations are discussed in detail. Given the current findings and achievements that serve as the building blocks of the introduced solution, it is expected that the proposed concept can extend the borders of research in the field and increase the efficacy of human–machine synergy in performing computing tasks.
The rest of this paper is organized as follows. At first, an overview of the context and
intention of the paper is provided in Section 2. The background and preliminary concepts are briefly overviewed in Section 3. The concept of Collective Hybrid Intelligence, its
fundamentals, benefits, challenges and realization models are discussed in Section 4. Finally,
to clearly describe and discuss how typical systems of this kind (that is constructed based
on the proposed framework of CHI) may work in different application domains, three
example scenarios are delineated in Section 5.
2. Big picture
Undoubtedly, computers – i.e. smart/intelligent machines – are among the most important and influential inventions of the modern era. Their ever-increasing capabilities in handling a wide variety of computational problems have made computers the artificial superheroes of all time. Over the years, thanks to the outstanding progress in hardware technology, computing paradigms, machine learning and artificial intelligence, machines have received overestimated (and even exaggerated) applause. Affected by science-fiction stories and movies, the public may be concerned about an early domination of machines over the human race. In this regard, the defeat of the world chess champion by a computer (i.e. IBM’s Deep Blue) in 1997[1] and the defeat of a professional Go player by DeepMind’s AlphaGo in 2015[2] were convincing evidence for robophobics to conclude that machines have finally won over humans and will be crowned in the near future.
Despite many advancements, the truth is that even the latest machines are not jacks of all trades, and there are many battlefields in which humans can defeat a billion-dollar machine[3]. In other words, when it comes to cognitive and intelligent tasks, current machines are not stronger than humans at all (for some examples, see Fleuret et al., 2011; Stabinger et al., 2016; Dodge and Karam, 2017). Such facts have driven the research community to rethink computational paradigms by putting humans in the loop.
In addition to compensating for machines’ weaknesses in some ways, human agents can provide human-level training data for machine learning purposes (Zhong et al., 2015; Yang et al., 2018). Because of the effectiveness of such cooperation, the (mostly fictional) war between humans and machines has turned into a synergistic collaboration. However, this is not the final destination of the long journey toward achieving super intelligence and computational capabilities.
The authors believe that the last step before the realization of superhuman intelligence (or artificial superintelligence) is to make the most of the currently neglected potential that humans and machines can present in a cooperative way. In the rest of the paper, the roles of both parties as the building blocks of a new comprehensive computational concept, entitled Collective Hybrid Intelligence, are investigated. As a final remark, throughout this paper the term machine refers to any non-human intelligent entity, including computers, programs, robots, etc.
3. Background
3.1 Collective human intelligence
Humans are an integral part of any computing process; however, over the years their roles, positions and responsibilities have changed and evolved. User, operator, supervisor and collaborator are the main categories that reflect humans’ roles in such processes (Folds, 2016). For thousands of years, humans’ intelligence, problem solving and reasoning abilities have produced numerous game-changing ideas and inventions to make life easier (Sarathy, 2018). Nonetheless, handling sophisticated and complicated situations and issues needed something more than a genius or an intelligent decision-maker. Such a fact probably sparked the motivation to establish the first councils and organized group decision-making bureaus (Burnstein and Berbaum, 1983; Maoz, 1990; Zanakis et al., 2003; Buchanan and O’Connell, 2006).
In the age of computers, for years humans were mostly consumers, while a minority group of supervisors was in charge of keeping the machines up and running. In fact, those days can pessimistically be referred to as the human-independent or machine-driven computing era. Fortunately, many things changed forever with the introduction of the crowdsourcing concept (Howe, 2006). The underlying idea of this revolutionary paradigm
was taking advantage of humans’ collective abilities and efforts to provide more efficient performance. Thanks to its potential, the initial concept was soon widely accepted and evolved into a working decision-making and problem-solving strategy (Brabham, 2008; Guazzini et al., 2015; Yu et al., 2018). Although the idea was not essentially a new one[4], its formulation and attitude toward leveraging the wisdom of crowds and collective human intelligence to cope with problems have made it a popular approach. Based upon the preliminary idea, several computing concepts, such as human computation (Von Ahn, 2008), social computing (Wang et al., 2007) and community intelligence (Luo et al., 2009), have been introduced.
Within the recent decade, putting the human in the loop of computing, decision-making (Chiu et al., 2014), ideation (Huang et al., 2014; Schemmann et al., 2016) and similar processes has gained momentum, so that one can witness a wide variety of application domains taking advantage of humans’ intelligence and problem-solving potential. Nonetheless, there is no serious intention to completely replace machines with humans, because this is simply impossible. Instead, the major goal of human-based computation is to compensate for machines’ deficiencies in performing some specific tasks and processes, including cognitive and intelligence-intensive ones (Wightman, 2010; Quinn and Bederson, 2011). For example, outsourcing image labeling tasks to people can provide more accurate, efficient and, in some cases, less expensive results than relying on machines (Nowak and Rüger, 2010).
In other words, when it comes to situations in which human-level intelligence is needed, given the current state of machines, recruiting human participants is the silver bullet. Further, one can expect more insightful and elaborate answers by involving experts in the form of expert crowdsourcing (Retelny et al., 2014) (Figure 1).

[Figure 1. Simplified schematic of CHI workflow]

Such benefits, by the way, will not come without cost, because the employment and management of a remarkable number of users in crowdsourcing projects can be burdensome. Therefore, there is a need for elaborate and reliable infrastructure, managerial supervision and workflows. The good news in this context is that the availability of technological support and platforms such as Amazon Mechanical Turk (AMT)[5], TurkPrime (Litman et al., 2017) and Figure-Eight[6] (formerly Crowdflower) has made conducting a crowdsourcing campaign as simple as posting a blog entry.
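To make the mechanics concrete, the following minimal sketch models how such a task might be specified before submission. It is deliberately platform-agnostic: the CrowdTask class, its field names and the example payload are illustrative assumptions, not the API of AMT or any other real service.

```python
# Minimal, platform-agnostic sketch of a crowdsourcing task specification.
# CrowdTask and its fields are illustrative assumptions, not a real SDK.
import json
from dataclasses import dataclass, asdict

@dataclass
class CrowdTask:
    title: str
    description: str
    reward_usd: float   # monetary incentive per assignment
    assignments: int    # independent workers per item, enabling majority voting
    payload: dict       # the actual work item, e.g. an image to label

def to_request_body(task: CrowdTask) -> str:
    """Serialize the task into the JSON body a platform API would expect."""
    return json.dumps(asdict(task), indent=2)

if __name__ == "__main__":
    task = CrowdTask(
        title="Label the main object in the image",
        description="Type the single word that best describes the image.",
        reward_usd=0.05,
        assignments=3,
        payload={"image_url": "https://example.com/img/123.jpg"},
    )
    print(to_request_body(task))
```

Redundant assignments (here, three per item) are what later enable quality-control steps such as majority voting (see Section 4.3.6).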
3.2 Collective machine intelligence
Speaking of artificial intelligence, one of the first things that comes to mind is science-fiction movies. Despite the remarkable advancements in the field (Dai and Weld, 2011;
Pan, 2016; Makridakis, 2017; Lu et al., 2018; Li et al., 2018) and predictions concerning the future of AI (Del Prado, 2015; Müller and Bostrom, 2016; Russell, 2017), there is a long, unpaved way to an age of predomination by machines capable of controlling everything.
Therefore, one should not be concerned about becoming a slave, or even an agent, of an artificial entity in the near future. Things are far different in the real world, and (perhaps) the major issue in the field is how to make machines more useful and efficient. From a
general point of view, machine intelligence can be interpreted as capabilities of machines in
handling and performing computational and processing tasks as well as decision making in
a more accurate, accelerated and effective way than humans.
Needless to say, coming up with a universal and comprehensive definition of machine intelligence is a controversial, interdisciplinary issue and is out of the scope of this paper. Nevertheless, the following studies can provide some useful information in this regard (Hernández-Orallo and Minaya-Collado, 1998; Bien et al., 2002; Legg and Hutter, 2007; Dobrev, 2012).
As mentioned earlier, in some cases – including cognitive tasks – machines cannot even deliver human-level performance (Fleuret et al., 2011; Stabinger et al., 2016; Dodge and Karam, 2017); nonetheless, there are many scenarios (such as huge computations, high-volume data analysis, real-time knowledge-based decision making and so on) that cannot be realized without their help. Such outstanding achievements are owed to many years of research and development in machine learning and artificial intelligence, as well as advancements in hardware technology and communication/computation infrastructures. All these facilities and advances, though, could not quench humans’ thirst for creating comprehensive and polymath machines. The ultimate intention in the field is to realize the idea of universal AI (Everitt and Hutter, 2018) or artificial general intelligence (Gurkaynak et al., 2016), rather than case-specific ones, e.g. artificial narrow intelligence (Gurkaynak et al., 2016). Achieving such a level of autonomy and intelligence is, of course, not practically impossible; however, a great deal of (multidimensional) intelligence and resources is needed.
Pursuing such an ambitious vision suggests that the days of the kingdom of independent, single-dimensional artificial intelligence are gone (or will soon be gone) (Wiedermann, 2012; Yampolskiy, 2015; Miailhe and Hodes, 2017). This ongoing revolution borrowed the idea from humans, who can think and operate more effectively when organized in the form of a crowd (Bonabeau, 2009; Leimeister, 2010). The adoption of the concept of collective human intelligence in the context of machines is known as collective machine intelligence (Halmes, 2013), wisdom of artificial crowds (Yampolskiy and El-Barkouky, 2011), collective robot intelligence (Kube and Zhang, 1992), etc. (Figure 2).

[Figure 2. Simplified schematic of CMI workflow]
Regardless of differences in nomenclature and (even) details, the goal is almost identical: the aggregation and integration of independent (homogeneous/heterogeneous) machines’ intelligence, power and resources to produce more effective and efficient outputs. Although it seems partially similar to swarm intelligence (Kennedy, 2006), cluster computing (Sadashiv and Kumar, 2011) and so on, collective machine intelligence (CMI) is a comprehensive, multipurpose concept aimed at taking advantage of (almost) every aspect of a single machine to improve team performance.
Moreover, in such multi-agent systems the ultimate intention is facilitating collaborative learning and the sharing of knowledge, experience and resources (Gifford, 2009). Clearly, the core concept of CMI is synergy and all-out cooperation. One of the earliest well-known realizations of the concept is the SETI@home project, in which millions of computers all over the
world contributed to the search for extraterrestrial intelligence by analyzing radio signals (Anderson et al., 2002). Although the major goal of the project was compensating for the lack of adequate processing resources rather than establishing a platform to aggregate independent machines’ intelligence, it could be an inspirational case study proving the applicability of such a strategy.
Further, several remarkable research works have been conducted to empirically study the efficiency of teaming up machines to benefit from their aggregated utilization, such as the projects reported in (Chien et al., 2003; Larson et al., 2009; Pedreira and Grigoras, 2017). Of course, there is still a notable challenge that even a cluster of powerful machines may face severe difficulties in handling, namely the lack of human-level cognitive intelligence.
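As a self-contained illustration of this aggregation strategy, the toy sketch below decomposes a large job into independent work units, farms them out to a pool of local worker processes standing in for volunteer machines, and integrates the partial results. The workload (summing squares) is purely illustrative.

```python
# Minimal illustration of the CMI idea: decompose a large job into work
# units, distribute them over a pool of "machines" (local processes here),
# and aggregate the partial results into one output.
from multiprocessing import Pool

def work_unit(chunk):
    """Stand-in for the processing a single volunteer machine performs."""
    return sum(x * x for x in chunk)

def run_collective(data, n_workers=4, chunk_size=1000):
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with Pool(n_workers) as pool:
        partial_results = pool.map(work_unit, chunks)  # distribute
    return sum(partial_results)                        # aggregate/integrate

if __name__ == "__main__":
    print(run_collective(list(range(100_000))))
```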
3.3 Hybrid intelligence
The major intangible difference between humans and the most powerful artificial intelligence is humanity itself. Thinking, understanding, learning, recognizing and judging the way humans do are the essential barriers that no artificial, human-made creature (i.e. machine) has yet overcome[7][8][9]. Regarding this fact, behind every successful machine there is at least one human who is in charge of supervising, training or collaborating with it (Folds, 2016).
Emphasizing the intellectual aspects of such a constructive symbiosis, it is referred to as hybrid intelligence (Kamar, 2016). Taking a closer look at the literature reveals that there are cases in which the term hybrid intelligence was used to refer to other concepts, especially collective machine intelligence, e.g. the research conducted in (Deng et al., 2012). In other words, in those instances, applying various machine learning algorithms to perform the same task in a more efficient way was interpreted as leveraging hybrid intelligence. Such an appellation may not be completely wrong or irrelevant; however, according to the aforementioned concepts and principles, the term collective machine intelligence better reflects the underlying concept of interest.
Whether clearly stated or not, when it comes to supporting machine learning algorithms with human intelligence (usually in the form of crowdsourcing), hybrid intelligence is being leveraged (Vaughan, 2017; Nushi et al., 2018; Klumpp et al., 2019) (Figure 3).

[Figure 3. Simplified schematic of hybrid intelligence workflow]
One can witness best practices of this strategy in the field of robotics (Chang et al., 2017) and particularly for human–robot interaction purposes (Breazeal et al., 2013). Such an approach, in its simplest scenario, can be realized by training an image-processing algorithm with human-labeled images (data sets) (Vaughan, 2017); a minimal sketch of this scenario follows the list below. Among the various advantages of incorporating human intelligence in the machine learning workflow (Barbier et al., 2012; Vaughan, 2017; Verhulst, 2018), the following can be enumerated:
- simplifying problems and making them machine-understandable;
- compensating for machines’ weaknesses and inefficiency, especially in cognitive tasks;
- facilitating and optimizing the learning process; and
- saving costs and time.
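The sketch below illustrates the simplest scenario mentioned above under illustrative assumptions: synthetic two-dimensional data, a human_label() function standing in for a crowdsourced annotation request, and a logistic regression model that is repeatedly retrained after humans label its least confident predictions.

```python
# Minimal human-in-the-loop training sketch; data, loop sizes and the
# human_label() oracle are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
true_labels = (X[:, 0] + X[:, 1] > 0).astype(int)   # hidden ground truth

def human_label(idx):
    """Stand-in for sending sample idx to crowdsourced annotators."""
    return int(true_labels[idx])

labeled = list(range(20))   # small human-labeled seed set
model = LogisticRegression().fit(X[labeled], [human_label(i) for i in labeled])

for _ in range(5):          # iterative human/machine loop
    confidence = model.predict_proba(X).max(axis=1)
    # route the least confident, still-unlabeled samples to humans
    uncertain = [int(i) for i in np.argsort(confidence)
                 if int(i) not in labeled][:10]
    labeled += uncertain
    y = [human_label(i) for i in labeled]
    model = LogisticRegression().fit(X[labeled], y)  # retrain on larger set

print("accuracy on all data:", model.score(X, true_labels))
```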
Mapping general problems into computational ones and making them machine-readable and machine-understandable are among the hard-to-tackle challenges. Equipping machines with general intelligence – if possible at this time – may not be economical in every case and demands a great deal of effort and resources with no guarantee of being efficient. Specifically, when it comes to cognitive and human-specific issues, machines face extremely sophisticated challenges. Therefore, taking advantage of humans’ intelligence and problem-solving power can be considered the silver bullet. In spite of the many advantages hybrid intelligence can offer, there is also room for further improvement by mobilizing all the possibilities for great, unprecedented breakthroughs.
3.4 Discussion (Are these enough?)
To be or not to be? To answer this question about the need for another intelligence-oriented computing concept, the first and foremost step is an evaluation of the current state of progress and challenges. From a high-level perspective, computing tasks and processes – based on their contextual and intrinsic requirements – can be categorized into two major classes: intelligence-intensive and resource-intensive. The former refers to tasks that require some type of cognition-based judgment, intelligent decision-making, computational intelligence and similar soft (and mostly human-specific) abilities (Maleszka and Nguyen, 2015; Chen and Shen, 2019). On the other side, the latter are time- and power-consuming tasks that involve dealing with large amounts of data (Liu et al., 2015; Jonathan et al., 2017) and high computational and processing requirements (Ilyashenko et al., 2017; 2019; Singh et al., 2019). Natural language processing, semantics-based processing, and concept understanding and interpretation are some general intelligence-intensive tasks, while multi-dimensional information processing, big data analysis, and high-volume communication control and management are among the resource-intensive challenges. Given the wide variety of real-world needs and requirements, numerous computational processes with different levels of complexity can be introduced.
Therefore, to efficiently handle such situations, the most appropriate computing concept should be used. As an overview of the previously mentioned concepts, their features are summarized and compared in Table I.
As noted in Table I, there are some essential issues with current computational paradigms, such as scalability and the inability to deal with complicated, hybrid tasks that require both enormous intelligence and resources. For example, assume a series of very large-scale semantic and cognitive image and video processing tasks that should provide real-time outputs as well as reliable, continuous performance.
As we know, none of the described computational solutions can properly cope with these challenges, and being satisfied with the currently available solutions is, in fact, a case of any port in a storm. In this regard, it seems necessary to take advantage of current infrastructures and facilities in a novel arrangement for dealing with ever-growing computational requirements.
4. A new human–machine cooperation framework
The availability of human participants, computing resources and software platforms as the building blocks of any computational process has facilitated ambitious perspectives. Clearly, we are facing an unprecedented presence and distribution of entangled intelligence and computing power that has partially been overlooked and remained unused.
At the lowest level, a very large, active and interested community of intelligent participants who are equipped with state-of-the-art smartphones is yet to be recruited.
Table I. Summarization of computing paradigms

| Strategy | Context | Major challenges | Major drawbacks |
| --- | --- | --- | --- |
| CHI | Intelligence-intensive tasks | User management, incentive mechanism design | Scalability, non-real-time response, limited types of tasks |
| CMI | Resource-intensive tasks | Implementation, cooperation management, task allocation | Lack of standard interaction modality, lack of human intelligence, availability issues |
| Hybrid intelligence | (Mostly) intelligence-intensive tasks | Human–machine interaction, synchronization | Scalability, machine-dependent performance |
Mobile data mining (Stahl et al., 2010), as well as location-based computing (Karimi, 2004), has further leveraged such smart entities as the most eligible candidates to take part in computational processes of all kinds (Vij and Aggarwal, 2018; Zhao et al., 2019).
On the other hand, distributed, ubiquitous and cloud computing paradigms, high-speed network connection and communication, as well as similar technological facilities, have provided a fertile land of opportunities for taming the groundbreaking possibilities. Therefore, not as a completely mold-breaking concept but as a complementary and evolutionary one, Collective Hybrid Intelligence (CHI) has everything it needs to be realized.
Defined as a framework for the “integration and convergence of (intelligent and non-intelligent) capabilities of humans and machines in an organized and structured way to perform a (series of) specific (intelligence- and resource-intensive) computing tasks,” CHI can be considered a comprehensive, multipurpose and scalable concept.
The notion of collective hybrid intelligence, in addition to intelligence-intensive processes, can also be extended to any human–machine cooperative task. Basically, besides sharing intelligence, the agents can collaborate on, e.g. data collection, testing, validation, ideation and any process that needs a remarkable amount of cooperative effort. CHI, principally, is an umbrella term describing various ways of leveraging human–machine cooperation and collaboration to come up with solutions for highly complicated and sophisticated problems. In other words, this study aims to put forward a brand new vision for enabling humans and machines (in a bilateral way) to establish some type of super-collaboration.
According to the concept, every single entity with sufficient capabilities and qualifications can be a nominee (i.e. a potential contributor) to participate in a computational process. In this regard, in the presence of appropriate utilization mechanisms, e.g. computing platforms and portals, various computational and processing tasks of interest can be performed (almost) anywhere and at any time (Figure 4).

[Figure 4. Simplified schematic of CHI workflow]
Owing to the wide range of possible situations, requirements and computational problems, the proposed framework is presented at the conceptual level. Doing so, in addition to making the framework flexible enough to fit various needs, facilitates the implementation of different instances in different contexts. Therefore, the architectural notes in the following sections present a high-level view of the framework and its fundamentals (i.e. the general organization of CHI), not a specific implementation of it.
Besides proposing a modern computing perspective, CHI is closely related to the concepts discussed in the previous section. Such relationships are illustrated in Figure 5.

[Figure 5. Relationships between CHI and related concepts]
4.1 Architectural notes
From a general point of view, the conceptual architecture of a typical realization of CHI-based systems can be depicted as in Figure 6.

[Figure 6. Conceptual architecture of a CHI-based system]

According to this conceptual representation, any practical realization needs a complicated, multi-level implementation. Specifically, some mechanisms are required for distributed task management, result aggregation, integration and validation. The general workflow of such a system can be described as follows.
After specifying the goal [i.e. the problem(s) to be solved] and decomposing it into subtasks, the active agents will be identified/selected based on some criteria. Then, the task management component first analyzes the (ordered) task to determine its requirements, including primary resources, priority, estimated completion time, etc. Next, the appropriate available resources will be specified for performing the task in an efficient way.
Decomposition of the initial task into several subtasks for distributing them over the
computing network is the next step. Such a partitioning is based on the type of tasks and available resources. For example, managing a data-intensive task is far different from managing
time-dependent one. Finally, the subtasks will be assigned to the selected agents. Moreover,
the task management component is in charge of aggregating and integrating the results, i.e.
agent-generated responses. The agent management component maintains a complete and
continuously updated profile (list) of all available agents and their processing and computational capabilities.
The agents will be prioritized based on some major factors, such as availability, active resources and (quality of) performance history. This information plays a vital role in assigning tasks to the agents. Generally, two main scenarios can be considered for the task assignment process.
First, the tasks are presented in a task pool; then, in an auction-like process, volunteer agents take responsibility for performing those tasks based on their capabilities, resources and the problem requirements.
In the second approach, those agents in the ready queue that match the requirements specified by the task coordinator (such as being in an appropriate geographical location, having a specific resource, etc.) will be selected to perform the tasks. Then, the tasks are performed by the participants and the outputs are returned to the cloud-based server.
Finally, the gathered results will be integrated and validated so that they become usable for the intended goal(s) (Figure 7).

[Figure 7. General internal workflow of CHI]
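The following minimal sketch contrasts the two assignment scenarios; the Agent and Task structures, the capability sets and the subset-matching rule are illustrative assumptions, not prescribed parts of the framework.

```python
# Minimal sketch of the two task-assignment scenarios; the data structures
# and matching rule are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    capabilities: set        # e.g. {"gpu", "opencv"} or {"human"}
    available: bool = True

@dataclass
class Task:
    task_id: int
    requirements: set        # capabilities the task needs

def auction_pull(pool, agent):
    """Scenario 1: a volunteer agent claims the first pooled task it can handle."""
    for task in list(pool):
        if task.requirements <= agent.capabilities:
            pool.remove(task)
            return task
    return None

def coordinator_push(tasks, ready_queue):
    """Scenario 2: the coordinator matches each task to a suitable ready agent."""
    assignments = {}
    for task in tasks:
        for agent in ready_queue:
            if agent.available and task.requirements <= agent.capabilities:
                assignments[task.task_id] = agent.name
                agent.available = False
                break
    return assignments

if __name__ == "__main__":
    agents = [Agent("machine-1", {"gpu", "opencv"}), Agent("human-1", {"human"})]
    print(coordinator_push([Task(1, {"opencv"}), Task(2, {"human"})], agents))
    pool = [Task(3, {"gpu"})]
    print(auction_pull(pool, Agent("machine-2", {"gpu"})))  # agent claims Task 3
```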
To demonstrate how such an approach may be beneficial, three example scenarios are described in Section 5.
From a high-level viewpoint, according to the aforementioned workflow, such a system should be built on a cloud-based infrastructure to support huge communication and computing processes. To manage the computing procedures, including task management and integration, a distributed computing platform should be leveraged as middleware.
Handling such potentially huge computing processes may face many difficulties; however, thanks to the emerging fog (Bonomi et al., 2012) and edge computing (Shi et al., 2016) concepts, they can be managed efficiently.
As illustrated in the layered architecture (Figure 8), at the top of the stack a web service is in charge of providing participant agents with an appropriate interface – similar to existing crowdsourcing platforms – so that they can perform assigned tasks.

[Figure 8. Layered architectural representation of CHI]
One important aspect of adhering to the CHI principles is leveraging the maximum benefits of distributed computing. Specifically, thanks to the flourishing of mobile crowdsourcing and data mining, location-based intelligence and computing are pervasively available. Moreover, thanks to ubiquitous smart devices spread globally, including smartphones, gadgets, laptops, closed-circuit cameras, PCs and state-of-the-art game consoles, we are witnessing highly distributed, untamed computing potential.
To capture such diverse dynamics, there is a need for well-organized and purposeful mechanisms and platforms. As inspirational practical examples of how humans’ power could be used and converged, general- and specific-purpose crowdsourcing platforms, such
as those described in (Willis et al., 2017; Peer et al., 2017), are worth studying. In addition to taking advantage of current crowdsourcing systems, there may be a need to design customized systems to fit the case-specific requirements of computational processes.
From another point of view, establishing reliable mechanisms to organize machines’ participation and joint work is an essential requirement. In this regard, the development of platforms through which machines can interact and collaborate with each other offers priceless benefits. Previous efforts of this kind, such as robot-specific social networks (Wang et al., 2012) and the social internet of things (SIoT) (Atzori et al., 2012), are great sources of inspiration.
4.2 Realization models
Based upon the proposed framework, machines, as passive entities, are thought to be in charge of providing computational power and processing infrastructure. Therefore, a PC, laptop, supercomputer, and even a smartphone or a large network of computers, can be regarded as an independent/hybrid agent in the process. From another viewpoint, the human agent, besides his or her traditional roles (user or supervisor), can play a cooperative and interactive part, assisting machines in a broad range of activities, from collecting training data sets to performing more complicated tasks such as result validation and verification. Moreover, deciding how to distribute tasks between humans and machines is another important and determining consideration. Such a decision affects the bilateral human–machine cooperation as well as resource management. For example, inefficient separation of an intelligence-intensive task between agents may result in wasting machines’ time on what they are not very good at, while imposing complex and heavy computations (that take too long to complete) on humans. To avoid such flaws in the realization of CHI, two general task separation models are presented.
The first one is a homogeneous model in which the tasks are presented to the machines and the humans separately. Then, the results produced by each group are collected and integrated. In the final stage, the results generated by the machines and the humans are combined to produce the expected output (Figure 9).

[Figure 9. Homogeneous realization of CHI]
As a heterogeneous solution, the second model is based on using direct human–machine
collaboration in the form of hybrid intelligence from the very early steps (Figure 10).
[Figure 10. Heterogeneous realization of CHI]
As mentioned earlier, such a separation of tasks and duties comes in handy for managing available resources, costs, completion time and accuracy, as well as for striking a balance between efficiency and complexity. This is mainly because not all tasks are appropriate for all agents, and not all problems can be solved in an identical way.
The first model, in essence, is the appropriate choice for mostly resource-intensive tasks, or those in which the requirements and different aspects of the tasks are clearly distinct and separable. In such a situation, this kind of organization can drastically reduce unnecessary complexity. Accordingly, intrinsically hybrid and complicated processes are better organized based on the second realization model.
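A minimal sketch of the two realization models is given below; the solver, reviewer and merge functions are illustrative stand-ins for real machine components and human workers.

```python
# Minimal sketch of the two realization models: homogeneous (separate human
# and machine passes, merged at the end) vs heterogeneous (a hybrid pipeline
# per item). All callables below are illustrative stand-ins.
def homogeneous(items, machine_solve, human_solve, merge):
    machine_results = [machine_solve(i) for i in items]  # machines work alone
    human_results = [human_solve(i) for i in items]      # humans work alone
    return [merge(m, h) for m, h in zip(machine_results, human_results)]

def heterogeneous(items, machine_solve, human_review):
    results = []
    for item in items:
        draft = machine_solve(item)                # machine produces a draft
        results.append(human_review(item, draft))  # human refines it directly
    return results

if __name__ == "__main__":
    items = [1, 2, 3]
    machine = lambda x: x * 10
    human = lambda x: x * 10 + 1
    merge = lambda m, h: (m + h) / 2
    review = lambda item, draft: draft + 1
    print(homogeneous(items, machine, human, merge))
    print(heterogeneous(items, machine, review))
```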
4.3 Discussion
Generally, crowdsourcing-based and distributed processes introduce some intrinsic challenges and difficulties. Consequently, when it comes to synthesizing these processes into an organized, cooperative workflow, facing unprecedented and incidental challenges is inevitable. As a matter of fact, in spite of its presumed efficiency and applicability, the major challenge CHI struggles with is a cost-effective and reliable implementation. Although the authors are working to come up with such a solution, it seems that more effort and time are needed to reach that point. In this respect, to cope with such issues, some essential considerations [including general (1-4), human-centric (5-7) and machine-centric (8) ones] should be taken into account, as follows.
4.3.1 Problem formulation. CHI is basically a high-level solution for problems that are multidimensional, computationally expensive and usually large-scale. Such a problem, on its own, introduces several intrinsic complexities that may affect the effectiveness of the process. Therefore, there is a need for a preliminary analysis step to specify the different aspects of the problem, the category it belongs to, the required resources and so on. Such a pre-evaluation provides the necessary information to map the problem to the appropriate realization approach. As a matter of fact, the heart of a system constructed based on the proposed concept is the efficient separation of duties (tasks) among the participants, and this largely depends on the problem formulation process.
4.3.2 Distribution management. The distribution of tasks among agents and managing them is one of the most important and critical issues. Owing to the intrinsic heterogeneity of the participant agents in the process, managing and coordinating them so as to provide the most efficient performance possible is of the highest importance. Analyzing performance log records, real-time agent management facilities, as well as continuous monitoring and efficiency assessment, are among the major considerations in this regard.
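As a small illustration of such prioritization, the sketch below scores agents on the three factors named earlier (availability, active resources, performance history); the weights and the 0..1 normalization are illustrative assumptions, not values prescribed by the framework.

```python
# Minimal agent-prioritization sketch; weights (0.4/0.3/0.3) and the 0..1
# scales of each factor are illustrative assumptions.
def priority(agent):
    score = 0.4 if agent["available"] else 0.0        # availability
    score += 0.3 * min(agent["free_resources"], 1.0)  # active resources
    score += 0.3 * agent["history_quality"]           # performance history
    return score

agents = [
    {"name": "a1", "available": True,  "free_resources": 0.9, "history_quality": 0.70},
    {"name": "a2", "available": True,  "free_resources": 0.4, "history_quality": 0.95},
    {"name": "a3", "available": False, "free_resources": 1.0, "history_quality": 0.99},
]
ranked = sorted(agents, key=priority, reverse=True)
print([a["name"] for a in ranked])  # tasks are offered to the top agents first
```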
4.3.3 Interaction facilitation. The communication among the various agents involved in the process and their interaction with the control/management unit are other essential issues that should be taken into account. In addition to the demand for (possibly) some new communication protocols, there is an essential need for an interface (agent interaction modality), e.g. a task management system such as Amazon Mechanical Turk, through which agents can interact with the system, perform the assigned tasks and submit the results.
4.3.4 Availability management. Although the availability issue is a well-studied topic for distributed systems (Kondo et al., 2008; Rawat et al., 2016), dealing with similar problems in the context of the proposed concept is far different and more challenging. Specifically, there should be several strategies for the cases in which human participants fail to complete tasks in the scheduled time. Such problems are particularly associated with voluntary participation. The case becomes more critical if the unavailability occurs in hybrid (heterogeneous) processes, on the part of either participant party.
4.3.5 Participation engagement. In the context of crowdsourcing, attracting participation is an influential and challenging issue. Because relying on volunteer participants cannot guarantee the desired performance in most cases (Mao et al., 2013; Baruch et al., 2016), some strict, foolproof and reliable engagement strategies are needed. According to best practices (Pilz and Gewald, 2013; Khoi et al., 2018), monetary incentives can be convincing for most humans. So, when it comes to recruiting professional (expert) crowdworkers, higher costs (and even other incentives) may be incurred. Further, engaging non-human agents (i.e. machines) is even more difficult and troublesome. A probably workable suggestion is establishing a cloud-based market in the reverse direction, through which individuals could sell their own machines’ capabilities by enrolling them in available computational processes. They would then be paid per completed task.
4.3.6 Quality assurance. One of the most important concerns in human-mediated processes in general, and crowdsourcing in particular, is the quality (i.e. accuracy and preciseness) of performance (e.g. submitted results). Despite the efforts made to cope with this issue (Daniel et al., 2018), its unfavorable consequences can be severe in complicated and multidimensional projects. As an example, low-quality labels in a crowdsourced image annotation process have very limited negative effects in contrast to an inaccurate evaluation of a machine learning model. In addition to considering strict criteria for crowdworker recruitment, monitoring participants’ performance and adhering to rigorous task assignment standards are some practical steps to ensure the quality of the completed tasks.
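Two of these practical steps can be sketched in a few lines: resolving redundant answers by majority vote and screening workers against gold-standard questions. The 80% acceptance threshold is an illustrative assumption.

```python
# Minimal quality-assurance sketch: majority voting over redundant answers
# and gold-standard screening; the 0.8 threshold is an illustrative choice.
from collections import Counter

def majority_vote(answers):
    """Aggregate redundant answers for one item; ties return None."""
    counts = Counter(answers).most_common(2)
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return None                      # ambiguous: escalate for review
    return counts[0][0]

def passes_gold(worker_answers, gold):
    """Accept a worker only if enough known-answer items are correct."""
    correct = sum(worker_answers.get(q) == a for q, a in gold.items())
    return correct / len(gold) >= 0.8

if __name__ == "__main__":
    print(majority_vote(["cat", "cat", "dog"]))          # 'cat'
    gold = {"q1": "yes", "q2": "no"}
    print(passes_gold({"q1": "yes", "q2": "no"}, gold))  # True
```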
4.3.7 Adversarial intentions. Untruthful workers and those with adversarial intentions in mind (Difallah et al., 2012; Steinhardt et al., 2016) can threaten any crowdsourcing process. Hence, trust management (Yu et al., 2012; Feng et al., 2017) plays a key role in participant recruitment and task assignment processes, to deal with inaccurate and wrong submissions or even organized attacks aimed at affecting the process. Because there are situations in which some private information can be revealed (Boutsis and Kalogeraki, 2016), relying on untrusted workers may result in privacy breach and violation. Therefore, identifying malicious participants (both humans and machines), neutralizing wrongdoing and preserving privacy (for both information and participants) (Kajino et al., 2014) are a must.
4.3.8 Machine inefficiency. Owing to differences in hosting systems’ configuration, implementation, initial training data and so on, the efficiency of (even the same) machine learning algorithms may vary case by case. For this reason, various machines exhibit various levels of efficiency for different problems. In this regard, there should be some mechanisms to manage such unbalanced capabilities and performance – specifically in the case of hybrid collaboration – to make the computational process as reliable as possible.
5. Example scenarios
To explain the operation of a system that works based on the proposed concept, three motivating example scenarios are presented in this section. Applications of CHI are not limited to these cases; however, they can be regarded as inspirational instances from which to generalize the underlying concepts.
5.1 Collective hybrid intelligence for computing tasks
In this example, the given goal is to recognize similar images in a large data set and annotate them to obtain appropriate results. To participate in this location-independent (and mostly intelligence-intensive) task, there are no specific criteria for human agents other than their position in the task allocation queue. On the other side, being equipped with the OpenCV machine vision library is the specified criterion for the machines; such machines will be selected from the ready queue to participate. Although there are various methods for assigning tasks to the workers (agents), in the context of this example the tasks are divided into two groups: resource-intensive and cognitive ones. Thanks to developments in the fields of machine vision and image processing, finding similar images is, in general, not a difficult task. Therefore, these relatively time-consuming tasks, which do not need a high level of cognitive ability, will be assigned to the machines. Moreover, the machines are in charge of performing the initial automatic annotation. To guarantee the accuracy and efficiency of the annotations, for a specific image or set of images whose convergence rate and classification/annotation similarity fall below a determined threshold, the results will be assigned to humans for further consideration. Moreover, the output of the humans’ efforts, after analysis, may be leveraged as a gold standard to evaluate the machines’ performance. Such human-generated data can also be used to train the machines.
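A minimal sketch of this routing rule is given below; the annotator functions, labels and the 0.8 confidence threshold are illustrative stand-ins for a real machine-vision pipeline and crowdsourced annotators.

```python
# Minimal sketch of the scenario's routing rule: machines annotate first,
# low-confidence items are escalated to humans, and human answers are kept
# as gold data. All functions and the threshold are illustrative stand-ins.
THRESHOLD = 0.8

def machine_annotate(image):
    """Stand-in for an automatic annotator returning (label, confidence)."""
    return ("dog", 0.65) if "hard" in image else ("cat", 0.95)

def human_annotate(image):
    """Stand-in for a crowdsourced annotation request."""
    return "dog"

def annotate(images):
    results, gold_data = {}, []
    for img in images:
        label, confidence = machine_annotate(img)
        if confidence < THRESHOLD:          # below threshold: ask humans
            label = human_annotate(img)
            gold_data.append((img, label))  # reusable as training/gold data
        results[img] = label
    return results, gold_data

print(annotate(["easy1.jpg", "hard1.jpg"]))
```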
5.2 Collective hybrid intelligence for autonomous urban vehicle control
One of the most important issues in controlling autonomous vehicles is the need for an accurate, up-to-date and comprehensive map, or some advanced peripherals that provide environmental information in real time; see (Vochin et al., 2018; Bayat et al., 2018) and references therein. In this example, the application of CHI in providing such a specialized map is considered. To do so, on one side, human agents collect information from different streets of the city, including rush-hour situations, the safest paths, and detours at various times and under various conditions. Moreover, their own experiences and recommendations for navigation in such situations are of high importance. On the other side, traffic cameras and other urban monitoring sensors provide specialized machines (i.e. specific-purpose computers) with real-world information on different situations in the city. Alongside satellite and global map information, such machines, which leverage advanced algorithms, can come up with navigation patterns for the autonomous vehicles. Finally, fusing these two types of intelligence – which can be gathered asynchronously – can be used for the predictive control of such vehicles in the different streets of a crowded city at different times.
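The asynchronous fusion step can be sketched as a simple per-segment combination of the two sources; the congestion scores and the fusion weights below are illustrative assumptions, not calibrated values.

```python
# Minimal fusion sketch: human road reports and machine-derived traffic
# estimates are combined per road segment; weights are illustrative.
def fuse(human_reports, machine_estimates, w_human=0.4, w_machine=0.6):
    segments = set(human_reports) | set(machine_estimates)
    fused = {}
    for seg in segments:
        h = human_reports.get(seg)
        m = machine_estimates.get(seg)
        if h is None:
            fused[seg] = m                      # only machine data available
        elif m is None:
            fused[seg] = h                      # only human data available
        else:
            fused[seg] = w_human * h + w_machine * m
    return fused

human = {"main_st": 0.9, "oak_ave": 0.2}    # 0 = clear, 1 = congested
machine = {"main_st": 0.7, "elm_rd": 0.5}   # from traffic cameras/sensors
print(fuse(human, machine))                 # input for predictive routing
```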
5.3 Collective hybrid intelligence for human–robot cooperative surgery
Human–robot cooperative surgery is another context in which adhering to collective hybrid intelligence principles may improve workflow and performance. As an imaginary scenario, CHI can facilitate a complex operation as follows: depending on the case, previous experiences and information are gathered from experts. Such invaluable data will feed the automatic robotic arm(s) with the necessary information. In the case of any unprecedented issues or exceptions, if the (expert) system cannot find any reliable solution (recommendation), the experts who are monitoring the operation will present their ideas (suggestions) based on the situation and the machine’s feedback. Then, the integrated responses will be sent to the robot as collective advice. Needless to say, in this case all the mentioned processes should be performed in real time.
6. Conclusion
In this paper, the notion and general concept of CHI, a new complementary computing and automation concept, is proposed. The main idea behind Collective Hybrid Intelligence is leveraging humans’ and machines’ capabilities in a new manner to maximize the efficiency of human–machine cooperation and collaboration. The major building blocks of the presented framework are some well-experienced and successful approaches, namely distributed computing, collective human intelligence, human computation, hybrid intelligence and collective machine intelligence. To support the introduced idea, its different realization models, conceptual architecture and workflow are delineated and discussed. The authors anticipate that this concept can provide unprecedented functionality and performance for human–machine-cooperated processing and computing procedures in the near future. Meanwhile, it is emphasized that the idea proposed in this paper is in its early stages, and there are still several unanswered questions and challenges to be resolved. Specifically, the implementation of a real-world system based on the presented framework is left as future work by the authors.
Notes
1. www.wired.com/2017/05/what-deep-blue-tells-us-about-ai-in-2017/
2. www.scientificamerican.com/article/how-the-computer-beat-the-go-master/?redirect=1
3. www.dailymail.co.uk/sciencetech/article-6695515/Human-debate-champion-defeats-IBMs-smartest-AI-powered-machine.html
4. www.crowdsource.com/blog/2013/08/the-long-history-of-crowdsourcing-and-why-youre-just-now-hearing-about-it/
5. www.mturk.com/
6. www.figure-eight.com/
7. www.theguardian.com/technology/2019/mar/28/can-we-stop-robots-outsmarting-humanity-artificial-intelligence-singularity
8. https://medium.com/@lancengym/3-simple-reasons-why-ai-will-not-rule-man-yet-22d8069d8321
9. https://thenextweb.com/syndication/2019/01/02/ai-is-incredibly-smart-but-it-will-never-match-human-creativity/
References
Anderson, D.P., Cobb, J., Korpela, E., Lebofsky, M. and Werthimer, D. (2002), “SETI@home: an experiment in public-resource computing”, Communications of the ACM, Vol. 45 No. 11, pp. 56-61.
Atzori, L., Iera, A., Morabito, G. and Nitti, M. (2012), “The social internet of things (SIoT) – when social networks meet the internet of things: concept, architecture and network characterization”, Computer Networks, Vol. 56 No. 16, pp. 3594-3608.
Barbier, G., Zafarani, R., Gao, H., Fung, G. and Liu, H. (2012), “Maximizing benefits from crowdsourced
data”, Computational and Mathematical Organization Theory, Vol. 18 No. 3, pp. 257-279.
Bargiela, A. and Pedrycz, W. (2016), “Granular computing”, in Handbook on Computational Intelligence,
Vol. 1, Fuzzy Logic, Systems, Artificial Neural Networks, and Learning Systems, pp. 43-66.
Baruch, A., May, A. and Yu, D. (2016), “The motivations, enablers and barriers for voluntary participation
in an online crowdsourcing platform”, Computers in Human Behavior, Vol. 64, pp. 923-931.
Bayat, F., Najafinia, S. and Aliyari, M. (2018), “Mobile robots path planning: electrostatic potential field
approach”, Expert Systems with Applications, Vol. 100, pp. 68-78.
Beberg, A.L., Ensign, D.L., Jayachandran, G., Khaliq, S. and Pande, V.S. (2009), “Folding@home: lessons from eight years of volunteer distributed computing”, Proceedings of the International Parallel and Distributed Processing Symposium, Rome, Italy, pp. 1-8.
Bien, Z., Bang, W.C., Kim, D.Y. and Han, J.S. (2002), “Machine intelligence quotient: its measurements
and applications”, Fuzzy Sets and Systems, Vol. 127 No. 1, pp. 3-16.
Bonabeau, E. (2009), “Decisions 2.0: the power of collective intelligence”, MIT Sloan Management Review, Vol. 50 No. 2, Article 45.
Bonomi, F., Milito, R., Zhu, J. and Addepalli, S. (2012), “Fog computing and its role in the internet of things”,
Proceedings of the first edition of the MCC workshop on Mobile Cloud Computing, ACM; pp. 13-16.
Boutsis, I. and Kalogeraki, V. (2016), “Location privacy for crowdsourcing applications”, Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing, ACM, pp. 694-705.
Brabham, D.C. (2008), “Crowdsourcing as a model for problem solving: an introduction and cases”,
Convergence: The International Journal of Research into New Media Technologies, Vol. 14 No. 1,
pp. 75-90.
Breazeal, C., DePalma, N., Orkin, J., Chernova, S. and Jung, M. (2013), “Crowdsourcing human-robot interaction: new methods and system evaluation in a public environment”, Journal of Human-Robot Interaction, Vol. 2 No. 1, pp. 82-111.
Buchanan, L. and O’Connell, A. (2006), “A brief history of decision making”, Harvard Business Review,
Vol. 84 No. 1, pp. 32-40.
Bundy, A. (2017), “Smart machines are not a threat to humanity”, Communications of the Acm, Vol. 60
No. 2, pp. 40-42.
Burnstein, E. and Berbaum, M.L. (1983), “Stages in group decision making: the decomposition of
historical narratives”, Political Psychology, Vol. 4 No. 3, pp. 531-561.
Cassimatis, N.L. (2006), “A cognitive substrate for achieving human-level intelligence”, AI Magazine,
Vol. 27 No. 2, pp. 45-45.
Chang, J.C., Amershi, S. and Kamar, E. (2017), “Revolt: collaborative crowdsourcing for labeling
machine learning datasets”, Proceedings of the 2017 CHI Conference on Human Factors in
Computing Systems, ACM, pp. 2334-2346.
Chen, S.C.Y. and Shen, M.C. (2019), “The fourth industrial revolution and the development of artificial
intelligence”, Contemporary Issues in International Political Economy, Palgrave Macmillan,
Singapore, pp. 333-346.
Chien, A., Calder, B., Elbert, S. and Bhatia, K. (2003), “Entropia: architecture and performance of an enterprise
desktop grid system”, Journal of Parallel and Distributed Computing, Vol. 63 No. 5, pp. 597-610.
Chiu, C.M., Liang, T.P. and Turban, E. (2014), “What can crowdsourcing do for decision support?”,
Decision Support Systems, Vol. 65, pp. 40-49.
Copeland, B.J. (2000), “The modern history of computing”, in Zalta, E.N. (Ed.), The Stanford Encyclopedia of Philosophy, Winter 2017 Edition, available at: https://plato.stanford.edu/archives/win2017/entries/computing-history/
Dai, P. and Weld, D.S. (2011), “Artificial intelligence for artificial intelligence”, Proceedings of the Twenty-Fifth AAAI Conference on Artificial Intelligence.
Daniel, F., Kucherbaev, P., Cappiello, C., Benatallah, B. and Allahbakhsh, M. (2018), “Quality control in
crowdsourcing: a survey of quality attributes, assessment techniques, and assurance actions”,
ACM Computing Surveys, Vol. 51 No. 1, Article 7.
Decker, M. (2000), “Replacing human beings by robots. How to tackle that perspective by technology
assessment”, in Vision Assessment: Shaping Technology in 21st Century Society, Springer Berlin
Heidelberg, Berlin, Heidelberg, pp. 149-166.
Del Prado, G.M. (2015), “18 artificial intelligence researchers reveal the profound changes coming to our lives”, Business Insider, available at: www.businessinsider.com/researchers-predictions-future-artificial-intelligence-2015-10 (accessed 10 September 2018).
Deng, W., Chen, R., Gao, J., Song, Y. and Xu, J. (2012), “A novel parallel hybrid intelligence optimization
algorithm for a function approximation problem”, Computers and Mathematics with
Applications, Vol. 63 No. 1, pp. 325-336.
Difallah, D.E., Demartini, G. and Cudré-Mauroux, P. (2012), “Mechanical cheat: spamming schemes and
adversarial techniques on crowdsourcing platforms”, Proceedings of the First International
Workshop on Crowdsourcing Web Search, pp. 26-30.
Dobrev, D. (2012), “A definition of artificial intelligence”, arXiv preprint arXiv:1210.1568.
Dodge, S. and Karam, L. (2017), “A study and comparison of human and deep learning recognition
performance under visual distortions”, Proceedings of the 26th International Conference on
Computer Communication and Networks (ICCCN), IEEE, pp. 1-7.
Everitt, T. and Hutter, M. (2018), “Universal artificial intelligence”, In Foundations of Trusted
Autonomy, Springer, Cham; pp. 15-46.
Fabisiak, T. and Danilecki, A. (2017), “Browser-based harnessing of voluntary computational power”,
Foundations of Computing and Decision Sciences, Vol. 42 No. 1, pp. 3-42.
Feigenbaum, E.A. (2003), “Some challenges and grand challenges for computational intelligence”,
Journal of the ACM, Vol. 50 No. 1, pp. 32-40.
Feng, W., Yan, Z., Zhang, H., Zeng, K., Xiao, Y. and Hou, Y.T. (2017), “A survey on security, privacy, and
trust in mobile crowdsourcing”, IEEE Internet of Things Journal, Vol. 5 No. 4, pp. 2971-2992.
Fleuret, F., Li, T., Dubout, C., Wampler, E.K., Yantis, S. and Geman, D. (2011), “Comparing machines
and humans on a visual categorization test”, Proceedings of the National Academy of Sciences,
Vol. 108 No. 43, pp. 17621-17625.
Folds, D.J. (2016), “Human executive control of autonomous systems: a conceptual framework”,
Proceedings of IEEE International Symposium on Systems Engineering (ISSE), pp. 1-5.
Gifford, C.M. (2009), “Collective machine learning: team learning and classification in multi-agent
systems”, Doctoral dissertation, University of Kansas.
Guazzini, A., Vilone, D., Donati, C., Nardi, A. and Levnajić, Z. (2015), “Modeling crowdsourcing as collective problem solving”, Scientific Reports, Vol. 5, Article 16557.
Gurkaynak, G., Yilmaz, I. and Haksever, G. (2016), “Stifling artificial intelligence: human perils”,
Computer Law and Security Review, Vol. 32 No. 5, pp. 749-758.
Halmes, M. (2013), “Measurements of collective machine intelligence”, arXiv preprint arXiv:1306.6649.
Hernández-Orallo, J. and Minaya-Collado, N. (1998), “A formal definition of intelligence based on an
intensional variant of algorithmic complexity”, Proceedings of International Symposium of
Engineering of Intelligent Systems (EIS98), pp. 146-163.
Hibbard, B. (2001), “Super-intelligent machines”, ACM SIGGRAPH Computer Graphics, Vol. 35 No. 1,
pp. 11-13.
Howe, J. (2006), “The rise of crowdsourcing”, Wired Magazine, Vol. 14 No. 6, pp. 1-4.
Huang, Y., Vir Singh, P. and Srinivasan, K. (2014), “Crowdsourcing new product ideas under consumer
learning”, Management Science, Vol. 60 No. 9, pp. 2138-2159.
Huang, F.Y., Wang, K., An, Y. and Lasecki, W.S. (2017), “Towards hybrid intelligence for robotics”,
Proceedings of The 5th Edition of the Collective Intelligence Conference.
Ilyashenko, A.S., Lukashin, A.A., Zaborovsky, V.S. and Lukashin, A.A. (2017), “Algorithms for
planning resource-intensive computing tasks in a hybrid supercomputer environment for
simulating the characteristics of a quantum rotation sensor and performing engineering
calculations”, Automatic Control and Computer Sciences, Vol. 51 No. 6, pp. 426-434.
Jonathan, A., Ryden, M., Oh, K., Chandra, A. and Weissman, J. (2017), “Nebula: distributed edge cloud
for data intensive computing”, IEEE Transactions on Parallel and Distributed Systems, Vol. 28
No. 11, pp. 3229-3242.
Kajino, H., Arai, H. and Kashima, H. (2014), “Preserving worker privacy in crowdsourcing”, Data
Mining and Knowledge Discovery, Vol. 28 Nos 5/6, pp. 1314-1335.
Kamar, E. (2016), “Directions in hybrid intelligence: complementing AI systems with human
intelligence”, Proceedings of IJCAI, pp. 4070-4073.
Karimi, H.A. (Ed.) (2004), “Telegeoinformatics: Location-Based Computing and Services”, CRC Press.
Kennedy, J. (2006), “Swarm intelligence”, in Handbook of Nature-Inspired and Innovative Computing,
Springer, pp. 187-219.
Kephart, J.O. and Chess, D.M. (2003), “The vision of autonomic computing”, Computer, Vol. 36 No. 1, pp. 41-50.
Khoi, N., Casteleyn, S., Moradi, M. and Pebesma, E. (2018), “Do monetary incentives influence users’
behavior in participatory sensing?”, Sensors, Vol. 18 No. 5, Article 1426.
Klumpp, M., Hesenius, M., Meyer, O., Ruiner, C. and Gruhn, V. (2019), “Production logistics and human-computer interaction – state-of-the-art, challenges and requirements for the future”, The International Journal of Advanced Manufacturing Technology, doi: 10.1007/s00170-019-03785-0.
Kondo, D., Andrzejak, A. and Anderson, D.P. (2008), “On correlated availability in internet-distributed
systems”, Proceedings of the 2008 9th IEEE/ACM International Conference on Grid Computing,
IEEE, pp. 276-283.
Kube, C.R. and Zhang, H. (1992), “Collective robotic intelligence”, Proceedings of the Second
International Conference on Simulation of Adaptive Behavior, pp. 460-468.
Larson, S.M., Snow, C.D., Shirts, M. and Pande, V.S. (2009), "Folding@Home and Genome@Home:
Using distributed computing to tackle previously intractable problems in computational
biology”, arXiv preprint arXiv:0901.0866.
Legg, S. and Hutter, M. (2007), “Universal intelligence: a definition of machine intelligence”, Minds and
Machines, Vol. 17 No. 4, pp. 391-444.
Leimeister, J.M. (2010), “Collective intelligence”, Business and Information Systems Engineering, Vol. 2
No. 4, pp. 245-248.
Li, J., Pan, Z., Xu, J., Liang, B., Chen, Y. and Ji, W. (2018), “Quality-time-complexity universal intelligence
measurement”, International Journal of Crowd Science, Vol. 2 No. 2, pp. 99-107.
Litman, L., Robinson, J. and Abberbock, T. (2017), “TurkPrime. com: a versatile crowdsourcing data
acquisition platform for the behavioral sciences”, Behavior Research Methods, Vol. 49 No. 2,
pp. 433-442.
Liu, J., Pacitti, E., Valduriez, P. and Mattoso, M. (2015), “A survey of data-intensive scientific workflow
management”, Journal of Grid Computing, Vol. 13 No. 4, pp. 457-493.
Lu, H., Li, Y., Chen, M., Kim, H. and Serikawa, S. (2018), “Brain intelligence: go beyond artificial
intelligence”, Mobile Networks and Applications, Vol. 23 No. 2, pp. 368-375.
Luo, S., Xia, H., Yoshida, T. and Wang, Z. (2009), “Toward collective intelligence of online communities:
a primitive conceptual model”, Journal of Systems Science and Systems Engineering, Vol. 18
No. 2, pp. 203-221.
Mahoney, M.S. (1988), "The history of computing in the history of technology", IEEE Annals of the
History of Computing, Vol. 10 No. 2, pp. 113-125.
Makridakis, S. (2017), “The forthcoming artificial intelligence (AI) revolution: its impact on society and
firms”, Futures, Vol. 90, pp. 46-60.
-----
Maleszka, M. and Nguyen, N.T. (2015), “Integration computing and collective intelligence”, Expert
Systems with Applications, Vol. 42 No. 1, pp. 332-340.
Mao, A., Kamar, E., Chen, Y., Horvitz, E., Schwamb, M.E., Lintott, C.J. and Smith, A.M. (2013),
“Volunteering versus work for pay: incentives and tradeoffs in crowdsourcing”, Proceedings of
the First AAAI Conference on Human Computation and Crowdsourcing.
Maoz, Z. (1990), “Framing the national interest: the manipulation of foreign policy decisions in group
settings”, World Politics, Vol. 43 No. 1, pp. 77-110.
Miailhe, N. and Hodes, C. (2017), “The third age of artificial intelligence. Field actions science reports”,
The Journal of Field Actions, (Special Issue), Vol. 17, pp. 6-11.
Müller, V.C. and Bostrom, N. (2016), “Future progress in artificial intelligence: a survey of expert
opinion”, In Fundamental Issues of Artificial Intelligence, Springer, Cham, pp. 555-572.
Nilsson, N.J. (2005), “Human-level artificial intelligence? Be serious!”, AI Magazine, Vol. 26 No. 4,
pp. 68-68.
Nowak, S. and Rüger, S. (2010), "How reliable are annotations via crowdsourcing: a study about inter-annotator agreement for multi-label image annotation", Proceedings of the International
Conference on Multimedia Information Retrieval, ACM, pp. 557-566.
Nushi, B., Kamar, E. and Horvitz, E. (2018), “Towards accountable AI: hybrid human-machine analyses
for characterizing system failure”, Proceedings of the Sixth AAAI Conference on Human
Computation and Crowdsourcing.
Ofli, F., Meier, P., Imran, M., Castillo, C., Tuia, D., Rey, N., Briant, J., Millet, P., Reinhard, F., Parkan, M.
and Joost, S. (2016), “Combining human computing and machine learning to make sense of big
(aerial) data for disaster response”, Big Data, Vol. 4 No. 1, pp. 47-59.
Pan, Y. (2016), “Heading toward artificial intelligence 2.0”, Engineering, Vol. 2 No. 4, pp. 409-413.
Pedreira, M.M. and Grigoras, C. (2017), “Scalable Global Grid catalogue for LHC Run3 and beyond”,
arXiv preprint arXiv:1704.05272.
Peer, E., Brandimarte, L., Samat, S. and Acquisti, A. (2017), “Beyond the Turk: alternative platforms for
crowdsourcing behavioral research”, Journal of Experimental Social Psychology, Vol. 70, pp. 153-163.
Pilz, D. and Gewald, H. (2013), “Does money matter? Motivational factors for participation in paid-and
non-profit-crowdsourcing communities”, Wirtschaftsinformatik, Vol. 37, pp. 73-82.
Quinn, A.J. and Bederson, B.B. (2011), “Human computation: a survey and taxonomy of a growing
field”, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, ACM,
pp. 1403-1412.
Rawat, A.S., Papailiopoulos, D.S., Dimakis, A.G. and Vishwanath, S. (2016), “Locality and availability
in distributed storage”, IEEE Transactions on Information Theory, Vol. 62 No. 8, pp. 4481-4493.
Retelny, D., Robaszkiewicz, S., To, A., Lasecki, W.S., Patel, J., Rahmati, N., Doshi, T., Valentine, M. and
Bernstein, M.S. (2014), “Expert crowdsourcing with flash teams”, Proceedings of the 27th annual
ACM symposium on User Interface Software and Technology, ACM, pp. 75-85.
Russell, S. (2017), “Artificial intelligence: the future is superintelligent”, Nature, Vol. 548 No. 7669,
pp. 520-522.
Sadashiv, N. and Kumar, S.D. (2011), “Cluster, grid and cloud computing: a detailed comparison”,
Proceedings of the 6th International Conference on Computer Science and Education (ICCSE),
IEEE, pp. 477-482.
Sarathy, V. (2018), "Real world problem-solving", Frontiers in Human Neuroscience, Vol. 12, PMC6028615.
Schemmann, B., Herrmann, A.M., Chappin, M.M. and Heimeriks, G.J. (2016), “Crowdsourcing ideas:
involving ordinary users in the ideation phase of new product development”, Research Policy,
Vol. 45 No. 6, pp. 1145-1154.
Shi, W., Cao, J., Zhang, Q., Li, Y. and Xu, L. (2016), “Edge computing: vision and challenges”, IEEE
Internet of Things Journal, Vol. 3 No. 5, pp. 637-646.
-----
Singh, S.P., Nayyar, A., Kaur, H. and Singla, A. (2019), “Dynamic task scheduling using balanced VM
allocation policy for fog computing platforms”, Scalable Computing: Practice and Experience,
Vol. 20 No. 2, pp. 433-456.
Stabinger, S., Rodríguez-Sánchez, A. and Piater, J. (2016), “25 Years of CNNs: Can we compare to human
abstraction capabilities?”, Proceedings of International Conference on Artificial Neural Networks,
Springer, Cham, pp. 380-387.
Stahl, F., Gaber, M.M., Bramer, M. and Philip, S.Y. (2010), “Pocket data mining: towards collaborative
data mining in mobile computing environments”, Proceedings of 22nd IEEE International
Conference on Tools with Artificial Intelligence (ICTAI), IEEE, Vol. 2, pp. 323-330.
Steinhardt, J., Valiant, G. and Charikar, M. (2016), “Avoiding imposters and delinquents: adversarial
crowdsourcing and peer prediction”, In Advances in Neural Information Processing Systems,
pp. 4439-4447.
Vaughan, J.W. (2017), “Making better use of the crowd: how crowdsourcing can advance machine
learning research”, Journal of Machine Learning Research, Vol. 18, pp. 1-46.
Verhulst, S.G. (2018), “Where and when AI and CI meet: exploring the intersection of artificial and
collective intelligence towards the goal of innovating how we govern”, AI and Society, Vol. 33
No. 2, pp. 293-297.
Vij, D. and Aggarwal, N. (2018), “Smartphone based traffic state detection using acoustic analysis and
crowdsourcing”, Applied Acoustics, Vol. 138, pp. 80-91.
Vochin, M., Zoican, S. and Borcoci, E. (2018), “Intelligent vehicle navigation system with assistance and
alerting capabilities”, Concurrency and Computation: Practice and Experience, p. e4402,
[available at: https://doi.org/10.1002/cpe.4402](https://doi.org/10.1002/cpe.4402)
Von Ahn, L. (2008), “Human computation”, Proceedings of the 2008 IEEE 24th International
Conference on Data Engineering, pp. 1-2.
Wang, F.Y., Carley, K.M., Zeng, D. and Mao, W. (2007), “Social computing: from social informatics to
social intelligence”, IEEE Intelligent Systems, Vol. 22 No. 2.
Wang, W., Johnston, B. and Williams, M.A. (2012), “Social networking for robots to share knowledge,
skills and know-how”, Proceedings of International Conference on Social Robotics, Springer,
Berlin, Heidelberg, pp. 418-427.
Weyer, J., Fink, R.D. and Adelt, F. (2015), “Human–machine cooperation in smart cars: an empirical
investigation of the loss-of-control thesis”, Safety Science, Vol. 72, pp. 199-208.
Wiedermann, J. (2012), “Is there something beyond AI? Frequently emerging, but seldom answered
questions about artificial Super-Intelligence”, Proceedings of the International Conference
Beyond AI, pp. 76-86.
Wightman, D. (2010), “Crowdsourcing human-based computation”, Proceedings of the 6th Nordic
Conference on Human-Computer Interaction: Extending Boundaries, ACM, pp. 551-560.
Willis, C.G., Law, E., Williams, A.C., Franzone, B.F., Bernardos, R., Bruno, L., Hopkins, C., Schorn, C., Weber,
E., Park, D.S. and Davis, C.C. (2017), “CrowdCurio: an online crowdsourcing platform to facilitate
climate change studies using herbarium specimens”, New Phytologist, Vol. 215 No. 1, pp. 479-488.
Yampolskiy, R.V. (2015), “On the limits of recursively self-improving AGI”, Proceedings of
International Conference on Artificial General Intelligence, Springer, Cham, pp. 394-403.
Yampolskiy, R.V. and El-Barkouky, A. (2011), “Wisdom of artificial crowds algorithm for solving
NP-hard problems”, International Journal of Bio-Inspired Computation, Vol. 3 No. 6, pp. 358-369.
Yang, J., Drake, T., Damianou, A. and Maarek, Y. (2018), “Leveraging crowdsourcing data for deep
active learning an application: learning intents in Alexa”, Proceedings of the 2018 World Wide
Web Conference on World Wide Web, pp. 23-32.
Yu, C., Chai, Y. and Liu, Y. (2018), “Literature review on collective intelligence: a crowd science
perspective”, International Journal of Crowd Science, Vol. 2 No. 1, pp. 64-73.
-----
Yu, H., Shen, Z., Miao, C. and An, B. (2012), “Challenges and opportunities for trust management in
crowdsourcing”, Proceedings of the IEEE/WIC/ACM International Joint Conferences on Web
Intelligence and Intelligent Agent Technology, IEEE, Vol. 2, pp. 486-493.
Zadeh, L.A. (2008), “Toward human level machine intelligence-is it achievable? The need for a
paradigm shift”, IEEE Computational Intelligence Magazine, Vol. 3 No. 3.
Zanakis, S.H., Theofanides, S., Kontaratos, A.N. and Tassios, T.P. (2003), “Ancient Greeks’ practices and
contributions in public and entrepreneurship decision making”, Interfaces, Vol. 33 No. 6, pp. 72-88.
Zhao, X., Li, J., Han, R., Xie, B. and Ou, J. (2019), “GroundEye: a mobile crowdsourcing structure seismic
response monitoring system based on smartphone”, in Health Monitoring of Structural and
Biological Systems XIII, Vol. 10972, International Society for Optics and Photonics, available at:
[https://doi.org/10.1117/12.2514905](https://doi.org/10.1117/12.2514905)
Zhong, J., Tang, K. and Zhou, Z.H. (2015), “Active learning from crowds with unsure option”,
Proceedings of International Joint Conferences on Artificial Intelligence, pp. 1061-1068.
Further reading
Moret-Bonillo, V. (2018), “Emerging technologies in artificial intelligence: quantum rule-based
systems”, Progress in Artificial Intelligence, Vol. 7 No. 2, pp. 155-166.
Corresponding author
[Farhad Bayat can be contacted at: bayat.farhad@znu.ac.ir](mailto:bayat.farhad@znu.ac.ir)
-----
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.1108/IJCS-03-2019-0012?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1108/IJCS-03-2019-0012, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "GOLD",
"url": "https://www.emerald.com/insight/content/doi/10.1108/IJCS-03-2019-0012/full/pdf?title=collective-hybrid-intelligence-towards-a-conceptual-framework"
}
| 2019
|
[
"JournalArticle",
"Review"
] | true
| 2019-08-30T00:00:00
|
[
{
"paperId": "9fc93a7ae7fabaabe1923a5914a29761dead1551",
"title": "Future Progress in Artificial Intelligence: A Survey of Expert Opinion"
},
{
"paperId": "cd20a344091737e018ae1066f9d8476ec64dba73",
"title": "Intelligent vehicle navigation system with assistance and alerting capabilities"
},
{
"paperId": "5249d9d8fd75488f14d65725bad51d1c8647cf3c",
"title": "Production logistics and human-computer interaction—state-of-the-art, challenges and requirements for the future"
},
{
"paperId": "3528a3dd6c4ac1f9d154fcc1b5f4149ce470ffd1",
"title": "Dynamic Task Scheduling using Balanced VM Allocation Policy for Fog Computing Platforms"
},
{
"paperId": "122466eb0c138e4774ab595a0dc4780847242150",
"title": "GroundEye: a mobile crowdsourcing structure seismic response monitoring system based on smartphone"
},
{
"paperId": "258b3190707d25118208c8a1f2c56c75d063e67d",
"title": "Smartphone based traffic state detection using acoustic analysis and crowdsourcing"
},
{
"paperId": "4690255ca7d4679a3fa10eaa93fb0c1e7c41b669",
"title": "Real World Problem-Solving"
},
{
"paperId": "c280c95c17194de83e5234cca3586cd567ecd47c",
"title": "Towards Accountable AI: Hybrid Human-Machine Analyses for Characterizing System Failure"
},
{
"paperId": "99f4c9115d4e086aa00515057ead867b7698e68a",
"title": "Quality-time-complexity universal intelligence measurement"
},
{
"paperId": "79d576150b5d2da3d83d26e5c7c231ba6d4ea8a6",
"title": "A Survey on Security, Privacy, and Trust in Mobile Crowdsourcing"
},
{
"paperId": "63fcc106f9a4053a79a1c2c1fd34211d827d1194",
"title": "Mobile robots path planning: Electrostatic potential field approach"
},
{
"paperId": "baa9c67319222b4e7a74fb3130a24552a2d66c89",
"title": "Do Monetary Incentives Influence Users’ Behavior in Participatory Sensing?"
},
{
"paperId": "2c08f8190d5dcd0278ce5f94ac39d0462c794508",
"title": "Literature review on collective intelligence: a crowd science perspective"
},
{
"paperId": "0bf3f2c880333a8e7fe1ca430ab47ab349cb1021",
"title": "Leveraging Crowdsourcing Data for Deep Active Learning An Application: Learning Intents in Alexa"
},
{
"paperId": "10b416132cf8c19c35ef3abb0acefb0352f48ad8",
"title": "Where and when AI and CI meet: exploring the intersection of artificial and collective intelligence towards the goal of innovating how we govern"
},
{
"paperId": "48fb8ee6a786e13c3be21b4cd441b22dbc434f16",
"title": "Emerging technologies in artificial intelligence: quantum rule-based systems"
},
{
"paperId": "72fdae43bb36c2470d04710e7192e4fe1093d14d",
"title": "The Third Age of Artificial Intelligence"
},
{
"paperId": "6c0a71998932322d4d1658f0f1a4393e43e41352",
"title": "Algorithms for planning Resource-Intensive computing tasks in a hybrid supercomputer environment for simulating the characteristics of a quantum rotation sensor and performing engineering calculations"
},
{
"paperId": "5fbf7af3b44172a1b071c8422ff7c5c87c83f066",
"title": "Nebula: Distributed Edge Cloud for Data Intensive Computing"
},
{
"paperId": "fc0678e48ccc540c37c8cc3ceae0c9c935e60645",
"title": "Artificial intelligence: The future is superintelligent"
},
{
"paperId": "9de3e1ee313a42d48bc340cad0c302570951fa67",
"title": "CrowdCurio: an online crowdsourcing platform to facilitate climate change studies using herbarium specimens."
},
{
"paperId": "32754d496768685cfcc7fbdb323d87db2b33a09c",
"title": "Brain Intelligence: Go beyond Artificial Intelligence"
},
{
"paperId": "2c21a89a91e6920ab9ac9a40af05055b865d5666",
"title": "The Forthcoming Artificial Intelligence (AI) Revolution: Its Impact on Society and Firms"
},
{
"paperId": "89b4111f14cdf342188f96d3962581fd0afa042f",
"title": "A Study and Comparison of Human and Deep Learning Recognition Performance under Visual Distortions"
},
{
"paperId": "05f39431776d2b8858132b2118b68a6a68bf4f14",
"title": "Revolt: Collaborative Crowdsourcing for Labeling Machine Learning Datasets"
},
{
"paperId": "4c9f06e5e4eb93f16a739821be472364af66b427",
"title": "Scalable global grid catalogue for Run3 and beyond"
},
{
"paperId": "a06707bf0d51d5a8c3980c7395b525b419729915",
"title": "Browser-based Harnessing of Voluntary Computational Power"
},
{
"paperId": "e2847253f50ad7da6fc93de8eaa64c28c679978d",
"title": "Smart machines are not a threat to humanity"
},
{
"paperId": "668bfea0aaa0565f58aa76d44d700e8a3a576f52",
"title": "Heading toward Artificial Intelligence 2.0"
},
{
"paperId": "308e362e7893e8aa8a676e02a4068364a1bb23ff",
"title": "The motivations, enablers and barriers for voluntary participation in an online crowdsourcing platform"
},
{
"paperId": "cff5cc26544f7ac057da867d7335ce3c4b9af461",
"title": "Stifling artificial intelligence: Human perils"
},
{
"paperId": "c5824c3dcb42962cc1ece4ec4d2347afb6b06f8e",
"title": "Human executive control of autonomous systems: A conceptual framework"
},
{
"paperId": "4cedc20e0414991487aced16ea068a6bb28b35f0",
"title": "Location privacy for crowdsourcing applications"
},
{
"paperId": "71c4c255c1fbca2b25bd668abf9312e18b9ef1cf",
"title": "25 Years of CNNs: Can We Compare to Human Abstraction Capabilities?"
},
{
"paperId": "545f142a246db9e65c65ec81f619d3ef093cca64",
"title": "Directions in Hybrid Intelligence: Complementing AI Systems with Human Intelligence"
},
{
"paperId": "94ddef73ba921ed9e087537c97854fbf1f36aa2e",
"title": "Crowdsourcing ideas: Involving ordinary users in the ideation phase of new product development"
},
{
"paperId": "e2d78d2800910060696c3a376043b88b92aa5665",
"title": "Avoiding Imposters and Delinquents: Adversarial Crowdsourcing and Peer Prediction"
},
{
"paperId": "e3a442aa24e5df7e6b2a25e21e75c4c325f9eedf",
"title": "Edge Computing: Vision and Challenges"
},
{
"paperId": "ca9f3a04a1b6524105f51676abdd198b118a7d18",
"title": "Beyond the Turk: Alternative Platforms for Crowdsourcing Behavioral Research"
},
{
"paperId": "2f35b305b6c56046b631b0cdb6d2f08e4ee577a7",
"title": "TurkPrime.com: A versatile crowdsourcing data acquisition platform for the behavioral sciences"
},
{
"paperId": "f8cd9e3da23d01ee7487be29ca251b6ad20889af",
"title": "Combining Human Computing and Machine Learning to Make Sense of Big (Aerial) Data for Disaster Response"
},
{
"paperId": "9ffc2368a492669f1c330e3d787c04878d9c8ab7",
"title": "Harder, Better, Faster, Stronger Convergence Rates for Least-Squares Regression"
},
{
"paperId": "068eb1a12766a5e8e408e650a3e1d4b4ec7ab42d",
"title": "Active Learning from Crowds with Unsure Option"
},
{
"paperId": "b2012ac7cd3d78d4c49b20fab53f7fd4b6b63b50",
"title": "On the Limits of Recursively Self-Improving AGI"
},
{
"paperId": "c8fdfab5996752372e21e7961c78556788551373",
"title": "Modeling crowdsourcing as collective problem solving"
},
{
"paperId": "9ab91fdd78695cd11e5f4f9dfc0152bc48705d29",
"title": "A Survey of Data-Intensive Scientific Workflow Management"
},
{
"paperId": "c1657703fc9da202679cd2cde91f47e3b41c211a",
"title": "Human–Machine Cooperation in Smart Cars: An Empirical Investigation of the Loss-of-Control Thesis"
},
{
"paperId": "ff48b661bd15089c1e0bd58bd9b708234113d35e",
"title": "Expert crowdsourcing with flash teams"
},
{
"paperId": "91fd67a6644cfb8745831dd9f72a4156210d4926",
"title": "Crowdsourcing New Product Ideas Under Consumer Learning"
},
{
"paperId": "155e48e2159d08c0f810f418cb9f70f5f4f8ce8e",
"title": "What can crowdsourcing do for decision support?"
},
{
"paperId": "191026ff5ced0363dd9396f6253724a1a8ec8822",
"title": "Preserving worker privacy in crowdsourcing"
},
{
"paperId": "be29c6a7e23aebc1672bc5c47731d14b59944404",
"title": "Locality and Availability in Distributed Storage"
},
{
"paperId": "7c56e633b6c6f1ab16684a65dad75755a9f47b9e",
"title": "Volunteering Versus Work for Pay: Incentives and Tradeoffs in Crowdsourcing"
},
{
"paperId": "455b6d27c195d6f2308e31b47b2d41a9d491c5bf",
"title": "Measurements of collective machine intelligence"
},
{
"paperId": "9561c635a6bc27c4940aca854958cac26e8717f0",
"title": "Crowdsourcing human-robot interaction"
},
{
"paperId": "d1fd2c8d8c921658af8de93c6903abe449705dd2",
"title": "Challenges and Opportunities for Trust Management in Crowdsourcing"
},
{
"paperId": "9cfe5fe748e0900dbd99d8d9195e317c5d4e5b4c",
"title": "The Social Internet of Things (SIoT) - When social networks meet the Internet of Things: Concept, architecture and network characterization"
},
{
"paperId": "6b10723dbe6d37c836284945b3607dc740eea107",
"title": "Social Networking for Robots to Share Knowledge, Skills and Know-How"
},
{
"paperId": "d4d9a80708d592ab2e48cff3508791b4e142fe52",
"title": "Super-intelligent machines"
},
{
"paperId": "407d88cd832ba8ae80a325d7074fec8d0ada6422",
"title": "Maximizing benefits from crowdsourced data"
},
{
"paperId": "2f47de50306aef67e17d124d86a4df1a768476cd",
"title": "Wisdom of artificial crowds algorithm for solving NP-hard problems"
},
{
"paperId": "2ef26af206f7ebbbd90a7d68d77a7639c8dfc6b6",
"title": "Comparing machines and humans on a visual categorization test"
},
{
"paperId": "6019408d642776763fd0d90c17ea3ddeb41f09a0",
"title": "Cluster, grid and cloud computing: A detailed comparison"
},
{
"paperId": "77d899375144a418dea1f6a7bb9abacdb58e4c37",
"title": "Human computation: a survey and taxonomy of a growing field"
},
{
"paperId": "1cc44baf36bb40a99084a7df9468e86b6f5e2691",
"title": "Pocket Data Mining: Towards Collaborative Data Mining in Mobile Computing Environments"
},
{
"paperId": "69e9b4d8033545d390bc6bb5a912c34523d1fdcc",
"title": "Crowdsourcing human-based computation"
},
{
"paperId": "4e227d4919caa1780ec2cd67e834430037e294bb",
"title": "How reliable are annotations via crowdsourcing: a study about inter-annotator agreement for multi-label image annotation"
},
{
"paperId": "1c37285eb427f9d5bae45dc0624cb5ee78abe827",
"title": "Folding@home: Lessons from eight years of volunteer distributed computing"
},
{
"paperId": "f537128e57f798c2345c162830a45200a22861e4",
"title": "Human computation"
},
{
"paperId": "ed0497d39c66b2a3494e005fc8704a71bbaa9101",
"title": "Toward collective intelligence of online communities: A primitive conceptual model"
},
{
"paperId": "090fda5b623ee2ed7cf2492857cf2c991ee7c8ad",
"title": "Folding@Home and Genome@Home: Using distributed computing to tackle previously intractable problem"
},
{
"paperId": "e395797903f5a48c0e268a50b6c23f32861d9d30",
"title": "On correlated availability in Internet-distributed systems"
},
{
"paperId": "c51b6605c2d2ae8e2d6d47ed75da8efa3dc1ad2b",
"title": "Crowdsourcing as a Model for Problem Solving"
},
{
"paperId": "8e8ec502208f29ee9f78ded19226578e027ecd16",
"title": "Universal Intelligence: A Definition of Machine Intelligence"
},
{
"paperId": "8b2f16ecdfd564f31a1ac1a899ed4f5a8828beff",
"title": "A Cognitive Substrate for Achieving Human-Level Intelligence"
},
{
"paperId": "d27523090504a707c982ae6045755d928e2f59f7",
"title": "Human-Level Artificial Intelligence? Be Serious!"
},
{
"paperId": "30a64bdf778b8f561af9ae589e822c2c800920b1",
"title": "Universal artificial intelligence"
},
{
"paperId": "5549a758987bbb87efd228637876146a6741498e",
"title": "Telegeoinformatics: Location-based Computing and Services"
},
{
"paperId": "5915a6f9e0df52c0dafde6a01f3b117fd0675d06",
"title": "Ancient Greeks' Practices and Contributions in Public and Entrepreneurship Decision Making"
},
{
"paperId": "fad235dd7655c2ff1a51eb90a4e7089f71a44f41",
"title": "Newsvendors Tackle the Newsvendor Problem"
},
{
"paperId": "9bfd3642299420fb8a874d543e2d9f138f527a69",
"title": "Entropia: architecture and performance of an enterprise desktop grid system"
},
{
"paperId": "6663809bf91d253972e2f4408d58c399b42151c1",
"title": "SETI@home: an experiment in public-resource computing"
},
{
"paperId": "8916ec41cab019e9023ccae12525861ef557ff8d",
"title": "Virtual humans for validating maintenance procedures"
},
{
"paperId": "5482018d2109665b0b090d505c44e2790dcb66f6",
"title": "Machine intelligence quotient: its measurements and applications"
},
{
"paperId": "2eaa54e926a95e2fd0c0edfe6dbb921fc426faf3",
"title": "Framing the National Interest: The Manipulation of Foreign Policy Decisions in Group Settings"
},
{
"paperId": "af2a1f1d18ba7e5476cad0b8550da175072655c0",
"title": "The History of Computing in the History of Technology"
},
{
"paperId": "6816899edd04b958f2d5968f26c0abeb9d3e254e",
"title": "Stages in Group Decision Making: The Decomposition of Historical Narratives"
},
{
"paperId": "69b0a222fde51a052c2613f83b4aa98fad00c814",
"title": "The Fourth Industrial Revolution and the Development of Artificial Intelligence"
},
{
"paperId": null,
"title": "Quality control in crowdsourcing: a survey of quality attributes, assessment techniques, and assurance actions"
},
{
"paperId": "7e6ce5bd878f24839ba38fe8fb91f30aab3cd863",
"title": "Making Better Use of the Crowd: How Crowdsourcing Can Advance Machine Learning Research"
},
{
"paperId": null,
"title": "Towards hybrid intelligence for robotics"
},
{
"paperId": null,
"title": "“ 18 Arti fi cial intelligence researchers reveal the profound changes coming to our lives ”"
},
{
"paperId": "ceb2dfed3921ced660d65a0a022e48bd4858d8b0",
"title": "Collective Intelligence"
},
{
"paperId": "14c724d303202086dc4805f7545b7433c18d5f30",
"title": "Does Money Matter? Motivational Factors for Participation in Paid- and Non-Profit-Crowdsourcing Communities"
},
{
"paperId": "135975122db060b3a425a86ef8b7ddf74a9aa8c7",
"title": "Integration Computing and Collective Intelligence"
},
{
"paperId": "275dc12c439e06deec7b9694ec0220cae22a9093",
"title": "A novel parallel hybrid intelligence optimization algorithm for a function approximation problem"
},
{
"paperId": "80a560f8e3a6bba85051ff8a418ed80f5cabd33f",
"title": "Mechanical Cheat: Spamming Schemes and Adversarial Techniques on Crowdsourcing Platforms"
},
{
"paperId": "918a6d57f779b8f8269bc0d1489dcd014afb8120",
"title": "Swarm Intelligence"
},
{
"paperId": null,
"title": "Fog computing and its role in the internet of things"
},
{
"paperId": null,
"title": "Is there something beyond AI? Frequently emerging, but seldom answered questions about artificial Super-Intelligence"
},
{
"paperId": "63eb4c86a1a881d269436f90ac02b679865c9fe7",
"title": "Collective Machine Learning: Team Learning and Classification in Multi-Agent Systems"
},
{
"paperId": "65f0033ef461d287978987e6f54bf35f5102396c",
"title": "Decisions 2.0: the power of collective intelligence"
},
{
"paperId": "d77f8db2b1ec89205da707ebf28ed1dc3366b86f",
"title": "Granular Computing"
},
{
"paperId": "fc3b768e023bc756a352a530faf8d1ef8924e445",
"title": "A brief history of decision making."
},
{
"paperId": "5185c810d1851dae0de7aee97d51dc59bbd1d2a9",
"title": "Human-Level Artificial Intelligence ? Be Serious !"
},
{
"paperId": "f20efbc02ac331848c1e48b5dfb165f34b1c1bfe",
"title": "The Modern History of Computing"
},
{
"paperId": null,
"title": "“ The rise of crowdsourcing ”"
},
{
"paperId": "ee7384f216fe8272f9677b03147b986574e53372",
"title": "Some challenges and grand challenges for computational intelligence"
},
{
"paperId": "6e51d8e92729366721de4b886ce3c26b124c9e58",
"title": "A Formal Definition of Intelligence Based on an Intensional Variant of Algorithmic Complexity"
},
{
"paperId": "bff65e9da04d89e9248feb89110b1cc5fa1d99ca",
"title": "The Vision of Autonomic Computing"
},
{
"paperId": "19f540f91131113a321f95af296b4cc4e1e32297",
"title": "Replacing Human Beings by Robots. How to Tackle that Perspective by Technology Assessment"
},
{
"paperId": "dd4a7f3a5373bea8329d09c2d53ca6fa2dae74ba",
"title": "Collective robotic intelligence"
},
{
"paperId": "ff11ca3ecafbd7690c96f58d6399725a825da6fc",
"title": "Past and Present"
},
{
"paperId": "ce7e6aebb9d6174452d083f3cbbf10fb8a3ddcd3",
"title": "August 2008 | Ieee Computational Intelligence Magazine 11 toward Human Level Machine Intelligence— Is It Achievable? the Need for a Paradigm Shift"
},
{
"paperId": null,
"title": "www.theguardian.com/technology/2019/mar/28/can-we-stop-robots-outsmarting-humanity-arti fi cial-intelligence-singularity"
},
{
"paperId": null,
"title": "www.crowdsource.com/blog/2013/08/the-long-history-of-crowdsourcing-and-why-youre-just-now-hearing-about-it/"
},
{
"paperId": null,
"title": "www.dailymail.co.uk/sciencetech/article-6695515/Human-debate-champion-defeats-IBMs-smartest-AI-powered-machine.html"
}
] | 17,759
|
en
|
[
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/01407e4317cba968ae41a1d029b154d8e50a0f08
|
[] | 0.913731
|
Next Generation Middleware Technology for Mobile Computing
|
01407e4317cba968ae41a1d029b154d8e50a0f08
|
[
{
"authorId": "66339021",
"name": "B. Darsana"
},
{
"authorId": "2343710208",
"name": "Karabi Konar"
},
{
"authorId": "2247386153",
"name": "Sr. Lecturer"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": null,
"id": null,
"issn": null,
"name": null,
"type": null,
"url": null
}
| null |
## International Journal of Computer and Communication Technology
##### Volume 2 Issue 2 Article 1
April 2011
## Next Generation Middleware Technology for Mobile Computing
##### B. Darsana
Sr. Lecturer, Dept of ISE, The Oxford College of Engineering, Bangalore – 560068, Karnataka,
darsana@gmail.com
##### Karabi Konar
Sr. Lecturer, Dept of ISE, The Oxford College of Engineering, Bangalore – 560068, Karnataka,
karabi@gmail.com
##### Recommended Citation
Darsana, B. and Konar, Karabi (2011) "Next Generation Middleware Technology for Mobile Computing,"
International Journal of Computer and Communication Technology: Vol. 2 : Iss. 2, Article 1.
DOI: 10.47893/IJCCT.2011.1074
[Available at: https://www.interscience.in/ijcct/vol2/iss2/1](https://www.interscience.in/ijcct/vol2/iss2/1?utm_source=www.interscience.in%2Fijcct%2Fvol2%2Fiss2%2F1&utm_medium=PDF&utm_campaign=PDFCoverPages)
-----
# Next Generation Middleware Technology for Mobile Computing
#### B. Darsana, Karabi Konar Sr. Lecturer, Dept of ISE, The Oxford College of Engineering, Bangalore – 560068, Karnataka
**Abstract-Current advances in portable devices, wireless**
**_technologies, and distributed systems have created a_**
**_mobile computing environment that is characterized by_**
**_a large scale of dynamism. Diversities in network_**
**_connectivity,_** **_platform_** **_capability,_** **_and_** **_resource_**
**_availability can significantly affect the application_**
**_performance. Traditional middleware systems are not_**
**_prepared to offer proper support for addressing the_**
**_dynamic aspects of mobile systems. Modern distributed_**
**_applications need a middleware that is capable of_**
**_adapting to environment changes and that supports the_**
**_required level of quality of service._**
**_This paper presents the experience of several research_**
**_projects related to next generation middleware systems._**
**_We first indicate the major challenges in mobile_**
**_computing systems and try to identify the main_**
**_requirements for mobile middleware systems. The_**
**_different categories of mobile middleware technologies_**
**_are reviewed and their strength and weakness are_**
**_analyzed._**
#### Key Words: dynamism, platform capability, quality
of service, resource availability, network connectivity.
#### 1. Introduction
The availability of lightweight, portable computers
and wireless technologies has created a new class of
applications called mobile applications. These
applications often run on scarce resource platforms such
as personal digital assistants, notebooks, and mobile
phones, each of which have limited CPU power, memory,
and battery life. They are usually connected to wireless
links, which are characterized by lower bandwidths,
higher error rates, and more frequent disconnections.
Most distributed applications and services were designed
with the assumption that the terminals were powerful,
stationary and connected to fixed networks. Conventional
middleware technologies thus have focused on masking
out the problems of heterogeneity and distribution to
facilitate the development of distributed systems. They
allow the application developers to focus on application
functionality rather than on dealing explicitly with
distribution issues.
Under the highly variable computing environment
conditions that characterize mobile platforms, it is
believed that existing traditional middleware systems are
not capable of providing adequate support for the mobile
wireless computing environment. There is a great demand
for designing modern middleware systems that can
support new requirements imposed by mobility. This
paper surveys the most relevant mobile middleware
systems and the goals that still need to be achieved.
#### 2. Mobile Architectural Requirements
Middleware is an enabling layer of software that
resides between the application program and the
networked layer of heterogeneous platforms and
protocols. It decouples applications from any
dependencies on the plumbing layer that consists of
heterogeneous operating systems, hardware platforms and
communication protocols.
Middleware plays a vital role in hiding the complexity
of distributed applications. These applications typically
operate in an environment that may include
heterogeneous computer architectures, operating systems,
network protocols, and databases. It is unpleasant for an
application developer to deal with such heterogeneous
“plumbing”.
Middleware’s primary role is to conceal this
complexity from developers by deploying an isolated
layer of APIs[6]. This layer bridges the gap between
-----
application program and platform dependency.
Middleware is defined as follows by Linthicum.
##### 2.1 The Limitations of Mobile Computing
There are at least three common factors that affect the
design of the middleware infrastructure required for
mobile computing: mobile devices, network connection,
and mobility, which vary from one to another in terms of
resource availability. Devices like laptops can offer fast
CPUs and large amount of RAM and disk space while
others like pocket PCs and phones usually have scarce
resources. Hence, middleware should be designed to
achieve optimal resource utilization. Network connections
in mobile scenarios are characterized by limited bandwidth,
high error rate, higher cost, and frequent disconnections
due to power limitations, available spectrum, and
mobility.
Due to these limitations, conventional middleware
technologies designed for fixed distributed systems are
not prepared to support mobile systems. They target a
static execution platform where the host location is fixed,
the network bandwidth does not fluctuate, and services
are well defined. We next identify a number of important
requirements that must be provided by middleware for
mobile computing.
##### 2.2 Analyzing the Requirements for Mobile Computing
During the system lifetime, the application behavior
may need to be altered due to dynamic changes in
infrastructure facilities, such as the availability of
particular services.
Dynamic reconfiguration is thus required and can be
achieved by adding a new behavior or changing an
existing one at system runtime. Dynamic changes in
system behavior and operating context at runtime can
trigger re-evaluation and reallocation of resources.
Middleware supporting dynamic reconfiguration needs to
detect changes in available resources and either reallocate
resources, or notify the application to adapt to the
changes.
Adaptability[9] is another new requirement; it allows
applications to run efficiently and predictably
under a broader range of conditions. Through adaptation a
system can adapt its behavior instead of providing a
uniform interface in all situations. The middleware needs
to monitor the resource supply/demand, compute
adaptation decisions, and notify applications about
changes.
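To make this monitor/decide/notify cycle concrete, the sketch below shows one way such a loop could look. It is a minimal illustration, not the API of any middleware discussed here; the class and callback names (ResourceMonitor, application_policy) and the bandwidth threshold are our own assumptions.

```python
class ResourceMonitor:
    """Minimal sketch of the monitor/decide/notify adaptation loop.

    All names and the bandwidth threshold are hypothetical; a real
    middleware would sample actual OS-level resource figures.
    """

    def __init__(self, low_bandwidth_kbps=64):
        self.low_bandwidth_kbps = low_bandwidth_kbps
        self.listeners = []  # application callbacks to notify

    def subscribe(self, callback):
        self.listeners.append(callback)

    def check(self, bandwidth_kbps):
        # Decide: derive an adaptation mode from the observed supply.
        mode = "degraded" if bandwidth_kbps < self.low_bandwidth_kbps else "full"
        # Notify: the application, not the middleware, applies the policy.
        for notify in self.listeners:
            notify(mode, bandwidth_kbps)


def application_policy(mode, bandwidth_kbps):
    if mode == "degraded":
        print(f"{bandwidth_kbps} kbps: switching to text-only content")
    else:
        print(f"{bandwidth_kbps} kbps: serving full multimedia content")


monitor = ResourceMonitor()
monitor.subscribe(application_policy)
for sample in (512, 48, 256):  # simulated bandwidth readings
    monitor.check(sample)
```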
Asynchronous interaction tackles the problems of high
latency and disconnected operations that can arise with
other interaction models. A client using asynchronous
communication primitives issues a request and continues
operating and then collects the result at any appropriate
time. The client and server components do not need to be
running concurrently to communicate with each other. A
client may issue a request for a service, disconnect from
the network, and collect the result later on. This type of
interaction style reduces the network bandwidth
consumption, achieves decoupling of client and server,
and elevates system scalability.
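A minimal sketch of this interaction style follows, using a local thread pool as a stand-in for a remote server. The lookup_timetable function and its latency are invented for the example; a real mobile client would queue the request over an unreliable wireless link rather than a local thread.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def lookup_timetable(station):
    # Hypothetical service; stands in for a remote server reachable
    # over an unreliable wireless link.
    time.sleep(1)  # simulate network and processing latency
    return f"next train from {station}: 14:05"

executor = ThreadPoolExecutor(max_workers=1)

# Issue the request and continue working; the client is not blocked
# and could even disconnect before the reply arrives.
pending = executor.submit(lookup_timetable, "Bangalore")
print("request issued; client keeps working...")

# ...later, possibly after reconnecting, collect the result.
print(pending.result())
```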
Context-awareness is an important requirement to
build an effective and efficient adaptive system. The
context of a mobile unit is usually determined by its
current location which, in turn, defines the environment
where the computation associated with the unit is
performed. The context may include device
characteristics, user’s activities, services, as well as other
resources of the system. Context-awareness is used by
several systems; however, few systems sense execution
context other than location. The system performance can
be increased when execution context is disclosed to the
upper layer that assists middleware in making the right
decision.
Lightweight middleware needs to be considered when
constructing middleware for mobile computing. Current
middleware platforms like CORBA[7] are too heavy to
run on devices with limited resources. By default, they
contain a wide range of optional features and all possible
functionalities, many of which will be unused by most
applications. For example, invoking a method on a remote
object involves only client side functionality and either
Dynamic or Static Invocation Interface. Most of the
existing ORB implementations provide either a single or
two separate libraries for the client and server sides that
contains all functionality. This forces the client program
to be glued with the entire functionality without having a
choice to select a specific subset of this functionality.
#### 3. Mobile Middleware Technologies
This section sheds some light on the different types of
mobile middleware technologies. We start by introducing
a classification that allows us to contrast and evaluate the
different categories. Among the middleware systems we
reviewed, we have identified four categories of
middleware. Each category aims to support at least one of
the above requirements imposed by mobility. These
categories are reflective middleware, tuple space, context-aware middleware, and event-based middleware, each of
which attempts to address the previous requirements
using different approaches. The following table illustrates
how various requirements are met by the different
categories.
-----
Table 3.1. Requirements vs. Categories

|Requirements|Reflective|Tuple Space|Context Aware|Event Based|
|---|---|---|---|---|
|Synchronous/connection based|X||X||
|Asynchronous/connectionless||X||X|
|Re-configuration|X||||
|Adaptation|X||X||
|Awareness|X||X||
|Light weight||||X|
The table shows the relation between requirements and categories. Reflective and context-aware middleware address synchronous, connection-based interaction, while tuple space and event-based middleware address asynchronous, connectionless interaction. Only reflective middleware supports re-configuration, event-based middleware is the best fit for the lightweight requirement, and both reflective and context-aware middleware meet the awareness and adaptation requirements.
##### 3.1 Reflective Middleware
The reflection technique was initially used in the field
of programming languages to support the design of more
open and extensible languages. Reflection is also applied
in other fields including operating systems and more
recently distributed systems. The principle of reflection
enables a program to access, reason about and change its
own behavior.
Smith defined the concept of reflection in the
following quote: “In as much as a computational process
can be constructed to reason about an external world in
virtue of comprising an ingredient process (interpreter)
formally manipulating representations of that world, so
too a computational process could be made to reason
about itself in virtue of comprising an ingredient process
(interpreter) formally manipulating representations of its
own operations and structures”.
A reflective system consists of two levels referred to as
meta-level and base-level[11]. The former performs
computation on the objects residing in the lower levels.
The latter performs computation on the application
domain entities. The reflection approach supports the
inspection and adaptation of the underlying
implementation (the base-level) at run time. A reflective
system provides a meta-object protocol (meta-interface)
to define the services available at the meta-level. The
meta-level can be accessed via a concept of reification.
Reification means exposing some hidden aspect of the
internal representation so that it can be accessed by
the application (the base-level). The implementation
openness offers a straightforward mechanism to insert
some behavior to monitor or alter the internal behavior of
the platform. This enables the application to be in charge
of inspecting and adapting the middleware behavior based
on its own needs. Thus, a lightweight middleware with a
minimal set of functionality is achieved to run on mobile
systems.
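The base-level/meta-level split can be illustrated with a short sketch. The MetaObject and Channel classes below are hypothetical and stand in for no particular system; they only show how a meta-object can reify (here, log) the invocations made on a base-level object without changing its interface.

```python
class Channel:
    """Base level: an ordinary application-domain object."""

    def send(self, msg):
        return f"sent: {msg}"


class MetaObject:
    """Hypothetical meta level: inspects calls made on the base level."""

    def __init__(self, base):
        self._base = base
        self._call_log = []  # reified view of the object's behaviour

    def __getattr__(self, name):
        attr = getattr(self._base, name)
        if not callable(attr):
            return attr

        def intercepted(*args, **kwargs):
            self._call_log.append(name)   # inspection hook; an adaptive
            return attr(*args, **kwargs)  # system could also rewrite the call

        return intercepted


chan = MetaObject(Channel())
print(chan.send("hello"))   # behaves exactly like the base object
print(chan._call_log)       # but its internal behaviour is now exposed
```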
The main motivation of this approach is to make the
middleware more adaptable to its environment and better
able to cope with changes. Examples of middleware
systems that adopted the concept of reflection are
OpenCorba, Open-ORB (Object Request Broker),
DynamicTAO, FlexiNet, and Globe.
##### 3.2 Tuple Space Middleware
Communication in a wireless environment is
characterized by frequent disconnections and limited
bandwidth. Communication models such as message
passing, RPC, or RMI[6] all have the drawback of tight
coupling. This means that a sender has to know the exact
identity and address of a receiver. Also, the sender has to
wait for the receiver to be ready for exchanging
information (synchronization paradigm). In distributed
open systems this tends to be too restrictive. A decoupled
and opportunistic style of computing is thus required.
Computing is expected to proceed even in the presence of
disconnection and to exploit connectivity whenever it
becomes available.
One solution is the concept of tuple space, which was
initially introduced by Gelernter as part of the Linda
coordination language. Tuple Space systems[10] have
proved their ability for facilitating communication in
wireless settings. In general, a tuple space is a globally
shared, associatively addressed memory space that is used
by processes to communicate. A tuple space system can
be realized as a repository of tuples, which are basically a
vector of typed values or fields. Client processes create
tuples and place them in the tuple space using a write
operation. Also, they can concurrently access tuples using
read or take operations. Most tuple space systems support
both versions of the tuple retrieval operations, blocking
and non-blocking.
A template, which is similar to a tuple, is used to
match the contents of tuples in the tuple space during the
retrieval operations. A template matches a tuple if they
have an equal number of fields and each template field
matches the corresponding tuple field. This form of
communication fits well in mobile setting where logical
and physical mobility is involved.
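A toy tuple space illustrating write, blocking take, and wildcard template matching might look as follows. The TupleSpace class and its field conventions are our own and are far simpler than Linda, JavaSpaces, or TSpaces; None is used here as the wildcard field.

```python
import threading

class TupleSpace:
    """Toy Linda-style space: write, blocking take, wildcard templates."""

    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()

    @staticmethod
    def _matches(template, tup):
        # A template matches a tuple of equal length whose fields are
        # equal, with None acting as a wildcard field.
        return len(template) == len(tup) and all(
            t is None or t == v for t, v in zip(template, tup))

    def write(self, tup):
        with self._cond:
            self._tuples.append(tup)
            self._cond.notify_all()

    def take(self, template, block=True):
        with self._cond:
            while True:
                for tup in self._tuples:
                    if self._matches(template, tup):
                        self._tuples.remove(tup)
                        return tup
                if not block:
                    return None    # non-blocking variant
                self._cond.wait()  # blocking variant of retrieval


space = TupleSpace()
space.write(("temperature", "room-12", 21.5))
# Wildcards retrieve any tuple whose first field is "temperature".
print(space.take(("temperature", None, None)))
```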
-----
The tuple space communication model, such as the
one used in Linda, provides great flexibility for modeling
concurrent processes. This approach has also been extended
with distributed tuple spaces.
##### 3.3 Context-Aware Middleware
Mobile systems run in an extremely dynamic
environment. The execution context changes frequently
due to the user’s mobility. Mobile hosts often roam
around different areas, and services that are available
before disconnecting may not be available after
reconnecting. Also, the bandwidth and connectivity
quality may quickly alter based on the mobile host
movements and their locations.
The application developers cannot predict all the
possible execution contexts that allow the application to
know how to react in every scenario. The middleware has
to expose the context information to the application to
make it aware of the dynamic changes in execution
environment.
The application then instructs the middleware on how
to adapt its own behavior in order to achieve the best
quality of service. Many research groups gave special
attention in particular to location awareness. For example,
location information was exploited to provide travelers
directional guidance, to discover neighboring services,
and to broadcast messages to users in a specific area.
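As a rough illustration of how context can be disclosed to the upper layer, the sketch below pushes location changes to an application callback that re-discovers services. The LocationContext class, the area names, and the service table are all invented for the example.

```python
class LocationContext:
    """Hypothetical context source pushing location changes upward."""

    def __init__(self):
        self._listeners = []
        self._location = None

    def add_listener(self, callback):
        self._listeners.append(callback)

    def update(self, location):
        # Disclose the new execution context to the upper layer.
        if location != self._location:
            self._location = location
            for notify in self._listeners:
                notify(location)


# Invented service table standing in for a discovery service.
SERVICES_BY_AREA = {"airport": ["flight-info", "taxi"], "campus": ["printing"]}

def rediscover_services(area):
    print(f"entered {area}; available services: {SERVICES_BY_AREA.get(area, [])}")

ctx = LocationContext()
ctx.add_listener(rediscover_services)
ctx.update("airport")
ctx.update("campus")
```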
Most location-aware systems depend on the underlying
network operating system to obtain location information
and generate a suitable format to be used by the system.
The heterogeneity of coordination information is not
supported and hence different positioning systems are
required to deal with different sensor technologies, such
as the Global Positioning System (GPS) outdoors, and
infrared and radio frequency indoors.
MobiPADS is a middleware system for mobile
environments. Its principal entities are Mobilets, which
provide services and which can migrate between
different MobiPADS environments.
##### 3.4 Event-Based Middleware
Invocation-based middleware systems such as
CORBA (Common Object Request Broker Architecture)
or Java RMI (Remote Method Invocation)[7] are useful
abstractions for building distributed systems. The
communication model for these platforms is based on a
request/reply pattern: an object remains passive until a
principal performs an operation on it. This kind of model
is adequate for a local area network (LAN) with a small
number of clients and servers, but it does not scale well to
large networks like the Internet.
The main reason is that the request/reply model only
supports one-to-one communication and imposes a tight
coupling between the involved participants because of the
synchronous paradigm. This model is also unsuitable for
unreliable and dynamic environment.
The event-based communication paradigm is a
possible alternative for dealing with large-scale systems.
Event notification is the basic communication paradigm
that is used by event-based middleware systems. Events
contain data that describes a request or message. They are
propagated from the sending components to the receiver
components. In order to receive events, clients
(subscribers) have to express (subscribe) their interest in
receiving particular events. Once clients have subscribed,
servers (publishers) publish events, which will be sent to
all interested subscribers.
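A stripped-down broker makes this decoupling visible. The EventBroker class and topic names below are illustrative only; they omit everything a real event service provides, such as subscription filtering languages, routing, and reliability.

```python
from collections import defaultdict

class EventBroker:
    """Toy publish/subscribe broker; names are illustrative only."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, event):
        # One publish reaches every subscriber of the topic: a decoupled,
        # many-to-many exchange where publishers never learn who listens.
        for callback in self._subscribers[topic]:
            callback(event)


broker = EventBroker()
broker.subscribe("stock/ACME", lambda e: print("trader A sees", e))
broker.subscribe("stock/ACME", lambda e: print("trader B sees", e))
broker.publish("stock/ACME", {"price": 101.3})
```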
This paradigm hence offers a decoupled, many-to-many communication model between clients and servers.
Asynchronous notification of events is also supported
naturally. There are several examples of middleware
based on the event-based systems, but not limited to,
Hermes, CEA, STEAM, JEDI and ToPSS.
##### 3.5 Other Middleware Solutions
There are many other middleware solutions that have
been proposed particularly to target mobility aspects.
Unpredictable disconnections are one of the major
mobility issues that have been addressed by several
systems. Systems like Coda, its successor Odyssey,
Bayou, and xmiddle have used data replication to increase
data availability to mobile users. This allows users access
to replicas and to continue their tasks whenever the
disconnection operations take place.
Each system uses different mechanisms to guarantee
the ultimate consistency among the replicas. These
mechanisms include the support for discovery of
inconsistent data as well as data reconciliation. Services
discovery is another well-know problem introduced by
user mobility. In a static environment, new services can
be easily discovered by asking service providers to
register with a well-known location service. In a mobile
computing environment, the situation is different since
mobile hosts often roam around various areas.
Services that were present before disconnecting from
the network may not exist after reconnecting. Jini and
Ninja Service Discovery Service (SDS) are examples of
systems that support dynamic service discovery, Bayou is
-----
the system that supports disconnected operations and
Jini is the system that supports discovery of services.
#### 4. Analysis of next-generation middleware
This section summarizes the previous discussion on
next-generation middleware with an emphasis on lessons
learned from investigating the proposed solutions
presented in the previous section. We particularly aim to
highlight to what extent these solutions are suitable for
mobile settings.
It is a major challenge to solve all problems of mobile
distributed systems. This is true due to the high degree of
dynamism in mobile environments. Current middleware
platforms like CORBA cannot successfully run in such an
environment. Hence, there is an urgent need for new
solutions that support particular application requirements
such as dynamic reconfiguration, context-awareness, and
adaptation. We believe that the reflective approach
provides a solid base for building next generation
middleware platforms and overcomes the limitations of
the current middleware technologies.
More specifically, the architecture follows a white box
philosophy that provides principled and comprehensive
access to internal details. It can also decrease problems of
maintaining integrity since each object/interface is
attached to a single meta-object at a time. Therefore, any
modification to a meta-object can only affect a single
object.
Some reflective systems support higher level of
reflection since they can add or remove methods from
objects and classes dynamically and even alter the class of
an object at run time. In contrast, others concentrate on a
simpler reflective paradigm to achieve a better
performance. Their reflective mechanisms are not part of
the usual flow of control and only invoked when required.
Reflective middleware like FlexiNet and DynamicTAO
are built around the concept of object-oriented and
component frameworks respectively.
Component Frameworks (CFs) were initially defined
by Szyperski as “collection of rules and interfaces that
govern the interaction of a set of components plugged into
them” There are several advantages of using CFs over the
object-oriented approach. The uses of CFs are not limited
to a particular programming language and there is no
inheritance relation between components and framework.
Hence, components and CFs can be developed
independently, distributed in binary form, and combined
at run time. We have noticed that the issue of consistent
dynamic reconfiguration is still under research. There is
some work in this area that has focused on developing
reconfiguration models and algorithms that enforce well
defined consistency rules while minimizing system
disturbance.
Performance is another issue that remains a matter for
further investigation. All of the reflective systems
presented previously impose a heavy computational load
that would cause significant performance degradation on
mobile devices. Tuple-space systems exploit the
decoupled nature of tuple spaces for supporting
disconnected operations in a natural manner. By default
they offer an asynchronous interaction paradigm that
appears to be more appropriate for dealing with
intermittent connection of mobile devices, as is often the
case when a server is not in reach or a mobile client
requires to voluntary disconnect to save battery and
bandwidth.
By using a tuple-space approach, we can decouple the
client and server components in time and space. In other
words, they do not need to be connected at the same time
and in the same place. Tuple-space systems support the
concept of a space of spaces that offers the ability to join
objects into appropriate spaces for ease of access. This
opens up the possibility of constructing a dynamic super
space environment to allow participating spaces to join or
leave at arbitrary time. The ability to use multiple spaces
will elevate the overall throughput of the system.
Throughout our study, we have noticed that
JavaSpaces and TSpaces typically require at least
60 Mbytes of RAM. This is not affordable by most
handheld devices available on the market nowadays.
Context-Aware systems provide mobile applications
with the necessary knowledge about the execution context
in order to allow applications to adapt to dynamic changes
in mobile host and network condition. The execution
context includes but is not limited to: mobile user
location, mobile device characteristics, network condition,
and user activity (i.e., driving or sitting in a room). The
context information is typically disclosed in a convenient
format to the applications that instruct the middleware
system to apply a certain adaptation policy. To our
knowledge, most context-aware applications are only
focusing on a user’s location while other things of interest
are also mobile and changing.
We believe that a reflective approach may improve the
development of context-aware services and applications.
In general, a reflective system provides mobile
applications with context information that they need to
optimize middleware and their own behaviors. One
reflection solution has suggested the use of metadata and
reflection to support context-aware applications.
Traditional, invocation-based middleware like
CORBA follow a request/reply communication style,
which does not scale well to large networks like the
Internet.
Event-based paradigms present an interesting style that
supports the development of large-scale distributed
systems. In such a system, clients first announce their
-----
interest in receiving specific events and then servers
broadcast events to all interested clients. Hence, the
event-based model achieves a highly decoupled system
and many-to-many interaction style between clients and
servers.
We believe that not a lot of work has managed to
merge the publish/subscribe communication approach
with event-based middleware systems. Most existing
systems do not combine traditional middleware
functionality (i.e., security, QoS, transactions, reliability,
access control, etc.) with the event-based paradigm. We
feel that event-based middleware can be more successful
if such functionality is provided in the future.
Event-based systems also do not integrate well with
object-oriented programming languages due to the major
mismatch between the concept of objects and events.
Events are viewed as untyped collection of data
(attribute/value pairs) whereas current programming
languages only support typed objects. Hence, events
should support data typing in order to be treated as
objects. In addition, the developers are responsible for
handling the low-level event transmission issues. Current
publish/subscribe systems are restricted to certain
application scenarios such as instant messaging and stock
quote dissemination. This indicates that such systems are
not designed as general middleware platforms. From this
discussion, we can realize that until this moment there is
no middleware system that can fully support the
requirements for mobile applications. Several solutions
have considered one aspect or another; however, the door
for further research is still wide open.
### 5. Conclusion
The proliferation and development of wireless
technologies and portable appliances have paved the way
for a new computing paradigm called mobile computing.
Mobile computing software is expected to operate in
environments that are highly dynamic with respect to
resource availability and network connectivity.
Traditional middleware products, like CORBA and Java
RMI, are based on the assumptions that applications in
distributed systems will run in a static environment;
hence, they fail to provide the appropriate support for
mobile applications. This gives a strong incentive to many
researchers to develop modern middleware that supports
and facilitates the implementation of mobile applications.
We discussed the state-of-the-art of middleware for
mobile computing. We presented common characteristics
and a set of requirements for mobile computing
middleware, which allows us to better understand the
relationship between the existing bodies of work on next-generation middleware. We explained the reasons behind
the failure of traditional middleware systems for
supporting mobile settings. We also identified, illustrated,
and comparatively discussed four middleware classes:
reflective middleware, tuple space, context-aware
middleware, and event-based middleware. Beside these
four categories, a pool of other middleware solutions has
been developed to address specific mobility issues.
However, none of these middleware systems support all
the requirements. We concluded each category with a
simple qualitative evaluation and made a number of
observations related to some issues that need further
investigations.
|
{
"disclaimer": "Notice: The following paper fields have been elided by the publisher: {'abstract', 'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.47893/ijcct.2011.1074?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.47893/ijcct.2011.1074, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "CLOSED",
"url": ""
}
| 2010
|
[
"Review"
] | false
| null |
[] | 6,842
|
|
en
|
[
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/01416385e99636f6fdff4d317f449a3df426cd4e
|
[] | 0.861302
|
STAFL: Staleness-Tolerant Asynchronous Federated Learning on Non-iid Dataset
|
01416385e99636f6fdff4d317f449a3df426cd4e
|
Electronics
|
[
{
"authorId": "2075371711",
"name": "Feng Zhu"
},
{
"authorId": "2051697771",
"name": "Jiangshan Hao"
},
{
"authorId": "2144174054",
"name": "Zhong Chen"
},
{
"authorId": "2109811281",
"name": "Yanchao Zhao"
},
{
"authorId": "2146711040",
"name": "Bing Chen"
},
{
"authorId": "2248421",
"name": "Xiaoyang Tan"
}
] |
{
"alternate_issns": [
"2079-9292",
"0883-4989"
],
"alternate_names": null,
"alternate_urls": [
"http://www.e-helvetica.nb.admin.ch/directAccess?callnumber=bel-247562",
"http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:ch:bel-247562",
"https://www.mdpi.com/journal/electronics"
],
"id": "ccd8e532-73c6-414f-bc91-271bbb2933e2",
"issn": "1450-5843",
"name": "Electronics",
"type": "journal",
"url": "http://www.electronics.etfbl.net/"
}
|
With the development of the Internet of Things, edge computing applications are paying more and more attention to privacy and real-time. Federated learning, a promising machine learning method that can protect user privacy, has begun to be widely studied. However, traditional synchronous federated learning methods are easily affected by stragglers, and non-independent and identically distributed data sets will also reduce the convergence speed. In this paper, we propose an asynchronous federated learning method, STAFL, where users can upload their updates at any time and the server will immediately aggregate the updates and return the latest global model. Secondly, STAFL will judge the user’s data distribution according to the user’s update and dynamically change the aggregation parameters according to the user’s network weight and staleness to minimize the impact of non-independent and identically distributed data sets on asynchronous updates. The experimental results show that our method performs better on non-independent and identically distributed data sets than existing methods.
|
_Article_
## STAFL: Staleness-Tolerant Asynchronous Federated Learning on Non-iid Dataset
**Feng Zhu** [1,2], **Jiangshan Hao** [1,]*, **Zhong Chen** [1,2], **Yanchao Zhao** [1,]*, **Bing Chen** [1] and **Xiaoyang Tan** [1]
1 College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China; smc@nuaa.edu.cn (F.Z.); 736531683@nuaa.edu.cn (Z.C.); cb_china@nuaa.edu.cn (B.C.); x.tan@nuaa.edu.cn (X.T.)
2 Nanjing Research Institute of Electronics Engineering, Nanjing 210007, China
* Correspondence: jiangshanhao@nuaa.edu.cn (J.H.); yczhao@nuaa.edu.cn (Y.Z.)
**Abstract: With the development of the Internet of Things, edge computing applications are paying**
more and more attention to privacy and real-time. Federated learning, a promising machine learning
method that can protect user privacy, has begun to be widely studied. However, traditional synchronous federated learning methods are easily affected by stragglers, and non-independent and
identically distributed data sets will also reduce the convergence speed. In this paper, we propose an
asynchronous federated learning method, STAFL, where users can upload their updates at any time
and the server will immediately aggregate the updates and return the latest global model. Secondly,
STAFL will judge the user’s data distribution according to the user’s update and dynamically change
the aggregation parameters according to the user’s network weight and staleness to minimize the
impact of non-independent and identically distributed data sets on asynchronous updates. The
experimental results show that our method performs better on non-independent and identically
distributed data sets than existing methods.
**Citation: Zhu, F.; Hao, J.; Chen, Z.; Zhao, Y.; Chen, B.; Tan, X. STAFL: Staleness-Tolerant Asynchronous Federated Learning on Non-iid Dataset. Electronics 2022, 11, 314. [https://doi.org/10.3390/electronics11030314](https://doi.org/10.3390/electronics11030314)**
Academic Editor: Claus Pahl
Received: 22 December 2021; Accepted: 17 January 2022; Published: 20 January 2022
**Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.**
**Copyright: © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons [Attribution (CC BY) license](https://creativecommons.org/licenses/by/4.0/).**
**Keywords: federated learning; edge computing; weight divergence; non-iid**
**1. Introduction**
Mobile phones, wearable devices, and autonomous vehicles are just a few of the
modern distributed networks that are generating a wealth of data each day. Due to the
growing computational power of these devices, coupled with concerns over transmitting
private information, it is increasingly attractive to store data locally and push network
computation to the edge.
The concept of edge computing is not a new one. Indeed, computing simple queries
across distributed, low-powered devices is a decades-old area of research that has been
explored under the purview of query processing in sensor networks, computing at the
edge, and fog computing [1]. Recent works have also considered training machine learning
models centrally but serving and storing them locally; for example, this is a common
approach in mobile user modeling and personalization [2].
As the storage and computational capabilities of the devices within edge computing
grow, it is possible to leverage enhanced local resources on each device. However, privacy concerns over transmitting raw data require user-generated data to remain on local
devices. This has led to a growing interest in federated learning, which explores training
statistical models directly on remote devices. In federated learning, data will not appear
in other places except the data source, and each edge device cooperates to train a shared
global model.
The existing research on federated learning is primarily about synchronous communication. Researchers use the method of client selection to reduce the negative impact of
stragglers on the global model in federated learning. However, synchronous communication will waste computing resources, since different devices have different computing
capabilities. In addition, the server cannot efficiently use the data on all clients, because only a
small number of clients participate in the model training of each global round. Compared with
synchronous communication, asynchronous federated learning will increase the burden on
the server and consume a large amount of communication resources; however, it can significantly improve training efficiency. Training efficiency is crucial in Internet of Things tasks,
since they increasingly focus on real-time performance. Most of the existing asynchronous algorithms
are semi-asynchronous strategies. These methods allow clients to upload updates independently;
however, they still need to synchronize their information. Another challenge of federated
learning is the heterogeneity of data. Since federated learning does not allow data to
be transmitted in the communication link, data heterogeneity cannot simply be resolved by
traditional methods such as scheduling data. The convergence speed of the asynchronous
federated learning mechanism will be severely affected if each user has the same aggregation parameter. Moreover, as a type of distributed machine learning, more than 90% of the
information exchange in federated learning is redundant. Compressing the updates sent from
the clients can reduce the consumption of communication resources in asynchronous federated
learning and reduce its overall convergence time.
In this paper, we design a staleness-tolerant asynchronous federated learning method,
STAFL. STAFL can asynchronously receive and aggregate updates from the clients and reduce the
impact of stragglers on the global model by adding penalty parameters. Secondly, STAFL
will adjust the aggregation parameters according to the heterogeneity of the user’s data and
dynamically change the aggregation factor for each epoch by maintaining a local model
parameter list. Finally, this paper uses the method of bit quantization. We compress the
updates uploaded by the user and the global information sent by the server. Specifically,
the contributions of this paper are as follows:
- We have designed a federated learning architecture for asynchronous communication,
STAFL. After the user completes the local iteration, it can update the local model
information at any time. The server immediately aggregates the update and delivers
the latest global model to the user when receiving the update information, thereby
reducing the waste of computing resources;
- We use the weight divergence of the local model to group users and maintain a list
of users’ updated information on the server-side. The longer the list, the more users
are considered when aggregating. Different aggregation weights are assigned based
on the arrival time of the user update, the amount of data of the user, and the group
to which the user belongs to reduce the negative impact of non-independent and
identically distributed data sets on asynchronous aggregation;
- We conducted many experiments to prove the effectiveness of the proposed method
and used the communication compression method to reduce the communication cost
of asynchronous federated learning. The experimental results show that STAFL has a
significant advantage in convergence speed compared with other methods.
The organizational structure of our paper is as follows: in Section 1, we introduce
the unique challenges and difficulties of asynchronous federated learning, and propose
corresponding solutions. Section 2 describes the related work of this paper. We will
introduce the design ideas and a detailed description of the STAFL system in Section 3.
In order to verify the effectiveness of the proposed method, we show the performance of
STAFL in different scenarios in Section 4. Finally, Section 5 summarizes the main work of
this paper.
**2. Related Work**
In recent years, federated learning has received widespread attention as an efficient
cooperative machine learning method. Federated learning can address the concerns of a
response time requirement, battery life constraint, bandwidth cost-saving, and data safety
and privacy [3–6]. Guo et al. [7] explore the security issues of federated learning and how
to design efficient, fast, and verifiable aggregation methods in federated learning. As IoT
applications have higher and higher requirements for real-time performance, asynchronous
federated learning has begun to receive attention as a method that can speed up convergence and reduce wasted computing resources. Although asynchronous federated learning
can improve real-time performance, some challenges still need to be resolved [8,9]. In [10],
Lu et al. designed an asynchronous federated learning scheme for resource sharing on the
Internet of Vehicles and solve many challenges in the Internet of Vehicles scenario with
very ingenious methods. In [11], the author proposes an age-aware communication strategy that realizes federated learning through wireless networks by jointly considering the
parameters on user devices and the staleness of heterogeneous functions. However, these
schemes pay more attention to security and privacy in federated learning, sacrifice part of
the training efficiency, and ignore the negative impact of non-IID datasets on the shared
model. Damaskinos et al. [12] designed an online system that can be used as middleware
on Android OS and machine learning applications to solve the staleness problem. Wu et al.
proposed an asynchronous aggregated federated learning architecture [13]. The author
designed a bypass to store user updates arriving later in the global epoch and used the user
selection method to improve training efficiency. In [14], Chai et al. proposed the use of a
tier to distinguish users arriving at different times in a global epoch, aggregate each tier,
and analyze convergence. However, these methods are semi-asynchronous communication
and do not consider completely asynchronous scenarios, therefore staleness will still affect
the system.
Statistical heterogeneity is also one of the problems that need to be solved in federated learning and attracts the attention of researchers [15,16]. The baseline FL algorithm FedAvg [17] is known to suffer from instability and convergence issues in heterogeneous settings related to device variability or non-identically distributed data [18]. Since
McMahan et al. [17] proposed a benchmark synchronization federated learning data aggregation method, many researchers have explored communication methods, communication
architectures, and user selection methods to make machine learning algorithms more
efficient. Zhao et al. [19] proposed a method to ensure accuracy when the data are not
independent and identically distributed. Sahu et al. [20] improved the algorithm of FedAvg.
Chen et al. proposed a federated learning communication framework and joint learning
method based on wireless networks [21]. Regarding the combination of non-independent
identical distribution and asynchronous communication, Chen et al. [22] proposed a novel
asynchronous federated learning algorithm and conducted a theoretical analysis of non-iid
scenarios and asynchronous problems.
**3. System Design**
According to what we proposed in the previous section, this paper improves the
existing work as follows. We believe that our method is better than existing methods in
federated learning based on asynchronous communication. The symbols used in this paper
are shown in Table 1. We assume that, in federated learning, users can upload updates
independently and are not restricted by server selection. The central server has sufficient
resources to encourage the user to participate in federated learning. In addition, the server
has sufficient computing capacity to perform asynchronous aggregation operations without
consuming a large amount of time. We consider that there are n users participating in the
federated learning. The training data in each user’s device are subject to non-independent
and identical distribution. That is, each user will have missing labels. The missing data
will pull the weight of local updates in the wrong direction, and users need to cooperate in
offsetting the negative impact of missing labels. Each local dataset will not be leaked to any
other users during the entire training process.
**Table 1. Notation and Parameters.**
| Notation/Term | Description |
| --- | --- |
| $D$ | Dataset |
| $T_g$ | Global epoch |
| $\alpha$ | Staleness hyper-parameter |
| $a$ | Staleness penalty parameter |
| $S(t)$ | Staleness penalty function |
| $E(m)$ | Data distribution payoff |
| $\eta$ | Learning rate |
| $\mathcal{M} = [1, 2, \ldots, m, \ldots]$ | Set of users |
| $\mathcal{L}$ | Model list |
| $n/\beta$ | Length of $\mathcal{L}$ |
| $n_c$ | Total data of group $c$ |
| $\mathbf{w}_m$ | Model weight of user $m$ |
_3.1. Staleness-Tolerant Model_
Compared with synchronous communication, asynchronous federated learning requires additional costs to reduce the staleness caused by stragglers. Since there is no clear
concept of the global epoch, the server needs to perform global aggregation every time
it receives an update uploaded by the user. We assume that α is a hyper-parameter that
controls the weight of each new user update. Therefore, for each user’s update on the
server-side, there are the following aggregation methods:
$$\mathbf{w}_{new} = \begin{cases} \mathbf{w}_m, & \text{if } l = 0; \\ \frac{1}{al+b}\,\mathbf{w}_m + \left(1 - \frac{1}{al+b}\right)\mathbf{w}_{old}, & \text{if } 0 < l \leq n; \\ \alpha\,\mathbf{w}_m + (1 - \alpha)\,\mathbf{w}_{old}, & \text{otherwise}, \end{cases} \tag{1}$$
where $\mathbf{w}_{old}$ is the global weight information of the last aggregation, and $l$ is the total number of times the server has aggregated user updates. Whenever the server aggregates a user update, the value of $l$ is increased by one. The time at step $l$ is denoted as $t_l$, and $\mathbf{w}_m$ is the update information of user $m$ at time $t_l$. As shown in Figure 1, if user 1's update is the first update that arrives at the server ($w_1^1$), the server will send the global weight to both users at the same time ($t_1$) after the next user's update arrives, because an aggregation over a single user is a useless operation. After each update arrives at the server, the server will aggregate this information, except at time $t_0$.
The term $\frac{1}{al+b}$ in Equation (1) is the gradually decreasing aggregation parameter for newly arrived user updates. When $l < n$, the previous round of global aggregation $\mathbf{w}_{old}$ does not represent the global data distribution. We call $\mathbf{w}_{old}$ at this time the prototype global weight. As the prototype aggregates more and more user updates, the difference between it and a centralized update will become smaller, and it will be significantly better than the update of any single user. Therefore, we designed a linearly decreasing weight for aggregation. As the number of user updates aggregated by the prototype increases, the aggregate weight of newly arriving user updates becomes smaller. In this paper, we set $a = \frac{2}{n-2}$ and $b = \frac{2n-8}{n-2}$. That is, the weight will slowly drop from the original 1/2 to 1/4. For the prototype, we do not set any penalty terms because the server needs to aggregate more updates at this time.
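As a minimal sketch of this rule, assuming model weights are flattened NumPy arrays and more than two participating clients, Equation (1) could be implemented as follows (the function and variable names are ours):

```python
# Minimal sketch of the aggregation rule in Equation (1), assuming flattened
# NumPy weight vectors and n > 2 participating clients. Names follow the paper.
import numpy as np

def aggregate(w_old, w_m, l, n, alpha):
    """Return the new global weights after the l-th aggregated update."""
    if l == 0:
        return w_m.copy()
    if l <= n:
        # Linearly decreasing weight 1/(a*l + b): with a = 2/(n-2) and
        # b = (2n-8)/(n-2) it falls from 1/2 (at l = 2) to 1/4 (at l = n).
        a = 2.0 / (n - 2)
        b = (2.0 * n - 8.0) / (n - 2)
        gamma = 1.0 / (a * l + b)
        return gamma * w_m + (1.0 - gamma) * w_old
    # Beyond l = n, alpha carries the staleness penalty and distribution payoff.
    return alpha * w_m + (1.0 - alpha) * w_old
```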
**Figure 1. The flow of STAFL between the cloud server and user device.**
When the server aggregates enough updates (l > n), we think that the global model
can represent all users’ data and will impose corresponding penalties for stragglers at this
time (reflected in α in Equation (1)). Before introducing α, we first introduce the concept
of the global epoch of our method. There is a local update counter at the user-side, and
the counter’s value will increase by one every time the user uploads an update. When the
server receives the updates of the next round from more than 10% of users (i.e., when $w_1^2$
in Figure 1 arrives at the server), the server considers that it has entered the next global
epoch. At this time, if the server receives the previous round of user updates, it will impose
corresponding staleness penalties. As shown in Figure 1, when the first update information
of client 4 arrives at the server, the server has already entered the second global epoch, and
the server will impose staleness penalties on client 4’s update. The staleness penalties term
_S(t) will make the aggregation term α smaller. However, α will not only be affected by S(t)._
The difference in user distribution will also affect the size of α. This will be explained in
detail in the following subsection. The server considers that the user’s update information
has been lost when it still does not receive a user’s update after two global epochs and will
send the latest global parameters to this user. Second, the server will not aggregate updates
two rounds away from the global epoch because this information will seriously slow down
the convergence speed of the global model.
_3.2. Weight Divergence Discussion_
Since data cannot be transmitted in the federated learning scenario, it is necessary to
find a method to improve convergence efficiency without data scheduling. The server can
adjust the appropriate aggregation parameters if it learns the user’s data distribution so
that the weights of the global model quickly move closer to the weights of the centralized
update. However, due to privacy considerations, users are unlikely to tell the server
the number of each label in their local dataset. This would mean that the server cannot
accurately obtain the user’s data distribution. Therefore, we need to use the information
updated by users to obtain data distribution divergence between users.
We first define the concept of weight difference used in this paper: the sum of the weight divergence over all layers of the same neural network. Recall the calculation equation of the user's local iteration:
$$\mathbf{w}_m^{k_m+1} = \mathbf{w}_m^{k_m} - \eta \nabla F(\mathbf{w}_m^{k_m}, D_m), \tag{2}$$
where η is the local learning rate, F is the loss function, and Dm is the dataset owned by
user m. It can be seen from Equation (2) that, when all users have the same neural network
model, the data owned by the users are the main factor that affects the neural network
weight of user m. Intuitively speaking, if the weight divergence in the models owned by
two users is smaller, the data distribution is more similar. According to this insight, the
server can assign different aggregation parameters to each user's update based on the weight divergence of that update.
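For concreteness, here is a minimal sketch of the local iteration in Equation (2), with a toy least-squares gradient as our stand-in for the actual loss function:

```python
# Sketch of the local iteration in Equation (2): one SGD step on user m's
# local dataset D_m. grad_F is a stand-in for the gradient of the loss F.
import numpy as np

def local_step(w, D_m, grad_F, eta=0.01):
    """w^{k_m + 1} = w^{k_m} - eta * grad F(w^{k_m}, D_m)."""
    return w - eta * grad_F(w, D_m)

# Toy example with a least-squares loss: grad F(w, (X, y)) = X^T (Xw - y) / |y|.
X, y = np.random.randn(8, 3), np.random.randn(8)
grad = lambda w, D: D[0].T @ (D[0] @ w - D[1]) / len(D[1])
w = np.zeros(3)
for _ in range(5):
    w = local_step(w, (X, y), grad)
```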
Intuitively, we need to impose penalties on the users whose local data distribution is
more different from the global to ensure that these users’ data will not negatively affect the
global model. However, this is not the case. The situation that weight uploaded by the user
is quite different from the global model often indicates that the global model owned by
the server has not fully aggregated this user’s “knowledge”. Each user’s data are helpful
for machine learning tasks and can represent user preferences or other characteristics.
Therefore, we give specific compensation to users with large weight divergence so that the
server can adapt to the local models of these users more quickly. In this paper, we use E(m)
to denote the data distribution payoff.
Our initial method is based on the following insight: the server may not have the
information carried by the arriving user update if this user update is very different from
the global model. At this time, the server will allocate a more considerable aggregation
parameter α to the user, which is called the data distribution reward in this paper, even
if the user’s update is stale. However, under the asynchronous and parallel federated
learning framework, one concern regarding such a method is that the excessive weight
difference may be caused not only by uneven data distribution but by potential attackers
sending the model trained by wrong or low-quality data. In addition, even if the data
owned by each user are of a certain quality, the server cannot determine whether the divergence between the user's local weight and the global weight is caused by a difference in data distribution or by staleness. The former is easy to distinguish, since the weight divergence of malicious users will not gradually decrease as aggregation progresses, and the difference from the global model will always be very high. However, resolving the latter requires additional information on the server side.
Recall the global aggregation formulation (1) of federated learning. After performing
the first round of aggregation, the server has obtained the first round of updated information
for most users. We also use experiments to verify the feasibility of the weight difference
to measure the difference in data distribution. Figure 2 shows the effect of different
distributions of data on the MNIST data set on the user’s model parameters. We can see
that the weight divergence of users with the same data distribution will be slight. The data
tags owned by user2 and user24 are both 1 and 5, and the number of each data tag is the
same. It can be seen from the figure that the weight divergence between the two users is
minimal. The data held by user38 are a fuzzy picture generated by GAN, therefore it is very
different from all users.
_3.3. Aggregation Parameter Settings_
The server will group users according to the user’s updates through the updates
aggregated in the first few global epochs. Users with similar weights will be classified
together, and the server will consider these users to have the same data distribution.
Therefore, our aggregation parameter α consists of two parts: data distribution reward and
staleness penalty.
The amount of data will affect the time of local iteration. Some users have a small
amount of data, and the user’s update may reach the server earlier in each global epoch.
Similarly to the FedAvg algorithm, the global aggregation parameters will be affected by the
amount of data the user has, that is, $\frac{|D_m|}{|D|}$, where $|D|$ is the total number of training data. Besides, when a straggler update arrives at the server, the server will add an exponential penalty term $e^{-a(t_m - T_g^0)}$ to the update information, where $a$ is a parameter that controls the staleness penalty. The larger $a$ is, the greater the penalty for stale updates. $t_m$ is the time when the user's update arrives at the server, and $T_g^0$ is the start time of the latest global epoch. The
staleness penalty is a necessary setting for asynchronous aggregation. Next, we will mainly
introduce the influence of data distribution on aggregation parameters.
**Figure 2. Weight difference between users.**
When the update of user $m$ arrives at the server, the server will determine whether the user's update is a delayed update according to the previously described method. When the number of local iterations $k_m$ of user $m$ is the same as the global iteration counter $T_g$, the staleness penalty parameter $a$ is always 0. This is because most users' updates can
arrive in a short time. Imposing staleness penalties for these users is unnecessary and will
have a negative impact on the convergence speed.
As shown in Figure 1, the server maintains a model list (denoted as $\mathcal{L}$) with a length of $n/\beta$. $\beta$ is a parameter similar to the "client selection" in synchronous communication, which controls the range of the aggregation model in a global epoch. That is, the latest $n/\beta$ users' updated information will be considered during each global aggregation. The
server dynamically maintains a topology of user model weight distribution based on this
information. The server groups different users according to the weight divergence when
the user uploads the update for the first time, and the distribution of users in each group is
roughly the same (users 2, 29, 24, and 25 in Figure 2).
As illustrated in Figure 2, when a user update arrives at the server, the server will
calculate the weight divergence $d_m$ between the update $\mathbf{w}_m$ and the previous global model $\mathbf{w}_{old}$ according to the stored local model list. Further, it will find the mean value of the distance between $\mathbf{w}_{old}$ and all weights in the model list $\mathcal{L}$, that is, $Mean(\mathcal{L})$ (the black dotted line in the figure). Obviously, when a user's arriving update lies in the outer circle, the global model is more susceptible to that user's influence. Accordingly, the server will take corresponding reward and punishment measures based on the difference between the arriving update weight and the global model. Specifically, if $\|\mathbf{w}_m - \mathbf{w}_{old}\| \leq Mean(\mathcal{L})$, the payoff of the data distribution in the aggregate weight is $E(m) = 1$. When $\|\mathbf{w}_m - \mathbf{w}_{old}\| \geq Mean(\mathcal{L})$, the server will determine which group the user belongs to. If the model list $\mathcal{L}$ stored on the server has aggregated more than $1/\beta$ of the group's data, then the server will add a penalty term to the user's update. Otherwise, a reward will be given. In conclusion, we have:

$$E(m) = \begin{cases} 1, & \text{if } d_m < d_M; \\ \frac{1 - n_c}{\beta\, n_c}, & \text{if } d_m \geq d_M, \end{cases} \tag{3}$$
where $d_M = Mean(\mathcal{L})$, and $n_c$ is the total amount of data of the user group $c$ to which the server assigns user $m$. We call a user $m$ that satisfies $d_m < d_M$ a "central" user; otherwise, they are an "edge" user. As shown in Figure 3, when the arriving user is an "edge" user, the server will determine whether enough updates of that data distribution have been aggregated. If the server has aggregated $n_c/\beta$ data of the group, then the server will impose a penalty for the update (the red cross in the figure). Otherwise, the corresponding reward will be given (the orange cross in the picture). If we use $\alpha^*$ to denote the aggregate parameters of all users in the model list $\mathcal{L}$, and $\alpha^{*m}$ is the aggregate parameter of user $m$'s update information in the model list, then, when $l > n$, Equation (1) can be rewritten as:

$$\mathbf{w}_{new} = \sum_{m=0}^{len(\mathcal{L})} \frac{\alpha^{*m}}{sum(\alpha^*)}\, \mathcal{L}[m], \quad \text{if } l > n, \tag{4}$$

where the function $sum()$ is the sum of all items in the aggregate parameter list $\alpha^*$. The detailed process of the STAFL method is described in Algorithm 1.
**Figure 3. The impact of data distribution on user weight. (a) No weight divergence payoff; (b) weight divergence payoff given; (c) different cases for the weight divergence payoff.**
**Algorithm 1 Staleness-tolerant asynchronous federated learning.**
**Input: user update information $\mathbf{w}_m$, the amount of data $n_m$ owned by user $m$, staleness function $S(l)$, data distribution payoff $E(m)$, user $m$'s update counter $k_m$, global epoch counter $T_g$, aggregation counter $l$, model list $\mathcal{L}$.**
**Output: finalized global model**
At the server side:
**while** a user local update $\mathbf{w}_m^{k_m}$ arrives at the server **do**
  **if** user $m$ is a straggler **then**
    compute the staleness penalty $S(l) = \exp(-a(t_m - T_g^0))$
  **end if**
  **if** $l > n$ **then**
    compute the data distribution payoff $E(m)$ according to Equation (3)
    compute $\alpha = \frac{|D_m|}{|D|}\, S(l)\, E(m)$
  **end if**
  perform aggregation using Equation (1)
  update the model list $\mathcal{L}$
  **if** the accuracy requirements are met **then**
    break
  **end if**
**end while**
At the client side:
**for** each user $m$ **do**
  update the local model according to Equation (2)
**end for**
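To make the parameter computation in Algorithm 1 concrete, here is a rough Python sketch. The function names are ours, $n_c$ is treated here as the already-aggregated proportion of group $c$'s data, and the payoff branch follows Equation (3) as reproduced above; this is a sketch under those assumptions, not the authors' implementation.

```python
# Sketch of the aggregation parameter of Algorithm 1:
#   alpha = (|D_m| / |D|) * S(l) * E(m)
# Names follow the paper; the exact form of E(m) is our reading of Eq. (3).
import numpy as np

def staleness_penalty(t_m, T_g0, a=0.5):
    # Exponential straggler penalty; an on-time update (t_m == T_g0) gives S = 1.
    return float(np.exp(-a * (t_m - T_g0)))

def distribution_payoff(d_m, d_M, n_c, beta):
    # "Central" users (small weight divergence) get a neutral payoff of 1;
    # "edge" users get a group-dependent reward or penalty. Here n_c is
    # assumed to be the aggregated proportion of group c's data in (0, 1).
    if d_m < d_M:
        return 1.0
    return (1.0 - n_c) / (beta * n_c)

def aggregation_parameter(n_m, n_total, t_m, T_g0, d_m, d_M, n_c, beta):
    return (n_m / n_total) * staleness_penalty(t_m, T_g0) * \
        distribution_payoff(d_m, d_M, n_c, beta)
```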
_3.4. Model List Update and Weight Divergence Computation_
As mentioned above, for the server to judge the aggregated data distribution of the
global model, we need to maintain a model list on the server-side. Although storing user
update information will consume additional storage costs, our experimental results show
that even holding a very short model list can significantly improve the negative impact of
the non-IID dataset on the global model. The longer the model list, the more accurately
the server can estimate the data distribution required for the global model. When a user
update arrives at the server, the server immediately aggregates the update and stores the
user’s weight in the model list. We will delete the oldest user update stored in the model
list if the model list is full. In this way, we maintained a fixed-length sequence of user-local
model weights that represents the latest global model.
We use the Euclidean distance between the weights of the two models to represent
the difference in data distribution for users. When a user update arrives at the server,
the server immediately computes the Euclidean distance dm between that update and the
global model. In addition, it should be noted that, since the computational complexity of
calculating the Euclidean distance of the neural network weight will increase exponentially
with the number of users participating in the training, we will only group users based on
their first-round update information. The local model information uploaded by the user for
the first time will only be affected by its local training dataset and will not be affected by the
weight of other user models. Therefore, the weight divergence between users with different
data distributions in the first epoch will be the largest, thus facilitating server grouping.
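As a rough sketch of the bookkeeping in this subsection, the following Python code keeps a fixed-length list of recent client weights and computes the Euclidean divergences described above; the class and method names are illustrative, not the paper's implementation.

```python
# Sketch of the server-side model list (Section 3.4): a FIFO of the latest
# n/beta client updates plus the Euclidean distances used for grouping.
from collections import deque
import numpy as np

class ModelList:
    def __init__(self, n, beta):
        # deque(maxlen=...) evicts the oldest stored update automatically.
        self.models = deque(maxlen=n // beta)

    def add(self, w_m):
        self.models.append(np.asarray(w_m, dtype=np.float64))

    def mean_divergence(self, w_old):
        # Mean(L): average Euclidean distance between w_old and stored updates,
        # used as the threshold separating "central" from "edge" users.
        if not self.models:
            return 0.0
        return float(np.mean([np.linalg.norm(w - w_old) for w in self.models]))

def weight_divergence(layers_a, layers_b):
    # Weight difference of two models: sum of per-layer Euclidean distances.
    return sum(np.linalg.norm(a - b) for a, b in zip(layers_a, layers_b))
```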
**Figure 4. Weight quantification steps.**
_3.5. Reduce Communication Overhead_
Since many users participate in asynchronous communication, it is necessary to use the
communication compression mechanism to reduce the communication cost. In this paper,
we use the method of communication quantization to reduce the weight accuracy of the
network from float (32 bits) to 4 bits. The overall scheme of communication compression is
shown in Figure 4. First, each user trains their model locally. After the training is completed,
the user quantizes the model weight and sends the quantized content to the server. After
receiving the quantized model weight, the server performs a de-quantization operation to
restore the original update of the user. Subsequently, the server uses the weight obtained
by de-quantization to perform model aggregation to obtain a new global model. Then, it
quantizes the model and sends it to all users participating in the training. The user then
starts the next local epoch based on the new global model obtained by de-quantization.
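A minimal sketch of such a quantization step is shown below; the uniform min/max scaling scheme is our assumption, since the paper only states that weights are reduced from 32-bit floats to 4 bits.

```python
# Sketch of 4-bit uniform quantization for update compression (Section 3.5).
# The min/max scaling is our assumption; the paper only specifies 32 -> 4 bits.
import numpy as np

def quantize(w, bits=4):
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / (2**bits - 1) or 1.0  # avoid zero scale for constant w
    q = np.round((w - lo) / scale).astype(np.uint8)  # levels in [0, 15]
    return q, lo, scale

def dequantize(q, lo, scale):
    return q.astype(np.float32) * scale + lo

w = np.random.randn(1000).astype(np.float32)
q, lo, scale = quantize(w)
w_hat = dequantize(q, lo, scale)
print("max reconstruction error:", float(np.abs(w - w_hat).max()))
```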
**4. Experiment Evaluations**
To make our experiments closer to the Internet of Things, we integrated
the training of federated learning into a real environment. We assume that 50 users
participate in the training process of federated learning. We also assume that all users are
evenly distributed. In other words, the distances between all users, the server, and the
transmission delay consumed are equal. The network condition is basically stable, and
there is no massive packet loss. However, our method can tolerate a certain amount of
data loss and is robust to moderate packet loss. In the comparative experiment,
the waiting time for each global epoch is the longest time consumed by the user in that
global epoch minus the shortest time. In order to simulate the heterogeneity of devices in
federated learning, we set different computing capacities for different devices (that is, we
reduce the computing capacities of a device by reducing the frequency) and set a different
amount of data for all users.
_4.1. Data Settings_
Datasets: we use MNIST, FEMNIST, and CIFAR-10 datasets in this experiment to prove
the effectiveness of our proposed method. The MNIST dataset comprises 60,000 training
samples and 10,000 testing samples. The image is a fixed size (28 × 28 pixels) with a value
of 0 to 1. Each image is flattened and converted into a one-dimensional numpy array of
784 (28 × 28) features. The CIFAR-10 dataset contains 60,000 32 × 32 color images which
comprise 50,000 training images and 10,000 test images. These samples are divided into five
training batches and one test batch. Each batch has 10,000 samples. The test batch contains
1000 randomly selected images from each category. Training batches contain the remaining
images; however, some may contain more images from one category than another. The
five training sets contain exactly 5000 images from each class. The Federated Extended
MNIST dataset (FEMNIST) is the extended MNIST dataset based on the writer of the digit
and character.
Non-independent and identically distributed setting: in order to simulate a non-independent and identically distributed experimental environment, each user in the experiment randomly selects a part from different categories of the training data set, that is, each
user does not completely own all the categories of the training data set, and the number
of data in each user is different. In particular, we will replace the data set of one of the
users with fuzzy data (noise is added to the image or an image data set without complete
GAN training).
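As an illustration of this label-skew setup, the following Python sketch partitions a labeled dataset so that each user holds data from at most two classes with unequal sample counts; this shard-style implementation is our assumption of how the split could be realized.

```python
# Sketch of a label-skew partition: each of 50 users gets data from at most
# two classes, with unequal sample counts. This shard-style scheme is our
# assumption of how the described split could be implemented.
import numpy as np

def non_iid_partition(labels, num_users=50, labels_per_user=2, seed=0):
    rng = np.random.default_rng(seed)
    classes = np.unique(labels)
    by_class = {c: rng.permutation(np.where(labels == c)[0]) for c in classes}
    users = []
    for _ in range(num_users):
        chosen = rng.choice(classes, size=labels_per_user, replace=False)
        idx = []
        for c in chosen:
            take = int(rng.integers(50, 200))  # unequal data per user
            idx.extend(by_class[c][:take])     # samples may be reused here
        users.append(np.array(idx))
    return users
```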
_4.2. Experimental Results of Model List and Weight Divergence_
We first focus on the performance of our method under different data distributions.
In order to simulate different degrees of non-independent and identically distributed
data sets, we divide the experimental scenarios into two types. In the first type of non-independent and identically distributed data set, most users have five or more data labels,
and a small number of users only have data with one or two labels. Under this scenario,
the data are not extremely non-independent and identically distributed. User updates
that arrive continuously within a specific time period can cover all data labels when
performing federated learning under this data distribution. In the second non-independent
and identically distributed case, each user has at most two data labels. In this case, if the
global aggregation operation is performed, the global model will be severely affected by the
“non-independent and identically distributed”, and the accuracy jitter will be fairly obvious.
As shown in Figures 5 and 6, we compare STAFL with the traditional asynchronous
method (i.e., heuristic method, the aggregation parameter α is large at the beginning of
training, and, as the global aggregation progresses, the updated aggregation parameter α
assigned to new users update will be smaller). In addition, in order to verify the influence
of the weight difference grouping on the convergence speed, we use STAFL-1 to represent
the accuracy change in the model when we only perform the staleness penalty without the
data distribution payoff. We use STAFL-2 to describe the situation where we impose the
staleness penalty on stragglers and add the data distribution payoff to “edge” users. When
the user’s training data has one or two labels, as shown in Figure 6, the test accuracy of
STAFL-2 improved the fastest. Compared with the heuristic method, aggregating only the model list $\mathcal{L}$ can reduce the impact of previously aggregated stale user updates on the global model.
The staleness penalty can also reduce the degree of accuracy jitter. As shown in Figure 5,
when most users’ data have more than five labels, the weight divergence between users is
not particularly obvious. Therefore it is difficult for the server to divide users into multiple
groups. In this case, the effect of adding data distribution payoff to each user is not very
obvious. However, it still has a certain outcome.
**Figure 5. STAFL performance under scenario 1. (a) Test accuracy of scenario 1. (b) Test loss of**
scenario 1.
**Figure 6. STAFL performance under scenario 2. (a) Test accuracy of scenario 2. (b) Test loss of**
scenario 2.
_4.3. Model List Length Discussion_
As mentioned in Section 3, the parameter β controls the range of user updates considered during server aggregation. Compared with the traditional asynchronous aggregation
method, aggregating the continuously updated model list $\mathcal{L}$ can reduce the stale updates
that the server aggregated a long time ago. The global model will only be affected by the
latest few user updates. Correspondingly, the length of the model list L will affect the scope
of server aggregation. Similar to the client selection of synchronous federated learning, the
larger the β, the more user updates in a global aggregation, and the greater the possibility
of covering all data distributions.
However, in asynchronous federated learning, a longer model list $\mathcal{L}$ is not always better. First of all, storing a user's model requires storage space,
especially a complex neural network, which requires keeping a large amount of model
weight information. Secondly, the longer the model list, the more obvious the impact of
stale updates during aggregation. Further, aggregating a given user's update once is enough for the server's global model to learn the corresponding knowledge. We
explore the impact of different β on the global model in the FEMNIST data set. We use
the final aggregated model in each global epoch as the evaluation standard. As can be
seen from Figure 7, because the aggregation parameters can compensate for the weight
divergence, when the length of the model list is within a certain range, the difference in the
accuracy of the overall model is not very obvious.
**Figure 7. Different β on FEMNIST dataset. (a) Test accuracy. (b) Test loss.**
_4.4. Comparison of STAFL with Other Methods_
In this subsection, we compare the performance of STAFL with existing methods.
These methods are:
- **FedAvg [17]: one of the most traditional synchronous aggregation methods of feder-**
ated learning randomly selects a part of users to participate in training in each global
epoch and simply discards the stragglers;
- **FedProx [23]: FedProx is an improvement over FedAvg. Compared to FedAvg, Fed-**
Prox still aggregates updates from some stragglers. In addition, similar to STAFL, it
also uses Euclidean distance to improve the model’s performance on non-IID datasets;
- **ASO [24]: ASO is a novel asynchronous federated learning algorithm that adaptively**
trades off convergence speed and accuracy based on staleness.
We evaluate the performance of different methods on the FEMNIST and CIFAR-10
datasets. We assume that the data are extremely non-IID and that 50% of users are stragglers.
Table 2 shows the evaluation results. We highlight the optimal performance in different
situations in bold. As can be seen from the table, our proposed method does not perform
well at the beginning of training. The reason for this is that STAFL needs to calculate the
weight divergence between users at the beginning of training, as opposed to other schemes
that only perform weighted averages or focus only on staleness. As mentioned above, when
a large number of users participates in training, the server needs a lot of time to calculate
the Euclidean distance between them, even if the server can perform many calculations
in parallel. As training progresses, STAFL can dynamically set aggregation parameters
based on the model list and previous grouping results, resulting in better performance than
other methods. In contrast, the synchronous aggregation strategy of FedAvg and FedProx
cannot maintain the original convergence efficiency when there are a lot of stragglers.
The convergence effect of the ASO is also unsatisfactory when the data distribution is
extremely heterogeneous.
**Table 2. Comparison with other methods.**
| Method | Dataset | Accuracy at 90 s | Accuracy at 180 s | Accuracy at 300 s |
| --- | --- | --- | --- | --- |
| **FedAvg** | FEMNIST | 0.39 | 0.51 | 0.70 |
| **FedAvg** | CIFAR-10 | 0.27 | 0.49 | 0.64 |
| **FedProx** | FEMNIST | 0.37 | 0.58 | 0.75 |
| **FedProx** | CIFAR-10 | **0.39** | 0.47 | 0.69 |
| **ASO** | FEMNIST | **0.45** | 0.49 | 0.71 |
| **ASO** | CIFAR-10 | 0.32 | 0.54 | 0.67 |
| **STAFL** | FEMNIST | 0.41 | **0.62** | **0.81** |
| **STAFL** | CIFAR-10 | 0.24 | **0.56** | **0.72** |
_4.5. Communication Cost Comparison_
In order to reduce the communication overhead of asynchronous federated learning,
we adopt the communication compression method described in Section 3. We set the
number of users to 2, 3, 4, and 5 to verify the time spent before and after communication compression and the total time required to reach 90% accuracy. We distributed the
data to the corresponding number of Raspberry Pis for experimentation. It can be seen
from Figure 8a that, after compression, the communication time has been reduced by 10%.
Compressing the transmission data does not increase the convergence time. As shown
in Figure 8b, the time required for the global model to reach 90% accuracy is reduced.
Figure 8c shows the accuracy comparison chart before and after communication compression on the MNIST data set. Since there are fewer users participating in the training, the
influence of non-independent and identical distribution of data on convergence is not
very obvious.
**Figure 8. Comparison before and after communication compression. (a) Comparison of communication time; (b) comparison of training time; (c) comparison of test accuracy.**
**5. Conclusions**
In this paper, we design a federated learning system architecture for asynchronous
communication, called STAFL. In STAFL, users can upload local updates at any time.
The server can use the information stored in a model list to determine whether enough
information about a certain data distribution has been aggregated in the global model so
that it can better impose corresponding penalties or rewards on arriving updates. We also
use communication compression to reduce the communication cost caused by asynchronous
aggregation. Compared with other methods, our method focuses more on the effect of
data heterogeneity on the global model. STAFL can control the negative impact of non-IID
datasets on the convergence rate with the least cost. The experimental results show that
our method has a significant improvement in convergence efficiency compared with other
methods. In future work, we will evaluate model performance on other complex datasets and address the points where STAFL can be improved, such as obtaining prior knowledge of the data distribution more efficiently and conducting theoretical analysis.
**Author Contributions: Funding acquisition, F.Z., Y.Z. and Z.C.; Methodology, F.Z., J.H., Y.Z. and**
Z.C.; Validation, X.T.; Visualization, B.C.; Writing—original draft, J.H.; Writing—review and editing,
Y.Z. All authors have read and agreed to the published version of the manuscript.
**Funding: This work was supported in part by the National Key Research and Development Program**
of China under Grant 2019YFB2102000 and in part by the National Natural Science Foundation
of China under Grant No. 62172215, in part by the Natural Science Foundation of Jiangsu
Province (No. BK20200067), and in part by the A3 Foresight Program of NSFC (Grant No. 62061146002).
**Conflicts of Interest: The authors declare no conflicts of interest.**
**Abbreviations**
The following abbreviations are used in this manuscript:
FL Federated Learning
iid independent and identically distributed
non-iid non-independent and identically distributed
IoT Internet of Things
**References**
1. Bonomi, F.; Milito, R.; Zhu, J.; Addepalli, S. Fog computing and its role in the internet of things. In Proceedings of the First
Edition of the MCC Workshop on Mobile Cloud Computing, Helsinki, Finland, 17 August 2012; pp. 13–16.
2. Kuflik, T.; Kay, J.; Kummerfeld, B. Challenges and solutions of ubiquitous user modeling. In Ubiquitous Display Environments;
Springer: Berlin/Heidelberg, Germany, 2012; pp. 7–30.
3. Shi, W.; Cao, J.; Zhang, Q.; Li, Y.; Xu, L. Edge computing: Vision and challenges. _IEEE Internet Things J. 2016, 3, 637–646._
[[CrossRef]](http://doi.org/10.1109/JIOT.2016.2579198)
4. Wang, X.; Han, Y.; Wang, C.; Zhao, Q.; Chen, X.; Chen, M. In-edge ai: Intelligentizing mobile edge computing, caching and
[communication by federated learning. IEEE Netw. 2019, 33, 156–165. [CrossRef]](http://dx.doi.org/10.1109/MNET.2019.1800286)
5. Lu, Y.; Huang, X.; Dai, Y.; Maharjan, S.; Zhang, Y. Federated learning for data privacy preservation in vehicular cyber-physical
[systems. IEEE Netw. 2020, 34, 50–56. [CrossRef]](http://dx.doi.org/10.1109/MNET.011.1900317)
6. Yu, Z.; Hu, J.; Min, G.; Lu, H.; Zhao, Z.; Wang, H.; Georgalas, N. Federated learning based proactive content caching in edge
computing. In Proceedings of the 2018 IEEE Global Communications Conference (GLOBECOM), Abu Dhabi, United Arab
Emirates, 9–13 December 2018; pp. 1–6.
7. Guo, X.; Liu, Z.; Li, J.; Gao, J.; Hou, B.; Dong, C.; Baker, T. VeriFL: Communication-Efficient and Fast Verifiable Aggregation for
Federated Learning. IEEE Trans. Inf. Forensics Secur. 2021, 16, 1736–1751. [[CrossRef]](http://dx.doi.org/10.1109/TIFS.2020.3043139)
8. Vonderwell, S. An examination of asynchronous communication experiences and perspectives of students in an online course: A
[case study. Internet High. Educ. 2003, 6, 77–90. [CrossRef]](http://dx.doi.org/10.1016/S1096-7516(02)00164-1)
9. Chen, Y.; Ning, Y.; Slawski, M.; Rangwala, H. Asynchronous online federated learning for edge devices with non-iid data. In
Proceedings of the 2020 IEEE International Conference on Big Data (Big Data), Atlanta, GA, USA, 10–13 December 2020; pp. 15–24.
10. Lu, Y.; Huang, X.; Dai, Y.; Maharjan, S.; Zhang, Y. Differentially Private Asynchronous Federated Learning for Mobile Edge
[Computing in Urban Informatics. IEEE Trans. Ind. Inform. 2020, 16, 2134–2143. [CrossRef]](http://dx.doi.org/10.1109/TII.2019.2942179)
11. Liu, X.; Qin, X.; Chen, H.; Liu, Y.; Liu, B.; Zhang, P. Age-aware Communication Strategy in Federated Learning with Energy
Harvesting Devices. In Proceedings of the 2021 IEEE/CIC International Conference on Communications in China (ICCC),
Xiamen, China, 28–30 July 2021; pp. 358–363.
12. Damaskinos, G.; Guerraoui, R.; Kermarrec, A.M.; Nitu, V.; Patra, R.; Taïani, F. Fleet: Online federated learning via staleness
awareness and performance prediction. In Proceedings of the 21st International Middleware Conference, Delft, The Netherlands,
7–11 December 2020; pp. 163–177.
13. Wu, W.; He, L.; Lin, W.; Mao, R.; Maple, C.; Jarvis, S. SAFA: A semi-asynchronous protocol for fast federated learning with low
[overhead. IEEE Trans. Comput. 2020, 70, 655–668. [CrossRef]](http://dx.doi.org/10.1109/TC.2020.2994391)
14. Chai, Z.; Chen, Y.; Zhao, L.; Cheng, Y.; Rangwala, H. Fedat: A communication-efficient federated learning method with
asynchronous tiers under non-iid data. arXiv 2020, arXiv:2010.05958.
15. Karimireddy, S.P.; Kale, S.; Mohri, M.; Reddi, S.; Stich, S.; Suresh, A.T. Scaffold: Stochastic controlled averaging for federated
learning. In International Conference on Machine Learning; PMLR: McKees Rocks, PA, USA, 2020; pp. 5132–5143.
16. Yu, F.; Rawat, A.S.; Menon, A.; Kumar, S. Federated learning with only positive labels. In International Conference on Machine
_Learning; PMLR: McKees Rocks, PA, USA, 2020; pp. 10946–10956._
17. McMahan, B.; Moore, E.; Ramage, D.; Hampson, S.; y Arcas, B.A. Communication-efficient learning of deep networks from
decentralized data. In Artificial Intelligence and Statistics; PMLR: McKees Rocks, PA, USA, 2017; pp. 1273–1282.
18. Khaled, A.; Mishchenko, K.; Richtárik, P. Tighter theory for local SGD on identical and heterogeneous data. In International
_Conference on Artificial Intelligence and Statistics; PMLR: McKees Rocks, PA, USA, 2020; pp. 4519–4529._
19. Zhao, Y.; Li, M.; Lai, L.; Suda, N.; Civin, D.; Chandra, V. Federated learning with non-iid data. arXiv 2018, arXiv:1806.00582.
20. Sahu, A.K.; Li, T.; Sanjabi, M.; Zaheer, M.; Talwalkar, A.; Smith, V. On the convergence of federated optimization in heterogeneous
networks. arXiv 2018, arXiv:1812.06127.
21. Chen, M.; Yang, Z.; Saad, W.; Yin, C.; Poor, H.V.; Cui, S. A joint learning and communications framework for federated learning
[over wireless networks. IEEE Trans. Wirel. Commun. 2020, 20, 269–283. [CrossRef]](http://dx.doi.org/10.1109/TWC.2020.3024629)
22. Chen, M.; Mao, B.; Ma, T. FedSA: A staleness-aware asynchronous Federated Learning algorithm with non-IID data. Future
_[Gener. Comput. Syst. 2021, 120, 1–12. [CrossRef]](http://dx.doi.org/10.1016/j.future.2021.02.012)_
23. Li, T.; Sahu, A.K.; Talwalkar, A.; Smith, V. Federated learning: Challenges, methods, and future directions. IEEE Signal Process.
_[Mag. 2020, 37, 50–60. [CrossRef]](http://dx.doi.org/10.1109/MSP.2020.2975749)_
24. Xie, C.; Koyejo, S.; Gupta, I. Asynchronous federated optimization. arXiv 2019, arXiv:1903.03934.
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.3390/electronics11030314?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.3390/electronics11030314, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "GOLD",
"url": "https://www.mdpi.com/2079-9292/11/3/314/pdf?version=1642669141"
}
| 2,022
|
[] | true
| 2022-01-20T00:00:00
|
[
{
"paperId": "fb1a85443b4fcddba3ba38cc78c02073f18aa0c8",
"title": "Stochastic Controlled Averaging for Federated Learning with Communication Compression"
},
{
"paperId": "bafbb4115c71631746ffaca77a2bfe94a1d3cee1",
"title": "Age-aware Communication Strategy in Federated Learning with Energy Harvesting Devices"
},
{
"paperId": "81c3c02f8d82c826833c7ac74746f65564930feb",
"title": "Federated Learning on Non-IID Data: A Survey"
},
{
"paperId": "11e6675cad1c82f8066b07a392396151322b9063",
"title": "FedAT: A Communication-Efficient Federated Learning Method with Asynchronous Tiers under Non-IID Data"
},
{
"paperId": "0f6ca44d390cf83a533a62f5925e0b9f20d081c3",
"title": "FLeet: Online Federated Learning via Staleness Awareness and Performance Prediction"
},
{
"paperId": "8954778efe3bb122b7ce5924d9b25a33cf05d754",
"title": "Federated Learning for Data Privacy Preservation in Vehicular Cyber-Physical Systems"
},
{
"paperId": "8452a1317237ddebffd80e610ecc773bfb678e9c",
"title": "Federated Learning with Only Positive Labels"
},
{
"paperId": "38fbc724bafeb1b4908cd9608f866e2bd76871ca",
"title": "Differentially Private Asynchronous Federated Learning for Mobile Edge Computing in Urban Informatics"
},
{
"paperId": "d1a7d35ac84502c14737567b72249e952e824d72",
"title": "Asynchronous Online Federated Learning for Edge Devices with Non-IID Data"
},
{
"paperId": "fc7b1823bd8b59a590d0bc33bd7a145518fd71c5",
"title": "SCAFFOLD: Stochastic Controlled Averaging for Federated Learning"
},
{
"paperId": "51b6c71899ad2416b8904a099a8bf5cca1e77139",
"title": "SAFA: A Semi-Asynchronous Protocol for Fast Federated Learning With Low Overhead"
},
{
"paperId": "47883960418480d89c0d180c1c78171b149d9b3a",
"title": "A Joint Learning and Communications Framework for Federated Learning Over Wireless Networks"
},
{
"paperId": "4f783752a59c28df08bad9b22dd9c7bafe4efb08",
"title": "Tighter Theory for Local SGD on Identical and Heterogeneous Data"
},
{
"paperId": "49bdeb07b045dd77f0bfe2b44436608770235a23",
"title": "Federated Learning: Challenges, Methods, and Future Directions"
},
{
"paperId": "8f6602b6ebe2962dacf0b563e73852183e628ddf",
"title": "Asynchronous Federated Optimization"
},
{
"paperId": "3370d724bbba3b8793c7a743bd7ef8b57a88195b",
"title": "On the Convergence of Federated Optimization in Heterogeneous Networks"
},
{
"paperId": "91f7e3856b9ac81bdaceb67e322084c811ed22b3",
"title": "Federated Learning Based Proactive Content Caching in Edge Computing"
},
{
"paperId": "6f89a632ceb8fcb81eac3d7b52e937099659cc6a",
"title": "In-Edge AI: Intelligentizing Mobile Edge Computing, Caching and Communication by Federated Learning"
},
{
"paperId": "5cfc112c932e38df95a0ba35009688735d1a386b",
"title": "Federated Learning with Non-IID Data"
},
{
"paperId": "e3a442aa24e5df7e6b2a25e21e75c4c325f9eedf",
"title": "Edge Computing: Vision and Challenges"
},
{
"paperId": "d1dbf643447405984eeef098b1b320dee0b3b8a7",
"title": "Communication-Efficient Learning of Deep Networks from Decentralized Data"
},
{
"paperId": "03c49f3072f5f98f6272f78b3baa06fe555317dd",
"title": "Fixing Issues and Achieving Maliciously Secure Verifiable Aggregation in \"VeriFL: Communication-Efficient and Fast Verifiable Aggregation for Federated Learning\""
},
{
"paperId": "040039a3d84f0741d76a67727588f83c0bd53eb5",
"title": "FedSA: A staleness-aware asynchronous Federated Learning algorithm with non-IID data"
},
{
"paperId": "22449b1e182f8aef13ee921ce0917a0c3fe02a49",
"title": "Challenges and Solutions of Ubiquitous User Modeling"
},
{
"paperId": null,
"title": "Fog computing and its role in the internet of things"
},
{
"paperId": "b2d721eb5fe3452a4c44e0b32b3081750a4c73d7",
"title": "An examination of asynchronous communication experiences and perspectives of students in an online course: a case study"
}
] | 12,698
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/0141e74a0695b544b3fd99677de1723dd1fe1548
|
[
"Computer Science"
] | 0.857179
|
Formal Correctness of an Automotive Bus Controller Implementation at Gate-Level
|
0141e74a0695b544b3fd99677de1723dd1fe1548
|
IFIP Working Conference on Distributed and Parallel Embedded Systems
|
[
{
"authorId": "2112861",
"name": "Eyad Alkassar"
},
{
"authorId": "2070528983",
"name": "P. Böhm"
},
{
"authorId": "39961125",
"name": "Steffen Knapp"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"DIPES",
"IFIP Work Conf Distrib Parallel Embed Syst"
],
"alternate_urls": null,
"id": "35a64850-665d-4cc1-8588-b8f3dbb93be8",
"issn": null,
"name": "IFIP Working Conference on Distributed and Parallel Embedded Systems",
"type": "conference",
"url": null
}
| null |
# Formal Correctness of an Automotive Bus Controller Implementation at Gate-Level
Eyad Alkassar, Peter Böhm, and Steffen Knapp
**Abstract We formalize the correctness of a real-time scheduler in a time-triggered**
architecture. Where previous research elaborated on real-time protocol correctness,
we extend this work to gate-level hardware. This requires a sophisticated analysis
of analog bit-level synchronization and message transmission. Our case-study is a
concrete automotive bus controller (ABC). For a set of interconnected ABCs we
formally prove at gate-level, that all ABCs are synchronized tight enough such that
messages are broadcast correctly. Proofs have been carried out in the interactive
theorem prover Isabelle/HOL using the NuSMV model checker. To the best of our
knowledge, this is the first effort formally tackling scheduler correctness at gate-level.
Eyad Alkassar¹ · Steffen Knapp¹
Saarland University, Dept. of Computer Science, 66123 Saarbrücken, Germany
e-mail: {eyad,sknapp}@wjpserver.cs.uni-sb.de

Peter Böhm¹
Oxford University Computing Laboratory, Wolfson Building, Oxford, OX1 3QD, England
e-mail: peter.boehm@comlab.ox.ac.uk

¹ Work partially funded by the German Research Foundation (DFG), by the German Federal Ministry of Education and Research (BMBF), and by the International Max Planck Research School (IMPRS).

## 1 Introduction

As more and more safety-critical functions in modern automobiles are controlled by embedded computer systems, formal verification emerges as the only technique to ensure the demanded degree of reliability. When analyzing correctness, as a bottom layer, often only some synchronous model of distributed electronic control units (ECUs) sharing messages in lock-step is assumed. However, such models are implemented at gate-level as highly asynchronous time-triggered systems. Hence it
cannot suffice to verify only certain aspects of a system, such as algorithms or protocols.
In this paper we examine a distributed system implementation consisting of
ECUs connected by a bus. Our study has to combine arguments from three different areas: (i) asynchronous bit-level transmission, (ii) scheduling correctness, and
(iii) classical digital hardware verification at gate-level.
Our contribution is to show, by an extended case-study, how analog, real-time
and digital proofs can be integrated into one pervasive correctness statement.
The hardware model has been formalized in the Isabelle/HOL theorem prover [11]
based on boolean gates. It can be translated to Verilog and run on an FPGA. All lemmata relating to scheduling correctness have been formally proven in Isabelle/HOL.
We have made heavy use of the model checker NuSMV [5] and automatic tools, e.g.
IHaVeIt [18], especially for the purely digital lemmata. Most lemmata dealing with
analog communication (formalized using reals) have been shown interactively.
**Overview. The correctness of our gate-level implementation splits into two main**
parts: (i) the correctness of the transmission of single messages and (ii) the correctness of the scheduling mechanism initiating the message transmission and providing
a common time base. Next we outline these two verification goals in detail.
The verification of asynchronous communication systems must, at some point,
deal with the low-level bit transmission between two ECUs connected to the same
bus. The core idea is to ensure that the value broadcast on the bus is stable long
enough such that it can be sampled correctly by the receiver. To stay within such
a so-called sampling window, the local clocks on the ECUs should not drift apart
more than a few clock ticks and therefore need to be synchronized regularly. This
is achieved by a message encoding that enforces the broadcast of special bit sequences to be used for synchronization. The correctness argument for this low-level transmission mechanism cannot be carried out in a digital, synchronous model. It involves
asynchronous and real-time-triggered register models taking setup and hold-times
of registers as well as metastability into account. Our efforts in this respect are based
on [3,8,16].
Ensuring correct message transmission between two ECUs is only a part of the
overall correctness. Let us consider a set of interconnected ECUs. The scheduler has
to avoid bus contention, i.e. to ensure that only one ECU is allowed to broadcast at
a time and that all others are only listening. For that, time is divided into rounds,
which are further subdivided into slots. A fixed schedule assigns a unique sender to
a given slot number. The gate-level implementation of the scheduler has to ensure
that all ECUs have roughly the same notion of the slot-start and end times, i.e. they
must agree on the current sender and the transmission interval. Due to drifting clocks
some synchronization algorithm becomes necessary. We use a simple idea: A cycle
offset is added at the beginning and end of each slot. This offset is chosen large
enough to compensate the maximal clock drift that can occur during a full round.
The local timers are synchronized only once, at the beginning of each round. This
is done by choosing a distinguished master ECU, being the first sender in a round.
The combination of the results into a lock-step and synchronous view of the
system is now simple. The scheduler correctness ensures that always only one ECU
is sending and all other ECUs do listen. Then we can conclude from the first part
that the broadcast data is correctly received by all ECUs.
Organization of the paper: In the remainder of this section we discuss the re
lated work. In Section 2 we introduce our ABC implementation. Our verification
approach is detailed in Section 3. Finally we conclude in Section 4.
**Related Work. Serial interfaces were subject to formal verification in the work**
of Berry et al. [1]. They specified a UART model in a synchronous language and
proved a set of safety properties regarding FIFO queues. Based on that a hardware
description can be generated and run on a FPGA. However, data transmission was
not analyzed.
A recent proof of the Biphase-Mark protocol has been proposed by Brown and
Pike [4]. Their models include metastability, but verification is only done at the specification level rather than on the concrete hardware. The models were extracted manually.
Formal verification of clock synchronization in timed systems has a long his
tory [9, 12, 17]. Almost all approaches focused on algorithmic correctness, rather
than on concrete system or even hardware verification. As an exception Bevier and
Young [2] describe the verification of a low-level hardware implementation of the
Oral Message algorithm. The presented hardware model is quite simplified, as synchronous data transmission is assumed.
Formal proofs of a clock-synchronization circuit were reported by Miner [10].
Based on abstract state machines, a correctness proof of a variant of the Welch-Lynch algorithm was carried out in PVS. However, the algorithm is only manually
translated to a hardware specification, which is finally refined semi-automatically
to a gate-level implementation. No formal link between both is reported. Besides,
low-level bit transmission is not covered in the formal reasoning.
The formal analysis of large bus architectures was tackled among others by
Rushby [15] and Zhang [19]. Rushby worked on the time-triggered-architecture
(TTA), and showed correctness of several key algorithms as group membership
and clock synchronization. Assuming correct clock synchronization, Zhang verified
properties of the Flexray bus guardian. Both approaches do not deal with any hardware implementation. The respective standard is translated to a formal specification
by hand.
In [14] Rushby proposes the separation of the verification of timing-related prop
erties (as clock synchronization) and protocol specifications. A set of requirements
is identified, which an implementation of a scheduler (e.g. in hardware) has to obey.
In short (i) clock synchronization and (ii) a round offset large enough to compensate
the maximum clock drift are assumed. The central result is a formal and generic PVS
simulation proof between the real-time system and its lock-step and synchronous
specification. Whereas the required assumptions are similar to ours, they have not
been discharged for concrete hardware.
In [12] Rushby’s framework is instantiated with the time triggered protocol
(TTP). Pike [13] corrects and extends Rushby’s work, and instantiates the new
framework with SPIDER, a fly-by-wire communication bus used by NASA. The
time-triggered model was extracted from the hardware design by hand. But neither
approaches proved correctness of any gate-level hardware.
## 2 Automotive Bus Controller (ABC) Implementation
We consider a time-triggered scenario. Time is divided into so-called rounds each
consisting of ns slots. We uniquely identify slots by a tuple consisting of a round-number r ∈ ℕ and a slot-number s ∈ [0 : ns − 1]. Predecessors (r, s) − 1 and successors (r, s) + 1 are computed modulo ns.
The ABC is split in four main parts: (a) the host-interface provides the connec
tion to the host, e.g. a microprocessor, and contains configuration registers (b) the
send-environment performs the actual message broadcast and contains a send-buffer
(c) the receive-environment takes care of the message reception and contains a
receive-buffer (d) the schedule-environment is responsible for the clock synchronization and the obedience to the schedule.
**Configuration Parameter. Unless synchronization is performed, slots are locally**
T hardware cycles long. A slot can be further subdivided into three parts: an initial
as well as a final offset (each off hardware cycles) and a transmission window (tc
hardware cycles). The length of the transmission window is implicitly given by the
slot-length and the offset. Within each slot a fixed-length message of ℓ bytes is
broadcast.
The local schedule sendl, which is implemented as a bit-vector, indicates if the
ABC is the sender in a given slot. Intuitively, in slot s, if sendl[s] = 1 then the ABC
broadcasts the message stored in the send-buffer. Note that the ABC implementation
is not aware of the round-number. It simply operates according to the slot-based
fixed schedule, that is repeated time and again.
The special parameter iwait indicates the number of hardware cycles to be
awaited before the ABC starts executing the schedule after power-up.
All parameters introduced so far are stored in configuration registers that need to
be set by the host (we support memory mapped I/O) during an initialization phase.
The host indicates that it has finished the initialization by invoking a setrd command.
We do not go into details here, the interested reader may consult [7,8].
**Message Broadcast. The send-environment starts broadcasting the message con-**
tained in the send-buffer sb if the schedule-environment raises the startsnd signal.
The receive-environment permanently listens on the bus. At an incoming mes
sage, indicated by a falling edge (the bus is high-active), it signals the start of a reception to the schedule-environment by raising the startedrcv signal for one cycle.
In addition it decodes the broadcast frame and writes the message into the receive
buffer rb.
**Fig. 1 Schedule Automaton**
**Scheduling. The schedule-environment maintains two counters: The cycle counter cy**
and the current slot counter csn. Both counters are periodically synchronized at the
beginning of every round. All ECUs except the one broadcasting in slot 0 (we call
the former slaves and the latter master) synchronize their counters to the incoming
transmission in slot 0. Hence, the startedrcv signal from the receive environment is
used to provide a synchronized time base (see below). Furthermore, the schedule-environment initiates the message broadcast by raising the startsnd signal for one
cycle.
The schedule environment implements the automaton from Fig. 1. The automa
ton takes the following inputs: The startedrcv signal as described above. The signal
_setrd denotes the end of the configuration phase. The signal sendl0 indicates if the_
ECU is the sender in the first slot and thus the master. Three signals are used to
categorize the cycle counter; eqiwait indicates if the initial iwait cycles have been
reached, similar to eqoff and eqT. The signal eqns indicates that the end of a round
has been reached, i.e. that the slot counter equals ns 1. Finally sendlcur indicates
_−_
if the ABC is the sender in the current slot, i.e. sendlcur = sendl[csn].
The automaton has six states and is clocked each cycle. Its functionality can be
summarized as follows: If the reset signal is raised (which is assumed to happen only
at power-up) the automaton is forced into the idle-state. If the host has finished the
initialization and thus invoked setrd we split cases depending on the sendl0 signal. If
the ABC is the master, i.e. if sendl0 holds, the ABC waits first iwait hardware cycles
(in the iwait-state), then an additional off cycles (in the offwait-state) before it starts
broadcasting the message (in the startbroad-state) and proceeds to the Twait-state.
If the ABC is a slave (sendl0 = 0), it waits in the rcvwait-state for an active
_startedrcv signal and then proceeds to the Twait-state. There all ABCs await the end_
of a slot indicated by eqT. Then we split cases if the round is finished or not. If
the round is not finished yet (indicated by ¬eqns), all ABCs proceed to the offwait-state. Furthermore, the sender in the current slot (indicated by sendlcur) proceeds
to the startbroad-state, initiates the message broadcast and then proceeds to the
_Twait-state; all other ABCs skip the startbroad-state and proceed directly to the_
_Twait-state. At the end of a round, the master simply repeats the ‘normal’ sender_
cycle (from the Twait-state to the offwait-state and finally to the Twait-state again).
All other ABCs proceed to the rcvwait-state to await an incoming transmission.
Once initialized, the master ABC follows the schedule without any synchroniza
tion. At the beginning of a round it waits off many cycles and initiates the broadcast.
The clock synchronization on the slave ABCs is done in the rcvwait-state. In this
state the cycle counter is not altered but simply stalls in its last value. At an incoming
transmission (from the master) the slaves clear their slot-counter and set their cycle
counter to off, i.e. the number of hardware cycles at which the master initiated the
broadcast. After this, all ABCs are (relatively) synchronized to the master's clock.
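The transition structure just described can be rendered compactly. The following Python sketch is our own reading of the six-state automaton of Fig. 1 (the authors' implementation is a unary-coded bit-vector transition system in hardware and Isabelle/HOL, not this code); state and signal names follow the text.

```python
# Sketch (ours) of the six-state schedule automaton described above.
# All inputs are the one-bit signals named in the text.
def next_state(state, reset, setrd, sendl0, startedrcv,
               eqiwait, eqoff, eqT, eqns, sendlcur):
    if reset:
        return "idle"                       # reset forces the idle-state
    if state == "idle" and setrd:
        # master waits iwait cycles first; slaves await the first reception
        return "iwait" if sendl0 else "rcvwait"
    if state == "iwait" and eqiwait:
        return "offwait"
    if state == "offwait" and eqoff:
        # the sender of the current slot initiates the broadcast
        return "startbroad" if sendlcur else "Twait"
    if state == "startbroad":
        return "Twait"                      # broadcast started, await slot end
    if state == "Twait" and eqT:
        if not eqns:
            return "offwait"                # round not finished: next slot
        # end of round: master repeats the sender cycle, slaves resynchronise
        return "offwait" if sendl0 else "rcvwait"
    if state == "rcvwait" and startedrcv:
        return "Twait"                      # counters were set on reception
    return state                            # otherwise: stay
```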
**Hardware Construction. The number of ECUs connected to the bus is denoted ne.**
Thus an ECU number is given by u ∈ [0 : ne − 1]. We use subscript ECU numbers to refer to single ECUs.
We denote the hardware configurations of ECUu by hu. If the index u of the ECU
does not matter, we drop it. The hardware configuration is split into a host configuration and an ABC configuration. Since we do not go into details regarding the host,
we stick to h to denote the configuration of our ABC. Its essential components are:
• Two single bit-registers, one for sending and one for receiving. Both are directly connected to the bus. We denote them h.S and h.R.
• A second receiver register, denoted h.R̂, to deal with metastability (see Sect. 3).
• Send buffer h.sb and receive buffer h.rb, each capable of storing one message.
• The current slot counter h.csn and the cycle counter h.cy.
• The schedule automaton, implemented straightforwardly as a transition system on a unary coded bit-vector. We use h.state to code the current state (see Fig. 1).
• Configuration registers.
The configuration registers are written immediately after reset / power-up. They
contain in particular the locally relevant portions of the scheduling function.
To simplify arguments regarding the schedule we define a global scheduling
function send. Given a slot-number s it returns the number of the ECU sending
in this slot. Let sendlu denote the local schedule of ECUu, then send(s) = u ⇔
_sendlu[s] = 1. Note that this definition implicitly requires a unique sender definition_
for each slot. Otherwise correct message broadcast becomes impossible due to bus
contention.
Thus if ECUu is (locally) in a slot with slot index s and send(s) = u then ECUu
will transmit the content of the send buffer h.sb via the bus during some transmission
interval. A serial interface that is not actively transmitting during slot (r, _s) puts by_
construction the idle value (the bit 1) on the bus.
If we can guarantee that during the transmission interval all ECUs are locally in
slot (r, _s), then transmission will be successful. The clock synchronization algorithm_
together with an appropriate choice of the transmission interval will ensure that.
-----
_e (j)r_
_ts_ _th_
_clkr_
_cer_
_Rr_ _y_
_Ss,dinr_ _Ω_
_clks_
|ts|e ( r|(j)|)|th|Col6|
|---|---|---|---|---|---|
|ts||||th||
|||||||
|||||||
|y||||Ω||
|||||Ω|x|
|||||||
_e (i)s_
_tpd_
**Fig. 2 Clock Edges**
## 3 Verification
**Fig. 3 Schedule**
To argue about asynchronous distributed communication systems we have to formalize the behavior of the digital circuits connected to the analog bus. Using the
formalization of digital clocks we introduce a hardware model for continuous time.
In the remainder of this section we sketch the message transmission correctness, detail the scheduling correctness and combine both into a single correctness statement.
**Clocks. The hardware of each ECU is clocked by an oscillator having a nominal**
clock period of τref . The individual clock period τu of an ECUu is allowed to deviate
by at most δ = 0.15% from τref, i.e. ∀u. |τu − τref| ≤ τref · δ. Note that this limitation can be easily achieved by current technology.

Thus the relative deviation of two individual clock periods compared to a third clock period is bounded by |τu − τv| ≤ τw · Δ, where Δ = 2δ/(1 − δ).

Given some clock-start offset ou < τu, the date of the clock edge eu(i) that starts cycle i on ECUu is defined by eu(i) = ou + i · τu.
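As a quick numeric illustration (ours; the normalised period and the cycle count are not constants from the paper), the bound Δ = 2δ/(1 − δ) and the clock-edge dates can be checked directly:

```python
# Numeric sketch (ours) of the drift bounds above; tau_ref normalised to 1.
delta = 0.0015                      # maximal relative deviation: 0.15 %
tau_ref = 1.0
tau_fast = tau_ref * (1 - delta)    # fastest admissible clock period
tau_slow = tau_ref * (1 + delta)    # slowest admissible clock period

Delta = 2 * delta / (1 - delta)
# Relative deviation of two clocks measured in a third (worst-case) period:
assert abs(tau_slow - tau_fast) <= tau_fast * Delta + 1e-12

def edge(o_u, tau_u, i):
    """Date e_u(i) = o_u + i * tau_u of the clock edge starting cycle i."""
    return o_u + i * tau_u

T = 10_000                          # illustrative cycle count
drift = edge(0.0, tau_slow, T) - edge(0.0, tau_fast, T)
print(f"drift after {T} cycles: {drift:.2f} periods "
      f"(bound {T * tau_fast * Delta + 1e-9:.2f})")
```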
In our scenario all ECUs are connected to a bus. The sending ECUs broadcasts
data which is sampled by all other ECUs. Due to clock drift it is not guaranteed, that
the timing parameter of the sampling registers are obeyed. This problem is solved
by serial interfaces. To argue formally we first introduce a continuous time model
for bits being broadcast.
**Hardware Model with Continuous Time. The problems solved by serial inter-**
faces can by their very nature not be treated in a standard digital hardware model
with a single digital clock clk. Nevertheless, we can describe each ECUu in such a
model having its own hardware configuration hu.
To argue about the sender register h.S of a sending ECU transmitting data via
the bus to a receiver register h.R of a receiving ECU, we have to extend the digital
model.
For the registers connected to the bus –and only for those– we extend the hard
ware model such that we can deal with the concepts of propagation delay (tpd),
setup time (ts), hold time (th), and metastability of registers. In the extended model
used near the bus we therefore consider time to be a real valued variable t.
Next we define in the continuous time model the output of the sender register hu.S
during cycle i of ECUu, i.e. for t ∈ (eu(i) : eu(i + 1)]. The content of hu.S at time t is
denoted by Su(t). In the digital hardware model we denote the value of some register, e.g. R, during cycle i by h^i.R, which equals the value at the clock edge eu(i + 1).

If in cycle i − 1 the digital clock enable signal Sce(h_u^{i−1}) was off, we see during the whole cycle the old digital value h_u^{i−1}.S of the register. If the register was clocked (Sce(h_u^{i−1}) = 1) and the propagation delay tpd has passed, we see the new digital value of the register, which equals the digital input Sdin(h_u^{i−1}) during the previous cycle (see Fig. 2). Otherwise we cannot predict what we see, which we denote by Ω:

$$S_u(t) = \begin{cases} h_u^{i-1}.S & : S_{ce}(h_u^{i-1}) = 0 \ \wedge\ t \in (e_u(i) : e_u(i+1)] \\ S_{din}(h_u^{i-1}) & : S_{ce}(h_u^{i-1}) = 1 \ \wedge\ t \in [e_u(i) + t_{pd} : e_u(i+1)] \\ \Omega & : \text{otherwise} \end{cases}$$
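Read operationally, this case distinction is a small function of continuous time. The Python sketch below is our paraphrase of the definition, with Ω modelled as a sentinel value:

```python
OMEGA = object()   # sentinel for an unpredictable/metastable analog value

def S_u(t, i, e_u, t_pd, S_prev, Sce_prev, Sdin_prev):
    """Continuous-time output of sender register S during cycle i of ECU_u.

    e_u       : clock-edge function e_u(i)
    t_pd      : propagation delay
    S_prev    : digital register content in cycle i-1
    Sce_prev  : clock enable in cycle i-1
    Sdin_prev : digital register input in cycle i-1
    """
    if not Sce_prev and e_u(i) < t <= e_u(i + 1):
        return S_prev          # not clocked: old value visible all cycle
    if Sce_prev and e_u(i) + t_pd <= t <= e_u(i + 1):
        return Sdin_prev       # clocked and propagation delay has passed
    return OMEGA               # shortly after the edge: value unknown
```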
The bus is an open collector bus modeled as the conjunction over all registers Su(t)
for all t and u.
Now consider the receiver register hv.R on any ECUv. It is continuously turned
on; thus the register always samples from the bus. In order to define the new digital value h_v^j.R of register R during cycle j on ECUv, we have to consider the value of the bus in the time interval (ev(j) − ts, ev(j) + th). If during that time the bus has a constant digital value x, the register samples that value, i.e. ∃x ∈ {0, 1}. ∀t ∈ (ev(j) − ts, ev(j) + th). bus(t) = x ⇒ h_v^j.R = x. Otherwise we define h_v^j.R = Ω.

We have to argue how to deal with unknown values Ω as input to digital hardware. We will use the output of register hu.R only as input to a second register hu.R̂ whose clock enable is always turned on, too. If Ω is clocked into hu.R̂ we assume that hu.R̂ has an unknown but digital value, i.e. h_u^j.R = Ω ⇒ h_u^{j+1}.R̂ ∈ {0, 1}.

In real systems the counterpart of register R̂ exists. The probability that R becomes metastable for an entire cycle and that this causes R̂ to become metastable too is for practical purposes zero.
**Continuous Time Lemmata for the Bus. Let ECUs be the sender and ECUr a**
receiver in a given slot. Let i be a sender cycle such that Sce(h_s^{i−1}) = 1, i.e. the output of S is not guaranteed to stay constant at time es(i). This change can only affect the value of register R of ECUr in cycle j if it occurs before the sampling edge er(j) plus the hold time th, i.e. es(i) < er(j) + th. The first cycle that is possibly affected is denoted by cy_{r,s}(i) = min{ j | es(i) < er(j) + th }.
In what follows we assume that all ECUs other than the sender unit ECUs put the
value 1 on the bus and keep their Sce signal off (hence bus(t) = Ss(t) for all t under
consideration). Furthermore, we consider only one receiving unit ECUr. Because
the indices r and s are fixed we simply write cy(i) instead of cyr,s(i).
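The cycle mapping cy(i) can be computed directly from the clock-edge dates. The sketch below (ours, with hypothetical clock parameters) returns the smallest receiver cycle that a sender edge can affect:

```python
import math

def cy(i, o_s, tau_s, o_r, tau_r, t_h):
    """Smallest receiver cycle j with e_s(i) < e_r(j) + t_h.

    Clock edges are e_x(k) = o_x + k * tau_x; t_h is the hold time.
    """
    x = (o_s + i * tau_s - o_r - t_h) / tau_r
    return max(math.floor(x) + 1, 0)   # smallest integer strictly above x

# Hypothetical parameters: receiver 0.15 % slower, small offset and hold time.
print(cy(i=1000, o_s=0.0, tau_s=1.0, o_r=0.3, tau_r=1.0015, t_h=0.1))
```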
**Theorem 1 (Message Broadcast Correctness).** Let the broadcast start in sender-cycle i. The value of the send buffer of ECUsend(s) is copied to all receive buffers on the network side within tc sender cycles, i.e.

$$\forall u.\ h_u^{cy(i+tc)}.rb = h_{send(s)}^{i}.sb$$
This theorem is proven by an in-depth analysis of the send-environment and the
receive-environment. For details see [8]. We do not go into details regarding the
message transmission here. Instead we focus on the scheduling correctness.
**Scheduling. We assume w.l.o.g. that the ECU with number 0 is the master, i.e.**
_send(0) = 0. Let pu be the point in time when ECUu is switched on. We assume_
that at most cpmax hardware cycles have passed on the master ECU from the point
in time it was switched on until all other ECUs are switched on, too. Thus ∀u. |pu − p0| ≤ cpmax · τ0.
Once initialization is done, all hosts invoke a setrd command. The master ECU
waits iwait hardware cycles before it starts executing the schedule. We assume that
that there exists a point in time denoted Imax at which all slaves have invoked the
_setrd command and await the first incoming message. This assumption can be easily_
discharged by deriving an upper bound for the duration of the initialization phase,
say imax hardware cycles in terms of the master ECU, and choosing iwait to be
_cpmax + imax. The upper bound can be obtained by industrial worst case execution_
time (WCET) analyzers [6] for the concrete processor and software.
We introduce some notation to simplify the arguments regarding single slots.
The start time of slot (r, s) on an ECUu is denoted by αu(r, s). Initially, for all u we define αu(0, 0) = Imax. To define the slot start times greater than slot (0, 0) we need a predicate schedexec that indicates if the schedule automaton is in one of the three executing states, i.e. schedexec(h_u^i) = h_u^i.state ∈ {offwait, Twait, startbroad}. Let c be the smallest local hardware cycle such that eu(c) is greater than αu((r, s) − 1), schedexec(h_u^c) holds, h_u^c.cy = 0, and h_u^c.csn = s. Moreover let c′ be the smallest cycle such that eu(c′) is greater than αu((r, s) − 1) and h_u^{c′}.state = rcvwait.

$$\alpha_u(r,s) = \begin{cases} e_u(c) & : u = 0 \ \vee\ s > 0 \\ e_u(c') & : \text{otherwise} \end{cases}$$
Using the definition of a clock edge we obtain the hardware cycle corresponding to αu(r, s), denoted by αtu(r, s).

The local timers are synchronized each round. Next we define the point in time when the synchronization is done in round r. The synchronization end time of round r on ECUu, denoted by βu(r), is defined similarly to the slot start time. Let c be the smallest hardware cycle such that schedexec(h_u^c) holds, h_u^c.cy = off, and h_u^c.csn = 0. Then βu(r) is defined by eu(c).
**Lemma 1 (Synchronization Times Relation).** For all u the synchronization of ECUu to the master is completed within the adjustment time ad = 10 cycles relative to an arbitrary clock period τw, i.e. β0(r) = α0(r, 0) + off · τ0 and βu(r) < β0(r) + 10 · τw.
The proof of this lemma is split into two parts. First, an analysis of the sender bounds
the delay between an active startsnd signal and the actual transmission start. Second,
we need to bound the delay on the receiver side until the startedrcv signal is raised
after an incoming transmission plus an additional cycle to update the counters and
the schedule control automaton. Next we relate the start times of slots on the same
ECU.
**Lemma 2 (Slot Start Times Relation).** The start of slot (r, s) on the master ECU depends only on the progress of the local counter, i.e. α0(r, s) = α0((r, s) − 1) + T · τ0. The start of slot (r, s) on all other ECUs is given by:

$$\alpha_u(r,s) = \begin{cases} \beta_u(r) + (T - \mathit{off}) \cdot \tau_u & : s = 1 \\ \alpha_u((r,s) - 1) + T \cdot \tau_u & : s \neq 1 \end{cases}$$

Proof by induction on r and s using arguments for the concrete hardware.
The transmission is started in slot (r, s) by ECUsend(s) if the local cycle count equals off. This point in time is denoted by ts(r, s) = αsend(s)(r, s) + off · τsend(s). According to Theorem 1 the transmission ends at time te(r, s) = ts(r, s) + tc · τsend(s) = αsend(s)(r, s) + (off + tc) · τsend(s).

The schedule is correct if the transmission interval [ts(r, s), te(r, s)] is contained in the time interval when all ECUs are in slot (r, s), as depicted in Fig. 3.
**Theorem 2 (Schedule Correctness).** All ECUs are in slot (r, s) before the transmission starts. Furthermore, the transmission must be finished before any ECU thinks it is in the next slot, i.e. αu(r, s) < ts(r, s) and te(r, s) < αu((r, s) + 1).

This theorem is proven by a case split on (r, s) using Lemmata 1 and 2. Now we can state the overall transmission correctness in the digital hardware model:

**Theorem 3 (Overall Transmission Correctness).** Consider slot (r, s). The value of the send buffer of ECUsend(s) at the start of slot (r, s) is copied to all receive buffers by the end of that slot, i.e.

$$\forall u.\ h_u^{\alpha t_u((r,s)+1)-1}.rb = h_{send(s)}^{\alpha t_{send(s)}(r,s)}.sb$$

To prove this theorem we combined Theorem 1 and Theorem 2. According to Theorem 1 the actual broadcast is correct if the transmission window [ts(r, s), te(r, s)] is big enough. The latter is proven by Theorem 2.
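To make the containment property of Theorem 2 tangible, the following Python sketch checks it numerically for one round under worst-case drift. All constants (slot length T, offset, transmission window, adjustment time, network size and schedule) are hypothetical values chosen only to respect the stated side conditions; they are not the verified parameters of the ABC.

```python
# Hypothetical constants (not the verified ones); off + tc < T - off must hold.
ns, T, off, tc, ad = 8, 1000, 60, 870, 10       # all in master cycles
delta = 0.0015
taus = [1.0, 1.0 + delta, 1.0 - delta, 1.0 + delta, 1.0 - delta]  # ECU 0: master

def alpha(u, s):
    """Start of slot (0, s) on ECU_u per Lemmata 1 and 2 (round 0 only;
    alpha(u, 0) is normalised to 0; for s = ns the formula approximates the
    next round start before resynchronisation)."""
    if u == 0 or s == 0:
        return s * T * taus[0]
    beta_u = off * taus[0] + ad * max(taus)     # synchronisation end (Lemma 1)
    return beta_u + (T - off) * taus[u] + (s - 1) * T * taus[u]

send = lambda s: s % len(taus)                  # assumed schedule, send(0) = 0
for s in range(ns):
    tau_s = taus[send(s)]
    ts = alpha(send(s), s) + off * tau_s        # transmission start
    te = ts + tc * tau_s                        # transmission end
    for u in range(len(taus)):
        assert alpha(u, s) < ts and te < alpha(u, s + 1), (u, s)
print("Theorem 2 containment holds for the sampled parameters")
```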
## 4 Conclusion
In this paper we present a formal correctness proof of a distributed automotive system at gate-level (Sect. 3) along with its hardware implementation (Sect. 2). The
hardware model has been formalized in Isabelle/HOL on boolean gates.
While a simple version of the message transmission correctness has already been
published before [8,16], in this new work, we have formally analyzed the scheduler
itself and have integrated both results into a single correctness statement. All lemmata relating to scheduling correctness have been formally proven in Isabelle/HOL, which took about one person-year.
We used automatic tools such as the symbolic, open-source model checker NuSMV
to discharge properties related to bit-vector operations and the schedule automaton
of the hardware. With our implementation heavily using bit-vectors, we ran into
the infamous state explosion problem. By resorting to IHaVeIt (a domain-reducing
preprocessor for model checkers) we were able to cope with this problem. However,
missing support for real-linear arithmetic in the automatic tool landscape made the
verification of the analog and timed models tedious. Yet the integration of decision
procedures of dense-order logic would be helpful. In short: automatic tools took a heavy burden from us in the digital world but were almost useless for continuous-time analysis.
Summing up, our work provides a strong argument for the feasibility of formal
and pervasive verification of concrete hardware implementations at gate-level.
## References
1. Berry, G., Kishinevsky, M., Singh, S.: System level design and verification using a synchronous language. In: ICCAD, pp. 433–440 (2003)
2. Bevier, W., Young, W.: The proof of correctness of a fault-tolerant circuit design. In: Second
IFIP Conference on Dependable Computing For Critical Applications, pp. 107–114 (1991)
3. Beyer, S., Böhm, P., Gerke, M., Hillebrand, M., In der Rieden, T., Knapp, S., Leinenbach, D.,
Paul, W.J.: Towards the formal verification of lower system layers in automotive systems. In:
ICCD ’05, pp. 317–324. IEEE Computer Society (2005)
4. Brown, G.M., Pike, L.: Easy parameterized verification of biphase mark and 8N1 protocols.
In: TACAS’06, LNCS, vol. 3920, pp. 58–72. Springer (2006)
5. Cimatti, A., Clarke, E.M., Giunchiglia, E., Giunchiglia, F., Pistore, M., Roveri, M., Sebastiani, R., Tacchella, A.: NuSMV 2: An open source tool for symbolic model checking. In: CAV '02, pp. 359–364. Springer-Verlag (2002)
6. Ferdinand, C., Martin, F., Wilhelm, R., Alt, M.: Cache Behavior Prediction by Abstract Interpretation. Sci. Comput. Program. 35(2), 163–189 (1999)
7. Hillebrand, M., In der Rieden, T., Paul, W.: Dealing with I/O devices in the context of pervasive system verification. In: ICCD '05, pp. 309–316. IEEE Computer Society (2005)
8. Knapp, S., Paul, W.: Realistic Worst Case Execution Time Analysis in the Context of Pervasive
System Verification. In: Program Analysis and Compilation, LNCS, vol. 4444, pp. 53–81
(2007)
9. Lamport, L., Melliar-Smith, P.M.: Synchronizing clocks in the presence of faults. J. ACM
**32(1), 52–78 (1985)**
10. Miner, P.S., Johnson, S.D.: Verification of an optimized fault-tolerant clock synchronization
circuit. In: Designing Correct Circuits. Springer (1996)
11. Nipkow, T., Paulson, L.C., Wenzel, M.: Isabelle/HOL: A Proof Assistant for Higher-Order
Logic, LNCS, vol. 2283. Springer (2002)
12. Pfeifer, H., Schwier, D., von Henke, F.W.: Formal verification for time-triggered clock synchronization. In: DCCA-7, vol. 12, pp. 207–226. IEEE Computer Society, San Jose, CA (1999)
13. Pike, L.: Modeling Time-Triggered Protocols and Verifying Their Real-Time Schedules. In:
FMCAD’07, pp. 231–238 (2007)
14. Rushby, J.: Systematic formal verification for fault-tolerant time-triggered algorithms. IEEE
Transactions on Software Engineering 25(5), 651–660 (1999)
15. Rushby, J.: An overview of formal verification for the time-triggered architecture. In:
FTRTFT’02, LNCS, vol. 2469, pp. 83–105. Springer-Verlag, Oldenburg, Germany (2002)
16. Schmaltz, J.: A Formal Model of Clock Domain Crossing and Automated Verification of
Time-Triggered Hardware. In: FMCAD’07, pp. 223–230. IEEE/ACM, Austin, TX, USA
(2007)
17. Shankar, N.: Mechanical verification of a generalized protocol for byzantine fault tolerant
clock synchronization. In: FTRTFT’92, vol. 571, pp. 217–236. Springer, Netherlands (1992)
18. Tverdyshev, S., Alkassar, E.: Efficient bit-level model reductions for automated hardware verification. In: TIME 2008, to appear. IEEE Computer Society Press (2008)
19. Zhang, B.: On the Formal Verification of the FlexRay Communication Protocol. Automatic
Verification of Critical Systems (AVoCS’06) pp. 184–189 (2006)
-----
|
{
"disclaimer": "Notice: The following paper fields have been elided by the publisher: {'abstract', 'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1007/978-0-387-09661-2_6?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1007/978-0-387-09661-2_6, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "BRONZE",
"url": "https://link.springer.com/content/pdf/10.1007/978-0-387-09661-2_6.pdf"
}
| 2,008
|
[
"JournalArticle"
] | true
| 2008-09-07T00:00:00
|
[] | 9,041
|
en
|
[
{
"category": "Medicine",
"source": "external"
},
{
"category": "Physics",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/0141e776d7e5a2b7699f1b5cebf8af033b0c3e25
|
[
"Medicine"
] | 0.872367
|
Bright and dark Talbot pulse trains on a chip
|
0141e776d7e5a2b7699f1b5cebf8af033b0c3e25
|
Communications Physics
|
[
{
"authorId": "2116291604",
"name": "Jiaye Wu"
},
{
"authorId": "2239218697",
"name": "Marco Clementi"
},
{
"authorId": "15795306",
"name": "E. Nitiss"
},
{
"authorId": "24355229",
"name": "Jianqi Hu"
},
{
"authorId": "2237914606",
"name": "C. Lafforgue"
},
{
"authorId": "1691785",
"name": "C. Brès"
}
] |
{
"alternate_issns": [
"0868-3166"
],
"alternate_names": [
"Commun Phys",
"Communications in Physics"
],
"alternate_urls": null,
"id": "4c30a099-ac6f-4ed0-ac0e-0ca618941906",
"issn": "2399-3650",
"name": "Communications Physics",
"type": "journal",
"url": "http://www.nature.com/commsphys/"
}
|
Temporal Talbot effect, the intriguing phenomenon of the self-imaging of optical pulse trains, is extensively investigated using macroscopic components. However, the ability to manipulate pulse trains, either bright or dark, through the Talbot effect on integrated photonic chips to replace bulky instruments has rarely been reported. Here, we design and experimentally demonstrate a proof-of-principle integrated silicon nitride device capable of imprinting the Talbot phase relation onto in-phase optical combs and generating the two-fold self-images at the output. We show that the GHz-repetition-rate bright and dark pulse trains can be doubled without affecting their spectra as a key feature of the temporal Talbot effect. The designed chip can be electrically tuned to switch between pass-through and repetition-rate-multiplication outputs and is compatible with other related frequencies. The results of this work lay the foundations for the large-scale system-on-chip photonic integration of Talbot-based pulse multipliers, enabling the on-chip flexible up-scaling of pulse trains’ repetition rate without altering their amplitude spectra. The generation of temporal Talbot effect, i.e., the formation of temporal self-imaging patterns, in integrated photonic devices, is limited by bulky setups that limit the repetition rate. The authors design a compact and tunable Talbot chip where the GHz-repetition-rate bright and dark pulse trains can be doubled without affecting their spectra.
|
### ARTICLE
https://doi.org/10.1038/s42005-023-01375-x **OPEN**
## Bright and dark Talbot pulse trains on a chip
#### Jiaye Wu 1, Marco Clementi 1, Edgars Nitiss1, Jianqi Hu 1, Christian Lafforgue 1 & Camille-Sophie Brès 1✉
Temporal Talbot effect, the intriguing phenomenon of the self-imaging of optical pulse trains,
is extensively investigated using macroscopic components. However, the ability to manipulate pulse trains, either bright or dark, through the Talbot effect on integrated photonic chips
to replace bulky instruments has rarely been reported. Here, we design and experimentally
demonstrate a proof-of-principle integrated silicon nitride device capable of imprinting the
Talbot phase relation onto in-phase optical combs and generating the two-fold self-images at
the output. We show that the GHz-repetition-rate bright and dark pulse trains can be doubled
without affecting their spectra as a key feature of the temporal Talbot effect. The designed
chip can be electrically tuned to switch between pass-through and repetition-rate-multiplication outputs and is compatible with other related frequencies. The results of this
work lay the foundations for the large-scale system-on-chip photonic integration of Talbot-based pulse multipliers, enabling the on-chip flexible up-scaling of pulse trains’ repetition rate
without altering their amplitude spectra.
1 École Polytechnique Fédérale de Lausanne (EPFL), Photonic Systems Laboratory (PHOSL), STI-IEM, Station 11, Lausanne CH-1015, Switzerland.
-----
Talbot effect, initially described by Henry Fox Talbot in 1836 as the spatial-interference-induced formation of self-imaging patterns[1], has given rise to enormous interest for
nearly two centuries due to its underlying physical mechanisms
and potential applications. Since the invention of lasers and the
implementation of modern ultrafast experimental techniques and
instruments, there has been a rapid growth of the studies of the
Talbot effect in the temporal[2–7], spectral[8,9], and azimuthal[10,11]
domains in the last decades[12], which have consolidated the
physical and mathematical understanding of this class of
phenomena[13]. The temporal and spectral Talbot effects can be
linked together through a space-time duality[14], and it was
recently found and theorised that all these domains of observations can be unified by the duality by isomorphism[15], i.e., through
space-time, position-momentum, and time-frequency dualities.
The temporal Talbot effect, in particular, can be fruitfully
employed in many ultrafast applications, such as integer, fractional, and arbitrary repetition rate multiplication (RRM)[4,5,16],
broadband full-field invisibility[17], and noiseless intensity
amplification[18]. Among these, RRM is highly appealing, especially in the case of extra-cavity scenarios like optical
communications[19], passive amplification[20], and microwave
photonics[21], where it can be implemented by spectral amplitude
and/or phase filtering[2,3,5,7,12,22–24] and provides a train of high-repetition-rate pulses that is hard to achieve in intra-cavity geometries. Besides the commonly studied temporal Talbot effect of
bright pulse trains, the mixing Talbot patterns of dark pulse trains
at higher RRMs were also recently investigated[7]. Also not long ago,
a novel method, based on the combination of Mach-Zehnder
interferometers (MZIs) and delay-line interferometers (DLIs), to
generate temporal Talbot effects has been experimentally
demonstrated[5], shedding light on its suitability for scalable onchip integration.
Realising photonic integration of optical functions, e.g., light
sources (especially the well-established integrated microcomb
sources)[25], amplifiers[26], or signal-processing components[27], is a
necessary step towards future applications and has attracted a
huge amount of interest. Currently, the Talbot effects are
observed and utilised on-chip in microscopy[28], spectroscopy[29],
and Talbot-cavity integrated lasers[30–32], serving as a coherent in-phase coupling element[31]. However, to the best of our knowledge,
the on-chip generation of the temporal Talbot effect has rarely
been discussed. One of the latest state-of-the-art demonstrations,
for example, is spectral phase filtering using a waveguide Bragg
grating on silicon[33]. Yet, the limited tunability of the Bragg
grating does not allow fine-tuning to produce a higher output
quality and leverage the full potential of Talbot photonic chips.
Also, the demonstration of an integrated Talbot processor at a
lower repetition rate than the reported 10 GHz is of significant
importance. The amount of dispersion required—and hence the
size of the device—scales indeed quadratically with the inverse of
the repetition rate[7], practically limiting the applicability of this
approach to lower repetition rate scenarios. Moreover, the Talbot
effect of the dark pulse trains has never been investigated in
integrated optics to date. Manipulating bright and dark pulse
trains with photonic integrated circuits (PICs), fully replacing
complex and bulky combinations of instruments and equipment,
is itself intriguing, enabling the possibility to seamlessly work
with other PICs, and might pioneer potential all-optical system-on-chip (SoC) applications.
In this work, we design and experimentally demonstrate, for
the first time to the best of our knowledge, a silicon nitride
(Si3N4) Talbot PIC based on a cascaded MZI and DLI geometry.
Unlike more conventional rate multiplier methods, where multiplying the repetition rate always leads to an alteration of the
optical spectra, a key feature of the temporal Talbot effect is that
the comb spectra could stay unaffected by energy redistribution
while achieving integer multiplications of the repetition rate. The
proof-of-principle PIC can imprint the temporal Talbot phase
relation onto the spectral components of an in-phase optical
comb and therefore produce the two-fold (2×) Talbot self-image,
doubling the repetition rates of bright or dark pulse trains. This
technology can be eventually scaled up to N delay lines, resulting
in an N-fold self-image. A key advantage over the conventional
dispersion-based method in terms of scalability is that, by the
proposed MZI-DLI scheme, the required length of the delay lines
scales linearly with the inverse of the repetition rate rather than
quadratically, making this approach particularly accessible for
lower repetition rate combs. In addition, embedded, electrically
controlled thermo-optic actuators enable the chip to arbitrarily
switch between a pass-through or a 2× RRM output. In principle,
it could also be designed for other repetition rates by changing the
length of the delay line. We believe that the results of this work
could serve as a reference design for large-scale photonic integration and broaden the understanding and the use of MZI on-chip devices.
Results
Principle of temporal Talbot effect in delay line structures.
Controlling the repetition rate of a pulse train inside a laser
cavity, especially when integer multiplications are desired, can be
very difficult. Conventional approaches to realise a multiplied
repetition rate reduce the frequency components at the cost of
significantly changing the amplitude spectra, e.g., spectral
amplitude filtering[22,34,35]. The Talbot effect offers a solution to
preserve the amplitude spectral profile (and, in particular, the
number of comb lines) by linearly acting only on the phase
spectrum of the input comb. The temporal Talbot phase relation
takes the form ϕk = π(p/q)k², with k being the comb mode index
(k = ±1, ±2, ±3, . . .) relative to the centre line (k = 0). p and q are
mutually prime positive integers. By means of quadratic Gauss
sums[36], the phase of each self-image also satisfies the Talbot
phase relation[15]:
$$\varphi_n = -\pi\left(\frac{s}{q}\,n^2 + c\right) \qquad (1)$$
Here, the parameters s and c are related to p and q via Eq. (2),
and n is the index of the pulse self-image (n = 0, 1, . . ., N − 1).
We follow the notations from refs. [15,36] and denote $[1/a]_b$ as the modular multiplicative inverse operation and $\left(\frac{a}{b}\right)$ as the Jacobi
symbol. The relation between the parameters s, c, p, q can now be
expressed as[7]:
$$s = \left[\frac{1}{2p}\right]_q,\quad c = \frac{q-1}{4} + \frac{\left(\frac{p}{q}\right)-1}{2}, \qquad \text{if } q \in O;$$

$$s = \left[\frac{1}{p}\right]_{2q},\quad c = -\frac{p}{4} - \frac{\left(\frac{q}{p}\right)-1}{2}, \qquad \text{if } q \in E. \qquad (2)$$
where O and E are, respectively, the odd and even integer sets.
The phase results of Eq. (2) rely on the parity of q. One can easily
show that when this aforementioned phase relation is satisfied,
the spectral shape is not affected.
By implementing the temporal Talbot phase relation, the
evolution diagram of temporal profiles of an optical pulse train
can be acquired, which is known as the Talbot carpet shown in
Fig. 1a. For clarity and without loss of generality, Fig. 1a is a half-period demonstration. The other half period (p/q ∈ [1, 2]) is the mirror of Fig. 1a, with the patterns (temporal profiles) at p/q = 2 being exactly the same as at p/q = 0.

Fig. 1 The mechanisms of temporal Talbot effect. a The temporal Talbot carpet. Here, the Talbot carpet shown is half of its full period, with marks for 1×–5× self-images. A non-phase-shifted 1× self-image will appear at p/q = 2/1. p and q are phase parameters described in Eq. (1). The 2× self-image at p/q = 1/2 is expected for both bright and dark pulse trains on the designed Talbot photonic chip, as theoretically and experimentally shown in Fig. 3c, f. The physical mechanisms of b the conventional dispersion-based and c the proposed Mach-Zehnder-interferometer (MZI)-based Talbot repetition-rate-multiplication (RRM) realisation. In the transmission and phase spectra, the solid blue lines are the theoretical curves, and the red circles are the points where the comb lines are.

The horizontal axis,
p/q, can be understood as phase evolution or propagation
distance, for it is also possible to launch a pre-assigned Talbot-phased optical comb into a single-mode fibre (SMF) and allow phase accumulation
through propagation[7], which is illustrated in Fig. 1b. If a doubled
repetition rate is desired, i.e., the 2× Talbot self-image, we seek
the phase relation with p/q = 1/2 whose line-by-line phase
relation of the comb should be [. . . π/2, 0, π/2, 0, π/2, 0, π/2, . . . ]
according to Eq. (2).
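As a one-line sanity check (ours), reducing ϕk = π(p/q)k² modulo 2π for p/q = 1/2 indeed reproduces the quoted alternating pattern:

```python
import math

p, q = 1, 2
phases = [(math.pi * p / q * k ** 2) % (2 * math.pi) for k in range(-3, 4)]
print([round(ph / math.pi, 2) for ph in phases])
# -> [0.5, 0.0, 0.5, 0.0, 0.5, 0.0, 0.5], i.e. [pi/2, 0, pi/2, 0, pi/2, 0, pi/2]
```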
It is reported that by incorporating the N parallel optical
tapped delay line structure, the Talbot phase relation can be
losslessly imprinted by re-distributing energy into each delay line
with different designed lengths, resulting in N× RRM[5]. The key
idea of this mechanism is to combine[24] the effects of spectral
amplitude filtering, which achieves N× RRM and phase filtering,
which maintains the corresponding amplitude spectrum by
altering the phase, to satisfy the temporal Talbot effect conditions.
A schematic of this mechanism is shown in Fig. 1c. It is worth
emphasising that the mechanism of this process is not as
seemingly simple as just delaying the pulse long enough such that
the delayed pulse train fits into the intervals of the non-delayed
one when recombining and produces N× the repetition rate, as
this operation would not, in general, satisfy the Talbot phase
relation. On the contrary, the design for the phase is the key to
preserving the amplitude spectra. After the energy splitting, the
frequency components of the comb on each delay line accumulate different phases during propagation. Due to the linear nature of the process, each frequency component remains unchanged and maintains perfect coherence with those in other delay lines. When they recombine, the line-by-line interference gives a phase relation that satisfies Eq. (1). This can be regarded as a combination of spectral amplitude filtering and spectral phase-only filtering[5], and for the N = 2 Talbot PIC designed in this work by using the generalised Landsberg-Schaar identity[37], the transfer function H of the combined filtering can be derived as:

$$H\!\left(f_k = \frac{k}{2\tau_{\mathrm{delay}}}\right) = \frac{1}{N}\sum_{n=0}^{N-1}\exp\!\left(-i2\pi n f_k \tau_{\mathrm{delay}} - i\varphi_n\right)\bigg|_{N=2} = \frac{1}{\sqrt{2}}\exp\!\left(-i\frac{\pi}{4} + i\phi_k\right) = \frac{1}{\sqrt{2}}\exp\!\left(-i\frac{\pi}{4} + i\frac{\pi}{2}k^2\right), \qquad (3)$$

which is exactly [. . . π/2, 0, π/2, 0, π/2, 0, π/2, . . . ] for k = [. . . −3, −2, −1, 0, +1, +2, +3, . . . ]. Therefore, the N = 2 Talbot PIC is capable of imprinting the Talbot phase relation onto an input comb, doubling the repetition rate of the corresponding pulse train while keeping the amplitude spectrum unchanged.

Fig. 2 The Si3N4 Talbot chip. a Design schematic showing the components and the dimensions (cross-sectional view) of the photonic circuit; b Microscopic photograph of the chip; c Theoretical (blue dashed curves) and experimental (red solid curves) transmission spectrum of the device in the frequency multiplication configuration.

It is worth noting that, as one can observe from Eq. (3), the imparted Talbot phase scales linearly with the time delay τdelay. By recalling the expression of the free-spectral range (FSR) for a given DLI, Δν = c/(ng·L), with L being the unbalanced length and ng the
group index, we observe that, here, the length required scales
linearly with the inverse of repetition rate (L ∝ Δν⁻¹), rather than quadratically (L ∝ Δν⁻²) as in the case of dispersion-based
temporal Talbot effect (see the form factor comparison in
“Discussion”), making our approach intrinsically advantageous in
the perspective application to low repetition rate combs.
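The practical consequence of this linear scaling can be made explicit. In the Python sketch below, the group index is an assumed value for a Si3N4 waveguide (the paper does not quote one here); the 125-ps delay obtained for a 4-GHz comb matches the chip described in the next section.

```python
c = 299_792_458.0                      # vacuum speed of light [m/s]
n_g = 2.0                              # assumed Si3N4 waveguide group index

for rep_rate in (4e9, 1e9):            # comb repetition rates [Hz]
    tau_delay = 1 / (2 * rep_rate)     # half-period delay for 2x multiplication
    L = c * tau_delay / n_g            # physical delay-line length
    print(f"{rep_rate / 1e9:.0f} GHz -> tau = {tau_delay * 1e12:.0f} ps, "
          f"L = {L * 100:.1f} cm")     # L scales like 1/rep_rate, not 1/rate^2
```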
Chip design and characterisation. To realise the N = 2 tapped
delay line structure on a chip, we designed a two-stage cascaded
MZI configuration, in which the optical length of the reference
arms can be tuned by electrically-driven thermo-optic actuators.
A schematic diagram of the device is shown in Fig. 2a. The first-stage MZI ensures equal energy distribution into the second stage
through local temperature control, while this control is also the
key to the pass-through operation mode that allows a direct pass
through the chip without manipulation. The second-stage MZI is
a DLI with a delay line of 125 ps, corresponding to the half-period
interval of a 4 GHz pulse train. The two branches are then
recombined at a coupler, whereas the recombination phase is
controlled by means of a thermo-optic actuator, which allows
slight variations of the effective local index of the waveguide as a
function of the bias current, ensuring the possibility of achieving
the Talbot condition. A microscopic photograph of the device is
shown in Fig. 2b.
A sinusoidal interferometric pattern exists owing to the unbalanced DLI. The transmission spectrum of
the device in this configuration is shown in Fig. 2c. The FSR of
the interferometer is designed to match the output comb
repetition rate of 8 GHz (blue dashed line), as confirmed by the
experimental measurement (red solid line). Note that the visibility
of the interferogram fringes can be varied between 0 and 1 by
acting on the first phase shifter. Similarly, the interferogram
transmission function can be shifted horizontally by acting on the
DLI local heater, thus allowing the tuning of the alignment of the
input comb lines with the transmission function of the device.
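The interferogram behaviour described above follows from the ideal two-path model. The Python sketch below is our own lossless idealisation: `split` stands in for the first-stage MZI setting (controlling fringe visibility) and `phi_dli` for the local heater (shifting the transmission function).

```python
import cmath
from math import pi

tau = 125e-12                          # delay-line imbalance [s]
print(f"FSR = 1/tau = {1 / tau / 1e9:.0f} GHz")   # the 8 GHz seen in Fig. 2c

def transmission(f, split=0.5, phi_dli=0.0):
    """Normalised power of an ideal two-path DLI at baseband frequency f."""
    a, b = split ** 0.5, (1 - split) ** 0.5
    field = a + b * cmath.exp(-1j * (2 * pi * f * tau + phi_dli))
    return abs(field) ** 2 / (a + b) ** 2

print(transmission(0.0), transmission(4e9))   # peak vs. notch for split = 0.5
```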
Proof-of-principle experiments: repetition rate doubling of
bright and dark pulse trains. We demonstrate the proof-of-principle experiments using the setup shown in Fig. 3a. We use a
tunable 10 dBm continuous-wave (CW) laser at C-band as a light
source. In light of the methods introduced in refs. [10,38], we utilise
a harmonic combination of 4, 8, and 12 GHz sinusoidal waves to
generate the 4-GHz modulating radio frequency (RF) signal for
the 1 × 2 lithium niobate (LiNbO3) Mach-Zehnder modulator
(MZM). The MZM generates bright and dark pulse trains,
respectively, at its two outputs, whose patterns are complementary to each other. Both of such input pulse trains have a
full width at half maximum intensity (FWHM) of 50 ps. In the
experiment, we connect to one output at a time.
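A minimal sketch (ours) of the drive synthesis; the harmonic weights are hypothetical, as the paper only states that 4, 8 and 12 GHz sinusoids are combined:

```python
import numpy as np

f0 = 4e9                                   # fundamental repetition rate [Hz]
t = np.linspace(0, 500e-12, 4000)          # two 250-ps periods
weights = [1.0, 0.5, 0.25]                 # hypothetical harmonic weights

# Sum of 4, 8 and 12 GHz sinusoids; driving the MZM with this waveform
# sharpens the pulses relative to a single 4-GHz tone, and the modulator's
# two complementary outputs carry the bright and dark pulse trains.
rf = sum(w * np.cos(2 * np.pi * (m + 1) * f0 * t)
         for m, w in enumerate(weights))
```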
The chip is mounted on a temperature-stabilised holder and set
to work under a constant temperature of 25 °C. The local
temperature of the MZI is also controlled by introducing a DC
current through the embedded resistance. This local temperature
change slightly affects the effective index and therefore the optical
lengths of the reference arm of the first MZI, determining the
light energy distribution into the second-stage DLI. By this
control, the two modes of operation of this chip can be realised,
namely, the pass-through mode, where the light goes through
only one arm of the DLI (with negligible propagation loss), and
the Talbot mode, where the light split evenly into the two arms of
the DLI and recombined into a 2× Talbot self-image, which
doubles the repetition rate.
The output signal is collected and analysed by a high-resolution optical spectrum analyser (OSA) and an oscilloscope
(OSC) for the retrieval of its spectral and temporal profiles,
respectively.
Besides the conventional bright pulse experiments, the
designed Talbot chip could also work with dark pulse trains by
the same principles. The results of the chip operations are
illustrated in Fig. 3b–g. To the best of our knowledge, this is the
first demonstration of the temporal Talbot effect of dark pulse
trains on a chip, following the discovery and discussions on the
mixing Talbot patterns of the dark pulse trains[7]. In Fig. 3b, e, by
tuning the local temperature control on the first MZI, the pass-through mode of chip operation presents the exact same temporal
profile as the input. Under this circumstance, the optical comb is
still an in-phase comb without any manipulation. The measured
pulse train fits the theoretical curves well.
By finely adjusting the DC voltage of the local temperature
control, the DLI starts to provide a relative 125-ps (1/2 × 1/(4
GHz) = 125 ps) delay to one branch of the energy, and the chip
can reach a state such that a relatively weaker secondary pulse
train appears at the intervals of the original pulse train, finally
stabilising at almost the same intensity. The created new pulse
train has twice the repetition rate (8 GHz), as shown in Fig. 3c, f,
which matches well with the theoretical 8-GHz pulse train in
dashed lines. The measured FWHM of the bright 8-GHz pulse
-----
Fig. 3 Temporal Talbot effect of bright and dark pulse trains on a chip. a Schematic of the experimental setup. CW continuous wave, EDFA erbium-doped fibre amplifier, PC polarisation controller, OSC oscilloscope, OSA optical spectrum analyser. Temporal profiles of b the original 4-GHz bright pulse
trains at pass-through mode operation and c the corresponding 2× Talbot self-image of 8-GHz. Solid curves denote the experimental data, and the dashed
curves represent the corresponding theoretical predictions. d Spectra comparison of these two pulse trains. e–g The corresponding temporal and spectral
profiles of the dark pulse train. In (b, c, e, f), the theoretical curves are plotted in dashed blue lines, and the experimental ones are illustrated in solid red
lines. In (d and g), the 4-GHz spectra are in yellow, and the 8-GHz (Talbot) spectra are in blue. h Temporal profile versus the dissipated power on the first-stage interferometer, showing the two operations of the chip and the transition regions in between.
train is also 50 ps, while the FWHM of its dark counterpart is
slightly smaller, with a more uneven DC component between
every two dark pulses, as can be interpreted from Fig. 3f. This is
due to the destructive intra-pulse interference happening between
the overlapping part of the two recombined pulse trains.
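The doubling mechanism can also be illustrated numerically. The following Python sketch (purely illustrative; pulse parameters taken from the text, and the recombination treated as a simple amplitude superposition of non-overlapping pulses) interleaves a 4 GHz Gaussian pulse train with its 125 ps delayed replica:

```python
import numpy as np

# Illustrative sketch: repetition-rate doubling as the coherent sum of a
# 4 GHz Gaussian pulse train and its 125 ps delayed replica.
fs = 1e12                            # sampling rate [Hz]
t = np.arange(0, 2e-9, 1 / fs)       # 2 ns observation window
rep = 4e9                            # input repetition rate [Hz]
fwhm = 50e-12                        # pulse width from the experiment
sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))

def train(t0):
    # Sum of Gaussian pulse envelopes centred every 1/rep, offset by t0
    centres = np.arange(t0, t[-1], 1 / rep)
    return sum(np.exp(-(t - c) ** 2 / (2 * sigma ** 2)) for c in centres)

x = train(0.0)                       # pass-through arm
x_delayed = train(125e-12)           # DLI arm, delayed by tau = 125 ps
y = 0.5 * (x + x_delayed)            # recombined field envelope at one output

# |y|**2 now peaks every 125 ps, i.e. an 8 GHz train of equal-intensity
# pulses carrying half the input power (cf. the loss analysis in Discussion).
```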
The spectral results are shown in Fig. 3d, g. The intervals
between each comb line for the pass-through and the Talbot
operation modes are both exactly 0.032 nm, corresponding to the
4-GHz frequency difference at the C-band. In this figure, the
spectra of the pass-through operation mode are plotted in light
orange, and those of the Talbot operation mode are plotted in
dark blue. The output spectra fit each other quite well without any
frequency component loss or bandwidth change, denoting the
presence of the temporal Talbot effect, as discussed in the
previous section. The small observable spectral discrepancy
between the two operation modes originates mostly from input
-----
power jittering, as well as possible amplifier noises, which is not a
part of the Talbot effect and can be neglected, making the
comparison between the theoretical analyses and experimental
results accurate. The preservation of the spectral profile could be
further improved by SoC-level packaging, taking advantage of a
well-established technology trend in integrated photonics.
We further experimentally investigate the temporal evolution
of the 4 GHz pulse train into the 8 GHz one, as depicted in
Fig. 3h for the bright pulse train and the chip temperature set to
25 °C. We record the dissipated electrical power on the first-stage
local thermo-optic actuator while the DC voltage of the second-stage actuator is kept constant such that in the Talbot operation
region, the chip produces an 8 GHz pulse train with equal
intensity and unchanged spectrum. The dissipated power is
monotonically linked to the effective phase shift; therefore, by
measuring the dissipated power, we are equivalently following the
evolution of the effective phase shift. It can be observed that
the designed Talbot PIC works in Talbot operation mode within
the small vicinity of 0 mW. When the DC power increases, the
intensity of the secondary pulse train gradually vanishes. This is
marked by the transition region in Fig. 3h. From 40 to 120 mW,
the chip operates at the pass-through mode, producing the
original 4 GHz pulse train. Near 150 mW, the 2× Talbot self-image of the optical pulse train emerges again. For higher DC
power, the chip is in pass-through mode but with a 125-ps delay
with respect to the aforementioned pass-through mode region.
This also proves that in the pass-through mode, the energy indeed
flows mainly through one arm of the second-stage MZI. The
power of 180 mW is the upper limit we set for the chip in order to
protect it from potential damage. Therefore, Fig. 3h shows a
nearly complete operation period of the Talbot PIC, giving a clear
picture of its operating mechanism.
To further investigate the link between our observations and
the Talbot phase relation expressed by Eq. (3), we estimated the
phase imparted to each comb line by reconstructing the phase
transmission spectrum of our device when operated in the Talbot
configuration. Such calculated phase spectrum, shown in Fig. 4,
was inferred by exploiting the analytic relation between the real
and imaginary parts of the device transfer function. In particular,
the trace was obtained numerically from the amplitude
transmission spectrum shown in Fig. 2c by a Hilbert transform[39].
Similar to the widely used Kramers-Kronig relations, this method
is based on the principle of causality, leading to an analytical
relation between the real and imaginary parts of the transfer
function. This allows the extraction of the phase from the
Fig. 4 The phase of each comb line of the output at Talbot operation. The
measured phase is extracted from the free-spectral range (FSR) data shown
in Fig. 2c. The grey vertical grid shows the spectral locations of each comb
line with a 4 GHz separation. The red dashed line is the theoretical FSR, the
purple circles are the theoretical phase, and the blue solid line denotes the
experimental phase data. (Vertical axis: phase from 0 to π/2; horizontal axis:
comb line index from −8 to +8, 4 GHz/div.)
transmission spectrum. The grey vertical lines represent the
spectral location of the comb lines with respect to the
transmission spectrum, and the purple circles are the theoretical
Talbot phase calculated by Eq. (1). As expected, the comb lines
experience a nearly [. . . π/2, 0, π/2, 0, π/2, 0, π/2, . . . ] phase shift,
confirming the matching of the Talbot phase relation in our
experiment. The small deviations between the measured phase
and the theoretical one are due to the limited wavelength control
resolution of the laser used for measuring the transmission
spectrum (Fig. 2c).
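As a rough illustration of this procedure, the sketch below (not the authors' code; an idealised DLI amplitude response stands in for the measured spectrum of Fig. 2c, and the minimum-phase sign convention is assumed) applies scipy's Hilbert transform to the log-magnitude spectrum:

```python
import numpy as np
from scipy.signal import hilbert

# Illustrative sketch of causality-based phase retrieval: for a minimum-phase
# response, the phase equals minus the Hilbert transform of the log-magnitude.
tau = 125e-12
f = np.linspace(-32e9, 32e9, 2 ** 14)
T = np.abs(np.cos(np.pi * f * tau))     # idealised DLI amplitude transmission

log_mag = np.log(T + 1e-9)              # regularise the transmission nulls
phase = -np.imag(hilbert(log_mag))      # np.imag(hilbert(x)) is H{x}

# Sample the retrieved phase at the comb lines (multiples of 4 GHz):
idx = [np.argmin(np.abs(f - k * 4e9)) for k in range(-8, 9)]
print(np.round(np.mod(phase[idx], np.pi) / np.pi, 2))
# Expected to alternate near 0.5 and 0 (i.e. pi/2, 0, pi/2, ... as in Fig. 4),
# up to edge effects of the finite frequency window.
```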
In principle, the designed Talbot PIC can work as a broadband
processing device, the main limiting factor being the 2 × 2
embedded multi-mode interference (MMI) couplers, whose
nominal 3 dB bandwidth is around 40 nm. For input wavelengths
outside this band, the MMI couplers splitting ratio is altered,
resulting in an imbalance in the MZI splitting and recombination
ratios. Besides, the DLI acts like a double-edged sword. On the
one hand, it provides the essential delay to realise Talbot effect,
but on the other hand, when operating on ultrashort pulses with a
broad spectrum, additional dispersion compensation might be
needed to counteract distortion.
Discussion
The proposed chip design can not only work with the standard
4-GHz pulse train but also with any other repetition frequencies that
are the odd multiples of 4 GHz, i.e., 12 GHz, 20 GHz, 28 GHz, etc.
Limited by the technical specifications of the RF components, here
we discuss theoretically the possibility to operate at these frequencies.
The illustrations in Fig. 5a show intuitively how and when the two
trains of pulses align to form the doubled repetition rate. The
repetition rates of the pulse trains in Fig. 5a are 12 GHz, 20 GHz,
and 28 GHz, respectively, and the first pulses in each panel can be
regarded as the very first input pulse.
The repetition rate requirement, fr = 1/trep, for the designed
chip to work is ξ/2 ⋅ trep = τdelay = 125 ps, with ξ being an odd
number, as for even values, the delayed pulse train will overlap
with the non-delayed one. This simple rule ensures that the
delayed pulse train falls precisely in the middle of the original
pulse intervals stably. Furthermore, the temporal Talbot phase
relation, as shown in Eq. (3), is still satisfied as it is independent
of the input repetition rate. Theoretically, at a given carrier
wavelength (e.g., in the C-band), the only physical limitations for
higher frequencies are pulse generation and temperature control
accuracy since the DC power region of Talbot operation might
vary. However, the latter can be better controlled in fully integrated platforms.
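A few lines of Python make this compatibility rule concrete (an illustrative sketch only; the numerical tolerance is arbitrary):

```python
# Illustrative sketch: which input repetition rates are compatible with the
# fixed tau_delay = 125 ps? Condition: (xi/2) * t_rep = tau_delay, xi odd.
tau_delay = 125e-12

def compatible(f_rep):
    xi = 2 * tau_delay * f_rep          # xi = 2 * tau_delay / t_rep
    return abs(xi - round(xi)) < 1e-9 and round(xi) % 2 == 1

for f in (4e9, 8e9, 12e9, 20e9, 28e9):
    print(f"{f / 1e9:.0f} GHz -> {compatible(f)}")
# 4, 12, 20, 28 GHz -> True; 8 GHz -> False (even xi: the delayed train
# overlaps the original one instead of interleaving with it)
```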
From an engineering perspective, the length of the delayed arm
of the second MZI could be extremely long if the design is altered
for a much lower frequency (e.g., in the regime of MHz), which
can cause a large amount of propagation loss. Fortunately, the
state-of-the-art SiN photonic platform provides a <0.2 dB ⋅ cm[−][1]
propagation loss[25], making it possible to realise even metre-long
delay lines which can work with significantly lower frequencies
than the demonstrated 4 GHz.
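For a rough sense of scale, the sketch below estimates the physical delay-line length from L = c·τ/n_g; note that the group index n_g ≈ 2.1 used here is an assumed, typical value for thick Si3N4 waveguides, since the paper quotes only the effective index:

```python
# Illustrative sketch: waveguide length for a given delay, L = c * tau / n_g.
c = 299792458.0       # speed of light [m/s]
n_g = 2.1             # ASSUMED group index for Si3N4 (not given in the paper)

for f_rep in (4e9, 100e6, 1e6):     # target input repetition rates [Hz]
    tau = 0.5 / f_rep               # xi = 1: half of the pulse period
    L = c * tau / n_g
    print(f"f_rep = {f_rep:.0e} Hz -> tau = {tau:.2e} s, L = {L:.3g} m")
# 4 GHz -> ~1.8 cm; 100 MHz -> ~0.7 m; 1 MHz -> ~70 m (hence the need for
# very low propagation loss at MHz-scale repetition rates)
```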
As for the scaling to higher RRMs, the challenge lies in concatenating multiple levels of MZI-DLI interferometers, either in
series (cascading) or in parallel[5]. Each level needs to be controlled
separately, and due to the potential error and non-uniformity in
fabrication, the DC power control can be different from level to
level. For higher RRM, the size of the chip grows accordingly, and
the system complexity also increases with the RRM factor N. A
careful device engineering, together with new cross-disciplinary
technologies—such as computer-controlled self-tuning enabled
by machine learning[40]—may be useful in such high RRM
applications.
-----
Fig. 5 Comparisons with other operation frequencies and different solutions. a Illustration of the compatibility with other operating frequencies, with the
examples being 12, 20, and 28 GHz inputs and forming the corresponding 24, 40, and 56 GHz outputs. The delay between the two vertical dashed lines is
125 ps, as in the chip design. The blue solid lines denote the original pulse trains, and the red dash-dotted lines represent the delayed pulse trains. b Form
factor comparison among the other solutions to double the repetition rate of a pulse train. The schematic sketch of the waveforms (red) is in the temporal
domain, and the comb spectra (yellow) are in the frequency domain. MZ DLI Mach-Zehnder delay line interferometer, SMF single-mode fibre.
It is intuitive to think that performing Talbot operations in the
designed device is not a lossless process due to interference. If we
consider the chip to be a one-input one-output device, for an
input K-line optical comb, the total power of the source should be
Pin ∝ ∣E0∣²K⁻¹∑k∣Ak∣² with k = 0, 1, . . ., K − 1 and Ak being the
envelope of the signal. Due to the MZI interference, the total
power at the output should be Pout ∝ ∣E0∣²(NK)⁻¹∑k∣Ak∣², which
is half of the original input comb (N = 2). This is exactly what
happened in the proof-of-principle setup shown in Fig. 3a since
the focus lens could only pick up one of the two outputs at a time.
In this single-output implementation, the loss scales with 1/N,
which is non-negligible for higher N (e.g., 10 dB for N = 10 and
30 dB for N = 1000).
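This 1/N scaling is easy to tabulate (a trivial illustrative sketch reproducing the figures quoted above):

```python
import math

# Illustrative sketch: single-output insertion loss of an N-fold Talbot
# multiplier when only one output port is collected, scaling as 1/N.
for N in (2, 10, 1000):
    print(f"N = {N:4d}: {10 * math.log10(N):5.1f} dB")
# N = 2 -> 3.0 dB, N = 10 -> 10.0 dB, N = 1000 -> 30.0 dB
```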
However, the efficiency can reach nearly 100% (without taking
into account the chip coupling loss) if both of the two outputs are
collected. The output coupler can be regarded as a 2 × 2 discrete
Fourier transform star network[41], which provides two 2× RRM
Talbot pulse trains[5], which may be beneficial for applications
such as optical clock distribution among many components. It
should be noted that the two output channels are coherent with
each other, carrying exactly the same optical signal up to a
constant phase factor. Consequently, they can be recombined at a
50:50 directional coupler, recovering the ideally lossless operation
that is a peculiarity of Talbot effect. In other words, the total
output power of the system is preserved among all the output
modes; these can be ideally recombined whenever a single-output
mode and low loss are required.
When our design is integrated with other optical functional
components at an SoC level, the chip coupling loss can be greatly
reduced, and the output can be efficiently utilised. This
again emphasises the importance of miniaturising macroscopic
optical setups and integrating them onto a PIC.
Another main characteristic of this design is its form factor,
made possible by photonic integration in silicon nitride technology. To assess the potential and practicality in future applications, we compare the sizes of approaches for repetition rate
doubling, which are plotted in Fig. 5b. In addition to the conventional split and delay practice, which affects the spectrum,
there are several methods to assign the Talbot phase relation to
each frequency line of an optical comb.
The tapped delay line structure can be realised by commercial MZIs and DLIs with conventional centimetre- to
decimetre-scale components. The system is therefore 10–20
times larger than the 4.94 mm Talbot PIC and can be more
expensive owing to a lower wafer yield. Additionally, they are
not suitable for SoC-level integration or seamlessly working
with other PICs. Another solution could be assigning the Talbot
phase line-by-line using programmable wave-shapers (pulse-shapers)[7]. However, these devices are currently bulky (>20×
larger in their longest dimension) and less likely to be scaled
down and integrated. More importantly, they can be limited by
the resolution of their pixels when working with low repetition
rate combs.
The aforementioned SMF propagation method is simple and
relatively convenient when the repetition rate of the pulse train is
high. This technique allows the pulse train to naturally evolve to
the p/q = 1/2 state by dispersion-induced accumulated phase, as
shown in Fig. 1b. However, when it comes to low repetition rate,
the length of the SMF needed can be impractically long[13]
according to L = 2πp/(q∣β2∣Δω[2]). For a brief comparison, here we
take the standard dispersion (group velocity dispersion) value of
-----
β2 = −21.6 ps[2]km[−][1] for silica SMF. The required length for the
4-GHz pulse train at p/q = 1/2 is 230.26 km, which is 4.6 × 10[7]
times longer than a chip and would likely display impractically
high propagation losses. The lower the frequency is, the greater
the advantage of going on-chip.
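The quoted figure can be reproduced directly from the formula above (an illustrative sketch using only constants stated in the text):

```python
import math

# Illustrative sketch: required SMF length, L = 2*pi*p / (q * |beta2| * dw^2)
p, q = 1, 2                  # Talbot fraction p/q = 1/2
beta2 = 21.6e-27             # |beta2| of silica SMF [s^2/m] (= 21.6 ps^2/km)
dw = 2 * math.pi * 4e9       # 4 GHz comb spacing in angular frequency [rad/s]

L = 2 * math.pi * p / (q * beta2 * dw ** 2)
print(f"L = {L / 1e3:.2f} km")                     # -> 230.26 km
print(f"ratio vs 4.94 mm chip: {L / 4.94e-3:.2e}") # -> ~4.66e7
```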
On the PIC level comparison, the current state-of-the-art
Talbot chip using Bragg grating waveguide measures a length of
≈8 mm for 2× RRM at 10 GHz input, while for linearly chirped
waveguide Bragg gratings, it would be ≈4 cm (c.f., ref. [33]). These
values are respectively 1.6× and 8.1× larger than our proposed
design, that also operates at a much lower repetition rate of 4
GHz. If the previously reported state-of-the-art designs were to be
scaled for the same 4 GHz repetition rate we demonstrate, their
devices should be made >4× larger.
Conclusions
In this work, we propose, design, and experimentally demonstrate
a PIC with cascaded MZIs that converts in-phase optical combs
into Talbot-phased ones with their repetition rates doubled while
keeping their spectra unaltered. The chip is compatible with both
bright and dark pulse trains and works with a variety of different
repetition rates. The embedded temperature control also allows
the chip to work in a pass-through mode, such that switching of
the functions of this chip is readily realised. The evolution of the
temporal profiles from the original input to the 2× Talbot self-image is analysed. Additionally, we show, for the first time, to the
best of our knowledge, the temporal Talbot effect of dark pulse
trains on a chip. On-chip integration makes our device more compact than
other repetition rate doubling solutions. We believe that the
results of this work could be very useful in the applications of on-chip Talbot effects and could provide a deeper insight into the
physics of this phenomenon. The results discussed in this work
also have the potential to be applied in optical communications,
amplification, and imaging.
Methods
Chip design. The chip was fabricated through a multi-project
wafer run at a commercial foundry (LIGENTEC SA). The Si3N4
waveguides have a dimension of 1.6 × 0.8 μm, encapsulated in the
silicon dioxide (SiO2) layer on top of the silicon (Si) substrate, as
shown in the cross-sectional view of Fig. 2. According with
simulations, the fundamental transverse electric (TE) mode has
an effective refractive index of 1.6769 at 1550 nm. The separation
between the two parallel waveguides is 22 μm, while the coupler
structures enable a 50:50 power splitting and therefore a continuously tunable splitting ratio at the output of the first-stage
MZI. The second-stage MZI is a DLI with a delay line corresponding to the half-period interval of a 4 GHz pulse train (125
ps). The electrical resistance of the two thermo-optic phase
shifters is ≈50 Ω.
Pulse train generation and instruments. The RF signals are
generated by an Anritsu MG3692C, an Agilent E8257D, and an
Agilent MXG N5183A. The RF amplifier is a ZVA-0.5W303G+
working with a 10 MHz–20 GHz frequency range from Mini-Circuits. The MZM LiNbO3 intensity modulator is an AX-1x20MSS-20-PFA-PFA-LV from EOSPACE. The pulse train is generated by a quasi-Nyquist method adapted from ref. [38]. The
EDFA is an Optilab EDFA-16-LC-M. The tunable laser source is
Yenista Tunics-T100S-HP. The chip is mounted on a custom
stage with a Peltier array and sensors, monitored and controlled
by a Thorlabs TED200C temperature controller, and set to stabilise at 25 °C. The spectra are collected by an APEX AP2043B
OSA, and the temporal profiles are recorded by an Agilent
Infiniium DCA 86100A wide-bandwidth oscilloscope with an
Agilent 86105A 20 GHz optical module.
Data availability
[The data that support the plots within this paper are available at https://doi.org/10.5281/](https://doi.org/10.5281/zenodo.8272168)
[zenodo.8272168.](https://doi.org/10.5281/zenodo.8272168)
Code availability
[The codes used to produce the results of this paper are available at https://doi.org/10.](https://doi.org/10.5281/zenodo.8272168)
[5281/zenodo.8272168.](https://doi.org/10.5281/zenodo.8272168)
Received: 15 March 2023; Accepted: 6 September 2023;
References
1. Talbot, H. LXXVI. Facts relating to optical science. No. IV. Lond. Edinb.
Dublin Philos. Mag. J. Sci. 9, 401–407 (1836).
2. Huang, C.-B. & Lai, Y. Loss-less pulse intensity repetition-rate multiplication
using optical all-pass filtering. IEEE Photonics Technol. Lett. 12, 167–169
(2000).
3. Caraquitena, J., Jiang, Z., Leaird, D. E. & Weiner, A. M. Tunable pulse
repetition-rate multiplication using phase-only line-by-line pulse shaping.
Opt. Lett. 32, 716 (2007).
4. Maram, R., Cortes, L. R., Van Howe, J. & Azana, J. Energy-preserving arbitrary
repetition-rate control of periodic pulse trains using temporal Talbot effects. J.
Light. Technol. 35, 658–668 (2017).
5. Hu, J., Fabbri, S. J., Huang, C.-B. & Brès, C.-S. Investigation of temporal
Talbot operation in a conventional optical tapped delay line structure. Opt.
Express 27, 7922 (2019).
6. Wu, J., Hu, J. & Brès, C.-S. Demonstration of temporal Talbot effect of dark
pulse trains. In Conference on Lasers and Electro-Optics. Paper STh5E.6.
[https://opg.optica.org/abstract.cfm?uri=cleo_si-2022-STh5E.6 (Optica](https://opg.optica.org/abstract.cfm?uri=cleo_si-2022-STh5E.6)
Publishing, 2022).
7. Wu, J., Hu, J. & Brès, C.-S. Temporal Talbot effect of optical dark pulse trains.
Opt. Lett. 47, 953–956 (2022).
8. Caraquitena, J., Beltrán, M., Llorente, R., Martí, J. & Muriel, M. A. Spectral
self-imaging effect by time-domain multilevel phase modulation of a periodic
pulse train. Opt. Lett. 36, 858 (2011).
9. Maram, R. & Azaña, J. Spectral self-imaging of time-periodic coherent
frequency combs by parabolic cross-phase modulation. Opt. Express 21, 28824
(2013).
10. Hu, J., Brès, C.-S. & Huang, C.-B. Talbot effect on orbital angular momentum
beams: azimuthal intensity repetition-rate multiplication. Opt. Lett. 43, 4033
(2018).
11. Lin, Z., Hu, J., Chen, Y., Yu, S. & Brès, C.-S. Spectral self-imaging of optical
orbital angular momentum modes. APL Photonics 6, 111302 (2021).
12. Azana, J. & Muriel, M. Temporal self-imaging effects: theory and application
for multiplying pulse repetition rates. IEEE J. Sel. Top. Quantum Electron. 7,
728–744 (2001).
13. Romero Cortés, L., Maram, R., Guillet de Chatellus, H. & Azaña, J. Arbitrary
energy-preserving control of optical pulse trains and frequency combs
through generalized Talbot effects. Laser Photonics Rev. 13, 1900176 (2019).
14. Kolner, B. Space-time duality and the theory of temporal imaging. IEEE J.
Quantum Electron. 30, 1951–1963 (1994).
15. Cortés, L. R., Guillet de Chatellus, H. & Azaña, J. On the generality of the
Talbot condition for inducing self-imaging effects on periodic objects. Opt.
Lett. 41, 340 (2016).
16. Maram, R., Romero Cortes, L. & Azana, J. Programmable fiber-optics pulse
repetition-rate multiplier. J. Light. Technol. 34, 448–455 (2016).
17. Romero Cortés, L., Seghilani, M., Maram, R. & Azaña, J. Full-field broadband
invisibility through reversible wave frequency-spectrum control. Optica 5, 779
(2018).
18. Maram, R., Van Howe, J., Li, M. & Azaña, J. Noiseless intensity amplification
of repetitive signals by coherent addition using the temporal Talbot effect.
Nat. Commun. 5, 5163 (2014).
19. Zheng, B., Xie, Q. & Shu, C. Comb spacing multiplication enabled widely
spaced flexible frequency comb generation. J. Light. Technol. 36, 2651–2659
(2018).
20. Pepino, V. M., da Mota, A. F., Borges, B.-H. V. & Teixeira, F. L. Terahertz
passive amplification via temporal Talbot effect in metamaterial-based Bragg
fibers. J. Opt. Soc. Am. B 39, 1763 (2022).
-----
21. Wun, J.-M. et al. Photonic high-power continuous wave THz-wave generation
by using flip-chip packaged uni-traveling carrier photodiodes and a
femtosecond optical pulse generator. J. Light. Technol. 34, 1387–1397
(2016).
22. Hillerkuss, D. et al. Simple all-optical FFT scheme enabling Tbit/s real-time
signal processing. Opt. Express 18, 9324 (2010).
23. Preciado, M. A. & Muriel, M. A. All-pass optical structures for repetition rate
multiplication. Opt. Express 16, 11162 (2008).
24. Chuang, H.-P. & Huang, C.-B. Generation and delivery of 1-ps optical pulses
with ultrahigh repetition-rates over 25-km single mode fiber by a spectral line-by-line pulse shaper. Opt. Express 18, 24003 (2010).
25. Lihachev, G. et al. Platicon microcomb generation using laser self-injection
locking. Nat. Commun. 13, 1771 (2022).
26. Liu, Y. et al. A photonic integrated circuit-based erbium-doped amplifier.
Science 376, 1309–1313 (2022).
27. Feldmann, J. et al. Parallel convolutional processing using an integrated
photonic tensor core. Nature 589, 52–58 (2021).
28. Han, C., Pang, S., Bower, D. V., Yiu, P. & Yang, C. Wide field-of-view on-chip
Talbot fluorescence microscopy for longitudinal cell culture monitoring from
within the incubator. Anal. Chem. 85, 2356–2360 (2013).
29. Katiyi, A. & Karabchevsky, A. Deflected Talbot-mediated overtone
spectroscopy in near-infrared as a label-free sensor on a chip. ACS Sens. 5,
1683–1688 (2020).
30. Wang, L. et al. Phase-locked array of quantum cascade lasers with an
integrated Talbot cavity. Opt. Express 24, 30275 (2016).
31. Meng, B. et al. Coherent emission from integrated Talbot-cavity quantum
cascade lasers. Opt. Express 25, 3077 (2017).
32. Xu, Y. et al. Phase-locked terahertz quantum cascade laser array integrated
with a Talbot cavity. Opt. Express 30, 36783 (2022).
33. Kaushal, S. & Azaña, J. On-chip dispersive phase filters for optical processing
of periodic signals. Opt. Lett. 45, 4603 (2020).
34. Geng, Z. et al. Photonic integrated circuit implementation of a sub-GHz-selectivity frequency comb filter for optical clock multiplication. Opt. Express
25, 27635 (2017).
35. Xie, Y., Zhuang, L. & Lowery, A. J. Picosecond optical pulse processing using a
terahertz-bandwidth reconfigurable photonic integrated circuit.
Nanophotonics 7, 837–852 (2018).
36. Fernández-Pousa, C. R. On the structure of quadratic Gauss sums in the
Talbot effect. J. Opt. Soc. Am. A 34, 732 (2017).
37. Berndt, B. C. & Evans, R. J. The determination of Gauss sums. Bull. Am. Math.
Soc. 5, 107–129 (1981).
38. Soto, M. A. et al. Optical sinc-shaped Nyquist pulses of exceptional quality.
Nat. Commun. 4, 2898 (2013).
39. Dicaire, M.-C. N., Upham, J., De Leon, I., Schulz, S. A. & Boyd, R. W. Group
delay measurement of fiber Bragg grating resonances in transmission: Fourier
transform interferometry versus Hilbert transform. J. Opt. Soc. Am. B 31, 1006
(2014).
40. Baumeister, T., Brunton, S. L. & Nathan Kutz, J. Deep learning and model
predictive control for self-tuning mode-locked lasers. J. Opt. Soc. Am. B 35,
617 (2018).
41. Marhic, M. E. Discrete Fourier transforms by single-mode star networks. Opt.
Lett. 12, 63 (1987).
Acknowledgements
This work is supported by the Swiss National Science Foundation (Grant No. 200021188605).
Author contributions
J.H. conceived the original idea of this work. E.N. and J.H. designed the chip. J.W., M.C.,
and C.-S.B. designed the experiment. J.W. conducted the main experiments with the
assistance of M.C. C.L. helped with the automation of the transmission spectrum measurement. C.-S.B. provided the experimental resources. All authors took part in analysing
the data. J.W. wrote the manuscript with inputs from others. M.C., E.N., J.H., C.L., and
C.-S.B. provided in-depth reviews and discussions in the revision of the manuscript. C.-S.B. supervised the project. All authors have proofread the manuscript.
Competing interests
The authors declare no competing interests.
Additional information
Correspondence and requests for materials should be addressed to Camille-Sophie Brès.
Peer review information Communications Physics thanks Jose Azaña and the other
anonymous reviewer(s) for their contribution to the peer review of this work.
[Reprints and permission information is available at http://www.nature.com/reprints](http://www.nature.com/reprints)
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in
published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons
Attribution 4.0 International License, which permits use, sharing,
adaptation, distribution and reproduction in any medium or format, as long as you give
appropriate credit to the original author(s) and the source, provide a link to the Creative
Commons license, and indicate if changes were made. The images or other third party
material in this article are included in the article’s Creative Commons license, unless
indicated otherwise in a credit line to the material. If material is not included in the
article’s Creative Commons license and your intended use is not permitted by statutory
regulation or exceeds the permitted use, you will need to obtain permission directly from
[the copyright holder. To view a copy of this license, visit http://creativecommons.org/](http://creativecommons.org/licenses/by/4.0/)
[licenses/by/4.0/.](http://creativecommons.org/licenses/by/4.0/)
© The Author(s) 2023
-----
|
{
"disclaimer": "Notice: Paper or abstract available at https://pmc.ncbi.nlm.nih.gov/articles/PMC11041698, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "GOLD",
"url": "https://www.nature.com/articles/s42005-023-01375-x.pdf"
}
| 2023
|
[
"JournalArticle"
] | true
| 2023-09-13T00:00:00
|
[
{
"paperId": "a36974c20235c066351ad0a42914e353680983dd",
"title": "Phase-locked terahertz quantum cascade laser array integrated with a Talbot cavity."
},
{
"paperId": "13050b7bcf11bdfc41bb82785ab83bc5536f17f1",
"title": "Demonstration of Temporal Talbot Effect of Dark Pulse Trains"
},
{
"paperId": "8d12687c2527ace9398f96d353d53e209d9f3fec",
"title": "A photonic integrated circuit–based erbium-doped amplifier"
},
{
"paperId": "1cc4ade9580ff726e1069b40035afd4c5fbe41cb",
"title": "Temporal Talbot effect of optical dark pulse trains."
},
{
"paperId": "6dee497fba90d418e8a1a52fc7cf09d2888fc23f",
"title": "Spectral self-imaging of optical orbital angular momentum modes"
},
{
"paperId": "02f94b036da23b154df3921c2b7db40424fa7dc9",
"title": "Platicon microcomb generation using laser self-injection locking"
},
{
"paperId": "f27c78f1ff6a8fcba34b6f85d92404cc85575655",
"title": "Parallel convolutional processing using an integrated photonic tensor core"
},
{
"paperId": "77f6e83beeb54b6aa45355394f6b74e71453b98a",
"title": "On-chip dispersive phase filters for optical processing of periodic signals."
},
{
"paperId": "17452a90d818777c54fe18176832e866e1dcc88c",
"title": "Deflected Talbot mediated overtone spectroscopy in near-infrared as a label-free sensor on a chip."
},
{
"paperId": "95fc04c0e76f7c72de9112449f83854f1285b6d7",
"title": "Arbitrary Energy‐Preserving Control of Optical Pulse Trains and Frequency Combs through Generalized Talbot Effects"
},
{
"paperId": "3df751b2d2bbca7e4f4779516cf27bc455d7eda2",
"title": "Investigation of temporal Talbot operation in a conventional optical tapped delay line structure."
},
{
"paperId": "75e5578dd31ffc7d5a72e5907c7215b928216b1a",
"title": "Talbot effect on orbital angular momentum beams: azimuthal intensity repetition-rate multiplication."
},
{
"paperId": "83be251797b93207ed75c207aa0e0059600e9ef2",
"title": "Comb Spacing Multiplication Enabled Widely Spaced Flexible Frequency Comb Generation"
},
{
"paperId": "25fde7be42380d37ec9211c5babeba0375f63b7f",
"title": "Full-field broadband invisibility through reversible wave frequency-spectrum control"
},
{
"paperId": "a82f7a94be34d366804d27a340ab902ef49cb353",
"title": "Picosecond optical pulse processing using a terahertz-bandwidth reconfigurable photonic integrated circuit"
},
{
"paperId": "35f35e2fd41b73592b2863e0f755942e9dde74c7",
"title": "Photonic integrated circuit implementation of a sub-GHz-selectivity frequency comb filter for optical clock multiplication."
},
{
"paperId": "7cb572dc89301820b58e46293ac5157912760339",
"title": "Coherent emission from integrated Talbot-cavity quantum cascade lasers."
},
{
"paperId": "3907ece14c17a5e42e10da6403f731e1c0f1dc28",
"title": "Energy-Preserving Arbitrary Repetition-Rate Control of Periodic Pulse Trains Using Temporal Talbot Effects"
},
{
"paperId": "de8013b5806c50c5e80b2638201fdaddc69e8b51",
"title": "On the structure of quadratic Gauss sums in the Talbot effect."
},
{
"paperId": "7a82e573ef33f3091f3705d6de1166b735bca00f",
"title": "Phase-locked array of quantum cascade lasers with an integrated Talbot cavity."
},
{
"paperId": "787c134abfc01703cf0bc5059f7a9e6dee9f897a",
"title": "Photonic High-Power Continuous Wave THz-Wave Generation by Using Flip-Chip Packaged Uni-Traveling Carrier Photodiodes and a Femtosecond Optical Pulse Generator"
},
{
"paperId": "b6b354224672f65ab57c9f3a90d2b9875ee60658",
"title": "On the generality of the Talbot condition for inducing self-imaging effects on periodic objects."
},
{
"paperId": "8a1942e4047256830dafecde51ad0f337b828a50",
"title": "Programmable Fiber-Optics Pulse Repetition-Rate Multiplier"
},
{
"paperId": "86ae2258b12d6b8f43a4e6a513ce8636b686fe55",
"title": "Noiseless intensity amplification of repetitive signals by coherent addition using the temporal Talbot effect"
},
{
"paperId": "bed8e491e8e9a50753c28d610af05ccee836ebe9",
"title": "Group delay measurement of fiber Bragg grating resonances in transmission: Fourier transform interferometry versus Hilbert transform"
},
{
"paperId": "cf75c88e54a43ff9a8e1ae2cfe7197fe0f3ed383",
"title": "Optical sinc-shaped Nyquist pulses of exceptional quality"
},
{
"paperId": "3972b0ae8b76b511f56fad4fa23041ad6d73a0a3",
"title": "Spectral self-imaging of time-periodic coherent frequency combs by parabolic cross-phase modulation."
},
{
"paperId": "e2cf6402dcd422894c9cb83e9e80a7497c63ae5d",
"title": "Wide field-of-view on-chip Talbot fluorescence microscopy for longitudinal cell culture monitoring from within the incubator."
},
{
"paperId": "87f14b79fe519d1b39da29c4b687dc3d04b640e3",
"title": "Spectral self-imaging effect by time-domain multilevel phase modulation of a periodic pulse train."
},
{
"paperId": "335a85ddf5e33db4225cdfc21a534a35408150fc",
"title": "Generation and delivery of 1-ps optical pulses with ultrahigh repetition-rates over 25-km single mode fiber by a spectral line-by-line pulse shaper."
},
{
"paperId": "ace13ab20c7c21f669c53d8e787554eb65f888c0",
"title": "Simple all-optical FFT scheme enabling Tbit/s real-time signal processing."
},
{
"paperId": "58866fe859e560cd45a8c57c200f12823723a3c2",
"title": "All-pass optical structures for repetition rate multiplication."
},
{
"paperId": "a6f813fd2b6ff29a571f4ea112cc237bee6cfedd",
"title": "Tunable pulse repetition-rate multiplication using phase-only line-by-line pulse shaping."
},
{
"paperId": "a3e40b734cb9a776713e8fc0c74d0ff54113b810",
"title": "Temporal self-imaging effects: theory and application for multiplying pulse repetition rates"
},
{
"paperId": "b18fa39dc6b9f74c24d81cdda141d7306a2fae05",
"title": "Loss-less pulse intensity repetition-rate multiplication using optical all-pass filtering"
},
{
"paperId": "c9cd9780b47f1c9e89a2923c0d40cc28886b73b2",
"title": "Space-time duality and the theory of temporal imaging"
},
{
"paperId": "001d692abfe32832cdaa5ad8a23a9a263d297c09",
"title": "Communications in Physics"
},
{
"paperId": "4eb2835cff3675ad7d6441a8caea5a75d07a9579",
"title": "The determination of Gauss sums"
},
{
"paperId": "7cf7bb922b9fe3adb7fc175d144d495338377856",
"title": "LXXVI. Facts relating to optical science. No. IV"
},
{
"paperId": null,
"title": "Deep learning and model predictive control for self-tuning mode-locked lasers. J. Opt. Soc. Am. B 35 , 617"
},
{
"paperId": "9b6cc9fccc61ea062997bb3e6e2257ca856bedbc",
"title": "Loss-less pulse intensity repetition-rate multiplication using optical all-pass filtering"
},
{
"paperId": "8c9e89ca84c15315e13418ced45c6e6562d3e0f8",
"title": "Discrete Fourier transforms by single-mode star networks."
},
{
"paperId": null,
"title": "published maps and institutional af fi liations"
},
{
"paperId": null,
"title": "Terahertz passive ampli fi cation via temporal Talbot effect in metamaterial-based Bragg fi bers"
}
] | 11,749
|
en
|
[
{
"category": "Law",
"source": "s2-fos-model"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/01456d7ead48b17af151463d73aee8611ad0f450
|
[] | 0.923111
|
NFTs AND COPYRIGHT LAW
|
01456d7ead48b17af151463d73aee8611ad0f450
|
SCIENCE International Journal
|
[
{
"authorId": "2221475551",
"name": "Belma Mujević"
},
{
"authorId": "52647070",
"name": "M. Mujević"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": null,
"id": null,
"issn": null,
"name": null,
"type": null,
"url": null
}
|
NFT stands for “non-fungible token” and refers to a cryptographically protected asset that represents a unique object, work of art, real estate, ticket, or certificate. Like cryptocurrencies, each NFT contains ownership data to facilitate identification and transfer between token holders. By purchasing a token, the owner gets ownership of an asset in the digital or real world. NFTs are often minted by artists and creatives, but collectors and investors who purchase and use them may have a different perspective on who owns the copyrights to the content associated with the NFTs. Attorneys specializing in art law, although they have not yet fully explored it, are developing a familiarity with the current crypto community and future metaverse in order to understand how a public ledger registration tool has created scarcity and value for digital assets, such as digital artwork.
|
UDK: 347.78:004.031.4(4 672EU)
# NFTs AND COPYRIGHT LAW
_Belma Mujević[1*], Mersad Mujević[2]_
[1] The University of Szeged, Faculty of Law and Political Sciences, Hungary, e-mail: mujevic5@gmail.com
[2] International University in Novi Pazar, Republic of Serbia, e-mail: mersadm@t-com.me
**Abstract: NFT stands for “non-fungible token” and refers to a cryptographically protected asset that represents a**
unique object, work of art, real estate, ticket, or certificate. Like cryptocurrencies, each NFT contains ownership data to facilitate
identification and transfer between token holders. By purchasing a token, the owner gets ownership of an asset in the digital or
real world. NFTs are often minted by artists and creatives, but collectors and investors who purchase and use them may have
a different perspective on who owns the copyrights to the content associated with the NFTs. Attorneys specializing in art law,
although they have not yet fully explored it, are developing a familiarity with the current crypto community and future metaverse in
order to understand how a public ledger registration tool has created scarcity and value for digital assets, such as digital artwork.
Keywords: non-fungible tokens, blockchain, smart contracts, copyright, digital art
Field: Law sciences
## 1. INTRODUCTION
Blockchain technology has advanced so much over time that it has mainstreamed some new
trends in digital shopping and commerce on the Internet, such as cryptocurrencies that have “taken over”
the world. However, the world is now becoming familiar with a newer trend: NFT tokens.
Although cash and payment cards will continue to be the main payment processes for products
and services, the fact is that cryptocurrencies and blockchain technology itself have offered the world a
completely new payment value and the possibility of exchanging goods. Namely, we have been familiar
with cryptocurrencies for a long time; their mining and trading have been widely discussed, and for most people they
are not new. Cryptocurrencies have been around for a while now and their market value is followed daily,
like the stock market. However, there is a new trend that has been appearing in recent years that has to
do with digital money.
As mentioned in the introduction, we are talking about NFT tokens, or non-fungible tokens, which
in a way represent a new class of digital assets. NFTs are a type
of crypto token that allows various works of art on various media and sites to be “tokenized” and sold
through digital commerce mechanisms, such as Bitski.com.
## 2. BLOCKCHAIN TECHNOLOGY
Blockchains are databases that store records on computers all over the world. This makes the
blockchain a distributed database with a peer-to-peer architecture. The term ‘distributed’ means that the
data is stored in multiple locations, and the term ‘peer-to-peer’ means that there is no central authority
that holds the main copy of the data.
The thing that makes blockchain so special is that once something is written into the blockchain,
it can never be changed or deleted. Therefore, blockchain has become such a popular topic - because
it provides a secure way to store information about assets. In the future, blockchain will be used to store
data about who owns which house, apartment, car, insurance policy, etc.
‘There are four main characteristics of blockchain technology:
1. Transparency - All participants in the chain can see all records that have previously been entered
in “blocks”.
2. Decentralization and data forwarding - a number of coexisting computers, each of which
individually has equal insight into the entered data and the ability to introduce new data.
*Corresponding author: mujevic5@gmail.com
© 2023 by the authors. This article is an open access article distributed under the terms and conditions of the
[Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).](https://creativecommons.org/licenses/by/4.0/)
-----
3. Irreversibility (non-refund) - once the data is in the “block” chain, it stays there forever.
4. Lack of intermediation - there is no central body that manages and closely regulates the
transactions that occur online. Everything on the blockchain network takes place and is
regulated between the equal participants in the “blocks”.’ (Strabac, M.,
2021).
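A minimal Python sketch (purely illustrative; this is not an actual blockchain implementation, and the transfer records are hypothetical) shows why this immutability holds: each block stores the hash of its predecessor, so altering any earlier record breaks every later link.

```python
import hashlib
import json

# Illustrative sketch of a hash-linked chain of blocks.
def make_block(data, prev_hash):
    block = {"data": data, "prev": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps({"data": data, "prev": prev_hash}, sort_keys=True).encode()
    ).hexdigest()
    return block

genesis = make_block("genesis", "0" * 64)
b1 = make_block("Alice -> Bob: token #7", genesis["hash"])      # hypothetical record
b2 = make_block("Bob -> Carol: token #7", b1["hash"])           # hypothetical record

# Tampering with b1 invalidates the link that b2 stores:
b1["data"] = "Alice -> Mallory: token #7"
recomputed = hashlib.sha256(
    json.dumps({"data": b1["data"], "prev": b1["prev"]}, sort_keys=True).encode()
).hexdigest()
print(recomputed == b2["prev"])   # False: the chain exposes the edit
```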
**2.1. Tokens on a blockchain technology**
‘Tokens are assets that are encrypted into blocks on a blockchain. The process of creating a token
is commonly known as ‘minting’ the token. A blockchain serves as the foundation onto which a token is
encrypted, creating an immutable record of the existence and ownership of digital assets, such as artwork’
(Murray, DM 2022).
**2.2. NFTs: Unique digital assets**
‘Non-fungible tokens (NFTs) are cryptographically unique tokens that are linked to digital (and
sometimes physical) content, providing proof of ownership’ (Kramer, Graves, Philips, 2022).
**2.3. Fungible vs non-fungible**
‘A fungible token can be exchanged one for one with any other token of its kind’ (Murray, DM
2022).
A fungible token is a unit of cryptocurrency that can be exchanged for any other unit of cryptocurrency.
For example, one bitcoin on the Bitcoin blockchain can be exchanged for any other bitcoin. Because they
are fungible, each one of them has the same value.
A non-fungible token, by contrast, is unique: you cannot exchange one NFT for another on a one-to-one basis, nor can you sell parts of it.
For example, one NFT that records the existence and ownership of a 30X40 oil painting
will not be presumed to be exchangeable for another NFT that records the existence and ownership of a
different 30X40 oil painting.
## 3. NFTS AND SMART CONTRACTS
‘Non-fungible tokens facilitate a registration process that makes tokenized digital artwork a unique
asset. Although other similar works may exist and may even have NFT registrations on the blockchain,
each NFT creation produces a unique blockchain record for a unique asset’ (Murray, DM 2022).
‘NFTs usually exist on a blockchain, which is, as noted above, a distributed ledger that records
transactions. The main difference between NFTs and smart contracts is that NFTs are digital assets
powered through smart contracts, meaning that smart contracts control the transferability and ownership
of NFTs. In other words, smart contracts are not the same as NFTs but are vital to their use. Furthermore,
both run on the blockchain, and so many of the disputes that arise with respect to NFTs go back to the
smart contracts that control them’ (Schmitz, JA 2022).
‘This is important because some argue that smart contracts will create efficiencies and may
largely eliminate the need for complicated and costly letters of credit, bonds, and security agreements by
digitizing automatic enforcement or payment’ (Schmitz, JA 2022).
As defined by Nick Szabo, ‘a smart contract is a computerized transaction protocol that executes
the terms of a contract’ (Szabo, N. 1994).
‘The key characteristic of these contracts is that they can be represented in the program code and
executed by computers: this differs from traditional contracts that are established through negotiations,
written documents, and consensual actions. Smart contracts are self-enforcing and self-adjusting
computer programs based on a program algorithm’ (Cvetkovic, P. (2020).
‘Smart contracts are used to write and store non-fungible tokens (NFTs), which have basic sale
terms defined within them. The Ethereum blockchain has differing levels of token fungibility depending
on the technical standard applied. The ERC-20 standard is used for fungible tokens, which means that
one token is always equal to all other tokens. In essence, an ERC-20 token is the same as Ether (ETH)’
(Aksoy, Üner 2021).
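The distinction can be made concrete with a minimal Python sketch (hypothetical classes loosely modelled on the ERC-20 and ERC-721 interfaces; this is not real smart-contract code, and all names and URIs are illustrative):

```python
class FungibleToken:
    """ERC-20 style: only balances matter; any unit equals any other unit."""
    def __init__(self):
        self.balances = {}
    def transfer(self, src, dst, amount):
        assert self.balances.get(src, 0) >= amount, "insufficient balance"
        self.balances[src] -= amount
        self.balances[dst] = self.balances.get(dst, 0) + amount

class NonFungibleToken:
    """ERC-721 style: each token_id is unique and owned as a whole."""
    def __init__(self):
        self.owner_of = {}    # token_id -> current owner
        self.token_uri = {}   # token_id -> link to the off-chain content
    def mint(self, token_id, owner, uri):
        assert token_id not in self.owner_of, "already minted"
        self.owner_of[token_id] = owner
        self.token_uri[token_id] = uri   # the NFT links to, not contains, the work
    def transfer(self, src, dst, token_id):
        assert self.owner_of.get(token_id) == src, "not the owner"
        self.owner_of[token_id] = dst    # whole token only; no fractional sale

nft = NonFungibleToken()
nft.mint(7, "alice", "ipfs://QmExample")   # hypothetical token and URI
nft.transfer("alice", "bob", 7)
```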
**3.1. The rise of NFT**
Why would someone pay millions for a JPEG image?
‘There is no limit to what NFTs can represent. They can represent digital images, films, audio, or
something entirely intangible (such as an invisible sculpture)’ (Aksoy, Üner 2021).
-----
The concept of non-fungible tokens (NFTs) has gained widespread attention in the blockchain
community due to the recent deployment of the NFT Standard on the Ethereum platform.
NFT became popular in 2017 when the game “CryptoKitties” was launched, the first game
to use the ERC-721 protocol. In that game, virtual cats are unique and therefore tokenized and sold.
Eventually, the market for CryptoKitties had the highest transaction volume on the Ethereum blockchain.
‘Arguably the most astonishing price tag came with a piece of digital art by artist Beeple named
‘Everydays: The First 5000 Days’, an NFT of which sold for $69,346,250 through an auction at Christie’s
in early March 2021’ (Lantwin, T. 2021).
The buyer who received this NFT got a JPEG file and a unique piece on the Ethereum blockchain,
and this NFT did not include copyright ownership of its piece. Then, ‘Twitter founder Jack Dorsey’s first-ever tweet has been sold for the equivalent of $2.9m’ (Harper, J. 2021).
As the seller, Mr. Dorsey will be digitally signing and verifying a certificate for Mr. Estavi, the buyer, which
will also include the metadata of the original tweet.
The data will include information such as the time the tweet was posted and its text content.
‘Also, The New York Times has entered the NFT game, selling the chance to own ‘the first article
in the almost 170-year history of The Times to be distributed as an NFT’ — and it’s sold for right around
$560,000’ (Clark, M. 2021).
**3.2. Are NFTs the future?**
As written previously, the blockchain contains a collection of data stored in electronic form that
can be accessed quickly and easily by any number of users at the same time, and the same holds for
the NFTs recorded on it.
In addition to the recording function, a purchased NFT can be stored in a digital
asset wallet and shared virtually. The owner can then display their NFT to showcase their ownership of
the asset. It may even be considered a portable form of art (Aksoy, Üner 2021).
## 4. NFTs PLATFORMS
‘Once NFTs are minted, they become available for transactions, which are primarily
facilitated by intermediary platforms.
To exemplify the type of private ordering that shapes NFT transactions and is relevant for copyright
purposes, the terms and conditions of particular platforms, as well as EU law instruments, play a significant
role. (Bodó, Giannopoulou, Quintais, Mezei 2022).
Based on NFT function and subject matter, the following types of platform can be distinguished:
1. platforms that function as open marketplaces for all minted NFTs;
2. platforms that function as collection-based marketplaces;
(i) Platforms that function as open marketplaces
‘Open marketplaces enable the creation and exchange of NFTs by anyone. They function as
the eBay of the NFT ecosystem. Dominated by a few major players, including OpenSea, Rarible, and
Foundation.
The growth of NFT marketplaces can be attributed to several factors. The streamlined minting
process is particularly appealing to creators and companies, regardless of their technical experience.
NFTs minted elsewhere can be conveniently listed, and these factors combine to increase the
variety and quantity of NFT supply. These variables could create a self-reinforcing cycle and eventually consolidate
this sector into a few dominant players.
Category (1) platforms impose the least amount of restrictions with respect to third-party minted
NFTs and different types of NFTs. This openness enables them to operate on a larger scale.
(Bodó, Giannopoulou, Quintais, Mezei 2022).
(ii) Platforms that function as collection-based marketplaces
‘With the advent of blockchain technology which enabled the creation of digital collectibles in
the form of NFT, many investors have diverted their attention to the NFT collectibles market in the last
two years, creating a FOMO in this space. The NFT craze can be seen from its sales volumes in the
last two years. The sales of non-fungible tokens (NFTs) were just $81.1 million in the first half of 2020
but surged to $2.5 billion in the first half of 2021. NFT collectibles share the same characteristics as
-----
traditional collectibles like scarcity, uniqueness and more. However, it has added advantages like the
ability to authenticate the originality of an NFT collectible as well as in proving the ownership’ (Liew, VK
2021).
## 5. COPYRIGHT LAW AND NFTs
The fundamental idea is that just because you own the NFT doesn’t mean you own the copyright
as well. In other words, you can have possession of the object, but you might not have the copyright
associated with that object.
Copyright is not just a single right, but a collection of rights, and most of these rights are retained
by the original creator of the work.
The idea of ‘Digital Exhaustion’ is connected to the first-sale doctrine,
which maintains that the first authorized sale of a product with intellectual property attached exhausts the
rightsholder’s capacity to allege infringement by further sales of that product (Bjarnason, C. 2021).
So, what right are you getting when you buy an NFT?
Oftentimes, people are unaware that they do not obtain a full transfer of copyright when they
purchase an NFT. Although this is the case, the NFT still belongs to the buyer. They can then trade it, sell
it, or give it away as they choose.
Does owning an NFT grant you every right?
Usually, no.
When purchasing an NFT, it is important to ensure that the original copyright holder has expressly
agreed, in writing, to convey the right to you. Without this agreement, you may only be granted certain
rights to the NFT. There are, however, cases in which the original copyright holder grants full rights to
the buyer of an NFT. This information can be checked and verified by reading the description of the NFT
listing.
It’s not surprising that there are so many legal implications that come with NFTs. The following
chapter of this paper will center its focus on the matter of copyright law as it pertains to NFTs, including
but not limited to the Information Society Directive, the Resale Rights Directive, and the Digital Single
Market Directive. This is with specific reference to artwork and NFTs where the intellectual property is owned by
the original creator.
**5.1 EU Copyright Law Applied to NFTs**
The Information Society (InfoSoc) Directive
Under InfoSoc Directive, when you own a copyright, you typically own reproduction right, the right
of communication to the public of works, and right of making available to the public other subject-matter
and distribution right.
‘The first relevant clause of the InfoSoc Directive is Article 2 “Right to Reproduce”, which confers the
copyright holder the exclusive right to reproduce and make copies of the artwork’ (Bjarnason, C. 2021).
‘Article 3 of the InfoSoc Directive provides the creator with the “right of communication to the public
of works and the right of making available to the public”. This can be described as the ‘right to display’
(Bjarnason, C. 2021).
The original creator always has the right to display the artwork, regardless of exhaustion.
Furthermore, the owner of the NFT will also have the right to display the underlying artwork.
This clause, which is both relevant and key, outlines the various rights given to NFT holders.
One of these rights is the right to display the linked artwork.
‘Article 4 of the InfoSoc Directive grants the creator the exclusive right to authorize or prohibit the
distribution of their work to the public, whether through a transaction or otherwise. In this case, the doctrine
of first sale applies, which results in the creator’s rights being exhausted upon the sale of a specific version
of a particular creation. As exhaustion is applicable, if an original creation is sold, the purchaser has the
right to resell this creation. However, when it comes to NFTs, the NFT and its accompanying rights are the
items that can be sold. The underlying artwork, on the other hand, is not necessarily sold unless this right
is embedded in the terms of sale of the NFT. Nevertheless, it is common for the artwork to be transferred
along with the NFT, as demonstrated by the Beeple auction and the ‘Disclosure Face’ sale mentioned
above’ (Bjarnason, C. 2021).
-----
**5.1.1. The Resale Right Directive and the Digital Single Market Directive**
‘The Resale Right Directive in EU copyright law requires that any type of seller who subsequently
sells an artist’s original artwork must provide a royalty-based commission to the artist. According to Article
2 of this directive, original works of art can include paintings, sculptures, ceramics, and photographs’
(Bjarnason, C. 2021).
**5.1.2. Digital Single Market (DSM) Directive**
‘Article 17 of the directive specifies that online content-sharing service providers are responsible for
any illicit content, including copyright infringement, on their platform.
NFTs and their marketplaces have inherent copyright protection mechanisms, such as the
marketplace’s terms of sale and transaction execution through smart contracts, that ensure compliance
with the rules established in the DSM directive’ (Bjarnason, C. 2021).
**5.2 NFTs and Copyright Ownership**
‘In an ideal world, the copyright owner of an artwork would also be the creator of its NFT.’
However, as one would expect, individuals who engage in infringement find their way around in
the digital sphere and create more intellectual property-related problems. Additionally, a new problem
has arisen in recent years. These individuals “mint” NFTs based on copied artwork without permission
and put them up for sale. As the decentralization, encryption, and anonymity features that are inherent in
blockchain ecosystems make it hard to identify the infringer, this can be a big issue.
Coming to the question, how can someone sell work that is not theirs?
‘NFTs and copyright law have two significant zones of interaction. The first is related to the ‘minting’
when NFTs are created, and the second is focused on the dissemination of the digitized work’ (Idelberger,
Mezei 2022).
‘The concept of NFTs is such that the original content is not included in them. Rather, they are
compiled with standard contracts, resulting in unique metadata that can be written to the blockchain.
Essentially, an NFT functions as a digital receipt that links to the original content, much like a deed would
for a house. It is worth noting that the NFT itself is not a copy of the content’ (Bodó, Giannopoulou,
Quintais, Mezei 2022).
‘When it comes to NFTs, there is no real copyright ownership title over the tokenized work. This
means that the original creator retains control over the work, even after it has been sold on an online
marketplace. However, the metadata associated with the NFT may grant certain rights to the acquirer of
the token. These rights are usually quite limited, and they often restrict the commercial use of the work’
(Bodó, Giannopoulou, Quintais, Mezei 2022).
Provided these online agreements (aided by smart contracts) meet the formal requirements of national copyright contract rules, their validity and execution should, ideally, pose no issue and fall within the parties' freedom of contract.
‘When it comes to NFTs, the sellers have the power to determine their own terms. These terms
can include options like transferring traditional rights, using the NFT to unlock additional content, or
implementing a digital royalty for resale. In any case, creators and owners of NFTs have significant control
over the destiny of their creations' (Lapatoura, 2021, p. 171).
While those who sell NFTs are free to establish their own licensing agreements for the tokenized
work, these agreements will have little impact from a copyright standpoint regarding the exhaustion of the
distribution right, as is frequently seen in collection-based or curated marketplaces.
‘For instance, Mike Shinoda from the band Linkin Park, who successfully sold the audio clip “Happy
Endings” accompanied by his artwork, published the terms of his NFT sales as follows: “Only limited
personal non-commercial use and resale rights in the NFT are granted and you have no right to license,
commercially exploit, reproduce, distribute, prepare derivative works, publicly perform, or publicly display
the NFT or the music or the artwork therein. All copyright and other rights are reserved and not granted.”
## 6. CONCLUSION
NFTs give their holders the illusion of ownership; in other words, they are a ‘cryptographically
signed receipt that you own a unique version of a work’ (Guadamuz, 2021c).
The global market has been significantly impacted by NFTs due to their disruption of the traditional
model of auctioning art. Cost-effectiveness is increased with NFTs because there is no need to worry
about storage or insurance expenses.
Non-fungible tokens should not be confined to digital artwork, or even to the art industry; they will also influence physical assets and drive their tokenization.
Of course, like any other technological advancement, NFTs raise many questions and uncertainties, especially with respect to copyright law. The minting and sale of protectable artistic works have led to copyright issues that must be addressed, and many of these uncertainties can be clarified through license agreements. Resolving these issues is important to ensure that the minting and sale of NFTs take place in a legal and protected manner.
## 7. BIBLIOGRAPHY
Andrew (2022) 'NFTs use "smart" contracts—but what exactly are they?' Available at: https://www.theartnewspaper.com
Aksoy, Üner (2021) 'NFTs and copyright: challenges and opportunities'. Available at: https://www.deepdyve.com/lp/oxford-university-press/nfts
Bjarnason, C. (2021) 'NFT Explained: In the Eyes of EU Copyright Law'. Available at: https://medium.com/@casperbjarnason/nfts
Bodó, B., Giannopoulou, A., Quintais, J. P., Mezei, P. (2022) 'The Rise of NFTs: These Aren't the Droids You're Looking For'. Available at: SSRN.
Clark, M. (2021) 'The New York Times just sold an NFT for more than half a million dollars.' Available at: https://www.theverge.com/
Cvetkovic, P. (2020) 'Legal aspects of blockchain applications: An example of smart contracts' (PDF).
Guadamuz, A. (2021c) 'The treachery of images: Non-fungible tokens and copyright'. Journal of Intellectual Property Law & Practice, 16(12), 1367–1385. https://doi.org/10.1093/jiplp/jpab
Harper, J. (2021) 'Jack Dorsey's first ever tweet sells for $2.9m', BBC News, 23 March. Available at: https://www.bbc.com/news/business
Idelberger, F., Mezei, P. (2022) 'Non-fungible tokens'. Available at: SSRN.
Lantwin, T. (2021) 'Beyond the hype: NFTs, digital art and copyright'. Available at: https://www.dusip.de/en/2021/06/16/beyond-the-hype-nfts-digital-art-and-copyright/
Lapatoura, I. (2021) 'Creative digital assets as NFTs: A new means for giving artists their power back?' Entertainment Law Review, 32(6), 169–172.
Liew, V. K. (2021) 'DeFi, NFT and GameFi Made Easy: A Beginner's Guide to Understanding and Investing in DeFi, NFT and GameFi Projects'.
Kramer, Graves, Philips (2022) 'Beginner's Guide to NFTs: What are Non-Fungible Tokens?' Available at: https://decrypt.co/resources/non-fungible-tokens-nfts-explained-guide-learn-blockchain
Lapatoura, I. (2021) 'Copyright & NFTs of Digital Artworks'. Available at: https://ipkitten.blogspot.com/2021/03/guest-post-copyright-nfts-of-digital.html
Murray, M. D. (2022) 'NFT Ownership and Copyrights'. Available at: SSRN.
Purtill (2021) 'Artists report discovering their work is being stolen and sold as NFTs'. Available at: https://www.abc.net.au/news/science/
Schmitz, A. J. (2022) 'Resolving NFT and Smart Contract Disputes'. Available at: SSRN.
Strabac, M. (2021) 'Protection of personal data in the blockchain' [in Serbian]. Available at: https://www.milic.rs/zastita-podataka/zastita-podataka-o-licnosti-u-blockchain-u/
Szabo, N. (1994) 'Smart Contracts'. Available at: https://www.fon.hum.uva.nl/rob/Courses/InformationInSpeech/CDROM/Literature/LOTwinterschool2006/szabo.best.vwh.net/idea.html
Sephton (2021) 'Copyright infringement and NFTs: How artists can protect themselves'. Available at: https://cointelegraph.com/news/copyright-infringement-and-nfts-how-artists-can-protect-themselves
Vrbanus (2021) 'Emily Ratajkowski sells her own image as an NFT to "take control" of ownership' [in Croatian]. Available at: https://www.bug.hr/blockchain/emily-ratajkowski-prodaje-vlastitu-sliku-kao-nft-da-bi-preuzela-kontrolu-nad-21067
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.35120/sciencej020215m?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.35120/sciencej020215m, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "HYBRID",
"url": "https://scienceij.com/index.php/sij/article/download/20/17"
}
| 2,023
|
[
"JournalArticle"
] | true
| 2023-05-31T00:00:00
|
[] | 6,592
|
en
|
[
{
"category": "Economics",
"source": "s2-fos-model"
},
{
"category": "Business",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/0148207de2efe9b34947ed00528c7d09f7af7249
|
[] | 0.91426
|
Is Bitcoin the 'Digit Gold'A Potential Safe-haven Asset?
|
0148207de2efe9b34947ed00528c7d09f7af7249
|
Advances in Economics, Management and Political Sciences
|
[
{
"authorId": "2214470875",
"name": "Shupeng Guan"
},
{
"authorId": "2214533470",
"name": "Han Jiang"
},
{
"authorId": "2214509451",
"name": "Muyang Zhou"
},
{
"authorId": "49721770",
"name": "Jianuo Liu"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"Adv Econ Manag Political Sci"
],
"alternate_urls": null,
"id": "efee6e0c-f1db-4556-8c10-5a827a83e1b3",
"issn": "2754-1169",
"name": "Advances in Economics, Management and Political Sciences",
"type": "journal",
"url": null
}
|
In this paper, we examine whether bitcoin has the potential to become safe-haven asset that can rival gold in the future. We observed, compared and analyzed and the performance of bitcoin and gold in face of a falling market and inflation pressure. We can see if investors can rely on bitcoin to reduce risk exposure significantly through empirical tests. At the end of our research, we found that bitcoin did not perform as well as gold did when faced with market crash and inflation. Therefore, we conclude that bitcoin does not yet show the potential to possess risk-proof merits as gold, the traditional high-quality hedge asset. Gold would probably remain the preferred hedge asset against cryptocurrency for now.
|
The 6th International Conference on Economic Management and Green Development (ICEMGD 2022)
DOI: 10.54254/2754-1169/4/2022910
# ***Is Bitcoin the 'Digit Gold'——A Potential Safe-haven Asset? ***
## Shupeng Guan¹*, Han Jiang², Muyang Zhou³, and Jianuo Liu⁴

¹ School of Mathematics, University of Birmingham, B15 2TT, United Kingdom
² Wuhan Britain-China School, 430030, China
³ Information School, University of Washington, 98195, United States
⁴ Material Science & Engineering, Nanyang Technological University, 639798, Singapore
* wayneg0530@163.com (corresponding author)

**Abstract:** In this paper, we examine whether bitcoin has the potential to become a safe-haven asset that can rival gold in the future. We observed, compared, and analyzed the performance of bitcoin and gold in the face of a falling market and inflation pressure. Through empirical tests, we can see whether investors can rely on bitcoin to significantly reduce risk exposure. At the end of our research, we found that bitcoin did not perform as well as gold when faced with a market crash and inflation. Therefore, we conclude that bitcoin does not yet show the potential to possess the risk-proof merits of gold, the traditional high-quality hedge asset. Gold would probably remain the preferred hedge asset over cryptocurrency for now.

**Keywords:** Bitcoin, Sharpe ratio, portfolio, gold.

**1. Introduction**

In the view of a segment of investors, all physical asset holdings may be at risk in the future, even gold, which is traditionally considered the safest haven asset. For this group of investors, some of the attributes of virtual currencies such as bitcoin are deeply attractive. For example, bitcoins are not subject to national monetary policies, meaning they are not influenced or controlled by governments. Also, as virtual property, bitcoins are not at risk of being destroyed or lost in a war or natural disaster. So, is it possible that virtual currencies could become a more desirable asset for investors in a potentially volatile future?

In the last two years, our world has experienced an event unlike anything before in this century: a worldwide epidemic that continues today. It has changed our world in many ways, including our ways of thinking, and has allowed us to observe and interpret the performance of different assets in the financial markets differently. In response to our questions, we gathered several articles published at different times, whose proposals differ from one another. Wong's paper argues that virtual currencies bring higher portfolio risk because of their high intrinsic volatility but at the same time bring higher Sharpe ratios to gold and stock portfolios [1]. Corbet's research suggested that virtual currencies could provide investors with many benefits and safety during an epidemic [2]. Conlon argued that virtual assets like bitcoin could not protect investors' assets during critical times [3]. And Hasan's paper suggested that safe-haven assets may vary over time [4]. So, how has the situation changed since then?
© 2023 The Authors. This is an open access article distributed under the terms of the Creative Commons Attribution License 4.0
(https://creativecommons.org/licenses/by/4.0/).
The first quarter of 2022 was a very turbulent period for international financial markets: immediately following a wave of worldwide outbreaks of the new omicron variant at the beginning of the year, a military conflict broke out in Eastern Europe, which has continued to this day (late May). Markets worldwide suffered a harsh test under the double pressure of the epidemic and geopolitical events. The general result was a series of market shocks and inflation spikes. When the world enters a period of uncertainty, the prices of commodities such as food and oil rise, stock markets fall, and gold prices rise. Interestingly, most virtual currencies saw their prices move up and down to a greater or lesser extent during this period.

In order to investigate whether bitcoin can be considered 'digit gold' through empirical evidence, we gathered weekly data sets covering a five-year period on the bitcoin price, the gold price, the S&P 500 index, US 10-year treasury yields, and US 1-month treasury yields. We aim to combine them into portfolios and observe and compare the performance of gold and bitcoin in different market situations.

**2. Data**

To conduct this research we used several data sets, including weekly bitcoin prices in US dollars and weekly gold prices in US dollars as the two variables of interest. Values of the S&P 500 serve as representative of the stock market, while the yield of the US 10-year treasury acts as a typical sample of the general bond market. In addition, the yield of the US 1-month treasury was set as the risk-free rate. Bitcoin was chosen as the representative of the cryptocurrency market because its market capitalization makes up nearly 40% of the total cryptocurrency market and hence should give a powerful indication of the overall characteristics of the whole market. As gold is widely acknowledged as an ideal hedge or safe-haven asset against economic recession or inflation, it is the natural comparison object. Data for bitcoin are collected from Blockchain.com, a leading platform for cryptocurrency exchange. Data for the gold price are collected from the LBMA (London Bullion Market Association). Data for the S&P 500 are collected from the WSJ (The Wall Street Journal). Data for the US 10-year treasury (US10Y) and the US 1-month treasury (US1M) are sourced from FRED.
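Before moving to Table 1, a minimal pandas sketch (with made-up numbers and our own column names, not the authors' code) illustrates how the log-price and log-return series summarized there can be derived from the weekly closing prices:

```python
import numpy as np
import pandas as pd

# Weekly closing prices; values and column names are illustrative only
prices = pd.DataFrame({
    "BTC": [18000.0, 18500.0, 17900.0, 18200.0],
    "Gold": [1540.0, 1555.0, 1538.0, 1542.0],
    "SP500": [3250.0, 3280.0, 3210.0, 3265.0],
})

log_prices = np.log(prices)               # the Ln(...) rows of Table 1
log_returns = log_prices.diff().dropna()  # the Re(...) rows: weekly log returns
print(log_returns.round(4))
```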
Table 1: Summary statistics of research data.

| Variable | Obs | Mean | Std. Dev. | Min | Max | Sharpe |
|---|---|---|---|---|---|---|
| BTC price | 256 | 18,306 | 17,820.2 | 1,448 | 66,954 | |
| Gold | 256 | 1,542 | 261.52 | 1,182 | 2,048 | |
| S&P500 | 256 | 3,262 | 711.64 | 2,357 | 4,793 | |
| US10Y | 256 | 1.93% | 0.75% | 0.55% | 3.22% | |
| US1M | 256 | 1.00% | 0.90% | 0.00% | 2.45% | |
| Ln(BTC) | 256 | 9.37 | 0.94 | 7.28 | 11.11 | |
| Ln(Gold) | 256 | 7.33 | 0.17 | 7.07 | 7.62 | |
| Ln(S&P500) | 256 | 8.07 | 0.21 | 7.77 | 8.47 | |
| Re(BTC) | 255 | 1.28% | 11.01% | -38.65% | 32.83% | 0.83 |
| Re(Gold) | 255 | 0.16% | 2.00% | -9.88% | 6.91% | 0.11 |
| Re(S&P500) | 255 | 0.22% | 2.48% | -13.38% | 10.72% | 0.14 |
| Re(US10Y) | 255 | 0.02% | 0.86% | -3.43% | 2.92% | 0.01 |
| Re(US1M) | 255 | 0.02% | 0.02% | 0.00% | 0.05% | 0.01 |

The data are observed weekly from 03/05/2017 to 27/04/2022, yielding 256 observations after processing. Table 1 provides a summary report of the whole dataset, including the prices of the cryptocurrency (bitcoin), gold, and stocks (S&P 500). The top part shows the statistics on the closing
prices in US dollars of bitcoin, gold, and stocks (S&P 500), and the yields of the US 10-year and 1-month treasuries. The middle part shows the log prices of bitcoin, gold, and stocks (S&P 500). The bottom part shows the log returns of all five assets. All of the data transformations are intended for the subsequent calculation of the optimal portfolio maximizing the Sharpe ratio, which is shown in Table 2.

Table 2: Optimal Sharpe portfolios; weights add to 100%.
| | Bitcoin | Gold | S&P500 | US 10Y |
|---|---|---|---|---|
| S&P500/10Y | 0% | 0% | 44% | 56% |
| S&P500/10Y/Gold | 0% | 82% | 71% | -53% |
| S&P500/10Y/Gold/BTC | 24% | 81% | 46% | -51% |

**3. Methodology**

In this section, the methodologies used to conduct our research are described. We analyze whether bitcoin, the so-called 'digit gold', has the potential to be as good as gold when considered as a safe-haven investment, especially when the stock market is down and inflation worsens. We separated the data into four sets: weeks when both the S&P500 and 10Y went up, weeks when both went down, weeks when the S&P500 went up and 10Y went down, and weeks when the S&P500 went down and 10Y went up; that is, we considered all four possible market situations. Next, we calculated the optimal Sharpe ratio portfolio for each of the four sets individually: initially the portfolio consists only of stocks (S&P500) and bonds (US 10-year treasury), then gold is added, then bitcoin. This makes it convenient to compare how much bitcoin and gold one would want to hold given what the stock and bond markets are doing. In particular, when the stock and bond markets drop, we compare how much gold and bitcoin each improve the Sharpe ratio of the portfolio, to verify which is the better safe-haven asset.

The specific data analysis is carried out in the following steps. The first step is the demeaning processing, where the mean equation is

$$R_d = (R - \bar{R}) - (r - \bar{r}) \quad (1)$$

where $R_d$ is the demeaned excess return, $R$ is the return of the asset and $\bar{R}$ is the mean return, while $r$ is the risk-free rate and $\bar{r}$ is the mean risk-free rate. The second step is to calculate the optimal portfolio by maximizing the Sharpe ratio. The Sharpe ratio, originally developed by Nobel laureate William F. Sharpe [5], is an indicator of risk-adjusted return. It is defined as
$$\frac{R(x) - r}{\sigma} \quad (2)$$

where $R(x)$ is the expected return of the portfolio, $r$ is the risk-free rate, and $\sigma$ is the standard deviation of $R(x)$. Normally, a higher Sharpe ratio indicates better investment performance for a given level of risk. If a Sharpe ratio is negative, the risk-free return is greater than the expected return of the portfolio, and the ratio conveys nothing meaningful. The mathematical model for the Sharpe-ratio-based portfolio optimization is given by

$$\max \frac{\sum_{i=1}^{N} W_i \cdot \mu_i - r}{\sqrt{\sum_i \sum_j W_i \cdot W_j \cdot \sigma_{ij}}} \quad (3)$$

subject to

$$\sum_{i=1}^{N} W_i = 1 \quad (4)$$

$$0 \le W_i \le 1 \quad (5)$$

where $W_i$ is the weight of each asset. The numerator of the objective function denotes the excess return of the investment over that of a risk-free asset $r$, and the denominator the risk of the investment. The objective is to maximize the Sharpe ratio. The basic constraint indicates that this is a fully invested portfolio; in other words, the weights add to 100%.

In the third step, once the optimal Sharpe portfolios of the different combinations were obtained, we computed the growth of $1 in all three portfolios, scaling each one to the same volatility (standard deviation), and visualized their growth to compare apples to apples. We mainly focus on the situations where one of the stock and bond markets falls, so that we can compare bitcoin and gold and verify whether bitcoin also has the property of hedging against a falling stock market or inflation as physical gold does.
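As a concrete illustration of the demeaned max-Sharpe optimization, the following is a minimal sketch (not the authors' original code, which the paper does not provide) using `scipy.optimize.minimize`; the return matrix `returns` and the risk-free rate `rf` are assumed inputs, and the bounds mirror Eq. (5), although the paper's reported portfolios include negative (short) weights, which would require relaxed bounds.

```python
import numpy as np
from scipy.optimize import minimize

def max_sharpe_weights(returns: np.ndarray, rf: float) -> np.ndarray:
    """Find portfolio weights maximizing the Sharpe ratio (Eq. 3).

    returns: T x N matrix of weekly asset returns.
    rf: weekly risk-free rate.
    """
    mu = returns.mean(axis=0)            # expected returns, one per asset
    cov = np.cov(returns, rowvar=False)  # covariance matrix (sigma_ij)
    n = returns.shape[1]

    def neg_sharpe(w):
        excess = w @ mu - rf
        vol = np.sqrt(w @ cov @ w)
        return -excess / vol             # minimize the negative Sharpe ratio

    w0 = np.full(n, 1.0 / n)             # start from equal weights
    cons = ({'type': 'eq', 'fun': lambda w: w.sum() - 1.0},)  # Eq. (4)
    bounds = [(0.0, 1.0)] * n            # Eq. (5); relax to allow shorting
    res = minimize(neg_sharpe, w0, bounds=bounds, constraints=cons)
    return res.x

# Example: four assets (BTC, gold, S&P500, US10Y) over 255 weeks of fake data
rng = np.random.default_rng(0)
demo_returns = rng.normal(0.002, 0.02, size=(255, 4))
print(max_sharpe_weights(demo_returns, rf=0.0002))
```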
**4. Results**

We calculated the weight matrices and optimal Sharpe portfolio statistics for the four sets. The numbers are shown in the following tables: Table 3 (weeks when both the S&P500 and 10Y went down), Table 4 (weeks when both went up), Table 5 (weeks when the S&P500 went up and 10Y went down), and Table 6 (weeks when the S&P500 went down and 10Y went up). The growth of the portfolios (value of $1) is also visualized.

Table 3: Both S&P500 and 10Y went down.

| | Bitcoin | Gold | S&P500 | US 10Y | Mean | Std. Dev. | Sharpe |
|---|---|---|---|---|---|---|---|
| S&P500/10Y | 0% | 0% | -19% | -81% | 1.05% | 0.92% | 8.04 |
| S&P500/10Y/Gold | 0% | 13% | -21% | -93% | 1.13% | 0.89% | 8.97 |
| S&P500/10Y/Gold/BTC | 2% | 15% | -19% | -98% | 1.10% | 0.83% | 9.49 |
| Standard deviation | 12.63% | 2.47% | 2.54% | 0.71% | | | |

(*Mean and standard deviation are calculated weekly, while the Sharpe ratio is annualized; the same applies below.)

Table 3 considers the situation when both the S&P500 and 10Y went down. As the table shows, the portfolio improved when gold was introduced: the Sharpe ratio increased by 0.93, as gold is a well-known classic hedge against stock market declines and inflation. Bitcoin also adds a subtle improvement to the portfolio; though it does not account for much of the whole portfolio in dollar terms compared to gold, it makes a difference owing to its relatively high volatility, and the Sharpe ratio is raised by a further 0.52. Looking at Figure 1, the growth trends of the three combinations basically follow the same pattern, while combinations with more assets perform better in value over time.
Figure 1: Growth of value of $1 when both S&P500 and 10Y went down.

Table 4: Both S&P500 and 10Y went up.

| | Bitcoin | Gold | S&P500 | US 10Y | Mean | Std. Dev. | Sharpe |
|---|---|---|---|---|---|---|---|
| S&P500/10Y | 0% | 0% | 36% | 64% | 0.81% | 0.50% | 11.29 |
| S&P500/10Y/Gold | 0% | 0% | 36% | 63% | 0.83% | 0.52% | 11.32 |
| S&P500/10Y/Gold/BTC | 0% | 0% | 33% | 67% | 0.83% | 0.52% | 11.33 |
| Standard deviation | 10.92% | 1.86% | 1.12% | 0.52% | | | |

Table 4 considers the situation where both the S&P500 and 10Y went up: a bull market without inflation. It seems that in a bull market with no inflation pressure, bitcoin and gold are seldom considered for the portfolio, as the portfolios basically keep the same structure, simply because investors expect to make money from stock and bond investments as long as they keep rising; in reality, that is a big if.
Figure 2: Growth of value of $1 when both S&P500 and 10Y went up.

Table 5: S&P500 went up and 10Y went down.

| | Bitcoin | Gold | S&P500 | US 10Y | Mean | Std. Dev. | Sharpe |
|---|---|---|---|---|---|---|---|
| S&P500/10Y | 0% | 0% | 28% | -126% | 1.21% | 0.82% | 10.48 |
| S&P500/10Y/Gold | 0% | -6% | 34% | -127% | 1.32% | 0.89% | 10.57 |
| S&P500/10Y/Gold/BTC | 1% | -6% | 32% | -128% | 1.30% | 0.87% | 10.59 |
| Standard deviation | 9.87% | 1.69% | 1.48% | 0.48% | | | |

Table 5 considers the situation when the S&P500 went up and 10Y went down. Looking at the second portfolio combination, gold shows a subtle negative correlation with the stock market, which could be the result of the offset between a rising stock market and deflation. Since the Sharpe ratio hardly changes, we can conclude that bitcoin did not make much difference to the portfolio. Figure 3 further supports this claim.
Figure 3: Growth of value of $1 when S&P500 went up and 10Y went down.

Table 6: S&P500 went down and 10Y went up.

| | Bitcoin | Gold | S&P500 | US 10Y | Mean | Std. Dev. | Sharpe |
|---|---|---|---|---|---|---|---|
| S&P500/10Y | 0% | 0% | -14% | 114% | 1.17% | 0.95% | 8.68 |
| S&P500/10Y/Gold | 0% | 6% | -19% | 114% | 1.30% | 1.05% | 8.75 |
| S&P500/10Y/Gold/BTC | 0% | 6% | -18% | 112% | 1.27% | 1.03% | 8.75 |
| Standard deviation | 11.38% | 2.00% | 2.46% | 0.67% | | | |

Table 6 considers the situation when the S&P500 went down and 10Y went up. It is widely acknowledged that when stocks perform poorly, gold should be introduced into the portfolio to make life easier. However, bitcoin does not help as gold does: the Sharpe ratio remains the same, indicating that it plays a trivial role in this situation.
Figure 4: Growth of value of $1 when S&P500 went down and 10Y went up.

Table 7: The whole period.

| | Bitcoin | Gold | S&P500 | US 10Y | Mean | Std. Dev. | Sharpe |
|---|---|---|---|---|---|---|---|
| S&P500/10Y | 0% | 0% | 44% | 56% | 0.11% | 1.13% | 0.56 |
| S&P500/10Y/Gold | 0% | 82% | 71% | -53% | 0.28% | 2.57% | 0.73 |
| S&P500/10Y/Gold/BTC | 24% | 81% | 46% | -130% | 0.53% | 3.73% | 0.98 |
| Standard deviation | 11.01% | 2.00% | 2.48% | 0.86% | | | |

Table 7 gives the optimal portfolio statistics for the whole dataset covering 256 weeks. As the numbers show, the Sharpe ratio rises from 0.56 to 0.73 when gold is added, and increases to 0.98 after bitcoin is added. Looking at Figure 5, and taking the blue line (S&P500/10Y) as the baseline, there are two obvious downturns across the period. During the first, around 03/05/2022, all three combinations fail to protect; during the second, near the end, comparing the trend of the green line (S&P500/10Y/Gold) with the blue line (S&P500/10Y), one can notice that the portfolio with gold reversed the falling trend of the blue line and maintained the general growth trend. However, bitcoin does not seem to protect in these downturns (compare the red and blue lines); rather, it helps by posting some spectacular returns during generally good times.
Figure 5: Growth of value of $1 over the whole period.

**5. Conclusion**

This paper investigates the potential of bitcoin as 'digit gold', a safe-haven asset. We tried to verify the usefulness of bitcoin in terms of hedging compared to physical gold. From the results, we conclude that it is not very persuasive to view bitcoin as 'digit gold'. Compared to physical gold, bitcoin is not a better holding in down markets. Specifically, in all four possible situations, the contribution bitcoin made was not more significant than that of gold. Further, from the charts we can see that a portfolio containing bitcoin does not generally outperform its counterparts without bitcoin in volatility and return growth during hard times, and it hardly shows the potential to protect a portfolio against bad times. However, Figure 5 implies that cryptocurrencies can be useful as a supplementary asset to raise returns in a portfolio during generally good times. Considering that its hedging role is so limited, much less than that of gold, and that it basically made no difference to the growth trend, we can reasonably conclude that bitcoin is uncorrelated with the stock and bond markets.

In summary, bitcoin's hedging role is deficient compared to gold in bear markets, while in bull markets, as the theory of portfolio diversification indicates, it has more of a volatility-reducing effect than a significant return-increasing one. Bitcoin can be used as a hedge against stocks in a portfolio simply because it is uncorrelated with the stock market, but it is not plausible that bitcoin is as good a safe-haven asset as physical gold. Since bitcoin is characterized by high volatility and high returns compared to gold, risk-seeking investors can increase their risk reward by investing in cryptocurrencies.

Our research has limitations due to the narrow selection of data. We later hope to examine a wider range of results and investment opportunities by combining more asset classes in the portfolio. To date, there are still many uncertainties around cryptocurrencies. Regulators may further suppress cryptocurrencies, leading to the often-predicted bursting of the cryptocurrency bubble; on the other hand, many investors view bitcoin as a speculative asset, which helps its widespread acceptance. Bitcoin may still be in its infancy, but derivatives like crypto options are growing. It is unclear whether bitcoin will be the cryptocurrency of choice in the future.
## **References**

[1] Wong, W. S., Saerbeck, D., & Delgado Silva, D. (2018, February 18). Cryptocurrency: A new investment opportunity? An investigation of the hedging capability of cryptocurrencies and their influence on stock, bond and gold portfolios.
[2] Corbet, S., Hou, Y. (G.), Hu, Y., Larkin, C., & Oxley, L. (2020, July 7). Any port in a storm: Cryptocurrency safe-havens during the COVID-19 pandemic. Economics Letters.
[3] Conlon, T., & McGee, R. (2020, May 24). Safe haven or risky hazard? Bitcoin during the COVID-19 bear market. Finance Research Letters.
[4] Hasan, M. B., Hassan, M. K., Rashid, M. M., & Alhenawi, Y. (2021, August 13). Are safe haven assets really safe during the 2008 global financial crisis and COVID-19 pandemic? Global Finance Journal.
[5] Sharpe, W. F. (1966, January). Mutual fund performance. Journal of Business, pp. 119–138.
[6] Henriques, I., & Sadorsky, P. (2018). Can Bitcoin replace gold in an investment portfolio? Journal of Risk and Financial Management, 11(3), 48.
[7] Bessler, W., Taushanov, G., & Wolff, D. (2021, May 29). Factor investing and asset allocation strategies: A comparison of factor versus sector optimization. Journal of Asset Management.
[8] Yoshino, N., Taghizadeh-Hesary, F., & Otsuka, M. (2020, July 12). COVID-19 and optimal portfolio selection for investment in Sustainable Development Goals. Finance Research Letters.
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.54254/2754-1169/4/2022910?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.54254/2754-1169/4/2022910, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "HYBRID",
"url": "https://www.ewadirect.com/proceedings/aemps/article/view/744/pdf"
}
| 2,023
|
[
"JournalArticle"
] | true
| 2023-03-21T00:00:00
|
[] | 5,563
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/014981e1454105af6a6275bef4baf47afbeb7377
|
[
"Computer Science"
] | 0.888705
|
AODV-Miner: Consensus-Based Routing Using Node Reputation
|
014981e1454105af6a6275bef4baf47afbeb7377
|
2022 18th International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob)
|
[
{
"authorId": "2132854826",
"name": "Edward Staddon"
},
{
"authorId": "1803713",
"name": "V. Loscrí"
},
{
"authorId": "1801854",
"name": "N. Mitton"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": null,
"id": null,
"issn": null,
"name": null,
"type": null,
"url": null
}
|
With the increase of Internet of Things (IoT) applications, securing their communications is an important task. In multi-hop wireless networks, nodes must unconditionally trust their neighbours when performing routing activities. However, this is often their downfall as malicious nodes can infiltrate the network and cause disruptions during routing. We grant nodes the ability to evaluate the behaviour of their neighbours and, through consensus inspired from blockchain's miners, agree on the credibility of each node. The resulting metric is expressed as a node's reputation allowing, in the case of a malicious node, to isolate it from network operations. By illustrating this in an AODV-like multi-hop routing protocol, we can influence route selection no longer based solely upon the shortest number of hops, but also the highest overall reputation. Simulation results revealed that our approach can decrease packet drop rates by ≈ 48% in a static context when subjected to multiple black hole attacks compared to the original routing protocol.
|
# AODV-Miner: Consensus-Based Routing Using Node Reputation
#### Edward Staddon, Valeria Loscri and Nathalie Mitton
Inria, France, {firstname.lastname}@inria.fr
**_Abstract—With the increase of Internet of Things (IoT)_**
**applications, securing their communications is an important task.**
**In multi-hop wireless networks, nodes must unconditionally trust**
**their neighbours when performing routing activities. However,**
**this is often their downfall as malicious nodes can infiltrate the**
**network and cause disruptions during routing. We grant nodes**
**the ability to evaluate the behaviour of their neighbours and,**
**through consensus inspired from blockchain’s miners, agree on**
**the credibility of each node. The resulting metric is expressed**
**as a node’s reputation allowing, in the case of a malicious node,**
**to isolate it from network operations. By illustrating this in an**
**AODV-like multi-hop routing protocol, we can influence route**
**selection no longer based solely upon the shortest number of**
**hops, but also the highest overall reputation. Simulation results**
**revealed that our approach can decrease packet drop rates by**
**≈ 48% in a static context when subjected to multiple black hole**
**attacks compared to the original routing protocol.**
**_Index Terms—IoT, Reputation, Cyber Security, AODV, Blockchain_**
I. INTRODUCTION
The Internet of Things (IoT) is becoming more prominent
in everyday life, processing increasing amounts of sensitive
data. Indeed, some applications come with extreme risks which
could result in severe consequences, from data breach to loss
of life. However, in some cases, devices are deployed in sparse
hostile environments, forcing them to employ other network
paradigms to communicate. This is the case of multi-hop
networks, where data is forwarded by intermediate devices
to reach its destination. Therefore, securing this exchange is
paramount as entrusting data to unknown nodes is a huge risk.
Many solutions combating threats in IoT networks have been proposed in the literature. Trust-based approaches [1] grant the capability to identify different nodes based on their previous actions in the network. This allows routing
protocols to adapt to the situation, granting routing capabilities
only to the most trustworthy [2], be it based on cooperation
[3] or through the use of signatures [4] to identify potential
threats. Furthermore, by analysing node actions and assigning
them a trust value, it is possible to influence how routing is
performed [5]. By using the notion of reputation, inspired from
the human psyche, we can evaluate the actions of surrounding
nodes, allowing protocols to avoid attacks [6]. However, many
approaches have been designed as part of existing protocols
and, therefore, only work in conjunction with them.
Another area is that of blockchain, which has long been a point of interest in security and which, when coupled with trust-based methods, can be highly fruitful [7]. Indeed, the blockchain, a decentralised immutable ledger [8] well known for its uses in cryptocurrencies such as Bitcoin [9], can be leveraged to distribute reputation values in a secure way and avoid tampering. All data is stored in structures called "blocks", each containing a reference to its predecessor in the form of a block hash. As a result, any modification would ripple through the chain and be detected, rendering the chain immutable, its main advantage [10]. New blocks must also go through an extensive validation process prior to their insertion. This is performed by Miners, which calculate a Proof of Work (PoW) confirming a block's credibility, which is subsequently verified by other miners before the block can be inserted. Although computationally intensive, this consensus-based mining process is the backbone which keeps the blockchain functioning and secure. Due to its immutability, the blockchain has been used in other areas such as IoT security, serving as a decentralised and secure medium for IoT applications [11]. Furthermore, in recent years it has also become an influence in securing routing techniques [12], [13], and has even been used in aviation to secure Unmanned Aircraft Systems against potential threats [14]. However, to access the blockchain it must be stored, which can become a heavy process as more blocks are added. This is further emphasised when the devices using the blockchain possess limited resources, such as storage, computational or even energy capacity.
In this paper, we propose a consensus-based reputation
module, providing behavioural analysis to network activities,
inspired by [15]. We employ a lightweight version of
blockchain, reducing its functionalities to a dissemination tool
only, repurposing its Miners with the extra responsibility of
behavioural validation. Furthermore, we redefine the PoW
method with our own consensus-based confirmation scheme,
corresponding to the specifications and constraints of our
network and validation models. As a consequence, these new
_validation miners, significantly different from their blockchain_
origins, hold a key position in the network. By designing
our module with adaptability in mind, it can be used by
different routing protocols to influence the route selection
process. We illustrate this with the Ad hoc On-Demand
Distance Vector (AODV) reactive routing protocol [16], in a
new implementation called AODV-Miner. By using such a well
known protocol as AODV, we can illustrate the functionalities
of our approach, and how it interacts with the chosen protocol.
The rest of this paper is organised as follows: Section
II defines our system model before presenting AODV-Miner
in Section III. Section IV presents our results before finally
discussing future endeavours and concluding this work in
Section V.
II. SYSTEM MODEL
_A. Network Model_
We consider a connected wireless network scenario with
N static nodes possessing omnidirectional antennas with a
fixed transmission range. Each node is aware of all traffic on
the wireless medium in proximity to them at all times. They
also possess the ability to determine their own role for the
lifetime of a route during discovery, making them either a
router or a validation miner, with priority given to routing.
As a result, receiving a route request identifies the node as a
router, whereas overhearing the request, identifies them as a
miner. Subsequently, nodes can participate in multiple routes
and can, as a consequence, take on multiple roles.
_B. Validation Model_
The role of validation miners is 1) to ”mine a route”,
validating routing behaviour between neighbours; and 2) to
”mine a block”, confirming and distributing the results using
the blockchain. For their first objective, each miner has
the ability to validate the behaviour of its neighbours. By
overhearing passing route requests, they can construct both
forward (src → dst) and reverse (dst → src) _Route Validation Tables_ (RVT) containing the expected hops in order.
Each ”good” or ”bad” action is categorised by the miner for
each neighbouring routing node of a specific route. Since the
miners parse and extract the expected next hop from their RVT
to verify the activities, we can determine that the computation
and spatial complexities are linked, resulting in O(n), with n
nodes in the table.
Once the route has expired, the miners begin their second
objective and take on blockchain style responsibilities. Firstly,
they aggregate their results into a temporary block which is
then shared with neighbouring miners for confirmation. Once
complete, the resulting confirmed blocks are again shared
with neighbours, updating their network status. As stated
previously, we use a lightweight blockchain approach to share
blocks, which uses a custom PoW method, where miners
simply analyse the block’s contents and check if the actions are
inline with their own vision, responding if an error has been
detected. This reduces the number of exchanges needed and
makes the miners assume their work is valid if no response
is received, disseminating thereafter. In this case, two data
structures are explored, increasing the structural complexity to O(m × n), with m entries in the received block. However, with a worst-case scenario of m = n, we can deduce the computational complexity to be O(n²).
_C. Threat Model_
**Routing threat. A malicious node can either simply destroy**
a packet, or send it elsewhere [17]. In the first case, be it
either a complete destruction (black hole) or selective (grey
_hole), the concerned data no longer traverses the network. In_
the second case, the malicious node can either transmit the
data to another node using another medium, called Wormhole
or redirect the packet by simply modifying its destination. In either case, the malicious node deviates from the expected behaviour and its actions are flagged as bad.
**Packet threat. By modifying a packet, a malicious node can**
change its contents. To resolve this, each miner keeps a CRC16
hash of passing packets during validation, thus detecting any
modification mid route. Furthermore, if a node re-transmits
a packet which has already been seen, known as a replay, the miners can detect an unexpected hop for the corresponding hash and label the node's behaviour as bad.
III. OUR CONTRIBUTION: AODV-MINER
This section introduces our consensus-based reputation
module, illustrated with AODV, named AODV-Miner.
_A. Node Reputation_
A node’s reputation is calculated based upon their previous
actions. If a node acts as expected, by routing a valid
packet towards the correct next hop, it is considered to
have performed a ”good” action. Any other action taken is
considered to be malicious and flagged as ”bad”. By keeping
a record of all actions taken by a single node, it is possible to
determine their reliability. We define Sgoodn and Sbadn as the
sum of good and bad actions respectively for node n with Wn
the action window size, i.e. the number of previous actions
taken into account. By varying this value, we can change its
precision, limiting it to the most recent actions, or opening it
up to a larger portion of node’s history.
$$S_{good_n} = \sum_{i=1}^{W_n} \text{good\_actions}_{n_i} \quad (1) \qquad S_{bad_n} = \sum_{i=1}^{W_n} \text{bad\_actions}_{n_i} \quad (2)$$

For a specific node, the reputation, $R_n \in [0, 1]$, is defined as a sigmoid function whose exponent $\delta_n$ corresponds to the weighted relation between $S_{good_n}$ and $S_{bad_n}$, with the ratio term lying in $[-1, 1]$:

$$R_n = \frac{1}{1 + e^{-\delta_n}} \quad (3) \qquad \delta_n = \beta \times \frac{S_{good_n} - \alpha \times S_{bad_n}}{S_{good_n} + \alpha \times S_{bad_n}} \quad (4)$$

where β = 8 is the sensitivity factor influencing the sigmoid function as in [15] and α the weight of malicious actions. By adjusting the value of α, we modify the severity of bad actions upon the reputation, as shown in Fig. 1a. We can see that the value of α influences the reputation, illustrating that the higher the value, the more unforgiving the network becomes.
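To make the reputation computation concrete, here is a minimal Python sketch of Eqs. (1)–(4); it is our own illustrative reimplementation, not the authors' code, and the function and variable names are our own.

```python
import math

def reputation(actions: list[bool], alpha: float = 2.0, beta: float = 8.0,
               window: int = 5) -> float:
    """Compute a node's reputation from its last `window` actions (Eqs. 1-4).

    actions: history of actions, True for "good", False for "bad".
    alpha: weight of malicious actions; beta: sigmoid sensitivity factor.
    """
    recent = actions[-window:]                      # action window W_n
    s_good = sum(1 for a in recent if a)            # Eq. (1)
    s_bad = sum(1 for a in recent if not a)         # Eq. (2)
    if s_good + alpha * s_bad == 0:
        return 0.5                                  # no history: neutral
    delta = beta * (s_good - alpha * s_bad) / (s_good + alpha * s_bad)  # Eq. (4)
    return 1.0 / (1.0 + math.exp(-delta))           # Eq. (3), sigmoid

# A node with 3 good and 2 bad recent actions, alpha = 2 -> roughly 0.24
print(round(reputation([True, True, False, True, False]), 3))
```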
_1) Link Cost:_ To identify the best route, AODV uses a hop counter which is incremented at each hop. In a similar fashion to [15], we replace the hop count with a different metric called _link cost_, corresponding to the network "cost" of using a specific node based upon its reputation. This method allows us to differentiate between good and bad nodes, where a low reputation incurs a higher cost. When an RREQ or RREP packet is received, the node calculates the cost of the link between itself and the transmitter. By keeping the same base behaviour of selecting the lowest hop count, we encourage the network to select the lowest link cost, thus involving as few malicious nodes as possible and increasing route integrity at the potential cost of longer routes. We define $C_n$ as the cost of the link between $n$ and its neighbours:

$$C_n = \lfloor (1 - R_{n_t}) \times (C_{max} - (C_{min} - 1)) + C_{min} \rfloor \quad (5)$$
-----
1
0.75
5
4
1
0.75
0.5
0.25
3
2
0.5
0.25
|Col1|α = 0.5 α = 1 α = 2 α = 5 α = 10|
|---|---|
|||
|||
|||
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|Col8|Col9|Col10|Col11|Col12|Col13|
|---|---|---|---|---|---|---|---|---|---|---|---|---|
||||||||||||||
||||||||||||||
||||||||||||||
||||||||||||||
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|Col8|Col9|Col10|Col11|Col12|
|---|---|---|---|---|---|---|---|---|---|---|---|
|||||||||||||
||||Exponential Decay Linear Decay Static Decay = 0.1 Static Decay = 0.2 Static Decay = 0.5|||||||||
|||||||||||||
0 5 10 15 20 25 30 35 40 45 50 55 60
Time Index (minutes)
0
0% 25% 50% 75% 100%
Malicious Activity
1 _α = 2_
_α = 5_
_α = 10_
0
0% 25% 50% 75% 100%
Malicious Activity
(b) Link Cost evolution
0
(c) Reputation Decay
(a) Reputation evolution
Fig. 1: Impact of α on reputation and link cost, and reputation decay with a half life of 15min
where Rnt is the reputation of node n at time t. Since Rnt is
normalised between 0 and 1, we can scale the resulting cost
by defining minimum and maximum values, Cmin and Cmax.
We use Cmin = 1 meaning that even using a trustworthy
node possesses a cost. Furthermore, the resulting cost is then
reduced to the greatest natural number less than or equal to
the calculated value. With a maximum value of 255, we can
calculate the maximum possible cost based upon the number
of potential nodes in the network:
$$C_{max} = \frac{255}{L_{max}} - 1 + C_{min} \quad (6)$$
By decreasing Lmax (i.e. the maximum possible route length), we can increase the precision of the link cost function. For example, Lmax = 32 would result in a maximum value of 8, whereas Lmax = 64 would only allow for 4 values. The resulting graph can be observed in Fig. 1b, corresponding to the link cost of Fig. 1a with Lmax = 64. The influence of α is once again visible: similar to Fig. 1a, the higher its value, the steeper the climb in cost.
By using a dynamic scaling function based upon the size
of the network, the precision of the link-cost metric can be
adapted to the situation. Furthermore, by associating it with
the NET_DIAMETER configuration variable used by AODV,
our system can be integrated in a seamless manner.
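A small Python sketch of the link-cost computation of Eqs. (5) and (6) may help; this is our own illustrative rendering, with the 255 ceiling, Cmin = 1 and the floor behaviour taken from the text.

```python
import math

C_MIN = 1  # even a fully trustworthy node has a cost

def c_max(l_max: int, c_min: int = C_MIN) -> int:
    """Maximum link cost for a network with routes up to l_max hops (Eq. 6)."""
    return int(255 / l_max - 1 + c_min)

def link_cost(reputation: float, l_max: int = 64, c_min: int = C_MIN) -> int:
    """Link cost of a neighbour given its current reputation (Eq. 5)."""
    cmax = c_max(l_max, c_min)
    return math.floor((1.0 - reputation) * (cmax - (c_min - 1)) + c_min)

# A neutral node (R = 0.5) vs. a distrusted node (R = 0.1) with Lmax = 64
print(link_cost(0.5), link_cost(0.1))  # costs in the range 1..4
```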
_2) Reputation Decay:_ Once a node's reputation has been calculated, it will only evolve if the node participates in another route. However, if a node possesses a reputation of 0, it may not be used again in the near future even if it is no longer malicious. In many cases, the malicious device is abandoned by the attacker once it is no longer useful, thus no longer posing a threat but remaining excluded from routing operations due to its low reputation.

To overcome this issue, we propose a new metric called _Reputation Decay_, where an unused node's reputation decays over time towards 0.5, a neutral reputation. By doing so, these abandoned or cleansed nodes can once again be used in routing, allowing them to prove their intentions. However, the decay does not modify the list of good or bad actions; it simply modifies the calculated reputation, making it easier to reincorporate nodes without changing their history:
$$Rd_{n_t} = (t - t_{R_n}) \times \frac{\lambda}{t_{\frac{1}{2}R}} \quad (7) \qquad R_{n_t} = R_n - Rd_{n_t} \quad (8)$$

where $Rd_{n_t}$ is the reputation decay of node $n$ at time $t$, $\lambda$ the decay factor, $t_{\frac{1}{2}R}$ the half life of the reputation and $R_{n_t}$ (as seen in (5)) the reputation of $n$ at time $t$, after decay. Fig. 1c presents the different decay functions used for $\lambda$ with $t_{\frac{1}{2}R} = 15$ min. We can see that each function impacts the decay rate in a different fashion, from the classic exponential half-life to a more direct linear or static approach. We decided to use a linear decay function with $\lambda = 0.25$, meaning the reputation will return to neutral after $2 \times t_{\frac{1}{2}R}$.
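The following minimal Python sketch illustrates the linear decay of Eqs. (7) and (8); it is our own illustration, and we clamp the decayed value at the neutral reputation of 0.5, which the surrounding text implies but the equations do not state explicitly.

```python
def decayed_reputation(rep: float, t: float, t_last: float,
                       lam: float = 0.25, half_life: float = 15.0) -> float:
    """Linearly decay a reputation towards the neutral value 0.5 (Eqs. 7-8).

    rep: reputation at the last update; t, t_last: current and last-update
    times in minutes; lam: decay factor; half_life: reputation half life.
    """
    decay = (t - t_last) * (lam / half_life)   # Eq. (7)
    if rep > 0.5:
        return max(0.5, rep - decay)           # Eq. (8), clamped at neutral
    return min(0.5, rep + decay)               # low reputations decay upwards

# A black hole's reputation of 0.0 returns to neutral after 2 x 15 min
print(decayed_reputation(0.0, t=30.0, t_last=0.0))  # -> 0.5
```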
_B. RREP-2Hop_
To accurately identify good and bad behaviour, miners need
to know the next expected hop for a route. By overhearing
RREP’s transmitted between neighbours, it is possible to
construct both forwards and reverse RVTs containing the exact
sequence of hops. However, Fig.2a demonstrates the limitation
of RREPs, where an RREP from n to n 1 only informs of
_−_
the hop between them, but not the following towards n + 1.
To remedy this, we propose an update to the RREP packet
format called RREP-2Hop (Fig.3) to include the addresses of
the next hop. Fig.2b illustrates the difference where, when
compared to 2a, the hop n + 1 is known thanks to its layer 2
address. Furthermore, by also providing the layer 3 address,
2-Hop routes can be constructed if so desired. We also add a
new flag which allows the receiving node to be informed if
the 2Hop protocol is in use, allowing AODV to function with
or without this new addition.
_C. Behavioural Validation_
In our approach, any node can be a router or a miner for
a specific route. Role selection is performed during the route
discovery phase by listening for and analysing passing RREP-2Hop packets. If a received packet is destined for that node, it
is processed as normal, marking the node as a router for that
route. If not, the node checks it isn’t already a router for this
route, as routing and mining for the same route would result
in a conflict of interest. This is to reduce the risk of malicious
routing nodes injecting false information during the validation
phase, thus corrupting the reputation.
If all is well, it then extracts the different addresses from the
packet header and adds the corresponding forward and reverse
hops to the RVTs as seen in Fig.2b. By ”mining a route”, the
miners are responsible for overhearing passing data packets
and verifying both the packet’s integrity and hop. For this,
-----
[Figure 2, two panels: (a) validation with a standard RREP, where the sniffed packet only fills the reverse Route Verification Table and the forward next hop remains unverified; (b) validation with RREP-2Hop, where both the forward and reverse Route Verification Tables can be completed.]

Fig. 2: Illustration of the need for RREP-2Hop

[Figure 3 shows the RREP-2Hop packet structure: the standard AODV RREP fields (Type, R, A, Reserved, Prefix Sz, Hop Count, Destination IP Address, Destination Sequence Number, Originator IP Address, Lifetime) extended with a 2Hop/Miner flag (M) and two optional fields, Next Hop IP Address and Next Hop MAC Address.]

Fig. 3: RREP-2Hop packet structure
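As an illustration of the extended packet, here is a hypothetical Python rendering of the RREP-2Hop fields; the field names follow Fig. 3, but the types, example values and encoding are a sketch of our own, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class Rrep2Hop:
    """Sketch of the RREP-2Hop packet of Fig. 3: the standard AODV RREP
    fields plus the 2Hop flag and the optional next-hop addresses."""
    msg_type: int             # AODV message type (2 for RREP)
    repair: bool              # R flag
    ack_required: bool        # A flag
    two_hop: bool             # new M/2Hop flag signalling next-hop fields
    prefix_size: int
    hop_count: int
    destination_ip: str
    destination_seq: int
    originator_ip: str
    lifetime: int
    next_hop_ip: str | None = None    # present only when two_hop is set
    next_hop_mac: str | None = None

# Hypothetical RREP-2Hop announcing the hop after the transmitter
rrep = Rrep2Hop(2, False, False, True, 0, 1,
                "fd00::2", 7, "fd00::1", 3000,
                next_hop_ip="fd00::3", next_hop_mac="00:12:4b:00:00:01")
```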
**Algorithm 1** Route validation run at node n upon reception of pkt(ll_src, ll_dst, src, dst)
1: if new packet detected then
2:   create new buf_pkt entry with hash_pkt
3:   set buf_pkt as valid
4: else previous malicious activity detected; exit
5: end if
6: RTE = get route entry for [src → dst]
7: RVT = get validation tables from RTE for ll_src
8: if RTE and RVT both empty then
9:   ▷ No route validation table: malicious behaviour
10:  increment bad_llsrc; set buf_pkt as invalid
11: else
12:  nextHop_pkt = get the next hop from RVT
13:  if nextHop_pkt ≠ ll_dst then ▷ ll_dst is not the next expected hop: malicious behaviour
14:    increment bad_llsrc; set buf_pkt as invalid
15:  else ▷ valid behaviour
16:    increment good_llsrc
17:  end if
18: end if
each miner consults the corresponding RVT to determine the next
expected hop. However, we are only able to validate packets
originating from either end of the route and not intermediate
transmissions.
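A runnable Python sketch of Algorithm 1 follows; the data structures (`rvt` as a mapping from link-layer source to expected next hop, and the good/bad counters) are simplified stand-ins for the paper's RTE, RVT and packet buffer.

```python
def validate_hop(rvt: dict[str, str], good: dict[str, int], bad: dict[str, int],
                 ll_src: str, ll_dst: str) -> bool:
    """Sketch of Algorithm 1: check that ll_src forwarded to the expected hop.

    rvt maps each routing node to its expected next hop for this route;
    good/bad accumulate the per-node action counts used by Eqs. (1)-(2).
    """
    expected = rvt.get(ll_src)
    if expected is None:
        # No validation-table entry for this transmitter: malicious behaviour
        bad[ll_src] = bad.get(ll_src, 0) + 1
        return False
    if expected != ll_dst:
        # ll_dst is not the next expected hop: malicious behaviour
        bad[ll_src] = bad.get(ll_src, 0) + 1
        return False
    good[ll_src] = good.get(ll_src, 0) + 1  # valid behaviour
    return True

# Route A -> B -> C: B forwarding to D instead of C is flagged as bad
good, bad = {}, {}
print(validate_hop({"A": "B", "B": "C"}, good, bad, "B", "D"), bad)
```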
As packets traverse the network, the different miners
evaluate the behaviour of each routing node by performing
_Route Validation (see Alg. 1) so long as the route remains_
active. Once expired, each miner checks their Packet Buffer
for potentially dropped packets which haven’t completed all
expected hops. If drops are detected, the corresponding node’s
bad actions are updated, before the miner begins preparations
for blockchain dissemination. Before dissemination can take
place, the data must first be confirmed using a custom
consensus-based method. This method replaces the standard
_PoW used in blockchain, where instead of competing to solve_
a puzzle, the nodes simply request confirmation from their
neighbours. By using such an approach, we not only reduce
_PoW’s heavy computation, but also provide a simple method_
for sharing only valid data.
The miner, therefore, creates a temporary block containing
all calculated actions which is then broadcast up to two hops
to reach only miners who could have mined the same portion
of route (i.e. the same nodes). Upon receiving a block, each
miner proceeds with two calculations: First, they compute the
difference ratio in common nodes between the received block
and their own. If this value reaches a certain threshold (i.e
80%), the received block is considered invalid and the miner
transmits their own instead.
If, however, it is valid, then the miner determines the efficiency factor by calculating the percentage of nodes in common between the received block and its own, with $M$ the nodes mined and $B$ the nodes in the received block:

$$P_B = \frac{|M \cup B|}{|B|} \quad (9) \qquad P_M = \frac{|M \cup B|}{|M|} \quad (10)$$
If $P_M \ge P_B$, we consider $B$ to be more efficient as it
contains more nodes overall. By using the efficiency factor, we
can send as few blocks as possible, thus increasing efficiency
and reducing overhead. However, this process relies on other
miners to ”overrule” previously transmitted blocks, indicating
that they are no longer considered valid and theirs should
take its place. This serves two purposes: correcting miners and
determining the most efficient block. If, however, no response
is received, the transmitter miner considers their block valid.
They then hash the contents, including the hash of the previous
block, before adding it to the blockchain. It is then broadcast
up to two hops so all neighbouring nodes can extract the list
of actions.
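To summarise the confirmation logic, here is a small Python sketch of the difference-ratio check and the efficiency factor of Eqs. (9) and (10); the 80% threshold and the set-based node representation are illustrative assumptions, and the exact difference-ratio formula is our own guess since the paper does not define it precisely.

```python
def confirm_block(mined: set[str], received: set[str],
                  diff_threshold: float = 0.8) -> str:
    """Decide how a miner reacts to a neighbour's block (Eqs. 9-10).

    mined: nodes this miner validated; received: nodes in the received block.
    Returns 'reject', 'accept' (B is more efficient, stay silent) or
    'overrule' (our own block covers more nodes, transmit it instead).
    """
    union = mined | received
    # Difference ratio: share of the union the two blocks disagree on
    diff_ratio = len(union - (mined & received)) / len(union)
    if diff_ratio >= diff_threshold:
        return "reject"                      # block invalid, send our own
    p_b = len(union) / len(received)         # Eq. (9)
    p_m = len(union) / len(mined)            # Eq. (10)
    if p_m >= p_b:
        return "accept"                      # received block is more efficient
    return "overrule"                        # our block is more efficient

print(confirm_block({"A", "B", "C"}, {"A", "B", "C", "D"}))  # -> 'accept'
```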
_D. Implementation_
Further to the two RVTs, each node contains a Packet
_Buffer and a Node Reputation Table. The former stores the_
CRC16 hashes of passing packets during routing, with their
next expected hop. The latter contains the list of node actions
extracted from the blockchain used to calculate the reputation
with Eq. (1) - (4). In our implementation, we emulate a
lightweight blockchain, where the blocks are not stored but
-----
[Figure 4, two panels: (a) reputation over time with varying malicious activity; (b) impact of α with 25% malicious activity.]

Fig. 4: Evolution of reputation

TABLE I: Simulation Parameters

| Parameter | Value | Parameter | Value |
|---|---|---|---|
| Area | 150m × 150m | Transmission Range | 50m |
| Max length (Lmax) | 64 | Number of Nodes (N) | 30 |
| Malicious Activity | 100% | Malicious Weight (α) | 2 |
| Reputation Decay | Linear | Window Size (Wn) | 5 |
| Initial Reputation | 0.5 | Number of Simulations | 100 |
| Simulation Duration | 15 min. | | |
broadcast up to two hops, reaching only the neighbours of
the nodes contained in the block. Another change is the
redefinition of the PoW consensus-based block validation
method. This functionality is integrated directly into the
validation miners, allowing them to automatically confirm their
work, without interactions with the blockchain. As a result,
only confirmed blocks are ”inserted” into the chain, keeping
the contained information as valid as possible.
With each passing RREQ and RREP-2Hop, the receiver calculates the transmitter's reputation decay and link cost using Eqs. (8) and (5). By checking that the new cost is
higher than the previous, we can protect against potential field
overflow. By only forwarding RREQs with lower link costs,
we can propagate more reliable routes towards the destination,
which waits a certain amount of time for as many RREQs as
possible, before responding only to the most reputable path.
IV. RESULTS

AODV-Miner was implemented with Contiki-NG [18] and simulated using Cooja. The different parameters used in the simulations are presented in Table I. Each node possesses a wireless interface using the IPv6 netstack with 6LoWPAN and a non-beacon-enabled, always-on CSMA radio to reduce potential collisions. For our analysis, we consider malicious nodes that perform black hole attacks and are distributed throughout the network at random. This preliminary study allows us to validate our implementation using a simple form of attack, paving the way for more advanced attack types in the future. We also consider that this attack drops only data packets, leaving AODV or block-related traffic intact. Our analysis pits AODV-Miner against its older brother, AODV.

_A. Reputation Analysis_

Fig. 4a shows the calculated reputation based on malicious activities. With α = 2, a 25% malicious node has a reputation close to neutral, whereas higher rates of activity rapidly decrease the value. We can also see that reputations are attributed directly after the first route expires at around the 1 min. mark and stay in the same overall vicinity.

Fig. 4b extends this and illustrates the influence of α with 25% malicious activity. We can see that, conforming to our initial hypothesis, the higher the value of α, the quicker the reputation drops, and vice versa. We can, therefore, actively influence the weight of bad behaviour, instantly punishing a node for misbehaving or forgiving it quickly.

Fig. 5 presents the status of one of the simulated networks of 30 nodes after 15 min. with 25% of them acting as black holes (thick circled), superimposing node reputation and the most used route by AODV and AODV-Miner. We can see that where AODV uses the most direct route via a malicious node, AODV-Miner is able to select a clear path. We can also see that we are able to assign a bad reputation to three malicious nodes, allowing them to be avoided, whereas all nodes in the selected route have received a good reputation. Furthermore, other nodes also possess varying levels of good reputation, meaning that at some point in time they were used to route data, with all remaining nodes possessing a neutral reputation of 0.5. As a result, we can conclude that the reputation metric is crucial in our approach to limit malicious nodes from impacting routing activities.

Fig. 5: Visualisation of route reputation after 15 min with 25% malicious nodes.

_B. Route Analysis_
Fig. 6 compares the efficiency of AODV-Miner to AODV. We use the number of packets dropped (|Sent| − |Received|) to determine the network throughput, visible in Fig. 6a and 6b. We can see that the number of packets dropped is reduced by ≈48% with 10% of nodes being malicious, resulting in a clear increase in the corresponding throughput. It is also noticeable
that whatever the percentage of malicious nodes, AODV-Miner
possesses a higher throughput than AODV. It must be noted,
however, that not all drops are prevented since reputations are
computed based upon malicious activities, allowing time for
nodes to wreak havoc. Furthermore, in many cases traversing a
malicious node with a link cost of 4, still has a lower cost than
five nodes with a link cost of 1. That being said, this better
efficiency comes at a cost as confirmed by Fig.6c, where we
can see that routes are on average longer in AODV-Miner.
Another cost is related to the activity of the miners, where
sharing blocks results in an uptake in packet transmissions.
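The drop metric behind Fig. 6 can be reproduced in a few lines of Python; the packet counts below are illustrative, not the paper's measurements:

```python
def packets_dropped(sent_count, received_count):
    # Metric from the text: |Sent| - |Received|.
    return sent_count - received_count

# Illustrative numbers showing a ~48% reduction in drops at 10% malicious nodes.
aodv_drops = packets_dropped(1000, 700)        # 300 dropped by plain AODV
miner_drops = round(aodv_drops * (1 - 0.48))   # ~156 dropped by AODV-Miner
print(aodv_drops, miner_drops)
```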
Fig. 6: Routing efficiency between AODV-Miner and AODV in the presence of malicious nodes. (a) Packets dropped with 10% malicious nodes; (b) throughput with varying presence of malicious nodes; (c) average route length with 10% malicious nodes; (d) normalised overhead with 10% malicious nodes.
Fig.6d puts this into perspective by normalising the overhead
of AODV-Miner in relation to AODV. We can see that our
overhead is higher than AODV’s, confirming that there is a
compromise where the increase in security comes at a cost.
V. CONCLUSION & FUTURE WORKS
In this paper, we introduce a consensus-based reputation
metric to identify the most trustworthy route available. By
exploiting blockchain technology to disseminate the reputation
in the network, we ensure that no intruder can falsely modify
a node's reputation. Furthermore, by introducing a validation
technique based on miner consensus, we can quickly and
accurately identify both highly trustworthy nodes as well
as malicious entities. Finally, by incorporating a reputation
decay functionality, we also reduce the risk associated
with compromised trustworthy nodes as well as assure the
reintegration of sanitised nodes back into the network. By
analysing its functionalities in conjunction with a reactive
routing protocol such as AODV, we demonstrate that our
system can detect and avoid malicious devices. Through
extensive simulations pitting AODV-Miner against
routing-based threats, we have demonstrated not only
the efficiency of our approach against AODV, but also the
importance of reputation-based routing in multi-hop networks.
However, our increased efficiency comes at a cost with a
rise in packet overhead due to the lightweight implementation
of blockchain. Indeed, although our module uses as few
communications as possible for block validation, storage is
still an issue, meaning each block must be broadcast to
the other nodes. Due to the static nature of our scenarios,
broadcasting blocks up to two hops is sufficient to inform
nodes of the status of their neighbours. Extending this
preliminary analysis with variable-probability attacks,
such as grey holes, will provide more of a challenge
than the situation posed by black holes. Furthermore, since
our module was developed outside of a specific routing
protocol, it can be adapted onto other protocols for further
in-depth analysis.
ACKNOWLEDGEMENTS

This work was partially supported by a grant from CPER DATA and by the European Union's H2020 Project "CyberSANE".

REFERENCES
[1] F. Bao, I.-R. Chen, M.J. Chang, and J.-H. Cho. Hierarchical trust
management for wireless sensor networks and its applications to trust-based
routing and intrusion detection. _IEEE Transactions on Network_
_and Service Management_, 9(2):169–183, 2012.
[2] D. K. Bangotra, Y. Singh, A. Selwal, N. Kumar, and P. K. Singh. A
trust based secure intelligent opportunistic routing protocol for wireless
sensor networks. Wireless Personal Communications, pages 1–22, 2021.
[3] N. Djedjig, D. Tandjaoui, F. Medjek, and I. Romdhani. Trust-aware
and cooperative routing protocol for iot security. Journal of Information
_Security and Applications, 52:102467, 2020._
[4] J. Tang, A. Liu, M. Zhao, and T. Wang. An aggregate signature
based trust routing for data gathering in sensor networks. Security and
_Communication Networks, 2018, 2018._
[5] Weidong Fang, Wuxiong Zhang, Wei Yang, Zhannan Li, Weiwei Gao,
and Yinxuan Yang. Trust management-based and energy efficient
hierarchical routing protocol in wireless sensor networks. _Digital_
_Communications and Networks, 7(4):470–478, 2021._
[6] L. Guillaume, J. van de Sype, L. Schumacher, G. Di Stasi, and
R. Canonico. Adding reputation extensions to aodv-uu. In IEEE Symp.
_on Comm. and Vehicular Technology in the Benelux (SCVT), 2010._
[7] A. Moinet, B. Darties, and J.-L. Baril. Blockchain based
trust & authentication for decentralized sensor networks. _ArXiv,_
abs/1706.01730, 2017.
[8] NARA. Blockchain white paper. White paper, National Archives and
Records Administration, February 2019.
[9] A. M. Antonopoulos. _Mastering Bitcoin: Programming the open_
_blockchain_. O'Reilly Media, Inc., 2017.
[10] X. Li, P. Jiang, T. Chen, X. Luo, and Q. Wen. A survey on the security of
blockchain systems. Future Generation Computer Systems, 107, 2020.
[11] M. S. Ali, M. Vecchio, M. Pincheira, K. Dolui, F. Antonelli, and M. H.
Rehmani. Applications of blockchains in the internet of things: A
comprehensive survey. IEEE Com. Surveys Tutorials, 21(2), 2019.
[12] C. Machado and C. M. Westphall. Blockchain incentivized data
forwarding in manets: Strategies and challenges. _Ad Hoc Networks,_
110:102321, 2021.
[13] H. Lazrag, A. Chehri, R. Saadane, and M. D. Rahmani. A blockchain-based
approach for optimal and secure routing in wireless sensor networks and IoT.
In _Int. Conf. on Signal-Image Technology & Internet-Based Systems (SITIS)_, 2019.
[14] J. Wang, Y. Liu, S. Niu, and H. Song. Lightweight blockchain assisted
secure routing of swarm uas networking. Computer Communications,
165:131–140, 2021.
[15] M. A. A. Careem and A. Dutta. Reputation based routing in MANET
using Blockchain. In Int. Conference on COMmunication Systems
_NETworkS (COMSNETS), 2020._
[16] S. R. Das, C. E. Perkins, and E. M. Belding-Royer. Ad hoc On-Demand
Distance Vector (AODV) Routing. RFC 3561, July 2003.
[17] E. Staddon, V. Loscri, and N. Mitton. Attack categorisation for iot
applications in critical infrastructures, a survey. _Applied Sciences,_
11(16), 2021.
[18] G. Oikonomou, S. Duquennoy, A. Elsts, J. Eriksson, Y. Tanaka, and
N. Tsiftes. The contiki-ng open source operating system for next
generation IoT devices. SoftwareX, 18:101089, 2022.
|
{
"disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1109/WiMob55322.2022.9941558?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1109/WiMob55322.2022.9941558, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "GREEN",
"url": "https://hal.inria.fr/hal-03787034/file/AODV-Miner_WiMob_2022.pdf"
}
| 2,022
|
[
"JournalArticle",
"Conference"
] | true
| 2022-10-10T00:00:00
|
[] | 9,375
|
en
|
[
{
"category": "Business",
"source": "s2-fos-model"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Economics",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/0149a1792d4ed2dfb4972b5a49c089d2f0bead9e
|
[] | 0.903394
|
The Evaluation of Block Chain Technology within the Scope of Ripple and Banking Activities
|
0149a1792d4ed2dfb4972b5a49c089d2f0bead9e
|
Journal of Central Banking Theory and Practice
|
[
{
"authorId": "2082925186",
"name": "Erdogan Kaygin"
},
{
"authorId": "2059898696",
"name": "Yunus Zengin"
},
{
"authorId": "47422914",
"name": "Ethem Topçuoğlu"
},
{
"authorId": "2125741273",
"name": "Serdal Ozkes"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"J Central Bank Theory Pract"
],
"alternate_urls": null,
"id": "b665ae3f-05b5-443a-98de-ecbe4779954d",
"issn": "1800-9581",
"name": "Journal of Central Banking Theory and Practice",
"type": "journal",
"url": "http://www.degruyter.com/view/j/jcbtp"
}
|
Abstract Technological developments have always led to changes in all aspects of our lives. Crypto currency is one of those changes. As a result of those changes, thousands of currencies such as bitcoin, ripple, litecoin and ethereum have evolved and have found a use in business. The present study focuses upon Ripple and tries to explain its effects on banks and business theoretically. It has been stated that the money transfer performed through Ripple is faster and more economical when compared to present systems. Additionally, it has been realised that the present SWIFT system has been influenced by that speed and economy, and therefore taken considerable technologic steps with an effort to improve its system.
|
*UDK: 336.71:004*
*DOI: 10.2478/jcbtp-2021-0029*
*Journal of Central Banking Theory and Practice, 2021, 3, pp. 153-167*
*Received: 26 February 2020; accepted: 06 October 2020*
## Erdogan Kaygin*, Yunus Zengin**, Ethem Topcuoglu***, Serdal Ozkes****

# The Evaluation of Block Chain Technology within the Scope of Ripple and Banking Activities
**Abstract**: Technological developments have always led to changes in all aspects of our lives. Cryptocurrency is one of those changes. As a result, thousands of currencies such as Bitcoin, Ripple, Litecoin and Ethereum have evolved and found a use in business. The present study focuses upon Ripple and tries to explain its effects on banks and business theoretically. It has been stated that money transfer performed through Ripple is faster and more economical when compared to present systems. Additionally, it has been realised that the present SWIFT system has been influenced by that speed and economy, and has therefore taken considerable technological steps in an effort to improve its system.
**Keywords:** Ripple, Bitcoin, block chain, crypto currencies, banking
and Ripple, SWIFT.
**JEL Classification:** M15, M21, M48.
## **1. INTRODUCTION **
The internet, computers, mobile phones, and other technologies have a great impact on people's daily lives, and technology increasingly penetrates our lives. Technology alters existing habits over time. While reading a printed newspaper was once a tool and an indicator of cultural level in our society, today reading a printed newspaper instead of using a mobile phone means learning
** Kafkas University,*
*Kars,Turkey*
*E-mail:*
*kaygin@kafkas.edu.tr*
*** Kafkas University,*
*Kars, Turkey*
*E-mail:*
*yunuszengin@kafkas.edu.tr*
**** Kafkas University,*
*Kars,Turkey*
*E-mail:*
*ethemtopcuoglu@kafkas.edu.tr*
***** Kafkas University,*
*Kars, Turkey*
*E-mail:*
*serdalozkes@jandarma.gov.tr*
the news later rather than being up to date. This change can be observed not only in this example but also in the declining number of people who go to a pay office to pay their gas and electricity bills, a result of mobile and electronic banking; such changes are easily observed in daily life.
Business life has undergone changes, as have our daily lives. Entrepreneurs tend to open virtual stores on websites instead of physical shops or stores. With this method, no rent is paid for shops and stores, a store is not restricted to a single area, the restrictions of space and place disappear, transactions can be managed from computers or even mobile phones, and refunds can be reliably obtained from strong companies (Wong, Lau & Yip, 2020). In many societies, it has become common to purchase goods from abroad through foreign websites such as Alibaba, AliExpress, Gearbest, Geekbuying, Geek, Amazon, and eBay.
The increasing spread of electronic shopping systems has resulted in higher commissions paid to banks and in duplicated spending; secure electronic payment systems have been needed because of the slow processing speed of banks acting as third parties and incidents of stolen credit card information (Luburić, 2020). Bitcoin and blockchain technology were created in response by Satoshi Nakamoto in 2008, and the first Bitcoin transaction took place in 2009 (Gulec & Aktas, 2019). Blockchain technology has developed over time and paved the way for cryptocurrency and blockchain solutions such as Ripple, Ethereum, Litecoin, Corda, Nexledger, and Hyperledger, which use the same technology as Bitcoin, specialised according to their area of use.
## **2. CONCEPTUAL ENVIRONMENT**
Throughout the present study, the money transfer systems used in our country and in the world, Bitcoin and blockchain technology, and the negative effects of Bitcoin upon business are explained within the scope of the conceptual environment. Furthermore, the opportunities offered by Ripple, which operates in the field of banking by specialising the same technological elements as Bitcoin, are considered.
### **2.1. The Present Banking System **
There are three different practices for money transfer. The first is the remittance process, in which money is transferred from a deposit account at bank A to another account at bank A. Remittances are allowed twenty-four hours a day, and the transfer time between accounts is measured in seconds. The second is EFT (Electronic Fund Transfer), in which money is transferred from a deposit account at bank A to a deposit account at bank B in Turkey. Through this method, transactions can be performed every weekday between 8:30 a.m. and 5:30 p.m., except during bank holidays (TCMB, 2019). Compared to the remittance system, the transaction takes longer and costs more. The third is SWIFT (Society for Worldwide Interbank Financial Telecommunication), in which money is transferred from a bank account in Turkey to a bank account abroad. Transactions through this system can be performed every weekday until 5:00 p.m., except during bank holidays (Kuwait Turk, 2019), and only by authorized banks. The transaction completes within 3-4 days, varying from one bank to another (Sanlısoy & Ciloglu, 2019). The SWIFT process is more expensive than EFT and remittance. Worldwide, approximately two million SWIFT transactions are conducted daily, and over 7.8 billion yearly, across 200 regions. The value of SWIFT transactions performed in a day is above 300 billion dollars (SWIFT, 2018a).
### **2.2. Bitcoin and Block Chain Technology **
Many innovations and inventions develop by building new and better versions upon existing ones rather than inventing from nothing. Indeed, blockchain technology was built upon peer-to-peer (P2P) technology. P2P technology was used in programs such as Napster, LimeWire, and BitTorrent in the 1990s to enable videos, music, and other data to be shared without a central authority. With blockchain technology layered upon P2P, Bitcoin is shared instead of films and data. The security of the Bitcoin system is provided through cryptography, and interpersonal money transfer is performed without the need for third parties such as banks and financial institutions. This system provides the opportunity for faster and cheaper money transfer (Kaygin, Topcuoglu & Ozkes, 2018).
In the Bitcoin system, the currency is named Bitcoin and abbreviated as BTC. One BTC is divisible into 100 million smaller units called satoshis (Bonneau et al., 2015). For the purchase and sale of Bitcoin, exchanges such as
Binance, KuCoin, Bitfinex, Coinbase and Kraken exist, where transactions are conducted in exchange for dollars, euros, yen, and renminbi, whereas in Turkey there are exchanges like BtcTurk in which transactions are performed in Turkish Lira (TRY).
As no central authority exists, reliable nodes at more than one point (i.e., computer systems) are needed to maintain the system and perform transactions. Called miners, these nodes take responsibility for the mathematical calculations needed to operate the system, complete blocks, and create new bitcoins. Miners are given incentive payments so that they can cover CPU power (electricity) and other costs incurred during transactions. This incentive involves awarding 50 BTC (12.5 BTC since July 2016) to the first miner forming a successful block (Khalilov, Gündebahar, & Kurtulmuşlar, 2017).
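As a rough illustration of the "mathematical calculations" miners perform, here is a toy proof-of-work loop in Python; the difficulty and header are illustrative, and real Bitcoin mining compares a double-SHA-256 digest against a network-wide target rather than counting zero digits:

```python
import hashlib

def mine(block_header: str, difficulty: int = 4):
    # Try nonces until the SHA-256 digest starts with `difficulty` zero hex
    # digits; the first miner to succeed claims the block reward.
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_header}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

nonce, digest = mine("example-block-header")
print(nonce, digest[:16])
```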
The system continues as a chain created by blocks linked together. Approximately every ten minutes, a block with a 1 MB size limit is formed, and the network processes about 7 transactions per second (Zheng et al., 2017). Someone who wants to transfer Bitcoin digitally signs the hash (proving keys) of the previous transaction together with the public key (anonymous address) of the recipient, forming a transaction by adding these to the end of the records. The recipient can verify the signatures, ownership, and the chain via the system (Nakamoto, 2008).
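A simplified Python sketch of this signing structure follows; HMAC stands in for Bitcoin's actual ECDSA signatures, so it illustrates only the shape of the scheme (sign the previous transaction's hash plus the recipient's public key):

```python
import hashlib
import hmac

def make_transfer(prev_tx_hash: str, recipient_pubkey: str, sender_secret: bytes) -> dict:
    # Sign the hash of the previous transaction together with the recipient's
    # public key, then append the result to the chain of records.
    message = (prev_tx_hash + recipient_pubkey).encode()
    signature = hmac.new(sender_secret, message, hashlib.sha256).hexdigest()
    tx = {"prev": prev_tx_hash, "to": recipient_pubkey, "sig": signature}
    tx["hash"] = hashlib.sha256(repr(sorted(tx.items())).encode()).hexdigest()
    return tx

tx = make_transfer("a1" * 32, "recipient-pubkey", b"sender-secret")
print(tx["hash"])
```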
To operate the system described above, some critical elements are required, as follows:
*Security*: the security of the system is provided by the Secure Hash Algorithm (SHA), which condenses information into fixed-length digests with a definite algorithm so that it can be verified on demand. Bitcoin uses SHA-256, a member of the SHA-2 family. Many applications still make use of SHA-1, which was formally deprecated in 2011 by the USA National Institute of Standards and Technology (NIST). Furthermore, over 9 quintillion SHA-1 computations (9,223,372,036,854,775,808) in total were performed by the cryptology group at Google and Centrum Wiskunde & Informatica (CWI), demonstrating that SHA-1 could be broken with 6,500 years of CPU (processing unit) computation for the first phase of the attack and 110 years of GPU (graphics unit) computation for the second phase (Karakose, 2017). Breaking SHA-256, for now, is considered infeasible.
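For reference, computing a SHA-256 digest takes one line in Python; note how a one-character change in the input yields a completely different digest:

```python
import hashlib

# SHA-256, the SHA-2 family member used by Bitcoin: any change to the input
# produces an unrelated, fixed-length digest.
print(hashlib.sha256(b"pay 1 BTC to Alice").hexdigest())
print(hashlib.sha256(b"pay 2 BTC to Alice").hexdigest())
```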
-----
The Evaluation of Block Chain Technology within the Scope of Ripple and Banking Activities **157**
*Distributed ledger (public ledger)*: businesses and banks are obliged to keep accounts and records, which are generally kept and recorded at a single central point. Moreover, such records are confidential, and businesses do not want them known by anyone except stakeholders. Under blockchain technology, by contrast, records are used, kept, and seen by all nodes in the system rather than residing at a single point. It is possible to see all transaction records since January 3, 2009, when the system began to be used. These records are publicly accessible, and vendors and purchasers are viewed through anonymous names (i.e., nicknames).
*Cyber security*: while collecting records at a single centre or in one hand poses a risk of cyber attack, keeping records distributed across many nodes removes the risk of a central attack. While attacking bank and business systems that follow single, common, well-known security protocols is quite easy for cyber criminals, it seems impossible to successfully attack a system consisting of tens or perhaps hundreds of thousands of users, renewing itself every ten minutes (every few seconds for Ripple) and working instantaneously over the same record book. After a block is closed, both the transactions of the previous block and those of the new block would have to be altered before the next block is produced (i.e., within ten minutes). As stated by Nakamoto, such interference is impossible without controlling 51% of the system (Nakamoto, 2008).
In the Bitcoin system, one party gaining 51% of the network is likened to the Byzantine Generals problem, which refers to generals betraying their side during a war and working for the enemy's benefit by joining its ranks. In this respect, the Byzantine Generals problem appears when some mining nodes cooperate and gain 51% of the network (Schwartz, Youngs & Britto, 2014). The shares of mining pools in system transactions are F2Pool (18.2%), Poolin (15%), BTC.com (11.1%), and Antpool (9%) (btc.com). In light of this data, experiencing the Byzantine Generals problem may not be as difficult as expected.
*Double posting*: errors of double posting disappear because entries are made instantaneously and approved within approximately ten minutes by the majority of users. It is not possible for someone without funds to spend, nor for a payment to be recorded twice (Aggarwal et al., 2019).
*Time*: confirming the accuracy of transactions and the controls performed by banks sometimes takes hours or even days, while complete accuracy of hashes
providing confirmation in this system can be obtained within 30 minutes (Monrat, Schelén & Andersson, 2019).
The use of Bitcoin in processes such as money laundering, the illegal sale of drugs and weapons, and child pornography is quite common: 46% of transactions, performed by 26% of total Bitcoin users, are illegal (Foley, Karlsen & Putniņš, 2018). It has also been shown that purchases and sales made with Tether, a cryptocurrency, lead to speculative movements in Bitcoin prices and inflate them (Griffin & Shams, 2018).
The value of Bitcoin can rise or fall instantly, as it is not backed by any underlying value (e.g., gold or silver). One Bitcoin traded for $12 in October 2012, $266 in April 2013, $1,240 in December 2013, and $339 in April 2014 on the free market, tracing a volatile path (ECB, 2015). A Bitcoin costing $13,854 on average during January 2018 traded for $10,125 as of February 9, 2020.
States' desire to tax earnings gained through Bitcoin and to regulate these markets grows day by day, which will make maintaining the system more difficult. In particular, the presence of terrorist organisations, illegal groups, and money laundering reinforces the demand for controlling these markets. The taxation put into practice in France in 2014 and the legal arrangements made in Sweden, Germany, the USA, and Japan demonstrate this (Uzer, 2017).
Considering the matters above, the Bitcoin currency poses great risks for business and banks and is thought to bring about more harm than benefit. In this respect, the issue will be evaluated through Ripple, which provides facilities for business and banks using blockchain technology without creating problems in terms of legal procedures (Koc, 2019).
Ripple was founded by Jed McCaleb, Arthur Britto, David Schwartz, and Ryan Fugger in 2012. It provides services under the name Ripple (XRP), a cryptocurrency traded on exchanges, and RippleNet, an infrastructure supplier for financial service institutions. The XRP and RippleNet systems share the same infrastructure. Through RippleNet, contracted banks and offices in different countries around the world provide fast and safe money transfer from one point to another with the help of blockchain technology. The company has offices in San Francisco, New York, London, Sydney, India, Singapore and Luxembourg. Contrary to other cryptocurrencies, the service points, company managers, and investors in the
company are displayed transparently on the official website. While the security and distributed ledger systems are the same as Bitcoin's, Ripple differs in who keeps the distributed ledger: the company's own nodes and nodes determined in advance by a committee. In the Ripple system, because all the nodes are known, the Byzantine Generals problem is effectively solved. This list is called the UNL (Unique Node List). For transactions to be performed on the Ripple system, the transaction instructions of at least 40% of more than one hundred UNL validators should match. Matching data are carried into draft blocks, and several rounds of voting are performed for approval by the UNL nodes. When matching at the rate of 80% is achieved as a result of voting, a new block enters the distributed ledger (Ali et al., 2019). As in the Bitcoin system, transaction records are public and anonymous (Jani, 2018).
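A toy Python model of the voting step just described; the validator names and vote sets are invented, and the real XRP Ledger protocol involves multiple proposal rounds with rising thresholds:

```python
def ledger_round(candidate_txs, unl_votes, threshold=0.80):
    # A transaction enters the new ledger block only if at least `threshold`
    # of the UNL validators include it in their proposals.
    n = len(unl_votes)
    return [tx for tx in candidate_txs
            if sum(tx in votes for votes in unl_votes.values()) / n >= threshold]

votes = {"v1": {"tx1", "tx2"}, "v2": {"tx1"}, "v3": {"tx1", "tx2"},
         "v4": {"tx1"}, "v5": {"tx1"}}
print(ledger_round(["tx1", "tx2"], votes))  # ['tx1']; tx2 only reached 40%
```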
XRP is the third largest cryptocurrency after Bitcoin and Ethereum in terms of market value (Gupta & Sadoghi, 2018) and is traded under the abbreviation XRP on various crypto exchanges. Ciaian, Rajcaniova and Kancs (2018) measured the change in the values of Bitcoin and sixteen altcoins between 2013 and 2016 with an Autoregressive Distributed Lag (ARDL) model. They found that macroeconomic and financial developments did not significantly affect the value of XRP, and that changes in Bitcoin prices did not influence XRP. Fry (2018) applied a rational bubble model to cryptocurrencies and detected bubbles in Bitcoin and Ethereum but not in XRP, explaining the absence of a bubble in Ripple by its technological superiority over Bitcoin.
Per second, XRP can process 50,000 transactions, compared with 7 for Bitcoin and 14 for Ethereum (Koens & Poll, 2018). When a growing number of users is added to Bitcoin's limit of 7 transactions per second, waits and losses of time become inevitable (Monrat, Schelén & Andersson, 2019). Besides waiting 10 minutes for a transaction in Bitcoin, three further blocks are required to consider the transaction executed and definite, and six blocks to consider it irreversible; only then is the transaction definite and irrevocable. In the XRP system, this transaction is completed in only four seconds (Armknecht et al., 2015).
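A back-of-the-envelope comparison using the figures quoted above:

```python
# Finality latency implied by the text: six 10-minute blocks for Bitcoin
# versus a roughly four-second ledger close for XRP.
bitcoin_finality_s = 6 * 10 * 60
xrp_finality_s = 4
print(bitcoin_finality_s / xrp_finality_s)  # Bitcoin is ~900x slower to finality
```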
There is no central unit to appeal to when an incorrect operation is performed in Bitcoin; it is nearly impossible to get money back once it has been sent to an unintended recipient. While there is no such recourse in other
cryptocurrencies, for XRP there is a firm, Ripple Labs, in which the banks Santander and Standard Chartered have invested. The firm is headquartered in the USA, with offices in various countries. In the event of an incorrect operation, it is possible to apply to the banks working with Ripple's headquarters and offices.
Miners are not needed to operate XRP as in Bitcoin; the full supply of 100 billion XRP was created at the foundation phase (Jani, 2017). Mining is one of the most criticized aspects of Bitcoin: the electricity spent on a single transaction performed by miners equals the monthly electricity consumption of a house in the UK (Truby, 2018).
In the RippleNet system, although it is not obligatory to buy XRP for money transfer, money transfer operations are charged. With the increasing number of Bitcoin users, transfers that were free at the beginning now cost around €0.10 (Boucher, Nascimento, & Kritikos, 2017), a fee that can rise and varies with the amount of Bitcoin transferred. If demand for Bitcoin continues, the fee is expected to rise much further.
The present study focuses on banking with Ripple rather than on XRP trading. RippleNet provides fast and safe money transfer through contracted banks using blockchain technology, and is a solution partner offering benefits and opportunities for banks and business. Ripple's current customers are mostly companies and financial institutions (Xiao, Zhang, Lou, & Hou, 2020).

Ripple provides services with more than 300 institutions in forty countries, offering fast and economical money transfer to companies and institutions through contracted banks. The money transfer system is divided into two categories: the first involves members (i.e., banks and financial institutions) and the second involves users (companies and customers) (Wang et al., 2019). Some examples of the institutions involved illustrate the extent of the service provided.
An agreement was signed between Ripple and Standard Chartered (UK), National Australia Bank (Australia), Mizuho Financial Group (Japan), BMO Financial Group (Canada), Siam Commercial Bank (Thailand) and Shanghai Huarui Bank (People's Republic of China) for a pilot scheme on September 15, 2016 (Patterson, 2016). Furthermore, an agreement was made between ten financial
institutions and Ripple on April 26, 2017, involving MUFG (Japan), BBVA (Spain), SEB (Sweden), Akbank (Turkey), Axis Bank (India), YES BANK (India), SBI Remit (Japan), Cambridge Global Payments (Canada), Star One Credit Union (USA) and eZforex.com (USA). A partnership agreement covering money transfer outside card operations was signed between American Express and Ripple on November 16, 2017 (Ripple Team, 2017). Moreover, a partnership agreement was signed between Ripple and MoneyGram on January 11, 2018 (Truby, 2018). An agreement on moving to a pilot scheme was made between Ripple, the Saudi Arabian Monetary Authority (SAMA) and Kingdom of Saudi Arabia banks on February 2, 2018. SAMA is the formal central bank of the Kingdom of Saudi Arabia and the institution managing its monetary policy (Sanlısoy & Ciloglu, 2019).
## **3. CONCLUSION**
The present study is believed to contribute to understanding blockchain technology and Ripple, to improving the long waiting periods experienced during the money transfer, payment, and banking operations of businesses, to evaluating views on protection from exchange-rate risk, and to related studies (Al-Rjoub, 2021). The study used the literature review method, which helps determine the scope of research problems, develop new research topics, eliminate unproductive methods, identify possible future studies, and form an idea about the methods to be used (Gultekin & Bulut, 2017). The literature review conducted for the present study found only one study examining the relationship between SWIFT and Ripple: Qiu, Zhang and Gao (2019) suggest that new systems like Ripple will greatly change the cross-border transfer market within 5 to 10 years.
Globalisation has paved the way for the removal of borders and made it possible to reach all geographies, from China to the USA, via the internet from one's living room. While business has crossed national borders and operates online 7 days a week, 24 hours a day, the fact that EFT and SWIFT operations are performed only on weekdays between 8:00 a.m. and 5:00 p.m. harms not only domestic but also foreign trade. The mobile application developed by Ripple and Santander Bank allows sending between 10 and 10,000 pounds sterling to twenty-one countries in euros, and to the USA in dollars (Santander Bank, 2019). An increase in such applications will bring about greater
customer satisfaction for banks and the opportunity of instant transactions for businesses.
Businesses transfer and receive money across borders as a result of export and import operations, so restricting these operations to office hours affects them negatively. When a business buys something abroad, the product arrives, and payment is due, it must wait 2-4 working days. Armknecht et al. (2015) found that Ripple creates a new distributed ledger entry within a few seconds 99% of the time; in the remaining 1%, the duration ranges between 30 and 40 seconds, and it fell below twenty seconds in the first two months of 2015.
In transactions performed through RippleNet, institutions avoid the accusations of money laundering and tax evasion that accompany transactions performed through Bitcoin, so businesses and institutions are not discredited. Moreover, tax losses and retrospective taxation do not arise, since states collect taxes through the banks.
Losses that can occur between the initial price of a commercial item purchased on international markets and its price when payment is made are called exchange-rate risk. Fluctuation in foreign currency in our country is a factor affecting business negatively. In market conditions where $1 was worth ₺4.57 on July 6, 2018; ₺4.87 on July 11, 2018; ₺4.73 on July 23, 2018; and ₺5.06 on August 2, 2018, the return on trade worth ₺1 million or $1 million changes daily. Whereas ₺1 million in a tradesman's pocket was worth $218,819 on July 6, 2018, it was worth $205,339 on July 11, 2018 (doviz.com). Because fast money transfer through Ripple means businesses receive money instantly rather than waiting, they are not exposed to exchange-rate risk. Likewise, banks will minimise customer objections and customer loss, and remove the cost of supplementary staff for transaction follow-up.
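The paper's own figures can be checked with a short calculation:

```python
# Dollar value of 1,000,000 TRY at the quoted USD/TRY rates.
for date, rate in [("2018-07-06", 4.57), ("2018-07-11", 4.87)]:
    print(date, round(1_000_000 / rate), "USD")
# 2018-07-06 218819 USD; 2018-07-11 205339 USD (a ~6% loss in five days)
```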
The double accounting of one transfer or expenditure arising from the banking system is called double posting. The distributed ledger developed by Ripple prevents double posting. Furthermore, the distributed ledger removes the need for businesses and banks to investigate how money has been spent. Since the system also performs reconciliation through the ledger, the time, expenditure, and workforce spent on interbank reconciliation will be reduced, benefiting banks and businesses alike.
Using the Ripple transfer system in international payments will enable more affordable (approximately 60% cheaper) and faster money transfer through direct transactions (Ripple Team, 2017). Furthermore, Ripple's idea of using blockchain for interbank money transfer has influenced other businesses and institutions. For instance, SWIFT has trialled money transfer using Hyperledger blockchain technology with the participation of 34 banks (SWIFT, 2018b).
The system formed by Ripple is a turning point for banking operations. However, transfers from one country to another are limited by the low number of RippleNet members. Even where a country has the system, the number of participating banks and financial institutions is limited; in Turkey, for example, the system is used only by Akbank. For customers who do not have an account at Akbank, the system may not seem practical or applicable.
For business, not only Ripple but also blockchain technologies enabling different solutions are available. Handling the containers used by Maersk with blockchain technology achieves a $300 saving per container (Diordiiev, 2018). The agreement made between Maersk and IBM on blockchain produced the product named TradeLens. According to IBM (IBM, 2018), TradeLens has reduced the packaging costs of products carried by Maersk ships operating on US routes by 40% and generated thousands of dollars in profit.
Blockchain technology is an innovation presenting many opportunities and capabilities together, with diverse applications for business and banks. If businesses focus on blockchain technology instead of cryptocurrencies, which are not backed by any authority and have unstable market prices, and banks find solutions appropriate to themselves, profitability will increase, as in the Maersk example, and competitiveness will be enhanced through savings in time and workforce.
## **References **
1. Aggarwal, S., Chaudhary, R., Aujla, G.S., Kumar, N., Choo, K.-K.R. &
Zomaya, A.Y. (2019). Blockchain for smart communities: Applications,
challenges and opportunities. *Journal of Network and Computer*
*Applications*, 144, 13-48.
2. Ali, M.S., Vecchio, M., Pincheira, M., Dolui, K., Antonelli, F. & Rehmani,
M.H. (2019). Applications of blockchains in the internet of things: A
comprehensive survey. *IEEE Communications Surveys & Tutorials*, 21 (2),
1676-1717.
3. Al-Rjoub, A.M.S. (2021). A financial Stability Index for Jordan. *Journal*
*of Central Banking Theory and Practice*, 10 (2), 157-178. http://dx.doi.
org/10.2478/jcbtp-2021-0018
4. Armknecht, F., Karame, G.O., Mandal A., Youssef F. & Zenner E. (2015).
Ripple: Overview and Outlook. *In Trust and Trustworthy Computing*,
ed. Conti M., Schunter M. & Askoxylakis I., 163-180. Basel: Springer
International Publishing.
5. Bonneau, J., Miller, A., Clark, J., Narayanan, A., Kroll, J.A. & Felten,
E.W. (2015). SoK: Research perspectives and challenges for Bitcoin and
cryptocurrencies. In Proc. IEEE Symp. Secur. Privacy, May 2015, 104–121.
6. Boucher, P., Nascimento, S. & Kritikos, M. (2017). *How Blockchain Technology Could Change Our Lives*. Brussels: European Parliamentary Research Service (EPRS).
7. BTC.com (2020). Pool Distribution, https://btc.com/stats/pool (Accessed
11.10.2019).
8. Ciaian, P., Rajcaniova, M. & Kancs, A. (2018). Virtual relationships:
Short- and long-run evidence from Bitcoin and altcoin markets. *Journal of*
*International Financial Markets, Institutions and Money*, 52, 173-195.
9. Diordiiev, V. (2018). Blockchain Technology and Its Impact on Financial
and Shipping Services. *Institute for Market Problems and Economic and*
*Ecological Research of National Academy of Sciences of Ukraine*, 2 (1), 51-63.
10. Doviz.com (Noktacom Medya İnternet Hiz. San. ve Tic. A.Ş.) (2019). https://
kur.doviz.com/serbest-piyasa/amerikan-dolari#, (Accessed 11.10.2019).
11. European Central Bank (ECB) (2015). *Virtual Currency Schemes–A Further*
*Analysis*, Frankfurt.: European Central Bank (ECB).
12. Foley, S., Karlsen, J.R. & Putniņš, T.J. (2018). Sex, Drugs, and Bitcoin: How
much illegal activity is financed through cryptocurrencies?. *Review of*
*Financial Studies*, Advance online publication. http://dx.doi.org/10.2139/
ssrn.3102645
13. Fry, J. (2018). Booms, busts and heavy-tails: The story of Bitcoin and
cryptocurrency markets?. *Economics Letters*, 171, 225-229.
14. Griffin, J.M. & Shams, A. (2018). Is Bitcoin Really Un-Tethered? Advance
online publication. http://dx.doi.org/10.2139/ssrn.3195066 (Accessed
11.10.2019)
15. Gupta S. & Sadoghi M. (2018). Blockchain Transaction Processing. In
Encyclopedia of Big Data Technologies. ed. Sakr S. & Zomaya A. Basel:
Springer International Publishing AG. https://doi.org/10.1007/978-3-31963962-8_333-1
16. Gulec, T. & Aktaş, H. (2019). Kripto para birimi piyasalarında etkinliğin
uzun hafıza ve değişen varyans özelliklerinin testi yoluyla analizi. *Eskişehir*
*Osmangazi Üniversitesi İktisadi ve İdari Bilimler Dergisi*, 14 (2), 491-510.
17. Gultekin Y. & Bulut, Y. (2017). Bitcoin Ekonomisi: Bitcoin Eko-Sisteminden
doğan yeni sektörler ve analizi. *Adnan Menderes Üniversitesi SBE Dergisi*, 3
(3), 82-92.
18. IBM (2018). Trade Lens, http://newsroom.ibm.com/2018-08-09-Maerskand-IBM-Introduce-TradeLens-Blockchain-Shipping-Solution, (Accessed
11.10.2019)
19. Jani S. (2018). An Overview of Ripple Technology & Its Comparison with
Bitcoin Technology. https://www.researchgate.net/publication/322436263,
(Accessed 11.10.2019)
20. Karakose, İ.S. (2017), Elektronik Ödemelerde Blok Zinciri Sistematiği ve
Uygulamaları, Erciyes Üniversitesi Sosyal Bilimler Enstitüsü, Master Thesis,
Kayseri.
21. Kaygın, E., Topçuoğlu, E. & Ozkes, S. (2018). Bitcoin sistem ve özelliklerinin iş ahlakı kapsamında incelenmesi. *İş Ahlakı Dergisi*, 11 (2), 165-192.
22. Khalilov, K.M.C., Gündebahar, M. & Kurtulmuşlar, İ. (2017). *Bitcoin ile Dünya ve Türkiye'deki Dijital Para Çalışmaları Üzerine Bir İnceleme*. https://ab.org.tr/ab17/bildiri/100.pdf, (Accessed 11.10.2019).
23. Koc, C. (2019). Türk Ceza Kanununda Kişisel Verilerin Kaydedilmesi Sucu
TCK m 135. *Legal Hukuk Dergisi*, 17 (199), 2839-2867.
24. Koens, T. & Poll, E. (2018). What Blockchain Alternative Do You Need?, In
Data Privacy Management, Cryptocurrencies and Blockchain Technology,
ed. Alfaro, J.G., Joancomartí, J.H., Livraga, G. and Rios, R., 113-129.
Basel:Springer International Publishing.
25. Kuwait Turk (Kuveyt Türk Katılım Bankası A.Ş.) (2019). https://www.
kuveytturk.com.tr/kobi/nakit-yonetimi/odeme-yonetimi/doviz-transferiswift, (Accessed 11.10.2019).
26. Luburić, R. (2020). Crisis Prevention and the Coronavirus Pandemic as a Global and Total Risk of Our Time. *Journal of Central Banking Theory and Practice*, 10 (1), 55-74. http://dx.doi.org/10.2478/jcbtp-2021-0003
27. Monrat, A.A., Schelén, O. & Andersson, K. (2019). A survey of blockchain
from the perspectives of applications, challenges, and opportunities. *IEEE*
*Access*, 7, 117134-117151.
28. Nakamoto, S. (2008), Bitcoin: A Peer-to-Peer Electronic Cash System, www.
bitcoin.org, (Accessed 11.10.2019).
29. Patterson, D. (2016), Ripple Adds Several New Banks to Global Network,
https://ripple.com/ripple_press/ripple-adds-several-new-banks-globalnetwork/, (Accessed 11.10.2019)
30. Qiu, T., Zhang, R. & Gao, Y. (2019). Ripple vs. SWIFT: Transforming cross
border remittance using blockchain technology. *Procedia Computer Science*,
147, 428-434.
31. Ripple Team (2017). American Express Introduces Blockchain-enabled,
Cross-border Payments, https://ripple.com/ripple_press/american-expressintroduces-blockchain-enabled-cross-border-payments/, (Accessed
11.10.2019).
32. Sanlisoy, S. & Ciloglu, T. (2019). An investigation on the crypto currencies
and its future. *International Journal of eBusiness and eGovernment Studies*,
11 (1), 69-88.
33. Santander Bank (2019). https://www.santander.com/csgs/
Satellite?appID=santander.wc.CFWCSancomQP01&canal=CSCORP&cid=
1278712674240&empr=CFWCSancomQP01&leng=pt_PT&pagename=CF
WCSancomQP01%2FGSNoticia%2FCFQP01_GSNoticiaDetalleImpresion_
PT48, (Accessed 11.10.2019)
34. Schwartz, D., Youngs, N. & Britto, A. (2014). The ripple protocol consensus
algorithm. https://ripple.com/files/ripple_consensus_whitepaper.pdf.
(Accessed 11.10.2019).
35. SWIFT, (2018a). Annual Review, https://www.swift.com/file/62596/
download?token=5cf760oV (Accessed 11.10.2019).
36. SWIFT, (2018b) SWIFT completes landmark DLT proof of concept, https://
www.swift.com/news-events/news/swift-completes-landmark-dlt-proof-ofconcept), (Accessed 11.10.2019).
37. TCMB (Türkiye Cumhuriyeti Merkez Bankası) (2019). Ödeme Sistemleri
http://eftemkt.tcmb.gov.tr/odemeSistemleri_TR.htm, (Accessed 11.10.2019).
38. Truby, J. (2018), Decarbonizing Bitcoin: Law and policy choices for reducing
the energy consumption of Blockchain technologies and digital currencies.
*Energy Research & Social Science*, 44, 399-410.
39. Uzer, B. (2017), *Sanal para birimleri*, Ankara: Türkiye Cumhuriyet Merkez
Bankası Ödeme Sistemleri Genel Müdürlüğü.
40. Xiao, Y., Zhang, N., Lou, W. & Hou, T.Y. (2020). A survey of distributed consensus protocols for blockchain networks. *IEEE Communications Surveys & Tutorials (Early Access)*, https://doi.org/10.1109/COMST.2020.2969706
41. Wang, Q., Zhu, X., Ni, Y., Gu, L. & Zhu, H. (2019). Blockchain for the IoT and industrial IoT: A review. *Internet of Things*, Available Online: https://doi.org/10.1016/j.iot.2019.100081
42. Wong, T.L., Lau, W.Y. & Yip, T.M. (2020). Cashless Payments and Economic
Growth: Evidence from Selected OECD. Countries. *Journal of Central*
*Banking Theory and Practice*, 9 (SI), 189-213. http://dx.doi.org/10.2478/
jcbtp-2020-0028
43. Zheng Z., Xie S., Dai, H., Chen, X. & Wang, H. (2017). An Overview of
Blockchain Technology: Architecture, Consensus, and Future Trends, 2017
*IEEE 6* *[th]* *International Congress*, 557-564.
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.2478/jcbtp-2021-0029?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.2478/jcbtp-2021-0029, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "GOLD",
"url": "https://sciendo.com/pdf/10.2478/jcbtp-2021-0029"
}
| 2,021
|
[] | true
| 2021-09-01T00:00:00
|
[
{
"paperId": "c33ac1bb6f0b0542a3c43064e30a91245118bde8",
"title": "Blockchain Transaction Processing"
},
{
"paperId": "5d62512ca4512b672cd3fe77422b40a0d8317ca7",
"title": "A financial Stability Index for Jordan"
},
{
"paperId": "489716461be34b325dc42c35720f248cbba31997",
"title": "Crisis Prevention and the Coronavirus Pandemic as a Global and Total Risk of Our Time"
},
{
"paperId": "3ed0baf901d55d3533a7e358673821530f700c61",
"title": "Cashless Payments and Economic Growth: Evidence from Selected OECD Countries"
},
{
"paperId": "8c7bf247d1bfea85b68c646aa2eb786b352ee093",
"title": "Blockchain for the IoT and industrial IoT: A review"
},
{
"paperId": "91b307dab6b5da230190ae6f7e3b126daac100c0",
"title": "Is Bitcoin Really Un-Tethered?"
},
{
"paperId": "d2b5eec036d9b5094a1e242e0d9a7c3e26c00d4b",
"title": "Blockchain for smart communities: Applications, challenges and opportunities"
},
{
"paperId": "4b2b5b8252dd2bdcb93e619065501eae9efae98e",
"title": "Kripto Para Birimi Piyasalarında Etkinliğin Uzun Hafıza Ve Değişen Varyans Özelliklerinin Testi Yoluyla Analizi"
},
{
"paperId": "627341e1a5872676798c3ab56355157c1fe78bcc",
"title": "A Survey of Blockchain From the Perspectives of Applications, Challenges, and Opportunities"
},
{
"paperId": "bca90c73d32bb7cd268d983390f4846cd7aa28e1",
"title": "AN INVESTIGATION ON THE CRYPTO CURRENCIES AND ITS FUTURE"
},
{
"paperId": "20d82e2cbf460df9fd7d1b461511e729d0e54f90",
"title": "A Survey of Distributed Consensus Protocols for Blockchain Networks"
},
{
"paperId": "b58ba438747f77530ecadf3667feef34d66f29e9",
"title": "How Blockchain Technology Could Change Our Lives"
},
{
"paperId": "1fbf77970e8bb129b9597e65a58d85fb1d76e230",
"title": "Sex, Drugs, and Bitcoin: How Much Illegal Activity Is Financed Through Cryptocurrencies?"
},
{
"paperId": "15c69c4ece24b82105fc41d270d642593ce4318f",
"title": "Decarbonizing Bitcoin: Law and policy choices for reducing the energy consumption of Blockchain technologies and digital currencies"
},
{
"paperId": "e50b9eebdff480013d47eed1a84a7a580b45d465",
"title": "What Blockchain Alternative Do You Need?"
},
{
"paperId": "e538f221a3ff45df561dcef25ebed552d8a8e39a",
"title": "Booms, busts and heavy-tails: The story of Bitcoin and cryptocurrency markets?"
},
{
"paperId": "6302676e3c48181341495520efb0f859bc338fc5",
"title": "Blockchain technology and its impact on financial and shipping services"
},
{
"paperId": "8fffb3e09abeb2dc6afd5b4cf43f24b5c8e2e489",
"title": "Bitcoin Ekonomisi: Bitcoin Eko-Sisteminden Doğan Yeni Sektörler Ve Analizi"
},
{
"paperId": "ee177faa39b981d6dd21994ac33269f3298e3f68",
"title": "An Overview of Blockchain Technology: Architecture, Consensus, and Future Trends"
},
{
"paperId": "e68a72995419964d2fa42f2de34c5b1eef7cb51d",
"title": "Virtual Relationships: Short- and Long-run Evidence from BitCoin and Altcoin Markets"
},
{
"paperId": "cc9b027bd1b8c0ee854992bb56e64d72ebf3355e",
"title": "Ripple: Overview and Outlook"
},
{
"paperId": "9584863217908a530f7bc1bff2e9e19f423d8204",
"title": "SoK: Research Perspectives and Challenges for Bitcoin and Cryptocurrencies"
},
{
"paperId": "a5f7e2d583dd64dae9952ef5533c252e269a70c5",
"title": "Annual Review"
},
{
"paperId": null,
"title": "Pool Distribution"
},
{
"paperId": "3d267bbcce5a599ac9cc42964fefb40e7b49cbb1",
"title": "Applications of Blockchains in the Internet of Things: A Comprehensive Survey"
},
{
"paperId": null,
"title": "Kuwait Turk (Kuveyt Türk Katılım Bankası A.Ş.)"
},
{
"paperId": null,
"title": "Doviz.com (Noktacom Medya İnternet Hiz. San. ve Tic. A.Ş.)"
},
{
"paperId": null,
"title": "TCMB (Türkiye Cumhuriyeti Merkez Bankası)"
},
{
"paperId": null,
"title": "Türk Ceza Kanununda Kişisel Verilerin Kaydedilmesi Sucu TCK m 135"
},
{
"paperId": null,
"title": "Ödeme Sistemleri"
},
{
"paperId": "4e48e909a20869f9cde931f5b8f364c8cb11ac2c",
"title": "Ripple vs. SWIFT: Transforming Cross Border Remittance Using Blockchain Technology"
},
{
"paperId": null,
"title": "SWIFT completes landmark DLT proof of concept"
},
{
"paperId": null,
"title": "An Overview of Ripple Technology & Its Comparison with Bitcoin"
},
{
"paperId": null,
"title": "Trade Lens"
},
{
"paperId": null,
"title": "Bitcoin sistem ve özelliklerinin iş ahlakı kapsamında incelenmesi"
},
{
"paperId": null,
"title": "Elektronik Ödemelerde Blok Zinciri Sistematiği ve Uygulamaları, Erciyes Üniversitesi Sosyal Bilimler Enstitüsü, Master Thesis, Kayseri"
},
{
"paperId": null,
"title": "Ripple Team"
},
{
"paperId": null,
"title": "Sanal para birimleri , Ankara: Türkiye Cumhuriyet Merkez Bankası Ödeme Sistemleri Genel Müdürlüğü"
},
{
"paperId": null,
"title": "American Express Introduces Blockchain-enabled, Cross-border Payments"
},
{
"paperId": "ad26e43d2a03ca9f7d817907c83ba6b803b945e3",
"title": "Bitcoin ile Dünya ve Türkiye ’ deki Dijital Para Çalışmaları Üzerine Bir İnceleme Merve Can"
},
{
"paperId": null,
"title": "Ripple Adds Several New Banks to Global Network"
},
{
"paperId": null,
"title": "Virtual Currency Schemes–A Further Analysis , Frankfurt"
},
{
"paperId": "bff4ecdd2c40bb67abab8d49e99c81287a7b2810",
"title": "The Ripple Protocol Consensus Algorithm"
},
{
"paperId": "4e9ec92a90c5d571d2f1d496f8df01f0a8f38596",
"title": "Bitcoin: A Peer-to-Peer Electronic Cash System"
}
] | 9,426
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Business",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/0149bc70d355eaf564fd6bf2a25587b73c634961
|
[
"Computer Science"
] | 0.869611
|
A Comparative and Comprehensive Analysis of Smart Contract Enabled Blockchain Applications
|
0149bc70d355eaf564fd6bf2a25587b73c634961
|
International Journal on Recent and Innovation Trends in Computing and Communication
|
[
{
"authorId": "2175657136",
"name": "Vishalkumar Langaliya"
},
{
"authorId": "1500396861",
"name": "Jaypalsinh A. Gohil"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"Int J Recent Innov Trends Comput Commun"
],
"alternate_urls": null,
"id": "30f717c3-b4cb-461f-9152-25a2fc4208e8",
"issn": "2321-8169",
"name": "International Journal on Recent and Innovation Trends in Computing and Communication",
"type": "journal",
"url": "http://www.ijritcc.org/"
}
|
Blockchain is a disruptive innovation that is already reshaping corporate, social, and political connections, as well as any other form of value exchange. Again, this isn't simply a shift; it's a fast-moving phenomenon that has already begun. Top financial institutions and a large number of businesses have begun to investigate blockchain in order to cut transaction costs, speed up transaction times, reduce fraud risk, and eliminate the need for middlemen or intermediate services. Blockchain is believed to be the component that completes the Internet puzzle and makes it more open, more accessible, and more reliable. In this article, we first introduced the blockchain technology and smart contracts and their merits and demerits. Second, we present a comparative and comprehensive analysis of smart contract-enabled blockchain applications. Toward the end, we discussed the future development trends of smart contract enabled blockchain applications. This document is intended to serve as a guide and resource for future research initiatives.
|
**_ISSN: 2321-8169 Volume: 9 Issue: 9_**
**_DOI: https://doi.org/10.17762/ijritcc.v9i9.5489_**
**_Article Received: 20 June 2021 Revised: 23 July 2021 Accepted: 30 August 2021 Publication: 30 September 2021_**
# A Comparative and Comprehensive Analysis of
Smart Contract Enabled Blockchain Applications
## Vishalkumar Langaliya[1]
Research Scholar, Department of Computer Application, Marwadi University, Rajkot, Gujarat 360003, India.
https://orcid.org/0000-0001-9581-547X, vishal.langaliya@gmail.com
## Jaypalsinh A. Gohil[2]
Assistant Professor, Department of Computer Application, Marwadi University, Rajkot, Gujarat 360003, India.
jaypalsinh.gohil@marwadieducation.edu.in, https://orcid.org/0000-0003-0925-6646
**Abstract-** Blockchain is a disruptive innovation that is already reshaping corporate, social, and political connections, as well as any other form of value exchange. This is not simply a shift; it is a fast-moving phenomenon that has already begun. Top financial institutions and a large number of businesses have begun to investigate blockchain in order to cut transaction costs, speed up transaction times, reduce fraud risk, and eliminate the need for middlemen or intermediary services. Blockchain is believed to be the component that completes the Internet puzzle and makes it more open, more accessible, and more reliable. In this article, we first introduce blockchain technology and smart contracts, along with their merits and demerits. Second, we present a comparative and comprehensive analysis of smart contract-enabled blockchain applications. Toward the end, we discuss the future development trends of smart contract-enabled blockchain applications. This document is intended to serve as a guide and resource for future research initiatives.
**Keywords-** Blockchain, Smart contracts, Blockchain Applications, Comparative Analysis.
### 1. INTRODUCTION
Blockchain is a model for delivering information because it provides immediate, shareable, and completely transparent data stored on an immutable ledger that can only be read by permissioned network members. A blockchain network can track orders, payments, accounts, production, and much more. Because members share a single view of the facts, all details of a transaction are visible from beginning to end, giving greater confidence as well as new efficiencies and opportunities.
**_Blockchain[1]_** is a decentralised, immutable database that simplifies the recording of transactions and asset tracking in a corporate network. An asset can be tangible (such as a house, car, cash, or land) or intangible (intellectual property, patents, copyrights, branding). Almost anything of value may be recorded and traded on a blockchain network, which reduces risk and lowers costs for all parties involved.
**_Bitcoin[2]_** is a peer-to-peer payment system that eliminates the need for trusted third parties. Bitcoin is a decentralized cryptocurrency that is not restricted to any nation and serves as a global currency. It is decentralized in every aspect: technical, logical, and political [1].
**_Ethereum[3]_** is a piece of software that runs on a network of computers and ensures that data and small computer programmes known as smart contracts are replicated and processed across the entire network without the need for a central controller.
1. Blockchain. https://www.ibm.com/in-en/topics/what-is-blockchain
2. Bitcoin. https://bitcoin.org/en/
3. Ethereum. https://ethereum.org/
It builds on the Bitcoin blockchain principle of validating, storing, and replicating transaction data across multiple computers all over the world (hence the term "distributed ledger"). Ethereum goes a step further by running computer code on numerous computers around the globe in the same way [2].
The term was coined in the 1990s by cryptographer Nick Szabo, who defined a smart contract as "a set of promises, specified in digital form, including protocols within which the parties perform on these promises." **_Smart contracts[4]_** have grown since then, particularly since the introduction of decentralised blockchain platforms with the birth of Bitcoin in 2009 [3]. The majority of smart contracts are written in a high-level language such as Solidity. In order to run, however, they must be compiled to the EVM's low-level bytecode. Once compiled, they are deployed on the Ethereum platform using a special contract creation transaction, which is identifiable as such because it is submitted to the designated contract creation address.
**_Hyperledger[5]_** is an open source project that aims to advance blockchain technology across industries. It is a worldwide collaboration that includes leaders in banking, finance, IoT, manufacturing, supply chains, and technology. Hyperledger is hosted by the Linux Foundation, a non-profit organisation dedicated to facilitating mass innovation through open source. The Linux Foundation also facilitates collaboration and the sharing of ideas, infrastructure, and code across a global developer community.
_Fig. 1. Structure of Blockchain_
4. Solidity. https://docs.soliditylang.org/en/latest/
5. Hyperledger. https://www.hyperledger.org/
### 2. LITERATURE SURVEY
Fahim Ullah et al. [2021]: The authors employed the systematic review method to examine and analyse material published between 2000 and 2020. The literature focuses on the application of blockchain smart contracts in smart real estate and presents a conceptual framework for their implementation in smart cities that govern real estate negotiations [4]. Ten major characteristics of blockchain smart contracts are addressed in the article, organised into six tiers for smart real estate adoption. To demonstrate the development of a smart contract that may be utilised for blockchain smart contracts in real estate, the decentralised application and its interactions with the Ethereum Virtual Machine (EVM) are described. Real estate owners and users, as smart contract parties, benefit from a sophisticated design and engagement mechanism. A step-by-step approach for establishing and ending smart contracts is described, as well as a list of functions for initiating, generating, changing, or terminating smart contracts, as illustrated in the sketch below. The suggested framework is a contractual process that is more immersive, user-friendly, and visualised.
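To make the Solidity-to-EVM pipeline and the initiate/modify/terminate function list above more concrete, here is a minimal, hypothetical sketch of a real-estate deal contract. It is not the architecture from Ullah et al.; the contract name, states, and functions are illustrative assumptions. Compiling it with the Solidity compiler yields the EVM bytecode that a contract creation transaction deploys, as described earlier.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hypothetical sketch of a real-estate deal contract in the spirit of the
// initiate/modify/terminate function list above. Names, states, and fields
// are illustrative assumptions, not Ullah et al.'s actual architecture.
contract RealEstateDeal {
    enum Status { Initiated, Active, Terminated }

    address public seller;
    address public buyer;
    uint256 public price;   // agreed price in wei
    Status public status;

    event DealInitiated(address seller, address buyer, uint256 price);
    event DealModified(uint256 newPrice);
    event DealTerminated();

    // Deploying this constructor is itself the "contract creation
    // transaction" described in the text.
    constructor(address _buyer, uint256 _price) {
        seller = msg.sender;
        buyer = _buyer;
        price = _price;
        status = Status.Initiated;
        emit DealInitiated(seller, buyer, price);
    }

    // Buyer accepts by paying the agreed price; funds stay escrowed in the
    // contract (settlement and dispute handling are out of scope here).
    function accept() external payable {
        require(msg.sender == buyer && status == Status.Initiated, "not allowed");
        require(msg.value == price, "wrong amount");
        status = Status.Active;
    }

    // Seller may change the price while the deal is still pending.
    function modifyPrice(uint256 newPrice) external {
        require(msg.sender == seller && status == Status.Initiated, "not allowed");
        price = newPrice;
        emit DealModified(newPrice);
    }

    // Either party may terminate a pending deal.
    function terminate() external {
        require(msg.sender == seller || msg.sender == buyer, "not a party");
        require(status == Status.Initiated, "already settled");
        status = Status.Terminated;
        emit DealTerminated();
    }
}
```

A production contract would also need escrow release, identity checks, and dispute resolution; the sketch deliberately omits these.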
Adarsh Kumar et al. [2020]: The authors propose a smart healthcare system built on a blockchain data network and Healthcare 4.0 processes, incorporating Industry 4.0 technologies such as the Internet of Things (IoT), industrial IoT (IIoT), cognitive computing, artificial intelligence, cloud computing, fog computing, and edge computing to provide transparency, easy and faster accessibility, security, and efficiency [5]. The Ethereum network, along with associated programming languages, tools, and techniques such as Solidity, web3.js, Athena, and others, is used to construct the smart healthcare system. The study also offers a comprehensive and comparative survey of state-of-the-art blockchain-based smart healthcare systems. A simulation-optimization approach using the JaamSim simulator is proposed to improve the performance of the overall system and its subsystems. The proposed approach is tested, verified, and validated through simulation and implementation.
Mayank Raikwar et al. [2018]: An experimental prototype was built by the authors on Hyperledger Fabric, an open-source permissioned blockchain design platform [6]. They discussed the most important design requirements and design propositions, as well as how to encode various insurance procedures into smart contracts. Extensive experiments were conducted to analyse the performance of the framework and the security of the proposed design and transactions on a blockchain-enabled platform.
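Raikwar et al. encode their insurance procedures as Hyperledger Fabric chaincode; purely to keep all examples in this article in one language, the hedged sketch below expresses a single such procedure (claim filing and a settlement decision) as a Solidity contract instead. Every name here (InsuranceClaims, fileClaim, decide) is hypothetical.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Illustrative sketch of one insurance procedure as a contract. Raikwar
// et al.'s prototype uses Hyperledger Fabric chaincode; Solidity is used
// here only for consistency with the article's other examples.
contract InsuranceClaims {
    enum ClaimStatus { Filed, Approved, Rejected }

    struct Claim {
        address claimant;
        uint256 amount;     // claimed amount, units left abstract
        ClaimStatus status;
    }

    address public insurer;        // party allowed to settle claims
    uint256 public nextClaimId;
    mapping(uint256 => Claim) public claims;

    constructor() { insurer = msg.sender; }

    // Any policyholder files a claim; it starts in the Filed state.
    function fileClaim(uint256 amount) external returns (uint256 id) {
        id = nextClaimId++;
        claims[id] = Claim(msg.sender, amount, ClaimStatus.Filed);
    }

    // Only the insurer may approve or reject a pending claim, and each
    // claim can be decided exactly once.
    function decide(uint256 id, bool approve) external {
        require(msg.sender == insurer, "only insurer");
        require(claims[id].status == ClaimStatus.Filed, "already decided");
        claims[id].status = approve ? ClaimStatus.Approved : ClaimStatus.Rejected;
    }
}
```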
Hoai Luan Pham et al. [2018]: Patients, healthcare providers (such as hospitals), and healthcare professionals (doctors) formed a remote healthcare system. Sensors were used to monitor the health of patients, and this information was automatically written to the blockchain [7]. In addition, they presented a processing technique for storing medical device information efficiently and sparingly based on the patient's health status. Specifically, they filtered sensor data before deciding whether or not to send it to the blockchain. As a result, they were able to minimise the size of the blockchain and save a significant quantity of coins, improving transaction efficiency. Abnormal sensor data, however, is promptly written to the blockchain, triggering an emergency call to a doctor and hospital for prompt treatment. They tested the proposed smart contract in Ethereum's TESTRPC test environment and built the system in a real-world setting with real devices. At a small scale, this system functioned successfully.
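A hedged sketch of the on-chain half of this filtering idea follows: only readings outside an assumed normal range are committed, which keeps the chain small, while an emitted event can be watched off-chain to alert a doctor. The thresholds, field names, and gateway role are assumptions, not details taken from Pham et al.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Sketch of threshold-based recording: routine readings are filtered
// off-chain; only abnormal ones are stored, and each emits an alert
// event. Bounds and names are illustrative assumptions.
contract PatientMonitor {
    uint256 public constant LOW_BPM = 50;    // assumed lower heart-rate bound
    uint256 public constant HIGH_BPM = 120;  // assumed upper heart-rate bound

    struct Reading { uint256 bpm; uint256 timestamp; }

    // patient => history of abnormal readings only
    mapping(address => Reading[]) public abnormalReadings;

    event EmergencyAlert(address indexed patient, uint256 bpm);

    // Called by a gateway that has already filtered routine data off-chain;
    // the contract re-checks that the reading really is abnormal.
    function reportAbnormal(address patient, uint256 bpm) external {
        require(bpm < LOW_BPM || bpm > HIGH_BPM, "reading is normal");
        abnormalReadings[patient].push(Reading(bpm, block.timestamp));
        emit EmergencyAlert(patient, bpm);
    }
}
```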
Toqeer Ali et al. [2020]: The authors proposed a Transparent and Trusted Property Registration System on a permissioned blockchain, which addresses the problem that, in the existing manual process, there is no transparency regarding the integrity of data about a property; i.e., the person in charge can manipulate the information within the database and provide misinformation to the stakeholders involved [8]. They took Saudi Arabia as a use case and designed the system accordingly to move property registration onto the blockchain for the country. This study delivers a solution for governing transparency and satisfies the provision of a trusted property registration system over the blockchain for the Kingdom of Saudi Arabia. The infrastructure offers many features to the stakeholders involved in the purchasing and retailing of property. Transparency, integrity of the record, and trust are ensured via a tamper-proof ledger.
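The tamper-proof ledger idea can be sketched as a minimal register contract: once a parcel is recorded, only the on-chain owner can transfer it, and every change leaves an auditable event. Ali et al. target a permissioned blockchain; this public Solidity version, with its hypothetical registrar role and parcel-ID scheme, is an illustration only.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Minimal sketch of a tamper-evident property register. The registrar
// role and bytes32 parcel identifiers are illustrative assumptions.
contract PropertyRegister {
    address public registrar;                    // authority allowed to register
    mapping(bytes32 => address) public ownerOf;  // parcelId => current owner

    event Registered(bytes32 indexed parcelId, address owner);
    event Transferred(bytes32 indexed parcelId, address from, address to);

    constructor() { registrar = msg.sender; }

    // Only the registrar may create the first record for a parcel.
    function register(bytes32 parcelId, address owner) external {
        require(msg.sender == registrar, "only registrar");
        require(ownerOf[parcelId] == address(0), "already registered");
        ownerOf[parcelId] = owner;
        emit Registered(parcelId, owner);
    }

    // Only the recorded owner can transfer; past events form an audit trail
    // that no single person-in-charge can silently rewrite.
    function transfer(bytes32 parcelId, address to) external {
        require(msg.sender == ownerOf[parcelId], "not the owner");
        ownerOf[parcelId] = to;
        emit Transferred(parcelId, msg.sender, to);
    }
}
```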
Olawande Daramola et al. [2020] demonstrate how the Architecture Trade-off Analysis Method (ATAM) may help stakeholders in national elections assess the risks, opportunities, and challenges that a blockchain e-voting system for national elections could bring. Using South Africa as a case study, a proposed blockchain e-voting architecture was used to assist election stakeholders in reasoning about the concept of blockchain e-voting, in order to educate them on the possible hazards, security concerns, important requirement qualities, and flaws associated with deploying blockchain e-voting for national elections [9]. According to the report, blockchain e-voting can prevent security breaches and internal vote manipulation, and boost transparency.
Valentina Gatteschi et al. [2018]: The authors use blockchain to illustrate the decision-making process of actors in the insurance system, analysing its benefits and drawbacks and discussing many use examples from the insurance industry that may easily be extended to other areas [10].
Sujit Biswas et al. [2020]: They first analyse and explain how business blockchain can be effectively used in healthcare, followed by the unique requirements of a healthcare system. In the latter parts of the article, they discuss migration challenges and possible solutions, the trade-off between unified and multi-chain environments, consensus algorithms for healthcare, users and access privileges, smart contracts, and e-healthcare-specific industry regulations [11]. The goal of this paper was to show how difficult it is to establish a blockchain solution for e-healthcare systems and to explore potential alternatives.
Friðrik Þ. Hjálmarsson et al. [2018]: They propose a unique electronic voting system based on blockchain in this research paper, which tackles some of the shortcomings of existing systems and examines some of the most well-known blockchain frameworks for the purpose of building a blockchain-based e-voting system [12]. They unveiled a blockchain-based electronic voting system that makes use of smart contracts to ensure a secure and cost-effective election while also protecting voters' privacy.
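As a rough illustration of the contract-enforced rules such a system relies on, the toy ballot below enforces one vote per address and a voting deadline. It is not Hjálmarsson et al.'s design, which adds district nodes and stronger privacy protections; the candidate IDs and deadline mechanism are assumptions, and on-chain addresses here are merely pseudonymous rather than private.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Toy ballot: the contract itself enforces one-vote-per-address and a
// deadline, and the tally is publicly auditable. Real e-voting needs
// voter eligibility checks and privacy measures omitted here.
contract Ballot {
    mapping(uint8 => uint256) public tally;    // candidateId => vote count
    mapping(address => bool) public hasVoted;  // double-voting guard
    uint256 public immutable deadline;         // voting closes at this time

    constructor(uint256 votingPeriodSeconds) {
        deadline = block.timestamp + votingPeriodSeconds;
    }

    function vote(uint8 candidateId) external {
        require(block.timestamp < deadline, "voting closed");
        require(!hasVoted[msg.sender], "already voted");
        hasVoted[msg.sender] = true;
        tally[candidateId] += 1;
    }
}
```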
Tanesh Kumar et al. [2018] explore the possible applications of blockchain technology in present healthcare systems, as well as the most critical needs of such systems, such as trustless and transparent healthcare [13]. In addition, the report outlines the hurdles and roadblocks that must be overcome before blockchain technology can be successfully implemented in healthcare systems. They also introduce the smart contract for blockchain-based healthcare systems, which is critical for setting pre-defined agreements among the various players.
Ioannis Karamitsos et al. [2018]: The goal of this article is to present blockchain and smart contracts in the context of real estate. A full smart contract design is described, followed by an examination of a use case for renting residential and commercial premises [14]. They present a smart contract design technique that allows various use cases to be developed using blockchain technology. A full description of finite state functions and processes is provided for a specific use case that makes significant contributions to the real estate domain.
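The finite-state flavour of such a rental contract can be sketched as follows, assuming three states (Offered, Rented, Ended) and a fixed monthly rent. These states and function names are illustrative assumptions, not the exact state functions from Karamitsos et al.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Rental agreement as a small finite state machine. Each function is
// guarded by the state it is valid in, mirroring the state/function
// style of design described in the text. Names are assumptions.
contract RentalAgreement {
    enum State { Offered, Rented, Ended }

    address public landlord;
    address public tenant;
    uint256 public monthlyRent;  // in wei
    State public state;

    constructor(uint256 _monthlyRent) {
        landlord = msg.sender;
        monthlyRent = _monthlyRent;
        state = State.Offered;
    }

    // Offered -> Rented: tenant signs by paying the first month's rent.
    function sign() external payable {
        require(state == State.Offered, "not offered");
        require(msg.value == monthlyRent, "wrong rent");
        tenant = msg.sender;
        state = State.Rented;
        payable(landlord).transfer(msg.value);
    }

    // Valid only while Rented: tenant pays ongoing rent to the landlord.
    function payRent() external payable {
        require(state == State.Rented && msg.sender == tenant, "not tenant");
        require(msg.value == monthlyRent, "wrong rent");
        payable(landlord).transfer(msg.value);
    }

    // Rented -> Ended: either party terminates the tenancy.
    function endAgreement() external {
        require(msg.sender == landlord || msg.sender == tenant, "not a party");
        require(state == State.Rented, "not active");
        state = State.Ended;
    }
}
```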
Rohan Bennett et al. [2021]: The authors show how comparative analysis may be done using a variety of frameworks, such as business requirements adherence, technology readiness and maturity assessment, and strategic grid analysis. The findings suggest that the hybrid strategy allows for compliance with land dealing business criteria, and that proofs-of-concept are an important phase in the development process [15]. Finally, a maturity model for the use of blockchain and smart contracts in land transactions is offered.
Vinay Thakur et al. [2019]: The study highlights concerns such as nominal transparency, accountability, incoherent data sets across the several government departments pertaining to the same piece of land, and delays in the current land records management procedure, as well as how to fix these issues using blockchain technology [16]. The authors also demonstrate a system design for the deployment of a land titling system using blockchain technology, so that land titles are tamper-proof and confer legitimate and conclusive ownership rights. The research report recommends exploiting blockchain's inherent benefits, with a focus on smart contracts.
Each transaction, whether it is a property sale, an
inheritance, a court order, or a land acquisition, will be
captured and permanently recorded by the system. This
means you get near-real-time updated records with accurate
traceability and visibility into the state of your property
records.
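A minimal sketch of such an append-only title history follows, assuming a hashed survey number as the parcel key and four illustrative transaction kinds. A real deployment would restrict record() to the land authority; access control is omitted here for brevity, and nothing in this sketch is taken from Thakur et al.'s actual design.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Append-only title history: every sale, inheritance, court order, or
// acquisition is pushed onto a parcel's history, giving near-real-time
// traceability. Event kinds and the parcel-key scheme are assumptions.
contract LandTitleHistory {
    enum Kind { Sale, Inheritance, CourtOrder, Acquisition }

    struct Entry {
        Kind kind;
        address newOwner;
        uint256 timestamp;
    }

    // parcel key (e.g. a hashed survey number) => full ownership history
    mapping(bytes32 => Entry[]) private history;

    event TitleUpdated(bytes32 indexed parcel, Kind kind, address newOwner);

    // Unrestricted here for brevity; a real system would gate this on a
    // land-authority role.
    function record(bytes32 parcel, Kind kind, address newOwner) external {
        history[parcel].push(Entry(kind, newOwner, block.timestamp));
        emit TitleUpdated(parcel, kind, newOwner);
    }

    // Current owner is simply the last entry in the history.
    function currentOwner(bytes32 parcel) external view returns (address) {
        Entry[] storage h = history[parcel];
        require(h.length > 0, "no record");
        return h[h.length - 1].newOwner;
    }
}
```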
## 3. COMPARATIVE AND COMPREHENSIVE REVIEW ANALYSIS BASED ON SELECTIVE CRITERIA
|Authors|Year of Publication|Blockchain Application Domain|Problem Statement|Research Method|Solution|Whether Solution is tested?|Deployment or Implementation|Future scope|
|---|---|---|---|---|---|---|---|---|
|Fahim Ullah, Fadi Al-Turjman|2021|Real estate deals in smart cities|Blockchain smart contract adoption to manage real estate deals in smart cities|Conceptual framework|Propose a new smart contract architecture for real estate transactions|Yes|Yes|To illustrate the smart contract process in smart real estate, a practical framework in the form of a sophisticated website or app can be established.|
|Adarsh Kumar, Rajalakshmi Krishnamurthi, Anand Nayyar, Kriti Sharma, Vinay Grover and Eklas Hossain|2020|Novel smart Healthcare 4.0 processes|Create a smart healthcare system using Blockchain 3.0 and Healthcare 4.0 connectivity and interoperability|Conceptual framework|Integration and interoperability of Blockchain 3.0 and Healthcare 4.0 to create a smart healthcare system|Yes|Implemented on Ethereum using tools such as Solidity, web3.js, Athena, etc.|The proposed work will be expanded to include implementation across many blockchain networks using various tools and methodologies.|
|Mayank Raikwar, Subhra Mazumdar, Sushmita Ruj, Sourav Sen Gupta, Anupam Chattopadhyay, and Kwok-Yan Lam|2018|Insurance processes|Design an experimental prototype on Hyperledger Fabric, an open-source permissioned blockchain design framework|Design of experimental prototype|Prepare a model for a cost-effective way to process insurance-related transactions on a blockchain network|Yes|No|Each smart contract in the model has its own collection of endorsing peers, which may be extended to the transaction level to allow a separate group of endorsing peers for each transaction.|
|Hoai Luan Pham, Thi Hong Tran, Yasuhiko Nakashima|2018|Secure remote healthcare system for hospitals|Proposed a remote healthcare system using blockchain based on the Ethereum protocol|Design of experimental processing mechanism|Prepared and tested a remote healthcare system based on a smart contract on blockchain technology|Yes|Verified the proposed smart contract in the Ethereum test environment TESTRPC and implemented the system in an experimental environment with real devices.|For the suggested remote healthcare system, decentralised storage can be developed.|
|Toqeer Ali, Adnan Nadeem, Ali Alzahrani, Salman Jan|2020|Property registration system|Discover a permissioned blockchain-based transparent and trusted property registration system|Use-case-based study and design framework|For the Kingdom of Saudi Arabia, this solution governs transparency and satisfies the provision of a trusted property registration system on the blockchain. The infrastructure provides various benefits to those involved in the purchasing and selling of real estate. A tamper-proof ledger ensures openness, record integrity, and trustworthiness.|No|No|After testing, the recommended framework and system design will be implemented and improved utilising the appropriate platform.|
|Olawande Daramola, Darren Thebus|2020|E-voting system for national elections|To gain a better understanding of the risks, opportunities, and challenges involved with a blockchain e-voting system for national elections|Architecture Trade-off Analysis Method (ATAM)|Through a collaborative architectural assessment and documentation process, demonstrated how ATAM might be used to enable election stakeholders to understand the possible risks, problems, and prospects of blockchain e-voting.|No|No|This research was limited to South Africa; it could be extended in the future to various countries throughout the world.|
|Valentina Gatteschi, Fabrizio Lamberti, Claudio Demartini, Chiara Pranteda and Víctor Santamaría|2018|Insurance processes|To provide assistance to those involved in the decision-making process, and to discuss use examples from the insurance industry that might easily be applied to other domains|SWOT analysis|Outline the benefits and drawbacks, and explore specific application instances from the insurance industry.|No|No|Based on the results of the SWOT analysis, blockchain technology might be easily extended to different industries.|
|Sujit Biswas, Kashif Sharif, Fan Li, Saraju P. Mohanty|2020|E-healthcare systems|To gain a better understanding of how difficult it is to establish a blockchain solution for e-healthcare systems and to seek potential solutions|Literature survey|Examine and explain how business blockchain can be used effectively in healthcare, as well as the special needs of the healthcare sector.|No|No|The findings of the study can be applied to the development of a blockchain application.|
|Friðrik Þ. Hjálmarsson, Gunnlaugur K. Hreiðarsson, Mohammad Hamdaqa, Gísli Hjálmtýsson|2018|E-voting system|Developing an e-voting system based on the blockchain|Design of experimental framework|Proposed a blockchain-based electronic voting system that uses smart contracts to ensure that elections are secure and cost-effective while maintaining voter anonymity.|Yes|Yes|Additional measures would be required for countries of greater size to accommodate higher transaction volume per second.|
|Tanesh Kumar, Vidhya Ramani, Ijaz Ahmad, An Braeken, Erkki Harjula, Mika Ylianttila|2018|Healthcare systems|To outline the issues and roadblocks that must be overcome before blockchain technology can be successfully implemented in healthcare systems|Literature survey|Present the smart contract for blockchain-based healthcare systems, which is critical for setting pre-defined agreements among the numerous stakeholders involved.|No|No|Findings may be put into practice with the right tools.|
|Ioannis Karamitsos, Maria Papadaki, Nedaa Baker Al Barghuthi|2018|Real estate system|Offer a smart contract design process that allows for the development of various use cases using blockchain technology|Conceptual framework|A full description of finite state functions and processes is provided for a specific use case that makes significant contributions to the real estate domain.|Yes|Yes|Must evaluate the impact of various platforms such as Hyperledger Fabric.|
|Rohan Bennett, Todd Miller, Mark Pickering, Al-Karim Kara|2021|Land administration|Proposed hybrid approaches for smart contracts in land administration|Comparative analysis|A maturity model for the use of blockchain and smart contracts in land transactions.|Yes|Yes|Institutional trust, legal, and policy challenges are some of the major issues that can be addressed.|
|Vinay Thakur, M.N. Doja, Yogesh K. Dwivedi, Tanvir Ahmad, Ganesh Khadanga|2015|Land titling system|Land records on blockchain for implementation of land titling in India|Conceptual framework and design of system|Illustrates a system design for implementing a land titling system in the country utilising blockchain technology, so that land titles are tamper-proof and give authentic and conclusive ownership rights.|Yes|Yes|To combine blockchain technology with artificial intelligence (AI) in order to make the entire land management ecosystem safer, faster, more transparent, and more responsive.|
_Table 1. Comparative and Comprehensive Review Analysis Based on Selective Criteria._
## 4. CONCLUSION AND DISCUSSION
This article began by stating that the advent of "blockchain" technology prompted conceptual and design work in a variety of fields aimed at realising the earlier "smart contract" notion. The research consisted of a systematic literature review of contemporary research work conducted between 2015 and 2021. The present analysis is a comparative and comprehensive review of blockchain applications in numerous domains. According to current research, only a few application sectors have been covered by blockchain technology deployment, such as the health, insurance, e-voting, and land sectors. In the future, blockchain technology combined with smart contracts could be used in a variety of sectors that are currently untapped.
## FUNDING: This study was not funded by any organization.
## CONFLICT OF INTEREST: The authors declare that they have no conflicts of interest.
## REFERENCES
1. Satoshi Nakamoto, "Bitcoin: A Peer-to-Peer Electronic Cash System," 2008.
2. Antony Lewis, "A Gentle Introduction to Ethereum," 2016.
3. Andreas M. Antonopoulos, Gavin Wood, "What is a Smart Contract?" 2018.
4. Fahim Ullah, Fadi Al-Turjman, "A conceptual framework for blockchain smart contract adoption to manage real estate deals in smart cities," 2021.
5. Adarsh Kumar, Rajalakshmi Krishnamurthi, Anand Nayyar, Kriti Sharma, Vinay Grover and Eklas Hossain, "A Novel Smart Healthcare Design, Simulation, and Implementation Using Healthcare 4.0 Processes," 2020.
6. Mayank Raikwar, Subhra Mazumdar, Sushmita Ruj, Sourav Sen Gupta, Anupam Chattopadhyay, and Kwok-Yan Lam, "A Blockchain Framework for Insurance Processes," 2018.
7. Hoai Luan Pham, Thi Hong Tran, Yasuhiko Nakashima, "A Secure Remote Healthcare System for Hospital Using Blockchain Smart Contract," 2018.
8. Toqeer Ali, Adnan Nadeem, Ali Alzahrani, Salman Jan, "A Transparent and Trusted Property Registration System on Permissioned Blockchain," 2020.
9. Olawande Daramola, Darren Thebus, "Architecture-Centric Evaluation of Blockchain-Based Smart Contract E-Voting for National Elections," 2020.
10. Valentina Gatteschi, Fabrizio Lamberti, Claudio Demartini, Chiara Pranteda and Víctor Santamaría, "Blockchain and Smart Contracts for Insurance: Is the Technology Mature Enough?" 2018.
11. Sujit Biswas, Kashif Sharif, Fan Li, Saraju P. Mohanty, "Blockchain for E-Healthcare Systems: Easier Said Than Done," 2020.
12. Friðrik Þ. Hjálmarsson, Gunnlaugur K. Hreiðarsson, Mohammad Hamdaqa, Gísli Hjálmtýsson, "Blockchain-Based E-Voting System," 2018.
13. Tanesh Kumar, Vidhya Ramani, Ijaz Ahmad, An Braeken, Erkki Harjula, Mika Ylianttila, "Blockchain Utilization in Healthcare: Key Requirements and Challenges," 2018.
14. Ioannis Karamitsos, Maria Papadaki, Nedaa Baker Al Barghuthi, "Design of the Blockchain Smart Contract: A Use Case for Real Estate," 2018.
15. Rohan Bennett, Todd Miller, Mark Pickering, Al-Karim Kara, "Hybrid Approaches for Smart Contracts in Land Administration: Lessons from Three Blockchain Proofs-of-Concept," 2021.
16. Vinay Thakur, M.N. Doja, Yogesh K. Dwivedi, Tanvir Ahmad, Ganesh Khadanga, "Land records on Blockchain for implementation of Land Titling in India," 2015.
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.17762/ijritcc.v9i9.5489?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.17762/ijritcc.v9i9.5489, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "GOLD",
"url": "https://www.ijritcc.org/index.php/ijritcc/article/download/5489/5376"
}
| 2,021
|
[] | true
| 2021-09-30T00:00:00
|
[
{
"paperId": "b045bed810823b2667c8325e015fb1bb6b1a459f",
"title": "A conceptual framework for blockchain smart contract adoption to manage real estate deals in smart cities"
},
{
"paperId": "29bdd042b7f34b7d412a2961a0b54febf8e20d2d",
"title": "Hybrid Approaches for Smart Contracts in Land Administration: Lessons from Three Blockchain Proofs-of-Concept"
},
{
"paperId": "99df24d3f9fa3b524b2cf04a0a05f611dad42c5d",
"title": "A Novel Smart Healthcare Design, Simulation, and Implementation Using Healthcare 4.0 Processes"
},
{
"paperId": "04ea37512c74a290148693d2749ad2080feb8ecb",
"title": "Land records on Blockchain for implementation of Land Titling in India"
},
{
"paperId": "e5982a60d60d28dc6ded3c034bef7d66862714e8",
"title": "Architecture-Centric Evaluation of Blockchain-Based Smart Contract E-Voting for National Elections"
},
{
"paperId": "b7f13d6c6f8dfcc1d070230731672ca8d37e6c11",
"title": "A Transparent and Trusted Property Registration System on Permissioned Blockchain"
},
{
"paperId": "3903ace178b79f35ed08987ae0276cce6b24ccd9",
"title": "A Secure Remote Healthcare System for Hospital Using Blockchain Smart Contract"
},
{
"paperId": "dc8384ac62e420e358e3bfaf2f93dfa075ada1f4",
"title": "Blockchain Utilization in Healthcare: Key Requirements and Challenges"
},
{
"paperId": "54d50269928dafc6a0744e46044c17d973fdb01c",
"title": "Blockchain-Based E-Voting System"
},
{
"paperId": "a857db244890540325950efe1f15e3772c76c50b",
"title": "Design of the Blockchain Smart Contract: A Use Case for Real Estate"
},
{
"paperId": "a338e7181374da20ae974bbe7025bf759d3e7547",
"title": "Blockchain and Smart Contracts for Insurance: Is the Technology Mature Enough?"
},
{
"paperId": "40cbc5913249fce1d56c9558649e53074f4ccfe8",
"title": "A Blockchain Framework for Insurance Processes"
},
{
"paperId": "ccc0ba2add9ed38b735224c519707f15429c407a",
"title": "Smart contract"
},
{
"paperId": null,
"title": "Recent and Innovation Trends in Computing and Communication ISSN: 2321-8169 Volume: 9 Issue: 9 DOI: https://doi.org/10.17762/ijritcc.v9i9.5489"
},
{
"paperId": null,
"title": "A Gentle Introduction to Ethereum"
},
{
"paperId": "ecdd0f2d494ea181792ed0eb40900a5d2786f9c4",
"title": "Bitcoin : A Peer-to-Peer Electronic Cash System"
},
{
"paperId": "4e9ec92a90c5d571d2f1d496f8df01f0a8f38596",
"title": "Bitcoin: A Peer-to-Peer Electronic Cash System"
},
{
"paperId": null,
"title": "Mohanty , “ Blockchain for E - Healthcare Systems : Easier Said Than Done"
}
] | 13,749
|
en
|
[
{
"category": "Medicine",
"source": "external"
},
{
"category": "Biology",
"source": "external"
},
{
"category": "Biology",
"source": "s2-fos-model"
},
{
"category": "Medicine",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/014a35c538928a59935d1940bad737afa4159dfa
|
[
"Medicine",
"Biology"
] | 0.89658
|
Morphological and Biochemical Correlations of Abnormal Tau Filaments in Progressive Supranuclear Palsy
|
014a35c538928a59935d1940bad737afa4159dfa
|
Journal of Neuropathology and Experimental Neurology
|
[
{
"authorId": "2111133033",
"name": "Makio Takahashi"
},
{
"authorId": "4730818",
"name": "K. Weidenheim"
},
{
"authorId": "2278202",
"name": "D. Dickson"
},
{
"authorId": "1399784349",
"name": "H. Ksiȩżak-Reding"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"J Neuropathol Exp Neurol"
],
"alternate_urls": [
"http://journals.lww.com/jneuropath/pages/default.aspx"
],
"id": "a09d6660-d07d-4847-a4e7-160d8c06b0ab",
"issn": "0022-3069",
"name": "Journal of Neuropathology and Experimental Neurology",
"type": "journal",
"url": "http://www.aanp-jnen.com/jnenframes.html"
}
| null |
**Journal of Neuropathology and Experimental Neurology** Vol. 61, No. 1
Copyright © 2002 by the American Association of Neuropathologists January, 2002
pp. 33–45
# Morphological and Biochemical Correlations of Abnormal Tau Filaments in Progressive Supranuclear Palsy
MAKIO TAKAHASHI, MD, KAREN M. WEIDENHEIM, MD, DENNIS W. DICKSON, MD, AND
HANNA KSIEZAK-REDING, PHD
**Abstract.** Progressive supranuclear palsy (PSP) is characterized by specific filamentous tau inclusions present in 3 types of
cells including oligodendrocytes (coiled bodies), astrocytes (tufted astrocytes), and neurons (neurofibrillary tangles; NFTs).
To correlate the morphological features and biochemical composition of tau in the inclusions, we examined tau filament-enriched fractions isolated from selected brain regions. Frontal and cerebellar white matter manifested a predominance of
coiled bodies. The isolated fractions contained straight, 14-nm-wide filaments of relatively smooth appearance. Caudate
nucleus and motor cortex with numerous tufted astrocytes contained mostly straight, but irregular, 22-nm-wide filaments with
jagged contours. Perirhinal cortex and hippocampus, rich in NFTs, contained 22-nm-wide filaments that were twisted at 80-nm intervals. Among the regions, those with tufted astrocytes showed the most heterogeneity in the ultrastructure of filaments.
In all regions, isolated filaments were immunolabeled with PHF-1, Tau 46, and AT8. Fractions from all regions showed 2
PHF-1 immunoreactive bands of 64 and 68 kDa, while an additional band of 60 kDa was detected in NFT-enriched regions.
All fractions, to varying extents, showed Tau-1-immunoreactive bands between 45 and 64 kDa. The results indicate that the 3
types of PSP tau inclusions vary in ultrastructure, although with some overlapping features. Neuronal and glial inclusions
also vary in the biochemical profile of tau protein. These differences may depend on the metabolism of tau in the diseased
oligodendrocytes, astrocytes, and neurons.
**Key Words:** Biochemical tau mapping; Glial lesions; Paired helical filaments; Progressive supranuclear palsy; Tau inclusions; Tau phosphorylation; Ultrastructure.
INTRODUCTION
Progressive supranuclear palsy (PSP) is one of the rare
neurodegenerative disorders that is clinically characterized by supranuclear ocular palsy, pseudobulbar palsy,
parkinsonism with axial dystonia and postural instability,
and progressive subcortical dementia (1–3). The presence
of glial as well as neuronal pathology has recently been
highlighted in PSP (4–8). The pathological features have
been described as abnormal intracellular tau inclusions in
specific anatomical areas involving astrocytes, oligodendrocytes, and neurons. Astrocytic pathology is seen in 3
recognizable forms: tufted astrocytes, astrocytic plaques,
and thorn-shaped astrocytes. Specific tau-positive astrocytic inclusions in PSP are tufted astrocytes. For example, tufted astrocytes found in frontal lobe (areas 4 and
6) and putamen (6) are highly suggestive of PSP. These
are extremely rare in the basal region (temporal lobe and
insular cortex) and limbic system (amygdaloid nucleus,
cingulate gyrus, and hippocampus). On the other hand,
astrocytic plaques are rare in PSP and more common in
corticobasal degeneration (CBD) (9, 10). Thorn-shaped
From the Department of Pathology (Neuropathology) (MT, KMW,
HK-R), Montefiore Medical Center and Albert Einstein College of Medicine, Bronx, New York; Department of Pathology (DWD), Neuropathology Laboratory, Mayo Clinic, Jacksonville, Florida; Visiting scientist from the Department of Brain Pathophysiology (MT), University
of Kyoto, Graduate School of Medicine, Kyoto, Japan
Correspondence to: Dr. Makio Takahashi, D1–801, Shinsenriminami
3-7, Toyonaka, Osaka 565-0084, Japan.
Grant support: Awarded to HKR by the Society for Progressive Supranuclear Palsy and the Alzheimer’s Disease Association.
astrocytes are argyrophilic tau-positive astrocytes detected in subpial and subependymal regions of some PSP
cases (11). They resemble reactive astrocytes containing
argyrophilic-rich cytoplasm and are mostly located in the
vicinity of subpial and perivascular areas corresponding
to astrocytic end feet. An unusual type of PSP tau-positive astrocyte has also been reported, resembling the Alzheimer-type 1 glial cell commonly seen in hepatic encephalopathy and Wilson’s disease (12). Tau inclusions
seen in oligodendrocytes of PSP are described as coiled
bodies and similar structures are detected also in other
degenerations, e.g. CBD (13). Neurofibrillary tangles
(NFTs) in PSP resemble either cortical flame-shaped neuronal inclusions as seen in Alzheimer disease (AD) or
subcortical globose-shaped inclusions often referred to as
globose tangles (4, 14).
The abnormal tau inclusions are composed of aggregated and highly phosphorylated tau protein, which is a
microtubule-associated protein playing a role in microtubule assembly and stability. Normal brain tau contains
6 isoforms, with the 1:1 ratio between isoforms expressing 3 repeats (3Rtau) and 4 repeats (4Rtau) in the microtubule binding domain (15). The isoforms are generated
by the alternative splicing of a single tau gene on chromosome 17. Some of the neurodegenerative disorders
designated as tauopathies are linked to specific abnormalities in the tau gene affecting the splicing ratio of
3Rtau vs 4Rtau and binding of tau to microtubules (16).
Tauopathies are characterized by accumulation of intracellular tau inclusions in the absence of amyloid pathology and also include PSP. In selected regions of PSP
brain, 4Rtau isoforms appear to predominate, as reported at the mRNA (17) and protein (18, 19) levels. Tau gene mutations favoring 4Rtau splicing, however, have only rarely been associated with familial PSP (20). Instead, the _tau_ A0 allele may represent a genetic risk factor for PSP (21). Analysis of tau levels, phosphorylated epitopes, and tau isoform content has clarified some of the morphological and biochemical differences among tauopathies (13, 22, 23). Tau may therefore serve as a potential biomarker identifying PSP and other tauopathies.

Abnormal intracellular inclusions contain tau aggregated into filaments. Two kinds of tau filaments have been ultrastructurally demonstrated in PSP: straight filaments and paired helical filaments (PHFs), also seen as loosely intertwined paired filaments (24–26). It is unclear which kinds of filaments are present in the 3 morphologically distinct tau inclusions. It is also unclear whether the tau content of these filaments is biochemically diverse. Minimal diversity in tau content is suggested by biochemical studies of PSP brain homogenates showing a uniform composition of tau in several brain regions (27).

In the present study, we examined both the tau protein content and the ultrastructure of abnormal filaments isolated from pathologically distinct PSP brain regions. Our goals were to find correlations between the morphological appearance of abnormal tau inclusions in various types of cells and (i) the ultrastructure of tau filaments in these inclusions, and (ii) the composition of tau protein. The results of our studies suggest that the 3 kinds of PSP tau inclusions are distinct from each other in ultrastructure. Glial and neuronal inclusions also differ in the biochemical composition of tau protein. We conclude that these differences depend on the type of affected cells.

MATERIALS AND METHODS

Brain Tissue

Data on the PSP and AD patients used in the present studies are listed in Table 1. The AD patient (case 4) had a moderately advanced pathological stage of AD; the frontal lobe used in our studies had 2 to 3 NFTs per 40× field. Brains were obtained from brain banks at the Albert Einstein College of Medicine/Montefiore Medical Center (Bronx, NY) and the Mayo Clinic (Jacksonville, FL). In all cases, left hemispheres were fixed in 4% formalin and used for neuropathological examination, and right hemispheres were kept frozen at −85°C until used for biochemical and ultrastructural studies. Our PSP cases were typical and were not expected to show asymmetrical changes. It is, however, difficult to completely exclude the possibility that the burden of a particular lesion differed between the frozen and fixed tissue.

TABLE 1
Patient Data

| Case # | Diagnosis | Clinical course (yr) | Age (yr) | Sex | Postmortem interval (h) |
|---|---|---|---|---|---|
| 1 | PSP | 7 | 62 | F | 23 |
| 2 | PSP | 8 | 70 | M | 12 |
| 3 | PSP | 6 | 69 | M | 3 |
| 4 | AD | — | 72 | F | — |

Abbreviations: yr, years; h, hours.
Primary Antibodies
Tau antibodies, their characteristics, and the location of epitopes on the tau molecule are shown in Figure 1 and Table 2.
Tau antibodies obtained commercially were Tau 14 and Tau 46
(Zymed Laboratories, South San Francisco, CA), Tau-1 (Boehringer Mannheim, Indianapolis, IN), and AT100 and AT8 (Polymedco, Inc, Cortlandt Manor, NY). Other tau antibodies included PHF-1, MC-1, and CP13 donated by Peter Davies
(Albert Einstein College of Medicine, Bronx, NY), 12E8 donated by Peter Seubert (Elan Pharmaceuticals, Inc. formerly
Athena Neurosciences, Inc., South San Francisco, CA), and E10
donated by André Delacourte and Luc Buée (Inserm 422,
Lille, France). Non-tau antibodies included 2 polyclonal antibodies against glial fibrillary acidic protein (GFAP) (Sigma, St.
Louis, MO and BioGenex, San Ramon, CA).
Immunohistochemistry
Paraffin-embedded 5-µm-thick sections were obtained from
motor cortex and cerebellar white matter from all PSP brains.
In addition, sections of frontal white matter, perirhinal cortex
with amygdala, and caudate nucleus were obtained from case
1. All sections were stained with H&E, Bielschowsky, and Gallyas methods. Immunocytochemistry for tau was performed
with selected antibodies (MC-1, PHF-1, CP13, Tau-1, and AT8)
using standard methods and the Avidin-Biotin Vectastain kit
(Vector Laboratories, Inc., Burlingame, CA). Double immunostaining was performed as described (35) using pairs of antibodies GFAP/MC-1, GFAP/PHF-1 and GFAP/AT8 and the Avidin-Biotin system. Two chromogen substrates for horseradish
peroxidase included 3,3′-diaminobenzidine tetrahydrochloride
(DAB; Sigma) and SG substrate kit (Vector® SG, Vector Laboratories, Inc.). These substrates resulted in either brown (DAB)
or blue (SG) precipitates.
**Fig. 1.** Diagram of the longest tau isoform and location of epitopes.
TABLE 2
Tau Antibodies and Location of Epitopes
Antibody Dilution* Tau epitope Phosphate dependence Reference
MC-1 1:50 7–9 Conformation-dependent (28)
Tau 14 1:1000 141–178 None (29)
Tau-1 1:2000 192–199 Ser199/Ser202** (29)
AT8 1:200 202–205 Ser202/Thr205 (30)
CP13 1:100 ≈202 Ser202 (a)
AT100 1:250 212–214 Thr212/Ser214 (23)
12E8 1:200 262–356 Ser262/Ser356 (31)
E-10 1:10,000 274–283 None (32)
PHF-1 1:200 396–404 Ser396/Ser404 (33)
Tau 46 1:2000 428–441 None (34)
* Immunostaining and/or Western blotting.
** Phosphorylation inhibits the antibody binding.
(a) Peter Davies, personal communication.
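The working dilutions above imply straightforward volume arithmetic when preparing antibody solutions. As a purely illustrative aside (not part of the original methods; the helper function and example values are hypothetical), a minimal sketch of the calculation:

```python
# Illustrative helper, not from the paper: stock volume needed for a 1:dilution
# working solution, e.g. the antibody dilutions listed in Table 2.
def stock_volume_ul(final_volume_ul: float, dilution: float) -> float:
    """Return stock volume (in µl) for final_volume_ul µl of a 1:dilution solution."""
    return final_volume_ul / dilution

# 1 ml of Tau-1 at its 1:2000 Western blotting dilution requires 0.5 µl of stock.
print(stock_volume_ul(1000, 2000))  # -> 0.5
```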
Isolation of Filament-Enriched Fractions
Tau filament-enriched fractions were isolated from the same
regions of PSP brains as described for immunohistochemistry.
The amount of frozen brain tissue available ranged from as little as 0.3 g (caudate nucleus and perirhinal cortex) to 2 g or more
(motor cortex and white matter). Tau filaments (PHFs) were
also isolated from an AD brain (frontal lobe). The isolation
procedure has been described previously (36) and used with
minor modifications. In brief, brain tissue was homogenized in
isolation buffer A (1:10 w/v; 20 mM MES/NaOH, pH 6.8, 80
mM NaCl, 1 mM MgCl2, 2 mM EGTA, 0.1 mM EDTA, 0.2
mM phenylmethylsulfonyl fluoride [PMSF]) and the homogenate was centrifuged for 20 min at 27,000 × _g_. The obtained supernatant, which contained most of the soluble tau protein (nonfilamentous tau), was separated and designated as S1 fraction. The remaining pellet was suspended in buffer B (1:10, w/v; 10 mM MES/NaOH, pH 7.4, 0.8 M NaCl, 10% sucrose, 1 mM EGTA, and 0.2 mM PMSF) and centrifuged as above. The obtained supernatant was supplemented with sarcosyl (1%, w/v), incubated at 4°C overnight, and centrifuged for 2 hours (h) at 100,000 × _g_. The resulting pellet was suspended in 50 mM MES/NaOH, pH 7.2 (0.5 ml/g tissue) and designated as sarcosyl-insoluble fraction. This fraction was enriched in sarcosyl-insoluble tau filaments as examined by electron microscopy. In fractions from AD, abundant twisted filaments typical of AD-type PHFs were observed.
Western Blotting

Samples of filament-enriched fractions were mixed with Laemmli buffer to obtain final concentrations of SDS (2%), β-mercaptoethanol (2%), and Tris/HCl, pH 6.8 (62.5 mM). Samples were boiled for 5 min, then spun for 1 min at 12,000 × _g_ before separation on SDS-PAGE using 10% polyacrylamide gels. Separated proteins were electrotransferred onto nitrocellulose membranes and incubated with 5% nonfat milk in 10 mM Tris/HCl, pH 7.4, and 150 mM NaCl (TBS) to block nonspecific protein binding sites. Membranes were incubated with primary antibodies overnight at 4°C and then with secondary antibodies conjugated to horseradish peroxidase (Vector Laboratories, Burlingame, CA) for 1 h at 25°C. Both primary and secondary antibodies were diluted in 5% milk in TBS. Specific protein signals were detected with an enhanced chemiluminescence system (ECL) from Amersham Life Science (Arlington Heights, IL).

Immunogold Labeling and Electron Microscopy

Filament-enriched fractions (10–25 µl) were deposited for 5 min onto 200 mesh copper grids (Fullam, Latham, NJ) precoated with Formvar and carbon. Unfixed samples on grids were incubated for 30 min at 25°C in the blocking medium (phosphate buffered saline, pH 7.4, with 0.1% bovine serum albumin, 0.1% gelatin, and 5% horse serum), immunolabeled as described below, and then stained with 2% uranyl acetate (37). For immunogold labeling, grids were incubated for 1 h with each of the primary and secondary antibodies. Primary antibodies were 2- to 10-fold more concentrated than for Western blotting (Table 2). Secondary antibodies were conjugated to 10-nm colloidal gold particles and used at 1:25 dilution (Amersham Life Science). Samples were examined using a JEOL 100CX electron microscope.

RESULTS

Characterization of PSP Cases
All 3 PSP patients clinically presented with parkinsonism, postural instability, supranuclear gaze palsy, and
pseudobulbar palsy. These features were consistent with
PSP, but inconsistent with mixed dementia because of
fairly well-preserved cognitive functions. The diagnosis
of PSP based on the clinical history was confirmed by a
neuropathologic examination of autopsy material. All 3
cases followed the neuropathologic criteria for typical
PSP (7) in that they had an abnormal semiquantitative
distribution of NFTs and neuropil threads, especially in
the basal ganglia and brainstem, and tau-positive astrocytes in the involved areas. Rare amyloid plaques typically found in PSP (38, 39) were present in cases 1 and
3 according to the expected frequency for age.
In each brain, we identified anatomical regions containing a single predominant type of pathological tau inclusion: tufted shape inclusions in astrocytes, coiled bodies in oligodendrocytes, and NFTs in neurons. In a given
section, the type of inclusion was determined by using
biochemical and morphological criteria. Initially, to select
for tufted astrocytes, double immunocytochemistry was
performed using tau and the astrocytic cell marker, GFAP.
Co-localization of both markers confirmed that inclusions
were of astrocytic origin. The shape of the tufted astrocytes was very characteristic, resembling that described
earlier (6, 9). Tufted astrocytes exhibited no apparent cytoplasm but had tufts of argyrophilic, long, fine radiating
fibers. Tau-positive but GFAP-negative inclusions indicated cells of either neuronal or oligodendroglial origin.
To differentiate these cells, the morphological appearance
of inclusions was evaluated with Gallyas stain or tau immunohistochemistry. Globose or flame-like-shaped NFTs
were indicative of neuronal pathology. Coiled body shape
was indicative of oligodendroglia (40, 41). A small number of tufted astrocytes were also found to be immunonegative for GFAP. In these instances, the morphology of
the inclusion material was a deciding factor.
Immunohistochemistry of Tau Inclusions
_Coiled Bodies: Coiled bodies were particularly prom-_
inent in both frontal and cerebellar white matter with Gallyas stain (Fig. 2A, insert). They were immunoreactive
with tau antibodies MC-1 (Fig. 2A), CP13, and AT8. Neither tau-immunoreactive astrocytes nor tau-positive neurons could be distinguished in the white matter (Fig. 2A),
suggesting that oligodendroglial pathology was an exclusive tau pathology in these regions.
_Tufted Astrocytes: Tufted astrocytes were numerous in_
motor and premotor cortex, caudate nucleus, nucleus accumbens, and putamen, particularly as shown on sections
with Gallyas stain (Fig. 2B). In the motor cortex, the
number of tufted astrocytes varied between cases in
decreasing order (case 1 > case 3 > case 2). Tufted astrocytes were immunoreactive with tau antibodies, including MC-1 (Fig. 2C), CP13 (Fig. 2D), and AT8. In
double-stained tufted astrocytes (Fig. 2C), a strong tau
immunoreactivity was detected at the periphery of the
astrocytic processes, whereas the GFAP immunoreactivity predominated in the perinuclear region. In some cells,
the presence of nuclei was difficult to discern either due
to staining artifact or to intrinsic nuclear loss. In motor
cortex of cases 2 and 3, tufted astrocytes were interspersed with a few NFTs and coiled bodies. In all 3 PSP
cases, thorn-shaped astrocytes were absent.
_Neurofibrillary Tangles: NFTs were predominantly_
found in perirhinal cortex and anterior hippocampus. Although these NFTs were easily detected with Gallyas
stain, they were less abundant than those typically seen
in AD. Two kinds of NFTs could be distinguished: classic
flame-like NFTs seen particularly in perirhinal cortex
(Fig. 2E, F) and globose type of NFTs found mostly in
deep subcortical nuclei (Fig. 2G). Regardless of the
shape, both kinds of NFTs were immunoreactive with
PHF-1, MC-1 (Fig. 2E, G), CP13 (Fig. 2F), and AT8, but
were not immunoreactive with the Tau-1 antibody (not
shown). The absence of Tau-1 immunoreactivity suggests
that this epitope is blocked by phosphorylation as seen
in other neurodegenerative disorders. For example, Tau-1 binding is inhibited in AD, and this inhibition is considered a hallmark of neurodegeneration (42). In perirhinal cortex, in addition to NFTs, a few coiled bodies were also detected. In the vicinity of NFTs, classical senile plaques surrounded by astrocytes were interspersed (0–1 plaque/40× field), as identified using tau and thioflavin-S staining.
Ultrastructure and Labeling of Tau Filaments
The results of ultrastructural analysis are summarized
in Table 3. In the present studies, twisted filaments were
defined as those displaying periodicity in width with decisive crossovers of hemi-filaments. Straight filaments
were those lacking periodicity in width. In addition, a
third type of filament was identified. These filaments
were neither straight nor twisted and were designated as
jagged filaments (see “Tufted Astrocytes” below). The
results of immunogold labeling were compatible with immunohistochemical data (Table 3).
_Coiled Bodies: Coiled bodies were the only inclusions_
identified microscopically in frontal and cerebellar white
matter regions (Fig. 2A). Filament-enriched fractions isolated from these regions contained predominantly, but not
exclusively, straight filaments labeled at various density
with PHF-1 (Fig. 3A, C, D, F, G) and only rarely with
Tau-1 antibodies (Fig. 3B, E). These filaments were approximately 14-nm wide (n � 19), but their width varied
extensively from 7 to 20 nm, often within a single filament. The variations in width appeared random rather
than periodic, and decisive crossovers of hemi-filaments
were absent. These features were distinct from those described in AD (36). Such morphology could result from
PSP-specific or cell type-specific aggregation of tau.
Most of the filaments had a smooth surface, with an occasional small bud extending from or attached to the filament proper (Fig. 3A, B, arrows). Rarely, approximately
7- to 8-nm-wide filaments were seen in tight, parallel
bundles (Fig. 3F) or loose aggregates.
_Tufted Astrocytes: Tufted astrocytes were seen predom-_
inantly in the caudate nucleus of case 1 and motor cortices from all 3 cases (Fig. 2B–D). Filament-enriched
fractions isolated from these regions contained filaments
that were highly heterogeneous and widely varied in
width and in an overall appearance (Fig. 4). The majority
was similar to straight filaments although their surface
appeared rough and irregular and their contours were jagged. The ends of filaments were blunt or splayed. In
splayed filaments, fan-like ends consisted of distinct fibrils (Fig. 4A, B, E, asterisks) as seen at a higher magnification (Fig. 5A, C). Most of the filaments had irregular, nonperiodic changes in width, which ranged from 7
to 33 nm. An average maximal width of these filaments was approximately 22 nm (n = 22). Although changes in width were more regular in some filaments (Fig. 4A, F, G, arrowheads), these filaments lacked distinct crossovers and thus were not considered twisted. Since the filaments in tufted astrocyte-enriched fractions were neither straight nor twisted, they were regarded as a third type of filament and designated as jagged filaments. As an exception, a single 27-nm-wide filament was detected, which was regularly twisted at 62-nm intervals (Fig. 4I) and thus closely resembled PHFs of AD type. This filament might have originated from the rare NFTs in the deeper cortical neurons. The filaments in tufted astrocyte-enriched fractions were consistently labeled with PHF-1, AT8, AT100, and Tau 46, but not with Tau-1 (Fig. 4B, E, I).

**Fig. 2.** Immunohistochemistry/Gallyas stain of tau inclusions in oligodendrocytes (A), astrocytes (B–D), and neurofibrillary tangles (E–G) in paraffin sections from PSP brains (A–C, E, G: case 1; D, F: case 3). A: Frontal white matter, GFAP (blue)/MC-1 (brown) and Gallyas stain (insert). B: Motor cortex, Gallyas stain. C: Caudate, GFAP (blue)/MC-1 (brown). D: Motor cortex, CP13 (brown). E: Perirhinal cortex, MC-1 (blue)/GFAP (brown). F: Perirhinal cortex, CP13 (brown). G: Perirhinal cortex, MC-1 (brown) and Gallyas stain (insert). Tufted astrocytes are clearly demonstrated with Gallyas stain (B) and are GFAP (blue)/MC-1 (brown) double immunoreactive as seen in (C), but not in (A), which shows typical coiled bodies of oligodendrocytes (brown). Note a number of GFAP-positive (blue) and tau-negative astrocytes in (A) and (C). Flame-like neurofibrillary tangles are seen in (E) and (F), whereas typical globose neurofibrillary tangles are seen in (G). Insert in (A) has a 2-fold magnification compared to background.

TABLE 3
Summary of the Results Obtained Using 3 PSP Brains

| Brain regions | Predominant tau pathology (inclusions) | Ultrastructure of tau filaments (width ± SD) | Tau protein (Western blots): PHF-1 immunoreactivity | Tau protein (Western blots): Tau 46 immunoreactivity |
|---|---|---|---|---|
| Frontal and cerebellar white matter | Coiled bodies | Straight and smooth, 14 nm ± 3.9 (n = 19)* | 2 bands (64 and 68 kDa) | 2 bands (64 and 68 kDa) |
| Caudate and motor cortex | Tufted astrocytes | Straight and jagged, 22 nm ± 5.5 (n = 22) | 2 bands (64 and 68 kDa) | 2–6 bands (45–68 kDa), vary between cases |
| Perirhinal cortex and hippocampus | NFT | PHF-like, 22 nm ± 4.8 (n = 24) | 3 bands (60, 64, and 68 kDa), as in AD brain | Not determined |

* p < 0.001 vs other regions (Student t-test).

**Fig. 3.** Tau filaments from coiled body-predominant regions. Filament-enriched fractions were isolated from frontal white matter (A, C, D, G; case 1) and cerebellar white matter (B, E, F; cases 3, 2, and 3, respectively). Samples were immunolabeled for tau with PHF-1 (A, C, D, F, G) or Tau-1 (B, E) and 10-nm immunogold particles. Arrows in (A) and (B) indicate small buds projecting from the filament proper. Scale bar in (A) applies to all panels. JEOL 100CX.

**Fig. 4.** Tau filaments from tufted astrocyte-predominant regions. Filament-enriched fractions were isolated from motor cortex (A, B, H; case 1), (F; case 2), (C, E, G, I; case 3), and caudate (D; case 1). Samples were immunolabeled with tau antibodies and 10-nm immunogold particles as follows: PHF-1 (A, D, F, H), Tau-1 (B, E, I), AT100 (C), and Tau 46 (G). Asterisks (A, B, E) denote splayed ends of filaments. Scale bar in (A) applies to all panels. JEOL 100CX.

**Fig. 5.** Details of splayed tau filaments. The filaments are from motor cortex, cases 3 and 1 (A, C), and perirhinal cortex, case 1 (B, D). The filaments in (A) and (C) are also shown in Figure 4 (A, E). Scale bar in (C) applies to all panels. JEOL 100CX.
_Neurofibrillary Tangles: NFTs were predominantly_
seen in perirhinal cortex and hippocampus of case 1. Filament-enriched fractions from both regions contained the
majority of regularly twisted filaments with clearly defined crossovers (Fig. 6A–E). The twisted filaments were
17- to 33-nm wide (average maximal width of 22 nm; n = 24). The twisting interval ranged from 50 to 110 nm,
with an average value of 80 nm. Twisting in some exceptionally wide 27- to 33-nm filaments, however, was
uncertain (Fig. 6A, insert). The appearance of twisted filaments from perirhinal cortex (Fig. 6A, C, E) closely
resembled that of AD-PHFs (Fig. 6F). Interestingly, some
of the twisted filaments were splayed into 2 or 3 fibrils
(Fig. 5B, D). A small number of straight, 13-nm-wide
filaments was also identified (Fig. 6D). The appearance
of straight filaments was similar to that seen in frontal
and cerebellar white matter. Both twisted and straight filaments were immunolabeled with PHF-1 (Fig. 6A–D)
and AT8 (not shown), similar to those in AD (Fig. 6F).
Tau Protein: Band Pattern

In Western blotting, PHF-1 was found to be the most sensitive of the tau antibodies used. In all regions, there were 2 major PHF-1 immunoreactive polypeptides of 64 and 68 kDa, except for perirhinal cortex and hippocampus, where an additional 60 kDa polypeptide was seen (Fig. 7A–C). The 3-band, but not the 2-band, pattern was similar to that found in samples from AD (Fig. 7C). Tau 46 and E-10 immunoreactivity was examined only in selected regions (Fig. 7A, B) due to limited material. With E-10, a pattern of 2 major bands of 64 and 68 kDa was detected in all selected fractions, similar to that obtained with PHF-1. With Tau 46, besides the 64 and 68 kDa polypeptides, there were 4 additional bands detected between 45 and 60 kDa. These additional bands were particularly prominent in the motor cortex of cases 2 and 3 (Fig. 7B-MTR2 and MTR3). The 45–60 kDa bands are likely to represent Tau 46-positive tau protein lacking phosphorylation at the PHF-1 site. They are unlikely to represent tau degradation products, e.g. C-terminal fragments retaining only the Tau 46, but not the PHF-1, epitope, since both epitopes are located close to each other (Fig. 1; Table 2).

**Fig. 6.** Tau filaments from neurofibrillary tangle-predominant regions. Filament-enriched fractions were isolated from PSP perirhinal cortex (A, D, E; case 1), PSP hippocampus (B, C; case 1), and AD brain (F). Samples were immunolabeled with tau antibodies and 10-nm gold particles as follows: PHF-1 (A–D), none (E), and AT8 (F). Scale bar in (A) applies to all panels. JEOL 100CX.
Tau Protein: Variable Phosphorylation
The presence of Tau 46-positive, but PHF-1-negative,
bands in only some samples suggested that phosphorylation of PHF-tau polypeptides varied among cases and
regions. Such a possibility was confirmed by using phosphate-dependent tau antibodies, including Tau-1, which
binds only to a nonphosphorylated epitope. Consistent
with previous reports, Tau-1 showed no binding to PHF-tau polypeptides in our AD samples (Fig. 7C). In contrast, Tau-1 detected 4–6 polypeptides migrating between
45–64 kDa in most of the PSP samples (Fig. 7A–C).
These polypeptides co-migrated with Tau 46-positive
bands between 45–64 kDa but not with the 68 kDa band,
which was Tau 46-positive but Tau-1-negative. The extent of immunoreactivity with Tau-1 differed among the
regions. For example, motor cortex and hippocampus of
case 1 (Fig. 7B-MTR1 and Fig. 7C-Hipp1) showed the
least, and cerebellar white matter of case 3 (Fig. 7A-Cbl3) showed the most intense binding. Although it was
difficult to precisely estimate the extent of phosphorylation, the results suggest that the motor cortex of case 1
(Fig. 7B-MTR1) contains almost all tau phosphorylated
at the Tau-1 site, whereas the motor cortex (Fig. 7B-MTR3) and cerebellar white matter of case 3 (Fig. 7A-Cbl3) contain only a small fraction of tau phosphorylated
at this site. Further comparisons made between motor
cortex of the 2 most diverse cases (case 1 and case 3)
clearly demonstrate that the total tau content (Tau 46
binding) is lower, but the phosphorylation is higher in
case 1 than in case 3 (Fig. 8). Higher phosphorylation was particularly evident with PHF-1 and AT8 binding. A significant inhibition of Tau-1 binding also indicated higher phosphorylation at this site. With 2 other antibodies (12E8 and AT100), the difference in phosphorylation between the cases was less pronounced. In case 1, two bands of 56 and 60 kDa were Tau-1-positive but Tau 46-negative, suggesting that they may represent tau degradation products devoid of the most C-terminal region. Alternatively, the differential binding may be due to a higher binding affinity of Tau-1 than of Tau 46.

**Fig. 7.** Western blotting: all cases. Filament-enriched fractions were isolated from PSP regions (cases 1–3) enriched in coiled bodies (A), tufted astrocytes (B), and NFTs (C) and from AD brain, as indicated. Samples were immunoblotted with PHF-1, Tau-1, Tau 46, and E-10 as marked. Samples on blots were equivalent in the amount of tissue. Major PHF-tau polypeptides (arrows) immunoreactive with PHF-1 and E-10 migrated at 64 and 68 kDa (coiled bodies and tufted astrocytes) and at 60, 64, and 68 kDa (NFTs). Four major Tau 46- and Tau-1-positive bands migrated at 52–64 kDa. The 68-kDa band was also immunoreactive with Tau 46, but not with Tau-1, indicating that phosphorylation at the Tau-1 binding epitope blocked the antibody reactivity. Abbreviations: Front, frontal white matter; Cbl, cerebellar white matter; Caud, caudate nucleus; MTR, motor cortex; Perirh, perirhinal cortex; Hipp, hippocampus.

**Fig. 8.** Western blotting: motor cortex of selected PSP cases. Filament-enriched fractions were isolated from the motor cortex of case 1 (lane a) and case 3 (lane b) and immunoblotted with PHF-1, AT8, 12E8, AT100, Tau-1, and Tau 46, as indicated. Although the total PHF-tau content as estimated by Tau 46 immunoreactivity is lower in case 1 than in case 3, more protein is phosphorylated in case 1, especially at the PHF-1, AT8, and Tau-1 sites. The number of PHF-tau polypeptides detected varies depending on the antibody used, from 2 (PHF-1, AT8, 12E8, AT100) to 6 (Tau 46).
DISCUSSION
PSP appears to be unique among tauopathies by involving 3 types of cells in specific anatomical areas. Also,
each cell pathology presents with morphologically distinct tau inclusions. These unusual features of PSP were
employed in the present studies to correlate the morphology of inclusions with the tau protein content. We
conclude that the morphological heterogeneity of inclusions is accompanied by the heterogeneity of tau protein
at the ultrastructural and biochemical levels.
Filament-enriched fractions from regions with the predominant glial pathology displayed structural heterogeneity depending on whether they derived from regions
enriched in coiled bodies or tufted astrocytes. Filaments
from coiled bodies were 14-nm wide, smooth, and on
average 40% thinner than those from tufted astrocytes.
They resembled abnormal thin tubules (13–15-nm diameter) of coiled bodies described in immunoelectron
microscopy studies by Arima et al (41). In comparison,
filaments isolated from regions enriched in tufted astrocytes had uneven jagged surface and were as wide (22
nm) as PHFs in AD. Previous studies described either
15-nm-wide, straight filaments occasionally narrowing to
9 nm without evidence of periodicity (43), 15- to 20-nm-wide, straight tubular structures immunoreactive for tau
as seen in GFAP-positive cells (44), or 18–22-nm-wide,
straight filaments of tubular appearance (45). Such filaments appear to share similarity with our jagged filaments. The fine ultrastructure of jagged filaments was
uncertain. If jagged filaments were indeed made of 2 helically twisted filaments, as are those in AD, the twisting
was haphazard and lacked regular periodicity or distinct
crossovers. Furthermore, jagged filaments appeared to
consist of multiple thin fibrils arranged parallel to the
long axis, particularly prominent at splayed ends. A similar dissociation into fibrils has rarely been noted in PHFs
from AD, but is seen more frequently in filaments from
other disorders, e.g. CBD (46, 47). In studies of PSP
filaments, cross-sectional views revealed the presence of
6 or more protofilaments (45). On the other hand, it is
unlikely that jagged filaments resulted from simple pairing of straight filaments from coiled bodies. Jagged filaments were thinner than 2 straight filaments (e.g. 22 nm
vs 28 nm) and they were not smooth. Since jagged filaments resembled neither PHFs of AD nor smooth straight
filaments from coiled bodies, they were considered a
unique third type of filament in PSP. They also differed
from filaments found in other disorders involving glia,
e.g. CBD, by displaying only minimal twisting and 22-nm width rather than 29-nm width (46). The estimation
of mass per unit length of filaments using scanning transmission electron microscopy will be necessary to determine the regional and disease-specific characteristics of
PSP filaments.
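As an aside, the width separation between filament classes can be checked against the summary statistics reported in Table 3. The following minimal sketch (not the authors' analysis code; it simply re-derives one pairwise comparison from the published means, SDs, and sample sizes, assuming the classic pooled-variance Student t-test named in the Table 3 footnote) reproduces the p < 0.001 result for coiled-body versus tufted-astrocyte filament widths:

```python
# Hedged re-derivation from summary statistics only (values from Table 3);
# not code from the paper.
from scipy.stats import ttest_ind_from_stats

# Coiled-body regions: 14 nm +/- 3.9 (n = 19); tufted-astrocyte regions: 22 nm +/- 5.5 (n = 22)
t, p = ttest_ind_from_stats(mean1=14.0, std1=3.9, nobs1=19,
                            mean2=22.0, std2=5.5, nobs2=22,
                            equal_var=True)  # pooled-variance Student t-test
print(f"t = {t:.2f}, p = {p:.1e}")  # |t| is about 5.3 on 39 df; p is well below 0.001
```

The same calculation applied to the coiled-body versus NFT-region widths (22 nm ± 4.8, n = 24) likewise yields p < 0.001, consistent with the footnote.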
Both straight and jagged filaments resolved into 2
PHF-1 immunoreactive polypeptides migrating at 64 kDa
and 68 kDa. The double-band pattern was similar to that
reported by Vermersch et al (27) and others (48) for most
of PSP anatomical areas except for hippocampus and entorhinal cortex, where an additional 60 kDa band was
found. The present results indicate that besides PHF-1-positive tau species, PHF-1-negative polypeptides are
also components of straight and jagged filaments. Tau
antibodies against nonphosphorylated epitopes, e.g. Tau-1 and Tau 46, revealed up to 4 additional bands, migrating between 45 kDa and 60 kDa. The total number of
tau bands differed between cases and regions, but we
failed to demonstrate any consistent difference in the tau
content between smooth and jagged filaments. Moreover,
our preliminary analysis showed that both filaments contained similar isoforms of tau, which expressed predominantly, but not exclusively, exon 10, with or without
exon 2 and lacking exon 3 (case 1, not shown). We conclude that the diversity of tau at the PHF-1 epitope has
no apparent effect on the ultrastructure of filaments. Studies of tau assembly in vitro using nonphosphorylated recombinant tau (49) and phosphorylated species of bovine
brain tau (50) reached a similar conclusion based on the
ultrastructural similarity between assembled and authentic PHFs. Further studies will determine whether phosphorylation plays only a secondary role in the formation
of tau filaments in neurodegenerative disorders. It is still
unclear what factors are responsible for the ultrastructural
heterogeneity between smooth and jagged filaments.
Since these filaments originated in different cell types, it
is possible that the type of cell is one of the determining
factors. Both oligodendrocytes and astrocytes contain tau
protein (51, 52) and may differ in the metabolism of tau
and/or in factors stimulating aggregation of tau in pathological conditions. In studies of tau assembly in vitro,
stimulatory compounds (49, 53) as well as an unknown factor (54) were able to determine the ultrastructure of filaments.
The variability in phosphorylation of tau among regions and cases was unexpected, especially considering
the similarity of the clinical history and postmortem delay. The motor cortex from 2 cases (case 1 and case 3)
was the most diverse. It was rich in tufted astrocytes and
displayed no morphological or immunohistochemical differences between the cases. In spite of differential phosphorylation of tau at PHF-1, AT8, 12E8, AT100, and Tau-1
epitopes in these cases (Fig. 7), both contained similar
jagged filaments (compare Fig. 4A with 4G). These results emphasize an apparent lack of correlation between
the ultrastructure of jagged filaments and the phosphorylation state of tau. However, the underlying reason for
differential phosphorylation of tau among cases and regions remains unclear.
Similar observations of tau lacking certain phospho-epitopes were made in other disorders (32, 55), including
Pick’s disease (50, 56, 57). For example, only 2 of 4
PHF-tau bands were found to be PHF-1-immunoreactive
in Pick’s disease (50). In PSP, a weak immunoreactivity
with phospho-dependent but not phospho-independent
antibodies was also noted (19). This was attributed to
normal tau in homogenates. In our studies, we examined
isolated filament-enriched fractions devoid of a soluble
pool of tau. Therefore, tau lacking certain phospho-epitopes cannot be attributed to contamination with normal
tau protein. Moreover, such a contamination should be
detected consistently in all PSP samples rather than in
selected cases and regions only.
On the other hand, Iwatsubo et al probed several tauopathies, including PSP, and concluded that the widely occurring tau-positive inclusions share common phosphorylation characteristics irrespective of underlying disease
or cell type (58). Since the latter observations were based
on immunostaining of tissue sections, examination of tau
by immunoblotting is needed to more precisely define
disease- and/or cell-dependent phosphorylation patterns.
It is clear that, in addition to neurons, both astrocytes and
oligodendrocytes express tau protein (51, 52). It is uncertain whether similar neuronal kinases and phosphatases
are found in these glial cells. Further study of the differential phosphorylation of tau in the tauopathies is necessary.
That PHF-1 bound to only a subset of tau polypeptides suggests
the possibility that 2 discrete subpopulations of filaments
were present. By immunogold labeling, however, all the
filaments were PHF-1-positive, suggesting that only a
single population of filaments was present and that both
PHF-1-positive and PHF-1-negative polypeptides were
inter-mixed in the same filament population. Tau-1 also
showed differential binding to tau polypeptides by blotting. With Tau-1, however, we failed to detect any significant immunogold labeling, suggesting that this epitope, readily accessible by blotting, was inaccessible to
labeling in filaments. We speculate that the inaccessibility
may be due to intertwining of Tau-1-positive and Tau-1-negative polypeptides in the same filament, effectively
blocking this epitope from labeling. Alternatively, the
density of Tau-1-positive polypeptides is below the level
of detection by immunogold labeling. These results underline the importance of Western blotting in determining
the phosphorylation state of PHF-tau.
In contrast to glial pathology, neuronal pathology resulted in twisted filaments. For example, filament-enriched fractions from regions involving neurons, e.g. perirhinal cortex or hippocampus, contained an abundance
of twisted filaments. As in glia, some filaments splayed
into distinct fibrillary components. It will be of interest
to determine the mass of the individual fibrils by scanning transmission electron microscopy. The maximal and
minimal widths of these filaments, as well as their regular
80-nm interval of twisting, indicate that ultrastructurally
these filaments closely resemble PHFs of AD. They resemble PHFs of AD also by immunoblotting as demonstrated by the presence of 3 major tau bands of 60, 64,
and 68 kDa, similar to those described for AD. We speculate that PHFs from PSP and AD have a common ultrastructural and biochemical character. Such similarity
between PSP and AD has been previously reported (27,
47) and attributed to the specific tau pathology of neurons.
As in AD, twisted PHF-like filaments of PSP were
found in anatomical areas containing NFTs. Unlike AD,
however, 2 types of NFTs, with globose or flame-like shapes,
could be distinguished morphologically. We were unable
to differentiate the tau content of these NFTs because of
their co-localization. It is likely that the 2 types of NFTs
contain similar tau, since filaments isolated from NFT-enriched regions appear ultrastructurally homogeneous.
In conclusion, the results of our studies suggest that 3
kinds of tau inclusions present in PSP are distinct from
each other in ultrastructure although some overlap exists.
Neuronal and glial inclusions also differ in biochemical
composition. These differences may depend on cell-type-specific metabolism of tau.
ACKNOWLEDGMENTS
The authors express special thanks to Dr. TV Le (Mayo Clinic, Jacksonville, FL) for his expertise in application of double immunohistochemistry and Dr. A. Hirano (Montefiore Medical Center, Bronx, NY)
for his valuable suggestions in the course of our studies. Supported by
the Society for Progressive Supranuclear Palsy and the Alzheimer’s Disease Association (to HKR).
REFERENCES
1. Steele JC, Richardson JC, Olszewski J. Progressive supranuclear
palsy. Arch Neurol 1964;10:333–59
2. Golbe LI, Davis PH, Schoenberg BS, Duvoisin RC. Prevalence and
natural history of progressive supranuclear palsy. Neurology 1988;
38:1031–34
3. Goedert M, Spillantini MG, Davies SW. Filamentous nerve cell
inclusions in neurodegenerative diseases. Curr Opin Neurobiol
1998;8:619–32
4. Dickson DW. Neuropathologic differentiation of progressive supranuclear palsy and corticobasal degeneration. J Neurol (Suppl 2)
1999;246:6–15
5. Dickson DW. Astrocytes in other neurodegenerative diseases. In:
Schipper HM, ed. Astrocytes in brain aging and neurodegeneration.
Austin: Landes RG Co, 1998:165–88
6. Matsusaka H, Ikeda K, Akiyama H, Arai T, Inoue M, Yagishita S.
Astrocytic pathology in progressive supranuclear palsy: Significance for neuropathological diagnosis. Acta Neuropathol 1998;96:
248–52
7. Hauw J-J, Daniel SE, Dickson DW, et al. Preliminary NINDS neuropathologic criteria for Steele-Richardson-Olszewski syndrome
(progressive supranuclear palsy). Neurology 1994;44:2015–19
8. Komori T, Arai N, Oda M, et al. Morphologic difference of neuropil
threads in Alzheimer’s disease, corticobasal degeneration and progressive supranuclear palsy: A morphometric study. Neurosci Lett 1997;233:89–92
9. Komori T, Arai N, Oda M, et al. Astrocytic plaques and tufts of
abnormal fibers do not coexist in corticobasal degeneration and
progressive supranuclear palsy. Acta Neuropathol 1998;96:401–8
10. Bergeron C, Pollanen MS, Weyer L, Lang AE. Cortical degeneration in progressive supranuclear palsy. A comparison with corticalbasal ganglionic degeneration. J Neuropathol Exp Neurol 1997;56:
726–34
11. Ikeda K, Akiyama H, Haga C, Tanno E, Tokuda T, Ikeda S. Thornshaped astrocytes: Possibly secondary induced tau-positive glial fibrillary tangles. Acta Neuropathol 1995;90:620–25
12. Yamada T, Calne DB, Akiyama H, McGeer EG, McGeer PL. Further observations on Tau-positive glia in the brains with progressive
supranuclear palsy. Acta Neuropathol 1993;85:308–15
13. Feany MB, Dickson DW. Neurodegenerative disorders with extensive tau pathology: A comparative study and review. Ann Neurol
1996;40:139–48
14. Cervos-Navarro J, Schumacher K. Neurofibrillary pathology in progressive supranuclear palsy (PSP). J Neural Transm Suppl 1994;42:
153–64
15. Ksiezak-Reding H, Shafit-Zagardo B, Yen SH. Differential expression of exons 10 and 11 in normal tau and tau associated with paired
helical filaments. J Neurosci Res 1995;41:583–93
16. Murrell JR, Spillantini MG, Zolo P, et al. Tau gene mutation G389R
causes a tauopathy with abundant Pick body-like inclusions and
axonal deposits. J Neuropathol Exp Neurol 1999;58:1207–26
17. Chambers LB, Lee JM, Troncoso JC, Reich S, Muma NA. Overexpression of four-repeat tau mRNA isoforms in progressive supranuclear palsy but not in Alzheimer’s disease. Ann Neurol 1999;46:
325–32
18. Litvan I, Dickson DW, Buttner-Ennever JA, et al. Research goals
in progressive supranuclear palsy. Mov Disord 2000;15:446–58
19. Sergeant N, Wattez A, Delacourte A. Neurofibrillary degeneration
in progressive supranuclear palsy and corticobasal degeneration:
Tau pathologies with exclusive “exon 10” isoforms. J Neurochem
1999;72:1243–49
20. Stanford PM, Halliday GM, Brooks WS, et al. Progressive supranuclear palsy pathology caused by a novel silent mutation in exon
10 of the tau gene: Expansion of the disease phenotype caused by
tau gene mutations. Brain 2000;123:880–93
21. Conrad C, Andreadis A, Trojanowski JQ. Genetic evidence for the
involvement of tau in progressive supranuclear palsy. Ann Neurol
1997;41:277–81
22. Feany MB, Ksiezak-Reding H, Liu WK, Vincent I, Yen SH, Dickson DW. Epitope expression and hyperphosphorylation of tau protein in corticobasal degeneration: Differentiation from progressive
supranuclear palsy. Acta Neuropathol 1995;90:37–43
23. Hoffmann R, Lee VMY, Leight S, Varga I, Otvos JL. Unique Alzheimer’s disease paired helical filament specific epitopes involve
double phosphorylation at specific sites. Biochemistry 1997;36:
8114–24
24. Tabaton M, Whitehouse PJ, Perry G, Davies P, Autilio-Gambetti L,
Gambetti P. Alz 50 recognizes abnormal filaments in Alzheimer’s
disease and progressive supranuclear palsy. Ann Neurol 1988;24:
407–13
25. Tellez-Nagel I, Wisniewski HM. Ultrastructure of neurofibrillary
tangles in Steele-Richardson-Olszewski syndrome. Arch Neurol
1973;29:324–27
26. Takauchi S, Mizuhara T, Miyoshi K. Unusual paired helical filaments in progressive supranuclear palsy. Acta Neuropathol (Berl)
1983;59:225–28
27. Vermersch P, Robitaille Y, Bernier L, Wattez A, Gauvreau D, Delacourte A. Biochemical mapping of neurofibrillary degeneration in
a case of progressive supranuclear palsy: Evidence for general cortical involvement. Acta Neuropathol 1994;87:572–77
28. Jicha GA, Berenfeld B, Davies P. Sequence requirements for formation of conformational variants of tau similar to those found in
Alzheimer’s disease. J Neurosci Res 1999;55:713–23
29. Kosik KS, Orecchio LD, Binder LI, Trojanowski JQ, Lee VM-Y,
Lee G. Epitopes that span the tau molecule are shared with paired
helical filaments. Neuron 1988;1:817–25
30. Goedert M, Jakes R, Vanmechelen E. Monoclonal antibody AT8
recognizes tau protein phosphorylated at both serine202 and threonine205. Neurosci Lett 1995;189:167–69
31. Seubert M, Mawal-Dewan R, Barbour R, et al. Detection of phosphorylated Ser262 in fetal tau, adult tau, and paired helical filament
tau. J Biol Chem 1995;270:18917–22
32. Sergeant N, David JP, Lefranc D, Vermersch P, Wattez A, Delacourte A. Different distribution of phosphorylated tau protein in
Alzheimer’s and Pick’s diseases. FEBS Lett 1997;412:578–82
33. Otvos L, Feiner L, Lang E, Szendrei GI, Goedert M, Lee VMY.
Monoclonal antibody PHF-1 recognizes tau protein phosphorylated
at serine residues 396–404. J Neurosci Res 1994;39:669–73
34. Carmel G, Mager EM, Binder LI, Kuret J. The structural basis of
monoclonal antibody Alz50’s selectivity for Alzheimer’s disease
pathology. J Biol Chem 1996;271:32789–95
35. Ishizawa K, Ksiezak-Reding H, Davies P, Delacourte A, Tiseo P,
Yen SH, Dickson DW. A double labeling immunohistochemical
study of tau exon 10 in Alzheimer’s disease, progressive supranuclear palsy and Pick’s disease. Acta Neuropathol 2000;100:235–44
36. Ksiezak-Reding H, Liu W-K, Wall JS, Yen S-H. Biochemical isolation and characterization of paired helical filaments. In: Avila J,
Brandt R, Kosik KS, eds. Brain microtubule associated proteins;
modifications in disease. Amsterdam: Harwood Academic Publishers, 1997:331–50
37. Yang LS, Ksiezak-Reding H. Ubiquitin immunoreactivity of paired
helical filaments differs in Alzheimer’s disease and corticobasal degeneration. Acta Neuropathol 1998;96:520–26
38. Hauw JJ, Verny M, Delaere P, Cervera P, He Y, Duyckaerts C.
Constant neurofibrillary changes in the neocortex in progressive
supranuclear palsy. Basic differences with Alzheimer’s disease and
aging. Neurosci Lett 1990;119:182–86
39. Hof PR, Delacourte A, Bouras C. Distribution of cortical neurofibrillary tangles in progressive supranuclear palsy. Acta Neuropathol
1992;84:45–51
40. Feany MB, Mattiace LA, Dickson DW. Neuropathologic overlap of
progressive supranuclear palsy, Pick’s disease and corticobasal degeneration. J Neuropathol Exp Neurol 1996;55:53–67
41. Arima K, Nakamura M, Sunohara N, et al. Ultrastructural characterization of the tau-immunoreactive tubule in the oligodendroglial
perikarya and their inner loop processes in progressive supranuclear
palsy. Acta Neuropathol 1997;93:558–66
42. Grundke-Iqbal I, Iqbal K, Tung YC, Quinlan M, Wisniewski HM,
Binder LI. Abnormal phosphorylation of the microtubule associated
protein tau in Alzheimer cytoskeletal pathology. Proc Natl Acad
Sci USA 1986;83:4913–17
43. Bugiani O, Mancardi GL, Brusa A, Ederli A. The fine structure of
subcortical neurofibrillary tangles in progressive supranuclear palsy. Acta Neuropathol 1979;45:147–52
44. Abe H, Yagishita S, Amano N, Bise K. Ultrastructural and immunochemical study of “astrocytic tangles” (ACT) in patients with
progressive supranuclear palsy [abstract]. Clin Neuropathol 1992;
11:278
45. Montpetit V, Clapin DF, Guberman A. Substructure of 20 nm filaments of progressive supranuclear palsy. Acta Neuropathol 1985;
68:311–18
46. Ksiezak-Reding H, Tracz E, Yang LS, Dickson DW, Martha S, Wall
JS. Ultrastructural instability of paired helical filaments from corticobasal degeneration as examined by scanning transmission electron microscopy. Am J Pathol 1996;149:639–51
47. Tracz E, Dickson DW, Hainfeld JF, Ksiezak-Reding H. Paired helical filaments in corticobasal degeneration: The fine fibrillary structure with Nano Van. Brain Res 1997;773:33–44
48. Schmidt ML, Huang R, Martin JA, et al. Neurofibrillary tangles in
progressive supranuclear palsy contain the same tau epitopes identified in Alzheimer’s disease PHFtau. J Neuropathol Exp Neurol
1996;55:534–39
49. Goedert M, Jakes R, Spillantini MG, Hasegawa M, Smith MJ,
Crowther RA. Assembly of microtubule-associated protein tau into
Alzheimer-like filaments induced by sulphated glycosaminoglycans. Nature 1996;383:550–53
50. King ME, Ghoshal N, Wall JS, Binder LI, Ksiezak-Reding H.
Structural analysis of Pick’s disease derived and in vitro-assembled
tau filaments. Am J Pathol 2001;158:1481–90
51. Ksiezak-Reding H, He D, Gordon-Krajcer W, Kress Y, Lee S, Dickson DW. Induction of Alzheimer-specific tau epitope AT100 in apoptotic human fetal astrocytes. Cell Motil Cytoskeleton 2000;47:
236–52
52. LoPresti P, Szuchet S, Papasozomenos SC, Zinkowski RP, Binder
LI. Functional implications for the microtubule-associated protein
tau: Localization in oligodendrocytes. Proc Natl Acad Sci USA
1995;92:10369–73
53. Arrasate M, Perez M, Valpuesta JM, Avila J. Role of glycosaminoglycans in determining the helicity of paired helical filaments.
Am J Pathol 1997;151:1115–22
54. Ksiezak-Reding H, Yang G, Simon M, Wall JS. Assembled tau
filaments differ from native paired helical filaments as determined
by scanning transmission electron microscopy. Brain Res 1998;814:
86–98
55. Mailliot C, Sergeant N, Bussiere T, Caillet-Boudin ML, Delacourte
A, Buée L. Phosphorylation of specific sets of tau isoforms reflects
different neurofibrillary degeneration processes. FEBS Lett 1998;
433:201–4
56. Rizzini C, Goedert M, Hodges JR, et al. Tau gene mutation K257T
causes a tauopathy similar to Pick’s disease. J Neuropathol Exp
Neurol 2000;59:990–1001
57. Probst A, Tolnay M, Langui D, Goedert M, Spillantini MG. Pick’s
disease: Hyperphosphorylated tau protein segregates to the somatoaxonal compartment. Acta Neuropathol 1996;92:588–96
58. Iwatsubo T, Hasegawa M, Ihara Y. Neuronal and glial tau-positive
inclusions in diverse neurologic diseases share common phosphorylation characteristics. Acta Neuropathol 1994;88:129–36
Received February 28, 2001
Revision received June 18, 2001 and September 6, 2001
Accepted September 11, 2001
-----
|
{
"disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references', 'abstract'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1093/JNEN/61.1.33?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1093/JNEN/61.1.33, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "BRONZE",
"url": "https://academic.oup.com/jnen/article-pdf/61/1/33/9552913/61-1-33.pdf"
}
| 2002
|
[
"JournalArticle"
] | true
| 2002-01-01T00:00:00
|
[] | 13,980
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/014ad6b5c003ecf7566570266d43154ac8683758
|
[
"Computer Science"
] | 0.820874
|
Convolutional Transformer based Dual Discriminator Generative Adversarial Networks for Video Anomaly Detection
|
014ad6b5c003ecf7566570266d43154ac8683758
|
ACM Multimedia
|
[
{
"authorId": "4738891",
"name": "Xinyang Feng"
},
{
"authorId": "2451800",
"name": "Dongjin Song"
},
{
"authorId": "2786131",
"name": "Yuncong Chen"
},
{
"authorId": "1766853",
"name": "Zhengzhang Chen"
},
{
"authorId": "2090567",
"name": "Jingchao Ni"
},
{
"authorId": "2145225543",
"name": "Haifeng Chen"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"MM"
],
"alternate_urls": null,
"id": "f2c85de5-7cfa-4b92-8714-a0fbdcf0274e",
"issn": null,
"name": "ACM Multimedia",
"type": "conference",
"url": null
}
|
Detecting abnormal activities in real-world surveillance videos is an important yet challenging task as the prior knowledge about video anomalies is usually limited or unavailable. Despite that many approaches have been developed to resolve this problem, few of them can capture the normal spatio-temporal patterns effectively and efficiently. Moreover, existing works seldom explicitly consider the local consistency at frame level and global coherence of temporal dynamics in video sequences. To this end, we propose Convolutional Transformer based Dual Discriminator Generative Adversarial Networks (CT-D2GAN) to perform unsupervised video anomaly detection. Specifically, we first present a convolutional transformer to perform future frame prediction. It contains three key components, i.e., a convolutional encoder to capture the spatial information of the input video clips, a temporal self-attention module to encode the temporal dynamics, and a convolutional decoder to integrate spatio-temporal features and predict the future frame. Next, a dual discriminator based adversarial training procedure, which jointly considers an image discriminator that can maintain the local consistency at frame-level and a video discriminator that can enforce the global coherence of temporal dynamics, is employed to enhance the future frame prediction. Finally, the prediction error is used to identify abnormal video frames. Thoroughly empirical studies on three public video anomaly detection datasets, i.e., UCSD Ped2, CUHK Avenue, and Shanghai Tech Campus, demonstrate the effectiveness of the proposed adversarial spatio-temporal modeling framework.
|
## Convolutional Transformer based Dual Discriminator Generative Adversarial Networks for Video Anomaly Detection
### Xinyang Feng
##### Columbia University New York, New York, USA xf2143@columbia.edu
### Dongjin Song[∗]
##### University of Connecticut Storrs, Connecticut, USA dongjin.song@uconn.edu
### Zhengzhang Chen
##### NEC Laboratories America, Inc. Princeton, New Jersey, USA zchen@nec-labs.com
#### ABSTRACT
### Jingchao Ni
##### NEC Laboratories America, Inc. Princeton, New Jersey, USA jni@nec-labs.com
#### KEYWORDS
### Yuncong Chen
##### NEC Laboratories America, Inc. Princeton, New Jersey, USA yuncong@nec-labs.com
### Haifeng Chen
##### NEC Laboratories America, Inc. Princeton, New Jersey, USA haifeng@nec-labs.com
Detecting abnormal activities in real-world surveillance videos is
an important yet challenging task as the prior knowledge about
video anomalies is usually limited or unavailable. Although many
approaches have been developed to resolve this problem, few of
them can capture the normal spatio-temporal patterns effectively
and efficiently. Moreover, existing works seldom explicitly consider the local consistency at the frame level and the global coherence of
temporal dynamics in video sequences. To this end, we propose
Convolutional Transformer based Dual Discriminator Generative
Adversarial Networks (CT-D2GAN) to perform unsupervised video
anomaly detection. Specifically, we first present a convolutional
transformer to perform future frame prediction. It contains three
key components, i.e., a convolutional encoder to capture the spatial
information of the input video clips, a temporal self-attention module to encode the temporal dynamics, and a convolutional decoder
to integrate spatio-temporal features and predict the future frame.
Next, a dual discriminator based adversarial training procedure,
which jointly considers an image discriminator that can maintain
the local consistency at frame-level and a video discriminator that
can enforce the global coherence of temporal dynamics, is employed
to enhance the future frame prediction. Finally, the prediction error
is used to identify abnormal video frames. Thorough empirical
studies on three public video anomaly detection datasets, i.e., UCSD
Ped2, CUHK Avenue, and Shanghai Tech Campus, demonstrate the
effectiveness of the proposed adversarial spatio-temporal modeling
framework.
#### CCS CONCEPTS
- Computing methodologies → Scene anomaly detection; Adversarial learning; Anomaly detection; Neural networks.

#### KEYWORDS
Video anomaly detection; Generative adversarial networks; Transformer model; Convolutional neural network; Spatio-temporal modeling

**ACM Reference Format:**
Xinyang Feng, Dongjin Song, Yuncong Chen, Zhengzhang Chen, Jingchao Ni, and Haifeng Chen. 2021. Convolutional Transformer based Dual Discriminator Generative Adversarial Networks for Video Anomaly Detection. In _Proceedings of the 29th ACM International Conference on Multimedia (MM ’21), October 20–24, 2021, Virtual Event, China._ ACM, New York, NY, USA, 9 pages. https://doi.org/10.1145/3474085.3475693

∗Corresponding author

#### 1 INTRODUCTION
With the rapid growth of video surveillance data, there is an increasing demand to automatically detect abnormal video sequences in the context of large-scale normal (regular) video data. Although a substantial amount of research effort has been devoted to this problem [3, 8, 13, 14, 16, 19, 22, 31, 34], video anomaly detection, which aims to identify the activities that do not conform to regular patterns in a video sequence, is still a challenging task. This is because real-world abnormal video activities can be extremely diverse, while the prior knowledge about these anomalies is usually limited or even unavailable.
With the assumption that a model can only generalize to data
from the same distribution as the training set, abnormal activities
in the test set will manifest as deviance from regular patterns. A
common approach to resolve this problem is to learn a model that
can capture regular patterns in the normal video clips during the
training stage, and check whether there exists any irregular pattern
that diverges from regular patterns in the test video clips. Within
this framework, it is crucial to not only represent the regular appearances but also capture the normal spatio-temporal dynamics to
differentiate abnormal activities from normal activities in a video
sequence. This serves as an important motivation for our proposed
methods.
Early studies have used handcrafted features to represent video
patterns [13, 16, 19, 29]. For instance, Li et al. [13] introduced mixtures of dynamic textures and defined outliers under this model
as anomalies. These approaches, however, are usually not optimal
for video anomaly detection since the features are extracted based
upon a different objective.
Recently, deep neural networks have become prevalent in video
anomaly detection, showing superior performance over handcrafted
feature based methods. For instance, Hasan et al. [8] developed a
convolutional autoencoder (Conv-AE) to model the spatio-temporal
patterns in a video sequence simultaneously with a 2D CNN. The
temporal dynamics, however, are not explicitly considered. To
better cope with the spatio-temporal information in a video sequence, convolutional long short-term memory (LSTM) autoencoder (ConvLSTM-AE) [17, 27] was proposed to model the spatial
patterns with fully convolutional networks and encode the temporal dynamics using convolutional LSTM (ConvLSTM). ConvLSTM,
however, suffers from computational and interpretation issues. A
powerful alternative for sequence modeling is the self-attention
mechanism [33]. It has demonstrated superior performance and
efficiency in many different tasks, e.g., sequence-to-sequence machine translation [33], time series prediction [24], autoregressive
model based image generation [23], and GAN-based image synthesis [39]. However, it has seldom been employed to capture regular
spatio-temporal patterns in the surveillance videos.
More recently, adversarial learning has shown impressive
progress on video anomaly detection. For instance, Ravanbakhsh
et al. [25] developed a GAN based anomaly detection approach
following conditional GAN framework [10]. Liu et al. [14] proposed
an anomaly detection approach based on future frame prediction.
Tang et al. [31] extended this framework by adding a reconstruction task. The generative models in these two works were based
on U-Net [26]. Similar to Conv-AE, the temporal dynamics in the
video clip were not explicitly encoded and the temporal coherence
was enforced by a loss term on the optical flow. Moreover, the
potential discriminative information, in the form of local consistency at the frame level and global coherence of temporal dynamics in video sequences, was not fully considered in previous works.
In this paper, to better capture the regular spatio-temporal patterns and cope with the potential discriminative information at the frame level and in video sequences, we propose Convolutional Transformer based Dual Discriminator Generative Adversarial Networks (CT-D2GAN) to perform unsupervised video anomaly detection. We first present a convolutional transformer to perform future frame prediction. The convolutional transformer is essentially an encoder-decoder framework consisting of three key components,
_i.e., a convolutional encoder to capture the spatial patterns of the_
input video clip, a novel temporal self-attention module adapted for
video temporal modeling that can explicitly encode the temporal
dynamics, and a convolutional decoder to integrate spatio-temporal
features and predict the future frame. Because of the temporal
self-attention module, the convolutional transformer can capture the underlying temporal dynamics efficiently and effectively. Next, in order to maintain the local consistency of the predicted frame and the global coherence conditioned on the previous frames, we adapt a dual discriminator GAN to deal with video frames and employ an adversarial training procedure to further enhance the prediction performance. Finally, the prediction error is adopted to identify abnormal video frames. Thorough empirical studies on three public
video anomaly detection datasets, i.e., UCSD Ped2, CUHK Avenue,
and Shanghai Tech Campus, demonstrate the effectiveness of the
proposed framework and techniques.
#### 2 RELATED WORK
The proposed Convolutional Transformer based Dual Discriminator
Generative Adversarial Networks (CT-D2GAN) is closely related
to deep learning based video anomaly detection and self-attention
mechanism [33].
Note that we focus our discussion on methods based on unsupervised settings, which generalize efficiently without the time-consuming and error-prone process of manual labeling. We are aware that there are numerous works on weakly supervised or supervised video anomaly detection; e.g., Sultani et al. [30] proposed a deep multiple instance ranking framework using video-level labels that achieves better performance than the convolutional auto-encoder (Conv-AE) based method [8], but it employs both normal and abnormal video clips for training, which differs from our setting.
Deep neural network based video anomaly detection methods demonstrate superior performance over traditional methods based on handcrafted features. Hasan et al. [8] developed the Conv-AE method to simultaneously learn the spatio-temporal patterns in a video with 2D convolutional neural networks by concatenating the video frames in the channel dimension; the temporal information is mixed with the spatial information in the first convolutional layer and thus is not explicitly encoded. Xu et al. [34] proposed appearance and motion DeepNet (AMDN) to learn video feature representations, which, however, still requires a decoupled one-class SVM classifier applied to the learned representations to generate anomaly scores. Gong et al. [6] proposed a memory-augmented autoencoder (MemAE) that uses a memory module to constrain the reconstruction.
More recently, adversarial learning has demonstrated flexibility and impressive performance in multiple video anomaly detection studies. A generative adversarial network (GAN) based anomaly detection approach [25] was developed following the cGAN framework of image-to-image translation [10]. Specifically, it employs the image and the optical flow as source and target domains (and vice versa) and trains cross-channel generation through adversarial learning; the reconstruction error is used to compute the anomaly score, and the only temporal constraint is imposed by the optical flow calculation. Liu et al. [14] proposed an anomaly detection approach based on future frame prediction in a GAN framework with a U-Net generator [26]. Similar to Conv-AE, the temporal information is not explicitly encoded, and the temporal coherence between neighboring frames is enforced by a loss term on the optical flow. Tang et al. [31] extended the future frame prediction framework by adding a reconstruction task. One way to alleviate the temporal encoding issue in video spatio-temporal modeling is to use convolutional LSTM autoencoder (ConvLSTM-AE) based methods [4, 17, 27, 38], where the spatial and temporal patterns are encoded with fully convolutional networks and convolutional LSTM, respectively. Despite its popularity, ConvLSTM suffers from issues such as large memory consumption: the complex gating operations add to the computational cost and complicate the information flow, making interpretation difficult.
A more effective and efficient alternative for sequence modeling is the self-attention mechanism [33], which is essentially an attention mechanism relating different positions of a single sequence to compute a representation of that sequence, in which the keys, values, and queries come from the same set of features. Related applications include autoregressive model based image generation [23] and GAN-based image synthesis [39].
In this work, building on these related works, we introduce the convolutional transformer by extending the self-attention mechanism to video sequence modeling and develop a novel self-attention module specialized for spatio-temporal modeling in video sequences. Compared to existing approaches for video anomaly detection, the proposed convolutional transformer model has the advantage of explicitly and efficiently encoding the temporal information in a sequence of feature maps, where the computation of attentions can be fully parallelized via matrix multiplications. Based on the convolutional transformer, a dual discriminator generative adversarial network (D2GAN) approach is developed to further enhance the future frame prediction by enforcing local consistency of the predicted frame and global coherence conditioned on the previous frames. Note that the proposed D2GAN differs from existing works on dual discriminator based GANs, which have been applied to different scenarios [5, 21, 35, 37].
#### 3 CT-D2GAN
In this section, we first introduce the problem formulation and
input to our framework. Then, we present the motivation and technical details of the proposed CT-D2GAN framework including convolutional transformer, dual discriminator GAN, the overall loss
function, and lastly the regularity score calculation. An overview
of the framework is illustrated in Figure 1.
In CT-D2GAN, a convolutional transformer is employed to generate the future frame prediction based on past frames, while an image discriminator and a video discriminator are used to maintain local consistency and global coherence.
#### 3.1 Problem Statement
Given an input representation of a video clip of length $T$, i.e., $I = (I_{t-T+1}, \ldots, I_t) \in \mathbb{R}^{h \times w \times c \times T}$, where $h$, $w$, and $c$ are the height, width, and number of channels, we aim to predict the $(t+1)$-th frame as $\hat{I}_{t+1} \in \mathbb{R}^{h \times w \times c}$ and identify abnormal activities based upon the prediction error, i.e.,

$$e_{\mathrm{MSE},t} = \frac{1}{h \cdot w} \sum_{i=1}^{c} \left\| \hat{I}_{:,:,i,t+1} - I_{:,:,i,t+1} \right\|_F^2,$$

where $I_{:,:,i,t+1} \in \mathbb{R}^{h \times w}$.
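For concreteness, a minimal PyTorch sketch of this per-frame error (an illustration, not the authors' released code):

```python
import torch

def frame_prediction_error(pred, target):
    """Frame-level prediction error e_{MSE,t} of Sec. 3.1: the squared
    Frobenius norm of (prediction - ground truth), summed over the c
    channels and normalized by the number of pixels h * w.

    pred, target: tensors of shape (h, w, c)."""
    h, w = target.shape[0], target.shape[1]
    return torch.sum((pred - target) ** 2) / (h * w)
```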
#### 3.2 Input
As appearance and motion are two characteristics of video data, it is common to explicitly incorporate optical flow together with the still images to describe a video sequence [28]; e.g., optical flow has been employed to represent video sequences in the cGAN framework [25] and used as a motion constraint [14].

In this work, we stack the image with pre-computed optical flow maps [2, 9] in the channel dimension as inputs, similar to Simonyan et al. [28] for video action recognition and Ravanbakhsh et al. [25] for video anomaly detection. The optical flow maps consist of a horizontal component, a vertical component, and a magnitude component. Note that the optical flow map is computed from the previous and current images, and thus does not contain future frame information. Therefore, the input can be given as $I \in \mathbb{R}^{h \times w \times 4 \times T}$, and we use $T = 5$ consecutive frames as inputs, similar to Liu et al. [14].
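A minimal sketch of this input construction, assuming grayscale frames and pre-computed 3-channel flow maps are already available:

```python
import numpy as np

def build_input_clip(frames, flows, T=5):
    """Stack each frame with its pre-computed optical flow map
    (horizontal, vertical, magnitude) along the channel axis, then stack
    T consecutive frames to form I in R^{h x w x 4 x T} (Sec. 3.2).

    frames: list of T arrays of shape (h, w, 1) -- grayscale assumed here
    flows:  list of T arrays of shape (h, w, 3); each map is computed
            from the previous and current frames only."""
    assert len(frames) == len(flows) == T
    per_frame = [np.concatenate([img, flow], axis=-1)  # (h, w, 4)
                 for img, flow in zip(frames, flows)]
    return np.stack(per_frame, axis=-1)  # (h, w, 4, T)
```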
#### 3.3 Convolutional Transformer
The convolutional transformer is developed to obtain a future frame prediction based on past frames. It consists of three key components: a convolutional encoder to encode spatial information, a temporal self-attention module to capture the temporal dynamics, and a convolutional decoder to integrate spatio-temporal features and predict the future frame.
_3.3.1_ _Convolutional Encoder._ The convolutional encoder [15] is employed to extract spatial features from each frame of the video. Each frame is first resized to 256 × 256 and then fed into the convolutional encoder, which consists of 5 convolutional blocks following a common CNN structure. All convolutional kernels are 3 × 3 pixels. For brevity, we denote a convolutional layer with stride $s$ and $n$ filters as Conv$_{s,n}$, a batch normalization layer as BN, a scaled exponential linear unit [12] as SELU, and a dropout operation with dropout ratio $r$ as dropout$_r$. The structure of the convolutional encoder is [Conv$_{1,64}$-SELU-BN], [Conv$_{2,64}$-SELU-BN-Conv$_{1,64}$-SELU], [Conv$_{2,128}$-SELU-BN-Conv$_{1,128}$-SELU], [Conv$_{2,256}$-SELU-BN-dropout$_{0.25}$-Conv$_{1,256}$-SELU], [Conv$_{2,256}$-SELU-BN-dropout$_{0.25}$-Conv$_{1,256}$-SELU], where each [ ] represents a convolutional block.
At the $l$-th convolutional block $\mathrm{conv}^l$, $F^l_{t-i} \in \mathbb{R}^{h_l \times w_l \times c_l}$, $i \in [0, \ldots, T-1]$, denotes the input feature maps to the self-attention module, with $h_l$, $w_l$, and $c_l$ the height, width, and number of channels, respectively. The temporal dynamics among the spatial feature maps of different time steps are encoded with the temporal self-attention module.
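A simplified PyTorch sketch of this encoder; it follows the block recipe above, except that the paper's first block is a single Conv$_{1,64}$-SELU-BN, while this helper always appends a second stride-1 convolution:

```python
import torch.nn as nn

def conv_block(in_ch, out_ch, stride=2, dropout=0.0):
    """One encoder block: Conv-SELU-BN (stride 2 for downsampling),
    optional dropout, then a stride-1 Conv-SELU. All kernels are 3x3."""
    layers = [nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1),
              nn.SELU(),
              nn.BatchNorm2d(out_ch)]
    if dropout > 0:
        layers.append(nn.Dropout2d(dropout))
    layers += [nn.Conv2d(out_ch, out_ch, 3, stride=1, padding=1),
               nn.SELU()]
    return nn.Sequential(*layers)

# Five-block encoder; the input has 4 channels (image + optical flow).
encoder = nn.Sequential(
    conv_block(4, 64, stride=1),
    conv_block(64, 64),
    conv_block(64, 128),
    conv_block(128, 256, dropout=0.25),
    conv_block(256, 256, dropout=0.25),
)
```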
_3.3.2_ _Temporal Self-attention Module._ To explicitly encode the temporal information in the video sequence, we extend the self-attention mechanism in the transformer model [33] and develop a novel temporal self-attention module to capture the temporal dynamics of the multi-scale spatial feature maps generated by the convolutional encoder. This module is applied at every layer, so we omit the layer index for clarity. An illustration of the multi-head temporal self-attention module is shown in the upper panel of Figure 1.

**Spatial Feature Vector.** We first use global average pooling (GAP) to extract a feature vector $\mathbf{f}_t$ from the feature map $F_t$ produced by the convolutional encoder. The feature vector at the current time step, $\mathbf{f}_t$, will be used as part of the query, and each historical feature vector $\mathbf{f}_{t-i}$, $i \in [1, T-1]$, will be used as part of the key to index spatial feature maps.
**Positional Encoding.** Different from sequence models such as LSTM, self-attention does not inherently model sequential information; therefore, it is necessary to incorporate temporal positional information into the model. We generate a positional encoding vector $\mathrm{PE} \in \mathbb{R}^{d_p}$ following [33]:

$$\mathrm{PE}_{p,2i} = \sin\!\left(p / 10000^{2i/d_p}\right), \qquad \mathrm{PE}_{p,2i+1} = \cos\!\left(p / 10000^{2i/d_p}\right), \tag{1}$$

where $d_p$ denotes the dimension of PE, $p$ denotes the temporal position, and $i \in [0, \ldots, d_p/2 - 1]$ denotes the index of the dimension. Empirically, we fix $d_p = 8$ in our study.
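A small sketch of the positional encoding of Eq. (1):

```python
import numpy as np

def positional_encoding(p, d_p=8):
    """Positional encoding vector PE in R^{d_p} for temporal position p,
    following Eq. (1); d_p = 8 as fixed in the paper."""
    pe = np.zeros(d_p)
    i = np.arange(d_p // 2)
    pe[0::2] = np.sin(p / 10000 ** (2 * i / d_p))
    pe[1::2] = np.cos(p / 10000 ** (2 * i / d_p))
    return pe
```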
**Temporal Self-Attention.** We concatenate the positional encoding vector with the spatial feature vector for each time step and
**Figure 1: The architecture of the proposed CT-D2GAN framework. (Upper panel) The convolutional transformer generator consists of a convolutional encoder, a temporal self-attention module, and a convolutional decoder. Multi-head self-attention is applied on the feature maps $F_t$ extracted from the convolutional encoder: $F_t$ is transformed into multi-head feature maps $F_t^{(k)}$ via a convolutional operation; within each head, we apply a global average pooling (GAP) operation on $F_t^{(k)}$ to generate a spatial feature vector by aggregating over the spatial dimensions, and concatenate the positional encoding (PE) vector; we then compare the similarity $D_{\cos}$ between the query $\mathbf{q}_t^{(k)}$ and memory $\mathbf{m}_t^{(k)}$ feature vectors and generate the attention weights by normalizing across time steps using a softmax $\sigma$; the attended feature map $H_t^{(k)}$ is a weighted average of the feature maps at different time steps; the final attended map $H_t^{\mathrm{MH}}$ is the concatenation over all the heads; and the final integrated map $S_t$ is a weighted average of the query $F_t^{\mathrm{MH}}$ and the attended feature maps according to a spatial selective gate (SSG). $S_t$ is decoded into the predicted future frame with the convolutional decoder. (Lower panels) The image discriminator (left) and video discriminator (right) used in our dual discriminator GAN framework.**
use the concatenated vectors as the queries and keys, and the feature maps as the values in the setting of the self-attention mechanism. For each query frame at time $t$, the current concatenated feature vector $\mathbf{q}_t = [\mathbf{f}_t; \mathrm{PE}] \in \mathbb{R}^{c_l + d_p}$ is used as the query and compared to the feature vector of each frame from the input video clip, i.e., the memory $\mathbf{m}_{t-i} = [\mathbf{f}_{t-i}; \mathrm{PE}] \in \mathbb{R}^{c_l + d_p}$, $i \in [1, \ldots, T-1]$, using cosine similarity:

$$D(\mathbf{q}_t, \mathbf{m}_{t-i}) = \frac{\mathbf{q}_t \cdot \mathbf{m}_{t-i}}{\|\mathbf{q}_t\| \, \|\mathbf{m}_{t-i}\|}. \tag{2}$$

Based on the similarity between $\mathbf{q}_t$ and $\mathbf{m}_{t-i}$, we generate the normalized attention weights $a_{t,t-i} \in \mathbb{R}$ across the temporal dimension using a softmax function:

$$a_{t,t-i} = \frac{\exp(\beta D(\mathbf{q}_t, \mathbf{m}_{t-i}))}{\sum_{j \in [1, \ldots, T-1]} \exp(\beta D(\mathbf{q}_t, \mathbf{m}_{t-j}))}, \tag{3}$$

where a positive temperature variable $\beta$ is introduced to sharpen the level of focus in the softmax function and is automatically learned
in the model through a single hidden densely-connected layer with
the query as the input.
The final attended feature maps $H_t$ are a weighted sum of all feature maps $F$ using the attention weights in Eq. (3):

$$H_t = \sum_{i \in [1, \ldots, T-1]} a_{t,t-i} \cdot F_{t-i}. \tag{4}$$
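A minimal single-head sketch of Eqs. (2)–(4) for one query frame; the temperature $\beta$ is learned in the paper but is a fixed argument here:

```python
import torch
import torch.nn.functional as F

def temporal_attention(query, memory, feature_maps, beta=1.0):
    """Single-head temporal self-attention (Sec. 3.3.2).

    query:        (d,)           concatenated [f_t; PE] of the current frame
    memory:       (T-1, d)       [f_{t-i}; PE] of each historical frame
    feature_maps: (T-1, c, h, w) historical feature maps F_{t-i}
    beta:         softmax temperature (learned in the paper)."""
    # Cosine similarity between the query and each memory vector, Eq. (2).
    sim = F.cosine_similarity(query.unsqueeze(0), memory, dim=1)  # (T-1,)
    # Normalize across the temporal dimension with a softmax, Eq. (3).
    attn = torch.softmax(beta * sim, dim=0)
    # Attended map: weighted sum of the historical feature maps, Eq. (4).
    return torch.einsum('t,tchw->chw', attn, feature_maps)
```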
**Multi-head Temporal Self-Attention.** Multi-head self-attention [33] enables the model to jointly attend to information from different representation subspaces at different positions. We adapt it to spatio-temporal modeling by first mapping the spatial feature maps to $n_h = 8$ groups, each using 32 $1 \times 1$ convolutional kernels. For each group of feature maps with dimension $c_h = 32$, we then perform the single-head self-attention as described in the previous subsection and generate the attended feature maps for head $k$ as $H_t^{(k)}$:

$$H_t^{(k)} = \sum_{i \in [1, \ldots, T-1]} a_{t,t-i}^{(k)} \cdot F_{t-i}^{(k)}, \tag{5}$$

where $F_{t-i}^{(k)} \in \mathbb{R}^{h_l \times w_l \times c_h}$ is the transformed feature map at frame $t-i$ for head $k$, and $a_{t,t-i}^{(k)}$ is the corresponding attention weight. The final multi-head attended feature map $H_t^{\mathrm{MH}} \in \mathbb{R}^{h_l \times w_l \times (c_h \cdot n_h)}$ is the concatenation of the attended feature maps from all the heads along the channel dimension:

$$H_t^{\mathrm{MH}} = \mathrm{Concat}(H_t^{(1)}, \ldots, H_t^{(n_h)}). \tag{6}$$

In this way, the final attended feature maps not only integrate spatial information from the convolutional encoder but also capture temporal information from the multi-head temporal self-attention mechanism.
**Spatial Selective Gate.** The aforementioned module extends the self-attention mechanism to the temporal modeling of 2D image feature maps; however, it comes with a loss of fine-grained spatial resolution due to the GAP operation. To compensate for this, we introduce the spatial selective gate (SSG), a spatial attention mechanism that integrates the current and historical information. The attended feature maps from the temporal self-attention module and the feature maps of the current query are concatenated, on which we learn a spatial selective gate using a sub-network $N_{\mathrm{SSG}}$ with structure Conv$_{1,256}$-BN-SELU-Conv$_{1,256}$-BN-SELU-Conv$_{1,256}$-BN-SELU-Conv$_{1,256}$-Conv$_{1,256}$-Sigmoid. The final output is a pixel-wise weighted average of the attended maps $H_t^{\mathrm{MH}}$ and the current query's multi-head transformed feature maps $F_t^{\mathrm{MH}} \in \mathbb{R}^{h_l \times w_l \times (c_h \cdot n_h)}$, according to the SSG:

$$S_t = \mathrm{SSG} \circ F_t^{\mathrm{MH}} + (1 - \mathrm{SSG}) \circ H_t^{\mathrm{MH}}, \tag{7}$$

where $\circ$ denotes element-wise multiplication.

We add an SSG at each level of the temporal self-attention module. As the spatial dimensions are larger at shallow layers and we want to include contextual information while preserving the spatial resolution, we use dilated convolutions [36] with different dilation factors in the 4 convolutional blocks of the sub-network $N_{\mathrm{SSG}}$; specifically, from $\mathrm{conv}^2$ to $\mathrm{conv}^5$ the dilation factors are (1,2,4,1), (1,2,2,1), (1,1,2,1), and (1,1,1,1). Note that the SSG is computationally more efficient than directly forwarding the concatenated feature maps to the convolutional decoder.
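An illustrative PyTorch sketch of the SSG; the layer count and dilation factors are simplified relative to the per-level configuration described above:

```python
import torch
import torch.nn as nn

class SpatialSelectiveGate(nn.Module):
    """Simplified sketch of the SSG sub-network N_SSG (Sec. 3.3.2):
    dilated convolutions over the concatenated query and attended maps
    produce a pixel-wise sigmoid gate, which blends the two inputs as
    in Eq. (7). Dilation factors here are illustrative."""

    def __init__(self, channels, dilations=(1, 2, 2, 1)):
        super().__init__()
        layers = []
        in_ch = 2 * channels  # query and attended maps, concatenated
        for d in dilations:
            layers += [nn.Conv2d(in_ch, channels, 3, padding=d, dilation=d),
                       nn.BatchNorm2d(channels), nn.SELU()]
            in_ch = channels
        layers += [nn.Conv2d(channels, channels, 3, padding=1), nn.Sigmoid()]
        self.net = nn.Sequential(*layers)

    def forward(self, query_maps, attended_maps):
        gate = self.net(torch.cat([query_maps, attended_maps], dim=1))
        # Pixel-wise convex combination, Eq. (7).
        return gate * query_maps + (1.0 - gate) * attended_maps
```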
_3.3.3_ _Convolutional Decoder._ The outputs $S_t$ of the temporal self-attention module are fed into the convolutional decoder. The convolutional decoder predicts the video frame using 4 transposed convolutional layers with stride 2 on the feature maps, in the reverse order of the convolutional encoder. The fully-scaled feature maps then go through one convolutional layer with 32 filters and one convolutional layer with $c$ filters of size $1 \times 1$ that maps to the same number of channels $c$ as the input. In order to predict finer details, we utilize skip connections [26] to connect the spatio-temporally integrated maps at each level of the convolutional encoder to the corresponding level of the convolutional decoder, which allows the model to further fine-tune the predicted frames.
#### 3.4 Dual Discriminator GAN
We propose a dual discriminator GAN that uses both an image discriminator and a video discriminator to further enhance the future frame prediction of the convolutional transformer via adversarial training. The image discriminator $D_I$ critiques whether the current frame is generated or real on the basis of a single frame, to assess local consistency. The video discriminator $D_V$ critiques the prediction conditioned on the past frames, to assess global coherence. Specifically, we stack the past frames with the current generated or real frame in the temporal dimension, so the video discriminator is essentially a video classifier. This idea of combining local and global (contextual) discriminators is similar to adversarial image inpainting [37] but is used in a totally different context.
The network structures of the two discriminators are kept the
same except that we use 2D operations in image discriminator and
the corresponding 3D operations in the video discriminator. We
use PatchGAN architecture as described in [10] and use spectral
normalization [20] in each convolutional layer. In the 3D version,
the stride and kernel size in the temporal dimension were set at 1
and 2 respectively.
The method in Liu et al. [14] is similar to using the image discriminator only. Different from the video discriminator in Tulyakov et al. [32], which is applied to the whole synthetic video clip, our proposed video discriminator is conditioned on the real frames.
#### 3.5 Loss

For the adversarial training, we use the Wasserstein GAN with gradient penalty (WGAN-GP) setting [1, 7]. The generator $G$ is the mapping $I \to \tilde{I}_{t+1}$. For the discriminators, $D_V : (I, \tilde{I}_{t+1}) \to p[(I, \tilde{I}_{t+1}) \text{ is real}]$ and $D_I : \tilde{I}_{t+1} \to p[\tilde{I}_{t+1} \text{ is real}]$ are the video and image discriminators, respectively. The GAN loss is:

$$
\begin{aligned}
L_{adv}(G, D_I, D_V) ={} & \mathbb{E}_{I, \tilde{I}_{t+1}}[D_V(I, \tilde{I}_{t+1})] - \mathbb{E}_{I, I_{t+1}}[D_V(I, I_{t+1})] \\
& + \lambda \, \mathbb{E}_{I, \hat{I}_{t+1}}\big[(\|\nabla D_V(I, \hat{I}_{t+1})\|_2 - 1)^2\big] \\
& + \mathbb{E}_{\tilde{I}_{t+1}}[D_I(\tilde{I}_{t+1})] - \mathbb{E}_{I_{t+1}}[D_I(I_{t+1})] \\
& + \lambda \, \mathbb{E}_{\hat{I}_{t+1}}\big[(\|\nabla D_I(\hat{I}_{t+1})\|_2 - 1)^2\big]
\end{aligned} \tag{8}
$$

where $\hat{I}_{t+1} = \epsilon I_{t+1} + (1 - \epsilon)\tilde{I}_{t+1}$ with $\epsilon \sim U[0, 1]$. The penalty coefficient $\lambda$ is fixed at 10 in all our experiments.

In addition, we consider the pixel-wise $L_1$ loss of the prediction. The total loss $L$ is therefore:

$$L = L_{adv} + \|I_{t+1} - \tilde{I}_{t+1}\|_1 \tag{9}$$

We trained our models on each dataset separately by minimizing the loss above using the ADAM [11] algorithm with a learning rate of 0.0002 and a batch size of 5.
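A minimal PyTorch sketch (not the authors' released code) of the gradient penalty term in Eq. (8) for one discriminator, with $\lambda = 10$ as above; it works for either $D_I$ or $D_V$ when the conditioning frames are stacked into the input:

```python
import torch

def gradient_penalty(discriminator, real, fake, lam=10.0):
    """WGAN-GP penalty from Eq. (8): push the gradient norm at a random
    interpolate I_hat of real and fake inputs toward 1."""
    # One epsilon per sample, broadcast over all remaining dimensions.
    eps = torch.rand(real.size(0), *([1] * (real.dim() - 1)),
                     device=real.device)
    interp = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    grad, = torch.autograd.grad(discriminator(interp).sum(), interp,
                                create_graph=True)
    norm = grad.flatten(1).norm(2, dim=1)
    return lam * ((norm - 1.0) ** 2).mean()
```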
#### 3.6 Regularity Score

A regularity score based on the prediction error $e_t$ is calculated for each video frame:

$$r_{e_t} = 1 - \frac{e_t - \min_\tau e_\tau}{\max_\tau e_\tau - \min_\tau e_\tau} \tag{10}$$

In Hasan et al. [8], $e_t$ is the frame-wise reconstruction error $e_{\mathrm{MSE},t}$. In Liu et al. [14], $e_t$ is equivalently the negative frame-wise prediction
**Table 1: Video anomaly detection dataset details**
|Dataset|Total # frames/clips|Training # frames/clips|Testing # frames/clips|Anomaly Types|
|---|---|---|---|---|
|UCSD Ped2|4,560/28|2,550/16|2,010/12|biker, skater, vehicle|
|CUHK Avenue|30,652/37|15,328/16|15,324/21|running, loitering, object throwing|
|ShanghaiTech|315,307/437|274,516/330|40,791/107|biker, skater, vehicle, sudden motion|
PSNR (Peak Signal-to-Noise Ratio): $\mathrm{PSNR} = 10 \log_{10} \frac{\max(\tilde{I}_t)}{e_{\mathrm{MSE},t}}$. In this study, we use a setting similar to the two methods above, with $e_t = \log_{10} e_{\mathrm{MSE},t}$.
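A minimal sketch of the regularity score computation of Eq. (10) with this paper's choice of $e_t$:

```python
import numpy as np

def regularity_scores(mse_errors):
    """Frame-level regularity scores of Eq. (10) over one test video,
    with e_t = log10(e_{MSE,t}) as chosen in this paper."""
    e = np.log10(np.asarray(mse_errors, dtype=float))
    return 1.0 - (e - e.min()) / (e.max() - e.min())
```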
#### 4 EXPERIMENTS
In this section, we first introduce the three public datasets used
in our experiments, which follow the same setup as other similar
unsupervised video anomaly detection studies. Then, we report the
video anomaly detection performance and comparison with other
methods. Finally, we perform ablation studies to demonstrate the
contribution of each component and interpret the results based on
the proposed CT-D2GAN.
#### 4.1 Datasets
We evaluate our framework on three widely used public video anomaly detection datasets: the UCSD Ped2 dataset [13] [1], the CUHK Avenue dataset [16] [2], and the ShanghaiTech Campus (SH-Tech) dataset [18] [3]. We describe the dataset-specific characteristics and their effects on video anomaly detection performance; some details can be found in Table 1.
_4.1.1_ _UCSD Ped2._ UCSD Ped2 includes pedestrians and vehicles moving largely parallel to the camera plane.

_4.1.2_ _CUHK Avenue._ CUHK Avenue includes pedestrians and objects moving either parallel to or toward/away from the camera. Slight camera motion is present in the dataset, and some of the anomalies are staged actions.

_4.1.3_ _ShanghaiTech._ Different from the other datasets, the ShanghaiTech dataset is a multi-scene dataset (13 scenes) that includes pedestrians, vehicles, and sudden motions; the ratios of each scene in the training set and test set can differ.
#### 4.2 Evaluation
The model was trained and evaluated on a system with an NVIDIA GeForce 1080 Ti GPU and implemented with PyTorch. To measure the effectiveness of our proposed CT-D2GAN framework for video anomaly detection, we report the area under the receiver operating characteristic (ROC) curve, i.e., AUC. Specifically, the AUC is calculated by comparing the frame-level regularity scores with the frame-level ground truth labels.
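A sketch of this evaluation, assuming scikit-learn is available; since low regularity should indicate an anomaly, $1 - $ regularity is used as the anomaly score:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def frame_level_auc(labels, regularity):
    """Frame-level AUC (Sec. 4.2): ground-truth labels (1 = abnormal)
    compared against the anomaly score 1 - regularity."""
    return roc_auc_score(np.asarray(labels), 1.0 - np.asarray(regularity))
```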
[1] http://www.svcl.ucsd.edu/projects/anomaly/dataset.html
[2] http://www.cse.cuhk.edu.hk/leojia/projects/detectabnormal/dataset.html
[3] https://github.com/StevenLiuWen/sRNN_TSC_Anomaly_Detection#shanghaitechcampus-anomaly-detection-dataset
**Table 2: Frame-level video anomaly detection performance (AUC).**

|Method|UCSD Ped2|CUHK|SH-Tech|
|---|---|---|---|
|MPPCA+SF [19]|61.3|-|-|
|MDT [13, 19]|82.9|-|-|
|Conv-AE [8]|85.0 †|80.0 †|60.9 †|
|3D Conv [40]|91.2|80.9|-|
|Stacked RNN [18]|92.2|81.7|68.0|
|ConvLSTM-AE [17]|88.1|77.0|-|
|memAE [6]|94.1|83.3|71.2|
|memNormality [22]|97.0|**88.5**|70.5|
|ClusterAE [3]|96.5|86.0|73.3|
|AbnormalGAN [25]|93.5|-|-|
|Frame prediction [14]|95.4|85.1|72.8|
|Pred+Recon [31]|96.3|85.1|73.0|
|CT-D2GAN|**97.2**|85.9|**77.7**|

† Evaluated in [14]. "-": not evaluated in the study. Methods are ordered by publication year; the best performance on each dataset is highlighted in boldface.
#### 4.3 Video Anomaly Detection
To demonstrate the effectiveness of our proposed CT-D2GAN framework for video anomaly detection, we compare it against 12 different baseline methods. Among those, MPPCA (mixture of probabilistic principal component analyzers) + SF (social force) [19] and MDT (mixture of dynamic textures) [13, 19] are handcrafted feature based methods; Conv-AE [8], 3D Conv [40], Stacked RNN [18], and ConvLSTM-AE [17] are encoder-decoder based approaches; memAE [6], memNormality [22], and ClusterAE [3] are recent encoder-decoder based methods enhanced with a memory module or clustering; AbnormalGAN [25], Frame prediction [14], and Pred+Recon [31] are methods based on adversarial training.

Table 2 shows the frame-level video anomaly detection performance (AUC) of the various approaches. We observe that encoder-decoder based approaches in general outperform handcrafted feature based methods. This is because handcrafted features are usually extracted based upon a different objective and can thus be sub-optimal. Within the encoder-decoder based approaches, ConvLSTM-AE outperforms Conv-AE since it can better capture temporal information. We also notice that adversarial training based methods perform better than most baseline methods. Finally, our
**Figure 2: Examples of video anomaly detection. The blue lines in the line graphs delineate frame-level regularity scores. The green and red shaded segments in the line graphs indicate the ground-truth normal and abnormal video segments, respectively. The frames in the green boxes are regular frames from the regular video segments; the frames in the red boxes are abnormal frames from abnormal video segments. The abnormal objects are annotated.**
proposed CT-D2GAN framework achieves the best performance on UCSD Ped2 and SH-Tech, and is close to the best performance on CUHK [22]. This is because our proposed model can not only capture the spatio-temporal patterns explicitly and effectively through the convolutional transformer but also leverage the dual discriminator GAN based adversarial training to maintain local consistency at the frame level and global coherence in video sequences. Recent memory or clustering enhanced methods [3, 6, 22] show good performance and are orthogonal to our proposed framework; they could be integrated with it in future work to further improve performance. Examples of video anomaly detection results overlaid on the abnormal activity ground truth of all three datasets are shown in Figure 2, along with example video frames from the regular and abnormal video segments.

Due to the multi-scene nature of the SH-Tech dataset, we also analyzed the most ample single scene, which constitutes 25% (83/330 clips) of the training set and 32% (34/107 clips) of the test set; its AUC is 87.5, which is much better than on the overall dataset and reaches a similar level to the other single-scene datasets. This could imply that generalizing to less ample scenes is still a challenging task given an unbalanced training set.
Thanks to the convolutional transformer architecture and optimizations such as the spatial selective gate, our model is computationally efficient: at inference time it runs at 45 FPS on one NVIDIA GeForce 1080 Ti GPU.
**Table 3: Video anomaly detection performance under different ablation settings on the UCSD Ped2 dataset.**

|Ablation setting|AUC|
|---|---|
|Conv Transformer|94.2|
|Conv Transformer + image discriminator|95.7|
|Conv Transformer + video discriminator|96.9|
|U-Net + dual discriminator|95.5|
|CT-D2GAN|97.2|
#### 4.4 Ablation Studies

To understand how each component contributes to the anomaly detection task, we conducted ablation studies with different settings: (1) the convolutional transformer only, without adversarial training (Conv Transformer); (2) Conv Transformer with the image discriminator only; (3) Conv Transformer with the video discriminator only; (4) a U-Net based generator (as utilized in image-to-image translation [10] and video anomaly detection [14]) with the dual discriminator; and our full CT-D2GAN model. The performance comparison can be found in Table 3. We observe that adversarial training enhances anomaly detection performance with either the image discriminator or the video discriminator. The video discriminator alone achieves nearly the same performance as the dual discriminator, but we observed that the loss decreased faster when it was combined with the image discriminator. Using the image discriminator alone was not as effective, and the loss was less stable. Finally, CT-D2GAN achieved superior performance to U-Net with the dual discriminator, suggesting that the convolutional transformer can better capture the spatio-temporal dynamics and thus make a more accurate detection.
#### 4.5 Interpretation

We illustrate an example of a predicted future frame $\tilde{I}_{t+1}$ and compare it with the previous frame $I_t$ and the ground-truth future frame $I_{t+1}$ in Figure 3. The prediction performance is poor for the anomaly (red box). We also note that the model is able to capture the temporal dynamics by predicting the future behavior in the normal part of the image (green box).
**Self-attention weights under perturbation.** It is not straightforward to directly interpret the temporal self-attention weight vector, as temporal self-attention is applied to an abstract representation of the video. Therefore, to further investigate the effectiveness of temporal self-attention, we perturb two frames of the video and run inference on this perturbed video segment. For one frame (Figure 4, red), we added random Gaussian noise with zero mean
**Figure 3: An example showing the future frame prediction in the normal part of the image (green box, a pedestrian in this case), where we observe the model capturing the dynamics of the behavior, and in the abnormal part of the image (red box, a bicycle in this case), where there is a large prediction error. From left to right, we show the last frame $I_t$ in the input video clip, the predicted future frame $\tilde{I}_{t+1}$, and the ground-truth future frame $I_{t+1}$.**
**Figure 4: Temporal self-attention weights in perturbed video**
**clip.**
and 0.1 standard deviation to the image to simulate deterioration in video quality; for the other frame (Figure 4, purple), we scaled the optical flow maps by 0.9 to simulate frame rate distortion. We plot the temporal attention weights for the frame right after the two perturbed frames in Figure 4. The weights assigned to the perturbed frames are clearly lower than the others, implying less contribution to the attended map. This suggests that the self-attention module can adaptively select relevant feature maps and is robust to input noise.
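A sketch of these two perturbations, with $\sigma = 0.1$ and a flow scale of 0.9 as above; the index arguments are illustrative:

```python
import numpy as np

def perturb_clip(frames, flows, noise_idx, flow_idx, sigma=0.1, scale=0.9):
    """The two perturbations of Sec. 4.5: zero-mean Gaussian noise
    (std 0.1) added to one frame to simulate quality deterioration, and
    one frame's optical flow maps scaled by 0.9 to simulate frame-rate
    distortion. Float arrays are assumed; which frames to perturb is
    left to the caller."""
    frames = [f.copy() for f in frames]
    flows = [f.copy() for f in flows]
    frames[noise_idx] += np.random.normal(0.0, sigma, frames[noise_idx].shape)
    flows[flow_idx] *= scale
    return frames, flows
```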
#### 5 CONCLUSIONS

In this paper, we developed Convolutional Transformer based Dual Discriminator Generative Adversarial Networks (CT-D2GAN) to perform unsupervised video anomaly detection. The convolutional transformer, which consists of three components, i.e., a convolutional encoder to capture the spatial patterns of the input video clip, a temporal self-attention module to encode the temporal dynamics, and a convolutional decoder to integrate spatio-temporal features, was employed to perform future frame prediction. A dual discriminator based adversarial training approach was used to maintain the local consistency of the predicted frame and the global coherence conditioned on the previous frames. Thorough experiments on three widely used video anomaly detection datasets demonstrate that our proposed CT-D2GAN detects anomalous frames with superior performance.
#### REFERENCES
[1] Martin Arjovsky, Soumith Chintala, and Léon Bottou. 2017. Wasserstein generative adversarial networks. In International Conference on Machine Learning
_(ICML). PMLR, 214–223._
[2] Thomas Brox, Andrés Bruhn, Nils Papenberg, and Joachim Weickert. 2004. High
accuracy optical flow estimation based on a theory for warping. In European
_Conference on Computer Vision (ECCV). Springer, 25–36._
[3] Yunpeng Chang, Zhigang Tu, Wei Xie, and Junsong Yuan. 2020. Clustering
Driven Deep Autoencoder for Video Anomaly Detection. In European Conference
_on Computer Vision (ECCV). Springer, 329–345._
[4] Yong Shean Chong and Yong Haur Tay. 2017. Abnormal event detection in
videos using spatiotemporal autoencoder. In International Symposium on Neural
_Networks (ISNN). Springer, 189–196._
[5] Fei Dong, Yu Zhang, and Xiushan Nie. 2020. Dual Discriminator Generative
Adversarial Network for Video Anomaly Detection. IEEE Access 8 (2020), 88170–
88176.
[6] Dong Gong, Lingqiao Liu, Vuong Le, Budhaditya Saha, Moussa Reda Mansour,
Svetha Venkatesh, and Anton van den Hengel. 2019. Memorizing normality
to detect anomaly: Memory-augmented deep autoencoder for unsupervised
anomaly detection. In IEEE International Conference on Computer Vision (ICCV).
IEEE, 1705–1714.
[7] Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and
Aaron C Courville. 2017. Improved training of Wasserstein GANs. In Advances
_in Neural Information Processing Systems (NIPS). 5767–5777._
[8] Mahmudul Hasan, Jonghyun Choi, Jan Neumann, Amit K Roy-Chowdhury, and
Larry S Davis. 2016. Learning temporal regularity in video sequences. In IEEE
_Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 733–742._
[9] Eddy Ilg, Nikolaus Mayer, Tonmoy Saikia, Margret Keuper, Alexey Dosovitskiy,
and Thomas Brox. 2017. Flownet 2.0: Evolution of optical flow estimation with
deep networks. In IEEE Conference on Computer Vision and Pattern Recognition
_(CVPR). IEEE, 2462–2470._
[10] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. 2017. Image-to-Image Translation with Conditional Adversarial Networks. In IEEE Conference
_on Computer Vision and Pattern Recognition (CVPR). IEEE, 5967–5976._
[11] Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. International Conference on Learning Representations (ICLR) (2015).
[12] Günter Klambauer, Thomas Unterthiner, Andreas Mayr, and Sepp Hochreiter.
2017. Self-normalizing neural networks. In Advances in Neural Information
_Processing Systems (NIPS). 971–980._
[13] Weixin Li, Vijay Mahadevan, and Nuno Vasconcelos. 2014. Anomaly detection
and localization in crowded scenes. IEEE Transactions on Pattern Analysis and
_Machine Intelligence 36, 1 (2014), 18–32._
[14] Wen Liu, Weixin Luo, Dongze Lian, and Shenghua Gao. 2018. Future Frame
Prediction for Anomaly Detection – A New Baseline. In IEEE Conference on
_Computer Vision and Pattern Recognition (CVPR). IEEE, 6536–6545._
[15] Jonathan Long, Evan Shelhamer, and Trevor Darrell. 2015. Fully convolutional
networks for semantic segmentation. In IEEE Conference on Computer Vision and
_Pattern Recognition (CVPR). IEEE, 3431–3440._
[16] Cewu Lu, Jianping Shi, and Jiaya Jia. 2013. Abnormal event detection at 150 FPS
in MATLAB. In IEEE International Conference on Computer Vision (ICCV). IEEE,
2720–2727.
[17] Weixin Luo, Wen Liu, and Shenghua Gao. 2017. Remembering history with
convolutional LSTM for anomaly detection. In IEEE International Conference on
_Multimedia and Expo (ICME). IEEE, 439–444._
-----
[18] Weixin Luo, Wen Liu, and Shenghua Gao. 2017. A revisit of sparse coding based
anomaly detection in stacked RNN framework. IEEE International Conference on
_Computer Vision (ICCV) 1, 2 (2017), 3._
[19] Vijay Mahadevan, Weixin Li, Viral Bhalodia, and Nuno Vasconcelos. 2010. Anomaly detection in crowded scenes. In IEEE Conference on Computer Vision and
_Pattern Recognition (CVPR). IEEE, 1975–1981._
[20] Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. 2018.
Spectral Normalization for Generative Adversarial Networks. In International
_Conference on Learning Representations (ICLR)._
[21] Tu Nguyen, Trung Le, Hung Vu, and Dinh Phung. 2017. Dual discriminator
generative adversarial nets. In Advances in neural information processing systems
_(NIPS). 2670–2680._
[22] Hyunjong Park, Jongyoun Noh, and Bumsub Ham. 2020. Learning Memoryguided Normality for Anomaly Detection. In IEEE Conference on Computer Vision
_and Pattern Recognition (CVPR). 14372–14381._
[23] Niki Parmar, Ashish Vaswani, Jakob Uszkoreit, Lukasz Kaiser, Noam Shazeer,
Alexander Ku, and Dustin Tran. 2018. Image Transformer. In International
_Conference on Machine Learning (ICML). PMLR, 4052–4061._
[24] Yao Qin, Dongjin Song, Haifeng Chen, Wei Cheng, Guofei Jiang, and Garrison W.
Cottrell. 2017. A Dual-Stage Attention-Based Recurrent Neural Network for
Time Series Prediction. In International Joint Conference on Artificial Intelligence
_(IJCAI). 2627–2633._
[25] Mahdyar Ravanbakhsh, Moin Nabi, Enver Sangineto, Lucio Marcenaro, Carlo
Regazzoni, and Nicu Sebe. 2017. Abnormal Event Detection in Videos using
Generative Adversarial Nets. IEEE International Conference on Image Processing
_(ICIP) (2017), 1577–1581._
[26] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. 2015. U-net: Convolutional
networks for biomedical image segmentation. In International Conference on
_Medical Image Computing and Computer-Assisted Intervention (MICCAI). Springer,_
234–241.
[27] Xingjian Shi, Zhourong Chen, Hao Wang, Dit-Yan Yeung, Wai-Kin Wong, and
Wang-chun Woo. 2015. Convolutional LSTM network: A machine learning approach for precipitation nowcasting. In Advances in Neural Information Processing
_Systems (NIPS). 802–810._
[28] Karen Simonyan and Andrew Zisserman. 2014. Two-stream convolutional networks for action recognition in videos. In _Advances in Neural Information Processing Systems (NIPS)._ 568–576.
[29] Dongjin Song and Dacheng Tao. 2010. Biologically Inspired Feature Manifold for
Scene Classification. IEEE Transactions on Image Processing 19, 1 (2010), 174–184.
[30] Waqas Sultani, Chen Chen, and Mubarak Shah. 2018. Real-World Anomaly
Detection in Surveillance Videos. In IEEE Conference on Computer Vision and
_Pattern Recognition (CVPR). IEEE, 6479–6488._
[31] Yao Tang, Lin Zhao, Shanshan Zhang, Chen Gong, Guangyu Li, and Jian Yang.
2020. Integrating prediction and reconstruction for anomaly detection. Pattern
_Recognition Letters 129 (2020), 123–130._
[32] Sergey Tulyakov, Ming-Yu Liu, Xiaodong Yang, and Jan Kautz. 2018. MoCoGAN:
Decomposing Motion and Content for Video Generation. In IEEE Conference on
_Computer Vision and Pattern Recognition (CVPR). IEEE, 1526–1535._
[33] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones,
Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you
need. In Advances in Neural Information Processing Systems (NIPS). 6000–6010.
[34] Dan Xu, Yan Yan, Elisa Ricci, and Nicu Sebe. 2017. Detecting anomalous events
in videos by learning deep representations of appearance and motion. Computer
_Vision and Image Understanding 156 (2017), 117–127._
[35] Han Xu, Pengwei Liang, Wei Yu, Junjun Jiang, and Jiayi Ma. 2019. Learning
a Generative Model for Fusing Infrared and Visible Images via Conditional
Generative Adversarial Network with Dual Discriminators.. In International
_Joint Conference on Artificial Intelligence (IJCAI). 3954–3960._
[36] Fisher Yu and Vladlen Koltun. 2016. Multi-scale context aggregation by dilated
convolutions. International Conference on Learning Representations (ICLR) (2016).
[37] Jiahui Yu, Zhe Lin, Jimei Yang, Xiaohui Shen, Xin Lu, and Thomas S. Huang. 2018.
Generative Image Inpainting With Contextual Attention. In IEEE Conference on
_Computer Vision and Pattern Recognition (CVPR). IEEE, 5505–5514._
[38] Chuxu Zhang, Dongjin Song, Yuncong Chen, Xinyang Feng, Cristian Lumezanu,
Wei Cheng, Jingchao Ni, Bo Zong, Haifeng Chen, and V. Nitesh Chawla. 2019. A
Deep Neural Network for Unsupervised Anomaly Detection and Diagnosis in
Multivariate Time Series Data. In Association for the Advancement of Artificial
_Intelligence (AAAI). AAAI, 1409–1416._
[39] Han Zhang, Ian Goodfellow, Dimitris Metaxas, and Augustus Odena. 2019. Self-attention generative adversarial networks. In International Conference on Machine
_Learning (ICML). PMLR, 7354–7363._
[40] Yiru Zhao, Bing Deng, Chen Shen, Yao Liu, Hongtao Lu, and Xian-Sheng Hua.
2017. Spatio-Temporal AutoEncoder for Video Anomaly Detection. In ACM
_International Conference on Multimedia (ACM MM). ACM, 1933–1941._
-----
|
{
"disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/2107.13720, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "GREEN",
"url": "https://arxiv.org/pdf/2107.13720"
}
| 2,021
|
[
"JournalArticle",
"Book"
] | true
| 2021-07-29T00:00:00
|
[
{
"paperId": "18f207d8dab7357f4f674211ec4f150de1c93a0e",
"title": "Learning Memory-Guided Normality for Anomaly Detection"
},
{
"paperId": "fe09f7a379944444201552e952b910188c0aeaca",
"title": "Integrating prediction and reconstruction for anomaly detection"
},
{
"paperId": "74fd298aa27b390194596a91f3db949a8ce8e72e",
"title": "Learning a Generative Model for Fusing Infrared and Visible Images via Conditional Generative Adversarial Network with Dual Discriminators"
},
{
"paperId": "d65eb30e5f0d2013fd5e4f45d1413bc2969ee803",
"title": "Memorizing Normality to Detect Anomaly: Memory-Augmented Deep Autoencoder for Unsupervised Anomaly Detection"
},
{
"paperId": "d9cb449af41b6da76c98475ff268b3ef06d6dec1",
"title": "A Deep Neural Network for Unsupervised Anomaly Detection and Diagnosis in Multivariate Time Series Data"
},
{
"paperId": "1db9bd18681b96473f3c82b21edc9240b44dc329",
"title": "Image Transformer"
},
{
"paperId": "84de7d27e2f6160f634a483e8548c499a2cda7fa",
"title": "Spectral Normalization for Generative Adversarial Networks"
},
{
"paperId": "6b0bbf3e7df725cc3b781d2648e41782cb3d8539",
"title": "Generative Image Inpainting with Contextual Attention"
},
{
"paperId": "96ed8ce9ef9fc475db9e02c79f984dc110409b62",
"title": "Real-World Anomaly Detection in Surveillance Videos"
},
{
"paperId": "8a6acba7fb2aad1299fcf35701417e063d410ed4",
"title": "Future Frame Prediction for Anomaly Detection - A New Baseline"
},
{
"paperId": "fef6f1e04fa64f2f26ac9f01cd143dd19e549790",
"title": "Spatio-Temporal AutoEncoder for Video Anomaly Detection"
},
{
"paperId": "99dff291f260b3cc3ff190106b0c2e3e685223a4",
"title": "A Revisit of Sparse Coding Based Anomaly Detection in Stacked RNN Framework"
},
{
"paperId": "46bc2a73ff36d00357eda4e8051f0b015b63e246",
"title": "Dual Discriminator Generative Adversarial Nets"
},
{
"paperId": "9d5290fadb7625862a966e0330bd0f9e111fc99d",
"title": "Abnormal event detection in videos using generative adversarial nets"
},
{
"paperId": "e76edb86f270c3a77ed9f5a1e1b305461f36f96f",
"title": "MoCoGAN: Decomposing Motion and Content for Video Generation"
},
{
"paperId": "acd87843a451d18b4dc6474ddce1ae946429eaf1",
"title": "Wasserstein Generative Adversarial Networks"
},
{
"paperId": "792250ae660b7c25f85eeea7dcae623e4301d97c",
"title": "Remembering history with convolutional LSTM for anomaly detection"
},
{
"paperId": "204e3073870fae3d05bcbc2f6a8e263d9b72e776",
"title": "Attention is All you Need"
},
{
"paperId": "424a6e62084d919bfc2e39a507c263e5991ebdad",
"title": "Self-Normalizing Neural Networks"
},
{
"paperId": "76624f8ff1391e942c3313b79ed08a335aa5077a",
"title": "A Dual-Stage Attention-Based Recurrent Neural Network for Time Series Prediction"
},
{
"paperId": "edf73ab12595c6709f646f542a0d2b33eb20a3f4",
"title": "Improved Training of Wasserstein GANs"
},
{
"paperId": "e5366a704ffa3b41aacd385f3c087ec3fd566934",
"title": "Detecting anomalous events in videos by learning deep representations of appearance and motion"
},
{
"paperId": "527cc8cd2af06a9ac2e5cded806bab5c3faad9cf",
"title": "Abnormal Event Detection in Videos using Spatiotemporal Autoencoder"
},
{
"paperId": "edd846e76cacfba5be37da99c006e3ccc9b861b0",
"title": "FlowNet 2.0: Evolution of Optical Flow Estimation with Deep Networks"
},
{
"paperId": "8acbe90d5b852dadea7810345451a99608ee54c7",
"title": "Image-to-Image Translation with Conditional Adversarial Networks"
},
{
"paperId": "97e7c94a78ae17cfb90848c1cfca8c431082a7b2",
"title": "Learning Temporal Regularity in Video Sequences"
},
{
"paperId": "7f5fc84819c0cf94b771fe15141f65b123f7b8ec",
"title": "Multi-Scale Context Aggregation by Dilated Convolutions"
},
{
"paperId": "f9c990b1b5724e50e5632b94fdb7484ece8a6ce7",
"title": "Convolutional LSTM Network: A Machine Learning Approach for Precipitation Nowcasting"
},
{
"paperId": "6364fdaa0a0eccd823a779fcdd489173f938e91a",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"paperId": "a6cb366736791bcccc5c8639de5a8f9636bf87e8",
"title": "Adam: A Method for Stochastic Optimization"
},
{
"paperId": "6fc6803df5f9ae505cae5b2f178ade4062c768d0",
"title": "Fully convolutional networks for semantic segmentation"
},
{
"paperId": "67dccc9a856b60bdc4d058d83657a089b8ad4486",
"title": "Two-Stream Convolutional Networks for Action Recognition in Videos"
},
{
"paperId": "869b17632ed4f19f93b3b58dcaa9f0b8e92108f3",
"title": "Abnormal Event Detection at 150 FPS in MATLAB"
},
{
"paperId": "9d3f0d47449c7db37d1bae3b70db2928610a8db7",
"title": "Anomaly detection in crowded scenes"
},
{
"paperId": "91228e00fe33ed6072cfe849ab9e98160461549d",
"title": "High Accuracy Optical Flow Estimation Based on a Theory for Warping"
},
{
"paperId": "7c75203739f5f89e109b11144d170d4d3f2a6abc",
"title": "Clustering Driven Deep Autoencoder for Video Anomaly Detection"
},
{
"paperId": "e37daeaa20bc8cd68db07641201faf1db3b8c31d",
"title": "Dual Discriminator Generative Adversarial Network for Video Anomaly Detection"
},
{
"paperId": "aae932cf9c2434f52b03991fcab050a61a960d48",
"title": "Anomaly Detection and Localization in Crowded Scenes"
},
{
"paperId": "5da739abaa85ba155c6aa76a29d5cbdefb6f018e",
"title": "Biologically Inspired Feature Manifold for Scene Classification"
},
{
"paperId": null,
"title": "Viral Bhalodia, and Nuno Vasconcelos"
}
] | 14,460
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/014be151dfab3a65e5da397810b0bd834b2aa06d
|
[
"Computer Science"
] | 0.846703
|
A Survey of Local Differential Privacy and Its Variants
|
014be151dfab3a65e5da397810b0bd834b2aa06d
|
arXiv.org
|
[
{
"authorId": "2215284412",
"name": "Likun Qin"
},
{
"authorId": "2238048286",
"name": "Nan Wang"
},
{
"authorId": "2234372112",
"name": "Tianshuo Qiu"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"ArXiv"
],
"alternate_urls": null,
"id": "1901e811-ee72-4b20-8f7e-de08cd395a10",
"issn": "2331-8422",
"name": "arXiv.org",
"type": null,
"url": "https://arxiv.org"
}
|
The introduction and advancements in Local Differential Privacy (LDP) variants have become a cornerstone in addressing the privacy concerns associated with the vast data produced by smart devices, which forms the foundation for data-driven decision-making in crowdsensing. While harnessing the power of these immense data sets can offer valuable insights, it simultaneously poses significant privacy risks for the users involved. LDP, a distinguished privacy model with a decentralized architecture, stands out for its capability to offer robust privacy assurances for individual users during data collection and analysis. The essence of LDP is its method of locally perturbing each user's data on the client-side before transmission to the server-side, safeguarding against potential privacy breaches at both ends. This article offers an in-depth exploration of LDP, emphasizing its models, its myriad variants, and the foundational structure of LDP algorithms.
|
# A Survey of Local Differential Privacy and Its Variants
### Likun Qin, Nan Wang, Tianshuo Qiu
##### Department of Electrical and Computer Engineering, Shandong University, Jinan, China
Abstract—The introduction and advancements in Local Differential Privacy (LDP) variants have become a cornerstone in
addressing the privacy concerns associated with the vast data
produced by smart devices, which forms the foundation for data-driven decision-making in crowdsensing. While harnessing the
power of these immense data sets can offer valuable insights,
it simultaneously poses significant privacy risks for the users
involved. LDP, a distinguished privacy model with a decentralized architecture, stands out for its capability to offer robust
privacy assurances for individual users during data collection and
analysis. The essence of LDP is its method of locally perturbing
each user’s data on the client-side before transmission to the
server-side, safeguarding against potential privacy breaches at
both ends. This article offers an in-depth exploration of LDP,
emphasizing its models, its myriad variants, and the foundational
structure of LDP algorithms.
I. INTRODUCTION
Collecting and analyzing data introduces significant privacy
concerns because it often includes sensitive user information.
With the advent of sophisticated data fusion and analysis
methods, user data becomes even more susceptible to breaches
and exposure in this era of big data. For instance, by studying
appliance usage, adversaries can deduce daily routines or
behaviors of individuals, like when they are home or their
specific activities such as watching TV or cooking. It’s crucial
to prioritize the protection of personal data when gathering
information from diverse devices. Currently, the European
Union (EU) has released the GDPR [1], which oversees EU
data protection laws for its citizens and outlines the specifics
related to the handling of personal data. Similarly, the U.S. National Institute of Standards and Technology (NIST) is in the
process of crafting privacy frameworks. These frameworks aim
to more effectively recognize, evaluate, and address privacy
risks, enabling individuals to embrace innovative technologies
with increased trust and confidence [2], [3].
From a privacy-protection standpoint, differential privacy (DP) was introduced over a decade ago [4], [5]. Recognized as a robust framework for safeguarding privacy, it is often termed global DP or centralized DP. DP's strength lies in its mathematical rigor; it operates independently of an adversary's background knowledge and assures potent privacy protection for users. It has found applications across various domains [6]. However, DP assumes the presence of a trustworthy server, which can be a challenge since many online platforms or crowdsourcing systems might have untrustworthy servers keen on user data statistics [7], [8].
Emerging from the concept of DP, local differential privacy
(LDP) was introduced [9]. LDP stands as a decentralized
version of DP, offering individualized privacy assurances and
making no assumptions about third-party server trustworthiness. LDP has become a focal point in privacy research due
to its theoretical significance and practical implications [10].
Numerous corporations, including Apple’s iOS [11], Google
Chrome, and the Windows operating system, have integrated
LDP-driven algorithms into their systems. Owing to its robust
capabilities, LDP has become a preferred choice to address
individual privacy concerns during various statistical and analytical operations. This includes tasks like frequency and mean
value estimation [12], the identification of heavy hitters [13],
k-way marginal release, empirical risk minimization (ERM),
federated learning, and deep learning.
While LDP is powerful, it is not without its challenges, notably in striking an optimal balance between utility and privacy [14]. To address this, there are two primary approaches: first, devising improved mechanisms, which has led to the introduction of numerous LDP-based protocols and sophisticated mechanisms in academic circles; second, revisiting the definition of LDP itself, with researchers suggesting more flexible privacy concepts to better cater to the utility-privacy balance required for real-world applications. Given the growing significance of LDP, a thorough survey of the topic is both timely and essential. While some literature reviewing LDP exists, its focus has often been narrow, limited to specific applications or certain types of mechanisms.
In this paper, we delve deep into the world of LDP
and its various offshoots, meticulously studying their recent
advancements and associated mechanisms. We embark on
a thorough exploration of the foundational principles that
drive LDP and the evolutionary trajectories of its multiple
variants. We aim to identify the cutting-edge developments,
shedding light on the innovations that have shaped these
privacy tools and the challenges they aim to address in our
contemporary digital landscape. Furthermore, we analyze the
specific mechanisms that support and enhance the capabilities
of LDP, understanding their technical intricacies and the real-world applications they cater to. Through this comprehensive study, we aspire to provide readers with a panoramic view of the current state of LDP research, setting the stage for future inquiries and innovations in this critical domain.
II. LOCAL DIFFERENTIAL PRIVACY, PROPERTIES AND MECHANISMS
In this section, we study LDP, its properties, and LDP-based mechanisms. We start with the definition of LDP.
Definition 1 (ε-Local Differential Privacy (ε-LDP)).
A randomized mechanism M satisfies ε-LDP if and only if for any pair of input values v, v′ in the domain of M, and for any possible output y ∈ Y, it holds that

P[M(v) = y] ≤ e^ε · P[M(v′) = y],    (1)

where P[·] denotes probability and ε is the privacy budget. A smaller ε means stronger privacy protection, and vice versa.
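For concreteness, here is a minimal Python sketch (illustrative, not from the paper) that recovers the tightest ε of a mechanism directly from inequality (1), using binary randomized response with truth probability p as the example:

```python
# Compute the smallest eps satisfying (1) for binary randomized
# response: report the true bit w.p. p, the flipped bit w.p. 1 - p.
import math

def rr_prob(v: int, y: int, p: float) -> float:
    """P[M(v) = y] for binary randomized response."""
    return p if y == v else 1.0 - p

def tightest_epsilon(p: float) -> float:
    """Maximize the ratio in (1) over all inputs v, v' and outputs y."""
    worst = max(rr_prob(v, y, p) / rr_prob(v2, y, p)
                for v in (0, 1) for v2 in (0, 1) for y in (0, 1))
    return math.log(worst)

print(tightest_epsilon(0.75))  # ln(0.75 / 0.25) = ln 3 ~ 1.0986
```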
The basic properties of LDP include the following:
Composition [15]: Given two mechanisms M₁ and M₂ that provide ε₁-LDP and ε₂-LDP respectively, their sequential composition provides (ε₁ + ε₂)-LDP:

M(v) = (M₁(v), M₂(v)) ⟹ M is (ε₁ + ε₂)-LDP.    (2)
Post-processing: Any function applied to the output of an ε-LDP mechanism retains the ε-LDP guarantee:

If M(v) is ε-LDP, then f(M(v)) is also ε-LDP.    (3)
Robustness to Side Information: LDP guarantees hold
even if an adversary has access to auxiliary or side information.
Utility-Privacy Tradeoff: Generally, a lower value of ε
implies stronger privacy but might result in reduced utility
of the perturbed data.
Independence of Background Knowledge: The privacy
guarantees of LDP mechanisms are designed to hold regardless
of any background knowledge an adversary might have.
Next, we study mechanisms that satisfy LDP:
Randomized Response [16]
The Randomized Response Mechanism is a simple yet effective approach to achieving LDP. It is particularly used for binary data, i.e., when a user's data item is either 0 or 1. The mechanism operates as follows:
1) With probability 1/2, the user truthfully answers a question.
2) With probability 1/2, the user randomly answers the question.
Mathematically, given a user's true data item v ∈ {0, 1}, the mechanism outputs v with probability 3/4 and outputs 1 − v (i.e., the opposite of v) with probability 1/4. The probability mass function (pmf) is given by:

P[M(v) = 1] = (1/2)v + 1/4,    (4)
P[M(v) = 0] = (1/2)(1 − v) + 1/4.    (5)

This mechanism ensures ε-LDP with ε = ln(3).

Laplace Mechanism [17]
The Laplace Mechanism adds noise drawn from the Laplace distribution to the true value of the data. For LDP, this mechanism can be adjusted as: given a data item v, the mechanism outputs

M(v) = v + Lap(∆f / ε),

where ∆f is the sensitivity of the function f and Lap(·) represents the Laplace distribution.

Gaussian Mechanism
Similar to the Laplace Mechanism, the Gaussian Mechanism adds noise, but from the Gaussian distribution. Given a data item v, the mechanism outputs

M(v) = v + N(0, σ²),

where σ² determines the amount of noise based on the desired ε and function sensitivity ∆f.

Exponential Mechanism
The Exponential Mechanism selects an output based on a scoring function and weights outputs with the exponential of their score. Given a set of possible outputs R, a data item v, and a scoring function q(v, r), the probability of selecting output r is proportional to

exp(ε · q(v, r) / (2∆q)),

where ∆q is the sensitivity of q.

Perturbed Histogram Mechanism
For a set of items, instead of perturbing each item, this mechanism perturbs the histogram of the data items. Given a data item set V, the mechanism constructs a histogram H and then outputs

M(H) = H + Lap(∆H / ε),

where ∆H is the sensitivity of the histogram construction.
Observe that each mechanism’s efficacy is closely tied to the
sensitivity of the query, denoted as ∆f . In the realm of Local
Differential Privacy (LDP), this sensitivity can often grow
significantly, especially when the input domain is vast. The
larger the sensitivity, the more noise needs to be introduced
by the mechanism to ensure the desired privacy level. This
can lead to significant distortion in the data, compromising its
utility.
Furthermore, as the input support size increases, maintaining the desired privacy guarantee becomes a challenge. Noise
calibrated to a high sensitivity can sometimes overshadow the
actual data, rendering the results almost meaningless or leading
to misinterpretations.
The consequence of this is a pronounced tradeoff between
utility and privacy. Achieving stronger privacy often means
accepting reduced accuracy and utility in the results, and vice
versa. For applications that require high precision, this can be
problematic. It implies that while these mechanisms provide a
robust privacy guarantee in theory, their practical applicability
can be constrained, especially in scenarios where fine-grained
insights from data are crucial.
Hence, while the promise of LDP is enticing, its real-world implementation requires careful consideration of the utility-privacy balance, pushing researchers to seek more efficient mechanisms or modified privacy models to better cater to practical needs.
III. ADVANCED LDP MECHANISMS
As we mentioned in the introduction, there are typically two ways to improve the utility-privacy tradeoff provided by LDP. One is to design dedicated mechanisms or advanced protocols. The other is to relax the definition of LDP to enhance data utility. In this section, we summarize several advanced LDP algorithms aiming to improve the general utility-privacy tradeoff.
RAPPOR [10] (Randomized Aggregatable Privacy-Preserving Ordinal Response):
Introduced by Google, RAPPOR enhances the randomized response mechanism through the incorporation of Bloom filters. Each user's value is hashed multiple times into a Bloom filter, which is then perturbed using the RR technique. This allows multiple string values to be encoded before randomization. Advantage: Its main strength lies in collecting statistics about low-frequency items in the user population. It can provide meaningful insights even when items are not commonly observed.
Local Hashing [12]:
Addressing the problem of efficiency in the RR technique
when dealing with a large domain of inputs, local hashing
maps the original vast domain into a smaller domain using
hash functions. This condensed domain can then be analyzed
using traditional RR techniques. Advantage: It substantially
reduces the noise introduced in the randomization process,
enabling accurate estimation of frequencies for individual
items in the domain. This mechanism improves the utility,
especially when the original domain is considerably large.
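A sketch in the spirit of the optimized local hashing protocol of [12] makes the idea concrete: each user hashes into g buckets with a private seed, randomizes the bucket, and the server debiases support counts (g ≈ e^ε + 1 is the choice analyzed in [12]; the hashing scheme below is an illustrative stand-in):

```python
import hashlib
import math
import random

def bucket(seed: int, value: str, g: int) -> int:
    """Seeded hash of a value into one of g buckets."""
    digest = hashlib.sha256(f"{seed}:{value}".encode()).digest()
    return int.from_bytes(digest[:8], "big") % g

def encode(value: str, eps: float, g: int):
    """Client side: hash into g buckets, then perturb with g-ary RR."""
    seed = random.getrandbits(32)
    b = bucket(seed, value, g)
    p = math.exp(eps) / (math.exp(eps) + g - 1)
    if random.random() >= p:
        b = random.choice([c for c in range(g) if c != b])
    return seed, b

def estimate(reports, candidate: str, eps: float, g: int) -> float:
    """Server side: unbiased frequency estimate for one candidate value.
    A random hash collides with prob. 1/g, so the support rate for a
    non-holder is 1/g in expectation; debias accordingly."""
    p = math.exp(eps) / (math.exp(eps) + g - 1)
    support = sum(1 for seed, y in reports if y == bucket(seed, candidate, g))
    return (support / len(reports) - 1.0 / g) / (p - 1.0 / g)
```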
Piecewise RR:
Instead of applying the same randomization mechanism
across the entire input domain, the Piecewise RR technique
divides the domain into multiple segments or pieces. Each
segment then gets its own randomization mechanism tailored
to its characteristics. Advantage: This method achieves a more
granular utility-privacy tradeoff. It can offer enhanced privacy
in sensitive segments while improving utility in less-sensitive
ones.
Optimized RR:
The protocol doesn't just use a fixed randomization parameter; instead, it optimizes the parameters of the RR.
This optimization is often based on real data distribution or
some auxiliary information, ensuring that the randomization
provides the best possible utility. Advantage: By adjusting the
randomization according to data distribution, it achieves better
accuracy in aggregate statistics.
Fourier Perturbation Algorithm (FPA) [18]:
Instead of perturbing the raw data directly, FPA operates in
the frequency domain. The data undergoes a Fourier transformation, after which the perturbation is applied. This allows
for randomization in a different space that might be more
conducive to certain types of analyses. Advantage: Provides
enhanced utility for specific query types, especially those that
are frequency-based or need insights from periodic patterns in
data.
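The following sketch shows the shape of the Fourier-perturbation idea (an assumption-laden illustration: it places symmetric Laplace noise on the discrete Fourier coefficients, whereas the exact calibration in [18] may differ):

```python
import numpy as np

def fourier_perturbation(x: np.ndarray, eps: float, sensitivity: float) -> np.ndarray:
    """Perturb a real-valued series in the frequency domain."""
    coeffs = np.fft.rfft(x)                      # forward transform
    scale = sensitivity / eps
    noise = (np.random.laplace(0.0, scale, coeffs.shape)
             + 1j * np.random.laplace(0.0, scale, coeffs.shape))
    return np.fft.irfft(coeffs + noise, n=len(x))  # back to the original domain
```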
IV. LDP VARIANTS AND MECHANISMS
In this section, we introduce LDP variants that aim to provide a better utility-privacy tradeoff in different applications.
A. Variants and Mechanisms of LDP
1) (ε, δ)-LDP: Drawing parallels with how (ε, δ)-DP [19] extends ε-DP, (ε, δ)-LDP (sometimes termed approximate LDP) serves as a more flexible counterpart to ε-LDP (or pure LDP).
Definition 1 (Approximate Local Differential Privacy). A randomized process M complies with (ε, δ)-LDP if, for all input pairs v and v′ within M's domain and any probable output y ∈ Y, the following holds:

P[M(v) = y] ≤ e^ε · P[M(v′) = y] + δ.

Here, δ is customarily a small value. In essence, (ε, δ)-LDP implies that M achieves ε-LDP with a likelihood not less than 1 − δ. If δ = 0, (ε, δ)-LDP converges to ε-LDP.
2) BLENDER Model: BLENDER [20], a fusion of global
DP and LDP, optimizes data utility while retaining privacy. It
classifies users based on their trust in the aggregator into two
categories: the opt-in group and clients. BLENDER enhances
utility by balancing data from both. Its privacy measure mirrors
that of (ε, δ)-DP [21].
3) Geo-indistinguishability: Originally tailored for location privacy with global DP, Geo-indistinguishability [22] uses the data's geographical distance. Alvim et al. [23] argued for metric-based LDP's advantages in specific contexts.
Definition 2 (Geo-indistinguishability). A randomized function M adheres to Geo-indistinguishability if, for any input pair v and v′ and any output y ∈ Y, the following relation is met:

P[M(v) = y] ≤ e^(ε·d(v,v′)) · P[M(v′) = y],

where d(·, ·) designates a distance metric.
This model adjusts privacy depending on data distance,
augmenting utility for datasets like location or smart meter
consumption that are sensitive to distance.
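A standard mechanism satisfying Geo-indistinguishability for the Euclidean metric is the planar Laplace of [22]; a minimal sketch (the Gamma(2, 1/ε) radius below is the radial marginal of the two-dimensional Laplace density):

```python
import math
import random

def planar_laplace(x: float, y: float, eps: float) -> tuple:
    """Report a location at a uniform angle and Laplace-like radius."""
    theta = random.uniform(0.0, 2.0 * math.pi)
    r = random.gammavariate(2.0, 1.0 / eps)  # density of r is eps^2 * r * e^(-eps*r)
    return x + r * math.cos(theta), y + r * math.sin(theta)
```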
4) Local Information Privacy: Local Information Privacy (LIP) was originally proposed in [24] as a prior-aware version of LDP; then, in [25], Jiang et al. relaxed the prior-aware assumption to partial prior-awareness (Bounded Prior in their version). The definition of LIP is as follows:
Definition 3 ((ε, δ)-Local Information Privacy [26]). A mechanism M satisfies (ε, δ)-LIP if, ∀x ∈ X, y ∈ Range(M):

P(Y = y) ≥ e^(−ε) · P(Y = y | X = x) − δ,    (6)
P(Y = y) ≤ e^ε · P(Y = y | X = x) + δ.
5) Sequential Information Privacy: Sequential Information Privacy (SIP), built upon LIP, measures the privacy leakage for a data sequence, or time-series data. SIP naturally decomposes using chain-rule-like techniques and is comparable to LDP.
Definition 4 (ε-Sequence Information Privacy [27]). A mechanism M satisfies ε-SIP for some ε ∈ R⁺ if, ∀x₁ᵀ ∈ X, y₁ᵀ ∈ Range(M):

e^(−ε) ≤ P[M(x₁ᵀ) = y₁ᵀ] / P(Y₁ᵀ = y₁ᵀ) ≤ e^ε.    (7)

The operational meaning of LIP is that the output Y provides limited additional information about any possible input X, and the amount of additional information is measured by the privacy budget ε and failure probability δ.
In [28], multiple LIP mechanisms were proposed and evaluated, showing that even though ε-LIP is stronger than 2ε-LDP in terms of privacy protection, the mechanisms achieve more than twice the utility gain.
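Because LIP is prior-dependent, it can be verified mechanically once the prior and the mechanism are fixed. A small illustrative checker for Definition 3 (a sketch, assuming inputs indexed 0..n−1 and M given as a row-stochastic matrix M[x][y]):

```python
import math

def satisfies_lip(prior, M, eps, delta):
    """Check both inequalities of (eps, delta)-LIP for every (x, y)."""
    n_y = len(M[0])
    p_y = [sum(prior[x] * M[x][y] for x in range(len(prior)))
           for y in range(n_y)]                  # marginal P(Y = y)
    return all(
        math.exp(-eps) * M[x][y] - delta <= p_y[y] <= math.exp(eps) * M[x][y] + delta
        for x in range(len(prior)) for y in range(n_y)
    )
```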
6) CLDP: Recognizing LDP's diminished utility with fewer users, Gursoy et al. [29] introduced the metric-based model of condensed local differential privacy (CLDP).
Definition 5 (α-CLDP). For all input pairs v and v′ in M's domain and any potential output y ∈ Y, a randomized function M satisfies α-CLDP if:

P[M(v) = y] ≤ e^(α·d(v,v′)) · P[M(v′) = y],

where α > 0.
In CLDP, a decline in α compensates for a growth in distance d. Gursoy et al. employed an Exponential Mechanism variant to devise protocols, particularly benefiting scenarios with limited users.
7) PLDP: PLDP [30] offers user-specific privacy levels. Here, users can modify their privacy settings, denoted by ε.
Definition 6 (ε-PLDP). For a user U, all input pairs v and v′ in M's domain, and any potential output y ∈ Y, a randomized function M meets εU-PLDP if:

P[M(v) = y] ≤ e^(εU) · P[M(v′) = y].

Approaches like the personalized count estimation protocol and advanced combination strategy cater to users with varying privacy inclinations.
8) Utility-optimized LDP (ULDP): Traditional LDP assumes all data points have uniform sensitivity, often causing excessive noise addition. Recognizing that not all personal data have equivalent sensitivity, the Utility-optimized LDP (ULDP) model was proposed. In this model, let KS ⊆ K be the sensitive dataset and KN = K \ KS be the non-sensitive dataset. Let YP ⊆ Y be the protected output set and YI = Y \ YP be the invertible output set. The formal definition of ULDP is:
Definition 7. Given KS ⊆ K and YP ⊆ Y, a mechanism M adheres to (KS, YP, ε)-ULDP if:
- For every y ∈ YI, there is a v ∈ KN with P[M(v) = y] > 0 and P[M(v′) = y] = 0 for any v′ ≠ v.
- For all v, v′ ∈ K and y ∈ YP, P[M(v) = y] ≤ e^ε · P[M(v′) = y].
In simpler terms, (KS, YP, ε)-ULDP ensures that sensitive inputs are mapped only to the protected output set.
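Definition 7 can likewise be checked directly on a finite mechanism. A sketch (illustrative only; M maps each input to a dict of output probabilities, and the set names mirror the definition):

```python
import math

def satisfies_uldp(M, sensitive_inputs, protected_outputs, eps):
    inputs = list(M)
    outputs = {y for dist in M.values() for y in dist}
    non_sensitive = set(inputs) - set(sensitive_inputs)
    # Every invertible output must be reachable from exactly one
    # non-sensitive input and from no other input.
    for y in outputs - set(protected_outputs):
        support = [v for v in inputs if M[v].get(y, 0.0) > 0.0]
        if len(support) != 1 or support[0] not in non_sensitive:
            return False
    # Protected outputs must be eps-indistinguishable across all inputs.
    for y in protected_outputs:
        probs = [M[v].get(y, 0.0) for v in inputs]
        if max(probs) > math.exp(eps) * min(probs):
            return False
    return True
```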
9) Input-Discriminative LDP (ID-LDP): While ULDP classifies data as either sensitive or non-sensitive, the ID-LDP model offers a more nuanced approach by acknowledging varying sensitivity levels among data. It is defined as:
Definition 8. Given a set of privacy budgets E = {εv}v∈K, a mechanism M adheres to E-ID-LDP if, for all input pairs v and v′ and any output y ∈ Y:

P[M(v) = y] ≤ e^(r(εv, εv′)) · P[M(v′) = y],

where r(·, ·) is a function of two privacy budgets.
The study in [31] primarily utilizes the minimum function between εv and εv′ and introduces MinID-LDP as a specialized case.
10) Parameter Blending Privacy (PBP): PBP was proposed as a more flexible LDP variant [32]. In PBP, let Θ represent the domain of privacy parameters. Given a privacy budget θ ∈ Θ, let P(θ) denote the frequency with which θ is selected. PBP is defined as:
Definition 9. A mechanism M adheres to r-PBP if, for all θ ∈ Θ, v, v′ ∈ K, and y ∈ Y, there exists a θ′ ∈ Θ such that:

P(θ) · P[M(v; θ) = y] ≤ e^(r(θ)) · P(θ′) · P[M(v′; θ′) = y].
B. A Summary of LDP variants
Local Differential Privacy (LDP) is a foundational approach
tailored for all data types and operates using the randomized
response (RR) technique. Its primary advantage is its broad
applicability, but it may add more noise than necessary,
especially when not all data attributes have the same sensitivity
levels. To address this, approximate LDP, which allows for
minor violations in privacy guarantees, introduces flexibility.
However, this relaxation can be a double-edged sword, potentially compromising privacy in highly sensitive scenarios.
BLENDER, on the other hand, is crafted explicitly for categorical data. By synergizing aspects of both global Differential
Privacy and LDP, it aims to improve data utility. Yet, its
reliance on grouping user data might introduce challenges in
dynamic or constantly changing environments. Local d-privacy
is another variant, designed with metric spaces in mind. It’s
particularly beneficial for data like location points, but may not
be the first choice for other data structures due to its specific
metric-based method.
CLDP stands out for its unique approach to address challenges that arise with smaller user counts, an often overlooked
but crucial aspect in privacy. However, while it addresses
issues in smaller datasets, it might introduce complexities
when the user base grows, making scalability a potential
concern. PLDP, meanwhile, strives to provide a more granular
level of privacy. While this granularity is its strength, the tradeoff might be a more significant computational overhead and
intricate implementation details.
ULDP takes a novel stance by focusing on optimizing utility
through an emphasis on sensitive data. The premise here is
that not all data pieces hold equal sensitivity. However, the
challenge and responsibility of correctly categorizing which
data is sensitive can be daunting. ID-LDP further refines this
concept by providing protection based on the actual sensitivity
of the input, using unary encoding to achieve this. Its main
challenge is the intricate parameter setting required to ensure
optimal performance. Lastly, PBP is distinct in its pursuit
of robust privacy. By maintaining the secrecy of provider
parameters, it bolsters privacy assurances. Yet, this added layer of secrecy might introduce complexities in implementation and understanding.

V. CONCLUSION
In the realm of data privacy, Local Differential Privacy (LDP) stands out as a vital tool for preserving user data. This research delves into various LDP mechanisms, protocols and variants in definition, each addressing unique challenges. From the foundational LDP to specialized versions like BLENDER for categorical data and Local d-privacy for metrics, the spectrum of solutions is vast. Techniques like CLDP tackle smaller datasets, while PLDP, ULDP, and ID-LDP optimize data utility and privacy levels. The introduction of PBP emphasizes secrecy in privacy parameters. Ultimately, this paper underscores the importance of selecting the right LDP variant, given the specific nature of data and privacy needs.

REFERENCES
[1] European Parliament and Council of the European Union. Regulation (EU) 2016/679 of the European Parliament and of the Council. [Online]. Available: https://data.europa.eu/eli/reg/2016/679/oj
[2] R. Meyer. (2014) Facebook's mood manipulation experiment might have been illegal. [Online]. Available: https://www.theatlantic.com/technology/archive/2014/09/facebooks-mood-manipulation-experiment-might-be-illegal/380717/
[3] "No free lunch in data privacy," in Proceedings of SIGMOD 2011 and PODS 2011, ser. Proceedings of the ACM SIGMOD International Conference on Management of Data. Association for Computing Machinery, 2011, pp. 193–204.
[4] C. Dwork, F. McSherry, and K. Nissim, "Calibrating noise to sensitivity in private data analysis," in Theory of Cryptography: Third Theory of Cryptography Conference, 2006, pp. 265–284. [Online]. Available: https://doi.org/10.1007/11681878_14
[5] C. Dwork, "Differential privacy," in Automata, Languages and Programming: 33rd International Colloquium, ICALP 2006, Part II, M. Bugliesi, B. Preneel, V. Sassone, and I. Wegener, Eds., 2006, pp. 1–12. [Online]. Available: https://doi.org/10.1007/11787006_1
[6] Y. Xiao and L. Xiong, "Protecting locations with differential privacy under temporal correlations," in Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, ser. CCS '15, New York, NY, USA, 2015, pp. 1298–1309.
[7] W. Primoff and S. Kess, "The Equifax data breach: What CPAs and firms need to know now," The CPA Journal, vol. 87, no. 12, pp. 14–17, 2017.
[8] J. Lu, "Assessing the cost, legal fallout of Capital One data breach," Legal Fallout of Capital One Data Breach (August 15, 2019), 2019.
[9] C. Dwork, "Differential privacy: A survey of results," in Theory and Applications of Models of Computation: 5th International Conference, TAMC, M. Agrawal, D. Du, and Z. Duan, Eds., 2008, pp. 1–19. [Online]. Available: https://doi.org/10.1007/978-3-540-79228-4_1
[10] Ú. Erlingsson, V. Pihur, and A. Korolova, "RAPPOR: Randomized aggregatable privacy-preserving ordinal response," in Proceedings of the 21st ACM CCS, 2014. [Online]. Available: https://arxiv.org/abs/1407.6981
[11] A. Greenberg, "Apple's 'differential privacy' is about collecting your data—but not your data," 2016. [Online]. Available: https://www.wired.com/2016/06/apples-differential-privacy-collecting-data/
[12] T. Wang, J. Blocki, N. Li, and S. Jha, "Locally differentially private protocols for frequency estimation," in 26th USENIX Security Symposium (USENIX Security 17). USENIX Association, 2017, pp. 729–745. [Online]. Available: https://www.usenix.org/conference/usenixsecurity17/technical-sessions/presentation/wang-tianhao
[13] Z. Qin, Y. Yang, and T. Yu, "Heavy hitter estimation over set-valued data with local differential privacy," in Proceedings of the 2016 ACM SIGSAC, ser. CCS '16, 2016, pp. 192–203. [Online]. Available: http://doi.acm.org/10.1145/2976749.2978409
[14] C. Huang, P. Kairouz, X. Chen, L. Sankar, and R. Rajagopal, "Context-aware generative adversarial privacy," CoRR, vol. abs/1710.09549, 2017. [Online]. Available: http://arxiv.org/abs/1710.09549
[15] P. Kairouz, S. Oh, and P. Viswanath, "The composition theorem for differential privacy," IEEE Transactions on Information Theory, vol. 63, no. 6, pp. 4037–4049, 2017.
[16] S. L. Warner, "Randomized response: A survey technique for eliminating evasive answer bias," Journal of the American Statistical Association, vol. 60, no. 309, pp. 63–69, 1965. [Online]. Available: http://www.jstor.org/stable/2283137
[17] C. Dwork, M. Naor, T. Pitassi, and G. N. Rothblum, "Differential privacy under continual observation," in Proceedings of the Forty-Second ACM Symposium on Theory of Computing, ser. STOC '10, New York, NY, USA, 2010, pp. 715–724.
[18] E. Bozkir, O. Günlü, W. Fuhl, R. Schaefer, and E. Kasneci, "Differential privacy for eye tracking with temporal correlations," PLOS ONE, vol. 16, p. e0255979, 2021.
[19] R. Bassily, "Linear queries estimation with local differential privacy," CoRR, vol. abs/1810.02810, 2018. [Online]. Available: http://arxiv.org/abs/1810.02810
[20] B. Avent, A. Korolova, D. Zeber, T. Hovden, and B. Livshits, "BLENDER: Enabling local search with a hybrid differential privacy model," in 26th USENIX Security Symposium (USENIX Security 17). Vancouver, BC: USENIX Association, Aug. 2017, pp. 747–764. [Online]. Available: https://www.usenix.org/conference/usenixsecurity17/technical-sessions/presentation/avent
[21] C. Dwork, K. Kenthapadi, F. McSherry, I. Mironov, and M. Naor, "Our data, ourselves: Privacy via distributed noise generation," in Advances in Cryptology - EUROCRYPT 2006, ser. Lecture Notes in Computer Science, S. Vaudenay, Ed., vol. 4004. Springer, 2006, pp. 486–503. [Online]. Available: https://doi.org/10.1007/11761679_29
[22] M. E. Andrés, N. E. Bordenabe, and K. Chatzikokolakis, "Geo-indistinguishability: Differential privacy for location-based systems," CoRR, vol. abs/1212.1984, 2012. [Online]. Available: http://arxiv.org/abs/1212.1984
[23] M. S. Alvim, K. Chatzikokolakis, C. Palamidessi, and A. Pazii, "Metric-based local differential privacy for statistical applications," CoRR, vol. abs/1805.01456, 2018. [Online]. Available: http://arxiv.org/abs/1805.01456
[24] B. Jiang, M. Li, and R. Tandon, "Context-aware data aggregation with localized information privacy," in 2018 IEEE Conference on Communications and Network Security (CNS), May 2018.
[25] ——, "Local information privacy with bounded prior," in ICC 2019 - 2019 IEEE International Conference on Communications (ICC), May 2019, pp. 1–7.
[26] ——, "Local information privacy and its application to privacy-preserving data aggregation," IEEE Transactions on Dependable and Secure Computing, vol. 19, no. 3, pp. 1918–1935, 2022.
[27] ——, "Online context-aware data release with sequence information privacy," 2023.
[28] B. Jiang, M. Seif, R. Tandon, and M. Li, "Context-aware local information privacy," IEEE Transactions on Information Forensics and Security, vol. 16, pp. 3694–3708, 2021.
[29] M. E. Gursoy, A. Tamersoy, S. Truex, W. Wei, and L. Liu, "Secure and utility-aware data collection with condensed local differential privacy," CoRR, vol. abs/1905.06361, 2019. [Online]. Available: http://arxiv.org/abs/1905.06361
[30] Y. Nie, W. Yang, L. Huang, X. Xie, Z. Zhao, and S. Wang, "A utility-optimized framework for personalized private histogram estimation," IEEE Transactions on Knowledge and Data Engineering, vol. 31, no. 4, pp. 655–669, 2019.
[31] T. Murakami and Y. Kawamoto, "Utility-optimized local differential privacy mechanisms for distribution estimation," in Proceedings of the 28th USENIX Conference on Security Symposium, ser. SEC'19. USA: USENIX Association, 2019, pp. 1877–1894.
[32] S. Takagi, Y. Cao, and M. Yoshikawa, "Poster: Data collection via local differential privacy with secret parameters," in Proceedings of the 15th ACM Asia Conference on Computer and Communications Security, ser. ASIA CCS '20. New York, NY, USA: Association for Computing Machinery, 2020, pp. 910–912. [Online]. Available: https://doi.org/10.1145/3320269.3405441
|
{
"disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/2309.00861, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "CLOSED",
"url": "https://arxiv.org/pdf/2309.00861"
}
| 2,023
|
[
"JournalArticle",
"Review"
] | true
| 2023-09-02T00:00:00
|
[] | 8,538
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
},
{
"category": "Environmental Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/014d53aa1795a4e229037afe328ae014e5e2c18f
|
[
"Computer Science"
] | 0.841913
|
AmbientDB: P2P data management middleware for ambient intelligence
|
014d53aa1795a4e229037afe328ae014e5e2c18f
|
IEEE Annual Conference on Pervasive Computing and Communications Workshops, 2004. Proceedings of the Second
|
[
{
"authorId": "1684284",
"name": "Willem Fontijn"
},
{
"authorId": "1687211",
"name": "P. Boncz"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": null,
"id": null,
"issn": null,
"name": null,
"type": null,
"url": null
}
| null |
# AmbientDB: P2P Data Management Middleware for Ambient Intelligence
## Willem Fontijn (Philips Research, willem.fontijn@philips.com) and Peter Boncz (CWI, boncz@cwi.nl)
Abstract
_The future generation of consumer electronics devices is envisioned to provide automatic cooperation between devices and run applications that are sensitive to people's likings, personalized to their requirements, anticipatory of their behavior and responsive to their presence. We see this 'Ambient Intelligence' as a key feature of future pervasive computing. We focus here on one of the challenges in realizing this vision: information management. This entails integrating, querying, synchronizing and evolving structured data, on a heterogeneous and ad-hoc collection of (mobile) devices. Rather than hard-coding data management functionality in each individual application, we argue for adding high-level data management functionalities to the distributed middleware layer. Our AmbientDB P2P database management system addresses this by providing a global database abstraction over an ad-hoc network of heterogeneous peers._

## 1. Introduction

Future generations of consumer electronic devices will make computing power and connectivity omnipresent, yet hidden from the user's view. This pervasive computing infrastructure will be used to create an environment that anticipates users' wishes instead of just responding to direct commands. The aim is to improve quality of life by creating the desired atmosphere and functionality via personalized, interconnected systems and services. This vision, called _Ambient Intelligence_ (AmI), is the focal point of much ongoing research [1]. The 'Ambient' part of AmI refers to the unobtrusiveness of the technology, both physically, by embedding it in the environment, and functionally, by making user interaction mostly implicit. The latter will entail, for instance, habit watching [3]. The 'Intelligence' part of AmI refers to the way the system integrates and relates the data from a wide variety of sources to create the perception of intelligence. The sources range from simple sensor nodes to Personal Video Recorders (PVRs) with sophisticated preference-based meta data. Perceived intelligence implies that the AmI system has to present to the user a unified and consistent view irrespective of the context and location of the user or the type of interaction. In an ad-hoc mobile environment, this requires novel synchronization procedures, transparent to the user [4].

The _Connected Home_ [5] may be seen as a first step towards AmI. Devices contain their own embedded DBMS, and operate in isolation below the network layer (see Fig. 1a). If one device requires information located on another device it will first have to find the latter device and query it for the information. When the complexity of the network increases, i.e. more and more diverse devices are introduced, the increasing number of possible combinations will make it hard to create robust applications. Also, the performance of such a system will degrade rapidly. This could be countered by pre-emptive data aggregation on resource-rich central servers, but this has multiple drawbacks. It creates mounting overhead as the complexity increases, implies that certain functionality is available only when in range of a server and makes the system vulnerable to point failures. Finally, it is questionable from a marketing viewpoint whether customers will buy an expensive home-server that just improves the performance of other devices.

**Figure 1: Concept of (a) Connected Home, (b) Ambient Intelligence environment.** [Figure: per-device stacks of Application / Network Middleware / DBMS / Hardware; in (b) an AmbientDB layer sits between the applications and the network middleware.]

An alternative to the central server approach is to keep the participating devices fully autonomous but cooperative. A collection of such devices we call a Device Society. Each device has its own responsibility to deliver certain functionality and the collection as a whole delivers the environment to run high-level applications. In such a system, most data is stored distributed and only integrated at query time. Such distributed storage is robust and
scalable but data management is not easy anymore.
In this paper we discuss a data management approach for
the latter strategy and present a prototype system. We first
illustrate its requirements using a scenario. In Section 2.2
a technical realization is presented.
### 1.1. Scenario: Music Playlist Generation
Consider the problem of playing music for a user that is
appropriate to his context and state irrespective of his
location. All music owned by this particular user, as well
as meta data describing this music is stored in a music
database distributed over one or more devices, such as
portable music (mp3) players, Smart Phone, PVR and/or
PC. To the user, this music collection should appear
conceptually as a single collection.
Three autonomous processes are active on this music
database. The first aims to improve the profile of the user.
The user can actively rate pieces of music, indicating
which songs he likes or dislikes. This _explicit rating is_
useful to establish a first draft profile but requires direct
user interaction. A more convenient way to improve the
profile is implicit rating. The system logs the reaction of
the user to different pieces of music in different contexts
and the logs are later processed to derive preferences of
the user in various contexts.
The second process aims to extend the music collection.
The profile of the user may be compared to profiles of
others, using collaborative filtering [3]. If one user likes
many songs another user likes, the first may also like
songs unknown to him that the second user likes.
The third process recommends music. Based on the
context, the user is offered a selection of his total music
collection. The context of the user may be derived from
many clues, e.g. ambient lighting, time of day and facial
expression. If the user leaves the house and takes a
portable music store with him, the system assists him in
selecting the music subset that will probably meet his
musical needs for the particular trip. While away, the
portable music device records implicit rating cues and
may pick up some additional pieces of music.
This system should not be static but able to evolve. If the
user buys a watch that senses his skin temperature, the
music recommender should discover that by combining
this new context information with other data, it could
gauge the mood of the user. From that moment on, the
recommender takes mood into account.
### 1.2. The idea: database middleware
Our challenge of managing a sea of evolving data sources
among possibly large and dynamic Device Societies
should be addressed in the middleware layer (see Fig. 1b).
Middleware services already provide the basic
infrastructure for application integration [9], integrating
not only data but also a wide variety of context sources
[6], providing applications with information about
network and device resources, and allowing applications
to reconfigure when conditions change [5]. While there
have been middleware systems with some data
management support [8][9] we aim to raise this to the
level of full DBMS functionality.
By putting such DBMS functionality inside or on top of
the middleware layer, all data sources are virtually
merged, shielding applications from the underlying
complexities. Applications release their queries and get
the results as if accessing a single database system. Events
such as the integration of new devices or data sources, or
failure of devices, and the creation of new data types, can
in great part be handled as schema evolution in this
database. Accepting queries in a high-level language that
describes what data an application needs instead of how to
exactly obtain its results, allows a _query optimizer rather_
than the application programmer to automatically find an
efficient solution for executing a query, taking into
account indexing structures, system and network load
conditions and concurrent requests. Also, this provides
_data independence, meaning that the data representation_
can change over time without this breaking the
applications using the data. These are the classic database
advantages that we are now able to bring into the
pervasive computing domain. Finally, this approach
enables evolutionary introduction of new functionality. By
supplementing middleware for the Connected Home with
DBMS functionality, we create a breeding ground for
more and more sophisticated AmI applications.
## 2. AmbientDB: a P2P DBMS
The goal of our AmbientDB system [10] is to provide full
relational database functionality for standalone operation
in autonomous devices that may be mobile and disconnected for long periods of time, while enabling them to cooperate in an ad-hoc way with (many) other AmbientDB
devices. Hence, our choice for P2P, as opposed to designs
that use a central server. AmbientDB uses ‘abstract’
tables, i.e. applications are ignorant of where data resides.
Internally, a table may be private to the node, or
distributed over many nodes in the network. The actual
content of a distributed table is formed by the union of
table partitions in all nodes that are connected at that time.
### 2.1. Key Functionalities
Since our work touches upon many sub-fields of database
research [7], we highlight the main differences.
**Figure 2: Concept of AmbientDB. The application**
**on top issues a query to the AmbientDB layer that**
**is propagated (dashed line) to all connected peers.**
**The query result (solid line) is aggregated along**
**the query path and presented to the application.**
**The binary tree in the network layer represents the**
**network topology.**
_Distributed database technology presupposes that the_
collection of participating sites and communication
topology is known a priori. AmbientDB does not.
_Federated database technology, the current approach to_
heterogeneous schema integration, focuses on statically
configured combinations of databases instead of ad-hoc
Device Societies. _Mobile database technology_ generally
assumes that mobile nodes are (weaker) clients that
synchronize with a centralized database server over a
narrow channel. Again, AmbientDB does not. Finally,
_P2P file sharing systems do support decentralized, ad-hoc_
Device Societies, but allow only simple keyword text
search (as opposed to structured database queries).
**2.1.1. Self Organization**
P2P technologies are able to adapt to changes in the
environment and work without central planning. In order
to provide efficient indexed lookup into its distributed
database tables, AmbientDB makes use of Chord [11].
Chord is a Distributed Hash Table (DHT), a scalable P2P
data structure for sharing data among a potentially large
collection of nodes, allowing nodes to join and leave
without making the network unstable. It uniformly
distributes data over all nodes using a hash-function,
enabling efficient O(log N) data lookup. To improve scalability in situations where some devices are resource-poor, AmbientDB keeps such devices out of Chord to prevent overloading them with data they cannot store or with queries they cannot handle. Upon connection, low-resource nodes transfer their data to a resource-rich neighbor that handles queries on their behalf.
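As a toy illustration of the DHT idea (consistent hashing with successor lookup; real Chord adds finger tables to make routing O(log N), which this sketch omits):

```python
import bisect
import hashlib

def node_id(name: str, bits: int = 32) -> int:
    """Hash a node or key name onto the identifier ring."""
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big") % (1 << bits)

class Ring:
    def __init__(self, nodes):
        self.ids = sorted(node_id(n) for n in nodes)
        self.by_id = {node_id(n): n for n in nodes}

    def successor(self, key: str) -> str:
        """The peer responsible for key: first node clockwise from hash(key)."""
        i = bisect.bisect_left(self.ids, node_id(key)) % len(self.ids)
        return self.by_id[self.ids[i]]

ring = Ring(["phone", "pvr", "pc", "mp3-player"])
print(ring.successor("artist:blur"))  # which peer indexes this key
```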
**2.1.2. Query Processing**
AmbientDB performs a three level query translation:
_(1) abstract algebra: A user query is posed in the_
“abstract global algebra”. This is a standard relational
query language, providing the basic operators for
selection, join, aggregation and sorting.
_(2) concrete algebra: These are concrete strategies for_
resolving the basic relational operators. Typically, each
abstract operator has multiple concrete variants. E.g.,
there is a broadcast-select, that executes a selection
operator on a distributed table by flooding the network
(broadcast) and collecting all matches. There is also a
variant that exploits a Chord DHT index, which may be
used if a global index on a table column was defined in
the schema. Thus, many different concrete plans may exist
for an ‘abstract’ query, and the query optimizer in
AmbientDB is used to select a good plan.
_(3) dataflow algebra: A very small kernel of basic_
operators is sufficient to implement the concrete algebra.
Each concrete operator is mapped onto a _wave-plan that_
consists of a graph of dataflow operators. In addition to query processing, the dataflow operators provide functionality for splitting and merging data streams.
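To illustrate how the optimizer might pick among concrete variants (a sketch; names such as dht_select and broadcast_select are hypothetical, not AmbientDB's actual operator names):

```python
def choose_select_plan(table: str, column: str, indexed_columns: set):
    """Prefer an O(log N) Chord-index lookup when a global index exists;
    otherwise fall back to flooding the network."""
    if column in indexed_columns:
        return ("dht_select", table, column)
    return ("broadcast_select", table, column)

print(choose_select_plan("LOG", "artist", {"artist"}))
```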
We plan to augment AmbientDB with support for triggers,
such that applications can be alerted to interesting events
rather than poll the global database with queries [12].
**2.1.3. Synchronization**
The aim of traditional (distributed) database technology,
to provide strict consistency, is not appropriate for P2P
database systems. Algorithms, such as two-phase locking,
are too expensive for a large and sparsely connected
collection of nodes. Many applications do not need full
transactional consistency, but just a notion of final
convergence of updates. Also, applications often have
effective conflict resolution strategies that exploit
application-level semantics. Thus, the challenge for a P2P
DBMS is to provide a powerful formalism in which
applications can formulate synchronization and conflict
resolution strategies. Our first target is to support
applications that use rule-based synchronization expressed
in a prioritized set of database update queries.
**2.1.4. Schema Integration & Evolution**
As devices differ in functionality and make, their data
differs in semantics and form. We use table-view based
schema integration techniques [13] to map local schemata
to a global schema. AmbientDB itself does not address the
automatic construction of such mappings, but aims at
providing the basic functionality for applying, stacking,
sharing, evolving and propagating such mappings.
Providing support for schema evolution within one
schema, e.g. such that old devices can cooperate with
newer ones, is often forgotten. We foresee that a global
certifying entity keeps track of changes in the various subschemas, maintaining bi-directional mappings between
versions. Schema deltas are certified such that one peer
may carry it to the next, without need for direct
communication with a centralized entity.
### 2.2. Scenario using P2P data management
The gist of our approach is that we believe that P2P data
management functionality will make it easier to construct
Ambient Intelligent applications. To illustrate how we see
that happen, let us go back to the problem of managing
and navigating music intelligently.
The schema created by the music player contains a LOG
table where per-user song play counts are kept (see
Fig. 3). This is a _distributed table, which means that the_
music application sees the union of all (overlapping)
horizontal fragments at all participating devices of that
moment as one big table. All devices maintain local playcounts for each (artist,user) combination in this LOG
table. The schema specifies an _index on LOG.artist, so_
each LOG entry is replicated in a Chord DHT and
distributed over all nodes of the Internet domain, using the
Chord hashing scheme (see Fig. 3). This allows to quickly
locate users that played a particular artist.
**2.2.1. Self Organization Example**
The family music collection -- typically in the order of a
few thousands of songs -- is distributed among the Device
Society owned by family members. Some of these devices
may have access to the Internet. The music players with
embedded AmbientDBs form a self-organizing P2P
network, connecting the nodes in order to share all music
content in the "home domain", and a second -possibly
huge- P2P network consisting of all music players
reachable via the Internet, among which only the meta```
create distributed table create distributed table
```
LizLiz U2U2 77 AA `LOG(user,artist,count,…)LOG(user,artist,count,…)`
RobRob U2U2 22 AA `create index`
AnnAnn U2U2 55 EE `IDX(user,artist,count)`
Liz U2 7 other `on LOG(artist)`
Rob U2 2 fields..
Ann blur 9 _global table_
_Chord_ A LizLizRob U2U2U2 772 otherotherfields..
AnnAnn blurblur 99 AA Ann blur 9
RobRob blurblur 44 EE Ann U2 5
LizLiz blurblur 66 CC Rob blur 4
Rob rem 1
Ann U2 5 other Liz blur 6
Rob blur 4 fields.. Ann rem 3
Rob rem 1 E Rob rem 1 Liz rem 8
AnnAnnAnnLiz remremremrem 3338 CCCC C E
_Find(‘blur’) :=_
LizLizAnn blurblurrem 663 fields..fields..other _hash(‘blur’)hash(‘blur’)��D_
Liz rem 8 _chord_lookup(D)chord_lookup(D)��E_
**Fig.3: Scenario for sharing music metadata**
**between many music players in the Internet**
**domain. The distributed table LOG holds artist**
**play-counts for each user. One can quickly find**
**users that play a certain artist using a Chord index.**
information is shared. The home domain may contain
some very low-resource devices in terms of CPU and
storage (e.g. phone) that are kept out of the Chord DHT.
In the Internet domain, the number of on-line nodes
maybe large and the number of songs huge.
**2.2.2. Schema Evolution Example**
In our scenario, the user buys a watch with integrated
body thermometer. This watch has Body Area Network
(BAN) functionality (e.g. Bluetooth) such that it can
communicate with the owner’s phone or mp3 player when
these are carried in his pocket. With the temperature meter
watch comes an AmbientDB _schema update that e.g._
introduces a new TEMPERATURE table that stores
(timestamp, temperature) records, and a data propagation
_profile with rules that specify the longevity of its records_
and a propagation strategy. Additionally, on a certified
(vendor) site, the user community of the music player may
store a trigger update that specifies a (complex) rule that
derives a mood from the body temperature curve in
conjunction with other personal characteristics stemming
from other sources. When this mood indicates
appreciation for the current song, an automatic playlist
creation process is scheduled, aggregating songs similar to
the one currently being played (this query is described in
Section 2.2.4).
Note that schema updates may propagate in a P2P fashion
(from watch to phone, from phone to home PC) or from a
central Internet site, in any case though with a certifying
mechanism. Also, schema updates may depend on a
collection of sub-schema versions being present, such that
during the next visit to the central vendor site, when the
combination of music and temperature sub-schemas is
detected, the user is alerted to the possibility of installing
the "music-appreciation-trigger".
**2.2.3. Update Propagation Example**
The watch has limited storage capacity; it can hold only a few records. Its synchronization rules, however, make it
replicate temperature records to devices in the
neighborhood (e.g. your Smart Phone). When it arrives at
home, the Smart Phone then propagates these records to
the home PC, where a health-monitoring agent might be
running that periodically analyzes this log data using data
mining techniques. The propagation rules may include a
maximum lifetime that causes old records to be
automatically deleted after e.g. a number of weeks.
**2.2.4. Query Processing Example**
The music player generates intelligent playlists, either
because the user explicitly chooses a sample artist to
generate a similar playlist from, or implicitly when the
“music-appreciation-trigger” notices that you like an artist
being played. The two database queries below express a
simple collaborative filtering method. The first query
computes a relevance of other users’ music taste from
their play-count of an sample artist. The second query
then computes a ranking by multiplying all artist playcounts of all users by the user relevance, and summing this
per artist, returning a top-N.
We kept this example very simple for presentation
purposes, but one can easily refine it, e.g. by increasing
the granularity to songs (instead of artists) or making it
work with a weighted collection of samples instead of one.
The benefit is that for the application programmer, writing
this kind of data-intensive applications on large ad-hoc
networks is reduced to writing some relatively simple
database queries. Database indices and query optimization
then make sure that it runs efficiently without the
application programmer having to worry about it [10].
```
% each record has the special field #nodeid
% that holds the device ID where it is stored
RELEVANT := % query will use Chord index
SELECT user, SUM(playcount) AS relevance,
#nodeid AS site
FROM LOG
WHERE artist = ‘normalized artist name’
GROUP BY user
ORDER BY relevance DESCENDING LIMIT 5
SELECT L.artist AS artist,
sum(L.playcount*R.relevance) AS score
FROM LOG L, RELEVANT R
WHERE L.user = R.user AND L.#nodeid = R.site
GROUP BY artist
ORDER BY score DESCENDING LIMIT 25
```
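For readers who prefer imperative code, the same two-step filtering can be sketched in plain Python (illustrative only; AmbientDB executes the SQL above, with distribution handled by its query processor):

```python
from collections import defaultdict

def recommend(log, sample_artist, top_users=5, top_n=25):
    """log: iterable of (user, artist, playcount, nodeid) tuples."""
    # Step 1: relevance of each (user, site) from plays of the sample artist.
    relevance = defaultdict(int)
    for user, artist, count, node in log:
        if artist == sample_artist:
            relevance[(user, node)] += count
    relevant = sorted(relevance.items(), key=lambda kv: -kv[1])[:top_users]

    # Step 2: score every artist by relevance-weighted play-counts.
    scores = defaultdict(int)
    for (user, node), rel in relevant:
        for u, artist, count, n in log:
            if (u, n) == (user, node):
                scores[artist] += count * rel
    return sorted(scores.items(), key=lambda kv: -kv[1])[:top_n]
```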
## 3. Current Status & Research Challenges

We hope to release a first version of AmbientDB early
next year. We have focused so far on distributed query
processing, and identified three functional levels that all
require further research. On the top level, we need more
experience with a wider variety of Ambient Intelligent
applications to see what exact requirements they impose
on a P2P DBMS. Also, if applications are to cooperate
seamlessly, they need to operate in a compatible semantic
framework. This is an "AI-hard" problem for the general
case. We do see possibilities when trusted and
standardized mappings are available. The second
functional level is P2P data management. While we have a
working query processor, it is likely that there are query
execution algorithms that exploit the P2P architecture
better. Also, loosely consistent or converging transactions
as well as a schema mapping infrastructure remain open
areas of research. The third functional level is P2P
networking. P2P overlay technology often exhibits
inefficient usage of physical resources, as these are
opaque on the TCP overlay level. This could be improved
by dynamic re-configuration of P2P networks, an
important middleware research issue [5]. Better adaptation
to device and network resources using e.g. slave- and
super-nodes could be ways forward here.
## 4. Conclusions
Transparent distributed data management is crucial to
Ambient Intelligent applications in Device Societies, and
the P2P approach with AmbientDB as middleware offers a
possible solution. It enables the creation of a high-level
application development interface that is flexible and
provides data independence, while taking the burden of
data management optimization in a dynamic and ad-hoc
distributed environment out of the hands of application
programmers. The ability of AmbientDB to cope with
adding devices and functionality dynamically provides for
an evolutionary path for the introduction of Ambient
Intelligence, thus alleviating one of the most prominent
problems from a systems and marketing point of view.
## 5. References

[1] www.philips.com/research/ami

[2] www.semiconductors.philips.com/connected_home

[3] D. Nichols. Implicit Rating and Filtering. Proc. DELOS Workshop on Filtering and Collaborative Filtering, 1998.

[4] G. Montenegro. MNCRS: Industry Specifications for the Mobile NC. IEEE Internet Computing, 1998.

[5] M. Roman, F. Kon, R. Campbell. Design and Implementation of Runtime Reflection in Communication Middleware: the dynamicTAO Case. ICDCS Workshop on Middleware, 1999.

[6] A. Schmidt, M. Beigl, H.-W. Gellersen. There is more to context than location. Proc. of the Intl. Workshop on Interactive Applications of Mobile Computing, 1998.

[7] D. Kossmann. The state of the art in distributed query processing. ACM Computing Surveys, 32(4), 2000.

[8] G. Picco, A. Murphy, G.-C. Roman. On Global Virtual Data Structures. In: _Process Coordination and Ubiquitous Computing_, D. Marinescu, C. Lee (eds.), CRC Press.

[9] J. Carter, A. Ranganathan, S. Susarla. Khazana: An infrastructure for building distributed services. Proc. Int. Conf. on Distributed Computing Systems (ICDCS'98), 1998.

[10] P. Boncz, C. Treijtel. AmbientDB: relational query processing in a P2P network. Proc. Workshop on Databases, Information Systems and P2P Computing (at VLDB'03), 2003.

[11] I. Stoica, R. Morris, D. Karger, M. Kaashoek, H. Balakrishnan. Chord: A scalable peer-to-peer lookup protocol for Internet applications. Proc. SIGCOMM Conf., 2001.

[12] J. Widom, S. Ceri. Active Database Systems: Triggers and Rules For Advanced Database Processing. Morgan Kaufmann.

[13] A. Halevy. Answering queries using views: A survey. VLDB Journal 10(4): 270-294, 2001.
## Detection of Insider Attacks in Distributed Projected Subgradient Algorithms
#### Sissi Xiaoxiao Wu, Gangqiang Li, Shengli Zhang, and Xiaohui Lin
**_Abstract—The gossip-based distributed algorithms are widely used to solve decentralized optimization problems in various multi-agent applications, but they are generally vulnerable to data injection attacks by internal malicious agents, as each agent locally estimates its descent direction without authorized supervision. In this work, we explore the application of artificial intelligence (AI) technologies to detect internal attacks. We show that a general neural network is particularly suitable for detecting and localizing the malicious agents, as it can effectively explore the nonlinear relationships underlying the collected data. Moreover, we propose to adopt one of the state-of-the-art approaches in federated learning, i.e., a collaborative peer-to-peer machine learning protocol, to facilitate training our neural network models by gossip exchanges. This advanced approach is expected to make our model more robust to challenges with insufficient training data or mismatched test data. In our simulations, a least-squares problem is considered to verify the feasibility and effectiveness of the AI-based methods. Simulation results demonstrate that the proposed AI-based methods are beneficial for improving the performance of detecting and localizing malicious agents over score-based methods, and that the peer-to-peer neural network model is indeed robust to the targeted issues._**
**_Index Terms—Gossip algorithms, distributed projected subgradient (DPS), artificial intelligence (AI) technology, internal attacks, malicious agents._**
I. INTRODUCTION

RECENTLY, decentralized optimization algorithms, as a popular tool to handle large-scale computations, have been broadly applied in various fields [1], [2]. Typical examples include the Internet of Things (IoT) [3], [4], multi-agent systems [5], [6], wireless communication networks [7], the power grid [8], and federated learning [9]. The design approach in the above applications is often referred to as gossip-based optimization, wherein interacting agents are randomly selected and exchange information following a point-to-point message passing protocol so as to optimize shared variables. Aiming at a coordinated response, these agents explicitly disclose their estimates (states) to neighboring agents in each iteration, thereby leading to a consistent globally optimal decision [1], [2], [10]. It is well known that gossip-based algorithms are inherently robust to intermittent communication and have built-in fault tolerance to agent failures. They can also provide a degree of privacy in many applications for participating
agents without exchanging local user information [11]. Despite
many advantages, these gossip-based algorithms, such as the
distributed projected subgradient (DPS) algorithm [2], are
inherently vulnerable to insider data injection attacks due to
the flat architecture, since each agent locally estimates its
(sub)gradient without any supervision [12]–[14].
Generally speaking, a malicious agent's (or attacker's) effect on a decentralized algorithm depends on its specific attacking strategy. Attackers may interfere with distributed algorithms by injecting random data that hinders convergence [15]. Especially in an insider attack, the attacker always sends misleading messages to its neighbors to affect the distributed system, resulting in false convergence results [16], [17]. For example, a multi-agent system is forced to converge to the target values of attackers in [18], and an average consensus result is disturbed by coordinated attackers in [19]. The attack model we focus on in this work is one in which the attackers behave like stubborn agents [20]. To be more specific, they coordinate and send messages to peers that contain a constant bias [12], [13], [17], [21], and their states cannot be changed by other agents. As studied in [18], [19], the network always converges to a final state equal to the bias. This causes serious security problems for distributed algorithms if the attacker cannot be detected effectively. Thus, a good defense mechanism is needed to protect these algorithms from internal data injection attacks.
To detect anomalous behaviors in decentralized optimization, one commonly used approach in the literature is to calculate a score through statistical techniques based on the messages received during the execution of the protocol. For instance, in [15], the authors show that the convergence speed of the network slows down when an attacker is present, and design a score to identify potential attacks. In [19], two score-based detection strategies are proposed to protect the randomized average consensus gossip algorithm from malicious nodes. In [22], the authors design a comparison score to search for differences between a node and its neighbors, then adjust the update rules to mitigate the impact of data falsification attacks. In [13], the decision score is computed by a temporal difference strategy to detect and isolate attackers. Similar score-based methods are also shown in [23]–[25]. While such methods have reasonable performance, the score design is somewhat ad hoc and relies heavily on experts to design sophisticated decision functions, and the detection thresholds of these score-based methods need to be adjusted judiciously. To circumvent the above difficulties, our idea in this work is to utilize artificial intelligence (AI) technology to approximate more sophisticated decision functions. It is worth mentioning that AI technology has succeeded in many applications with
the same purpose, including image recognition [26], natural language processing [27], the power grid [28], and communications [29]. Furthermore, AI also plays an important role in network security [30], such as anomaly intrusion detection [31], malicious PowerShell detection [32], distributed denial of service (DDoS) attacks [33], and malicious nodes in communication networks [34], [35].
The main purpose of this work is to apply AI technology to address the problem of detecting and localizing attackers in decentralized gossip-based optimization algorithms. While our AI-based methods and training philosophy can be applied to a wide set of multi-agent algorithms and attack scenarios, we focus on testing the approach on a case that has been thoroughly studied in [13], [36], to facilitate the comparison. Concretely, we propose two AI-based strategies, namely the temporal difference strategy via neural networks (TDNN) and the spatial difference strategy via neural networks (SDNN). We will show that even basic neural network (NN) models exhibit a good ability to extract the non-linearity in our training data and thus can detect and localize attackers well, given that 1) the collected training data can well represent the attack model, and 2) training data from all agents can be fully learned at the training center.
Unfortunately, collecting good and sufficient data that perfectly fits the real attack model is usually difficult. First of all, due to the intrinsic nature of the gossip algorithm, it is difficult and expensive to collect sufficient training samples at each agent. Also, with the emergence of new large-scale distributed agents in the network, it is sometimes hard to upload the decentralized data at each agent to a fusion center due to storage and bandwidth limitations [37]. Furthermore, as the insider attacks could occur at any agent in the network, the training data may not cover all the occurrences of the attack. Therefore, an individually trained NN model at each agent may not fit all insider attack events. A new approach to alleviate these issues is to leverage decentralized federated learning [38], which utilizes the collaboration of agents to perform data operations inside the network by iterating local computations and mutual interactions. Such a learning architecture can be extremely useful for learning agents with access to only local/private data in a communication-constrained environment [39]. Specifically, as one of the state-of-the-art approaches in decentralized federated learning, gossip learning is very suitable for training NN models from decentralized data sources [40], with the advantages of high scalability and privacy preservation. Thus, we propose a collaborative peer-to-peer learning protocol to help train our NN models by gossip exchanges. Specifically, each agent in the network has a local model with the same architecture, and only relies on local collaboration with neighbors to learn the model parameters. It is worth noting that in this process each agent trains the local model on its local data periodically, and then sends the local model parameters to its neighbors. It is expected that each agent can learn a local model close to the _global model_ (i.e., an NN trained at the center, which contains training data from all agents), so as to provide robustness in the case of insufficient and mismatched local data.
It is also worth mentioning the differences between this work and some previous work. Previous work [19] aims at a score-based method for securing the gossip-based average consensus algorithm. [35] improves the score-based method by using AI-based methods, while it still targets an average consensus algorithm. We remark that the inputs for the AI model in [35] do not always work for optimization algorithms. [13], [36] provide some preliminary results for protecting optimization algorithms, while they only focus on partial neighboring information. This work is the first one that fully elaborates AI-based methods for a DPS algorithm using full information from the neighboring signals. More importantly, the proposed collaborative learning method is novel and effective in making the defense model more robust to different attack events, making our models more practical for multi-agent applications. In summary, the proposed AI-based strategies have the following characteristics: 1) they can automatically learn appropriate decision models from the training data, thus reducing the dependence on complicated pre-designed models; 2) they adaptively scale the decision thresholds between 0 and 1, which reduces the difficulty of threshold setting; 3) they improve the performance of detecting and localizing attackers and show good adaptability to agents of different degrees; 4) they have strong robustness in scenarios with insufficient or mismatched training data. Preliminary numerical results demonstrate that the proposed AI-based strategies are conducive to solving the insider attack problem faced by the DPS algorithm.
The rest of the paper is organized as follows. In Section II, we describe the decentralized multi-agent system and the attack scheme against the DPS algorithm. In Section III, we review the score-based strategies and propose two AI-based defense strategies to detect and locate attackers. Section IV introduces a collaborative peer-to-peer training protocol for the NN, dealing with insufficient or mismatched samples available at different agents. Simulation results are given in Section V to confirm the effectiveness of the proposed strategies. We conclude this work in Section VI.
II. SYSTEM MODEL

We consider a multi-agent network which can be defined by an undirected graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$, wherein $\mathcal{V} = \{1, \cdots, n\}$ represents the set of all agents and $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$ represents the set of all edges. We define the set of the neighbor nodes of an agent $i \in \mathcal{V}$ by $\mathcal{N}_i = \{v_j \in \mathcal{V} : (v_i, v_j) \in \mathcal{E}\}$. All the agents in the distributed network follow a gossip-based optimization protocol; see Algorithm 1. That is, in each iteration of information exchange, an agent only directly communicates with its neighbors. We thus define a time-varying graph as $\mathcal{G}(t) := (\mathcal{V}, \mathcal{E}(t))$ for the $t$th iteration, and the associated weighted adjacency matrix is denoted by $A(t) \in \mathbb{R}^{n \times n}$, where $[A(t)]_{ij} := A_{ij}(t) = 0$ if $(v_j, v_i) \notin \mathcal{E}(t)$. For this network with $n$ agents, we have the following assumption:

**Assumption 1.** _There exists a scalar $\zeta \in (0, 1)$ such that for all $t \geq 1$ and $i = 1, \cdots, n$:_
- $A_{ij}(t) \geq \zeta$ _if_ $(i, j) \in \mathcal{E}(t)$;
- $\sum_{i=1}^{n} A_{ij}(t) = 1$, $A_{ij}(t) = A_{ji}(t)$;
- _the graph_ $(\mathcal{V}, \cup_{\ell=0}^{B} \mathcal{E}(t + \ell))$ _is connected for some_ $B < \infty$.
**Algorithm 1** The gossip-based optimization protocol
**Input:** Number of instances $K$, and iterations $T$.
**for** $k = 1, \cdots, K$ **do**
  Initial states: $x_i^k(0) = \beta_i^k \ \forall i \in \mathcal{V}$
  **for** $t = 1, \cdots, T$ **do**
    • Uniformly wake up a random agent $i \in \mathcal{V}$
    • Agent $i$ selects agent $j \in \mathcal{N}_i$ with probability $P_{ij}$
    • The trustworthy agents $i, j \in \mathcal{V}_t$ update their states according to the rules in (2).
    • The malicious agents follow the attack scheme and keep their original states, as seen in (3).
  **end for**
**end for**
The goal of these agents is to cooperatively solve the following optimization problem:
$$\min_{\boldsymbol{x}} f(\boldsymbol{x}) := \frac{1}{n}\sum_{i=1}^{n} f_i(\boldsymbol{x}) \quad \text{s.t.} \quad \boldsymbol{x} \in X, \tag{1}$$
where $X \subseteq \mathbb{R}^d$ is a closed convex set common to all agents and $f_i : \mathbb{R}^d \to \mathbb{R}$ is a local objective function of agent $i$. Herein, $f_i$ is a convex and not necessarily differentiable function which is only known to agent $i$.

In this setting, we denote the optimal value of problem (1) by $f^\star$. A decentralized solution to estimate $f^\star$ is the DPS algorithm [2]. In this algorithm, each agent locally updates the decision variable by fusing the estimates from its neighbors and then takes the subgradient at the updated decision variable as the descent direction for the current iteration. To be more specific, when applying this algorithm to solve problem (1), it performs the following iterations:
$$\bar{\boldsymbol{x}}_i(t) = \sum_{j=1}^{n} A_{ij}(t)\,\boldsymbol{x}_j(t), \qquad \boldsymbol{x}_i(t+1) = P_X\Big(\bar{\boldsymbol{x}}_i(t) - \gamma(t)\,\hat{\nabla} f_i\big(\bar{\boldsymbol{x}}_i(t)\big)\Big), \tag{2}$$
for $t \geq 1$, where $A_{ij}(t)$ is a non-negative weight and $\gamma(t) > 0$ is a diminishing stepsize. $P_X(\cdot)$ denotes the projection operation onto the set $X$, and $\hat{\nabla} f_i(\bar{\boldsymbol{x}}_i(t))$ is a subgradient at agent $i$ of the local function $f_i$ at $\boldsymbol{x} = \bar{\boldsymbol{x}}_i(t)$. Then, we have the following result:

**Fact 1.** [2] _Under Assumption 1, if $\|\hat{\nabla} f_i(\boldsymbol{x})\| \leq C_1$ for some $C_1$ and for all $\boldsymbol{x} \in X$, and the step size satisfies $\sum_{t=1}^{\infty} \gamma(t) = \infty$, $\sum_{t=1}^{\infty} \gamma^2(t) < \infty$, then for all $i, j \in \mathcal{V}$ we have_
$$\lim_{t\to\infty} f(\boldsymbol{x}_i(t)) = f^\star \quad \text{and} \quad \lim_{t\to\infty} \|\boldsymbol{x}_i(t) - \boldsymbol{x}_j(t)\| = 0.$$

The above fact tells us that for these convex problems, the DPS method will converge to an optimal solution of problem (1). Next, we discuss how this convergence changes when there is an attack within the network.

_A. Data Injection Attack From Insider_

In this setting, we assume that the set of agents $\mathcal{V}$ can be divided into two subsets: the set of trustworthy agents $\mathcal{V}_t$ and the set of malicious agents (attackers) $\mathcal{V}_m$, as seen in Fig. 1. We have $\mathcal{V} = \mathcal{V}_t \cup \mathcal{V}_m$ and $n = |\mathcal{V}_t| + |\mathcal{V}_m|$. In our attack model, attackers are defined as agents whose estimates (or states) cannot be affected by other agents, and these coordinated attackers try to drag the trustworthy agents to their desired value. If $i \in \mathcal{V}_t$, a trustworthy agent will perform the rules in (2). Otherwise, an attacker $j \in \mathcal{V}_m$ will update its state with the following rule:
$$\boldsymbol{x}_j(t) = \boldsymbol{\alpha} + \boldsymbol{r}_j(t), \quad \forall j \in \mathcal{V}_m, \tag{3}$$
where $\boldsymbol{\alpha}$ is the target value of the attackers and $\boldsymbol{r}_j(t)$ is an artificial noise generated by the attackers to confuse the trustworthy agents. If there is more than one attacker in the network, we assume that they coordinate with each other to converge to the desired value $\boldsymbol{\alpha}$. Meanwhile, to disguise the attack, they independently generate artificial noise $\boldsymbol{r}_j(t)$ which decays exponentially with time, i.e., $\lim_{t\to\infty} \|\boldsymbol{r}_j(t)\| = 0$ for all $j \in \mathcal{V}_m$.

For the time-varying network, let $\mathcal{E}(\mathcal{V}_t; t)$ be the edge set of the subgraph of $\mathcal{G}(t)$ with only the trustworthy agents in $\mathcal{V}_t$. The following assumption is needed to ensure a successful attack on the DPS algorithm:

**Assumption 2.** _There exist $B_1, B_2 < \infty$ such that for all $t \geq 1$: 1) the composite sub-graph $(\mathcal{V}_t, \cup_{\ell=t+1}^{t+B_1} \mathcal{E}(\mathcal{V}_t; \ell))$ is connected; 2) there exists a pair $i \in \mathcal{V}_t$, $j \in \mathcal{V}_m$ with $(i, j) \in \mathcal{E}(t) \cup \ldots \cup \mathcal{E}(t + B_2 - 1)$._

Based on this assumption, we have the following fact:

**Fact 2.** [13] _Under Assumptions 1 and 2, if $\|\hat{\nabla} f_i(\boldsymbol{x})\| \leq C_2$ for some $C_2$ and for all $\boldsymbol{x} \in X$, and $\gamma(t) \to 0$, we have:_
$$\lim_{t\to\infty} \max_{i\in\mathcal{V}_t} \|\boldsymbol{x}_i(t) - \boldsymbol{\alpha}\| = 0.$$

This fact implies that under this attack scheme, the attackers will succeed in steering the final states. This was also proved in our previous work [13].
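As a concrete illustration of the interplay between the DPS update (2) and the attack rule (3), the following is a minimal Python sketch of Algorithm 1 on a toy least-squares instance. The random pairing, box projection, step size, and noise decay rate are illustrative assumptions, not the exact simulation settings of Section V.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, T = 9, 2, 2000
attackers = {0}                          # agent 0 plays the stubborn attacker
alpha = rng.uniform(-0.5, 0.5, d)        # attackers' target value in (3)

theta = rng.uniform(0.5, 2.5, (n, d))    # toy least-squares data
x_star = rng.uniform(0.0, 1.0, d)
phi = theta @ x_star                     # phi_i = theta_i^T x_star

x = rng.uniform(0.0, 1.0, (n, d))        # initial states beta_i
x[list(attackers)] = alpha

def subgrad(i, z):
    # gradient of f_i(z) = (theta_i^T z - phi_i)^2
    return 2.0 * (theta[i] @ z - phi[i]) * theta[i]

for t in range(1, T + 1):
    i, j = rng.choice(n, size=2, replace=False)   # a random gossiping pair
    x_bar = 0.5 * (x[i] + x[j])                   # averaging step of (2)
    for a in (i, j):
        if a in attackers:
            # attack rule (3): hold the target plus decaying noise
            x[a] = alpha + rng.uniform(-0.9**t, 0.9**t, d)
        else:
            # projected subgradient step of (2), X taken as a box
            x[a] = np.clip(x_bar - (1.0 / t) * subgrad(a, x_bar), -10.0, 10.0)

print(np.round(x, 3))    # trustworthy states drift toward alpha, as in Fact 2
```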
III. DETECTION AND LOCALIZATION STRATEGIES

The DPS algorithm runs in a fully decentralized fashion at the trustworthy agents $i \in \mathcal{V}_t$. The neighborhood detection (ND) task and neighborhood localization (NL) task are then introduced for detecting and localizing attackers. To facilitate our discussion, we consider the following hypotheses. The ND task is defined as follows:
$$H_0^i : \mathcal{N}_i \cap \mathcal{V}_m = \emptyset, \ \text{no neighbor is an attacker}; \qquad H_1^i : \mathcal{N}_i \cap \mathcal{V}_m \neq \emptyset, \ \text{at least one neighbor is an attacker}, \tag{4}$$
where $H_0^i$ and $H_1^i$ are the two events of agent $i$ for the ND task. When event $H_1^i$ is true at agent $i$, the second task is to check whether the neighbor $j \in \mathcal{N}_i$ is an attacker. The NL task is defined as follows:
$$H_0^{ij} : j \notin \mathcal{V}_m, \ \text{neighbor } j \text{ is not an attacker}; \qquad H_1^{ij} : j \in \mathcal{V}_m, \ \text{neighbor } j \text{ is an attacker}, \tag{5}$$
where $H_0^{ij}$ and $H_1^{ij}$ are the two events of agent $i$ for the NL task. If event $H_1^{ij}$ is true, we say that the attacker is localized.
Fig. 1. Neighborhood tasks in the attack detection scheme. Each trustworthy agent performs the ND and NL tasks independently to isolate attackers from the network.
We remark that such hypotheses were also made in previous work [19], [36]. An illustration of the neighborhood detection and localization tasks is shown in Fig. 1. Notice that the NL task is executed only if the event $H_1^i$ in the ND task is true. Moreover, once the attacker is localized, trustworthy agents will disconnect from the attacker in the next communication. In this way, it is expected that the network can exclude all the attackers from the network.
To proceed with our tasks, we run the asynchronous gossip-based optimization algorithm (Algorithm 1) for $K$ instances. We denote $\tilde{X}_i^k$ as the neighborhood state matrix collected by agent $i$ in the $k$th instance, $k \in [1, \cdots, K]$. The ND and NL tasks can be described as follows:
$$\tilde{X}_i^k := [\boldsymbol{x}_i^k, \boldsymbol{x}_1^k, \cdots, \boldsymbol{x}_j^k, \cdots, \boldsymbol{x}_{|\mathcal{N}_i|}^k]^\top \quad \forall j \in \mathcal{N}_i, \tag{6}$$
$$y_i = F_{\text{ND}}(\tilde{X}_i^1, \cdots, \tilde{X}_i^K) \ \overset{H_1^i}{\underset{H_0^i}{\gtrless}} \ \delta, \tag{7}$$
$$z_{ij} = F_{\text{NL}}(\tilde{X}_i^1, \cdots, \tilde{X}_i^K) \ \overset{H_1^{ij}}{\underset{H_0^{ij}}{\gtrless}} \ \epsilon, \tag{8}$$
where $\boldsymbol{x}_j^k \in \mathbb{R}^d$ is the state vector of agent $j \in \mathcal{N}_i$, which can be directly obtained by agent $i \in \mathcal{V}_t$ from its neighbors, $y_i \in \mathbb{R}$ is a metric that indicates whether an attacker is present in the neighborhood of agent $i$, and $\boldsymbol{z}_i = [z_{i1}, \cdots, z_{i|\mathcal{N}_i|}]^\top \in \mathbb{R}^{|\mathcal{N}_i|}$ is the metric vector for the localization task. Herein, $\delta > 0$ and $\epsilon > 0$ are some pre-designed thresholds.
On top of the detection and localization strategies, we have an important assumption about the initial states:

**Assumption 3.** _We have prior information about the expected initial states: the mean of the attackers, $\mathbb{E}[\boldsymbol{x}_j^k(0)] = \bar{\boldsymbol{\alpha}}$, $j \in \mathcal{V}_m$, and of the trustworthy agents, $\mathbb{E}[\boldsymbol{x}_i^k(0)] = \bar{\boldsymbol{\beta}}$, $i \in \mathcal{V}_t$. Moreover, $\bar{\boldsymbol{\alpha}} \neq \bar{\boldsymbol{\beta}}$ in general._

Note that this assumption is practical, as the attacker always aims at dragging the trustworthy agents to its desired value, which is usually different from the optimal solution. Otherwise, we may not consider it a meaningful attack.
**Remark 1.** _$F_{\text{ND}}(\cdot)$ and $F_{\text{NL}}(\cdot)$ are statistical decision functions judiciously designed for the ND and NL tasks, respectively. For each agent $i \in \mathcal{V}_t$, these decision functions are used to calculate the criterion metrics to identify attackers._
_A. The Score-based Method_

As a remedy to protect these distributed optimization algorithms, score-based methods stemming from statistical techniques have been studied in [18], [19]. For the gossip-based DPS algorithm, a temporal difference (TD) strategy [13] and a spatial difference (SD) strategy [36] have been proposed to detect and localize the attackers; these two strategies are reviewed below.
_1) Temporal Difference Strategy:_ Since the expected initial states (means) of the attackers and the trustworthy agents are different, when $t \to \infty$ the network will be misled by the attackers to $\mathbb{E}[\boldsymbol{x}_j^k(\infty)] = \bar{\boldsymbol{\alpha}} = \mathbb{E}[\boldsymbol{x}_i^k(\infty)]$. This implies that the difference between the initial state and the steady state can be used to detect anomalies. For each trustworthy agent $i \in \mathcal{V}_t$, the following score can be evaluated:[1]
$$\xi_{ij} := \frac{1}{Kd} \sum_{k=1}^{K} \mathbf{1}^\top \big(\boldsymbol{x}_j^k(T) - \boldsymbol{x}_j^k(0)\big), \quad j \in \mathcal{N}_i. \tag{9}$$
Herein, $T \to \infty$ is sufficiently large, $d$ is the state dimension of the agents, and $\mathbf{1}$ is an all-one vector. $\boldsymbol{x}_j^k(T)$ and $\boldsymbol{x}_j^k(0)$ are respectively the last and the first state of agent $j$ observed by agent $i$. To discern the events in the ND task, the detection criterion is defined as follows:
$$\hat{y}_i := \frac{1}{|\mathcal{N}_i|} \sum_{j\in\mathcal{N}_i} \big|\xi_{ij} - \bar{\xi}_i\big| \ \overset{H_0^i}{\underset{H_1^i}{\lessgtr}} \ \delta_{\text{TD}}, \tag{10}$$
where $\bar{\xi}_i = (1/|\mathcal{N}_i|) \sum_{j\in\mathcal{N}_i} \xi_{ij}$ is the neighborhood average of agent $i$. Intuitively, $\mathbb{E}[\hat{y}_i] = 0$ when the event $H_0^i$ is true; otherwise $\mathbb{E}[\hat{y}_i] \neq 0$ when the event $H_1^i$ is true. $\delta_{\text{TD}}$ is a pre-designed threshold of the ND task.

For the NL task, the two events $H_1^{ij}$ and $H_0^{ij}$ are checked by the following criterion:
$$\hat{z}_{ij} := |\xi_{ij}| \ \overset{H_1^{ij}}{\underset{H_0^{ij}}{\lessgtr}} \ \epsilon_{\text{TD}}, \quad \forall j \in \mathcal{N}_i. \tag{11}$$
Herein, $\epsilon_{\text{TD}}$ is a pre-designed threshold used to identify which neighbor is the attacker. Note that $\mathbb{E}[\hat{z}_{ij}]$ is close to 0 if agent $j$ is an attacker, as seen in (9).

[1] For each instance $k$, each agent evaluates $\Delta_j(t) \triangleq \boldsymbol{x}_j^k(t) - \boldsymbol{x}_j^k(t-1)$ at each iteration $t$ and sums it over all the iterations to obtain $\boldsymbol{x}_j^k(T) - \boldsymbol{x}_j^k(0)$.
_2) Spatial Difference Strategy:_ According to (3), the attackers always try to mislead the network to their desired value, and thus the transient states in the network will also be affected during the attack process. Unlike the TD method, which only uses the initial state and the steady state, the transient states are considered in the SD method for better performance. We expect that the expected state difference $\mathbb{E}[\boldsymbol{x}_i^k(t) - \boldsymbol{x}_j^k(t)]$ between neighbor $j$ and monitoring agent $i$ will behave differently under events $H_0$ and $H_1$, i.e., for $j \in \mathcal{N}_i$ and $0 < t < \infty$. For the ND task, agent $i$ evaluates the following metrics:
$$\boldsymbol{\phi}_{ij}^k := \sum_{t=0}^{T} \big(\boldsymbol{x}_j^k(t) - \bar{\boldsymbol{x}}_i^k(t)\big), \quad j \in \mathcal{N}_i, \tag{12}$$
$$\check{y}_i := \frac{1}{|\mathcal{N}_i|} \sum_{j\in\mathcal{N}_i} \Big( \frac{1}{Kd} \sum_{k=1}^{K} \mathbf{1}^\top \boldsymbol{\phi}_{ij}^k \Big)^2 \ \overset{H_0^i}{\underset{H_1^i}{\lessgtr}} \ \delta_{\text{SD}}, \tag{13}$$
where
$$\bar{\boldsymbol{x}}_i^k(t) = \frac{1}{|\mathcal{N}_i \cup i|} \sum_{j\in\{\mathcal{N}_i\cup i\}} \boldsymbol{x}_j^k(t) \tag{14}$$
is the neighborhood average of agent $i$ at iteration $t$ in instance $k$. $\boldsymbol{\phi}_{ij}^k$ is the sum of differences between neighbor agent $j$ and the neighborhood average $\bar{\boldsymbol{x}}_i^k(t)$ over all iterations. $\delta_{\text{SD}}$ is a pre-designed threshold.

For the NL task, we compare the state of neighbor agent $j$ with agent $i$ to check the events in (5). The following criteria are used:
$$\bar{\boldsymbol{\phi}}_{ij}^k := \sum_{t=0}^{T} \big(\boldsymbol{x}_j^k(t) - \bar{\boldsymbol{x}}_i^k(t)\big) - \boldsymbol{\phi}_{ii}^k, \quad j \in \mathcal{N}_i, \tag{15}$$
$$\check{z}_{ij} := \Big( \frac{1}{Kd} \sum_{k=1}^{K} \mathbf{1}^\top \bar{\boldsymbol{\phi}}_{ij}^k \Big)^2 \ \overset{H_0^{ij}}{\underset{H_1^{ij}}{\lessgtr}} \ \epsilon_{\text{SD}}, \quad \forall j \in \mathcal{N}_i, \tag{16}$$
where $\boldsymbol{\phi}_{ii}^k$ is calculated by agent $i$ itself, as seen in (12). $\bar{\boldsymbol{\phi}}_{ij}^k$ is the metric between agent $i$ and agent $j$, and $\epsilon_{\text{SD}}$ is a pre-designed threshold used to identify the attacker.
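Analogously, the SD metrics (12)-(16) can be sketched as follows; the trajectory layout is again an illustrative assumption, and only the aggregate statistics prescribed by the equations are computed.

```python
import numpy as np

def sd_metrics(traj_self, traj_nbrs):
    """SD metric (12). traj_self: (K, T+1, d) states of agent i itself;
    traj_nbrs: (K, M, T+1, d) states of its M neighbors."""
    K, M, _, d = traj_nbrs.shape
    # neighborhood average over {N_i ∪ i}, as in (14)
    avg = (traj_nbrs.sum(axis=1) + traj_self) / (M + 1)      # (K, T+1, d)
    phi = (traj_nbrs - avg[:, None]).sum(axis=2)             # phi_ij^k, (K, M, d)
    phi_ii = (traj_self - avg).sum(axis=1)                   # phi_ii^k, (K, d)
    return phi, phi_ii, K, d

def sd_detect(phi, K, d, delta_sd):
    """ND criterion (13): declare H1 when the averaged squared score
    exceeds delta_SD."""
    inner = phi.sum(axis=(0, 2)) / (K * d)                   # (M,)
    return (inner ** 2).mean() > delta_sd

def sd_localize(phi, phi_ii, K, d, eps_sd):
    """NL criteria (15)-(16), subtracting the agent's own phi_ii."""
    inner = (phi - phi_ii[:, None]).sum(axis=(0, 2)) / (K * d)
    return inner ** 2 > eps_sd                               # True -> H1
```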
_B. The AI-based Method_

In fact, the reason why (10), (11), (13) and (16) take effect is that anomalies cause the measured metrics to behave statistically differently. In such score-based methods, the decision functions of the ND and NL tasks are approximately linear or quadratic functions, which fuse the states obtained by agent $i$ into a scalar score for classification. A natural question that follows is whether there exist more sophisticated nonlinear functions that can better classify the events in the two neighborhood tasks. This is a natural application of AI technology for learning the complex mapping relationships in a classification problem.

In the following, we propose to apply an NN to handle the ND and NL tasks. Let $M = \max_i |\mathcal{N}_i|$ be the input dimension of these NNs. Then, the NNs can be trained at each monitoring agent in an offline manner using data collected from each agent. To facilitate our approach, we use the following process to collect training data for the AI-based methods:

**Assumption 4.** _Assume that we have set up a training data collecting process which contains $P$ training networks $\mathcal{G}_p = (\mathcal{V}, \mathcal{E})$ for $p \in \{1, 2, \cdots, P\}$. For each network $\mathcal{G}_p$, a randomly chosen agent[2] takes the role of an attacker. Based on Assumption 3, we run the asynchronous gossip-based optimization algorithm (Algorithm 1) for $K$ instances and record $\tilde{X}_i^k$ as the data samples with the ground truth label '1' for event $H_1^i$, where agent $i$ is either in the neighborhood of (next to) the attacker or beyond the neighborhood of (far from) the attacker._

We remark here that $\tilde{X}_i^k$ is the local data collected by agent $i$, which is not allowed to be exchanged among agents. On the other hand, the ground truth label '0' for event $H_0^i$ can easily be obtained by running the gossip-based algorithm on $\mathcal{G}$. We also remark that how to specifically set up the training data collecting process is a challenging problem, but it is beyond the scope of this work. Herein, we simply assume that each agent can obtain its own training data with correct labels. Other technical problems concerning the details of the training process will be addressed in another work.

Fig. 2. The TDNN method at trustworthy agent $i$: (Left) NN for the ND task, (Right) NN for the NL task. SDNN shares a similar structure with TDNN.

_1) Temporal Difference Strategy via NN:_ Armed with training data, we propose a method called TDNN, which uses the temporal difference values as the input of the NN to perform the neighborhood tasks, as illustrated in Fig. 2. Based on the metric in (9), the inputs for the two neighborhood tasks are as follows:
$$\boldsymbol{a}^0 = \hat{\boldsymbol{a}}^0 = [\xi_{i1}, \xi_{i2}, \cdots, \xi_{iM}]^\top, \tag{17}$$
where $\xi_{ij}$ can be obtained by agent $i$. For the ND task, the computation process of the NN can be described as follows:
$$\boldsymbol{a}^h = \sigma(\boldsymbol{W}^h \boldsymbol{a}^{h-1} + \boldsymbol{b}^h), \quad h = 1, \ldots, n-1; \tag{18}$$
$$\tilde{y}_i = g(\boldsymbol{W}^n \boldsymbol{a}^{n-1} + \boldsymbol{b}^n), \quad \tilde{y}_i \ \overset{H_1^i}{\underset{H_0^i}{\gtrless}} \ \delta_{\text{NN}}, \tag{19}$$
where $\boldsymbol{a}^0$ is the input of the NN, $\sigma(\cdot)$ is the activation function, $g(\cdot)$ is the sigmoid function defined as $g(x) = 1/(1 + e^{-x})$, $\boldsymbol{W}^h \in \mathbb{R}^{L_h \times L_{h-1}}$ is the weight matrix between layer $h$ and layer $h-1$, $L_h$ represents the number of neurons in layer $h$, and $\boldsymbol{b}^h \in \mathbb{R}^{L_h}$ and $\boldsymbol{a}^h \in \mathbb{R}^{L_h}$ are the bias vector and the activation output in layer $h$, respectively. $\tilde{y}_i \in \mathbb{R}$ is the expected output, and $\delta_{\text{NN}} \in [0, 1]$ is some prescribed threshold for the detection task.

For the NL task, a similar NN structure is used, except for the number of neurons in the output layer. The design is given as follows:
$$\hat{\boldsymbol{a}}^h = \sigma(\hat{\boldsymbol{W}}^h \hat{\boldsymbol{a}}^{h-1} + \hat{\boldsymbol{b}}^h), \quad h = 1, \ldots, n-1; \tag{20}$$
$$\tilde{\boldsymbol{z}}_i = g(\hat{\boldsymbol{W}}^n \hat{\boldsymbol{a}}^{n-1} + \hat{\boldsymbol{b}}^n), \quad \tilde{z}_{ij} \ \overset{H_1^{ij}}{\underset{H_0^{ij}}{\gtrless}} \ \epsilon_{\text{NN}}, \tag{21}$$
where $\hat{\boldsymbol{a}}^0$ is the input of the NN, $\hat{\boldsymbol{W}}^h$, $\hat{\boldsymbol{b}}^h$, and $\hat{\boldsymbol{a}}^h$ are the weight matrix, bias term, and activation output of the NN, respectively, $\tilde{\boldsymbol{z}}_i = [\tilde{z}_{i1}, \cdots, \tilde{z}_{iM}] \in \mathbb{R}^M$ is the expected output of the NL task, and $\epsilon_{\text{NN}} \in [0, 1]$ is some prescribed threshold. Notice that the actual output is encoded as a one-hot vector during the training stage, such as $\tilde{\boldsymbol{z}}_i = \boldsymbol{e}_j$ if $j \in \mathcal{V}_m$; see Fig. 2 (Right).

[2] There could be more than one attacker in the training network; here we simply consider the single-attacker case.
_2) Spatial Difference Strategy via NN:_ Both TD and TDNN only utilize the initial state and the steady state of the agents rather than the transient states, leading to the possibility of losing some key features in the neighborhood tasks. In particular, the neighborhood transient state information is not effectively utilized for extracting key classification features. Therefore, we propose a strategy called SDNN to improve the detection and localization performance by using the transient states and an NN. As a malicious agent always tries to influence and steer the trustworthy agents away from the true value, we have $\mathbb{E}[\boldsymbol{x}_j^k(t) - \bar{\boldsymbol{x}}_i^k(t)\,|\,H_1^{ij}] \neq \mathbb{E}[\boldsymbol{x}_j^k(t) - \bar{\boldsymbol{x}}_i^k(t)\,|\,H_0^{ij}]$. Thus, we can compare the state of neighbor agent $j$ and the neighborhood average of agent $i$ over time. The metrics for the ND and NL tasks can be described as follows:
$$\boldsymbol{s}_{ij}^k := \sum_{t=0}^{T} \big(\boldsymbol{x}_j^k(t) - \bar{\boldsymbol{x}}_i^k(t)\big), \quad j \in \mathcal{N}_i, \tag{22}$$
$$\chi_{ij} := \frac{1}{Kd} \sum_{k=1}^{K} \mathbf{1}^\top \boldsymbol{s}_{ij}^k, \quad j \in \mathcal{N}_i, \tag{23}$$
where $\boldsymbol{x}_j^k(t)$ is the $t$th state of agent $j$ in instance $k$, and $\boldsymbol{s}_{ij}^k$ is the sum of statistical differences between agent $j$ and the neighborhood average of agent $i$. Note that $\bar{\boldsymbol{x}}_i^k(t)$ has been defined in (14).

Herein, our goal is to accurately detect insider attacks and identify whether the attacker appears in the neighborhood of agent $i$. The detection structures of SDNN are similar to those of TDNN, as seen in Fig. 2. Therein, we use the following inputs for the NN models of the ND and NL tasks:
$$\boldsymbol{a}^0 = \hat{\boldsymbol{a}}^0 = [\chi_{i1}, \chi_{i2}, \cdots, \chi_{iM}]^\top. \tag{24}$$
IV. COLLABORATIVE LEARNING FOR A ROBUST MODEL

In the previous sections, we have introduced how to use an NN to help detect and localize the insider attackers. Our training data comes from a training data collecting process under Assumption 4, wherein the local data samples $\tilde{X}_i^k$ are collected by agent $i$, which could be within or beyond the neighborhood of the attacker. Apparently, the optimal way to train is to upload all agents' data to a fusion center and train the model in a centralized manner. In practice, however, collecting data from decentralized data sources at a center is hard due to storage and bandwidth limitations. On the other hand, as running a gossip algorithm is time-consuming, it is usually difficult and expensive to collect sufficient data at each agent. For example, the attack could occur far from the monitoring agent while the training data only contains samples from a neighboring attacker. As the training samples may not well represent the general attack network, an individually trained NN may not fit all insider attack events. To alleviate these issues, we propose a collaborative peer-to-peer protocol to facilitate training our NN models. Before we go into the details, we recall three assumptions for the proposed collaborative learning process. First, we assume that all agents in the network have an equal number of neighbors (this is somewhat impractical, but we resolve it later). Also, different agents collect their own training data, with the advantages of high scalability and privacy preservation. Moreover, we allow the trustworthy agents to have correctly labeled samples from the ND and NL tasks. For instance, the samples labeled in the current training round can be used in the next training round. In the following, we will see how to share AI-based models between different agents to achieve robust performance in the ND and NL tasks.

_A. The Distributed Collaborative Training Process_

The goal of collaborative training is for the participating agents, acting as local learners, to train good local models (i.e., NN models) through gossip exchanges. That is, an agent $i \in \mathcal{V}$ aims to train a model that also performs well with respect to the data points available at other agents. For distributed collaborative training, the standard unconstrained empirical risk minimization problem used in machine learning (such as for NNs) can be described as follows [39]:
$$\min_{\boldsymbol{W}} L(\boldsymbol{W}) = \min \frac{1}{n} \sum_{i\in\mathcal{V}} L_i(\boldsymbol{W}), \tag{25}$$
where $\boldsymbol{W}$ is the parameter of the NN model and $L_i(\cdot)$ is a local objective function of agent $i$, which is defined as the expected loss on the local data set. The local objective is to minimize the expected loss over its local samples:
$$L_i(\boldsymbol{W}) = \mathbb{E}_{\varsigma \sim I_i} [\ell(\boldsymbol{W}, \varsigma)], \tag{26}$$
where $\varsigma$ is a pair variable, composed of an input and the related label, following the unknown probability distribution $I_i$, which is specific to the sample set received by agent $i$. $\ell(\cdot)$ is a loss function used to quantify the prediction error on $\varsigma$. Let $D_i = \{\varsigma_1, \cdots, \varsigma_q\}$ represent the set of training data at agent $i \in \mathcal{V}$, which contains $q$ samples. Thus, we have $D = D_1 \cup \cdots \cup D_n$ to optimize problem (25):
$$\min_{\boldsymbol{W}} L(\boldsymbol{W}) = \min \frac{1}{n} \sum_{i\in\mathcal{V}} \frac{1}{q} \sum_{\varsigma\in D_i} \ell(\boldsymbol{W}, \varsigma), \tag{27}$$
where $L_i(\boldsymbol{W}) = \frac{1}{q}\sum_{\varsigma\in D_i} \ell(\boldsymbol{W}, \varsigma)$. This formulation enables us to state the optimization problem (25) in a distributed manner. This distributed collaborative training problem can be addressed by gossip exchanges [41], [42], which we detail as follows.
_B. The Gossip Stochastic Gradient Descent Strategy_

Gossip learning is a method to learn models from fully distributed data without central control [9]. The skeleton of the gossip learning protocol is shown in Algorithm 2. Therein, during the training stage, each agent $i$ has an NN with the same architecture and initializes a local model with parameters $\boldsymbol{W}_i$. This is then sent to another agent $j \in \mathcal{N}_i$ in the network periodically with probability $P_{ij}$. Upon receiving a model $\boldsymbol{W}_r$, agent $i$ merges it with the local model and updates it using the local data set $D_i$. We utilize the stochastic gradient descent (SGD) algorithm to estimate the local parameters $\boldsymbol{W}_i$ [43], as follows:
$$\boldsymbol{W}_i \leftarrow \boldsymbol{W}_i(1 - \mu) + \mu \boldsymbol{W}_r, \quad \mu \in [0, 1], \tag{28}$$
$$\boldsymbol{W}_i \leftarrow \boldsymbol{W}_i - \eta \hat{\nabla} L_i(\boldsymbol{W}_i), \quad i \in \mathcal{V}, \tag{29}$$
where $\eta$ and $\hat{\nabla} L_i(\cdot)$ are the learning rate and the estimated gradient of agent $i$, respectively, and $\mu \in [0, 1]$ is a weight used to merge the received model $\boldsymbol{W}_r$. Herein, MERGEMODEL($\boldsymbol{W}_r$, $\boldsymbol{W}_i$) is the merging process shown in (28), which is typically achieved by averaging the model parameters, i.e., $\mu = 0.5$.

**Algorithm 2** Gossip training for AI-based methods
**Input:** $P_{ij}$: probability of exchange, $\eta$: learning rate.
**Initialize:** $\boldsymbol{W}_i$ is initialized randomly, $i \in \mathcal{V}$.
**repeat**
  • MERGEMODEL($\boldsymbol{W}_r$, $\boldsymbol{W}_i$) as in (28)
  • $\boldsymbol{W}_i \leftarrow \boldsymbol{W}_i - \eta \hat{\nabla} L_i(\boldsymbol{W}_i)$: agent $i$ updates its parameters
  • Agent $i$ sends $\boldsymbol{W}_i$ to agent $j \in \mathcal{N}_i$ with probability $P_{ij}$
**until** maximum iteration reached
**function** MERGEMODEL($\boldsymbol{W}_r$, $\boldsymbol{W}_i$)
  $\boldsymbol{W}_i \leftarrow \boldsymbol{W}_i(1 - \mu) + \mu \boldsymbol{W}_r$, $\mu \in [0, 1]$.
**end function**
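A minimal sketch of the two parameter updates (28)-(29) as they would appear in one gossip round is given below; the list-of-arrays representation of $\boldsymbol{W}$ and the dummy gradient function are illustrative assumptions.

```python
import numpy as np

def merge_model(W_i, W_r, mu=0.5):
    """Eq. (28): blend the received parameters into the local model."""
    return [(1.0 - mu) * wi + mu * wr for wi, wr in zip(W_i, W_r)]

def local_sgd_step(W_i, grad_fn, eta=0.01):
    """Eq. (29): one stochastic gradient step on the local loss L_i."""
    return [w - eta * g for w, g in zip(W_i, grad_fn(W_i))]

# One gossip round at agent i: merge a received model, update locally,
# then (not shown) forward W_i to a neighbor j with probability P_ij.
W_i = [np.zeros((3, 2)), np.zeros(3)]
W_r = [np.ones((3, 2)), np.ones(3)]
W_i = merge_model(W_i, W_r)
W_i = local_sgd_step(W_i, lambda W: [np.ones_like(w) for w in W])
print([w.shape for w in W_i])
```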
_C. The Tailor-degree Network_

We have introduced the application of the NN in detecting and localizing attackers, assuming that each normal agent in the network has exactly $M$ neighbors. Inevitably, a real communication network is irregular, where some agents have a heterogeneous number of neighbors. In order to adapt to scenarios with agents of different degrees $|\mathcal{N}_i|$, we tailor our $M$-input NN to fit the scenario where a normal agent has $|\mathcal{N}_i| \neq M$ neighbors. Two scenarios are considered in this subsection. In the first scenario, we consider the case of $|\mathcal{N}_i| > M$. The $|\mathcal{N}_i|$ neighbors are divided into $\lceil |\mathcal{N}_i|/M \rceil$ potentially overlapping groups. In the ND and NL tasks, each group contains exactly $M$ agents, which can be treated as a standard neighbor set for the TDNN and SDNN methods. Thus, these two tasks can be implemented with the unified NN model, as seen in Fig. 3. On the other hand, if we have $|\mathcal{N}_i| < M$, the deficient values in the input vector are replaced by a reference value to fit a degree-$|\mathcal{N}_i|$ agent. For the TDNN method, the input is reconstructed as $\boldsymbol{a}^0(\hat{\boldsymbol{a}}^0) \in \mathbb{R}^M = [\xi_{i1}, \cdots, \xi_{i|\mathcal{N}_i|}, \xi_{ii}, \ldots, \xi_{ii}]^\top$, wherein $\xi_{ii}$ is the temporal difference value of agent $i$ itself. For the SDNN method, the deficient values of the input vector are replaced by $\chi_{ii}$, and the input is reconstructed as $\boldsymbol{a}^0(\hat{\boldsymbol{a}}^0) \in \mathbb{R}^M = [\chi_{i1}, \cdots, \chi_{i|\mathcal{N}_i|}, \chi_{ii}, \ldots, \chi_{ii}]^\top$ when $|\mathcal{N}_i| < M$.

Fig. 3. An example of tailoring neighbor agents.
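The tailoring rules can be sketched as follows, assuming the per-neighbor scores ($\xi$ or $\chi$) have already been computed; the backwards-overlapping last window is one possible reading of the overlapping-groups rule, not a prescription from the text.

```python
import numpy as np

def tailor_inputs(scores, self_score, M):
    """Build fixed-size M-dimensional NN inputs for an agent with
    |N_i| != M neighbors: overlapping groups when |N_i| > M, padding
    with the agent's own reference value (xi_ii or chi_ii) otherwise."""
    k = len(scores)
    if k >= M:
        groups = []
        for start in range(0, k, M):
            g = scores[start:start + M]
            if len(g) < M:                 # let the last group overlap back
                g = scores[k - M:k]
            groups.append(np.asarray(g, dtype=float))
        return groups
    pad = [self_score] * (M - k)           # deficient inputs -> reference
    return [np.asarray(list(scores) + pad, dtype=float)]

print(tailor_inputs([0.1, 0.2], self_score=0.05, M=4))            # p = 2 case
print(tailor_inputs([0.1, 0.2, 0.3, 0.4, 0.5], self_score=0.05, M=4))
```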
TABLE I
TEST SCENARIO SETTINGS FOR MISMATCHED DATA

| Scenario | Mean | Deviation | Initial distribution |
|----------|------|-----------|----------------------|
| S0 | 0.5 | 1.0 | $\boldsymbol{\beta} \sim \mathcal{U}[0.0, 1.0]^d$ |
| S1 | 0.5 | 0.6 | $\boldsymbol{\beta} \sim \mathcal{U}[0.2, 0.8]^d$ |
| S2 | 0.5 | 1.4 | $\boldsymbol{\beta} \sim \mathcal{U}[-0.2, 1.2]^d$ |
| S3 | 0.7 | 1.0 | $\boldsymbol{\beta} \sim \mathcal{U}[0.2, 1.2]^d$ |
| S4 | 0.3 | 1.0 | $\boldsymbol{\beta} \sim \mathcal{U}[-0.2, 0.8]^d$ |
_Remark:_ Typically, the training data used to train the AI-based methods is collected by trustworthy agents under a scenario with specific prior information $\boldsymbol{\beta}$. In practice, the prior information of the gossip-based DPS optimization protocol will change in some particular scenarios; that is, the test data is statistically mismatched with the training data. To further verify the robustness of the AI-based detection and localization models, we generate the test data by keeping the target value of the attackers $\boldsymbol{\alpha}$ and changing the mean and the deviation of $\boldsymbol{\beta}$. As depicted in Section V, we set $\boldsymbol{\beta} \sim \mathcal{U}[0, 1]^d$; several test scenarios are then defined in Table I.
V. NUMERICAL RESULTS AND ANALYSIS

In this section, numerical results are presented to validate the effectiveness of the proposed AI-based methods in the neighborhood tasks. The DPS algorithm runs on a Manhattan network with $n = 9$ agents, as shown in Fig. 4. In our experiment, an example of the least-squares optimization problem is considered; i.e., in (1) we set
$$f^k(\boldsymbol{x}) = \sum_{i=1}^{n} f_i^k(\boldsymbol{x}) = \sum_{i=1}^{n} \big\|(\boldsymbol{\theta}_i^k)^\top \boldsymbol{x}^k - \varphi_i^k\big\|^2, \quad k = 1, \ldots, K.$$
Herein, $f_i^k$ is a utility function at agent $i$. As shown in Algorithm 1, the DPS algorithm runs in an asynchronous manner such that an agent $i$ randomly selects an agent $j$ with probability $[P]_{ij} = P_{ij} = 1/|\mathcal{N}_i|$. Thus the expected transition matrix in iteration $t$ can be written as $\mathbb{E}[A(t)] = I - \frac{1}{2n}\Sigma + \frac{P + P^\top}{2n}$, where $\Sigma$ is a diagonal matrix with $[\Sigma]_{ii} = \sum_{j=1}^{n}(P_{ij} + P_{ji})$. In each instance, we set $d = 2$, $T = 2000$, the initialization $\boldsymbol{x}^k(0) \sim \mathcal{U}[0, 1]^d$, $\boldsymbol{\alpha}^k \sim \mathcal{U}[-0.5, 0.5]^d$, and $\boldsymbol{r}_j^k(t) \sim \mathcal{U}[-\hat{\lambda}^t, \hat{\lambda}^t]$, where $\hat{\lambda}$ is the second largest eigenvalue of $\mathbb{E}[A(t)]$. In particular, to serve our purpose we vary the function $f_i^k(\boldsymbol{x})$ by randomly generating $\boldsymbol{\theta}_i^k \sim \mathcal{U}[0.5, 2.5]^d$ and $(\boldsymbol{x}^\star)^k \sim \mathcal{U}[0, 1]^d$, and thus we have $\varphi_i^k = (\boldsymbol{\theta}_i^k)^\top (\boldsymbol{x}^\star)^k$.

For the AI-based methods, a feed-forward neural network (FFNN) with three hidden layers is applied to perform the ND and NL tasks, with 200, 100, and 50 neurons in the hidden layers, respectively. These NNs are implemented using a modified version of the deep learning toolbox in [44]. The rectified linear unit (ReLU) is used as the activation function in all hidden layers, and the parameters of the NN are jointly optimized through back propagation by minimizing the loss function defined for each task.

To provide neighbor data and ground truth labels for the AI-based methods, we run the DPS algorithm independently in each event of (4), starting with a new initial state each time.
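To convey the flavor of this setup, a hedged Python sketch of the instance generation is given below; the value used for $\hat{\lambda}$ is a placeholder, since the true second-largest eigenvalue depends on the realized transition matrix $\mathbb{E}[A(t)]$ of the Manhattan network.

```python
import numpy as np

rng = np.random.default_rng(2020)
n, d, T, K = 9, 2, 2000, 5
lam_hat = 0.9   # placeholder for the second largest eigenvalue of E[A(t)]

instances = []
for k in range(K):
    theta = rng.uniform(0.5, 2.5, (n, d))      # theta_i^k ~ U[0.5, 2.5]^d
    x_star = rng.uniform(0.0, 1.0, d)          # (x*)^k   ~ U[0, 1]^d
    phi = theta @ x_star                       # phi_i^k = (theta_i^k)^T (x*)^k
    x0 = rng.uniform(0.0, 1.0, (n, d))         # x^k(0)   ~ U[0, 1]^d
    alpha = rng.uniform(-0.5, 0.5, d)          # attacker target alpha^k
    noise = lambda t: rng.uniform(-lam_hat**t, lam_hat**t, d)   # r_j^k(t)
    instances.append((theta, phi, x0, alpha, noise))
```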
Fig. 4. The Manhattan network topology with agent 1 as the attacker.
TABLE II
TRAINING AND TESTING SETS FOR AI-BASED METHODS GIVEN THAT AGENT $j$ IS THE ATTACKER

| Task | Event | Training set | Testing set | Label |
|------|-------|--------------|-------------|-------|
| ND | $H_0^i$ | 10000 | 6000 | 0 |
| ND | $H_1^i$ ($j \in \mathcal{N}_i$) | 10000 | 6000 | 1 |
| ND | $H_1^i$ ($j \notin \mathcal{N}_i$) | 10000 | 6000 | 1 |
| NL | $H_1^i$ | 10000 | 6000 | $\boldsymbol{e}_j$ |

As the Manhattan network is symmetric, to obtain the training data under hypothesis $H_1^i$ with label '1', we need to collect data at two types of trustworthy agents: one that stands at a position "next to" the attacker agent (for example, agent 1 is the only attacker and we collect data at agent 2), and one that stands at a position "far from" the attacker agent (for example, agent 1 is the only attacker and we collect data at agent 5). Meanwhile, the training data under hypothesis $H_0^i$ with label '0' is collected at any agent when DPS is running on the Manhattan network free of attackers. We collect data from the different scenarios shown in Table II and fuse them into the ND and NL models. Therein, the available data is typically split into two sets, a training set and a testing set. For the ND task, the detection task uses 30,000 samples as the training set and 18,000 samples as the testing set. Herein, within the data under hypothesis $H_1^i$, we have 10,000 samples collected at an agent next to the attacker and 10,000 samples collected at an agent far from the attacker. For the NL task, the training set and testing set contain 10,000 and 6,000 samples, respectively. Herein, we encode the ground truth labels of event $H_1^i = H_1^{ij} \cup H_0^{ij}$ by one-hot coding, where the neighboring attacker is labeled by '1' and the trustworthy agent by '0'.

Usually, the detection and localization models of the AI-based methods are actually classifiers for which the NN produces continuous quantities to predict class membership through different thresholds. To make a more comprehensive evaluation of these classifiers, we adopt the probabilities of detection and false alarm for the ND and NL tasks. That is, we define
$$P_{nd}^i := P(\hat{H}^i = H_1^i \,|\, H_1^i), \quad P_{nf}^i := P(\hat{H}^i = H_1^i \,|\, H_0^i), \tag{30}$$
$$P_{ld}^i := P(\hat{H}^{ij} = H_1^{ij} \,|\, H_1^{ij}), \quad P_{lf}^i := P(\hat{H}^{ij} = H_1^{ij} \,|\, H_0^{ij}), \tag{31}$$
where $\hat{H}^i$ and $\hat{H}^{ij}$ are the events estimated by the AI-based methods. $P_{nd}^i$ ($P_{ld}^i$) and $P_{nf}^i$ ($P_{lf}^i$) are the probabilities of detection and false alarm in the ND (NL) task, respectively. More specifically, those probabilities are calculated as follows:
$$P_{nd}^i = \frac{1}{N_{nd}} \sum_{n=1}^{N_{nd}} I\big(y_i^{(n)} = \hat{y}_i^{(n)} = 1\big), \tag{32}$$
$$P_{nf}^i = \frac{1}{N_{nf}} \sum_{n=1}^{N_{nf}} I\big(y_i^{(n)} = 0 \wedge \hat{y}_i^{(n)} = 1\big), \tag{33}$$
$$P_{ld}^i = \frac{1}{N_{ld}} \sum_{n=1}^{N_{ld}} I\big(z_{ij}^{(n)} = \hat{z}_{ij}^{(n)} = 1\big), \tag{34}$$
$$P_{lf}^i = \frac{1}{N_{lf}} \sum_{n=1}^{N_{lf}} I\big(z_{ij}^{(n)} = 0 \wedge \hat{z}_{ij}^{(n)} = 1\big), \tag{35}$$
where $N_{nd}$ ($N_{nf}$) and $N_{ld}$ ($N_{lf}$) are the numbers of positive (negative) samples in the ND and NL tasks, respectively, $\hat{y}_i^{(n)}$ ($\hat{z}_{ij}^{(n)}$) is the class predicted by the ND (NL) classifier, and $y_i^{(n)}$ ($z_{ij}^{(n)}$) is the ground-truth class label. Note that $I(\cdot)$ is an indicator function that takes the value 1 when the predicted class label equals the ground-truth class label. Based on these probabilities, the detection (or localization) performance can be investigated by the receiver operating characteristic (ROC) [45], for which the probability of detection is plotted on the $Y$-axis and the probability of false alarm is plotted on the $X$-axis. It is worth noting that ROC curves that approach the upper left corner outperform those far from it.
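The empirical probabilities (32)-(35) and the ROC sweep can be sketched as follows; the toy scores and labels are placeholders for the classifier outputs.

```python
import numpy as np

def detection_rates(y_true, y_pred):
    """Empirical (32)-(33): fraction of positives flagged (P_nd) and of
    negatives falsely flagged (P_nf); (34)-(35) are the same per neighbor."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    pos, neg = y_true == 1, y_true == 0
    p_d = float(np.mean(y_pred[pos] == 1)) if pos.any() else 0.0
    p_f = float(np.mean(y_pred[neg] == 1)) if neg.any() else 0.0
    return p_d, p_f

# Sweeping the threshold over the NN's sigmoid outputs and collecting
# the (P_nf, P_nd) pairs traces out ROC curves like those in Figs. 5-6.
scores = np.array([0.9, 0.2, 0.7, 0.4])
labels = np.array([1, 0, 1, 0])
roc = [detection_rates(labels, (scores > t).astype(int))
       for t in np.linspace(0.0, 1.0, 5)]
print(roc)
```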
Fig. 5. ROCs of the TDNN and TD methods: (Left) ND task, (Right) NL task.

_A. Detection and Localization for One Attacker_

In this subsection, we show the detection and localization performance of the AI-based methods when the Manhattan network contains only one attacker, as seen in Fig. 4. Suppose that agent 1 is the attacker and the monitoring node is agent 2 (or 3, 4 and 7). Then, the NN model is trained and also tested with data collected at an agent next to the attacker.

In Fig. 5, we study the attacker detection and localization performance of TDNN, where TD in [13] is taken as the benchmark method. The ROC curves for the ND task are depicted in Fig. 5 (Left); the variables $K$ and $d$ in the legend are the number of instances and the dimension used to detect insider attacks, respectively. The localization performance in the NL task is shown in Fig. 5 (Right), where we assume that the ND task can completely distinguish between events $H_0^i$ and $H_1^i$ without errors (by an 'Oracle'). In these plots, it is obvious that the performance of both TDNN and TD improves significantly as $K$ increases when $d$ is fixed, and vice versa. For the first and second curves in the ND and NL tasks, TD has the same performance at $K = 2$, $d = 1$ as at $K = 1$, $d = 2$, and the same is true for TDNN on the fifth and sixth curves, which is inherent to the TD strategy (9). Thus, we may say that either increasing $K$ or increasing $d$ brings the same improvement in performance. From Fig. 5, it can be seen that TDNN improves significantly over TD in terms of both detection and localization performance, achieving good performance when $K = 5$, $d = 2$.

Fig. 6. ROCs of the SDNN and SD methods: (Left) ND task, (Right) NL task.

The ROCs of SDNN are shown in Fig. 6, where SD in [36] is selected as the benchmark. It can be seen from the plots that both SDNN and SD already provide good detection and localization performance when $K = 2$, which is better than that of the TDNN and TD methods. This result implies that the transient states can indeed provide more information to identify the attacker, as the spatial methods (SDNN and SD) leverage the entire dynamic information while the temporal methods (TDNN and TD) only utilize the first and last states. Also in this case, the attacker detection and localization performance of SDNN and SD improves significantly as $K$ increases when $d$ is fixed. When $K$ is fixed, the performance of SDNN and SD improves slightly as $d$ increases. For the ND task in Fig. 6 (Left), the detection performances of SDNN and SD are close to each other; they show excellent performance under the same feature processing condition, as seen in (12) and (22). Nevertheless, SDNN has a drastic advantage over the SD method in the NL task and can completely distinguish the neighboring attacker at $K = 2$, as seen in Fig. 6 (Right).
Fig. 7. Comparison between independent training and collaborative training
based on matched data. Models are trained on sufficient “next to” data then
tested on “next to” data.
_B. Performance of the Collaborative Learning_
In this subsection, we show how to utilize the collaborative learning protocol to train a robust model that accommodates more attack events. Specifically, we consider a performance comparison between independent training and collaborative training. Herein, independent training means that each agent trains its model based on its local data, where "next to" data refers to samples collected at an agent next to an attacker, while "far from" data refers to samples collected at an agent far from an attacker. Moreover, the "next to" model refers to the independent model trained on "next to" data, while the "far from" model refers to the independent model trained on "far from" data. We then consider two extreme cases to verify the collaborative learning.

For Case 1, we assume that in the independent training process the monitoring agent only collects a very small amount of local "next to" data, which is not enough to train a meaningful NN model, while in the collaborative training process the agent updates its local model by merging the received neighboring models. We then test the two training methods on "next to" data for the ND and NL tasks. In Fig. 7, the dashed and solid lines show the performance of independent training and collaborative training, respectively. It is clear that with insufficient samples, the agent under independent learning performs poorly on both the ND and NL tasks, while collaborative training enables the agent to learn models from its neighbors, which greatly improves the detection and localization performance. It is expected that a similar result also holds for the "far from" cases.

Fig. 8. Comparison between independent training and collaborative training for mismatched data. The plots are SDNN with $K = 1$, $d = 2$.

For Case 2, we first consider the scenario in which each agent has sufficient "next to" data ("far from" data) to train the
NN model. We then test the independent/collaborative training models on the "next to" data ("far from" data), which matches the training data. The red (blue) dashed and solid lines in Fig. 8 (Left) represent their ROC results, which imply that the collaborative learning model converges to the independent learning model. On the other hand, we further test the case where the training data and testing data are mismatched. That is, we use the "next to" data to test the "far from" model, and vice versa. Interestingly, Fig. 8 (Right) shows that the collaborative learning model has a significant improvement over the independent model, as it learns the characteristics of both the "next to" model and the "far from" model. These results demonstrate the advantage of collaborative learning for the robustness of the model. Therefore, even when there are enough samples, collaborative learning remains strongly competitive with independent learning.
Fig. 8. Comparison between independent training and collaborative training for mismatched data. The plots are for SDNN with K = 1, d = 2.
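The collaborative update used here, in which an agent merges the models received from its neighbors, can be illustrated with a minimal parameter-averaging sketch in Python. This is not the paper's exact protocol; the uniform weighting and the two-layer toy model are assumptions made only for illustration.

```python
import numpy as np

def merge_neighbor_models(local_weights, neighbor_weights_list):
    # Merge the local model with the models received from neighbors by
    # uniform parameter averaging, layer by layer. The uniform weighting
    # is an assumption; the paper's merging rule may differ.
    all_models = [local_weights] + neighbor_weights_list
    return [np.mean(np.stack(layers), axis=0) for layers in zip(*all_models)]

# Toy usage: a two-layer model (weight matrix + bias vector) at three agents.
rng = np.random.default_rng(0)
local = [rng.normal(size=(4, 2)), rng.normal(size=2)]
neighbors = [[rng.normal(size=(4, 2)), rng.normal(size=2)] for _ in range(2)]
merged = merge_neighbor_models(local, neighbors)
print([w.shape for w in merged])  # [(4, 2), (2,)]
```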
_C. Performance for Different Degree-|Ni| Agents_
In this subsection, we discuss the scenario where the communication network is irregular. We assume that the number of mismatched inputs not matching the unified model is p = M − |Ni|. Following the scheme described in subsection IV-C, we test the detection and localization performance of the AI-based methods when p ≠ 0. To set up this simulation, the Manhattan network topology is selected
as the target example, where we have M = 4 and |Ni| = {2, 3}. The attack scenario with m = 1, c = 1 is applied to verify the proposed method, and the performance at p = 0 is taken as the baseline. We choose agent 2 as the test agent, whose neighbors are agents 1, 3, 5, and 8, i.e., p = 0. When p = 1, we cut off the connection between agents 2 and 3, so the neighbors of the test agent are agents 1, 5, and 8. When p = 2, we further cut off the connection between agents 2 and 5, leaving only agents 1 and 8 as neighbors of the test agent 2. Note that the parameters for the AI-based methods are the same as those in subsection V-A, and that the testing data is generated from the modified Manhattan network with p = 1 and p = 2.
Fig. 9. ROCs of TDNN and TD with different deficient sizes: (Left) ND task, (Right) NL task. p = M − |Ni| denotes the number of deficient inputs.
Fig. 10. ROCs of SDNN and SD with different deficient sizes: (Left) ND task, (Right) NL task. p = M − |Ni| denotes the number of deficient inputs.
Fig. 9 shows the attacker detection and localization performance of the TDNN and TD methods with K = 5, d = 2. It can be seen that the performance of TDNN and TD on the ND and NL tasks does not fluctuate significantly as p increases; moreover, TDNN has more stable detection and localization performance than TD. In Fig. 10, we show the performance of the SDNN and SD methods with K = 2, d = 2. As p increases, SD retains good detection performance, but its localization performance slightly decreases. The results on both the ND and NL tasks show that, in our setting, p does not have a significant effect on the performance of SDNN, which can still provide stable performance for detecting and localizing attackers. These results suggest that the proposed AI-based models may fit well with irregular-degree networks.
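To make the degree-mismatch setting concrete, the sketch below shows one simple way to feed a fixed-size (unified) NN when an agent has fewer than M neighbors: the p = M − |Ni| missing inputs are padded. The zero-padding rule and array shapes are illustrative assumptions, not the exact mechanism of subsection IV-C.

```python
import numpy as np

def pad_neighbor_features(features, M, pad_value=0.0):
    # Pad the per-neighbor feature vectors of an agent with |Ni| < M
    # neighbors up to the M input slots expected by the unified model.
    # The p = M - |Ni| missing slots are filled with pad_value.
    # Zero padding is an assumption for illustration; the actual
    # degree-mismatch handling is the scheme of subsection IV-C.
    d = features[0].shape[0]
    out = np.full((M, d), pad_value)
    out[:len(features)] = np.stack(features)
    return out

# Agent 2 with p = 2: only |Ni| = 2 of the M = 4 neighbor slots are live.
feats = [np.array([0.1, 0.9]), np.array([0.4, 0.6])]
print(pad_neighbor_features(feats, M=4))  # shape (4, 2), two padded rows
```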
_D. Robustness Test When Data Is Mismatched with Prior Information_
Furthermore, we test the robustness of the AI-based methods when the prior information is inconsistent with the actual environment. The prior information of the test scenarios is presented in subsection IV-C. We consider one attacker in the Manhattan network and train the parameters of the AI-based methods in the scenario with α[k] ∼ U[−0.5, 0.5]^d and β ∼ U[0.0, 1.0]^d, while the test data is generated from the other scenarios in TABLE I. In Fig. 11, we show the detection and localization performance of the TDNN and TD methods in different test scenarios when K = 5, d = 2. Specifically, we generate the test data
for the second and third curves by changing the deviation of β to β ∼ U[0.2, 0.8]^d and β ∼ U[−0.2, 1.2]^d. The results indicate that the performance of the TDNN and TD methods deteriorates when the deviation of β increases and improves when the deviation of β decreases. For the fourth and fifth curves, we instead change the mean of β to β ∼ U[0.2, 1.2]^d and β ∼ U[−0.2, 0.8]^d. As can be seen, the performance of TDNN and TD improves when |E[α] − E[β]| increases and deteriorates when the gap decreases. Meanwhile, TDNN performs better than TD on both the ND and NL tasks. In addition, Fig. 12 and Fig. 13 respectively show the detection and localization performance of the SDNN and SD methods when K = 1, d = 2 and K = 2, d = 2. In these plots, the ROC curves of SDNN and SD follow the same trends as those in Fig. 11. It is worth mentioning that, in Fig. 11, Fig. 12, and Fig. 13, SDNN still shows good detection and localization performance despite the mismatch between the training data and the testing data.
Fig. 11. ROCs of TDNN and TD for the mismatch model: (Left) ND task, (Right) NL task. α[k] ∼ U[−0.5, 0.5]^d. Each entry of x[k](0) is distributed as legended for the testing data.
Fig. 12. ROCs of SDNN and SD for the mismatch model (K = 1, d = 2): (Left) ND task, (Right) NL task. α[k] ∼ U[−0.5, 0.5]^d. Each entry of x[k](0) is distributed as legended for the testing data.
Fig. 13. ROCs of SDNN and SD for the mismatch model (K = 2, d = 2): (Left) ND task, (Right) NL task. α[k] ∼ U[−0.5, 0.5]^d. Each entry of x[k](0) is distributed as legended for the testing data.
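The mismatch scenarios above reduce to drawing α[k] with the training support while shifting or stretching the support of β. A minimal sketch of generating such test draws follows; the network size and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
d = 2  # dimension of each agent's decision variable

def draw_test_scenario(beta_low, beta_high, n_agents=16):
    # alpha is drawn as in training, U[-0.5, 0.5]^d; the initial values
    # x[k](0) are drawn from the shifted/stretched U[beta_low, beta_high]^d
    # to create the mismatch between prior information and test data.
    # The network size and the exact injection mechanics are assumptions.
    alpha = rng.uniform(-0.5, 0.5, size=d)
    x0 = rng.uniform(beta_low, beta_high, size=(n_agents, d))
    return alpha, x0

# The five beta supports used for the robustness curves.
for lo, hi in [(0.0, 1.0), (0.2, 0.8), (-0.2, 1.2), (0.2, 1.2), (-0.2, 0.8)]:
    alpha, x0 = draw_test_scenario(lo, hi)
    gap = abs((lo + hi) / 2)  # |E[alpha] - E[beta]|, since E[alpha] = 0
    print(f"beta ~ U[{lo}, {hi}]^{d}: x0 shape {x0.shape}, gap = {gap:.1f}")
```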
_E. Detection and Localization for Multiple Attackers_
We then investigate the performance of TDNN and SDNN in the case of multiple attackers. Note that the parameters of the AI-based methods are the same as those in subsection V-A. In Figs. 14 and 15, we set agents {1, . . . , m} as the attackers
when considering a scenario with m attackers in the Manhattan network. A legend entry with 'm and c' indicates that there are m attackers in the Manhattan network and c attackers in the neighborhood of the monitoring agent. In this case, the same α[k] is shared by all cooperating attackers, but the noise is random and independent across attackers. In Fig. 14, we show the ROC curves of the TDNN and TD methods when K = 5, d = 2. Both the detection and localization performance of TDNN and TD fluctuate noticeably across different values of m and c. We notice that the total number of attackers (m) has only a slight impact on the detection performance of TDNN, which can be seen from the sixth (m = 1, c = 1), seventh (m = 2, c = 1), and ninth (m = 5, c = 1) curves. This shows that the detection performance of TDNN depends on the number of attacking neighbors. For the NL task, we observe that TDNN exhibits similar performance across different attack scenarios. Nevertheless, the proposed TDNN method outperforms TD and performs well in the case of multiple attackers. For the SDNN and SD methods, the detection and localization performance are shown in Fig. 15 with K = 2, d = 2. From Fig. 15 (Left), SDNN and SD still show good detection performance in the case of multiple attackers; we notice that the detection performance of SDNN is slightly better than that of SD when m = 5, c = 3. In the NL task, SDNN has excellent localization performance and outperforms SD in all attack scenarios.
Fig. 14. ROCs for multiple attackers of TDNN and TD: (Left) ND task, (Right) NL task. m is the number of attackers in the Manhattan network, and c is the number of attackers in the testing agent's neighborhood.
Fig. 15. ROCs for multiple attackers of SDNN and SD: (Left) ND task, (Right) NL task. m is the number of attackers in the Manhattan network, and c is the number of attackers in the testing agent's neighborhood.
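The multi-attacker construction, where all cooperating attackers share the same α[k] but perturb it with independent noise, can be sketched as follows; the Gaussian noise model and its scale are assumptions made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
d = 2

def cooperating_attack_offsets(m, noise_std=0.05):
    # All m cooperating attackers share the same offset alpha_k, but each
    # adds its own independent noise, matching the multi-attacker setup
    # above. The Gaussian noise model and its scale are assumptions made
    # only for illustration.
    alpha_k = rng.uniform(-0.5, 0.5, size=d)           # shared offset
    noise = rng.normal(scale=noise_std, size=(m, d))   # independent noise
    return alpha_k + noise

offsets = cooperating_attack_offsets(m=5)
print(offsets.shape)  # (5, 2): one injected offset per attacker
```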
_F. Performance Test in a Small-World Network_
In addition, we also test the detection and localization performance of the AI-based methods in a small-world network. We consider a small-world network with 20 agents, where the average degree is set to 8 and the rewiring probability is set to 0.2. We assume that, among all the nodes, agents 3, 10, and 17 are the attackers. The AI-based models are trained with the training data from the Manhattan network, as in subsection V-A, while the test data are collected from the small-world network. We only consider monitoring at the agents next to an attacker. Note that the degree-mismatch problem here is solved by the method proposed in Section IV-C. We show the performance of TDNN and SDNN in Fig. 16 and Fig. 17, respectively. In these plots, the solid lines show the average detection and localization performance in the small-world network. We notice that the AI-based methods also exhibit good detection and localization performance in a small-world network. These results further illustrate the practical potential of the proposed defense strategies.
Fig. 16. ROCs of TDNN for the small-world network: (Left) ND task, (Right) NL task. Solid lines show the average detection and localization performance in the small-world network; the parameters of TDNN are trained on the Manhattan network.
Fig. 17. ROCs of SDNN for the small-world network: (Left) ND task, (Right) NL task. Solid lines show the average detection and localization performance in the small-world network; the parameters of SDNN are trained on the Manhattan network.
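For reference, the small-world test topology can be reproduced with the standard Watts-Strogatz model, as in the minimal sketch below; the library choice and seed are assumptions, since the paper does not prescribe a particular tool.

```python
import networkx as nx

# Watts-Strogatz small-world graph matching the test setup: 20 agents,
# each initially wired to its 8 nearest ring neighbors (average degree 8),
# with rewiring probability 0.2.
G = nx.watts_strogatz_graph(n=20, k=8, p=0.2, seed=1)

attackers = {3, 10, 17}
# Monitoring is performed only at the honest agents next to an attacker.
monitors = {v for a in attackers for v in G.neighbors(a)} - attackers
print(sorted(monitors))
```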
VI. CONCLUSION
This work is dedicated to the detection of insider attacks on the DPS algorithm through AI technology. We have proposed two AI-based defense strategies (TDNN and SDNN) for securing the gossip-based DPS algorithm. Unlike traditional score-based methods, this work utilizes NNs to learn the complex mapping relationships in this classification problem, thus reducing the design difficulty of the attacker detector. To circumvent the mismatch between the training data and the actual network attack, we propose a federated learning approach that learns a local model close to the global model using training data from all agents. Experimental results demonstrate that the proposed AI-based methods have good detection and localization performance in different attack scenarios. They also adapt well to agents of different degrees and are strongly robust to inconsistency between the prior information and the actual environment. Therefore, we are convinced that the proposed AI-based defense strategies have high potential for practical application in the DPS algorithm. As future work, it would be interesting to apply the AI-based methods to more complicated attack models and other decentralized algorithms.
REFERENCES
[1] A. Nedic and A. Ozdaglar, “Distributed subgradient methods for multiagent optimization,” IEEE Transactions on Automatic Control, vol. 54,
no. 1, pp. 48–61, 2009.
[2] A. Nedic, A. Ozdaglar, and P. A. Parrilo, “Constrained consensus and
optimization in multi-agent networks,” IEEE Transactions on Automatic
_Control, vol. 55, no. 4, pp. 922–938, 2010._
[3] G. Hattab and D. Cabric, “Distributed wideband sensing-based architecture for unlicensed massive iot communications,” IEEE Transactions on
_Cognitive Communications and Networking, vol. 5, no. 3, pp. 819–834,_
2019.
[4] B. V. Philip, T. Alpcan, J. Jin, and M. Palaniswami, “Distributed realtime iot for autonomous vehicles,” IEEE Transactions on Industrial
_Informatics, vol. 15, no. 2, pp. 1131–1140, 2019._
[5] S. X. Wu, H.-T. Wai, L. Li, and A. Scaglione, “A review of distributed
algorithms for principal component analysis,” Proceedings of the IEEE,
vol. 106, no. 8, pp. 1321–1340, 2018.
[6] C. Zhang and Y. Wang, “Enabling privacy-preservation in decentralized
optimization,” IEEE Transactions on Control of Network Systems, vol. 6,
no. 2, pp. 679–689, 2018.
[7] D. Gesbert, S. G. Kiani, A. Gjendemsjo, and G. E. Oien, “Adaptation,
coordination, and distributed resource allocation in interference-limited
wireless networks,” Proceedings of the IEEE, vol. 95, no. 12, pp. 2393–
2409, 2007.
[8] G. B. Giannakis, V. Kekatos, N. Gatsis, S.-J. Kim, H. Zhu, and B. F.
Wollenberg, “Monitoring and optimization for power grids: A signal
processing perspective,” IEEE Signal Processing Magazine, vol. 30,
no. 5, pp. 107–128, 2013.
[9] I. Hegedűs, G. Danner, and M. Jelasity, "Gossip learning as a decentralized alternative to federated learning," in IFIP International Conference
_on Distributed Applications and Interoperable Systems. Springer, 2019,_
pp. 74–90.
[10] J. Tsitsiklis, “Problems in decentralized decision making and computation,” Ph.D. dissertation, Dept. of Electrical Engineering and Computer
Science, M.I.T., Boston, MA, 1984.
[11] S. S. Ram, A. Nedić, and V. V. Veeravalli, "Distributed stochastic
subgradient projection algorithms for convex optimization,” Journal of
_optimization theory and applications, vol. 147, no. 3, pp. 516–545, 2010._
[12] S. Sundaram and B. Gharesifard, “Consensus-based distributed optimization with malicious nodes,” in 2015 53rd Annual Allerton Con_ference on Communication, Control, and Computing (Allerton)._ IEEE,
2015, pp. 244–249.
[13] S. X. Wu, H.-T. Wai, A. Scaglione, A. Nedić, and A. Leshem, "Data injection attack on decentralized optimization," in 2018 IEEE International
_Conference on Acoustics, Speech and Signal Processing (ICASSP)._
IEEE, 2018, pp. 3644–3648.
[14] G. Li, X. Wu, S. Zhang, H.-T. Wai, and A. Scaglione, “Detecting
and localizing adversarial nodes using neural networks,” in 2018 IEEE
_19th International Workshop on Signal Processing Advances in Wireless_
_Communications (SPAWC) (IEEE SPAWC 2018), Kalamata, Greece, Jun._
2018.
[15] Q. Yan, M. Li, T. Jiang, W. Lou, and Y. T. Hou, “Vulnerability and
protection for distributed consensus-based spectrum sensing in cognitive
radio networks,” in INFOCOM, 2012 Proceedings IEEE. IEEE, 2012,
pp. 900–908.
[16] C. Zhao, J. He, and J. Chen, “Resilient consensus with mobile detectors
against malicious attacks,” IEEE Transactions on Signal and Information
_Processing over Networks, vol. 4, no. 1, pp. 60–69, 2017._
[17] S. Sundaram and B. Gharesifard, “Distributed optimization under adversarial nodes,” IEEE Transactions on Automatic Control, vol. 64, no. 3,
pp. 1063–1076, 2018.
[18] R. Gentz, H.-T. Wai, A. Scaglione, and A. Leshem, “Detection of datainjection attacks in decentralized learning,” Asilomar Conf, 2015.
[19] R. Gentz, S. X. Wu, H. T. Wai, A. Scaglione, and A. Leshem, “Data
injection attacks in randomized gossiping,” IEEE Transactions on Signal
_and Information Processing Over Networks, vol. 2, no. 4, pp. 523–538,_
2016.
[20] M. Mobilia, “Does a single zealot affect an infinite group of voters ?”
_Physical Review Letters, July 2003._
[21] B. Kailkhura, S. Brahma, and P. K. Varshney, “Data falsification attacks
on consensus-based detection systems,” IEEE Transactions on Signal
_and Information Processing over Networks, vol. 3, no. 1, pp. 145–158,_
2017.
[22] ——, “Consensus based detection in the presence of data falsification
attacks,” IEEE Transactions on Signal Processing, vol. PP, no. 99, 2015.
[23] O. Shalom, A. Leshem, A. Scaglione, and A. Nedić, "Detection of data
injection attacks on decentralized statistical estimation,” in 2018 IEEE
_International Conference on the Science of Electrical Engineering in_
_Israel (ICSEE)._ IEEE, 2018, pp. 1–5.
[24] N. Ravi and A. Scaglione, “Detection and isolation of adversaries in
decentralized optimization for non-strongly convex objectives,” IFAC_PapersOnLine, vol. 52, no. 20, pp. 381–386, 2019._
[25] S. Patel, V. Khatana, G. Saraswat, and M. V. Salapaka, “Distributed
detection of malicious attacks on consensus algorithms with applications
in power networks,” in Preprint. Golden, CO: National Renewable
_Energy Laboratory. NREL/CP-5D00-76848, 2020._
[26] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image
recognition,” in CVPR. IEEE Computer Society, 2016, pp. 770–778.
[27] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “Bert: Pre-training
of deep bidirectional transformers for language understanding,” arXiv
_preprint arXiv:1810.04805, 2018._
[28] H. Wang, J. Ruan, G. Wang, B. Zhou, Y. Liu, X. Fu, and J. Peng, “Deep
learning-based interval state estimation of ac smart grids against sparse
cyber attacks,” IEEE Transactions on Industrial Informatics, vol. 14,
no. 11, pp. 4766–4778, 2018.
[29] Y. Yu, T. Wang, and S. C. Liew, “Deep-reinforcement learning multiple
access for heterogeneous wireless networks,” IEEE Journal on Selected
_Areas in Communications, vol. 37, no. 6, pp. 1277–1290, 2019._
[30] W. Wu, R. Li, G. Xie, J. An, Y. Bai, J. Zhou, and K. Li, “A survey
of intrusion detection for in-vehicle networks,” IEEE Transactions on
_Intelligent Transportation Systems, vol. 21, no. 3, pp. 919–933, 2019._
[31] A. Doboli, “Discovery of malicious nodes in wireless sensor networks
using neural predictors,” Wseas Transactions on Computers Research,
vol. 2, 2007.
[32] G. Rusak, A. Al-Dujaili, and U.-M. O’Reilly, “Ast-based deep learning
for detecting malicious powershell,” in Proceedings of the 2018 ACM
_SIGSAC Conference on Computer and Communications Security, 2018,_
pp. 2276–2278.
[33] O. Rahman, M. A. G. Quraishi, and C.-H. Lung, “Ddos attacks detection
and mitigation in sdn using machine learning,” in 2019 IEEE World
_Congress on Services (SERVICES), vol. 2642._ IEEE, 2019, pp. 184–
189.
[34] G. Li, S. X. Wu, S. Zhang, H.-T. Wai, and A. Scaglione, “Detecting
and localizing adversarial nodes usig neural networks,” in 2018 IEEE
_19th International Workshop on Signal Processing Advances in Wireless_
_Communications (SPAWC)._ IEEE, 2018, pp. 1–5.
[35] G. Li, S. X. Wu, S. Zhang, and Q. Li, “Neural networks-aided insider
attack detection for the average consensus algorithm,” IEEE Access,
vol. 8, pp. 51 871–51 883, 2020.
[36] ——, “Detect insider attacks using cnn in decentralized optimization,” in
_ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech_
_and Signal Processing (ICASSP)._ IEEE, 2020, pp. 8758–8762.
[37] L. Giaretta and Š. Girdzijauskas, "Gossip learning: off the beaten path,"
in 2019 IEEE International Conference on Big Data (Big Data). IEEE,
2019, pp. 1117–1124.
[38] S. Savazzi, M. Nicoli, and V. Rampa, “Federated learning with cooperating devices: A consensus approach for massive iot networks,” IEEE
_Internet of Things Journal, vol. 7, no. 5, pp. 4641–4654, 2020._
[39] Z. Jiang, A. Balu, C. Hegde, and S. Sarkar, “Collaborative deep
learning in fixed topology networks,” in Advances in Neural Information
_Processing Systems (NIPS), 2017, pp. 5904–5914._
[40] R. Ormándi, I. Hegedűs, and M. Jelasity, "Gossip learning with linear models on fully distributed data," Concurrency and Computation:
_Practice and Experience, vol. 25, no. 4, pp. 556–571, 2013._
[41] M. Blot, D. Picard, N. Thome, and M. Cord, “Distributed optimization
for deep learning with gossip exchange,” Neurocomputing, vol. 330, pp.
287–296, 2019.
[42] J. Daily, A. Vishnu, C. Siegel, T. Warfel, and V. Amatya, “Gossipgrad:
Scalable deep learning using gossip communication based asynchronous
gradient descent,” arXiv preprint arXiv:1803.05880, 2018.
[43] M. Blot, D. Picard, and M. Cord, “Gosgd: Distributed optimization for
deep learning with gossip exchange,” arXiv preprint arXiv:1804.01852,
2018.
[44] R. B. Palm, “Prediction as a candidate for learning deep hierarchical
models of data,” Technical University of Denmark, vol. 5, 2012.
[45] T. Fawcett, “An introduction to roc analysis,” Pattern recognition letters,
vol. 27, no. 8, pp. 861–874, 2006.
|
{
"disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/2101.06917, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "GREEN",
"url": "http://arxiv.org/pdf/2101.06917"
}
| 2021
|
[
"JournalArticle"
] | true
| 2021-01-18T00:00:00
|
[
{
"paperId": "edfb119b453a7ee7055eeab7e407c4334ae57ed7",
"title": "Distributed Detection of Malicious Attacks on Consensus Algorithms with Applications in Power Networks"
},
{
"paperId": "c7c75508eca75e7a2c29724f5b735c24c0d3f72e",
"title": "Detect Insider Attacks Using CNN in Decentralized Optimization"
},
{
"paperId": "2d8db2937d3015d8ddc5ab16dfb9628c2d393430",
"title": "Neural Networks-Aided Insider Attack Detection for the Average Consensus Algorithm"
},
{
"paperId": "f97424b7026d344ccab301da4da81ef292a89aee",
"title": "A Survey of Intrusion Detection for In-Vehicle Networks"
},
{
"paperId": "0d7e26c623068f7119878f12f5ee1a49b20b9c9d",
"title": "Federated Learning With Cooperating Devices: A Consensus Approach for Massive IoT Networks"
},
{
"paperId": "0ef9776f1abd2111c3d89101fd71a26273f3b7ae",
"title": "Gossip Learning: Off the Beaten Path"
},
{
"paperId": "aa551505e92f565e2985f478f9cb01442b58a6b2",
"title": "Detection and Isolation of Adversaries in Decentralized Optimization for Non-Strongly Convex Objectives"
},
{
"paperId": "56c27a7b93cd8fc372f8459d151ee589e752b8e0",
"title": "DDoS Attacks Detection and Mitigation in SDN Using Machine Learning"
},
{
"paperId": "5940b6a654300ed7bd6e495407b2002e5b33ba98",
"title": "Gossip Learning as a Decentralized Alternative to Federated Learning"
},
{
"paperId": "2de3693d22b2ae489ccafd2a447f06c0dcb58bf8",
"title": "Enabling Privacy-Preservation in Decentralized Optimization"
},
{
"paperId": "900a44e087e5ea69ddaea883fdc4e7cb46b1ad90",
"title": "Distributed Real-Time IoT for Autonomous Vehicles"
},
{
"paperId": "b44144f12c522b7cdf2bff9f5a2be23ce53058c4",
"title": "Distributed Wideband Sensing-Based Architecture for Unlicensed Massive IoT Communications"
},
{
"paperId": "989c107a890690634b4b87b7e5e2d35f1484ba9a",
"title": "Detection of Data Injection Attacks on Decentralized Statistical Estimation"
},
{
"paperId": "77369f12dd131a755129b1b5b923b1d479eff5db",
"title": "AST-Based Deep Learning for Detecting Malicious PowerShell"
},
{
"paperId": "bd2c2165197ac4ee13cc8a692e46703c43b37623",
"title": "Data Injection Attack on Decentralized Optimization"
},
{
"paperId": "5bcd7726beba0d40cbaf218ff57f9ffdea2b14d4",
"title": "A Review of Distributed Algorithms for Principal Component Analysis"
},
{
"paperId": "c4b3439af61021703b6f589b0a6f1380bc51439f",
"title": "Detecting and Localizing Adversarial Nodes Usig Neural Networks"
},
{
"paperId": "dcd206eb55da934f996b7ab5740f2a3630e1501e",
"title": "GoSGD: Distributed Optimization for Deep Learning with Gossip Exchange"
},
{
"paperId": "a7e9f6c55c1118c9947c6ef63bddd11764b85d33",
"title": "GossipGraD: Scalable Deep Learning using Gossip Communication based Asynchronous Gradient Descent"
},
{
"paperId": "b6691a340b287047bf1e56cbb3224113a2791d89",
"title": "Resilient Consensus with Mobile Detectors Against Malicious Attacks"
},
{
"paperId": "1ee1ab9ea9e79a3de7f5a1b8e0dcc882d9e8b5de",
"title": "Deep Learning-Based Interval State Estimation of AC Smart Grids Against Sparse Cyber Attacks"
},
{
"paperId": "953c6f0c1ae5311e1bcc523fbba2de57d751c248",
"title": "Deep-Reinforcement Learning Multiple Access for Heterogeneous Wireless Networks"
},
{
"paperId": "4d886c571c0849fda73a2d24e944d59fd37bcf9c",
"title": "Collaborative Deep Learning in Fixed Topology Networks"
},
{
"paperId": "ac80c839dc4b47062af7db9ccf71b8c406bfab3f",
"title": "Distributed Optimization Under Adversarial Nodes"
},
{
"paperId": "2c03df8b48bf3fa39054345bafabfeff15bfd11d",
"title": "Deep Residual Learning for Image Recognition"
},
{
"paperId": "45532b07468f95ccd1509e7783a0ebb6d046320e",
"title": "Detection of data injection attacks in decentralized learning"
},
{
"paperId": "2916496933fe0f4e35ad074bd53e63fe0393cfee",
"title": "Consensus-based distributed optimization with malicious nodes"
},
{
"paperId": "26ae65513b8c5d93a881fb04a39213f85a3227be",
"title": "Data Falsification Attacks on Consensus-Based Detection Systems"
},
{
"paperId": "a45c0fb9ab7f9933656c4e1b45616625455d6c03",
"title": "Monitoring and Optimization for Power Grids: A Signal Processing Perspective"
},
{
"paperId": "06818ca61e095700b021debb4bf0b693e602f4e2",
"title": "Vulnerability and protection for distributed consensus-based spectrum sensing in cognitive radio networks"
},
{
"paperId": "cbca96cd533e2f0f7097277e16a597d8bd073158",
"title": "Gossip learning with linear models on fully distributed data"
},
{
"paperId": "92a203af1a25201dc09148c63adf7bbb3ca56112",
"title": "Distributed Subgradient Methods for Multi-Agent Optimization"
},
{
"paperId": "f6c3b74e5e1d89b56da324edc87958659c05c8bf",
"title": "Distributed Stochastic Subgradient Projection Algorithms for Convex Optimization"
},
{
"paperId": "eca39922b65b91c53c195f0ea6c1380850f2db19",
"title": "Constrained Consensus and Optimization in Multi-Agent Networks"
},
{
"paperId": "d40ee5dd758c525dfb9932d726bb4e844b7b8478",
"title": "An introduction to ROC analysis"
},
{
"paperId": "7ca531a49c8f9a53cedc58a90ddfe826809db75d",
"title": "Does a single zealot affect an infinite group of voters?"
},
{
"paperId": "49d003dec54e01141d885df0fd41a23427a82cab",
"title": "Problems in decentralized decision making and computation"
},
{
"paperId": "df2b0e26d0599ce3e70df8a9da02e51594e0e992",
"title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding"
},
{
"paperId": "e6c08dd8f02b292b17f70f3094f7321149657bd5",
"title": "Data Injection Attacks in Randomized Gossiping"
},
{
"paperId": null,
"title": "Consensus based detection in the presence of data falsification attacks"
},
{
"paperId": "7c616fe341381a4866135042dbb565d2eda415c3",
"title": "Prediction as a candidate for learning deep hierarchical models of data"
},
{
"paperId": "dc28a23b92d9c2f6fdf46bbac47f0c99c8318401",
"title": "Discovery of Malicious Nodes in Wireless Sensor Networks Using Neural Predictors"
},
{
"paperId": null,
"title": "“Adaptation, coordination, and distributed resource allocation in interference-limited wireless networks,”"
}
] | 26,407
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Medicine",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/014fdd56682f5f3e91789e83114aad81f5aa5670
|
[
"Computer Science",
"Medicine"
] | 0.851831
|
Adaptive Square-Shaped Trajectory-Based Service Location Protocol in Wireless Sensor Networks
|
014fdd56682f5f3e91789e83114aad81f5aa5670
|
Italian National Conference on Sensors
|
[
{
"authorId": "8782316",
"name": "Hwa-Jung Lim"
},
{
"authorId": "66152655",
"name": "Joahyoung Lee"
},
{
"authorId": "70342309",
"name": "Heonguil Lee"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"SENSORS",
"IEEE Sens",
"Ital National Conf Sens",
"IEEE Sensors",
"Sensors"
],
"alternate_urls": [
"http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:ch:bel-142001",
"http://www.mdpi.com/journal/sensors",
"https://www.mdpi.com/journal/sensors"
],
"id": "3dbf084c-ef47-4b74-9919-047b40704538",
"issn": "1424-8220",
"name": "Italian National Conference on Sensors",
"type": "conference",
"url": "http://www.e-helvetica.nb.admin.ch/directAccess?callnumber=bel-142001"
}
|
In this paper we propose an adaptive square-shaped trajectory (ASST)-based service location method to ensure load scalability in wireless sensor networks. This first establishes a square-shaped trajectory over the nodes that surround a target point computed by the hash function and any user can access it, using the hash. Both the width and the size of the trajectory are dynamically adjustable, depending on the number of queries made to the service information on the trajectory. The number of sensor nodes on the trajectory varies in proportion to the changing trajectory shape, allowing high loads to be distributed around the hot spot area.
|
_Sensors_ **2010, 10, 4497-4520; doi:10.3390/s100504497**
_Article_
**OPEN ACCESS**
# sensors
**ISSN 1424-8220**
www.mdpi.com/journal/sensors
## Adaptive Square-Shaped Trajectory-Based Service Location Protocol in Wireless Sensor Networks
**Hwa-Jung Lim, Joa-Hyoung Lee and Heon-Guil Lee ***
Dept. of Computer Science and Engineering, Kangwon National University, Chuncheon Gangwondo,
200-701, Korea; E-Mails: jinnie4u@kangwon.ac.kr (J.-H.L.); hjlim@kangwon.ac.kr (H.-J.L.)
* Author to whom correspondence should be addressed; E-Mail: hglee@kangwon.ac.kr;
Tel.: +82-01-4914-0107; Fax: +82-33-252-6390.
_Received: 22 March 2010; in revised form: 8 April 2010 / Accepted: 14 April 2010 /_
_Published: 30 April 2010_
**Abstract:** In this paper we propose an adaptive square-shaped trajectory (ASST)-based
service location method to ensure load scalability in wireless sensor networks. This first
establishes a square-shaped trajectory over the nodes that surround a target point computed
by the hash function and any user can access it, using the hash. Both the width and the size
of the trajectory are dynamically adjustable, depending on the number of queries made to
the service information on the trajectory. The number of sensor nodes on the trajectory
varies in proportion to the changing trajectory shape, allowing high loads to be distributed
around the hot spot area.
**Keywords: service location; trajectory; data replication; load scalability; robust**
**1. Introduction**
Advances in wireless networking have set new paradigms in computing, including pervasive
computing based on a large-scale wireless sensor network. A wireless sensor network, a type of ad hoc
network (MANET), is designed to be an infrastructure-less, unattended, and rapidly-deployable
network. A fundamental issue in wireless sensor network environments is the efficient location of the
required service in the network. The service location protocol is imperative to the design of a wireless
sensor network because each network node lacks prior knowledge of the service available in the
network [1-7].
Service location in wireless sensor networks is a challenging problem for several reasons. First, due
to a lack of infrastructure, there are no well-known servers in a pre-defined network structure. Second,
energy scarcity in a network node in a wireless network necessitates the design of new service location
protocols that are qualitatively different from those designed for the wired network. Third, in many
cases, wireless networks may scale up to thousands of nodes, rendering the location problem even
more challenging [8-20].
In pervasive computing, users receive information regarding the environment in real-time;
therefore, the sensor network, which is the foundation of pervasive computing, should enable real-time
access. In particular, service information is very time critical in pervasive computing; therefore, the
service location protocol for the wireless sensor network should provide high accessibility to service
information [21-23]. The easiest way to provide high accessibility is to periodically broadcast (flood)
service information to the entire network. This method entails major energy consumption, but it is
simple and some protocols use this approach.
To reduce the overhead associated with broadcasting, some protocols restrict the flooding area by
forwarding packets in specific directions, such as a cross shape or restricted regions. These schemes could
reduce the broadcasting overhead but still require unnecessary replications if the service information is
not popular. Load scalability is the ease with which a distributed system can expand and contract its
resource pool to accommodate heavier or lighter loads; it is the ease with which a system or
component can be modified, added, or removed to accommodate a changing load. Service location
protocols should rapidly provide service information with a large number of users. Therefore, load
scalability is an important metric for a service location protocol [24-40].
In this paper, we propose an adaptive square-shaped trajectory (ASST)-based service location
method, which is a novel self-configuring, scalable, energy efficient, and robust service location
protocol. ASST is based on Geographic Hash Table (GHT) and Trajectory Based Forwarding (TBF).
GHT maps the geographic position of a sensor network field to a hash table. In GHT, the sensor node
closest to the position computed by the hash function is responsible for a set of key and
data [6,8,9,11]. ASST stores service information in groups of sensor nodes, called a trajectory. A node
wishing to publish (advertise) service information obtains a position through the hash function, and it
then uses geographic-aided routing such as GPSR (Greedy Perimeter Stateless Routing) to store
service information to the trajectory surrounding the hashed position, as in GHT [6,7]. ASST uses TBF
to form a trajectory storing the service information. Replication between nodes in the trajectory
reduces the network load on a node because queries from users are distributed to several nodes in the
trajectory. To further distribute the network load, ASST adjusts the range and size of the trajectory in
proportion to the frequency of user queries. In the next section, we review related work. Section 3
describes ASST, and Section 4 provides performance evaluation. We conclude the paper in Section 5.
**2. Related Work**
Conventional solutions related to this paper can be classified into the following two approaches:
Data Storage architecture in a wireless sensor network and Service Location protocols in an _ad hoc_
network, as shown in Table 1 [4,5,15,22,34,37,40].
**Table 1. Classification of Data Storage Schemes and Service Location Protocols.**

| | Data Storage Scheme | Service Location Protocol |
|---|---|---|
| Wired Networks | External Storage | Service Directory |
| Wireless Networks | Internal Storage | Flooding |
| | Data Centric Storage | FMMS |
| | GHT | GCLP |
In the early days of sensor networks, the data sensed by sensor nodes were collected by a base
station and stored externally. Here, external storage means that the node providing the storage space is
located separately from the sensor networks. For users on a wired network seeking to access and use
the sensed data, there are no problems associated with the external storage. However, if users are
mobile, it is difficult to access the external storage on the wired network. To address this problem,
internal storage architectures are proposed. In internal storage, each sensor node saves the sensed data
to its local storage. Users obtain the data by directly querying the sensor network. Internal storage
architecture can reduce the data collection overhead; however, users have to query the entire network
to find the data. Data-centric storage architecture provides fast data dissemination by storing the data
on the basis of its name. Data-centric storage is an enhanced version of data-centric routing. The first data-centric routing scheme presented was directed diffusion, which uses flooding to
advertise the interests from sinks to sources throughout the network [4,13]. GHT is a type of
data-centric storage architecture. GHT is based on the Distributed Hash Table (DHT) that is the results
of research efforts on peer-to-peer (P2P) computing networks. GHT was proposed for data-centric
storage with geographic information in a sensor network. GHT is a geographic hash table system that
hashes keys into geographic points and stores the key-value set at the sensor node closest to the hashed
point. GHT uses geographic perimeter routing to identify a packet home node. GHT provides fast
access to the data in the sensor nodes but does not take account of availability and scalability. In GHT,
only the home node responds to user queries. This causes a concentration of network load and reduces
the energy of the home node [6,22].
_Ad-hoc networks and DHT share key characteristics in terms of self organization, decentralization,_
redundancy requirements, and limited infrastructure. However, node mobility and the continually
changing physical topology pose a special challenge to scalability and the design of a DHT for mobile
ad-hoc network. Using DHT over wireless sensor networks has gained a lot of attention in the research
arena recently. In wireless sensor network, the most important issue in routing is to gather the routed
information coming from sensor nodes to the sink node regardless of the identity of the donating node.
The problem in this context is to locate efficiently the sensor node, which holds the data item with the
minimum number of intermediate nodes to save network energy.
A ScatterPastry platform, based on Pastry DHT as an overlay routing platform for distributed applications over a wireless sensor network of Scatterweb nodes (a real-world wireless sensor platform), was proposed in [27]. A topology-based distributed hash table (T-DHT) as an infrastructure
for data-centric storage, information processing, and routing in ad hoc and sensor networks was introduced in [28]. T-DHTs do not rely on location information and work even in the presence of voids in the network. Using a virtual coordinate system, a distributed hash table is constructed that is strongly oriented to the underlying network topology. The mobile hash-table (MHT) [29] addresses this challenge by mapping a data item to a path through the environment. In contrast to existing DHTs, MHT does not need to maintain routing tables and can thereby be used in networks with highly dynamic topologies. Thus, in mobile environments it stores data items on the moving nodes with low maintenance overhead, which allows the MHT to scale up to several tens of thousands of nodes.
In [30], the appropriateness of using DHT routing paths for service placement in a stream-based overlay network (SBON) was evaluated, with the aim of minimizing network usage. SBONs are one approach to implementing large-scale stream processing systems. A fundamental consideration in an SBON is that of service placement, which determines the physical location of in-network processing services or operators in such a way that network resources are used efficiently. Service placement consists of two components: node discovery, which selects a candidate set of nodes on which services might be placed, and node selection, which chooses the particular node to host a service. By viewing the placement problem as the composition of these two processes, quality and efficiency can be traded off between them. To this end, two DHT-based algorithms for node discovery, which use either the union or the intersection of DHT routing paths in the SBON, were considered and their performance compared to other techniques.
In [31], a GHT-based service discovery protocol, including a mechanism that constructs topology-aware overlay networks in a wireless sensor network, was proposed. It does not require a central lookup server and does not rely on multicast or flooding. A Similarity Search Algorithm (SSA) for efficiently processing similarity search queries was proposed in [32]. A data-centric storage structure based on the concept of the Hilbert curve and DHT was presented, and then an algorithm designed for efficiently probing the most similar data item in the sensor network was proposed. A dynamic geographic hash table for data-centric storage in wireless sensor networks was proposed in [33]. The unbalanced resource utilization problem was addressed by proposing a dynamic GHT solution
that relies on two schemes—a temporal-based geographic hash table to achieve overall load balancing
among sensor nodes over time and a location selection scheme based on node contribution potential to
proactively adapt the system to network dynamics.
An effective hotspot storage management scheme to solve the hotspot storage problem in GHT was proposed in [35]. The scheme includes cover-up and multi-threshold mechanisms. The cover-up mechanism can dynamically switch to another storage node when a storage node is full, while the multi-threshold mechanism can spread the data over several storage nodes for load balancing of the sensor nodes. Increasing Ray Search (IRS), an energy-efficient and scalable search protocol, and k-IRS, an enhanced variant of IRS based on the GHT, were proposed in [36]. IRS prioritizes energy efficiency at the expense of latency, whereas k-IRS is configurable in terms of the energy-latency trade-off, and this flexibility makes it applicable to varied application scenarios. The basic principle of these protocols is to route the search packet along a set of trajectories called rays that maximizes the likelihood of discovering the target information while consuming the least amount of energy.
The classical protocols for service location in wired networks rely on a central server called a
service directory. The central server advertises its information periodically. Users who want to use a
service connect to the service directory in order to obtain a service description. In wired networks, the
network topologies hardly change; therefore, users can access the service directory anytime. However,
the central server cannot be used in a wireless network because the network topology in a wireless
network frequently changes. The simplest form of service location is global flooding in the network;
however, flooding does not scale well. To overcome the weakness of flooding, restricted flooding
techniques are developed, such as the Facilitating Match-Making Service (FMMS) and
Geography-based Content Location Protocol (GCLP) [5,15]. In FMMS, a service provider advertises
the service in a cross-shaped trajectory along the network, as shown in Figure 1.
**Figure 1. Service location in FMMS.**
Service provider P sends a service advertisement packet in four directions, and the packet is
forwarded until it reaches the boundary of the network, forming the publish trajectory. A user C who wants to use the service also propagates the query packet in four directions, and the query packet is forwarded until it reaches the boundary of the network, forming a subscribe trajectory similar to the advertisement trajectory. The nodes that belong to both trajectories (publish trajectory and subscribe
trajectory) reply to user C with the service descriptions. GCLP reduces the query forwarding overhead
by stopping the propagation of a query packet when the subscribe trajectory crosses the publish
trajectory. FMMS and GCLP reduce the flooding overhead by limiting the flooding to four
trajectories; however, both protocols are unable to provide scalability because they do not consider the
query amounts, such that the trajectory always has the same size.
The distance-sensitive service discovery problem in wireless sensor and actor networks was
formalized, and a novel localized algorithm, iMesh was proposed in [38]. Unlike existing solutions,
iMesh uses no global computation and generates constant per-node storage load. In iMesh, new service
providers (i.e., actors) publish their location information in four directions, updating an information
mesh such as GCLP. Information propagation for relatively remote services is restricted by a blocking
rule, which also updates the mesh structure. Based on an extension rule, nodes along mesh edges may
further advertise newly arrived relatively near service by backward distance-limited transmissions,
replacing previously closer service location. The final information mesh is a planar structure
constituted by the information propagation paths. It stores locations of all the service providers and
serves as a service directory. Service consumers (i.e., sensors) conduct a lookup process restricted within their home mesh cells to discover nearby services. The properties of iMesh, including construction cost and distance sensitivity, were analytically studied over a static network model.
A service discovery protocol based on hierarchical grid architecture in an ad hoc network which
enhanced the GCLP was proposed in [39]. The geographical area was divided into a 2D logical
hierarchical grid and the information of available services was registered to a specific location along a
predefined trajectory. To enhance resource availability and effective discovery of GCLP, each grid cell
selects a directory to cache available services. This work utilizes the transmitting trajectory to improve
the efficiency of registration and discovery. First, the service provider registers a service along the
proposed register trajectory. The requestor then discovers the service along the discovery trajectory to
acquire the service information.
**3. ASST**
_3.1. Basic Concept_
This section presents the basic design of ASST that is based on the following assumptions: (1) a
vast field is covered by a large number of homogeneous sensor nodes that communicate with each
other through short-range radios. (2) Each sensor node is aware of its own location and uses
geographic routing such as GPSR to accomplish long distance delivery. (3) Service information means
the service description that describes the characteristic of the service, as shown in Figure 2. (4) The
storage space on the sensor node is sufficiently large to save the service information. Recently, sensor
nodes with a very large memory space, such as RISE (RIverside SEnsor) with 1 GB of flash memory,
have been developed so that sensor nodes can store a large amount of service information and occupy
very little space [17].
**Figure 2. Example of service description and service location in a wireless network.**
(A)Example of Service Description
(B) Service Location in Wireless Network
Services have descriptions that include a service name and a service provider ID. Figure 2A shows
an example of the service description of a printer. The service description can differ from application to application and mainly depends on the type of service. Users wishing to use services provided by
a wireless sensor network need to obtain the service description. The simple way to obtain the service
description is by query flooding the entire network, as shown in Figure 2B. In the figure, a user with a
laptop wants to print documents. Query packets with the service name that the user wants to use are
broadcasted to all the nodes in the network until they reach the service provider matching the
requirement in the query packet. The user connects to the printer server on the basis of the service
description reply sent by the service provider. Flooding is easy and simple, but it may take a long time to discover the required service provider. Furthermore, flooding could cause a
considerable amount of energy consumption on the sensor nodes as a result of broadcast storming.
Therefore, efficient service location protocols that provide service information quickly and with low
network load are required in a wireless sensor network.
ASST is based on GHT and extends the DHT. Service descriptions that consist of a service name, a
service provider ID, _etc., are stored in a distributed manner using an algorithm based on DHT. The_
specific service description is stored in repository nodes that correspond to the hash value of the
service name. Geographic routing delivers the information for the service descriptions to and from the
repository nodes. When a service provider wants to advertise its service description information on the
network, the target point Q(Tx,Ty) hash key corresponding to the service name is first obtained using a
well-known hash function such as MD5 or SHA. Once the target point Q(Tx, Ty) hash key is
obtained, the service provider finds the repository nodes surrounding the hash value and stores the
service description in those nodes. For example, in Figure 3, a node provides a printer service and
wants to advertise its service to the network. First, the node finds the repository nodes for the printer
by calculating the hash key for the printer service. If the printer’s hash key is (11, 28), nodes
around (11, 28) become the repository nodes for the printer. A user wanting to use a printer finds the
repository nodes for the printer by calculating the hash key of the printer service, same as the
service provider.
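The hash-to-point step can be illustrated with a standard hash function; in the following sketch, the field dimensions and the way the digest is split into coordinates are assumptions made for illustration, not the protocol's normative definition.

```python
import hashlib

def service_target_point(service_name, field_w=100, field_h=100):
    # Hash a service name to a target point Q(Tx, Ty) inside the sensor
    # field, GHT-style. Using MD5 and splitting the digest into x and y
    # halves, as well as the field dimensions, are illustrative
    # assumptions.
    digest = hashlib.md5(service_name.encode()).digest()
    tx = int.from_bytes(digest[:8], "big") % field_w
    ty = int.from_bytes(digest[8:], "big") % field_h
    return tx, ty

print(service_target_point("printer"))  # deterministic (Tx, Ty) for "printer"
```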
**Figure 3. Example of ASST.**
(A) Service Advertisement
**Figure 3.** _Cont._
(B) Service Discovery
ASST provides a general architecture for service discovery. Both the store and the query operations
can be viewed as a general insertion and lookup, and the key can be looked-up in the hash table.
Different applications can have different insertion and lookup characteristics. Therefore, different
policies should be applied that are based on the characteristics of application. The popularity of data
(equivalent to the data query frequency) is one of the most important characteristics of the lookup
operation. Different data in an application may have different popularities, and this could lead to an
unbalanced load distribution in the existing GHT.
ASST transforms the cross-shaped trajectory into a square-shaped trajectory to control the number of nodes on the trajectory in proportion to the popularity of the data, as shown in Figure 4. In the cross-shaped trajectory, the publish trajectories are forwarded toward the boundary of the sensor network field, and the shape is fixed regardless of the popularity of the data. Therefore, if the popularity of the data is very low, it is possible that some nodes on the trajectory never receive any subscription trajectory (query packet). Moreover, forwarding the publish/subscription trajectory toward the boundary of the sensor network field can be a significant overhead when the sensor network is deployed over a large area. On the other hand, the square-shaped trajectory can be enlarged or shrunk easily and can thus be said to be more flexible than the cross-shaped trajectory. ASST distributes the query processing load to several replicated nodes; this is called
trajectory zone. Trajectory zone in ASST is a square-shaped region with replica nodes for the data. A
sensor node within the trajectory zone is called a repository node, which is responsible for the data
storage and query response. To provide high scalability and accessibility for the data, ASST adjusts the
range of the trajectory in proportion to the data’s popularity; this is referred to as dynamic trajectory.
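How the trajectory might scale with popularity can be sketched as a simple rule mapping query frequency to the half-width h; the linear rule and all constants below are assumptions, since the paper specifies only proportional growth.

```python
def trajectory_half_width(query_rate, base_h=5, queries_per_step=50, max_h=40):
    # Sketch of the dynamic-trajectory idea: the half-width h of the
    # square grows with the query frequency (popularity) so that more
    # repository nodes share the load. The linear growth rule and all
    # constants here are illustrative assumptions.
    h = base_h + base_h * (query_rate // queries_per_step)
    return min(h, max_h)

for q in (10, 120, 400):
    print(q, "queries ->", trajectory_half_width(q))
```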
**Figure 4. Cross Shaped Trajectory and Square Shaped Trajectory.**
_3.2. Square-Shaped Trajectory_
As in the case of many distributed hash table systems, ASST provides a _(key, value)-based_
associative memory. Services are named with keys. Both the storage of the service and its retrieval are
performed using these keys. Any naming scheme that distinguishes the services that users of the sensor
network wish to distinctly identify will suffice in ASST [6]. ASST supports the following two operations:
**Put(k, v) stores v (the observed data) according to the key k, the name of the service.**
**Get(k) retrieves whatever stored value is associated with key k.**
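As a rough illustration of this interface, the sketch below keys an in-memory store by the hashed target point; the geographic routing of packets to the physical repository nodes, and the replication along the trajectory, are abstracted away. All names here are hypothetical.

```python
import hashlib

class ASSTStore:
    """Minimal sketch of the (key, value)-based associative memory.

    put hashes the service name to repository coordinates and stores the
    value there; get recomputes the same coordinates to retrieve it.
    Geographic routing to physical repository nodes is abstracted into a
    dictionary keyed by target point.
    """
    def __init__(self, field_w: int = 600, field_h: int = 600):
        self.field = (field_w, field_h)
        self.repositories = {}  # target point Q(Tx, Ty) -> stored value

    def _target(self, k: str):
        d = hashlib.md5(k.encode("utf-8")).digest()
        return (int.from_bytes(d[:8], "big") % self.field[0],
                int.from_bytes(d[8:], "big") % self.field[1])

    def put(self, k: str, v) -> None:
        self.repositories[self._target(k)] = v

    def get(self, k: str):
        return self.repositories.get(self._target(k))

store = ASSTStore()
store.put("printer", {"provider": "node-42"})
print(store.get("printer"))
```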
First, the backbone node Rn in the trajectory Tr obtains data from the data packet sent by source S,
and it then forwards the data packet to the next backbone node along edge E and toward vertex V in a
counterclockwise direction, as shown in Figure 5. By overhearing the data packet forwarded by the
backbone node, replica nodes around the backbone node obtain the data and store it. Source node S
frequently sends data packets to update the data on the repository node. Each repository node keeps a
timer for the data it stores. If the timer expires before any update packet is received from the source,
then the data is discarded. To prevent query slippage in the trajectory, we use the method proposed in
[9]. The method divides the network into a virtual grid and ensures that the trajectory is constructed
through the sequential cells with sensor nodes. The grid line in Figure 5 shows the virtual grid in the
network; the trajectory is formed by forwarding the data packet through the sequential cells. In [9], the
trajectory was cross shaped; the trajectory in this paper is square shaped. However, the process of
trajectory formation is the same for both, only the order differs.
**Figure 5. Trajectory formation.**
Trajectory Tr is formed away from the target point Q with a distance H (h, h1, h2,… hn) computed
in proportion to the data popularity (the same as the query frequency) p by the source node S. In
ASST, source nodes only need to know the target point Q and the distance h to form the trajectory Tr.
A node that receives a data packet from source S could check whether or not it is involved in trajectory
Tr. Trajectory Tr is shaped with vertex V, edge E, and boundary B. Figure 6 shows the elements and
formation process of the trajectory Tr. Vertex V of trajectory Tr is a corner of the square with a
distance H(h, h1, h2,… hn) from the target point Q (Tx, Ty). The value of vertex V is one of (Tx – h,
Ty – h), (Tx + h, Ty – h ), (Tx – h, Ty + h), and (Tx + h, Ty + h). Edge E of trajectory Tr is a line
connecting two vertexes V. Trajectory Tr has a boundary B at each side of edge E with width W.
Trajectory Tr is a set of data-centric storage nodes, called repository node Rn. Repository node Rn
consists of a backbone node b, which lies on the edge E of trajectory Tr, and replica nodes u, which lie
between boundary B and edge E. Source S sends a data packet toward the target point. The header of
the data packet contains the target point Q (Tx, Ty), range distance h, Source ID, and the Data ID.
Each node between the source node and the target point computes the vertex V and edge E for the data
packet. If the node is on the edge or vertex of the trajectory’s square, then the node becomes the
backbone node and starts to form trajectory Tr.
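Since the trajectory is the square with vertices at (Tx ± h, Ty ± h), a node can decide whether it lies on edge E, or within boundary width W of it, from its Chebyshev distance to the target point. The sketch below is a compact restatement of these checks under that observation; the function names and the tolerance test are illustrative, not taken from the paper.

```python
def trajectory_vertices(tx: float, ty: float, h: float):
    """The four vertices V of trajectory Tr at distance h from Q(Tx, Ty)."""
    return [(tx - h, ty - h), (tx + h, ty - h),
            (tx + h, ty + h), (tx - h, ty + h)]

def on_trajectory(x: float, y: float, tx: float, ty: float,
                  h: float, w: float) -> bool:
    """True if a node at (x, y) is a repository node of trajectory Tr.

    The square's edge E is the set of points whose Chebyshev distance
    from Q equals h; replica nodes sit within boundary width w of it.
    """
    cheb = max(abs(x - tx), abs(y - ty))
    return abs(cheb - h) <= w

# A node with Chebyshev distance exactly h lies on edge E:
print(on_trajectory(310, 295, 300, 300, h=10, w=5))  # True: cheb = 10 = h
```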
**Figure 6. Square-Shaped Trajectory Formation.**
_3.3. Dynamic Trajectory_
To provide scalability for a large sensor network, ASST dynamically adjusts the square range of the
trajectory in proportion to the query frequency for the data. If the number of queries for the specific
data increases, the range of trajectory Tr also increases. On the other hand, if the number of queries for
the data decreases, the range of trajectory Tr decreases. This is known as Dynamic Trajectory.
To decide the distance H, which is the range of trajectory Tr, ASST computes the data popularity
(query frequency) p for the data. First, the total number of queries across all repository nodes
Rn is gathered by summing the number of queries at each node and piggybacking the running total
on the data packet. The repository nodes feed the query information back by
returning the data packet to the source node. Specifically, the backbone node b in trajectory Tr gathers the query
counts from the replica nodes around it and piggybacks the total, i.e., the sum of its own count and the
replica nodes' counts. Each subsequent backbone node that
receives the data packet adds its own and its replica nodes' query counts to
the running total and forwards the packet to the next backbone node. When the data packet returns to the first backbone node, it is
returned to source S, which computes a new distance H.
Figure 7 shows the Dynamic Trajectory in ASST. Dynamic Trajectory starts with a minimum
distance h1. The source node S sends a data packet with a distance h1. At the next update, the source S
sends a data packet with a distance h1 and computes a new distance with the query frequency. The
new distance is not applied immediately but at the time of the next update. If the query frequency is
increased over the threshold, the distance h is also increased to h2. At distance h2, if the query
frequency is decreased, the distance is decreased to h1; if the query frequency is increased again, the
distance becomes h3. When the distance is increased or decreased, a new trajectory is generated with
the new repository nodes. We do not need to manually remove the old trajectory because each
repository node has an update timer: if the data is not updated before the timer expires, the node
discards the data, and the old trajectory dissolves on its own. Table 2 shows the procedure of the
dynamic trajectory in ASST.
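The level-stepping behaviour can be summarised in a few lines. The sketch below is an illustration under assumptions: the text specifies one-level steps between h1, h2, ..., hn driven by the fed-back query frequency, but the concrete thresholds and values here are invented.

```python
def next_distance(h: float, query_freq: float, levels: list,
                  up_threshold: float, down_threshold: float) -> float:
    """Step the trajectory range up or down one level per update.

    Mirrors the behaviour described above: the new range is computed
    from the fed-back query frequency but only applied at the next
    update; the expiring update timers retire the old trajectory.
    """
    i = levels.index(h)
    if query_freq > up_threshold and i < len(levels) - 1:
        return levels[i + 1]   # e.g., h1 -> h2 under heavy querying
    if query_freq < down_threshold and i > 0:
        return levels[i - 1]   # shrink back when the data cools down
    return h

h_levels = [10.0, 20.0, 40.0]   # illustrative h1, h2, h3 (metres)
print(next_distance(10.0, query_freq=50.0, levels=h_levels,
                    up_threshold=30.0, down_threshold=5.0))  # -> 20.0
```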
**Figure 7. Dynamic Trajectory Development.**
**Table 2. Dynamic ASST procedure.**

procedure put(k, v)
  TP(Tx, Ty) ← Hash(k)
  QF ← get_Query_Freq(k)
  h ← QF / (2 · QT)
  Vertex[4] ← {(Tx−h, Ty−h), (Tx+h, Ty−h), (Tx−h, Ty+h), (Tx+h, Ty+h)}
  Min_Dis ← MAX_DIS
  for i ← 1 to 4 do
    dis ← distance(my_P, Vertex[i])
    if dis < Min_Dis then
      Min_Dis ← dis
      Target_Vertex ← Vertex[i]
      Direction ← i
    end if
  end for
  Send(US, k, v, TP, h, Vertex, Target_Vertex, Direction, ET)
  return

procedure receiveVertex(k, v, TP, h, Vertex, Target_Vertex, Direction, ET)
  if US ≤ my_US then
    return
  my_US ← US; my_k ← k; my_v ← v; my_ET ← ET
  set_Timer(my_ET)
  if Direction = EAST then
    Direction ← NORTH
  else if Direction = NORTH then
    Direction ← WEST
  else if Direction = WEST then
    Direction ← SOUTH
  else if Direction = SOUTH then
    Direction ← EAST
  end if
  Target_Vertex ← Vertex[Direction]
  GN ← Grid_Number(Direction)
  Neighbor[] ← Neighbor_Set[GN]
  Backbone ← min_dis(TP, h, Neighbor)
  Send(US, k, v, TP, h, Vertex, Target_Vertex, Direction, Backbone, GN)
  return

procedure receiveRepository(k, v, TP, h, Vertex, Target_Vertex, Direction, ET, Backbone, GN)
  if US ≤ my_US then
    return
  end if
  if my_GN = GN then
    my_US ← US; my_k ← k; my_v ← v; my_ET ← ET
    set_Timer(ET)
  end if
  if Backbone = my_ID then
    Target_Vertex ← Vertex[Direction]
    Neighbor[] ← Neighbor_Set[Direction]
    Backbone ← min_dis(TP, h, Neighbor)
  end if
  Send(US, k, v, TP, h, Vertex, Target_Vertex, Direction, Backbone)
  return

k : key of value
v : value
TP : Target Point
QF : Query Frequency
my_P : position of the source node
US : Update Sequence
ET : Expire Time
GN : Next Grid Number
_3.4. Analysis_
3.4.1. Robustness
Sensor networks consist of small fragile sensor nodes with limited resources such as computing
power, energy, and network bandwidth. Sensor nodes can easily fail as a result of a physical attack or
through energy dissipation. Therefore, sensor nodes should be protected, and systems running on
sensor nodes should provide failure tolerance. The service location system should also be able to
rapidly provide service information to the user, even if there are node failures in the network. ASST
provides failure tolerance with a virtual grid when ASST forms a trajectory and the sensor node
forwards the query packet, as shown in Figure 8.
**Figure 8. Failure Avoidance and Recovery.**
A virtual grid on the trajectory consists of a backbone node and several replica nodes; other nodes
in the virtual grid can respond to a query in the event of node failure, as long as at least one node is
alive in the grid. If all the nodes in the grid fail, the query is forwarded to other grids by geographic
routing, thus avoiding the failed area. Both the trajectory formation and query forwarding are based on
the virtual grid; therefore, the query cannot miss the trajectory, as shown in [9]. When the service
provider updates the trajectory, the trajectory is recovered by making a detour around the failed area.
This technique can be extended for multiple grid failures.
3.4.2. Load scalability
To be load scalable, a scheme has to offer uniform processing times irrespective of the loads. The
service location protocol should be scalable to accommodate the variable popularity of a service.
ASST increases the distance H of the trajectory in proportion to the query frequency in order to
provide a uniform query processing time to the user. The number of virtual blocks on the trajectory
with a distance H is

$$NUM_{VB} = \frac{4(2H)}{W} = \frac{8H}{W},$$

and the number of queries that one virtual block can process during T is

$$NUM_{proc} = \frac{T \cdot SN_{Avg}}{t_{proc}},$$

where $t_{proc}$ is the processing time of one query on a sensor node and $SN_{Avg}$ is the average number of sensor nodes
in a virtual block. When the total number of queries during T is $NUM_{query}$, the number of virtual blocks required
to process $NUM_{query}$ is

$$NUM_{reqVB} = \frac{NUM_{query}}{NUM_{proc}},$$

and thus the distance $H_{query}$ is

$$H_{query} = \frac{W \cdot NUM_{query}}{8 \cdot NUM_{proc}}.$$

The required processing time for $NUM_{query}$ is $t_{req} = NUM_{query} \cdot t_{proc}$. The query processing time per node is:

$$t_{total} = \frac{t_{req}}{SN_{total}} = \frac{NUM_{query} \cdot t_{proc}}{SN_{Avg} \cdot NUM_{VB}} = \frac{NUM_{query} \cdot t_{proc}}{SN_{Avg} \cdot \frac{8H_{query}}{W}} = \frac{NUM_{query} \cdot t_{proc}}{SN_{Avg} \cdot \frac{NUM_{query}}{NUM_{proc}}} = \frac{NUM_{proc} \cdot t_{proc}}{SN_{Avg}} = T.$$

Because $t_{total}$ equals the fixed period T regardless of the query load, there is no additional delay time on the repository node.
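A quick numeric check of this identity, using illustrative parameter values that are not taken from the paper, confirms that the per-node processing time stays pinned at T no matter how large the query load grows:

```python
# Sanity check of the identity t_total = T (illustrative values only).
T = 1.0        # accounting period (s)
t_proc = 0.1   # processing time of one query (s)
SN_avg = 4     # average sensor nodes per virtual block
W = 20.0       # trajectory zone width (m)

for NUM_query in (100, 1_000, 10_000):
    NUM_proc = T * SN_avg / t_proc              # queries one block absorbs in T
    H_query = W * NUM_query / (8 * NUM_proc)    # the trajectory grows with load
    NUM_VB = 8 * H_query / W                    # virtual blocks on the trajectory
    t_total = NUM_query * t_proc / (SN_avg * NUM_VB)
    print(NUM_query, t_total)                   # t_total == T for every load
```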
3.4.3. Time and message
For the data transmissions from a child sensor node to a parent node, CSMA is used to reserve the
channel. We assume that each node knows the number of contending neighbor nodes (m) and contends
for the channel with the optimal probability p = 1/m. The probability that one contending node wins the
channel is $p_{succ} = (1 - 1/m)^{m-1}$. Since the number of slots needed until a successful reservation is a
geometric random variable, the average number of contending slots (ACS) is given by:

$$ACS = \frac{1}{\left(1 - \frac{1}{M}\right)^{M-1}},$$

where M is the average number of neighbor nodes. Each node on a grid included in the Trajectory Zone
has to forward the packet to the next virtual block; therefore, the number of time slots required for the
Trajectory Zone (STZ) is given by:

$$STZ = ACS \times NUM_{VB} = \frac{1}{\left(1 - \frac{1}{M}\right)^{M-1}} \times \frac{8H}{W}.$$
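Evaluated numerically with assumed values for M, H, and W, the two formulas behave as expected: denser neighbourhoods raise the per-hop contention cost, and wider trajectories take proportionally more slots to traverse.

```python
def acs(M: int) -> float:
    """Average number of contending slots: 1 / (1 - 1/M)**(M - 1)."""
    return 1.0 / (1.0 - 1.0 / M) ** (M - 1)

def stz(M: int, H: float, W: float) -> float:
    """Slots to form/traverse the Trajectory Zone: ACS * 8H/W."""
    return acs(M) * 8.0 * H / W

print(round(acs(8), 2))          # ~2.55 slots per hop with 8 neighbours
print(round(stz(8, 60, 20), 1))  # ~61.1 slots for H = 60 m, W = 20 m
```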
The main part of the procedure in Table 2 is finding the next node that is closest to the trajectory
edge, and the time for this routine is proportional to the number of nodes in a virtual zone, which is in
turn proportional to the width of the Trajectory Zone (W). Sorting by distance and finding the node with the minimum
distance requires O(n log n) time. The time required for communication between backbone nodes
is proportional to the width of the Trajectory Zone (W) and the distance H, as shown above, and is thus
also O(n log n). As a consequence, the total time complexity is O(n log n).
The number of messages for Trajectory Zone formation is proportional to the number of virtual
blocks in the Trajectory Zone, since every backbone node in a virtual block has to forward the formation
message to the next virtual block; the number of messages for Trajectory Zone formation is therefore
O(N).
**4. Performance Evaluation**
In Section 3, we proposed ASST, a new mechanism for data dissemination based on DHT and TBF.
In this section, we evaluate the performance of our proposed mechanism in ns-2 simulations. Ns-2
supports detailed simulation of mobile, wireless networks. Our simulation uses an 802.11 radio with a
30 m radio range, rather than the 250 m radio range of IEEE-compliant hardware; this choice is similar
to that made in the evaluation of GHT, which used a 40 m radio range. ASST was implemented on the basis of
GPSR. A total of 900 sensor nodes were deployed in a 600 m × 600 m field in a grid topology. The distance
between sensor nodes was 20 m, so that each sensor node has 8 neighboring nodes. The sensor nodes at
the border of the field were set as query nodes, frequently sending queries. The target point was
set at the center of the field. Each sensor node consumes 0.5 W of power when it sends and 0.2 W of
power when it receives. Table 3 provides details of the simulation parameters; RXThresh is the receive
threshold, CPThresh the capture threshold, and CSThresh the carrier-sensing threshold.
**Table 3. Simulation parameters.**

| Parameter | Value | Parameter | Value |
|---|---|---|---|
| Radio Propagation | Shadowing | Routing Protocol | GPSR |
| MAC | 802.11 | Energy Consumption: Idle Power | 0.1 W |
| Queue | DropTail | Energy Consumption: Rx Power | 0.2 W |
| Antenna | OmniAntenna | Energy Consumption: Tx Power | 0.5 W |
| Network size | 600 m × 600 m | Nodes | 900 nodes |
| Radio Parameter: CPThresh | 10.0 | Packet size | 60 bytes |
| Radio Parameter: CSThresh | 1.559e−11 | Antenna Parameter: X | 0 |
| Radio Parameter: RXThresh | 3.652e−10 | Antenna Parameter: Y | 0 |
| Radio Parameter: Pt | 2.5872e−4 | Antenna Parameter: Z | 1.5 |
Scalability, as a property of systems, is generally difficult to define; however, it is essential to
define the specific requirements for scalability based on the dimensions that are deemed important. In
telecommunications and software engineering, scalability is a desirable property of a system, a
network, or a process; it indicates the ease with which the system can either handle growing amounts
of work, or can be readily enlarged. For example, it can refer to the capability of a system to increase
total throughput under an increased load. An algorithm, design, networking protocol, program, or other
system is said to be scalable if it is suitably efficient and practical when applied to large situations (e.g.,
a large input data set or a large number of participating nodes in the case of a distributed system). If
the design fails when the quantity increases then it does not scale. In particular, load scalability means
the ability of a distributed system to expand easily and to contract its resource pool to accommodate
heavier or lighter loads. Alternatively, it is the ease with which a system or component can be
modified, added, or removed to accommodate a changing load. In this paper, we use load scalability as
an evaluation metric.
We evaluate the performance of ASST with a diverse query frequency. We compare ASST with
GHT and GCLP. To evaluate the scalability of ASST, we varied the query frequency from 0.5 queries
per second to 100 queries per second. To reflect query processing, each query has a query processing
time of 100 ms. A replica node receiving the query packet has to wait for 100 ms before responding.
To provide load scalability, each query has to maintain the response time of 100 ms. The query was
uniformly distributed among sensor nodes.
**Figure 9. Number of repository nodes and the number of queries received per repository node.**
(A) Number of repository nodes
(B) Number of queries received per repository node
Figure 9 shows the number of repository nodes and the number of queries received per repository
node for several query frequencies. GHT saves information into very few repository nodes (the home
node and the neighboring nodes surrounding the target point), while GCLP uses many repository nodes,
forming a trajectory from end to end of the network. Neither GHT nor GCLP takes account of the
query frequency; therefore, their numbers of repository nodes are fixed in spite of increased query
frequency. This leads to an increase in the number of queries received per node. On the other hand,
ASST varies the size of the trajectory in proportion to the query frequency, so that the number of
repository nodes in ASST increases in proportion to the query frequency. Therefore, the
number of received queries per node in ASST is smaller than that in GHT and GCLP. GHT can be
regarded as minimum replication, and GCLP as maximum replication. It should be
noted that the number of repository nodes in GCLP and ASST is similar when the query frequency is
high, but the number of received queries per node in GCLP is higher than that in ASST due to the two
matching points in GCLP. In GCLP, the publish trajectory and the subscribe trajectory form a
cross, creating two matching points; therefore, the number of received queries is higher than that in
ASST.
The number of queries per node can affect the response time. Figure 10 shows the response time
with increasing query frequency. When the query frequency is low, GCLP shows a shorter response
time than ASST or GHT because GCLP has more repository nodes. Moreover,
GCLP forms a trajectory crossing the network, so GCLP occupies a larger area than GHT or ASST,
which reduces the hop counts between the repository and query nodes. However, as
the query frequency increases, GCLP's response time also increases; when the query frequency is very high,
it becomes longer than that of ASST. This is caused by the larger number of received queries per node.
GHT and ASST have the same number of repository nodes and the same hop counts
between the repository and query nodes when the query frequency is low, so their response times
are the same. When the query frequency increases, however, the response time of GHT
increases drastically while the response time of ASST remains almost uniform. This is due to the difference
between GHT and ASST in the number of received queries per node. GHT always has a fixed
number of repository nodes, so the number of received queries per node increases, which leads to
an increased response time. On the other hand, ASST increases the number of repository nodes in
proportion to the query frequency, so the number of received queries per node, and hence the response time,
remains almost uniform.
**Figure 10. Response time with increased query frequency (response time vs. query frequency per second for GCLP, GHT, and ASST).**
Replicating data in several sensor nodes incurs energy costs for packet transmission: the more nodes
that participate in replication, the more energy is consumed, because energy is spent
whenever a node transmits a packet. Figure 11 shows the energy consumption for repository
construction. As mentioned earlier, the number of repository nodes in GHT and GCLP is fixed in spite
of changes in query frequency. GHT maintains a minimum number of repository nodes around the target
point, so its energy consumption is lower than that of GCLP or ASST. In contrast, GCLP
maintains repository nodes between the network boundaries in a cross shape, so its energy
consumption is higher than that of GHT or ASST. On the other hand, ASST varies the number of
repository nodes in proportion to the query frequency, so its energy consumption is low when the
query frequency is low and grows in proportion to the query frequency: it is similar to GHT when the
query frequency is low and approaches that of GCLP as the query frequency increases.
**Figure 11. Energy consumption for storage construction.**
**5. Conclusions**
In this paper, we have proposed a new energy-efficient and scalable data dissemination method,
ASST, based on DHT and TBF. ASST is a type of data-centric storage system for sensor networks.
ASST provides fast access to the data stored in a sensor network by using the distributed hash
function. In ASST, the source node stores data at the nodes forming a trajectory around the target
position computed by the hash function. A client sends a query packet to the position computed by the
hash function that is the same as the source node. By storing data at several repository nodes, ASST
distributes the network load caused by query packets.
ASST also provides scalability for a large sensor network. When the query frequency is increased,
the range of the trajectory is also increased to reduce the network load at the repository node. By
adjusting the range of the trajectory in proportion to the query frequency, ASST can ensure a fast
response time for multiple clients and reduce the network load assigned to a sensor node. Such
capabilities result in an increased life span for the sensor network.
The main focus of ASST is to provide load scalability with a constant response time by adjusting the
width of the trajectory in proportion to the query frequency under a uniform distribution of queries. However,
the query distribution might not be uniform over the network, and increasing or decreasing all four edges
by the same distance H can cause load imbalance under a non-uniform query distribution. In future
work, we will consider adjusting the distance H of the four edges of the trajectory zone separately
to handle non-uniform query distributions.
**References**
1. Akyildiz, I.F.; Su, W.; Sankarasubramaniam, Y.; Cayirci, E. A Survey on Sensor Networks. IEEE
_Commun. Mag._ **2002, 102-114.**
2. Henn, H.; Hepper, S.; Rindtorff, K.; Schack, T.; Burkhardt, J. Pervasive Computing: Technology
_and Architecture of Mobile Internet Applications, 1st ed.; Addison-Wesley Professional: Boston,_
MA, USA, 2002.
3. Niculescu, D. Communication Paradigms for Sensor Networks. _IEEE Commun. Mag. 2005, 43,_
116-122.
4. Seada, K.; Helmy, A. Rendezvous Regions: A Scalable Architecture for Service Location and
Data-Centric Storage in Large-Scale Wireless Networks. In _proceedings of_ _18th International_
_Parallel and Distributed Processing Symposium (IPDPS'04), Santa Fe, New Mexico, April 26–_
30, 2004.
5. Tchakarov, J.B.; Vaidya, N.H. Efficient Content Location in Wireless Ad Hoc Networks. _IEEE_
_International Conference on Mobile Data Management, Berkeley, CA, USA, January 19–22,_
2004; p. 74.
6. Ratnasamy, S.; Karp, B.; Yin, L.; Yu, F; Estrin, D.; Govindan, R.; Shenker, S. GHT: A Geographic
Hash-table for Data centric Storage in Sensornets. In Proceedings of the First ACM International
_Workshop on Wireless Sensor Networks and Applications (WSNA), Atlanta, GA, USA, September_
28, 2002; pp.78-87.
7. Karp, B.; Kung, H.T. _GPSR: Greedy Perimeter Stateless Routing for Wireless Networks;_
A870044; MobiCom: Houston, TX, USA, 2000.
8. Capone, A.; Pizziniaco, L.; Filippini, I. A SiFT: An Efficient Method for Trajectory Based
Forwarding. In _Proceedings of_ _Wireless Communication Systems 2nd International Symposium,_
Siena, Italy, September 5–7, 2005; pp. 135-139.
9. Tscha, Y.; Caglayan, M.U. Query Slipping Prevention for Trajectory-based Publishing and
Subscribing in Wireless Sensor Networks. Comp. Commun. **2006, 29, 1979-1991.**
10. Machado, M.; Goussevskaia, O.; Mini, R.A.F.; Rezende, C.G.; Loureiro, A.A.F.; Mateus, G.R.;
Nogueira, J.M.S. Data Dissemination in Autonomic Wireless Sensor Networks. IEEE J. Sel. Area.
_Commun._ **2005, 23, 2305-2319.**
11. Akkaya, K.; Younis, M. A survey on routing protocols for wireless sensor networks. Ad Hoc Netw.
**2005, 3, 325-349.**
12. Liu, X.; Huang, Q.; Zhang, Y. _Combs, Needles, Haystacks: Balancing Push and Pull for_
_Discovery in Large-scale Sensor Networks; ACM Press: New York, NY, USA, 2004; pp. 122-133._
13. Intanagonwiwat, C.; Govindan R.; Estrin, D. Directed Diffusion: A Scalable and Robust
Communication Paradigm for Sensor Networks, submitted to ACM MobiCom 2000.
14. GridwiseTech. Principles of Scalable Systems—Describing Technologies/layers that Are
Components of Scalable System. Available online: http://www.gridwisetech.com/scalability_
expert (accessed November 2007).
15. Aydin, I.; Shen, C.C. Facilitating Match-making Service in Ad Hoc and Sensor Networks Using
Pseudo Quorum. In _Proceedings of the 11th IEEE International Conference on Computer_
_Communications and Networks (ICCCN), Miami, FL, USA, October 14–16, 2002._
16. Sailhan, F.; Issarny, V. Scalable service discovery for MANET. In _Proceedings of the 3rd IEEE_
_International Conference on Pervasive Computing and Communications, Kauai Island, HI, USA,_
March 8–12, 2005.
17. Mitra, A.; Banerjee, A.; Najjar, W.; Zeinalipour-Yazti, D.; Kalogeraki, V.; Gunopulos, D. High-Performance, Low-Power Sensor Platforms Featuring Gigabyte Scale Storage. In Proceedings of
_IEEE/ACM 3rd International Workshop on Measurement, Modelling, and Performance Analysis_
_of Wireless Sensor Networks, San Diego, CA, USA, July 21, 2005._
18. Jurczyk, P.; Xiong, L.; Sunderam, V. DObjects: Enabling Distributed Data Services for
Metacomputing Platforms. ICCS. Krakow, Poland, June 23-25. 2008; pp.136-145.
19. Yuksel, M.; Pradhan, R.; Kalyanaraman, S. An Implementation Framework for Trajectory-based
Forwarding in Ad-Hoc Networks. In Proceedings of IEEE ICC, June 20–24, 2004; pp. 4062-4066.
20. Park, S.; Lee, D.; Lee, E.; Yu, F.; Choi, Y.; Kim, S-H. A Communication Architecture to Reflect
User Mobility Issue in Wireless Sensor Fields. In Proceedings of IEEE Communications Society
_Subject Matter Experts for publication in the WCNC, Kowloon, HongKong, March 11-15, 2007;_
pp.3376-3381.
21. Krishnamachari, B.; Ahn, J. Optimizing Data Replication for Expanding Ring-based Queries in
Wireless Sensor Networks. USC Computer Engineering Technical Report; CENG-05-14, 2005.
22. Shenker, S.; Ratnasamy, S.; Karp, B; Govindan, R.; Estrin, D. Data-Centric Storage in Sensornets.
_ACM SIGCOMM, Comp. Commun. Rev._ **2003, 33, 137-142.**
23. Ball, R. Content Sharing for Mobile Devices. CoRR **2008, abs/0809.4395.**
24. Gjermundrød, H.; Bakken, D.E.; Hauser, C.H.; Bose, A. GridStat: A Flexible QoS-Managed Data
Dissemination Framework for the Power Grid. IEEE Trans. Power Delivery **2009, 24, 136-143.**
25. Liang, S.H.L. A New Fully Decentralized Scalable Peer-to-Peer GIS Architecture. In Proceedings
_of ISPRS Congress, Beijing, China, July 3–11, 2008; p. 687._
26. Bondi, A.B. Characteristics of scalability and their impact on performance. In Proceedings of the
_2nd International Workshop on Software and Performance, Ottawa, Canada, September 17–20,_
2000; pp. 195-203.
27. Al-Mamou, A.; Labiod, H. ScatterPastry: An Overlay Routing Using a DHT over Wireless
Sensor Networks. In Proceedings of _The 2007 International Conference on Intelligent Pervasive_
_Computing (IPC 2007), Jeju Island, Korea, October 11–13, 2007; pp.274-279._
28. Landsiedel O.; Lehmann K.; Wehrle K. T-DHT: Topology-based Distributed Hash Tables. _In_
_Proceedings of the Fifth IEEE International Conference on Peer-to-Peer Computing (P2P'05),_
Konstanz, Germany, August 31–September 2, 2005; pp. 143-144.
29. Landsiedel, O.; Gotz, S.; Wehrle, K. Towards Scalable Mobility in Distributed Hash Tables. _In_
_Proceedings of the Sixth IEEE International Conference on Peer-to-Peer Computing, Cambridge,_
UK, October 2–4, 2006; pp. 203-209.
30. Pietzuch, P.; Shneidman, J.; Ledlie, J.; Welsh, M.; Seltzer, M.; Roussopoulos, M. Evaluating DHT-Based Service Placement for Stream-Based Overlays. In Proceedings of the 4th International
_Workshop on Peer-to-Peer Systems (IPTPS'05), Ithaca, NY, USA, February 24–25, 2005._
31. Jung, J.; Lee, S.; Kim, N.; Yoon, H. Efficient Service Discovery Mechanism for Wireless Sensor
Networks. Comput. Commun. **2008, 31, 3292-3298.**
32. Su, I.; Chung, Y.; Lee, C. Finding Similar Answers in Data-Centric Sensor Networks. _IEEE_
_International Conference on Sensor Networks, Ubiquitous, and Trustworthy Computing (Sutc_
_2008), Taichung, Taiwan, June 11–13, 2008. pp. 217-224._
33. Le, T.; Yu, W.; Bai, X.; Xuan, D. A Dynamic Geographic Hash Table for Data Centric Storage in
Sensor Networks. In Proceeding of IEEE Wireless Communications and Networking Conference
_(WCNC 2006), Las Vegas, NV, USA, April 3–6, 2006._
34. Liao, W.; Wu, W. A Cover-up Scheme for Data-Centric Storage in Wireless Sensor Networks. In
_Proceedings of IEEE Symposium on Computers and Communications (ISCC 2007), Aveiro,_
Portugal, July 1–4, 2007.
35. Liao, W.; Wu, W. Effective Hotspot Storage Management Schemes in Wireless Sensor Networks.
_Comput. Commun._ **2008, 31, 2131-2141.**
36. Rachuri, K.; Murthy, S. Energy Efficient and Scalable Search in Dense Wireless Sensor Networks.
_IEEE Trans. Comput._ **2009, 58, 812-826.**
37. Satoshi, M.; Kazutoshi, F.; Hideki, S. Implementation Methodology of Geographical Overlay
Network Suitable for Ubiquitous Sensing Environment. J. Inform. Process. **2008, 16, 80-92.**
38. Li, X.; Santoro, N.; Stojmenovic, I. Localized Distance-Sensitive Service Discovery in Wireless
Sensor and Actor Networks. IEEE Trans. Comput. **2009, 58, 1275-1288.**
39. Tsai, H.; Chen, T.; Chu, C. Service Discovery in Mobile Ad Hoc Networks Based on Grid. IEEE
_Trans. Veh. Technol. 2009, 58, 1528-1545._
40. Mann, C.; Baldwin, R.; Kharoufeh, J.; Mullins, B. A Trajectory-Based Selective Broadcast Query
Protocol for Large-Scale, High-Density Wireless Sensor Networks. _Telecommun. Syst.: Model._
_Anal. Des. Manag._ **2007, 35, 67-86.**
© 2010 by the authors; licensee MDPI, Basel, Switzerland. This article is an open-access article
distributed under the terms and conditions of the Creative Commons Attribution license
(http://creativecommons.org/licenses/by/3.0/).
|
{
"disclaimer": "Notice: Paper or abstract available at https://pmc.ncbi.nlm.nih.gov/articles/PMC3292128, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "GOLD",
"url": "https://www.mdpi.com/1424-8220/10/5/4497/pdf?version=1403312442"
}
| 2,010
|
[
"JournalArticle"
] | true
| 2010-04-30T00:00:00
|
[
{
"paperId": "b8d9b10cf54629364523ec065e6307ab87f7d4f0",
"title": "iRun: Horizontal and Vertical Shape of a Region-Based Graph Compression"
},
{
"paperId": "e44581f9aed1c309f9b7bb7054b7cba6a40e77ed",
"title": "A survey on routing protocols in Wireless Sensor Networks"
},
{
"paperId": "a8220c1e619befd4ee02a9175c9e7f16281abfac",
"title": "Localized Distance-Sensitive Service Discovery in Wireless Sensor and Actor Networks"
},
{
"paperId": "845206025b38ef09e9e0e5629ec29140cbfa723f",
"title": "Energy Efficient and Scalable Search in Dense Wireless Sensor Networks"
},
{
"paperId": "5f7d282fd35b1391e35d2a722fb6cfe7e20169b1",
"title": "Service Discovery in Mobile Ad Hoc Networks Based on Grid"
},
{
"paperId": "d7503f1126d6df93d1f418d89fe7f5b52bb0ecad",
"title": "Content Sharing for Mobile Devices"
},
{
"paperId": "1cfa82527fee869bcaddc060a02cbb6a2fb9a541",
"title": "Efficient service discovery mechanism for wireless sensor networks"
},
{
"paperId": "0355405fbadfba27d7f682bbae6777c5878c2269",
"title": "DObjects: enabling distributed data services for metacomputing platforms"
},
{
"paperId": "dc694cd4ca181f01a737bb17d1ae78bd7e342eab",
"title": "Finding Similar Answers in Data-Centric Sensor Networks"
},
{
"paperId": "4a63f0a098bdccb905e6202c9b70f4a0f493a0f9",
"title": "Effective hotspot storage management schemes in wireless sensor networks"
},
{
"paperId": "8a82225d733ed2fb1c1ff0e06150f6e281654f3c",
"title": "ScatterPastry: An Overlay Routing Using a DHT over Wireless Sensor Networks"
},
{
"paperId": "c289d2fa550ff33457655749b5cfc91b3cc7714a",
"title": "A Cover-Up Scheme for Data-Centric Storage in Wireless Sensor Networks"
},
{
"paperId": "00e810580940e3568ab2e473b847d79db414a8bb",
"title": "A trajectory-based selective broadcast query protocol for large-scale, high-density wireless sensor networks"
},
{
"paperId": "c53b063a0ff982012a7cc7d81153b6ca6856e445",
"title": "A Communication Architecture to Reflect User Mobility Issue in Wireless Sensor Fields"
},
{
"paperId": "a9bf09db95ea70d4f29d965d9cd92f7e2c639c43",
"title": "Towards Scalable Mobility in Distributed Hash Tables"
},
{
"paperId": "20587e2ec5e5c296c86c5cf21a2a4677273ebd6c",
"title": "Query slipping prevention for trajectory-based publishing and subscribing in wireless sensor networks"
},
{
"paperId": "4626c35153566f4e6b1f00eb1513c9de74f5a60c",
"title": "A dynamic geographic hash table for data-centric storage in sensor networks"
},
{
"paperId": "8b17c8876351d26a6ed55799d196dfcaf84799e7",
"title": "Optimizing Data Replication for Expanding Ring-based Queries in Wireless Sensor Networks"
},
{
"paperId": "7a3d67136402eac90e59e5e6a545bf75eafec029",
"title": "A SiFT: an efficient method for trajectory based forwarding"
},
{
"paperId": "f3ce8b8f56427ff3cb7b253d1938ac7170de0ebf",
"title": "Data dissemination in autonomic wireless sensor networks"
},
{
"paperId": "ab3812744dcea0d6f2dcac994ac88254c9fe00c4",
"title": "T-DHT: topology-based distributed hash tables"
},
{
"paperId": "7c2af5ebe723580e90348d3bf1eaf31fe7bef828",
"title": "Service discovery in mobile ad hoc networks"
},
{
"paperId": "014d24556957a61ba4a3b6f01066a8cbdf6547c6",
"title": "Scalable Service Discovery for MANET"
},
{
"paperId": "841a78677c613d063aa930245cf4d6277d396d04",
"title": "Communication paradigms for sensor networks"
},
{
"paperId": "665578c35c379446c5d22e2e12ef57983517ce29",
"title": "Evaluating DHT-Based Service Placement for Stream-Based Overlays"
},
{
"paperId": "b8e81f7def2f9bd1207e41e49c6b456c81ea0609",
"title": "Combs, needles, haystacks: balancing push and pull for discovery in large-scale sensor networks"
},
{
"paperId": "ff96b433d5b5ef9dec64f78ab914c798e527f811",
"title": "Efficient content location in wireless ad hoc networks"
},
{
"paperId": "c791c71e27541abfd6d61f60e50d7978511c02fb",
"title": "An implementation framework for trajectory-based routing in ad-hoc networks"
},
{
"paperId": "eb0361300b9786312caa54a4661df5dec376a216",
"title": "Rendezvous regions: a scalable architecture for service location and data-centric storage in large-scale wireless networks"
},
{
"paperId": "1799995b17df1d9266bed56d7c4902392d513863",
"title": "Facilitating match-making service in ad hoc and sensor networks using pseudo quorum"
},
{
"paperId": "19d9d5e14d4d3f3e4ef66e00cdbc11ff6a93e9d5",
"title": "GHT: a geographic hash table for data-centric storage"
},
{
"paperId": "c3ed671bf7c9bb71c9b791309c11f2b8f3ecf204",
"title": "Sensor Networks"
},
{
"paperId": "6f2b8a31d0139bc93ec64da1a44d8be584d3dbd2",
"title": "Pervasive Computing: Technology and Architecture of Mobile Internet Applications"
},
{
"paperId": "cfa353b0652b3d1799a91ff454c9d8e90c792c10",
"title": "Characteristics of scalability and their impact on performance"
},
{
"paperId": "829cf8d5540342b6a7fd190befd0874e8e0e3b20",
"title": "Directed diffusion: a scalable and robust communication paradigm for sensor networks"
},
{
"paperId": "ef9d0802dad4d6e8fb39a5fc14841651736d6503",
"title": "GPSR: greedy perimeter stateless routing for wireless networks"
},
{
"paperId": "dbc6af4eb4088158990e737440c7724193c2b1ab",
"title": "GridStat: A Flexible QoS-Managed Data Dissemination Framework for the Power Grid"
},
{
"paperId": "f65b4d6e874d22e6c6d4eab8bdd7dd3fdad05651",
"title": "A NEW FULLY DECENTRALIZED SCALABLE PEER-TO-PEER GIS"
},
{
"paperId": "45af9fe862a92406249cd7d91f6e0e46a67b078d",
"title": "Implementation Methodology of Geographical Overlay Network Suitable for Ubiquitous Sensing Environment"
},
{
"paperId": "f2687e43bfee9958963fbc4d19b799a4504e1edb",
"title": "High Performance , Low Power Sensor Platforms Featuring Gigabyte Scale Storage"
},
{
"paperId": "3e8cbf0a19e997d93afd3b3aa89ea54ea728a98f",
"title": "Data-centric storage in sensornets"
},
{
"paperId": "798f3053268dbfc1c6df40fee61ac862ef713d1f",
"title": "An Implementation Framework for Trajectory-Based Forwarding in Ad-Hoc Networks"
},
{
"paperId": null,
"title": "Principles of Scalable Systems — Describing Technologies / layers that Are Components of Scalable System"
},
{
"paperId": null,
"title": "This article is an open-access article distributed under the terms and conditions of the Creative Commons Attribution license"
},
{
"paperId": null,
"title": "Principles of Scalable Systems—Describing Technologies/layers that Are Components of Scalable System Available online"
}
] | 14,644
|
en
|
[
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/0153cde1ef613a5261c93354a43fb137d3bbf2a4
|
[] | 0.879905
|
A Comparative Study of Consensus Mechanisms in Blockchain for IoT Networks
|
0153cde1ef613a5261c93354a43fb137d3bbf2a4
|
Electronics
|
[
{
"authorId": "2183437602",
"name": "Zachary Auhl"
},
{
"authorId": "9219862",
"name": "N. Chilamkurti"
},
{
"authorId": "51132514",
"name": "Rabei Alhadad"
},
{
"authorId": "2145540588",
"name": "Will Heyne"
}
] |
{
"alternate_issns": [
"2079-9292",
"0883-4989"
],
"alternate_names": null,
"alternate_urls": [
"http://www.e-helvetica.nb.admin.ch/directAccess?callnumber=bel-247562",
"http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:ch:bel-247562",
"https://www.mdpi.com/journal/electronics"
],
"id": "ccd8e532-73c6-414f-bc91-271bbb2933e2",
"issn": "1450-5843",
"name": "Electronics",
"type": "journal",
"url": "http://www.electronics.etfbl.net/"
}
|
The consensus mechanism is a core component of Blockchain technology, allowing thousands of nodes to agree on a single and consistent view of the Blockchain. A carefully selected consensus mechanism can provide attributes such as fault tolerance and immutability to an application. The Internet of Things (IoT) is a use case that can take advantage of these unique Blockchain properties. IoT devices are commonly implemented in sensitive domains such as health, smart cities, and supply chains. Resilience and data integrity are important for these domains, as failures and malicious data tampering could be detrimental to the systems that rely on these IoT devices. Additionally, Blockchains are well suited for decentralised networks and networks with high churn rates. A difficulty involved with applying Blockchain technology to the IoT is the lack of computational resources. This means that traditional consensus mechanisms like Proof of Work (PoW) are unsuitable. In this paper, we will compare several popular consensus mechanisms using a set of criteria, with the aim of understanding which consensus mechanisms are suitable for deployment in the IoT, and what trade-offs are required. We show that there are opportunities for both PoW and PoS to be implemented in the IoT, with purpose-made IoT consensus mechanisms like PoSCS and Microchain. Our analysis shows that Microchain and PoSCS have characteristics that are well suited for IoT consensus.
|
# electronics
_Article_
## A Comparative Study of Consensus Mechanisms in Blockchain for IoT Networks
**Zachary Auhl 1,*, Naveen Chilamkurti 1, Rabei Alhadad 1 and Will Heyne 2**
1 Cybersecurity Innovation Node, La Trobe University, Melbourne, VIC 3086, Australia
2 BAE Systems, Adelaide, SA 5000, Australia
***** Correspondence: z.auhl@latrobe.edu.au
**Citation:** Auhl, Z.; Chilamkurti, N.; Alhadad, R.; Heyne, W. A Comparative Study of Consensus Mechanisms in Blockchain for IoT Networks. Electronics 2022, 11, 2694. https://doi.org/10.3390/electronics11172694

Academic Editor: Asma Khatoon

Received: 18 July 2022; Accepted: 22 August 2022; Published: 27 August 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
**Abstract: The consensus mechanism is a core component of Blockchain technology, allowing thousands**
of nodes to agree on a single and consistent view of the Blockchain. A carefully selected
consensus mechanism can provide attributes such as fault tolerance and immutability to an application. The Internet of Things (IoT) is a use case that can take advantage of these unique Blockchain
consensus mechanism can provide attributes such as fault tolerance and immutability to an application. The Internet of Things (IoT) is a use case that can take advantage of these unique Blockchain
properties. IoT devices are commonly implemented in sensitive domains such as health, smart
cities, and supply chains. Resilience and data integrity are important for these domains, as failures
and malicious data tampering could be detrimental to the systems that rely on these IoT devices.
Additionally, Blockchains are well suited for decentralised networks and networks with high churn
rates. A difficulty involved with applying Blockchain technology to the IoT is the lack of computational resources. This means that traditional consensus mechanisms like Proof of Work (PoW) are
unsuitable. In this paper, we will compare several popular consensus mechanisms using a set of
criteria, with the aim of understanding which consensus mechanisms are suitable for deployment in
the IoT, and what trade-offs are required. We show that there are opportunities for both PoW and
PoS to be implemented in the IoT, with purpose-made IoT consensus mechanisms like PoSCS and
Microchain. Our analysis shows that Microchain and PoSCS have characteristics that are well suited
for IoT consensus.
**Keywords: consensus; IoT; Blockchain**
**1. Introduction**
Blockchains are cryptographically linked distributed ledgers that are known for storing
the transaction history of the Bitcoin network. Bitcoin adopted the Blockchain as it has
two important properties: tamper-evidence and the triple-entry ledger. As each block in
the Blockchain is cryptographically linked to the previous block, attempts to tamper with
the Blockchain will invalidate the blocks' cryptographic links. This means malicious parties
cannot arbitrarily alter the history of the Blockchain. Triple-entry accounting refers to the
distributed characteristics of a Blockchain. Instead of relying on two parties to provide
evidence of their activities, transactions on the Blockchain are transmitted to the whole
network, allowing anyone to validate all the transactions on the Blockchain. Often paired
with a Blockchain, is a consensus mechanism. Consensus mechanisms allow Blockchains to
converge on network-wide agreement on the state of the Blockchain, meaning that nodes
all agree on the same history of the ledger. Consensus on the Bitcoin’s Blockchain relies
on two mechanisms, Proof of Work (PoW), and the Longest Chain Rule (LcR). PoW is a
cryptographic puzzle that miners on the Bitcoin network attempt to solve. This provides
miners with a financial incentive to support the network, and prevents Sybil attacks.
The LcR is a fork resolution tool, that manages competing histories of the Blockchain,
and converges the network back onto a single state in cases where the Blockchain forks.
Recently, there has been interest in applying Blockchain technology to the Internet of Things
(IoT). Specifically, consensus mechanisms have been modified to be less resource intensive,
and more suitable for deployment in the IoT, with consensus mechanisms such as the
Credit-Based PoW (CBPoW) and Proof of Supply Chain Share (PoSCS). This paper will
examine the suitability of permissionless Blockchains for the IoT and the trade-offs required,
especially for resource limited IoT devices.
_1.1. Outline_
We begin the paper by discussing the criteria. We use these criteria to compare the
consensus mechanisms discussed in the paper. Next, we briefly discuss the fundamental
properties of Blockchains in the background section. The second half of the paper is
focused on analysing consensus mechanisms. We start by covering consensus mechanisms
commonly found in blockchains, such as PoW and PoS; then, we discuss four newer
consensus mechanisms from the literature, specifically designed for the IoT. Finally, we
compare the discussed consensus mechanisms using our proposed criteria. We pay close
attention to properties that positively and negatively impact the critical characteristics of
IoT devices. Finally, we conclude with consensus recommendations for the IoT and outline
future work.
_1.2. Contributions_
In this paper, we provide the following contributions:
- Analysis of PoW and PoS consensus mechanisms and their usability in the IoT.
- Analysis of four novel consensus mechanisms from the literature, specifically designed
for the IoT.
- A comparison between the mentioned consensus mechanisms, with clear criteria to
show their suitability for the IoT.
**2. Criteria for Blockchain Consensus**
The IoT environment is incredibly diverse, involving a wide range of hardware and software solutions paired with stringent requirements on power consumption, storage, and computational capability. IoT devices work in dynamic environments: sensors generate data constantly, devices come online and offline depending on
their power requirements, and they are expected to operate in ad hoc networks.
Due to this flexibility, IoT devices have seen widespread usage in applications such as
smart cities [1], supply chains [2–5], and healthcare [6]. The consensus mechanism is a critical
part of most Blockchain deployments, but the choice becomes even more important when
working with Blockchain deployments targeted at the IoT. All Blockchains come
with trade-offs; there is no such thing as a 'perfect' Blockchain. Some are more resource-intensive, some are faster, and others are more centralised. To compare the Blockchains
discussed throughout this paper, we define a set of requirements to better understand
their usability and impact in IoT environments.
1. Processor Usage: How will IoT devices agree on the content and order
of the Blockchain? Historically, Blockchains used PoW to decide this. However,
PoW is well known for being computationally expensive and environmentally destructive,
and interest has been waning in favour of newer consensus mechanisms. Extending the battery life
of IoT devices, and maintaining an acceptable processor utilisation, will be an important
factor when selecting a consensus mechanism.
2. Security: Blockchain implementations may provide stronger security guarantees
when compared to traditional IoT networks with central points of failure (coordinators,
controllers, cloud networks, etc.). Many Blockchains become vulnerable to attacks when the
share of malicious nodes reaches 51% or 33% of the network, depending on the consensus mechanism.
While there is no longer a single point of failure for a malicious actor to target, Blockchain-specific attacks can still compromise the IoT network.
3. Decentralisation: For most Blockchains, decentralisation is not a binary choice but operates
on a sliding scale. Increasing the network's decentralisation further diversifies Blockchain
storage and decision-making, but usually impacts the speed and scalability of the network. Decreasing decentralisation has the inverse effect: it reduces the diversity in the
consensus process and prioritises scalability and speed.
4. Storage: A factor that needs to be considered for the security and decentralisation
aspects of a Blockchain. If all nodes on the network store a full copy of the Blockchain, they
can independently verify transactions and help new nodes bootstrap their Blockchains.
IoT devices generally do not have the capacity to store hundreds of gigabytes worth of
Blockchain data, so a compromise that still allows for security, and potentially decentralisation, needs to be made.
5. Transactions Per Second (TPS): Another trade-off occurs between decentralisation
and speed. The more nodes that participate in consensus, the higher the latency to make
a decision, which results in generally lower speed, but higher decentralisation. A lower
number of nodes participating in consensus could lead to increased transaction throughput
and lower block times, which is generally desirable for IoT devices.
**3. Background**
Consensus mechanisms were traditionally deployed to maintain critical control systems, such as those aboard commercial airliners. Aboard an aircraft, the consensus mechanism coordinates multiple control systems, keeps the system operational even under
partial failure, and keeps hundreds of passengers safe [7].
Consensus mechanisms, such as Paxos, have also been widely adopted by Google
and Amazon in their distributed systems. Examples include Google's Spanner [8], a distributed,
replicated database system, and Mesa [9], a distributed data warehousing system.
A consensus mechanism defines a set of rules or protocols a group of systems needs to
abide by, in order to make a decision. Let us use Bitcoin as an example. Part of Bitcoin’s
consensus mechanism lets users running full nodes agree on Bitcoin’s Blockchain history.
Each nonfaulty participant running a full node, and enforcing Bitcoin’s consensus rules,
will check transactions for issues like: a user spending Bitcoin they do not own, a user
trying to print Bitcoin out of thin air, or a miner creating a block and rewarding themselves
with thousands of Bitcoin [10].
Rather than a central entity enforcing these rules, every full node on the Bitcoin
network is enforcing these rules. One of the most famous examples of distributed agreement
was published by Lamport et al. titled the “Byzantine Generals Problem” [11]. This paper
includes a thought experiment involving a set of generals planning an attack on an enemy
city. If a treacherous commander or lieutenant attempts to deceive their peers by sending
incorrect orders, this could lead to disaster for the army. If parts of the army attack, while
other parts of the army retreat, the battle cannot be won. The paper proves that in order
to deal with b Byzantine nodes, there must be at least 3b+1 nodes on the network [11].
Byzantine actors are capable of colluding to deceive other users in the system, which
breaks consensus by simple majority. To accommodate Byzantine actors while maintaining
safety and liveness guarantees, the number of Byzantine nodes must satisfy b < 1/3 of the total nodes on the network, so that
colluding nodes cannot split consensus with a 50/50 vote [12].
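As a concrete illustration of the 3b+1 bound, the helper below (my own, not from the cited papers) computes the tolerated number of Byzantine nodes and the matching-vote quorum commonly paired with it: with one traitor, at least four nodes are needed.

```python
def max_byzantine(n: int) -> int:
    """Largest number of Byzantine nodes an n-node network tolerates (n >= 3b + 1)."""
    return (n - 1) // 3

def quorum(n: int) -> int:
    """Matching votes needed for safe agreement: 2b + 1 of the n = 3b + 1 nodes."""
    return 2 * max_byzantine(n) + 1

for n in (4, 7, 100):
    print(n, max_byzantine(n), quorum(n))  # (4, 1, 3) (7, 2, 5) (100, 33, 67)
```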
_Types of Blockchains_
Blockchains can be generalised into two categories: permissionless, and permissioned.
Permissionless Blockchains, like Bitcoin and Ethereum, are public Blockchains that anyone can participate in. Users are free to create transactions on these networks, interact with smart
contracts, and propose blocks if they participate in mining. Users on permissionless Blockchains are generally pseudonymous, which means users' wallets are associated
with certain public addresses but not necessarily linked to a person's name.
Permissioned Blockchains are more restrictive on how users can interact with the
Blockchain, and generally have different use cases. Examples of permissioned Blockchains
include R3 [13] and J.P. Morgan’s Quorum [14], which are both targeted at the financial
industry. These Blockchains may allow the public to join, but are generally invite only,
and have a stricter set of rules.
Private Blockchains can be thought of as an extension to permissioned Blockchains.
The difference is that permissioned Blockchains may be publicly accessible, as long as a user
meets certain criteria. Private Blockchains are generally not accessible to the public,
and are almost always invite only [15]. In Section 4, we will discuss PoW Blockchains,
and their usability in the IoT.
**4. Proof of Work**
PoW is a consensus mechanism used by Bitcoin, and has been forked, modified and
copied by many other cryptocurrencies [16]. At its core, most PoW implementations involve
solving a cryptographic puzzle with certain parameters, and the first machine to solve this
problem is rewarded. Figure 1 shows a diagram of this process. More specifically, miners
are searching for a nonce (a random number), that can be hashed together with the block
header, to produce a block hash, with a specific number of starting zeros [17]. The first
miner on the network to produce a hash with these specific requirements, is given the
block reward as payment for their service. Bitcoin’s consensus mechanism involves the
interaction between two important components, PoW, and the Longest Chain Rule (LcR),
otherwise known as Nakamoto Consensus [10]. PoW provides two important features:
a mechanism that provides a financial incentive for mining, and a way of preventing
Sybil attacks. Bitcoin’s second component, LcR, exists due to a trade-off Bitcoin made to
maintain consensus guarantees. Bitcoin had to compromise either safety or liveness to
guarantee consensus under Byzantine actors, and in an asynchronous network environment,
with Bitcoin choosing safety [18]. Bitcoin is unable to provide strong safety, which means
Bitcoin cannot guarantee that every node on the network will have identical copies of
the Blockchain [19]. With the ability for forks to occur, Bitcoin requires the LcR as a fork
resolution tool. In the case the network is split, the LcR states that the fork with the most
aggregated computational work, is the correct Blockchain [20].
**Figure 1. The process of mining on PoW Blockchains.**
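A toy version of this nonce search is sketched below. It uses a single SHA-256 over an arbitrary header rather than Bitcoin's actual double-SHA-256 block format, and expresses the "specific number of starting zeros" as the hash falling below a numeric target, which is equivalent.

```python
import hashlib

def mine(block_header: bytes, difficulty_bits: int) -> int:
    """Toy PoW: find a nonce making SHA-256(header || nonce) start with
    `difficulty_bits` zero bits, i.e., fall below a numeric target."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(
            block_header + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # the winning nonce earns the block reward
        nonce += 1

# Expect on the order of 2**16 hash attempts at 16 difficulty bits.
print(mine(b"example-header", 16))
```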
Proof of Work is still widely used by several cryptocurrencies, such as Ethereum,
Litecoin, and Monero, with slight differences. Ethereum uses a modified version of the
SHA-3 hashing algorithm called Keccak-256 [21], Litecoin uses the Scrypt hash function [22],
and Monero uses an evolution of the CryptoNote hash function called CryptoNight [23].
With these two rules we have a system that rewards miners, stops Sybil attacks,
and can reconcile forks in a trustless and decentralised manner. With hundreds of billions
of dollars at stake, Bitcoin is yet to be hacked catastrophically, and, in the words of Andreas
[Antonopoulos, Bitcoin has become the “sewer rat” of Blockchains (https://aantonop.com/](https://aantonop.com/bubble-boy-and-the-sewer-rat/)
[bubble-boy-and-the-sewer-rat/ (accessed on 20 August 2022)).](https://aantonop.com/bubble-boy-and-the-sewer-rat/)
_4.1. Credit-Based PoW (CBPoW)_
Huang et al. propose a credit-based PoW system that is suitable to run on IoT devices [24]. The authors created a consensus mechanism that dynamically adjusts a device's
PoW difficulty depending on its adherence to the consensus rules. A node's total score
is calculated by taking the sum of its positive score and its negative score,
as shown in Figure 2. The positive score is increased by following the consensus mechanism, while the negative score grows by disobeying consensus. The paper focuses on two
specific attacks that could lower a client's score: lazy tips and double spending. Lazy tips is
an issue that specifically affects Directed Acyclic Graphs (DAGs), where a malicious actor
avoids confirming recent transactions by building on top of old, preexisting transactions.
This can be detrimental to the network, as honest nodes may not have their new
transactions approved. Huang et al. also penalise users for attempting to spend their
tokens twice. If nodes are found to be acting maliciously, a penalty function, which takes
the sum of their malicious transactions over a certain period of time multiplied by a
punishment coefficient, is used to reduce their credit amount. As nodes are required
to confirm two previous transactions before submitting their own, a low PoW credit makes
adding a transaction time intensive and computationally expensive. CBPoW uses a
tiered node network, where lite nodes are responsible for collecting data and broadcasting
transactions, and full nodes are responsible for maintaining the tangle.
**Figure 2. CBPoW Consensus.**
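As an illustration of how credit could feed back into mining difficulty, the Python sketch below combines a positive score with a penalty over observed malicious transactions and maps low credit to a harder puzzle. The penalty coefficient, the credit-to-difficulty mapping, and all numbers are our own illustrative assumptions, not values taken from [24].

```python
def credit_score(positive: float, malicious_events: list[float],
                 punishment_coefficient: float = 2.0) -> float:
    """Total credit = positive score + negative score; the negative score
    penalises the sum of malicious transactions observed in a window,
    scaled by a punishment coefficient (illustrative value)."""
    negative = -punishment_coefficient * sum(malicious_events)
    return positive + negative

def pow_difficulty(base_difficulty: int, credit: float) -> int:
    """Lower credit -> harder puzzle (illustrative mapping only): each
    0.25 of missing credit adds one leading-zero requirement."""
    if credit >= 1.0:
        return base_difficulty
    return base_difficulty + int((1.0 - max(credit, 0.0)) / 0.25)

honest = credit_score(positive=1.0, malicious_events=[])
cheater = credit_score(positive=1.0, malicious_events=[0.2, 0.3])
print(pow_difficulty(4, honest), pow_difficulty(4, cheater))  # 4 vs 8
```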
_4.2. Proof of Elapsed Work and Luck (PoEWAL)_
PoEWAL is a consensus mechanism with similar traits to Bitcoin, but it has been modified to be mineable on resource constrained devices [25]. PoEWAL still requires devices
to solve a cryptographic puzzle; however, rather than searching for a matching
nonce, miners only need to mine for a short period of time. This heavily reduces the power
and computational load on IoT devices. Once the mining time in a round elapses, miners
compare the hash values they produced while working on the puzzle. The node whose
hash value has the highest number of consecutive zeros has the right to produce a block for
the round. In the case that two miners propose hashes with the same number of consecutive
zeros, the authors propose a fork resolution tool called Proof of Luck. Proof of Luck
compares the two hash values with equal consecutive zeros, then selects the node whose
hash value is the lowest to propose a block, as presented in Figure 3. PoEWAL is able
to enforce these tightly synchronised time limits on its consensus mechanism because the authors
assume that the IoT devices have synchronised clocks. PoEWAL also implements a dynamic
difficulty level depending on the number of collisions. If collisions happen at a regular
frequency, the difficulty level is raised in an attempt to lower the number of collisions.
**Figure 3. PoEWAL Consensus.**
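The round structure described above can be sketched as follows in Python: each miner hashes nonces for a fixed time window, keeps its best attempt, and the winner is the candidate with the most leading zero bits, with ties broken by the lowest hash value (the Proof of Luck rule). The window length, hash function, and zero-bit counting are illustrative stand-ins for the scheme in [25].

```python
import hashlib
import time

def leading_zero_bits(digest: bytes) -> int:
    """Number of consecutive zero bits at the start of a 256-bit hash."""
    value = int.from_bytes(digest, "big")
    return 256 - value.bit_length()

def mine_for(header: str, seconds: float) -> tuple[int, bytes]:
    """Hash nonces for a fixed window, keeping the best hash found."""
    best_zeros, best_digest, nonce = -1, b"", 0
    deadline = time.monotonic() + seconds
    while time.monotonic() < deadline:
        digest = hashlib.sha256(f"{header}{nonce}".encode()).digest()
        zeros = leading_zero_bits(digest)
        if zeros > best_zeros:
            best_zeros, best_digest = zeros, digest
        nonce += 1
    return best_zeros, best_digest

def elect(candidates: list[tuple[int, bytes]]) -> tuple[int, bytes]:
    """Most consecutive zeros wins; ties are broken by the lowest hash
    value (Proof of Luck)."""
    best_zeros = max(z for z, _ in candidates)
    tied = [d for z, d in candidates if z == best_zeros]
    return best_zeros, min(tied, key=lambda d: int.from_bytes(d, "big"))

results = [mine_for(f"block-header-{i}", 0.1) for i in range(3)]
print(elect(results))
```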
**5. Proof of Stake**
Proof of Stake (PoS) was presented in a paper written by Sunny King and Scott Nadal
in 2012 [26]. King and Nadal proposed that the age of a cryptocurrency's coins, known as the coin age, could be used to develop an alternative consensus mechanism to PoW. The authors
propose a system where PoW mints the initial supply of coins on the network, and then
the mining rewards slowly diminish to lower the reliance on PoW. Sunny King went
on to create Peercoin (PPC), a fork of Bitcoin, in 2013. Peercoin implemented an initial
PoW coin distribution. The proposed consensus mechanism also used coin age to stop
wealthy users from hoarding staking rewards, and checkpoints to deny changes to the
Blockchain after a certain point [24]. Rather than finding a nonce, a node is selected to
mine the next block using a pseudorandom lottery. The larger the node's stake of coins
in proportion to the rest of the network, the higher the chance of being selected to mine a
block [16]. Similarly to PoW, the header is hashed, but rather than spending large amounts
of electricity constantly hashing different nonces, PoS does one calculation: if the coin
age > block hash/target, the node can create a new valid block; if not, the node waits until the
next round to check whether it meets the criteria to produce a block [27]. Figure 4 shows a
similar mechanism, but displays the more sophisticated PoS kernel as a replacement for the
coin age. PoS proved popular due to its minimal hardware requirements and reduced
energy usage compared to PoW. Algorand, Cardano, and PIVX are examples of
cryptocurrencies that have adopted PoS as their consensus mechanism: Algorand [28] was
co-founded by cryptographer Silvio Micali; Cardano is one of the largest smart
contract platforms by market cap [29]; and PIVX allows users to run 'masternodes',
nodes which provide extra security and functionality for the network [6].
**Figure 4. PoS Consensus.**
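The single staking calculation can be sketched in Python as below, using the coin-age formulation from the text: a node may mint when coin age > block hash / target, which is equivalent to block hash < coin age × target. The target value and kernel string are toy assumptions of ours.

```python
import hashlib

TARGET = 2**240  # toy difficulty target; a higher target makes minting easier

def can_mint(kernel: str, coin_age: int) -> bool:
    """One hash per round: a node with older or larger stake clears the
    threshold more often, replacing brute-force nonce search."""
    block_hash = int.from_bytes(hashlib.sha256(kernel.encode()).digest(), "big")
    return coin_age > block_hash / TARGET  # i.e., block_hash < coin_age * TARGET

for node, age in [("whale", 10_000_000), ("minnow", 10)]:
    print(node, can_mint(f"{node}|utxo|round-1", age))
```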
_5.1. Byzantine Agreement Protocol (BAP)_
Algorand is a cryptocurrency co-founded by Silvio Micali, and it uses a Verifiable Random
Function (VRF) to power its consensus mechanism, the Byzantine Agreement Protocol [28].
Nodes on the network can choose to participate in consensus by computing an evaluation
function. The Decentralised Random Beacon (DRB) allows nodes to agree on a VRF and to
collaboratively create one new output of the VRF every round. A VRF in this context means
a commitment to a deterministic, pseudorandom value. In particular, the VRF outputs are
unbiased due to their pseudorandom qualities [30].
On the Algorand network, each user has their own unique secret key, and there is a 'magic seed'
known to the nodes on the Algorand network. The evaluation function returns an output
string (which is used to select committee members) and a proof to verify the output. Next,
the output string is checked to see whether it falls within the range [0, user stake], where user
stake is the proportion of the coins a user has staked compared to the total coins staked.
If the output string falls in this range, the user is selected to join the committee for the
current round [28]. The VRF also acts as a lottery to select leaders to propose blocks to
the committee.
If most of the committee is honest, and a node proposes a valid block, the block can be
certified and added to the Blockchain. This process is shown in Figure 5.
**Figure 5. BAP Consensus.**
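The committee check can be illustrated with the Python sketch below. We stand in for the VRF with a keyed hash mapped into [0, 1); a real VRF additionally outputs a proof that other nodes can verify, which this simplification omits.

```python
import hashlib

def pseudo_vrf(secret_key: bytes, seed: bytes) -> float:
    """Stand-in for a VRF: map (key, seed) to a number in [0, 1).
    Unlike a real VRF, this produces no verifiable proof."""
    digest = hashlib.sha256(secret_key + seed).digest()
    return int.from_bytes(digest, "big") / 2**256

def in_committee(secret_key: bytes, seed: bytes,
                 user_stake: int, total_stake: int) -> bool:
    """Selected iff the output falls in [0, stake fraction): a user
    staking 10% of all coins is selected roughly 10% of the time."""
    return pseudo_vrf(secret_key, seed) < user_stake / total_stake

seed = b"magic-seed|round-7"
for name, stake in [(b"alice", 400), (b"bob", 50), (b"carol", 550)]:
    print(name, in_committee(name, seed, stake, total_stake=1000))
```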
_5.2. Dfinity_
Dfinity launched on 18 December 2020 and is positioning itself as the "Internet
Computer". Dfinity is creating a Blockchain that can host various online services, such as
those provided by Amazon AWS and Google Cloud. The Dfinity consensus mechanism is
split into four segments [31], summarised here and depicted in detail in Figure 6.
1. Identities and Registry: used to register clients to the network; each client has a
permanent pseudonymous identity. This is a form of Sybil protection to defend
against malicious users flooding the network with fake identities.
2. Random Beacon: built on top of a Verifiable Random Function (VRF) that allows
registered clients on the network to generate and agree upon random numbers. Dfinity
uses an optimised implementation of the Boneh–Lynn–Shacham (BLS) signature
scheme, which it has used to solve the last actor problem. This problem involves
the last actor in the protocol knowing the random value for the next round, and having
the ability to abort the protocol.
3. Blockchain and Fork Resolution: this segment implements the Probabilistic Slot
Protocol (PSP), which is used to rank the clients for a particular round according to
the output from the Random Beacon. This rank is used to assign a weight to block
proposers, with a higher rank resulting in a better chance of being selected to create a
block. PSP offers instantaneous ranking and a deterministic block time.
4. Notarisation and near-instant finality: block notarisation is Dfinity's technique to
provide near-instant finality, that is, network-wide and irreversible agreement on a
new block. Dfinity takes advantage of the BLS threshold signatures [32], the Random
Beacon, and the client ranking system to achieve this.
**Figure 6. Dfinity Consensus.**
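As a rough intuition for the PSP ranking, the Python sketch below derives a deterministic ordering of registered clients from a round's beacon output, so every honest node computes the same ranks. This is our simplification: the real protocol derives its randomness from BLS threshold signatures rather than a plain hash.

```python
import hashlib
import random

def rank_clients(beacon_output: bytes, clients: list[str]) -> list[str]:
    """Deterministically rank clients for a round: every honest node,
    given the same beacon output, computes the same ordering.
    Rank 0 receives the highest weight as a block proposer."""
    seed = int.from_bytes(hashlib.sha256(beacon_output).digest(), "big")
    rng = random.Random(seed)
    order = clients[:]
    rng.shuffle(order)
    return order

beacon = hashlib.sha256(b"round-42").digest()  # stand-in for the BLS beacon
print(rank_clients(beacon, ["alice", "bob", "carol", "dave"]))
```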
_5.3. Ethereum PoS Consensus_
Ethereum is a Blockchain that was originally conceived by Vitalik Buterin in 2013 [33],
and extended in 2014 by Gavin Wood, who defined Ethereum's smart contract functionality [34]. Ethereum launched in 2015 with the code name 'Frontier'. At the time of writing,
Ethereum is the second largest cryptocurrency in terms of market cap. Ethereum is a
Blockchain known for decentralised applications, commonly called 'DApps'. The Ethereum
DApp ecosystem is diverse, with a range of widely used DApps in areas such as finance [35,36], gaming [37], and prediction markets [38]. Currently, Ethereum is using
PoW as its consensus mechanism, but plans to move to PoS in an upgrade commonly
known as 'Eth2'. The Eth2 upgrade is underway and is being rolled out progressively.
The roll out is planned to occur in three stages, with each stage implementing several changes.
Stage 1 launches the Beacon Chain, which introduces PoS; stage 2 focuses on merging
the Ethereum PoW chain with the Beacon Chain; and stage 3 focuses on scalability with the
implementation of sharding [39]. Consensus on Eth2 involves two components:
the Greedy Heaviest Observed Subtree (GHOST), which acts as a fork resolution tool,
and Casper the Friendly Finality Gadget (CFFG), which finalises the decisions that
GHOST makes [40]. The GHOST protocol compromises on its safety, which means that
it is possible to switch between different forks with different chain heights. However,
as GHOST has liveness guarantees, blocks can continue to be added to the Ethereum
Blockchain even when the chain is under attack.
The CFFG protocol finalises the blocks that are added to the chain, and as CFFG
favours safety over liveness, the protocol's decisions are final. CFFG has similar properties
to the Practical Byzantine Fault Tolerance (PBFT) consensus mechanism, in that both
protocols use justification rounds and finalisation rounds to come to consensus [41]. CFFG
also employs a method to batch justification and finalisation messages, which increases
Ethereum's potential scalability. While the Ethereum network is functioning normally,
GHOST provides a fork resolution process, then CFFG finalises the decision and
adds the block to Ethereum's Blockchain. However, in the event that the network is under
attack, or an issue causes many nodes to go offline, GHOST continues to
function and blocks are still added to the Blockchain, but they are not finalised. Once
the attack subsides, CFFG starts working again and finalises the blocks that GHOST
has proposed, adding them to the Blockchain if they are valid. CFFG and GHOST cover
each other's weaknesses, and together allow a consensus mechanism with both safety and liveness
guarantees [42]. A partial snapshot of Ethereum's PoS implementation is shown in Figure 7.
**Figure 7. Ethereum PoS Consensus.**
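A minimal Python sketch of a GHOST-style fork choice is shown below: starting from the root, the algorithm repeatedly descends into the child whose subtree is heaviest, here weighted by a simple block count. Eth2's actual fork choice weights subtrees by validator attestations, which this toy model omits.

```python
from collections import defaultdict

def ghost_head(parent_of: dict[str, str], root: str) -> str:
    """Greedy Heaviest Observed SubTree: from the root, keep stepping
    into the child with the heaviest subtree until reaching a leaf."""
    children = defaultdict(list)
    for block, parent in parent_of.items():
        children[parent].append(block)

    def subtree_weight(block: str) -> int:
        return 1 + sum(subtree_weight(c) for c in children[block])

    head = root
    while children[head]:
        head = max(children[head], key=subtree_weight)
    return head

# A fork at B1: the B2a branch holds three blocks, B2b only one,
# so GHOST follows B2a even though both tips have the same height.
tree = {"B1": "G", "B2a": "B1", "B2b": "B1", "B3": "B2a", "B4": "B2a"}
print(ghost_head(tree, "G"))  # -> "B3" (a leaf on the heavier branch)
```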
_5.4. Microchain_
Microchain proposes a lightweight consensus mechanism aimed at the IoT [43]. Microchain's consensus mechanism has similar properties to PoS: a number of validators are selected to join a committee, and from the committee a node is selected to produce
a block. The purpose of the committee is to select a pseudorandom subset of the network,
to avoid biased or malicious block producers. Microchain calls its committee
a 'Dynasty', to which eligible validators are selected. Microchain's
consensus mechanism is broken down into two major components: Proof of Credit (PoC)
and Voting-based Chain Finality (VCF). PoC is a PoS mechanism that uses a credit weight
to increase the chance a particular node has of producing a block, as depicted in Figure 8.
Given the distribution of credits in a particular Dynasty, nodes that have a higher credit
weight have a larger chance of being selected to produce a block.
The VCF is a fork resolution tool, and it is also responsible for extending the chain by
adding new blocks and for protecting the Blockchain from malicious or accidental reorganisation
by adding checkpoints. The consensus mechanism proposed by Xu et al. leverages a VRF to
power its slot selection (that is, the process of picking nodes to join a Dynasty). Microchain
assumes that the network is synchronous, and it is able to provide two guarantees:
persistence and liveness. Persistence guarantees that all users agree on the same history
of the Blockchain, and that if one honest node finds a transaction to be finalised, all honest
nodes see the transaction as final. Liveness guarantees that a valid transaction submitted
by an honest node will eventually be added to a new block.
**Figure 8. Microchain Consensus.**
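The PoC selection step can be pictured as a credit-weighted lottery over the current Dynasty, as in the Python sketch below. Seeding the draw from a shared per-round value (our stand-in for the VRF described in [43]) keeps the choice identical on every honest node.

```python
import hashlib
import random

def select_producer(dynasty: dict[str, float], round_seed: bytes) -> str:
    """Credit-weighted draw over the Dynasty: a node holding 40% of the
    committee's credit is picked roughly 40% of the time. Seeding from a
    shared value keeps the choice identical on every honest node."""
    seed = int.from_bytes(hashlib.sha256(round_seed).digest(), "big")
    rng = random.Random(seed)
    nodes = sorted(dynasty)  # fixed order so the draw is reproducible
    weights = [dynasty[n] for n in nodes]
    return rng.choices(nodes, weights=weights, k=1)[0]

dynasty = {"n1": 4.0, "n2": 1.0, "n3": 5.0}
print(select_producer(dynasty, b"round-9"))
```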
_5.5. Proof of Supply Chain Share (PoSCS)_
PoSCS is a consensus mechanism proposed by Tsang et al. targeted at the
Perishable Food Supply Chain (PFSC) [44]. The project uses a framework that incorporates
an IoT network to manage monitoring and communication, a Blockchain to manage the data
of food through the life cycle of the supply chain, and a database to archive supply chain
information. The authors point out that PoW is not suitable for the IoT due to the
computationally expensive mining process. They propose a consensus mechanism
like PoS, but replace the need for a currency with a reputation system. Each node
participating in consensus has four components that determine its reputation: the Influence
Factor (INF), the Interest Factor (INT), the Devotion Factor (DEV), and the Satisfaction Factor (SAT).
These factors can be weighted using three strategies: the interest-first strategy,
the moderate strategy, and the devotion-first strategy. These weights prevent the consensus
mechanism from favouring participants who attempt to maximise a single factor. Lastly,
the shipment volume considers the ingoing and outgoing volume a particular party
moves on the supply chain network. This process is summarised in Figure 9. These
factors and weights are used to pseudorandomly select a block producer, who is then
required to forge a block.
Block forgers are also required to do a small amount of PoW mining, which allows
the block creation time to be controlled. Rather than all nodes participating in PoSCS's
consensus mechanism, only the block forger is required to mine. The architecture of the
system uses a hybrid approach, combining Blockchain and the cloud. The Blockchain
is used to record the data about a particular object in the supply chain in tandem with
a traditional database. Once the object has completed its journey through the supply
chain, the object is removed from the storage of the IoT devices and remains archived in
the cloud.
**Figure 9. PoSCS Consensus.**
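To illustrate how the factor weighting discourages one-dimensional optimisation, the Python sketch below combines the four factors under a chosen strategy and multiplies by shipment volume before a pseudorandom draw. The factor values and strategy weights are invented for illustration; [44] defines them in supply-chain-specific terms.

```python
import hashlib
import random

STRATEGIES = {  # illustrative weights for (INF, INT, DEV, SAT)
    "interest-first": (0.2, 0.4, 0.2, 0.2),
    "moderate":       (0.25, 0.25, 0.25, 0.25),
    "devotion-first": (0.2, 0.2, 0.4, 0.2),
}

def stake(factors: tuple[float, float, float, float],
          shipment_volume: float, strategy: str) -> float:
    """Weighted reputation times shipment volume gives a node's 'stake'.
    Weighting stops a node gaming selection by maximising one factor."""
    weights = STRATEGIES[strategy]
    reputation = sum(w * f for w, f in zip(weights, factors))
    return reputation * shipment_volume

nodes = {
    "farm":      stake((0.9, 0.3, 0.8, 0.7), 120.0, "moderate"),
    "logistics": stake((0.5, 0.9, 0.4, 0.6), 300.0, "moderate"),
}
rng = random.Random(int.from_bytes(hashlib.sha256(b"epoch-3").digest(), "big"))
forger = rng.choices(list(nodes), weights=list(nodes.values()), k=1)[0]
print(forger)  # pseudorandomly selected block forger
```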
_5.6. Tendermint_
Tendermint is a flexible consensus mechanism in the Byzantine Fault Tolerance (BFT) family that can
be configured to work in public, private, or permissioned networks [16]. Tendermint
can be configured as a public consensus mechanism with PoS, or as a permissioned/private
Blockchain with predetermined validator nodes. Tendermint's consensus mechanism uses
a voting mechanism that has three steps: proposal, prevote, and precommit. The proposal
message is used by a proposer to suggest a particular value or state, while the prevote
and precommit messages allow other nodes to vote on the proposal [45]. Tendermint uses
a locking mechanism to guarantee consensus as long as the number of malicious nodes on the
network does not surpass one-third of the total participants [12]. This locking mechanism
uses the term 'polka', which denotes that more than two-thirds of the prevotes are for a single block.
If a validator tries to publish a block without a polka, it is considered malicious behaviour,
as shown in the 'Commit' phase in Figure 10. Cosmos is an example of a Blockchain that
is leveraging the Tendermint consensus mechanism. Cosmos is a multichain Blockchain,
which allows many independent Blockchains, called zones, to run in parallel, with the
ability to communicate through a central Blockchain, called the hub. The native token on
the Cosmos Blockchain is called Atom.
**Figure 10. Tendermint Consensus.**
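The two-thirds rule behind a polka can be sketched in a few lines of Python: count the voting power prevoting for a block and check that it exceeds two-thirds of the total. The validator names and powers below are illustrative.

```python
def has_polka(prevotes: dict[str, str], voting_power: dict[str, int],
              block_id: str) -> bool:
    """A 'polka' exists when validators holding more than two-thirds of
    the total voting power prevote for the same block."""
    total = sum(voting_power.values())
    power_for_block = sum(voting_power[v] for v, b in prevotes.items()
                          if b == block_id)
    return 3 * power_for_block > 2 * total

power = {"v1": 10, "v2": 10, "v3": 10, "v4": 10}
prevotes = {"v1": "B", "v2": "B", "v3": "B", "v4": "nil"}
print(has_polka(prevotes, power, "B"))  # True: 30 of 40 > 2/3
```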
**6. Alternative Consensus Mechanisms**
The consensus mechanisms PoW and PoS are well known and widely used in Blockchains
and cryptocurrencies. In this section, we will cover three consensus mechanisms that deviate from
pure PoW and PoS: PoC creates consensus using hard drive capacity, while PoI heavily integrates a reputation system into its consensus mechanism. The section concludes by covering
hybrid consensus mechanisms.
_6.1. Proof of Capacity_
Proof of Capacity (PoC) is a consensus mechanism that focuses on hard drive capacity, rather than mining
with graphics cards or ASICs (Application-Specific Integrated Circuits). Proof of Capacity
saw its first use in the cryptocurrency BurstCoin. Mining on BurstCoin has two phases:
plotting and mining. Plotting involves hashing a list of nonce values and then storing
them on a hard drive. BurstCoin uses the Shabal hashing algorithm, which is harder to
compute than Bitcoin's SHA-256. Rather than discarding the hashes, as in Bitcoin, they are
bundled together into scoops (pairs of hashes) and stored on the node's hard drive. In the
mining phase, miners calculate a scoop number, and they use that scoop number to create a
deadline value [46]. A node's deadline value will vary depending on the hashes that have
been calculated, and it represents a time limit in seconds. The node that is able to calculate
the deadline with the lowest time is given the right to produce a block. An outline of this
process can be found in Figure 11. BurstCoin has since rebranded to Signum, and has
changed to a hybrid consensus mechanism called PoC+. PoC+ still requires a commitment
of hard drive space, with miners now having the option to stake their Signa coins, which
increases their chance of mining a block.
**Figure 11. PoC Consensus.**
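The plot-then-mine flow can be pictured with the Python sketch below: hashes are precomputed once ("plotting"), and each round every stored hash yields a candidate deadline, with the smallest deadline winning. The hash choice, the 8-byte truncation, and the base target are our stand-ins for BurstCoin's Shabal-based scheme.

```python
import hashlib

def plot(account: bytes, n_nonces: int) -> list[bytes]:
    """Plotting: hash nonces once, up front, and store the results on
    disk (a list stands in for the plot file)."""
    return [hashlib.sha256(account + n.to_bytes(8, "big")).digest()
            for n in range(n_nonces)]

def best_deadline(plot_file: list[bytes], gen_sig: bytes,
                  base_target: int = 2**44) -> int:
    """Mining: combine each stored hash with the round's generation
    signature and keep the smallest resulting deadline (a wait time in
    seconds). More disk capacity -> more candidates -> lower deadlines."""
    return min(int.from_bytes(hashlib.sha256(stored + gen_sig).digest()[:8],
                              "big") // base_target
               for stored in plot_file)

sig = hashlib.sha256(b"block-1000").digest()
print(best_deadline(plot(b"acct-A", 64), sig))    # small plot
print(best_deadline(plot(b"acct-B", 4096), sig))  # big plot, likely smaller
```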
_6.2. Proof of Importance_
Proof of Importance (PoI) is a consensus mechanism originally proposed by the New
Economy Movement (NEM). PoI shares similarities with PoS, in that nodes are required to lock
up a certain amount of coins. However, rather than just keeping a node running, as in the case
of PoS, PoI has some extra requirements to encourage network usage and to calculate a wallet's
importance [47], as seen in Figure 12. To be selected for the importance calculation, NEM
wallets must have a minimum of 10,000 coins vested for a certain period. An importance
score can also be increased by using the NEM network and sending transactions. Safeguards
have been put in place against loop attacks, which involve sending coins between accounts
controlled by a single actor to boost their importance [48]. NEM has added a mechanism
that heavily weights the importance of an account sending NEM, and gives minimal weight to
an account that sends many coins but receives most or all of its NEM back. Even if an
account were to attempt the loop attack, it would gain only a minor increase in its importance
score (<10%) and very little monetarily, as the extra money received from the
higher importance is lost in the transaction fees spent attempting to boost it [48].
The name of NEM's native Blockchain token is XEM.
**Figure 12. PoI Consensus.**
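Why the loop attack earns so little can be seen from a toy scoring rule like the Python sketch below, which rewards only net outflow. The formula is our illustration; NEM's actual importance calculation uses vesting and a graph-based computation over the transaction network.

```python
def importance(vested: float, sent: float, received_back: float,
               activity_weight: float = 0.1) -> float:
    """Toy importance score: stake plus an activity bonus based on *net*
    outflow. Coins that merely loop back (received_back) are subtracted,
    so self-transfers barely raise the score while still costing fees."""
    net_outflow = max(sent - received_back, 0.0)
    return vested + activity_weight * net_outflow

honest = importance(vested=10_000, sent=5_000, received_back=0)
looper = importance(vested=10_000, sent=5_000, received_back=4_990)
print(honest, looper)  # the loop attack gains almost nothing
```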
_6.3. Hybrid Consensus (PoW/PoS)_
A number of cryptocurrencies have taken alternative approaches to consensus by combining elements of PoW and PoS. Decred is a cryptocurrency that saw the flaws in PoW
(the double-spend problem) and the issues in PoS (nothing at stake) and decided to create a
hybrid consensus mechanism to mitigate these problems, as shown in Figure 13. Miners on
the Decred network are still used to produce blocks, but are unable to add blocks directly
to the Blockchain. Instead, miners propose their blocks to a network of PoS nodes who
purchase tickets as their stake [49]. If a PoS node is pseudorandomly selected from this
pool of tickets, it is required to validate the block and add it to Decred's Blockchain,
as shown in Figure 13. These improvements stop miners from creating private chains
and add a checkpoint system that stops large parts of the Blockchain from being reorganised in the event of an attack. The cryptocurrency Horizen also uses a hybrid consensus
mechanism. Horizen leverages a network of PoW miners to solve a cryptographic puzzle.
Horizen full nodes are still given a reward for running honestly, but are not part of the
consensus process. Horizen's 'Secure Nodes' provide a more secure version of the standard
full nodes found in other cryptocurrencies.
Horizen requires its Secure Nodes to use TLS encryption, hold a small number of
tokens, and maintain a full copy of the Blockchain. Secure Node users are compensated with
part of the block reward, providing a financial incentive to support the network [50]. In 2018,
[Horizen was a victim of a 51% attack (https://www.coindesk.com/markets/2018/06/08/blockchains-once-feared-51-attack-is-now-becoming-regular/ (accessed on 20 August](https://www.coindesk.com/markets/2018/06/08/blockchains-once-feared-51-attack-is-now-becoming-regular/)
2022)), and decided to modify its consensus mechanism to make future
attacks more difficult. Horizen added a delay function that penalises miners for keeping
their private Blockchain hidden from the network. Malicious miners are required to
continue mining their Blockchain for a certain number of blocks, according to the delay
function, rather than having honest nodes instantly adopt their modified Blockchain once
it is made public [51]. This makes 51% attacks require more time and more electricity
to execute, when compared against Horizen's original implementation of the LcR.
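Decred's ticket lottery can be sketched in Python as below: a deterministic, seed-driven draw picks a handful of tickets from the pool, and the miner's block is accepted only if a majority of the drawn voters approve. The draw size, the majority threshold, and the seeding are illustrative simplifications rather than Decred's exact parameters.

```python
import hashlib
import random

def validate_block(block_hash: bytes, ticket_pool: list[str],
                   votes: dict[str, bool], k: int = 5) -> bool:
    """Draw k tickets pseudorandomly (seeded by the previous block so
    every node draws the same ones) and accept the mined block when a
    majority of the drawn ticket holders vote yes."""
    seed = int.from_bytes(hashlib.sha256(block_hash).digest(), "big")
    rng = random.Random(seed)
    drawn = rng.sample(ticket_pool, k)
    approvals = sum(votes.get(t, False) for t in drawn)
    return 2 * approvals > k

pool = [f"ticket-{i}" for i in range(100)]
votes = {t: True for t in pool[:80]}  # 80% of stakeholders approve
print(validate_block(b"prev-block-hash", pool, votes))
```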
The consensus mechanisms discussed in Sections 4–6 are summarised in three tables. Table 1 covers common properties of consensus mechanisms, such
as block time, Transactions Per Second (TPS), and adversary tolerance. Table 2 specifically
discusses the consensus mechanisms designed for IoT devices (PoSCS, Microchain, PoEWAL, and CBPoW) in more detail. Table 3 compares the discussed consensus mechanisms
against the criteria defined in Section 2, and allocates each consensus mechanism
a rating.
**Figure 13. Decred’s hybrid PoW/PoS consensus mechanism.**
**Table 1. Overview of all the consensus mechanisms mentioned in Sections 4–6.**

| Consensus | Blockchain | Block Time | TPS | Adversary Tolerance | L2 Network | Reference |
|---|---|---|---|---|---|---|
| PoW | Bitcoin | 10 min | 7 | <51% | Lightning Network | [52,53] |
| PoW | Litecoin | 2.5 min | 56 | <51% | Lightning Network | [54] |
| PoW | Monero | 2 min | Variable | <51% | None | [55] |
| PoW | Ethereum | 12–14 s | 15 | <51% | Side Chains, Rollups | [56] |
| PoW | Horizen | 2.5 min | N/A | <51% | Side Chains | [50] |
| PoW | CBPoW | Variable | 500+ | | None | [24] |
| PoW | PoEWAL | Variable | 25 | | None | [25] |
| PoS | Ethereum (PoS) | 12 s | TBD | <51% | TBD | [40] |
| PoS | Algorand | 4.5 s | 1000 | <33% | Off-chain Contracts | [57] |
| PoS | Dfinity | Variable | Variable | <33% | | [31] |
| PoS | Cosmos | 6 s | 1000+ | <33% | | [58] |
| PoS | PIVX | 60 s | 173 | <51% | None | [55,59] |
| PoS | Microchain | 9 s | 230+ | <33% | | [43] |
| PoS | PoSCS | Variable | Variable | <51% | | [44] |
| PoW + PoS | Decred | 5 min | 14 | <51% | Lightning Network | [49] |
| PoC | BurstCoin | 4 min | 80+ | <50% | None | [60] |
| PoI | NEM | 1 min | 4000 | <51% | None | [48] |

A Blockchain's layer 1 network is its primary network, where transactions are created on-chain. Transactions on layer 2 are created off-chain, and are often compressed and posted on a Blockchain's layer 1 network to increase scalability. Note: Transactions Per Second (TPS) only considers a Blockchain's layer 1 network. Blank cells were not specified.
**Table 2. Overview of the IoT specific consensus mechanisms mentioned in Sections 4 and 5.**

| Consensus | Similar to | Decentralised | Features | Apps | Drawbacks | Reference |
|---|---|---|---|---|---|---|
| PoSCS | PoS | No | Reputation System | Supply Chains | Cloud Reliance | [44] |
| Microchain | PoS | Partially | Crypto Sortition | IoT Blockchain | Synchronous Networks | [43] |
| PoEWAL | PoW | Partially | Time-limited PoW | IoT DApps | Synced Clocks | [25] |
| CBPoW | PoW | Partially | Credit System | Industrial IoT | DAG Coordinator | [24] |
**Table 3. Consensus mechanism suitability for IoT devices, measured against our criteria defined in**
Section 2.

| Consensus | Security | Decentralisation | Storage | Processor Usage | TPS | Suitable? | Reference |
|---|---|---|---|---|---|---|---|
| PoW | High | High | High | High | Low | No | [16] |
| PoS | Medium | High | Medium | High | Variable | Partially | [16] |
| PoW + PoS | High | High | High | High | Low | No | [59] |
| PoC | Low | High | High | High | Low | No | [60] |
| PoI | Low | High | High | High | High | Partially | [48] |
| PoSCS | Low | High | Low | Low | Variable | Partially | [44] |
| CBPoW | Low | High | Medium | Low | Medium | Yes | [24] |
| PoEWAL | Low | High | High | High | Low | Partially | [25] |
| Microchain | Medium | High | Medium | High | Medium | Yes | [43] |

Storage refers to the internal memory needed to store the Blockchain on IoT devices. TPS refers to the transactions per second of the consensus mechanism: less than 100 TPS is low, 100–1000 TPS is considered medium, and 1000+ TPS is considered high.
**7. Analysis**
Before starting the analysis, we will discuss the three-way trade-off that Blockchains commonly make. The Blockchain trilemma commonly affects the design choices of consensus
mechanisms, and it also has consequences for IoT devices. Afterwards, we discuss each
consensus mechanism and its suitability for the IoT.
_7.1. Blockchain Trilemma_
Blockchains have three important properties: security, decentralisation, and scalability.
Many Blockchains are only able to provide two of these properties, while having to compromise
on the third [61]. The term 'Blockchain trilemma' was originally coined by Ethereum's creator,
Vitalik Buterin. Buterin explains that, using these three properties, simple Blockchains (meaning
those with no advanced techniques, such as sharding [62]) can broadly be placed into
three categories: traditional Blockchains, high Transactions Per Second (TPS) Blockchains,
and multichain Blockchains.
Bitcoin and Ethereum (pre-PoS Ethereum) are examples of traditional Blockchains.
Bitcoin and Ethereum highly value decentralisation and security, at the expense of scalability [63]. Blockchains that prioritise speed and security generally have a limited number
of nodes participating in consensus. Blockchains with Delegated Proof of Stake (DPoS),
such as EOS [64] and TRON [65], are examples of Blockchains that prioritise performance.
These Blockchains are able to process more transactions per second, but are more prone to
centralisation due to the smaller number of nodes participating in consensus [66]. Blockchains
such as Cosmos [58] and Avalanche [67] are two examples of multichain Blockchains.
In the trilemma triangle, these sorts of Blockchains generally prioritise scalability and
decentralisation. Buterin suggests that multichain Blockchains may not be able to provide
certain security guarantees when implementing more advanced techniques, such as
sharding [61].
_7.2. Proof of Work Suitability_
PoW can be quickly discarded from the list of suitable consensus mechanisms for
IoT devices. PoW is extremely energy intensive [65], processor intensive, and requires
specialised mining hardware. PoW is not suitable for the IoT.
_7.3. Proof of Stake Suitability_
PoS was given a suitability score of 'partially', and could potentially be implemented for
the IoT. PoS has more desirable qualities than PoW for deployment in the IoT, as the energy
and processor intensive mining required for consensus has been removed. However, PoS
still has challenges for IoT usage:
1. Cryptography such as VRFs and the BLS signature scheme can be processor intensive,
and may become problematic when working with resource constrained devices,
especially as the network grows. Ethereum (PoS) (Figure 7) and Dfinity (Figure 6) are
well known to use these cryptographic functions in their consensus implementations.
2. PoS Blockchains are based on monetary concepts that involve a currency, which may
not be suitable for some IoT applications. Arbitrary tokens could replace a monetary
system, but this still comes with economic issues (who generates the tokens, how are
the tokens distributed, is the token supply capped, etc.).
3. The transaction throughput of the network may not be adequate depending on the
IoT use case; PoS TPS varies dramatically, from just over 100 TPS to well over
1000 TPS, as shown in Table 1. Selecting a particular PoS implementation therefore becomes
important, as different PoS implementations have varying performance capabilities.
Due to these issues, PoS may be suitable for the IoT, but only under particular circumstances, such as where a more decentralised network is required.
_7.4. Hybrid Consensus (PoW/PoS) Suitability_
PoW/PoS hybrid consensus mechanisms are an alternative solution for the 51% attack problem
and the nothing at stake problem, using a novel two-mechanism design. Unfortunately, having
PoW as a prerequisite brings all the same problems faced by PoW regarding IoT
suitability. Theoretically, the PoW portion of this hybrid consensus mechanism could be
done offsite by powerful ASICs, with the PoS portion then delegated to the IoT devices.
We believe a configuration like this would be excessively complicated for consensus in the
IoT, and we do not recommend hybrid PoW/PoS for the IoT.
_7.5. Proof of Capacity Suitability_
PoC is another novel consensus mechanism that uses the concept of hard drive capacity
to create consensus. Local storage on IoT devices is limited and would not be available for
use in consensus. PoC is not suitable for use in the IoT.
_7.6. Proof of Importance Suitability_
This consensus mechanism takes concepts from PoS and combines them with an
importance mechanism. A node's importance score, combined with the node's total amount
of staked coins, determines its chance of being selected to mint a block. PoI
satisfies many criteria in Table 1, which makes it well suited for the IoT. Adoption of PoI
in Blockchains is limited, as NEM is the only Blockchain to implement the PoI
consensus mechanism, as seen on CoinMarketCap [68]. PoI is referenced in the literature,
in surveys such as [16,69], which give a general description of the flow of the consensus mechanism. However, we are not aware of any papers or references that validate these
claims and support the figures in Table 1. Until work is published that independently
verifies the characteristics of PoI, we can only partially recommend the PoI consensus
mechanism for the IoT.
_7.7. PoSCS Suitability_
PoSCS is a consensus mechanism that also uses elements of PoS. PoSCS uses a staking
system, but replaces the monetary system with a reputation system based on how a node
interacts with the supply chain. PoSCS also relies on the cloud to archive parts of the
Blockchain that are not required to be stored on IoT devices, making it one of the few consensus
mechanisms discussed here that addresses this problem. Based on the results of PoSCS [44],
the transaction throughput may not be high enough for some IoT use cases, and the reliance
on the cloud may not be suitable for some implementations. For these reasons, we think
PoSCS is partially suitable, and may be usable for some IoT implementations.
_7.8. CBPoW Suitability_
CBPoW is one of the two consensus mechanisms discussed here that adapts PoW for the IoT. CBPoW dynamically adjusts the PoW mining difficulty for nodes, and actively punishes misbehaving nodes
by making their PoW mining difficulty very high, to the point where mining is infeasible.
CBPoW also replaces a traditional Blockchain with a DAG, which has the ability to prune
itself to reduce the size of the Blockchain stored locally on the device. This consensus
mechanism performs well according to Table 1, with a throughput of 500 TPS. CBPoW has
one central point of failure in current implementations (the coordinator, in the case of the
IOTA DAG [70]), as shown in Table 2. CBPoW has a number of characteristics that are suitable
for the IoT, including a DAG Blockchain structure that can reduce its size, a
lightweight PoW consensus mechanism, and a moderately high transaction throughput. Due
to these three features, which are favourable for IoT devices, we have labeled CBPoW as suitable in
Table 3.
_7.9. PoEWAL Suitability_
The PoEWAL consensus mechanism is also based on a modified version of PoW. In PoEWAL, the PoW mining process is time limited: devices have a short amount of time to mine a block,
significantly reducing power and processor usage. PoEWAL also makes the assumption
that all devices have synchronised clocks, which is probably a reasonable assumption for IoT
devices on a wireless sensor network collecting time series data. This requirement may
be unsuitable for some implementations, however, as IoT devices use commodity parts and are generally
more susceptible to drifting out of sync [71]. PoEWAL has two limitations: low transaction
throughput and reliance on synchronised clocks. Due to these limitations, we have labeled
PoEWAL as only partially suitable in Table 3.
_7.10. Microchain Suitability_
Microchain has adapted concepts from PoS and made them more suitable for IoT
usage. Nodes use credit amounts rather than a monetary system, and Microchain has
a block selection process to accommodate this change from standard PoS. Microchain also uses VRF
cryptography to power its consensus mechanism, which may incur high processor usage
on IoT devices as the network grows. Microchain also makes some assumptions about
network environments (synchronous networks, as noted in Table 2), which makes this consensus
mechanism unusable for public Blockchains. Microchain has been shown to have reasonable
performance, managing 230 TPS in the authors' tests. Microchain has a number of features that
make it suitable for IoT devices:
- A PoS implementation not dependent on a monetary system
- A moderately fast transaction throughput
- Processor usage which remains low on controlled private networks
Due to these features, which favour IoT devices, Microchain has been labeled as suitable
in Table 3.
**8. Conclusions**
In this paper, we discussed resource constrained IoT devices and the limitations of
current consensus mechanisms for the IoT. We started the discussion by defining
criteria to rank the consensus mechanisms against, including speed, security, decentralisation, and others. The discussion continued by individually describing a number of
consensus mechanisms and outlining their general flow through a series of figures. We
described well-known consensus mechanisms such as PoW and PoS, but also discussed
four consensus mechanisms purpose built for the IoT (CBPoW, Microchain, PoEWAL,
and PoSCS). These IoT focused consensus mechanisms make modifications to the
existing PoW and PoS consensus mechanisms, removing the need for energy inefficient mining and
monetary systems, respectively.
In our analysis, we discussed the advantages and disadvantages of each consensus mechanism, and their suitability for the IoT. The results of our analysis show that Microchain
and CBPoW are suitable for the IoT. Microchain shows that it has suitable performance for
the IoT in private environments, and CBPoW addresses problems around local Blockchain
storage on IoT devices. Microchain and CBPoW have therefore been labeled suitable in our analysis.
Four other consensus mechanisms were labeled partially suitable: PoEWAL, PoSCS, PoI, and PoS. Some of the partially recommended consensus mechanisms
addressed issues with monetary systems, computational overhead, and storage requirements. However, other issues were introduced, such as cloud reliance, synchronised
clocks, and poor or unverified performance claims.
_Future Work_
The current trend in consensus research, in academia and in industry, has been
focused on making mechanisms that are lightweight enough for low powered devices,
and on reducing the carbon footprint of the mining process. Our future research will look
more closely into novel consensus mechanisms, specifically those that can meet the unique
requirements of IoT devices, and mechanisms that can be deployed to meet business
specific goals in both private, and potentially public, operating environments.
**Author Contributions: Conceptualization, Z.A. and N.C.; investigation, Z.A.; writing—original draft**
preparation, Z.A.; writing—review and editing, N.C., R.A., W.H.; supervision, N.C., R.A. All authors
have read and agreed to the published version of the manuscript.
**Funding: This work was supported by the SmartSat C.R.C., whose activities are funded by the**
Australian Government’s C.R.C. Program.
**Acknowledgments: The authors would like to thank SmartSat CRC for providing scholarship fund-**
ing, and BAE Systems Australia, our industry collaborator.
**Conflicts of Interest: The authors declare no conflict of interest.**
**References**
1. Singh, S.; Sharma, P.K.; Yoon, B.; Shojafar, M.; Cho, G.H.; Ra, I.H. Convergence of blockchain and artificial intelligence in IoT
[network for the sustainable smart city. Sustain. Cities Soc. 2020, 63, 102364. [CrossRef]](http://doi.org/10.1016/j.scs.2020.102364)
2. [Min, H. Blockchain technology for enhancing supply chain resilience. Bus. Horizons 2019, 62, 35–45. [CrossRef]](http://dx.doi.org/10.1016/j.bushor.2018.08.012)
3. Dujak, D.; Sajter, D. Blockchain Applications in Supply Chain. In SMART Supply Network; Springer: Berlin/Heidelberg, Germany,
2019; pp. 21–46.
4. Sternberg, H.S.; Hofmann, E.; Roeck, D. The Struggle is Real: Insights from a Supply Chain Blockchain Case. J. Bus. Logist. 2021,
_[42, 71–87. [CrossRef]](http://dx.doi.org/10.1111/jbl.12240)_
5. Casado-Vara, R.; Prieto, J.; Prieta, F.D.L.; Corchado, J.M. How blockchain improves the supply chain: Case study alimentary
[supply chain. Procedia Comput. Sci. 2018, 134, 393–398. [CrossRef]](http://dx.doi.org/10.1016/j.procs.2018.07.193)
6. Werner, R.; Lawrenz, S.; Rausch, A. Blockchain Analysis Tool of a Cryptocurrency. In PervasiveHealth: Pervasive Computing
_Technologies for Healthcare; Springer: Berlin/Heidelberg, Germany, 2020; pp. 80–84._
7. Hopkins, A.L.; Lala, J.H.; Smith, T.B. The Evolution of Fault Tolerant Computing at the Charles Stark Draper Laboratory, 1955–85;
Springer: Berlin/Heidelberg, Germany, 1987; pp. 121–140.
-----
_Electronics 2022, 11, 2694_ 21 of 23
8. Corbett, J.C.; Dean, J.; Epstein, M.; Fikes, A.; Frost, C.; Furman, J.J.; Ghemawat, S.; Gubarev, A.; Heiser, C.; Hochschild, P.; et al.
[Spanner: Google’s Globally-Distributed Database. ACM Trans. Comput. Syst. (TOCS) 2013, 31, 1–21. [CrossRef]](http://dx.doi.org/10.1145/2491245)
9. Gupta, A.; Yang, F.; Govig, J.; Kirsch, A.; Chan, K.; Lai, K.; Wu, S.; Dhoot, S.; Kumar, A.; Agiwal, A.; et al. Mesa: Geo-Replicated,
[Near Real-Time, Scalable Data Warehousing. Proc. Vldb Endow. 2014, 7, 1259–1270. [CrossRef]](http://dx.doi.org/10.14778/2732977.2732999)
10. Nakamoto, S. Bitcoin: A Peer-to-Peer Electronic Cash System; Portal Unicamp: Campinas, Brazil, 2008.
11. [Lamport, L.; Shostak, R.; Pease, M. The Byzantine generals problem. ACM Trans. Program. Lang. Syst. 1982, 4, 382–401. [CrossRef]](http://dx.doi.org/10.1145/357172.357176)
12. Buchman, E. Tendermint: Byzantine Fault Tolerance in the Age of Blockchains. Master's Thesis, The University of Guelph, Guelph,
ON, Canada, 2016.
13. Helliar, C.V.; Crawford, L.; Rocca, L.; Teodori, C.; Veneziani, M. Permissionless and permissioned blockchain diffusion. Int. J. Inf.
_[Manag. 2020, 54, 102136. [CrossRef]](http://dx.doi.org/10.1016/j.ijinfomgt.2020.102136)_
14. Polge, J.; Robert, J.; Traon, Y.L. Permissioned blockchain frameworks in the industry: A comparison. ICT Express 2021, 7, 229–233.
[[CrossRef]](http://dx.doi.org/10.1016/j.icte.2020.09.002)
15. Yang, R.; Wakefield, R.; Lyu, S.; Jayasuriya, S.; Han, F.; Yi, X.; Yang, X.; Amarasinghe, G.; Chen, S. Public and private blockchain
[in construction business process and information integration. Autom. Constr. 2020, 118, 103276. [CrossRef]](http://dx.doi.org/10.1016/j.autcon.2020.103276)
16. Salimitari, M.; Chatterjee, M.; Fallah, Y.P. A survey on consensus methods in blockchain for resource-constrained IoT networks.
_[Internet Things 2020, 11, 100212. [CrossRef]](http://dx.doi.org/10.1016/j.iot.2020.100212)_
17. Conti, M.; Sandeep, K.E.; Lal, C.; Ruj, S. A survey on security and privacy issues of bitcoin. IEEE Commun. Surv. Tutorials 2018,
_[20, 3416–3452. [CrossRef]](http://dx.doi.org/10.1109/COMST.2018.2842460)_
18. Anceaume, E.; Lajoie-Mazenc, T.; Ludinard, R.; Sericola, B. Safety analysis of Bitcoin improvement proposals. In Proceedings
of the 2016 IEEE 15th International Symposium on Network Computing and Applications, NCA 2016, Boston, MA, USA,
31 October–2 November 2016; pp. 318–325.
19. Decker, C.; Seidel, J.; Wattenhofer, R. Bitcoin meets strong consistency. In Proceedings of the 17th International Conference on
Distributed Computing and Networking, Singapore, 4–7 January 2016.
20. Shi, E. Analysis of deterministic longest-chain protocols. In Proceedings of the 2019 IEEE 32nd Computer Security Foundations
Symposium (CSF), Hoboken, NJ, USA, 25–28 June 2019; pp. 122–135.
21. Antonopoulos, A.M.; Wood, G. Mastering Ethereum: Building Smart Contracts and Dapps; O’Reilly Media: Sebastopol, CA, USA,
2018.
22. Alwen, J.; Chen, B.; Pietrzak, K.; Reyzin, L.; Tessaro, S. Scrypt is maximally memory-hard. In Lecture Notes in Computer Science
_(Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Berlin/Heidelberg, Germany,_
2017; Volume 10212 LNCS, pp. 33–62.
23. Jamtel, E.L. Swimming in the Monero pools. In Proceedings of the 11th International Conference on IT Security Incident
Management and IT Forensics, IMF 2018, Hamburg, Germany, 7–9 May 2018; pp. 110–114.
24. Huang, J.; Kong, L.; Chen, G.; Wu, M.Y.; Liu, X.; Zeng, P. Towards secure industrial iot: Blockchain system with credit-based
[consensus mechanism. IEEE Trans. Ind. Inform. 2019, 15, 3680–3689. [CrossRef]](http://dx.doi.org/10.1109/TII.2019.2903342)
25. Raghav; Andola, N.; Venkatesan, S.; Verma, S. PoEWAL: A lightweight consensus mechanism for blockchain in IoT. Pervasive
_[Mob. Comput. 2020, 69, 101291. [CrossRef]](http://dx.doi.org/10.1016/j.pmcj.2020.101291)_
26. [King, S.; Nadal, S. Ppcoin: Peer-to-Peer Crypto-Currency with Proof-of-Stake. Available online: https://bitcoin.peryaudo.org/](https://bitcoin.peryaudo.org/vendor/peercoin-paper.pdf)
[vendor/peercoin-paper.pdf (accessed on 20 August 2022).](https://bitcoin.peryaudo.org/vendor/peercoin-paper.pdf)
27. [Zhang, S.; Lee, J.H. Analysis of the main consensus protocols of blockchain. ICT Express 2020, 6, 93–97. [CrossRef]](http://dx.doi.org/10.1016/j.icte.2019.08.001)
28. Gilad, Y.; Hemo, R.; Micali, S.; Vlachos, G.; Zeldovich, N. Algorand: Scaling Byzantine Agreements for Cryptocurrencies. In
Proceedings of the 26th Symposium on Operating Systems Principles, Shanghai, China, 28–31 October 2017.
29. Badertscher, C.; Gaži, P.; Kiayias, A.; Russell, A.; Zikas, V. Ouroboros genesis: Composable proof-of-stake blockchains with
dynamic availability. In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, Toronto,
ON, Canada, 15–19 October 2018; pp. 913–930.
30. Galindo, D.; Liu, J.; Ordean, M.; Wong, J.M. Fully Distributed Verifiable Random Functions and their Application to Decentralised
Random Beacons. IACR Cryptol. EPrint Arch. 2020, 2020, 96.
31. Hanke, T.; Movahedi, M.; Williams, D. DFINITY Technology Overview Series, Consensus System. arXiv 2018, arXiv:1805.04548.
32. Boneh, D.; Lynn, B.; Shacham, H. Short Signatures from the Weil Pairing. In Lecture Notes in Computer Science; Springer:
Berlin/Heidelberg, Germany, 2001; Volume 2248, pp. 514–532.
33. Buterin, V. A Next-Generation Smart Contract and Decentralized Application Platform; Ethereum Foundation: Bern, Switzerland, 2014.
34. Wood, G. Ethereum: A Secure Decentralised Generalised Transaction Ledger; Ethereum Foundation: Bern, Switzerland, 2022.
35. [Angeris, G.; Kao, H.T.; Chiang, R.; Noyes, C.; Chitra, T. An analysis of Uniswap markets. Cryptoecon. Syst. 2019, 1. [CrossRef]](http://dx.doi.org/10.21428/58320208.c9738e64)
36. Schär, F. Decentralized Finance: On Blockchain- and Smart Contract-Based Financial Markets. Fed. Reserve Bank St. Louis Rev.
**[2021, 103, 153–174. [CrossRef]](http://dx.doi.org/10.20955/r.103.153-74)**
37. Scholten, O.J.; Gerard, N.; Hughes, J.; Deterding, S.; Drachen, A.; Walker, J.A.; Zendle, D. Ethereum Crypto-Games: Mechanics,
Prevalence and Gambling Similarities. In Proceedings of the Annual Symposium on Computer-Human Interaction in Play,
Barcelona, Spain, 22–25 October 2019; Association for Computing Machinery: New York, NY, USA, 2019.
38. Peterson, J.; Krug, J.; Zoltu, M.; Williams, A.K.; Alexander, S. Augur: A decentralized oracle and prediction market platform (v2.
0). arXiv 2021, arXiv:1501.01042.
-----
_Electronics 2022, 11, 2694_ 22 of 23
39. Mohanty, D. Ethereum: What Lies Ahead. In Ethereum for Architects and Developers; Apress: New York, NY, USA, 2018; pp. 245–258.
40. Buterin, V.; Hernandez, D.; Kamphefner, T.; Pham, K.; Qiao, Z.; Ryan, D.; Sin, J.; Wang, Y.; Zhang, Y.X. Combining GHOST and
Casper. arXiv 2020, arXiv:2003.03052.
41. Buterin, V.; Griffith, V. Casper the Friendly Finality Gadget. arXiv 2017, arXiv:1710.09437.
42. [Beekhuizen, C. Validated, Staking on Eth2: #2—Two Ghosts in a Trench Coat. Available online: https://blog.ethereum.org/2020](https://blog.ethereum.org/2020/02/12/validated-staking-on-eth2-2-two-ghosts-in-a-trench-coat)
[/02/12/validated-staking-on-eth2-2-two-ghosts-in-a-trench-coat (accessed on 20 August 2022).](https://blog.ethereum.org/2020/02/12/validated-staking-on-eth2-2-two-ghosts-in-a-trench-coat)
43. Xu, R.; Chen, Y.; Blasch, E.; Chen, G. Microchain: A Hybrid Consensus Mechanism for Lightweight Distributed Ledger for IoT.
_arXiv 2019, arXiv:1909.10948._
44. Tsang, Y.P.; Choy, K.L.; Wu, C.H.; Ho, G.T.S.; Lam, H.Y. Blockchain-Driven IoT for Food Traceability with an Integrated Consensus
[Mechanism. IEEE Access 2019, 7, 129000–129017. [CrossRef]](http://dx.doi.org/10.1109/ACCESS.2019.2940227)
45. Buchman, E.; Kwon, J.; Milosevic, Z. The latest gossip on BFT consensus. arXiv 2018, arXiv:1807.04938.
46. Andrey, A.; Petr, C. Review of Existing Consensus Algorithms Blockchain. In Proceedings of the 2019 IEEE International
Conference Quality Management, Transport and Information Security, Information Technologies IT and QM and IS 2019, Sochi,
Russia, 23–27 September 2019; pp. 124–127.
47. Wen, Y.; Lu, F.; Liu, Y.; Cong, P.; Huang, X. Blockchain Consensus Mechanisms and Their Applications in IoT: A Literature Survey.
In Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics);
Springer: Berlin/Heidelberg, Germany, 2020; Volume 12454 LNCS, pp. 564–579.
48. [Nem Foundation. NEM Technical Reference. Available online: https://www.cryptoground.com/nem-white-paper (accessed on](https://www.cryptoground.com/nem-white-paper)
20 August 2022).
49. [Jepson, C. DTB001: Decred Technical Brief. Available online: https://decred.org/dtb001.pdf (accessed on 20 August 2022).](https://decred.org/dtb001.pdf)
50. [Viglione, R.; Versluis, R.; Lippencott, J. Zen White Paper. Available online: https://www.horizen.io/assets/files/Zen-White-](https://www.horizen.io/assets/files/Zen-White-Paper.pdf)
[Paper.pdf (accessed on 20 August 2022).](https://www.horizen.io/assets/files/Zen-White-Paper.pdf)
51. Garoffolo, A.; Viglione, R. Sidechains: Decoupled Consensus between Chains. arXiv 2018, arXiv:1812.05441.
52. Bhaskar, N.D.; Chuen, D.L.K. Bitcoin Mining Technology. In Handbook of Digital Currency: Bitcoin, Innovation, Financial Instruments,
_and Big Data; Academic Press: Cambridge, MA, USA, 2015; pp. 45–65._
53. Gobel, J.; Krzesinski, A.E. Increased block size and Bitcoin blockchain dynamics. In Proceedings of the 2017 27th International
Telecommunication Networks and Applications Conference, ITNAC 2017, Melbourne, Australia, 22–24 November 2017; pp. 1–6.
54. Boshuis, S.; Braam, T.; Marchena, A.P.; Jansen, S. The Effect of Generic Strategies on Software Ecosystem Health: The Case of
Cryptocurrency Ecosystems. In Proceedings of the 2018 IEEE/ACM 1st International Workshop on Software Health (SoHeal),
Gothenburg, Sweden, 27 May 2018.
55. [Lee, J.H. Rise of Anonymous Cryptocurrencies: Brief Introduction. IEEE Consum. Electron. Mag. 2019, 8, 20–25. [CrossRef]](http://dx.doi.org/10.1109/MCE.2019.2923927)
56. Bach, L.M.; Mihaljevic, B.; Zagar, M. Comparative analysis of blockchain consensus algorithms. In Proceedings of the 2018 41st
International Convention on Information and Communication Technology, Electronics and Microelectronics, Opatija, Croatia,
21–25 May 2018; pp. 1545–1550.
57. Esgin, M.F.; Kuchta, V.; Sakzad, A.; Steinfeld, R.; Zhang, Z.; Sun, S.; Chu, S. Practical Post-quantum Few-Time Verifiable Random
Function with Applications to Algorand. In Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial
_Intelligence and Lecture Notes in Bioinformatics); Springer: Berlin/Heidelberg, Germany, 2021; Volume 12675 LNCS, pp. 560–578._
58. [Kwon, J.; Buchman, E. Cosmos Whitepaper. Available online: https://v1.cosmos.network/resources/whitepaper (accessed on](https://v1.cosmos.network/resources/whitepaper)
20 August 2022).
59. Baudlet, M.; Fall, D.; Taenaka, Y.; Kadobayashi, Y. The Best of Both Worlds: A New Composite Framework Leveraging PoS
and PoW for Blockchain Security and Governance. In Proceedings of the 2020 2nd Conference on Blockchain Research and
Applications for Innovative Networks and Services, BRAINS 2020, Paris, France, 28–30 September 2020; pp. 17–24.
60. [Gauld, S.; von Ancoina, F.; Stadler, R. The Burst Dymaxion. Available online: https://www.allcryptowhitepapers.com/wp-](https://www.allcryptowhitepapers.com/wp-content/uploads/2018/05/Burst-Coin-whitepaper.pdf)
[content/uploads/2018/05/Burst-Coin-whitepaper.pdf (accessed on 20 August 2022).](https://www.allcryptowhitepapers.com/wp-content/uploads/2018/05/Burst-Coin-whitepaper.pdf)
61. [Buterin, V. Why Sharding Is Great: Demystifying the Technical Properties. Available online: https://vitalik.ca/general/2021/04/](https://vitalik.ca/general/2021/04/07/sharding.html)
[07/sharding.html (accessed on 20 August 2022).](https://vitalik.ca/general/2021/04/07/sharding.html)
62. Zamani, M.; Movahedi, M.; Raykova, M. Rapidchain: Scaling blockchain via full sharding. In Proceedings of the 2018 ACM
SIGSAC Conference on Computer and Communications Security, Toronto, ON, Canada, 15–19 October 2018; pp. 931–948.
63. Kiayias, A.; Panagiotakos, G. Speed-Security Tradeoffs in Blockchain Protocols. IACR Cryptol. EPrint Arch. 2015, 2015, 1019.
64. Xu, B.; Luthra, D.; Cole, Z.; Blakely, N. EOS: An Architectural, Performance, and Economic Analysis. Available online:
[https://blog.bitmex.com/wp-content/uploads/2018/11/eos-test-report.pdf (accessed on 20 August 2022).](https://blog.bitmex.com/wp-content/uploads/2018/11/eos-test-report.pdf)
65. Li, J.; Li, N.; Peng, J.; Cui, H.; Wu, Z. Energy consumption of cryptocurrency mining: A study of electricity consumption in
[mining cryptocurrencies. Energy 2019, 168, 160–168. [CrossRef]](http://dx.doi.org/10.1016/j.energy.2018.11.046)
66. Al-Saqaf, W.; Seidler, N. Blockchain technology for social impact: Opportunities and challenges ahead. J. Cyber Policy 2017,
_[2, 338–354. [CrossRef]](http://dx.doi.org/10.1080/23738871.2017.1400084)_
67. Tanana, D. Avalanche blockchain protocol for distributed computing security. In Proceedings of the 2019 IEEE International
Black Sea Conference on Communications and Networking (BlackSeaCom), Sochi, Russia, 3–6 June 2019.
68. [CoinMarketCap. Top POI Tokens by Market Capitalization. Available online: https://coinmarketcap.com/view/poi/ (accessed](https://coinmarketcap.com/view/poi/)
on 20 August 2022).
-----
_Electronics 2022, 11, 2694_ 23 of 23
69. Zhang, P.; Schmidt, D.C.; White, J.; Dubey, A. Chapter Seven—Consensus mechanisms and information security technologies.
_Adv. Comput. 2019, 15, 181–209._
70. Silvano, W.F.; Marcelino, R. Iota Tangle: A cryptocurrency to communicate Internet-of-Things data. Future Gener. Comput. Syst.
**[2020, 112, 307–319. [CrossRef]](http://dx.doi.org/10.1016/j.future.2020.05.047)**
71. Tirado-Andrés, F.; Rozas, A.; Araujo, A. A Methodology for Choosing Time Synchronization Strategies for Wireless IoT Networks.
_[Sensors 2019, 19, 3476. [CrossRef] [PubMed]](http://dx.doi.org/10.3390/s19163476)_
# Policies and Security Aspects For Distributed Scientific Laboratories
Nicoletta Dessì, Maria Grazia Fugini, R. A. Balachandar
**Abstract Web Services and the Grid allow distributed research teams to form dy-**
namic, multi-institutional virtual organizations sharing high performance computing
resources, large scale data sets and instruments for solving computationally intensive scientific applications, thereby forming Virtual Laboratories. This paper aims
at exploring security issues of such distributed scientific laboratories and tries to
extend security mechanisms by defining a general approach in which a security policy is used both to provide and regulate access to scientific services. In particular,
we consider how security policies specified in XACML and WS-Policy can support
the requirements of secure data and resource sharing in a scientific experiment. A
framework is given where security policies are stated by the different participants in
the experiment, providing a Policy Management system. A prototype implementation of the proposed framework is presented.
## 1 Introduction
Web Services (WS) and the Grid have revolutionized the capacity to share information and services across organizations that execute scientific experiments in a wide
range of disciplines in science and engineering (including biology, astronomy, high-energy physics, and so on) by allowing geographically distributed teams to form
dynamic, multi-institutional virtual organizations whose members use shared community tools and private resources to collaborate on solutions to common problems.
Since WS have been recognized as the logical architecture for the organization of
Nicoletta Dessì and R. A. Balachandar
Dipartimento di Matematica e Informatica, Università degli Studi di Cagliari, Via Ospedale 72,
09124 Cagliari, Italy, e-mail: dessi@unica.it, balsonra@yahoo.co.in
Maria Grazia Fugini
Dipartimento di Elettronica e Informazione, Politecnico di Milano, piazza L. da Vinci 32, 20133
Milano, Italy, e-mail: fugini@elet.polimi.it
-----
Grid services, they can enable the formation of Virtual Laboratories, which are not
simply concerned with file exchange, but also with direct access to computers, software, and other resources, as required by a dynamic collaboration paradigm among
organizations [6].
As the community of researchers begins to use Virtual Laboratories, exploiting Grid
capabilities [16], the definition of secure collaborative environments for the next
generation of the science process will require functions that extend common security mechanisms such as certification, authorization, and cryptography.
These new functions include, for example, the definition and the enforcement of
policies in place for single Virtual Laboratories in accordance with dynamically
formed Virtual Organizations (VOs), and the integration of different local policies,
in order to make the resources available to the VO members, who deploy their own
services in the VO environment. These considerations motivate the approach that
we propose in this paper, whose aim is to explore the security of environments supporting the execution of scientific experiments in a Virtual Laboratory. Specifically,
the paper elaborates on extending usual control access mechanism by defining a
general approach in which security policies are expressed and enforced to regulate
_resource sharing and service provisioning. In detail, the paper proposes a reference_
framework for secure collaboration where security policies can be formulated in
order to regulate access to scientific services and to their provisioning. Since each
Virtual Laboratory has a set of local security policies, we examine how these polices
can be expressed and enforced such that the allocation process of resources to a distributed experiment is made aware of security implications. As a sample application
of the proposed approach, some implementation hints are presented for distributed
experiments that incorporate security policies. This paper is structured as follows.
Section 2 reviews related work. Section 3 addresses requirements to be considered
when security policies for experiments are defined. Section 4 presents our reference framework for Virtual Laboratories, with emphasis on security issues. Section
5 details our approach to Policy Management, giving a component architecture and
providing implementation aspects. Finally, Section 6 contains the conclusions.
## 2 Related Work
A Virtual Laboratory for e-Science can be viewed as a cooperative system where
WS are dynamically composed in complex processes (experiments) and executed
at different organizations. WS security [19] is assuming more and more relevance
since WS handle users’ private information. WS-Trust [9] describes a framework for
managing, assessing and establishing trust relationships for WS secure interoperation. In WS-based systems, security is often enforced through security services [20],
for which new specifications have been developed to embed such security services
in the typical distributed and WS-based elements, considering also security policies
[18]. Examples are the SOAP header [19], the Security Assertion Markup Language
(SAML) [12], XML Signature [4] and XML Encryption [14]. WS-Security [3] ap
-----
plies XML security technologies to SOAP messages with XML elements. Based
on SOAP e-Services, [8] proposes an access control system, while XACML (the eXtensible Access Control Markup Language) [2] allows fine-grained access control policies
to be expressed in XML. However, all these mechanisms prove useful in specifying
specific aspects of security, but need to be selected first, and integrated later, into a
uniform framework addressing all issues regarding e-collaboration.
Policies, as an increasingly popular approach to dynamic adjustability of applica
tions, require an appropriate policy representation and the design and development
of a policy management framework. Considering that security policies should be
part of WS representations, [19] and [10] specify the Web Services Policy Framework (WS-Policy). Policy-based management is supported by standards organizations, such as the Internet Engineering Task Force (IETF). The IETF framework
[13] defines a policy-based management architecture, as the basis for other efforts
at designing policy architectures.
Existing technology for the Grid (e.g., see [11]) allows scientists to develop
project results and to deploy them for ongoing operational use, but only within a
restricted community. However, security is still implemented as a separate subsystem of the Grid, making the allocation decisions oblivious of the security implications. Lack of security [20] may adversely impact future investment in e-Science
capabilities. The e-Science Core Programme initiated a Security Taskforce (STF)
[http://www.nesc.ac.uk/teams/stf/], developing a Security Policy for e-Science
(http://www.nesc.ac.uk/teams/stf/links/), while an authorization model for multipolicies is presented in [17]. An approach combining Grid and WS for e-Science
is presented in [5, 1].
Authorization in distributed workflows executed under their own distinctive access control policies and models has been tackled in [7]; security is handled through alarms and exceptions. In [15], access control for workflows is described with explicit attention to cooperation. However, decentralization of workflow execution is not explicitly addressed, nor is the handling of security policies specifically tackled.
## 3 Basic Security Aspects for Virtual Laboratories
At least for certain domains, scientific experiments are cooperative processes that
operate on, and manipulate, data sources and physical devices, whose tasks can be
decomposed and made executable as (granular) services individually. Workflows
express appropriate modeling of the experiment as a set of components that need
to be mapped to distinct services and support open, scalable, and cooperative environments for scientific experiments [5]. We denote such scientific environments as
Virtual Laboratories (VLs) or eLabs.
Each VL node (or eNode) is responsible for offering services and for setting the
rules under which the service can be accessed by other eNodes through service
invocation. Usually, the execution of an experiment involves multiple eNodes interacting to offer or to ask for services. Services correspond to different functionalities
-----
that encapsulate problem solving and data processing capabilities. Services can be
designed to use VO resources, while the network infrastructure promotes the
exploitation of distributed resources in a transparent manner. This offers good opportunities for achieving an open, scalable and cooperative environment.
We classify services in:
- Vertical services, that include components for a range of scientific domains, in
cluding various software applications.
- Horizontal services, that provide adaptive user interfaces, plug-and-play collab
orative work components, interoperability functions, transaction co-ordination,
and security.
Vertical services expose interfaces that convey information about specific application functions. Their interfaces are implemented from within the component embedding them and are assembled in a workflow that globally expresses the experiment
_model. Horizontal services allow for easier, more dynamic and automated eNode_
integration and for more precise run-time integration of remote services. They are
designed to facilitate collaboration.
A VO member plans a complex scientific experiment by repeatedly choosing a sequence of services and including these services in a workflow. The member can wait for the
fulfilment of a specific workflow and/or choose the next service to invoke on the basis of the returned information. The workflow execution may require the collaboration of various services spread over different VLs whose owners must be confident
that users accessing their software or data respect fine-grained access restrictions
controlling the varying levels of access to the resource that a user may be granted. For example, a service may require commodity processors or may have a limited
choice of input data (possibly requiring a specific file-format or database access).
Similarly, a scientist executing a service on a remote eNode must trust the administrator of the eNode to deliver a timely and accurate result (and possibly proprietary
data sent to the site).
This requires the extension of security aspects related to resource sharing to those
related to service sharing.
However, security is currently unsupported in an integrated way by any of the available WS technologies, nor is a standard method to enforce Grid security defined. Moreover, security policy requirements have to be considered. The approach
of this paper regards the definition of the basic aspects to be tackled when extending
WS and Grid security infrastructures to VLs environments.
## 4 A Reference Framework for Virtual Laboratories
Based on what has been illustrated so far, we now introduce some basic modeling elements
for the context of VLs security, by defining as an actor each relevant subject capable
of executing experiments supported by networked resources, which we consider as
_objects. In detail:_
-----
- Subjects execute activities and request access to information, services, and tools.
Among subjects we mention the remote user of a WS/Grid enabled application,
which would generally be composed of a large, distributed and dynamic population of resources. Subjects may also include organizations, servers and applications acting on behalf of users. In this paper, we consider only trusted groups
which are not requested to exchange security tokens or credentials during a scientific experiment, since they know and trust each other, and received authentication and authorization to access resources when first joining the VL.
- Objects are the targets of laboratory activities. Services are considered as objects.
Methods are also regarded as objects, which can be grouped together to form
experiments. Fine-grained access control would thus be required over input and
output parameters, methods, WS and groupings among WS (to form a process)
and among WS and other applications (e.g., legacy software or device control
software). Other objects are the server hosting the WS, an IP address, or the URI
of a WS. Internal data, kept in a database and other objects accessed by the WS,
should also be considered as part of the list of objects to be managed.
- Actions that can be performed are various, depending on the type of subject issu
ing a request. Remote users or applications would generally be allowed to execute
a WS method, or access a server hosting a number of WS objects or an application. Rights corresponding to actions such as place experiment, or view results,
update data could be granted.
The identification of subjects and objects in a scientific environment defines a framework for secure collaboration based on the idea of integrating components that control the workflow execution through a set of specific security components. Such
a framework, depicted in Fig. 1, comprises components (diamonds), their specific activities (ovals) and specific security aspects (double-border boxes).
The framework elements are as follows:
**Process Manager - Each process manager supervises the execution of the work-**
flow modeling the scientific experiment. It is responsible for the transformation of
the abstract workflow into a concrete plan whose components are the executions of
specific tasks/tools and/or actual accesses to data repositories. This planning process
can be performed in cooperation with a service broker, acting as a mediator, in that
it supplies, at run time, the description and location of useful resources and services.
**Task Manager - This is in charge of executing a particular set of activities which are**
instances of the workflow plan. It is also responsible for collaborating with others
components for managing the service execution. In fact, execution involves contacting data sources and components and requesting the appropriate execution steps.
**Service Manager - This supervises the successful completion of each task request.**
In case of failure, the service manager takes appropriate repair actions. Repair may
involve either restarting the task execution or re-entering the configuration component in order to explore alternative ways of instantiating the task execution to avoid
service failures, e.g., due to a security attack or service misuse. In that case, the service flow can be rerouted to other services able to provide substitute functionalities,
thus allowing redo or retry operations on tasks that were abnormally ended before
-----
rerouting. Moreover, this component waits until completion of the task request, and
notifies to the task manager the end of the activity.
**Policy Manager - This component supports and updates the resource provision pol-**
icy that regulates the flow of information through the applications and the network,
and across organizational boundaries, to meet the security requirements established
by members who are in charge of deploying their own services under their own policies that assert privileges and/or constraints on resource and service utilization.
**Fig. 1 Security Aspects and Related Components of a Virtual Laboratory**
Two major concerns in this framework are: structural and dynamic concerns,
and security concerns. i) Structural and dynamic concerns deal with the execution
of a scientific experiment in a VL and incorporate controls on vertical services. ii)
_Security concerns refer to horizontal services supporting privileges and constraints_
on the use of VL resources, and may differ from user to user for each individual service.
The sequel of the paper presents how these policies can be implemented and how
fine-grained constraints can be defined in the VL to gain restricted different access
levels to services according to a policy that is fully decided by software owners
themselves.
## 5 Policy Management
Policy management in VLs, as the ability to support an access control policy in accordance with the resource access control goals, should support dynamically changing decentralized policies, policy administration and integrated policy enforcement.
A typical policy management system would include two components, namely the
_Policy Enforcement Point (PEP), and the Policy Decision Point (PDP), as shown in_
-----
Fig. 2. The PEP is the logical entity, or location within a server, responsible for enforcing policies with respect to authentication of subscribers, authorization to access
and services, accounting and mobility, and other requirements. The PEP is used to
ensure that the policy is respected before the user is granted access to the WS resource.
The PDP is a location where an access decision is formed, as a result of evaluating the user’s policy attributes, the requested operation, and the requested resource,
in the light of applicable policies. The policy attributes may relate to authorization
and authentication. They may also refer to the attributes related to Quality of Service (QoS), or to service implementation details, such as transport protocol used,
and security algorithms implemented. The PEP and the PDP components may be
either distributed or resident on the same server. In our VL, access control works
as follows. A user who wants to perform an experiment submits a request to the
appropriate resource(s) involved in the experiments through a set of invocations to
WS providers. The Policy Manager (see Fig. 2) located in each of the requested
resources, implements the PEP and the PDP to take the access decision about the
user access request. The PEP wraps up an access request based on the user’s security attributes or credentials, on the requested resource, and on the action the user
wants to perform on the resource. It then forwards this request to the PDP, which
checks the request against the resource policy and determines whether the access
can be granted.
**Fig. 2 Policy Management System**
There is no standard way of implementing the PDP and PEP components; they may either be located on a single machine or be distributed across different machines depending on the convenience of the Grid Administrator and of the resource
provider.
The Policy Manager (see Fig. 2) has the ability to recognize rules from the WS
requestor and provider of relevant sources, and is able to correctly combine applicable access rules to return a proper, enforceable access decision.
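To make this division of labour concrete, the following minimal Java sketch (written for this discussion with hypothetical types, not taken from any standard API) shows a PEP that wraps a user request and delegates the access decision to a PDP:

// Minimal PEP/PDP sketch with hypothetical types; a real deployment would
// delegate to an XACML engine rather than to hand-written interfaces.
enum Decision { PERMIT, DENY, INDETERMINATE, NOT_APPLICABLE }

class AccessRequest {
    final String subject;   // who is asking
    final String resource;  // what is being accessed
    final String action;    // what operation is requested
    AccessRequest(String subject, String resource, String action) {
        this.subject = subject;
        this.resource = resource;
        this.action = action;
    }
}

interface PolicyDecisionPoint {
    // Evaluates the request against the applicable resource policies.
    Decision evaluate(AccessRequest request);
}

class PolicyEnforcementPoint {
    private final PolicyDecisionPoint pdp;
    PolicyEnforcementPoint(PolicyDecisionPoint pdp) { this.pdp = pdp; }

    // Wraps the user's attributes into a request, forwards it to the PDP,
    // and enforces the returned decision before the resource is touched.
    boolean requestAccess(String subject, String resource, String action) {
        return pdp.evaluate(new AccessRequest(subject, resource, action))
               == Decision.PERMIT;
    }
}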
Generally, policies are defined for access to a single resource; hence, the PEP
and the PDP can be contained in a single eNode or be distributed. VL resources may
-----
be part of more than one application and therefore there should be a defined access
_control service. Further, these resources can be used contemporaneously by different_
applications with different associated policies; hence they will be processed by the
applicable Policy Managers. In that case, the applications have their own PEP and
_PDP, which control user access to the applications. Further, the Policy Manager_
must be able to recognize the policy attributes related to access control, as well
as, the information related to QoS. In the following subsection, we describe the
implementation methodology employed for the Policy Manager and the standard
specification used to express the access policy requirements for a resource.
The described access control mechanisms of the Policy Manager can be imple
mented using XACML, which includes both a policy language and an access control decision request/response language (both encoded in XML). The policy language is used to describe general access control requirements, and has standard
extension points for defining new functions, data types, combining logic, etc. The
request/response language allows queries on whether a given action should be allowed, and the interpretation of the result. The response always includes an answer
about whether the request should be allowed using one of four values: Permit, Deny,
Indeterminate (in case of error or missing required values, so that a decision cannot be made) or Not Applicable (the request cannot be answered by this service). A
Policy represents a single access control policy, expressed through a set of Rules.
Each XACML policy document contains exactly one Policy or a PolicySet, which
contains other policies or a reference to policy locations. For example, consider a
scenario where a user wants to access and read a web page available in a resource.
The XACML representation of this request in the PEP is as follows:
<Request>
  <Subject>
    <Attribute AttributeId="urn:oasis:names:tc:xacml:1.0:subject:subject-id"
               DataType="urn:oasis:names:tc:xacml:1.0:data-type:rfc822Name">
      <AttributeValue>www.unica.it</AttributeValue>
    </Attribute>
  </Subject>
  <Resource>
    <Attribute AttributeId="urn:oasis:names:tc:xacml:1.0:resource:resource-id"
               DataType="http://www.w3.org/2001/XMLSchema#anyURI">
      <AttributeValue>http://webmail.dsf.unica.it/userGuide_gLite.html</AttributeValue>
    </Attribute>
  </Resource>
  <Action>
    <Attribute AttributeId="urn:oasis:names:tc:xacml:1.0:action:action-id"
               DataType="http://www.w3.org/2001/XMLSchema#string">
      <AttributeValue>read</AttributeValue>
    </Attribute>
  </Action>
</Request>
The PEP submits this request form to the PDP component which checks this
request against the policy of the resource hosting the intended web page. For example, the following policy states that the ”developers” group is allowed to read the
resource (i.e., the Web Page):
<Rule RuleId="ReadRule" Effect="Permit">
  <Target>
    <Subjects>
      <AnySubject/>
    </Subjects>
    <Resources>
      <AnyResource/>
    </Resources>
    <Actions>
      <Action>
        <ActionMatch MatchId="urn:oasis:names:tc:xacml:1.0:function:string-equal">
          <AttributeValue
              DataType="http://www.w3.org/2001/XMLSchema#string">read</AttributeValue>
          <ActionAttributeDesignator
              DataType="http://www.w3.org/2001/XMLSchema#string"
              AttributeId="urn:oasis:names:tc:xacml:1.0:action:action-id"/>
        </ActionMatch>
      </Action>
    </Actions>
  </Target>
  <Condition FunctionId="urn:oasis:names:tc:xacml:1.0:function:string-equal">
    <Apply FunctionId="urn:oasis:names:tc:xacml:1.0:function:string-one-and-only">
      <SubjectAttributeDesignator
          DataType="http://www.w3.org/2001/XMLSchema#string"
          AttributeId="group"/>
    </Apply>
    <AttributeValue
        DataType="http://www.w3.org/2001/XMLSchema#string">developers</AttributeValue>
  </Condition>
</Rule>
The PDP checks this policy against the request and determines whether the read
request can be allowed for the web page. It then forms an XACML response and
forwards it to the PEP which eventually allows the user to read the page. The implementation of XACML provides a programming interface to read, evaluate and
validate XACML policies. It can also be used to develop the Policy Manager con
-----
taining the PEP and the PDP, and performs most of the functionalities of the Policy
Manager. We can create a PEP which interacts with a PDP by creating requests
and interpreting the related responses. A PEP typically interacts in an application-specific manner and there is currently no standard way to send XACML requests to
an online PDP. Hence, we need to include code for both PEP and PDP in the same
application. For instance, the following code snippet will create an XACML request
and pass the same to the PDP.
RequestCtx request = new RequestCtx(subjects, resourceAttrs, actionAttrs,
                                    environmentAttrs);
ResponseCtx response = pdp.evaluate(request);
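Expanding this snippet into a fuller, self-contained sketch (written against Sun's open-source XACML implementation; class names and method signatures should be verified against the version actually in use, and the PDP is assumed to be already configured elsewhere with the resource policies):

import java.util.Set;

import com.sun.xacml.PDP;
import com.sun.xacml.ctx.RequestCtx;
import com.sun.xacml.ctx.ResponseCtx;
import com.sun.xacml.ctx.Result;

public class SimplePep {
    private final PDP pdp;   // assumed to be configured with the policies

    public SimplePep(PDP pdp) { this.pdp = pdp; }

    // Builds the XACML request context and asks the PDP for a decision.
    public boolean isPermitted(Set subjects, Set resourceAttrs,
                               Set actionAttrs, Set environmentAttrs) {
        RequestCtx request = new RequestCtx(subjects, resourceAttrs,
                                            actionAttrs, environmentAttrs);
        ResponseCtx response = pdp.evaluate(request);
        // A response may contain several results; permit only if all permit.
        for (Object o : response.getResults()) {
            if (((Result) o).getDecision() != Result.DECISION_PERMIT) {
                return false;
            }
        }
        return true;
    }
}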
The XACML based Policy Manager can recognize policy attributes related to
authentication and authorization. Hence, they can be used only for implementing
access control mechanisms. However, such authorization policies do not express
the capabilities, requirements, and general characteristics of entities (i.e., users and
resources) in an XML WS-based system and there are some more attributes, different from the access control attributes, that need to be examined before accessing a
WS.
For instance, one may need to negotiate QoS characteristics of the service, or privacy policies and also the kind of security mechanism used in the WS. Unfortunately, XACML does not provide the grammar and syntax required to express these
policies. For these aspects, we use WS-Policy specifications, which provide a flexible
and extensible grammar for expressing various aspects of policy attributes, such as
the used authentication scheme, the selected transport protocol, the algorithm suite,
and so on. For example, the following specification represents the policy for the algorithm suite required for cryptographic operations with symmetric or asymmetric
key based security tokens (it is also possible to include timestamps in the policy
specifications to prevent any misuse of the policies).
<wsp:Policy
    xmlns:sp="http://schemas.xmlsoap.org/ws/2005/07/securitypolicy"
    xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy">
  <wsp:ExactlyOne>
    <sp:Basic256Rsa15/>
    <sp:TripleDesRsa15/>
  </wsp:ExactlyOne>
  <wsp:All>
    <sp:IncludeTimestamp/>
  </wsp:All>
</wsp:Policy>
The Apache implementation of WS-Policy provides versatile APIs for programmatic access to WS-Policies. Under this approach, we can implement a policy
matching mechanism to negotiate security attributes, and other QoS attributes, be
-----
fore actual access to the WS. Moreover, WS-policy APIs are a flexible tool to read,
compare and verify the attributes present in WS-Policies. For instance, the following
code snippet shall be used for creating a Policy Reader object to access a WS-Policy
(here Policy A) and to compare this object with another policy (Policy B):
PolicyReader reader =
    PolicyFactory.getPolicyReader(PolicyFactory.DOM_POLICY_READER);
FileInputStream policyAStream = new FileInputStream("ResA.xml");
Policy policyA = reader.readPolicy(policyAStream);
FileInputStream policyBStream = new FileInputStream("ResB.xml");
Policy policyB = reader.readPolicy(policyBStream);
boolean result = PolicyComparator.compare(policyA, policyB);
Through the combination of XACML and WS-Policy specifications, we can implement a full fledged Policy Management system for WS to manage authorization
policies on resources as well as policies related to security and other QoS aspects.
However, this Policy Management system cannot be used as such in Grid environments, considering the very nature of jobs and resources in the Grid. In fact, in the
Grid, there are computationally intensive resources, such as clusters, that can either host an experiment as a service, or allow jobs to be executed in it. Hence, the
policy requirements in this environment will be different from those of WS environments. For example, suppose that a resource wants to contribute up to (but not
more than) 200MB of its memory for job execution in the Grid. To express such
policy, currently existing policy languages do not offer enough grammar and syntax. Hence, we suggest to extend the existing policy language schema to include
policies regarding elements typical of Grid Services, such as bandwidth information, memory, CPU cycle, etc. For our prototype implementation, we consider three
attributes namely the memory, CPU cycle and the available nodes in the cluster resource and a schema is developed with these attributes. The APIs of the WS-Policy
implementation are modified accordingly, to deal with this schema and be able to
perform operations such as compare, read, normalize, and so on.
The schema that includes the attributes related to a Grid resource, and its usage
in WS-Policy is as follows:
<xs:schema
    targetNamespace="http://unica.it/gridpolicy.xsd"
    xmlns:tns="http://unica.it/gridpolicy.xsd"
    xmlns:xs="http://www.w3.org/2001/XMLSchema"
    elementFormDefault="qualified"
    blockDefault="#all">
  <xs:element name="Mem" type="tns:OperatorContentType"/>
  <xs:element name="ProcessorSpeed" type="tns:OperatorContentType"/>
  <xs:element name="DiskSpace" type="tns:OperatorContentType"/>
</xs:schema>
The following WS-Policy uses this schema to represent the capabilities and policy
information of a Grid resource:
<wsp:Policy
    xmlns:sp="http://schemas.xmlsoap.org/ws/2005/07/securitypolicy"
    xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy"
    xmlns:cs="http://schemas.mit.edu/cs">
  <wsp:ExactlyOne>
    <wsp:All>
      <cs:Mem>1024</cs:Mem>
      <cs:ProcessorSpeed>2GHz</cs:ProcessorSpeed>
    </wsp:All>
    <wsp:All>
      <sp:Basic256Rsa15/>
      <sp:TripleDesRsa15/>
    </wsp:All>
  </wsp:ExactlyOne>
</wsp:Policy>
Through this policy, the Grid resource wants to advertise that it can allocate no
more than 1GB of its free memory to Grid job execution, and that it is able to provide
2GHz of its processor speed. This policy information can be read and compared with
other policies using the WS-Policy implementation libraries.
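To make the matching step concrete, the following sketch uses plain DOM parsing rather than the WS-Policy libraries; the element and namespace names follow the example above, and the method itself is illustrative, not part of any standard API. It checks whether a resource policy advertises enough memory for a given job:

import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class GridPolicyMatcher {
    // Namespace used by the example policy above.
    private static final String CS_NS = "http://schemas.mit.edu/cs";

    // Returns true if the resource policy offers at least requiredMemMb MB.
    public static boolean hasEnoughMemory(File policyFile, int requiredMemMb)
            throws Exception {
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        factory.setNamespaceAware(true);
        Document doc = factory.newDocumentBuilder().parse(policyFile);
        NodeList mems = doc.getElementsByTagNameNS(CS_NS, "Mem");
        if (mems.getLength() == 0) {
            return false;   // the policy does not advertise any memory
        }
        int offeredMb = Integer.parseInt(mems.item(0).getTextContent().trim());
        return offeredMb >= requiredMemMb;
    }
}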
This prototype implementation modifies the WS-Policy specification to deal with
a larger number of attributes. To implement these features in a real-time dynamic
environment, an extensive survey of Grid resource usage policies and their representation in a WS-policy schema are needed. Our future research will investigate
the development of a Policy Management system working for both WS and Grid
environments.
## 6 Implementation Hints
The illustrated framework has been the basis for developing a prototype VL which,
in an initial validation stage, has been used to test secure cooperation from the perspective of one scientific server only, for which a Security Server has been implemented, containing security functions deployed as Security WS. The prototype (see
Fig. 3) is built on top of Taverna[1], a workflow composer that allows designers to
map the initial abstract workflow into a detailed plan. Each Taverna workflow consists of a set of components, called Processors, each with a name, a set of inputs and
a set of outputs. The aim of a Processor is to define an inputs-to-outputs transformation. Vertical services can be installed by adding to Taverna new plug-in processors
that can operate alone or can be connected with data and workflows through control
links. When a workflow is executed and the execution reaches a security Processor,
1 Taverna is available in the myGrid open source E-science environment
http://www.mygrid.org.uk/
-----
an associated invocation task is called that invokes a specific horizontal service
implementing security mechanisms. The Scufl workbench included in MyGrid provides a view for composition and execution of processors. The internal structure
of a VL includes four components: a Security Server, a Front-End, a Back-End, a
Workflow Editor.
The Security Server exposes various functionalities aimed at data privacy and
security both in the pole and during the interaction among poles. It manages User
Authentication, Validity check of Security Contracts, Trust Levels, Cryptographic
Functions, and Security Levels. The Security Server service communicates with the
front-end scientific services by sending them the local Security Levels and the list
of remote poles offering a specific resource. User authentication occurs through
insertion of a secret code by the user requesting the execution of a protected workflow. The Front-end of the scientific pole is a set of WS that can be invoked by a
workflow editor, after negotiation. These WS interact with the Security Server, from
which they require information related to the local pole access policy. The Front-end
includes services that do not hold their own resource trust level, but rather inherit
the clearance level of the user executing the WS. However, the Front-end service
receives, at creation time, a threshold security level, reflecting the quality and sensitiveness of the service.
**Fig. 3 Security Components Implementation Architecture**
The Back-end of a scientific pole is constituted by the local resources of the scientific
pole, e.g., calculus procedures or data stored in databases. All the resources
in the Back-end are exposed as WS, and can be invoked by a remote Virtual Laboratory. Each resource has its own Resource Service Level assigned by an administrator. The applied policy is ”no read up, no write down”. The invocations of the
Back-end services are protected via SSL. Finally, the scientific workflow is defined
-----
using the Taverna workflow editor of MyGrid [2]. Upon proper negotiation of security
contracts, a download of the workflow modifier tool and the encryption/decryption
module from the provider pole is required. The modifier tool modifies the scientific
workflow, by adding crypt and decrypt activities and the input data related to access
codes of services. The crypt/decrypt module implements cryptographic functions
on exchanged data (we use AES). These editors are designed to be used by teams of scientists, generally co-ordinated by a Chief Scientist. However, a workflow is
not associated to a whole, given global Security Level, but rather each service of
the workflow has an associated Security Level depending on the qualification of the
user requiring the service.
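The ”no read up, no write down” rule applied to the Back-end can be stated compactly. The following minimal Java sketch (with a hypothetical integer encoding of levels, where higher numbers mean more sensitive) captures the check performed before a Back-end invocation:

// Bell-LaPadula style check: a subject may read objects at or below its own
// clearance level ("no read up") and write only at or above it ("no write down").
public class SecurityLevelCheck {
    public static boolean canRead(int subjectLevel, int objectLevel) {
        return subjectLevel >= objectLevel;   // no read up
    }
    public static boolean canWrite(int subjectLevel, int objectLevel) {
        return subjectLevel <= objectLevel;   // no write down
    }
}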
## 7 Concluding Remarks
This paper has highlighted the requirements that should be considered when access
control policies of Virtual Laboratories are written. To allow an access control policy to be flexible and dynamic, it can no longer be a high-level specification, but
must become a dynamic specification that allows real-time access control administration of WS and the Grid resources. To this aim, we have presented the security requirements of a cooperative environment for executing scientific experiments.
Namely, we have illustrated XACML policy specifications and the use of WS-Policy to define the scientific resource sharing requirements needed to securely activate a collaboration in experiments, with negotiation of QoS policy attributes. A security framework and a prototype environment have been presented, with the purpose
of providing a uniform view of Grid service policies for a dynamic environment
where a set of nodes cooperate to perform a scientific experiment. Currently there
exists no standardized access control for virtual applications implemented with WS
on the Grid. We plan to extend the requirements presented in this paper and define
a formal security model and architecture for WS and Grid enabled scientific applications. The model will be based on the security policy languages used in this paper, independently of specific technologies and configuration models. This should
ensure industry-wide adoption by vendors and organizations alike to allow crossorganization business integration. Interoperation requires a standard-based solution.
In fact, a Virtual Laboratory, created with WS and the Grid, where scientific relationships may frequently change, requires a highly flexible, but robust security
framework, based on approval and universal acceptance of standards. This would
allow business partners to avoid interoperability problems among their disparate
applications and maintain a security context to allow interoperation.
**Acknowledgements This paper has been partially supported by the Italian TEKNE Project.**
2 Taverna, and other e-Science management tools, are freely available on the Internet, but to ensure
encryption, decryption and server authentication capabilities they require additional features.
-----
## References
1. Amigoni F., Fugini M.G., Liberati D., ”Design and Execution of Distributed Experiments”,
Proc. 9th International Conference on Enterprise Information Systems, (ICEIS’07), Madeira,
June 2007
2. Anderson A. et. al., XACML 1.0 Specification,
http://www.oasis-open.org/committees/tc home.php?wg abbrev=xacml, 2003
3. Atkinson B. et al., Web Services Security (WS-Security), 2002, Version 1.0 April 5, 2002,
http://www.verisign.com/wss/wss.pdf
4. Bartel M., Boyer J., Fox B., LaMacchia B. and Simon, XML Signatures,
http://www.w3.org/TR/2002/REC-xmldsig-core-20020212/E
5. Bosin A., Dessì N., Fugini M.G., Liberati D., Pes B., ”Supporting Distributed Experiments in
Cooperative Environments”, in Business Process Management, Springer-Verlag Bussler C.,
Haller A. (Eds.), vol. 25, 2006, pp. 281 - 292
6. Camarinha-Matos L.M., Silveri I., Afsarmanesh H., and Oliveira A.I., ”Towards a Frame
work for Creation of Dynamic Virtual Organizations”, in Collaborative Networks and Their
Breeding Environments, Springer, Boston Volume 186/2005, 2005, pp. 69-80
7. Casati F., Castano S., Fugini M.G., ”Managing Workflow Authorization Constraints Through
Active Database Technology”, Journal of Information Systems Frontiers, Special Issue on
Workflow Automation and Business Process Integration, 2002
8. Damiani E., De Capitani di Vimercati S., Paraboschi S., Samarati P., ”Fine Grained Access
Control for SOAP E-Services”, in Proc. of the Tenth International World Wide Web Conference, Hong Kong, China, May 1-5, 2001.
9. Della-Libera G. et al., Web Services Trust Language (WS-Trust), available at
http://www.ibm.com/developerworks/library/ws-trust/index.html
10. Della-Libera G., et al, ”Web Services Security Policy Language (WS-SecurityPolicy,” July
2005. (See http://www.oasis-en.org/committees/download.php/16569/)
11. Foster, I. 2006. ”Service-Oriented Science: Scaling e-Science Impact”, Proceedings of the
2006 IEEE/WIC/ACM International Conference on Web intelligence, 2006
12. Hallam-Baker P., Hodges J., Maler E., McLaren C., Irving R., SAML 1.0 Specification,
http://www.oasis-open.org/committees/tc home.php?wg abbrev=security, 2003
13. IETF Policy Framework Working Group, A framework for policy-based admission control,
available at http://www.ietf.org/rfc/rfc2753.txt, 2003
14. ImamuraT., Dillaway B., Simon E., XML Encryption, http://www.w3.org/TR/xmlenc-core/
15. Jiang H., Lu S., ”Access Control for Workflow Environment: The RTFW Model”, in Com
puter Supported Cooperative Work in Design III, LNCS Springer Berlin / Heidelberg, Volume
4402/2007, 2007, pp. 619-626
16. Kim K.H., Buyya R., ”Policy-based Resource Allocation in Hierarchical Virtual Organiza
tions for Global Grids”, 18th International Symposium on Computer Architecture and High
Performance Computing (SBAC-PAD’06), 2006, pp. 36-46
17. Lang B., Foster I., Siebenlist F., Ananthakrishnan R., Freeman T., ”A Multipolicy Authoriza
tion Framework for Grid Security,” Fifth IEEE International Symposium on Network Computing and Applications (NCA’06), 2006, pp. 269-272
18. Mohammad A.,Chen A.,Wang G. W., Changzhou C., Santiago R., ”A Multi-Layer Security
Enabled Quality of Service (QoS) Management Architecture”, in Enterprise Distributed Object Computing Conference, 2007 (EDOC 2007) Oct. 2007, pp.423-423
19. Nadalin A., C. Kaler, P. Hallam-Baker, R. Monzillo (Eds.) Web Services Security, available
at http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0.pdf
20. Welch V., Siebenlist F., Foster I., Bresnahan J., Czajkowski K., Gawor J., Kesselman C.,
Meder S., Pearlman L., Tuecke S., ”Security for Grid Services”, Proc. 12th IEEE International
Symposium on High Performance Distributed Computing, 22-24 June 2003, pp. 48- 57
-----
|
{
"disclaimer": "Notice: The following paper fields have been elided by the publisher: {'abstract', 'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1007/978-0-387-09699-5_15?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1007/978-0-387-09699-5_15, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "BRONZE",
"url": "https://link.springer.com/content/pdf/10.1007/978-0-387-09699-5_15.pdf"
}
| 2,008
|
[
"JournalArticle"
] | true
| 2008-09-07T00:00:00
|
[] | 9,527
|
en
|
[
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/0154a7e5dfd13d425e75b44ccbbb87043c308356
|
[] | 0.942783
|
Analysis of Consensus Mechanisms: PoW and PoS
|
0154a7e5dfd13d425e75b44ccbbb87043c308356
|
Applied and Computational Engineering
|
[
{
"authorId": "73529960",
"name": "Jia Wenxuan"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"Appl Comput Eng"
],
"alternate_urls": null,
"id": "38ef5a81-0fad-4de7-abc2-0fb847d3ece7",
"issn": "2755-2721",
"name": "Applied and Computational Engineering",
"type": "journal",
"url": null
}
|
Along with the trend of the Bitcoin blockchain, the concept behind all virtual currency has become popular in the study of the Internet. This essay mainly researches two kinds of common consensus mechanisms for the current blockchain network and looks forward to the future development of the technology's usage in daily life. This research aims to overview the two most common consensus mechanisms in the construction of blockchain. By reviewing resources from other research, an explanation of the goal of the consensus method, the advantages and disadvantages of each approach, and the future development of these two methods are summarized and developed. The result of the review explains the shifts of mature virtual currencies from proof of work to proof of stake and advises what mechanism should be used at the starting stage and why a shift is necessary for proof of stake currencies.
|
Proceedings of the 2023 International Conference on Software Engineering and Machine Learning
DOI: 10.54254/2755-2721/8/20230187
# Analysis of consensus mechanisms: PoW and PoS
**Jiang Wenxuan**
Ningbo Xiaoshi High School, Ningbo, Zhejiang, China, 315000
vincentjiang0207@163.com
**Abstract. Along with the trend of Bitcoin blockchain, the concept behind all virtual currency**
has become popular in the study of the Internet. This essay mainly researches two kinds of common consensus mechanisms for the current blockchain network and looks forward to the future
development of the technology’s usage in daily life. This research aims to overview the two most
common consensus mechanisms in the construction of blockchain. By reviewing resources from
other research, an explanation of the goal of the consensus method, the advantages, and disadvantages of each approach and the future development of these two methods are summarized
and developed. The result of the review explains the shifts of mature virtual currencies from
proof of work to proof of stake and advises what mechanism should be used at starting stage and
why a shift is necessary for proof of stake currencies.
**Keywords: PoW, PoS, consensus mechanism, blockchain, virtual currency.**
**1. Introduction**
Just like what Satoshi Nakamoto said in his paper, “A purely peer-to-peer version of electronic cash
would allow online payments to be sent directly from one party to another without going through a
financial institution” [1]. Bitcoin is a kind of electronic cash in which there is no trusted third party
required. For a kind of currency that is going to lead the world's development in the future, it is important to research and review the current development of its mechanisms.
In detail, this paper focuses on the advantages and disadvantages of proof of work and proof of stake, known as PoW and PoS. Reviews of the structure of the algorithms and of documents from the developers are the sources used by this report. By summarizing the features of the two methods, both the long-term and short-term influences are discussed, and the risks of exchanges and the problems of starting stages are also evaluated in the paper. The research method is to review the mechanisms of both proofs and summarize their features. Furthermore, scenarios are simulated to evaluate the advantages and disadvantages at a specific stage of a currency's development or usage.
As a strong foundation, the report, which is a consensus mechanism analysis, could help researchers
or organizations to make a better choice when constructing their new currency in the future. It would also help other researchers summarize new mechanisms' advantages and disadvantages.
**2. Basic knowledge of blockchain and consensus mechanism**
Blockchain is a linked list of blocks. In each block, numerous things are packaged, such as all trade data and a new key for the next block. The most important feature of this chain is that connecting a new block requires addressing information from the previous block, which prevents double spending and makes it difficult to modify the contents of any block. The original blockchain was invented by Satoshi Nakamoto, and the product of this algorithm is Bitcoin, known as BTC. A schematic diagram for the original
blockchain is shown in figure 1. In this schematic diagram, the inheritance of the blockchain is displayed: the connection between one transaction and the next requires the previous key to verify the signature.
**Figure 1. Schematic diagram of blockchain [1].**
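To make this linkage concrete, the following minimal Java sketch (illustrative only; real Bitcoin blocks carry many more fields, such as a Merkle root and a timestamp) shows a block that references its predecessor by hash, so that changing any earlier block invalidates every hash after it:

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HexFormat;

public class Block {
    final String prevHash;   // addressing information of the previous block
    final String data;       // stands in for the packaged trade records
    final String hash;       // this block's own identity

    Block(String prevHash, String data) throws Exception {
        this.prevHash = prevHash;
        this.data = data;
        MessageDigest sha = MessageDigest.getInstance("SHA-256");
        byte[] digest = sha.digest((prevHash + data)
                                   .getBytes(StandardCharsets.UTF_8));
        this.hash = HexFormat.of().formatHex(digest);
    }
}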
The idea of blockchain is essential for a currency without a third party, and it provides the currency with numerous distinctive features. However, concerns remain. In this paper, how to make a new block is the focus, and different consensus mechanisms are researched. “Blockchain technology, the consensus mechanism determines the security, scalability, and decentralization of Blockchain” [2].
Without a proper mechanism, the production of new blocks is not constrained. If new blocks could be produced too easily, too many branches of the blockchain would destroy the currency, since people would not know where to record their trades and no single branch would be authoritative. Endless merge processes would be required.
Blockchain has many outstanding features, such as immutability, security, speed, and consensus, as Lashkari mentioned in his review, and the consensus mechanism is the source of these features [3]. The main idea of a consensus mechanism is to add an obstacle to creating a new block, where solving the obstacle is hard but verifying a solution is easy. As this description suggests, a one-way function is required for such an obstacle. With this mechanism, the blocks produced are much more trustworthy, and more people will trade on them and tend to add new blocks after them, which is also known as consensus.
Furthermore, a different kind of proof is possible if validity is not determined solely by computing power. Another kind of consensus mechanism is based on the stake (currency) that the block creator owns: the standard for creating a new block is the amount of currency held (often, the length of holding is also considered an important factor).
**3. Proof of work**
For proof of work, the main idea of this mechanism is to set a task that is difficult for everyone for every block, so that adding a new block is hard. Two widely used approaches to proof of work are the one-way function and the space-time examination.
_3.1. Hash applied to bitcoin_
For the hash, the most widely known example is bitcoin. It is the basis of countless different currencies.
Bitcoin created the foundation for all other algorithms that use hashing. Although other, more practical ways
have been developed, it is still critical for us to learn how a simple hash function works for blockchain.
-----
The main idea is to pose a difficult problem and repeatedly try candidate solutions. In this case, the hash function SHA-256 (the 256-bit Secure Hash Algorithm) is used to produce a value with numerous zeros at the start. The outcome is essentially random, and no exploitable pattern exists, so a huge number of trials, which means computing power, is required to make a new block. As the required number of zeros increases, so does the difficulty, and adjusting the difficulty helps limit the rate at which blocks are produced.
However, solving a problem alone would not prevent modification, so the real rule of new block creation is based on the data in the previous block. Every time a miner wants to produce a new block, the hash input must include all the trades in the block, the signature of the miner who made the last block, and the last block's hash. If someone wants to modify previous trade records, they need to redo the hash calculation. Moreover, changing the content of a block means the attacker must also change all the following blocks.
Using this kind of algorithm, the blockchain is produced. Bitcoin is a good example because its process is straightforward and easy to understand. Other currencies use a different hash function or a different way of validation; some coins even use the MD5 message-digest algorithm as their one-way function, although it has been proven easy to break. Nonetheless, the hash-based algorithm remains the most widely known approach.
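A minimal Java sketch of this leading-zeros puzzle (illustrative only; Bitcoin's actual target encoding and header layout differ) is shown below. Note the asymmetry: finding a nonce takes many trials, while verifying one takes a single hash:

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HexFormat;

public class PowMiner {
    // Tries nonces until SHA-256(blockHeader + nonce) starts with the
    // required number of hexadecimal zeros.
    public static long mine(String blockHeader, int difficulty) throws Exception {
        MessageDigest sha = MessageDigest.getInstance("SHA-256");
        String target = "0".repeat(difficulty);
        for (long nonce = 0; ; nonce++) {
            byte[] digest = sha.digest((blockHeader + nonce)
                                       .getBytes(StandardCharsets.UTF_8));
            if (HexFormat.of().formatHex(digest).startsWith(target)) {
                return nonce;   // a valid proof of work
            }
        }
    }
}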
_3.2. Storage applied for proof of the capacity_
POC means proof of capacity. It is different from the hash approach mentioned above. The examination
takes place by storage ability. The developer of this mechanism claims it would not cost a lot of power
compared to bitcoin. However, the end of POC causes a gigantic rise in hard disk prices.
Most of the POC would also include time as another factor. Every time when a miner claim they
have this ability of storage, the examiner request would be sent, and the miner needs to rent the request.
A verifier can check if a prover has stored the data they committed to over space and over a period.
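A toy Java illustration of such a space commitment (greatly simplified; real proof-of-capacity schemes use far more elaborate plotting and challenge protocols) is shown below: the prover precomputes and stores seed-derived hashes, and answering a random challenge quickly is only cheap if the data was actually kept on disk:

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Arrays;

public class ToyProofOfCapacity {
    private final byte[][] plot;   // the stored "capacity"

    // The prover fills its storage with seed-derived hashes ahead of time.
    public ToyProofOfCapacity(String seed, int slots) throws Exception {
        MessageDigest sha = MessageDigest.getInstance("SHA-256");
        plot = new byte[slots][];
        for (int i = 0; i < slots; i++) {
            plot[i] = sha.digest((seed + i).getBytes(StandardCharsets.UTF_8));
        }
    }

    // An honest prover answers a challenge with a cheap lookup; anyone who
    // discarded the plot must recompute, which takes noticeably longer.
    public byte[] answer(int challengeIndex) {
        return plot[challengeIndex];
    }

    // The verifier recomputes a single slot to check the answer.
    public static boolean verify(String seed, int challengeIndex, byte[] answer)
            throws Exception {
        MessageDigest sha = MessageDigest.getInstance("SHA-256");
        byte[] expected = sha.digest((seed + challengeIndex)
                                     .getBytes(StandardCharsets.UTF_8));
        return Arrays.equals(expected, answer);
    }
}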
_3.3. Advantages of proof of work_
The advantages of PoW are that it provides strong verification for everyone and that it can be used in the starting phase of a currency. Anyone can verify easily because the proof relies solely on a one-way function applied to the same message. With a hash function it is difficult to find an input that produces a certain value, but easy to calculate the value of a given input.
Moreover, in the early process of building blocks, the total amount of currency is not yet sufficient, so the consensus mechanism cannot be based on stake. PoW rewards the first group of miners with some currency, which encourages them to keep building the blockchain and earn more rewards as the value of the coins increases. At any stage, the PoW mechanism is useful for building blocks, since it depends solely on solving problems instead of proving ownership of a huge amount of currency. Furthermore, this method of constructing new blocks can prevent forking, because every miner's computing power is limited. As explained by Cointelegraph, “A miner would have to split
their computational resources between the two sides of the fork in order to support both blockchains.
As a result, through an economic incentive, proof-of-work systems naturally prevent constant forking
and urge the miners to pick the side that does not wish to harm the network” [5].
_3.4. Disadvantages of proof of work_
As the currency develops further, plenty of disadvantages appear. When the difficulty of producing a new block rises to a high level, a huge amount of resources, such as computers and electricity, is wasted on calculating a simple hash function, and as the currency develops, even more resources are required. According to Bonheur, “Powerful computers inherently consume a lot of energy.
Furthermore, these machines require effective heat management or cooling system to remain operational
[and prevent overheating, as well as associated damages to hardware components due to internal heat](https://www.profolus.com/topics/causes-of-overheating-in-electronic-devices/)
build-up” [6].
Moreover, when someone wants to attack a currency, a 51% attack is possible for some currencies at the start. Since few miners are focusing on such a blockchain, the attackers can use
-----
51% of the network's computing power to create a whole new branch and replace the original chain. This can happen if someone is really determined to attack. Thus, excessive computing power increases the probability of a successful attack on a young PoW currency.
**4. Proof of stake**
The basic concept of proof of stake is that instead of using computing power as the indicator of the capability to build a new block, PoS uses a stake to prove the validity of a new block. In this way, far less computing power is required to maintain a currency, though new problems emerge.
_4.1. Ethereum 2.0_
It is widely known that Ethereum (ETH) is another widely used currency that uses PoW to create new blocks on its blockchain. However, for the next version of ETH, its developers decided to shift to PoS. Before this transition, a huge amount of energy was required for bitcoin mining and ETH mining; for the further development of this currency, a shift was needed.
For ETH 2.0, PoS is used. In principle, PoS is a new consensus mechanism for ETH in which
“This staked ETH then acts as collateral that can be destroyed if the validator behaves dishonestly or
lazily. The validator is then responsible for checking that new blocks propagated over the network are
valid and occasionally creating and propagating new blocks themselves” [4]. As the schematic diagram in figure 2 shows, a part of the block creator's stake is used to vouch for the new block's validity, and the stake-weighted decision determines whether the block becomes part of the main branch.
**Figure 2.** Schematic diagram of PoS used by ETH 2.0 [4].
ETH 2.0 asks a group of validators to deposit some ETH into a contract and run software programs. A validator is responsible for creating new blocks and sending them out to other people in the network. Furthermore, a feature of finality is introduced, which means a block's content cannot be changed unless a lot of ETH is burnt; the finalization process takes place only if two-thirds of the validators agree.
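The core idea of stake-weighted selection can be sketched in a few lines of Java (illustrative only; Ethereum's actual validator selection uses protocol-level randomness and fixed-size deposits):

import java.util.Map;
import java.util.Random;

public class StakeWeightedSelector {
    // Picks a validator with probability proportional to its stake.
    public static String pickValidator(Map<String, Long> stakes, Random rng) {
        long total = stakes.values().stream().mapToLong(Long::longValue).sum();
        long ticket = (long) (rng.nextDouble() * total);
        for (Map.Entry<String, Long> e : stakes.entrySet()) {
            ticket -= e.getValue();
            if (ticket < 0) {
                return e.getKey();
            }
        }
        throw new IllegalStateException("empty validator set");
    }
}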
_4.2. Advantages_
The PoS mechanism only requires currency to prove a stakeholder's ability to maintain a block, instead of using plenty of resources to verify that ability. In most cases, the PoS chain of a currency develops a lot faster than a PoW chain, and thus the speed of payment is faster. According to Gehmlich, “The
proof-of-stake solves scalability issues that have been a thorn in the flesh in the proof-of-work consensus
mechanism. PoS facilitates faster transactions since blocks are approved faster as there’s no need to
solve complex mathematical equations. Since no physical machines or mining farms requiring ample
energy supplies are needed to generate consensus, there is better scalability” [7].
-----
For another reason, the people who take part in PoS are more likely to contribute to the currency. Because they hold a lot of it, an increase in the value of the coins benefits them, so they are willing to work on it and have a good reason to maintain new blocks properly and quickly. It is essential to reward the people who want to improve the currency; under PoW, by contrast, some miners might just complete the work and sell the coins.
Moreover, a coin using PoS is less likely to experience a 51% attack. Since an attacker can deploy a large amount of computing power against a coin, a 51% attack on a PoW-based coin is feasible while the coin is relatively unpopular. By contrast, for any kind of coin it is much more difficult for an attacker to control 51% of the stake; in some algorithms, changing blocks even requires the attacker to hold two-thirds of the total currency. That is far more difficult, and anyone holding that many coins would have little incentive to attack.
_4.3. Disadvantages_
One of the disadvantages is that in a normal PoS community, more stake should mean more power in policy-making, but that is not always the case. Many people invest some of their money in electronic currencies such as bitcoin and ETH without creating their own wallets. Instead, they open an account on a platform, and the platform purchases and sells all the coins on their behalf. Under the concept of stake equals power, the people who own the coins do not get to vote, since the platform holds the coins for them, which means a massive corporation might make several bad decisions that harm the currency. It is also difficult for most people to understand and use a hash code as their account.
In another case, if a coin has only just started developing, PoS is not likely to work. Few people hold coins, and they can only buy them from other people or obtain them by mining. It is possible that only the introducer of the coin would maintain the chain, so rewards cannot be distributed and the influence of the coin cannot increase. As a result, no one would spend more on the coin. It is therefore inappropriate to use PoS at the beginning of a new electronic currency.
Furthermore, proof of stake might reintroduce the problem of centralization. According to Chandler
[8], since there is no limit on how much crypto a single validator can stake, a very large validator might act
like a bank and control the currency.
**5. Future development of consensus mechanisms**
Nowadays, more and more consensus mechanisms are being developed. One example is a combined network:
"an improved network for blockchain is proposed to combine different blockchain networks together. It
uses the POU consensus mechanism to improve the network environment, which consists of Proof of
Stake Entrance, Hash Net Verification and Delegated Parliament" [9]. This kind of mixed network requires a different level of explanation and evaluation, since it also combines the effects of its components. Moreover,
variations on the basic methods have been developed, such as Delegated Proof of Stake (DPoS)
and Byzantine Fault Tolerance (BFT) consensus [10].
In the future, blockchain and digital currencies will be used more and more, and certain
changes will appear as they develop. The content in each block will grow, and the frequency
of requests and responses will increase noticeably. To cope with this increase in demand for trading, better
algorithms will be developed and applied to the currencies. The ability to be updated is also
being considered: instead of launching a different currency, a future developer should be able to update an
existing currency and blockchain, which could greatly reduce transfer costs. In the future, blockchains
could also be used for smart contracts, and during the adoption of digital currency, even the concepts of
country and world could change. In the metaverse, more currency systems will need to be established.
Moreover, NFTs have existed for some time, and such merchandise can be stored on a blockchain,
representing all kinds of further possibilities.
**6. Conclusion**
In conclusion, PoW and PoS have differing features, and the comparison comes down to a simple point: PoW
is better suited to launching a currency, while PoS is better suited to its further development. PoS is likely to become the
mainstream of blockchain in the future, but the contribution of PoW should not be forgotten.
The lack of quantitative method analysis is one way in which this report could be improved. Moreover,
practical examples of the risks of PoW and PoS could be added to illustrate their advantages and disadvantages with more clarity and detail. These improvements would help readers reason about the
development of today's virtual currencies and provide useful resources for choosing between the
mechanisms.
**References**
[1] Bitcoin: A Peer-to-Peer Electronic Cash System. Satoshi Nakamoto Institute. (n.d.). Retrieved 7 March 2023, from https://nakamotoinstitute.org/bitcoin/
[2] Liu, Z., Liu, W., Zhang, Y., Xu, G., & Yu, H. (2019). Overview of Blockchain Consensus Mechanisms. Journal of Cryptologic Research, 6, 395–432. https://doi.org/10.13868/j.cnki.jcr.000311
[3] Lashkari, B., & Musilek, P. (2021). A Comprehensive Review of Blockchain Consensus Mechanisms. IEEE Access, 9, 43620–43652. https://doi.org/10.1109/ACCESS.2021.3065880
[4] Proof-of-stake (PoS). (n.d.). Ethereum.org. Retrieved 7 March 2023, from https://ethereum.org
[5] Proof-of-stake vs. proof-of-work: Pros, cons, and differences explained. (n.d.). Cointelegraph. Retrieved 12 March 2023, from https://cointelegraph.com/blockchain-for-beginners/proof-of-stake-vs-proof-of-work:-differences-explained
[6] Bonheur, K. (2021, September 15). PoW: Advantages and Disadvantages of Proof-of-Work. Profolus. https://www.profolus.com/topics/pow-advantages-and-disadvantages-of-proof-of-work/
[7] Gehmlich, B. (2022, October 10). Pros and Cons of Proof of Stake for Ethereum Blockchain Security. Gigster. https://gigster.com/pros-and-cons-of-pos-for-ethereum-security/
[8] Chandler, S. (n.d.). Proof of stake vs. proof of work: Key differences between these methods of verifying cryptocurrency transactions. Business Insider. Retrieved 12 March 2023, from https://www.businessinsider.com/personal-finance/proof-of-stake-vs-proof-of-work
[9] Guo, H., Zheng, H., Xu, K., Kong, X., Liu, J., Liu, F., & Gai, K. (2018). An Improved Consensus Mechanism for Blockchain. In M. Qiu (Ed.), Smart Blockchain (Vol. 11373, pp. 129–138). Springer International Publishing. https://doi.org/10.1007/978-3-030-05764-0_14
[10] Blockchain Consensus Algorithms: What and How? Blockchain Certification Programs CBCA. (n.d.). Retrieved 7 March 2023, from https://www.cbcamerica.org/blockchain-insights/blockchain-consensus-algorithms-what-and-how
_Article_
## Cloud-Assisted Private Set Intersection via Multi-Key Fully Homomorphic Encryption
**Cunqun Fan** [1,2], **Peiheng Jia** [3], **Manyun Lin** [1,2], **Lan Wei** [1,2,*], **Peng Guo** [1,2], **Xiangang Zhao** [1,2] and **Ximeng Liu** [4]
1 Key Laboratory of Radiometric Calibration and Validation for Environmental Satellites, National Satellite
Meteorological Center (National Center for Space Weather), China Meteorological Administration,
Beijing 100081, China
2 Innovation Center for FengYun Meteorological Satellite (FYSIC), Beijing 100081, China
3 School of Mathematics and Computer Science, Shanxi Normal University, Taiyuan 030031, China
4 College of Computer and Data Science, Fuzhou University, Fuzhou 350108, China
* Correspondence: weilan@cma.cn
_Mathematics_ 2023, 11, 1784. https://doi.org/10.3390/math11081784
Academic Editor: Antanas Cenys
Received: 21 March 2023; Revised: 4 April 2023; Accepted: 4 April 2023; Published: 8 April 2023
**Abstract: With the development of cloud computing and big data, secure multi-party computation,**
which can collaborate with multiple parties to deal with a large number of transactions, plays an
important role in protecting privacy. Private set intersection (PSI), a form of multi-party secure
computation, is a formidable cryptographic technique that allows the sender and the receiver to
calculate their intersection and not reveal any more information. As the data volume increases and
more application scenarios emerge, PSI with multiple participants is increasingly needed. Homomorphic encryption is an encryption algorithm designed to perform a mathematical-style operation
on encrypted data, where the decryption result of the operation is the same as the result calculated
using unencrypted data. In this paper, we present a cloud-assisted multi-key PSI (CMPSI) system
that uses fully homomorphic encryption over the torus (TFHE) encryption scheme to encrypt the
data of the participants and that uses a cloud server to assist the computation. Specifically, we design
some TFHE-based secure computation protocols and build a single cloud server-based private set
intersection system that can support multiple users. Moreover, security analysis and performance
evaluation show that our system is feasible. The scheme has a smaller communication overhead
compared to existing schemes.
**Keywords: private set intersection; homomorphic encryption; multi-key TFHE; cloud computing;**
privacy protection
**MSC: 68U99; 68T09; 68Q06**
**1. Introduction**
With the rapid growth of data in the Internet era, the demand for data storage and
computing capacity in various fields far exceeds the capacity of their own devices. To solve
this problem, cloud computing has been proposed. Cloud computing is generally defined
as an internet-based computing method. In this way, the shared software and hardware
information and resources can be provided to various terminals and other devices of the
computer as required. Cloud computing technology can transmit various information to
the Internet and store and calculate data, and users can view the calculation results and
data information. However, security issues in the context of cloud computing have become increasingly
prominent [1]. Data security issues in cloud computing mainly include storage data
security, computing data security, and transmission data security. When users store data
on a cloud server, the cloud server obtains the users' data first, and abuse by malicious
users can also create a risk of data leakage. During cloud server computation, the cloud
server learns the calculation results and additional data; this information, which should be
known only to users, is also at risk of leakage. In addition,
data theft can easily occur during data transmission, exposing user data to theft and tampering [2].
Private set intersection (PSI), as an interactive encryption protocol, calculates the
intersection of two data owners’ data and returns it to one of them. We generally refer to
the party receiving the data as the receiver and the party receiving nothing as the sender. It
is important and necessary to protect the privacy of the set in computing, especially when
the information in the set is important private information such as the customer transaction
information of a bank or the address book of a user. With the concerted efforts of many
researchers, PSI technology has developed rapidly, and more and more efficient solutions
have been proposed [3–15]. After several years of development, PSI technology has been
applied to the fields of internet of vehicles [16], profile matching [17], and private contact
search [18]. In the current situation where the data volume is large and scattered in the
hands of different participants, PSI technology can well balance the relationship between
privacy and information sharing. Leveraging the storage and computing power of cloud
servers allows PSI protocols to compute larger datasets, but current cloud-assisted PSI
schemes suffer from information leakage [19] or large communication overhead [20].
Fully homomorphic encryption (FHE) refers to the computation of data that has been
homomorphically encrypted, and the computed decryption result is the same as that
obtained by the same computation for unencrypted data. The concept of FHE has been
proposed as early as the late 1970s, but it has only started to develop rapidly in the last
two decades. The development of fully homomorphic encryption is generally divided
into three stages. In 2009, the first generation of fully homomorphic encryption started
to develop, and Gentry constructed the first fully homomorphic encryption scheme [21].
The scheme first constructs a somewhat homomorphic encryption (SHE) scheme that can
homomorphically compute circuits of a certain depth, then compresses and decrypts the
circuits and performs bootstrapping operations in an orderly manner, and finally obtains a
scheme that can homomorphically compute arbitrary circuits. The second generation of
fully homomorphic encryption schemes arose in 2011 when Brakerski and Vaikuntanathan
implemented FHE for the first time under the LWE assumption using linearization and
modulo conversion [22] and implemented FHE under the RLWE assumption [23]. These
schemes do not require compression and decryption circuits, and the security and efficiency
are greatly improved. In 2013, the third generation of fully homomorphic encryption
schemes was born, and Gentry et al. for the first time designed a fully homomorphic
encryption scheme, Gentry–Sahai–Waters (GSW), that does not require the computation of
a key using the approximate eigenvector technique [24].
There are two broad categories of fully homomorphic algorithms, the BGV [25] scheme
proposed by Professor Brakerski of Stanford University, Research Fellow Gentry of IBM,
and Professor Vaikuntanathan of the University of Toronto, and the GSW [24] scheme proposed by Gentry of IBM, Sahai of the University of California and Waters of the University
of Texas at Austin. Fully homomorphic encryption over the torus (TFHE) [26] is an improvement of
the GSW scheme with higher efficiency. TFHE can accomplish fast comparisons, supports
arbitrary Boolean circuits, and allows fast bootstrapping to reduce the noise introduced by ciphertext computation. In previous studies, the BGV scheme has been used for the
unbalanced private set intersection scenario [27–29]. Unlike previous works, this paper uses
the TFHE encryption scheme for the first time to implement a private set intersection protocol based
on cloud computing. At a high level, our contributions can be summarized as follows:
- We have designed a series of security sub-protocols for the MKTFHE cryptosystem,
including some basic circuit gate operations and security comparison protocols.
- We have built a cloud-assisted multi-key private set intersection (CMPSI) system
based on a single cloud server. Our system can prevent collusion attacks between
servers and participants.
- We strictly prove the security of the proposed CMPSI system under the semi-honest
model.
- We have conducted extensive experimental evaluation on the performance of the
scheme, which proves that our scheme has greatly reduced the communication cost of
the participants.
The rest of the paper is organized as follows. In Section 2, we describe the related work
of private set intersection. In Section 3, we provide the preliminaries. Section 4 details the
system model, threat model, and design goals. Section 5 elaborates on the cryptographic
protocol for the private set intersection. Section 6 analyzes the security of our proposed
protocols. Section 7 conducts a series of experimental comparisons. Finally, Section 8
concludes this paper.
**2. Related Work**
PSI was first proposed by Freedman et al. [30], who transformed the element comparison problem into the polynomial root problem and realized PSI through multiplicative
homomorphic encryption. However, when the polynomial order is large, it will lead
to a costly exponential computation of the homomorphic encryption. In recent years,
many researchers have intensively studied the PSI problem, and many PSI protocols
with high efficiency and low communication overhead have emerged. PSI computing
protocols are mainly divided into two categories according to whether there is a third
party, namely, the traditional PSI computing protocols based on public-key encryption,
garbled circuits [31–33], and oblivious transfer [34], and the cloud-assisted PSI computing protocols that use cloud servers to complete the computation.
Traditional PSI computing protocols rely on a series of basic cryptographic techniques,
mainly divided into PSI based on public-key encryption, PSI based
on garbled circuits, and PSI based on oblivious transfer (OT). The PSI protocol proposed by Freedman et al. [34] is
based on the public-key encryption mechanism. This scheme represents the elements of the
set as the roots of polynomials and uses polynomial evaluation to calculate the intersection; however,
the cost of computation grows with the order of the polynomials.
Hazay et al. also improved the scheme of [30], adopting a bit-commitment protocol to
prevent inconsistent input data on the server [35] so that the PSI protocol
can withstand malicious adversaries. In 2012, Huang et al. first proposed
PSI computing protocols based on garbled circuits [36], namely the Bitwise-AND (BWA),
Pairwise-Comparisons (PWC), and Sort-Compare-Shuffle (SCS) protocols. In 2013, the PSI
protocol proposed by Dong et al. [37] used OT technology for the first time. The author
used OT technology to ensure the security of the protocol. Pinkas et al. [38] proposed
a new PSI protocol based on Hash and random OT protocols and optimized the SCS
protocol in [36]. The computational efficiency of the protocol was greatly improved, and
the complexity of the algorithm was also reduced. Based on the article [34], Freedman et al.
further optimized and improved their scheme in 2014 [39]. Specifically, the scheme uses
different hash functions for the client and server when mapping the set elements. In 2018,
Pinkas et al. realized PSI based on unintentional pseudorandom function [40] through the
circuit. In 2020, Pinkas et al. [12] constructed a PSI protocol with malicious security based
on the protocols [41] in the literature. The traditional PSI does not need the assistance of a
third party, but in the application, the participants are generally resource-constrained users,
who are insufficient in providing sufficient data storage and computing power.
With the development of cloud computing, the PSI protocol based on cloud servers
began to develop. The cloud-assisted PSI scheme provides a new optimization method
for the existing PSI scheme by the excellent storage and computing capabilities of the
cloud server. The cloud-assisted PSI uses the third-party cloud computing framework to
complete the calculation and uses the storage and computing resources of the cloud server
to enable the protocol to handle large-scale datasets. Kerschbaum [42] implemented an
anti-collusion outsourced PSI protocol through two single functions, but the method carries
a risk of brute-force cracking. Then, Kerschbaum [43] proposed another kind of cloud-assisted PSI using Bloom filters and homomorphic encryption. Liu et al. [19] proposed a
relatively simple PSI protocol, but it discloses the cardinality of the set intersection. Abadi
et al. [44] implemented a PSI protocol using homomorphic encryption and polynomial
interpolation in 2015. This protocol outsources the clients' sets to a third-party server,
which can then perform an unlimited number of PSI operations. Based on this work, a verifiable cloud-outsourced PSI
protocol [45] was proposed to ensure the privacy and integrity of data. Ali et al. [46] proposed
an attribute-based private set intersection scheme in which the cloud server can enforce the
corresponding access rights of the participants. PSI protocols based on cloud servers
can use the computing and storage capabilities of the cloud server, but they raise
the privacy-disclosure problem of data outsourcing, and the excessive cost borne by users when
running the protocol is another problem that needs to be solved. Table 1 shows the
comparison between our scheme and existing schemes.
**Table 1. Comparison with existing schemes.**
| | CMPSI | [46] | [47] | [48] | [19] | [42] | [43] |
The year | 2023 | 2020 | 2019 | 2014 | 2014 | 2012 | 2012
Private against the CSP | ✓ | ✓ | ✓ | ✓ | ✗ | ✓ | ✓
PSI computation authorization | ✓ | ✓ | ✓ | ✓ | ✗ | ✓ | ✓
Supports multiple user queries | ✓ | ✓ | ✓ | ✓ | ✗ | ✗ | ✗
Participants can go offline after uploading data | ✓ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗
CSP can collude with participants | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗
**3. Preliminaries**
In this section, we first introduce the concept of private set intersection and give an
example to make it concrete. Then, we introduce the MKTFHE cryptosystem used in our system
and present its algorithms, using the NAND gate as an example. Table 2 lists
some of the symbols used in this paper.
**Table 2. Notation used.**
**Notations** | **Definition**
λ | Security parameter
Z | Integer set
T | The real torus, over which (R)LWE samples are defined
s_i | Private key of participant i
(PK_i, BK_i, KS_i) | Public key set of participant i
[[x]]_s_i | Encryption of data x under s_i
MKHE_NAND | NAND gate in multi-key TFHE
CMPSI | Cloud-assisted multi-party private set intersection
_3.1. Private Set Intersection_
PSI allows two parties holding sets to compare encrypted versions of these sets and
compute their intersection. Let the two parties be sender X and receiver Y, holding
datasets of sizes N_x and N_y, respectively, with each element σ bits long. In a basic PSI
protocol, receiver Y encrypts its own dataset and sends it to sender X. For each of Y's
items, sender X computes the homomorphic product of the differences with all of its own
items and sends the result to receiver Y. Y decrypts X's result and obtains the final
intersection information. The basic PSI protocol is shown in Figure 1.
**Figure 1. Basic PSI protocol.**
In the scheme of this paper, the storage of data and the computation are performed
on the cloud server. We construct a new PSI scheme using fully homomorphic encryption.
Both the sender and the receiver encrypt their data locally and then send them to the cloud server.
Suppose that the sender has encrypted data a_1, . . ., a_N_X and the receiver has encrypted
data b_1, . . ., b_N_Y. Both parties send their encrypted data to the cloud server. On the cloud
server, for each receiver item b_i, the value c_i = ∏_{0<j≤N_x} (b_i − a_j) is computed. c_i is a Boolean
value indicating whether the receiver's item b_i is in sender X's set. Figure 2
shows the handshake model of this scheme.
**Figure 2. Handshake model.**
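As a concrete illustration, here is a minimal plaintext sketch of the product-of-differences membership test (small integer sets are assumed; in the actual scheme every subtraction and multiplication is evaluated homomorphically over ciphertexts):

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

// Plaintext version of the membership test: the product of differences
// c_i is zero exactly when b_i matches some element of the sender's set.
int main() {
    std::vector<int64_t> sender   = {3, 17, 42};  // X's items
    std::vector<int64_t> receiver = {5, 17};      // Y's items

    for (int64_t b : receiver) {
        int64_t c = 1;
        for (int64_t a : sender) c *= (b - a);
        std::cout << b << (c == 0 ? " is" : " is not")
                  << " in the sender's set\n";
    }
}
```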
_3.2. MKTFHE Cryptosystem_
Homomorphic encryption is the computation of encrypted data to obtain an encrypted
computational result, such that decrypting the result yields the same value as performing the same operations on the unencrypted plaintext. Fully homomorphic encryption [24,25] is homomorphic encryption that
supports both additive and multiplicative operations. Fully homomorphic encryption
over the torus (TFHE) [26] is a type of fully homomorphic encryption that can accomplish
fast comparisons and supports operations on arbitrary Boolean circuits. TFHE differs from
other FHE schemes in that it supports fast bootstrapping to reduce noise during ciphertext
operations. In this paper, we use multi-key TFHE [49] to meet the needs of our system.
MKTFHE is a multi-key version of TFHE that can compute Boolean circuits on ciphertexts
encrypted under different keys, performing bootstrapping to refresh the noise as
each binary gate is computed. However, the MKTFHE library only implements multi-key
homomorphic NAND gates, which cannot meet the needs of our system. The following
describes the five components of MKTFHE and gives an example of the homomorphic
computation process with a multi-key homomorphic NAND gate.
1. **Setup(1^λ):** Takes as input the security parameter λ and returns the public parameter pp^MKTFHE.
(a) Run LWE.Setup(1^λ) to generate the LWE parameter pp^LWE = (n, χ, α, B′, d′). In the LWE parameters, n is the dimension of the LWE secret, χ is the key distribution of the LWE secret, α is the error rate, B′ is the decomposition basis, and d′ is the dimension of the key-switching gadget vector. We use the key-switching gadget vector g′ = (B′^{−1}, . . ., B′^{−d′}).
(b) Run RLWE.Setup(1^λ) to generate the RLWE parameter pp^RLWE = (N, ψ, B, d, a). We define N as the dimension of the RLWE secret (a power of 2), ψ as the distribution of the RLWE secret over R with error rate α, B ≥ 2 as an integer base, d as the decomposition dimension, and g = (B^{−1}, . . ., B^{−d}) as the gadget vector. a is a uniformly distributed sample over T^d.
(c) Returns the generated public parameter pp^MKTFHE = (pp^LWE, pp^RLWE).
2. **KeyGen(pp^MKTFHE):** Each participant generates its keys independently. Takes the public parameter pp^MKTFHE as input and returns the secret key s_i and the public key set (PK_i, BK_i, KS_i).
(a) Generate the LWE secret s_i ← LWE.KeyGen(). This step only samples the key from the distribution χ.
(b) Run (z_i, b_i) ← RLWE.KeyGen(), and set the public key to PK_i = b_i. Sample z from the distribution ψ, and then set z = (1, z). Take an error vector e from D_α^d and compute the public key b = −z · a + e (mod 1). For z_i = z_{i,0} + z_{i,1}X + . . . + z_{i,N−1}X^{N−1}, write z_i^∗ = (z_{i,0}, −z_{i,N−1}, . . ., z_{i,1}) ∈ Z^N.
(c) For j ∈ [n], generate (d_{i,j}, F_{i,j}) ← RLWE.UniEnc(s_{i,j}, z_i); this step encrypts the LWE secret using the RLWE secret. Set the bootstrapping key to BK_i = ((d_{i,j}, F_{i,j}))_{j∈[n]}. Taking a random value r from ψ, one can think of d as the LWE key s encrypted under the random value r, and of F as the random value r encrypted under the RLWE key z.
(d) Generate a key-switching key KS ← LWE.KSGen(z_i^∗, s_i), capable of converting an LWE ciphertext corresponding to t ∈ Z^N into another LWE ciphertext of the same message encrypted under s.
(e) Returns the key s_i and the triple (PK_i, BK_i, KS_i) of public key, bootstrapping key, and key-switching key, respectively.
3. **Enc(m):** Takes the data m to be encrypted as input and returns a TLWE ciphertext [[m]] = (b, a) ∈ T^{n+1} satisfying b + ⟨a, s⟩ ≈ (1/4)m (mod 1).
(a) Using standard LWE encryption, sample a uniformly from T^n as the mask and sample e from D_α as the error.
(b) Output the ciphertext [[m]] = (b, a) ∈ T^{n+1}, where b + ⟨a, s⟩ ≈ (1/4)m (mod 1).
4. **Dec([[m]], {s_i}_{i∈[k]}):** Takes as input a TLWE ciphertext [[m]] = (b, a_1, . . ., a_k) ∈ T^{kn+1} together with a set of keys (s_1, . . ., s_k) and returns the decrypted message m that minimizes | b + ∑_{i=1}^{k} ⟨a_i, s_i⟩ − (1/4)m |.
(a) Input [[m]] = (b, a_1, . . ., a_k) ∈ T^{kn+1} with a set of keys (s_1, . . ., s_k).
(b) Returns the bit m ∈ {0, 1} that minimizes | b + ∑_{i=1}^{k} ⟨a_i, s_i⟩ − (1/4)m |.
5. **NAND([[m_1]], [[m_2]], {(PK_i, BK_i, KS_i)}_{i∈[k]}):** Takes two TLWE ciphertexts and the public keys as input. Expands [[m_1]] ∈ T^{k_1 n+1} and [[m_2]] ∈ T^{k_1 n+1} to [[m_1′]], [[m_2′]] ∈ T^{kn+1} and evaluates the gate homomorphically on the encrypted bits. Then the algorithm evalu-
ates the decryption circuit of the TLWE ciphertext and executes the multi-key key-switching algorithm. Finally, it returns the TLWE ciphertext of the same message under joint-key encryption.
(a) Given two ciphertexts [[m_1]] ∈ T^{k_1 n+1} and [[m_2]] ∈ T^{k_1 n+1}, let k be the number of participants associated with either [[m_1]] or [[m_2]]. In a public key set, PK_i = b_i is the public key, BK_i = ((d_{i,j}, F_{i,j}))_{j∈[n]} is the bootstrapping key, and KS_i is the key-switching key of the i-th participant. Expand the ciphertexts [[m_1]] and [[m_2]] to [[m_1]]′, [[m_2]]′ ∈ T^{kn+1}, i.e., the same messages under the joint key s = (s_1, . . ., s_k) ∈ Z^{kn}. The expansion is a rearrangement, with 0 placed into the empty slots. The expanded ciphertexts are then used to perform the calculations. Only the NAND gate is supported in the library.
(b) Use the Mux gate to implement the main calculation: for i ∈ [k], let ã_i = (ã_{i,j})_{j∈[n]}. For i ∈ [k] and j ∈ [n], recursively compute [[c]] ← [[c]] + RLWE.Prod([[c]] · X^{ã_{i,j}} − [[c]], (d_{i,j}, F_{i,j}), {b_l}_{l∈[k]}), where RLWE.Prod([[c]], (d_i, F_i), {b_j}_{j∈[k]}) is a hybrid product algorithm that multiplies a single-key encrypted ciphertext (d_i, F_i) into a multi-key RLWE ciphertext [[c]].
(c) For [[c]] = (c_0, c_1, . . ., c_k) ∈ T^{k+1}, let b^∗ be the constant term of c_0 and, for i ∈ [k], let a_i^∗ be the vector of coefficients of c_i. Compute the LWE ciphertext [[m]]^∗ = (b^∗, a_1^∗, . . ., a_k^∗) ∈ T^{kn+1}. Finally, a multi-key key-switching algorithm is executed, returning the ciphertext [[m]]″ ← LWE.MKSwitch([[m]]^∗, {KS_i}_{i∈[k]}), where LWE.MKSwitch takes the expanded ciphertext and a series of key-switching keys and returns a ciphertext of the same message under joint-key encryption.
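For intuition about the Enc/Dec phase relation b + ⟨a, s⟩ ≈ (1/4)m (mod 1), here is a toy, insecure single-key sketch in which doubles stand in for torus elements. The parameters mirror Table 4, but the sampling and the decryption threshold are simplified assumptions, not the library's implementation:

```cpp
#include <cmath>
#include <iostream>
#include <random>
#include <vector>

// Toy single-key TLWE over the torus T = [0,1): encrypt one bit m so that
// b + <a,s> ≈ m/4 (mod 1), then decrypt by rounding the phase back to m/4.
int main() {
    const int n = 560;                                    // LWE dimension
    std::mt19937 rng(7);
    std::uniform_real_distribution<double> torus(0.0, 1.0);
    std::bernoulli_distribution keybit(0.5);
    std::normal_distribution<double> noise(0.0, 3.05e-5); // error rate alpha

    std::vector<int> s(n);                                // binary secret key
    for (int& bit : s) bit = keybit(rng);

    auto frac = [](double x) { return x - std::floor(x); };

    int m = 1;                                            // message bit
    std::vector<double> a(n);                             // uniform mask
    double dot = 0.0;
    for (int i = 0; i < n; ++i) { a[i] = torus(rng); dot += a[i] * s[i]; }
    double b = frac(0.25 * m - dot + noise(rng));         // b = m/4 - <a,s> + e

    double phase = frac(b + dot);                         // ≈ m/4 (mod 1)
    int m_dec = (phase > 0.125 && phase < 0.375) ? 1 : 0; // nearest of {0, 1/4}
    std::cout << "decrypted bit: " << m_dec << "\n";
}
```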
**4. System Model and Design Goal**
_4.1. Problem Formulation_
Suppose the receiver Y has a dataset TY, and Y wants to know their intersection with
other data owners but does not want to expose more information. The data owners encrypt
their datasets separately and send them to the cloud server. The cloud server can store this
encrypted information but cannot decrypt it. Data receiver Y encrypts its data and uploads
it to the cloud server, which executes privacy intersection and obtains the intersection
information of dataset TY with other datasets. The cloud server computes and returns
the cryptographic result to receiver Y. Y decrypts the intersection result and obtains the
intersection information. Note that each data owner including the data receiver has their
separate key to encrypt the data.
_4.2. System Model_
In Section 3.1, we mention the flow of the basic PSI protocol, in which the sender
interacts directly with the receiver for information. Unlike the basic PSI protocol, our
system consists of four entities, which are Parameter Generation Center (PGC), Cloud
Server (CS), Data Receiver (DR), and Data Owners (DOs). DO owns its own dataset and
is able to let other participants obtain information about the intersection of the dataset
but does not want to expose more information. DR wants to query the intersection of
its own dataset with the dataset of other participants and does not want to expose more
information. Specifically, PGC is responsible for generating public parameters in the system
and sending them to other entities. CS can store a large amount of data and has excellent
computing resources. DR needs to query the intersection. DOs provide their encrypted
data to CS. Note that in our system, the data owners can be multiple participants. The
general model of our private set intersection system is shown in Figure 3.
**Figure 3. System model.**
1. PGC: PGC generates the public parameters for our system and sends them to each entity involved in the computation (see ① in Figure 3).
2. CS: CS has huge storage resources to store the encrypted data of the participating parties. At the same time, CS has enough computing power to compute the intersection of the participants' datasets.
3. DR: DR generates its own private key and public key set from the public parameters, encrypts its data with the private key and sends it to CS (see ③), and receives the computation results sent by CS (see ④).
4. DOs: Each DO generates its own private key and public key set from the public parameters, encrypts its dataset with the private key, and sends it to CS (see ②).
Please note that in our system, the participants do not need to be online all the time.
Since CS stores the encrypted data, the DOs can go offline after they send their encrypted
data to CS. Similarly, DR can be offline after sending its data until CS returns the calculation
results. In our scheme, a DO can also act as a DR to issue queries, and a DR can
query its intersection with multiple DOs, achieving multi-user queries.
_4.3. Threat Model_
In our system model, the participating entities are curious but honest individuals.
Curious means that the server and the participants try to use existing resources and data
to obtain the data of other participants and are curious about the data of other entities;
honest means that the server and the participants do not falsify the experimental data and
follow the prescribed protocols to complete the computation. A is the active adversary we
introduce, who seeks to obtain the real data of other entities, specifically the real data of the
DOs and the DR. We assume that adversary A has the following capabilities.
1. A can obtain all the data that passes through the public channel.
2. A may collude with CS and try to obtain the original values of the encrypted data uploaded by the DOs and the DR.
3. A may be a DR, with access to its dataset information, the encrypted query results returned by the CS, and the encryption and decryption capabilities of the DR.
4. A may be a DO, with access to its dataset information and its encryption and decryption capabilities.
Note that in our threat model, the attacking adversary A can be a DR. Since the joint
key of multiple participants must be used to decrypt the computed result of
CS, the final decryption result is not available when A has only the key of DR. Unlike
existing schemes, when the attacking adversary A is a CSP, A can collude with DR or DO. In
our scheme, decryption requires the keys of all participants; thus, a CSP colluding
with some DR or DO still cannot decrypt the computation results.
_4.4. Design Goal_
According to the system model and threat model proposed above, the design objectives
of this paper are as follows.
1. Data privacy: the original data of DR, the query intersection result, and the original datasets of the DOs cannot be revealed to adversary A.
2. Calculation accuracy: The accuracy of the calculation results of the system cannot be
reduced compared with other methods.
3. Low overhead: The time and upload overhead of the calculation cannot be too large
compared with other methods.
4. Offline participant: The participant should be able to go offline after encrypting the
data and uploading it to ensure the scalability of the system.
**5. Cloud-Assisted Multi-Party Private Set Intersection**
In this section, we first introduce the initialization of the system. Then, we design the
secure computing sub-protocols based on MKTFHE. Finally, we describe our private set
intersection scheme.
_5.1. System Initialization_
Our system allows the DR to query the intersection of its set with those of multiple
participants; we assume there are one DR and n DOs. First, PGC generates public
parameters and sends them to CS, DR, and the n DOs. Then, each entity that receives the
public parameters generates its own public key set (PK, BK, KS) and private key s from
the public parameters.
_5.2. Security Protocol Design_
In this paper, four secure computation protocols are proposed to help complete the
private set intersection: a secure AND gate computation protocol (SCAND),
a secure OR gate computation protocol (SCOR), a secure XNOR gate computation protocol (SCXNOR),
and a secure comparison protocol (SCP).
5.2.1. Secure AND Gate Computation Protocol
We implement the AND operation between two MKLwe samples using the addition
of multi-key LWE samples (MKlweAddTo). Suppose CS has two MKLwe samples ca and cb:
initialize an intermediate sample temp, apply MKlweAddTo twice to add in ca and cb,
and return the result in res (Algorithm 1).
**Algorithm 1 Secure AND gate computation protocol (SCAND).**
**Input: MKLwe Sample ca, cb.**
**Output: MKLwe Sample res.**
1: CS initializes temp using the public parameter pp to hold the intermediate variable
LWE sample.
2: AndConst = modSwitchToTorus32(−1, 8)
3: temp ← MKlweNoiselessTrivial(AndConst, pp)
4: temp ← MKlweAddTo(temp + ca)
5: res ← MKlweAddTo(temp + cb)
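For intuition about the constant used above (this mirrors the single-key TFHE gate construction, shown here in plaintext): bits are encoded as torus values near ±1/8, and the sign of the affine combination −1/8 + ca + cb, which bootstrapping subsequently cleans up, is exactly the AND of the two bits. A minimal plaintext check:

```cpp
#include <iostream>

// Encode bit 0 as -1/8 and bit 1 as +1/8 on the torus (plaintext stand-in).
double enc(int bit) { return bit ? 0.125 : -0.125; }

int main() {
    for (int a = 0; a <= 1; ++a)
        for (int b = 0; b <= 1; ++b) {
            // AndConst + ca + cb: positive exactly when a = b = 1
            double phase = -0.125 + enc(a) + enc(b);
            std::cout << a << " AND " << b << " = " << (phase > 0) << "\n";
        }
}
```

The OR rule in the next protocol follows the same affine pattern with the constant +1/8, whose sign is positive whenever at least one input bit is 1.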
5.2.2. Secure OR Gate Computation Protocol
We implement the OR operation between two MKLwe samples. As with SCAND
above, we use the addition MKlweAddTo between multi-key LWE samples. Suppose CS has
two MKLwe samples ca and cb: initialize an intermediate sample temp, apply MKlweAddTo
twice to add in ca and cb, and return the result in res, which holds the OR of ca and
cb (Algorithm 2).
**Algorithm 2 Secure OR gate computation protocol (SCOR).**
**Input: MKLwe Sample ca, cb.**
**Output: MKLwe Sample res.**
1: CS initializes temp using the public parameter pp to hold the intermediate variable
LWE sample.
2: ORConst = modSwitchToTorus32(1, 8)
3: temp ← MKlweNoiselessTrivial(ORConst, pp)
4: temp ← MKlweAddTo(temp + ca)
5: res ← MKlweAddTo(temp + cb)
5.2.3. Secure XNOR Gate Computation Protocol
We implement the XNOR operation between two MKLwe samples using the combined
addition and multiplication of multi-key LWE samples (MKlweAddMulTo). Suppose CS has two MKLwe samples ca and cb: initialize an
intermediate sample temp, add 2 ∗ ca and 2 ∗ cb using MKlweAddMulTo twice so that temp
holds the XOR of ca and cb, and apply the multi-key
homomorphic NOT gate SCNOT once to obtain the XNOR result. Note
that in MKTFHE, the cryptographic scheme we use, the NOT gate
requires no bootstrapping, so its computational overhead is very small
(Algorithm 3).
**Algorithm 3 Secure XNOR gate computation protocol (SCXNOR).**
**Input: MKLwe Sample ca, cb.**
**Output: MKLwe Sample res.**
1: CS initializes temp using the public parameter pp to hold the intermediate variable
LWE sample.
2: XNORConst = modSwitchToTorus32(1, 8)
3: temp ← MKlweNoiselessTrivial(XNORConst, pp)
4: temp ← MKlweAddMulTo(temp + 2 ∗ ca)
5: temp ← MKlweAddMulTo(temp + 2 ∗ cb)
6: res ← **SCNOT(temp)**
5.2.4. Secure Comparison Protocol
**SCP is important in our protocol and is used to determine whether two input**
ciphertext vectors are equal. Suppose DR has sent its encrypted data [[x]]_sDR =
([[x_1]]_sDR, . . ., [[x_n]]_sDR) to CS and DO has likewise sent its encrypted data [[y]]_sDO =
([[y_1]]_sDO, . . ., [[y_n]]_sDO) to CS, where sDR and sDO are the private keys of DR and DO, respectively. For each pair of components of [[x]]_sDR and [[y]]_sDO,
the protocol performs the SCXNOR and SCAND protocols to finally obtain a ciphertext holding a
Boolean value (Algorithm 4).
**Algorithm 4 Secure Comparison Protocol (SCP).**
**Input: Encrypted data vectors [[x]]_sDR = ([[x_1]]_sDR, . . ., [[x_n]]_sDR), [[y]]_sDO = ([[y_1]]_sDO, . . ., [[y_n]]_sDO).**
**Output: Encrypted Boolean value [[z]]_s.**
1: CS initializes the intermediate data vector [[v]] = ([[v_1]], . . ., [[v_n]]) using the public parameter pp.
2: for k = 0 to n − 1 do
3: [[v_k]]_s ← [[x_k]]_sDR XNOR [[y_k]]_sDO
4: [[z]]_s ← [[v_k]]_s AND [[z]]_s
5: end for
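A minimal plaintext stand-in for SCP (booleans replace the MKLwe samples, and the accumulator [[z]] is assumed to start as an encryption of 1, which Algorithm 4 leaves implicit):

```cpp
#include <array>
#include <iostream>

// Bitwise XNOR of two k-bit items followed by an AND-reduction, as in SCP.
template <std::size_t K>
bool scp_plain(const std::array<bool, K>& x, const std::array<bool, K>& y) {
    bool z = true;                    // [[z]] starts as (an encryption of) 1
    for (std::size_t k = 0; k < K; ++k) {
        bool v = !(x[k] ^ y[k]);      // SCXNOR: 1 iff the two bits match
        z = z && v;                   // SCAND: accumulate into the result
    }
    return z;                         // 1 iff all K bits are equal
}

int main() {
    std::array<bool, 4> a{true, false, true, true};
    std::array<bool, 4> b{true, false, true, true};
    std::array<bool, 4> c{true, true, false, true};
    std::cout << scp_plain(a, b) << ' ' << scp_plain(a, c) << "\n";  // 1 0
}
```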
_5.3. Private Set Intersection_
CMPSI is performed by CS, DR, and the DOs working together. Suppose DR wants to obtain the intersection information of its dataset and the DOs' datasets. First, each DO encrypts its
dataset with its own private key s_DO and sends the encrypted dataset
A_DO = ([[a_1]]_sDO, [[a_2]]_sDO, . . ., [[a_m]]_sDO) together with the public key set (PK_sDO, BK_sDO, KS_sDO)
to CS; the DO can then go offline. DR encrypts its dataset with its own private key
s_DR and sends the encrypted dataset B_DR = ([[b_1]]_sDR, [[b_2]]_sDR, . . ., [[b_n]]_sDR) together with its
public key set (PK_sDR, BK_sDR, KS_sDR) to CS; it can then stay offline until CS completes
the calculation. CS receives the encrypted datasets sent by the DOs and DR, stores the data,
and performs the secure computation in a secure environment. Finally, DR receives the
encrypted result computed by CS and decrypts it using the joint key to obtain the intersection. Let the encrypted dataset A_DO of a DO contain m items with k Boolean values each, and let the encrypted dataset B_DR of DR contain n items with k Boolean values each.

S1 (DOs): Each DO encrypts its dataset using its own key s_DO generated from the public
parameter pp issued by PGC and sends it to CS. CS stores the encrypted datasets of all
DOs; item i of dataset A_DO is [[a_i]]_sDO = ([[a_{i,1}]]_sDO, . . ., [[a_{i,k}]]_sDO).

S2 (DR): DR uses the public parameter pp to generate its own key s_DR, encrypts its
dataset, and sends it to CS. CS uses DR's encrypted dataset for the secure computation; item j of dataset B_DR is [[b_j]]_sDR = ([[b_{j,1}]]_sDR, . . ., [[b_{j,k}]]_sDR).

S3 (CS): CS receives the encrypted dataset B_DR = ([[b_1]]_sDR, . . ., [[b_n]]_sDR) from DR
and the encrypted dataset A_DO = ([[a_1]]_sDO, . . ., [[a_m]]_sDO) from the DO.
For j ∈ {1, 2, . . ., n} and i ∈ {1, 2, . . ., m}, each item [[b_j]]_sDR of B_DR is compared with each item
[[a_i]]_sDO of A_DO by running SCP([[a_i]]_sDO, [[b_j]]_sDR). The result [[g_i]]_s is an encrypted Boolean value recording
whether the current [[b_j]]_sDR is the same as item i of A_DO.

S4 (CS): For the computed results [[g_1]]_s, . . ., [[g_m]]_s, CS runs SCOR to obtain [[c_j]]_s.
[[c_j]]_s is an encrypted Boolean value indicating
whether item j of B_DR exists in A_DO: a value of 1 means it exists and 0 means it
does not.

S5 (CS): For j ∈ {1, 2, . . ., n}, execute S4, and send the calculated result C = {[[c_1]]_s,
[[c_2]]_s, . . ., [[c_n]]_s} to DR.
S6 (DR): Receive the calculation result C = {[[c_1]]_s, [[c_2]]_s, . . ., [[c_n]]_s} from CS
and decrypt it using the joint key to obtain the final result.
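To keep the S3–S5 data flow straight, here is a plaintext sketch with hypothetical three-bit items (bitwise equality stands in for SCP over ciphertexts, and the OR-accumulation stands in for SCOR):

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

using Bits = std::vector<std::uint8_t>;  // one k-bit item, plaintext stand-in

// Per-item equality, as SCP computes it: XNOR each bit pair, then AND-reduce.
bool equal_bits(const Bits& a, const Bits& b) {
    bool z = true;
    for (std::size_t k = 0; k < a.size(); ++k) z = z && !(a[k] ^ b[k]);
    return z;
}

int main() {
    std::vector<Bits> A = {{1, 0, 1}, {0, 1, 1}};  // a DO's dataset (S1)
    std::vector<Bits> B = {{0, 1, 1}, {1, 1, 1}};  // DR's dataset (S2)

    // S3-S5: for each item b_j, OR together its equality results g_i
    // against every a_i to obtain the membership bit c_j.
    for (const Bits& b : B) {
        bool c = false;
        for (const Bits& a : A) c = c || equal_bits(a, b);
        std::cout << c << "\n";  // prints 1 then 0
    }
}
```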
Please note that in our PSI scheme, all ciphertext computation is performed under FHE.
All the calculations are performed on the cloud server, and the data on the
cloud server are ciphertexts, so the privacy of the participants is protected.
During the computation, the DR obtains no information other than its own
information and the query result. The DOs obtain no information other than their
own information and do not expose their information to other participants. The result of
the CS calculation is in encrypted form and cannot be decrypted by any participant
except the DR, which protects the privacy of the calculation result.
**6. Security Analysis**
In this section, we prove that our scheme is secure under a semi-honest model. We
prove the security of the MKTFHE cryptosystem and of the SCAND, SCOR, SCXNOR, SCP, and
PSI schemes separately. We first define security in the semi-honest model below.
**Definition 1 (Security in the semi-honest model).** In protocol π, let a_i be the input
of participant P_i and b_i be the output of P_i. REAL_i^π denotes the view of P_i when protocol π is
actually executed. IDEAL_i^π denotes the view of P_i, simulated from a_i and b_i, in an ideal-world execution
of protocol π. If REAL_i^π is computationally indistinguishable from IDEAL_i^π, then
protocol π is secure in the semi-honest model [50].
Note that in our protocols, the execution view usually consists of the exchanged data
and the information that can be computed from these data. It follows from Definition 1 that,
to prove the security of these protocols, the simulated view must be computationally
indistinguishable from the actual execution view.
_6.1. Security of MKTFHE Cryptosystem_
Privacy of the LWE Assumption: The j-th component K_j of a key-switching key KS = {K_j}_{j∈[N]} from t ∈ Z^N to s ∈ Z^n is generated by adding t_j · g′ to the first column of a T^{d′×(n+1)} matrix whose rows are LWE instances under the secret s. Therefore, KS ← LWE.KSGen(t, s) is computationally indistinguishable from the uniform distribution over (T^{d′×(n+1)})^N, where the LWE assumption has parameters (n, χ, β) and s is sampled according to χ.

Privacy of the RLWE Assumption: Under the assumption with parameters (N, ψ, α), the uniform distribution over T^{d×5} is computationally indistinguishable from the distribution D_0 = {(a, b, d, F) : pp^RLWE ← RLWE.Setup(1^λ), (z, b) ← RLWE.KeyGen(), (d, F) ← RLWE.UniEnc(µ, z)} for any µ ∈ R. We consider the following sequence of distributions. First, we transform F = [f_0 | f_1] and (b, a) into independent uniform distributions over T^{d×2} using the RLWE assumption with secret z. Therefore, D_0 is computationally indistinguishable from D_1 = {(a, b, d, F) : a, b ← U(T^d), F ← U(T^{d×2}), r ← ψ, e_1 ← D_α^d, d = r · a + µ · g + e_1 (mod 1)}. Then, d is made uniformly distributed using the RLWE assumption with secret r. Therefore, D_1 is indistinguishable from D_2 = {(a, b, d, F) : a, b, d ← U(T^d), F ← U(T^{d×2})}. Since D_2 is independent of µ, our RLWE scheme is semantically secure.

In summary, under the (R)LWE assumption, our cryptosystem is semantically secure; thus, we can choose the parameters pp^LWE and pp^RLWE appropriately to achieve a security level of at least λ bits.
_6.2. Security of Secure Computing Protocols_
In this section, we demonstrate the security of our secure computing sub-protocols,
including SCAND, SCOR, SCXNOR, and SCP.
**Theorem 1.** The proposed SCAND is secure under the semi-honest model.

**Proof of Theorem 1.** We use REAL_CS^Π(SCAND) to denote the real-world execution view of CS, specified as REAL_CS^Π(SCAND) = {[[ca]], [[cb]], [[AndConst]], [[temp]], [[res]]}. [[AndConst]] is obtained from [[−1]] and [[8]] by modSwitchToTorus32. [[temp]] is obtained from [[AndConst]] and [[ca]] by MKlweNoiselessTrivial and MKlweAddTo. We assume that IDEAL_CS^Π(SCAND) = {[[ca′]], [[cb′]], [[temp′]], [[res′]], [[AndConst′]]} is the simulated execution view in the ideal world, where [[ca′]], [[cb′]], [[temp′]], [[res′]], and [[AndConst′]] are chosen randomly from T^{n+1}. The semantic security of our encryption scheme makes [[ca]], [[cb]], [[temp]], and [[AndConst]] computationally indistinguishable from [[ca′]], [[cb′]], [[temp′]], and [[AndConst′]], respectively; likewise, [[res]] is computationally indistinguishable from [[res′]]. Thus, REAL_CS^Π(SCAND) and IDEAL_CS^Π(SCAND) are computationally indistinguishable, and SCAND is secure under the semi-honest model.
**Theorem 2.** The proposed SCOR is secure under the semi-honest model.

**Proof of Theorem 2.** We use REAL_CS^Π(SCOR) to denote the real-world execution view of CS, specified as REAL_CS^Π(SCOR) = {[[ca]], [[cb]], [[temp]], [[ORConst]], [[res]]}. [[ORConst]] is obtained from [[1]] and [[8]] by modSwitchToTorus32. [[temp]] is obtained from [[ca]] and [[ORConst]] by MKlweNoiselessTrivial and MKlweAddTo. [[res]] is obtained from [[temp]] and [[cb]] by MKlweAddTo. We assume that IDEAL_CS^Π(SCOR) = {[[ca′]], [[cb′]], [[temp′]], [[ORConst′]], [[res′]]} is the simulated execution view in the ideal world, where [[ca′]], [[cb′]], [[temp′]], [[ORConst′]], and [[res′]] are chosen randomly from T^{n+1}. The semantic security of our encryption scheme makes [[ca]], [[cb]], [[temp]], and [[ORConst]] computationally indistinguishable from [[ca′]], [[cb′]], [[temp′]], and [[ORConst′]], respectively; likewise, [[res]] is computationally indistinguishable from [[res′]]. Thus, REAL_CS^Π(SCOR) and IDEAL_CS^Π(SCOR) are computationally indistinguishable, and SCOR is secure under the semi-honest model.
**Theorem 3.** The proposed SCXNOR is secure under the semi-honest model.

**Proof of Theorem 3.** Since the design of SCXNOR parallels that of SCAND and SCOR, the theorem follows by the argument of Theorem 1.
**Theorem 4.** The proposed SCP is secure under the semi-honest model.

**Proof of Theorem 4.** We use REAL_CS^Π(SCP) to denote the real-world execution view of CS, specified as REAL_CS^Π(SCP) = {([[x]], [[y]]), [[z]]}. [[x]] and [[y]] are the encrypted data vectors, and [[z]] is the encrypted result of determining whether [[x]] and [[y]] are equal; it encrypts a bit, 0 or 1. We assume that IDEAL_CS^Π(SCP) = {([[x′]], [[y′]]), [[z′]]} is the simulated execution view in the ideal world, where the encrypted data in [[x′]] and [[y′]] are chosen randomly from T^{n+1}, and [[z′]] is chosen randomly from T^{n+1}. The semantic security of our encryption scheme makes [[x]] and [[y]] computationally indistinguishable from [[x′]] and [[y′]], respectively. In addition, [[z′]] takes 0 or 1 with equal probability, and [[z]] is computationally indistinguishable from [[z′]]. Thus, REAL_CS^Π(SCP) and IDEAL_CS^Π(SCP) are computationally indistinguishable, and SCP is secure under the semi-honest model.
_6.3. Security of CMPSI_
**Theorem 5.** The proposed CMPSI is secure under the semi-honest model, and the security of the encrypted data, the intersection results, and the query data can be guaranteed.
**Proof of Theorem 5.** We can use the above method to prove that our proposed CMPSI is secure under the semi-honest model. In S1, CS obtains the encrypted datasets from the DOs. In S2, CS obtains the encrypted dataset from DR. By Section 6.1, our cryptosystem is semantically secure, and the semi-honest CS cannot distinguish these messages from random values in T^{n+1}. In S3, SCP is executed to obtain the encrypted per-item equality information. Since SCP is secure in our system, the protocol in S3 is secure. In S4, SCOR is used to obtain the final encrypted result. Since SCOR is secure in our system, the protocol in S4 is secure. In S5 and S6, the execution of S4 is repeated, and DR receives the message and decrypts it using the joint key; this step is secure by the security of MKTFHE.
**Theorem 6.** The proposed CMPSI is able to resist man-in-the-middle attacks.

**Proof of Theorem 6.** As shown in Figure 4, the participants represent the DR and DOs in our scenario. Under normal conditions, the participants communicate directly with the CS; Figure 4a shows this communication. A man-in-the-middle attack changes the original communication channel and can access the communication data between a participant and the cloud server; Figure 4b shows its impact. We prove that our model resists man-in-the-middle attacks in three places. First, a DO encrypts its dataset T_DO into [[T_DO]]_sDO using its own key s_DO and sends [[T_DO]]_sDO to CS. An intermediary A obtains [[T_DO]]_sDO through the new channel, but A does not have DO's key, and by the security of MKTFHE, A cannot decrypt [[T_DO]]_sDO. Thus, our model resists the man-in-the-middle attack during the data transmission from DO to CS. Second, DR sends its encrypted data [[T_DR]]_sDR to CS to obtain the intersection information. An intermediary A obtains [[T_DR]]_sDR through the illegal channel; by the security of MKTFHE, A does not have s_DR and cannot recover T_DR from [[T_DR]]_sDR. Thus, our model resists the man-in-the-middle attack during the data transfer from DR to CS. Finally, CS returns the computed intersection information [[T_{DR∩DO}]]_s to DR. The intermediary A obtains [[T_{DR∩DO}]]_s, and by the security of MKTFHE, A does not have the key needed to obtain T_{DR∩DO}. Thus, our model resists the man-in-the-middle attack during the data transmission from CS to DR.
**Figure 4. Man-in-the-middle attack; (a) normal communication; (b) post-attack communication.**
_6.4. Security Services_
According to the above proof of CMPSI security, Table 3 shows the security services
provided by the scheme and a demonstration from our model of how the method provides
each of these functions.
**Table 3. Security services provided.**
**Confidentiality.** Definition: Network information is not disclosed to non-authorized users, entities, or processes. Proof: In our system, DO uses its own key sDO to encrypt its own dataset TDO into [[TDO]]sDO and then sends [[TDO]]sDO to CS. An unauthorized user A illegally obtains [[TDO]]sDO; according to the security of the MKTFHE cryptosystem in Section 6.1, A cannot perform decryption without the key sDO. Therefore, the unauthorized illegal user A cannot obtain the information of DO's dataset TDO.

**Integrity.** Definition: Information is transmitted, exchanged, stored, and processed in such a way that it remains uncorrupted or unmodified, that it is not lost, and that it cannot be changed without authorization. Proof: In our system, DR encrypts the dataset TDR as [[TDR]]sDR using the key sDR and sends [[TDR]]sDR to CS. Attacker A obtains the dataset [[TDR]]sDR through the intermediate channel; according to the definition of the semi-honest model in Section 4.3, A does not modify or corrupt the data, and CS can obtain the dataset [[TDR]]sDR intact.

**Availability.** Definition: Assurance that information is available to authorized users, i.e., assurance that legitimate users can use the required information when needed. Proof: In our system, DR is the legal user. When DR wants to obtain the intersection information of its dataset TDR, DR sends [[TDR]]sDR to CS, and CS sends the computed intersection result [[TDR∩DO]]s to DR. The legitimate user DR can obtain the required data when needed, which proves the usability of our system.

**Non-repudiation.** Definition: The two parties of an information exchange cannot deny that they sent or received information during the exchange process. Proof: In our system, DO sends its encrypted dataset [[TDO]]sDO to CS. According to the definition of the semi-honest model in Section 4.3, DO will not deny that the dataset [[TDO]]sDO is its data, proving the non-repudiation of our system.
**7. Performance Analysis**
In this section, we evaluate the time overhead and communication overhead of our
proposed scheme. The experimental parameters we used [51] are shown in Table 4 below.
According to one study [52], the parameters we use reach a privacy level of at least 110 bits,
which is a common reference in this field.
**Table 4. Parameter sets.**
| LWE-n | LWE-α | LWE-B′ | LWE-d′ | RLWE-N | RLWE-β | RLWE-B | RLWE-d |
|---|---|---|---|---|---|---|---|
| 560 | 3.05 × 10^−5 | 2^2 | 8 | 1024 | 3.72 × 10^−9 | 2^9 | 3 |
The test environment used for our experiments was as follows: a 2.30 GHz Intel (R)
Core(TM) i5-8300H Dell laptop. The programming language we used was C++, and our
system was based on the MKTFHE library. First, we tested the efficiency of the security
subprotocols separately. Then, we tested the communication overhead of our scheme and
compared it with existing schemes. Finally, we tested the overall computation cost of our scheme.
_7.1. Experiments on Security Computing Protocols_
Our secure subprotocol experiments were performed using the MKTFHE library
[(https://github.com/ilachill/MK-TFHE) (1 February 2023). MKTFHE is a proof-of-concept](https://github.com/ilachill/MK-TFHE)
implementation of a multi-key version of TFHE. The code is written on top of the TFHE
[library (https://tfhe.github.io/tfhe/) (1 February 2023). The computation of secure NAND](https://tfhe.github.io/tfhe/)
gates is given in the MKTFHE library. In the MKTFHE-based implementation, our goal
is to implement the MKLwe sample addition and multiplication operations as a way to
implement the other circuit gates needed in our scheme in addition to the NAND gate.
We first performed experiments on single circuit gates, including the secure AND gate
computation protocol, the secure OR gate computation protocol, and the secure XNOR
computation protocol; the experimental results are shown in Table 5. We compared
these with NAND gates and found that the computation efficiency of the individual
gates is comparable.
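As a purely illustrative aside (this plain-Boolean Python sketch is ours, not the MKTFHE library's C++ API), the compositions below show how the additional gates can be built from NAND alone; in the homomorphic setting, each `nand()` call would correspond to one bootstrapped gate evaluation.

```python
# Illustrative sketch only: plaintext Boolean composition of gates from NAND.
# In the homomorphic setting, each nand() would be one bootstrapped evaluation.

def nand(a: int, b: int) -> int:
    return 1 - (a & b)

def not_(a: int) -> int:           # NOT(a) = NAND(a, a)
    return nand(a, a)

def and_(a: int, b: int) -> int:   # AND(a, b) = NOT(NAND(a, b))
    return not_(nand(a, b))

def or_(a: int, b: int) -> int:    # OR(a, b) = NAND(NOT a, NOT b)
    return nand(not_(a), not_(b))

def xnor(a: int, b: int) -> int:   # XNOR = NOT(XOR); XOR built from four NANDs
    t = nand(a, b)
    return not_(nand(nand(a, t), nand(b, t)))

# Truth-table check over all input combinations
for a in (0, 1):
    for b in (0, 1):
        assert and_(a, b) == (a & b)
        assert or_(a, b) == (a | b)
        assert xnor(a, b) == 1 - (a ^ b)
```

The near-identical timings in Table 5 are consistent with each gate costing essentially one bootstrapping, regardless of which Boolean function it computes.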
**Table 5. Experimental results for single circuit gates.**
| Gate Circuit | Key Generation Time (s) | FFT Conversion Time (s) | Bootstrapping Time (s) |
|---|---|---|---|
| AND | 1.973 | 0.039 | 0.226 |
| NAND | 1.982 | 0.038 | 0.227 |
| OR | 1.956 | 0.040 | 0.227 |
| XNOR | 1.975 | 0.039 | 0.220 |
Then, as shown in Table 6, we tested the time overhead of SCP for k = 8,
16, and 32, where k is the bit-width of the data. The results show that the time overhead of the SCP
protocol is linearly related to the number of input bits.
**Table 6. Running time of SCP.**
| k | 8 | 16 | 32 |
|---|---|---|---|
| Running time (s) | 3.52 | 7.17 | 14.20 |
_7.2. Overhead Evaluation_
In our scenario, DOs and DRs are resource-constrained users; thus, it is important
to have a smaller communication overhead. In our scheme, each participant uses their
key to encrypt the data and uploads it to the cloud server; thus, the total communication
overhead is related to the total data size. We tested the communication overhead of
our scheme on datasets with aggregate sizes of 2^8, 2^12, 2^16, and 2^20. We compared our
scheme with the scheme based on RSA [53] and the scheme based on pseudorandom
permutation (PRP) [48]. As shown in Figure 5, our scheme is significantly superior to the
privacy intersection scheme based on RSA. For the server-assisted scheme with limited
security [48], the communication cost of our scheme is also lower. Our experimental results
are the average of ten experiments.
**Figure 5. Communication overhead.**
Our scheme is based on the underlying PSI protocol, and the computation of the
ciphertext is performed directly on the cloud server. To the best of our knowledge, our
proposed scheme is the first scheme that uses MKTFHE to achieve the ideal PSI, and the
time overhead of the scheme is a very important metric. For users with limited resources,
low overhead in the process of data encryption and decryption is necessary. We tested the
time cost of encryption and decryption and the size of the ciphertexts on datasets with
sizes of 2^8, 2^12, 2^16, and 2^20. Table 7 shows that for DOs and DR with limited resources, the
cost of our scheme in data encryption and decryption is very small. Finally, we tested the
computing cost of the cloud server. In the experiment, we used 16-, 32-, and 64-bit data
to test the performance of our proposed scheme. Table 8 shows our experimental
results. The results show that the time cost of the scheme grows with the size of the
dataset and linearly with the bit-width of the data. Note that cloud servers have far
greater computing power than our test machine, so the scheme can be expected to run
faster in actual use.
**Table 7. Cost during encryption.**
| | 2^8 | 2^12 | 2^16 | 2^20 |
|---|---|---|---|---|
| Encryption time (ms) | 13.5 | 208.2 | 3162.2 | 47,987.1 |
| Cipher size (kb) | 3.5 | 57.3 | 917.5 | 15,083.6 |
**Table 8. Cloud computing time (min).**
| Data Set Size | 16-bit | 32-bit | 64-bit |
|---|---|---|---|
| 2^2 | 0.51 | 0.99 | 1.98 |
| 2^4 | 8.47 | 16.02 | 31.70 |
| 2^6 | 137.68 | 273.83 | 547.66 |
**8. Conclusions**
In this paper, we proposed CMPSI, a cloud-assisted private set intersection via
multi-key fully homomorphic encryption, which allows the participants to outsource
the encrypted data to cloud servers for storage and computation. We also designed some
MKTFHE-based secure computing protocols to complete the design of our system. We
analytically demonstrated the security of our scheme under a semi-honest model. Through
experiments, we tested the performance of our proposed scheme and proved that our
scheme has less communication overhead by comparing it with existing schemes. We also
proved the feasibility of the scheme.
As future research work, we plan to apply our proposed MKTFHE to a wider range of
areas, such as association rule mining systems in large shopping malls. In addition, we will
improve our framework to handle more complex computations and further improve the
performance of our system.
**Author Contributions: Conceptualization, C.F.; Methodology, X.L.; Software, C.F. and P.J.; Validation,**
P.J.; Formal analysis, M.L.; Investigation, M.L. and P.G.; Data curation, X.Z.; Writing—original draft,
X.Z.; Writing—review & editing, L.W. and X.L.; Visualization, P.G.; Supervision, L.W. All authors
have read and agreed to the published version of the manuscript.
**Funding: This work was funded by the National Key Technology Research and Development**
Program of China (grant nos. 2021YFB3901000 and 2021YFB3901005); the Civil Aerospace Technology
Advance Research Project of China (D040405); the Application Pilot Plan of Fengyun Satellite (FYAPP-2021.0501).
**Data Availability Statement: Not applicable.**
**Conflicts of Interest: The authors declare no conflict of interest.**
**Abbreviations**
The following abbreviations are used in this manuscript:
PSI Private set intersection
CMPSI Cloud-assisted multi-key private set intersection
TFHE Fully homomorphic encryption over the torus
MKTFHE Multi-key fully homomorphic encryption over the torus
**References**
1. [Abdulsalam, Y.S.; Hedabou, M. Security and privacy in cloud computing: technical review. Future Internet 2022, 14, 11. [CrossRef]](http://doi.org/10.3390/fi14010011)
2. Aburukba, R.; Kaddoura, Y.; Hiba, M. Cloud Computing Infrastructure Security: Challenges and Solutions. In Proceedings of
the 2022 International Symposium on Networks, Computers and Communications (ISNCC), Shenzhen, China, 19–22 July 2022;
pp. 1–7.
3. Shao, Z.; Bo, Y. Private set intersection via public key encryption with keywords search. Secur. Commun. Netw. 2015, 8, 396–402.
[[CrossRef]](http://dx.doi.org/10.1002/sec.988)
4. Shi, R.H.; Mu, Y.; Zhong, H.; Cui, J.; Zhang, S. An efficient quantum scheme for Private Set Intersection. Quantum Inf. Process.
**[2016, 15, 363–371. [CrossRef]](http://dx.doi.org/10.1007/s11128-015-1165-z)**
5. Yang, X.; Luo, X.; Xu, A.W.; Zhang, S. Improved outsourced private set intersection protocol based on polynomial interpolation.
_[Concurr. Comput. Pract. Exp. 2018, 30, e4329. [CrossRef]](http://dx.doi.org/10.1002/cpe.4329)_
6. Tajima, A.; Sato, H.; Yamana, H. Outsourced Private Set Intersection Cardinality with Fully Homomorphic Encryption. In Proceedings of the 2018 6th International Conference on Multimedia Computing and Systems (ICMCS), Rabat, Morocco, 10–12 May
2018.
7. Ruan, O.; Huang, X.; Mao, H. An efficient private set intersection protocol for the cloud computing environments. In Proceedings
of the 2020 IEEE 6th International Conference on Big Data Security on Cloud (BigDataSecurity), Baltimore, MD, USA, 25–27 May
2020; pp. 254–259.
8. Jiang, Y.; Wei, J.; Pan, J. Publicly Verifiable Private Set Intersection from Homomorphic Encryption. In Proceedings of the Security
and Privacy in Social Networks and Big Data: 8th International Symposium, SocialSec 2022, Xi’an, China, 16–18 October 2022;
pp. 117–137.
9. Debnath, S.K.; Kundu, N.; Choudhury, T. Efficient post-quantum private set-intersection protocol. Int. J. Inf. Comput. Secur. 2022,
_[17, 405–423. [CrossRef]](http://dx.doi.org/10.1504/IJICS.2022.122381)_
10. Wang, Q.; Zhou, F.; Xu, J.; Peng, S. Tag-based verifiable delegated set intersection over outsourced private datasets. IEEE Trans.
_[Cloud Comput. 2020, 10, 1201–1214. [CrossRef]](http://dx.doi.org/10.1109/TCC.2020.2968320)_
11. Pinkas, B.; Rosulek, M.; Trieu, N.; Yanai, A. SpOT-light: lightweight private set intersection from sparse OT extension.
In Proceedings of the Advances in Cryptology—CRYPTO 2019: 39th Annual International Cryptology Conference, Santa
Barbara, CA, USA, 18–22 August 2019; pp. 401–431.
12. Pinkas, B.; Rosulek, M.; Trieu, N.; Yanai, A. PSI from PaXoS: fast, malicious private set intersection. In Proceedings of
the Advances in Cryptology—EUROCRYPT 2020: 39th Annual International Conference on the Theory and Applications of
Cryptographic Techniques, Zagreb, Croatia, 10–14 May 2020; pp. 739–767.
13. Chase, M.; Miao, P. Private set intersection in the internet setting from lightweight oblivious PRF. In Proceedings of the Advances
in Cryptology—CRYPTO 2020: 40th Annual International Cryptology Conference, CRYPTO 2020, Santa Barbara, CA, USA,
17–21 August 2020; pp. 34–63.
14. Rindal, P.; Schoppmann, P. VOLE-PSI: fast OPRF and circuit-psi from vector-ole. In Proceedings of the Advances in Cryptology—
EUROCRYPT 2021: 40th Annual International Conference on the Theory and Applications of Cryptographic Techniques, Zagreb,
Croatia, 17–21 October 2021; pp. 901–930.
15. Shi, R.H.; Li, Y.F. Quantum private set intersection cardinality protocol with application to privacy-preserving condition query.
_[IEEE Trans. Circuits Syst. Regul. Pap. 2022, 69, 2399–2411. [CrossRef]](http://dx.doi.org/10.1109/TCSI.2022.3152591)_
16. Zhou, Q.; Zeng, Z.; Wang, K.; Chen, M. Privacy Protection Scheme for the Internet of Vehicles Based on Private Set Intersection.
_[Cryptography 2022, 6, 64. [CrossRef]](http://dx.doi.org/10.3390/cryptography6040064)_
17. Qian, Y.; Xia, X.; Shen, J. A profile matching scheme based on private set intersection for cyber-physical-social systems.
In Proceedings of the 2021 IEEE Conference on Dependable and Secure Computing (DSC), Aizuwakamatsu, Japan, 30 January–2
February 2021; pp. 1–5.
18. Demmler, D.; Rindal, P.; Rosulek, M.; Trieu, N. PIR-PSI: Scaling Private Contact Discovery. Cryptology ePrint Archive, 2018.
19. Liu, F.; Ng, W.K.; Zhang, W.; Han, S.; et al. Encrypted set intersection protocol for outsourced datasets. In Proceedings of the
2014 IEEE International Conference on Cloud Engineering, Boston, MA, USA, 11–14 March 2014; pp. 135–140.
20. De Cristofaro, E.; Tsudik, G. Practical private set intersection protocols with linear complexity. In Proceedings of the Financial
Cryptography and Data Security: 14th International Conference, FC 2010, Tenerife, Canary Islands, 25–28 January 2010; pp. 143–159.
21. Gentry, C. Fully homomorphic encryption using ideal lattices. In Proceedings of the 41st Annual ACM Symposium on Theory of
Computing, Bethesda, MA, USA, 31 May–2 June 2009; pp. 169–178.
22. Brakerski, Z.; Perlman, R. Lattice-based fully dynamic multi-key FHE with short ciphertexts. In Proceedings of the Advances in
Cryptology—CRYPTO 2016: 36th Annual International Cryptology Conference, Santa Barbara, CA, USA, 14–18 August 2016;
pp. 190–213.
23. López-Alt, A.; Tromer, E.; Vaikuntanathan, V. On-the-fly multiparty computation on the cloud via multikey fully homomorphic
encryption. In Proceedings of the 44th Annual ACM Symposium on Theory of Computing, New York, NY, USA, 20–22 May 2012;
pp. 1219–1234.
24. Gentry, C.; Sahai, A.; Waters, B. Homomorphic encryption from learning with errors: Conceptually-simpler, asymptotically-faster,
attribute-based. In Proceedings of the Annual Cryptology Conference, Barbara, CA, USA, 18–22 August 2013; pp. 75–92.
25. Brakerski, Z.; Gentry, C.; Vaikuntanathan, V. (Leveled) fully homomorphic encryption without bootstrapping. ACM Trans.
_[Comput. Theory (TOCT) 2014, 6, 1–36. [CrossRef]](http://dx.doi.org/10.1145/2633600)_
26. Chillotti, I.; Gama, N.; Georgieva, M.; Izabachene, M. Faster fully homomorphic encryption: Bootstrapping in less than 0.1
seconds. In Proceedings of the International Conference on the Theory And Application of Cryptology and Information Security,
Taipei, Taiwan, 5–9 December 2016; pp. 3–33.
27. Chen, H.; Laine, K.; Rindal, P. Fast private set intersection from homomorphic encryption. In Proceedings of the 2017 ACM
SIGSAC Conference on Computer and Communications Security, Dallas, TX, USA, 30 October–3 November 2017; pp. 1243–1255.
28. Chen, H.; Huang, Z.; Laine, K.; Rindal, P. Labeled PSI from fully homomorphic encryption with malicious security. In Proceedings
of the 2018 ACM SIGSAC Conference on Computer and Communications Security, Toronto, ON, Canada, 15–19 October 2018;
pp. 1223–1237.
29. Cong, K.; Moreno, R.C.; da Gama, M.B.; Dai, W.; Iliashenko, I.; Laine, K.; Rosenberg, M. Labeled PSI from homomorphic
encryption with reduced computation and communication. In Proceedings of the 2021 ACM SIGSAC Conference on Computer
and Communications Security, Copenhagen, Denmark, 15–19 November 2021; pp. 1135–1150.
30. Freedman, M.J.; Nissim, K.; Pinkas, B. Efficient private matching and set intersection. In Proceedings of the Advances
in Cryptology-EUROCRYPT 2004: International Conference on the Theory and Applications of Cryptographic Techniques,
Interlaken, Switzerland, 2–6 May 2004; pp. 1–19.
31. Yao, A.C. Protocols for secure computations. In Proceedings of the 23rd Annual Symposium on Foundations of Computer
Science (SFCS 1982), Chicago, IL, USA, 3–5 November 1982; pp. 160–164.
32. Micali, S.; Goldreich, O.; Wigderson, A. How to play any mental game. In Proceedings of the 19th ACM Symposium on Theory
of Computing, New York, NY, USA, 1 January 1987; pp. 218–229.
33. Kolesnikov, V. Gate evaluation secret sharing and secure one-round two-party computation. In Proceedings of the Advances in
Cryptology-ASIACRYPT 2005: 11th International Conference on the Theory and Application of Cryptology and Information
Security, Chennai, India, 4–8 December 2005; pp. 136–155.
34. [Even, S.; Goldreich, O.; Lempel, A. A randomized protocol for signing contracts. Commun. ACM 1985, 28, 637–647. [CrossRef]](http://dx.doi.org/10.1145/3812.3818)
35. Hazay, C.; Nissim, K. Efficient Set Operations in the Presence of Malicious Adversaries. In Proceedings of the Public Key
Cryptography, Paris, France, 26–28 May 2010; Volume 6056; pp. 312–331.
36. Huang, Y.; Evans, D.; Katz, J. Private set intersection: Are garbled circuits better than custom protocols? In Proceedings of the
NDSS, San Diego, CA, USA, 5–8 February 2012.
37. Dong, C.; Chen, L.; Wen, Z. When private set intersection meets big data: An efficient and scalable protocol. In Proceedings of the
2013 ACM SIGSAC Conference on Computer & Communications Security, Berlin, Germany, 4–8 November 2013; pp. 789–800.
38. Pinkas, B.; Schneider, T.; Zohner, M. Faster Private Set Intersection based on OT Extension (Full Version). In Proceedings of the
USENIX Security Symposium, San Diego, CA, USA, 20–22 August 2014.
39. Freedman, M.J.; Hazay, C.; Nissim, K.; Pinkas, B. Efficient set intersection with simulation-based security. J. Cryptol. 2016,
_[29, 115–155. [CrossRef]](http://dx.doi.org/10.1007/s00145-014-9190-0)_
40. Pinkas, B.; Schneider, T.; Zohner, M. Scalable private set intersection based on OT extension. ACM Trans. Priv. Secur. (TOPS) 2018,
_[21, 1–35. [CrossRef]](http://dx.doi.org/10.1145/3154794)_
41. Orrù, M.; Orsini, E.; Scholl, P. Actively secure 1-out-of-N OT extension with application to private set intersection. In Proceedings
of the Topics in Cryptology–CT-RSA 2017: The Cryptographers’ Track at the RSA Conference 2017, San Francisco, CA, USA,
14–17 February 2017; pp. 381–396.
42. Kerschbaum, F. Collusion-resistant outsourcing of private set intersection. In Proceedings of the 27th Annual ACM Symposium
on Applied Computing, Trento, Italy, 25–29 March 2012; pp. 1451–1456.
43. Kerschbaum, F. Outsourced private set intersection using homomorphic encryption. In Proceedings of the Proceedings of the 7th
ACM Symposium on Information, Computer and Communications Security, Hong Kong, 7–11 June 2012; pp. 85–86.
44. Abadi, A.; Terzis, S.; Dong, C. O-PSI: delegated private set intersection on outsourced datasets. In Proceedings of the ICT Systems
Security and Privacy Protection: 30th IFIP TC 11 International Conference, SEC 2015, Hamburg, Germany, 26–28 May 2015;
Proceedings 30; pp. 3–17.
45. Abadi, A.; Terzis, S.; Dong, C. VD-PSI: verifiable delegated private set intersection on outsourced private datasets. In Proceedings
of the Financial Cryptography and Data Security: 20th International Conference, FC 2016, Christ Church, Barbados, 22–26
February 2016; Revised Selected Papers 20; pp. 149–168.
46. Ali, M.; Mohajeri, J.; Sadeghi, M.R.; Liu, X. Attribute-based fine-grained access control for outscored private set intersection
[computation. Inf. Sci. 2020, 536, 222–243. [CrossRef]](http://dx.doi.org/10.1016/j.ins.2020.05.041)
47. Abadi, A.; Terzis, S.; Metere, R.; Dong, C. Efficient Delegated Private Set Intersection on Outsourced Private Datasets. IEEE Trans.
_[Dependable Secur. Comput. 2019, 16, 608–624. [CrossRef]](http://dx.doi.org/10.1109/TDSC.2017.2708710)_
48. Kamara, S.; Mohassel, P.; Raykova, M.; Sadeghian, S. Scaling private set intersection to billion-element sets. In Proceedings of the
Financial Cryptography and Data Security: 18th International Conference, FC 2014, Christ Church, Barbados, 3–7 March 2014;
Revised Selected Papers 18; pp. 195–215.
49. Chen, H.; Chillotti, I.; Song, Y. Multi-key homomorphic encryption from TFHE. In Proceedings of the International Conference
on the Theory and Application of Cryptology and Information Security, Kobe, Japan, 8–12 December 2019; pp. 446–472.
50. Goldreich, O. Foundations of Cryptography: Volume 2, Basic Applications; Cambridge University Press: Cambridge, UK, 2009.
51. Pradel, G.; Mitchell, C. Privacy-Preserving Biometric Matching Using Homomorphic Encryption. In Proceedings of the 2021
IEEE 20th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom), Shenyang,
China, 18–20 August 2021; pp. 494–505.
52. [Albrecht, M.R.; Player, R.; Scott, S. On the concrete hardness of learning with errors. J. Math. Cryptol. 2015, 9, 169–203. [CrossRef]](http://dx.doi.org/10.1515/jmc-2015-0016)
53. Ciampi, M.; Orlandi, C. Combining private set-intersection with secure two-party computation. In Proceedings of the Security
and Cryptography for Networks: 11th International Conference, SCN 2018, Amalfi, Italy, 5–7 September 2018; pp. 464–482.
**Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual**
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.
# Low Complexity Approaches for End-to-End Latency Prediction
## Pierre Larrenie, Jean-François Bercher, Olivier Venard, Iyad Lahsen-Cherif
To cite this version:
#### Pierre Larrenie, Jean-François Bercher, Olivier Venard, Iyad Lahsen-Cherif. Low Complexity Approaches for End-to-End Latency Prediction. 2022 13th International Conference on Computing Communication and Networking Technologies (ICCCNT), Oct 2022, Kharagpur, India. pp.1-6, 10.1109/ICCCNT54827.2022.9984543. hal-03957811
## HAL Id: hal-03957811
https://hal.science/hal-03957811
#### Submitted on 30 Jan 2023
# LOW COMPLEXITY APPROACHES FOR END-TO-END LATENCY PREDICTION
**Pierre Larrenie**
Thales SIX & LIGM, Université Gustave Eiffel, CNRS, Marne-la-Vallée, France
pierre.larrenie@esiee.fr

**Jean-François Bercher**
LIGM, Université Gustave Eiffel, CNRS, Marne-la-Vallée, France
jean-francois.bercher@esiee.fr

**Olivier Venard**
ESYCOM, Université Gustave Eiffel, CNRS, Marne-la-Vallée, France
olivier.venard@esiee.fr

**Iyad Lahsen-Cherif**
Institut National des Postes et Télécommunications (INPT), Rabat, Morocco
lahsencherif@inpt.ac.ma

### ABSTRACT
Software Defined Networks have opened the door to statistical and AI-based techniques to
improve the efficiency of networking, especially to ensure a certain Quality of Service (QoS)
for specific applications by routing packets with awareness of the content's nature (VoIP, video,
files, etc.) and its needs (latency, bandwidth, etc.), so as to use the resources of a network efficiently.
Predicting various Key Performance Indicators (KPIs) at any level may handle such problems while preserving network bandwidth.
The question addressed in this work is the design of efficient and low-cost algorithms
for KPI prediction, implementable at the local level. We focus on end-to-end latency
prediction, for which we illustrate our approaches and results on a public dataset from the
recent international challenge on GNN [1]. We propose several low complexity, locally
implementable approaches, achieving significantly lower wall time both for training and
inference, with marginally worse prediction accuracy compared to state-of-the-art global
GNN solutions.
**_Keywords_** KPI Prediction · Machine Learning · General Regression · SDN · Networking · Queuing Theory · GNN
### 1 Introduction
Routing while ensuring Quality of Service (QoS) is still a great challenge in any network. Having powerful
ways to transmit data is not sufficient; we must use resources wisely. This is true for large static networks,
and even more so for mobile networks with dynamic topologies.
The emergence of Software-Defined Networking (SDN) [2, 3] has made it possible to share data more
efficiently between communication layers. Services are able to provide network requirements to routers based
on their nature; routers acquire data about network performance, and finally allocate resources to meet these
requirements. However, acquiring overall network performance can result in high consumption of network
bandwidth for signalization; that is particularly constraining for networks with limited resources like Mobile
_Ad-Hoc Networks (MANET)._
**Note: This paper has been accepted for publication at IEEE 13th ICCCNT 2022. ©2022 IEEE. Personal use of this
material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including
reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or
redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.**
Figure 1: GNN repeating T Message Passing mechanisms: message propagation and aggregation (inspired
_from [7])_
We consider networks for which we wish to reduce the amount of signalization and perform intelligent routing.
In order to limit signalization, a first axis is to be able to estimate some key performance indicators (KPI) from
other KPIs. A second point would be to be able to perform this prediction locally, at the node level, rather
than a global estimation of the network. Finally, if predictions are to be performed locally, the complexity of
the algorithms will need to be low, but still preserve good prediction quality. The question we address is thus
the design of efficient and low-cost algorithms for KPI prediction, implementable at the local level. We focus
on end-to-end latency prediction, for which we illustrate our approaches and results on a public dataset from
the recent international challenge [1].
The best performances of the state-of-the-art are obtained with Graph Neural Networks (GNNs) [4, 5, 1].
Although this is a global method while we favor local methods, we use these performances as a benchmark.
We first propose to use standard machine learning regression methods, for which we show that a careful
feature engineering and feature selection (based on queueing theory and the approach in [6]) allows us to obtain
near state-of-the-art performances with a very low number of parameters and very low computational cost,
with the ability to operate at the link level instead of a whole-graph level. Building on that, we show that it is
even possible to obtain similar performances with a single feature and curve-fitting methods.
The presentation is structured as follows. In section 2, we first recall the key concepts on GNNs and queues;
present some related works in the literature, before introducing the dataset used for the validation of our
proposals. In section 3, we present the different approaches proposed, starting with the choice of features
for machine learning methods, followed by general curve fitting methods. We then compare in section 4 the
performances of these different approaches, in terms of performance as well as in terms of learning time and
inference time. Finally, we conclude, discuss the overall results and draw some perspectives.
### 2 Related work and dataset
**2.1** **Graph Neural Networks (GNNs)**
GNN [7, 8] is a machine learning paradigm that handles non-Euclidean data: graphs. A graph is defined as a
set of nodes and edges with some properties on its nodes and its edges. The key point in GNNs is the concept
of Message Passing: each node of the graph will update its state according to states of its neighborhood by
sending and receiving messages transmitted along edges. By repeating this mechanism T times, a node is
able to capture states of its T -hop neighborhood as shown in Figure 1.
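As a minimal illustration (the code and aggregation rule are our own simplification, not the formulation of [7]), the following Python sketch performs T rounds of message passing with mean aggregation over a graph given as an adjacency dictionary; real GNNs replace the fixed averaging below with learned message and update functions.

```python
# Minimal message-passing sketch: each node mixes its own state with the
# mean of its neighbors' states, repeated T times (T-hop information flow).
def message_passing(states: dict, neighbors: dict, T: int) -> dict:
    for _ in range(T):
        states = {
            v: 0.5 * states[v]
               + 0.5 * sum(states[u] for u in neighbors[v]) / max(len(neighbors[v]), 1)
            for v in states
        }
    return states

neighbors = {"A": ["B", "C"], "B": ["A"], "C": ["A"]}
states = {"A": 1.0, "B": 0.0, "C": 0.0}
print(message_passing(states, neighbors, T=2))  # A's state has propagated to B and C
```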
**2.2** **Queue Theory**
Queueing theory is a well-studied domain, and explicit equations exist for most simple queue systems [9].
In the following, we refer to queue systems using their Kendall notation. We often take M/M/1
and M/M/1/K as references for their Markovian property, since the equations are particularly easy to handle in this case.
However, for more general queue systems such as M/G/1 and M/G/1/K, the equations become more
complex. Whereas closed-form formulas exist for M/G/1 queues, M/G/1/K queues require solving a
system of equations with K + 1 unknowns.
Queue system analysis focuses on stable queues, i.e. when the ratio ρ = λ/µ ≤ 1, where λ (resp. µ) is the
expected arrival rate (resp. service rate). Finite queue systems, however, are always stable, since
the maximal number of pending items is finite; they are subject to loss instead. To model the dropping of
incoming items, we use the ratio ρe = λe/µ, where λe is known as the effective arrival rate and can
be determined via Equation (1):

$$\lambda_e = \lambda(1 - \pi_K) = \mu(1 - \pi_0) \qquad (1)$$

where π0 (resp. πK) refers to the probability of the queue at equilibrium being
empty (resp. full).
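For concreteness, here is a small Python sketch (ours, with illustrative parameter values) computing the M/M/1/K stationary probabilities and checking the two expressions of the effective arrival rate in Equation (1):

```python
# M/M/1/K stationary distribution: pi_k = pi_0 * rho**k for k = 0..K.
def mm1k_stationary(lam: float, mu: float, K: int) -> list:
    rho = lam / mu
    if abs(rho - 1.0) < 1e-12:
        pi0 = 1.0 / (K + 1)                      # uniform when rho == 1
    else:
        pi0 = (1 - rho) / (1 - rho ** (K + 1))
    return [pi0 * rho ** k for k in range(K + 1)]

lam, mu, K = 8.0, 10.0, 32                       # illustrative values
pi = mm1k_stationary(lam, mu, K)
lam_e = lam * (1 - pi[K])                        # effective arrival rate, Eq. (1)
assert abs(lam_e - mu * (1 - pi[0])) < 1e-9      # both expressions agree
```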
**2.3** **Related Work**
Chua, Ward, Zhang, et al. [10] present a heuristic and a Mixed Integer Programming approach to optimize
Service Function Chain provisioning when using Network Functions Virtualization for a service provider.
Their approach relies on minimizing a trade-off between the expected latency and infrastructure resources.
Such routing-flow optimization in SDN may require additional information to be exchanged between the nodes
of a network. This results in an increase in the volume of signalization, through measurements
such as those in [11]. This is not a significant problem in unconstrained networks, i.e. static wired networks with
near-infinite bandwidth, but it may degrade the performance of wireless networks with limited capacity. An interesting
solution to save bandwidth would be to predict some of the KPIs from other KPIs and data exchanged globally
between nodes.
In [12, 13], the authors proposed a MANET application of SDN in the domain of tactical networks. They
proposed a multi-level SDN controller architecture to build secure and resilient networking while
orchestrating communication efficiently under military constraints such as a high level of dynamism, frequent
network failures, and resource-limited devices. The proposed architecture is a trade-off between the traditional
centralized SDN architecture and a decentralized architecture meeting dynamic in-network constraints.
Jahromi, Hines, and Delanev [14] proposed a Quality of Experience (QoE) management strategy in an SDN to
optimize the loading time of all the tiles of a mapping application. They showed the impact of several
KPIs on their application using a Generalized Linear Model (GLM). This mechanism makes the application
aware of the current network state.
Promising works rely on estimating KPIs at a graph level. Note that it is very difficult, if not impossible, to
address this analytically, since a computer network forms a complex structure of chained, interfering queues
for each flow in the network.
Rusek, Suárez-Varela, Mestres, et al. [4] used GNNs for predicting KPIs such as latency, error rate and
jitter. They relied on the Routenet architecture of Figure 2. The idea is to model the problem as a bipartite
hypergraph mapping flows to links, as depicted in Figure 3. Aggregating messages in such a graph makes it
possible to predict KPIs of the input network. The model needs to know the routing scheme, the traffic, and the link
properties. Their result is very promising and has been the subject of two ITU Challenges in 2020 and 2021 [5,
1]. These ITU Challenges produced very good results, with the top-3 teams reaching around 2% error in delay prediction
in the sense of the Mean Absolute Percentage Error (MAPE).
In [6], very promising results were obtained, with a near 1% GNN model error (in the sense of MAPE)
on the test set. The model mixes analytical M/M/1/K queueing theory, used to create extra features, with a
GNN model fed by those features. In order to satisfy the scalability constraint of the challenge, the first part of the model
operates at the link level.
**2.4** **Dataset**
We use public data from the challenge [1]. The dataset models static networks that have run for a certain
amount of time; the obtained data are averages over the whole working period. The data contain information about
Figure 2: Routenet Architecture [4]
Figure 3: Routenet [4] paths-links hypergraph transformation applied on a simple topology graph carrying 3 flows.
(a) Simple topology: black circles represent communication nodes, double-headed arrows between them denote
available symmetric communication links, and dotted arrows show flow paths. (b) Paths-links hypergraph of (a):
circles (resp. dotted circles) represent the link (resp. flow) entities defined in the first graph (Lij is the symmetric
link between node i and node j). Unidirectional arrows encode the relation "<flow> is carried by <link>".
the topology of the network, participants and available link characteristics, traffic and routing information.
The aim of the GNN ITU Challenge [1] was to build a scalable GNN model in order to predict end-to-end
flow latency. Nevertheless, the training split on one hand and the test and validation splits on the other model very
different networks. Whereas the training dataset models networks of between 25 and 50 nodes (120,000 samples),
the test (1,560 samples) and validation (3,120 samples) datasets model networks of up to 300 nodes. This results in
very different distributions among these splits, as shown in Figure 4.
Figure 4: End-to-end latency distribution on train and test datasets of ITU Challenge 2021, where train and
test datasets describe networks of very different sizes.[1]
It is important to point out that the proposed data are not in accordance with the M/M/1/K queue model, since
the service time depends on the size of the packet. The size of the packets of each flow follows a Binomial
distribution; it can be approximated by a Normal distribution, inducing a general service time.
Nevertheless, it turns out that the system does not globally behave as an M/G/1/K queue system,
but rather as a complex system of interconnected queues that cannot be easily modeled.
Hence, approximating the system locally by a mix of simple analytical theory (M/M/1/K) and black-box
optimization (GNNs), as proposed in [6], is a good approach despite the lack of explainability or
interpretability and the high computational requirements, with many parameters to train. We show below
that it is possible to obtain comparable performances with other regression approaches.
### 3 Our approaches
The main question is to define an estimator ŷ of the occupancy y according to the various available characteristics of the system, with a joint objective of low complexity and performance. In the following, we present regression approaches based on machine learning and then approaches based on curve-fitting.

Once an estimate of occupancy is obtained, it is possible to get the latency prediction $\hat{d}_n$ for a specific link n by the simple relation

$$\hat{d}_n = \hat{y}_n \, \frac{E(|P_n|)}{c_n} \qquad (2)$$

where E(|Pn|) is the observed average packet size on link n and cn the capacity of this link.

Performances will be evaluated using the MAPE loss function

$$\mathcal{L}(\hat{y}, y) = \frac{100\%}{N} \sum_{n=1}^{N} \left| \frac{\hat{y}_n - y_n}{y_n} \right|$$

which is preferred to Mean Squared Error (MSE) because of its scale-invariant property.
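A minimal Python sketch of this loss (array names are illustrative):

```python
import numpy as np

def mape(y_hat: np.ndarray, y: np.ndarray) -> float:
    """Mean Absolute Percentage Error, in percent."""
    return 100.0 * np.mean(np.abs((y_hat - y) / y))

y     = np.array([0.10, 0.40, 0.80])
y_hat = np.array([0.11, 0.38, 0.82])
print(f"MAPE = {mape(y_hat, y):.2f}%")
# Scale invariance: mape(2 * y_hat, 2 * y) returns exactly the same value.
```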
**3.1** **Feature Engineering and Machine Learning**
Based on the assumption that the system may be approximated by a model whose essential features come
from M/M/1/K and M/G/1/K queue theory, we took essential parameters characterizing queueing systems, such as ρ, ρe, π0, πK, etc., and built further
features by applying interactions and various non-linearities (powers, log, exponential, square root). Then, we
selected features in this set by a forward step-wise selection method; i.e. by adding in turn each feature to
potential models and keeping the feature with best performance. Finally, we selected the model with best
MAPE error. For a linear regression model, this led us to select and keep a set of 4 simple features, which
interestingly enough, have simple interpretations:
$$\pi_0 = \frac{1-\rho}{1-\rho^{K+1}}, \qquad L = \pi_0 \sum_k k\rho^k, \qquad \rho_e = \frac{\lambda_e}{\lambda}\,\rho = \frac{\lambda_e}{\mu}, \qquad S_e = \sum_k k\rho_e^k \qquad (3)$$
where L is the expected number of packets in the queue according to M/M/1/K, π0 the probability that the
queue is empty according to M/M/1/K theory, ρe the effective queue utilization, and Se the unnormalized
expected value of the effective number of packets in the queue buffer. These features can be thought of as a
kind of data preprocessing before applying ML algorithms, and this turns out to be key to achieving good
performances. The 4 previous features have been kept as input for all the machine learning models.
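As an illustration, a possible Python implementation of this preprocessing is sketched below; the closed forms follow Equation (3), and we assume ρ, ρe, and K are available per link (the exact per-link extraction from the dataset of [1] is omitted).

```python
import numpy as np

def queue_features(rho: float, rho_e: float, K: int) -> np.ndarray:
    """The four selected features of Equation (3) for one link."""
    k = np.arange(K + 1)
    pi0 = (1 - rho) / (1 - rho ** (K + 1))       # empty-queue probability
    L = pi0 * np.sum(k * rho ** k)               # expected number of packets
    Se = np.sum(k * rho_e ** k)                  # unnormalized effective analogue
    return np.array([pi0, L, rho_e, Se])

# Example: one lightly loaded and one heavily loaded link, K = 32.
X = np.stack([queue_features(0.5, 0.49, 32), queue_features(0.9, 0.87, 32)])
```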
Next, we considered several machine learning algorithms, fitted on the training split; performances were
evaluated on the test split of the public dataset [1]. The algorithms considered are: a Multi-Layer Perceptron
(MLP) with 4 layers and ReLU activation functions, Linear Regression, Gradient Boosted Regression Trees
(GBRT) with an ensemble of n = 100 estimators, a Random Forest of n = 100 trees, and a Generalized Linear
Model (GLM) with Poisson family and exponential link. All results of these methods are shown in Table 1.
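A hedged sketch of this comparison with scikit-learn follows; the data are synthetic stand-ins (reusing `queue_features` from the previous sketch), the MLP hidden sizes are an assumption (the text only states 4 layers), and scikit-learn's `PoissonRegressor` uses the log link, whose inverse is the exponential.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import LinearRegression, PoissonRegressor
from sklearn.metrics import mean_absolute_percentage_error
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
rho = rng.uniform(0.05, 0.95, 2000)
X = np.stack([queue_features(r, 0.98 * r, 32) for r in rho])   # stand-in features
y = np.clip(X[:, 2] / (1 - 0.9 * X[:, 2])                      # toy occupancy target
            + rng.normal(0, 0.01, rho.size), 1e-6, None)
X_train, X_test, y_train, y_test = X[:1500], X[1500:], y[:1500], y[1500:]

models = {
    "MLP": MLPRegressor(hidden_layer_sizes=(16, 16, 16, 16), max_iter=2000),
    "Linear Regression": LinearRegression(),
    "GBRT (n=100)": GradientBoostingRegressor(n_estimators=100),
    "Random Forest (n=100)": RandomForestRegressor(n_estimators=100),
    "GLM - Poisson": PoissonRegressor(),   # Poisson family, log link
}
for name, model in models.items():
    model.fit(X_train, y_train)
    err = 100 * mean_absolute_percentage_error(y_test, model.predict(X_test))
    print(f"{name}: MAPE = {err:.2f}%")
```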
**3.2** **Curve Regression for occupancy prediction**
There is a high interdependence among the features we selected in Equation 3, since all of them can be
expressed in terms of ρe. Furthermore, data exploration confirms that ρe is the prominent feature for
occupancy prediction (and in turn latency prediction), as exemplified in Figure 5.

It is then tempting to further simplify our feature space and to estimate the occupancy from a
non-linear transformation of the single feature ρe, as

$$\hat{y} = g(\rho_e) \qquad (4)$$
where ŷ is the estimate of the occupancy y. The concerns are then to define simple and efficient functions
g, with a low number of parameters, that can model the kind of growth shown in Figure 5, and of course to
check that the performance remains interesting.

We followed three approaches to design the estimator g in order to predict link occupancy and end-to-end
flow latency. In all cases, the parameters of g were computed by minimizing the mean squared regression error.
Figure 5: Data of ITU Challenge 2021 [1], ρe vs queue occupancy. The color scale indicates point-cloud density.
**3.2.1** **Exponential of polynomial**
The simplest approach is to use a curve-fitting regression of the form
$$\hat{y} = g(\rho_e) = e^{p_n(\rho_e)} \qquad (5)$$

where $p_n(x) \in \mathbb{R}_n[x]$ is a polynomial of degree n with real coefficients.
In order to find the coefficients of $p_n$, one can simply fit log(y) (where y denotes the queue
occupancy). Choosing an arbitrarily high polynomial degree leads to oscillations and largely increases
computation time, while choosing too small a degree does not allow the prediction of high occupancies.
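A minimal NumPy sketch of this fit on synthetic stand-in data (degree 8, matching Table 1):

```python
import numpy as np

rng = np.random.default_rng(1)
rho_e = rng.uniform(0.05, 0.98, 5000)                 # stand-in feature values
y = rho_e / (1 - 0.95 * rho_e) * np.exp(rng.normal(0, 0.05, rho_e.size))  # toy occupancy

coeffs = np.polyfit(rho_e, np.log(y), deg=8)          # fit p_n on log(y)
y_hat = np.exp(np.polyval(coeffs, rho_e))             # y_hat = exp(p_n(rho_e))
print(f"MAPE = {100 * np.mean(np.abs((y_hat - y) / y)):.2f}%")
```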
**3.2.2** **Generative polynomials**
The estimator g is defined as a linear combination of simple functions (fn):
$$\hat{y} = g(\rho_e) = \sum_n \alpha_n \, f_n(\rho_e) \qquad (6)$$
**Generative polynomial similar to M/M/1/K theory** The idea here is to use polynomials $f_n^K$ that approximately match the expression in Equation 3 of the expected number of packets in the queue, L:

$$\varphi_n^K(x) = x^n \, \frac{1 - x^{K+1}}{1 - x}, \qquad f_n^K(x) = \frac{\varphi_n^K(x)}{\gamma_n}, \qquad n \geq 0, \ \forall x \in [0;1[ \qquad (7)$$

where K is the size of the queue¹. The sequence $(f_n^K)_{n=0}^{K}$ is finite and defined on the interval [0; 1[.
In order to improve regression capabilities, each $f_n^K$ is defined as $\varphi_n^K$ normalized by $\gamma_n$, a local maximum of
$\varphi_n^K$ on the interval [0; 1[.
**Bernstein Polynomials** The previous method relies on polynomial approximation. Since the expected
value L can theoretically be expressed as a polynomial of degree K, we are driven to the Bernstein
polynomials, which form a basis of the set of polynomials of degree at most K on the interval [0; 1]:

$$f_n^K(x) = \binom{K}{n} \, x^n (1 - x)^{K-n} \qquad (8)$$
The approximation of any continuous function on [0; 1[ by a Bernstein polynomial converges uniformly.
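An illustrative least-squares fit on the Bernstein basis (synthetic stand-in data; K = 32 to match the queue size, giving the 33 parameters reported in Table 1):

```python
import numpy as np
from math import comb

def bernstein_design(x: np.ndarray, K: int) -> np.ndarray:
    """(len(x), K+1) design matrix of Bernstein basis polynomials, Eq. (8)."""
    n = np.arange(K + 1)
    binom = np.array([comb(K, int(k)) for k in n], dtype=float)
    return binom * x[:, None] ** n * (1 - x[:, None]) ** (K - n)

K = 32
rng = np.random.default_rng(2)
x = rng.uniform(0.0, 0.99, 5000)                  # stand-in rho_e samples
y = x / (1 - 0.95 * x)                            # toy occupancy curve
F = bernstein_design(x, K)
alpha, *_ = np.linalg.lstsq(F, y, rcond=None)     # the K + 1 = 33 coefficients
y_hat = F @ alpha
```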
**3.2.3** **Implicit function**
The idea here is to define a set of N points θn = (an, bn) and approximate the underlying function by linear
interpolation between those points. To obtain a good positioning of these points, we select them as the
solution of the following optimization problem:
$$\begin{aligned}
\min_{\theta} \quad & \mathcal{L}(f_\theta(x), y) + \frac{\alpha}{N} \sum_n \frac{\lVert \vec{u}_n \times \vec{u}_{n+1} \rVert^2}{\lVert \vec{u}_n \rVert^2 \, \lVert \vec{u}_{n+1} \rVert^2} \\
\text{s.t.} \quad & \vec{u}_n = \theta_{n+1} - \theta_n \\
& a_0 = 0, \qquad a_N = b_N = 1 \\
& a_{n+1} - a_n \geq 0 \\
& \theta_n = (a_n, b_n)^T \in [0;1]^2
\end{aligned} \qquad (9)$$
Equation 9 includes a first term minimizing the interpolation error, and a second term, weighted by a
parameter α ≥ 0, forcing the θn sequence to be as aligned and as spread out as possible. This implies that our sampling
is refined in the high-curvature zones of the function. The constraints make (an) an increasing
sequence along the feature axis in order to get a correct interpolation of the curve, especially when N is high
enough.
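A heavily simplified Python sketch of this optimization (all names and the reparametrization are our own assumptions): monotonicity of (an) is enforced by cumulative positive increments instead of explicit constraints, prediction uses piecewise-linear interpolation, and the alignment penalty uses the 2-D cross product of consecutive segments.

```python
import numpy as np
from scipy.optimize import minimize

N, alpha = 12, 1e-5
rng = np.random.default_rng(3)
x = rng.uniform(0.0, 1.0, 5000)
y = x / (1 - 0.95 * x); y = y / y.max()          # toy occupancy, rescaled to [0, 1]

def unpack(z):
    a = np.concatenate([[0.0], np.cumsum(np.exp(z[:N]))]); a /= a[-1]  # 0 = a_0 < ... < a_N = 1
    b = np.concatenate([np.abs(z[N:]), [1.0]])                         # b_N = 1
    return a, b

def objective(z):
    a, b = unpack(z)
    err = np.mean((np.interp(x, a, b) - y) ** 2)                # interpolation error
    u = np.diff(np.stack([a, b], axis=1), axis=0)               # segment vectors u_n
    cross = u[:-1, 0] * u[1:, 1] - u[:-1, 1] * u[1:, 0]         # 2-D cross products
    align = np.sum(cross ** 2 / ((u[:-1] ** 2).sum(1) * (u[1:] ** 2).sum(1)))
    return err + alpha / N * align

z0 = np.concatenate([np.zeros(N), np.linspace(0.05, 0.9, N)])
res = minimize(objective, z0, method="Nelder-Mead", options={"maxiter": 20000})
```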
### 4 Comparison and Discussion
In this section, we evaluate our methods on the data from the GNN ITU Challenge 2021, described in
subsection 2.4. We compare our results to those of the challenge winners, which establish the state-of-the-art
in terms of pure performance. Since the actual labeled test dataset used for the challenge was released after
the end of the challenge, all evaluations are performed on this particular dataset. Table 1 presents the
characteristics of the methods, in terms of the number of input features and parameters to be learned; their
performance in the sense of MAPE and MSE; and the execution times, both for learning and for inference.
All results were obtained with the same computer configuration: 120 GB of RAM, 1 Intel Core i9-9920X CPU
@ 3.50 GHz with 24 cores, and 2 Nvidia TITAN RTX2080 GPUs with 24 GB each.
The methods used for comparison are divided into 3 groups, the first being the set of GNN approaches.
¹In the results shown in Table 1, we consider K = 32 in order to match the data contained in the ITU challenge dataset [1].
|Approaches|Input Features|Model Parameters|MAPE|MSE|Wall Training Time|Wall Inference Time|
|---|---|---|---|---|---|---|
|Routenet [4]|Topology, Traffic matrix, Routing Scheme|-|≫100%|(N/A)|≈12h|-|
|Top-1 ITU Challenge Team (PARANA) [6]||654,006|1.27%|1.10e-5|≈8h|214s|
|MLP|π0, L, Se, ρe|291|1.91%|3.18e-5|≈45min|8.26s|
|Linear Regression*+||4|1.74%|3.20e-5|<1sec|0.296s|
|GBRT (n=100)*||4|1.73%|2.90e-5|≈1min|0.867s|
|Random Forest (n=100)*||4|1.69%|3.00e-5|<1sec|0.994s|
|GLM - Poisson+||4|3.68%|5.09e-4|≈1min|0.481s|
|Curve-fitting exponential (deg=3)+|ρe|4|3.94%|3.75e-4|≈1sec|0.311s|
|Curve-fitting exponential (deg=8)+||9|1.70%|3.53e-5|≈5secs|0.320s|
|Curve-fitting M/M/1/K**+||33|2.04%|4.42e-5|≈3min|3.55s|
|Curve-fitting Bernstein**+||33|1.68%|3.13e-5|≈2min|3.14s|
|Sampling Optimization (N = 12, α = 0)*+||24|1.77%|3.18e-5|≈1min|0.281s|
|Sampling Optimization (N = 12, α = 1e-5)*||24|1.77%|3.18e-5|≈1min|0.306s|

Table 1: Results synthesis of various models for flow latency prediction. Test dataset from [1].
*only 500,000 samples used for training (2.25% of the training dataset); **only 5,000,000 samples used for training (22.5% of the training dataset); +under-estimation/over-estimation occurs on high queue occupancy predictions.
In the second group, we used classical machine learning models with only 4 input features obtained by
stepwise selection, as presented in Equation 3.
In the third group, we group curve regression models using a single well-chosen feature, namely ρe, as
presented in subsection 3.2.
As we can observe, the proposed approaches achieve a much lower computational time than the GNN
approaches, both in terms of learning time and inference time; this at the cost of a marginal performance
degradation.
Moreover, the non-GNN approaches provide a more local solution, since predictions are performed at the link
level and not at the whole-graph level (models predict queue occupancy, then analytically compute the delay
for each link, and finally aggregate along the path). This would allow them to be used for simple local predictions,
without having to rely on global knowledge and prediction of the network.
A further benefit of our low-complexity approaches is that they use far fewer
parameters, which reduces the amount of data needed for training. The reduction in the number of parameters,
together with the simpler architecture (number of operations) of the solutions, explains the drop in learning and inference times.
Nevertheless, looking at the distribution presented in Figure 5, we notice that most of our data correspond
to low occupancy levels. In practice, some models show limited behavior when the occupancy of the targeted
queue is close to 100%: there is significant over- or under-prediction. However, this behavior
does not really affect the overall performance, due to the low density of this scenario in our dataset, and the
predicted values remain close enough to the targets.
### 5 Conclusion
In this paper, we considered the problem of designing efficient and low-cost algorithms for KPI prediction,
implementable at the local level. We have argued and proposed several alternatives to GNNs for predicting
the queue occupancy of a complex system using simple ML models with carefully chosen features or general
curve-fitting methods.
At the cost of a marginal performance loss, our proposals are characterized by low complexity, significantly
lower learning and inference times compared to GNNs, and the possibility of local deployment. Thus, this
type of solution can be used for continuous performance monitoring.
The low complexity and structures of linear regression algorithms or curve-fitting solutions should also be
suitable for adaptive formulations. These last two points are current perspectives of this work. Of course, the
approaches considered here will have to be revisited and adapted for other types of KPI, such as error rate
or jitter.
Finally, a last point that deserves attention is that these low-complexity models can be interpreted and explained either by direct inspection (visualization) or by using tools such as Shapley values [15], which make it possible to interpret output values by measuring the contribution of each input feature to the prediction.
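As a hedged illustration of the latter route, the sketch below applies the shap package of [15] to a placeholder tree-ensemble occupancy model; the model, features, and data are stand-ins rather than the models evaluated above.

```python
# Sketch: explain a low-complexity occupancy model with Shapley values [15].
# Placeholder model and data; any fitted tree ensemble over a small feature
# set (e.g., rho_e, queue size, average packet size) works the same way.

import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

X = np.random.rand(500, 4)              # 4 input features, as in the table
y = X[:, 0] ** 2 + 0.1 * X[:, 1]        # synthetic occupancy target
model = RandomForestRegressor(n_estimators=100).fit(X, y)

explainer = shap.TreeExplainer(model)   # exact and fast for tree ensembles
shap_values = explainer.shap_values(X[:100])

# Each row decomposes one prediction into additive per-feature contributions
# around the expected model output.
print(explainer.expected_value, shap_values[0])
```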
### References
[1] J. Suárez-Varela et al., "The graph neural networking challenge: A worldwide competition for education in AI/ML for networks," ACM SIGCOMM Computer Communication Review, vol. 51, no. 3, pp. 9–16, 2021. DOI: 10.1145/3477482.3477485.

[2] S. Singh and R. K. Jha, "A survey on Software Defined Networking: Architecture for next generation network," Journal of Network and Systems Management, vol. 25, no. 2, pp. 321–374, 2017.

[3] R. Amin, M. Reisslein, and N. Shah, "Hybrid SDN networks: A survey of existing approaches," IEEE Communications Surveys & Tutorials, vol. 20, no. 4, pp. 3259–3306, 2018. DOI: 10.1109/COMST.2018.2837161.

[4] K. Rusek, J. Suárez-Varela, A. Mestres, P. Barlet-Ros, and A. Cabellos-Aparicio, "Unveiling the potential of graph neural networks for network modeling and optimization in SDN," in Proceedings of the 2019 ACM Symposium on SDN Research, 2019, pp. 140–151.

[5] The graph neural networking challenge 2020. https://bnn.upc.edu/challenge/gnnet2020.

[6] B. K. de Aquino Afonso, GNNet challenge 2021 report (1st place), https://github.com/ITU-AI-ML-in-5G-Challenge/ITU-ML5G-PS-001-PARANA, 2021.

[7] W. L. Hamilton, "Graph representation learning," Synthesis Lectures on Artificial Intelligence and Machine Learning, vol. 14, no. 3, pp. 1–159, 2020.

[8] D. Bacciu, F. Errica, A. Micheli, and M. Podda, "A gentle introduction to deep learning for graphs," Neural Networks, vol. 129, pp. 203–221, 2020.

[9] R. B. Cooper, Introduction to Queueing Theory, Edward Arnold, London, 1981.

[10] F. C. Chua, J. Ward, Y. Zhang, P. Sharma, and B. A. Huberman, "Stringer: Balancing latency and resource usage in service function chain provisioning," IEEE Internet Computing, vol. 20, no. 6, pp. 22–31, 2016.

[11] S. T. V. Pasca, S. S. P. Kodali, and K. Kataoka, "AMPS: Application aware multipath flow routing using machine learning in SDN," in 2017 Twenty-third National Conference on Communications (NCC), IEEE, 2017, pp. 1–6.

[12] K. Poularakis, G. Iosifidis, and L. Tassiulas, "SDN-enabled tactical ad hoc networks: Extending programmable control to the edge," IEEE Communications Magazine, vol. 56, no. 7, pp. 132–138, 2018.

[13] K. Poularakis, Q. Qin, E. M. Nahum, M. Rio, and L. Tassiulas, "Flexible SDN control in tactical ad hoc networks," Ad Hoc Networks, vol. 85, pp. 71–80, 2019. DOI: 10.1016/j.adhoc.2018.10.012.

[14] H. Z. Jahromi, A. Hines, and D. T. Delaney, "Towards application-aware networking: ML-based end-to-end application KPI/QoE metrics characterization in SDN," in 2018 Tenth International Conference on Ubiquitous and Future Networks (ICUFN), IEEE, 2018, pp. 126–131.

[15] S. M. Lundberg and S.-I. Lee, "A unified approach to interpreting model predictions," Advances in Neural Information Processing Systems, vol. 30, 2017.
|
{
"disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/2302.00004, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "GREEN",
"url": "https://hal.science/hal-03957811/file/main.pdf"
}
| 2,022
|
[
"JournalArticle",
"Conference"
] | true
| 2022-10-03T00:00:00
|
[
{
"paperId": "546cdc1f6318fb1ff12e245919424be5cab34594",
"title": "The graph neural networking challenge"
},
{
"paperId": "45f4585f12ed7310c2dbc487a2aa1ba23c568071",
"title": "Graph Representation Learning"
},
{
"paperId": "2e3a86c4b8f6883b371f718eb0a35857a6bf9b95",
"title": "A Gentle Introduction to Deep Learning for Graphs"
},
{
"paperId": "5056d98741060e1e51098a568f170bc545677f0d",
"title": "Flexible SDN control in tactical ad hoc networks"
},
{
"paperId": "53bc1dd1dbfa61bcb5bba6fcaee3f2c2323a0f49",
"title": "Unveiling the potential of Graph Neural Networks for network modeling and optimization in SDN"
},
{
"paperId": "e9764cbb19a8d3895e3c47ac08a77cb42b4ceeb3",
"title": "Towards Application-Aware Networking: ML-Based End-to-End Application KPI/QoE Metrics Characterization in SDN"
},
{
"paperId": "24bf243d9b5f0b8fe6ae840e22b97423d1249234",
"title": "Hybrid SDN Networks: A Survey of Existing Approaches"
},
{
"paperId": "ebffa220a8109feb7f34a15aca3ede4bcd2d2161",
"title": "SDN-Enabled Tactical Ad Hoc Networks: Extending Programmable Control to the Edge"
},
{
"paperId": "442e10a3c6640ded9408622005e3c2a8906ce4c2",
"title": "A Unified Approach to Interpreting Model Predictions"
},
{
"paperId": "ee40e107a2e34a7ee67a913253d12d0dde4943f9",
"title": "AMPS: Application aware multipath flow routing using machine learning in SDN"
},
{
"paperId": "f0c51b5804af7b96bf2d45a5c3ad39987ac46001",
"title": "A Survey on Software Defined Networking: Architecture for Next Generation Network"
},
{
"paperId": "05380d21cd612184e89e82005c9392ac16554352",
"title": "Stringer: Balancing Latency and Resource Usage in Service Function Chain Provisioning"
},
{
"paperId": "7d6e65fde7ce51c281a65a17d8fa86395a3526c2",
"title": "Virtual Conference"
},
{
"paperId": "49bea1156edc540b32cd456a4f7252b5033c8a2f",
"title": "Introduction to Queueing Theory"
},
{
"paperId": null,
"title": "GNNet challenge 2021 report (1st place)"
},
{
"paperId": null,
"title": "proposed a Quality of Experience (QoE) management strategy in a SDN to optimize the loading time of all the tile of a mapping application"
},
{
"paperId": null,
"title": "Note : This paper has been accepted for publication at IEEE 13th ICCCNT 2022. ©2022"
}
] | 8,861
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Business",
"source": "external"
},
{
"category": "Engineering",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Law",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/0159c5f17225c6e8aa183f7b39e74aaffa185a1e
|
[
"Computer Science",
"Business",
"Engineering"
] | 0.896982
|
Building a Blockchain Application that Complies with the EU General Data Protection Regulation
|
0159c5f17225c6e8aa183f7b39e74aaffa185a1e
|
MIS Q. Executive
|
[
{
"authorId": "4727858",
"name": "Alexander Rieger"
},
{
"authorId": "35440237",
"name": "Florian Guggenmos"
},
{
"authorId": "79347889",
"name": "J. Lockl"
},
{
"authorId": "1711202",
"name": "G. Fridgen"
},
{
"authorId": "2986366",
"name": "Nils Urbach"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": null,
"id": null,
"issn": null,
"name": null,
"type": null,
"url": null
}
| null |
# Building a Blockchain Application that Complies with the EU General Data Protection Regulation
### Complying with the EU General Data Protection Regulation (GDPR) poses significant challenges for blockchain projects, including establishing clear responsibilities for compliance, securing lawful bases for processing personal data, and observing rights to rectification and erasure. We describe how Germany’s Federal Office for Migration and Refugees addressed these challenges and created a GDPR-compliant blockchain solution for cross-organizational workflow coordination. Based on the lessons learned, we pro vide three recommendations for ensuring blockchain solutions are GDPR-compliant.[1,2]
#### Alexander Rieger
University of Augsburg
(Germany)
FIM Research Center
Jannik Lockl
University of Bayreuth
(Germany)
Project Group Business & Information
Systems Engineering of the Fraunhofer FIT
Nils Urbach
University of Bayreuth
(Germany)
Faculty of Law, Business, and Economics
#### Florian Guggenmos
University of Bayreuth
(Germany)
Project Group Business & Information
Systems Engineering of the Fraunhofer FIT
Gilbert Fridgen
University of Luxembourg
(Luxembourg)
SnT - Interdisciplinary Centre for Security,
Reliability and Trust
## The EU General Data Protection Regulation Poses Significant Challenges for Blockchain Projects
Blockchain technology provides an innovative means of fostering collaboration, especially
in cross-organizational workflows. Blockchain solutions allow the organizations involved
in the workflow to maintain control over their respective activities but, at the same time,
enable them to establish a “shared and persistent truth” on the state of the workflow at any
given time. This truth can act as a point of reference if conflicts need to be resolved at a later
point. By extension, this allows the organizations to use updates on the blockchain as reliable
1 Carsten Sørensen is the accepting senior editor for this article.
2 We developed this article as part of an applied research project with Germany’s Federal Office for Migration and Refugees. The
authors would like to thank everyone involved for their support. We would also like to express our gratitude to Carsten Sørensen,
Mary Lacity, Rajiv Sabherwal, and three anonymous reviewers for their guidance and comments, which considerably improved this
article.
**DOI: 10.17705/2msqe.00020**
triggers for subsequent activities. Moreover, the
continuous distribution of updates throughout
the network means that these triggers are readily
available. If required, smart contracts can also
allow the automated activation of certain steps
of the workflow and its monitoring. In simple
terms, blockchain technology offers a promising
alternative to centralized workflow management
systems where the delegation of workflow
governance to a central authority is not possible
or desirable.[3]
However, when blockchain projects move
beyond the proof-of-concept stage, they begin
to encounter the limiting effects of regulations
and legal barriers. Foremost among these is the
European Union (EU) General Data Protection
Regulation (GDPR).[4] The GDPR protects a “natural
person”[5] from unregulated processing of their
personal data and establishes rules governing the
free movement of their personal data. It codifies
several essential rights of natural persons, such
as the right to have inaccurate personal data
rectified, or completed if it is incomplete, and
to have their personal data erased. Moreover, it
establishes clear responsibilities for compliance
with the regulation and prohibits the processing
of personal data without a lawful basis, such
as requiring explicit consent if the action is
necessary to fulfill obligations of a law or
contract.
At first glance, many of the GDPR requirements
appear to conflict with the basic properties
of blockchain technology. For instance,
the technology does not envisage the data
being erased at a later point. Moreover, the
decentralized nature of blockchain networks
seems to prevent the designation of clear
responsibilities. Also, the need to obtain a lawful
basis for processing personal data at each node appears daunting.

3 For a detailed discussion on the prospect of using blockchains for the management of business processes and workflows, see Mendling, J. et al. "Blockchains for Business Process Management: Challenges and Opportunities," _ACM Transactions on Management Information Systems_ (9:1), February 1, 2018, pp. 1-16.

4 _Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016_, Council of the European Union, European Parliament; the full text of the GDPR is available at https://publications.europa.eu/en/publication-detail/-/publication/3e485e15-11bd-11e6-ba9a-01aa75ed71a1/language-en. While the GDPR is an EU regulation, many global platforms and other cross-border firms observe its requirements.

5 The GDPR regulates the processing of information relating to an identified or identifiable natural person—i.e., an individual human being. It does not regulate the processing of information relating to legal persons.
As we show in this article, however, these
challenges can be resolved. We describe how the
Bundesamt für Migration und Flüchtlinge (the
BAMF—Germany’s Federal Office for Migration
and Refugees) created a GDPR-compliant
blockchain solution for processing applications
for asylum. (The German asylum procedure is
described in Appendix A.) The key learnings from
this project give rise to three recommendations
for the management of GDPR requirements
and the design of GDPR-compliant blockchain
solutions. (Appendix B describes the research we
conducted in preparing to write this article.)
## A Brief Introduction to the
EU General Data Protection
Regulation
Data privacy has been an important focus
of European lawmaking since the 1970s. A
key multilateral milestone was the EU’s 1981
signing of the Convention for the Protection
of Individuals, which addressed the automatic
processing of personal data. The most recent and
comprehensive regulatory step was the passing of
the General Data Protection Regulation in 2016,
which took effect across all member states of the
EU in May 2018.
The GDPR applies to any act of wholly
or partially automated processing[6] of any
information relating to an identified or
identifiable natural person in the EU, and to any
such act by a data controller[7] or a data processor[8]
that operates on that person’s behalf, in the
European Union. Importantly, it relates not only
to data that is obviously personal, such as names
but also to data that, in combination with other means, can be used to identify a natural person.[9]

6 As set out in Article 4(2) of the GDPR, the term "processing" encompasses a wide variety of conceivable actions, such as recording, storing, and disseminating data.

7 Article 4(7) of the GDPR defines a data controller as a "natural or legal person, public authority, agency or other body which, alone or jointly with others, determines the purposes and means of the processing of personal data ...."

8 Article 4(8) of the GDPR defines a data processor as a "natural or legal person, public authority, agency or other body which processes personal data on behalf of the controller."
The GDPR aims to foster the free movement
of personal data within EU member states by
standardizing the rules for the processing of
personal data by both private and public data
controllers. It builds on six principles, including
purpose limitation and data minimization, and
enshrines privacy by design and by default.
Importantly, it outlaws any processing of personal
data unless the data controller has a lawful basis.
Chapter 3 of the GDPR also establishes the
various rights of data subjects[10] (Articles 12 to
23). These rights include, among others, the
right to rectification (Article 16)[11] and the right
to erasure (“the right to be forgotten”) (Article
17)[12]. This means that data subjects can
hold controllers and processors of their data
accountable, and violators can incur hefty fines.
In particular, Article 83(5) of the GDPR prescribes
administrative fines of up to €20 million ($22.29
million)[13] or, in the case of companies, up to 4%
of total worldwide annual revenue from the
preceding financial year, whichever is higher.
## Reconciling Blockchain
Solutions with the GDPR
Most guidelines on the management of GDPR
requirements presuppose a single identifiable
controller and skirt around the particularities of
decentralized networks in general and blockchain
technology in particular. Blockchain projects
therefore face genuine challenges in observing
the requirements of the GDPR. Chief among
these challenges is the need to establish clear
responsibilities for compliance, to secure lawful
bases for processing personal data, and to comply with the rights to rectification and erasure.

9 In particular, the GDPR also applies to data that allows attribution through the analysis of patterns of use and context. In many instances, this includes public keys. For more details on the resulting challenges, see Lyons, T., Courcelas, L. and Timsit, K. _Blockchain and the GDPR_, The European Union Blockchain Observatory and Forum, October 16, 2018, available at https://www.eublockchainforum.eu/sites/default/files/reports/20181016_report_gdpr.pdf.

10 The GDPR uses the term "data subject" as a synonym for any identified or identifiable natural person.

11 Article 16 of the GDPR grants each data subject "the right to obtain from the controller without undue delay the rectification of inaccurate personal data."

12 Article 17 of the GDPR states that an individual has "the right to obtain from the controller the erasure of personal data concerning him or her without undue delay" when one of the defined reasons applies.

13 Euro/dollar conversion rate as of October 2019.
**Establishing** **Clear** **Responsibilities**
**for Compliance. The GDPR requires that**
responsibilities for compliance with its articles
are identified and designated, especially when
several parties jointly determine the purposes
and means of processing (“joint control”).[14]
For conventional databases, the establishment
of responsibilities is comparatively easy. In
blockchain networks, defining responsibility is
often difficult. In particular, legal opinions differ
as to which participants qualify as standalone
controllers and which as joint controllers. The
distinction is important because joint controllers
are jointly accountable and have to create an
arrangement that identifies each joint controller
and determines their respective responsibilities,
and that is transparent to the affected data
subjects.[15]
**Securing Lawful Bases for Processing**
**Personal Data. Article 5 of the GDPR specifies**
six lawful bases for processing personal data,
including documented authorization by the data
subject or processing that is required to fulfill
obligations under law or contract;[16] without one
of these lawful bases, a data controller cannot
legally process personal data. Establishing
lawfulness for each data-processing action
in a blockchain network can be particularly
burdensome. Moreover, any lawful basis may cease to exist or apply in the future (e.g., with the withdrawal of consent or amendment of the law). In these circumstances, storage of the relevant personal data is no longer permitted and the data must be erased.

**Complying with the Rights to Erasure and Rectification.** The GDPR states that data subjects can request that data controllers rectify their personal data if there are errors, and erase the data once a lawful basis ceases to exist. This implies that modifications to data on a blockchain must be made on each copy of the blockchain.

14 The primary criterion for qualifying as joint controllers is the joint determination of the purpose of processing ("primacy of the purpose criterion"); simple participation in the determination of the means does not necessarily qualify a participant of a blockchain network as a joint controller. For a detailed discussion of joint controllership in the context of blockchains, see _Blockchain and the General Data Protection Regulation: Can distributed ledgers be squared with European data protection law?_, European Parliamentary Research Service, July 2019, available at http://www.europarl.europa.eu/RegData/etudes/STUD/2019/634445/EPRS_STU(2019)634445_EN.pdf

15 The national data protection authority of France (CNIL), for instance, considers participants of blockchain networks to be data controllers "when the ... participant is a natural person and ... the personal data processing operation is related to a professional or commercial activity" or "when the ... participant is a legal person and ... it registers personal data in a blockchain." When these controllers do not designate a single controller who determines the purposes and means of processing, regulators and courts may easily decide to hold them accountable as joint controllers. The CNIL's detailed opinion is in _Blockchain: Solutions for a responsible use of the blockchain in the context of personal data_, 2018, available at https://www.cnil.fr/sites/default/files/atoms/files/blockchain.pdf

16 Lawfulness has to be established for three essential processing steps: the submission of new data to the blockchain by a submitting participant; its validation, distribution, and replication by the nodes of the blockchain network; and its reading from the blockchain by another participant.

|Table 1: Advantages and Disadvantages of the Central Authority, Shared Responsibility, and Pseudonymization Approaches||||
|---|---|---|---|
|Approach|Description (in terms of controlling and complying with the right to erasure)|Advantages|Disadvantages|
|Central Authority|The network nominates a central authority that acts as the network's single controller. The right to erasure is waived by way of contracts between the central authority and the network's participants, and in consultation with affected third parties if necessary|Easy identification of the data controller. Requires a less intricate solution architecture|Requires centralized control over network rights. If any of the erasure contracts become void, the blockchain may have to be modified|
|Shared Responsibility|All participants in the blockchain network act as joint controllers. The right to erasure is waived by way of mutual contracts between the network's participants, and in consultation with affected third parties if necessary|Does not require centralized control over network rights. Requires a less intricate solution architecture|There must be a legal basis for processing personal data for each participant. If any of the erasure contracts become void, the blockchain may have to be modified|
|Pseudonymization|Data on the blockchain is pseudonymized; only those participants who possess the additional information required for attribution are (joint) controllers. The blockchain solution can comply with the right to erasure by eliminating the additional information|Does not require centralized control over network rights. The right to erasure is upheld by design|Requires an intricate solution architecture to ensure that the additional information required for attribution can be securely shared and reliably eliminated. The blockchain may have to be modified if there is inadvertent attribution from examining patterns of use or context (linkability risk) or any other inadvertent reversal of the pseudonymization (reversal risk)|
#### Three Potential Approaches for Ensuring Blockchain Solutions Are GDPR-Compliant
From a data-privacy perspective, addressing
the three challenges described above requires
a combination of organizational and technical
measures. We have identified three potential
blockchain solution approaches—“central
authority,” “pseudonymization,”[17] and “shared
responsibility.”[18] Table 1 lists the advantages
and disadvantages of these approaches, and
we describe them below. To the best of our knowledge, no single approach is best for every application and context. Moreover, the three approaches are not collectively exhaustive, and some blockchain projects may identify other ways of ensuring they comply with the requirements of the GDPR.
**Central Authority Approach. The central**
authority approach addresses conflicts between
GDPR requirements and a blockchain solution
through organizational measures and by
delegating responsibility to a central authority.
This authority may be a single participant in the
blockchain network or a group of participants. The central authority assumes the role of the data controller and the responsibility for compliance with the GDPR. Moreover, it establishes the rights of network participants and creates, using a contract or another legal instrument, agreements for processing personal data with the operators of the blockchain nodes. The authority also secures the lawful bases for processing personal data and handles any related matters. When the blockchain network processes the personal data of network participants, the central authority has to create contracts with each network participant. When the network processes the personal data of third parties, the central authority must secure the lawful bases for processing the data of those third parties.

17 Article 4(5) of the GDPR defines pseudonymization as "the processing of personal data in such a manner that the personal data can no longer be attributed to a specific data subject without the use of additional information, provided that such additional information is kept separately and is subject to technical and organizational measures to ensure that the personal data are not attributed to an identified or identifiable natural person." Pseudonymization is different from anonymization, which renders personal data "anonymous in such a manner that the data subject is not or no longer identifiable." (Recital 26 of the GDPR).

18 For a comprehensive discussion of the three approaches, see Fridgen, G., Guggenberger, N., Hoeren, T., Prinz, W. and Urbach, N. _Chancen und Herausforderungen von DLT (Blockchain) in Mobilität und Logistik_ (the management summary is in English), Bundesministerium für Verkehr und digitale Infrastruktur, May 2019, available at https://www.bmvi.de/SharedDocs/DE/Anlage/DG/blockchain-gutachten.pdf?__blob=publicationFile.
The right to erasure of personal data is waived
by way of contracts between the central authority
and the network’s participants, and, if necessary,
in consultation with affected third parties. If any
of these contracts become void, the blockchain
network must erase the personal data from the
blockchain. This can be done in several ways. For
instance, each node can remove the data from
its block and recalculate all subsequent blocks.
This recalculation can be documented in another
blockchain. Another option is to use redactable[19]
blockchains. The right to rectify data can be
achieved through technical means by submitting
a rectification transaction to the blockchain. More
specifically, the original transaction is invalidated
by the rectification transaction, but it remains on
the blockchain.
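A minimal sketch of this append-and-supersede pattern follows (illustrative Python over a plain list; the names are assumptions and do not correspond to a specific blockchain API).

```python
# Sketch: rectification on an append-only log. The original entry is never
# removed; a later rectification transaction references and supersedes it.

from dataclasses import dataclass

@dataclass(frozen=True)
class Tx:
    tx_id: str
    payload: str
    rectifies: str | None = None   # id of the transaction being corrected

chain: list[Tx] = []
chain.append(Tx("tx-1", "status=decision_issued"))
# Correction: append a rectification transaction instead of editing tx-1.
chain.append(Tx("tx-2", "status=decision_pending", rectifies="tx-1"))

def current_view(chain: list[Tx]) -> dict[str, str]:
    """Replay the log: a rectification invalidates the entry it references."""
    superseded = {tx.rectifies for tx in chain if tx.rectifies}
    return {tx.tx_id: tx.payload for tx in chain if tx.tx_id not in superseded}

print(current_view(chain))   # tx-1 stays on-chain but is no longer valid
```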
The central authority approach is appropriate
for blockchain solutions that permit the
designation of a single data controller with farreaching competencies.
**Shared** **Responsibility** **Approach.** The
shared responsibility approach is very similar
to the central authority approach but builds on
the premise of sharing responsibilities among
the participants of the blockchain network.
All participants in the network act as joint
controllers and establish an arrangement that
sets out the respective responsibilities of each
participant. The lawful basis for processing
personal data relating to network participants and/or third parties is ideally ensured through mutual contracts. As with the central authority approach, the right to erasure is waived by way of contracts between the network's participants and, if necessary, with affected third parties. Again, the right to rectification can be achieved through rectification transactions.

The shared responsibility approach is appropriate for blockchain networks where all participants have lawful bases for processing all the personal data exchanged.

19 For a discussion of redactable blockchains, see Ateniese, G., Magri, B., Venturi, D. and Andrade, E. "Redactable Blockchain – or – Rewriting History in Bitcoin and Friends," _2017 IEEE Symposium on Security and Privacy_, May 2017, pp. 111-126.
**Pseudonymization** **Approach.** As its
name suggests, this approach is based on
pseudonymizing the data on the blockchain
so that it only qualifies as personal data when
participants possess certain additional off-chain
information that allows the data to be attributed
to a natural person. Pseudonymization of the data
can be achieved using encryption, cryptographic
hash functions, or pseudonymous identifiers.[20]
Only those participants who possess the
additional information required for attribution
are controllers. When these controllers jointly
determine the purposes and means of processing
the pseudonymized data and the data required
for attribution, they are joint controllers. As such,
they need to establish, through a joint control
arrangement, their respective responsibilities for
compliance with the GDPR and for establishing
lawful bases for processing personal data.
Alternatively, they can create data processing
agreements to establish clear responsibilities for
compliance.
Controllers and processors can uphold the
right to erasure by eliminating the additional
information—that is, by depriving themselves of
the ability to attribute data to specific individuals.
This technical measure is considerably more
reliable than an organizational measure based on
waivers but requires a solution that ensures that
the additional information needed for attribution
can be securely shared and reliably eliminated.
The process for rectification mirrors the central
authority and shared responsibility approaches.
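To make this concrete, the sketch below illustrates the three pseudonymization options named above (encryption, cryptographic hashing, and pseudonymous identifiers; see footnote 20) and how erasure reduces to destroying the attribution information. It is illustrative Python using the third-party cryptography package, not the BAMF implementation.

```python
# Sketch of the three pseudonymization variants (illustrative). In each case,
# erasing the "additional information" (key, original value, or mapping)
# removes the ability to attribute the on-chain value to a person.

import hashlib
import secrets
from cryptography.fernet import Fernet

personal = b"applicant-id-4711"          # placeholder personal identifier

# 1) Encryption: the attribution information is the decryption key.
key = Fernet.generate_key()
on_chain_encrypted = Fernet(key).encrypt(personal)

# 2) Hashing: the attribution information is the unhashed original.
on_chain_hashed = hashlib.sha256(personal).hexdigest()

# 3) Pseudonymous identifier: the attribution information is the mapping.
pseudonym = secrets.token_hex(16)
mapping = {pseudonym: personal}          # kept off-chain by the authority

# "Erasure by design": destroy the attribution information, not the chain.
del key                                  # variant 1 becomes unattributable
del mapping[pseudonym]                   # variant 3 becomes unattributable
```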
The pseudonymization approach is appropriate for blockchain networks where the designation of a central authority is not viable or desirable, and where not all participants have lawful bases for the processing of all the personal data exchanged.

20 In the first case, the additional information required for attribution is the decryption key. In the second case, the additional information is the unhashed information, and in the third case, the additional information required for attribution is the mapping of a pseudonymous identifier to a specific identifier.
## Background of the Choice of
Blockchain Technology for the
German Asylum Procedure
In Germany, the asylum procedure involves
close collaboration between various authorities
at the municipal, state and federal levels, with
the BAMF playing a pivotal central role because
it handles and issues decisions regarding asylum
applications. State-level migration authorities
are responsible for the initial registration of
asylum seekers, and for their eventual integration
or repatriation. Several security agencies are
involved in background checks; municipal
governments generally handle housing, and
various health authorities provide medical care.
#### Lessons Learned from Early Efforts to Introduce Centralized Support Systems for the Asylum Procedure
Federal separation of competencies prevents
the delegation of workflow governance to
a central authority, such as the BAMF. This
separation also leads to a significant degree of
variation between workflows, and complicates
the creation of a common workflow model and
the introduction of a conventional workflow
management system.
One essential step in managing the
resulting complexities was to transform
the Central Register of Foreign Nationals
(Ausländerzentralregister, or AZR for short), a
database that contains personal information on
about 20 million foreign nationals, into a shared
repository for certain master data, such as names
and fingerprints. However, this transformation
did not include workflow management features.
Moreover, the transformation revealed
three challenges for creating a centralized
solution for the German asylum procedure.
First, centralization requires the redistribution
of competencies, which, in turn, requires
considerable legislative action. In particular, the
existence of the AZR requires a specific AZR law.
While this law provides a solid legal foundation,
it also reduces the AZR’s flexibility, as technical
updates first require Germany’s parliament to
make a formal legislative update to the AZR law.
Second, centralization creates unbalanced data
guardianship arrangements. In particular, the
BAMF has to assume full responsibility for the
lawfulness of the subsequent processing of any
data in the AZR. Third, centralization leads to
the development of solutions that do not take
account of the specifics of individual workflows.
In particular, the AZR’s data model includes only
a fraction of the data typically exchanged between
authorities over the course of the workflow
involved in processing asylum applications.
#### Identifying Blockchain as a Potential Solution for the Asylum Procedure
These shortcomings encouraged the BAMF
to explore decentralized alternatives for
cross-organizational workflow coordination,
which would require neither the delegation of
workflow governance to a single authority nor
the extension of the AZR. After a preliminary
evaluation, the BAMF narrowed down its
technological options and decided to consider
a blockchain solution. This choice was based on
best practices for the identification of blockchain
use cases and essentially followed the first seven
questions of the ten-step decision path described
by Pedersen et al.[21]
The solution the BAMF sought was a shared
common database for event logs (Question 1
in the ten-step path) that would be used by
multiple parties (Question 2). Although trust is
not necessarily an issue between the authorities
involved in the German asylum procedure,
the federal nature of the process means that it
incorporates a multitude of interests that are
often not fully aligned (Question 3). Concerns
about competencies, data guardianship,
and flexibility caused the BAMF to seek a
decentralized solution (Question 4). Moreover,
it argued that a solution for cross-organizational
workflow coordination would have to offer tiered
rights of access because most authorities involved
in the procedure are only entitled to view specific
data (Question 5). The rules of the procedure,
meanwhile, would remain predominantly the same (Question 6), and the BAMF was interested in creating an immutable log that would facilitate process forensics at a later point (Question 7).

21 Pedersen, A. B., Risius, M., and Beck, R. "A Ten-Step Decision Path to Determine When to Use Blockchain Technologies," _MIS Quarterly Executive_ (18:2), June 2019, pp. 99-115. This article provides a comprehensive discussion of what constitutes a genuine blockchain use case.
#### Choosing the Blockchain Design
Access right considerations caused the BAMF
to choose a private permissioned blockchain
design. Blockchain networks are deemed
“private” when reading access is limited to a
certain set of participants, such as the authorities
involved in the asylum procedure, whereas a
public blockchain network allows anyone to
read transactions. “Permissioned” means that
only preregistered participants can submit new
transactions, validate those transactions, and
append new blocks; in a permissionless network,
any participant can do so.[22] The BAMF chose
to make its blockchain solution permissioned
because the authorities in the asylum procedure
are known and have clearly designated roles and
competencies.
A private permissioned blockchain solution
offered the BAMF several functional and technical
benefits over the status quo. Functionally, such a
solution would improve integrity and increase the
speed of procedures. Lengthy asylum procedures
regularly result in undue hardship for applicants,
negative press coverage, and protracted revisions
in court. The BAMF was particularly interested in
blockchain technology’s ability to use event logs
to quickly establish a shared truth on the status
and course of asylum applications, as illustrated
by the manager of the BAMF’s blockchain project:
_“Blockchain is a promising technology that_
_can support communication and collaboration_
_among the public authorities involved in asylum_
_procedures. It offers many advantages, especially_
_for sharing status updates quickly and securely:_
_the authorities involved can obtain an overview_
_of the course of an applicant’s asylum procedure_
_via the blockchain and can call up the status_
_almost in real time.” Haris Trtovac, Manager of_
the BAMF’s blockchain project
Technically, a blockchain solution could
provide the BAMF with flexibility, which would
only require agreement on data models and application programming interfaces (APIs). Moreover, it recognized blockchain's potential to further the once-only principle:[23]

_"In the future, we should no longer copy data into large nationwide databases. Rather, we should leave the data where we collect it and use a logging layer to make transparent when and where status changes occurred. With a lightweight blockchain solution, we can more easily implement this logging layer than with an expansion of the existing and already complex IT solutions."_ Markus Richter, Vice President of the BAMF

22 For detailed information on the differences between these blockchain design choices, see Androulaki, E. et al. "Hyperledger Fabric: A Distributed Operating System for Permissioned Blockchains," _Proceedings of the Thirteenth EuroSys Conference_, April 23-26, 2018, ACM Digital Library, available at https://dl.acm.org/citation.cfm?id=3190538.
## How the BAMF Ensured its Blockchain Solution Is GDPR-Compliant
#### Proof-of-Concept and Pilot Stages
The BAMF began its blockchain project in
January 2018 with a proof of concept intended
to demonstrate that a blockchain solution could
offer the functionality required to coordinate
the workflow underlying the German asylum
procedure. The prototype used a blockchain to log
and propagate the completion of essential steps
in the procedure. It matched these event logs
to asylum applications using AZR identification
numbers.
Although the prototype was successful
in demonstrating blockchain technology’s
functional merits, the BAMF was concerned about
compliance with the GDPR, which took effect in
May 2018. The BAMF therefore commissioned
a legal opinion,[24] which raised serious concerns
about the prototype’s data model. In particular,
the opinion argued that, while the event logs did not themselves qualify as personal data, the use of the AZR identifiers turned each event log into personal data, which would eventually have to be erased. The opinion urged the BAMF to address three issues:

1. Define the responsibilities for compliance with the requirements of the GDPR

2. Establish the lawful bases for processing personal data

3. Create a design that would allow personal data to be rectified and erased. Ideally, the design would either use a so-called redactable blockchain or pseudonymize the personal data.

23 The European Commission's Communication on the _eGovernment Action Plan 2016–2020_ sets out several principles, including the "once only principle," which states that "public administrations should ensure that citizens and businesses supply the same information only once to a public administration. Public administration offices take action if permitted to internally reuse this data, in due respect of data protection rules, so that no additional burden falls on citizens and businesses.", available at https://ec.europa.eu/digital-single-market/en/news/communication-eu-egovernment-action-plan-2016-2020-accelerating-digital-transformation

24 For the full opinion (in German only), see Hoeren, T. and Baur, J. _Datenschutzrechtliche Zulässigkeit der Übermittlung von Informationen über Migranten zwischen öffentlichen Stellen mittels einer Permissioned-Blockchain_, 2018, available at https://fragdenstaat.de/anfrage/gutachten-blockchainbamf/302470/anhang/ifg_gutachten_blockchain.pdf.
The BAMF addressed these issues during the
subsequent pilot phase. To limit complexity, the
BAMF decided to focus on the Saxony Arrival,
Decision, and Return (AnkER) facility, which
opened in Dresden mid-2018. (The aim is for the
initial processing of all asylum seekers to take
place in AnkER facilities.) To improve information
exchange and expedite procedures, several
authorities are involved in the AnkER procedure.
The BAMF approached Saxony’s central
immigration authority (the LDS), with the aim of
jointly creating and testing a blockchain solution
for coordinating those parts of the AnkER
procedure that required the closest collaboration
between the BAMF and the LDS.
To mitigate the lack of best practices for
managing the requirements of the GDPR and
developing a GDPR-compliant solution, the BAMF
held several idea-generation workshops and
architectural refinement meetings. The BAMF
also met with Germany’s Federal Commissioner
for Data Protection and Freedom of Information
(BfDI). In two workshops, the BAMF and experts
from the BfDI discussed the prototype and the
BAMF’s propositions for a GDPR-compliant
solution.
#### Choosing the Blockchain Solution Approach
Because it wanted to avoid the creation
of a central authority, the BAMF used the
pseudonymization approach to ensure that its
blockchain solution is GDPR-compliant. It also
determined that encryption and hashing were
impractical choices for the pseudonymization
of event logs on the blockchain. It rejected encryption because this would limit the network's ability to validate transactions and because managing and distributing individual encryption keys for each event log would create substantial complexity. Encryption with static keys might eventually lead to all participants being able to decrypt all event logs, which would, in turn, make all participants joint controllers. It chose not to use cryptographic hash functions because this would reduce the blockchain to a simple notarization[25] solution with very limited options for the use of smart contracts. Moreover, such a solution would require the redundant exchange of event logs via another channel.

[Figure 1: The BAMF's three-layer blockchain solution architecture. The blockchain layer consists of blockchain nodes whose entries hold a hash value, status update, time-stamp, authority ID, and pseudonymous ID. The integration layer holds each authority's privacy service and dashboard service, connected through secure communication. The back-end layer holds the workflow management and database systems of the BAMF and Saxony's Central Immigration Authority.]
Instead, the BAMF decided to implement
a pseudonymous identifier solution with so-called privacy services. With this solution, each participant operates an off-chain service that maps pseudonymous identifiers on the blockchain to the IDs used by the participant, and does so in a privacy-compliant, erasable, and rectifiable manner. Without the mapping, the BAMF (and other authorities involved in the blockchain solution) cannot attribute the data on the blockchain to a natural person. To enable the sharing of meaningful information, privacy services can exchange mapping information through secure communication channels.

25 According to the National Notary Association, "Notarization is the official fraud-deterrent process that assures the parties of a transaction that a document is authentic, and can be trusted." For more information, see https://www.nationalnotary.org/knowledge-center/about-notaries/what-is-notarization.
#### Creation of a Joint Control Arrangement
Through an administrative agreement, the
BAMF created a joint control arrangement with
the LDS that established the purpose and means
of processing and assigned responsibilities
for GDPR compliance. In terms of purpose and
means, the agreement specified the storage and
exchange of event logs required for collaborating
via the blockchain solution throughout the
AnkER procedure. In terms of responsibilities,
the agreement specified that the BAMF would
host and assume responsibility for the data
stored on the blockchain and for the privacy
services. However, for each event log submitted
to the blockchain, such as the BAMF’s ruling on
an asylum application, the LDS and the BAMF
would have to independently verify whether
they have a lawful basis for submission; once the
event log is written to the blockchain, the other
authority is responsible for establishing its own
lawful basis before reading the log. For each piece
of mapping information exchanged between the
privacy services, the sending authority must
verify that it has a lawful basis for sending, and
the receiving authority must establish whether
it has a lawful basis for adding the information to
its mapping database. To minimize complexity,
the BAMF and the LDS consulted the relevant
legislation to establish up-front the required
lawful bases for each conceivable type of data
exchange.
#### The BAMF’s Blockchain Solution Architecture
In terms of technical measures, the BAMF
implemented a blockchain architecture with three
layers (see Figure 1). Layer 1 (back-end systems)
holds the existing workflow management
systems and data repositories of the authorities
involved. The other two layers do not need to be
integrated with these back-end systems; instead,
they can be loosely coupled through a set of APIs.
Layer 2 (integration) hosts dashboard services,
which create the event logs and can display to
users data from both the back-end systems and
the blockchain (Layer 3). Layer 2 also hosts
privacy services, which map the pseudonymous
blockchain IDs with the specific IDs used in the
back-end systems. The design of Layers 1 and 2
can vary between the authorities involved in the
blockchain solution; only the blockchain layer is
standardized across all authorities.
**Blockchain Layer. The blockchain layer**
propagates pseudonymized event logs, with
each entry consisting of four elements—a status
update, a time-stamp, the ID of the authority that
created the status update and a pseudonymous
ID. From a functional perspective, these elements
reflect the minimum amount of data required
for effective use. From a GDPR perspective, they
are sufficiently nonspecific to limit the risk of
inadvertent attribution—for example, through the
analysis of the trail of event logs.[26]
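As a sketch of such an entry, the four elements (plus the hash value shown in Figure 1) might look as follows; the field names and the hashing scheme are illustrative assumptions, since the article does not publish the solution's concrete schema.

```python
# Sketch of a pseudonymized event log entry on the blockchain layer
# (illustrative field names). Only the pseudonymous ID links entries to a
# procedure, and only via an authority's off-chain privacy service.

import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)
class EventLog:
    status_update: str     # e.g., "arrival_registered"
    timestamp: str         # ISO-8601 time of the status change
    authority_id: str      # which authority created the update
    pseudonymous_id: str   # attributable only via a privacy service

    def digest(self) -> str:
        """Hash over the entry, as depicted in Figure 1."""
        body = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(body).hexdigest()

entry = EventLog("arrival_registered",
                 datetime.now(timezone.utc).isoformat(),
                 "LDS", "a3f9c2")
print(entry.digest())
```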
**Integration** **Layer—Privacy** **Services.**
In order to attribute the event logs on the
blockchain, the BAMF created a network of
authority-specific privacy services, with each
authority hosting a standalone privacy service.
Each service contains databases that map
the pseudonymous IDs to the specific IDs—
such as application or personal identification
numbers—used in the authority’s back-end
systems. The privacy services support role-based
access procedures for different user groups
within authorities, and can exchange mapping
information. Such an exchange is important for
the handover of an asylum application to another
authority.[27] Moreover, the services can exchange
requests for the erasure of mappings related to a
pseudonymous ID.
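A minimal sketch of such a privacy service follows; it is illustrative only, since the real service additionally enforces role-based access and exchanges mappings over secured channels, and all names are assumptions.

```python
# Sketch: per-authority privacy service mapping pseudonymous blockchain IDs
# to back-end IDs, with handover to another authority and erasure on request.

class PrivacyService:
    def __init__(self, authority: str):
        self.authority = authority
        self._mapping: dict[str, str] = {}   # pseudonymous ID -> back-end ID

    def register(self, pseudonymous_id: str, backend_id: str) -> None:
        self._mapping[pseudonymous_id] = backend_id

    def attribute(self, pseudonymous_id: str) -> str | None:
        """Resolve an on-chain pseudonym; None means no attribution possible."""
        return self._mapping.get(pseudonymous_id)

    def handover(self, pseudonymous_id: str, other: "PrivacyService") -> None:
        """Share mapping info (legally, both sides need a lawful basis)."""
        other.register(pseudonymous_id, self._mapping[pseudonymous_id])

    def erase(self, pseudonymous_id: str) -> None:
        """Honor an erasure request by deleting the local mapping."""
        self._mapping.pop(pseudonymous_id, None)

bamf, lds = PrivacyService("BAMF"), PrivacyService("LDS")
lds.register("a3f9c2", "AZR-123456")
lds.handover("a3f9c2", bamf)   # application handed over to the BAMF
lds.erase("a3f9c2")            # the LDS can no longer attribute the pseudonym
```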
**Integration** **Layer—Dashboard** **Services.**
In order to submit event logs to the blockchain
and display data from both the blockchain and
the back-end systems, the BAMF implemented
dashboard services. Event logs can be submitted
manually to the dashboard services, or by
drawing (pull-based mechanism) or receiving
(push-based mechanism) the data from the
back-end systems. The dashboard services then
convert the event log data to comply with the
blockchain’s data model. To display the data,
users access the dashboard services through
a web browser and enter various commands,
such as “display the history of a certain
procedure” or “display all procedures that meet
certain conditions." The dashboard will then, in accordance with the access rights of the user and the mapping information in the privacy service, collect and display attributed event logs from the blockchain layer and further information from the back-end systems of the authority. Importantly, a user can only view information for which the authority and the user have clearance and a lawful basis.

26 The risk of inadvertent attribution from spatiotemporal data—i.e., data points with both location and time attributes—is high because as few as four data points can be sufficient to uniquely identify a person (linkability risk). For a detailed discussion of the linkability risk of anonymized mobility data, see de Montjoye, Y.-A., Hidalgo, C. A., Verleysen, M. and Blondel, V. D. "Unique in the Crowd: The privacy bounds of human mobility," _Scientific Reports_ (3), March 2013, Article 1376.

27 From a legal perspective, every such exchange equates to the processing of personal data, and requires both the sending and the receiving authority to establish a lawful basis.
#### Ensuring Privacy by Design
**Erasure by Design.** Erasure of personal
data from a blockchain may become necessary
for several reasons, such as simple errors in
entering data, or the expiration of a lawful basis.
Explicit time limits in the German Asylum Act,
for instance, ensure that authorities do not store
personal data for more than a maximum of ten
years after the completion of a procedure.
The erasure procedure implemented in the
BAMF’s blockchain solution is triggered by
an authority issuing a command to its privacy
service, which deletes the respective mapping
and submits a so-called “erasure event log” to
the blockchain. An erasure event log on the
blockchain invalidates the pseudonymous
blockchain ID and prevents further use of this
ID by all authorities in the blockchain network.
Moreover, the log informs other authorities of
the erasure. Each joint controller who receives
this information can then use the erasure event
log as a trigger to re-examine all the lawful bases.
For those events for which the joint controllers
still have a lawful basis, they can create and
submit copies to the blockchain under a new
pseudonymous ID.
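The sketch below walks through that procedure on a toy event log; the names and data model are illustrative assumptions, not the BAMF implementation.

```python
# Sketch: erasure event logs. An authority retires a pseudonymous ID
# network-wide; authorities that still have a lawful basis re-submit their
# events under a fresh pseudonym.

import secrets

chain: list[dict] = [
    {"type": "status", "pid": "a3f9c2", "status": "arrival_registered", "by": "LDS"},
    {"type": "status", "pid": "a3f9c2", "status": "decision_issued", "by": "BAMF"},
]

def erase(pid: str, by: str) -> None:
    """Append an erasure event log that invalidates the pseudonymous ID."""
    chain.append({"type": "erasure", "pid": pid, "by": by})

def resubmit_with_lawful_basis(pid: str, keep: set[str]) -> str:
    """Copy still-lawful events to a fresh pseudonym after an erasure."""
    new_pid = secrets.token_hex(16)
    lawful = [e for e in chain if e["type"] == "status"
              and e["pid"] == pid and e["by"] in keep]
    for event in lawful:
        chain.append({**event, "pid": new_pid})
    return new_pid

erase("a3f9c2", by="LDS")                       # LDS also deletes its mapping
resubmit_with_lawful_basis("a3f9c2", keep={"BAMF"})
```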
Conceptually, the erasure procedure could
also be useful for off-chain information exchange
related to an event log. Currently, authorities
check whether data requests from other
authorities are legitimate, but often do not keep a
record of these requests or the data they forward.
This means that authorities are unable to direct
requests for erasure and rectification to specific
authorities. The blockchain solution, however,
would ensure that such requests for erasure
reach all authorities in the blockchain network.
**Rectification** **by** **Design.** In addition
to the erasure procedure, the BAMF also
implemented a rectification procedure.
Rectification may become necessary if, for
example, false event logs are submitted to the
blockchain or duplicate blockchain IDs need
to be reconciled. The rectification procedure
mirrors the erasure procedure and is triggered
by specific rectification actions in the back-end
systems. Rectification actions are submitted to
the blockchain as “rectification events.” Other
authorities can respond to rectification events by,
for instance, approaching the issuing authority for
further information, and/or stopping or reversing
subsequent steps in the asylum procedure. If
there are duplicates, the privacy service adjusts
its mapping and retires one of the duplicate
blockchain IDs.
## Recommendations for
Ensuring Blockchain Solutions
Are GDPR-Compliant
We distilled three key learnings from the
BAMF project and have translated these into
three recommendations. These recommendations
should be interpreted as high-level guidelines
rather than as a reference architecture or legal
advice. In line with the European Parliament,[28]
we advise each blockchain project to seek its own
legal assessment and to design its own portfolio
of organizational and technical measures.
#### 1. Avoid Storing Personal Data on a Blockchain
Blockchain solutions should be designed so
that it is not necessary to store personal data on
the blockchain. Instead, personal data should
remain in systems that permit rectification and
erasure. This advice also applies to any attribute
on a blockchain that allows identification of an
individual by analyzing patterns of use or context.
#### 2. A Blockchain Solution that Needs to Process Personal Data Should Use a Private and Permissioned Pseudonymization Approach
If a blockchain solution will process personal
data, we recommend using the pseudonymization
approach, because a central authority or shared
responsibility approach will be impractical in
most instances. Moreover, a pseudonymization
approach simplifies the identification of
controllers. All those who hold the additional
information required for attribution qualify as
(joint) controllers unless otherwise specified in an agreement for processing personal data.

28 "Compatibility between distributed ledgers and the GDPR can only be assessed on the basis of a detailed case-by-case analysis that accounts for the specific technical design and governance set-up of the relevant blockchain use case." European Parliament, 2019.
When the solution requires two or more
participants to share additional information for
attribution, we strongly recommend establishing
a private and permissioned blockchain network.
This will simplify the establishment and
management of arrangements required for joint
control or agreements for processing personal
data. In particular, a private network enables
the establishment of a controlled introduction
process during which new participants can be
added to the arrangements or agreements. A
permissioned network facilitates the creation of a
flexible and role-based model for the allocation of
responsibilities.
To avoid inadvertent attribution, however,
even pseudonymized data should be limited to
an absolute minimum. Moreover, the solution
should store information required for attribution
in a highly secure manner, as any uncontrolled
disclosure may require the blockchain to be
modified.
#### 3. A Blockchain Solution that Needs to Coordinate Cross-Organizational Workflows Should Use a Private and Permissioned Pseudonymization Approach with Identifier Mapping
For cross-organizational workflows, the
pseudonymization approach with identifier
mapping—i.e., separate mapping databases for
each participant—provides the best trade-off
between value and security. Although storing
only hashed event logs on the blockchain would
be more secure, this approach would require the
redundant exchange of the unhashed data and
would limit the use of the blockchain solution
to simple notarization. Storing encrypted event
logs on the blockchain would be just as useful as
identifier mapping but would require each event
log to be encrypted with a separate encryption
key, which would significantly increase the
complexity and vulnerability of the overall
blockchain solution.
## Concluding Remarks
_“GDPR_ _compliance_ _is_ _not_ _about_ _the_
_technology, it is about how the technology_
_is used. Just like there is no GDPR-compliant_
_Internet or GDPR-compliant artificial intelligence_
_algorithm, there is no such thing as a GDPR-_
_compliant blockchain technology. There are only_
_GDPR-compliant use cases and applications.”[29]_
The BAMF has created a GDPR-compliant
blockchain application through a combination of
organizational and technical measures. The BAMF
application for processing asylum applications
thus demonstrates that blockchain technology
and the GDPR are not incompatible and suggests
that organizations should continue to explore and
develop blockchain solutions that will involve the
processing of personal data. Because blockchain
solutions emphasize decentralized governance,
they could be a particularly promising alternative
in cross-organizational settings that prevent
the delegation of workflow governance to a
central authority. A next essential step for the
widespread deployment of GDPR-compliant
blockchain applications will be to establish
standards and reference architectures that
ensure the interoperability of various blockchain
technologies and solutions.
## Appendix A: The German
Asylum Procedure
The German Constitution grants anyone
persecuted on political grounds the right to
asylum. This right also extends to those fleeing
from violence, war, or terrorism.
29 Lyons, T., Courcelas, L. and Timsit, K., op. cit., October 16,
2018.
|Table 2: BAMF Blockchain Team Members Interviewed||
|---|---|
|Role in the Blockchain Project|Focus|
|Director of the AnkER and functional project lead with more than 15 years' experience|Functional benefits, design principles, and data privacy|
|Business process manager with more than 15 years' experience|Functional benefits, design principles, and data privacy|
|Lawyer, GDPR compliance-responsible team member with more than 15 years' experience|Data privacy|
|Lawyer, GDPR compliance-responsible team member with more than 15 years' experience|Functional benefits, design principles, and data privacy|
|Project manager with more than 20 years' experience, responsible for communication with the c-suite|Functional benefits, design principles, and data privacy|
Figure 2 shows a simplified version of the
German asylum procedure. On arriving in
Germany, federal law requires asylum seekers
to immediately report to federal or state police
and make a request for asylum. The police
will then take them to the closest registration
agency, where they will have access to medical
care, and the registration agency provides them
with a proof-of-arrival document that grants a
temporary right to stay. While at the registration
agency, asylum seekers can also register their
application with the BAMF. The BAMF checks if
another member state of the European Union
has previously registered the applicant. If that
check is positive, the Dublin Regulation stipulates
that the refugee must be returned to the member
state in which he or she was first registered.
This check, however, can take up to several days.
Meanwhile, refugees may have to relocate to
a different registration agency based on their
nationality and Germany’s federal quota system.
If the check is negative, the BAMF will hold a
personal interview at the closest appropriate
registration agency or a regional office. A
BAMF caseworker will then decide whether to
approve or reject the application for asylum. The
caseworker justifies the decision in a written
document that is given to the applicant. If the
caseworker rejects the application, the applicant
can appeal the decision in court. Favorable
decisions result in the applicant being granted
a residence permit. If the application is rejected,
the relevant immigration authority repatriates
the applicant. More details on the German asylum
procedure are available in the BAMF’s overview
document.[30]

|Table 3: External Blockchain Experts Interviewed|||
|---|---|---|
|Interviewee|Experience|Focus|
|Serial blockchain entrepreneur|Founder and CEO of a blockchain startup that has implemented a blockchain-based payment system in the refugee context|Functional benefits, design principles, and principles for blockchain decision paths|
|Blockchain consultant|Blockchain consultant who has worked since 2015 for T-Systems MMS, and has been involved with multiple blockchain proofs of concept and pilots|Functional benefits, design principles, and principles for blockchain decision paths|
|Blockchain researcher and consultant|Blockchain researcher and solution architect who has worked since 2018 for Centrifuge, which provides an open, decentralized operating system that aims to connect the global financial supply chain|Functional benefits, design principles, and impact of blockchain on IT strategies|
|Blockchain researcher and consultant|Associate partner who has worked since 2008 for a Fortune 500 technology company closely involved with Hyperledger Fabric|Functional benefits, design principles, and data privacy|
|Blockchain developer|Blockchain developer and solution architect who has worked since 2016 for the NEM Foundation, which provides technical support for the NEM ecosystems|Functional benefits, design principles, and data privacy|
|Blockchain researcher and consultant|Founder and CEO of a blockchain startup founded in 2016 to provide secure and GDPR-compliant data exchange|Functional benefits, design principles, and data privacy|
|Blockchain researcher and consultant|Junior IT manager who has worked since 2017 for a globally active automotive supplier on technology research and implementation|Functional benefits, design principles, and data privacy|
|Blockchain developer|Blockchain developer and solution architect who has worked since 2016 for a globally active automotive supplier|Functional benefits, design principles, and data privacy|
|Blockchain entrepreneur|Co-founder of a blockchain startup that offers digital infrastructure services for innovative electricity tariffs|Functional benefits, design principles, and data privacy|
|Blockchain researcher and consultant|Blockchain Ph.D. student and consultant who has worked since 2014 for one of the largest research institutions in Europe|Functional benefits, design principles, and data privacy|

-----
## Appendix B: Research Method
There is a dearth of detailed accounts of and
knowledge about developing GDPR-compliant
blockchain applications. In the public sector, in
particular, most governmental agencies remain
unfamiliar with blockchain technology. Our
research thus required us to provide substantial
guidance to the BAMF on developing its
blockchain solution, as well as to other agencies,
such as the Federal Commissioner for Data
Protection and Freedom of Information, to help
them assess the solution’s GDPR-compliance.
As a consequence, we chose an action
research[31] approach, with three of our co-authors
providing advisory services to the BAMF’s
blockchain project from January 2018 onward.
These three co-authors familiarized the BAMF
team with blockchain technology and organized
an ongoing cycle of cross-team reflections,
30 _The stages of the German Asylum Procedure: An Overview of_
_the Individual Procedural Steps and the Legal Basis, 2016, Federal_
Office for Migration and Refugees, available at http://www.bamf.de/
SharedDocs/Anlagen/EN/Publikationen/Broschueren/das-deutscheasylverfahren.pdf?__blob=publicationFile.
31 Action research emphasizes (participatory) observation in the
field to address a specific problem (in this case, enabling digital
federalism through a GDPR-compliant blockchain architecture). For
more information on action research, see Baskerville, R. and Myers,
M. D. “Special Issue: Action Research in Information Systems,” MIS
_Quarterly (28:3), September 2004, pp. 329-335._
which continued throughout the project.[32] One
co-author, for instance, worked closely with
the IT vendor hired by the BAMF to implement
the blockchain solution and guided the BAMF’s
architectural board. Two other co-authors were
not involved with the project team’s operations
but acted as external observers. The combination
of three collaborating and two observing
researchers allowed us to maintain high
standards of evidence gathering and academic
rigor.
In the course of the project, we gathered
evidence from four different sources:
1. We held various workshops on functional,
technical, and data privacy issues.
2. We regularly participated in and
contributed to developer meetings and
architectural reviews.
3. We analyzed public blockchain interviews
of BAMF employees and conducted 15
additional semistructured interviews
with blockchain project team members
and blockchain experts. These interviews
lasted between 40 minutes and two hours
and each was recorded.
4. We reviewed and analyzed various internal
and external documents on the blockchain
project.
#### Blockchain Workshops and Contribution to Technical Meetings
During the project, the three collaborating
co-authors held nearly 30 blockchain
workshops. The range of attendees included
BAMF employees, employees of Saxony’s
central immigration authority (the LDS),
employees of the Federal and the Saxony
Ministries of the Interior, a delegation from
the Federal Commissioner for Data Protection
and Freedom of Information, employees of the
Dutch Immigration and Naturalization Service,
and several other organizations. In these
workshops, we focused on various functional,
technological and data privacy issues. To deliver
the educational segments of these workshops,
32 Avison, D., Baskerville, R., Myers, M. and Wood-Harper, T. “IS
action research: can we serve two masters? (panel session),” Kock,
N., panel chairman, Proceedings of the 20th International Conference
_on Information Systems, December 1999, pp. 582-585._
-----
we adapted the method of Fridgen et al.[33] In the
conceptual segments, we used creative elements
to access the attendees’ prior experiences and
knowledge and to further their involvement.

|Table 4: BAMF Employees Public Blockchain Interviews Analyzed|||
|---|---|---|
|Public Interview Reported in:|Interviewee and Position|Focus|
|Behörden Spiegel|Dr. Markus Richter (BAMF CIO from Jan 2018 – July 2018 and BAMF vice president since July 2018)|Functional and technical benefits and data privacy|
|Der Spiegel|Dr. Markus Richter (BAMF CIO from Jan 2018 – July 2018 and BAMF vice president since July 2018)|Functional and technical benefits and data privacy|
|Bundesamt für Migration und Flüchtlinge – Digitalisierungsagenda 2020|Haris Trtovac (BAMF blockchain project manager since April 2018)|Functional and technical benefits and data privacy|
|Bundesamt für Migration und Flüchtlinge – Digitalisierungsagenda 2020|Kausik Munsi (BAMF CTO)|Functional and technical benefits, data privacy and impact of blockchain on the BAMF’s IT strategy|
In addition to these workshops, we
collaborated with the BAMF team members on a
daily basis in stand-up meetings, development
meetings, and management calls. We were
routinely involved in architectural as well
as sprint review and planning meetings. In
particular, we suggested multiple refinements
to the blockchain solution and helped resolve
technical and data-privacy issues. For instance,
we developed the erasure and rectification
concepts and contributed essential elements to
the privacy service concept.
#### Interviews with BAMF Stakeholders, Team Members, and Experts
Given the novelty of blockchain technology
and the related challenges, we complemented
our action research approach by conducting
interviews, which are a preferred method for
extracting explorative knowledge. In total, we
conducted 15 interviews, five with project
team members and 10 with various blockchain
experts. We used an interview guide for these
semistructured interviews, which allowed the
interviews to flow naturally but also ensured
33 Fridgen, G., Lockl, J., Radszuwill, S., Rieger, A., Schweizer, A.
and Urbach, N. “A Solution in Search of a Problem: A Method for the
Development of Blockchain Use Cases,” Proceedings of the Ameri
_cas Conference on Information Systems, August 2018, pp. 1-10._
comparability between the interviews. An
open dialogue, rather than the rigorous use of
predefined questions, helped to maximize the
depth of insights provided by interviewees,
who thus delivered valuable knowledge that
supported the subsequent development of the
recommendations.[34]
Because all blockchain team members
preferred to remain anonymous, the table below
provides only anonymized information on their
roles in the blockchain project and their prior
experience.
We also conducted ten semistructured
interviews with external blockchain experts (as
listed in the next table), some of whom preferred
to remain anonymous.
In addition, we analyzed public blockchain
interviews given by four BAMF employees, listed
in Table 4.
#### Analysis of Internal and Public Documents
We also analyzed several hundred pages of
BAMF internal memos, reports, analyses, and
meeting minutes. Importantly, these internal
documents included highly relevant strategy
papers, data privacy analyses, and architectural
specifications. We also reviewed the BAMF’s
public documents, such as its digitalization
34 Urquhart, C., Lehmann, H. and Myers, M. D. “Putting the
‘theory’ back into grounded theory: guidelines for grounded theory
studies in information systems,” Information Systems Journal (20:4),
July 2010, pp. 357-381.
-----
agenda and blockchain webpage. Lastly, but
importantly, we reviewed legal analyses and
the data privacy advice issued by lawyers
and renowned German scholars concerning
blockchain, legal decisions in comparable
scenarios, and governmental papers on
comparable blockchain use cases.
#### Analyzing the Evidence from the Sources
To analyze the evidence, we first consolidated
our sources of data and the data itself to
eliminate redundancies. Next, we clarified
imprecise statements and added—where
needed—explanatory comments to data points.
Third, we assigned codes to the data points and
developed tentative principles through open
and, later, axial coding. Where the data related
to new phenomena, we marked the passages
and discussed them within the research team,
building new principles when necessary.[35] We
iteratively adapted the codes until they were
collectively exhaustive and mutually exclusive.
Subsequently, we discussed the resultant
principles with the practitioners in order to gain
other perspectives.
## About the Authors
**Alexander Rieger**
Alexander Rieger (alexander.rieger@fim-rc.de)
is a doctoral candidate at the Finance &
Information Management (FIM) Research Center
and the Project Group Business & Information
Systems Engineering of the Fraunhofer FIT,
University of Augsburg. His professional interests
include innovative digital technologies such as
blockchain and artificial intelligence, and, more
specifically, their strategic implications and
adoption. Prior to joining the BAMF’s blockchain
project in February 2018, Alex spent several years
working in industry and consulting.
**Florian Guggenmos**
Florian Guggenmos (florian.guggenmos@fim-rc.de)
is a doctoral candidate at the Finance &
Information Management (FIM) Research Center
and the Project Group Business & Information
Systems Engineering of the Fraunhofer FIT,
University of Bayreuth. His current research
35 Strauss, A. and Corbin, J. M. Basics of Qualitative Research:
_Grounded Theory Procedures and Techniques, 1990, Sage Publica_
tions.
focuses on systemic risk management as well
as data privacy and information security,
particularly in the context of digitalization
projects. Florian has also worked on a range of
applied research projects. He joined the BAMF’s
blockchain project in February 2018.
**Jannik Lockl**
Jannik Lockl (jannik.lockl@fim-rc.de) is a
doctoral candidate at the Finance & Information
Management (FIM) Research Center and the
Project Group Business & Information Systems
Engineering of the Fraunhofer FIT, University
of Bayreuth. His main focus is on the Internet
of Things (IoT) as well as the wider adoption
of digital technologies and the socioeconomic
embedding of blockchain applications. Jannik
worked as a consultant on a variety of industry
projects before joining the BAMF’s blockchain
project in February 2018.
**Gilbert Fridgen**
Gilbert Fridgen (gilbert.fridgen@uni.lu) is
PayPal-FNR PEARL Chair in Digital Financial
Services in the Interdisciplinary Center for
Security, Reliability and Trust (SnT) at the
University of Luxembourg. His work focuses
on smart grids, the machine economy, and
blockchain technology in both the public and
private sectors. Gilbert’s work has been published
in several prominent IS, management, computer
science and engineering journals. He has also
managed various industry research projects
and received multiple research grants. Gilbert
has served as expert counsel to many German
government bodies, including the Bundestag
and six German federal ministries, and also to
the European Commission through its European
Blockchain Partnership.
**Nils Urbach**
Nils Urbach (nils.urbach@fim-rc.de) is a
professor of information systems and strategic
IT Management at the University of Bayreuth. He
is deputy director of the Finance & Information
Management (FIM) Research Center and of
the Project Group Business & Information
Systems Engineering of Fraunhofer FIT. He is
also a co-founder and director of the Fraunhofer
BlockchainLab. Nils’ research focuses on digital
transformation, blockchain, and the management
of artificial intelligence. His work has been
published in leading journals including _MIS Quarterly Executive_, _Journal of Information Technology_, and _The Journal of Strategic Information Systems_. Before his academic career,
Nils worked for several years as a management
consultant.
-----
|
{
"disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.17705/2msqe.00020?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.17705/2msqe.00020, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "GREEN",
"url": "https://www.fim-rc.de/Paperbibliothek/Veroeffentlicht/941/wi-941.pdf"
}
| 2,019
|
[
"JournalArticle"
] | true
| 2019-12-01T00:00:00
|
[] | 15,269
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/0159df66618cb667a54a529247fedac5640af2da
|
[
"Computer Science"
] | 0.921355
|
Investigating Service Discovery, Management and Network Support for Next Generation Object Oriented Services
|
0159df66618cb667a54a529247fedac5640af2da
|
Intelligence in Networks
|
[
{
"authorId": "2133614",
"name": "Bilhanan Silverajan"
},
{
"authorId": "39982166",
"name": "J. Hartman"
},
{
"authorId": "153831770",
"name": "Jani Laaksonen"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"IFIP Int Conf Intell Netw Telecommun Netw Intell",
"IFIP International Conference on Intelligence in Networks: Telecommunication Network Intelligence",
"SMARTNET",
"Intell Netw"
],
"alternate_urls": null,
"id": "bf640a72-6f1c-4143-aaec-ae21f1bb6eff",
"issn": null,
"name": "Intelligence in Networks",
"type": "conference",
"url": null
}
| null |
###### Investigating Service Discovery, Management and Network Support for Next Generation Object Oriented Services
Bilhanan Silverajan, Joona Hartman, and Jani Laaksonen
_Dept. of Information Technology, Tampere University of Technology, P.O. Box 553, FIN-33101_
_Tampere, Finland_
_{bilhanan | hartmanj | jlaakso}@cs.tut.fi_
Abstract The network computing industry has eagerly embraced technologies,
welcoming an ever-increasing variety of new service discovery protocols and
object architectures. With this abundance now offered across a wide
collection of environments, technologies that offer standardized interfaces for
the discovery process, while supporting communication for several different
types of service access technologies, will provide the greatest achievable
interoperability and resilience in the long-term. In this paper, we introduce a
distributed architecture based on using directory services to significantly
reduce the complexity of managing the information and services required to
support next-generation networked applications, by providing automatic
service discovery and a single coherent model for representing the data
managed by supporting services. Standards-based solutions are used, and a
prototype implementation of the CORBA Naming Service that has been
designed to illustrate how the architecture incorporates distributed object
models, directory services and multicast-based dynamic service discovery is
presented.
Keywords: Distributed Object Architectures, Next-Generation Applications, Directory
Services, Service Discovery
-----
###### 1. INTRODUCTION
Recent years have witnessed the network and distributed computing
industry embracing various technologies, media and content as a direct
consequence of converging technical and business interests of
telecommunications networks and Internet service providers. In the long run,
this will likely lead to integration of all types of communications: a
bewildering array of devices such as mobiles, fixed phones, PDAs,
embedded devices, workstations and PCs will all be used to provide
seamless communications between users and services. This has led to
significant investment by many service and content providers into
developing and furthering network and protocol-oriented technologies, for it
is widely accepted that such a convergence will likely result in a structurally
stable core packet-switched public network such as the Internet, having
many edges acting as private gateways to proprietary and subscriber-based
intranets offering a multitude of services using several different kinds of
technology. These private networks may be both circuit and packet switched
networks and will comprise subscriber-based IP connections, enterprise
nodes and services, fixed and mobile telephone connections, intelligent
network services and so on.
In this paper, we introduce a distributed architecture based on using
directory services to significantly reduce the complexity of managing the
information and services required to support next-generation networked
applications, by providing automatic service discovery and a single coherent
model for representing the data managed by supporting services. The
architecture promises to be scalable and flexible enough to address basic
issues that will arise in distributed computing as well as user, device and
application mobility in the future. Section 2 discusses the type of network
connectivity and support necessary to cope with the rising demand of mobile
networked users and applications. Section 3 then sets out the aims and
objectives of the proposed architecture, while Section 4 provides an overview
and a discussion of the proposed architecture, as well as describing a
prototype that was implemented. Section 5 presents some performance
measurements that were made, with the conclusions in Section 6.
###### 2. REQUIREMENTS FOR NETWORK SUPPORT
Regardless of how services are utilized, two dominant scenarios are
nevertheless heavily anticipated to influence and drive the development of
next generation networks: User (and device) mobility and application
mobility.
-----
###### 2.1 User mobility
Network connectivity and support for user mobility is a necessary
measure which is already being provisioned for in many organizations. The
overall demand for mobility has indeed not shown any significant decrease,
as evident by the tremendous popularity and ever-increasing adoption of low
cost portable devices such as mobile computing devices, PDAs and phones
for computing and networking needs. This trend will very likely be even
more widespread, given that roaming agreements signed amongst mobile
operators may not only include current demands for seamless GSM-based
voice or data calls, but might encompass future GPRS- and UMTS-enabled
services.
In supporting user mobility, user identities, profiles, and sessions will
need to be preserved across network boundaries. Very often user mobility is
associated with device mobility, in which the mobile device retains its
identity when roaming. The user may thus have an associated device which
moves with him, but the user may remain unaware of the underlying
networking issues, such as whether the device assumes temporary network
addresses on roaming networks or retains its address during base station
handovers.
However, some applications resident on the device may need an
awareness of the current location to access local services such as printers and
file servers and might thus have to perform service discovery to a certain
extent. Furthermore, visiting devices may also enrich the network being
visited by offering their own services to the various applications and users
that are resident on that particular network. Access control mechanisms
would thus need to be exerted by the network hosting the user to ascertain
exactly which local services are visible and what levels of service utilization
are permitted.
There are several technologies which exist today that inherently support
user and device mobility. These include wireless LAN technologies such as
802.11b, Bluetooth technology for piconet- and scatternet-oriented ad-hoc
networking, as well as movement across networks as specified by mobile IP.
###### 2.2 Application mobility
Application mobility can occur independently of user and device
mobility, in the sense that all or certain parts of an application may
autonomously or semi-autonomously migrate across heterogeneous network
spaces. An application may also consist of numerous non-mobile
components or objects which transparently reside in several different parts of
a network, with each component offering well-known services to the others.
-----
Network connectivity and support for application mobility on the other
hand is foreseen in the medium to long-term future. It can be regarded as the
next major wave in mobility after user and device mobility, which will have
an important bearing on the underlying infrastructure.
Such an approach for application development is being undertaken in
mobile code and agent technologies and frameworks, as well as distributed
object-oriented architectures. Current examples include Java programs,
automatic downloads of codecs by multimedia applications, Jini[1]-enabled
programs, CORBA [2] applications and the forthcoming distributed
applications of the Microsoft .NET framework [3]. Location awareness will
be of lesser importance for migration when compared to computing resource
availability of the present host and network.
Owing to the transparent, distributed and seemingly autonomous nature
of such applications, it is rather unlikely they would enjoy as rapid and
widespread an adoption across untrusted commercial networks as user and
device mobility would. Rather, it is more probable that application mobility
would garner more support and fill a demand for important niche services in
enterprise level computing and service provisioning, within corporate
intranets as well as private research networks of a more academic nature.
###### 2.3 Supporting requirements
Such an environment of connectivity and mobility will have a significant
impact on the way next-generation applications, both autonomous and
interactive are implemented, discovered, managed and accessed. One can
easily perceive that both user mobility and application mobility will become
killer scenarios for existing networks and services. These will create an
overwhelming strain on the way current networks and legacy applications
manage, interact and serve the necessary information that needs to be
delivered to the end-users and applications. In both cases, a more robust
supporting infrastructure for handling next-generation applications needs to
be developed in a standardized manner that must also prove scalable and
adapt to accommodate the research and development challenges that will
manifest themselves.
The overall design must possess some flexibility, as applications being
built today are likely to function as part of bigger, integrated systems
tomorrow. The architect of tomorrow's distributed systems will face large-
scale distributions both in terms of the number of users and connected
systems running atop heterogeneous networks. There may not be a
revolutionary change in the way networks and services will be designed for
supporting next-generation applications. In fact, all indications point to the
development being evolutionary, with network architects and scientists
-----
examining, using and extending existing solutions and standards to support
new applications and services as the need arises.
The huge potential of being able to harness an increased subscriber base
of a converged market has led to several sets of standards continuously being
introduced for communication protocols and architectures. In many ways,
the objectives and target applications to be supported by these technologies
can be remarkably similar. Even the technologies may mirror each other.
However, in reality, getting applications and services supported by one set of
standards to communicate with another poses interoperability challenges.
This also has the unfortunate effect of producing fragmented or duplicate
solutions. As an example, IETF's Service Location Protocol [4], Sun
Microsystems' Jini, Microsoft's UPnP [5] and wireless technologies
promoted by IrDA and Bluetooth all implement their own service discovery
mechanisms for potentially common types of services such as printing.
Another notable example would include an object location service such as
CORBA and RMI naming services.
Thus, a service designer targeting a product for multiple networking
environments, object services or service discovery methods may be faced
with the dilemma of having to develop and manage several different kinds of
applications, each supporting a specific protocol or object model but in all
probability having duplicate information models.
Instead, trying to achieve the same goal by supporting protocol bridging
and conversion methods for several service discovery methods and location
services, with different front-ends using a common back-end application-
level information repository storing service records and attributes, would be
a far less harrowing experience.
The challenge then becomes one of developing a unified yet scalable
service information management system that stores and manages service
data in a standardized and generic manner, but still provides the appropriate
data to requesting applications via a specific discovery method, using
gateways and information mapping and translation.
###### 3. AIMS AND OBJECTIVES
A prototype distributed environment that would enable devices, object-
oriented client applications, and services to automatically search for a
particular capability, and to request and establish interoperable sessions with
other clients or servers was investigated to meet the following objectives:
- To provide applications, services and devices a standard method for
describing and advertising their capabilities
-----
- To allow mobile users and applications the possibility to discover local
services upon application start-up or movement within network space or
administrative zones automatically or with as little static service
discovery configuration as possible
- Interworking non-invasively or transparently with existing code and
applications
- Ease of administration and maintenance by providing a single consistent
model of available services that can scale, at the very minimum, to meet
enterprise computing needs, thus reducing or eliminating the need for
duplication of information needed to support various access methods
- Integration into existing or commonly prevalent network resources,
supporting components and infrastructure that may already be in place,
but remaining extensible enough to meet future needs
- Well-known user and application access control and security mechanisms
can be enforced, and existing authentication practices and encryption
methods within the organization can be used.
###### 4. ARCHITECTURAL OVERVIEW
Figure 1 illustrates a prototype implementation of a CORBA Naming
Service [6] that has been designed to fulfill the objectives mentioned in
section 3. CORBA has become increasingly important in distributed
computing for the Internet as well as telecommunications owing to its
language independence and separation of the object interface away from its
implementation, and the increased use of distributed objects needing
fundamental CORBA-based object services such as the Naming Service
cannot be ignored.
In this prototype, the back end service repository was implemented as a
directory service using the Lightweight Directory Access Protocol (LDAP)
[7]. This allowed the possibility to leverage the existing prevalence of using
LDAP-based services and servers based on a well-known IETF-standard
protocol already commonly deployed at the organizational level. Moreover
much research is being done to make LDAP services secure, scalable,
distributed and fault-tolerant. Clients would be able to retrieve information
about services either natively through LDAP, or through a lightweight
gateway/proxy that will implement the specific mappings necessary to
support the particular type of service technology (in this case the CORBA
Naming Service) the client would be using. As the figure shows, the model
would be capable of supporting other kinds of front-ends as well.
-----
[Figure omitted: block diagram showing a CORBA Name Server front-end, a Java JNDI/RMI front-end, and a Microsoft Active Directory front-end connected to a common LDAP back-end, with multicast- and SLP-aware service agents; all TCP/IP network components and applications run over either IPv4 or IPv6.]

_Figure 1._ Prototype directory-based CORBA Naming Service
The discovery of the CORBA Naming Service by CORBA client and
server objects was made possible using the Service Location Protocol (SLP)
as a bootstrapping protocol for multicast-based service discovery.
A brief description of SLP, LDAP and the CORBA Naming Service,
together with how they have been used or implemented, is given in the
following subsections.
###### 4.1 Automatic service discovery using SLP
The Service Location Protocol (SLP) provides a flexible and scalable
framework for providing hosts with access to information about the
existence, location, and configuration of networked services. Traditionally,
users have had to find services by knowing the name of a network host or its
network address [4]. SLP eliminates the need for a user to know the name of
a network host supporting a service. Rather, the user supplies the desired
type of service and a set of attributes which describe the service. Based on
that description, SLP resolves the network address of the service for the user.
SLP models client applications as User Agents (UAs), while services are
advertised by Service Agents (SAs). Applications that would like to
advertise services contact the SA running on the same host to register a
Uniform Resource Locator (URL) to be advertised, such as
"service:ftp://ftp.company.com/".
A service can moreover have attributes that describe the service (or the
URL) in more detail. As an example, a web server would have its URL as
"service:http://www.company.com" with attributes possibly listing the name,
email address or a telephone number of the webmaster. SLP also allows
services to be administratively grouped into scopes that could then be
controlled for service provisioning to UAs.
Client programs (UAs) that would like to find services in a network also
use standardized IETF function calls to find available services in a network
and are capable of querying available service types, specific service type
URLs as well as attributes associated with a specific URL. Directory Agents
(DAs) are an optional third category of agents that SLP defines, which allows
for scalability by caching SA advertisements for direct interaction with UAs.
Service discovery can either be statically configured or allowed to take
advantage of multi casting or broadcasting features in a network for dynamic
configuration.
The choice of using SLP as a primary service discovery protocol was
influenced by the fact that it has been specified by the IETF, and hence did
not need to be adapted to be used over TCP/IP. Moreover, it is generic and
independent of any particular language and object architecture. SLP's
simplicity combined with its feature set also enables it to interwork with
various other service discovery methods and environments such as Jini using
bridging [8] and Apple Computer's next-generation AppleTalk [9]. Much
standardization work is also in progress, such as efforts in combining SLP
with LDAP [10], and extensions for SLP for use over IPv6 [11].
An open-source implementation of SLP, named OpenSLP [12] that
implements SLP version 2, was primarily used in developing the SLP UA
and SA components of the architecture. In the later stages of development
and testing, the SLP system libraries of Sun's Solaris 8 were also used
successfuIly. As only a minimum configuration was involved, no Directory
Agents were used, and the architecture was developed to use dynamic
discovery and communication between UAs and SAs using multicasting
instead.
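To illustrate the discovery step just described, the following sketch uses the standardized SLP C API (RFC 2614) as implemented by OpenSLP to locate the Naming Service and fetch its IOR attribute. The service type string and the "IOR" attribute tag follow the usage scenario in Section 4.4; they are assumptions about this prototype's conventions rather than fixed SLP constants.

```cpp
// Minimal UA sketch; compile with something like: g++ find_ns.cpp -lslp
#include <slp.h>
#include <iostream>
#include <string>

// Receives each service URL found for the requested service type.
static SLPBoolean urlCallback(SLPHandle, const char* srvurl,
                              unsigned short /*lifetime*/,
                              SLPError errcode, void* cookie) {
    if (errcode == SLP_OK && srvurl)
        *static_cast<std::string*>(cookie) = srvurl;
    return SLP_TRUE;  // keep receiving results until SLP_LAST_CALL
}

// Receives the attribute list (here, the stringified IOR) for a URL.
static SLPBoolean attrCallback(SLPHandle, const char* attrlist,
                               SLPError errcode, void* cookie) {
    if (errcode == SLP_OK && attrlist)
        *static_cast<std::string*>(cookie) = attrlist;
    return SLP_TRUE;
}

int main() {
    SLPHandle h;
    if (SLPOpen(nullptr, SLP_FALSE, &h) != SLP_OK) return 1;

    std::string url, attrs;
    // Multicast a Service Request for the naming service (default scopes).
    SLPFindSrvs(h, "service:namingservice", "", "", urlCallback, &url);
    if (!url.empty())
        // Ask the SA for the IOR attribute registered with that URL.
        SLPFindAttrs(h, url.c_str(), "", "IOR", attrCallback, &attrs);

    std::cout << "URL: " << url << "\nAttributes: " << attrs << "\n";
    SLPClose(h);
    return 0;
}
```

The synchronous calls block until multicast convergence completes, so a UA can operate without any Directory Agent, matching the minimal configuration used by the prototype.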
###### 4.2 Information storage and retrieval using LDAP
LDAP was originally developed as a front end to X.500, the OSI
directory service. X.500 defines the Directory Access Protocol (DAP) for
clients to use when contacting directory servers. Currently at version 3,
LDAP however has evolved to eventually provide most of the functionality
of DAP at a much lower cost and defines a reasonably simple mechanism for
-----
Internet clients to query and manage an arbitrary database of hierarchical
attribute/value pairs over a TCP/IP connection. LDAP is rapidly gaining
significant Internet support, including the support of many companies, such
as Novell, Sun, HP, IBM, SGI, AT&T and Banyan, and is the focus of much
standardization activity in the IETF.
The LDAP directory service model is based on entries. An entry is a
collection of attributes that has a name, called a distinguished name (DN).
The DN is used to refer to the entry unambiguously. Each of the entry's
attributes has a type and one or more values. The types are typically
mnemonic strings, like "cn" for common name, or "mail" for e-mail address.
The values depend on what type of attribute it is. Directory entries are
arranged in a hierarchical tree-like structure that reflects political,
geographic, and/or organizational boundaries, representing people,
organizational units, printers, documents, or just about anything else one can
think of.
The information model stored in the LDAP directory backend of our
prototype was loosely modeled upon the campus computer science
department building, in both the physical and organizational sense. The top
level root entry (o=CS,c=FI) represents the computer science building which
was then subdivided into four floors representing the structure, personnel,
laboratories, services and IP subnets resident on these floors. The basic idea
in storing service and object information in the LDAP directory in this
manner was not only to store the access method of the object or the URL of
the service, but also to append the geographical or organizational location
information of the respective services and objects. Any LDAP browser could
then be used to obtain detailed information about services offered within a
particular subnet or subnets (such as printers), by inspecting the relevant
attributes of the LDAP entries, such as their network addresses, physical
location as well as different access methods (network printing,
Bluetooth/IrDA connectivity, AppleTalk, and so on). The OpenLDAP [13]
project provided the necessary tools and libraries needed for this part of the
architecture.
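As a concrete illustration of how a stringified object reference might be written into the directory, the following sketch uses the OpenLDAP C API to add an entry carrying the corbaIor attribute from the RFC 2714 schema cited below [15]. The DN, credentials, object-class combination and the truncated IOR are placeholders; the paper does not document the exact entry composition used by the prototype.

```cpp
// Sketch only; compile with something like: g++ register_ior.cpp -lldap
#define LDAP_DEPRECATED 1  // exposes ldap_simple_bind_s in newer OpenLDAP
#include <ldap.h>
#include <cstdio>

int main() {
    LDAP* ld = nullptr;
    if (ldap_initialize(&ld, "ldap://localhost:389") != LDAP_SUCCESS) return 1;
    // Simple bind for brevity; production code would use SASL/TLS.
    if (ldap_simple_bind_s(ld, "cn=Manager,o=CS,c=FI", "secret") != LDAP_SUCCESS)
        return 1;

    char* classes[] = {(char*)"corbaContainer",
                       (char*)"corbaObjectReference", nullptr};
    char* cnVals[]  = {(char*)"CorbaMessageBroker", nullptr};
    char* iorVals[] = {(char*)"IOR:0100000024000000...", nullptr};  // truncated

    LDAPMod ocMod  = {LDAP_MOD_ADD, (char*)"objectClass", {classes}};
    LDAPMod cnMod  = {LDAP_MOD_ADD, (char*)"cn",          {cnVals}};
    LDAPMod iorMod = {LDAP_MOD_ADD, (char*)"corbaIor",    {iorVals}};
    LDAPMod* mods[] = {&ocMod, &cnMod, &iorMod, nullptr};

    const char* dn = "cn=CorbaMessageBroker,ou=HC414,"
                     "ou=Telecommunications,ou=FourthFloor,o=CS,c=FI";
    int rc = ldap_add_ext_s(ld, dn, mods, nullptr, nullptr);
    if (rc != LDAP_SUCCESS)
        std::fprintf(stderr, "add failed: %s\n", ldap_err2string(rc));

    ldap_unbind_ext_s(ld, nullptr, nullptr);
    return rc == LDAP_SUCCESS ? 0 : 1;
}
```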
###### 4.3 Object location discovery using CORBA Naming Service
The Naming Service was implemented in C++ with MICO [14], an open-
source CORBA implementation which conforms to CORBA 2.3, using the
DSI (Dynamic Skeleton Interface) to dynamically handle the object
invocations. Apart from being designed specifically to aid in the
development of gateways, the CORBA DSI functionality allows the
interface of the CORBA naming service to be expanded to
serve other kinds of naming services, since type-checking is done at runtime.
Clients are free to use either the DII (Dynamic Invocation Interface) or the
SII (Static Invocation Interface) to invoke the methods on the server object.
The CORBA objects are stored in the directory as stringified object
references. LDAP offers a schema for representing CORBA Object
References in an LDAP Directory [15], with each entry storing a textual
description of the CORBA object as well as its IOR (Interoperable Object
Reference). The IOR is the attribute with the single-most relevance in this
case, as it is the primary means used by CORBA clients to locate and
communicate with CORBA server objects in a network. Figure 2 displays
the LDAP directory as seen by a freeware LDAP browser, Softerra LDAP
Browser [16], running on Windows 2000, after the Naming Service was
launched and was used to register a simple CORBA Messaging server
application, showing its various attributes, location as well as its DN in the
window titlebar.
[Figure omitted: screenshot of an LDAP browser tree rooted at o=CS,c=FI, expanded through ou=FourthFloor, ou=Telecommunications and ou=HC414 down to the entry cn=CorbaMessageBroker, alongside sibling entries such as cn=Manager, ou=people, ou=printers, ou=rooms and ou=users.]

_Figure 2._ LDAP Browser showing an entry representing a CORBA object
Because the CORBA Naming Service specified by the
Object Management Group directly supports the notion of name-to-object
associations using name bindings relative to a naming context [6], an entire
Naming Service is conceptually a naming graph with hierarchical
relationships among parent and child nodes representing contexts and
directed edges forming names. This relationship is directly supported by the
hierarchical nature of an LDAP directory structure. Also, a name is
composed of a concatenation of components, with each component
consisting of an _id_ attribute and a _kind_ attribute that together
denote the bound object. The Naming Service specifications do not restrict
the way in which the naming system should interpret, assign or manage these
attributes. Thus our prototype maps these structures to be directly relevant
to the entries in the LDAP tree, in such a way that the _kind_ attribute is
mapped to the corresponding LDAP entry's attribute type, such as "cn" or
"ou", and the _id attribute is mapped to its value. Therefore,_ if the CORBA
Naming Service binds itselfto the LDAP Tree with the DN of"o=CS,c=FI",
then
"ou=FourthFloor,ou=Telecommunications,ou=HC414,cn=CorbaMessageBro
ker" would actually be a valid name string for binding or resolving an object
using our prototype Naming Service, as it would convert the given name
sequence to a string representing the DN ofthe LDAP entry.
At the moment, the Naming Service implements four methods: _bind_,
_unbind_ and _rebind_ are used by CORBA server objects to register themselves,
and _resolve_ is used by CORBA client objects to find the IORs of server
objects corresponding to their names from the Naming Service.
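The name-to-DN conversion just described can be captured in a few lines. The sketch below uses simple stand-in types instead of the generated CosNaming C++ classes so that it is self-contained; note that it reverses the component sequence, on the assumption that the prototype produces standard LDAP DNs, which list the most specific component first.

```cpp
#include <string>
#include <vector>

// Stand-ins for CosNaming::NameComponent's id and kind fields.
struct NameComponent { std::string id; std::string kind; };
using Name = std::vector<NameComponent>;

// Maps a CosNaming-style name onto the DN of the corresponding LDAP
// entry: kind -> LDAP attribute type ("ou", "cn", ...), id -> its value.
std::string toDn(const Name& name, const std::string& rootContext) {
    std::string dn;
    for (auto it = name.rbegin(); it != name.rend(); ++it)
        dn += it->kind + "=" + it->id + ",";
    return dn + rootContext;
}
```

For example, the name sequence (FourthFloor/ou, Telecommunications/ou, HC414/ou, CorbaMessageBroker/cn) with root context "o=CS,c=FI" yields "cn=CorbaMessageBroker,ou=HC414,ou=Telecommunications,ou=FourthFloor,o=CS,c=FI", the DN of the entry shown in Figure 2.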
###### 4.4 Example usage scenario
For starting the Naming Service, the following steps are undertaken:
1. The OpenLDAP server daemon, slapd, must first be running.
2. The OpenSLP server daemon, slpd, is launched on all machines which
desire to host SAs.
3. The CORBA Naming Service application is launched, and binds to a well-
known location in the LDAP back-end. This will be known as the root
context of the Naming Service. At the moment, this is configured
statically; however, this can easily be extended to be dynamic as well.
4. The Naming Service registers itself with its local slpd, passing its URL (e.g.
"service:namingservice://myhost.company.com") and its stringified IOR
as an attribute.
5. The Naming Service is now ready to serve CORBA objects.
Registering a CORBA server application with the Naming Service would
proceed in the following manner:
1. The CORBA server starts up and behaves as a UA, multicasting a
Service Request packet to the Administratively Scoped SLP Multicast
Address [17], 239.255.255.253, requesting the service type
"namingservice". Currently, UAs use default scopes with a multicast
packet TTL value of 8.
2. The SA responsible for the Naming Service unicasts a Service Reply
back to the requesting UA, returning its URL and thus specifying the
location of the Naming Service.
3. Armed with the URL, the UA once again multicasts an Attribute Request
packet requesting the IOR attribute for that URL.
4. The SA once again unicasts directly to the UA with an Attribute Reply,
furnishing it with the IOR attribute registered with that URL.
5. Armed with enough knowledge now, the CORBA server application
contacts the Naming Service directly with the IOR it possesses, and
registers itself with the Naming Service using the bind call with two
parameters: its name and its IOR.
6. The Naming Service maps the information into an LDAP call and stores
the information in its LDAP database backend.
For a CORBA client to find a CORBA server, it does the
following:
1. The CORBA client first discovers the Naming Service in a similar way to
the CORBA server.
2. The client calls the resolve method of the Naming Service, passing the name
of the CORBA server object as a parameter and obtaining an IOR from
the Naming Service, with which the client then invokes methods on the
CORBA server object, as sketched below.
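A client-side sketch of these two steps follows, assuming a CORBA 2.3 C++ ORB such as MICO (header names and link flags vary by ORB). The stringified IOR of the Naming Service is hard-coded here in place of the SLP exchange shown above, and the truncated IOR string is a placeholder.

```cpp
#include <CORBA.h>
#include <coss/CosNaming.h>  // MICO's path; other ORBs differ
#include <iostream>

int main(int argc, char* argv[]) {
    CORBA::ORB_var orb = CORBA::ORB_init(argc, argv);

    // Step 1: IOR of the Naming Service, as obtained via SLP.
    const char* nsIor = "IOR:010000002800...";  // placeholder, truncated

    CORBA::Object_var obj = orb->string_to_object(nsIor);
    CosNaming::NamingContext_var ns = CosNaming::NamingContext::_narrow(obj);
    if (CORBA::is_nil(ns)) return 1;

    // Step 2: resolve the server object by its (id, kind) name component.
    CosNaming::Name name;
    name.length(1);
    name[0].id   = CORBA::string_dup("CorbaMessageBroker");
    name[0].kind = CORBA::string_dup("cn");

    CORBA::Object_var server = ns->resolve(name);
    CORBA::String_var ior = orb->object_to_string(server);
    std::cout << "Resolved: " << ior.in() << std::endl;
    return 0;
}
```

The full hierarchical name from Section 4.3 could equally be supplied by extending the sequence with one (id, kind) pair per level.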
###### 5. RESULTS
The basic network topology used for developing and testing the prototype
implementation is depicted in Figure 3. All three subnets were multicast-aware,
with the FreeBSD machine hosting the LDAP, Naming Service, CORBA
server objects and SLP Service Agents.
As an initial indication of the performance of the prototype, the timing
measurements that were recorded are tabulated in Table 1. All SLP and
CORBA client calls in Table 1 were made from the SunBlade machine to the
FreeBSD machine. For the Naming Service, the time taken to execute a
single resolve call was measured. The time taken to execute the same call
was also measured against MICO's native naming service, which runs as a
standalone daemon in the network. For the SLP measurements, we timed the
discovery of the Naming Service. No DAs were employed in this topology,
hence communication occurred directly between the UA in the SunBlade
machine and the SA in the FreeBSD machine. The measurements were taken
over a period of approximately 3 days.
-----
[Figure omitted: physical network topology with three 100 Mbps Fast-Ethernet subnets interconnected by a Cisco 6509 integrated switch/router, a Cisco 7206 VXR router and a Cisco 3548 switch; a Pentium 133 MHz/64 MB machine running FreeBSD 4.2-STABLE on subnet B and a SunBlade 100/512 MB UltraSPARC IIe machine running Solaris 8 on subnet C.]

_Figure 3._ Physical Network Topology
Approximately two-thirds of the execution time of the resolve call was
consumed by the LDAP operation, while a small fraction was consumed in
the Naming Service front-end that serves as a gateway by mapping
parameters and return results to and from the CORBA Naming Service API
and the OpenLDAP API. On the other hand, to use MICO's own Naming
Service, it is necessary to initially run an object adapter daemon called
micod, create an entry for the naming service in an implementation
repository and pass the clients the address of the naming service. A fair bit of
manual intervention, static configuration and use of command-line options are
hence necessary. Also, the entries of the Naming Service are stored in
memory.
_Table 1._ Timing measurements in milliseconds

||Prototype|MICO's Naming Service|SLP|
|---|---|---|---|
|Average duration|36.5843|3.60773|4.24144|
|Maximum duration|148.506|34.252|10.203|
|Minimum duration|34.289|3.213|3.960|
Thus, bearing in mind the objectives laid out in Section 3, we firmly
believe that the offset in performance of our prototype implementation is
justified by virtue of the scalable, replicating and fault-tolerant properties
inherent in the LDAP component of the architecture. Automatic service
discovery is also ideal for object services, owing to their more volatile and
migratory nature as compared to fixed network services. The design of the
architecture with the components described also allows for the flexible
interaction amongst them, depending on the needs of the organization. As an
example, the OpenLDAP server could be modified to become SLP-aware so
that even the Naming Service front-end could perform service discovery to
dynamically discover its backend, as opposed to the configuration that our
prototype used. Interworking between SLP and LDAP is being studied, so
that if Directory Agents are used in the architecture, SLP service URLs and
attributes can be stored in an LDAP directory [10].
###### 6. CONCLUSIONS
Almost all the software components used in designing, building and
testing the prototype implementation have been based on code obtained from
highly active open-source projects. This has made troubleshooting and bug-
solving relatively easy through open mail-list forums and by simply
browsing through the code. Minor problems, such as UAs in OpenSLP
insisting on multicasting to search for DAs even when configured not to,
were quickly solved, and more optimal service discovery times were
achieved (from a very rough estimate, UAs using Solaris's SLP library for
service discovery took far longer, about 12 seconds).
At the moment, SLP over IPv6 within our prototype has yet to be tested.
However, the following changes are required to have the Service Location
Protocol work over IPv6 [11]:
- Eliminating support for broadcast SLP requests
- Address specification for IPv6 addresses in URLs
- Use of IPv6 multicast addresses and IPv6 address scopes
- Restricted propagation of service advertisements
The architecture proposed in this paper also does not preclude the use of
any security models and can remain fully conformant to any and all security
mechanisms standardized by the many RFCs and other specification
documents for its various components. As an example, SLP authentication
and LDAP security and access control mechanisms can be used to enhance
the architecture to allow employees and regular users to use all standard
services of the organization's internal network, but restricting visitors and
anonymous users to a smaller subset, perhaps by presenting a different
location service front-end during service discovery which has a more limited
level of access and visibility of the LDAP directory.
-----
Being able to store, access and manipulate data stored in a common
directory format with LDAP also has a huge advantage in being able to use
common LDAP browsers (with varying levels of quality) supported in many
platforms. This allows a far greater ease of management and maintenance
than having to maintain several different service models, each having their
own customized administration tools. This also significantly reduces the
learning curve for efficient tool use.
The network computing industry has eagerly embraced technologies,
welcoming an ever-increasing variety of new service discovery protocols
and object architectures. With this abundance now offered across a wide
collection of environments, technologies that offer standardized interfaces
for the discovery process, while supporting communication for several
different types of service access technologies, will provide the greatest
achievable interoperability and resilience in the long-term. In this respect,
the proposed architecture in this paper holds good promise for supporting
enterprise-Ievel next-generation computing needs.
###### REFERENCES
[1] Sun Microsystems: Jini Network Technology, http://www.sun.com/jini/
[2] OMG: The Common Object Request Broker: Architecture and Specification.
CORBA V2.3, June 1999.
[3] Microsoft Corporation: .NET, http://msdn.microsoft.com/net
[4] IETF: RFC 2608, "Service Location Protocol, Version 2", June 1999.
[5] Microsoft Corporation: Universal Plug and Play forum, http://www.upnp.org
[6] OMG: Naming Service Specification, February 2001.
[7] IETF: RFC 2251, "Lightweight Directory Access Protocol (v3)", Dec 1997.
[8] Erik Guttman, James Kempf: Automatic Discovery of Thin Servers: SLP, Jini
and the SLP-Jini Bridge, IECON, San Jose, 1999.
[9] Apple Computer Inc.: Mac OS X Server – Network & Security,
http://www.apple.com/macosx/server/networksecurity.html
[10] IETF: RFC 2609, "Service Templates and Service: Schemes", June 1999.
[11] IETF: RFC 3111, "Service Location Protocol Modifications for IPv6", May
2001.
[12] The OpenSLP Project, http://www.openslp.org
[13] The OpenLDAP Project, http://www.openldap.org
[14] MICO – Mico Is COrba, http://www.mico.org
[15] IETF: RFC 2714, "Schema for Representing CORBA Object References in an
LDAP Directory", October 1999.
[16] Softerra LDAP Browser, http://www.ldapadministrator.com
[17] IETF: RFC 2365, "Administratively Scoped IP Multicast", July 1998.
-----
|
{
"disclaimer": "Notice: The following paper fields have been elided by the publisher: {'abstract', 'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1007/978-0-387-35584-9_5?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1007/978-0-387-35584-9_5, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "BRONZE",
"url": "https://link.springer.com/content/pdf/10.1007/978-0-387-35584-9_5.pdf"
}
| 2,002
|
[
"JournalArticle"
] | true
| 2002-04-08T00:00:00
|
[] | 7,739
|
en
|
[
{
"category": "Law",
"source": "s2-fos-model"
},
{
"category": "Business",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/015d50faf4b17a5b7a910e87a7327d9a7896e62d
|
[] | 0.90359
|
It's Time to Regulate Stablecoins as Deposits and Require Their Issuers to Be FDIC-Insured Banks
|
015d50faf4b17a5b7a910e87a7327d9a7896e62d
|
Social Science Research Network
|
[
{
"authorId": "119536238",
"name": "Arthur E. Wilmarth"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"SSRN, Social Science Research Network (SSRN) home page",
"SSRN Electronic Journal",
"Soc Sci Res Netw",
"SSRN",
"SSRN Home Page",
"SSRN Electron J",
"Social Science Electronic Publishing presents Social Science Research Network"
],
"alternate_urls": [
"www.ssrn.com/",
"https://fatcat.wiki/container/tol7woxlqjeg5bmzadeg6qrg3e",
"https://www.wikidata.org/wiki/Q53949192",
"www.ssrn.com/en",
"http://www.ssrn.com/en/",
"http://umlib.nl/ssrn",
"umlib.nl/ssrn"
],
"id": "75d7a8c1-d871-42db-a8e4-7cf5146fdb62",
"issn": "1556-5068",
"name": "Social Science Research Network",
"type": "journal",
"url": "http://www.ssrn.com/"
}
| null |
2021
# It's Time to Regulate Stablecoins as Deposits and Require Their Issuers to Be FDIC-Insured Banks
Arthur E. Wilmarth Jr.
George Washington University Law School, awilmarth@law.gwu.edu
Recommended Citation
41 Banking & Financial Services Policy Report No. 2 (Feb. 2022), at 1-20.
-----
**It’s Time to Regulate Stablecoins as Deposits and**
**Require Their Issuers to Be FDIC-Insured Banks**
Arthur E. Wilmarth, Jr.*
December 16, 2021
**Introduction**
In November 2021, the President’s Working Group on Financial Markets (PWG) issued a
report analyzing the rapid expansion and growing risks of the stablecoin market. As explained in
PWG’s report, “[s]tablecoins are digital assets that are designed to maintain a stable value
relative to a national currency or other reference assets.”[1] PWG’s report determined that
stablecoins pose a wide range of potential hazards, including the risks of inflicting large losses
on investors, destabilizing financial markets and the payments system, supporting money
laundering, tax evasion, and other forms of illicit finance, and promoting dangerous
concentrations of economic and financial power.
PWG’s report called on Congress to pass legislation that would (i) require all issuers of
stablecoins to be banks that are insured by the Federal Deposit Insurance Corporation (FDIC),
and (ii) “ensure that payment stablecoins are subject to appropriate federal prudential oversight
on a consistent and comprehensive basis.” PWG also recommended that federal agencies and
the Financial Stability Oversight Council (FSOC) should use their “existing authorities” to
“address risks associated with payment stablecoin arrangements . . . to the extent possible.”[2]
*Professor Emeritus of Law, George Washington University Law School.
1 President’s Working Group on Financial Markets et al., Report on Stablecoins (Nov. 2021) (quote at 1)
[[hereinafter PWG Stablecoin Report], https://home.treasury.gov/system/files/136/StableCoinReport_Nov1_508.pdf;](https://home.treasury.gov/system/files/136/StableCoinReport_Nov1_508.pdf)
_see also Alexis Goldstein, Written Testimony before the Senate Comm. on Banking, Housing, and Urban Affairs 1_
(Dec. 14, 2021) (“Stablecoins are crypto assets that attempt to maintain a stable value, either through a basket of
reserve assets acting as collateral (asset-backed stablecoins), or through algorithms (algorithmic stablecoins).”)
[[hereinafter Goldstein Testimony], https://www.banking.senate.gov/imo/media/doc/Goldstein%20Testimony%2012-](https://www.banking.senate.gov/imo/media/doc/Goldstein%20Testimony%2012-14-21.pdf)
[14-21.pdf. This paper focuses on asset-backed stablecoins, which account for most of the stablecoin market.](https://www.banking.senate.gov/imo/media/doc/Goldstein%20Testimony%2012-14-21.pdf)
2 PWG Stablecoin Report, supra note 1, at 16, 18.
-----
At present, stablecoins are mainly used to make payments for trades in cryptocurrency
markets and to provide collateral for derivatives and lending transactions involving
cryptocurrencies.[3] However, technology companies are exploring a much broader range of
potential uses for stablecoins, including their use as digital currencies for making purchases and
sales of goods and services as well as person-to-person payments. In October, Facebook
launched a “pilot” of its Novi “digital currency wallet,” which uses the Pax Dollar stablecoin as
its first digital currency.[4] Novi enables its customers to make person-to-person payments within
and across national borders and is part of Facebook’s larger plan to establish itself as “a
challenger in the payments system.”[5] Facebook intends to “migrate” Novi’s digital wallet to its
proposed Diem stablecoin as soon as Facebook receives regulatory approvals for Diem.[6]
Facebook’s launch of Novi indicates that stablecoins could potentially become a form of
“private money” that is widely used in consumer and commercial transactions. Federal agencies
have not yet issued rules governing the issuance and distribution of stablecoins. Federal and
state officials have only rarely enforced consumer and investor protection laws against issuers
and distributors of stablecoins. PWG’s report calls on federal agencies and Congress to take
3 _Id. at 7-10; see also Andrew Ackerman, “Stablecoins in Spotlight as U.S. Begins to Lay Ground for Rules on_
Cryptocurrencies,” Wall Street Journal (Sept. 25, 2021) (“For now, stablecoins are used mainly by investors to buy
and sell crypto assets on exchanges . . . [and] as collateral for derivatives”),
[https://www.wsj.com/articles/stablecoins-in-spotlight-as-u-s-begins-to-lay-ground-for-rules-on-cryptocurrencies-](https://www.wsj.com/articles/stablecoins-in-spotlight-as-u-s-begins-to-lay-ground-for-rules-on-cryptocurrencies-11632562202?mod=article_inline)
[11632562202?mod=article_inline; Statement by SEC Chair Gary Gensler, “President’s Working Group Report on](https://www.wsj.com/articles/stablecoins-in-spotlight-as-u-s-begins-to-lay-ground-for-rules-on-cryptocurrencies-11632562202?mod=article_inline)
[Stablecoins” (Nov. 1, 2021) [hereinafter Gensler Statement], https://www.sec.gov/news/statement/gensler-](https://www.sec.gov/news/statement/gensler-statement-presidents-working-group-report-stablecoins-110121)
[statement-presidents-working-group-report-stablecoins-110121 (stating that “more than 75 percent of trading on all](https://www.sec.gov/news/statement/gensler-statement-presidents-working-group-report-stablecoins-110121)
crypto trading platforms occurred between a stablecoin and some other token” in October 2021).
4 See infra notes 17-21, 45-47 and accompanying text.
5 Hannah Murphy & Siddarth Venkataramakrishnan, “Facebook says ready to launch digital wallet,” Financial
_Times (Aug. 18, 2021) (quoting David Marcus, who was then the leader of Facebook’s Novi project),_
[https://www.ft.com/content/a8512417-1fde-481a-b282-2f892e3c3b51; Siddarth Venkataramakrishnan & Hannah](https://www.ft.com/content/a8512417-1fde-481a-b282-2f892e3c3b51)
Murphy, “Facebook launches digital wallet Novi,” Financial Times (Oct. 19, 2021),
[https://www.ft.com/content/b9a61950-a32c-4c77-95fe-fc0d00021a0f.](https://www.ft.com/content/b9a61950-a32c-4c77-95fe-fc0d00021a0f)
6 Venkataramakrishnan & Murphy, supra note 5 (quoting David Marcus).
-----
immediate steps to establish a federal oversight regime that could respond effectively to the
dangers created by stablecoins.[7]
This paper strongly supports three regulatory approaches recommended in PWG’s report.
First, the Securities and Exchange Commission (SEC) should use its available powers to regulate
stablecoins as “securities” and protect investors and securities markets. However, the scope of
the SEC’s authority to regulate stablecoins is not clear, and federal securities laws do not provide
adequate safeguards to control the systemic threats that stablecoins pose to financial stability and
the payments system.
Second, the Department of Justice (DOJ) should designate stablecoins as “deposits” and
should bring enforcement actions to prevent issuers and distributors of stablecoins from
unlawfully receiving “deposits” in violation of Section 21(a) of the Glass-Steagall Act. Section
21(a) offers a promising avenue for regulatory action, but its provisions contain uncertainties and
gaps and do not provide a complete remedy for the hazards created by stablecoins. The most
significant gap in Section 21(a) allows state (and possibly federal) banking authorities to charter
special-purpose depository institutions that could issue and distribute stablecoins without
obtaining deposit insurance from the FDIC.
Third, Congress should adopt legislation mandating that all issuers and distributors of
stablecoins must be FDIC-insured banks. That requirement would compel all stablecoin issuers
7 PWG Stablecoin Report, supra note 1, at 2-3, 10-18. For additional discussions of the dangers created by
stablecoins and possible regulatory responses to those perils, see Ackerman, supra note 3; Testimony by Hilary J.
Allen before the Senate Comm. on Banking, Housing, and Urban Affairs (Dec. 14, 2021) [hereinafter Allen
[Testimony], https://www.banking.senate.gov/imo/media/doc/Allen%20Testimony%2012-14-211.pdf; Dan Awrey,](https://www.banking.senate.gov/imo/media/doc/Allen%20Testimony%2012-14-211.pdf)
_Bad Money, 106 Cornell Law Review 1, 6-8, 39-45 (2020); Nate DiCamillo, “The US is dragging its heels on critical_
[stablecoin regulations,” Quartz (Nov. 8, 2021), https://qz.com/2083636/what-are-stablecoins-and-how-will-they-be-](https://qz.com/2083636/what-are-stablecoins-and-how-will-they-be-regulated/)
[regulated/; Gary Gorton & Jeffrey Y. Zhang, “Taming Wildcat Stablecoins” (Sept. 30, 2021), available at](https://qz.com/2083636/what-are-stablecoins-and-how-will-they-be-regulated/)
[http://ssrn.com/abstract=3888752; Jeanna Smialek, “Why Washington Worries About Stablecoins,” New York Times](http://ssrn.com/abstract=3888752)
[(Sept. 23, 2021), https://www.nytimes.com/2021/09/17/business/economy/federal-reserve-virtual-currency-](https://www.nytimes.com/2021/09/17/business/economy/federal-reserve-virtual-currency-stablecoin.html)
[stablecoin.html.](https://www.nytimes.com/2021/09/17/business/economy/federal-reserve-virtual-currency-stablecoin.html)
-----
and distributors and their parent companies to comply with federal laws that protect the safety,
soundness, and stability of our banking system and obligate banks to operate in a manner
consistent with the public interest. Requiring stablecoin issuers and distributors to be
FDIC-insured banks would also maintain the longstanding U.S. policy of separating banking and
commerce. It would prevent Facebook and other Big Tech firms from using stablecoin ventures
as building blocks for “shadow banking” empires that would erode consumer protections, impair
competition, subvert the effectiveness of financial regulation, and potentially unleash systemic
crises across our financial and commercial sectors during severe economic downturns and
financial disruptions.
**Analysis**
**1.** **The Rapid Expansion and Escalating Risks of Stablecoins**
The volume of outstanding stablecoins has mushroomed during the past two years,
growing from less than $6 billion in January 2020 to $150 billion in December 2021.[8] The rapid
expansion of the stablecoin market has mirrored the explosive growth of all cryptocurrency
markets. The total market capitalization of cryptocurrencies increased almost nine-fold – from
$350 billion to $3 trillion – between September 2020 and November 2021.[9]
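
The growth multiples implied by these figures are easily verified. The following minimal Python sketch uses only the rounded dollar amounts quoted above (from the sources cited in notes 8 and 9); it is a back-of-the-envelope check, not additional data:

```python
# Back-of-the-envelope check of the growth figures quoted in the text
# (rounded amounts from the sources cited in notes 8 and 9).

stablecoin_supply_jan_2020 = 6e9    # "less than $6 billion," January 2020
stablecoin_supply_dec_2021 = 150e9  # "$150 billion," December 2021

crypto_cap_sep_2020 = 350e9         # "$350 billion" total market cap
crypto_cap_nov_2021 = 3e12          # "$3 trillion" total market cap

print(f"Stablecoin supply: {stablecoin_supply_dec_2021 / stablecoin_supply_jan_2020:.0f}x "
      "(at least, since the January 2020 base was below $6 billion)")
print(f"Crypto market cap: {crypto_cap_nov_2021 / crypto_cap_sep_2020:.1f}x "
      "(i.e., 'almost nine-fold')")
```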
At present, stablecoins are mainly used to speculate in cryptocurrencies and other digital
assets. Stablecoins are the leading form of payment for trades executed on cryptocurrency
exchanges, and stablecoins are used as collateral for derivatives and lending transactions
involving digital assets. In addition, stablecoins play “a central role in facilitating trading,
8 The Block, “Stablecoins: Total Stablecoin Supply” (visited on Dec. 15, 2021),
[https://www.theblockcrypto.com/data/decentralized-finance/stablecoins.](https://www.theblockcrypto.com/data/decentralized-finance/stablecoins)
9 Office of Financial Research, Annual Report to Congress 2021, at 49 [hereinafter OFR 2021 Annual Report],
[https://www.financialresearch.gov/annual-reports/files/OFR-Annual-Report-2021.pdf; Yvonne Lau,](https://www.financialresearch.gov/annual-reports/files/OFR-Annual-Report-2021.pdf)
“Cryptocurrencies hit market cap of $3 trillion for the first time as Bitcoin and Ether reach record highs,” Fortune
[(Nov. 9, 2021), https://fortune.com/2021/11/09/cryptocurrency-market-cap-3-trillion-bitcion-ether-shiba-inu/.](https://fortune.com/2021/11/09/cryptocurrency-market-cap-3-trillion-bitcion-ether-shiba-inu/)
-----
lending, and borrowing activity” in decentralized finance (DeFi) transactions. DeFi transactions
are completed by using smart contracts and “autonomous” distributed ledgers instead of
organized exchanges.[10]
Stablecoins enable participants to trade in cryptocurrencies and engage in other digital
asset transactions while avoiding the use of fiat currencies and traditional financial institutions.
Stablecoins offer a much higher degree of anonymity in conducting such transactions, and many
participants use stablecoins to avoid complying with “Know Your Customer” (KYC)
requirements, anti-money laundering (AML) laws, tax laws, and sanctions against terrorist
financing.[11] “[V]irtually no KYC/AML checks” are conducted for DeFi transactions, and
criminals can “launder proceeds of crime” by exchanging other assets for stablecoins (or vice
versa) while “hiding the blockchain money trail.”[12]
Tether, the largest issuer of stablecoins, has issued over $80 billion of stablecoins and
controls a majority of the stablecoin market.[13] Tether and the issuers of most other leading
stablecoins represent to the public that they hold sufficient “reserves” to maintain a 1-for-1 parity
10 PWG Stablecoin Report, supra note 1, at 5-10 (quote at 9); see also Allen Testimony, supra note 7, at 2-3, 6-14; Goldstein
Testimony, supra note 1, at 1-5, 10-13; Gary Silverman, “Cryptocurrency: rise of decentralized finance sparks ‘dirty
[money’ fears,” Financial Times (Sept. 15, 2021), https://www.ft.com/content/beeb2f8c-99ec-494b-aa76-](https://www.ft.com/content/beeb2f8c-99ec-494b-aa76-a7be0bf9dae6)
[a7be0bf9dae6.](https://www.ft.com/content/beeb2f8c-99ec-494b-aa76-a7be0bf9dae6)
11 PWG Stablecoin Report, supra note 1, at 1-2, 10-11, 19-21; Gensler Statement, supra note 3; Goldstein Testimony, supra
note 1, at 1-2, 5, 13-15; Smialek, supra note 7; see also Zeke Faux, “Anyone Seen Tether’s Billions?”, Bloomberg
_BusinessWeek (Oct. 7, 2021) (“Tether Holdings checks the identity of people who buy coins directly from the_
company, but once the currency is out in the world, it can be transferred anonymously, just by sending a code. A
drug lord can hold millions of Tethers in a digital wallet and send it to a terrorist without anyone knowing.”),
[https://www.bloomberg.com/news/features/2021-10-07/crypto-mystery-where-s-the-69-billion-backing-the-](https://www.bloomberg.com/news/features/2021-10-07/crypto-mystery-where-s-the-69-billion-backing-the-stablecoin-tether?sref=f7rH2jWS)
[stablecoin-tether?sref=f7rH2jWS; JP Koning, “What Happens If All Stablecoin Users Have to Be Identified?”,](https://www.bloomberg.com/news/features/2021-10-07/crypto-mystery-where-s-the-69-billion-backing-the-stablecoin-tether?sref=f7rH2jWS)
_CoinDesk (Sept. 14, 2021) (“Right now, a large chunk of stablecoin usage is pseudonymous. That is, you or I can_
hold $20,000 worth of tether or USD coin stablecoins in an unhosted wallet (i.e., not on an exchange), without
[having to provide our identities to either Tether or Circle.”), https://www.coindesk.com/policy/2021/02/18/what-](https://www.coindesk.com/policy/2021/02/18/what-happens-if-all-stablecoin-users-have-to-be-identified/)
[happens-if-all-stablecoin-users-have-to-be-identified/; Silverman, supra note 10 (reporting on the belief of some](https://www.coindesk.com/policy/2021/02/18/what-happens-if-all-stablecoin-users-have-to-be-identified/)
cryptocurrency entrepreneurs that “DeFi innovations . . . will enable them to break free of [KYC] obligations”).
12 Goldstein Testimony, supra note 1, at 5, 14 (including quotes from a report, dated Oct. 18, 2021, by Elliptic, a
cryptocurrency compliance firm); see also Silverman, supra note 10 (reporting that DeFi “allows a wave of
innovation by people trying to launder money through the system”) (quoting David Jevans, CEO of CipherTrace, a
cryptocurrency intelligence company).
13 The Block, supra note 8.
-----
between their stablecoins and the U.S. dollar. However, there are substantial doubts about the
adequacy of reserves held by Tether and other issuers. Tether and its affiliates paid more than
$60 million to settle charges filed by the Office of the New York Attorney General and the
Commodity Futures Trading Commission, alleging that Tether’s representations about its
reserves were false and materially misleading.[14] In 2021, Tether disclosed that a majority of its
reserves consisted of commercial paper and other corporate obligations (reportedly including
debts of Chinese companies). At best, the reserves of Tether and other leading stablecoin issuers
resemble the assets held by prime money market funds, which experienced systemic investor
runs and were forced to accept bailouts from the federal government in 2008 and 2020.[15]
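
The run dynamics alluded to here are worth making concrete. The following Python sketch uses purely hypothetical numbers (it does not model Tether or any other actual issuer) to show why even a small shortfall in reserves creates a powerful first-mover incentive to redeem:

```python
# Stylized run on a hypothetical stablecoin that promises redemption
# at $1.00 per coin. All numbers are illustrative; this does not model
# any actual issuer or its reserves.

coins_outstanding = 100.0   # coins in circulation, $1.00 face value each
reserves = 97.0             # reserves marked to market after a credit loss

# Early redeemers are paid full face value, dollar for dollar.
early_redemptions = 60.0
reserves -= early_redemptions
coins_outstanding -= early_redemptions

# Whatever remains backs the holders who waited.
recovery = reserves / coins_outstanding
print(f"Holders who wait recover ${recovery:.3f} per $1.00 coin")
# -> $0.925: a 3% portfolio loss becomes a 7.5% loss for latecomers,
#    which is precisely the incentive structure that fuels runs.
```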
As discussed in PWG’s report, some technology companies have “the stated ambition” to
create stablecoin programs that can be “used widely by retail users to pay for goods and services,
by corporations in the context of supply chain payments, and in the context of international
remittances.”[16] Facebook’s launch of Novi in October 2021 is a “pilot” for Facebook’s planned
creation of a global digital payments network that will ultimately use Facebook’s proposed Diem
stablecoin. Novi is available initially to customers in the U.S. and Guatemala, and its first digital
currency is the stablecoin USDP (Pax Dollar), issued by Paxos.[17]
14 Commodity Futures Trading Commission, Press Release No. 8450-21, “CFTC Orders Tether and Bitfinex to Pay
[Fines Totaling $42.5 Million” (Oct. 15, 2021), https://www.cftc.gov/PressRoom/PressReleases/8450-21; Office of](https://www.cftc.gov/PressRoom/PressReleases/8450-21)
the N.Y. Attorney General, Press Release, “Attorney General James Ends Virtual Currency Trading Platform
Bitfinex’s Illegal Activities in New York” (Feb. 23, 2021) (imposing an $18.5 million fine on Tether and its
[affiliates), https://ag.ny.gov/press-release/2021/attorney-general-james-ends-virtual-currency-trading-platform-](https://ag.ny.gov/press-release/2021/attorney-general-james-ends-virtual-currency-trading-platform-bitfinexs-illegal)
[bitfinexs-illegal.](https://ag.ny.gov/press-release/2021/attorney-general-james-ends-virtual-currency-trading-platform-bitfinexs-illegal)
15 Faux, supra note 11; Goldstein Testimony, supra note 1, at 2-5; OFR 2021 Annual Report, supra note 9, at 51-52;
Gorton & Zhang, supra note 7, at 6-16, 21-24; Bill Nelson & Paige Pidano Paridon, “Stablecoins are backed by
‘reserves’? Give us a break,” American Banker (Dec. 10, 2021), available at 2021 WLNR 40403852; Arthur E.
Wilmarth, Jr., “The Pandemic Crisis Shows That the World Remains Trapped in a ‘Global Doom Loop’ of Financial
Instability, Rising Debt Levels, and Escalating Bailouts,” 40 Banking & Financial Services Policy Report No. 8
[(Aug. 2020), at 1, 9-10, available at https://ssrn.com/abstract=3901967 [hereinafter Wilmarth, “Pandemic Crisis”];](https://ssrn.com/abstract=3901967)
Yueqi Yang, “Tether Fails to Dispel Mystery on Stablecoin’s Crucial Reserves,” Bloomberg Law (Dec. 3, 2021).
16 PWG Stablecoin Report, supra note 1, at 8.
17 Venkataramakrishnan & Murphy, supra note 5; see also Novi Financial, Inc., “Meet Novi,”
[https://www.novi.com/.](https://www.novi.com/)
-----
According to Facebook, “Novi is a digital wallet that helps you send and receive money
instantly and securely.” Novi’s customers can send and receive payments by using “digital
currencies, starting with USDP (Pax Dollar). When you add money to your Novi account, we’ll
convert it to USDP. On Novi, 1 USDP is equal to 1 US dollar.”[18] Novi’s terms of service allow
customers to redeem their stablecoins from Novi based on the same 1-for-1 parity between the
Pax Dollar and the U.S. dollar.[19]
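
The mechanics Novi describes, dollars in and Pax Dollars out at a fixed one-to-one rate, with redemption at the same rate, can be reduced to a few lines of code. The sketch below is a hypothetical illustration of the quoted terms (the class and method names are invented for exposition), not Novi's actual system or API:

```python
# Minimal sketch of a fixed-parity stablecoin wallet of the kind Novi
# describes: funds are converted to coins at 1:1 and redeemable at 1:1.
# Hypothetical names and logic; not Novi's actual code or API.

class ParityWallet:
    PARITY = 1.00  # 1 coin == 1 U.S. dollar, by promise of the issuer

    def __init__(self) -> None:
        self.coins = 0.0

    def add_money(self, usd: float) -> None:
        """'When you add money ... we'll convert it to USDP.'"""
        self.coins += usd / self.PARITY

    def redeem(self, coins: float) -> float:
        """'You are entitled to redeem each Digital Currency for one
        U.S. dollar,' i.e., repayment at the same fixed parity."""
        if coins > self.coins:
            raise ValueError("insufficient balance")
        self.coins -= coins
        return coins * self.PARITY

wallet = ParityWallet()
wallet.add_money(100.00)     # $100 in -> 100 coins
print(wallet.redeem(40.00))  # 40 coins out -> 40.0 (dollars)
```

The point of the sketch is that nothing in this promise turns on the word "deposit": functionally, it is custody plus repayment on demand, the theme developed in Part 2(b) below.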
Facebook’s launch of Novi indicates that stablecoins could potentially expand from their
current roles in cryptocurrency trading and other digital asset transactions to a much broader
range of uses in consumer and commercial transactions. On October 19, 2021, David Marcus,
who was then head of Novi, described Facebook’s ambitions to create a general-use digital
payments network:
Beyond the pilot, our business model is clear. We’re a challenger in payments.
We’ll offer free person-to-person payments using Novi. Once we have a solid
customer base, we’ll offer cheaper merchant payments and make a profit on
merchant services.[20]
Marcus explained that “our support for Diem hasn’t changed and we intend to launch
Novi with Diem once it receives regulatory approval.” Marcus also confirmed that Novi planned
to offer “interoperability” in payments between Facebook’s Diem, Pax Dollar, and other
stablecoins.[21]
[18 Novi Financial, Inc., “Novi: How It Works,” https://www.novi.com/how-it-works.](https://www.novi.com/how-it-works)
19 Novi Financial, Inc., “Terms of Service (Last Modified: October 19, 2021)” [hereinafter Novi Terms of Service],
¶ 3 (“User Redemption Right”) (“You are entitled to redeem each Digital Currency for one U.S. dollar (USD) with
[Novi.”), https://www.novi.com/legal/app/us/terms-of-service?temp_locale=en_US](https://www.novi.com/legal/app/us/terms-of-service?temp_locale=en_US)
[20 Tweet by David Marcus (Oct. 19, 2021), https://twitter.com/davidmarcus/status/1450447444379013122 (visited](https://twitter.com/davidmarcus/status/1450447444379013122)
on Dec. 15, 2021).
21 _Id.; see also Venkataramakrishnan & Murphy, supra note 5 (reporting on Marcus’ statements about Facebook’s_
ambitions for Novi).
-----
As discussed in PWG’s report, stablecoins present a wide array of potential hazards,
including deceptive marketing, fraudulent and manipulative trading, abusive and predatory
terms, and facilitating evasion of KYC/AML requirements, tax laws, and sanctions against
terrorist financing.[22] This paper focuses on four systemic dangers posed by stablecoins, which
are also analyzed in PWG’s report. First, investors in stablecoins could suffer large losses from
investor runs triggered by concerns about the adequacy of stablecoin reserves. Investor runs on
stablecoins would likely resemble the investor runs that occurred in 2008 and 2020 in prime
money market funds, which invest (like stablecoins) in securities that are not issued or
guaranteed by the federal government. Stablecoins are also similar to the private banknotes that
state-chartered banks issued before the Civil War. Many state-chartered banks experienced runs
by holders of their banknotes during that period because they did not hold adequate reserves and
their notes were not guaranteed by the federal government.[23]
Second, the collapse of a major stablecoin could destabilize financial markets. For
example, a default by Tether – the leading form of payment used in cryptocurrency transactions
– would probably cause widespread trading failures as well as fire sales in cryptocurrency
markets. If stablecoins became a widely-accepted medium of payment for purchases and sales of
goods and services in the general economy, the failure of a leading stablecoin could trigger a
22 PWG Stablecoin Report, supra note 1, at 1-2, 10-11, 19-21; Goldstein Testimony, supra note 1, at 5-15; see also
Letter from Open Markets Institute to federal regulatory agencies, dated Nov. 23, 2021, expressing concerns about
“Facebook’s Digital Asset Wallet Pilot” [hereinafter Open Markets Facebook Letter], at 1-7,
[https://www.openmarketsinstitute.org/publications/letter-to-regulators-grave-risks-of-facebook-digital-wallet-pilot.](https://www.openmarketsinstitute.org/publications/letter-to-regulators-grave-risks-of-facebook-digital-wallet-pilot)
23 PWG Stablecoin Report, supra note 1, at 1-2, 10-12; see also Ackerman, supra note 3; Awrey, supra note 7, at 3-6, 11-18, 33-39; Gorton & Zhang, supra note 7, at 21-31; James Mackintosh, “Bitcoin’s Reliance on Stablecoins
Harks Back to the Wild West of Finance,” Wall Street Journal (May 27, 2021),
[https://www.wsj.com/articles/bitcoins-reliance-on-stablecoins-harks-back-to-the-wild-west-of-finance-](https://www.wsj.com/articles/bitcoins-reliance-on-stablecoins-harks-back-to-the-wild-west-of-finance-11622115246)
[11622115246.; Arthur J. Rolnick & Warren E. Weber, “Free Banking, Wildcat Banking, and Shinplasters,” 6](https://www.wsj.com/articles/bitcoins-reliance-on-stablecoins-harks-back-to-the-wild-west-of-finance-11622115246)
_Quarterly Review No. 3, at 10-19 (Fed. Res. Bank of Minneapolis, Fall 1982)._
-----
generalized run on stablecoins that might shut down the payments system and inflict widespread
losses on consumers, business firms, and financial institutions.[24]
Third, issuers and distributors of stablecoins are rapidly becoming a new category of
systemically important “shadow banks.” Shadow banks provide functional substitutes for
deposits (“shadow deposits”) and offer other financial services that mimic the activities of banks
while avoiding compliance with federal laws that establish essential safeguards for the safety,
soundness, and stability of our banking system. The systemic significance of stablecoin issuers
would increase exponentially if stablecoins are widely accepted as a medium of payment in
consumer and commercial transactions. Under those circumstances, stablecoins would likely
become a systemically important form of “private money” comparable to money market funds,
which do not have explicit government backing but rely on general expectations of government
support during severe economic downturns or financial crises.[25]
Fourth, issuers and distributors of stablecoins are permitted to combine their financial
activities with commercial ventures because they are not defined as “banks” for purposes of the
Bank Holding Company Act (BHC Act). Like other shadow banks, issuers and distributors of
stablecoins are not subject to the BHC Act’s longstanding policy of separating banking and
commerce.[26] As the PWG’s report correctly pointed out,
24 PWG Stablecoin Report, supra note 1, at 1-3, 12-14; see also OFR 2021 Annual Report, supra note 9, at 49-54;
Sam Knight, “Biden Administration Is Playing With Fire by Failing to Regulate Cryptocurrency,” Truthout (Nov.
[16, 2021), https://truthout.org/articles/biden-administration-is-playing-with-fire-by-failing-to-regulate-](https://truthout.org/articles/biden-administration-is-playing-with-fire-by-failing-to-regulate-cryptocurrency/)
[cryptocurrency/.](https://truthout.org/articles/biden-administration-is-playing-with-fire-by-failing-to-regulate-cryptocurrency/)
25 PWG Stablecoin Report, supra note 1, at 1-3, 7-14; see also Gorton & Zhang, supra note 7, at 3-6, 21-24, 33, 38;
Wilmarth, “Pandemic Crisis,” supra note 15, at 6-13, 16-17; Arthur E. Wilmarth, Jr., Taming the Megabanks: Why
_We Need a New Glass-Steagall Act 150-57, 279-88, 341-44, 353-56 (Oxford Univ. Press, 2020) [hereinafter_
Wilmarth, Taming the Megabanks].
26 12 U.S.C. §§ 1841(c), 1843; Arthur E. Wilmarth, Jr., “The OCC’s and FDIC’s Attempts to Confer Banking
Privileges on Nonbanks and Commercial Firms Violate Federal Laws and Are Contrary to Public Policy,” 39
_Banking & Financial Services Policy Report No. 10 (Oct. 2020), at 1, 6-11, available at_
[https://ssrn.com/abstract=3750964 [hereinafter Wilmarth, “Banking Privileges”]; see also Gorton & Zhang, supra](https://ssrn.com/abstract=3750964)
note 7, at 17-19.
-----
[T]he combination of a stablecoin issuer or wallet provider and a commercial firm
could lead to an excessive concentration of economic power. These policy
concerns are analogous to those traditionally associated with the mixing of
banking and commerce, such as advantages in accessing credit or using data to
market or restrict access to products. This combination could have detrimental
effects on competition and lead to market concentration in sectors of the real
economy.[27]
As explained below in Part 2(c), permitting issuers and distributors of stablecoins to operate
without being chartered and regulated as FDIC-insured banks would enable Facebook and other
Big Tech firms to enter the banking business and undermine the BHC Act’s policy of separating
banks from commercial enterprises. Allowing Big Tech firms to subvert that policy would
inflict great harm on our financial system, economy, and society.
**2.** **Regulatory Strategies for Controlling the Dangers of Stablecoins**
This section strongly endorses three regulatory approaches discussed in PWG’s report for
addressing the perils created by stablecoins. First, the SEC should use its existing powers to
regulate stablecoins as “securities” and protect investors and securities markets. Second, DOJ
should designate stablecoins as “deposits” and bring enforcement actions to prevent issuers and
distributors of stablecoins from violating Section 21(a) of the Glass-Steagall Act. Third, to
overcome uncertainties and gaps that limit the effectiveness of SEC and DOJ remedies, Congress
should pass legislation requiring all issuers and distributors of stablecoins to be FDIC-insured
banks.[28]
27 PWG Stablecoin Report, supra note 1, at 14.
28 _Id. at 2-3, 15-18._
-----
**a.** **The SEC should use its existing powers to regulate stablecoins as**
**“securities.”**
The SEC should exercise its existing authority to regulate stablecoins as “securities,”
thereby requiring issuers and distributors of stablecoins to comply with federal securities laws
that protect investors and securities markets by (1) prohibiting fraud and manipulation in
purchases and sales of securities and (2) imposing registration and disclosure duties on those
who sell securities to the public. As discussed below, the SEC would face difficult legal
challenges in regulating stablecoins as “securities.” In addition, the SEC does not possess the
broad prudential oversight powers that federal bank regulators can wield to address systemic
risks and promote financial stability. Consequently, vigorous efforts by the SEC to regulate
stablecoins as “securities” would be a very helpful step, but it would not provide an adequate
remedy for the systemic perils created by stablecoins.
The SEC would confront potentially significant obstacles in showing that stablecoins are
“securities,” especially with regard to stablecoins that do not pay interest and are used solely for
the purpose of buying and selling goods and services for consumption. To establish legal
grounds for regulating stablecoins as “securities,” the SEC must show that stablecoins are
“investment contracts” or “notes” (debt obligations) as defined in federal securities laws.[29]
Under the Supreme Court’s Howey decision, an “investment contract” is a “scheme [that]
involves an investment of money in a common enterprise with profits to come solely from the
efforts of others.”[30] In SEC v. Edwards, the Supreme Court explained that the “profits” referred
to in Howey are “the profits that investors seek on their investment . . . in the sense of income or
29 _See 15 U.S.C. §§ 77b(a)(1), 78c(a)(10), 80a-2(a)(36); Todd Phillips, The SEC’s Regulatory Role in the Digital_
_[Assets Markets 5-7 (Center for American Progress, Oct. 2020), available at http://ssrn.com/abstract=3964632.](http://ssrn.com/abstract=3964632)_
30 _SEC v. W.J. Howey Co., 328 U.S. 293, 301 (1946); see also Phillips, supra note 29, at 5-6._
-----
return, to include, for example, dividends, other periodic payments, or the increased value of the
investment.” The Court also held in Edwards that “fixed returns” on “investments pitched as
low-risk” would satisfy the Howey test, and the ability of investors to redeem their investments
would not affect that outcome.[31]
In Reves v. Ernst & Young, the Supreme Court held that every promissory “note” is
presumptively a “security.” However, that presumption can be rebutted based on several factors,
including whether “the buyer is interested primarily in the profit the note is expected to
generate,” or, in contrast, whether “the note is exchanged to facilitate the purchase and sale of a
minor asset or consumer good.” Reves held that courts should also consider (1) whether the note
is an instrument in which there is “common trading for speculation or investment,” and (2)
whether “the existence of another regulatory scheme significantly reduces the risk of the
instrument, thereby rendering application of the Securities Acts unnecessary.”[32]
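
Stated schematically, the two inquiries reduce to a short list of yes-or-no questions. The toy Python sketch below encodes the Howey elements and the Reves rebuttal factors as predicates; it is a drastic simplification for exposition only, since courts apply these factors qualitatively and no mechanical test substitutes for case-by-case analysis:

```python
# Toy encoding of the Howey and Reves inquiries described above.
# Illustrative only; not a substitute for the qualitative legal tests.

def howey_investment_contract(investment_of_money: bool,
                              common_enterprise: bool,
                              expectation_of_profits: bool,
                              from_efforts_of_others: bool) -> bool:
    """Howey: all four elements must be present."""
    return (investment_of_money and common_enterprise
            and expectation_of_profits and from_efforts_of_others)

def reves_note_is_security(buyer_seeks_profit: bool,
                           common_trading_for_speculation: bool,
                           other_regime_reduces_risk: bool) -> bool:
    """Reves: every note is presumptively a security; the presumption
    is rebutted by consumption-like motives or by another regulatory
    scheme that already controls the instrument's risk."""
    if not buyer_seeks_profit and not common_trading_for_speculation:
        return False  # e.g., a note facilitating a consumer purchase
    if other_regime_reduces_risk:
        return False  # e.g., federally insured bank deposits
    return True

print(howey_investment_contract(True, True, True, True))  # True
# A stablecoin bought as an "entry fee" for crypto speculation
# (the SEC's strongest argument, per the discussion below):
print(reves_note_is_security(True, True, other_regime_reduces_risk=False))  # True
```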
Federal district courts concluded in several cases that cryptocurrencies created by sellers
with fluctuating values were “investment contracts” and “securities” under federal securities
laws. In those cases, the sellers represented that their cryptocurrencies could increase in value
and provide investment gains to the buyers.[33] None of those cases involved stablecoins having a
fixed value with reference to widely-used fiat currencies or other ostensibly “safe” assets, and
there do not appear to be any reported court decisions addressing the issue of whether such
stablecoins are “securities.”
31 _SEC v. Edwards, 540 U.S. 389, 394-97 (2004)._
32 _Reves v. Ernst & Young, 494 U.S. 56, 64-67 (1990); see also Phillips, supra note 29, at 6._
33 _See, e.g.,_ _SEC v. NAC Foundation, LLC, 512 F. Supp. 3d 988, 994-97 (N.D. Cal. 2021); SEC v. Kik Interactive_
_Inc., 492 F. Supp. 3d 169, 177-80 (S.D.N.Y. 2020); SEC v. Telegram Group Inc., 448 F. Supp. 3d 352, 364-79_
(S.D.N.Y. 2020); Balestra v. ATBCoin LLC, 380 F. Supp. 3d 340, 352-57 (S.D.N.Y. 2019); SEC v. Shavers, No.
4:13-CV-416, 2014 WL 12622292, at *4-*8 (E.D. Tex. Aug. 26, 2014).
-----
Issuers of the most widely-used stablecoins (including Tether, USD Coin, and Pax
Dollar) represent that their stablecoins will maintain a 1-to-1 parity with the U.S. dollar by
holding reserves that include cash, government securities, and (in most cases) corporate debt
obligations. Most leading stablecoins do not pay interest to their holders. Thus, instead of
promising potential gains, issuers of most prominent stablecoins assure investors that they will
not suffer losses from buying and holding stablecoins. Those stablecoins are different from
cryptocurrencies that have fluctuating values and offer buyers the possibility of making profits
from trading.[34]
The SEC could argue that stablecoins should be treated as “investment contracts” or
“notes” because (1) issuers and distributors offer and sell stablecoins to investors with the shared
understanding that stablecoins are the most widely-used form of payment for speculating in
cryptocurrencies and other digital assets;[35] and (2) issuers and distributors expect that most
buyers of stablecoins will use their coins to pursue speculative profits by trading in digital assets
or by lending their coins to other traders.[36] Thus, a purchase of stablecoins could reasonably be
viewed as the payment of an “entry fee” enabling the buyer to speculate in cryptocurrency
markets, just as the purchase of poker chips permits a gambler to participate and place bets in
34 Ackerman, supra note 3; Awrey, supra note 7, at 60 n.221; Nikhilesh De, “SEC Chair Hints Some Stablecoins
[Are Securities,” CoinDesk (Sept. 14, 2021), https://www.coindesk.com/markets/2021/07/21/sec-chair-hints-some-](https://www.coindesk.com/markets/2021/07/21/sec-chair-hints-some-stablecoins-are-securities/)
[stablecoins-are-securities/; DiCamillo, supra note 7; Gorton & Zhang, supra note 7, at 3, 6-8, 12-16; Smialek, supra](https://www.coindesk.com/markets/2021/07/21/sec-chair-hints-some-stablecoins-are-securities/)
note 7; Wilmarth, “Pandemic Crisis,” supra note 15, at 9-10.
35 _See supra notes 3 & 10 and accompanying text._
36 For court decisions that involved interest-bearing debt instruments but also indicated that financial instruments
sold for the purpose of encouraging speculation could be treated as “securities,” see, e.g., _Gary Plastic Packaging_
_Corp. v. Merrill Lynch, Pierce, Fenner & Smith, Inc., 756 F.2d 230, 240-42 (2d Cir. 1985) (holding that the_
defendant broker-dealer sold “investment contracts” that were subject to regulation as “securities” because the
defendant sold negotiable bank certificates of deposits (CDs) accompanied by promises that the defendant would
monitor the quality of the issuing banks, repurchase the CDs on demand, and maintain a “secondary market” in the
CDs, thereby enabling customers to resell their CDs for potential gains without risking any loss of their principal or
accrued interest); Stoiber v. SEC, 161 F.3d 745, 747-52 (D.C. Cir. 1998) (holding that the defendant broker sold
“notes” that were subject to regulation as “securities” because the defendant sold interest-bearing promissory notes
to customers with the understanding that the defendant would use most of the sale proceeds to trade in commodities
and generate profits to pay off the notes).
-----
poker games and tournaments.[37] SEC Chair Gary Gensler recently observed that stablecoins are
primarily bought and used for speculative purposes, and he described stablecoins as “acting
almost like poker chips at the casino.”[38] The SEC could argue that it would be proper to classify
stablecoins as “notes” because buyers of stablecoins are “primarily motivated by the opportunity
to earn a profit on their money” by using their stablecoins to pay for subsequent speculative
transactions.[39]
In contrast, if issuers created stablecoins that could be used only to buy and sell goods
and services for consumption, and that could not be used for speculation, it would be much more
difficult for the SEC to characterize those stablecoins as “securities.” As explained above, court
decisions defining “investment contracts” and “notes” have excluded financial instruments that
are purchased solely for the purpose of buying and selling goods, other property, or services for
consumption, and not for potential investment gains.[40] Special-purpose, consumption-only
stablecoins do not appear to be part of the present digital asset landscape. However, issuers
37 _See Tschetschot v. Commissioner, T.C. Memo. 2007-38, 2007 WL 518989, at *3 (U.S.T.C., Feb. 20, 2007)_
(stating that participants in poker tournaments bought poker chips as part of their “entry fees” for the purpose of
“placing bets, hoping to win” prizes).
38 Gensler Statement, supra note 3; Tory Newmyer, “SEC’s Gensler likens stablecoins to ‘poker chips’ amid calls
for tougher crypto regulation,” Washington Post (Sept. 21, 2021) (quoting from interview with Mr. Gensler),
[https://www.washingtonpost.com/business/2021/09/21/sec-gensler-crypto-stablecoins/; see also Opening Statement](https://www.washingtonpost.com/business/2021/09/21/sec-gensler-crypto-stablecoins/)
of Sen. Sherrod Brown at a hearing before the Senate Comm. on Banking, Housing, and Urban Affairs (Dec. 14,
2021) (“Stablecoins make it easier than ever to risk real dollars on cryptocurrencies.”),
[https://www.banking.senate.gov/imo/media/doc/Brown%20Statement%2012-14-21.pdf.](https://www.banking.senate.gov/imo/media/doc/Brown%20Statement%2012-14-21.pdf)
39 _Stoiber v. SEC, 161 F.3d at 750._
40 _See, e.g., SEC v. Kik Interactive, 492 F. Supp. 3d at 179-80 (distinguishing between digital assets purchased for a_
“consumptive use” and those bought primarily for their “profit-making potential”); Solis v. Latium Network, Inc.,
No. 18-10255 (SDW) (SCM), 2018 WL 6445543, at *1-*3 (D.N.J., Dec. 10, 2018) (holding that digital tokens sold
by defendants were “securities” because defendants encouraged plaintiffs to “expect a profit” from investing in the
tokens, even though the tokens could also potentially be used to purchase services). For additional analysis of the
distinction between financial instruments used solely for consumption and those purchased for investment gains, see
SEC “Finhub” Staff, “Framework for ‘Investment Contract’ Analysis of Digital Assets” (April 3, 2019), Part II.C.3,
[https://www.sec.gov/corpfin/framework-investment-contract-analysis-digital-assets; Jay B. Sykes, “Securities](https://www.sec.gov/corpfin/framework-investment-contract-analysis-digital-assets)
Regulation and Initial Coin Offerings: A Legal Primer,” at 14-19, 26-32 (Congressional Res. Serv. Rep. R45301,
[Aug. 31, 2018), https://sgp.fas.org/crs/misc/R45301.pdf.](https://sgp.fas.org/crs/misc/R45301.pdf)
-----
might decide to create such instruments if the SEC succeeded in classifying stablecoins used for
speculation as “securities.”
The SEC could also seek to regulate issuers of stablecoins as investment companies
under the Investment Company Act of 1940 (1940 Act). Issuers of stablecoins that engage
primarily in the business of investing and trading in securities, or engage in that business and
hold more than 40% of their assets in non-government securities, could potentially be treated as
investment companies. There are numerous exemptions in the 1940 Act that might allow some
issuers of stablecoins to avoid being treated as investment companies, and an analysis of those
exemptions is beyond the scope of this paper.[41]
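
Of the 1940 Act's definitional prongs, the 40% asset test is the most mechanical. The minimal sketch below illustrates that arithmetic only; it ignores the Act's exemptions and the qualitative "engaged primarily" prong noted above, and the issuer figures are hypothetical:

```python
# Sketch of the 1940 Act's 40% asset test described above: an issuer
# that holds more than 40% of its assets in non-government securities
# can potentially be treated as an investment company. Illustrative
# only; the statute's exemptions are ignored here.

def exceeds_forty_percent_test(total_assets: float,
                               non_government_securities: float) -> bool:
    return non_government_securities > 0.40 * total_assets

# Hypothetical issuer whose reserves are mostly commercial paper and
# other corporate debt (compare Tether's 2021 disclosures):
print(exceeds_forty_percent_test(total_assets=80e9,
                                 non_government_securities=50e9))  # True
```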
The SEC’s track record with money market funds – financial instruments that closely
resemble stablecoins – does not inspire confidence that the SEC could effectively control the
systemic dangers of stablecoins by regulating them as investment companies. The SEC’s
regulation of money market funds under the 1940 Act failed to ensure the resilience of those
funds after Lehman Brothers collapsed in September 2008. Lehman’s bankruptcy and default on
its commercial paper triggered systemic runs by investors on money market funds, and the
Treasury Department and Federal Reserve (Fed) were forced to arrange a comprehensive bailout
of those funds. Despite that calamity, the SEC rejected numerous recommendations after 2008 –
including one from FSOC – calling on money market funds to stop offering fixed net asset
values (NAVs) of $1 per share and to use floating NAVs like other mutual funds. The SEC
required institutional prime and tax-exempt money market funds to adopt floating NAVs, and it
permitted non-government money market funds to impose restrictions on redemption. However,
the SEC allowed retail prime money market funds and institutional and retail government money
41 15 U.S.C. § 80a-3; see Phillips, supra note 29, at 6-7; SEC, “Investment Company Registration and Regulation
[Package,” https://www.sec.gov/investment/fast-answers/divisionsinvestmentinvcoreg121504htm.html.](https://www.sec.gov/investment/fast-answers/divisionsinvestmentinvcoreg121504htm.html)
-----
market funds to continue offering deposit-like treatment with fixed NAVs of $1 per share.
Money market funds experienced another series of systemic runs by investors in March 2020 and
were rescued for a second time by the Treasury and the Fed.[42]
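
The difference between the two NAV regimes is ultimately arithmetic: a floating NAV passes portfolio losses through to the share price immediately, while a fixed $1 NAV conceals them until the fund "breaks the buck." A minimal illustration with hypothetical fund figures:

```python
# Fixed versus floating NAV, in miniature. Hypothetical fund figures.

shares_outstanding = 1_000_000
portfolio_value = 997_000.00   # a 0.3% mark-to-market loss on assets

floating_nav = portfolio_value / shares_outstanding
print(f"Floating NAV: ${floating_nav:.4f}")     # $0.9970, loss is visible

# A stable-value fund keeps quoting $1.00 until the shadow NAV deviates
# by more than half a cent (the "breaking the buck" threshold):
quoted_nav = 1.00 if floating_nav >= 0.995 else floating_nav
print(f"Quoted stable NAV: ${quoted_nav:.2f}")  # $1.00, loss is concealed
```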
In December 2021, the SEC issued a proposal to amend its money market fund rules to
address the problems revealed by the investor runs of 2020. The SEC’s proposal would increase
liquidity requirements and modify redemption terms for money market funds in order to reduce
incentives for investor runs during periods of financial stress. At the same time, the proposal
conceded that the SEC’s changes to its money market fund rules in 2010 and 2014 did not
achieve their intended purpose and failed to prevent the investor runs of 2020.[43]
The SEC’s proposal considered – and rejected – the alternative possibility of requiring all
money market funds to use floating NAVs. The proposal acknowledged that a new rule
requiring floating NAVs for all funds would
increase transparency about the risk of money market fund investments. . . . To
the degree that investors in stable NAV funds are currently treating them as if
they were holding U.S. dollars due to a lack of transparency about risks of such
funds, expanding the scope of the floating NAV requirements may enhance
42 Michael S. Barr, Howell E. Jackson & Margaret E. Tahyar, Financial Regulation: Law & Policy 1302-24 (2d ed.
2018); Gorton & Zhang, supra note 7, at 21-24; Wilmarth, “Pandemic Crisis,” supra note 15, at 4-8, 11-12;
Wilmarth, Taming the Megabanks, supra note 25, at 153-57, 279-88, 341-44; see also Marco Cipriani & Gabriele
La Spada, “Sophisticated and Unsophisticated Runs” (Fed. Res. Bank of N.Y. Staff Rep. No. 956, Dec. 2020),
[https://www.newyorkfed.org/medialibrary/media/research/staff_reports/sr956.pdf; Lei Li, Yi Li, Marco](https://www.newyorkfed.org/medialibrary/media/research/staff_reports/sr956.pdf)
Macchiavelli & Xing (Alex) Zhou, “Liquidity Restrictions, Runs, and Central Bank Interventions: Evidence from
[Money Market Funds” (May 24, 2021), available at https://ssrn.com/abstract=3607593.](https://ssrn.com/abstract=3607593)
43 Securities and Exchange Commission, “Money Market Fund Reforms: Proposed rule” (Dec. 15, 2021)
[[hereinafter SEC Money Market Fund Proposal], https://www.sec.gov/rules/proposed/2021/ic-34441.pdf. For the](https://www.sec.gov/rules/proposed/2021/ic-34441.pdf)
SEC’s explanation of why its changes to money market fund rules in 2010 and 2014 were inadequate and did not
prevent the investor runs of March 2020, see id. at 10-31, 81-97.
-----
investor protections and enable investors to make more informed investment
decisions.[44]
The SEC’s proposal also recognized that requiring floating NAVs for all funds “would
reduce the distortions arising out of implicit government guarantees of money market funds” and
likely cause “investors of stable NAV funds to reallocate capital into cash accounts subject to
deposit insurance.”[45] The resulting shrinkage of the money market fund industry would reduce
the demand for short-term wholesale funding instruments, such as securities repurchase
agreements (repos) and commercial paper. Money market funds are the most prominent
investors in repos and commercial paper. Those short-term debt instruments also function as
“shadow deposits” (functional substitutes for bank deposits), and they experienced their own
systemic breakdowns in 2008 and 2020.[46] The SEC’s proposal admitted that the support
provided by money market funds for short-term wholesale funding markets “may be sustainable,
in part, due to perceived government backstops of money market funds and lack of transparency
to investors about the risks inherent in money market fund investments.”[47]
Thus, the SEC’s proposal recognized that money market funds with fixed NAVs produce
market distortions, depend on implicit government guarantees, and do not provide full
transparency to investors. Nevertheless, the SEC’s proposal rejected the option of requiring all
money market funds to adopt floating NAVs. That rejection suggests that the SEC would be
equally unprepared to force stablecoins to abandon their promised 1:1 parity with the U.S. dollar
– a promise that conveys to investors the same illusion of deposit-like status.
44 _Id. at 234._
45 _Id. at 236-38._
46 _Id. at 237-38; see also Wilmarth, “Pandemic Crisis,” supra note 15, at 2-8, 12; Wilmarth, Taming the Megabanks,_
_supra note 25, at 153-57, 278-87, 341-44, 353-56._
47 SEC Money Market Fund Proposal, supra note 43, at 238.
-----
Like money market funds, stablecoins are “shadow deposits” – a type of “private money”
that is designed to serve as a functional substitute for federally-insured bank deposits. The
bailouts of money market funds in 2008 and 2020 and the close similarities between those funds
and stablecoins strongly support the conclusion that both types of financial instruments should be
regulated in the same way as bank deposits to control their systemic dangers.[48] As shown by the
vicissitudes of money market funds, regulating stablecoins as investment companies under the
1940 Act would not provide an adequate substitute for requiring stablecoins to comply with the
regulatory regime governing FDIC-insured bank deposits.
An additional factor supporting that conclusion is that the SEC lacks broad financial
stability powers comparable to the extensive prudential regulatory and supervisory authorities of
federal banking agencies. The SEC’s core mission is to protect investors and preserve the
integrity of securities markets. The SEC generally has not attempted to act as a financial
stability regulator.[49] As shown in the next two sections, regulating stablecoins as deposits and
requiring their issuers and distributors to become FDIC-insured banks would provide the most
promising approach for controlling their systemic perils.
**b.** **The Department of Justice should enforce Section 21(a) of the Glass-**
**Steagall Act against issuers and distributors of stablecoins.**
Novi provides deposit-like treatment for stablecoins that its customers buy and hold in
their accounts. Novi sells stablecoins (currently Pax Dollars) to its customers at a fixed price of
$1 per coin, and Novi agrees to redeem stablecoins from its customers at the same price of $1 per
48 Gorton & Zhang, supra note 7, at 21-24, 33-35; Wilmarth, “Pandemic Crisis,” supra note 15, at 6-10.
49 _See Congressional Research Service, “Who Regulates Whom? An Overview of the U.S. Financial Regulatory_
[Framework” (CRS Report No. R44918, updated Mar. 10, 2020), https://sgp.fas.org/crs/misc/R44918.pdf; Daniel K.](https://sgp.fas.org/crs/misc/R44918.pdf)
Tarullo, “The SEC should – and can – pay more attention to financial stability” (May 13, 2021),
[https://www.brookings.edu/blog/up-front/2021/05/13/the-sec-should-and-can-pay-more-attention-to-financial-](https://www.brookings.edu/blog/up-front/2021/05/13/the-sec-should-and-can-pay-more-attention-to-financial-stability/)
[stability/; see generally Barr, Jackson & Tahyar, supra note 42, at 444-502, 535-64.](https://www.brookings.edu/blog/up-front/2021/05/13/the-sec-should-and-can-pay-more-attention-to-financial-stability/)
-----
coin.[50] Novi’s customers own the stablecoins held in their accounts, and Novi agrees to maintain
custody of its customer’s stablecoins until they are redeemed, withdrawn, or transferred to other
persons.[51] Novi enables its customers to
(i) purchase and hold Digital Currency in your Account, (ii) conduct
person-to-person transfers of Digital Currency, (iii) set up recurring Digital
Currency transactions, (iv) convert your Digital Currency to local currency and
pick up cash; (v) convert your Digital Currency to local currency and transfer
to your linked bank account via an automated clearing house (“ACH”) transaction;
and (vi) use any additional features we may provide through your use of the
Services.[52]
The services that Novi provides to its customers through their stablecoin accounts satisfy
both of the key functional characteristics of “deposits”: (1) the placing of funds with another
person for custody and safekeeping, and (2) the ability of the depositor to withdraw or transfer
those funds on demand or at a definite time. In a 1991 decision, the Second Circuit Court of
Appeals held that, “[a]s commonly understood, the term ‘deposit’ means a sum of money placed
in the custody of a bank, to be withdrawn at the will of the depositor.”[53] Similarly, in a 2016
decision, the Fifth Circuit Court of Appeals explained:
50 Novi Financial, Inc., “How It Works: Add money” (“Simply add a debit card to put money in your account, and
[it’ll be converted to USDP. On Novi, 1 USDP is equal to 1 US dollar.”), https://www.novi.com/how-it-works; Novi](https://www.novi.com/how-it-works)
Terms of Service, supra note 19, ¶ 3 (“User Redemption Right”) (“You are entitled to redeem each Digital Currency
for one U.S. dollar (USD) with Novi.”).
51 Novi Terms of Service, supra note 19, ¶ 3 (“Title and Ownership”) (“Your Account will give you access to buy,
sell, transfer, and manage your Digital Currency. The Digital Currency is held by Novi on a blockchain in one or
more blockchain addresses (each, a ‘Wallet’). . . . Novi controls the Wallet that holds your Digital Currency. . . .
[Y]ou own all beneficial interest in the Digital Currency in your Account.”); id. ¶ 3 (“Custody”) (“To more securely
custody [sic] Digital Currency, we may use one or more shared, commingled Wallets to hold Digital Currency
on your behalf and on our own behalf.”).
52 Novi Terms of Service, supra note 19, ¶ 5 (“Description of the Services”).
53 _United States v. Jenkins, 943 F.2d 167, 174 (2d Cir.) (citations omitted), cert. denied, 502 U.S. 1014 (1991)._
-----
The relevant authorities demonstrate that the essential elements of a “deposit”
include the following. First, a deposit must involve the placement of funds with
another for “safekeeping.” . . . Second, those funds must be subject to the control
of the depositor such that they are repayable on demand or at a fixed time.[54]
As shown above, Novi clearly accepts “deposits” by agreeing to (i) receive funds from its
customers, (ii) convert those funds into stablecoins at a fixed 1:1 parity with U.S. dollars, (iii)
maintain custody of the stablecoins owned by its customers, (iv) repay its customers’ funds
based on the same fixed parity when its customers decide to redeem their stablecoins, and (v)
allow its customers to transfer their stablecoins to other persons. Courts have repeatedly held
that nonbanks are deemed to receive “deposits” if they accept funds from other persons while
agreeing to hold those funds and repay them on demand or at a specified time. The ability of
customers to transfer their funds to third parties is not a prerequisite for status as “deposits,” but
their right to transfer their funds to third parties provides additional evidence that a deposit
relationship has been formed.[55]
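
Those two functional elements, custody for safekeeping and repayment on demand or at a fixed time, are mechanical enough to state as a predicate. The Python sketch below is a toy restatement of the case law quoted above, not a substitute for it:

```python
# Toy restatement of the functional "deposit" elements drawn from the
# Second and Fifth Circuit decisions quoted above. Illustrative only.

def looks_like_a_deposit(placed_for_safekeeping: bool,
                         repayable_on_demand_or_fixed_time: bool) -> bool:
    """Both elements are required. Transferability to third parties is
    corroborating evidence but not a prerequisite (see the cases cited
    in note 55)."""
    return placed_for_safekeeping and repayable_on_demand_or_fixed_time

# Novi, per its quoted terms: it takes custody of customers' coins and
# repays them at $1 per coin on request (and allows transfers besides):
print(looks_like_a_deposit(True, True))  # True
```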
Section 21(a) of the Glass-Steagall Act establishes two overlapping prohibitions against
the receipt of deposits by nonbanks. Section 21(a)(1) focuses on persons who are engaged in
securities activities. Section 21(a)(1) bars issuers, underwriters, distributors, and sellers of
54 _MoneyGram Int’l, Inc. v. Commissioner, No. 15-60527, 664 Fed. Appx. 386, 392 (5th Cir., Nov. 15, 2016)_
(citations omitted); see also MoneyGram Int’l, Inc. v. Commissioner, 999 F.3d 269, 274-76 (5th Cir. 2021).
55 _United States v. Jenkins, 943 F.2d_ at 174 (holding that the defendant (an individual) accepted a “deposit” when he
“took custody” of $150,000 on behalf of a purported foreign bank and “agreed to return it at the will” of the
depositor, stating “[y]our money will be here for your use”); In re Thaxton Group, Inc., Securities Litigation, C/A
No. 8:02-2612-GRA, 2006 WL 8462530, at *1–*3, *9–*14 (D.S.C., Mar. 20, 2006) (holding that the defendant (a
nonbank finance company) accepted “deposits” by selling $121 million of demand notes to 5,000 investors, thereby
“taking money from investors in return for a promise to return the funds on demand,” and explaining that the “notes
were designed to imitate bank certificates of deposit and money market accounts in order to attract bank depositors
to the note program”); S & N Equipment Co. v. Casa Grande Cotton Finance Co., 97 F.3d 337, 340–45 (9th Cir.
1996) (determining that the defendant (a nonbank finance company) accepted “demand deposits” for purposes of the
BHC Act because the defendant “accepted funds from its customers,” placed those funds in “credit accounts,” and
allowed customers to “withdraw funds as needed” and transfer funds to third parties).
-----
“stocks, bonds, debentures, notes, or other securities” from also “engag[ing] at the same time to
any extent whatever in the business of receiving deposits subject to check or repayment upon
presentation of a passbook, certificate of deposit, or other evidence of debt, or upon request of
the depositor.”[56] If Novi’s stablecoins are determined to be “securities,” Novi would be
“engag[ing] at the same time” in both (1) issuing, underwriting, distributing, or selling
“securities” and (2) receiving “deposits” that are (A) withdrawn or transferred by customers
using functional equivalents of “checks” or (B) repaid to customers upon their request. Section
21(a)(1) clearly forbids that combination of activities.[57]
Court decisions have treated “deposits” as also being “securities” for purposes of the
federal securities laws unless those deposits are accepted either by FDIC-insured U.S. banks or
by foreign banks subject to a regulatory regime that provides comparable protections to
depositors.[58] Based on those decisions, Novi’s stablecoins would be subject to regulation as both
“deposits” and “securities” because Novi is not chartered or regulated as a bank and does not
have FDIC insurance. Accordingly, DOJ should determine that Novi’s stablecoins violate
Section 21(a)(1) if those stablecoins are found to be “securities.”
Section 21(a)(2) of the Glass-Steagall Act is a broader and more sweeping provision.
Section 21(a)(2) prohibits all persons (regardless of whether they are also involved in
“securities” activities) from “engag[ing], to any extent whatever . . . in the business of receiving
deposits” – described with the functional characteristics included in Section 21(a)(1) – unless
56 12 U.S.C. § 378(a)(1).
57 For court decisions applying Section 21(a)(1) and holding that the relevant terms – including “securities,”
“underwriting,” and “dealing” – are generally given the same meaning under federal securities laws and the Glass-Steagall Act, see Securities Indus. Ass’n v. Board of Governors, 468 U.S. 137, 148-52 (1984); Investment Co.
_Institute v. Conover, 790 F.2d 925, 927-28, 933-34 (D.C. Cir.), cert. denied sub nom. Investment Co. Institute v._
_Clarke, 479 U.S. 939 (1986)._
58 _Marine Bank v. Weaver, 455 U.S. 551, 555-59 (1982); SEC v. McDuffie, Civil Action No. 12-cv-02939, 2014 WL_
4548723, at *3-*7 (D. Colo., Sept. 15, 2014); SEC v. Stanford Int’l Bank, Ltd., Civil Action No. 3:09-CV-0298-N,
2011 WL 13160374, at *3-*5 (N.D. Tex., Nov. 30, 2011).
-----
those persons satisfy one of three alternative sets of regulatory criteria. Under Section 21(a)(2),
a person who engages in the business of receiving deposits must (A) be chartered and authorized
to “engage in such business” by, and subject to examination and regulation under, federal laws or
the laws of a state, U.S. territory, or the District of Columbia, or (B) be “permitted by” federal
laws or the laws of a state, U.S. territory, or the District of Columbia to “engage in such
business” and also be subject under the laws of that jurisdiction to “examination and regulation,”
or (C) submit to “periodic examination by the banking authority” of the state, territory, or
District of Columbia where “such business is carried on,” and publish “periodic reports of its
condition,” in “the same manner and under the same conditions” as are required by the laws of
such state, territory, or District for chartered banks “engaged in such business in the same
locality.”[59] It bears repeating that Section 21(a)(2) – unlike Section 21(a)(1) – applies to all
_persons who engage in the business of receiving deposits, regardless of whether they are also_
issuing, underwriting, distributing, or selling “securities.”
Paragraphs (A) and (B) of Section 21(a)(2) cover much of the same ground – as both
paragraphs refer to institutions that are legally authorized to engage in “the business of receiving
deposits” – except that paragraph (A) refers to chartering, examination, and regulation while
paragraph (B) refers to examination and regulation but not chartering. Paragraph (C) describes
persons who are subject to “periodic examination” by a state, District, or territorial “banking
authority” and who also submit “periodic reports,” with such examination and reports to be made
“in the same manner and under the same conditions” as are required for chartered banks engaged
in the business of receiving deposits in the same state, District, or territory. The crucial point is
that all three paragraphs in Section 21(a)(2) refer to institutions that are either chartered as,
59 12 U.S.C. § 378(a)(2).
-----
regulated as, or subject to the same examination and reporting requirements as, deposit-taking
banks. Persons who do not satisfy the criteria set forth in any of the three paragraphs would
violate Section 21(a)(2) if they engage “to any extent whatever . . . in the business of receiving
deposits.”[60]
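Because Section 21(a)(2) reduces to a single deposit-taking trigger plus a disjunction of three alternative compliance prongs, the test can be summarized as a simple predicate. The sketch below (Python) is purely illustrative and not legal advice; the class and attribute names are hypothetical paraphrases of the statutory criteria, not terms drawn from the Act.

```python
# Illustrative only: Section 21(a)(2) as a predicate. An entity violates the
# provision if it takes deposits and satisfies none of prongs (A), (B), or (C).
# All names below are hypothetical paraphrases of the statutory criteria.
from dataclasses import dataclass

@dataclass
class DepositTaker:
    receives_deposits: bool                  # engages in the "business of receiving deposits"
    chartered_examined_regulated: bool       # prong (A)
    authorized_examined_regulated: bool      # prong (B)
    bank_equivalent_exams_and_reports: bool  # prong (C)

def violates_section_21a2(e: DepositTaker) -> bool:
    if not e.receives_deposits:
        return False  # the provision reaches only deposit-taking
    return not (e.chartered_examined_regulated
                or e.authorized_examined_regulated
                or e.bank_equivalent_exams_and_reports)

# Per the analysis above, an entity in Novi's described position fails all
# three prongs and therefore violates the provision:
print(violates_section_21a2(DepositTaker(True, False, False, False)))  # True
```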
As explained above, Novi’s activities satisfy Section 21(a)’s functional description of
engaging in the “business of receiving deposits subject to check or repayment . . . upon request
of the depositor.” Novi’s deposit-taking business violates Section 21(a)(2) – regardless of
whether its stablecoins are treated as “securities” – because Novi does not satisfy any of the
criteria set forth in paragraphs (A), (B), or (C). Novi is not chartered as a deposit-taking bank
under federal or state laws. Novi is not permitted by federal or state laws to engage in the
business of receiving deposits while being examined and regulated in connection with that
business. Novi also is not complying with the same examination and reporting requirements as
are applied by the “banking authority” of the relevant state, District, or territory to chartered,
deposit-taking banks.
Section 21(b) imposes criminal sanctions on persons who violate Section 21(a), and DOJ
is therefore responsible for enforcing the statute. In October 1979, a New York savings bank
sent a letter to DOJ and the SEC alleging that Merrill Lynch was violating Section 21(a) by
offering “cash management” money market funds that were unlawful “deposits.” DOJ’s
Criminal Division issued an opinion in December 1979, which rejected the savings bank’s
allegations based on a highly formalistic analysis. DOJ classified money market funds as equity
investments rather than debt claims, and DOJ concluded that only debt claims could be treated as
“deposits” under Section 21(a). DOJ ignored the fact that Merrill Lynch provided its customers
60 _Id.; see United States v. Jenkins, 943 F.2d at 173-74; In re Thaxton Group, Inc., Securities Litigation, supra note_
55, at *1-*3, *9-*14.
-----
with the functional equivalent of deposits by (i) maintaining a stable value – a fixed NAV of $1 per share – for the money market funds held in customer accounts, (ii) allowing customers to withdraw their funds with the same stable value by making redemption requests or writing checks, and (iii) enabling customers to transfer their funds with the same stable value to third parties by writing checks.[61]
DOJ’s 1979 opinion should not be considered a binding precedent. That opinion’s
formalistic reasoning is not consistent with either Section 21(a)’s functional description of
“deposits” or the statute’s purpose to “prohibit[] . . . unregulated private banking so far as
practicable.” The 1979 opinion’s reasoning also is not compatible with the functional, pragmatic
approach of at least two courts that interpreted Section 21(a) more recently.[62] DOJ should
undertake a fresh review of Section 21(a) and should determine that the statute’s functional
description of “deposits” includes funds that are received from and held on behalf of customers
with the understanding that customers may withdraw or transfer those funds by using “checks”
(or functionally equivalent methods of payment) or by making “requests” for repayment.
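For illustration, the functional description of “deposits” argued for here can be written out as a small test. This is a hypothetical sketch only; the parameter names paraphrase the statutory language and are not drawn from the statute itself.

```python
# A hypothetical sketch of the functional "deposit" test described above:
# funds received from and held for a customer that the customer can
# withdraw on request or transfer by check-like means.
def is_functional_deposit(received_from_customer: bool,
                          held_for_customer: bool,
                          repayable_on_request: bool,
                          transferable_by_check_equivalent: bool) -> bool:
    return (received_from_customer and held_for_customer
            and (repayable_on_request or transferable_by_check_equivalent))

# A fixed-parity stablecoin balance that holders can redeem or send to
# third parties satisfies every element of the test:
print(is_functional_deposit(True, True, True, True))  # True
```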
Based on the foregoing determination, DOJ should issue a rule declaring that issuers and
distributors of stablecoins providing a fixed 1:1 parity with the U.S. dollar or another widely
used fiat currency are “engag[ing] . . . in the business of receiving deposits” if they receive funds
from customers, hold stablecoins on behalf of customers, and allow customers to redeem,
61 Gorton & Zhang, supra note 7, at 10-12, 33-35; Howell E. Jackson & Morgan Ricks, “Locating Stablecoins
Within the Regulatory Perimeter,” Harvard Law School Forum on Corporate Governance (Aug. 5, 2021),
[https://corpgov.law.harvard.edu/2021/08/05/locating-stablecoins-within-the-regulatory-perimeter/](https://corpgov.law.harvard.edu/2021/08/05/locating-stablecoins-within-the-regulatory-perimeter/); Wilmarth,
_Taming the Megabanks, supra note 25, at 153-54._
62 _See United States v. Jenkins, 943 F.2d at 173-74; In re Thaxton Group, Inc., Securities Litigation, supra note 55,_
at *1-*3, *9-*14; Jackson & Ricks, supra note 61 (“The legislative history of section 21(a)(2) confirms that the
provision was intended to ‘prohibit[]. . . unregulated private banking so far as practicable.’”) (quoting Senate Report
No. 1007, 74th Cong., 1st Sess. 15 (1935)). See also Gorton & Zhang, supra note 7, at 10-12, 33-35; Wilmarth,
“Pandemic Crisis,” supra note 15, at 8, 20-21 n.45; Wilmarth, Taming the Megabanks, supra note 25, at 137-39,
153-54.
-----
withdraw, or transfer their stablecoins. Pursuant to Section 21(a)(1), DOJ’s rule should prohibit
issuers and distributors of stablecoins that are determined to be “securities” from also receiving,
holding, and allowing redemptions, withdrawals, or transfers of customers’ stablecoins.
For stablecoins that are determined not to be “securities,” DOJ’s rule should describe the
criteria that issuers and distributors of those stablecoins must satisfy under Section 21(a)(2).
DOJ’s rule should make clear that issuers and distributors of stablecoins may not receive, hold,
and allow redemptions, withdrawals, or transfers of customers’ stablecoins unless those issuers
and distributors are either (A) chartered, examined, and regulated as deposit-taking banks, or (B)
legally authorized to engage in the business of receiving deposits while also being examined and
regulated in their conduct of that business, or (C) complying with the same examination and
reporting requirements as are applied to deposit-taking banks by the “banking authority” of the
relevant state, District, or U.S. territory.[63]
Some might argue that DOJ should not bring enforcement proceedings against issuers
and distributors of stablecoins under Section 21(a) unless DOJ takes similar measures against
other nonbanks providing financial services that are functionally equivalent to deposits. Such
nonbanks would include money market funds as well as payment service providers such as
PayPal and its subsidiary Venmo, which hold customer balances and allow customers to
withdraw or transfer those balances to others. I would personally welcome a decision by DOJ to
take an across-the-board approach, and I believe DOJ would have authority under Section 21(a)
63 12 U.S.C. § 378(a); see Gorton & Zhang, supra note 7, at 33-35; Jackson & Ricks, supra note 61; Wilmarth,
“Pandemic Crisis,” supra note 15, at 8-10, 20-21 n.45 (contending that an issuer or distributor of stablecoins would
not comply with Section 21(a)(2) if it merely obtained a state money transmitter license and complied with
FinCEN’s AML requirements, as those limited forms of regulation would not be “equivalent to bank regulation and
supervision in any meaningful sense”); see also _MoneyGram_ _Int’l Inc. v. Commissioner, 999 F.3d 269 (5th Cir._
2021) (holding that a state-licensed money transmitter was not a “bank” because it did not accept “deposits”);
Awrey, supra note 7, at 7-8, 40-56 (describing the “alarming . . . permissiveness” of state laws regulating money
transmitters, a situation that “undermines the credibility of [money transmitters’] monetary commitments”).
-----
to institute enforcement proceedings against money market funds, PayPal, and Venmo.[64]
However, DOJ is not required to act against all violators of Section 21(a) at the same time. DOJ
could reasonably decide to focus on stablecoins as a particularly dangerous form of unauthorized
deposit-taking that should be stopped before DOJ determines how to deal with similar problems
created by money market funds, PayPal, and Venmo.[65]
**c.** **Congress should pass legislation mandating that all issuers and**
**distributors of stablecoins must be FDIC-insured banks.**
**i.** **Requiring all issuers and distributors of stablecoins to be**
**FDIC-insured banks would remove uncertainties and gaps in**
**Section 21(a) of the Glass-Steagall Act.**
I strongly support PWG’s recommendation that Congress should “promptly” pass
legislation requiring all issuers of stablecoins to be FDIC-insured banks.[66] Such legislation is
urgently needed to overcome uncertainties and gaps that currently exist in Section 21(a) of the
Glass-Steagall Act. As explained in the preceding section, an issuer or distributor of stablecoins
could offer services that are functionally equivalent to deposits and avoid violating Section 21(a)
if it could show that (1) its stablecoins are not “securities,” and (2) it is either (A) chartered,
regulated, and examined as a deposit-taking bank, or (B) legally authorized to engage in the
business of receiving deposits and subject to examination and regulation in conducting that
64 Wilmarth, “Pandemic Crisis,” supra note 15, at 7-10; see also Gorton & Zhang, supra note 7, at 33-35; Jackson &
Ricks, supra note 61.
65 _See United States v. Central Adjustment Bureau, Inc.,_ 823 F.2d 880 (5th Cir. 1987) (holding that “Congress can attack particular evils on a step by step basis,” and Congress had a rational basis for passing the Fair Debt Collection Practices Act, which prohibited abusive practices by independent debt collectors without addressing similar abuses by other
types of debt collectors); see also Minnesota v. Clover Leaf Creamery Co., 449 U.S. 456, 461-70 (1981) (quote at
466) (holding that state legislatures “need not ‘strike at all evils at the same time or in the same way,’” and the
Minnesota legislature had a rational basis for banning nonreturnable plastic milk jugs to protect the environment
without also prohibiting nonreturnable paperboard milk containers) (quoting Semler v. Oregon State Bd. of Dental
_Examiners, 294 U.S. 608, 610 (1935))._
66 PWG Report, supra note 1, at 2, 16.
-----
business, or (C) subject to the same examination and reporting requirements as are applied to
chartered, deposit-taking banks by the “banking authority” of the relevant state, District, or U.S.
territory.
The terms of Section 21(a)(2) contain potential ambiguities that would need to be
resolved by DOJ and the courts. For example, what precise levels of examination and regulation
are needed to satisfy paragraph (B), and what exact types of examinations and reports are
required to comply with paragraph (C)? States that wanted to attract entry by stablecoin issuers
and distributors could enact laws designed to exploit those ambiguities by granting the most
lenient possible treatment to stablecoin providers.[67]
Even more troubling, an issuer or distributor of stablecoins would qualify under
paragraph (A) of Section 21(a)(2) if it could obtain a charter for an uninsured depository
institution from a federal or state banking authority. Until recently, a deposit-taking bank could
not receive either a federal or state charter unless it also obtained deposit insurance from the
FDIC and became subject to the full panoply of laws governing FDIC-insured banks. Federal
law currently requires all national banks that accept deposits to obtain FDIC insurance. Prior to
2019, every state required state-chartered banks to obtain FDIC insurance as a precondition for
accepting deposits.[68]
A number of the state laws mandating FDIC insurance for state-chartered banks were
enacted in response to widespread failures of non-federally-insured depository institutions during
the 1980s and early 1990s. During that period, state-sponsored, privately-funded insurance
67 _See Jackson & Ricks, supra note 61._
68 _See Barr, Jackson & Tahyar, supra note 42, at 173-74 (As of 2018, “Federal law requires that all national banks be_
FDIC insured, and all state laws require that a state-chartered commercial bank obtain FDIC insurance.”); 12 U.S.C.
§ 222 (“Every national bank in any State shall . . . become a member bank of the Federal Reserve System . . . and
shall thereupon be an insured bank under the Federal Deposit Insurance Act”); Wilmarth, “Banking Privileges,”
_supra note 26, at 2-7._
-----
systems for state-chartered depository institutions collapsed in several states, with the worst
disasters occurring in Ohio, Maryland, and Rhode Island. The injuries suffered by depositors
and local economies were particularly severe in Rhode Island, where non-federally-insured
depositors lost access to at least some of their deposits for nearly three years.[69]
Despite the dismal record of non-federally-insured depository institutions, Wyoming and
Nebraska have passed laws during the past two years that authorize charters for uninsured
“special purpose depository institutions” (SPDIs) in Wyoming and uninsured “digital asset
depositories” (DADs) in Nebraska. Wyoming and Nebraska allow SPDIs and DADs to accept
deposits (including deposits of digital assets) and engage in other cryptocurrency-related
activities without obtaining FDIC insurance. Wyoming has already approved four SPDI
charters, including one awarded to Kraken, a major cryptocurrency venture.[70]
In December 2020, Figure Technologies (Figure) applied to the OCC to obtain a charter
for a national bank that would accept only “jumbo” deposits larger than $250,000 (the current
limit for federal deposit insurance). Figure asserted that it could avoid any obligation to obtain
FDIC insurance by accepting only jumbo deposits. The OCC has not acted on Figure’s
application, and state regulators have challenged the OCC’s legal authority to approve any
69 Qian Chen et al., “The Macroeconomic Fallout of Shutting Down the Banking System,” 105 Economic Review
[No. 2, at 31 (Fed. Res. Bank of K.C., 2020), https://www.kansascityfed.org/documents/8185/v105n2sharma.pdf;](https://www.kansascityfed.org/documents/8185/v105n2sharma.pdf)
Walker F. Todd, “Lessons from the Collapse of Three State-Chartered Private Insurance Funds,” Economic
_[Commentary (Fed. Res. Bank of Cleve., May 1, 1994), https://www.clevelandfed.org/en/newsroom-and-](https://www.clevelandfed.org/en/newsroom-and-events/publications/economic-commentary/economic-commentary-archives/1994-economic-commentaries/ec-19940501-lessons-from-the-collapse-of-three-state-chartered-private-deposit-insurance-funds.aspx)_
[events/publications/economic-commentary/economic-commentary-archives/1994-economic-commentaries/ec-](https://www.clevelandfed.org/en/newsroom-and-events/publications/economic-commentary/economic-commentary-archives/1994-economic-commentaries/ec-19940501-lessons-from-the-collapse-of-three-state-chartered-private-deposit-insurance-funds.aspx)
[19940501-lessons-from-the-collapse-of-three-state-chartered-private-deposit-insurance-funds.aspx; see also](https://www.clevelandfed.org/en/newsroom-and-events/publications/economic-commentary/economic-commentary-archives/1994-economic-commentaries/ec-19940501-lessons-from-the-collapse-of-three-state-chartered-private-deposit-insurance-funds.aspx)
Christine Bradley & Valentine V. Craig, “Privatizing Deposit Insurance: Results of the 2006 FDIC Study,” 1 FDIC
_Quarterly No. 2, at 23, 28-30 (2007) (discussing the collapses of numerous state-sponsored private insurance_
[systems for depository institutions between the 1830s and the 1990s), https://www.fdic.gov/analysis/quarterly-](https://www.fdic.gov/analysis/quarterly-banking-profile/fdic-quarterly/2007-vol1-2/privatizing-deposit-insurance.pdf)
[banking-profile/fdic-quarterly/2007-vol1-2/privatizing-deposit-insurance.pdf.](https://www.fdic.gov/analysis/quarterly-banking-profile/fdic-quarterly/2007-vol1-2/privatizing-deposit-insurance.pdf)
70 _See Gorton & Zhang, supra note 7, at 20; Wyoming Division of Banking, “Special Purpose Depository_
[Institutions,” https://wyomingbankingdivision.wyo.gov/banks-and-trust-companies/special-purpose-depository-](https://wyomingbankingdivision.wyo.gov/banks-and-trust-companies/special-purpose-depository-institutions)
[institutions; “Nebraska Financial Innovation Act,” Neb. Rev. Stat. Ch. 8, Art. 30, available at](https://wyomingbankingdivision.wyo.gov/banks-and-trust-companies/special-purpose-depository-institutions)
[https://ndbf.nebraska.gov/about/legal/financial-innovation-act.](https://ndbf.nebraska.gov/about/legal/financial-innovation-act)
-----
charters for uninsured, deposit-taking national banks.[71] Nevertheless, Figure and other
cryptocurrency companies recently met with the Fed, FDIC, and OCC to discuss “how to issue a
stablecoin that satisfies” regulators.[72]
Wyoming’s and Nebraska’s new laws and Figure’s charter application are designed to
produce charters for uninsured deposit-taking banks that can issue and distribute stablecoins and
engage in other cryptocurrency activities. As indicated above, uninsured banks – if lawfully
chartered and legally authorized to receive deposits – could issue and distribute stablecoins that
are not “securities” without violating Section 21(a)(2)(A) of the Glass-Steagall Act. They also
would not be required to comply with the Federal Deposit Insurance Act (FDI Act) or the BHC
Act. As explained in the next section, the FDI Act and the BHC Act establish crucial public
interest safeguards that govern FDIC-insured banks and their parent companies.[73] Congress
should make certain that those safeguards apply to all institutions that engage in the business of
receiving either bank deposits or “shadow deposits.” Ongoing efforts by cryptocurrency
ventures to obtain charters for uninsured, deposit-taking banks have cast a very bright light on a
dangerous gap in existing laws. Congress must quickly pass legislation that will close that gap
by requiring all issuers and distributors of stablecoins to be FDIC-insured banks.
**ii.** **Congress should require all issuers and distributors of**
**stablecoins to be FDIC-insured banks, thereby bringing those**
**entities within the scope of the FDI Act and the BHC Act.**
71 Lydia Beyoud, “Fintech Charter Suit on Hold as Bank Regulator Reviews Policies,” Bloomberg Law (June 17,
2021); Lydia Beyoud, “Fintech Lender’s Bank Bid Says ‘No Thanks’ on Deposit Insurance,” Bloomberg Law (Dec.
3, 2020).
72 Jesse Hamilton, “Stablecoin Advocates Make Their Case to U.S. Banking Regulators,” Bloomberg Law (Nov. 22,
2021).
73 _See Gorton & Zhang, supra note 7, at 3-6, 17-21, 33-35, 38-39; Wilmarth, “Banking Privileges,” supra note 26, at_
1, 6-11.
-----
PWG’s report urged Congress to act “promptly” in passing legislation that would require
all issuers of stablecoins to be FDIC-insured banks.[74] The same requirement should apply to
entities, such as Facebook’s Novi, that distribute stablecoins issued by other companies. Firms
that distribute stablecoins to the public should not be allowed to avoid compliance with the FDI
Act and the BHC Act simply because their deposit-taking and payment networks employ
stablecoins issued by other companies.
Legislation requiring all issuers and distributors of stablecoins to be FDIC-insured banks
would guarantee that those firms must comply with crucial public interest safeguards in the FDI
Act. Those safeguards include: (a) deposit insurance coverage, payment of risk-based deposit
insurance premiums, and reporting and examination requirements under 12 U.S.C. §§ 1817,
1820 & 1821; (b) supervisory and enforcement powers granted to federal bank regulators under
12 U.S.C. § 1818; (c) procedures for resolving failed and failing banks under 12 U.S.C. §§
1821(c), 1822 & 1823; (d) risk-based capital requirements and other safety and soundness
standards under 12 U.S.C. §§ 1831p-1 & 3901-07; (e) prompt corrective action remedies under
12 U.S.C. § 1831o; (f) safety and soundness requirements and protections for competition that
govern proposed changes in control of banks and bank mergers under 12 U.S.C. §§ 1817(j) &
1828(c); (g) prohibitions on abusive tying practices under 12 U.S.C. §§ 1971-77; (h) “source of strength” obligations and capital requirements for parent companies of FDIC-insured banks under 12 U.S.C. §§ 1831o-1 & 5371(b); (i) community reinvestment duties under 12 U.S.C. §§ 2901-08; and (j) expedited funds availability requirements under 12 U.S.C. §§ 4001-10.
Mandating status as FDIC-insured banks for all issuers and distributors of stablecoins
would also make certain that those entities are treated as “banks” for purposes of the BHC Act.[75]
74 PWG Stablecoin Report, supra note 1, at 2, 16.
75 12 U.S.C. § 1841(c)(1)(A) (defining “bank” for purposes of the BHC Act to include all FDIC-insured banks).
-----
The BHC Act requires all companies that own or control FDIC-insured banks to comply with
additional public interest safeguards, including (a) safety and soundness standards and
protections for competition governing proposed acquisitions of banks under 12 U.S.C. § 1842;
(b) limitations on permissible nonbanking activities under 12 U.S.C. § 1843; (c) the Fed’s
authority to conduct examinations, require reports, bring enforcement actions, adopt supervisory
rules, and impose risk-based capital requirements under 12 U.S.C. §§ 1818, 1844, 1847, &
5371(b); and (d) privacy protections that (i) prohibit financial holding companies from disclosing
nonpublic customer information to unaffiliated third parties against their customers’ wishes, and
(ii) bar third parties from using false or deceptive practices to obtain such information (15 U.S.C.
§§ 6801-09, 6821-27).
One of the BHC Act’s most important provisions is 12 U.S.C. § 1843, which prohibits
companies that own or control banks from engaging in commercial activities or owning
commercial enterprises. Section 1843 prevents the formation of banking-and-commercial
conglomerates that would pose grave dangers to our society, financial system, and economy,
including (1) hazardous concentrations of economic and financial power and political influence,
(2) toxic conflicts of interest that would destroy the ability of banks to act objectively in
providing credit and other services, and (3) risks of systemic contagion between financial and
commercial sectors that could inflict enormous losses on the federal “safety net” for banks
(including the FDIC’s deposit insurance fund, the Fed’s discount window, and the Fed’s
guarantee for interbank payments made on Fedwire, as well as the federal government’s explicit
and implicit protections for “too big to fail” banking organizations). Requiring all issuers and
distributors of stablecoins to be FDIC-insured banks would prevent stablecoin ventures from
-----
being owned or controlled by commercial enterprises, including Big Tech firms like Apple,
Amazon, Facebook, Google, and Microsoft.[76]
Stopping Big Tech firms from acquiring ownership or control of issuers and distributors
of stablecoins should be a top priority for financial regulators and Congress. Big Tech firms
already enjoy significant advantages over traditional providers of financial services in areas such
as automation, artificial intelligence, data management, and mobile payments. The rapid
expansion of Ant Financial (Alipay) and Tencent (WePay) in China – prior to the crackdown on
both companies by Chinese authorities in 2020 – indicates that Big Tech firms could potentially
dominate major segments of our financial industry if they were allowed to offer deposit and
payment services. The entry of Big Tech firms into the banking business would create a wide
range of potential threats, including unfair competition, market dominance, predatory lending,
abusive sharing of customer data and other violations of customer privacy rights, as well as much
greater risks of systemic contagion across financial and nonfinancial sectors of our economy
during financial crises and severe economic downturns.[77]
76 Wilmarth, “Banking Privileges,” supra note 26, at 6-11; Arthur E. Wilmarth, Jr., “The FDIC Should Not Allow
Commercial Firms to Acquire Industrial Banks,” 39 Banking & Financial Services Policy Report No. 5 (May 2020),
[at 1, 2-10 [hereinafter Wilmarth, “Industrial Banks”], available at https://ssrn.com/abstract=3613022; Arthur E.](https://ssrn.com/abstract=3613022)
Wilmarth, Jr., “Wirecard and Greensill Scandals Confirm Dangers of Mixing Banking and Commerce,” 40 Banking
_& Financial Services Policy Report No. 5 (May 2021), at 1, 11-12 [hereinafter Wilmarth, “Wirecard and_
[Greensill”], available at https://ssrn.com/abstract=3849567.](https://ssrn.com/abstract=3849567)
77 _See Raúl Carrillo, Testimony before the Subcomm. on Consumer Protection and Financial Institutions of the_
House Comm. on Financial Services 7-9 (April 15, 2021) [hereinafter Carrillo Testimony],
[https://docs.house.gov/meetings/BA/BA15/20210415/111447/HHRG-117-BA15-Wstate-CarrilloR-20210415.pdf;](https://docs.house.gov/meetings/BA/BA15/20210415/111447/HHRG-117-BA15-Wstate-CarrilloR-20210415.pdf)
Kathryn Petralia, Thomas Philippon, Tara Rice & Nicholas Véron, Banking Disrupted? Financial Intermediation in
_an Era of Transformational Technology 25–38, 44–82 (Geneva Reports on the World Economy 22, 2019),_
[https://www.cimb.ch/uploads/1/1/5/4/115414161/banking_disrupted_geneva22-1.pdf; Investigation of Competition](https://www.cimb.ch/uploads/1/1/5/4/115414161/banking_disrupted_geneva22-1.pdf)
_in Digital Markets: Majority Staff Report and Recommendations (Subcomm. on Antitrust, Commercial and_
Administrative Law of the House Comm. on the Judiciary, 2020) [hereinafter 2020 House Staff Report on
[Competition in Digital Markets], https://judiciary.house.gov/uploadedfiles/competition_in_digital_markets.pdf;](https://judiciary.house.gov/uploadedfiles/competition_in_digital_markets.pdf?utm_campaign=4493-519)
Wilmarth, “Banking Privileges,” supra note 26, at 6-11; Wilmarth, “Industrial Banks,” supra note 76, at 4-10;
Wilmarth, “Wirecard and Greensill,” supra note 76, at 11-13.
-----
Facebook’s plan to offer deposit and payment services through Novi poses unacceptable
threats to consumer privacy and welfare. Facebook has repeatedly abused its customers’ privacy
rights and has reportedly marketed products that it knew were harmful to its customers. In 2012,
Facebook entered into a consent decree with the Federal Trade Commission (FTC) to settle
charges that it deceived customers and violated its promises to allow customers to control the
privacy of information they posted on Facebook. In 2019, Facebook paid a $5 billion fine to
resolve the FTC’s charges that Facebook violated the privacy commitments included in the 2012
consent decree.[78] The FTC recently launched a new investigation of Facebook after a
whistleblower informed Congress that Facebook knew from its internal research that some of its
products caused mental health problems in minors as well as other harms to customers.[79]
Facebook has long sought to enter the financial services industry to extend its dominance
over social networks and expand its access to customer information. In 2012, Facebook founder
and CEO Mark Zuckerberg said that the launch of a successful payment service would give
Facebook “a pretty awesome combo and a good reason for people to use [Facebook’s] platform,”
as well as making it “more acceptable for us to charge them quite a bit more for using [our]
platform.”[80] Offering deposit and payment services would greatly increase Facebook’s ability to
access, leverage, and monetize its customers’ private information. As Open Markets Institute
recently pointed out,
78 “FTC Imposes $5 Billion Penalty and Sweeping New Privacy Restrictions on Facebook,” (Fed. Trade Comm’n,
[July 24, 2019), https://www.ftc.gov/news-events/press-releases/2019/07/ftc-imposes-5-billion-penalty-sweeping-](https://www.ftc.gov/news-events/press-releases/2019/07/ftc-imposes-5-billion-penalty-sweeping-new-privacy-restrictions)
[new-privacy-restrictions.](https://www.ftc.gov/news-events/press-releases/2019/07/ftc-imposes-5-billion-penalty-sweeping-new-privacy-restrictions)
79 John D. McKinnon & Brent Kendall, “Federal Trade Commission Scrutinizing Facebook Disclosures,” Wall
_[Street Journal (Oct. 27, 2021), https://www.wsj.com/articles/facebook-ftc-privacy-kids-11635289993; John D.](https://www.wsj.com/articles/facebook-ftc-privacy-kids-11635289993)_
McKinnon & Ryan Tracy, “Facebook Whistleblower’s Testimony Builds Momentum for Tougher Tech Laws,”
_[Wall Street Journal (Oct. 5, 2021), https://www.wsj.com/articles/facebook-whistleblower-frances-haugen-set-to-](https://www.wsj.com/articles/facebook-whistleblower-frances-haugen-set-to-appear-before-senate-panel-11633426201)_
[appear-before-senate-panel-11633426201.](https://www.wsj.com/articles/facebook-whistleblower-frances-haugen-set-to-appear-before-senate-panel-11633426201)
80 Open Markets Facebook Letter, supra note 22, at 12 (quoting Zuckerberg’s statement).
-----
Facebook occupies a dominant role in American life and indeed the lives of
people around the world, with over 1 billion users for four of its services,
including Facebook, Instagram, Messenger, and WhatsApp. Facebook is also a
giant in the advertising space, with their 2020 advertising revenue close to $84.2
billion dollars — nearly $1.6 billion each week.[81]
A 2020 House subcommittee staff study found that “Facebook has monopoly power in
the market for social networking” and has exploited that market power by becoming a
“gatekeeper” with “outsized power to control the fates of other companies.”[82] Facebook
generates most of its revenues by selling digital advertising. Facebook’s access to the private
information of hundreds of millions of customers enables it to command much higher prices for
its sales of advertising, compared with its competitors.[83] The House staff study concluded that
Facebook’s dominance of the social networking market – like Google’s dominance of the
Internet search market – allows Facebook to “abuse consumers’ privacy without losing
customers.”[84] As one expert advised the House subcommittee:
Facebook and Google have built comprehensive dossiers on almost everyone, and
they can sell incredibly targeted advertisement on that basis. . . . But doing so
represents an inherent violation of the receiver’s privacy. Every ad targeted using
personal information gathered without explicit, informed consent is at some level
81 _Id._
82 2020 House Staff Report on Competition in Digital Markets, supra note 77, at 12-14, 39-40; see also id. at 132-73
(providing a detailed analysis of Facebook’s exploitation of its market power, including its acquisition of numerous
competitors).
83 _Id. at 170-72._
84 _Id. at 18, 51-53._
-----
a violation of privacy. And Facebook and Google are profiting immensely by
selling these violations to advertisers.[85]
Novi’s deposit and payment services could enable Facebook to collect and monetize a
vast array of data about its customers’ financial assets and transactions. The treasure trove of
nonpublic customer information that Big Tech firms could capture by offering deposit and
payment services is indicated by the huge data set compiled by JPMorgan Chase Institute
(JPMCI). JPMCI has collected and analyzed a massive pool of information, drawn from the
records of JPMorgan Chase (JPMC), the largest U.S. bank, containing the “saving, spending, and
borrowing habits of the bank’s customers.” The Fed “has used the Institute’s research when
weighing interest-rate decisions,” thereby confirming the enormous value of JPMC’s
comprehensive information about its customers’ financial dealings.[86] Allowing Facebook and
other Big Tech firms to build similar data sets by offering deposit and payment services would
increase exponentially their ability to monetize customer information and degrade customer
privacy by secretly transferring that information to third-party sellers of goods and services.
Requiring all issuers and distributors of stablecoins to be FDIC-insured banks would
guarantee that all companies that own or control those entities must comply with the privacy
protections governing financial holding companies (15 U.S.C. §§ 6801-09, 6821-27). That
requirement would also prevent Facebook and other Big Tech firms from offering deposit and
payment services built around stablecoins. The PWG Report correctly determined that “the
combination of a stablecoin issuer or [digital] wallet provider and a commercial firm could lead
85 _Id. at 54-55 (quoting testimony of David Heinemeier Hansson, co-founder and chief technology officer of_
Basecamp).
86 David Benoit, “How JPMorgan Is Helping Washington Make Sense of the Pandemic Recovery,” Wall Street
_[Journal (Nov. 9, 2021), https://www.wsj.com/articles/how-jpmorgan-is-helping-washington-make-sense-of-the-](https://www.wsj.com/articles/how-jpmorgan-is-helping-washington-make-sense-of-the-pandemic-economy-11636462980)_
[pandemic-economy-11636462980.](https://www.wsj.com/articles/how-jpmorgan-is-helping-washington-make-sense-of-the-pandemic-economy-11636462980)
-----
to an excessive concentration of economic power,” which could “restrict access” to credit and
other financial services and have “detrimental effects on competition.”[87]
Our nation stands at a crossroads. We can maintain the BHC Act’s longstanding policy
of separating banking and commerce, thereby preserving a financial sector, a national economy,
and a society that are not compromised by toxic conflicts of interest, exploited by unfair
competitive advantages, or dominated by the overwhelming economic power and political
influence of giant banking-and-commercial conglomerates. Or we can allow Facebook and other
Big Tech firms to enter the banking business and leverage their stablecoin ventures to create
massive “shadow banking” empires, thereby subverting the BHC Act’s separation of banking
and commerce. In that event, Big Tech firms might well gain dominance over our banking
industry – either by building their own financial kingdoms or by combining with our largest
banks – thereby creating the very evils that the BHC Act was designed to prevent.[88]
**iii.** **The FDIC should not approve pass-through deposit insurance**
**coverage for stablecoins.**
The FDIC is reportedly considering the possibility of allowing FDIC-insured banks to
hold reserves deposited by stablecoin issuers and provide “pass-through” deposit insurance
coverage to customers of those issuers.[89] The FDIC currently grants pass-through deposit
insurance coverage to holders of stored-value cards if the issuers of those cards satisfy the
following conditions: (i) each issuer must establish a custodial deposit account at an FDIC
insured bank to hold the funds owned by card holders, (ii) the issuer must allow card holders to
87 PWG Stablecoin Report, supra note 1, at 14.
88 Carrillo Testimony, supra note 77, at 7-9; Wilmarth, “Banking Privileges,” supra note 26, at 6-11; Wilmarth,
“Industrial Banks,” supra note 76, at 4-12; Wilmarth “Wirecard and Greensill,” supra note 76, at 11-14.
89 Nate DiCamillo, “US FDIC Said to Be Studying Deposit Insurance for Stablecoins,” CoinDesk (Oct. 6, 2021),
[https://www.coindesk.com/policy/2021/10/06/us-fdic-said-to-be-studying-deposit-insurance-for-stablecoins/.](https://www.coindesk.com/policy/2021/10/06/us-fdic-said-to-be-studying-deposit-insurance-for-stablecoins/)
-----
access their funds at the bank through withdrawals or transfers to third parties, (iii) the bank’s
records must confirm that the issuer has established a custodial deposit account holding funds
owned by card holders, (iv) either the bank’s records or the issuer’s records must show, on an
accurate and current basis, the identity of each card holder and the amount of funds owned by
that holder, and (v) the issuer must inform card holders that their funds are held in a custodial
account at the bank.[90]
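As a rough illustration, the five stored-value-card conditions operate as an all-or-nothing checklist. In the sketch below the condition keys are hypothetical paraphrases of the FDIC opinion’s requirements, not its actual wording.

```python
# A minimal sketch of the five pass-through conditions as a checklist;
# keys are hypothetical paraphrases, not FDIC terms.
PASS_THROUGH_CONDITIONS = {
    "custodial_account_at_insured_bank": "issuer holds customer funds in a custodial deposit account",
    "customer_access": "holders can withdraw or transfer their funds",
    "bank_records_show_custodial_status": "bank records confirm the custodial account",
    "per_holder_ownership_records": "bank or issuer records identify each holder and amount, current and accurate",
    "holders_informed": "holders are told their funds sit in a custodial account",
}

def qualifies_for_pass_through(satisfied: set[str]) -> bool:
    """All five conditions must hold; any gap defeats coverage."""
    return set(PASS_THROUGH_CONDITIONS) <= satisfied

# A pseudonymous stablecoin arrangement typically cannot satisfy the
# per-holder recordkeeping condition discussed below:
stablecoin_issuer = set(PASS_THROUGH_CONDITIONS) - {"per_holder_ownership_records"}
print(qualifies_for_pass_through(stablecoin_issuer))  # False
```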
Approving pass-through deposit insurance for stablecoins would involve a number of
operational difficulties. One of the most significant challenges would be the requirement that
either the custodial bank or the stablecoin issuer must maintain current and accurate records
showing the identity of each holder of stablecoins and the amount of stablecoins owned by that
holder. As the PWG’s report pointed out,
The majority of the stablecoin market currently operates on public blockchains
where transactions may be pseudonymous, meaning the identity of the sender or
the receiver of a transaction is unknown, but other transactional information is
available (e.g., the amount, the time, the value, etc.).[91]
Indeed, the relative anonymity of transactions conducted with stablecoins – compared with
traditional payment methods other than cash – is a major reason for the popularity of
stablecoins.[92]
It is difficult to envision how stablecoin issuers and custodial banks could maintain
accurate and current records showing the identities of holders of stablecoins and the amounts of
90 FDIC, New General Counsel Opinion No. 8 (Oct. 31, 2008), available at 73 Fed. Reg. 67155-57 (Nov. 13, 2008).
91 PWG Stablecoin Report, supra note 1, at 19 n.39; see also DiCamillo, supra note 89 (explaining that stablecoins
are transferred on “public blockchain networks . . . and theoretically anyone with a crypto wallet that hasn’t been
blacklisted can receive stablecoins from, and send them to, other wallets.”).
92 _See supra notes 11-12 and accompanying text._
-----
coins they own without changing the fundamental nature of stablecoin transactions that are
conducted on distributed ledgers (particularly in DeFi transactions). A financial journalist
recently described the following problems that stablecoin issuers would confront if “financial
regulators declare that all stablecoin owners must be verified”:
Building infrastructure to collect and verify the identity of all users [of
stablecoins], and not just the few who redeem or deposit, is expensive. To recoup
their costs, issuers . . . may consider introducing fees. All of this could render
stablecoins less accessible for people who only want to use them for casual
remittances.
. . . In DeFi, stablecoins are often deposited into accounts controlled by bits of
autonomous code, or smart contracts, which don't have any underlying owner.
It’s not evident how a stablecoin issuer can conduct KYC [compliance] on a smart
contract.[93]
Thus, it would be extremely difficult, if not impossible, for a stablecoin issuer or a custodial
bank to maintain an accurate and current record of the identities of stablecoin holders or the
amounts of coins they currently own in order to qualify for pass-through deposit insurance
coverage from the FDIC.
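To make the recordkeeping difficulty concrete: on a public blockchain, an issuer can detect that a counterparty address is a smart contract, but a contract has no natural person behind it to identify. The sketch below is illustrative only; it assumes the web3.py library (v6-style API) and uses a placeholder RPC endpoint.

```python
# Illustrative sketch: distinguishing a smart contract (deployed code, no
# KYC-able owner) from an externally owned account. Assumes web3.py v6;
# the RPC endpoint below is a placeholder.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://example-rpc.invalid"))  # placeholder

def is_smart_contract(address: str) -> bool:
    """True if code is deployed at the address (empty code means an EOA)."""
    code = w3.eth.get_code(Web3.to_checksum_address(address))
    return len(code) > 0

# A stablecoin sitting at an address where is_smart_contract(...) is True is
# held by autonomous code, so there is no identified holder for the issuer
# or custodial bank to record for pass-through insurance purposes.
```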
Moreover, granting pass-through deposit insurance coverage to holders of stablecoins
would not remove the systemic perils created by the issuers and distributors of those coins. Pass-through coverage would allow issuers and distributors of stablecoins and their customers to
benefit from access to the FDIC’s deposit insurance fund. Issuers, distributors, and customers
would also benefit indirectly from the custodial bank’s access to the Fed’s discount window
93 Koning, supra note 11.
-----
advances and payments system guarantees, as well as other components of the federal safety net
for banks. In contrast, pass-through deposit insurance coverage would not require issuers and
distributors of stablecoins to comply with the FDI Act’s provisions that protect customers,
communities, businesses, and the stability of the banking system. In addition, pass-through
coverage would allow companies that own or control issuers and distributors of stablecoins to
avoid complying with the BHC Act’s safeguards, including the Fed’s regime of consolidated
regulation and supervision, the privacy rules governing financial holding companies, and the
separation of banking and commerce.
In sum, pass-through deposit insurance coverage would enable Facebook and other Big
Tech firms to offer deposit and payment services and receive extensive benefits from the federal
safety net for FDIC-insured banks without complying with the public interest mandates
governing those banks and their parent companies. Pass-through coverage would effectively
create a “back door” that allows Big Tech firms to compete directly with traditional banks,
undermine the BHC Act’s separation of banking and commerce, and circumvent other important
public interest safeguards.
Pass-through deposit insurance coverage for stablecoins would produce many of the
harmful effects of “rent-a-bank” arrangements, which the OCC approved when it adopted its so-called “true lender” rule in October 2020. The OCC’s rule declared that a national bank would
be treated as the “true lender” for a loan as long as the bank funded the loan at closing or was
named as the lender in the loan agreement, even if the bank transferred its entire interest and
entire risk in the loan to a nonbank “partner” the following day. Under the OCC’s rule,
nonbanks could have reaped the benefits that their national bank partners enjoyed under federal
statutes preempting the application of state usury laws and other state consumer protection laws
-----
to national banks.[94] In June 2021, Congress passed a joint resolution that repealed the OCC’s
rule under the Congressional Review Act.[95] Members of Congress who supported the joint
resolution condemned the OCC’s “true lender” rule for allowing predatory nonbank lenders “to
use superficial and deceptive partnerships with [national] banks to skirt state laws and charge
outrageous annual percentage rates” on loans they acquired from their national bank partners.[96]
The FDIC should reject pass-through deposit insurance coverage for stablecoins for the
same reasons that Congress repealed the OCC’s “true lender” rule. Issuers and distributors of
stablecoins should not be allowed to obtain the benefits provided by FDIC insurance and other
components of the federal safety net for banks unless those entities become FDIC-insured banks.
Issuers and distributors of stablecoins and their parent companies should not be allowed to use
“rent-a-bank” arrangements to engage in destructive regulatory arbitrage. Instead, they should
be required to comply fully with the essential public interest safeguards contained in the FDI Act
and the BHC Act.[97]
**Conclusion**
The PWG’s report provides a welcome blueprint for top-priority actions by regulatory
agencies and Congress. The SEC should use its existing powers to regulate stablecoins as
94 Wilmarth, “Banking Privileges,” supra note 26, at 2, 14-17; see also Carrillo Testimony, supra note 77, at 6-7;
Adam J. Levitin, “Rent-a-Bank: Bank Partnerships and the Evasion of Usury Laws,” 71 Duke Law Journal 329
(2021).
95 Davis Polk, “Client Update: The OCC’s true lender rule has been repealed” (July 1, 2021),
[https://www.davispolk.com/insights/client-update/occs-true-lender-rule-has-been-repealed.](https://www.davispolk.com/insights/client-update/occs-true-lender-rule-has-been-repealed)
96 Senate Comm. on Banking, Housing, and Urban Affairs, “Majority Press Release: President Signs Van Hollen,
Brown Legislation to Strike Down Trump-era ‘Rent-a-Bank’ Rule” (June 30, 2021) (quote),
[https://www.banking.senate.gov/newsroom/majority/president-signs-van-hollen-brown-legislation-to-strike-down-](https://www.banking.senate.gov/newsroom/majority/president-signs-van-hollen-brown-legislation-to-strike-down-trump-era-rent-a-bank-rule)
[trump-era-rent-a-bank-rule; see also House Financial Services Comm., “Press Release: Waters Floor Statement on](https://www.banking.senate.gov/newsroom/majority/president-signs-van-hollen-brown-legislation-to-strike-down-trump-era-rent-a-bank-rule)
House Passage of Resolution to Eliminate Trump’s Predatory ‘True Lender’ Rule” (June 25, 2021),
[https://financialservices.house.gov/news/documentsingle.aspx?DocumentID=408055.](https://financialservices.house.gov/news/documentsingle.aspx?DocumentID=408055)
97 _See Carrillo Testimony, supra note 77, at 7-9 (supporting the proposed “STABLE Act,” introduced in December_
2020 by three House members); see also Press Release, “Tlaib, García, and Lynch Introduce Legislation Protecting
Consumers Against Cryptocurrency-Related Financial Threats” (Dec. 2, 2020) (describing the proposed STABLE
[Act), https://tlaib.house.gov/media/press-releases/tlaib-garcia-and-lynch-stableact.](https://tlaib.house.gov/media/press-releases/tlaib-garcia-and-lynch-stableact)
-----
“securities” and protect investors and securities markets. DOJ should designate stablecoins as
“deposits” and bring enforcement actions to prevent issuers and distributors of stablecoins from
violating Section 21(a) of the Glass-Steagall Act. To overcome uncertainties and gaps in the
remedies available to the SEC and DOJ, Congress should pass legislation requiring all issuers
and distributors of stablecoins to be FDIC-insured banks. The foregoing measures are urgently
needed to counteract the grave dangers that stablecoins pose to our society, financial system, and
economy.
-----
|
{
"disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.2139/ssrn.4000795?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.2139/ssrn.4000795, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "CLOSED",
"url": ""
}
| 2021
|
[] | false
| null |
[] | 28,283
|
en
|
[
{
"category": "Business",
"source": "external"
},
{
"category": "Law",
"source": "s2-fos-model"
},
{
"category": "Political Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/015ece3905050fcb19ea459d7aebfa88a6d015a3
|
[
"Business"
] | 0.934849
|
The case for a technically safe environment to protect the identities of anonymous whistle-blowers : a conceptual paper
|
015ece3905050fcb19ea459d7aebfa88a6d015a3
|
[
{
"authorId": "2799211",
"name": "J. Loggerenberg"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": null,
"id": null,
"issn": null,
"name": null,
"type": null,
"url": null
}
|
Whistle-blowing is one of the most important aspects in the fight against corruption. In most cases, it is impossible to commit a corrupt deed without at least one other person being involved in, or, knowing about it. Many businesses and public entities have a ‘crime lines’ in place to facilitate whistle-blowing but these facilities are mostly limited to a particular telephone number. Using a telephone to report corruption has inherent problems when anonymity is required. Organisations can easily trace calls made from their facilities. Even calls made from cellular phones can be traced. Eavesdropping, although illegal, is not technically difficult to achieve. The fact is: Telephonic whistle-blowing provides little protection for the whistle-blower who wants to remain anonymous. This paper focuses on the problem of current ways for whistle-blowing and suggests an improvement conceptually. It aims to open up debate and discussion on this topic with the intention to attract further contributions and stimulate research on this topic. Although the paper focuses strongly on the situation in South Africa, it is probably equally applicable anywhere else in the world.
Key words: Whistle-blowing, Information Technology, Anonymity, Onion Routing.
|
African Journal of Business Management Vol.6 (44), pp. 10799-10806, 7 November 2012
Available online at http://www.academicjournals.org/AJBM
DOI: 10.5897/AJBM12.992
ISSN 1993-8233 ©2012 Academic Journals
## Review
# The case for a technically safe environment to protect the identities of anonymous whistle-blowers: A conceptual paper
### Johan van Loggerenberg
[Department of Informatics, University of Pretoria, Pretoria, 0001 South Africa. E-mail: johan.vl@up.ac.za.](mailto:johan.vl@up.ac.za)
Accepted 25 September, 2012
**Whistle-blowing is one of the most important aspects in the fight against corruption. In most cases, it is impossible to commit a corrupt deed without at least one other person being involved in, or knowing about, it. Many businesses and public entities have ‘crime lines’ in place to facilitate whistle-blowing, but these facilities are mostly limited to a particular telephone number. Using a telephone to report corruption has inherent problems when anonymity is required. Organisations can easily trace calls made from their facilities. Even calls made from cellular phones can be traced. Eavesdropping, although illegal, is not technically difficult to achieve. The fact is: Telephonic whistle-blowing provides little protection for the whistle-blower who wants to remain anonymous. This paper focuses on the problem of current ways of whistle-blowing and suggests an improvement conceptually. It aims to open up debate and discussion on this topic with the intention to attract further contributions and stimulate research. Although the paper focuses strongly on the situation in South Africa, it is probably equally applicable anywhere else in the world.**

**Key words: Whistle-blowing, Information Technology, Anonymity, Onion Routing.**
**INTRODUCTION**
Reporting suspicious incidents is one of the prime
components in the detection of criminal activities. Without
such reporting, law enforcement becomes much less
effective with the result that many crimes take place
without being detected.
The reporting of suspicious incidents is typically facilitated through “crime lines”: telephone numbers that whistle-blowers can dial.
In most cases, protection of the identity of the whistle-blower is of paramount importance. Reporting an incident that exposes another individual can be seriously dangerous to the person blowing the whistle, and cases are known where whistle-blowers have lost their lives.
Many cases have been documented where whistle-blowers chose to reveal their identity. When the identity of the whistle-blower is known, investigation by law enforcement agencies is strongly facilitated and is, as such, always preferred. An argument can be made that a whistle-blower is protected by the law and should, therefore, have no fear. However, whilst the law theoretically protects whistle-blowers from being victimised, in practice that protection is far from assured.
Anonymity must therefore be viewed as a prerequisite for whistle-blowers who do not wish to be identified.
Crime lines (that is, telephone lines) are currently being
used as the preferred mechanism to report suspicious
incidents. Unfortunately, telephone reporting provides
very little in terms of anonymity. Most companies keep
logs of telephone calls being made from internally and if
someone is suspected to have reported an incident, such
logs can, with relative ease, be accessed by perpetrators
to determine where the call was made from.
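To make the traceability problem concrete, consider the toy sketch below (all data, names, and numbers are hypothetical): a standard PBX call log ties every outgoing call to an internal extension, so anyone with access to the log can identify the caller.

```python
# A toy illustration (hypothetical data) of why telephone crime lines fail
# to protect anonymity: an internal PBX log maps every outgoing call to an
# extension, and the extension maps to an employee.
pbx_log = [
    {"extension": "2041", "dialled": "0800-CRIME-LINE", "time": "2012-03-14T09:12"},
    {"extension": "2187", "dialled": "011-555-0101",    "time": "2012-03-14T09:15"},
]
extension_owner = {"2041": "employee_a", "2187": "employee_b"}

suspects = [extension_owner[row["extension"]]
            for row in pbx_log if "CRIME-LINE" in row["dialled"]]
print(suspects)  # ['employee_a']: the "anonymous" whistle-blower is exposed
```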
The aim of this conceptual paper is to investigate the
potential of Information and communications technology
(ICT) to enable a safe alternative to facilitate anonymous
whistle-blowing.
As is the case with concept papers, the research
methodology is primarily based on the limited literature
available on the topic. The author made use of newspaper reports to provide situational context.
**WHISTLE-BLOWING**
The term whistle-blowing originated from the days when the whistle was blown by a police officer witnessing a criminal deed or by a referee witnessing someone contravening the rules of a sports game. As metaphors for whistle-blowing, however, the policeman and the referee differ from the modern whistle-blower in one important respect: policemen and referees have the authority to enforce their actions, whereas present-day whistle-blowers enjoy no such authority (Ellison, 1982). The whistle-blower is dependent on someone else with authority.
The idea behind the term ‘whistle-blower’ is, however, very positive. It occurs when someone violates an
accepted rule or law and should be stopped from doing
so. The whistle-blower does so in the public interest. The
whistle-blower is, in fact, trying “…to enlist the support of
_others to achieve social objectives” (Ellison, 1982)._
Whistle-blowing is, in general, defined as “raising a
concern about malpractice within an organisation”. This
definition is attributed to the UK Committee on Standards
in Public Life (Camerer, 2001; Martin, 2010). Another
definition is the “pursuit of a concern about wrongdoing
that does damage to a wider public interest” (Public
Concern at Work, 2005). This second definition puts
whistle-blowing specifically in the context of the public
interest as opposed to the first one which is more specifically about the ‘organisation’.
Near and Miceli (1985) define whistle-blowing as “the
_disclosure by organization members (former or current) of_
_illegal, immoral, or illegitimate practices under the control_
_of their employers, to persons or organizations that may_
_be able to effect action”._
These definitions identify that something illegal or unacceptable is taking place and that the person raising the alarm (the ‘whistle-blower’) is drawing attention to this fact. The ultimate intention is to avoid a repetition. The person blowing the whistle is, therefore, simply the ‘messenger’ who acts in the best interest of the organisation, or in the best interest of society (Camerer, 2001).
It is understandable that those actively participating in the deed will not appreciate their actions being exposed. What comes as a surprise is that, often, even innocent ‘onlookers’ are also critical of the whistle-blower’s action. The result is that whistle-blowers have been getting a poor reputation in some quarters (Camerer, 2001).
In an organisational sense, a whistle-blower can be
either one of the employees (that is, internally to the
organisation), or externally to the organisation. Famous
internal whistle-blowers were Sherron Watkins (in the
Enron case) and Cynthia Cooper (Worldcom) (Colvin,
2002).
Members of the auditing profession, for instance, can be seen as mandated whistle-blowers. They are mandated by the
shareholders to look for anything outside of the rules,
regulations or laws and formally report it to the
management and the shareholders.
There can be no doubt that whistle-blowing plays a
very important role in the fight against corruption (Martin,
2010). Not only does it play a deterring role, but it also
assists in bringing criminals to book by making law
enforcement agencies aware of criminal activities for
further investigation.
Estimates of the scale of corruption in South Africa vary
considerably, but it is safe to say that it runs into the
billions of Rands every year. The South African Minister
of Finance (speaking of Income Tax revenue), reported
that R13 Billion less would be collected than what was
budgeted. This „loss‟ is probably far less than what is lost
through corruption, let alone what is lost on wasteful
expenditure and, what the Minister calls „extravagance in
_public administration‟ (Treasury, 2011). South Africa can_
certainly not afford the losses suffered through
corruption. Whistle-blowing, therefore, plays an important
role to combat such losses.
**LEGAL PROTECTION FOR WHISTLE-BLOWERS**
Identified whistle-blowers face many risks (Martin, 2010).
These risks can take the form of being victimised in the
workplace, and/or the very real possibility of retaliation by
those involved in the criminal deeds.
To protect whistle-blowers in the workplace, legal
protection is available in many countries, including South
Africa. Martin (2011) makes the point that “[p]rotection for
_whistle-blowers is essential to create a culture of_
_disclosure of wrong-doing”._
The primary South African Act addressing the phenomenon of corruption is the Prevention and Combating of Corrupt Activities Act, 12 of 2004 (Republic of South Africa, 2004).
The Protected Disclosures Act, 26 of 2000 (Republic of
South Africa, 2000) was designed specifically to provide
such protection. Martin (2010, 2011) and Camerer (2001)
deal extensively with the legal intentions of the Act in
their papers.
Both the Companies Act, Act 71 of 2008 (Republic of
South Africa, 2008) and the Companies Amendment Act,
3 of 2011 (Republic of South Africa, 2011) also provide
legal protection for whistle-blowers. These Acts do not
specifically refer to anonymous whistle-blowing, and
seem to assume that the identity of the whistle-blower is
known.
What also needs to be noted is that the legal protection provided by these Acts is limited to whistle-blowers with known identities. This is to be expected, as protection can only be provided to someone who is known. This point is discussed further in the summary of this section on legal protection.
Despite the legal protection provided in the Acts, cases
are still reported where whistle-blowers did not enjoy the
protection promised in the Acts.
The (London) Guardian newspaper (Syal, 2010) reports that _"employment tribunal statistics show that the total number of people using whistleblowing legislation, which aims to protect workers from victimisation if they have exposed wrongdoing, increased from 157 cases in 1999 to 1,791 10 years later"_. The article quotes many cases of whistle-blowers being dismissed or being 'gagged'.
Statistics for the South African situation are not available, but there is plenty of evidence of whistle-blowers being dismissed or victimised (Martin, 2010). In an article appearing in the (South African) Mail and Guardian (Calland, 2011), a case is described where a municipal manager was dismissed and her house burnt down when she initiated an investigation into fraud. The article indicates that, although her dismissal had been ruled unfair in the court, her employers still refused to give her back her job.

In the same article, it is reported that _"14 government officials or politicians have been murdered [in Mpumalanga province] since 1998"_ and that there was a _"twelve-fold increase in wasteful expenditure since 2007, but a sharp decrease in the number of whistle-blowers coming forward to report malfeasance"_ (Calland, 2011).
In yet another article, the (SA) Mail and Guardian Online (2007) reported a case where the medical superintendent of a hospital in the Eastern Cape was dismissed for _"speaking out against [the hospital's] handling of the Frere Hospital maternity saga"_ (Mail and Guardian Online, 2007). The superintendent alleged that _"200 babies were dying every month at East London's two largest hospitals"_ (Mail and Guardian Online, 2007).
Martin (2010, 2011) raises a number of concerns about
the adequacy of the protection provided by the legal
framework in South Africa. Adv. Madonsela, The Public
Protector in South Africa, echoed the same concern
(Martin, 2011; Public Protector, 2010).
Miceli et al. (1999) point out that, whilst lawmakers generally want to believe that protecting whistle-blowers from retaliation will encourage the practice of whistle-blowing, the contrary is true. They quote several research papers with supporting evidence that _"legal protections neither reduce the incidence of retaliation nor increase the incidence of whistle-blowing"_ (Miceli et al., 1999). In their research (covered in the 1999 paper) they again tested, _inter alia_, two hypotheses. The first was that an effective law (that is, one protecting whistle-blowers) was likely to cause whistle-blowers to identify themselves rather than report anonymously. The second was that an effective law was likely to make perceived retaliation less likely to follow identified whistle-blowing.

In both hypotheses they found the opposite of what they expected. In the first hypothesis, the proportion of identified whistle-blowers fell from 74% to 60%. In the second hypothesis they found that the percentage of identified whistle-blowers who suffered retaliation increased from 15% to 33%.
In summary: despite the legal protection found in the Acts, whistle-blowers find that such protection is, at best, only partly effective. This, in itself, should discourage whistle-blowers from disclosing their identities when blowing the whistle, thereby leading to anonymous whistle-blowing. Blowing the whistle anonymously, however, disqualifies them from the legal protection they would have enjoyed as identified whistle-blowers. As anonymous whistle-blowers, _they would not need protection, on condition that they remain anonymous_.
**ANONYMOUS VS IDENTIFIED WHISTLE-BLOWING**
Anonymity means that the person's _"identity is not publicly known"_ (Ellison, 1982). The question is: is it acceptable for someone to remain anonymous when blowing the whistle, or is the person obliged to reveal his identity?
As Ellison (1982) points out, one has to distinguish between anonymity and two other, closely related terms, namely secrecy and privacy. He argues that secrecy requires a _"conspiracy of silence"_, thereby implying that more than one person knows the secret (Ellison, 1982). Something only known to one person is, according to Ellison, the _"extreme form of secrecy"_. This type of secrecy seems to fall outside of the scope of whistle-blowing, as it is highly unlikely that one will blow the whistle on oneself.

In the context of whistle-blowing, it is the _denial of access_ to information to others that makes it a secret (Ellison, 1982).
Privacy, according to Ellison (1982), occurs when one can justify why others are not allowed to share information that one has. He quotes the example of one's sex life. One has the right to exclude others from such _private_ information, unless someone else can invoke a higher right than one's own to force one to disclose such information.

Ellison (1982) makes an important point that, regarding _privacy_, the burden of proof rests with the other party who wants to have access to such information. Regarding _secrecy_, the burden of proof is reversed, in that the person with the information has to justify why it should remain secret.
This raises the question of whether anonymity in the context of whistle-blowing is more on the side of secrecy or more on the side of privacy. Ellison (1982) argues that the kind of information which is being withheld is not about the deed itself, but about the whistle-blower's identity. The question becomes whether the public has a right to know the whistle-blower's identity or whether the whistle-blower has the right to withhold it (Ellison, 1982).
Ellison (1982) points out that many members of the public would consider an anonymous whistle-blower to be _"saying nasty things behind people's backs"_ and, as a result, would argue that anonymous whistle-blowing should be discouraged. However, when one considers the risks associated with identified whistle-blowing, the case for anonymity gets much stronger (and the case for speaking behind people's backs, weaker).
Ellison (1982) argues in favour of identified whistle-blowing but concedes that many factors influence the argument. On the one hand, anonymous whistle-blowing _"impedes the pursuit of truth"_ because it makes law enforcement considerably more difficult. On the other hand, a factor in favour of anonymity is that fear of retaliation may result in the whistle-blower keeping quiet if he is not allowed to report anonymously. Clearly, blowing the whistle anonymously is infinitely more valuable than not blowing it at all (Ellison, 1982).

Ellison concludes that _"blanket condemnation on anonymity is not warranted"_. He proposes that the justification must take three factors into consideration: the seriousness of the offense, the probability of retaliation and the social relationships.
One can conclude that identified whistle-blowing is
definitely the preferred way of blowing the whistle, but,
given the risks that have been outlined, there is a strong
case to be made for anonymous whistle-blowing.
**CURRENT WAYS OF BLOWING THE WHISTLE**
There are many ways by which a person can report a suspicious incident. In an organisational context, the first option is to share the concern with a colleague or, more likely, with a supervisor. This unequivocally means that the person who reports the deed is known (referred to as an identified whistle-blower). An identified whistle-blower is, at a minimum, known to the person he reports to, and may also be publicly known.
If the supervisor does not respond in a way acceptable to the whistle-blower, the person can report it to higher levels of management or report it externally to, say, a newspaper. It could also be reported by means of a 'crime line' to an agency put in place by the organisation. Crime lines are commonly services procured from outside the organisation, such as ones offered by auditing firms.
Reporting a suspicious incident to an external source allows the whistle-blower either to be an identified whistle-blower, or to report it without mentioning his[1] identity. By using technology (as opposed to face-to-face communication), the whistle-blower is given the choice to remain anonymous.

Reporting a suspicious incident without revealing one's identity _assumes_ anonymity, but when one analyses the mechanisms facilitating such reporting (the channels), the identity of the caller may be revealed through the channel used. In some cases this may be very simple and in others, whilst more difficult, still very possible and feasible. This can come as an unpleasant surprise to the whistle-blower: he may be identified despite his intention to remain anonymous.
**Internal telephones**
Consider the case where a whistle-blower makes a call from inside his organisation using the telephone extension assigned to him. Most organisations, as normal good practice, keep logs of all calls made from extensions. These logs commonly record only the event and not the content itself (although there are some cases where the entire conversation is recorded). Simply analysing the logs of calls made to the 'crime line' would reveal the extension from which the call was made, and from that the identity of the caller. This is, obviously, the worst way of trying to be an anonymous whistle-blower. We are of the opinion that many whistle-blowers use this option without realising that such logging is a standard feature of the technology and exposes them to the risk of being identified.
Making the call from someone else's telephone extension will cause the wrong person to be suspected of making the call. This will make it more difficult for a pursuer to identify the true whistle-blower, but the organisation, and the extension from which the call was made, are known. Because pursuers typically have good suspicions as to who may possess the relevant information to report them, a good guess can lead them to the whistle-blower. In the worst case, an innocent person may be targeted.
**Public telephones**
What if the whistle-blower goes to a public telephone and
makes the call to the crime line? The only way for the
pursuer to detect such reporting would be to constantly
monitor the crime line number, in other words, to
eavesdrop. This is technically quite feasible, albeit illegal.
It may also require voice recognition to identify the caller.
This may be easy in some cases and more difficult in others. Public telephones, therefore, still hold risks to the whistle-blower.

1 When referring to the term 'his' in respect of a whistle-blower, pursuer or criminal, both the male and the female gender is implied.
**Cellular telephones**
Blowing the whistle by using a cellular phone is equally dangerous. Firstly, cellular service providers keep logs of calls made (excluding the content). Normally one needs a court order to get access to such logs, and it is likely to be challenging for a pursuer to obtain such permission. Of course, the pursuer can always 'persuade' an employee of the cellular network to - illegally - obtain the data on his behalf.

Even if the pursuer is prevented from getting access to the cellular call logs, it is quite possible and feasible to 'eavesdrop' on the crime line number (illegal) and 'listen in' to all the calls being made to the crime line. Callers can be identified through voice recognition techniques or, if the whistle-blower is known to the pursuer, he can quite easily be recognised.
**Postal services**
Another option open to a whistle-blower is to use the postal service. For instance, a whistle-blower can easily obtain the postal address of the Public Protector and send her an anonymous letter detailing the incident. In this case the 'strength' of the anonymity is high, but only for as long as the public is not encouraged to use this way of whistle-blowing. Once the public is encouraged to use this mechanism, pursuers only have to 'persuade' someone at the location where the mail is received to intercept suspicious mail.
**Email**
Some organisations have made facilities available to whistle-blowers to report incidents via email. In some cases, these emails are encrypted. Sometimes they are addressed to a recipient internal to the organisation (for example, internal audit) and sometimes to an external one (for example, an auditing firm).

The origin of such email messages is easy for the organisation concerned to trace, since logs are kept as a standard feature of email platforms. If the message is encrypted, it may be difficult to decipher the content, but the origin would be simple to trace, as the message would normally have the originator's name appearing in it.
The point is this: all the mechanisms described above have weaknesses regarding the anonymity of the whistle-blower. Weaknesses create risks to the whistle-blower, and prospective whistle-blowers will assess such risks when deciding whether or not to blow the whistle. One needs to acknowledge that the seriousness (or scale) of the case also plays an important role. The extent to which the pursuer is prepared to go to prevent or detect whistle-blowers is directly related to the seriousness of the offence and the severity of the consequences the pursuer faces. There will be a world of difference between a traffic official taking a R100 bribe and someone taking a R10 million bribe in, say, an arms deal. Equally, a prospective whistle-blower in an arms deal involving billions of Rands will expect a much higher level of anonymity than one reporting a few thousand Rands in traffic fines.

To make it more real: if large-scale corruption did indeed take place in South Africa's much publicised arms deal, it is a reasonable assumption that 'someone out there' is in possession of, or has access to, documentary evidence to prove such corruption. It is also reasonable to assume that such a 'someone' will think twice before making such evidence available without adequate protection. Such protection has to be in terms of the workplace but, even more importantly, in terms of his personal life and the lives of his family. The perception of the adequacy and sufficiency of the protection, we argue, will be a deciding factor for the prospective whistle-blower when deciding (a) to blow the whistle or to remain silent and (b) to reveal his identity or to remain anonymous.
Anonymity – guaranteed anonymity – is essential,
especially when the stakes are high. The current ways of
providing anonymity to whistle-blowers have built-in
weaknesses and, as a result, seriously jeopardise the
safety of whistle-blowers and their families. If a way can
be found to guarantee anonymity, we suspect that more
people will be prepared to volunteer information about
wrongdoings, including, and especially, about corruption.
The point has to be made that the current ways of
facilitating whistle-blowing (for example, crime lines and
all of the others) must remain in place. An additional way
of blowing the whistle is required; a way to _guarantee_
anonymity.
**TECHNICALLY SAFE ENVIRONMENT**
The idea is to create an additional channel to the existing
ones for whistle-blowing, but a channel which guarantees
anonymity. It is, however, doubtful if the ideal of a 100% guarantee will ever be achieved, simply because of the very nature of technology. Technology advances at a rapid rate and newer technology is always available not only to legitimate users, but also to those wanting to exploit it for selfish or illegitimate purposes. This makes a '100% guarantee' a theoretical impossibility.
Despite this, it is still, in our opinion, meaningful to try
and get as close as possible to the goal so that, when the
prospective whistle-blower does his risk assessment, an
outcome in favour of blowing the whistle is still achieved.
It is hoped that such a safe environment will encourage
prospective whistle-blowers to deliver their messages
and evidence to law enforcement authorities.
**Email**
Telephone-based technologies for providing the safe environment do not hold much promise, whether land-line based, public or wireless. One therefore has to look for a different kind of technology, and email seems to be the next logical choice.

Apart from technology constantly changing, it is quite a daunting task to design an environment where the originator of an email message cannot be traced. Every computer which logs on to a data communications network anywhere in the world gets a unique number assigned to it at the moment of logging on. This 'number' is referred to as the Internet Protocol (IP) address. This IP address is - unknown to most email users - always transmitted along with the message, irrespective of where the message originates or where it terminates. Even when logging onto a website, the website is 'aware' of the IP address of the computer logging on[2]. The IP address is typically (and deliberately) not under the control of the user of the computer, with the result that a user will not be able to hide this unique identifier. An ordinary whistle-blower, for instance, would certainly not have the technical skills to send a message while hiding the IP address assigned.
In trying to design a technically safe environment, one cannot, therefore, simply advertise an email address as a means for whistle-blowers to inform organisations or law enforcement authorities of suspicious incidents. Just as hackers are traced despite their attempts to hide their identities, a non-technical user sending an email would be relatively easy to trace for someone with the necessary technical skills.
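To make this concrete, the following sketch (Python standard library only) shows how much routing metadata even an ordinary email carries; the message, addresses and IP addresses below are hypothetical examples.

```python
# A minimal sketch of the routing metadata an ordinary email carries.
# Every mail server a message passes through prepends a 'Received'
# header; the last entry in the list is the hop closest to the sender
# and typically records the IP address of the originating computer.
from email import message_from_string

raw_message = """\
Received: from mail.example.org (mail.example.org [203.0.113.7])
\tby mx.recipient.example; Mon, 1 Oct 2012 09:00:00 +0200
Received: from user-pc.internal (unknown [198.51.100.23])
\tby mail.example.org; Mon, 1 Oct 2012 08:59:58 +0200
From: anonymous@example.org
To: crimeline@example.org
Subject: Report

I wish to report a suspicious incident.
"""

msg = message_from_string(raw_message)
for hop in msg.get_all("Received"):
    print(hop)  # each hop reveals part of the route back to the sender
```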
**Encryption**
A commonly used technique to protect the content of an email message (or even data attached as a file) is to encrypt the message and/or the data. This would make it impossible for a pursuer to read the content of the message even if he gets access to it, but the IP address could be identified and that, in most cases, would reveal the whistle-blower's location and, perhaps, his name.
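A minimal sketch of such content encryption, assuming the third-party Python package cryptography (pip install cryptography), illustrates the limitation: the content is protected, but nothing in this step hides the envelope (IP address, timestamps) added by the network.

```python
# A minimal sketch of content encryption with a symmetric cipher.
# Without the key, the ciphertext is unreadable to a pursuer; the
# network-level metadata of the message is not protected at all.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # shared in advance with the recipient
cipher = Fernet(key)

report = b"Invoices 114-119 appear to have been paid twice."
ciphertext = cipher.encrypt(report)

print(ciphertext)                  # gibberish to anyone without the key
print(cipher.decrypt(ciphertext))  # the recipient recovers the report
```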
2 It is for this reason that one is sometimes surprised to see that a website has identified one as originating from, for instance, South Africa when logging on. A typical example is that when trying to log onto www.google.com (the Google service based in the US), one is automatically rerouted to the local website in South Africa (www.google.co.za).
**Internet café**
One way to overcome the problem posed by the IP address is for the whistle-blower to send the email from an internet café and, of course, not to reveal anything else about his identity in the message. This would make it more difficult for the pursuer to identify the originator despite tracing the message back to the originating internet café. Internet cafés typically do not keep records of the identities of their clients, with the result that the whistle-blower is reasonably safe from that perspective. However, many internet cafés record the activities of their clients on video camera, so the whistle-blower could still be identified if the pursuer can trace the message or email back to a particular internet café and then gain access to the video recordings to look for suspects. This, however, will have to be done in a relatively short period of time, as the video recordings are typically overwritten after a few days or weeks.
There is another danger that the whistle-blower must avoid when making use of an internet café, namely the data contained in any attachments to the message. When one creates a document in Microsoft Word, for example, it automatically creates a profile of the user and stores it with the rest of the document. Many users are not even aware that this is the case. Depending on how the computer was set up, the creator of the document may well be easily identified by name and surname without him being aware of it. (This is easily seen by simply clicking on 'File', then 'Properties', then 'Summary' of any document created using MS Word.)

It is easy enough to delete any such identifiers before attaching the document, but whistle-blowers must, firstly, be made aware of the danger and, secondly, remember to do so - consistently - before sending the attachment. Both aspects pose risks to the kind of technically sound environment one ideally would like to see.
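As an illustration, such identifying properties can also be inspected and cleared programmatically. The sketch below assumes the third-party python-docx package (pip install python-docx), applies to the modern .docx format, and uses a hypothetical file name.

```python
# A minimal sketch of inspecting and clearing the identifying profile
# that Word stores inside a .docx file. 'report.docx' is hypothetical.
from docx import Document

doc = Document("report.docx")
props = doc.core_properties

print(props.author, props.last_modified_by)  # often a full name

# Blank out the identifying fields before the document is attached.
props.author = ""
props.last_modified_by = ""
props.title = ""
props.comments = ""

doc.save("report_clean.docx")
```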
**TOWARDS A SAFE ENVIRONMENT**
To get closer to the vision of a '100% guarantee', one should not have to rely on the whistle-blower to, firstly, remove all of the identifying information from attachments, then encrypt the message and data using a robust encryption technique, and then use an internet café to send the email - and, even then, run the risk of being traced back to a particular internet café.

There are simply too many unacceptable risks in this scenario, and something more robust and more reliable must be designed.
**Onion routing**
The IP address poses a challenging problem. However, there is a way of getting rid of it in a relatively simple, but safe, way. This opportunity is provided by making use of a so-called 'Onion Routing' facility (Feigenbaum et al., 2007). This facility was originally developed by the United States Naval Research Laboratory for the purpose of _'protecting government communications'_ (TOR Project, 2011).

This facility makes use of several websites situated around the globe at undisclosed locations. The user 'drops' a message or data into a 'drop box', typically provided by the advertised website for whistle-blowers. The message is then automatically routed along a random route from one location ('node') to several others (called a 'tunnel'), at the same time automatically stripping away originating IP addresses. After passing through a random number of nodes, the message is eventually delivered to the receiving party, but the receiving party only sees the IP address of the last node sending the message. This IP address is simply one of the many nodes in the Onion Network and of no use to anyone, hence guaranteeing anonymity to the originating node and the originator.
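The layering idea can be illustrated with a toy Python sketch, reusing the Fernet cipher from the encryption example above. This is an illustration of the principle only, not the actual TOR protocol, which uses its own key negotiation and routing.

```python
# A toy illustration of onion routing: the sender wraps the message in
# one layer of encryption per relay; each relay peels off exactly one
# layer and learns only where to forward the result, never the whole
# route or the original sender.
from cryptography.fernet import Fernet

relay_keys = [Fernet.generate_key() for _ in range(3)]  # one key per relay

# The sender encrypts for the last relay first and the first relay last.
onion = b"report for the whistle-blowing organisation"
for key in reversed(relay_keys):
    onion = Fernet(key).encrypt(onion)

# Each relay in turn removes its own layer.
for key in relay_keys:
    onion = Fernet(key).decrypt(onion)

print(onion)  # the original message emerges only at the final hop
```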
Ironically, the Onion Network was developed by the US Navy to safeguard government communications, but this very same network was used by the Wikileaks[3] organisation to publish government cables and other documentation, which caused such embarrassment to the US Government (and others). Not even the US Military or Navy was able to trace the originator of the documents published on the Wikileaks website. The fact that Bradley Manning was eventually identified as the whistle-blower happened as a result of Manning revealing his identity to someone else whom he thought he could trust and who then disclosed it to the US government (Leigh and Harding, 2011).
The claim is made that the Onion Routing facility provides 'provable anonymity' (Feigenbaum et al., 2007), and this facility is available, free of charge, to anyone caring to use it. Of course, this claim only applies to the technical environment used to facilitate anonymity. From what can be gathered in the literature, using the Onion Network is relatively simple, as the user is isolated from the technical complexities associated with the network.
**Potential solution**
Our solution involves a combination of the aforementioned techniques, applied in the following process (a rough sketch of the submission step follows the list):
3 We must point out that we do not necessarily endorse or support any of the
actions of the Wikileaks organisation. The Wikileaks organisation has its own
objectives and we have our own. Yet, we do not want to pass judgement on
what the Wikileaks organisation set out to do. The fact that we are proposing to
use some of the same network technology (which technology does not belong
to the Wikileaks organisation) must be seen as purely coincidental.
1. The whistle-blower takes the evidence to an internet café.
2. The whistle-blower drops the message and/or data into an electronic drop-box provided by the whistle-blowing organisation.
3. Any data in the message or in the documents that could identify the sender is automatically deleted by the website when dropped in the box.
4. The message and data are encrypted automatically by the website.
5. The message and data are sent to the whistle-blowing organisation through the TOR network.
6. The whistle-blowing website deliberately does not keep any logs of email received, so that it would not be able to provide any information whatsoever about the originator, even when forced to do so with a court order.
7. The whistle-blowing organisation waits at least 14 days before making the material available to law enforcement authorities, so that the video recordings at the originating internet café are likely to have been overwritten.
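Below is a rough Python sketch of step 2 from the whistle-blower's side, assuming the third-party requests package with SOCKS support (pip install requests[socks]) and a local TOR client listening on port 9050. The drop-box URL and file name are hypothetical, and routing the submission itself through the TOR network is a variation that hides the originating IP address even from the drop-box website; in the process above, the metadata stripping and encryption (steps 3 and 4) happen on the receiving website rather than on the sender's machine.

```python
# A rough sketch of submitting evidence to a drop-box through the TOR
# network. Requires a local TOR client (SOCKS proxy on port 9050) and
# 'requests' with SOCKS support. URL and file name are hypothetical.
import requests

TOR_PROXY = {
    "http": "socks5h://127.0.0.1:9050",   # 'socks5h' also resolves DNS via TOR
    "https": "socks5h://127.0.0.1:9050",
}

with open("report_clean.docx", "rb") as evidence:
    response = requests.post(
        "https://dropbox.whistleblowing-org.example/submit",
        files={"evidence": evidence},
        proxies=TOR_PROXY,
        timeout=120,
    )

print(response.status_code)  # 200 indicates the drop-box accepted the file
```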
**CONCLUSION**
The scale of corruption in South Africa and, for that matter, everywhere else in the world, is unacceptably large. Many African and other countries have poverty problems of immense magnitude and cannot afford to waste billions in currency to enrich a few corrupt individuals at the expense of the majority of the citizens. This money could go a long way to improve living conditions, healthcare and education for the poor.

In this respect whistle-blowing plays a very important role. Detection of corrupt deeds is in the hands of people of integrity, who must observe such deeds and report them to the relevant authorities. Such reporting carries huge risks, including loss of life, damage to property and/or dismissal or victimisation in the workplace.

Whistle-blowers deserve to be protected. Such protection must be rooted in the legal framework, but it needs to be complemented by mechanisms that allow whistle-blowers to remain anonymous if they choose. Such anonymity must get as close to a 100% guarantee as one could possibly get.
**RECOMMENDATIONS FOR FURTHER RESEARCH**
This paper aimed to stimulate academic research on the topic, which can broadly be defined as the use of ICT in the fight against corruption. This paper only looked at one aspect of this broad topic, namely, anonymous whistle-blowing.
It is recommended that further academic research be initiated. For instance, a theoretical framework describing the use of ICT in the fight against corruption could be useful. An Actor-Network Theory (ANT) approach to the topic is currently being investigated by the author to provide insight into the actors, and the roles played by the actors, in the corruption phenomenon.
**ACKNOWLEDGEMENT**
The initial research was partially supported by the
German Development Cooperation (GIZ) awarded to
Citizens against Corruption, a non-profit company.
**REFERENCES**
Calland R (2011). Blow the whistle at your peril. Mail and Guardian
Online, Oct 17.
Camerer L (2001). Protecting whistle-blowers in South Africa: The
Protected Disclosures Act, no 26 of 2000. Occasional Paper no 47,
Institute for Security Studies.
Colvin G (2002). Wonder Women of Whistleblowing. Is it significant that
the prominent heroes to emerge from the two great business
scandals of recent years were women? Fortune Magazine, August.
Ellison FA (1982). Anonymity and Whistleblowing. J. Bus. Ethics 1.
Feigenbaum J, Johnson A, Syverson P (2007). A Model of Onion
Routing with Provable Anonymity. Financial Cryptography and Data
Security, 11th International Conference, FC.
Leigh D, Harding L (2011). WikiLeaks: Inside Julian Assange's War on Secrecy. Guardian Books, London.
Mail and Guardian Online (2007). Frere Hospital whistle-blower fired. 28 September. Cited on 23 October 2011 at mg.co.za/article/2007-09-28-frere-hospital-whistleblower-fired.
Martin P (2010). The Status of Whistle-Blowing in South Africa: Taking
Stock. Open Democracy Advice Centre, June.
Martin P (2011). Corruption.Towards A Comprehensive Societal
Response. CASAC March.
Miceli MP, Rehg M, Near JP, Ryan KC (1999). Can Laws Protect Whistle-Blowers? Results of a Naturally Occurring Field Experiment. Work Occup. 26:129.
Near JP, Miceli MP (1985). Organizational Dissidence: The Case of Whistle-Blowing. J. Bus. Ethics 4.
Public Protector (2010). Address by Public Protector Adv Thuli Madonsela during the Open Democracy Advice Centre (ODAC) Conference on whistle-blowing held in Johannesburg, Wednesday, 17 November 2010. Cited at http://www.pprotect.org/media_gallery/2010/17112010_sp.asp.
Republic of South Africa (2000). Protected Disclosures Act 2000, Act 26
of 2000. 7 August. Government Gazette p.422.
Republic of South Africa (2004). Prevention and combating of corrupt
activities 2003, Act 12 of 2004. 28 April. Government Gazette p. 466.
Republic of South Africa (2008). Companies Act, Act 71 of 2008. 9 April
2009. Government Gazette p.526.
Republic of South Africa (2011). Companies Amendment Act, Act 3 of
2011., 26 April 2011. Government Gazette p.550.
Syal R (2010). Tenfold rise in whistleblower cases taken to tribunal. Campaigners fear workers deliberately undermined despite repeated promises to protect them. The Guardian, Monday, 22 March 2010.
TOR Project (2011). Cited at https://www.torproject.org/about/overview.html.en.
Treasury (2011). Medium Term Budget Policy Statement 2011: Speech by the Minister of Finance, Mr Pravin Gordhan. Cited at http://www.info.gov.za/speech/DynamicAction?pageid=461&sid=22685&tid=47198.
-----
Complex Systems Informatics and Modeling Quarterly (CSIMQ)
eISSN: 2255-9922
[Published online by RTU Press, https://csimq-journals.rtu.lv](https://csimq-journals.rtu.lv/)
Article 150, Issue 26, March/April 2021, Pages 46–54
https://doi.org/10.7250/csimq.2021-26.03
# An Introduction to Decentralized Finance (DeFi)
Johannes Rude Jensen[1,2*], Victor von Wachter[1], and Omri Ross[1,2]
1 Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
2 eToroX Labs, Copenhagen, Denmark
```
johannesrudejensen@gmail.com, victor.vonwachter@di.ku.dk, omri@di.ku.dk
```
**Abstract. Decentralized financial applications (DeFi) are a new breed of**
consumer-facing financial applications composed as smart contracts, deployed
on permissionless blockchain technologies. In this article, we situate the DeFi
concept in the theoretical context of permissionless blockchain technology and
provide a taxonomical overview of agents, incentives and risks. We examine the
key market categories and use-cases for DeFi applications today and identify
four key risk groups for potential stakeholders contemplating the advantages of
decentralized financial applications. We contribute novel insights into a rapidly
emerging field, with far-reaching implications for the financial services.
**Keywords: Blockchain, Decentralized Finance, DeFi, Smart Contracts.**
## 1 Introduction
Decentralized financial applications, colloquially referred to as ‘DeFi’, are a new type of open financial application deployed on publicly accessible, permissionless blockchains. A rapid surge in the popularity of these applications saw the total value of the assets locked in DeFi applications (TVL) grow from $675mn at the outset of 2020 to in excess of $40bn towards the end of the first quarter of the following year[†]. While scholars within the information systems and management disciplines recognize the novelty and prospective impact of blockchain technologies, theoretical or empirical work on DeFi remains scarce [1]. In this short article, we provide a conceptual introduction to ‘DeFi’ situated in the theoretical context of permissionless blockchain technology. We introduce a taxonomy of agents, roles, incentives, and risks in DeFi applications and present four potential sources of complexity and risk.
This article extends the previous publication on managing risk in DeFi[‡] and is structured as
follows. Section 2 introduces the permissionless blockchain technology and decentralized
finance. Section 3 presents DeFi application taxonomy. An overview of popular DeFi application
* Corresponding author
© 2021 Johannes Rude Jensen, Victor von Wachter, and Omri Ross. This is an open access article licensed under the Creative
[Commons Attribution License (http://creativecommons.org/licenses/by/4.0).](http://creativecommons.org/licenses/by/4.0)
Reference: J. R. Jensen, V. von Wachter, and O. Ross, “An Introduction to Decentralized Finance (DeFi),” Complex Systems
Informatics and Modeling Quarterly, CSIMQ, no. 26, pp. 46–54, 2021. Available: https://doi.org/10.7250/csimq.2021-26.03
[Additional information. Author ORCID iD: J. R. Jensen – https://orcid.org/0000-0002-7835-6424, V. von Wachter –](https://orcid.org/0000-0002-7835-6424)
[https://orcid.org/0000-0003-4275-3660, and O. Ross – https://orcid.org/0000-0002-0384-1644. PII S225599222100150X.](https://orcid.org/0000-0003-4275-3660)
Received: 18 February 2021. Accepted: 15 April 2021. Available online: 30 April 2021.
[† https://defipulse.com/](https://defipulse.com/)
[‡ http://ceur-ws.org/Vol-2749/short3.pdf](http://ceur-ws.org/Vol-2749/short3.pdf)
categories is given in Section 4. The risks in decentralized finance are discussed in Section 5.
Section 6 concludes the paper.
## 2 Permissionless Blockchain Technology and Decentralized Finance
The implications and design principles of blockchain and distributed ledger technologies have generated a growing body of literature in the information systems (IS) genre [2]. Primarily informed by the commercial implications of smart contract technology, scholars have examined the implications for activities in the financial services such as the settlement and clearing of ‘tokenized’ assets [3], the execution and compilation of financial contracts [4]–[6], complexities in supply-chain logistics [7] and beyond. A blockchain is a type of distributed database architecture in which a decentralized network of stakeholders maintains a singleton state machine. Transactions in the database represent state transitions disseminated amongst network participants in ‘blocks’ of data. The correct order of the blocks containing the chronological overview of transactions in the database is maintained with the use of cryptographic primitives, by which all stakeholders can manually verify the succession of blocks.
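To make the chaining mechanism concrete, the following is a minimal illustrative sketch in Python (standard library only); the block structure shown is a simplification for exposition, not the format of any particular blockchain.

```python
# A minimal sketch of hash-chained blocks: each block commits to the
# hash of its predecessor, so anyone can recompute the hashes and
# verify the succession of blocks. Tampering anywhere breaks the chain.
import hashlib
import json

def block_hash(block: dict) -> str:
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

genesis = {"height": 0, "prev_hash": None, "txs": ["mint 50 -> alice"]}
block_1 = {"height": 1, "prev_hash": block_hash(genesis),
           "txs": ["alice -> bob: 10"]}
block_2 = {"height": 2, "prev_hash": block_hash(block_1),
           "txs": ["bob -> carol: 4"]}
chain = [genesis, block_1, block_2]

def verify(chain: list) -> bool:
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

print(verify(chain))                     # True
block_1["txs"] = ["alice -> bob: 1000"]  # tamper with history
print(verify(chain))                     # False: verification now fails
```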
A network consensus protocol defines the rules for what constitutes a legitimate transaction in
the distributed database. In most cases, consensus protocols are rigorous game-theoretical
mechanisms in which network participants are economically incentivized to promote network
security through rewards and penalties for benevolent or malicious behavior [8]. Scholars
typically differentiate between ‘permissioned’ and ‘permissionless’ blockchains. Permissionless
blockchains are open environments accessible by all, whereas permissioned blockchains are
inaccessible for external parties not recognized by a system administrator [2]. Recent implementations of the technology introduce a virtual machine, the state of which is maintained by the nodes supporting the network. The virtual machine is a simple stack-based architecture, in which network participants can execute metered computations denominated in the native currency format. Because all ‘nodes’ running the blockchain ‘client’ software must replicate the computations required for a program to run, computational expenditures are priced on the open market. This design choice is intended to mitigate excessive use of resources leading to network congestion or abuse.
Network participants pass instructions to the virtual machine in a higher-level programming language, the most recent generations of which are used to write programs referred to as _smart contracts_. Because operations in the virtual machine are executed in a shared state, smart contracts are both transparent and _stateful_, meaning that any application deployed as a smart contract executes deterministically. This ensures that once a smart contract is deployed, it will execute exactly as instructed.
## 3 DeFi Agent Taxonomy
We denote the concept: ‘DeFi application’ as an arrangement of consumer-facing smart
contracts, executing a predefined business logic within the transparent and deterministic
computational environment afforded by a permissionless blockchain technology. Blockchain
technology is the core infrastructure layer (see Figure 1) storing transactions securely and
providing game-theoretic consensus through the issuance of a native asset. As a basic financial
function, standardized smart contracts are utilized to create base assets in the asset layer. These
assets are utilized as basis for more complex financial instruments in the application layer. In the
application layer, DeFi applications are deployed as sophisticated smart contracts and thus
execute a given business logic deterministically. Contemporary DeFi applications provide a
range of financial services within trading, lending, derivatives, asset management and insurance
services. Aggregators source services from multiple applications, largely to provide the best rates
across the ecosystem. Finally, user friendly frontends combine the applications and build a
service similar to today’s banking apps. In contrast to traditional banking services, in a
blockchain-based technology stack, users interact directly with the application independent of
any intermediary service provider.
**Figure 1. DeFi applications on permissionless blockchain**
The metered pricing of computational resources on permissionless blockchains means that DeFi applications are constrained by the computational resources they can use. Application designers seek to mitigate the need for the most expensive operations, such as storing large amounts of data or conducting sophisticated calculations, in an effort to reduce the level of complexity required to execute the service that their application provides.
Because the resources required for interacting with a smart contract are paid by the user, DeFi
application designers employ an innovative combination of algorithmic financial engineering
and game theory to ensure that all stakeholders of their application are sufficiently compensated
and incentivized. In Table 1, we introduce a taxonomy for the different types of agents and their
roles in contemporary DeFi applications. We highlight the incentives for participation and key
risks associated with each role.
Owing to the original open-source ethos of blockchain technology, application designers are
required to be transparent and build ‘open’ and accessible applications, in which users can take
ownership and participate in decision-making processes, primarily concerning new features or
changes to the applications. As a reaction to these demands, application designers often issue and
distribute so-called governance tokens. Governance tokens are fungible units held by users,
which allocate voting power in majority voting schemes [9]. Much like traditional equities,
governance tokens trade on secondary markets which introduces the opportunity for capital
formation for early stakeholders and designers of successful applications. By distributing
governance tokens, application designers seek to disseminate value to community members
while retaining enough capital to scale development of the application by selling inventory over
multiple years.
**Table 1. Agent classification, incentives, and key risks**

| Agent | Role | Incentives for participation | Key risk |
|---|---|---|---|
| Users | Utilizing the application | Profits, credit, exposure and governance token | Market risk, technical risk |
| Liquidity Providers | Supply capital to the application in order to ensure liquidity for traders or borrowers | Protocol fees, governance token | Systemic economic risk, technical risk, regulatory risk, opportunity costs of capital |
| Arbitrageurs | Return the application to an equilibrium state through strategic purchasing and selling of assets | Arbitrage profits | Market risk, network congestion and transaction fees |
| Application Designers (Team and Founders) | Design, implement and maintain the application | Governance token appreciation | Software bugs |
The generalized agent classification demonstrated in Table 1 is applicable to a wide range of DeFi applications providing peer-to-peer financial services on blockchain technology, including trading, lending, derivatives and asset management. In the following section, we dive into a number of recent use cases, examining the most popular categories of applications.
## 4 An Overview of Popular DeFi Application Categories
The development principles presented above have been implemented in a number of live
applications to date. In this section, we provide a brief overview of the main categories of DeFi
applications.
**4.1 Decentralized Exchanges and Automated Market Makers**
Facilitating the decentralized exchange of assets requires an efficient solution for matching
counterparties with the desire to sell or purchase a given asset for a certain price, a process
known as price-discovery. Early implementations of decentralized exchanges successfully demonstrated the feasibility of decentralized asset exchange on permissionless blockchain technology by imitating the conventional central limit order book (CLOB) design. However, for reasons stipulated below, this proved infeasible and expensive at scale.
First, in the unique cost structure of the blockchain-based virtual machine format [10], traders engaging with an application pay fees corresponding to the complexity of the computation and the amount of storage required for the operation they wish to compute. Because the virtual machine is replicated on all active nodes, storing even small amounts of data is exceedingly expensive. Combined with the complex matching logic required to maintain a liquid orderbook, computing fees rapidly exceeded users’ willingness to trade.
Second, as ‘miners’ pick transactions for inclusion in the next block by the amount of
computational fees attached to the transaction, it is possible to front-run state changes to the
decentralized orderbook by attaching a large computational fee to a transaction including a trade,
which pre-emptively exploits the next state change of the orderbook, thus profiting through
arbitrage on a deterministic future state [11].
Subsequent iterations of decentralized exchanges addressed these issues by storing the state of
the orderbook separately, using the blockchain only to compute the final settlement [12].
Nevertheless, problems with settlement frequency persisted, as these implementations introduced
complex coordination problems between orderbook storage providers, presenting additional risk
vectors to storage security. Motivated by the shortcomings of the established CLOB design, a generation of blockchain-specific ‘automated’ market makers (AMMs) presents a new approach to blockchain-enabled market design.
By pooling available liquidity in trading pairs or groups, AMMs eliminate the need for the
presence of buyers and sellers at the same time, facilitating relatively seamless trade execution
without compromising the deterministic integrity of the computational environment afforded by
the blockchain. Trading liquidity is provided by ‘liquidity providers’ which lock crypto assets in
the pursuit of trading fee returns.
**Figure 2. AMM Price Discovery Function**
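To illustrate the price-discovery mechanism, the following is a minimal Python sketch of the constant-product rule (reserve_x × reserve_y = k) underlying many contemporary AMMs; the pool sizes and the 0.3% fee are illustrative assumptions, not the parameters of any specific protocol.

```python
# A minimal sketch of constant-product price discovery: each swap must
# preserve reserve_x * reserve_y >= k, so larger trades move the price
# further against the trader, and fees accrue to the liquidity pool.
class ConstantProductPool:
    def __init__(self, reserve_x: float, reserve_y: float, fee: float = 0.003):
        self.reserve_x = reserve_x
        self.reserve_y = reserve_y
        self.fee = fee

    def get_amount_out(self, amount_x_in: float) -> float:
        """Amount of asset Y received for amount_x_in of asset X."""
        x_in_after_fee = amount_x_in * (1 - self.fee)
        k = self.reserve_x * self.reserve_y
        return self.reserve_y - k / (self.reserve_x + x_in_after_fee)

    def swap_x_for_y(self, amount_x_in: float) -> float:
        amount_y_out = self.get_amount_out(amount_x_in)
        self.reserve_x += amount_x_in
        self.reserve_y -= amount_y_out
        return amount_y_out

pool = ConstantProductPool(reserve_x=1_000.0, reserve_y=1_000.0)
print(pool.swap_x_for_y(10.0))          # ~9.87: price moves against the trader
print(pool.reserve_x * pool.reserve_y)  # k has grown: fees accrue to the pool
```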
While the primary context for the formal literature on blockchain-based AMMs has been provided by Angeris, Chitra et al. [13]–[15], the field has attracted new work on adjacent topics such as liquidity provisioning [16]–[18] and token-weighted voting systems [19].
**4.2 Peer-to-Peer Lending and Algorithmic Money Markets**
The ‘money markets’ for borrowing and lending capital with corresponding interest payments occupy an important role in the traditional financial services. Within DeFi, borrowing and lending applications are amongst the largest segments of financial applications, with $7bn total value locked[§] at the end of 2020. In borrowing/lending protocols, agents with excess capital (‘liquidity providers’) can lend crypto assets to a peer-to-peer protocol, receiving continuous interest payments. Conversely, a borrower can borrow crypto assets and pays an interest rate. Given the pseudonymous nature of blockchain technology, it is not possible to borrow funds purely on credit. To borrow funds, the borrowing agent has to ‘overcollateralize’ the loan, by providing other crypto assets exceeding the dollar value of the loan to the smart contract. The smart contract then issues a loan relative to 70–90% of the value of the collateral assets. Should the
[§ https://defipulse.com/](https://defipulse.com/)
value of the collateral assets drop below the value of the outstanding loan, the smart contract automatically auctions away the collateral on a decentralized exchange at a profit. The interest rate is algorithmically set by the relative supply and demand for each specific crypto asset. Initially pioneered by the MakerDAO[**] application, several protocols are now accessible providing similar services with novel interest rate calculations or optional insurance properties, currently presiding over $7bn in crypto assets under management.
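As a rough illustration of the liquidation logic described above, consider the following Python sketch; the 75% loan-to-value ratio and the prices are illustrative assumptions rather than the parameters of any particular protocol.

```python
# A minimal sketch of an overcollateralized loan: the loan may be at
# most MAX_LTV of the collateral value, and if the collateral price
# falls far enough, the position becomes eligible for liquidation.
MAX_LTV = 0.75

def max_loan(collateral_units: float, collateral_price: float) -> float:
    return collateral_units * collateral_price * MAX_LTV

def needs_liquidation(loan_value: float, collateral_units: float,
                      collateral_price: float) -> bool:
    return loan_value > collateral_units * collateral_price * MAX_LTV

units, price = 10.0, 2_000.0                 # e.g. 10 tokens at $2,000 each
loan = max_loan(units, price)                # borrow up to $15,000
print(loan, needs_liquidation(loan, units, price))   # 15000.0 False

price = 1_800.0                              # collateral price drops
print(needs_liquidation(loan, units, price))         # True: auction triggers
```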
**4.3 Derivatives**
Blockchain-based financial contracts (derivatives) are one of the fastest growing market
segments in DeFi. Here, application designers seek to make traditional financial derivatives such
as _options, futures and other kinds of_ _synthetic contracts available to the broader DeFi_
ecosystem. A futures contract stipulates the sale of an asset at a specified price with an expiry date; an option contract stipulates the _right_, but not the obligation, to sell or purchase an asset at a specific price.

As in traditional finance, both instruments can be used as insurance against market movements as well as for speculation on prices. Recently, a new segment of ‘synthetic’ assets has entered the market in the form of tokens pegged to an external price, commonly tracking the price of commodities (e.g., gold) or stocks (e.g., Tesla). A user can create such a synthetic asset by collateralizing crypto assets in a smart contract, similar to how a decentralized loan is issued. The synthetic asset tracks an external price feed (‘oracle’) which is provided to the blockchain. However, external price feeds are prone to technical issues and coordination problems, leading to staleness in case of network congestion or fraudulent manipulation [20].
**4.4 Automated Asset Management**
The traditional practice of ‘asset management’ in the financial services industry consists primarily of allocating financial assets so as to satisfy the long-term financial objectives of an institution or an individual. As the reader will have noted above, there is an increasing number of DeFi applications, all of which operate algorithmically without human intervention. This means that the DeFi markets operate around the clock and are impossible to manage manually.
The two main use cases for automated asset managers are ‘yield aggregators’ and traditional crypto asset indices. Utilizing the interoperability and automation of blockchain technology, ‘yield aggregators’ are smart contract protocols allocating crypto assets according to predefined rules, often with the goal of maximizing yield whilst controlling risk. Users typically allocate assets to a protocol, which automatically distributes the assets across applications in order to optimize the aggregate returns, while rebalancing capital allocations on an ongoing basis.

Indices, on the other hand, offer broad exposure to crypto assets akin to the practice of ‘passive’ investing. These applications track a portfolio of crypto assets by automatically purchasing these assets and holding them within the smart contract. Equivalent to exchange-traded funds (ETFs), stakeholders purchase ownership of the indices by buying a novel token, granting them algorithmic rights over a fraction of the total assets held within the smart contract[††].
## 5 Identifying and Managing Risk in Decentralized Finance
In this section, we identify and evaluate four risk factors which are likely to introduce new
complexities for stakeholders involved with DeFi applications.
[** https://makerdao.com/](https://makerdao.com/)
†† blockchain-in-asset-management.pdf (pwc.co.uk)
**5.1 Software Integrity and Security**
Owing to the deterministic nature of permissionless blockchain technology, applications deployed as smart contracts are subject to excessive security risks, as any signed transaction remains permanent once included in a block. The irreversible or ‘immutable’ nature of transactions in a blockchain network has led to significant loss of capital on multiple occasions, most frequently as a result of coding errors, sometimes relating to even the most sophisticated aspects of virtual machine and programming language semantics [21]. DeFi applications rely on the integrity of smart contracts and the underlying blockchain. This risk is further reinforced by uncertainties in future developments and the novelty of the technology.
**5.2 Transaction Costs and Network Congestion**
To mitigate abusive or excessive use of the computational resources available on the network,
computational resources required to interact with smart contracts are metered. This creates a
secondary market for transactions, in which users can outbid each other by attaching transaction
fees in the effort of incentivizing miners to select their transaction for inclusion in the next
block [11]. In times of network congestion, transactions can remain in a pending state, which
ultimately results in market inefficiency and information delays.
Furthermore, in these times, complex transactions can cost up to hundreds of dollars, making potential adjustments to the state costly.[‡‡] While intermediary service providers occasionally choose to subsidize protocol transaction fees[§§], application fees are in nearly all cases paid by the user interacting with the DeFi application.
Because application designers seek to lower the aggregate transaction costs, protocol fees,
slippage or impermanent loss through algorithmic financial modelling and incentive alignment,
stakeholders must carefully observe the state of the blockchain network. If a period of network
congestion coincides with a period of volatility, the application design may suddenly impose
excessive fees or penalties on otherwise standard actions such as withdrawing or adding funds to
a lending market [20].
**5.3 Participation in Decentralized Governance**
Responding to implications of the historically concentrated distribution of native assets amongst a small minority of stakeholders, DeFi application designers increasingly rely on a gradual distribution of fungible governance tokens in an attempt to adequately ‘decentralize’ decision-making processes [9].
While the distribution of governance tokens remains fairly concentrated amongst a small
group of colluding stakeholders, the gradual distribution of voting-power to liquidity providers
and users will result in an increasingly long-tailed distribution of governance tokens. Even so,
broad distributions of governance tokens may expose stakeholders who are not sufficiently
involved in monitoring the governance process to adverse governance outcomes [19].
**5.4 Application Interoperability and Systemic Risks**
A key value proposition for DeFi applications is the high level of interoperability between
applications. As most applications are deployed on the Ethereum blockchain, users can transact
seamlessly between different applications with settlement times rarely exceeding a few minutes.
This facilitates rapid capital flows between old and new applications on the network. While
interoperability is an attractive feature for any set of financial applications, tightly coupled and
[‡‡ https://etherscan.io/gastracker](https://etherscan.io/gastracker)
[§§ Coinbase.com](https://www.coinbase.com/)
-----
complex liquidity systems can generate an excessive degree of financial integration, resulting in
systemic dependencies between applications [22].
This factor is exacerbated by the often complex and heterogeneous methodologies for the
computation of exposure, debt, value, and collateral value that DeFi application designers have
used to improve their product. An increasing degree of contagion between applications may
introduce systemic risks, as a sudden failure or exploit in one application could ripple throughout
the network, affecting stakeholders across the entire ecosystem of applications.
A primary example of this dynamic is the computation of ownership
in so-called liquidity pools used by traders utilizing AMM smart contracts. When providing
liquidity in the form of crypto assets to a decentralized exchange, liquidity providers receive
‘liquidity shares’ redeemable for a proportional share of the liquidity pool, together with the
accumulated fees generated through trading.
As liquidity shares are typically transferable and fungible IOU tokens representing fractional
ownership of a liquidity pool, secondary markets for liquidity shares have emerged. Providing
liquidity in the form of these IOU tokens to such secondary markets creates additional
(third-generation) liquidity shares, generating additional fees for the liquidity provider.
As a consequence of this increasingly integrated market for liquidity shares, a rapid depreciation
of the source asset underlying the liquidity shares may trigger a sequence of cascading liquidations, as
the market struggles to price in rapid changes in the price of the source asset [20], [22], [23].
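The following minimal Python sketch illustrates the liquidity-share accounting described above for a constant-product pool; the class names and the square-root bootstrap mint are illustrative conventions, not the specification of any particular AMM.

```python
# Minimal sketch of liquidity-share accounting: deposits mint fungible
# shares, and burning shares redeems a pro-rata slice of both reserves,
# including any fees that accrued to the pool through trading.

import math

class Pool:
    def __init__(self):
        self.x = self.y = 0.0      # reserves of the two pooled assets
        self.shares = {}           # liquidity shares per provider
        self.total = 0.0           # total outstanding shares

    def add_liquidity(self, who, dx, dy):
        if self.total == 0:
            minted = math.sqrt(dx * dy)                   # bootstrap mint
        else:
            minted = min(dx / self.x, dy / self.y) * self.total
        self.x += dx
        self.y += dy
        self.shares[who] = self.shares.get(who, 0.0) + minted
        self.total += minted
        return minted              # the transferable 'liquidity share' IOU

    def remove_liquidity(self, who, burned):
        frac = burned / self.total
        out_x, out_y = self.x * frac, self.y * frac       # pro-rata reserves
        self.x -= out_x
        self.y -= out_y
        self.shares[who] -= burned
        self.total -= burned
        return out_x, out_y
```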
## 6 Conclusion: Is DeFi The Future of Finance?
In this article, we have examined the potential implications, complexities and risks associated
with the proliferation of consumer-facing DeFi applications. While DeFi applications deployed
on permissionless blockchains present a radical potential for transforming consumer-facing
financial services, the risks associated with engaging with these applications remain salient.
Future stakeholders contemplating an engagement with these applications ought to consider and
evaluate these key risks prior to committing or allocating funds to DeFi applications.
Scholars interested in DeFi applications may approach the theme from numerous angles,
extending early research on the market design of DeFi applications [14] or issues related to
governance tokens [9], [19] and beyond.
## Acknowledgments
This project has received funding from the European Union’s Horizon 2020 research and
innovation programme under the Marie Sklodowska-Curie grant agreement No 801199.
## References
[1] J. Kolb, M. Abdelbaky, R. H. Katz, and D. E. Culler, “Core Concepts, Challenges, and future Directions in
Blockchain: A centralized Tutorial,” _ACM Comput. Surv., vol. 53, no. 1, pp. 1–39, 2020. Available:_
[https://doi.org/10.1145/3366370](https://doi.org/10.1145/3366370)
[2] O. Labazova, “Towards a Framework for Evaluation of Blockchain Implementations,” in _Conference_
_Proceedings of ICIS (2019), 2019._
[3] O. Ross, J. Jensen, and T. Asheim, “Assets under Tokenization: Can Blockchain Technology Improve Post
Trade Processing?” in _Conference_ _Proceedings_ _of_ _ICIS_ _(2019),_ 2019. Available:
[https://doi.org/10.2139/ssrn.3488344](https://doi.org/10.2139/ssrn.3488344)
[4] J. R. Jensen and O. Ross, “Settlement with Distributed Ledger Technology,” in _Conference Proceedings of_
_ICIS (2020), 2020._
[5] B. Egelund-Müller, M. Elsman, F. Henglein, and O. Ross, “Automated Execution of Financial Contracts on
Blockchains,” _Bus._ _Inf._ _Syst._ _Eng.,_ vol. 59, no. 6, pp. 457–467, 2017. Available:
[https://doi.org/10.1007/s12599-017-0507-z](https://doi.org/10.1007/s12599-017-0507-z)
-----
[6] O. Ross and J. R. Jensen, “Compact Multiparty Verification of Simple Computations,” in _CEUR Workshop_
_[Proceedings, 2018. Available: https://doi.org/10.2139/ssrn.3745627](https://doi.org/10.2139/ssrn.3745627)_
[7] B. Düdder and O. Ross, “Timber Tracking: reducing Complexity of Due Diligence by using Blockchain
[Technology,” SSRN, 2017. Available: https://doi.org/10.2139/ssrn.3015219](https://doi.org/10.2139/ssrn.3015219)
[8] A. Antonopoulos and G. Wood, Mastering Ethereum: Building Smart Contracts and DApps. Sebastopol, CA:
O’Reilly Media, 2018.
[9] V. von Wachter, J. R. Jensen, and O. Ross, “How Decentralized is the Governance of Blockchain-based
Finance? Empirical Evidence from four Governance Token Distributions,” 2020. Available:
[https://arxiv.org/abs/2102.10096](https://arxiv.org/abs/2102.10096)
[10] G. Wood, “Ethereum: A secure decentralized generalized Transaction Ledger EIP 150,” in _Ethereum Project_
_Yellow Paper, 2014, pp. 1–32._
[11] P. Daian _et al., “Flash Boys 2.0: Frontrunning, Transaction Reordering, and Consensus Instability in_
[Decentralized Exchanges,” 2019. Available: https://arxiv.org/abs/1904.05234](https://arxiv.org/abs/1904.05234)
[12] W. Warren and A. Bandeali, “0x : An open Protocol for decentralized Exchange on the Ethereum Blockchain.”
[Available: https://github.com/0xProject](https://github.com/0xProject)
[13] G. Angeris, A. Evans, and T. Chitra, “When does the Tail wag the Dog? Curvature and Market Making,” 2020.
[Available: https://arxiv.org/abs/2012.08040](https://arxiv.org/abs/2012.08040)
[14] G. Angeris, H.-T. Kao, R. Chiang, C. Noyes, and T. Chitra, “An Analysis of Uniswap Markets,”
_[Cryptoeconomic Systems, vol. 1, no. 1, 2019. Available: https://doi.org/10.21428/58320208.c9738e64](https://doi.org/10.21428/58320208.c9738e64)_
[15] T. Chitra, “Competitive Equilibria between Staking and on-chain Lending,” Cryptoeconomic Systems, vol. 1,
[no. 1, 2021. Available: https://doi.org/10.21428/58320208.9ce1cd26](https://doi.org/10.21428/58320208.9ce1cd26)
[16] J. Aoyagi, “Liquidity Provision by Automated Market Makers,” _SSRN,_ 2020. Available:
[https://doi.org/10.2139/ssrn.3674178](https://doi.org/10.2139/ssrn.3674178)
[17] M. Tassy and D. White, “Growth Rate of A Liquidity Provider’s Wealth in XY = c Automated Market
[Makers,” 2020. Available: https://math.dartmouth.edu/~mtassy/articles/AMM_returns.pdf](https://math.dartmouth.edu/~mtassy/articles/AMM_returns.pdf)
[18] M. Bartoletti, J. H. Chiang, and A. Lluch-Lafuente, “SoK: Lending Pools in Decentralized Finance,” 2020.
[Available: https://arxiv.org/abs/2012.13230](https://arxiv.org/abs/2012.13230)
[19] G. Tsoukalas and B. H. Falk, “Token-Weighted Crowdsourcing,” Manag. Sci., vol. 66, no. 9, pp. 3843–3859,
[2020. Available: https://doi.org/10.1287/mnsc.2019.3515](https://doi.org/10.1287/mnsc.2019.3515)
[20] D. Perez, S. M. Werner, J. Xu, and B. Livshits, “Liquidations: DeFi on a Knife-edge,” 2020. Available:
[https://arxiv.org/abs/2009.13235](https://arxiv.org/abs/2009.13235)
[21] L. Luu, D.-H. Chu, H. Olickel, P. Saxena, and A. Hobor, “Making Smart Contracts Smarter,” in _Proceedings_
_of the 2016 ACM SIGSAC Conference on Computer and Communications Security (CCS’16), pp.254–269,_
[2016. Available: https://doi.org/10.1145/2976749.2978309](https://doi.org/10.1145/2976749.2978309)
[22] L. Gudgeon, D. Perez, D. Harz, B. Livshits, and A. Gervais, “The Decentralized Financial Crisis,” _Crypto_
_Valley_ _Conference_ _on_ _Blockchain_ _Technology_ _(CVCBT),_ pp. 1–15, 2020. Available:
[https://doi.org/10.1109/CVCBT50464.2020.00005](https://doi.org/10.1109/CVCBT50464.2020.00005)
[23] V. von Wachter, J. R. Jensen, and O. Ross, “Measuring Asset Composability as a Proxy for Ecosystem
Integration,” in DeFi _[Workshop Proceedings of FC'21, 2021. Available: https://arxiv.org/abs/2102.04227](https://arxiv.org/abs/2102.04227)_
-----
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.7250/csimq.2021-26.03?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.7250/csimq.2021-26.03, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "GOLD",
"url": "https://csimq-journals.rtu.lv/article/download/csimq.2021-26.03/2570"
}
| 2,021
|
[
"JournalArticle",
"Review"
] | true
| 2021-04-30T00:00:00
|
[
{
"paperId": "ccc89f268ae2c69384e4bfaeca463aa38d56d7f1",
"title": "How Decentralized is the Governance of Blockchain-based Finance: Empirical Evidence from four Governance Token Distributions"
},
{
"paperId": "d102780639e4d91e6549b23f621da00e961447bf",
"title": "SoK: Lending Pools in Decentralized Finance"
},
{
"paperId": "3ef3e42c148961a92be129d2ed8f630c0e7fcf7b",
"title": "When does the tail wag the dog? Curvature and market making."
},
{
"paperId": "fa07b0d228edd081d548da718474ec8fcc9b9d13",
"title": "Settlement with Distributed Ledger Technology"
},
{
"paperId": "8cfea92862743cca95fbe48b9f4acf89ce16511d",
"title": "Liquidations: DeFi on a Knife-Edge"
},
{
"paperId": "44f28c4f4e302d9584aa32f1c20c367b2e05c6cd",
"title": "Token-Weighted Crowdsourcing"
},
{
"paperId": "d798034b2e65d8afe2d2ead977731aceb55c9012",
"title": "Liquidity Provision by Automated Market Makers"
},
{
"paperId": "c876d76a22b23ae28fe6d9c5589f857768d543fc",
"title": "The Decentralized Financial Crisis"
},
{
"paperId": "be28d28b546e5f0904cd1a56485511b706363587",
"title": "Core Concepts, Challenges, and Future Directions in Blockchain"
},
{
"paperId": "5f9b892a11df3c3dcb20b7c0ded3a0bf2724f32c",
"title": "Competitive equilibria between staking and on-chain lending"
},
{
"paperId": "5fcbdeabead3daed43816bb714992df1e866b628",
"title": "An Analysis of Uniswap Markets"
},
{
"paperId": "393ab84a86631d5fda128c3aac0bf5476da07791",
"title": "Flash Boys 2.0: Frontrunning, Transaction Reordering, and Consensus Instability in Decentralized Exchanges"
},
{
"paperId": "f63636d729899ce2f070adf9a7af4b1e0892dd9a",
"title": "Timber Tracking: Reducing Complexity of Due Diligence by Using Blockchain Technology"
},
{
"paperId": "1f155d4c196954aa8951ed7c769ddec64631c132",
"title": "Automated Execution of Financial Contracts on Blockchains"
},
{
"paperId": "7968129a609364598baefbc35249400959406252",
"title": "Making Smart Contracts Smarter"
},
{
"paperId": "66b285002b6864019a644217c697b494b0215aa1",
"title": "Measuring Asset Composability as a Proxy for Ecosystem Integration"
},
{
"paperId": "ded2d1d492efc89fee394f50b8716f1c7d6a2370",
"title": "Assets under Tokenization: Can Blockchain Technology Improve Post-Trade Processing?"
},
{
"paperId": "0e5bfb12a8ebf718b2ee41d25957f7c66651b3ab",
"title": "Towards a Framework for Evaluation of Blockchain Implementations"
},
{
"paperId": "7e8e1aeb7b973d3ed99409fe0ce0605d9293eabe",
"title": "Compact Multiparty Verification of Simple Computations"
},
{
"paperId": null,
"title": "Mastering Ethereum: Building Smart Contracts and DApps"
},
{
"paperId": null,
"title": "“Ethereum: A secure decentralized generalized Transaction Ledger EIP 150,”"
},
{
"paperId": null,
"title": "“0x : An open Protocol for decentralized Exchange on the Ethereum Blockchain.”"
}
] | 7,479
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/01614001dfc81c1d899402168314ff0c3ef7ce1e
|
[
"Computer Science"
] | 0.842226
|
Private and efficient set intersection protocol for RFID-based food adequacy check
|
01614001dfc81c1d899402168314ff0c3ef7ce1e
|
IEEE Wireless Communications and Networking Conference
|
[
{
"authorId": "3388391",
"name": "Zakaria Gheid"
},
{
"authorId": "1756893",
"name": "Y. Challal"
},
{
"authorId": "2118537403",
"name": "Lin Chen"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"IEEE Wirel Commun Netw Conf",
"WCNC",
"Wireless Communications and Networking Conference",
"Wirel Commun Netw Conf"
],
"alternate_urls": null,
"id": "27235614-bd3e-4d6b-be38-5ede18f4e209",
"issn": null,
"name": "IEEE Wireless Communications and Networking Conference",
"type": "conference",
"url": "http://www.ieee-wcnc.org/"
}
| null |
# Private and Efficient Set Intersection Protocol For RFID-Based Food Adequacy Check
## Zakaria Gheid[†], Yacine Challal[∗†], Lin Chen[‡]
_∗Centre de Recherche sur l’Information Scientifique et Technique, Algiers, Algeria_
_†École nationale supérieure d’informatique, Laboratoire des Méthodes de Conception des Systèmes, Algiers, Algeria_
_‡Lab. Recherche Informatique (LRI-CNRS UMR 8623), Univ. Paris-Sud, 91405 Orsay, France_
Email: z gheid@esi.dz, y challal@esi.dz, chen@lri.fr
**_Abstract—Radio Frequency Identification (RFID) is a technol-_**
**ogy for automatic object identification that has been implemented**
**in several real-life applications. In this work, we expand a novel**
**relevant application of RFID tags for grocery stores, which aims**
**to check the adequacy of food items with respect to the shoppers’**
**personal preferences. Unlike similar works, we focus on shoppers’**
**privacy and running time efficiency. For this aim, we propose**
**a novel private set intersection (PSI) protocol to be used in**
**matching the shoppers’ personal preferences with the set of each**
**item’s adequate profiles that are held by the back-end server of**
**the store. We provide a standard security proof against curious**
**stores and malicious customers. For efficiency concern, we build**
**our protocol without cryptographic operations, and we achieve**
**a linear asymptotic complexity of O(v + c) for communications**
**and store-side computations, where v and c are the numbers**
**of profiles in the store’s back-end server and the shopper’s list**
**of preferences respectively. Moreover, experimental results and**
**comparisons with state-of-the art solutions reveal the scalability**
**of our novel PSI protocol for big market stores.**
**_Index Terms—Radio Frequency Identification (RFID), Profile_**
**Matching, Private Set Intersection.**
I. INTRODUCTION
Radio Frequency Identification (RFID) is a wireless technology that uses radio waves to identify and track objects.
RFID systems consist of tags attached to the objects to be
identified and readers that communicate with the tags to collect
information. Owing to its advantages over barcode systems,
the RFID market is growing and is expected to exceed US$18 billion by 2026 [1]. This rapid
proliferation has enabled a wide range of applications, with commerce foremost among them.
For instance, large-scale supermarkets like Walmart attach RFID tags to their goods to recover
business revenue lost to theft or inaccurate accounting of goods [2]. Amazon Inc. has launched
Amazon Go, a high-tech retail store currently in private beta testing in Seattle, and filed a
patent [3] in which RFID is used to detect when a shopper takes an item from the shelf. The
system then adds up the item and charges the shopper’s Amazon account without requiring them
to go through a traditional check-out line. This should improve the shopping experience for
customers, with easier item returns.
Following this technology adoption, we introduce a novel RFID-based application that we coin
FAC: Food Adequacy Check, which informs customers whether food items match their preferences.
For instance, customers suffering from diabetes or needing a gluten-free diet should have
detailed insight about items before buying them. Traditionally, a shopper can easily make such
a check relying on product labels; nevertheless, matching a complex profile that involves several
pieces of information such as age, weight, multiple diseases, and adherence to a special program
such as a low-carb diet [4] is a tedious task. This is especially true if the shopper wants to
match several sets of preferences, including his/her own and those of his/her family.
Accordingly, we propose Π-FAC, a novel private and efficient set intersection protocol that we
use to match customers’ personal preferences with items’ adequate profiles. We design Π-FAC as
a multi-party computation (MPC) protocol that is implemented on customers’ smartphones and the
supermarket back-end server. We use passive RFID tags with no computational capability, the
cheapest type of tag, to allow the deployment of our application at an affordable cost.
We address the privacy of the shoppers’ preferences against a curious server, as well as the
privacy of the database of item profiles held by the back-end server against malicious shoppers,
as this may be a paid service. We provide a simulation-based security proof under the standard
real/ideal paradigm [5].
For the efficiency concern, we build our protocol upon
efficient matrix algebra without cryptographic operations to
ensure its scalability for large supermarkets. We achieve a
linear asymptotic complexity of O(v + c) in communications and server-side computations, where
v and c are, respectively, the numbers of profiles in the server database and in the customer’s
list of preferences. Finally, we make experimental evaluations
to confirm the efficiency of our (Π-FAC) protocol compared
to the hash-based private set intersection solution used in
practice.
The rest of this paper is organised as follows. In Section II,
we review recent literature works in Private Set Intersection
field and we discuss them. In Section III, we introduce a
novel RFID-based application that we name Food Adequacy Check (FAC). Then, in Section IV, we
detail our novel private set intersection protocol used within the FAC application for private
profile matching. Next, in Section V, we provide a standard security analysis of our protocol
using the Real/Ideal paradigm. After that, we devote Section VI to evaluating the efficiency of
our protocol compared to the hash
-----
based solution used in practice. Finally, we conclude this work
by summarizing our contribution.
II. RELATED WORK
In this section, we provide a literature survey on the private
set intersection (PSI) functionality that we use to implement
the Food Adequacy Check application. We focus on PSI
protocols that work in the standard (plain) model, where
security is only based on complexity assumptions.
Assume a client (C) and a server (V ) having private
sets of profiles X and Y of sizes c and v respectively.
Two main approaches were used to solve PSI(X,Y), namely
Oblivious Polynomial Evaluation (OPE) [8] and Oblivious
Pseudo-Random Functions (OPRF) evaluation [9].
_1) OPE-based PSI: In this approach, C defines a polynomial P(·) such that P(x) = 0 for each
x ∈ X, and sends to V homomorphic encryptions of the coefficients of P(·). Then, V computes the
encryption of (r · P(y) + y) for each y ∈ Y, using the homomorphic properties of the encryption
system and a fresh random r. Finally, C decrypts the received ciphertexts and obtains either
elements of the intersection (if the plaintexts match an element of X) or random values. In this
approach, we find the works of Freedman et al. [10], Kissner and Song [11], Dachman-Soled et
al. [14] and Hazay [15]. They targeted semi-honest and malicious settings, where the most
efficient construction [15] incurs O(v + c) communications and O(c + v log log c) computations,
under the strong Decisional Diffie-Hellman assumption (strong-DDH)._
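As a toy illustration of the OPE idea (with the homomorphic-encryption layer deliberately omitted, so this sketch offers no privacy at all), the following Python snippet shows why r · P(y) + y reveals exactly the elements of the intersection; a real protocol evaluates the same expression under encryption of P's coefficients.

```python
# Toy OPE illustration: X is encoded as the roots of P, so r*P(y) + y
# equals y exactly when y is in X, and is random noise otherwise.

import random

FIELD = 2**61 - 1          # toy prime modulus for the arithmetic

def psi_ope_plain(X, Y):
    def P(t):              # P(t) = product over x in X of (t - x), mod FIELD
        v = 1
        for x in X:
            v = v * (t - x) % FIELD
        return v
    masked = [(random.randrange(1, FIELD) * P(y) + y) % FIELD for y in Y]
    # C keeps a received value only if it matches one of its own elements
    return sorted(set(masked) & set(X))

print(psi_ope_plain({3, 7, 11}, {5, 7, 11, 13}))   # -> [7, 11]
```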
_2) OPRF-based PSI: Here, V defines a random key k for a pseudo-random function (PRF) f_k(·) and
computes the set f_kY = {f_k(y) : y ∈ Y}. Then, V and C execute an OPRF protocol in which V
inputs f_k(·) and C inputs the set X and obtains the set f_kX = {f_k(x) : x ∈ X}. At the end, V
sends the set f_kY to C, which evaluates f_kX ∩ f_kY. This approach was used by Hazay and
Lindell [16], Jarecki and Liu [17], Hazay and Nissim [19] and Hazay [15] to propose PSI
protocols secure in the semi-honest and malicious settings. The most efficient protocol that
does not require non-standard assumptions [15] costs O(v + c) computations under the strong-DDH
assumption and O((v + c) log(v + c)) under the DDH assumption._
Contrary to existing PSI protocols that rely on cryptographic schemes, we propose a novel PSI
protocol based on efficient matrix algebra that is secure under the mixed model of adversaries.
Our protocol incurs O(v + c) communications and server computations while maintaining fairness.
III. FAC: A NOVEL RFID-BASED FOOD ADEQUACY
CHECK SYSTEM
In this section, we present a novel RFID-application that
aims to check the adequacy of foods to shoppers’ personal
preferences.
_A. FAC Overview_
Figure 1. Architecture & infrastructure requirements for the FAC application (three layers: an
application layer linking clients’ smartphones, which hold the private personal profiles, to the
back-end server, which holds all item profiles, via the private set intersection protocol (Π-SI);
a communication layer; and the RFID system).

To illustrate the FAC application, we consider a supermarket that has tagged its items with RFID
tags and provides shopping carts with embedded RFID reader devices for its clients. Each client
is provided with a mobile application that he/she sets up on his/her smartphone to enter
information about the personal food preferences that he/she wants to match. When a client enters
the supermarket he/she uses the provided mobile application
to connect to the supermarket wireless gateway. Then, each
time the shopper takes an item from the shelf and passes it
through the embedded RFID reader device, the latter reads the
item tag and passes its information to the mobile application
of the shopper. Once the application handles a novel arriving
tag information, it sends it to the back-end server with a profile
matching request. The shopper’s smartphone and the back-end
server start running a profile matching process using our novel
private set intersection protocol (Π-FAC). This application
ends-up by showing the shopper which profiles match the
taken item among the set of profiles that he/she entered.
_B. FAC Architecture_
To implement the FAC application, we propose the following architecture model that is based on three layers, namely
RFID system, communication, and application (Figure 1).
_• RFID system. This is the basic layer. It consists of an RFID system with standard components:
passive tags put on each item of the supermarket, reader devices that can be either fixed on the
shelves or embedded in shopping carts, and an RFID middleware. This latter component is not
required by our FAC application; it serves to recover each tag read by a device so that other
applications can be integrated over the same RFID infrastructure._
_• Communication layer. It involves a wireless commu-_
nication gateway that covers the supermarket surface.
It aims to interconnect the upper-layer components and
ensures the communication with the RFID reader devices
and the middleware.
_• Application layer. It involves the FAC application set up on the clients’ smartphones and the
back-end server of the supermarket. The mobile application allows the user to input his/her
personal preferences and connect to the back-end server to run the private profile-matching
process using our built-in private set intersection protocol (Π-FAC)._
-----
IV. A NOVEL PRIVATE AND EFFICIENT SET
INTERSECTION PROTOCOL
In this section, we present our novel private set intersection
protocol as well as its design model.
_A. Our Methodology_
In this work, we use a matrix-based approach in which we represent the private sets of profiles
as row matrices (each matrix corresponds to a private set of profiles, and each row within it
corresponds to a profile in the set). Then, each party obfuscates its matrix by multiplying it
with a random matrix chosen independently from the input domain. Next, each party sends its
resultant matrix to the other party to be multiplied by the other random matrix. Since matrix
multiplication is not commutative, the two parties interchange the sides of the matrix product
(left multiplication and right multiplication), which is required for the correctness of the
scheme. At the end, the two resulting matrices are checked for row equality, as each row
corresponds to an original element of a set. In what follows, we give a detailed implementation
of the Π-FAC protocol.
_B. Protocol Design_
To introduce our novel private set intersection protocol
(Π-FAC), we consider a client denoted C and a back-end
server denoted V having respectively X = {x1, ..., xc} and
_Y = {y1, ..., yv} sets of profiles and want to securely get the_
intersection between their sets. Assume for 1 _i_ _c and_
_≤_ _≤_
1 ≤ _j ≤_ _v: xi and yj ∈_ R[n]. Let M(m, n) denote the set of
all m-by-n matrices and denote the matrix multiplication
_⊗_
operator. Let M1 and M2 denote random invertible matrices
used by C and V respectively to obfuscate their sets, where
**M1 ∈** M(c, c) and M2 ∈ M(n, n). Let MX and ∪i>1MYi
denote the private sets X and Y respectively, represented as
row matrices, where MX ∈ M(c, n) and MYi ∈ M(c, n). We
present the detail of Π-FAC protocol in Algorithm 1.
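To illustrate the algebra behind the protocol, the following numpy sketch (our illustration, not the authors' code) demonstrates the masking step applied on the server side: row identity is preserved under right-multiplication by a shared invertible matrix M2, so masked profile rows can be compared for equality without exposing the raw attribute vectors. The full protocol additionally hides C's matrix behind the left factor M1, as specified in Algorithm 1; the data sizes below mirror the paper's evaluation setup.

```python
# Masked row comparison: equality of rows survives a shared invertible
# right factor, which is what lets obfuscated profiles be matched.

import numpy as np

rng = np.random.default_rng(42)
n = 20                                   # attributes per profile
X = rng.integers(0, 50, size=(10, n))    # client profiles, one per row
Y = rng.integers(0, 50, size=(1000, n))  # server profiles, one per row
Y[137] = X[3]                            # plant one common profile

M2 = rng.standard_normal((n, n))         # random n x n factor, invertible with probability 1
X2, Y2 = X @ M2, Y @ M2                  # right-masked matrices

matches = [(i, j) for i in range(len(X2)) for j in range(len(Y2))
           if np.allclose(X2[i], Y2[j])]
print(matches)                           # -> [(3, 137)]
```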
V. SECURITY ANALYSIS
In this section, we give a security proof of our protocol
using the Real/Ideal security model [5].
_A. Security Model_
Let Π denote a multi-party protocol executed by m participants (P1,...,Pm) in order to evaluate a function f . Let B
denote the class of adversary that may corrupt participants in
Π. Let R and D denote respectively the real and the ideal
executions of Π on the set of inputs w and the set of security
parameters sec.
**Notation 1.** Let view_E^Π(w, sec)_i denote the set of messages received by the party P_i
(1 ≤ i ≤ m), along with its inputs and outputs, during the execution E of Π on the set of
inputs w and security parameters sec.
**Notation 2.** Let out_E^Π(w, sec)_i denote the output of the party P_i (1 ≤ i ≤ m) from the
execution E of the protocol Π on the set of inputs w and security parameters sec. Let
out_E^Π(w, sec) denote
**Algorithm 1: Π-FAC, a Private and Efficient Set Intersection Protocol**

Input: X = {x₁, ..., x_c}, C’s set of personal profiles; Y = {y₁, ..., y_v}, V’s set of all item profiles
Output: (for C only) Ψ(X, Y): the private set intersection between X and Y
Require: (c, n) ∈ ℕ², 0 < c < n

**Step 1 by C**
1: Generates a random invertible M1 ∈ M(c, c)
2: Creates MX ∈ M(c, n) with X’s elements as rows
3: Computes M1X = M1 ⊗ MX
4: Sends M1X to V

**Step 2 by V**
5: Generates a random invertible M2 ∈ M(n, n)
6: Computes M1X2 = M1X ⊗ M2
7: for (i = 1; i < (v/c) + 1; i++) do
8:   Creates MY_i ∈ M(c, n) with Y’s elements as rows
9:   Computes MY2_i = MY_i ⊗ M2
10: end for
11: Sends M1X2 and ∪_{i>1}MY2_i to C

**Step 3 by C**
12: Computes M1Y2_i = M1 ⊗ MY2_i for each received MY2_i
13: For each (m, n, i): if M1X2[m,*] = M1Y2_i[n,*] and M1X2[m,*] ∉ Ψ(X, Y), then put M1X2[m,*] in Ψ(X, Y)
the global output of all collaborating parties from the same execution of Π, where

out_E^Π(w, sec) = ∪_{i=1}^{m} out_E^Π(w, sec)_i
During a real execution (R), we consider the presence of an adversary denoted A that behaves
according to the class B while corrupting a set of participants P_i (1 ≤ i ≤ m). At the end of
R, uncorrupted parties output whatever was specified in Π, and the corrupted P_i output
arbitrary functions of their view_R^Π(w, sec)_i.

During an ideal execution (D), we consider the presence of a trusted incorruptible party denoted
T, which receives the set of inputs w from all participants in order to evaluate the function f
in the presence of an adversary denoted S. We assume S corrupts the same P_i as the
corresponding adversary A of the real execution, and behaves according to the same class B
before sending inputs to T. At the end of D, uncorrupted participants output what was received
from T, and the corrupted P_i output arbitrary functions of their view_D^Π(w, sec)_i.
**Definition 1.** Let Π and f be as above. We consider Π a secure multi-party protocol if, for
any real adversary A of class B that attacks the protocol Π during its execution on the set of
inputs w and the set of security parameters sec, there exists an adversary S in the ideal
execution of the same class B that can emulate any effect achieved by A. Let ≡_d denote
distributional equality. We formalize the definition of a secure multi-party protocol Π as
follows:

{out_R^Π(w, sec)} ≡_d {out_D^Π(w, sec)}    (1)
_B. Security Proof_

In what follows, we give security simulations of the Π-FAC protocol using the Real/Ideal
paradigm. The allowed behavioural class of adversary is the mixed one, where the client (C),
having the set of inputs X, is actively corrupted, and the server (V), having the set of inputs
Y, is passively corrupted.

Let A, S and T denote respectively a real adversary, an ideal adversary and a trusted third
party, where A and S have the same class. Let Π denote the Π-FAC protocol (Algorithm 1), sec
denote the security parameters presented below (Theorem 1), w denote the set of inputs
{MX, ∪_{i>1}MY_i}, which are the matrix representations of the sets X and Y respectively, and
Ψ(X, Y) denote the private set intersection between X and Y.
**Theorem 1.** Given a set of security conditions sec = {(n, c) ∈ ℕ² : 0 < c < n}, the protocol
Π-FAC defined in Algorithm 1 is a secure multi-party protocol against an active corruption of C.
_Proof:_ Assume C is actively corrupted by A. Then it can only inject fake inputs (MA), since
aborting the protocol untimely would achieve nothing. Assume C sends a fake MA. In this case,
S can emulate A by simply handling the fake MA and sending it to T, which performs the required
computation and sends back Ψ(X, Y) to C, thereby completing the simulation. At the end, the
views of C in the ideal and real executions are as follows:

view_D^Π(w, sec)_C = {MX, Ψ(X, Y)}    (2)

view_R^Π(w, sec)_C = {MX, M1X2, ∪_{i>1}MY2_i, Ψ(X, Y)}    (3)
Moreover, M1X2 = M1X ⊗ M2, where M1X ∈ M(c, n) and M2 ∈ M(n, n). According to the security
parameters (sec), we have c < n. This preserves the privacy of M2. Thereby M1X2, which provides
(c × n) equations against (n × n) unknowns for C, does not reveal meaningful information and
can be reduced from C’s view. Likewise, ∪_{i>1}MY2_i = ∪_{i>1}MY_i ⊗ M2, where MY_i ∈ M(c, n)
and M2 ∈ M(n, n). Then ∪_{i>1}MY2_i provides α(c × n) equations against (α(c × n) + (n × n))
unknowns for C, where 0 < α < (v/c) + 1. This likewise does not reveal meaningful information
and can be reduced from C’s view. After these reductions, the view of C in the real execution is

view_R^Π(w, sec)_C = {MX, Ψ(X, Y)}    (4)

Thus, relying on (2) and (4), we get

{out_R^Π(w, sec)_C} ≡_d {out_D^Π(w, sec)_C}    (5)
On the other hand, the uncorrupted V cannot be affected by the corruption of C, since V does
not receive any output in the real execution. Thus, T simply does not send it any output during
the ideal execution. This means that

{out_R^Π(w, sec)_V} ≡_d {out_D^Π(w, sec)_V}    (6)

Through (5) and (6), we have proved by simulation that all effects achieved by a real active
adversary corrupting C can also be achieved in an ideal execution. Hence, Π-FAC is a secure
multi-party protocol against active corruption of C (Definition 1).
**Theorem 2.** Given a set of security conditions sec = {(n, c) ∈ ℕ² : 0 < c < n}, the protocol
Π-FAC defined in Algorithm 1 is a secure multi-party protocol against a passive corruption of V.
_Proof:_ Assume V is passively corrupted. In this case, V must follow the specification of the
protocol Π-FAC, yet it is allowed to analyse all data gathered during the execution. Then S
simply handles V’s input and sends it to T, which performs the required computation and sends
Ψ(X, Y) to C while sending nothing to V, thereby completing the simulation. At the end, the
views of V in the ideal and real executions are as follows:

view_D^Π(w, sec)_V = {∪_{i>1}MY_i}    (7)

view_R^Π(w, sec)_V = {∪_{i>1}MY_i, M1X}    (8)
Moreover, M1X = M1 ⊗ MX, where M1 ∈ M(c, c) and MX ∈ M(c, n). Since we defined 0 < c as a
security parameter (sec), we get (c × n) < ((c × n) + (c × c)). Thus M1X, which provides
(c × n) equations against ((c × n) + (c × c)) unknowns for V, does not reveal meaningful
information and can be reduced from V’s view. After this reduction, we obtain

view_R^Π(w, sec)_V = {∪_{i>1}MY_i}    (9)

Thus, relying on (7) and (9), we get

{out_R^Π(w, sec)_V} ≡_d {out_D^Π(w, sec)_V}    (10)
On the other hand, the uncorrupted C outputs what was received from T in the ideal execution,
which is Ψ(X, Y) according to the simulation given above, and outputs what was specified in the
protocol Π-FAC in the real execution, which is also Ψ(X, Y) (Algorithm 1, output section). Then
we have

{out_R^Π(w, sec)_C} ≡_d {out_D^Π(w, sec)_C}    (11)

Through (10) and (11), we have proved by simulation that all effects achieved by a real passive
adversary corrupting V can also be achieved in an ideal execution. Hence, Π-FAC is a secure
multi-party protocol against passive corruption of V (Definition 1).
**Corollary 1.** Given a set of security conditions sec = {(n, c) ∈ ℕ² : 0 < c < n}, the
protocol Π-FAC defined in Algorithm 1 is a secure multi-party protocol in the mixed model of
adversary, where C is actively corrupted and V is passively corrupted.
-----
_Proof:_ Corollary 1 relies directly on Theorem 1 and Theorem 2 proved above, considering
separately the case where the client (C) is corrupted and the case where the server (V) is
corrupted. If both parties are corrupted, we are not required to provide security guarantees.
VI. PERFORMANCE ANALYSIS
In this section, we simulate the performance of our protocol (Π-FAC) using a queueing-theory
model. We ran experiments on the same data sets with a custom simulator built in Python, on an
Intel i5-2557M CPU running at 1.70 GHz with 4 GB of RAM.
_1) Experimental scenario:_ We consider a large-scale supermarket that wants to provide its
clients with a Food Adequacy Check (FAC) service using our private set intersection protocol
(Π-FAC). In order to assess the scalability of Π-FAC, we consider the back-end server (V)
receiving N FAC requests from different clients (C) at rate λ requests per minute according to
a Poisson process: N ∼ P(λ). Assume the request processing times t_i have an exponential
distribution with rate µ requests per minute: t_i ∼ exp(µ). Without loss of generality, we fix
the number of client personal profiles to 10 per client, where each profile involves 20
attributes (∈ ℝ²⁰). Besides, we consider V having 1000 profiles of 20 attributes per item.
This results in an average range of 50 possible values for each attribute, which is amply
sufficient in real applications.
For comparison purposes, we model the same scenario as above, while V uses the hash-based
private set intersection protocol used in practice (Section IV-A) instead of our Π-FAC
protocol. For this, we use an efficient commutative hash function H_k(x) = x^k mod p, where k
is a 32-bit security parameter and p is a 32-bit random prime.
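For intuition, here is a hedged Python sketch of how such a commutative map enables a simple PSI: both parties mask with private exponents, and since H_a(H_b(x)) = H_b(H_a(x)) mod p, double-masked values can be matched without revealing the raw elements. The modulus below is illustrative rather than the paper's 32-bit parameters.

```python
# Commutative-hash PSI sketch based on H_k(x) = x^k mod p.

import random

P = 2**61 - 1                       # a Mersenne prime, used as the modulus

def H(x, k):
    return pow(x, k, P)             # the commutative hash H_k(x) = x^k mod p

def psi_commutative(X, Y):
    a = random.randrange(2, P - 1)  # client's secret exponent
    b = random.randrange(2, P - 1)  # server's secret exponent
    # client masks X once; server masks the result again and returns it
    double_X = {H(H(x, a), b): x for x in X}
    # server masks Y once; client applies its own exponent on top
    double_Y = {H(H(y, b), a) for y in Y}
    return sorted(x for m, x in double_X.items() if m in double_Y)

print(psi_commutative({3, 7, 11}, {5, 7, 11, 13}))  # -> [7, 11]
```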
Assume V has a FIFO service discipline with unlimited access and operates all day long. Let
M/M/1 denote this system using Kendall’s notation [23]. We evaluate this system by varying the
number of clients requesting the FAC service (λ) over the range {10, 20, 50, 100, 200} clients
(requests) per minute. Let mult, add, exp and mod denote respectively one multiplication, one
addition, one exponentiation and one modulo operation. Let v and c denote the numbers of
profiles of V and C respectively, where each profile involves n attributes. To measure the µ
parameter, we evaluate the computational costs required by V when using Π-FAC and the hashing
scheme via the following equations:

Cost_V^(Π-FAC) = n²(v + c) mult + n(n − 1)(v + c) add

Cost_V^(hash) = n(v + c) exp + n(v + c) mod
_2) Results & discussion:_ We made experimental evaluations by simulating two back-end servers
of a supermarket handling FAC requests, one running the Π-FAC protocol and the other running
the hash-based protocol. We used the model described in Table I and evaluated the system
performance for each protocol according to the number of requests (λ) through the following
metrics: the usability rate (U) of the back-end server, its response time (R), the average
number of clients (N) in the system, and the mean length of the waiting queue (Q). Let ρ denote
the traffic intensity rate. We assess these metrics according to the following equations and
present the results in Table I and Figure 2:

ρ = λ/µ,  U = ρ,  N = ρ/(1 − ρ),  Q = N − ρ,  R = N/λ
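A direct Python transcription of these M/M/1 formulas follows (λ and µ must share one time unit, here requests per minute, and R comes out in that same unit); the µ values are the paper's measured processing rates.

```python
# M/M/1 metrics per the formulas above; a non-steady state (rho >= 1)
# yields an unbounded queue and response time.

def mm1(lam, mu):
    rho = lam / mu                       # traffic intensity
    if rho >= 1:                         # queue grows without bound
        return rho, rho, float("inf"), float("inf"), float("inf")
    N = rho / (1 - rho)                  # mean number of clients in the system
    Q = N - rho                          # mean queue length
    R = N / lam                          # mean response time (Little's law)
    return rho, rho, N, Q, R             # (rho, U, N, Q, R)

for lam in (10, 20, 50, 100, 200):
    print(lam, "Pi-FAC:", mm1(lam, 470), "Hash:", mm1(lam, 70))
```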
For low arrival rates (λ < 100), the results show that the server running Π-FAC experienced a
low traffic intensity (ρ < 0.1), which results in a very low probability of server overload.
This claim is confirmed by the low server utilization rate (U < 10%) and the zero queue length
(Q = 0). On the other hand, the server running the hash protocol was less efficient, with a
usability rate of U > 70% for 50 clients per minute. This high usability tends to overload the
server if more clients arrive (λ > 50), which is confirmed by the increase in the number of
clients waiting in the queue (Q > 0). Regarding response time, Π-FAC provided a highly
efficient and stable response (R ≈ 2 ms) compared to the hash protocol, which was less
efficient and showed a significant delay with each increase in the arrival rate.
For high arrival rates (λ > 100), the server running Π-FAC remained efficient, with a usability
rate of U < 50% for 200 clients per minute, while providing a highly efficient response time
(R < 4 ms). In contrast, the server running the hash protocol experienced a very high traffic
intensity (ρ > 1), which drives the system to a non-steady state and overloads the server with
a utilization rate of U > 100%. This leads to an infinite queue length (Q = ∞) and an infinite
response time (R = ∞).
The experimental results reveal the efficiency of our Π-FAC protocol compared to the hash-based
solution used in practice. This efficiency arises from the fact that our protocol involves
efficient arithmetic operations (addition and multiplication) and does not require any expensive
computations (modulo, exponentiation), which are involved in cryptographic methods. These
performance results show the adequacy of our protocol for use by large-scale supermarkets.
VII. CONCLUSION
In this paper, we introduced a novel RFID-based application that checks whether food items
match the preferences of shoppers, according to their personal profiles. For this, we proposed
Π-FAC, a novel set intersection protocol that targets privacy and efficiency concerns while
matching shoppers’ preferences with the item profiles held by the back-end server of the store.
Through a security analysis conducted with the standard Real/Ideal paradigm, we showed the
privacy guarantees provided by Π-FAC against curious stores and malicious clients. Besides,
through an empirical performance analysis, we demonstrated the high efficiency of our protocol
compared to the hash-based private set intersection used in practice. The evaluation results
reveal the adequacy of Π-FAC to provide a private and efficient Food Adequacy Check service for
large-scale stores.
-----
Table I
EVALUATION OF A BACK-END SERVER STORE USING THE M/M/1 MODEL
(fixed parameters: v = 1000, c = 10, n = 20)

| Protocol | Client requests (λ)/min | Running time (s) | Processing rate (µ) | Traffic intensity (ρ) | Usability rate (U) % | Number of clients (N) × 10⁻² | Queue length (Q) | Response time (R) ms |
|---|---|---|---|---|---|---|---|---|
| Π-FAC | 10 | 1.29 | 470 | 0.02 | 2 | 2 | 0 | 2 |
| Π-FAC | 20 | 2.54 | 470 | 0.04 | 4 | 4 | 0 | 2 |
| Π-FAC | 50 | 6.30 | 470 | 0.10 | 10 | 11 | 0 | 2.2 |
| Π-FAC | 100 | 12.79 | 470 | 0.21 | 21 | 26 | 0 | 2.6 |
| Π-FAC | 200 | 25.53 | 470 | 0.42 | 42 | 72 | 0 | 3.6 |
| Hash | 10 | 8.59 | 70 | 0.14 | 14 | 16 | 0 | 16 |
| Hash | 20 | 17.28 | 70 | 0.28 | 28 | 39 | 0 | 19.5 |
| Hash | 50 | 42.80 | 70 | 0.71 | 71 | 245 | 2 | 49 |
| Hash | 100 | 86.53 | 70 | 1.43 | 143 | ∞ | ∞ | ∞ |
| Hash | 200 | 171.66 | 70 | 2.86 | 286 | ∞ | ∞ | ∞ |
Figure 2. Evaluation of a back-end server store using the M/M/1 model: (a) usability rate (U); (b) queue length (Q); (c) response time (R).
REFERENCES
[1] M. R. Das, RFID Forecasts, Players and Opportunities 2016-2026.
IDTechEx.
[2] Walmart Stores, Inc. Wal-Mart continues RFID expansion. [Online].
Available: http://corporate.walmart.com/ [visited 04/30/2017]
[3] G. L. Puerini et al., “Transitioning items from a materials handling
facility,” US Patent, Jan. 8, 2015.
[4] M. L. Dansinger, J. A. Gleason, J. L. Griffith, H. P. Selker, and E. J.
Schaefer, “Comparison of the atkins, ornish, weight watchers, and zone
diets for weight loss and heart disease risk reduction: a randomized
trial,” Jama, vol. 293, no. 1, pp. 43–53, 2005.
[5] R. Canetti, “Security and composition of multiparty cryptographic
protocols,” Journal of CRYPTOLOGY, vol. 13, no. 1, pp. 143–202, 2000.
[6] The best of global digital marketing. case study: Hellmann’s
recipe cart. [Online]. Available: http://www.best-marketing.eu/casestudy-hellmanns-recipe-cart/ [visited 04/30/2017]
[7] The wall street journal. whole foods aims for younger shoppers with
new stores. [Online]. Available: http://www.wsj.com/articles/wholefoods-to-launch-new-outlets-1431041549 [visited 04/30/2017]
[8] M. Naor and B. Pinkas, “Oblivious transfer and polynomial evaluation,”
in Proceedings of the Thirty-first Annual ACM Symposium on Theory
_of Computing, ser. STOC ’99._ New York, NY, USA: ACM, 1999, pp.
245–254.
[9] M. J. Freedman, Y. Ishai, B. Pinkas, and O. Reingold, Keyword Search
_and Oblivious Pseudorandom Functions._ Berlin, Heidelberg: Springer
Berlin Heidelberg, 2005, pp. 303–324.
[10] M. J. Freedman, K. Nissim, and B. Pinkas, Efficient Private Matching
_and Set Intersection._ Berlin, Heidelberg: Springer Berlin Heidelberg,
2004, pp. 1–19.
[11] L. Kissner and D. Song, “Privacy-preserving set operations,” in Pro_ceedings of the 25th Annual International Conference on Advances in_
_Cryptology, ser. CRYPTO’05._ Berlin, Heidelberg: Springer-Verlag,
2005, pp. 241–257.
[12] E. De Cristofaro, J. Kim, and G. Tsudik, Linear-Complexity Private Set
_Intersection Protocols Secure in Malicious Model._ Berlin, Heidelberg:
Springer Berlin Heidelberg, 2010, pp. 213–231.
[13] C. Hazay and M. Venkitasubramaniam, Scalable Multi-party Private
_Set-Intersection._ Berlin, Heidelberg: Springer Berlin Heidelberg, 2017,
pp. 175–203.
[14] D. Dachman-Soled, T. Malkin, M. Raykova, and M. Yung, Efficient
_Robust Private Set Intersection._ Berlin, Heidelberg: Springer Berlin
Heidelberg, 2009, pp. 125–142.
[15] C. Hazay, Oblivious Polynomial Evaluation and Secure Set-Intersection
_from Algebraic PRFs._ Berlin, Heidelberg: Springer Berlin Heidelberg,
2015, pp. 90–120.
[16] C. Hazay and Y. Lindell, “Efficient protocols for set intersection and
pattern matching with security against malicious and covert adversaries,”
_Journal of Cryptology, vol. 23, no. 3, pp. 422–456, 2010._
[17] S. Jarecki and X. Liu, Efficient Oblivious Pseudorandom Function with
_Applications to Adaptive OT and Secure Computation of Set Intersection._
Berlin, Heidelberg: Springer Berlin Heidelberg, 2009, pp. 577–594.
[18] R. Canetti and M. Fischlin, Universally Composable Commitments.
Berlin, Heidelberg: Springer Berlin Heidelberg, 2001, pp. 19–40.
[19] C. Hazay and K. Nissim, “Efficient set operations in the presence of
malicious adversaries,” Journal of Cryptology, vol. 25, no. 3, pp. 383–
433, 2012.
[20] Y. Lindell and B. Pinkas, “Secure multiparty computation for privacypreserving data mining,” Journal of Privacy and Confidentiality, vol. 1,
no. 1, p. 5, 2009.
[21] J. Vaidya and C. Clifton, “Secure set intersection cardinality with
application to association rule mining,” J. Comput. Secur., vol. 13, no. 4,
pp. 593–622, Jul. 2005.
[22] B. Pinkas, T. Schneider, and M. Zohner, “Scalable private set intersection
based on ot extension,” 2016.
[23] E. Gelenbe, G. Pujolle, and J. Nelson, Introduction to queueing net_works._ John Wiley & Sons, Inc., 1987, vol. 2.
-----
|
{
"disclaimer": "Notice: The following paper fields have been elided by the publisher: {'abstract', 'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1109/WCNC.2018.8377207?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1109/WCNC.2018.8377207, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "other-oa",
"status": "GREEN",
"url": "https://hal.archives-ouvertes.fr/hal-01786000/file/Article%20WCNC%202018.pdf"
}
| 2,018
|
[
"JournalArticle",
"Conference"
] | true
| 2018-04-15T00:00:00
|
[] | 9,454
|
en
|
[
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/016143d542f7249ee1a1a2aafc45f11fc61a0bc0
|
[] | 0.860276
|
Blockchain Theory and Applications - Welcome and Committees
|
016143d542f7249ee1a1a2aafc45f11fc61a0bc0
|
2022 IEEE International Conference on Pervasive Computing and Communications Workshops and other Affiliated Events (PerCom Workshops)
|
[] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": null,
"id": null,
"issn": null,
"name": null,
"type": null,
"url": null
}
| null |
# BRAIN 2022: Third Workshop on Blockchain Theory and Applications - Welcome and Committees
Welcome Message
BRAIN 2022, the 3rd Workshop on Blockchain theoRy and ApplicatIoNs, aims to provide a venue for
researchers from both academia and industry to present and discuss important topics on blockchain
technology. In particular, blockchain provides an innovative solution to address challenges in pervasive
environments, e.g. distributed computing, smart devices, and device-to-device coordination, in terms of
decentralization, data privacy, and network security, while pervasive environments offer elasticity and
scalability characteristics to improve the efficiency of blockchain operations. The workshop's goal is to
present results on both theoretical and more applicative open challenges, as well as to provide a venue to
showcase the current state of existing proposals.
The members of the workshop's international program committee refereed the submitted papers on the
basis of quality, foresight, and fit with the workshop topics. This year's workshop program includes
the 7 selected high-quality papers. The submissions have been divided into two sessions, one covering
blockchain applications and the other blockchain analysis and theory. The program is complemented by
two blockchain-related keynotes, one about layer-2 payment channels and the other about the evolution
of blockchain distributed consensus protocols.
We would like to thank the members of the program committee for providing detailed and rigorous
reviews, allowing us to select the most engaging papers for this edition of the workshop. We would also
like to thank the PerCom organizers, in particular Frank Dürr and Antinisca Di Marco, the PerCom
workshop co-chairs, for constantly supporting the workshop and assisting us along the way. Finally, we
thank all attendees, keynote speakers, and authors of both accepted and rejected papers for their
contributions and participation.
Damiano Di Francesco Maesa (University of Pisa, Italy), Laura Ricci (University of Pisa, Italy),
Nishanth Sastry (University of Surrey, United Kingdom)
## BRAIN 2022 Organizing Committee
### Co-Chairs
Damiano Di Francesco Maesa (University of Pisa & University of Cambridge, United Kingdom
(Great Britain))
Laura Ricci (University of Pisa, Italy)
Nishanth Sastry (University of Surrey, United Kingdom (Great Britain))
-----
## Technical Program Committee
Andrea Bracciali University of Stirling United Kingdom (Great Britain)
Antorweep Chakravorty University of Stavanger Norway
Mauro Conti University of Padua Italy
Andrea De Salve National Research Council (CNR) Italy
Damiano Di Francesco Maesa University of Pisa United Kingdom (Great Britain)
Zeki Erkin Delft University of Technology The Netherlands
Tooba Faisal Kings College London United Kingdom (Great Britain)
Barbara Guidi University of Pisa Italy
Mohammad Hammoudeh Manchester Metropolitan UniversityUnited Kingdom (Great Britain)
Paolo Mori IIT, CNR Italy
Remo Pareschi University of Molise Italy
Radu Prodan University of Klagenfurt Austria
Laura Ricci University of Pisa Italy
Nishanth Sastry University of Surrey United Kingdom (Great Britain)
Claudio Schifanella Università di Torino Italy
Luca Spalazzi Università Politecnica delle Marche Italy
Frank Tietze University of Cambridge United Kingdom (Great Britain)
Sara Tucci-Piergiovanni CEA LIST France
-----
|
{
"disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1109/percomworkshops53856.2022.9767389?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1109/percomworkshops53856.2022.9767389, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "BRONZE",
"url": "https://ieeexplore.ieee.org/ielx7/9766442/9767206/09767389.pdf"
}
| 2,022
|
[
"Conference"
] | true
| 2022-03-21T00:00:00
|
[] | 804
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Medicine",
"source": "external"
},
{
"category": "Engineering",
"source": "s2-fos-model"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/016320200c600c00a351586477cfcf8f3df423d1
|
[
"Computer Science",
"Medicine"
] | 0.852325
|
Decentralized Real-Time Anomaly Detection in Cyber-Physical Production Systems under Industry Constraints
|
016320200c600c00a351586477cfcf8f3df423d1
|
Italian National Conference on Sensors
|
[
{
"authorId": "2198956885",
"name": "Christian Goetz"
},
{
"authorId": "1808059",
"name": "B. Humm"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"SENSORS",
"IEEE Sens",
"Ital National Conf Sens",
"IEEE Sensors",
"Sensors"
],
"alternate_urls": [
"http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:ch:bel-142001",
"http://www.mdpi.com/journal/sensors",
"https://www.mdpi.com/journal/sensors"
],
"id": "3dbf084c-ef47-4b74-9919-047b40704538",
"issn": "1424-8220",
"name": "Italian National Conference on Sensors",
"type": "conference",
"url": "http://www.e-helvetica.nb.admin.ch/directAccess?callnumber=bel-142001"
}
|
Anomaly detection is essential for realizing modern and secure cyber-physical production systems. By detecting anomalies, there is the possibility to recognize, react early, and in the best case, fix the anomaly to prevent the rise or the carryover of a failure throughout the entire manufacture. While current centralized methods demonstrate good detection abilities, they do not consider the limitations of industrial setups. To address all these constraints, in this study, we introduce an unsupervised, decentralized, and real-time process anomaly detection concept for cyber-physical production systems. We employ several 1D convolutional autoencoders in a sliding window approach to achieve adequate prediction performance and fulfill real-time requirements. To increase the flexibility and meet communication interface and processing constraints in typical cyber-physical production systems, we decentralize the execution of the anomaly detection into each separate cyber-physical system. The installation is fully automated, and no expert knowledge is needed to tackle data-driven limitations. The concept is evaluated in a real industrial cyber-physical production system. The test result confirms that the presented concept can be successfully applied to detect anomalies in all separate processes of each cyber-physical system. Therefore, the concept is promising for decentralized anomaly detection in cyber-physical production systems.
|
# sensors
_Article_
## Decentralized Real-Time Anomaly Detection in Cyber-Physical Production Systems under Industry Constraints
**Christian Goetz *** **and Bernhard Humm**
Hochschule Darmstadt— Department of Computer Science, University of Applied Sciences,
64295 Darmstadt, Germany
*** Correspondence: christian.goetz@yaskawa.eu**
**Abstract: Anomaly detection is essential for realizing modern and secure cyber-physical production**
systems. By detecting anomalies, there is the possibility to recognize, react early, and in the best case,
fix the anomaly to prevent the rise or the carryover of a failure throughout the entire manufacture.
While current centralized methods demonstrate good detection abilities, they do not consider the
limitations of industrial setups. To address all these constraints, in this study, we introduce an
unsupervised, decentralized, and real-time process anomaly detection concept for cyber-physical
production systems. We employ several 1D convolutional autoencoders in a sliding window approach
to achieve adequate prediction performance and fulfill real-time requirements. To increase the
flexibility and meet communication interface and processing constraints in typical cyber-physical
production systems, we decentralize the execution of the anomaly detection into each separate
cyber-physical system. The installation is fully automated, and no expert knowledge is needed to
tackle data-driven limitations. The concept is evaluated in a real industrial cyber-physical production
system. The test result confirms that the presented concept can be successfully applied to detect
anomalies in all separate processes of each cyber-physical system. Therefore, the concept is promising
for decentralized anomaly detection in cyber-physical production systems.
**Keywords: anomaly detection; cyber-physical production systems; cyber-physical systems; deep**
learning; unsupervised learning
**1. Introduction**
Due to the rising complexity of modern processes in manufacturing, the application of
cyber-physical systems (CPS) is increasing. A CPS can be described as a combination of an
embedded system with sensors and actuators. The system interacts with these to monitor
and control physical processes (Figure 1) [1]. Typically, the embedded system requires a
communication interface to exchange data with other systems or a cloud. Many of these
CPSs are networked to realize complex physical processes in the real world [2]. CPSs
combine powerful information technology to monitor and control engineered systems [3].
Modern production systems, which include CPSs, are defined as cyber-physical production systems (CPPS) [4]. These systems are based on two main functionalities: advanced connectivity, which ensures real-time data acquisition from the physical world, and feedback from
cyberspace. CPPSs break with the structure of the typical automation hierarchy to enable
intelligent data management, real-time analytics, and enhanced computational capabilities.
The control and field levels still exist to ensure the highest performance for critical loops,
while the higher levels are more dynamic and decentralized [5].
**Figure 1. Abstract concept of a CPS [1].**
Such a CPPS can be seen in Figure 2. The rotary table dispenser system consists of
different CPSs working together to realize several physical processes, e.g., transportation or
pick-and-place operations. The overall process involves picking small items from a rotating
table and putting them into several containers which are moving around the machine on
conveyor belts. After the container is filled and reaches the end position, it gets picked up
by the production robot and emptied back onto the rotating table. Thereafter, the container
is put in a central location from which the sliding robot places it into the container tray.
When the container tray is full, both sliders move to the left side of the system, and the
sliding robot sets the container back on the conveyor belt. The described system acts as
a simulation of a similar real industrial process and is used as a demonstration unit at Yaskawa. In total, there are nine CPSs, each combining a mechanical and an embedded system. Seven CPSs are based on servomotors and servo controllers. Two CPSs consist of an industrial robot with a robot controller. A central control unit collects data from the
different CPSs and regulates the main production process. Additional computational units
provide the opportunity to integrate higher functions, e.g., resource planning, production
analysis, and process control handling.
**Figure 2. Rotary table dispenser system.**
In such a connected structure, even a single failure in one CPS can influence the
entire production, resulting in a faulty product, a breakdown of the complete process, or a
carryover of the failure through the whole system. Therefore, it is necessary to ensure an
error-free operation to realize a secure and modern CPPS [6].
Anomalies can be taken as essential failure indicators, such as a rising vibration at a
bearing of the rotating table or an unexpected torque increase on the motor of the conveyor
belt. Anomaly detection (AD) in CPPSs refers to the identification of behavior that is not
shown under the regular operations of the system. Consequently, by detecting anomalies,
there is the possibility to recognize, react early, and in the best case, fix the anomaly to
prevent the rise or the carryover of the failure throughout the entire manufacturing process [7].
Techniques for anomaly detection in CPPS can be distinguished into model-based [8]
and data-driven approaches [9]. Model-based methods rely on precise, engineered models of the complete system. Creating such models for the complex structure of a CPPS is time-consuming while simultaneously requiring deep expert knowledge. Data-driven approaches establish models solely from collected data. Given the high amount of monitored and available data in CPPSs, these approaches are more appropriate for such systems, and additionally, no deep expert knowledge is needed [10]. Recent developments in machine learning and deep learning for anomaly detection have improved the
detection performance on complex data sets [11].
By following the scheme to deliver all data from the control and field-level device to
one CPS at a higher level to process, analyze, and detect anomalies, current centralized
data-driven AD approaches in industrial CPPSs demonstrate better detection abilities than
decentralized ones. While this is a significant advantage, it first requires a fully connected
and high-performance unit for monitoring all integrated CPSs. Consequently, adding such
a unit increases costs and installation time. Additionally, a centralized concept creates a
communication delay between the different stations to exchange the enormous amount of
data produced in a CPPS. This can result in a delayed response after detecting an anomaly.
Furthermore, it slows down the execution, evaluation, and detection of the anomalies in the
individual CPS [12]. The structure of a CPPS is highly dynamic. Often single components,
such as motors and sensors, are exchanged, replaced, or modified due to predictive or
preventive maintenance. In a centralized approach, this results in a complete recreation of
the AD due to the changed characteristics.
In contrast, a decentralized concept addresses these drawbacks by establishing the AD
directly in each CPS. While this allows monitoring of the whole system by combining each
separate AD, the need for a high-performance unit can be reduced, and the execution and response times can be shortened. Furthermore, changes in a single CPS result in only the
retraining of the associated AD. By establishing adequate prediction performances in each
single CPS, comparable performance to a centralized AD can be reached.
The contribution of this paper is a novel unsupervised, decentralized, and real-time
process anomaly detection concept for CPPS under industry constraints. We focus on
industrial production processes and common constraints in CPPSs, including real-time
requirements, asynchronous signals, prediction quality, configurable design, data-driven
limitations, processing limitations, and communication interface constraints.
We employ several 1D convolutional autoencoders (1D-ConvAE) in a sliding window
approach to achieve adequate prediction performance and fulfill real-time requirements.
Current methods do not consider the limitations and constraints of industrial setups and
mainly follow a centralized approach. By executing the installation process on an external,
removable device, we increase the flexibility of our concept while considering processing
limitations. To meet communication interface and processing constraints in typical CPPSs,
we decentralize the execution of the AD into each separate CPS. The installation is fully
automated to tackle data-driven limitations. Thereby, no expert knowledge about explicit
anomalies is needed. Adjustments to the data collection routine were made to optimize the
external sampling procedure and improve the installation process.
This paper is structured as follows. Section 2 summarizes related work about anomaly
detection for industrial CPS and CPPS. The problem statement is specified in Section 3.
Section 4 presents a concept for fast and decentralized unsupervised anomaly detection
in CPPS. Information about a prototypical implementation is provided in Section 5. In
Section 6, the evaluation of the approach is presented based on an industrial setup. Finally,
a conclusion and an outlook for future work are given in Section 7.
**2. Related Work**
Surveys on anomaly detection techniques can be found in [13–15]. More industry-related AD methods are described in [16,17]. Overall, these techniques can be differentiated
into model-based and data-driven approaches. Model-based techniques detect anomalies
by manually creating precise models about the underlying system. This requires a deep
prior knowledge of the individual CPPS. While data-driven approaches are also based
on models, those models are generated automatically from data and not manually by
domain experts. Furthermore, data-driven approaches can be split into supervised and
unsupervised techniques. Anomalous data in CPPS is associated with the undefined
behavior of the system. Creating such anomalous data can be hazardous for the CPPS itself,
while defining all possible anomalies in advance is nearly impossible. Based on the points
mentioned above, we focus on unsupervised data-based methods.
Common approaches in unsupervised data-based AD are one-class classification methods, such as deep one-class networks [18] and one-class support vector machines [19,20]. While multi-class classification techniques typically require labeled datasets, these approaches focus on the normal samples by learning a discriminative hyperplane surrounding them. Other frequently used techniques are unsupervised clustering methods such as Gaussian Mixture Models [21], k-nearest neighbor methods [22], or isolation forests [23]. These models can identify anomalies by building a detailed representation of
the normal data. While the resulting models are generally lightweight and computationally
fast, they lack performance when processing high-dimensional data.
Deep learning methods for AD have recently improved the state of the art in detection
performance on complex and large datasets [24]. The standard techniques in this field are
generative adversarial networks (GAN). GANs consist of a generator combined with a
discriminator as the base structure. By teaching the discriminator to distinguish between
real and fake samples while the generator tries to generate new data based on the input,
GANs can detect anomalies even in large multivariate data streams. Concepts of GANs
differ mainly in the models used as the base structure, such as long short-term memory (LSTM) recurrent neural networks (RNN) [25], two-dimensional convolutional autoencoders [26], and one-dimensional convolutional autoencoders [27]. While the described
approaches achieve good outcomes, they result in highly complex and large models that
cannot be applied to a CPS with limited computational resources, which is a common
industry constraint.
Reconstruction-based methods in AD combine techniques that rely on the assumption
that a model trained only on normal data cannot reconstruct abnormal or unseen data.
Typical techniques in this field are PCA methods [28] or sparse representations [29].
A widely used approach for reconstruction-based anomaly detection in CPS is using
autoencoders [30,31] or variants thereof [32,33]. By learning the latent features of the
input data, autoencoders can reconstruct their input as output. While these models can
be applied to analyze the spatial characteristics of the input data, they fail to consider the temporal dependencies, which are essential indicators of anomalies in the time series
data of industrial CPS.
While convolutional neural networks (CNN) were initially developed for solving
image classification tasks, they can also be successfully applied for AD in time series data of
a CPS through the ability to extract temporal dependencies [34]. Several industrial applications of CNNs in CPS, such as fault detection in motors [35], AD in wheelset bearings [22],
and rolling bearings [36], can be found. Additionally, ref. [37] pointed out that CNNs have
fewer parameters than other network structures while performing comparably or better,
resulting in reduced complexity, needed storage capacity, and computing power [38].
Convolutional autoencoders (ConvAE) combine the detection of temporal anomalies, via convolutions, with the detection of spatial anomalies, via the autoencoder structure, while also being resource-efficient. This results in ideal models for AD in multivariate time series
data [39–41]. Using a 2D variational ConvAE, the authors in [42] detect anomalies from
unseen abnormal patterns in industrial robots. In [43], a ConvAE based on channel-wise
reconstruction in combination with a local outlier factor is used to detect anomalies in
automobile sensors.
Several approaches for decentralized AD can be found [44–46]. In [12], different decentralized AD techniques are analyzed and compared in terms of complexity and performance. A
decentralized approach for real-time AD in transportation networks is introduced in [47].
The authors of [48] presented spatial anomaly detection in sensor networks using neighborhood information. While these are promising approaches for decentralized AD, no work
considers all the different industrial constraints simultaneously, which is important for
integration into a CPPS.
Different automated frameworks for anomaly detection can be found. In [49], a
framework for automatic time series anomaly detection is introduced. The study focuses
on large-scale time series data in a centralized AD approach, which cannot be applied to a
CPPS with limited resources. The authors of [50] introduce an unsupervised framework for
anomaly detection in CPS. Furthermore, ref. [51] presents a high-performance unsupervised
anomaly detection for CPS networks. Both approaches are developed for CPS, but mainly
focus on adversarial attacks and not on the process of the CPS or, respectively, the CPPS.
In our previous work [52], we introduced an unsupervised anomaly detection concept
for CPSs under industry constraints while focusing on repetitive tasks with a fixed duration
for a single CPS. In this contribution, we improved the concept for CPPS with multiple
CPSs, while still considering all industrial constraints. We adapted the technology to a
sliding window approach to simultaneously handle processes with variable durations and
meet real-time instead of near-time requirements.
In summary, there are several approaches for centralized and decentralized data-driven unsupervised anomaly detection. Only a few are evaluated in real CPSs, and even
fewer are applied to real production data of a CPPS. Overall, no work considers all the
different industrial limitations of a CPPS while following a decentralized and fast approach
to realize anomaly detection in industrial production data.
In this work, we propose a concept that addresses all the requirements that must be
considered to realize a usable decentralized, real-time anomaly detection in CPPS under
industrial constraints. Our contribution in this paper is summarized as follows. We employ
several 1D-ConvAEs for unsupervised anomaly detection in a CPPS to monitor the different
processes. We introduce a novel concept to decentralize the different models in each single
CPS of the CPPS by splitting the installation and execution of the anomaly detection to
meet industrial requirements. While the concept is fully automated, no expert knowledge
about explicit known anomalies is needed to meet the defined requirements.
**3. Problem Statement**
This article aims at a decentralized concept for real-time unsupervised anomaly detection for production processes under industrial constraints. The problem statement can be
described by the different industrial requirements that must be considered to implement
such a concept. Several conditions are adapted and extended from [52].
1. **Anomaly detection: An anomaly detection for a CPPS, such as an industrial pro-**
duction system, shall be performed. The CPPS consists of multiple CPSs producing
multivariate time series data over variable process lengths, for example, the sliding
robot from the CPPS in Figure 2, combining a robot with several axes and a robotic
controller to move containers on a conveyor belt.
2. **Real-time: To cover all different kinds of anomalies and react even in time-critical**
scenarios, such as detecting collisions in the production system, the result and reaction
of the anomaly detection should be available as quickly as possible. Therefore, the
execution of the anomaly detection should be performed during production, and the results
must be immediately provided after new data from sensors and actors are available,
e.g., a few milliseconds after the data is received.
3. **Prediction quality: For an AD application in an industrial environment, adequate**
prediction performance is required. This depends on the different use cases for which
the anomaly detection is applied, e.g., an F1 score of 0.95 or better for each CPS in
the CPPS.
4. **Configurable: To apply AD on different CPPSs in different applications, the anomaly**
detection should be adaptable to various CPSs and use cases. The possibility of using
the technique for varied time series data with different variable types and diverse time
lengths should be given, for instance, robots or transportation systems with features
such as torque, position, and speed.
5. **Data-driven: As mentioned before, manually creating models is time-consuming and**
requires deep expert knowledge. Simultaneously recording anomalous data from
CPPS can be dangerous for the system itself. Therefore, the AD should only be trained
with regular production data and without expert knowledge.
6. **Feasible: The AD should be compatible with current technological standards in**
industrial environments to realize a generalist integration for various scenarios. This
includes constraints and limitations of commonly used CPPSs in production settings:
(a) Processing limitations: due to their design, CPSs in industry are unable to execute process-intensive tasks in parallel to controlling and monitoring the physical process, e.g., because of limited available RAM and processing power.
(b) Communication interface constraints of commonly available CPSs in industry, e.g., OPC UA communication, to transfer the high amount of production data sampled at 2 ms during the sampling process to a database (see the sketch after this list).
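To make requirement 6(b) concrete, the following is a minimal sketch of how a processing unit backend could subscribe to buffered process data over OPC UA during the installation phase. It assumes the python-opcua client library; the endpoint URL, node identifier, and the 50 ms publishing interval are illustrative placeholders rather than details prescribed by the concept.

```python
import time

from opcua import Client  # python-opcua client library (assumed dependency)

class BufferHandler:
    """Collects data-change notifications from a buffer node into memory."""

    def __init__(self):
        self.packages = []

    def datachange_notification(self, node, val, data):
        # Each notification carries one buffered data package from the CPS,
        # aggregating many 2 ms samples into a single OPC UA value.
        self.packages.append(val)

client = Client("opc.tcp://192.168.0.10:4840")  # hypothetical endpoint
client.connect()
try:
    handler = BufferHandler()
    subscription = client.create_subscription(50, handler)  # 50 ms interval
    buffer_node = client.get_node("ns=2;s=CPS1.ProcessDataBuffer")  # hypothetical
    subscription.subscribe_data_change(buffer_node)
    time.sleep(360)  # record regular process data for a defined period
finally:
    client.disconnect()
```

Buffering whole packages instead of single 2 ms samples keeps the OPC UA traffic within the exchange cycle that typical controllers can sustain.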
**4. A Concept for a Fast, Decentralized, and Unsupervised Anomaly Detection in CPPSs**
_4.1. Overview_
This section describes a fast and decentralized process anomaly detection concept
based on several 1D-ConvAEs, which fulfills the requirements specified in the problem
statement. Figure 3 shows the sequence of the different steps that are carried out. The
concept consists of one AD Installation Cycle, which triggers the creation of several anomaly
detection pipelines (AD pipelines) through the parallel execution of AD Generation Cycles,
as shown in Figure 4. A detailed description of the AD Generation Cycle can be found
in Section 4.3 and in Figure 5. The number of different AD Pipelines depends on the
number of included CPSs in the CPPS. In the AD Production Cycles, located in every
CPS in Figure 4, each pipeline is directly implemented and executed as part of the CPS.
Explanations about the AD Production Cycle can be found in Section 4.4 and in Figure 6.
The processing unit backend, an external device that can be removed after the installation
process is finished, performs all heavy processing tasks of the AD Installation Cycle to meet
the previously explained industrial constraints of the CPPS. The concept is developed to be
executed automatically, enabling AD implementation without deep expert knowledge. In
addition, a direct explanation of the individual components of the diagrams can be found
in Appendix A.
**Figure 3. Sequence of steps performed in the concept.**
**Figure 4. Overview of AD Installation Cycle.**
**Figure 5. Overview of AD Generation Cycle.**
**Figure 6. Overview of AD Production Cycle.**
_4.2. AD Installation_
The AD Installation Cycle consists of four parts: data collection, data analysis, AD
generation, and deployment (see Figure 4).
**Data collection: The operator triggers the data collection at the processing unit back-**
end to record regular process data. Process data samples, single packages of time series
data from the individual CPS, are collected at a high sample rate and sent to the control
device. Over a defined period of time, the individual data of the various CPSs are recorded
and then combined. The resulting package, named regular process data, is then sent to the
processing unit backend. This procedure is required to meet the communication interface
limitations in the installation process and enable the use of the high sample rates at the AD
Production Cycles directly in the CPSs. The data packages are saved inside the processing
unit backend until a specified number of records is reached. Regular process data consist of
different features like position, torque, and speed sampled in the form of time series data
from the various CPSs. This data can be defined as multiple data streams containing the
features of the physical process recorded by the different sensors and actors.
**Data Analysis: Depending on the diverse CPSs, different features with different**
ranges are provided. In the analysis step, unnecessary features are automatically removed,
and configuration files are accordingly generated. Each configuration file contains the
necessary information for the following AD generation cycle, e.g., feature ranges, types,
and default hyperparameters. The operator can manually tune this information, or the
default values can be used.
**AD Generation: In the installation step for each included CPS, an AD Generation**
Cycle (Figure 5) is triggered. The different AD Generation Cycles can be executed in parallel
to speed up the installation process. A detailed description of the AD generation cycle can
be found in Section 4.3.
**Deployment: After the generation of the AD pipelines, each pipeline is exported and**
deployed to the separate CPS. This terminates the AD Installation Cycle.
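As an illustration of the data collection step, the following is a minimal sketch of how a received regular process data package could be persisted on the processing unit backend, assuming the MongoDB instance of the prototype (Section 5); the connection string and the database, collection, and field names are hypothetical.

```python
from pymongo import MongoClient

# Connect to the MongoDB on the processing unit backend; the host and all
# names below are hypothetical placeholders.
client = MongoClient("mongodb://localhost:27017")
collection = client["ad_installation"]["regular_process_data"]

def store_package(cps_id, features, samples):
    """Persist one combined data package received from the control device."""
    collection.insert_one({
        "cps_id": cps_id,      # e.g., "sliding_robot"
        "features": features,  # e.g., ["position", "torque", "speed"]
        "samples": samples,    # rows of feature values, one per 2 ms time step
    })
```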
_4.3. AD Generation Cycle_
The AD Generation Cycle consists of preprocessing, model initialization, training,
evaluation, optimization, and export, as shown as a BPMN diagram in Figure 5. First,
the provided regular process data, the combined collected data samples of all CPS, are
preprocessed with the information received from the configuration files. This transforms the
data, which consist of different ranges and units, into an equal numerical range. The type of
the desired preprocessor is defined in the configuration file. This enables a configurable
setup, which can handle various variables with different units and ranges. Next, the
model is initialized, trained, evaluated, and optimized. Additional hyperparameters set
in the configuration file are, e.g., the number of layers, filters per layer, used loss function,
and type of optimizer. After initialization, the model is trained on the preprocessed
data. The method specified in the configuration file is used to evaluate the model. In the
optimization step, the hyperparameters are changed, influenced by the defined ranges
and tuning parameters. The search algorithm declared in the configuration file searches
over a generated search space for the best possible parameters. These steps are executed
iteratively until the specified reconstruction performance (e.g., the desired MAE Value) is
reached. After the tuning is finished, the AD pipeline, a combination of preprocessor and
model, is exported to the deployment step. This terminates the AD generation cycle.
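A minimal sketch of this iterative train-evaluate-optimize loop is given below, using the classic `tune.run` API of Ray Tune, the tuning library of the prototype (Section 5). The search space mirrors the ranges later listed in Table 1; `build_and_train` is a hypothetical helper that builds, trains, and scores one candidate model from a configuration.

```python
from ray import tune

def train_ad_model(config):
    # Build and train one candidate model for this CPS, then report its
    # final reconstruction error back to the tuner.
    mae = build_and_train(config)  # hypothetical helper returning the MAE
    tune.report(mae=mae)

search_space = {  # ranges as in Table 1
    "num_layers": tune.randint(4, 9),             # 4..8 layers
    "first_layer_filters": tune.choice([32, 64, 128]),
    "window_size": tune.choice([32, 64, 128]),
    "step_size": tune.randint(1, 65),             # 1..64
    "patience": tune.randint(1, 11),              # 1..10 epochs
}
analysis = tune.run(train_ad_model, config=search_space, num_samples=80)
best_config = analysis.get_best_config(metric="mae", mode="min")
```

The sample budget of 80 matches the number of decoder and encoder structures mentioned in Section 6.3.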
_4.4. AD Production Cycle_
After the AD Generation Cycle is finished and the AD pipeline is deployed in the
CPS, the AD Production Cycle, shown as a BPMN diagram in Figure 6, starts. Process data
samples, single packages of time series data from the CPS, are collected at a high sample
rate and stored in an in-memory data storage. When the required amount of data packages
to execute the AD process step is reached, the data are preprocessed and evaluated by
the AD pipeline. After the execution, the previously collected data in the in-memory data
storage will be released to limit the needed memory capacity. The AD process step will
be executed again immediately after enough data is available. In case of an anomaly, the
detection can be delivered to the control unit, or the operator can be directly notified.
Additionally, the AD process, and with it the AD Production Cycle, can be terminated.
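A minimal sketch of this production loop follows, assuming a deployed pipeline object with a `detect` method (a hypothetical interface) and a window size of 64 taken from the tuning results in Section 6.4.

```python
from collections import deque

import numpy as np

WINDOW_SIZE = 64            # assumed from the tuning results (Section 6.4)
REQUIRED = 2 * WINDOW_SIZE  # at least twice the window size (Section 4.6)
buffer = deque()            # in-memory data storage

def on_data_package(samples, pipeline, notify_operator):
    """Handle one high-sample-rate data package arriving from the CPS."""
    buffer.extend(samples)
    if len(buffer) >= REQUIRED:
        stream = np.asarray(buffer)
        anomalies = pipeline.detect(stream)  # preprocess, reconstruct, threshold
        buffer.clear()                       # release memory after execution
        if len(anomalies) > 0:
            notify_operator(anomalies)       # or deliver to the control unit
```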
_4.5. Sliding Window Convolutional Autoencoder_
To achieve adequate prediction performance and meet real-time requirements, we
choose a sliding-window-based 1D-ConvAE as the model type (see Figure 7). Autoencoders
are reconstruction-based neural networks that reconstruct their input as output. Since the model learns to reconstruct only the regular pattern, any data containing unseen, abnormal patterns cannot be correctly reconstructed, which results in a higher reconstruction error. To gain adequate prediction performance and meet the processing limitations, 1D
convolutional layers are used. Adding these layers to the autoencoder allows the model
to learn spatially invariant features and capture spatially local correlations from the data.
This means it can recognize patterns of high-dimensional data without requiring feature
engineering. At the same time, the required parameters and the computational complexity
of a 1D convolutional layer are significantly lower than the comparable 2D convolutional
layers. The 1D-ConvAE can be trained without expert knowledge or explicitly known
anomalies, only with regular process data. This fulfills requirement 5 (data-driven). A
detailed comparison with other methods can be found in Appendix B.
**Figure 7. Convolutional autoencoder.**
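A minimal Keras sketch of such a sliding-window 1D-ConvAE follows; it is compiled with the Adam optimizer and MAE loss as in Section 6.3, but the window size, feature count, filter counts, and kernel size are illustrative placeholders rather than the tuned values.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_conv_ae(window_size=64, n_features=4, filters=32, dropout=0.2):
    """1D convolutional autoencoder over windows of shape (window_size, n_features)."""
    inputs = keras.Input(shape=(window_size, n_features))
    # Encoder: 1D convolutions capture temporal patterns; pooling and dropout
    # reduce dimensionality and regularize (Section 6.3).
    x = layers.Conv1D(filters, 7, padding="same", activation="relu")(inputs)
    x = layers.Dropout(dropout)(x)
    x = layers.MaxPooling1D(2, padding="same")(x)
    x = layers.Conv1D(filters // 2, 7, padding="same", activation="relu")(x)
    # Decoder: upsampling and convolutions reconstruct the input window.
    x = layers.UpSampling1D(2)(x)
    x = layers.Conv1D(filters, 7, padding="same", activation="relu")(x)
    outputs = layers.Conv1D(n_features, 7, padding="same")(x)
    model = keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mae")
    return model
```

Training then amounts to `model.fit(windows, windows, ...)`, i.e., the network learns to reproduce its own input on regular process data only.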
_4.6. Anomaly Detection_
Regular process data can be defined as a data stream containing several time series of sampled features $F = (f_1, f_2, \ldots, f_n)$, where $n$ is the number of different features, such as position, speed, and torque, from the various sensors of the mechanical system. The data stream is split into several windows depending on the chosen window size $m$ and step size $s$. Each window consists of $n$ time series, one per feature, over a time period corresponding to the window size $m$. These generated sliding windows act as the input to the model. The output of the model is each separately reconstructed sliding window. With the help of an aggregation function (e.g., the arithmetic mean), the reconstructed sliding windows can be merged into a reconstructed data stream.

To calculate the reconstruction error matrix $E$, the reconstruction error of each feature $f_i$ at every time step $t$ is computed as the absolute error (AE) $e_{f_i,t} = | f_{i,t} - \hat{f}_{i,t} |$ $(1 \leq i \leq n)$ between input and output. This results in a matrix $E$ whose entries quantify, for each feature at each time step, the deviation between the input and the reconstructed data stream.

Threshold values must be defined to decide at which reconstruction error an anomaly is reported. We employ the following method for automatically computing and tuning the threshold values. After training the model, all training data are re-evaluated, which yields an error matrix over the whole training data stream. The maximum reconstruction error of each feature is taken from this matrix to construct a threshold vector $\theta$. This vector can be adapted when the model is integrated directly into the CPS by automatically tuning the values in the live testing stage. In the AD Production Cycle, after enough high-sample-rate data are collected in the in-memory storage, each column of the reconstruction matrix $E$ is evaluated against the threshold vector $\theta$. The number of collected data samples can be flexibly chosen but must be at least twice as large as the window size to allow the reconstruction concept to be applied. If a value $e_{f_i,t}$, where $i$ indexes the considered feature among the $n$ total features, exceeds the associated threshold value $\theta_{f_i}$ $(1 \leq i \leq n)$, the corresponding data point in the input data stream is declared anomalous. Therefore, anomalies in the input data stream can be detected by applying the threshold vector $\theta$ to each time step $t$ of the reconstruction matrix $E$.
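The error matrix and threshold logic above can be summarized in a few lines of NumPy; in this sketch the streams are assumed to be arrays of shape (T, n), i.e., time steps by features.

```python
import numpy as np

def error_matrix(stream, reconstructed):
    """Absolute error |f_{i,t} - f_hat_{i,t}| per time step and feature."""
    return np.abs(stream - reconstructed)          # shape (T, n)

def fit_thresholds(train_stream, train_reconstructed):
    """Threshold vector theta: maximum training error per feature."""
    return error_matrix(train_stream, train_reconstructed).max(axis=0)

def detect_anomalies(stream, reconstructed, theta):
    """A time step is anomalous if any feature error exceeds its threshold."""
    return (error_matrix(stream, reconstructed) > theta).any(axis=1)
```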
**5. Prototype Implementation**
The concept has been implemented prototypically. Python 3.9 is used as the programming language. A MongoDB (https://www.mongodb.com/, accessed on 12 February 2023) is established on the processing unit backend to save and export the regular process data. As a preprocessor, a MinMaxScaler was generated. The model is implemented using the Keras library (https://keras.io/, accessed on 12 February 2023), running on top of Tensorflow (https://www.tensorflow.org/, accessed on 12 February 2023) [53]. For hyperparameter tuning, the Python library Ray Tune (https://ray.io/, accessed on 12 February 2023) [54] is used. Finally, a tracking server based on the library Mlflow (https://mlflow.org/, accessed on 12 February 2023) [55] was established to track the training results. The communication between the motion controller and the processing unit backend was realized
through an OPC UA server–client model based on publish–subscribe routines. Several
function blocks for buffering the high sample process data from the CPS at the motion
controller were developed to establish this concept. This enables an intelligent communication pattern, where only minor changes on the motion controller must be performed
to allow the described data exchange. The configuration files are written in YAML and
can be accessed and changed by the operator. For each CPS, a separate configuration file
is created. These files are also tracked to enable a traceable process at a later stage. The
preprocessor and model integrations are developed as interfaces to satisfy the configurable
requirement. Therefore, various considered models and preprocessors can be implemented
as long as they follow the abstract class structure, making it easy to exchange, adapt, or
evolve the described technique. To visualize the detection results and allow the user to
interact with the system, a dashboard for bi-directional communication between the CPPS
and the operator was implemented.
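A minimal sketch of this interface idea is shown below; the class and method names are hypothetical and serve only to illustrate how preprocessors and models can be exchanged as long as they follow a common abstract structure.

```python
from abc import ABC, abstractmethod

import numpy as np

class Preprocessor(ABC):
    """Abstract interface; a concrete class (e.g., a MinMax scaler) is
    selected by name in the per-CPS YAML configuration file."""

    @abstractmethod
    def fit(self, data: np.ndarray) -> "Preprocessor": ...

    @abstractmethod
    def transform(self, data: np.ndarray) -> np.ndarray: ...

class ADModel(ABC):
    """Abstract interface for exchangeable anomaly detection models."""

    @abstractmethod
    def fit(self, windows: np.ndarray) -> "ADModel": ...

    @abstractmethod
    def reconstruct(self, windows: np.ndarray) -> np.ndarray: ...
```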
**6. Evaluation**
_6.1. Experimental Setup_
The rotary table dispenser system shown in Figure 2 was used to evaluate the decentralized concept. The CPPS consists of different CPSs and a control unit working together to
realize several processes, e.g., transport and pick-and-place operations. The overall process
involves picking small items from a rotating table and putting them into several containers
which are moving on conveyor belts around the machine. During the process, the different
time series data of each CPS is collected in the motion controller. Several buffers are written
in the motion controller to adapt the high sample rate of 2 ms of each CPS to the minimal data exchange cycle time of 50 ms at the OPC UA server. After one buffer is filled, the data package is sent to the OPC UA server running on the processing unit backend, a PC of type NUC8i5BEK. The collected data are saved in the established database after each import
cycle. In total, 33 different time series over a period of 6 min were recorded. Based on
the high sample rate, each time series consists of around 176,000 samples, resulting in
approximately 5,808,000 data points as training data.
_6.2. Data Recording_
Realistic fault data were generated by forcing different anomalies into the normal
process to evaluate the performance of the used models. The resulting deviations from
the normal process were manually classified as anomalous areas in the resulting data
stream to rate the performance. Five error cases were defined, and at least one error case
was generated for each CPS. Additionally, long-term tests, including several complete
processes without anomalies, were carried out to check the resulting models in the normal
industrial setup.
1. **Friction: To simulate friction, which can result from abrasion of used mechanical**
components, delayed maintenance, or broken parts, external forces were applied to
the mechanical systems of the different CPSs, e.g., against the rotation direction of the
conveyor belt or the movement of the linear sliders. This results in increased torque
values at the applied CPS.
2. **Vibration: Undefined vibration, which can be caused by broken bearings or loose**
attachments, was applied to the mechanical system of the CPS. The simulation was
done by manually applying shocks to the rotating table.
3. **Defective components: Another industry-related anomaly can be caused by defective**
components in the production process, such as a broken container. To examine this
type of anomaly, different containers were manipulated in such a way that they could
not be picked by the robots anymore, resulting in an undefined status of the whole
production line.
4. **Incorrect process: In addition, external manipulations can influence industrial pro-**
duction lines. These injections in the normal process can result in some undefined
behavior of the system, which can cause damage to the products or the system itself.
To simulate this kind of anomaly, the placement of the containers on the belt was
changed in the running process. Therefore, the real positions differ from the fixed
pre-defined positions in the machine scope.
5. **Collision: Due to external influences or process errors, even in modern indus-**
trial systems, collisions may occur. The system typically detects heavy collisions,
whereas smaller collisions resulting in damaged products or fragile components are
mostly not recognized by the internal system. This can be, for example, a collision
with an obstacle in the moving path of the linear sliders or a displaced product on
the conveyor.
_6.3. Model Configuration_
The sampled data from the regular process was used to train the model. A MinMax
Scaler was chosen to preprocess the data by scaling the time series between zero and
one. An Adam optimizer was used, and the loss function was set to MAE. The default
hyperparameter tuning results in 80 different decoder and encoder structures for each CPS.
The best model was automatically picked by evaluating the number of parameters and the
resulting loss value. Detailed information about the different considered parameters can
be found in Table 1. The focus was on realizing small and efficient model architectures to
meet the computational limitations (Section 3, point 6). Therefore, shallow structures with
a limited amount of parameters were preferred. By comparing all achieved loss values and
the resulting model structures, the smallest structure that achieved a low loss value and,
thus, a good reconstruction capability was automatically selected. Due to the unsupervised
setup, no anomalous data are available in the training process. Therefore, an immediate
evaluation of the detection performance is not possible; consequently, only the reconstruction capability can be taken as an additional selection criterion in this process. A deeper discussion of the selection process is left for future work. As activation function, the
rectified linear unit was chosen. Dropout layers were applied as regularization between
the convolutional layers, and max pooling layers were used to reduce the dimensionality.
Different step sizes in the training process were tested. The best results were reached with
a step size of one.
**Table 1. Summary table of all parameters taken in the process of automatically selecting the models.**

| Model Parameter | Range | Definition |
|---|---|---|
| Number of layers | [4, 8] | Total number of layers used in the model. |
| Number of filters in the first layer | [32, 128] | The number of filters used in the first layer of the model. To realize the dimensionality reduction, the inner layers have fewer filters (in the automated concept, half of the previous layer). |
| Window size | [32, 128] | Number of time steps of the sliding window. |
| Step size | [1, 64] | The length of the sequence shifted between the individual windows. |
| Patience | [1, 10] | Number of epochs with no improvement after which training will be stopped. |
| Total number of parameters | [12,642, 208,614] | Total number of parameters of the resulting model. |
| Mean absolute error | [0.002, 0.3] | Achieved mean absolute error between input and output at the end of training. |
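The automatic selection rule described above, i.e., preferring the smallest structure among the candidates that reaches a low loss, can be sketched as follows; the loss threshold and the record fields are illustrative assumptions.

```python
def select_model(candidates, max_mae=0.05):
    """Pick the smallest model whose training MAE stays below a threshold.

    `candidates` is a list of dicts such as
    {"model": ..., "mae": 0.012, "num_parameters": 25000}; the field names
    and the default threshold are hypothetical.
    """
    viable = [c for c in candidates if c["mae"] <= max_mae]
    if not viable:
        # Fall back to the lowest-loss model if none meets the threshold.
        return min(candidates, key=lambda c: c["mae"])
    return min(viable, key=lambda c: c["num_parameters"])
```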
_6.4. Experimental Results_
This section validates the described concept applied in the experimental setup against
the requirements defined in the problem statement.
1. **Anomaly detection: Figure 8 shows some of the forced anomalies in the experimental**
setup, illustrating the detection performance of the generated models. In the pictures,
the detected anomalies are marked with red points, while the pre-defined anomalous
areas are indicated by the red background color of the figure. Combined with the
results in Table 2, this confirms that the different models can be successfully applied
to detect anomalies in the CPSs.
2. **Real-time: The evaluated sliding window sizes from the hyperparameter tuning were**
between 32 and 64, resulting in comparably small windows. To ensure fast detection
in the real process, each generated sliding window was treated as a data stream and
evaluated immediately. With a sample rate of 2 ms, the overall time to collect one
window as input data for the model is between 64 and 128 ms. The average execution
time per reconstruction and verification for anomalies was around 34 ms, with a
maximum of 49 ms and a minimum of 22 ms. Therefore, anomaly detection can be
carried out with a maximum delay of 177 ms in our setup, which allows an immediate reaction of the system to detected anomalies.
3. **Prediction quality: The F1 Score is used to evaluate the model performance. The**
detailed performance for each CPS is shown in Table 2. To calculate the F1 Score, the
manually forced anomalies were classified as anomalous areas. If an anomaly in a
window was detected, the used window was assigned as anomalous and evaluated
against the area. By reaching high F1 Scores above 0.95, adequate prediction performances for every single CPS are realized. This confirms that the automatically created
models for each CPS can reliably detect anomalies in the given CPPS.
4. **Configurable: The described concept and resulting anomaly detection can be configured**
for various applications. Only minor changes must be made to the motion
controller to enable the sampling process. The automatically generated configuration
files can be manually changed, or the default values can be used.
5. **Data-driven: The models are trained only with the regular process data. Therefore,**
no anomalous data or feature engineering is needed. No values are added or changed.
All removed features are automatically declared. Only the data from the sensors
and actors of the CPSs are used. The model is created in an automated way by the
configuration file without the need for expert knowledge.
6. **Feasible: The method utilized standard communication technologies of common**
industrial setups. By outsourcing the process-intensive tasks to the processing unit
backend, the concept enables the application of anomaly detection for the CPPS,
even with the processing limitation and constraints of each CPS. In our experimental
setup, the simulated process reaches a maximum memory consumption of 350 MB while not
exceeding a maximum of 12% CPU load.
Based on the experimental results, the introduced novel concept fulfills all the defined
industrial requirements of the problem statement in Section 3.
**Figure 8. Anomalous Samples.**
**Table 2. Performance Evaluation.**

| Unit | TP | TN | FP | FN | Precision | Recall | F1-Score |
|---|---|---|---|---|---|---|---|
| CB | 232 | 3938 | 7 | 10 | 0.970 | 0.958 | 0.964 |
| RT | 193 | 3199 | 9 | 8 | 0.955 | 0.960 | 0.957 |
| SR | 22 | 3002 | 2 | 0 | 0.916 | 1 | 0.956 |
| PR | 22 | 3002 | 1 | 1 | 0.956 | 0.956 | 0.956 |
| P&P S | 484 | 2965 | 21 | 20 | 0.958 | 0.960 | 0.959 |
| P&P U | 484 | 2967 | 29 | 15 | 0.943 | 0.969 | 0.956 |
| P&P L | 484 | 2964 | 27 | 22 | 0.947 | 0.956 | 0.951 |
| CTS | 75 | 3199 | 3 | 3 | 0.961 | 0.961 | 0.961 |
| SRS | 231 | 3374 | 8 | 10 | 0.966 | 0.958 | 0.962 |

CB = Conveyor Belt; RT = Rotating Table; SR = Sliding Robot; PR = Production Robot; P&P S = Pick & Place Robot S Axis; P&P U = Pick & Place Robot U Axis; P&P L = Pick & Place Robot L Axis; CTS = Container Tray Slider; SRS = Sliding Robot Slider.
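For reference, the scores in Table 2 follow from the standard definitions of precision, recall, and F1; for the conveyor belt (CB) row:

$$\mathrm{Precision} = \frac{TP}{TP + FP} = \frac{232}{239} \approx 0.970, \qquad \mathrm{Recall} = \frac{TP}{TP + FN} = \frac{232}{242} \approx 0.958,$$

$$F_1 = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}} \approx 0.964.$$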
**7. Conclusions and Future Work**
This paper presents a fast and decentralized anomaly detection concept for CPPS
under industry constraints. The concept is configurable and makes it feasible to apply anomaly detection in different use cases under the limitations of commonly used CPPSs in industrial
environments. Due to the decentralization, no additional computational units must be
integrated. The generated models allow a fast and performant integration. The anomaly
detection is executed, and evaluations are carried out immediately during production.
The model is generated and tuned in a fully automated fashion. No expert knowledge
about anomalous data is needed. Overall, the experiments show that each model achieves
stable and accurate results. This presents a promising approach for decentralized and fast
anomaly detection in CPPSs under industry constraints.
However, despite the apparent success of the concept, there are several directions
for future research. In this work, the concept was only tested in a single CPPS with a
limited number of CPSs. Therefore, more studies with different models, more CPSs, and
under different scenarios will be performed in future work. Secondly, the models are
only evaluated against the defined simulated computational resources and data storage
limitations of the used CPSs. This is mainly caused by the integration limitations of the
available CPSs. To integrate the models, adaptions to the hardware and software of the
CPSs must be carried out in the future. Additionally, several anomalies which can emerge
in a CPPS cannot be detected, e.g., process anomalies such as running the overall process with fewer containers than in the learning process. Such an anomaly does not force the CPPS into an undefined state, although the actual process differs from the learned process. Therefore, another
research direction in the future is to adapt the concept even to detect this kind of anomaly.
Furthermore, the selection of the model is only based on two parameters, the achieved
loss value and the resulting model structure. Despite the good results obtained in the tests
with the defined anomalies, this method cannot guarantee the selection of the best model.
Further approaches and concepts for a better evaluation of the models and a guaranteed
choice of the best model must be found.
Finally, up to now, the output of the anomaly detection is the identification of the
anomaly, defined by the time and feature, in the data stream. Adding more information
may be helpful to increase the accuracy of the AD for the operator. Ways to gather and
provide this additional context information will be evaluated and investigated.
**Author Contributions: Conceptualization, C.G.; methodology, C.G.; software, C.G.; validation, C.G.;**
formal analysis, C.G.; investigation, C.G.; resources, C.G.; data curation, C.G.; writing—original draft
preparation, C.G.; writing—review and editing, C.G. and B.H.; visualization, C.G.; supervision, C.G.
and B.H.; project administration, C.G.; All authors have read and agreed to the published version of
the manuscript.
**Funding: This research received no external funding.**
**Institutional Review Board Statement: Not applicable.**
**Informed Consent Statement: Not applicable.**
**Data Availability Statement: Limited accessibility to the dataset can be given in single cases.**
**Conflicts of Interest: The authors declare no conflict of interest.**
**Appendix A**
Tables A1–A3 list each individual item of Figures 4–6 with a short description.
**Table A1. Definition Table BPMN Diagram AD Installation Cycle Figure 4.**

| Item | Definition |
|---|---|
| Processing Unit Backend | An external device that can be removed after the installation process is finished; it performs all heavy processing tasks in the AD Installation Cycle to meet the previously explained industrial constraints of the CPPS. |
| Control Device | Unit which typically controls the industrial process. |
| Communication Interface | The interface of the embedded system to exchange data with the control device or the processing unit backend. |
| Embedded System | Part of the CPS which interacts with sensors and actors to monitor and control the mechanical system. |
| Mechanical System | Summarizes all mechanical components of the system. |
| Process Data Samples | Single packages of time series data from the individual CPS, consisting of features like position, torque, and speed sampled as time series data from the CPS. |
| Record Regular Process Data | Combined process data samples of all CPSs collected from the normal process, sampled over a defined time. |
| Collect Process Data Samples | Process data samples at a high sample rate are collected from the different CPSs, combined, and sent to the control device as a data package. |
| Analysis | In the analysis, unnecessary features are automatically removed from the data, and important information like feature ranges and data types is collected. |
| Generate Configurations | Based on the analysis, configuration files are generated. The operator can manually tune this information, or the default values can be used. |
| AD Generation Cycle | Main cycle to create the preprocessor and train the model. |
| Deployment AD Pipelines | Each generated AD pipeline is exported and deployed to a separate CPS. |
| AD Production Cycle | Live integration and execution of the AD pipeline in the individual CPS. |

**Table A2. Definition Table BPMN Diagram AD Production Cycle Figure 6.**

| Item | Definition |
|---|---|
| In-Memory Data Storage | A fast and effective data store that caches live data until it is passed to the AD pipeline for processing. |
| Record Live Process Data | Live process data is sampled at a high sample rate to an in-memory data storage to collect the data needed to execute the AD pipeline. |
| Execute AD Process Step | The collected live data is preprocessed and evaluated by the AD pipeline. |
| Deliver Results to Control Unit | The AD output can be delivered from the CPS to the control unit. |
| Notify Operator | Depending on the CPS, the operator can be immediately notified by the separate CPS. |
| Shut Down AD | In this step, the whole AD Production Cycle can be switched off to free resources and stop the anomaly detection. |

**Table A3. Definition Table BPMN Diagram AD Generation Cycle Figure 5.**

| Item | Definition |
|---|---|
| Regular Process Data | Data collected from the normal process of the CPS over a defined time. |
| Preprocessed Data | Regular process data transformed and scaled by the chosen preprocessor. |
| AD Pipeline | A combination of an initialized preprocessor and a trained model. |
| Configuration | Contains the necessary parameters for the separate steps of the generation cycle, e.g., the number of layers, filters per layer, loss function, and type of optimizer. Default parameters are automatically provided but can also be manually changed and tuned. |
| Preprocessing | In the preprocessing step, the regular process data is transformed by the chosen preprocessor. This scales the provided data, which normally consists of different ranges and units, to an equal numerical range. |
| Initialize Model | Here, the model is built based on the configuration: the number of layers, the filters and type of each layer, and the optimizer and loss function are set. |
| Train Model | In this step, the initialized model is trained with the preprocessed regular process data. |
| Evaluate Model | Depending on the evaluation method defined in the configuration step, the model is tested, the results are tracked, and the complete experiment is saved. |
| Optimize Model | In the optimization step, the hyperparameters are changed, influenced by the defined ranges and tuning parameters. The search algorithm declared in the configuration file searches over a generated search space for the best possible parameters. |
| Export AD Pipeline | After the tuning is finished, the AD pipeline is exported to the deployment step. |
**Appendix B**
To reach adequate prediction performance and fulfill the requirements defined in Section 3, several models were investigated. All models were tested and evaluated against a reduced test data set consisting of the time series data of one CPS and a limited number of anomalies. No optimizations were made to the models. The test results can be seen in Table A4. Decisive characteristics for the choice of the model are the number of required parameters, the recognition rate, and the time required for training and for the evaluation of the test data. The table clearly shows that shallow methods, despite their fast evaluation, have significant weaknesses in the recognition rate for dynamic and complex time series, as specified in [11]. The LSTMAE has a higher number of parameters compared to the other models; due to the intended universal application and the limited processing and memory resources, this is a major disadvantage. The 1D-ConvAE shows slightly improved recognition on our test dataset compared to the AE architecture without convolutional layers. Based on the findings of [37–41,43], which identified the good performance of the ConvAE on industrial time series data, we have chosen the 1D-ConvAE as the model.
**Table A4. Model Evaluation.**

| Model | TP | FP | TN | FN | Recall | Precision | F1 | Complexity | Training [ms] | Evaluation [ms] |
|---|---|---|---|---|---|---|---|---|---|---|
| OCSVM | 400 | 25,005 | 19,633 | 138 | 0.7434 | 0.0157 | 0.0308 | Low | 37,041.6 | 36,811.1 |
| iForest | 467 | 13,709 | 30,865 | 71 | 0.8680 | 0.0329 | 0.0634 | Low | 1874.9 | 574.1 |
| LSTMAE | 8 | 1 | 693 | 1 | 0.8888 | 0.8888 | 0.8888 | High | 372,029.5 | 23,491.3 |
| AE | 8 | 2 | 692 | 1 | 0.8888 | 0.8 | 0.8421 | Medium | 13,151.2 | 18,720 |
| 1D-ConvAE | 8 | 0 | 694 | 2 | 0.8 | 1 | 0.8888 | Medium | 113,227.6 | 20,045.2 |

OCSVM = One-Class Support Vector Machine [56]; iForest = Isolation Forest [57]; LSTMAE = Long Short-Term Memory Autoencoder [58]; AE = Autoencoder [59]; 1D-ConvAE = One-Dimensional Convolutional Autoencoder.
**Appendix C**
Figure A1 shows additional showcases of anomalous samples. In Figure A1a, the collision case is shown. As described in Section 6.2, the system typically detects only heavy collisions. Producing light collisions is a difficult task that requires precise interference in the process. Therefore, a pole with a predetermined breaking point was prepared and applied against the rotation direction of the rotary table during regular operation. Figure A1b shows the defective-component case: a container was manipulated to force pressure on the moving conveyor belt at a certain point. To create the error case seen in Figure A1c, minor shocks were manually applied to the plate of the rotating table. A specialist carried out all the different tests, considering safety aspects for the person and the system.
**Figure A1. Additional Anomalous Samples.**
**References**
1. Marwedel, P. Embedded System Design: Embedded Systems Foundations of Cyber-Physical Systems, and the Internet of Things; Springer:
Cham, Switzerland, 2021; pp. 1–15.
2. Jazdi, N. Cyber physical systems in the context of Industry 4.0. In Proceedings of the 2014 IEEE International Conference on
[Automation, Quality and Testing, Robotics, Cluj-Napoca, Romania, 22–24 May 2014; pp. 14–16. [CrossRef]](http://doi.org/10.1109/AQTR.2014.6857843)
3. Rajkumar, R.; Lee, I.; Sha, L.; Stankovic, J. Cyber-Physical Systems: The Next Computing Revolution. In Proceedings of the
[Design Automation Conference, Anaheim, CA, USA, 13–18 June 2010; pp. 731–736. [CrossRef]](http://dx.doi.org/10.1145/1837274.1837461)
4. Müller, T.; Jazdi, N.; Schmidt, J.; Weyrich, M. Cyber-physical production systems: Enhancement with a self-organized reconfiguration management. Procedia CIRP 2021, 9, 549–554. [CrossRef](http://dx.doi.org/10.1016/j.procir.2021.03.075)
5. Monostori, L. Cyber-physical Production Systems: Roots, Expectations and R & D Challenges. Procedia CIRP 2014, 17, 9–13.
[[CrossRef]](http://dx.doi.org/10.1016/j.procir.2014.03.115)
6. Ali, N.; Hussain, M.; Hong, J.-E. SafeSoCPS: A Composite Safety Analysis Approach for System of Cyber-Physical Systems.
_[Sensors 2022, 22, 4474. [CrossRef]](http://dx.doi.org/10.3390/s22124474)_
7. Eiteneuer, B.; Hranisavljevic, N.; Niggemann, O. Dimensionality Reduction and Anomaly Detection for CPPS Data using
Autoencoder. In Proceedings of the 2019 IEEE International Conference on Industrial Technology (ICIT), Melbourne, VIC,
[Australia, 13–15 February 2019; pp. 1286–1292. [CrossRef]](http://dx.doi.org/10.1109/ICIT.2019.8755116)
8. Adepu, S.; Mathur, A. Distributed Attack Detection in a Water Treatment Plant: Method and Case Study. IEEE Trans. Dependable
_[Secur. Comput. 2021, 18, 86–99. [CrossRef]](http://dx.doi.org/10.1109/TDSC.2018.2875008)_
9. [Chandola, V.; Banerjee, A.; Kumar, V. Anomaly detection: A survey. ACM Comput. Surv. 2009, 41, 58. [CrossRef]](http://dx.doi.org/10.1145/1541880.1541882)
10. Stojanovic, L.; Dinic, M.; Stojanovic, N.; Stojadinovic, A. Big-data-driven anomaly detection in industry (4.0): An approach
and a case study. In Proceedings of the 2016 IEEE International Conference on Big Data (Big Data), Washington, DC, USA, 5–8
[December 2016; pp. 1647–1652. [CrossRef]](http://dx.doi.org/10.1109/BigData.2016.7840777)
11. Ruff, L.; Kauffmann, J.R.; Vandermeulen, R.A.; Montavon, G.; Samek, W.; Kloft, M.; Dietterich, T.G.; Muller, K.-R. A Unifying
[Review of Deep and Shallow Anomaly Detection. Proc. IEEE 2021, 109, 756–795. [CrossRef]](http://dx.doi.org/10.1109/JPROC.2021.3052449)
12. Gerz, F.; Bastürk, T.R.; Kirchhoff, J.; Denker, J.; Al-Shrouf, L.; Jelali, M. Comparative Study and a New Industrial Platform for
Decentralized Anomaly Detection Using Machine Learning Algorithms. In Proceedings of the 2022 International Joint Conference
[on Neural Networks (IJCNN), Padua, Italy, 18–23 July 2022; pp. 1–8. [CrossRef]](http://dx.doi.org/10.1109/IJCNN55064.2022.9892939)
13. Bulusu, S.; Kailkhura, B.; Li, B.; Varshney, P.K.; Song, D. Anomalous Example Detection in Deep Learning: A Survey. IEEE Access
**[2020, 8, 132330–132347. [CrossRef]](http://dx.doi.org/10.1109/ACCESS.2020.3010274)**
14. Mishra, P.; Varadharajan, V.; Tupakula, U.; Pilli, E.S. A Detailed Investigation and Analysis of Using Machine Learning Techniques
[for Intrusion Detection. IEEE Commun. Surv. Tutor. 2019, 21, 686–728. [CrossRef]](http://dx.doi.org/10.1109/COMST.2018.2847722)
15. Thudumu, S.; Branch, P.; Jin, J.; Singh, J. A comprehensive survey of anomaly detection techniques for high dimensional big data.
_[Big Data 2020, 7, 1–30. [CrossRef]](http://dx.doi.org/10.1186/s40537-020-00320-x)_
16. Cook, A.A.; Mısırlı, G.; Fan, Z. Anomaly Detection for IoT Time-Series Data: A Survey. IEEE Internet Things J. 2020, 7, 6481–6494.
[[CrossRef]](http://dx.doi.org/10.1109/JIOT.2019.2958185)
17. Goldstein, M.; Uchida, S. A Comparative Evaluation of Unsupervised Anomaly Detection Algorithms for Multivariate Data.
_[PLoS ONE 2016, 11, e0152173. [CrossRef] [PubMed]](http://dx.doi.org/10.1371/journal.pone.0152173)_
18. [Oza, P.; Patel, V.M. One-Class Convolutional Neural Network. IEEE Signal Process. Lett. 2019, 26, 277–281. [CrossRef]](http://dx.doi.org/10.1109/LSP.2018.2889273)
19. Erfani, S.M.; Rajasegarar, S.; Karunasekera, S.; Leckie, C. High-dimensional and large-scale anomaly detection using a linear
[one-class SVM with deep learning. Pattern Recognit. 2016, 58, 121–134. [CrossRef]](http://dx.doi.org/10.1016/j.patcog.2016.03.028)
20. Smets, K.; Verdonk, B.; Jordaan, E.M. Discovering novelty in spatio/temporal data using one-class support vector machines. In
Proceedings of the 2009 International Joint Conference on Neural Networks, Atlanta, GA, USA, 14–19 June 2009; pp. 2956–2963.
[[CrossRef]](http://dx.doi.org/10.1109/IJCNN.2009.5178801)
21. Zong, B.; Song, Q.; Min, M.R.; Cheng, W.; Lumezanu, C.; Cho, D.; Chen, H. Deep autoencoding gaussian mixture model for
unsupervised anomaly detection. In Proceedings of the International Conference on Learning Representations, Vancouver, BC,
Canada, 30 April–3 May 2018.
22. Xiaoyi, G.; Akoglu, L.; Rinaldo, A. Statistical analysis of nearest neighbor methods for anomaly detection. _arXiv 2019,_
arXiv:1907.03813.
23. Elnour, M.; Meskin, N.; Khan, K.; Jain, R. A dual-isolation-forests-based attack detection framework for industrial control systems.
_[IEEE Access 2020, 8, 36639–36651. [CrossRef]](http://dx.doi.org/10.1109/ACCESS.2020.2975066)_
24. Pang, G.; Shen, C.; Cao, L.; Van Den Hengel, A. Deep Learning for Anomaly Detection: A Review. ACM Comput. Surv. 2022, 54,
[1–38. [CrossRef]](http://dx.doi.org/10.1145/3439950)
25. Li, D.; Chen, D.; Jin, B.; Shi, L.; Goh, J.; Ng, S.K. MAD-GAN: Multivariate Anomaly Detection for Time Series Data with Generative
Adversarial Networks. In Artificial Neural Networks and Machine Learning—ICANN 2019: Text and Time Series ICANN 2019 Lecture
_[Notes in Computer Science; Springer: Cham, Switzerland, 2019; Volume 11730, pp. 703–716. [CrossRef]](http://dx.doi.org/doi.org/10.1007/978-3-030-30490-4-56)_
26. Choi, Y.; Lim, H.; Choi, H.; Kim, I.-J. GAN-Based Anomaly Detection and Localization of Multivariate Time Series Data for Power
Plant. In Proceedings of the 2020 IEEE International Conference on Big Data and Smart Computing (BigComp), Busan, Republic
[of Korea, 19–22 February 2020; pp. 71–74. [CrossRef]](http://dx.doi.org/10.1109/BigComp48618.2020.00-97)
27. Jiang, W.; Hong, Y.; Zhou, B.; He, X.; Cheng, C. A GAN-Based Anomaly Detection Approach for Imbalanced Industrial Time
[Series. IEEE Access 2019, 7, 143608–143619. [CrossRef]](http://dx.doi.org/10.1109/ACCESS.2019.2944689)
28. Heiko, H. Kernel PCA for novelty detection. Pattern Recognit. 2007, 40, 863–874.
29. Zhao, Y.; Deng, B.; Shen, C.; Liu, Y.; Lu, H.; Hua, X.S. Spatio-temporal autoencoder for video anomaly detection. In Proceedings
of the 25th ACM International Conference on Multimedia, Mountain View, CA, USA, 23–27 October 2017.
30. Gong, D.; Liu, L.; Le V.; Saha, B.; Mansour, M.R.; Venkatesh, S.; Hengel, A.V.D. Memorizing normality to detect anomaly:
Memory-augmented deep autoencoder for unsupervised anomaly detection. In Proceedings of the IEEE/CVF International
Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019.
31. Meidan, Y.; Bohadana, M.; Mathov, Y.; Mirsky, Y.; Shabtai, A.; Breitenbacher, D.; Elovici, Y. N-baiot-network-based detection of iot
[botnet attacks using deep autoencoders. IEEE Pervasive Comput. 2018, 17, 12–22. [CrossRef]](http://dx.doi.org/10.1109/MPRV.2018.03367731)
32. Park, S.; Adosoglou, G.; Pardalos, P.M. Interpreting rate-distortion of variational autoencoder and using model uncertainty for
[anomaly detection. Ann. Math. Artif. Intell. 2022, 90, 735–752. [CrossRef]](http://dx.doi.org/10.1007/s10472-021-09728-4)
33. Jinwon, A.; Cho, S. Variational autoencoder based anomaly detection using reconstruction probability. Spec. Lect. IE 2015, 2, 1–18.
34. Munir, M.; Siddiqui, S.A.; Dengel, A.; Ahmed, S. DeepAnT: A Deep Learning Approach for Unsupervised Anomaly Detection in
[Time Series. IEEE Access 2019, 7, 1991–2005. [CrossRef]](http://dx.doi.org/10.1109/ACCESS.2018.2886457)
35. Gong, W.; Chen, H.; Zhang, Z.; Zhang, M.; Gao, H. A Data-Driven-Based Fault Diagnosis Approach for Electrical Power DC-DC
Inverter by Using Modified Convolutional Neural Network With Global Average Pooling and 2-D Feature Image. IEEE Access
**[2020, 8, 73677–73697. [CrossRef]](http://dx.doi.org/10.1109/ACCESS.2020.2988323)**
36. Gong, W.; Wang, Y.; Zhang, M.; Mihankhah, E.; Chen, H.; Wang, D. A Fast Anomaly Diagnosis Approach Based on Modified
[CNN and Multisensor Data Fusion. IEEE Trans. Ind. Electron. 2022, 69, 13636–13646. [CrossRef]](http://dx.doi.org/10.1109/TIE.2021.3135520)
37. Qu, C.; Zhou, Z.; Liu, Z.; Jia, S. Predictive anomaly detection for marine diesel engine based on echo state network and
[autoencoder. Energy Rep. 2022, 8 (Suppl. 4), 998–1003. [CrossRef]](http://dx.doi.org/10.1016/j.egyr.2022.01.225)
38. Malviya, V.; Mukherjee, I.; Tallur, S. Edge-Compatible Convolutional Autoencoder Implemented on FPGA for Anomaly Detection
[in Vibration Condition-Based Monitoring. IEEE Sens. Lett. 2022, 6, 1–4. [CrossRef]](http://dx.doi.org/10.1109/LSENS.2022.3159972)
39. Guo, X.; Liu, X.; Zhu, E.; Yin, J. Deep Clustering with Convolutional Autoencoders. In Neural Information Processing: 24th
_International Conference, ICONIP 2017, Guangzhou, China, 14–18 November 2017; Proceedings, Part II 24; Springer: Cham, Switzerland,_
[2017; p. 10635. [CrossRef]](http://dx.doi.org/10.1007/978-3-319-70096-0-39)
40. Lee, G.; Jung, M.; Song, M.; Choo, J. Unsupervised anomaly detection of the gas turbine operation via convolutional auto-encoder.
In Proceedings of the 2020 IEEE International Conference on Prognostics and Health Management (ICPHM), Detroit, MI, USA,
[8–10 June 2020; pp. 1–6. [CrossRef]](http://dx.doi.org/10.1109/ICPHM49022.2020.9187054)
41. Yu, J.; Zhou, X. One-Dimensional Residual Convolutional Autoencoder Based Feature Learning for Gearbox Fault Diagnosis.
_[IEEE Trans. Ind. Inform. 2020, 16, 6347–6358. [CrossRef]](http://dx.doi.org/10.1109/TII.2020.2966326)_
42. Chen, T.; Liu, X.; Xia, B.; Wang, W.; Lai, Y. Unsupervised Anomaly Detection of Industrial Robots Using Sliding-Window
[Convolutional Variational Autoencoder. IEEE Access 2020, 8, 47072–47081. [CrossRef]](http://dx.doi.org/10.1109/ACCESS.2020.2977892)
-----
_Sensors 2023, 23, 4207_ 19 of 19
43. Kwak, M.; Kim, S.B. Unsupervised Abnormal Sensor Signal Detection With Channelwise Reconstruction Errors. IEEE Access 2021,
_[9, 39995–40007. [CrossRef]](http://dx.doi.org/10.1109/ACCESS.2021.3064563)_
44. Lai, Y.; Liu, Z.; Song, Z.; Wang, Y.; Gao, Y. Anomaly detection in Industrial Autonomous Decentralized System based on time
[series. Simul. Model. Pract. Theory 2016, 65, 57–71. [CrossRef]](http://dx.doi.org/10.1016/j.simpat.2016.01.013)
45. Sanjith, S.L.; Prakash Raj, E.G.D. Decentralized Time-Window Based Real-Time Anomaly Detection Mechanism (DTRAD) in Iot.
_[Int. J. Recent Technol. Eng. 2019, 8, 1619–1625. [CrossRef]](http://dx.doi.org/10.35940/ijrte.B2350.078219)_
46. Gupta, K.; Sahoo, S.; Mohanty, R.; Panigrahi B.K.; Blaabjerg, F. Decentralized Anomaly Identification in Cyber-Physical DC
Microgrids. In Proceedings of the 2022 IEEE Energy Conversion Congress and Exposition (ECCE), Detroit, MI, USA, 9–13 October
[2022; pp. 1–6. [CrossRef]](http://dx.doi.org/10.1109/ECCE50734.2022.9947581)
47. Wilbur, M.; Dubey, A.; Leão B.; Bhattacharjee, S. A Decentralized Approach for Real Time Anomaly Detection in Transportation
Networks. In Proceedings of the 2019 IEEE International Conference on Smart Computing (SMARTCOMP), Washington, DC,
[USA, 12–15 June 2019; pp. 274–282. [CrossRef]](http://dx.doi.org/10.1109/SMARTCOMP.2019.00063)
48. Bosman, H.; Iacca, G.; Tejada, A.; Wörtche H.J.; Liotta, A. Spatial anomaly detection in sensor networks using neighborhood
[information. Inf. Fusion 2017, 33, 41–56. [CrossRef]](http://dx.doi.org/10.1016/j.inffus.2016.04.007)
49. Nikolay, L.; Amizadeh, S.; Flint, I. Generic and scalable framework for automated time-series anomaly detection. In Proceedings
of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Sydney, Australia, 10–13 August
2015.
50. Mayra, M.; Wu, C. An unsupervised framework for anomaly detection in a water treatment system. In Proceedings of the 2019 18th
IEEE International Conference on Machine Learning And Applications (ICMLA), Boca Raton, FL, USA, 16–19 December 2019.
51. Schneider, P.; Böttinger, K. High-Performance Unsupervised Anomaly Detection for Cyber-Physical System Networks. In Proceedings
of the 2018 Workshop on Cyber-Physical Systems Security and PrivaCy, Toronto, ON, Canada, 15–19 October 2018.
52. Goetz, C.; Humm, G.B. Unsupervised Process Anomaly Detection under Industry Constraints in Cyber-Physical Systems using
Convolutional Autoencoder. In Computational Intelligence for Engineering and Management Applications, Select Proceedings of CIEMA
_2022; Springer: Singapore, 2023, to be published._
53. Abadi, M.; Agarwal, A.; Barham, P.; Brevdo, E.; Chen, Z.; Citro, C.; Corrado, G.; Davis, A.; Dean, J.; Devin, M.; et al. TensorFlow:
Large-Scale Machine Learning on Heterogeneous Systems. 2015. Available online: tensorflow.org (accessed on 13 February 2023).
54. Moritz, P.; Nishihara, R.; Wang, S.; Tumanov, A.; Liaw, R.; Liang, E.; Elibol M.; Yang, Z.; Paul, W.; Jordan M.; et al. Ray: A
distributed framework for emerging AI applications. In Proceedings of the 13th USENIX Symposium on Operating Systems
Design and Implementation (OSDI 18), Carlsbad, CA, USA, 8–10 October 2018; pp. 561–577.
55. Zaharia, M.A.; Chen, A.; Davidson, A.; Ghodsi, A.; Hong, S.A.; Konwinski, A.; Murching, S.; Nykodym, T.; Ogilvie, P.; Parkhe,
M.; et al. Accelerating the Machine Learning Lifecycle with MLflow. IEEE Data Eng. Bull. 2018, 41, 39–45.
56. Schölkopf, B.; Williamson, R.; Smola, A.; Shawe-Taylor, J.; Platt, J. Support Vector Method for Novelty Detection. NIPS 1999, 12,
582–588.
57. [Liu, F.; Ting, K.; Zhou, Z. Isolation-Based Anomaly Detection. ACM Trans. Knowl. Discov. Data 2012, 6, 3. [CrossRef]](http://dx.doi.org/10.1145/2133360.2133363)
58. [Hochreiter, S.; Schmidhuber, J. Long Short-term Memory. Neural Comput. 1997, 9, 1735–1780. [CrossRef]](http://dx.doi.org/10.1162/neco.1997.9.8.1735)
59. Baldi, P. Autoencoders, unsupervised learning and deep architectures. In Proceedings of the 2011 International Conference on
Unsupervised and Transfer Learning Workshop, Bellevue, DC, USA, 2 July 2011; Volume 27, pp. 37–50.
**Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual**
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.
-----
Received February 4, 2020, accepted February 27, 2020, date of publication March 17, 2020, date of current version March 31, 2020.
_Digital Object Identifier 10.1109/ACCESS.2020.2981447_
# On the Convergence of Artificial Intelligence and Distributed Ledger Technology: A Scoping Review and Future Research Agenda
KONSTANTIN D. PANDL, SCOTT THIEBES, MANUEL SCHMIDT-KRAEPELIN,
AND ALI SUNYAEV
Institute of Applied Informatics and Formal Description Methods, Karlsruhe Institute of Technology, 76131 Karlsruhe, Germany
Corresponding author: Ali Sunyaev (sunyaev@kit.edu)
This work was supported by the Karlsruhe Institute of Technology through the KIT-Publication Fund.
**ABSTRACT Developments in artificial intelligence (AI) and distributed ledger technology (DLT) currently**
lead to lively debates in academia and practice. AI processes data to perform tasks that were previously
thought possible only for humans. DLT has the potential to create consensus over data among a group of
participants in untrustworthy environments. In recent research, both technologies are used in similar and
even the same systems. This can lead to a convergence of AI and DLT, which in the past, has paved the way
for major innovations of other information technologies. Previous work highlights several potential benefits
of a convergence of AI and DLT but only provides a limited theoretical framework to describe upcoming
real-world integration cases of both technologies. In this research, we review and synthesize extant research
on integrating AI with DLT and vice versa to rigorously develop a future research agenda on the convergence
of both technologies. In terms of integrating AI with DLT, we identified research opportunities in the areas
of secure DLT, automated referee and governance, and privacy-preserving personalization. With regard to
integrating DLT with AI, we identified future research opportunities in the areas of decentralized computing
for AI, secure data sharing and marketplaces, explainable AI, and coordinating devices. In doing so, this
research provides a four-fold contribution. First, it is not constrained to blockchain but instead investigates
the broader phenomenon of DLT. Second, it considers the reciprocal nature of a convergence of AI and DLT.
Third, it bridges the gap between theory and practice by helping researchers active in AI or DLT to overcome
current limitations in their field, and practitioners to develop systems along with the convergence of both
technologies. Fourth, it demonstrates the feasibility of applying the convergence concept to research on AI
and DLT.
**INDEX TERMS Artificial intelligence, blockchain, convergence, distributed ledger technology, machine**
learning.
**I. INTRODUCTION**
Artificial intelligence (AI) and distributed ledger technology (DLT) are among today’s most actively debated
developments in information technology with potential for
tremendous impact on individuals, organizations, and societies over the next decades. A 2018 report from the McKinsey
Global Institute estimates that the application of AI in various
industries could deliver an additional global economic output
of around USD 13 trillion by 2030 [1]. Similarly, a study by
the World Economic Forum predicts that by 2025, up to 10 %
of the world’s GDP may be stored on a blockchain [2], which
is the most commonly used concept of DLT today [3].
AI can perform complex tasks that were previously thought
possible only for humans to perform. In some application
domains, AI can nowadays already exceed human capabilities. Research in the health care domain has, for instance,
shown that AI can analyze echocardiograms faster and
more accurately than medical professionals [4]. Furthermore,
advancements in AI are also expected to be key enablers of
important upcoming innovations such as autonomous driving [5] or intelligent robots [6], to name but a few. DLT,
on the other hand, can provide consensus over a shared ledger
in untrustworthy networks containing, for example, unreachable or maliciously behaving nodes [3]. It became known
to the public due to the emergence of the cryptocurrency
Bitcoin [7]. Following the success of Bitcoin, further DLT
applications are emerging in application domains beyond
finance that often correspond to those of AI. DLT may,
for example, be used to manage access control for electronic
health records [8], or to secure the IT systems of autonomous
cars [9].
As one result of these developments, we now increasingly
see the emergence of applications using both information
technologies in close integration. Recent work, for example, uses deep reinforcement learning to explore attacks on
blockchain incentive mechanisms [10]. In doing so, the AI
system can detect new attack strategies and provide security
insights, even for well-studied DLT protocols like Bitcoin.
Another recent work uses a DLT-based platform to exchange
data and computing resources in order to enable AI applications [11]. The platform gives data providers the opportunity
to share their data while keeping it confidential and maintaining the right to manage data access. Data consumers can then
train algorithms on the provided data and compensate data
providers for use of their data.
The examples above demonstrate that the integration of AI
and DLT yields great potential to advance the capabilities of
both technologies, and, ultimately, to increase the positive
impact of AI and DLT on individuals, organizations, and
societies. Yet, in order to make meaningful contributions,
researchers and practitioners alike will have to keep up with
the latest developments in both fields as well as the most
recent developments and innovations related to their integration. Owing to the fast pace and interdisciplinary nature of
both research areas, assessing the current state of research on
AI and DLT and especially their integration in its entirety is
a difficult task. In an attempt to provide guidance to researchers
enticed by the integration of AI and DLT, previous research
has either focused on partial aspects of the integration of AI
and DLT such as the use of blockchain for AI (e.g., [12])
or on deriving conceptual ideas of how both technologies
might be integrated with each other (e.g., [13]). Despite the
invaluable contributions that these publications have made to
the nascent stream of literature concerning the integration of
AI and DLT, we presently lack in-depth knowledge about
the current state of research on the integration of AI and
DLT that (a) does not focus solely on a specific DLT concept
(i.e., blockchain), (b) considers the reciprocal integration of
both technologies (as opposed to the one-way integration of,
for example, DLT into AI), and (c) goes beyond a purely
conceptual level. In particular, we still lack a comprehensive
overview of the most pressing research challenges that must
be overcome in order to unleash the full potential of integrating both technologies. With this research, we aim to address
this apparent knowledge gap by asking and answering the
following two research questions:
_RQ 1: What is the current state of research on the techno-_
_logical integration of AI and DLT?_
_RQ 2: What are open research challenges on the techno-_
_logical integration of AI and DLT?_
To address our research questions, we draw on the concept
of convergence (see section II.C) and conduct a systematic
literature review on the current state of research on the convergence (i.e., integration) of AI and DLT and develop a future
research agenda. The contribution of our work is thereby fourfold. First, in contrast to extant research in this area, we also
include non-blockchain distributed ledgers in our analysis
of the literature. Prior work has highlighted several current
shortcomings of blockchain in the application for AI [12].
Other DLT concepts (e.g., BlockDAG or TDAG), may be
more promising for solving some of these shortcomings [3]
and thus potentially better suited for certain AI applications.
Second, we consider the reciprocal nature of convergence and
investigate both perspectives: the usage of AI for DLT, and the
usage of DLT for AI. To the best of our knowledge, no holistic review on the convergence of AI and DLT exists today,
which considers the large variety of interaction cases from
both perspectives. Third, we aim to bridge the gap between
theory and practice by drawing theoretical conclusions from
practical research, as well as outlining future potential for
practical and theoretical research from the theory in these
fields. Fourth, we apply the concept of convergence [14], [15]
as a theoretical lens for our article. In doing so, we contribute
to the understanding of how convergence may drive product
innovations and create economic value in the information
technology (IT) industry. In addition, we demonstrate how
convergence can be applied as a lens to tackle research questions in interdisciplinary, innovative, and emerging research
fields.
The remainder of this article is organized as follows: In
section 2, we discuss related work on AI and DLT, and
introduce the concept of convergence. Afterward, we describe
our methods in section 3. In section 4, we analyze the current
literature on AI usage for DLT, and in section 5, provide
our future research agenda on AI for DLT. In section 6,
we analyze the current literature on DLT usage for AI, and
a corresponding research agenda in section 7. In section 8,
we discuss our results, before we conclude this article in
section 9.
**II. RELATED WORK**
_A. ARTIFICIAL INTELLIGENCE_
AI enables computers to execute tasks that are easy for people to perform but difficult to describe formally. Such tasks
typically occur in complex or uncertain environments [16].
Despite ongoing debates in society about Artificial General
_Intelligence, which describes computer programs that can_
control themselves and solve tasks in a variety of different domains [17], most deployed AI-based systems solve
tasks in narrow application domains, and are referred to as
_Narrow Artificial Intelligence. Several approaches to design_
such narrow AI-based systems exist. For example, knowledge
bases have seen a lot of attention by researchers in the past.
Nowadays, Machine Learning (ML) seems to be the most
well-spread approach toward building AI-based systems [16].
**FIGURE 1. Overview of artificial intelligence.**
ML-based systems consist of a model that represents a function between input data and output data. In most cases,
ML models have to be trained. In this training phase, an optimization algorithm tweaks the model parameters in order to
minimize a loss or maximize a reward. Depending on the
application, different types of training exist. In the case of
supervised machine learning, the input data and the corresponding output data are known during the training phase.
In the case of unsupervised machine learning, only the input
data is known but no output data. In a reinforcement learning
setting, a learning agent executes actions that result in an
immediate reward, whereas the agent’s goal is to maximize
a future, cumulative reward. In general, the training phase
can require large amounts of data and, thus, is often computationally intensive. This is especially the case for deep
neural networks, which are complex ML models with many
parameters that have paved the way for many of the recent
advancements in ML [18]. In Figure 1, we present a high-level
overview of different types of approaches toward designing
AI-based systems.
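As a concrete illustration of the training phase just described, the following minimal sketch (our own, not drawn from the reviewed literature; the data and hyperparameters are hypothetical) fits a linear model to labeled data with gradient descent, the basic optimization pattern behind supervised ML:

```python
import numpy as np

# Hypothetical labeled data: inputs X with known outputs y (supervised setting).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=100)

# Training: an optimization algorithm tweaks the model parameters w
# to minimize a loss (here, the mean squared error).
w = np.zeros(3)
learning_rate = 0.1
for _ in range(200):
    pred = X @ w                          # evaluate the model
    grad = 2 * X.T @ (pred - y) / len(y)  # gradient of the loss
    w -= learning_rate * grad             # parameter update

print(w)  # approaches true_w as the loss is minimized
```

Inference, discussed next, then corresponds to the single `X @ w` evaluation once `w` is fixed.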
The execution of a (trained) ML model is called inference.
It is usually computationally less expensive than the initial
training phase. Some models can only be described as black
boxes, meaning that their inner functionalities are difficult to
explain. Among many research streams, some cutting-edge
research in ML aims to better explain the inner functioning
of ML models in order to guarantee their robustness [19].
Other streams aim to increase ML systems’ capabilities [20],
[21], or to ensure training data confidentiality when creating
ML-based systems [16]. Besides these ML approaches introduced above, many variations exist. For example, for some
ML algorithms, there is no explicit model building phase at
all [22]. For a detailed overview of AI and ML, in particular,
we refer to Russell and Norvig [23].
_B. DISTRIBUTED LEDGER TECHNOLOGY_
**FIGURE 2. Overview of distributed ledger technology and its characteristics by Kannengießer et al. [3].**

DLT enables the operation of a highly available, append-only, peer-to-peer database (i.e., the distributed ledger) in untrustworthy environments characterized by Byzantine failures, where separated storage devices (i.e., the nodes) maintain a local replication of the data stored on the ledger. A distributed ledger can either be deployed as a public ledger, if a new node can directly join the network, or as a private ledger,
in case a node first needs permission to join the network.
Another aspect that distinguishes distributed ledgers is their write
permissions: In the case of permissionless ledgers, all nodes
have equal write permissions. In the case of permissioned
ledgers, nodes first require granted permission to validate and
commit new data to the ledger [3]. Today, several concepts
for DLT exist with different characteristics, for example,
regarding the transaction throughput or fault tolerance. The
most widespread concept of DLT is blockchain [3], which
became known to the public due to the emergence of the
cryptocurrency Bitcoin [7]. Other concepts, for example,
rely on directed acyclic graphs [3], [24], [25]. We provide
an overview of DLT concepts with different characteristics
in Figure 2. Nowadays, applications of DLT and especially
blockchain beyond financial transactions are emerging, such
as the management of medical data [26], autonomous driving
[9], or decentralized games [27].
In a blockchain system such as Bitcoin [7], transactions
of network participants are stored in blocks of a fixed size.
Every block also includes a timestamp and a hash of the
previous block. Thus, the system can be regarded as a chain
of blocks. Cryptographic techniques are used to ensure that
only legitimate participants holding a cryptographic key can
perform transactions, which are stored in the block. Bitcoin
[7] was the first blockchain created and has mostly financial
applications.
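The chain structure described above can be sketched in a few lines. The following is our simplified illustration (real protocols additionally use Merkle trees, digital signatures, and a consensus mechanism):

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    """Hash a block's canonical JSON encoding."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

# Each block stores transactions, a timestamp, and the previous block's hash.
genesis = {"transactions": [], "timestamp": time.time(), "prev_hash": "0" * 64}
chain = [genesis]

def append_block(transactions: list) -> None:
    chain.append({
        "transactions": transactions,
        "timestamp": time.time(),
        "prev_hash": block_hash(chain[-1]),  # the link that forms the chain
    })

append_block([{"from": "alice", "to": "bob", "amount": 1}])
# Tampering with an earlier block changes its hash and breaks every later link.
```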
The main networks of the widely known Bitcoin and
Ethereum [28] are public, unpermissioned blockchains.
To secure the network, only a selected node can propose a new
block that includes the cryptographically signed transactions.
This node has to find a block candidate with a hash below a
certain, network-defined threshold. As this hash calculation is
impossible to reverse, a node has to make large computational
efforts to find that new block, competing against other nodes.
In return, the node that successfully found a block gets a
reward in cryptocurrency payment. Since there are at any
time many miners aiming to find the next block, the chances
of finding the next block are relatively low for an individual miner. As a result, the variance of the mining payoff
for an individual miner is relatively large. Therefore, most
miners nowadays mine together in so-called mining pools.
If a mining pool finds a block, the pool collective gets the
block reward and distributes it among its miners according to
their share of hash calculations. This reduces the payoff variance for individual miners participating in the pool.
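The proof-of-work search described above can be sketched as follows (our simplified illustration; the difficulty is expressed here as a count of leading zero hex digits, and all inputs are toy values):

```python
import hashlib

def mine(block_header: str, difficulty: int = 4) -> tuple[int, str]:
    """Search for a nonce whose block hash falls below the network threshold,
    approximated here as requiring `difficulty` leading zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_header}|{nonce}".encode()).hexdigest()
        if digest.startswith(target):  # hash below the threshold: block found
            return nonce, digest
        nonce += 1  # guessing is expensive; verifying a solution is cheap

nonce, digest = mine("prev_hash|transactions|timestamp")
```

Raising `difficulty` by one multiplies the expected number of guesses by sixteen, which is one way to see why solo miners face a high payoff variance and pool together.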
Besides Bitcoin, several implementations of blockchains exist with
different characteristics [3]. Some implementations provide
high transaction confidentiality guarantees (e.g., Zcash [29]),
or enable a universally usable, decentralized Turing complete
computing platform (e.g., Ethereum [28]). In the latter case,
programs can be stored and executed upon the blockchain
system. These programs are referred to as smart contracts.
_C. CONVERGENCE OF AI AND DLT_
First described by Nathan Rosenberg in the 1960s, convergence describes a phenomenon in which two or more initially
separate items move toward unity and become increasingly
integrated with each other. Convergence typically occurs in
four phases [15], [30]. The first phase, which is termed scientific convergence, occurs when distinct scientific fields begin
citing each other, leading to an increase in cross-scientific
research. It is followed by the technology convergence phase
in which previously distinct technological fields increasingly
overlap and new technology platforms arise. As technology convergence continues to blur existing technological
and market boundaries, market convergence, the third phase,
takes place, resulting in the emergence of new product-market
combinations. In some cases, the convergence of technologies
and markets may impact existing industries such that a fourth
phase in form of the convergence of entire industries occurs.
A typical and relatively recent example of convergence is
the emergence of the smartphone, which nowadays combines
initially separate technologies such as phones, cameras, and
portable computing devices. It eventually created an entirely
new market, completely transformed the mobile phone industry, and even overthrew the compact camera industry. Another
example is Bitcoin [7], which was created by combining
techniques from various computer science domains such
as distributed systems, cryptography, security, and game
theory.
Concerning a potential convergence of DLT and AI,
we currently see the emergence of the first scientific publications that do not simply apply AI in the context of DLT
(e.g., deep learning for prediction of the Bitcoin price) or
vice versa, but instead discuss a deeper integration of both
technologies. Extant literature that provides a consolidated
overview of the integration possibilities of AI and DLT, however, is scarce. Dinh and Thai [13] provide a viewpoint with
conceptual ideas of what an integration of AI and blockchain
technology might look like. They outline possibilities for
the reciprocal integration of AI and DLT, highlighting that
AI can be used for blockchain, just as blockchain can be
used for AI. Specifically, AI can support blockchain systems by increasing their security and scalability, by acting
as an automated referee and governance mechanism, and by enabling
privacy-preserving personalized systems. Blockchain, on the
other hand, can serve AI systems by enabling decentralized
computing, providing data sharing infrastructures, serving
as a trail for explainable AI, or by coordinating untrusting
devices. As the article is a viewpoint, the authors remain
on a rather conceptual level and do not provide an in-depth
review of extant literature. Salah et al. [12], on the other
hand, provide a review and open research challenges on
one of these two perspectives, the use of blockchain for AI.
The authors identify enhanced data security, improved trust
on robotic decisions, business process efficiency increases,
collective decision making and decentralized intelligence as
main drivers for the usage of blockchain for AI applications. Interestingly, the latter category has similarities with
the application of privacy-preserving personalization, which
Dinh and Thai [13] classify as an AI for blockchain perspective. Furthermore, Salah et al. [12] develop a taxonomy that
provides an overview of different relevant technical characteristics, such as the consensus protocol or the blockchain
type. Their review surveys research articles, as well as industry whitepapers. As such, it delivers valuable insights into
how blockchain can be used for AI in vertical industry use
cases, such as banking, finance, or agriculture. The article concludes with several open research questions on technologies that are relevant in the context of blockchain for AI
applications. These include topics of increasing blockchain
systems’ scalability and data confidentiality, interoperability,
or quantum cryptography. Lastly, Karafiloski and Mishev
[31] provide an overview of how blockchain can be used
for storing and organizing large amounts of data. Although
the authors do not consider AI use cases in much detail,
the presented work is representative for some fields of the
convergence of AI and DLT, such as data sharing.
In summary, prior research lacks an understanding of how
AI and DLT are reciprocally integrated in systems today,
and how future research can advance the convergence of
these technologies. Specifically, prior research focuses on
blockchain, a specific DLT concept, and not on other potential
DLT concepts that may be better suited in the context of AI
[12], [13]. Furthermore, prior research only provides purely
conceptual ideas of how AI may be used for DLT, and does not
evaluate today’s technical feasibility of such systems [13].
**III. METHODS**
_A. DATA COLLECTION_
For the identification of articles addressing the convergence of AI and DLT, we systematically searched scientific
databases. To cover a wide range of journal and conference
publications, we queried IEEE Xplore, ACM Digital Library,
AIS Electronic Library, Science Direct, Scopus, Ebsco Business Source, and ArXiv. The reason for including ArXiv
preprints in our search is the fact that AI and DLT are fast-moving fields of research, where new, potentially groundbreaking research results and insights may be available as preprints that have not yet been formally published. Our search string
required the publications to have a DLT-specific term and an
AI-specific term in either their title, abstract, or keywords.
The search string we employed was:

_TIKEAB ((Blockchain OR "Distributed Ledger" OR DLT OR Bitcoin OR Ethereum OR Cryptocurrency OR "Crypto currency" OR "Crypto-currency" OR "block chain" OR "Smart contract") AND (AI OR ML OR "Artificial Intelligence" OR "Machine Learning" OR "Deep Learning" OR Clustering OR Classification OR "Neural Network" OR "Big data" OR "Data mining" OR "Intelligent system*" OR "Statistical model" OR "Statistic model"))._
We excluded articles published before 2008, since the concept of Bitcoin emerged that year. For DLT, the search string
included its most common concept of blockchain, as well
as blockchain’s most frequent implementations (i.e., Bitcoin
and Ethereum) and other technical terms. For AI, the search
string included ML and the most common application forms
of it. We searched on December 19, 2019, which resulted
in 2,411 unique articles.
In a first step, we removed 140 articles published
before 2008 from our set, as not all of the databases provided
the option to include this as a search criterion. Afterward,
we analyzed the titles, abstracts, and keywords for the remaining articles and removed 17 that were not written in English.
For the remaining 2,254 articles, we not only analyzed the
titles, abstracts, and keywords, but also their full texts. From
this set, we removed 431 articles that turned out not to be research articles. Most of these articles were either titles of conference proceedings or book chapters. We,
then, checked the remaining 1,823 articles for whether they
actually covered both AI and DLT. 1,544 turned out not to cover both technologies; the largest group among these consists
of 141 articles from the medical field where DLT is an
abbreviation for other terms such as dose limiting toxicity
or double lumen endobronchial tube. From the remaining
279 articles, we excluded a further 213 articles that did not
cover the close integration of these technologies according to
the concept of convergence [14], [15]. Out of these, the largest
group of 92 articles covered AI-based cryptocurrency price
prediction or trading. In the remaining set of 66 articles,
we excluded another 34 articles that did not clearly answer
why they used DLT. This resulted in a set of 32 articles that
eventually turned out to be relevant for further analysis.
_B. DATA ANALYSIS_
Following the data collection, which resulted in the identification of 32 relevant articles, we categorized all 32 articles into
groups. The groups were thereby derived from Dinh and Thai
[13]’s viewpoint and adapted to our review, where necessary.
Toward this end, we expanded the concept of blockchain to
DLT, and reframed the focus and name of some categories
to better suit the extant literature (e.g., secure DLTs instead
of secure and scalable DLTs, and coordination of devices
instead of coordination of untrusting devices). Table 1, below,
provides an overview of the adapted coding scheme and the
number of articles in each category.

**TABLE 1. Classification of the identified articles that cover the integration of AI and DLT into groups.**

Note that we categorized some articles into multiple groups. Therefore, the sum of articles in Table 1 is higher than 32. We also added another
type of perspective, Both, to the coding scheme. It consists of
consolidating works, such as literature reviews or high-level
articles covering both aspects of the convergence of AI and
DLT. Several of these articles contain concepts from multiple
groups, especially articles covering DLT for AI. In Table 6 in
the appendix, we present an overview of the 32 analyzed
articles and their classification into different groups. Our
future research agenda is an extension of our review and draws from multiple sources: on the one hand, the findings, outlooks, and conclusions of the extant articles discussed in our review; on the other hand, our own assessment of further recent developments in the literature on AI and DLT.
**IV. REVIEW ON AI FOR DLT**
Drawing on the general distinction between AI for DLT and
DLT for AI proposed by Dinh and Thai [13], this section
describes our findings in terms of how extant research has
applied AI for DLT. We identified three different groups of
use contexts, which we detail below, and summarize our
findings in Table 2.
_A. SECURE DLTS_
1) DLT PROTOCOL SECURITY
Within this category, extant literature solely applies AI
methods of reinforcement learning to explore and develop
strategies in game-theoretic settings concerning the DLT
protocol. Specifically, these articles analyze the fairness of
mining activities. The learning agents get rewarded with a
cryptocurrency-based miner reward. In the category of articles using reinforcement learning for DLT protocol security,
two subcategories exist.
**TABLE 2. Overview of concepts in the literature on AI for DLT.**

_a: SELFISH MINING_

Articles in this subcategory analyze selfish mining strategies for blockchain systems [10], [32]. The learning agent thereby is a blockchain miner, which can strategically delay
the publication of found blocks. By doing so, the selfish miner
aims to waste the mining power of honest miners that work on
another fork of the blockchain [33]. Through generating these
attacks on blockchain systems in a testing environment using
reinforcement learning, researchers can find new insights
on the security of these blockchain protocols. For example,
Hou et al. [10] performed the simulation with protocols such
as Bitcoin [7], Ethereum [28], or GHOST [34]. In their
research, an agent learns mining strategies where it can gain
disproportionately large mining rewards relative to its hash
rate. While some of these adversarial mining strategies are
known from theoretical studies of the blockchain protocols
[33], AI has the potential to detect new, previously unknown
attacks in complex scenarios, for example, in cases with multiple partially cooperating agents. Additionally, the simulation may contribute to a better understanding of the feasibility
of these mining strategies [10].
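The reinforcement learning setup used in these studies can be abstracted as follows. This is our generic tabular Q-learning skeleton, not the simulators of [10], [32]; `env` and its `reset`, `actions`, and `step` methods are hypothetical placeholders for a mining environment whose states encode fork configurations, whose actions include publishing or withholding blocks, and whose rewards are mining payoffs:

```python
import random
from collections import defaultdict

def q_learning(env, episodes=1000, alpha=0.1, gamma=0.99, eps=0.1):
    """Learn a mining strategy by trial and error against a simulated protocol."""
    Q = defaultdict(float)  # value of taking an action in a state
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            actions = env.actions(state)
            if random.random() < eps:
                action = random.choice(actions)                     # explore
            else:
                action = max(actions, key=lambda a: Q[(state, a)])  # exploit
            next_state, reward, done = env.step(action)
            best_next = 0.0 if done else max(
                Q[(next_state, a)] for a in env.actions(next_state))
            # Temporal-difference update toward reward plus discounted future value.
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = next_state
    return Q
```

A learned strategy that earns more than the agent's fair share of rewards then points to a weakness in the protocol's incentive design.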
_b: POOL OPERATION_
In research within this subcategory, the agent learns strategies
for operating a mining pool. In our review, we identified only
one article that fits this subcategory: Haghighat and Shajari
[35] specifically analyze a block withholding game, where
a pool operator can either decide to mine honestly, or attack
another pool by mining (at least with a fraction of its power)
for the other pool, but not actually submitting a solution,
in case a block is found. As a consequence, the revenue of
the attacked pool drops, and it gets less attractive for other,
honest miners to mine as a member of this attacked pool. The
authors train an agent, which represents a pool operator, with
reinforcement learning methods. It decides between mining
honestly and attacking other pools at each step in the game.
One insight from the results is that it may be more likely than
widely assumed that one pool operator at a certain point in
time was in control of more than 51 % of Bitcoin’s network
mining power [35].
2) SMART CONTRACT SECURITY
Articles within this category aim to protect users from interacting with insecure or malicious smart contracts. Two subcategories with different approaches exist in the literature: First,
the analysis and detection of such smart contracts. Second,
the active interaction with such smart contracts in order to
manipulate and invalidate them.
_a: DETECTION_
Three articles in this subcategory aim to detect security issues
in smart contracts using data analysis techniques. As the
smart contract opcode is usually published on the distributed
ledger, two articles [37], [38] analyze the opcode with neural
networks. Since a compiled opcode is challenging for humans
to read and understand, Kim et al. [37] use neural networks
to estimate the functionality of a given smart contract (e.g.,
whether it is intended for a marketplace or for gaming).
Tann et al. [38], in contrast, aim at detecting security
vulnerabilities in smart contracts. To do so, they run long
short-term memory neural networks on a sequence of
opcodes. In a performance comparison with a symbolic analysis approach, the machine-learning-based analysis method
can outperform the formal verification-based method with
regard to classification accuracy and speed. Another article
by Camino et al. [36] aims at detecting honeypot smart contracts. These types of smart contracts appear to contain free,
withdrawable funds. However, once a user aims to invoke
the honeypot contract to redeem free tokens, the honeypot
actually does not release the funds, resulting in the user
losing funds that they may have used to invoke the honeypot
contract. Contrary to the other articles in this subcategory,
Camino et al. [36] do not use opcode analysis techniques, but
analyze the smart contract metadata (i.e., values derived from
related transactions and fund flows) and off-chain data (i.e.,
the availability of a source code and the number of source
code lines). In this way, data science techniques can classify
more than 85 % of unseen smart contracts correctly. This
approach especially works well in cases where there was a
minimum of smart contract on-chain activity, and, as a result,
on-chain metadata is available.
_b: MANIPULATION_
The only article identified within this subcategory [39] uses
an active reinforcement learning approach to invalidate criminal smart contracts. Criminal smart contracts can be used
for illegal activities, such as selling confidential information [46]. In this research, the agent learns to manipulate
the contracts’ data feed and thereby successfully invalidates
a substantial share of the studied criminal smart contracts.
While doing so, the system aims at invalidating these given
smart contracts, but not at detecting possible criminal smart
contracts in the first place.
_B. AUTOMATED REFEREE AND GOVERNANCE_
Another small group of articles uses AI for DLT-based system governance. Dinh and Thai [13] present the vision that
people, devices, and smart contracts record transactions on
a distributed ledger. An AI can then solve potential disputes
of events happening on- or off-chain, and record the results
on a distributed ledger. This automated arbitration could be
data-driven, unbiased, and, as a result, more consistent, justified, and accepted than arbitrations today [47]. In extant
literature, articles toward this vision appear to be in an early
stage and fall within two categories: AI-based DLT protocol
governance, and AI-based smart contract governance.
1) DLT PROTOCOL GOVERNANCE
This category currently consists of two articles.
Lundbæk et al. [40] propose an adjusted proof-of-work-based
consensus mechanism. Only a subset of nodes participates
in the required hash calculation and the DLT protocol uses
ML to regularly update system governance parameters, such
as the ideal number of miners or the level of mining difficulty. In this scenario, AI is tightly integrated with the DLT
protocol. However, the authors do not discuss in detail the
functionality and security aspects of their ML-based governance. Gladden [41], on the other hand, discusses ethical
aspects of cryptocurrencies governed by an AI system.
The author claims that such a system can positively influence
the ethos and values of societies. However, the article has
a sociotechnical focus and does not describe the technical
architecture of the system.
2) SMART CONTRACT GOVERNANCE
Only one article in our review fits this category. Liu et al. [42]
propose to implement voting mechanisms for participants in
smart contracts to alter smart contract parameters. As such,
these voting mechanisms can govern smart contracts in complex situations and can alter the smart contracts towards an
adaptive behavior. An ML-based system assists the users
in voting, based on their past voting behavior. As a result,
the ML-based system pre-chooses the users’ selections and
eases their tasks. In this scenario, the ML-based system runs
outside of the actual DLT system.
_C. PRIVACY-PRESERVING PERSONALIZATION_
Many internet platforms nowadays collect data about their
users and apply AI-based recommender systems to personalize their content. Examples include Facebook, Netflix,
or Taobao in China. A recommendation by this AI-based
system thereby is not only influenced by the individual user’s
data, but also by all the other users’ data. Netflix, for example,
can recommend new movies to users based on their individual
and other users’ watching behavior. However, this comes at
the risk of impeding users’ privacy. In several cases, private
data from such platforms has been leaked to the public [48]
or misused [49]. A group of articles envisions AI-based
personalization for DLT-based data sharing platforms. This
could, for example, include a social network built on a DLT
infrastructure [13]. DLT can thereby serve as a transparent
trail of data flows, and give users control over their data.
Two categories of articles with different approaches toward
designing such systems exist in the literature. First, a category
with articles that aim to use local computation. Second, a
category with articles that use distributed ledgers based on
hardware-assisted Trusted Execution Environments (TEEs).
1) LOCAL COMPUTATION
An ML model inference intended to personalize content on
a platform requires data about the user to personalize its
recommendation for them. If such a model inference is executed locally, there is no need for the user to share their
data, while they can still get a personalized recommendation.
However, if no user shares their data with the platform,
it is challenging for a platform operator to train ML models
using traditional methods. Therefore, articles in this category
describe systems that use federated learning. With this distributed ML technique, an ML model gets trained locally on a
user’s device and only ML model parameter updates leave the
device and are shared with the platform or potentially other
parties. Model parameter updates from many users are then
aggregated into a new model. Federated learning is already
successfully applied on a large scale on smartphones, for
example, to predict the next word a user may want to type
on their keyboard [50]. The distributed ledger provides an
infrastructure to share data on an auditable and immutable
ledger. Some articles describe systems that only store hashes
of model gradient updates or hashes of aggregated models on
the distributed ledger, in order to save storage space on the
ledger and to preserve data confidentiality [51]. Other articles
describe systems that store the complete gradient updates
and aggregated models themselves on the ledger [52]. Initial
applications are illustrated for the Internet of Things [43] and
the taxi industry [44].
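To make this workflow concrete, the following minimal sketch illustrates federated averaging with a linear model: raw data never leaves the simulated devices, only parameter updates do, and a digest of each update could be anchored on the ledger. The model, learning rate, and function names are illustrative assumptions, not drawn from any of the reviewed systems.

```python
import hashlib
import numpy as np

def local_update(weights, X, y, lr=0.1):
    """One local gradient step on a user's private data (linear model,
    squared loss). Only the resulting weight delta leaves the device."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return -lr * grad

def aggregate(weights, updates):
    """Server-side federated averaging of the received updates."""
    return weights + np.mean(updates, axis=0)

rng = np.random.default_rng(0)
weights = np.zeros(3)
for _ in range(50):                         # communication rounds
    updates = []
    for _ in range(5):                      # simulated user devices
        X = rng.normal(size=(20, 3))        # private local data
        y = X @ np.array([1.0, -2.0, 0.5])  # hidden ground truth
        updates.append(local_update(weights, X, y))
    # a system as in [51] would anchor only these digests on the ledger
    digests = [hashlib.sha256(u.tobytes()).hexdigest() for u in updates]
    weights = aggregate(weights, updates)
print(weights)  # approaches [1.0, -2.0, 0.5]
```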
2) TRUSTED EXECUTION ENVIRONMENTS
The second category uses TEE-based DLTs. TEEs are typically located on a CPU and provide a special enclave for
secure computation. In this enclave, applications are executed
such that other applications executed on the same CPU but
outside the enclave cannot interfere with the state or control
flow of the application shielded by the TEE. As a result,
the enclave appears as a black box to the outside and secures
the application running inside. Furthermore, the TEE hardware
can generate a proof that a specific application is executing
on this trusted hardware, which remote parties can verify.
This feature is called remote
attestation. Extant research uses DLT protocols to coordinate
TEEs, thus enabling confidential smart contracts that execute
inside the TEE. Since the TEE ensures data confidentiality,
this category does not necessarily require local computation.
Initial articles aim at providing solutions for the medical industry [11], [45]. To further increase data security, some articles
describe systems that combine TEEs with federated learning [45] or differential privacy [11]. Put simply, differential
privacy is a mechanism that adds randomly distributed noise
to a set of data points. This protects the information privacy
of users that provide data. As the random noise has zero
mean (or a predefined mean), an aggregator can still draw
meaningful conclusions from the aggregated data set without
threatening anyone’s information privacy.
**V. FUTURE RESEARCH AGENDA ON AI FOR DLT**
In this section, we present our analysis of future research
opportunities on the advancement of DLT using AI. Again,
we use the categories proposed by Dinh and Thai [13] to
structure the future research agenda. We summarize our findings in Table 3.
_A. SECURE DLTS_
Dinh and Thai [13] have drawn a futuristic picture with far-reaching real-time analysis and decision possibilities of an
AI as part of the DLT system. Current AI-based systems,
however, do not provide the security and robustness
guarantees necessary to govern a DLT system. More precisely,
while AI-based systems can detect software vulnerabilities,
they cannot (yet) guarantee that all existing security vulnerabilities have been identified. If an AI detects no vulnerability,
this does not necessarily mean that there are none [10]. This
open research problem of AI robustness is also highly relevant
in other AI application domains, such as autonomous driving,
where initial results may be transferable to software and DLT
systems’ security [53]. In general, methods from the field of
explainable AI (XAI) [19] could help to better understand
how reliable an AI-based decision is. We, therefore, see this as
a fruitful field for future, foundational research. While AI is a
promising technology to detect DLT security vulnerabilities,
further research that aims to resolve these vulnerabilities and
to build secure DLT-based systems is required in parallel [35].
Subsequently, we present future research opportunities in
the two categories identified in our review in section IV.
Furthermore, we see possible research opportunities in the
convergence of these two categories, which we also present.
1) DLT PROTOCOL SECURITY
The security analysis of DLT protocols using AI is currently dominated by reinforcement learning. Extant literature analyzes game-theoretic incentive mechanisms in widely
adopted protocols such as Bitcoin or Ethereum. Possible
future research opportunities include the expansion of previous work with settings such as partially cooperating miners
[10], or the analysis of strategies other than selfish mining [32]. In our view, beyond analyzing the fairness of mining
settings and its DLT security implications, another
interesting field of AI-based security analysis is the DLT
source code itself. This is a highly relevant field in practice.
For example, the deployed and widely-used cryptocurrency
Zcash, which uses a novel form of zero knowledge proof
cryptography, has been subject to a vulnerability that would
have allowed an attacker to generate an infinite amount
of cryptocurrency tokens [54], thus potentially diluting the
assets of other users. Extant literature already uses AI-based
analysis methods for software bug identification in general
[55], and smart contract opcode analysis in particular [37],
[38], thereby often outperforming formal verification-based
methods [38]. We, therefore, see a promising future research
avenue in further developing and applying such AI methods
for analyzing DLT protocol code security.
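As a rough illustration of how reinforcement learning can probe mining incentives, the following toy sketch trains a tabular Q-learning agent in a drastically simplified mining environment. The state encoding, transition model, and reward are our own illustrative assumptions and do not reproduce the environments of [10] or [32]; in this simplification, withheld blocks are always orphaned once the honest network finds a block.

```python
import numpy as np

rng = np.random.default_rng(7)
ALPHA = 0.35                 # attacker's share of total hash power (assumption)
N_STATES, N_ACTIONS = 4, 2   # state: length of the withheld private lead, 0..3
PUBLISH, WITHHOLD = 0, 1
Q = np.zeros((N_STATES, N_ACTIONS))
lr, gamma, eps = 0.1, 0.95, 0.1

def step(lead, action):
    """Toy transition: withheld blocks are orphaned if the honest
    network extends the public chain first (a strong simplification)."""
    if action == PUBLISH:
        return 0, float(lead)          # release private chain, earn rewards
    if rng.random() < ALPHA:           # attacker mines the next block
        return min(lead + 1, N_STATES - 1), 0.0
    return 0, 0.0                      # honest block orphans the private lead

lead = 0
for _ in range(200_000):
    a = rng.integers(N_ACTIONS) if rng.random() < eps else int(np.argmax(Q[lead]))
    nxt, r = step(lead, a)
    Q[lead, a] += lr * (r + gamma * Q[nxt].max() - Q[lead, a])
    lead = nxt

print(np.argmax(Q, axis=1))  # learned publish/withhold policy per lead
```

In this over-simplified model with ALPHA below 0.5, the agent should learn to publish any non-empty lead immediately, i.e., honest mining is rational; the cited works analyze far richer settings in which selfish strategies can pay off.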
2) SMART CONTRACT SECURITY
Extant literature in this subcategory mostly aims to protect users from insecure [37], [38] or malicious [36], [39]
smart contracts. Practical experience, however, has shown
that developers struggle to develop secure smart contracts in
the first place [56]. Therefore, an interesting perspective for
future research would be in the field of using AI to assist
developers in developing secure smart contracts. This could
include the development of early security warning tools for
developers that automatically check developed code for security vulnerabilities. Recent work in other fields of software
engineering suggests that such AI-based systems may be
feasible [55], [57].
**TABLE 3. Overview of future research opportunities in the field of AI for DLT.**
Beyond this suggestion to broaden the scope
of future research, we see further research opportunities in
refining and applying the methods identified in our review.
Reinforcement learning, which has already shown promising results for DLT protocol security analysis in simulation
settings [10], [32], [35], appears to be a promising method to
analyze the security of game-theoretically complex smart
contracts [39] in simulations. This could include, for example, decentralized token exchanges.
The other identified approach is to analyze security vulnerabilities in smart contracts by supervised learning. This
appears to be a promising field of AI application, as AI-based
methods have outperformed classical, formal verification-based methods [38]. Future research could expand this work
by considering further classes of smart contracts [36] and
further details, such as different smart contract compiler versions [37]. Furthermore, future research that seeks to support
software developers could aim at not only detecting security
vulnerabilities in a smart contract, but also at localizing this
vulnerability (e.g., by indicating which portions of the bytecode cause the vulnerability [38]) or even
at providing suggestions to fix it [60], [61].
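To illustrate the supervised-learning approach at a high level, the following sketch classifies contracts from bag-of-opcode features; the tiny hand-made dataset, the feature choice, and the labels are purely illustrative assumptions and far simpler than the sequence models of [37], [38].

```python
from collections import Counter
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny, hand-made toy dataset: opcode traces with a vulnerability label.
contracts = [
    ("PUSH1 PUSH1 CALL SSTORE", 1),      # toy pattern: state write after CALL
    ("PUSH1 CALLVALUE SSTORE PUSH1", 0),
    ("CALL PUSH1 SSTORE CALL", 1),
    ("PUSH1 ADD SSTORE STOP", 0),
]

def featurize(opcodes):
    """Bag-of-opcodes frequency features."""
    return Counter(opcodes.split())

X = DictVectorizer().fit_transform([featurize(c) for c, _ in contracts])
y = [label for _, label in contracts]

clf = LogisticRegression().fit(X, y)
print(clf.predict(X))  # should reproduce the toy labels
```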
3) INTERACTION OF DLT PROTOCOLS WITH SMART
CONTRACTS AND SECURITY IMPLICATIONS
In addition, we see a convergence of these two categories—
AI for DLT protocol security, and AI for smart contract
security—as a promising avenue for future research. Recent
research has investigated the extent to which game-theoretical
aspects in DLT protocols and smart contracts influence each
other and can lead to unfair conditions for regular users of the
overall system [58]. For example, miners who may participate
in the trading of assets on blockchain-based decentralized
exchanges have the authority to decide which transaction is
included in a block and which one is not. This can incentivize
other users to pay higher transaction costs for these exchange
transactions than they would without such miner participation.
Prior research considers this as an unfair setting [58]. The
outlined scenario is only one example of the complex interplay of smart contracts and DLT protocol incentives, which
is also relevant in decentralized applications other than token
exchanges [62], [63]. As it appears to be an even more complex topic than the standalone security of DLT protocols or
smart contracts, an AI-based system could provide valuable
insights. Prior research uses reinforcement learning for game-theoretic analyses of DLT protocols and smart contracts separately; therefore, reinforcement learning also appears to be a
promising method to analyze the interplay between the two. The
results could be used to support the development of fair and
secure DLT-based systems.
_B. AUTOMATED REFEREE AND GOVERNANCE_
In general, both identified categories—AI for smart contract and for DLT protocol governance—face the reality of
current AI’s capabilities. To automatically make far-reaching
decisions on a distributed ledger with a potential influence
on financial or data confidentiality circumstances, most of
today’s AI’s robustness and explainability guarantees are
not strong enough. The robustness guarantees of many
modern AI-based systems are weak [64] compared to the
complexity of the actions that agents perform when interacting with DLT-based systems. Therefore, in our view,
breakthroughs in the robustness, explainability, and ultimately security of AI-based systems are required before
they can automatically govern DLT protocols and smart
contracts on a large scale. However, in the case of human-in-the-loop-based AI smart contract governance [42], an AI
may assist a human decision maker who can intervene at
any time in case the AI-based system faces and detects an
irregular situation. This human-in-the-loop model, therefore,
appears to be practical with today's AI technology and a good starting point for future
research [42].
_C. PRIVACY-PRESERVING PERSONALIZATION_
DLT-based platforms for data sharing are starting to get
more attention in research but are not deployed and scaled
in practice yet. As such, research on such platforms and
their AI-based personalization is still in an early stage.
Articles in the group of privacy-preserving personalization
using AI for DLT often incorporate aspects from multiple other groups, such as decentralized computing for AI,
or secure data sharing and marketplaces for AI. In our
view, developments in the field of privacy-preserving personalization depend on advancements in the underlying
secure computation and privacy-preserving data sharing
technologies.
An interesting future research opportunity that is practical with today's technologies is the further development
and deployment of DLT-based data sharing platforms, and
the deployment of AI on top of these platforms. TEEs appear to be
a promising technology to realistically build such systems
today, due to their relatively small computational overhead
when compared with other secure computation technologies,
as well as their ability to enforce policies [65]. TEEs can be
combined with federated learning [45] or differential privacy
techniques [11] for stronger data confidentiality guarantees.
A reasonable avenue for deployment appears to be within
consortia of (at least partially) trusting participants. This
could, for example, include a consortium of hospitals aiming
to deploy personalized, AI-based treatments while complying
with strong privacy requirements [45].
**VI. REVIEW ON DLT FOR AI**
Drawing, again, on the general distinction between AI for
DLT and DLT for AI proposed by Dinh and Thai [13],
this section describes our findings in terms of how extant
research has applied DLT for AI. We identified four different groups of use contexts and summarize our findings
in Table 4.
_A. DECENTRALIZED COMPUTING FOR AI_
Due to the large amount of processed data, ML model training is often computationally expensive. Nowadays, graphics processing units (GPUs) are typically used to train ML
models, as they provide more computational power than CPUs for most
ML training tasks. For complex optimization tasks, many
GPUs may be necessary to train ML models in a reasonable
amount of time [66]. However, many GPUs and CPUs in
computers around the world are only slightly loaded or even
unused. Previously, distributed computing has been applied
to utilize these unused resources in communities at scale, for
example, to perform protein folding calculations for medical
research [67]. Articles within this group describe systems
that use distributed computation methods to train AI models.
DLT can serve as a trail to organize this decentralized computation, as well as to provide a ledger for rewards paid in
cryptocurrency [13]. Looking at the literature, three different
approaches for DLT-enabled distributed computing for AI
exist: first, computation within the DLT protocol; second,
computation in smart contracts; and third, computation
outside the distributed ledger.
1) DLT PROTOCOL
Especially public, permissionless distributed ledgers, such
as the Bitcoin or Ethereum main network, are nowadays
secured by a proof-of-work-based consensus mechanism.
Proof-of-work refers to the search for a number such that the hash of
a blockchain block candidate, which includes that number,
is below a certain threshold. This mechanism is required to
limit the number of nodes that can propose a new block, and
ultimately, to ensure blockchain security. By adjusting the
threshold, the network can tune the difficulty of this search
problem, and thus the average time to find a new block.
The search problem is computationally expensive. However,
the verification of a potential solution is computationally
inexpensive.
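The following sketch shows this search problem in its simplest form; encoding the threshold as a required number of leading zero hex digits is a common simplification and an assumption on our part.

```python
import hashlib

def mine(block_header: bytes, difficulty: int) -> int:
    """Search for a nonce such that the block hash falls below a
    threshold (here: a required number of leading zero hex digits)."""
    nonce = 0
    while True:
        digest = hashlib.sha256(block_header + nonce.to_bytes(8, "big")).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce  # expensive to find ...
        nonce += 1

def verify(block_header: bytes, nonce: int, difficulty: int) -> bool:
    """... but cheap for every other node to verify."""
    digest = hashlib.sha256(block_header + nonce.to_bytes(8, "big")).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = mine(b"block candidate", difficulty=4)
print(nonce, verify(b"block candidate", nonce, difficulty=4))
```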
The proof-of-work mechanism consumes large amounts of
energy for protocols such as Bitcoin, which some authors
view critically [68]. At the same time, the calculated hashes
cannot be meaningfully used for purposes other than securing
the network. Some authors therefore propose a proof-of-useful-work mechanism, in which miners train an ML model
instead of finding a number for a certain block hash. The
block which contains an ML model with the least test error
is then accepted as the new block by the other nodes [69].
For such a system to work, several challenges have to be
overcome. On the one hand, the computational difficulty of
optimizing an ML model is hard to adjust [70]. This can
also cause blockchain forks to occur frequently.
On the other hand, the proof-of-useful-work mechanism for
AI requires the secure provision of an ML model architecture,
training data, and test data. Extant literature that describes
such systems uses selected groups of participants and committees with reputation mechanisms to govern this data
provision [69]–[71].
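A highly simplified sketch of such a block-selection rule follows; the linear models, data handling, and scoring are illustrative assumptions and ignore the open challenges around difficulty adjustment and secure data provision discussed above.

```python
import numpy as np

rng = np.random.default_rng(1)
X_test = rng.normal(size=(200, 3))
y_test = X_test @ np.array([1.0, -2.0, 0.5])  # committee-provided test data

def test_error(weights):
    """Test error of the ML model attached to a candidate block."""
    return float(np.mean((X_test @ weights - y_test) ** 2))

# Each miner attaches the weights of the model it trained to its block
# candidate; nodes accept the block whose model has the least test error.
candidate_blocks = {f"miner_{i}": rng.normal(size=3) for i in range(5)}
winner = min(candidate_blocks, key=lambda m: test_error(candidate_blocks[m]))
print(winner, test_error(candidate_blocks[winner]))
```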
**TABLE 4. Overview of concepts in the literature on DLT for AI.**
2) COMPUTATION IN SMART CONTRACTS
In some DLT networks, such as Ethereum, a virtual machine
can execute Turing-complete smart contracts. As such, DLT
could provide a substrate for ML model training [72].
However, traditional ledgers like Ethereum do not support
the intensive computations that would be necessary to perform
ML model training [74]. To overcome this limitation, some
articles propose to extend distributed ledger smart contract
execution with TEEs such that these ledgers can support
computationally more intensive smart contracts. This TEE
can then train simple ML models [11], [59].
3) OFF-CHAIN COMPUTATION
Another subcategory of articles performs computations not
on the distributed ledger, but off-chain. Unless sophisticated cryptographic techniques are used [79], the integrity of the
computation cannot be guaranteed. DLT-based federated
learning systems, for example, train an ML model on the
edge [43], [44], [52], [73]. The main motivation of most
articles focusing on DLT-based federated learning, however,
is not the usage of decentralized edge computing resources
per se. Instead, the main motivation is secure data sharing.
Some of the DLT-based federated learning articles also propose financial rewards for the combination of model training and data sharing [80], [81]. Besides federated learning,
another article by Lu et al. [74] proposes a crowdsourced
computation approach to offload heavy computation tasks
from a blockchain, such as ML model training. Multiple
offloaded computing engines audit each other’s work and
game-theoretic incentive mechanisms are used to build a
protocol. The authors also present a security analysis of their
protocol.
_B. SECURE DATA SHARING AND MARKETPLACES FOR AI_
In addition to the increasingly available computing power,
another fundamental reason for the recent advancement in
AI is the strong growth of available and digitized data.
ML-based systems generally perform better the more data
they are trained on, for example, with regards to classification
accuracy [21], [82]. Some authors view recent developments in
IT, in which a few companies control large amounts of
personal data, critically, and propose DLT-based data markets
to democratize such data silos [13], [83]. As such, a group
of articles proposes solutions based on DLT to build data
sharing infrastructures, thus enabling the deployment of AI.
These articles differ in their technical approaches toward
designing such systems. Articles which incorporate advanced
privacy-preserving mechanisms in their systems mostly focus
on the health care industry. We suspect the reason for this
is the strong confidentiality requirements for the handling of
health data.
1) SMART CONTRACTS
Two identified articles use traditional smart contracts on
Ethereum to build a data sharing infrastructure and marketplace. These articles differ in their focus, either on sharing
training data itself [75], or ML models [76]. The
sharing of training data, in particular, requires the storage and handling
of large files. Therefore, Özyilmaz et al. [75] connect their
system to the decentralized file protocol SWARM and manage data access rights through the blockchain. These marketplaces allow the payment of data providers with units of
the cryptocurrency Ether. To incentivize participation with
high quality data, Harris and Waggoner [76] propose staking
mechanisms in which malicious participants sharing spam
models lose their stake. Such systems can be applied in the
Internet of Things industry in general [75].
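A minimal sketch of such a staking rule follows; the stake amounts, the quality check, and the slashing policy are illustrative assumptions rather than the concrete mechanism of [76].

```python
stakes = {"alice": 10.0, "bob": 10.0}  # deposits locked on the ledger

def evaluate_model(provider: str, model_score: float, threshold: float = 0.5):
    """Slash the stake of providers whose shared model scores below a
    quality threshold (e.g., a spam model); reward the others."""
    if model_score < threshold:
        slashed = stakes[provider]
        stakes[provider] = 0.0          # malicious participant loses stake
        return f"{provider} slashed {slashed}"
    stakes[provider] += 1.0             # reward for a useful contribution
    return f"{provider} rewarded"

print(evaluate_model("alice", model_score=0.9))  # honest contribution
print(evaluate_model("bob", model_score=0.1))    # spam model
print(stakes)
```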
2) TRUSTED EXECUTION ENVIRONMENTS
TEEs enable the computation of relatively intensive tasks,
while preserving data confidentiality and integrity throughout
the computation. Therefore, systems described in some articles execute computationally intensive smart contracts off-chain in a TEE [11], [45], [59], [84], [85]. These articles
use the blockchain concept with different DLT designs [3].
Several articles use the Oasis blockchain [11], [84], [85].
Hynes et al. [11], for example, present a privacy-preserving
data market that provides solutions for a large share of the
ML pipeline. Through smart contracts, data providers can
define policies to share their data. These policies, for example, include requests for a reward and differential privacy
requirements. Data consumers can choose to fulfill these
policies in order to train an ML model on the providers’
data. Since a TEE ensures confidential computation, the training data is not leaked and only data consumers can
get access to ML model inference. The ML model itself is
shielded inside a smart contract and inference executions
count toward the provider’s policies, which increases data
provider’s privacy against potential inference attacks. This
type of attacks aims at executing the ML model in order to
extract the underlying training data or the model itself [86].
Other articles, which describe a system with similar
capabilities, use an Ethereum-based blockchain [45], [59].
As a special feature, a virtual machine in the enclave
can allow the training of proprietary AI models while
not compromising the training data security [59]. Passerat-Palmbach et al. [45] incorporate federated learning into the
TEE-enabled blockchain. This protects data providers from
potential TEE side channel attacks and, therefore, further
reduces data privacy risks.
3) FEDERATED LEARNING
A large subcategory of articles describes systems that use
federated learning with distributed ledgers that are not TEE-enabled [43], [44], [51], [52], [81]. DLT thereby serves as
a provenance record of data. This data can describe a large
share of the ML pipeline: training data origin, training
data, ML model modifications, or testing data [51]. In most
systems described in these articles, the data itself is not stored
on the blockchain, but hashes of the data [51]. In some cases,
the systems use relatively simple ML models and store plain
model updates on the blockchain [52]. As a result, every
participating server can audit and compute the aggregated ML
model weight updates. A blockchain-based solution can be
well-suited for certain use cases and introduces only a small
performance overhead of approximately 5% to 15%, while enabling
transparency and accountability [52]. Some systems go further and replace the traditionally centralized aggregator with
a smart contract-based one [44].
As federated learning is potentially vulnerable to inference
attacks [86], some articles describe systems that further use
differential privacy techniques [43], [73] to increase data confidentiality guarantees. Further systems incorporate financial
incentives for participants sharing model updates [43], [81].
_C. EXPLAINABLE AI_
Complex ML models, such as deep neural networks, are
nowadays often used in a black box manner [19]. This means
that users or even system creators do not know
how these models arrive at a certain prediction. However,
obtaining this information can be desirable in certain cases,
for example, to verify the system’s robustness or to comply
with legislation [19]. Dinh and Thai [13] outline DLT as
a technology to increase AI-based systems’ explainability.
Their vision is that DLT provides an immutable trail to track
the data flow of AI-based systems.
Looking into the extant literature on DLT for AI,
the explainability of the black-box model itself is not the focus
for most researchers. The DLT for XAI literature mainly
covers data provenance or computational integrity aspects
for model training or inference. Sarpatwar et al. [51] design
a DLT-based federated learning system for trusted AI and
present five requirements of blockchain for trusted AI. First,
guarantees of AI model ownership and tracking of use.
Second, the confidential sharing of AI assets, as
they are often created using sensitive data. Third, auditability
of the AI training process. Fourth, the traceability of
the AI pipeline. Fifth, record keeping in order to recognize
potential bias and model fairness issues.
In general, several extant articles aim to use DLT-based
federated learning to ensure such aspects of AI model
explainability and trustworthiness [51], [52], [76]. Other articles use TEEs to also cover the model inference with their
system, and thus, provide stronger explainability and restrictions in order to track data flows for data providers [11],
[45], [59]. Hynes et al. [11], for example, use XAI methods
to compensate data providers relative to the influence their
data had on an ML model's inference.
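As a rough illustration of influence-based compensation, the following sketch estimates each provider's contribution by leave-one-out retraining; this simple influence measure and the payout rule are our own illustrative assumptions, not the XAI method used in [11].

```python
import numpy as np

rng = np.random.default_rng(3)
true_w = np.array([1.0, -2.0])
data = {}
for name in ("provider_a", "provider_b", "provider_c"):
    X = rng.normal(size=(30, 2))            # each provider's shared data
    data[name] = (X, X @ true_w + rng.normal(scale=0.1, size=30))

X_test = rng.normal(size=(100, 2))          # held-out evaluation data

def fit(datasets):
    """Least-squares fit on the pooled data of the given providers."""
    X = np.vstack([d[0] for d in datasets])
    y = np.concatenate([d[1] for d in datasets])
    return np.linalg.lstsq(X, y, rcond=None)[0]

def test_loss(w):
    return float(np.mean((X_test @ w - X_test @ true_w) ** 2))

full_loss = test_loss(fit(list(data.values())))
for name in data:
    rest = [d for n, d in data.items() if n != name]
    influence = test_loss(fit(rest)) - full_loss   # leave-one-out influence
    print(name, "payout proportional to", max(influence, 0.0))
```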
_D. COORDINATION OF DEVICES_
A final group of articles aims to coordinate devices with DLT.
These devices, such as Internet of Things devices, generate
data, which AI-based systems can analyze. A distributed
ledger can serve as a registrar for these devices, and store different types of data that these devices generate. This includes,
for example, metadata (such as hashes), or the generated
data itself. In extant literature, articles that use DLT also
connect devices to the distributed ledger. Using cryptographic
principles, transactions on a distributed ledger (such as smart
contracts) are signed, which ensures that only legitimate participants can perform certain transactions. Kang et al. [77]
propose to use DLT for reputation management in a federated
learning setting. Beyond identification through asymmetric
cryptographic mechanisms on a distributed ledger, TEEs provide mechanisms to self-identify and remote-attest [78], such
as physically unclonable functions [87].
**VII. FUTURE RESEARCH AGENDA ON DLT FOR AI**
In this section, we present our analysis on open, future
research fields on DLT for AI. Again, we use the categories
based on Dinh and Thai [13] to structure the future research
agenda. We present an overview of our results in Table 5.
_A. DECENTRALIZED COMPUTING FOR AI_
As we identified three categories of approaches using DLT for
decentralized computing for AI, the future research opportunities in these approaches differ. Subsequently, we present our
identified research opportunities in all of these.
1) DLT PROTOCOL
First, extant research presents blockchain designs with proof-of-useful-work mechanisms that could be used for ML model
training. Proof-of-useful-work itself has a long history with
several blockchain-based cryptocurrency systems deployed
[88], for example, for number theoretic research [89]. While
this prior work is interesting from an academic point of view,
the deployed cryptocurrencies have so far had little practical relevance based on their market capitalization. Some blockchain
designs, such as the public Ethereum mainnet, actually aim
to move away from proof-of-work-based consensus mechanisms in
the future [90].
Despite that, we see promising avenues for future research
on proof-of-useful-work for AI, as well as for actually
deploying and evaluating such systems. On the one hand,
future research could analyze the economics of such systems
and their utility for ML model requesters as well as regular
DLT users. On the other hand, future research could analyze
the practical security guarantees of such systems. Aspects
include the security of incentive mechanisms, as well as the
security of DLT transactions. This is particularly interesting,
since an attacker with large computational power would not
only benefit from rearranging transactions on the ledger, but
also potentially from the useful work itself.
2) COMPUTATION IN SMART CONTRACTS
Extant research has aimed to compute smart contracts
off-chain in TEEs. Out of several approaches to off-chain
computation while ensuring integrity [79], TEEs are interesting due to the relatively high computational power they
provide [91]. TEEs have seen prior research from both fields,
AI and DLT, separately. In the field of AI, extant research
provides frameworks for ML model training and inference
on TEEs [92]. This research has shown that TEEs are in general capable
of simple practical machine learning tasks, such as speech
processing [93], or the processing of small images [91]. In the
field of DLT, researchers have sought to increase the low
performance of smart contracts using TEEs while ensuring
confidentiality [78].
From our point of view, TEEs generally provide a promising trade-off between computational performance and confidentiality. Prior research that provides an ML pipeline with
TEE-enabled DLTs does not evaluate its performance [11],
[45], [59]. As such, a natural extension would be a practical
performance evaluation. This could help researchers aiming
to deploy such systems, for example, for privacy-preserving
personalization. By knowing what amounts of data and what
complexity of computation the system can handle, other
researchers could select use cases and deploy and evaluate
TEE-enabled DLTs for their use cases.
Another avenue for future research is the further development of computationally powerful TEEs, and to potentially
even enable TEEs on GPUs [106], which are, for many
ML model training tasks, better suited than CPUs. At the
same time, research has identified security vulnerabilities in
TEEs and potential security measures [94]. Therefore, future
research could aim to develop secure TEEs and mechanisms
to prevent attacks on TEEs, as well as study the feasibility
and implications of TEEs on distributed ledgers. To prevent
TEE security vulnerabilities, some researchers combine them
with other privacy-enabling technologies, such as federated
learning [45] or differential privacy [11], [91]. The further
analysis of such combined methods with regards to data
confidentiality guarantees, computational overhead, impact
on machine learning quality, and practicality for use cases
would be another promising avenue for further research.
In addition, TEEs with a strong integration into a
blockchain protocol enable new mechanisms for block selection. These mechanisms use special functions of TEEs, such
as the secure generation of random numbers, and are known
**TABLE 5. Overview of future research opportunities in the field of DLT for AI.**
as proof of luck [96] or proof-of-elapsed-time [97]. Future
research could aim to further integrate such mechanisms into
the DLT protocol and evaluate the practical benefits and
system security implications.
3) OFF-CHAIN COMPUTATION
Most presented DLT-based federated learning protocols do
not ensure the integrity of the model training calculations.
As such, we see future research potential for such systems
in at least partially trusting consortia. Federated learning is
already successfully applied in systems with a large number
of users [50]; therefore, future applied research could provide
further promising results when combined with DLT.
A further avenue for future research is the use of game-theoretic mechanisms to incentivize honest computation, for
example, for ML model training [74]. In general, there is
a lot of current research that aims to scale smart contract
executions outside a distributed ledger while maintaining
integrity [98]–[102]. Much of this research is at the conceptual level and does not yet deal in detail with
concrete, computationally intensive applications, such as
the training of ML models.
In this regard, the advancement of cryptographic technologies for secure computation is another avenue for future
research. Homomorphic encryption, for example, enables
data consumers to perform computations on encrypted data.
However, this technology and other technologies (such as
secure multiparty computation or zero knowledge proofs) are
not yet practicable for ML model computations [11], [78],
[84], [91] due to the large computational overhead. If future
research achieves breakthroughs to enable intensive computations with these cryptographic technologies, they would
provide an alternative to TEEs.
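To make the idea of computing on encrypted data tangible, here is a toy Paillier-style example with tiny, insecure parameters; it is purely didactic and says nothing about the performance obstacles just discussed.

```python
from math import gcd
from random import randrange

# Toy Paillier cryptosystem with insecure, tiny parameters (didactic only).
p, q = 11, 13
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)      # lcm(p-1, q-1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)       # decryption constant

def encrypt(m):
    r = randrange(2, n)
    while gcd(r, n) != 1:
        r = randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# Additive homomorphism: multiplying ciphertexts adds the plaintexts.
c = (encrypt(5) * encrypt(7)) % n2
print(decrypt(c))  # 12, computed without ever decrypting the inputs
```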
_B. SECURE DATA SHARING AND_
_MARKETPLACES FOR AI_
Articles within this field already present systems that could
be further developed and deployed for practical use. Future
research could, therefore, evaluate the practicality and user
acceptance of such systems. For this practical deployment,
health care use cases may be particularly suitable, because the
deployment of ML in this context is especially challenging
due to high privacy and security requirements [11], [84], [85].
Beyond the deployment and evaluation, a further avenue
for future research is the rigorous security analysis of such
systems against different types of adversaries. Potential
adversaries could, for example, aim to get rewards for sharing spam data, aim to extract training data by training a
proprietary ML model, or seek to extract training data of a
shared ML model through inference attacks or the shared ML
model itself. Only some articles in our body of literature
study such attacks and security measures to protect the system
[11], [52], [59]. Staking mechanisms, especially, have seen
little attention from researchers so far [76]. Future research
could possibly transfer insights from proof-of-stake-based
DLT consensus protocols [107] toward staking in DLT-based
data marketplaces.
We see differential privacy as a promising technique for
ensuring users’ privacy [43], [73], especially in large datasets
with low dimensionality. In such cases, its negative effect
on the aggregated model quality is small while improving
the individual data sharer’s privacy. Previous research on
differential privacy for ML studies its theoretical implications
on test data sets [16]. In our view, future research could
build on these and study differential privacy’s practicality in
certain vertical use cases for both classic ML (relevant in the
context of DLT; e.g., through TEEs) and federated learning.
Federated learning, for example, can be vulnerable to inference attacks [86]. Differential privacy can help impede the
practical feasibility of such attacks [103].
Articles describing DLT-based federated learning use DLT
as an immutable trail and, in some cases, even for the data
communication and storage. Furthermore, DLT serves as a
ledger for the reward payment using cryptocurrencies. We,
therefore, see plain federated learning approaches as particularly promising for applications in (at least partially) trusting
consortia. TEE-based approaches, however, not only use DLT
for the same reasons, but go further. Smart contracts on a
distributed ledger can enforce policies for both data providers
and data consumers. From this point of view, the TEE-based
approach is more powerful in terms of covering a large
share of the ML pipeline, and ultimately, in ensuring system
security. Accordingly, it may have better utility for future
research that aims to deploy DLT-based secure data sharing
and marketplace systems.
Nevertheless, TEE implementations have a rich history of
security vulnerabilities [108]. A detailed security analysis
of the practical security of TEEs and their implications
for TEE-based DLT systems, possibly with further security
mechanisms in smart contracts, could provide a valuable
contribution to future research. Such an analysis could also
include combined security measures in smart contracts, like
federated learning [45], differential privacy [11], or secure
multiparty computation [11]. We see such combinations
of privacy techniques [65], [84] with TEEs as an interesting avenue for future research, although some of these
cryptography-based privacy techniques need breakthroughs
in practicability to be usable for ML [84].
_C. EXPLAINABLE AI_
In our view, DLT is well-suited as a trail for AI model
metadata and hashes of data generated around the AI model
training and inference phase. As such, DLT can help to enable
XAI applications that aim to describe the black box behavior
of AI models for both off-chain and on-chain [11]
applications. This can increase an AI-based system’s security and, ultimately, enable the deployment of trustworthy
AI [51], [104].
In many DLT-based federated learning articles, only AI
model hashes and metadata [51] or relatively small AI models
[52] are stored on the distributed ledger. TEE-based systems,
on the other hand, can securely write and read external memory while preserving data confidentiality [59]. Thus, we see
TEE-based systems [11], [45], [59] as specifically suited
for large, data intensive ML environments. Future research
could aim to deploy such systems and evaluate the practical
explainability of the AI model.
_D. COORDINATION OF DEVICES_
One of the core functions of DLT is to provide an immutable
ledger where only cryptographically legitimate participants
can perform transactions. The usage of DLT for coordinating devices and participants is therefore essential and basic
at the same time. Reputation management of participants
on a distributed ledger is one field that has only started
to see attention and appears to hold potential for future
research. Kang et al. [77] specifically mention a dynamic
system with variable thresholds as a future research opportunity. With regard to self-identifying TEEs, prior research
already uses DLT to coordinate devices [78], [109]. We consider the further dissemination and use of self-identifying
functionalities through physically unclonable functions [105]
as an interesting future research opportunity. Such self-identification functionalities could potentially be built into
Internet of Things devices, such as sensors or actuators. However, mechanisms for hardware-based authentication have
been identified as insecure in the past [110], [111]. Therefore,
we see an analysis of their practical security guarantees as
another open research problem.
**VIII. DISCUSSION**
_A. PRINCIPAL FINDINGS_
Both fields, AI and DLT, currently experience a lot of hype
within research and practice. Even though our initial database
query listed a few articles that did not clearly explain what AI
or especially what DLT was used for, a substantial number of
extant articles provided profound insights for our review and
future research agenda.
One distinctive aspect of our work is that we do not limit our
analysis to blockchain as the only DLT concept, but also consider
other concepts such as directed acyclic graphs. This decision
was based on the expectation that other DLT concepts with
distinct characteristics might be better suited for some AI
applications than the concept of blockchain [3], [12]. Yet,
only one out of 32 articles within our review considered
DLT concepts other than blockchain in the context of AI
[75]. This result is in line with previous research, which has
called for more research on other DLT concepts [3]. Improving and deploying DLT concepts other than blockchain is,
therefore, a highly interesting and relevant avenue for future
research.
Overall, the framework provided by Dinh and Thai [13]
served as a helpful tool to classify extant literature with regard
to the convergence of AI and DLT. However, several articles
cover aspects from multiple groups in the subsections of AI
for DLT or DLT for AI. This is particularly the case for
articles in our review that cover multiple subsections of DLT
for AI or the subsection privacy-preserving personalization
from the section AI for DLT. We have slightly modified
the framework of Dinh and Thai [13] in two ways. First,
extant literature applies AI for DLT in the context of secure
distributed ledgers, but not for scalable distributed ledgers as
well. A possible reason for this is the lack of AI’s robustness
and security guarantees, as discussed in our future research
agenda. Second, we renamed the subsection _coordination of
untrusting devices_ to _coordination of devices_, because DLT
is not the element that establishes trust in all of the articles.
For example, TEEs can use physically unclonable functions
for remote attestation [87].
In our view, some of the research fields are mature enough
to transfer systems into practice and evaluate their influence
and user acceptance. This includes, for example, the fields of
security analysis of smart contracts or marketplace systems
based on TEE-enabled DLTs. Research particularly focuses on
the application of these marketplaces in the health care industry [11], [45], [59], [84], possibly due to the presence of data
lakes and strong data confidentiality requirements. Therefore,
further research-based deployment and evaluation of DLT-based marketplaces for AI in health care settings may deliver
a promising contribution to their real-world deployment.
At the same time, other research fields, such as AI-based
automated referee and governance for DLT protocols, appear
to require substantial progress in fundamental research
(e.g., robust and secure AI), before transferring scientific
knowledge into practice and the establishing of real-world
systems.
By applying convergence as a theoretical lens to extant literature, we were able to focus our research on innovative articles that closely integrate AI and DLT. Furthermore, we were
able to exclude research that does not closely integrate AI
and DLT, such as AI-based cryptocurrency price prediction
or trading. Drawing on the definition of convergence [30],
many articles on AI and DLT’s convergence fall into the
first phase with cross-scientific research on their integration.
Some articles already pave the way for the second phase
with new platforms arising that could, for example, accelerate health care research [11], [45], [84]. During our review,
we noticed that the concept of convergence has received
relatively little attention from IT researchers in the past. This
came as a surprise to us, as convergence has been a main driver of
IT innovations over recent years [14]. Therefore, we consider convergence a promising theoretical lens to explore
interdisciplinary technological settings.
_B. LIMITATIONS_
Both fields, AI and DLT, are moving very fast and breakthroughs are regularly achieved. In this respect, we cannot
rule out the possibility that in some subfields of the convergence of AI and DLT, future research will achieve innovative
breakthroughs that may enable use cases not identified in
our review or future research agenda. We have taken several steps to minimize the chances of this outcome. First,
our analysis includes ArXiv preprints of research articles
that may be published in journals or presented at
conferences only later. Second, we included insights from foundational
research on the topics of AI, DLT, and secure computation
into our future research agenda. In doing so, we sought to
consider aspects that may not be covered in the reviewed
body of literature but are nevertheless highly relevant in the
individual fields (e.g., the practical security guarantees of
TEEs). Third, we have also included articles that cover little-researched
technologies, often with little practicality today,
but that may see breakthroughs in the future. This includes
Artificial General Intelligence, DLT which is not based on
the concept of blockchain, and cryptographic protocols for
computationally intensive tasks.
**IX. CONCLUSION**
In this research, we investigated the current research status
and future research opportunities on the convergence of
AI and DLT. In order to assess the current state of convergence, we conducted a systematic literature review and
analyzed extant literature through the lens of convergence.
Our findings include several different ways that describe
how AI can advance DLT applications, as well as how DLT
can advance AI applications. In order to develop a future
research agenda, we built on the structure of our literature
review and linked the ongoing research with other research
in the separate fields of AI and DLT, as well as our own
view on future research opportunities. Our results reveal
multiple future research opportunities in this interdisciplinary
field for both theory- and practice-oriented research.
**TABLE 6. Overview of relevant and analyzed papers and their classification into different groups.**
With our article, we contribute to the current state of research
in four ways. First, we expand prior research, which did
not consider DLT concepts other than blockchain in the
integration with AI. Second, we consider both perspectives, AI for DLT and DLT for AI, and the many different concepts of their integration. Third, we bridge the gap
between theory and practice by drawing theoretical conclusions from practical research and outlining future practical research opportunities from theory. Fourth, we describe
how convergence creates innovation in an emerging
field.
This article provides insights for researchers and practitioners interested in deepening their knowledge of
interdisciplinary applications of any of the fields: AI,
DLT, their convergence, or convergence in general. By
providing these insights with an overview of the upcoming convergence of DLT and AI, we contribute to the
development of future innovations in this fast-paced
field.
**APPENDIX**
_A. RELEVANT AND ANALYZED PAPERS_
See Table 6.
**ACKNOWLEDGMENT**
The authors would like to thank Mikael Beyene and Niclas
Kannengießer for taking the time to discuss the structure of
this article.
**REFERENCES**
[1] J. Bughin, J. Seong, J. Manyika, M. Chui, and R. Joshi. Notes From
_the AI Frontier: Modeling the Impact of AI on the World Economy._
Accessed: Mar. 4, 2020. [Online]. Available: https://www.mckinsey.com/
~/media/McKinsey/Featured%20Insights/Artificial%20Intelligence/
Notes%20from%20the%20frontier%20Modeling%20the%20impact%
20of%20AI%20on%20the%20world%20economy/MGI-Notes-fromthe-AI-frontier-Modeling-the-impact-of-AI-on-the-world-economySeptember-2018.ashx
[2] World Economic Forum. Building Block(chain)s for a Better Planet.
Accessed: Mar. 4, 2020. [Online]. Available: http://www3.weforum.
org/docs/WEF_Building-Blockchains.pdf
[3] N. Kannengießer, S. Lins, T. Dehling, and A. Sunyaev, ‘‘Mind the
gap: Trade-offs between distributed ledger technology characteristics,’’
_ACM Comput. Surv., to be published. [Online]. Available: https://_
arxiv.org/abs/1906.00861v1
[4] A. Madani, R. Arnaout, M. Mofrad, and R. Arnaout, ‘‘Fast and accurate
view classification of echocardiograms using deep learning,’’ NPJ Digit.
_Med., vol. 1, no. 1, p. 6, Dec. 2018._
[5] M. Bojarski, D. Del Testa, D. Dworakowski, B. Firner, B. Flepp,
P. Goyal, L. D. Jackel, M. Monfort, U. Müller, J. Zhang,
X. Zhang, J. Zhao, and K. Zieba, ‘‘End to end learning for self-driving cars,’’ 2016, arXiv:1604.07316v1. [Online]. Available: https://
arxiv.org/abs/1604.07316v1
[6] N. Sünderhauf, O. Brock, W. Scheirer, R. Hadsell, D. Fox, J. Leitner,
B. Upcroft, P. Abbeel, W. Burgard, M. Milford, and P. Corke, ‘‘The limits
and potentials of deep learning for robotics,’’ Int. J. Robot. Res., vol. 37,
nos. 4–5, pp. 405–420, Apr. 2018.
[7] S. Nakamoto. Bitcoin: A Peer-to-Peer Electronic Cash System. Accessed:
Mar. 4, 2020. [Online]. Available: https://bitcoin.org/bitcoin.pdf
[8] G. G. Dagher, J. Mohler, M. Milojkovic, and P. B. Marella, ‘‘Ancile:
Privacy-preserving framework for access control and interoperability of
electronic health records using blockchain technology,’’ Sustain. Cities
_Soc., vol. 39, pp. 283–297, May 2018._
[9] L. Davi, D. Hatebur, M. Heisel, and R. Wirtz, ‘‘Combining safety and
security in autonomous cars using blockchain technologies,’’ in Proc.
_Int. Conf. Comput. Saf., Rel., Secur. Cham, Switzerland: Springer, 2019,_
pp. 223–234.
[10] C. Hou, M. Zhou, Y. Ji, P. Daian, F. Tramer, G. Fanti, and A. Juels,
‘‘SquirRL: Automating attack discovery on blockchain incentive mechanisms with deep reinforcement learning,’’ 2019, arXiv:1912.01798v1.
[Online]. Available: http://arxiv.org/abs/1912.01798v1
[11] N. Hynes, D. Dao, D. Yan, R. Cheng, and D. Song, ‘‘A demonstration of
sterling: A privacy-preserving data marketplace,’’ Proc. VLDB Endow_ment, vol. 11, no. 12, pp. 2086–2089, Aug. 2018._
[12] K. Salah, M. H. U. Rehman, N. Nizamuddin, and A. Al-Fuqaha,
‘‘Blockchain for AI: Review and open research challenges,’’ IEEE
_Access, vol. 7, pp. 10127–10149, 2019._
[13] T. N. Dinh and M. T. Thai, ‘‘AI and blockchain: A disruptive integration,’’
_Computer, vol. 51, no. 9, pp. 48–53, Sep. 2018._
[14] S. Jeong, J.-C. Kim, and J. Y. Choi, ‘‘Technology convergence: What
developmental stage are we in?’’ Scientometrics, vol. 104, no. 3,
pp. 841–871, Sep. 2015.
[15] G. Duysters and J. Hagedoorn, ‘‘Technological convergence in the IT
industry: The role of strategic technology alliances and technological
competencies,’’ Int. J. Econ. Bus., vol. 5, no. 3, pp. 355–368, Nov. 1998.
[16] M. Abadi, A. Chu, I. Goodfellow, H. B. Mcmahan, I. Mironov, K. Talwar,
and L. Zhang, ‘‘Deep learning with differential privacy,’’ in Proc. ACM
_SIGSAC Conf. Comput. Commun. Secur. (CCS), 2016, pp. 308–318._
[17] B. Goertzel and C. Pennachin, Artificial General Intelligence. Berlin,
Germany: Springer-Verlag, 2007.
[18] Y. LeCun, Y. Bengio, and G. Hinton, ‘‘Deep learning,’’ Nature, vol. 521,
no. 7553, p. 436, 2015.
[19] W. Samek and K.-R. Müller, ‘‘Towards explainable artificial intelligence,’’ in Explainable AI: Interpreting, Explaining and Visualizing Deep
_Learning. Cham, Switzerland: Springer, 2019, pp. 5–22._
[20] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre,
G. van den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam,
M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner,
I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and
D. Hassabis, ‘‘Mastering the game of go with deep neural networks and
tree search,’’ Nature, vol. 529, no. 7587, pp. 484–489, Jan. 2016.
[21] D. Mahajan, ‘‘Exploring the limits of weakly supervised pretraining,’’ in
_Proc. Eur. Conf. Comput. Vis. (ECCV), 2018, pp. 181–196._
[22] B. W. Silverman and M. C. Jones, ‘‘E. Fix and J.L. Hodges (1951):
An important contribution to nonparametric discriminant analysis and
density estimation: Commentary on Fix and Hodges (1951),’’ Int. Stat.
_Rev., vol. 57, no. 3, pp. 233–238, 1989._
[23] S. J. Russell and P. Norvig, Artificial Intelligence: A Modern Approach.
Upper Saddle River, NJ, USA: Prentice-Hall, 2016.
[24] L. G. M. E. Eykholt and J. Denman. _RChain_ _Architecture_
_Documentation. Accessed: Mar. 4, 2020. [Online]. Available: https://_
buildmedia.readthedocs.org/media/pdf/rchain-architecture/stable/rchainarchitecture.pdf
[25] S. Popov. The Tangle, IOTA Whitepaper. Accessed: Mar. 4, 2020.
[Online]. Available: https://assets.ctfassets.net/r1dr6vzfxhev/
2t4uxvsIqk0EUau6g2sw0g/45eae33637ca92f85dd9f4a3a218e1ec/iota1
_4_3.pdf.
[26] S. Thiebes, N. Kannengießer, M. Schmidt-Kraepelin, and A. Sunyaev,
‘‘Beyond data markets: Opportunities and challenges for distributed
ledger technology in genomics,’’ in Proc. 53rd Hawaii Int. Conf. Syst.
_Sci., 2020, pp. 1–10._
[27] T. Min, H. Wang, Y. Guo, and W. Cai, ‘‘Blockchain games:
A survey,’’ 2019, arXiv:1906.05558v1. [Online]. Available: http://arxiv.
org/abs/1906.05558v1
[28] G. Wood, ‘‘Ethereum: A secure decentralised generalised transaction ledger,’’ Ethereum Project Yellow Paper, vol. 151, pp. 1–32,
Apr. 2014.
[29] E. B. Sasson, A. Chiesa, C. Garman, M. Green, I. Miers, E. Tromer, and
M. Virza, ‘‘Zerocash: Decentralized anonymous payments from bitcoin,’’
in Proc. IEEE Symp. Secur. Privacy, May 2014, pp. 459–474.
[30] C.-S. Curran, S. Bröring, and J. Leker, ‘‘Anticipating converging industries using publicly available data,’’ Technological Forecasting Social
_Change, vol. 77, no. 3, pp. 385–395, Mar. 2010._
[31] E. Karafiloski and A. Mishev, ‘‘Blockchain solutions for big data challenges: A literature review,’’ in Proc. 17th Int. Conf. Smart Technol.
_(EUROCON), Jul. 2017, pp. 763–768._
[32] T. Wang, S. Chang Liew, and S. Zhang, ‘‘When blockchain meets
AI: Optimal mining strategy achieved by machine learning,’’
2019, _arXiv:1911.12942v1._ [Online]. Available: http://arxiv.org/
abs/1911.12942v1
[33] I. Eyal and E. G. Sirer, ‘‘Majority is not enough: Bitcoin mining is
vulnerable,’’ in Proc. Int. Conf. Financial Cryptogr. Data Secur. Berlin,
Germany: Springer, 2014, pp. 436–454.
[34] Y. Sompolinsky and A. Zohar, ‘‘Secure high-rate transaction processing
in bitcoin,’’ in Proc. Int. Conf. Financial Cryptogr. Data Secur. Berlin,
Germany: Springer, 2015, pp. 507–527.
[35] A. Toroghi Haghighat and M. Shajari, ‘‘Block withholding game among
bitcoin mining pools,’’ Future Gener. Comput. Syst., vol. 97, pp. 482–491,
Aug. 2019.
[36] R. Camino, C. Ferreira Torres, M. Baden, and R. State, ‘‘A data
science approach for honeypot detection in ethereum,’’ 2019,
_arXiv:1910.01449v2._ [Online]. Available: http://arxiv.org/abs/1910.
01449v2
[37] Y. Kim, D. Pak, and J. Lee, ‘‘ScanAT: Identification of bytecodeonly smart contracts with multiple attribute tags,’’ IEEE Access, vol. 7,
pp. 98669–98683, 2019.
[38] W. J.-W. Tann, X. J. Han, S. S. Gupta, and Y.-S. Ong, ‘‘Towards
safer smart contracts: A sequence learning approach to detecting security threats,’’ 2018, arXiv:1811.06632v3. [Online]. Available:
http://arxiv.org/abs/1811.06632v3
[39] L. Zhang, Y. Wang, F. Li, Y. Hu, and M. H. Au, ‘‘A game-theoretic method
based on Q-learning to invalidate criminal smart contracts,’’ Inf. Sci.,
vol. 498, pp. 144–153, Sep. 2019.
[40] L.-N. Lundbæk, D. Janes Beutel, M. Huth, S. Jackson, L. Kirk, and
R. Steiner, ‘‘Proof of kernel work: A democratic low-energy consensus
for distributed access-control protocols,’’ Roy. Soc. Open Sci., vol. 5,
no. 8, Aug. 2018, Art. no. 180422.
[41] M. E. Gladden, ‘‘Cryptocurrency with a conscience: Using artificial intelligence to develop money that advances human ethical values,’’ Annales.
_Etyka W Życiu Gospodarczym, vol. 18, no. 4, pp. 85–98, 2015._
[42] S. Liu, F. Mohsin, L. Xia, and O. Seneviratne, ‘‘Strengthening smart
contracts to handle unexpected situations,’’ in Proc. IEEE Int. Conf.
_Decentralized Appl. Infrastruct. (DAPPCON), Apr. 2019, pp. 182–187._
[43] Y. Zhao, J. Zhao, L. Jiang, R. Tan, and D. Niyato, ‘‘Mobile edge
computing, blockchain and reputation-based crowdsourcing IoT federated learning: A secure, decentralized and privacy-preserving system,’’ 2019, arXiv:1906.10893v1. [Online]. Available: http://arxiv.
org/abs/1906.10893v1
[44] P. Ramanan, K. Nakayama, and R. Sharma, ‘‘BAFFLE: Blockchain based
aggregator free federated learning,’’ 2019, arXiv:1909.07452v1. [Online].
Available: http://arxiv.org/abs/1909.07452v1
[45] J. Passerat-Palmbach, T. Farnan, R. Miller, M. S. Gross, H. Leigh Flannery, and B. Gleim, ‘‘A blockchain-orchestrated federated learning architecture for healthcare consortia,’’ 2019, arXiv:1910.12603v1. [Online].
Available: http://arxiv.org/abs/1910.12603v1
[46] A. Juels, A. Kosba, and E. Shi, ‘‘The ring of gyges: Investigating the
future of criminal smart contracts,’’ presented at the ACM SIGSAC
Conf. Comput. Commun. Secur., Vienna, Austria, Oct. 2016, doi:
[10.1145/2976749.2978362.](http://dx.doi.org/10.1145/2976749.2978362)
[47] J. Kang, M. Bennett, D. Carbado, and P. Casey, ‘‘Implicit bias in the
courtroom,’’ UCLA Law Rev., vol. 59, p. 1124, Jun. 2012.
[48] A. Narayanan and V. Shmatikov, ‘‘Robust de-anonymization of large
sparse datasets,’’ in Proc. IEEE Symp. Secur. Privacy (SP), May 2008,
pp. 111–125.
[49] J. Isaak and M. J. Hanna, ‘‘User data privacy: Facebook, Cambridge
analytica, and privacy protection,’’ Computer, vol. 51, no. 8, pp. 56–59,
Aug. 2018.
[50] T. Yang, G. Andrew, H. Eichner, H. Sun, W. Li, N. Kong, D. Ramage,
and F. Beaufays, ‘‘Applied federated learning: Improving Google
keyboard query suggestions,’’ 2018, arXiv:1812.02903v1. [Online].
Available: http://arxiv.org/abs/1812.02903v1
[51] K. Sarpatwar, R. Vaculin, H. Min, G. Su, T. Heath, G. Ganapavarapu,
and D. Dillenberger, ‘‘Towards enabling trusted artificial intelligence
via blockchain,’’ in Policy-Based Autonomic Data Governance. Cham,
Switzerland: Springer, 2019, pp. 137–153.
[52] D. Preuveneers, V. Rimmer, I. Tsingenopoulos, J. Spooren, W. Joosen,
and E. Ilie-Zudor, ‘‘Chained anomaly detection models for federated
learning: An intrusion detection case study,’’ Appl. Sci., vol. 8, no. 12,
p. 2663, 2018.
[53] X. Huang, M. Kwiatkowska, S. Wang, and M. Wu, ‘‘Safety verification
of deep neural networks,’’ in Proc. Int. Conf. Comput. Aided Verification.
Springer, 2017, pp. 3–29.
[54] B. W. Josh Swihart and S. Bowe. Zcash Counterfeiting Vulnerability
_Successfully Remediated. Accessed: Mar. 4, 2020. [Online]. Available:_
https://electriccoin.co/blog/zcash-counterfeiting-vulnerabilitysuccessfully-remediated/
[55] M. Pradel and K. Sen, ‘‘DeepBugs: A learning approach to name-based
bug detection,’’ ACM Program. Lang., vol. 2, pp. 1–25, Oct. 2018.
[56] M. Bartoletti and L. Pompianu, ‘‘An empirical analysis of smart contracts: Platforms, applications, and design patterns,’’ in Proc. Int. Conf.
_Financial Cryptogr. Data Secur. Cham, Switzerland: Springer, 2017,_
pp. 494–509.
[57] M. Allamanis, E. T. Barr, P. Devanbu, and C. Sutton, ‘‘A survey of
machine learning for big code and naturalness,’’ ACM Comput. Surv.,
vol. 51, no. 4, pp. 1–37, Jul. 2018.
[58] P. Daian, ‘‘Flash boys 2.0: Frontrunning in decentralized exchanges,
miner extractable value, and consensus instability,’’ in Proc. IEEE Symp.
_Secur. Privacy (SP), May 2020, pp. 566–583._
[59] S. Zhang, A. Kim, D. Liu, S. C. Nuckchady, L. Huang, A. Masurkar,
J. Zhang, L. P. Karnati, L. Martinez, T. Hardjono, M. Kellis, and
Z. Zhang, ‘‘Genie: A secure, transparent sharing and services platform for
genetic and health data,’’ 2018, arXiv:1811.01431v1. [Online]. Available:
http://arxiv.org/abs/1811.01431v1
[60] V. Raychev, M. Vechev, and E. Yahav, ‘‘Code completion with statistical
language models,’’ ACM SIGPLAN Notices, vol. 49, no. 6, pp. 419–428,
Jun. 2014.
[61] A. Hindle, E. T. Barr, Z. Su, M. Gabel, and P. Devanbu, ‘‘On the naturalness of software,’’ in Proc. 34th Int. Conf. Softw. Eng. (ICSE), Jun. 2012,
pp. 837–847.
[62] S. Eskandari, S. Moosavi, and J. Clark, ‘‘SoK: Transparent dishonesty:
Front-running attacks on blockchain,’’ in Financial Cryptography and
_Data Security. Cham, Switzerland: Springer, 2020, pp. 170–189._
[63] A. Judmayer, ‘‘Pay-to-win: Incentive attacks on proof-of-work
cryptocurrencies,’’ Cryptol. ePrint Arch., 2019. [Online]. Available:
https://eprint.iacr.org/2019/775/20200217:161450
[64] H. Bae, J. Jang, D. Jung, H. Jang, H. Ha, and S. Yoon, ‘‘Security and
privacy issues in deep learning,’’ 2018, arXiv:1807.11655v3. [Online].
Available: http://arxiv.org/abs/1807.11655v3
[65] UN Global Working Group on Big Data. UN Handbook on Privacy_Preserving Computation Techniques. Accessed: Mar. 4, 2020. [Online]._
Available: http://publications.officialstatistics.org/handbooks/privacypreserving-techniques-handbook/UN%20Handbook%20for%20PrivacyPreserving%20Techniques.pdf
[66] H. Cui, H. Zhang, G. R. Ganger, P. B. Gibbons, and E. P. Xing, ‘‘GeePS:
Scalable deep learning on distributed GPUs with a GPU-specialized
parameter server,’’ in Proc. 11th Eur. Conf. Comput. Syst. (EuroSys),
2016, pp. 1–16.
[67] A. L. Beberg, D. L. Ensign, G. Jayachandran, S. Khaliq, and V. S. Pande,
‘‘Folding@home: Lessons from eight years of volunteer distributed computing,’’ in Proc. IEEE Int. Symp. Parallel Distrib. Process., May 2009,
pp. 1–8.
[68] J. Truby, ‘‘Decarbonizing bitcoin: Law and policy choices for reducing the energy consumption of blockchain technologies and digital currencies,’’ Energy Res. Social Sci., vol. 44, pp. 399–410,
Oct. 2018.
[69] C. Chenli, B. Li, Y. Shi, and T. Jung, ‘‘Energy-recycling blockchain
with proof-of-deep-learning,’’ 2019, arXiv:1902.03912v1. [Online].
Available: http://arxiv.org/abs/1902.03912v1
[70] H. Turesson, M. Laskowski, A. Roatis, and H. M. Kim, ‘‘Privacypreserving blockchain mining: Sybil-resistance by proof-of-usefulwork,’’ 2019, arXiv:1907.08744v2. [Online]. Available: http://arxiv.org/
abs/1907.08744v2
[71] A. Baldominos and Y. Saez, ‘‘Coin.AI: A proof-of-useful-work scheme
for blockchain-based distributed deep learning,’’ Entropy, vol. 21, no. 8,
p. 723, 2019.
[72] M. Swan, ‘‘Blockchain thinking: The brain as a decentralized autonomous
corporation [commentary],’’ IEEE Technol. Soc. Mag., vol. 34, no. 4,
pp. 41–52, Dec. 2015.
[73] M. Shayan, C. Fung, C. J. M. Yoon, and I. Beschastnikh, ‘‘Biscotti:
A ledger for private and secure peer-to-peer machine learning,’’
2018, _arXiv:1811.09904v4._ [Online]. Available: http://arxiv.org/
abs/1811.09904v4
[74] Y. Lu, Q. Tang, and G. Wang, ‘‘On enabling machine learning tasks atop
public blockchains: A crowdsourcing approach,’’ in Proc. IEEE Int. Conf.
_Data Mining Workshops (ICDMW), Nov. 2018, pp. 81–88._
[75] K. R. Özyilmaz, M. Do?an, and A. Yurdakul, ‘‘IDMoB: IoT data marketplace on blockchain,’’ in Proc. Crypto Valley Conf. Blockchain Technol.
_(CVCBT), Jun. 2018, pp. 11–19._
[76] J. D. Harris and B. Waggoner, ‘‘Decentralized and collaborative AI on
blockchain,’’ in Proc. IEEE Int. Conf. Blockchain (Blockchain), Jul. 2019,
pp. 368–375.
[77] J. Kang, Z. Xiong, D. Niyato, Y. Zou, Y. Zhang, and M. Guizani, ‘‘Reliable federated learning for mobile networks,’’ 2019, arXiv:1910.06837v1.
[Online]. Available: http://arxiv.org/abs/1910.06837v1
[78] R. Cheng, F. Zhang, J. Kos, W. He, N. Hynes, N. Johnson, A. Juels,
A. Miller, and D. Song, ‘‘Ekiden: A platform for confidentialitypreserving, trustworthy, and performant smart contracts,’’ in Proc. IEEE
_Eur. Symp. Secur. Privacy (EuroS&P), Jun. 2019, pp. 185–200._
[79] J. Eberhardt and J. Heiss, ‘‘Off-chaining models and approaches to offchain computations,’’ in Proc. 2nd Workshop Scalable Resilient Infras_truct. Distrib. Ledgers (SERIAL), 2018, pp. 7–12._
[80] X. Chen, J. Ji, C. Luo, W. Liao, and P. Li, ‘‘When machine learning meets
blockchain: A decentralized, privacy-preserving and secure design,’’ in
_Proc. IEEE Int. Conf. Big Data (Big Data), Dec. 2018, pp. 1178–1187._
[81] H. Kim, J. Park, M. Bennis, and S.-L. Kim, ‘‘Blockchained on-device
federated learning,’’ IEEE Commun. Lett., to be published.
[82] M. Banko and E. Brill, ‘‘Scaling to very very large corpora for natural
language disambiguation,’’ in Proc. 39th Annu. Meeting Assoc. Comput.
_Linguistics-ACL, 2001, pp. 26–33._
[83] G. A. Montes and B. Goertzel, ‘‘Distributed, decentralized, and democratized artificial intelligence,’’ Technol. Forecasting Social Change,
vol. 141, pp. 354–358, Apr. 2019.
-----
[84] M. Jones, M. Johnson, M. Shervey, J. T. Dudley, and N. Zimmerman,
‘‘Privacy-preserving methods for feature engineering using blockchain:
Review, evaluation, and proof of concept,’’ J. Med. Internet Res., vol. 21,
no. 8, 2019, Art. no. e13600.
[85] M. Johnson, M. Jones, M. Shervey, J. T. Dudley, and N. Zimmerman,
‘‘Building a secure biomedical data sharing decentralized app (DApp):
Tutorial,’’ J. Med. Internet Res., vol. 21, no. 10, 2019, Art. no. e13601.
[86] L. Melis, C. Song, E. De Cristofaro, and V. Shmatikov, ‘‘Exploiting
unintended feature leakage in collaborative learning,’’ in Proc. IEEE
_Symp. Secur. Privacy (SP), May 2019, pp. 691–706._
[87] V. Costan and S. Devadas, ‘‘Intel SGX explained,’’ IACR Cryptol.
_ePrint Arch., vol. 2016, no. 86, pp. 1–118, 2016. [Online]. Available:_
https://eprint.iacr.org/2016/086/20170221:054353
[88] F. Tschorsch and B. Scheuermann, ‘‘Bitcoin and beyond: A technical survey on decentralized digital currencies,’’ IEEE Commun. Surveys Tuts.,
vol. 18, no. 3, pp. 2084–2123, 3rd Quart., 2016.
[89] S. King. Primecoin: Cryptocurrency With Prime Number Proof-of_Work. Accessed: Mar. 4, 2020. [Online]. Available: https://primecoin.io/_
bin/primecoin-paper.pdf
[90] F. Saleh, ‘‘Blockchain without waste: Proof-of-stake,’’ SSRN Elec_tron. J., 2019. [Online]. Available: https://papers.ssrn.com/sol3/papers._
cfm?abstract_id=3183935
[91] N. Hynes, R. Cheng, and D. Song, ‘‘Efficient deep learning on multisource private data,’’ 2018, arXiv:1807.06689v1. [Online]. Available:
http://arxiv.org/abs/1807.06689v1
[92] R. Kunkel, D. Le Quoc, F. Gregor, S. Arnautov, P. Bhatotia,
and C. Fetzer, ‘‘TensorSCONE: A secure TensorFlow framework
using intel SGX,’’ 2019, arXiv:1902.04413v1. [Online]. Available:
http://arxiv.org/abs/1902.04413v1
[93] F. Brasser, T. Frassetto, K. Riedhammer, A.-R. Sadeghi, T. Schneider,
and C. Weinert, ‘‘VoiceGuard: Secure and private speech processing,’’
in Proc. InterSpeech, Sep. 2018, pp. 1303–1307.
[94] D. Lee, D. Kohlbrenner, S. Shinde, D. Song, and K. Asanović, ‘‘Keystone:
An open framework for architecting TEEs,’’ 2019, arXiv:1907.10119v2.
[Online]. Available: http://arxiv.org/abs/1907.10119v2
[95] F. Brasser, S. Capkun, A. Dmitrienko, T. Frassetto, K. Kostiainen, and
A.-R. Sadeghi, ‘‘DR.SGX: Automated and adjustable side-channel protection for SGX using data location randomization,’’ in Proc. 35th Annu.
_Comput. Secur. Appl. Conf. (ACSAC), 2019, pp. 788–800._
[96] M. Milutinovic, W. He, H. Wu, and M. Kanwal, ‘‘Proof of luck: An efficient blockchain consensus protocol,’’ in Proc. 1st Workshop Syst. Softw.
_Trusted Execution (SysTEX), 2016, pp. 1–6._
[97] L. Chen, L. Xu, N. Shah, Z. Gao, Y. Lu, and W. Shi, ‘‘On security analysis
of proof-of-elapsed-time (PoET),’’ in Proc. Int. Symp. Stabilization, Saf.,
_Secur. Distrib. Syst. Cham, Switzerland: Springer, 2017, pp. 282–297._
[98] A. Miller, I. Bentov, S. Bakshi, R. Kumaresan, and P. McCorry, ‘‘Sprites
and state channels: Payment networks that go faster than lightning,’’ in
_Proc. Int. Conf. Financial Cryptogr. Data Secur. Cham, Switzerland:_
Springer, 2019, pp. 508–526.
[99] J. Poon and V. Buterin. Plasma: Scalable Autonomous Smart Con_tracts. Accessed: Mar. 4, 2020. [Online]. Available: https://www._
plasma.io/plasma.pdf
[100] S. Dziembowski, S. Faust, and K. Hostáková, ‘‘General state channel
networks,’’ in Proc. ACM SIGSAC Conf. Comput. Commun. Secur.,
Jan. 2018, pp. 949–966.
[101] H. Kalodner, S. Goldfeder, X. Chen, S. M. Weinberg, and E. W. Felten,
‘‘Arbitrum: Scalable, private smart contracts,’’ in 27th USENIX Secur.
_Symp. (USENIX Security), 2018, pp. 1353–1370._
[102] J. Teutsch and C. Reitwießner, ‘‘A scalable verification solution
for blockchains,’’ 2019, arXiv:1908.04756v1. [Online]. Available:
http://arxiv.org/abs/1908.04756v1
[103] R. C. Geyer, T. Klein, and M. Nabi, ‘‘Differentially private federated learning: A client level perspective,’’ 2017, arXiv:1712.07557v2.
[Online]. Available: http://arxiv.org/abs/1712.07557v2
[104] High-Level Expert Group on Artificial Intelligence. Ethics Guide_lines for Trustworthy AI. Accessed: Mar. 4, 2020. [Online]. Available:_
https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=60419
[105] S. Schulz, A.-R. Sadeghi, and C. Wachsmann, ‘‘Short paper: Lightweight
remote attestation using physical functions,’’ in Proc. 4th ACM Conf.
_Wireless Netw. Secur. (WiSec), 2011, pp. 109–114._
[106] S. Volos, K. Vaswani, and R. Bruno, ‘‘Graviton: Trusted execution environments on GPUs,’’ in Proc. 13th USENIX Symp. Oper. Syst. Design
_Implement. (OSDI), 2018, pp. 681–696._
[107] S. Bano, A. Sonnino, M. Al-Bassam, S. Azouvi, P. McCorry,
S. Meiklejohn, and G. Danezis, ‘‘SoK: Consensus in the age of
blockchains,’’ in Proc. 1st ACM Conf. Adv. Financial Technol. (AFT),
2019, pp. 183–198.
[108] J. Van Bulck, M. Minkin, O. Weisse, D. Genkin, B. Kasikci, F. Piessens,
M. Silberstein, T. F. Wenisch, Y. Yarom, and R. Strackx, ‘‘Foreshadow:
Extracting the keys to the Intel SGX kingdom with transient out-oforder execution,’’ in Proc. 27th USENIX Secur. Symp. (USENIX Security),
2018, pp. 991–1008.
[109] P. Das, ‘‘FastKitten: Practical smart contracts on bitcoin,’’ in Proc. 28th
_USENIX Secur. Symp. (USENIX Security), 2019, pp. 801–818._
[110] A. Vijayakumar and S. Kundu, ‘‘A novel modeling attack resistant PUF
design based on non-linear voltage transfer characteristics,’’ in Proc.
_Design, Autom. Test Eur. Conf. Exhib. (DATE), 2015, pp. 653–658._
[111] F. Ganji, On the Learnability of Physically Unclonable Functions. Cham,
Switzerland: Springer, 2018.
KONSTANTIN D. PANDL studied electrical engineering and information technology at the Karlsruhe Institute of Technology (KIT), Germany,
Purdue University, USA, and Tongji University,
China, and graduated with the master’s degree.
He is currently pursuing the Ph.D. degree with
the Institute of Applied Informatics and Formal
Description Methods, KIT. He also gained experience in industry at Siemens’ Venture Unit Next47,
San Francisco Bay Area, and Kearney, Germany.
His research interests include machine learning, system security, and distributed systems. His previous research appeared at the IEEE International
Conference on Intelligent Transportation Systems.
SCOTT THIEBES received the degree in information systems from the University of Cologne.
He is currently pursuing the Ph.D. degree with
the Institute of Applied Informatics and Formal
Description Methods, Karlsruhe Institute of Technology, Germany. His research interests include
emerging technologies in health care, information
security and privacy, and gamification. His recent research focuses on the implications of applying distributed ledger technology and artificial intelligence in health care settings. His research has appeared in journals,
including the Journal of Medical Internet Research, BMC Bioinformatics,
and the European Journal of Human Genetics.
MANUEL SCHMIDT-KRAEPELIN received the
degree in information systems from the University
of Cologne. He is currently pursuing the Ph.D.
degree with the Institute of Applied Informatics and Formal Description Methods, Karlsruhe
Institute of Technology, Germany. He is also a
Research Associate with the Institute of Applied
Informatics and Formal Description Methods,
Karlsruhe Institute of Technology. His research
interests include gamification and emerging technologies in health care. His research appeared in leading scientific conferences, including ICIS, ECIS, and HICSS.
ALI SUNYAEV is currently a Professor of computer science with the Karlsruhe Institute of Technology, Germany. His research interests include
trustworthy Internet technologies and complex
health IT applications. His research work accounts
for the multifaceted use contexts of digital technologies with research on human behavior affecting Internet-based systems and vice versa. His
research appeared in journals, including ACM
CSUR, JIT, JMIS, the IEEE TRANSACTIONS ON
CLOUD COMPUTING, Communications of the ACM, and others. His research
work has been appreciated numerous times and is featured in a variety of
media outlets.
|
{
"disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/2001.11017, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "GOLD",
"url": "https://ieeexplore.ieee.org/ielx7/6287639/8948470/09039606.pdf"
}
| 2020
|
[
"JournalArticle",
"Review"
] | true
| 2020-01-29T00:00:00
| 26,546
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Engineering",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/01633c22086abbdc8f180de4361585c1a731bc93
|
[
"Computer Science",
"Engineering"
] | 0.823053
|
A Distributed Implementation of Steady-State Kalman Filter
|
01633c22086abbdc8f180de4361585c1a731bc93
|
IEEE Transactions on Automatic Control
|
[
{
"authorId": "152551086",
"name": "Jiaqi Yan"
},
{
"authorId": "2143732199",
"name": "Xu Yang"
},
{
"authorId": "1760677",
"name": "Yilin Mo"
},
{
"authorId": "3527062",
"name": "Keyou You"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"IEEE Trans Autom Control"
],
"alternate_urls": null,
"id": "1283a59c-0d1f-48c3-81d7-02172f597e70",
"issn": "0018-9286",
"name": "IEEE Transactions on Automatic Control",
"type": "journal",
"url": "http://ieeexplore.ieee.org/servlet/opac?punumber=9"
}
|
This article studies the distributed state estimation in sensor network, where <inline-formula><tex-math notation="LaTeX">$m$</tex-math></inline-formula> sensors are deployed to infer the <inline-formula><tex-math notation="LaTeX">$n$</tex-math></inline-formula>-dimensional state of a linear time-invariant Gaussian system. By a lossless decomposition of the optimal steady-state Kalman filter, we show that the problem of distributed estimation can be reformulated as that of the synchronization of homogeneous linear systems. Based on such decomposition, a distributed estimator is proposed, where each sensor node runs a local filter using only its own measurement, alongside with a consensus algorithm to fuse the local estimate of every node. We prove that the average of local estimates from all sensors coincides with the optimal Kalman estimate, and under certain condition on the graph Laplacian matrix and the system matrix, the covariance of local estimation error is bounded and the asymptotic error covariance is derived. As a result, the distributed estimator is stable for each single node. We further show that the proposed algorithm has a low message complexity of <inline-formula><tex-math notation="LaTeX">$\min (m,n)$</tex-math></inline-formula>. Numerical examples are provided in the end to illustrate the efficiency of the proposed algorithm.
|
# A Distributed Implementation of Steady-State Kalman Filter
### Jiaqi Yan, Xu Yang, Yilin Mo*, and Keyou You

The authors are with the Department of Automation, BNRist, Tsinghua University. Emails: jiaqiyan@tsinghua.edu.cn, xu-yang16@mails.tsinghua.edu.cn, ylmo@tsinghua.edu.cn, youky@tsinghua.edu.cn.
**_Abstract—This paper studies the distributed state estimation_**
**in sensor network, where m sensors are deployed to infer the n-**
**dimensional state of a Linear Time-Invariant (LTI) Gaussian sys-**
**tem. By a lossless decomposition of optimal steady-state Kalman**
**filter, we show that the problem of distributed estimation can**
**be reformulated as the synchronization of homogeneous linear**
**systems. Based on such decomposition, a distributed estimator is**
**proposed, where each sensor node runs a local filter using only**
**its own measurement, alongside with a consensus algorithm to**
**fuse the local estimate of every node. We prove that the average**
**of estimates from all sensors coincides with the optimal Kalman**
**estimate, and under certain condition on the graph Laplacian**
**matrix and the system matrix, the covariance of estimation error**
**is bounded and the asymptotic error covariance is derived. As**
**a result, the distributed estimator is stable for each single node.**
**We further show that the proposed algorithm has a low message**
**complexity of min(m, n). Numerical examples are provided in**
**the end to illustrate the efficiency of the proposed algorithm.**
**_Index Terms—Distributed estimation, Kalman filter, Linear_**
**system synchronization, Consensus algorithm.**
I. INTRODUCTION
Fig. 1: The information flow of most existing algorithms, where sensors i and j are immediate neighbors.
The past decades have witnessed remarkable research interests in multi-sensor networked systems. As one of its
important focuses, distributed estimation has been widely studied in various applications including robot formation control,
environment monitoring, spacecraft navigation (see [1]–[5]).
Compared with the centralized architecture, it provides better
robustness, flexibility and reliability.
One fundamental problem in distributed estimation is to
estimate the state of an LTI Gaussian system using multiple
sensors, where the well-known Kalman filter provides the optimal solution in a centralized manner [6]. Thus, many research
efforts have been devoted to the distributed implementation of
Kalman filter. For example, in an early work [7], the authors
suggest a fusion algorithm for two-sensor networks, where
local estimate of the first sensor is considered as a pseudo measurement of the second one. Due to its ease of implementation,
this approach has then inspired the sequential fusion in multisensor networks [8]–[10], where the multiple nodes repeatedly
perform the two-sensor fusion in a sequential manner. As
the result of serial operation, these algorithms require special
communication topology which should be sequentially connected as a ring/chain. In [11], Olfati-Saber et al. consider the more general network topology. They introduce consensus algorithms into distributed estimation and propose the Kalman-Consensus Filter (KCF), where the average consensus on
local estimates is performed. Since then, various consensus-based distributed estimators have been proposed in the literature
[12]–[24]. For example, instead of doing consensus on local
estimates, [14] suggests achieving consensus respectively on
noisy measurements and inverse-covariance matrices. On the
other hand, Battistelli et al. [25] find that, by performing
consensus on the Kullback-Leibler average of local probability
density function, estimation stability is also guaranteed. They
further prove that, if the single-consensus step is used, this
approach is reduced to the well-known covariance intersection
fusion rule [26], [27]. Since the consensus-based estimators
usually perform multiple consensus steps during each sampling period, they achieve better estimation performance.
In Fig. 1, we present the general information flow of the
existing consensus-based estimation algorithms, where ∆i(k)
is the information transmitted by sensor i and to be fused
by consensus algorithms, which could be the local estimates
([11], [12]), measurements ([13]–[15], [28]), or information
matrices ([25], [29]). It is noticed from the figure that the
consensus/synchronization process is usually coupled with the
local filter in these works, making it hard to analyze the
performance of local estimates. Due to this fact, while the
aforementioned algorithms are successful in distributing the
fusion task over multiple nodes and providing stable local
estimates, i.e. the error covariance is proved to be bounded at
each sensor side, the exact calculation of error covariance can
hardly be obtained. Moreover, the global optimality (namely,
whether performance of the algorithm can converge to that of
the centralized Kalman filter) is also difficult to be analyzed
and guaranteed in some works.
It is worth noticing that in theory, the gain of the Kalman
filter converges to a steady-state gain exponentially fast [30],
which can be calculated off-line. Moreover, in practice, a fixed
gain estimator is usually implemented, which has the same
asymptotic performance as the time-varying Kalman filter.
Hence, this paper focuses on the distributed implementation
of the centralized steady-state Kalman filter. In contrast to
most of the existing algorithms, we decouple the local filter
from the consensus process. Such decoupling enables us to
provide a new framework for designing distributed estimators,
by reformulating the problem of distributed state estimation
into that of linear system synchronization. We, hence, are able
to leverage the methodologies from latter field to propose
solutions for distributed estimation. To be specific, in the
synchronization of linear systems, the dynamics of each agent
is governed by an LTI system, the control input to which is
generated using the local information within the neighborhood,
in order to achieve asymptotic consensus on the local states of
agents. Over the past years, lots of research efforts have been
devoted to this area (see [31]–[36] for examples) by designing
synchronization algorithms that can handle various network
constraints. Exploiting the results therein, the distributed estimator in this work is designed through two phases as below:
1) (Local measurement processing) A lossless decomposition of steady-state Kalman filter is proposed, where each
sensor node runs a local estimator based on this decomposition
using solely its own measurement.
2) (Information fusion via consensus) The sensor infers the
local estimates of all the others via a modified consensus algorithm designed for achieving linear system synchronization.
The contributions of this paper are summarized as follows:
1) By removing assumptions regarding the eigenvalues of
the system matrix, this paper extends, in a non-trivial way, the
results in [37], and thus develops the local filters for losslessly
decomposing Kalman filter in estimating the general systems.
(Lemma 3)
2) Through the decomposition of Kalman filter, this paper
bridges two different fields and makes it possible to leverage
a general class of algorithms designed for achieving the
synchronization of linear systems to solve the problem of
distributed state estimation. By doing so, we can propose
stable distributed estimators under different communication
constraints, such as time delay, switching topology, random
link failures, etc. (Theorem 4)
3) For certain synchronization algorithm, e.g., [31], the
stability criterion of the proposed estimator is established.
Moreover, in contrast to the existing literature, the covariance
of the estimation error can be exactly derived by solving
Lyapunov equations. (Theorem 2, Theorem 3, and Corollary 1)
4) The designed estimator enjoys low communication cost,
where the size of the message sent by each sensor is min{m, n},
with n and m being dimensions of the state and measurement
respectively. (Remark 6)
Some preliminary results are reported in our previous work
[38], where most of the proofs are missing. This paper further
extends the results in [38] by computing the exact asymptotic
error covariance, instead of only showing the stability of the proposed algorithms. The extension to the more general random
communication topology is also added. Moreover, a model
reduction method is further proposed in this work to reduce
the message complexity from m to min{m, n}.
_Notations:_ For vectors $v_i \in \mathbb{R}^{m_i}$, the stacked vector $[v_1^T, \ldots, v_N^T]^T$ is denoted by $\mathrm{col}(v_1, \ldots, v_N)$. Moreover, $A \otimes B$ indicates the Kronecker product of matrices $A$ and $B$. Throughout this paper, a stochastic signal is said to be "stable" if its covariance is bounded at any time.
The remainder of this paper is organized as follows. Section
II introduces the preliminaries and formulates the problem of
interest. A lossless decomposition of optimal Kalman filter
is given in Section III, where a model reduction approach is
further proposed to reduce the system order. With the aim
of realizing the optimal Kalman filter, distributed solutions
for state estimation are given and analyzed in Section IV.
We then discuss some extensions in Section V and validate
performance of the developed estimator through numerical
examples in Section VI. Finally, Section VII concludes the
paper.
II. PROBLEM FORMULATION
In this paper, we consider the LTI system given below:

$$x(k+1) = Ax(k) + w(k), \qquad (1)$$

where $x(k) \in \mathbb{R}^n$ is the system state and $w(k) \sim \mathcal{N}(0, Q)$ is independent and identically distributed (i.i.d.) Gaussian noise with zero mean and covariance matrix $Q \geq 0$. The initial state $x(0)$ is also assumed to be Gaussian with zero mean and covariance matrix $\Sigma \geq 0$, and is independent from the process noise $\{w(k)\}$.
A network consisting of $m$ sensors is monitoring the above system. The measurement from each sensor $i \in \{1, \cdots, m\}$ is given by[1]:

$$y_i(k) = C_i x(k) + v_i(k), \qquad (2)$$

where $y_i(k) \in \mathbb{R}$ is the output of sensor $i$, $C_i$ is an $n$-dimensional row vector, and $v_i(k) \in \mathbb{R}$ is the Gaussian measurement noise.
By stacking the measurement equations, one gets

$$y(k) = Cx(k) + v(k), \qquad (3)$$

where

$$y(k) \triangleq \begin{bmatrix} y_1(k) \\ \vdots \\ y_m(k) \end{bmatrix}, \quad C \triangleq \begin{bmatrix} C_1 \\ \vdots \\ C_m \end{bmatrix}, \quad v(k) \triangleq \begin{bmatrix} v_1(k) \\ \vdots \\ v_m(k) \end{bmatrix}, \qquad (4)$$

and $v(k)$ is zero-mean i.i.d. Gaussian noise with covariance $R \geq 0$, independent from $w(k)$ and $x(0)$.
Throughout this paper, we assume that (A, C) is observable. On the other hand, (A, Ci) may not necessarily be observable, i.e., a single sensor may not be able to observe the whole state space.

[1] The results in this paper can be readily generalized to cases where the sensor outputs a vector measurement, by treating each entry independently as the output of a separate sensor.
_A. Preliminaries: the centralized Kalman filter_
If all measurements are collected at a single fusion center,
the centralized Kalman filter is optimal for state estimation
purpose, and provides a fundamental limit for all other estimation schemes. For this reason, this part will briefly review
the centralized solution given by the Kalman filter.
Let us denote by P (k) the error covariance of estimate
given by Kalman filter at time k. Since (A, C) is observable,
it is well-known that the error covariance will converge to the
steady state [6]:
$$P = \lim_{k\to\infty} P(k). \qquad (5)$$

Since the operation of a typical sensor network lasts for an extended period of time, we assume that the Kalman filter is in the steady state, or equivalently $\Sigma = P$, which results in a steady-state Kalman filter with fixed gain[2]

$$K = PC^T\left(CPC^T + R\right)^{-1}. \qquad (6)$$

Accordingly, the optimal Kalman estimate is computed recursively as

$$\hat{x}(k+1) = A\hat{x}(k) + K\big(y(k+1) - CA\hat{x}(k)\big) = (A - KCA)\hat{x}(k) + Ky(k+1). \qquad (7)$$

[2] Notice that even if $\Sigma \neq P$, the Kalman estimate converges to the steady-state Kalman filter, i.e., the steady-state estimate is asymptotically optimal.
It is clear that the optimal estimate (7) requires the information from all sensors. However, in a distributed framework,
each sensor is only capable of communicating with immediate neighbors, rendering the centralized solution impractical.
Therefore, this paper is devoted to the implementation of
Kalman filter in a distributed fashion.
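To make these preliminaries concrete, the following is a minimal numerical sketch of (5)-(7), assuming a hypothetical 2-state, 3-sensor system (all matrices below are illustrative choices, not taken from the paper); the steady-state covariance is obtained from the dual discrete algebraic Riccati equation:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Hypothetical 2-state, 3-sensor example (illustrative numbers only).
A = np.array([[1.2, 0.1],
              [0.0, 0.5]])    # eigenvalues 1.2 (unstable) and 0.5 (stable)
C = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])    # row i is the row vector C_i of sensor i
n, m = A.shape[0], C.shape[0]
Q, R = 0.1 * np.eye(n), 0.2 * np.eye(m)

# Steady-state error covariance P in (5) via the dual DARE, then the gain K in (6).
P = solve_discrete_are(A.T, C.T, Q, R)
K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)

def kalman_step(xhat, y_next):
    """Fixed-gain recursion (7): xhat(k+1) = (A - KCA) xhat(k) + K y(k+1)."""
    return (A - K @ C @ A) @ xhat + K @ y_next
```

The sketches later in this section reuse A, C, Q, R, P, and K from this block.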
III. DECOMPOSITION OF KALMAN FILTER
In this section, we shall provide a local decomposition of the
Kalman filter (7), where the Kalman estimate can be recovered
as a linear combination of the estimates from local filters. This
section extends, in a non-trivial way, the results in [37] by
removing the assumptions on the eigenvalues of system matrix
therein, and thus proposes the local filter for estimating the
general systems. The results in this part would further help us
to design distributed estimation algorithms in the next sections.
Without loss of generality, let the system matrix be

$$A = \begin{bmatrix} A^u & 0 \\ 0 & A^s \end{bmatrix}, \qquad (8)$$

where $A^u \in \mathbb{R}^{n_u \times n_u}$ and $A^s \in \mathbb{R}^{n_s \times n_s}$, such that any eigenvalue of $A^u$ lies on or outside the unit circle while the eigenvalues of $A^s$ are strictly within the unit circle. It thus follows from (1) that

$$x^s(k+1) = A^s x^s(k) + J w(k), \qquad (9)$$

where $J = [0 \;\; I_{n_s}] \in \mathbb{R}^{n_s \times n}$ and $x(k) = \mathrm{col}(x^u(k), x^s(k))$. Accordingly, $C_i$ is partitioned as

$$C_i = \begin{bmatrix} C_i^u & C_i^s \end{bmatrix}, \qquad (10)$$

where $C_i^u \in \mathbb{R}^{1 \times n_u}$ and $C_i^s \in \mathbb{R}^{1 \times n_s}$.

_A. Local decomposition of Kalman filter_

To locally decompose the Kalman filter, we first introduce the following lemmas, the proofs of which are given in the appendix:

**Proposition 1.** If $\Lambda$ is a non-derogatory[3] Jordan matrix, then both $(\Lambda, \mathbf{1})$ and $(\Lambda^T, \mathbf{1})$ are controllable.

[3] A matrix is defined to be non-derogatory if every eigenvalue of it has geometric multiplicity 1.

**Lemma 1.** Let $(X, p)$ be controllable, where $X \in \mathbb{R}^{n \times n}$ and $p \in \mathbb{R}^n$. For any $q \in \mathbb{R}^n$, if $X + pq^T$ and $X$ do not share any eigenvalues, then $(X + pq^T, q^T)$ is observable, or equivalently $(X^T + qp^T, q)$ is controllable.

**Lemma 2.** Let $(X, p)$ be controllable, where $X \in \mathbb{R}^{n \times n}$ and $p \in \mathbb{R}^n$. Denote the characteristic polynomial of $X$ as $\phi(s) = \det(sI - X)$. Let $Y \in \mathbb{R}^{m \times m}$ and $q \in \mathbb{R}^m$. Suppose that

$$\phi(Y)q = 0, \qquad (11)$$

then there exists $T \in \mathbb{R}^{m \times n}$ which solves the equations below:

$$TX = YT, \quad Tp = q. \qquad (12)$$

With the above preparations, let us consider the optimal Kalman estimate in (7). For simplicity, we denote by $K_j$ the $j$-th column of the Kalman gain $K$, namely $K = [K_1, \cdots, K_m]$. Accordingly, (7) can be rewritten as

$$\hat{x}(k+1) = (A - KCA)\hat{x}(k) + \sum_{i=1}^{m} K_i y_i(k+1). \qquad (13)$$

Notice that $A - KCA$ is stable. It is clear that we can always find a Jordan matrix $\Lambda \in \mathbb{R}^{n \times n}$ such that $\Lambda$ is strictly stable, non-derogatory, and has the same characteristic polynomial as $A - KCA$. In view of Proposition 1, we conclude that $(\Lambda, \mathbf{1})$ is controllable. Therefore, by Lemma 2, we can always find matrices $F_i$ such that the following equalities hold for $i = 1, \cdots, m$:

$$F_i\Lambda = (A - KCA)F_i, \quad F_i\mathbf{1}_n = K_i. \qquad (14)$$

Suppose each sensor $i$ performs the following local filter solely based on its own measurements:

$$\hat{\xi}_i(k+1) = \Lambda\hat{\xi}_i(k) + \mathbf{1}_n y_i(k+1), \qquad (15)$$

where $\hat{\xi}_i(k)$ is the output of the local filter at sensor $i$, and $\mathbf{1}_n \in \mathbb{R}^n$ is the vector of all ones. Then it is proved that the optimal Kalman filter can be decomposed as a weighted sum of the local estimates $\hat{\xi}_i(k)$, as stated below:

**Lemma 3.** Suppose each sensor performs the local filter (15). The optimal Kalman estimate (7) can be recovered from the local estimates $\hat{\xi}_i(k)$, $i = 1, 2, \cdots, m$, as

$$\hat{x}(k) = \sum_{i=1}^{m} F_i\hat{\xi}_i(k), \qquad (16)$$

where $F_i$ is defined in (14).

_Proof._ By multiplying both sides of the recursive equation (15) by $F_i$, we arrive at

$$F_i\hat{\xi}_i(k+1) = F_i\Lambda\hat{\xi}_i(k) + F_i\mathbf{1}_n y_i(k+1). \qquad (17)$$

Then it follows from (14) that

$$F_i\hat{\xi}_i(k+1) = (A - KCA)F_i\hat{\xi}_i(k) + K_i y_i(k+1). \qquad (18)$$

Summing the above equation over all $i = 1, \cdots, m$ and comparing it with (13), we can conclude that (16) holds.
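The construction behind Lemma 2 is explicit: $TX = YT$ and $Tp = q$ imply $T[p, Xp, \ldots, X^{n-1}p] = [q, Yq, \ldots, Y^{n-1}q]$, so $T$ is recovered by inverting the controllability matrix of $(X, p)$. Below is a sketch continuing the example above; purely to keep the sketch short (the paper does not require this), the eigenvalues of $A - KCA$ are assumed real and distinct so that a diagonal matrix can serve as $\Lambda$:

```python
rng = np.random.default_rng(1)
Y = A - K @ C @ A                        # stable closed-loop matrix in (13)
lam = np.linalg.eigvals(Y)
assert np.all(np.abs(lam.imag) < 1e-9)   # sketch-only assumption: real, distinct eigenvalues
Lam = np.diag(np.sort(lam.real))         # diagonal, hence a non-derogatory Jordan matrix here
one = np.ones(n)

def ctrb(X, p):
    """Controllability matrix [p, Xp, ..., X^{n-1} p]."""
    return np.column_stack([np.linalg.matrix_power(X, k) @ p for k in range(len(p))])

# Lemma 2 construction of F_i: [K_i, Y K_i, ...] times the inverse of [1, Lam 1, ...].
F_list = [ctrb(Y, K[:, i]) @ np.linalg.inv(ctrb(Lam, one)) for i in range(m)]
for Fi, Ki in zip(F_list, K.T):          # sanity checks for (14)
    assert np.allclose(Fi @ Lam, Y @ Fi) and np.allclose(Fi @ one, Ki)

# Run the local filters (15) and verify the lossless recovery (16).
x, xhat = rng.standard_normal(n), np.zeros(n)
xi = np.zeros((m, n))                    # row i holds the local state xi_i(k)
for _ in range(50):
    x = A @ x + rng.multivariate_normal(np.zeros(n), Q)
    y = C @ x + rng.multivariate_normal(np.zeros(m), R)
    xhat = Y @ xhat + K @ y              # centralized recursion (7)
    xi = xi @ Lam.T + np.outer(y, one)   # local filters (15)
    assert np.allclose(sum(Fi @ xi_i for Fi, xi_i in zip(F_list, xi)), xhat)
```

Since both the centralized filter and the local filters start from zero, the recovery (16) holds exactly at every step, which the final assertion checks.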
Notice that the equality in Lemma 3 holds exactly, which means the Kalman filter can be perfectly recovered by (16). We hence claim that (16) is a lossless decomposition of the optimal Kalman filter. To better illustrate the ideas, the information flow of the centralized Kalman filter and the local decomposition (16) is given in Fig. 2.
Fig. 2: The information flow of the centralized Kalman filter (left), and the local decomposition of the Kalman filter (16) (right).
_B. A reformulation of (15) with stable inputs_
It is noted that the system matrix A may be unstable, which implies that the covariance of the measurement y(k) is not
necessarily bounded. As a result, we need to redesign (15)
using the stable residual zi(k) as an input instead of the raw
measurement yi(k). The main reason for this reformulation is
to make the consensus algorithm feasible and develop stable
distributed estimators, which will be further discussed in the
proof of Theorem 3.
To this end, notice that $(\Lambda, \mathbf{1})$ is controllable, $\Lambda$ is stable, and any eigenvalue of $A^u$ is unstable. Hence, we can always find a non-zero $\beta \in \mathbb{R}^n$ and compute

$$S = \Lambda + \mathbf{1}\beta^T, \qquad (19)$$

such that

1) the characteristic polynomial of $A^u$ divides $\varphi(s)$, where $\varphi(s)$ is the characteristic polynomial of $S$, and $\varphi(s)/\det(sI - A^u)$ has only strictly stable roots;

2) $S$ does not share eigenvalues with $\Lambda$. Hence, by virtue of Lemma 1, $(S^T, \beta)$ is controllable.

**Remark 1.** Notice that by using $\beta$, we place the eigenvalues of $S$ at locations which consist of two parts: the unstable ones that coincide with the eigenvalues of $A^u$, and the stable ones that are freely assigned but cannot be eigenvalues of $\Lambda$. This is feasible as $(\Lambda, \mathbf{1})$ is controllable.
Next, let us consider the filter below:

$$z_i(k) = y_i(k+1) - \beta^T\hat{\xi}_i(k), \quad \hat{\xi}_i(k+1) = S\hat{\xi}_i(k) + \mathbf{1}_n z_i(k), \qquad (20)$$
where β and S are calculated through (19). In the following
lemma, we shall show that (20) also losslessly decomposes the
Kalman filter. Moreover, the covariance of zi(k) is bounded
at any time.
**Lemma 4. Consider the local filter (20). The following**
_statements hold at any instant k:_
_1) (20) has the same input-output relationship with (15)._
_Namely, given the input yi(k), they yield the same output_
_ξˆi(k);_
_2) zi(k) is stable, i.e., the covariance of zi(k) is always_
_bounded._
_Proof. The proof is given in Appendix-C._
**Remark 2. If A has unstable modes, the previous discussions**
_show that (15) can be seen as a linear system with stable_
_system matrix Λ but unstable input yi(k + 1). As a contrast,_
(20) has unstable system matrix S but stable input zi(k).
_This formulation is essential to guarantee the stability of local_
_estimators, as will be seen in the proof of Theorem 4._
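A sketch of the construction (19), continuing the example above: $\beta$ is obtained by single-input pole placement on $(\Lambda, \mathbf{1}_n)$, and the desired spectrum of $S$ is the unstable eigenvalue of $A$ plus a freely assigned stable value (0.3 is an arbitrary choice for this sketch, assumed not to be an eigenvalue of $\Lambda$):

```python
from scipy.signal import place_poles

# Desired eigenvalues of S = Lam + 1 beta^T: the unstable eigenvalue(s) of A
# plus freely assigned stable ones (Remark 1). 0.3 is a hypothetical stable pole.
eigs_A = np.linalg.eigvals(A)
desired = np.concatenate([eigs_A[np.abs(eigs_A) >= 1].real, [0.3]])
fsf = place_poles(Lam, one.reshape(-1, 1), desired)
beta = -fsf.gain_matrix.ravel()   # place_poles returns G with eig(Lam - 1 G) = desired
S = Lam + np.outer(one, beta)

# Statement 1) of Lemma 4: S xi + 1 z = (Lam + 1 beta^T) xi + 1 (y - beta^T xi)
# = Lam xi + 1 y, so (20) has the same input-output behavior as (15).
xi_t, y_t = rng.standard_normal(n), 0.7
z_t = y_t - beta @ xi_t
assert np.allclose(S @ xi_t + one * z_t, Lam @ xi_t + one * y_t)
```

The final assertion checks statement 1) of Lemma 4 by direct substitution of the residual definition into (20).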
_C. A reduced-order decomposition of Kalman filter when n < m_
To simplify notations, we define the following aggregated matrices:

$$\tilde{S} \triangleq I_m \otimes S, \quad \tilde{L}_i \triangleq e_i \otimes \mathbf{1}_n, \quad \tilde{L} \triangleq [\tilde{L}_1, \cdots, \tilde{L}_m] = I_m \otimes \mathbf{1}_n, \qquad (21)$$

where $I_m$ is the $m$-dimensional identity matrix and $e_i$ is the $i$th canonical basis vector in $\mathbb{R}^m$. We thus collect (16) and (20) in matrix form as:

$$\begin{bmatrix} \hat{\xi}_1(k+1) \\ \vdots \\ \hat{\xi}_m(k+1) \end{bmatrix} = \tilde{S}\begin{bmatrix} \hat{\xi}_1(k) \\ \vdots \\ \hat{\xi}_m(k) \end{bmatrix} + \tilde{L}\begin{bmatrix} z_1(k) \\ \vdots \\ z_m(k) \end{bmatrix}, \quad \hat{x}(k) = F\begin{bmatrix} \hat{\xi}_1(k) \\ \vdots \\ \hat{\xi}_m(k) \end{bmatrix}, \qquad (22)$$

where $F \triangleq [F_1, F_2, \cdots, F_m]$. By Lemmas 3 and 4, (22) represents a lossless decomposition of the Kalman filter.

Notice that the system order of (22) is $mn$. In this part, we shall show that, by performing model reduction, this order can be further reduced to $n^2$ when the state dimension is less than the number of sensors, namely $n < m$. These discussions will be useful for achieving a low communication complexity in distributed frameworks.

To proceed, we regard the input and output of (22) as $z(k)$ and $\hat{x}(k)$, respectively, where

$$z(k) \triangleq \big[z_1(k), \cdots, z_m(k)\big]^T. \qquad (23)$$

Let us introduce the lemma below, the proof of which is given in Appendix-D:

**Lemma 5.** Any matrix $W \in \mathbb{R}^{n\times n}$ can be decomposed as

$$W = H_1\phi_1(S) + H_2\phi_2(S) + \cdots + H_n\phi_n(S), \qquad (24)$$

where $H_i \triangleq e_i\beta^T$, $\{\phi_j(S)\}$ are certain polynomials of $S$, and $S$ and $\beta$ are given in (19).

As a direct result of Lemma 5, for any $F_i$ in (16), we can always rewrite it by using polynomials of $S$, i.e., $\{p_{ij}(S)\}$:

$$F_i = \sum_{j=1}^{n} H_j p_{ij}(S). \qquad (25)$$
For simplicity, we also denote
_Ti ≜_ [(pi1(S) 1n)[T] _, · · ·, (pin(S) 1n)[T]_ ][T] _._ (26)
It is then proved in the below theorem that system (22) can
be reduced with a less order:
|yi(k) yj(k)|Col2|Col3|
|---|---|---|
|Local filter Local filter ξˆ i(k) ξˆ j(k) z i(k) z j(k)|||
|Synchronization Linear system ∆ i(k) Linear system η i(k) ∆ j(k) η j(k)|||
||||
**Theorem 1. Consider the following system:**
θ1(k + 1) θ1(k)
_..._ = (In ⊗ _S)_ _..._ + T
_θn(k + 1)_ _θn(k)_
θ1(k)
_x˜(k) = H_ _..._ _,_
_θn(k)_
_z1(k)_
_..._
_zm(k)_
_,_
(27)
_x˘i(k)_ _x˘j(k)_
Fig. 3: The information flow of Algorithm 1, where nodes i
and j are immediate neighbors.
if and only aij > 0. By denoting the degree matrix as D ≜
diag (deg1, . . ., degN ) with degi = [�]j[N]=1 _[a][ij][,][ the Laplacian]_
matrix of G is defined as LG ≜ _D −A. In this paper, a_
connected network is considered. We therefore can arrange the
eigenvalues of Laplacian matrix as 0 = µ1 < µ2 _µm._
_≤· · · ≤_
_A. Description of the distributed estimator_
In light of (16), the optimal estimate fuses _ξ[ˆ]i(k) from all_
sensors. However, in a distributed framework, each sensor can
only access the information in its neighborhood. Hence, any
sensor i needs to, through the communication over network,
infer _ξ[ˆ]j(k) for all j ∈V to achieve a stable local estimate._
Let us denote by ηi,j(k) as the inference from sensor i
on sensor j. As will be proved later in this section, ηi,j(k),
by running a synchronization algorithm, can track _m1_ _[ξ][ˆ][j][(][k][)]_
with bounded error. Hence, every sensor i can make a decent
inference on _ξ[ˆ]j(k)._
By collecting its inference on all sensors together, each
sensor i keeps a local state as below:
_where_
_T = [T1, T2, · · ·, Tm], H = [H1, H2, · · ·, Hn]._ (28)
_It holds that system (27) shares the same transfer function with_
(22).
_Proof. The proof is presented in Appendix-E._
Therefore, by performing model reduction, we present system (27) which shares the same transfer function with (22) but
with a reduced order. As proved previously, the output of (22)
is the optimal Kalman estimate. As a result, (27) also has the
Kalman estimate as its output and the Kalman filter can be
perfectly recovered by (27) as well. We hereby refer both (22)
and (27) to lossless decomposition of Kalman fiter. Depending
on the size of m and n, one should use a system with smaller
dimension to represent the centralized Kalman filter.
IV. LOCAL IMPLEMENTATION OF KALMAN FILTER
From Fig. 2, it is clear that local decomposition proposed
in Section III is still centralized as a fusion center is required
for calculating the weighted sum. In this section, we shall
provide distributed algorithms for implementing it, where each
sensor node performs local filtering by using the results from
Section III, and global fusion by exchanging information
with neighbors and running consensus algorithm. Based on
whether n is greater than m or not, different algorithms will
be presented to achieve a low communication complexity.
We use a weighted undirected graph G = {V, E, A} to model the interaction among nodes, where V = {1, 2, ..., m} is the set of sensors, E ⊂ V × V is the set of edges, and A = [a_ij] is the weighted adjacency matrix. It is assumed that a_ij ≥ 0 and a_ij = a_ji, ∀i, j ∈ V. An edge between sensors i and j is denoted by e_ij ∈ E, indicating that these two agents can communicate directly with each other. Note that e_ij ∈ E if and only if a_ij > 0. By denoting the degree matrix as D ≜ diag(deg_1, ..., deg_m) with deg_i = Σ_{j=1}^m a_ij, the Laplacian matrix of G is defined as L_G ≜ D − A. In this paper, a connected network is considered. We can therefore arrange the eigenvalues of the Laplacian matrix as 0 = µ_1 < µ_2 ≤ ... ≤ µ_m.

A. Description of the distributed estimator

In light of (16), the optimal estimate fuses ξ̂_i(k) from all sensors. However, in a distributed framework, each sensor can only access the information in its neighborhood. Hence, any sensor i needs to, through communication over the network, infer ξ̂_j(k) for all j ∈ V in order to achieve a stable local estimate.

Let us denote by η_{i,j}(k) the inference made by sensor i about sensor j. As will be proved later in this section, η_{i,j}(k), by running a synchronization algorithm, can track (1/m) ξ̂_j(k) with bounded error. Hence, every sensor i can make a decent inference on ξ̂_j(k).

By collecting its inferences on all sensors together, each sensor i keeps a local state as below:

η_i(k) ≜ col(η_{i,1}(k), ..., η_{i,m}(k)) ∈ R^{mn},   (29)

which will be updated by synchronization algorithms. Since η_i(k) contains the fair inference on all ξ̂_j(k), j ∈ V, sensor i finally uses it to compute a stable local estimate.

To be concrete, let us define the message sent by agent i at time k as Δ_i(k) ≜ Γ̃ η_i(k) ∈ R^m, where Γ̃ = I_m ⊗ Γ and Γ is a design parameter to be given later. We are now ready to present the main algorithm. Suppose each node i is initialized with x̂_i(0) = 0 and η_i(0) = 0. At any instant k > 0, its update is outlined in Algorithm 1, the information flow of which is shown in Fig. 3. Compared with Fig. 2, the proposed algorithm is achieved in a distributed manner.

Fig. 3: The information flow of Algorithm 1, where nodes i and j are immediate neighbors.

**Remark 3.** Instead of transmitting the raw estimate η_i(k) ∈ R^{mn}, each agent sends a "coded" vector Δ_i(k), with a smaller size m.
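To make Remark 3 concrete, here is a minimal numpy illustration (the dimensions and gain are placeholders, not the designed Γ from Lemma 6 below) of how the mn-dimensional local state is compressed into an m-dimensional coded message:

```python
# Illustrative only: Gamma is a random placeholder, not the gain from (38).
import numpy as np

m, n = 4, 2
Gamma = np.random.randn(1, n)             # design row vector Gamma, 1 x n
Gamma_tilde = np.kron(np.eye(m), Gamma)   # Gamma~ = I_m (x) Gamma, m x mn
eta_i = np.random.randn(m * n)            # local state eta_i(k), dimension mn
Delta_i = Gamma_tilde @ eta_i             # coded message Delta_i(k), dimension m
print(Delta_i.shape)                      # (4,)
```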
**Algorithm 1 Distributed estimation algorithm for sensor i**

1: Using the latest measurement from itself, sensor i computes the local residual and updates the local estimate by

z_i(k) = y_i(k+1) − β^T ξ̂_i(k),
ξ̂_i(k+1) = S ξ̂_i(k) + 1_n z_i(k).   (30)

2: Compute Δ_i(k) = Γ̃ η_i(k), collect Δ_j(k) from neighbors, and fuse the neighboring information with the consensus algorithm as

η_i(k+1) = S̃ η_i(k) + L̃_i z_i(k) + B̃ Σ_{j=1}^m a_ij (Δ_j(k) − Δ_i(k)),   (31)

where S̃ and L̃_i are given in (21), and B̃ ≜ I_m ⊗ 1_n.

3: Update the fused estimate of the system state as

x̆_i(k+1) = m F η_i(k+1).   (32)

4: Transmit the new state Δ_i(k+1) to neighbors.
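The following is a minimal numpy sketch of one synchronous round of Algorithm 1, assuming all system matrices are supplied by the caller: S and β from the local filter (30); S̃ and the rows of L_t (one L̃_i per sensor) from (21); B_t = B̃ = I_m ⊗ 1_n; Gam_t = Γ̃; and F from (16). The names and storage layout are illustrative, not the paper's notation.

```python
import numpy as np

def algorithm1_round(eta, xi, y_next, S, beta, S_t, L_t, B_t, Gam_t, F, Adj):
    m = eta.shape[0]
    # Step 1 (eq. 30): local residuals and local filter updates.
    z = y_next - xi @ beta                 # z_i(k) = y_i(k+1) - beta^T xi_i(k)
    xi_new = xi @ S.T + z[:, None]         # xi_i(k+1) = S xi_i(k) + 1_n z_i(k)
    # Step 2 (eq. 31): coded messages and consensus fusion.
    Delta = eta @ Gam_t.T                  # Delta_i(k) = Gamma~ eta_i(k)
    eta_new = eta @ S_t.T + L_t * z[:, None]
    for i in range(m):
        diff = sum(Adj[i, j] * (Delta[j] - Delta[i]) for j in range(m))
        eta_new[i] += B_t @ diff           # B~ * sum_j a_ij (Delta_j - Delta_i)
    # Step 3 (eq. 32): fused estimate of the system state.
    x_breve = m * (eta_new @ F.T)          # x_i(k+1) = m F eta_i(k+1)
    return xi_new, eta_new, x_breve
```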
B. Performance analysis

This part is devoted to the performance analysis of Algorithm 1. We shall first provide the following theorem:

**Theorem 2.** With Algorithm 1, the average of the fused estimates from all sensors coincides with the optimal Kalman estimate at any instant k. That is,

(1/m) Σ_{i=1}^m x̆_i(k) = x̂(k), ∀k ≥ 0.   (33)

Proof. Summing (31) over all i = 1, 2, ..., m yields

Σ_{i=1}^m η_i(k+1) = S̃ Σ_{i=1}^m η_i(k) + Σ_{i=1}^m L̃_i z_i(k),   (34)

where we use the fact that a_ij = a_ji for any i, j ∈ V. Comparing it with (20), it holds for any instant k and any j ∈ V that

ξ̂_j(k) = Σ_{i=1}^m η_{i,j}(k).   (35)

Therefore, the following equation is satisfied at any k ≥ 0:

(1/m) Σ_{i=1}^m x̆_i(k) = Σ_{i=1}^m F η_i(k) = Σ_{i=1}^m Σ_{j=1}^m F_j η_{i,j}(k) = Σ_{j=1}^m F_j (Σ_{i=1}^m η_{i,j}(k)) = Σ_{j=1}^m F_j ξ̂_j(k) = x̂(k).   (36)

This completes the proof.

On the other hand, in order to show the stability of the proposed estimator, it is also desired to prove the boundedness of the error covariance. Towards this end, we introduce the following lemma, whose condition is characterized in terms of a certain relation between the Mahler measure (the absolute product of the unstable eigenvalues of S) and the graph condition number (the ratio of the maximum and minimum nonzero eigenvalues of the Laplacian matrix):

**Lemma 6.** Suppose that the product of all unstable eigenvalues of the matrix S meets the following condition:

Π_j |λ_j^u(S)| < (1 + µ_2/µ_m)/(1 − µ_2/µ_m),   (37)

where λ_j^u(S) represents the j-th unstable eigenvalue of S. Let

Γ = (2/(µ_2 + µ_m)) · (1_n^T P S)/(1_n^T P 1_n) ∈ R^{1×n},   (38)

where µ_2 and µ_m are, respectively, the second smallest and the largest eigenvalues of L_G. Moreover, P > 0 is a solution to the following modified algebraic Riccati inequality

P − S^T P S + (1 − ζ²) (S^T P 1_n 1_n^T P S)/(1_n^T P 1_n) > 0,   (39)

with ζ satisfying Π_j |λ_j^u(S)| < ζ^{−1} ≤ (1 + µ_2/µ_m)/(1 − µ_2/µ_m). Then for any j ∈ {2, ..., m}, it holds that

ρ(S − µ_j 1_n Γ) < 1.   (40)

Proof. For any j ∈ {2, ..., m}, let us denote ζ_j = 1 − 2µ_j/(µ_2 + µ_m) ≤ ζ. Since (S, 1_n) is controllable, there exists some P > 0 which solves (39). Together with (38), it holds that

(S − µ_j 1_n Γ)^T P (S − µ_j 1_n Γ) − P
= S^T P S − (1 − ζ_j²) (S^T P 1_n 1_n^T P S)/(1_n^T P 1_n) − P
≤ S^T P S − (1 − ζ²) (S^T P 1_n 1_n^T P S)/(1_n^T P 1_n) − P < 0.   (41)

Hence, our proof completes.

**Remark 4.** Note that, if all the eigenvalues of S lie on or outside the unit circle, You et al. [31] prove that (40) holds if and only if (37) is satisfied. In Lemma 6, we further show that (37) is still a sufficient condition to facilitate (40) if S has stable modes.

**Remark 5.** Invoking Remark 1, each λ_j^u(S) corresponds to a root of the characteristic polynomial of A^u. Thus, condition (37) can be rewritten using the system matrix A^u as

Π_j |r_j(A^u)| < (1 + µ_2/µ_m)/(1 − µ_2/µ_m),   (42)

where r_j(A^u) is a root of the characteristic polynomial of A^u.

With the above preparations, we are now ready to analyze the error covariance of the local estimator as below:

**Theorem 3.** Suppose that the Mahler measure of S meets condition (37), and Γ is designed based on (38)–(39). With Algorithm 1, the error covariance of each local estimate x̆_i(k) is bounded at any instant k.

Proof. Due to space limitations, the proof is given in Appendix-F.
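The two eigenvalue conditions above lend themselves to simple numerical sanity checks. Below is a small sketch (S, Γ and L_G are placeholders to be supplied; Γ would in practice come from (38)–(39)) that tests condition (37) and the Schur condition (40):

```python
import numpy as np

def condition_37(S, L_G):
    lam = np.abs(np.linalg.eigvals(S))
    mahler = np.prod(lam[lam >= 1])            # product of unstable eigenvalue moduli
    mu = np.sort(np.linalg.eigvalsh(L_G))
    mu2, mum = mu[1], mu[-1]                   # assumes a connected graph, mu2 < mum
    return mahler < (1 + mu2 / mum) / (1 - mu2 / mum)

def condition_40(S, Gamma, L_G):
    n = S.shape[0]
    one = np.ones((n, 1))
    mu = np.sort(np.linalg.eigvalsh(L_G))[1:]  # nonzero Laplacian eigenvalues
    rho = lambda M: np.max(np.abs(np.linalg.eigvals(M)))
    return all(rho(S - m_j * one @ Gamma) < 1 for m_j in mu)
```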
The proof of Theorem 3 further implies that the proposed distributed estimation scheme has quantifiable performance.
**Corollary 1.** Suppose that the Mahler measure of S meets condition (37), and Γ is designed based on (38)–(39). Let W̆ be the asymptotic error covariance of the local estimates, namely,

W̆ ≜ lim_{k→∞} cov(ĕ(k)),

where ĕ(k) ≜ col((x̆_1(k) − x(k)), ..., (x̆_m(k) − x(k))). By using Algorithm 1, it holds that

W̆ = W̄ + (1_m 1_m^T) ⊗ P,   (43)

where W̄ is the asymptotic error covariance between the local estimates and the Kalman estimate, and P is the error covariance of the Kalman filter as defined in (5). Moreover, W̆ can be exactly calculated.
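Once W̄ and P are available, the assembly in (43) is a single Kronecker product. A minimal sketch with placeholder shapes and values:

```python
import numpy as np

m, n = 4, 2
W_bar = np.zeros((m * n, m * n))   # placeholder for the consensus-induced term
P = np.eye(n)                      # placeholder for the Kalman error covariance (5)
W_breve = W_bar + np.kron(np.ones((m, m)), P)   # eq. (43)
```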
As seen from the calculation, W̄, i.e., the performance gap between our estimator and the optimal Kalman filter, is purely caused by the consensus error. Therefore, if infinitely many consensus steps are allowed between two consecutive sampling instants, the consensus error vanishes and the performance of the proposed estimator coincides with that of the Kalman filter.

Combining Theorems 2 and 3, the local estimator is stable at each sensor side. Therefore, we conclude that by applying the algorithm designed for linear system synchronization, i.e., (31), the problem of distributed state estimation is resolved.
**Remark 6.** Note that Algorithm 1 requires each agent to send out an m-dimensional vector Δ_i(k) at any time. Therefore, in a network with a large number of sensors, i.e., n < m, this solution will incur a high communication cost. To address this issue, this remark, by leveraging the reduced-order estimator (27) in Theorem 1, modifies Algorithm 1 to introduce lower communication complexity. To be specific, we aim to implement the reduced-order system (27) with distributed estimators. As before, any agent i stores its estimates of all the others in a variable ϑ_i(k), where

ϑ_i(k) ≜ col(ϑ_{i,1}(k), ..., ϑ_{i,n}(k)) ∈ R^{n²}.   (44)

Each sensor i is initialized with x̂_i(0) = 0 and ϑ_i(0) = 0. For the case of n < m, the estimation algorithm works as in Algorithm 2. Following similar arguments, the local estimator at each sensor side is proved to be stable.

Combining this with Algorithm 1, we conclude that the size of the message sent by each sensor at any time is min{m, n}. Compared with the existing solutions in distributed estimation, e.g., [12]–[16], our algorithm enjoys lower message complexity.

**Algorithm 2 Distributed estimation algorithm 2 for sensor i**

1: Using the latest measurement from itself, sensor i computes the local residual and updates the local estimate by

z_i(k) = y_i(k+1) − β^T ξ̂_i(k),
ξ̂_i(k+1) = S ξ̂_i(k) + 1_n z_i(k).

2: Compute Δ_i(k) = (I_n ⊗ Γ) ϑ_i(k), where Γ is calculated by (38). Collect Δ_j(k) from neighbors and fuse the neighboring information with the consensus algorithm as

ϑ_i(k+1) = (I_n ⊗ S) ϑ_i(k) + T_i z_i(k) + (I_n ⊗ 1_n) Σ_{j=1}^m a_ij (Δ_j(k) − Δ_i(k)),   (45)

where T_i is defined in (26).

3: Update the fused estimate of the system state as

x̆_i(k+1) = m H ϑ_i(k+1),   (46)

where H is given in (28).

4: Transmit the new state Δ_i(k+1) to neighbors.

**Remark 7.** Notice that sensor node i has perfect information of its own local estimate ξ̂_i(k). Therefore, instead of using η_{i,i}(k) to infer ξ̂_i(k)/m, node i can simply use ξ̂_i(k)/m in place of η_{i,i}(k) in (32), which potentially improves the performance of the estimators.
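A matching numpy sketch of one round of Algorithm 2 (the n < m variant), mirroring the Algorithm 1 sketch above. Here theta stacks each sensor's reduced state (dimension n²), T_stack stacks the vectors T_i from (26), Γ comes from (38) and H from (28); all of these are placeholders to be supplied by the caller.

```python
import numpy as np

def algorithm2_round(theta, xi, y_next, S, beta, T_stack, Gamma, H, Adj):
    m = len(y_next)
    n = S.shape[0]
    z = y_next - xi @ beta                        # step 1: residuals
    xi_new = xi @ S.T + z[:, None]                # xi_i(k+1) = S xi_i + 1_n z_i
    InS = np.kron(np.eye(n), S)                   # I_n (x) S
    InG = np.kron(np.eye(n), Gamma)               # I_n (x) Gamma
    In1 = np.kron(np.eye(n), np.ones((n, 1)))     # I_n (x) 1_n
    Delta = theta @ InG.T                         # messages Delta_i(k), size n
    theta_new = theta @ InS.T + T_stack * z[:, None]
    for i in range(m):
        diff = sum(Adj[i, j] * (Delta[j] - Delta[i]) for j in range(m))
        theta_new[i] += In1 @ diff                # consensus correction, eq. (45)
    x_breve = m * (theta_new @ H.T)               # fused estimate, eq. (46)
    return xi_new, theta_new, x_breve
```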
V. EXTENSIONS OF PROPOSED SOLUTIONS

In the previous sections, we leveraged the linear system synchronization algorithm proposed in [31] to solve the problem of distributed state estimation. In this section, we aim to extend this result and show that any control strategy which can facilitate linear system synchronization can be modified to yield a stable distributed estimator. As a result, we bridge the fields of distributed state estimation and linear system synchronization.

Let us consider the synchronization of the following homogeneous LTI system:

η_i(k+1) = S̃ η_i(k) + B̃ u_i(k), ∀i ∈ V,   (47)

where u_i(k) is the control input of agent i. In the literature, a large variety of synchronization algorithms has been proposed within the framework below:

ω_i(k+1) = A ω_i(k) + B η_i(k+1),
Δ_i(k) = Γ̃ ω_i(k),
u_i(k) = Σ_{j=1}^m a_ij γ_ij(k) (Δ_j(k) − Δ_i(k)),   (48)

where ω_i(k) is the "hidden state" that is necessary for agent i to yield the communication state Δ_i(k) and input u_i(k), and Γ̃ refers to the control gain. Notice that (48) can be used to model a controller with memory. Moreover, γ_ij(k) ∈ [0, 1] models the fading or lossy effect of the communication channel from agent j to agent i. At every time step, the agent collects the available information in its neighborhood and synthesizes its communication state and control signal via (48).

For simplicity, we denote by U the control strategy that can be represented by (48). Let the average of the local states at time k be

η̄(k) = (1/m) Σ_{i=1}^m η_i(k).
The network of subsystems (47) reaches strong synchronization under U if the following statements hold at any time:

1) Consistency: the average of local states keeps consistent throughout the execution, i.e.,

η̄(k+1) = S̃ η̄(k).   (49)

2) Exponential stability: agents exponentially reach consensus in the mean square sense, i.e., there exist c > 0 and ρ ∈ (0, 1) such that

E[||η_i(k) − η̄(k)||²] ≤ c ρ^k, ∀i ∈ V.   (50)
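Properties (49)–(50) are easy to exhibit on a toy example. The sketch below simulates m scalar agents (so S̃ = B̃ = 1) on a unit-weight ring under a static gain; all values are illustrative, and the gain eps is chosen so that I − eps·L_G is Schur on the disagreement subspace (eps = 0.3 < 2/µ_max = 0.5 for this ring):

```python
import numpy as np

m, steps, eps = 4, 50, 0.3
Adj = np.zeros((m, m))
for i in range(m):
    Adj[i, (i + 1) % m] = Adj[i, (i - 1) % m] = 1.0   # ring topology

rng = np.random.default_rng(0)
eta = rng.standard_normal(m)
for k in range(steps):
    avg = eta.mean()
    u = np.array([sum(Adj[i, j] * eps * (eta[j] - eta[i]) for j in range(m))
                  for i in range(m)])
    eta = eta + u                         # eta_i(k+1) = S~ eta_i(k) + B~ u_i(k)
    assert abs(eta.mean() - avg) < 1e-9   # consistency (49): average preserved
print(np.abs(eta - eta.mean()).max())     # geometric decay, property (50)
```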
We now review several existing strategies which facilitate strong synchronization and show that they can be represented by (48):

1) Let Δ_i(k) = Γ̃ η_i(k) be the communication state defined in Section IV-A. To facilitate the synchronization of homogeneous linear systems over an undirected communication topology, You et al. [31] design the following control law:

u_i(k) = Σ_{j=1}^m a_ij (Δ_j(k) − Δ_i(k)),   (51)

which coincides with (48).

2) Another example is the filtered consensus protocol given in [34]. By designing the hidden state as

ω_i(k) = F(q) η_i(k),   (52)

where q is the unit advance operator, i.e., q^{−1} s(k) = s(k−1), and F(z) is the transfer function of a square stable filter, the synchronization of linear systems is achieved by (48) under a condition more relaxed than (37), namely Π_j |λ_j^u(S)| < (1 + √(µ_2/µ_m))/(1 − √(µ_2/µ_m)).

3) Instead of focusing on perfect communication channels, the authors in [32] and [33] develop control protocols to account for random failures on communication links and Markovian switching topologies, respectively. By modeling the packet loss with the Bernoulli random variable γ_ij(k) ∈ {0, 1}, these works complement the results in [31] and prove mean square stability under the control strategy (48).

Notice that Algorithms 1 and 2 utilize (51) for achieving synchronization and producing stable distributed estimators. In what follows, we argue that the optimal Kalman estimate can indeed be distributively implemented using any linear system synchronization algorithm facilitating (49)–(50). To be specific, Algorithm 1 should be modified (see footnote 4) by replacing (31) with

η_i(k+1) = S̃ η_i(k) + B̃ u_i(k) + L̃_i z_i(k),   (53)

where u_i(k) is generated by U, which facilitates (49)–(50). We then state the stability of the local estimators as below:

(Footnote 4: Similarly, in the case of n < m, one can also derive the general form of Algorithm 2.)

**Theorem 4.** Consider any algorithm U which facilitates the statements (49) and (50). At any time k, suppose each γ_ij(k) is independent of the noises {w(k)} and {v(k)}. Then (53) yields a stable estimator for each sensor node. Specifically, the following statements hold for any k ≥ 0:

1) the average of the local estimates from all sensors coincides with the optimal Kalman estimate;
2) the error covariance of each local estimate is bounded.

Proof. The proof is given in Appendix-H.

**Remark 8.** Theorem 4 assumes the independence of the communication topology and the system/measurement noises. Therefore, as for event-based synchronization algorithms, where the communication relies on the agents' states, we cannot analyze their efficiency in solving the distributed estimation problem by directly resorting to Theorem 4. In future work, we will continue to investigate this topic.

In contrast with Fig. 1, this work, by using the lossless decomposition of the Kalman filter, decouples the local filter from the consensus process, as shown in Fig. 3. The decoupling enables us to leverage the rich results in linear system synchronization to analyze the performance of the local estimators, as proved in Theorem 4. Moreover, following similar proof arguments as those of Theorem 3, we can show that, with our framework, the error covariance of each local estimate actually consists of two orthogonal parts: the inherent estimation error of the Kalman filter and the distance from the local estimate to the Kalman filter, namely:

cov(ĕ_i(k)) = cov(x̆_i(k) − x(k))
= cov(x̆_i(k) − x̂(k) + x̂(k) − x(k))
= cov(x̆_i(k) − x̂(k)) + cov(x̂(k) − x(k))
= cov(x̆_i(k) − (1/m) Σ_{i=1}^m x̆_i(k)) + cov(x̂(k) − x(k))
= m² F cov(η_i(k) − (1/m) Σ_{i=1}^m η_i(k)) F^T + cov(x̂(k) − x(k)),

where the third equality holds due to the optimality of the Kalman filter, and the last equality holds by (32). Notice that the second term on the RHS is the error covariance of the Kalman filter, while the first term is the error between the local estimate and the Kalman filter, which is purely determined by the consensus process. Therefore, by choosing a proper strategy U, extensive results on achieving strong synchronization can be applied to (53) to deal with the consensus error in various settings, such as directed graphs, time-varying topologies, etc. Particularly, if infinitely many consensus steps are allowed between two consecutive sampling instants, the consensus error vanishes, i.e., η_i(k) − (1/m) Σ_{i=1}^m η_i(k) = 0, and the performance of the proposed estimator is optimal since it coincides with that of the Kalman filter. That means global optimality can be guaranteed.

VI. NUMERICAL EXAMPLE

In this section, we present numerical examples to verify the theoretical results obtained in the previous sections.
A. Numerical example when n < m

Let us consider the case where four sensors cooperatively estimate the system state. The system parameters are listed below:

A = [0.9, 0; 0, 1.1], C = [1, 0, 1, 1; 0, 1, 1, −1]^T,
Q = 0.25 I_2, R = 4 I_4.   (54)

In this example, the number of states is smaller than that of sensors, i.e., n < m. We therefore choose Algorithm 2. Moreover, notice that the system is unstable, and sensor 1 cannot observe the unstable state.

Suppose that the topology of these four sensors is a ring with weight 1 for each edge. The Laplacian matrix is thus

L_G = [2, −1, 0, −1; −1, 2, −1, 0; 0, −1, 2, −1; −1, 0, −1, 2].   (55)

It is not difficult to check that the second smallest and the largest eigenvalues of L_G are, respectively, µ_2 = 2 and µ_4 = 4. To fulfill the sufficient condition in Lemma 6, let us choose ζ = 0.5.

We set the initial state x(0) ∼ N(0, I) and the initial local estimate x̆_i(0) = 0 for each sensor i ∈ {1, 2, 3, 4}. It can be seen that the mean squared local estimation error e_i(k) enters steady state and is stable after a few steps (see Fig. 4).

Fig. 4: Average mean square estimation error of system states under the Kalman filter and the local estimators in 10000 experiments.
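The setup of this example is easy to reproduce and check numerically; by (42), the Mahler measure of S can be evaluated directly from A. A short sketch:

```python
import numpy as np

A = np.diag([0.9, 1.1])
C = np.array([[1, 0], [0, 1], [1, 1], [1, -1]], dtype=float)
Q, R = 0.25 * np.eye(2), 4 * np.eye(4)

L_G = np.array([[ 2, -1,  0, -1],
                [-1,  2, -1,  0],
                [ 0, -1,  2, -1],
                [-1,  0, -1,  2]], dtype=float)
mu = np.sort(np.linalg.eigvalsh(L_G))
mu2, mu4 = mu[1], mu[-1]                            # 2.0 and 4.0
lam = np.abs(np.linalg.eigvals(A))
mahler = np.prod(lam[lam >= 1])                     # = 1.1
print(mahler < (1 + mu2 / mu4) / (1 - mu2 / mu4))   # condition (37): 1.1 < 3 -> True
```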
B. Numerical example when n > m

In the second example, we simulate the heat transfer process (see footnote 5) in a planar closed region discussed in [41] and [42]:

∂u/∂t = α (∂²u/∂x_1² + ∂²u/∂x_2²),   (56)

with boundary conditions

∂u/∂x_1 |_{t,0,x_2} = ∂u/∂x_1 |_{t,l,x_2} = ∂u/∂x_2 |_{t,x_1,0} = ∂u/∂x_2 |_{t,x_1,l} = 0,   (57)

where x_1 and x_2 are the coordinates in the region, u(t, x_1, x_2) indicates the temperature at time t at position (x_1, x_2), l is the side length of the square region, and α adjusts the speed of the diffusion process. With an N × N grid and sample frequency 1 Hz, the diffusion process can be discretized as

u(k+1, i, j) − u(k, i, j) = (α/h²)[u(k, i−1, j) + u(k, i, j−1) + u(k, i+1, j) + u(k, i, j+1) − 4u(k, i, j)],   (58)

where h = l/(N−1) denotes the size of each grid cell and u(k, i, j) indicates the temperature at time k at location (ih, jh). By collecting all the temperature values of each grid cell, we define the state variable U(k) = [u(k, 0, 0), ..., u(k, 0, N−1), u(k, 1, 0), ..., u(k, N−1, N−1)]^T. Further, by introducing process noise into (58), one derives the following system equation:

U(k+1) = A U(k) + w(k),   (59)

where w(k) ∼ N(0, Q) is Gaussian noise.

As shown in Fig. 5, m sensors are randomly deployed in this region to monitor the temperature, where the measurement of each sensor is a linear combination of the temperatures of the grid points around it. Specifically, suppose the location of sensor s is (x̂_1, x̂_2) such that x̂_1 ∈ [i, i+1) and x̂_2 ∈ [j, j+1); we define Δx̂_1 = x̂_1 − i and Δx̂_2 = x̂_2 − j. We assume that the measurement of sensor s at time k is

y_s(k) = (1/h²)[(1 − Δx̂_1)(1 − Δx̂_2) u(k, i, j) + Δx̂_1 (1 − Δx̂_2) u(k, i+1, j) + (1 − Δx̂_1) Δx̂_2 u(k, i, j+1) + Δx̂_1 Δx̂_2 u(k, i+1, j+1)] + v_s(k).   (60)

We collect the measurements of all sensors at time k and denote them as Y(k); it then follows that

Y(k) = C U(k) + v(k),   (61)

where v(k) ∼ N(0, R) is the measurement noise and the measurement matrix C can be derived from (60); a construction sketch for A and C is given below. The parameters for the simulation are listed below:

• α = 0.2;
• l = 4 and N = 5, thus the grid size h = 1;
• n = N² = 25 and m = 15; therefore n > m, which is different from our first example;
• Q = 0.2 I_25 and R = 3 I_15.

(Footnote 5: State estimation in diffusion processes has wide applications in sensor networks, e.g., urban CO2 emission monitoring [39] and temperature monitoring in data centers [40].)
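The following sketch shows one way the discretized diffusion matrix A (eq. 58) and a row of the measurement matrix C (eq. 60) could be assembled. The boundary handling, reflecting out-of-range neighbors, is our reading of the zero-flux conditions (57), and sensors are assumed to lie strictly inside the grid.

```python
import numpy as np

def diffusion_A(N, alpha, h):
    A = np.eye(N * N)
    idx = lambda i, j: i * N + j
    for i in range(N):
        for j in range(N):
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ni = min(max(i + di, 0), N - 1)    # reflect at the boundary
                nj = min(max(j + dj, 0), N - 1)
                A[idx(i, j), idx(ni, nj)] += alpha / h**2
                A[idx(i, j), idx(i, j)] -= alpha / h**2
    return A

def sensor_row(N, h, x1, x2):
    # Bilinear interpolation weights of eq. (60) for a sensor at (x1, x2).
    i, j = int(x1), int(x2)
    d1, d2 = x1 - i, x2 - j
    row = np.zeros(N * N)
    weights = {(i, j): (1 - d1) * (1 - d2), (i + 1, j): d1 * (1 - d2),
               (i, j + 1): (1 - d1) * d2, (i + 1, j + 1): d1 * d2}
    for (a, b), w in weights.items():
        if w:                                      # skip zero-weight out-of-grid cells
            row[a * N + b] = w / h**2
    return row

A = diffusion_A(N=5, alpha=0.2, h=1.0)             # parameters of example B
```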
Fig. 5: (a) The position and topology of the m sensors on the N × N grid lines; (b) the estimate variance of the centralized Kalman filter; (c) the estimate variance of the local Kalman filter; (d) the estimate variance of our estimators in 10000 experiments.
As discussed in Remark 7, we replace η_{i,i}(k) with the estimates given by the local Kalman filters. The results are shown in Fig. 5. Our algorithm achieves better performance compared with the local Kalman filters, which merely use the measurements of the sensor itself. The improvement for each sensor can be found in TABLE I. Specifically, for each sensor i, we respectively define the performance of the local Kalman filter and of our algorithm in terms of

ϱ_{i1} ≜ tr(P̂_i)/tr(P), ϱ_{i2} ≜ tr(P̆_i)/tr(P),   (62)

where P̂_i, P̆_i and P are, respectively, the steady-state error covariances of the local Kalman filter, our estimator, and the centralized Kalman filter. We see that the proposed scheme outperforms the local Kalman filter by at least 50% for each sensor.
C. Comparison with existing algorithms

We further compare the performance of Algorithm 1 with those of existing algorithms: 1) the centralized Kalman filter (CKF), 2) KCF2009 ([13]), and 3) CMKF2018 ([43]), through a numerical example on an inverted pendulum.

Notice that an inverted pendulum has n = 4 states, x = [p; ṗ; θ; θ̇], namely the cart position, cart velocity, pendulum angle from vertical, and pendulum angular velocity, respectively. We consider the system linearized at θ = θ̇ = 0 and discretized with sampling interval T = 0.01 s; the detailed system equation can be found in [44], with system noise w(k) ∼ N(0, 0.05² I_4).

In the example, m = 4 sensors are connected as a ring to infer the system state. Let the measurement equation be

y(k) = [1, 0, 0, 0; 1, 0, 0, 0; 1, 0, 0, 0; 0, 0, 1, 0] x(k) + v(k),   (63)

where v(k) ∼ N(0, 0.3² I_m). Notice that sensor 4 cannot fully observe the state space. Fig. 6 illustrates the mean square error (MSE) of its estimates of p and θ, respectively. The results show that our algorithm yields better estimation performance.

Fig. 6: Comparison of the mean square error of the estimates provided by different algorithms in 10000 experiments.
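A per-sensor observability check on the measurement equation (63) can be written in a few lines. The pendulum dynamics A_pend below are only a placeholder; the discretized model from [44] would need to be substituted to reproduce the paper's setting.

```python
import numpy as np

C63 = np.array([[1, 0, 0, 0],
                [1, 0, 0, 0],
                [1, 0, 0, 0],
                [0, 0, 1, 0]], dtype=float)
A_pend = np.eye(4)   # placeholder only; substitute the model from [44]

def obs_rank(A, c):
    n = A.shape[0]
    O = np.vstack([c @ np.linalg.matrix_power(A, t) for t in range(n)])
    return np.linalg.matrix_rank(O)

print([obs_rank(A_pend, C63[i:i + 1]) for i in range(4)])
```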
D. Experiment when the global knowledge on system matrices is unavailable

Finally, notice that the proposed distributed estimator is based on the lossless decomposition of the Kalman filter developed in Section III, which requires global knowledge of 1) the system matrix A, 2) the measurement matrix C, and 3) the noise covariance matrices Q and R. In the case that certain parts of A, C, Q and R are unknown, before running Algorithm 1 or 2, each sensor can broadcast its local parameters. In this way, every sensor can obtain the system parameters it needs within finitely many steps.

To quantify the overhead incurred by this initialization, i.e., broadcasting the parameters, in the third example we conduct an experiment using m = 15 Raspberry Pis equipped with temperature sensors, which run the proposed distributed estimation algorithm every minute. In our experiment, it is assumed that the sensors do not have global information on C and R. Thus, we let each of them broadcast its C_i and R_i at the starting phase so that every sensor can obtain the system parameters it needs.
TABLE I: Performance improvement in comparison with the local Kalman filter

| Sensor i | Local KF ϱ_{i1} | Our estimator ϱ_{i2} | Improvement ϱ_{i1} − ϱ_{i2} |
|---|---|---|---|
| 1 | 1.94 | 1.26 | 68% |
| 2 | 1.94 | 1.35 | 59% |
| 3 | 1.96 | 1.31 | 65% |
| 4 | 1.96 | 1.31 | 65% |
| 5 | 1.94 | 1.26 | 68% |
| 6 | 1.93 | 1.13 | 80% |
| 7 | 1.97 | 1.22 | 75% |
| 8 | 1.94 | 1.21 | 73% |
| 9 | 1.95 | 1.23 | 71% |
| 10 | 1.94 | 1.44 | 50% |
| 11 | 1.92 | 1.12 | 80% |
| 12 | 1.94 | 1.22 | 72% |
| 13 | 1.95 | 1.18 | 76% |
| 14 | 1.94 | 1.35 | 59% |
| 15 | 1.95 | 1.18 | 77% |
The mean traffic of a sensor with 3 neighbors is shown in Fig. 7. It turns out that, compared with the centralized Kalman filter, our solution incurs a lower communication burden even with the additional effort of the initial broadcast. The merits become more apparent as the scale of the sensor network increases.

Fig. 7: Mean network traffic vs. time.
VII. CONCLUSION
In this paper, the problem of distributed state estimation has been studied for an LTI Gaussian system. We investigate both cases where m > n and m ≤ n, and propose distributed estimators for both cases that incur low communication cost. The local estimator is proved to be stable at each sensor side, in the sense that the covariance of the estimation error is proved to be bounded, and the asymptotic error covariance can also be derived. Our major merit lies in reformulating the problem of distributed estimation as one of linear system synchronization.
APPENDIX A
PROOF OF LEMMA 1
We will prove by contradiction. If (X^T + qp^T, q) is not controllable, then we can find some s such that the rank of [X^T + qp^T − sI, q] is strictly less than n. Therefore, there exists a non-zero v such that

v^T [X^T + qp^T − sI, q] = 0,

which implies that

(X + pq^T) v − sv = 0, q^T v = 0.

Therefore (X + pq^T) v − sv = 0 and Xv − sv = 0, implying that s is an eigenvalue of both X and X + pq^T, which contradicts the assumption that X and X + pq^T do not share eigenvalues. We thus complete the proof.
APPENDIX B
PROOF OF LEMMA 2
We will prove this lemma by construction. Towards this end, let us consider the following equation:

T R_X = T [p, Xp, ..., X^{n−1} p] = [q, Y q, ..., Y^{n−1} q] = R_Y,   (64)

where R_X = [p, Xp, ..., X^{n−1} p] and R_Y = [q, Y q, ..., Y^{n−1} q].

Since (X, p) is controllable, R_X is full rank and thus invertible, and T = R_Y R_X^{−1} solves (64). Clearly T p = q. In what follows, we shall prove that TX = Y T. To this end, let us denote the characteristic polynomial of X as ϕ(s) = s^n + α_{n−1} s^{n−1} + ... + α_0. It is noted that

T X^n p = T (−α_{n−1} X^{n−1} − α_{n−2} X^{n−2} − ... − α_0 I) p = (−α_{n−1} Y^{n−1} q − α_{n−2} Y^{n−2} q − ... − α_0 q) = Y^n q,   (65)

where the first and the last equalities are due to the Cayley–Hamilton theorem, and the second equality follows from the fact that T R_X = R_Y. As a result,

T X R_X = T [Xp, ..., X^n p] = [Y q, ..., Y^n q] = Y R_Y.

Hence, TX = Y R_Y R_X^{−1} = Y T, which finishes the proof.
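The construction in this proof is directly computable. A minimal sketch, assuming X, Y, p, q satisfy the lemma's hypotheses (controllability and a shared spectrum):

```python
import numpy as np

def similarity_T(X, p, Y, q):
    n = X.shape[0]
    R_X = np.column_stack([np.linalg.matrix_power(X, t) @ p for t in range(n)])
    R_Y = np.column_stack([np.linalg.matrix_power(Y, t) @ q for t in range(n)])
    return R_Y @ np.linalg.inv(R_X)   # satisfies T p = q and T X = Y T
```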
APPENDIX C
PROOF OF LEMMA 4

1) From (20), it is easy to verify that

S ξ̂_i(k) + 1_n z_i(k) = (Λ + 1_n β^T) ξ̂_i(k) + 1_n [y_i(k+1) − β^T ξ̂_i(k)] = Λ ξ̂_i(k) + 1_n y_i(k+1).   (66)

As a result, the local filter (20) has the same input-output relationship as (15).

2) By Lemma 2, we know that for any i ∈ V, we can find G_i^u ∈ R^{n×n_u} such that

(G_i^u)^T S^T = (A^u)^T (G_i^u)^T, (G_i^u)^T β = (C_i^u A^u)^T,

which implies that

G_i^u A^u − 1_n C_i^u A^u = S G_i^u − 1_n β^T G_i^u = (Λ + 1_n β^T) G_i^u − 1_n β^T G_i^u = Λ G_i^u,
β^T G_i^u = C_i^u A^u.   (67)

Furthermore,

[G_i^u, 0] A − 1_n C_i A = [G_i^u A^u, 0] − 1_n [C_i^u A^u, C_i^s A^s] = Λ [G_i^u, 0] − 1_n [0, C_i^s A^s],
β^T [G_i^u, 0] = [C_i^u A^u, 0] = C_i A − [0, C_i^s A^s],   (68)

where A and C_i are given in (8) and (10), respectively. For simplicity, we denote

G_i ≜ [G_i^u, 0] ∈ R^{n×n}.   (69)

Moreover, let

ϵ_i(k) ≜ G_i x(k) − ξ̂_i(k).   (70)

It follows from (15) that

ϵ_i(k+1) = G_i x(k+1) − ξ̂_i(k+1)
= G_i A x(k) + G_i w(k) − Λ ξ̂_i(k) − 1_n y_i(k+1)
= (G_i − 1_n C_i) A x(k) − Λ ξ̂_i(k) + (G_i − 1_n C_i) w(k) − 1_n v_i(k+1)
= Λ G_i x(k) − 1_n [0, C_i^s A^s] x(k) − Λ ξ̂_i(k) + (G_i − 1_n C_i) w(k) − 1_n v_i(k+1)
= Λ ϵ_i(k) − 1_n C_i^s A^s x^s(k) + (G_i − 1_n C_i) w(k) − 1_n v_i(k+1),   (71)

where the second-to-last equality holds by (68). Due to the fact that Λ and A^s are stable, we conclude that ϵ_i(k) is stable, i.e., cov(ϵ_i(k)) is bounded.

One thus has

z_i(k) = y_i(k+1) − β^T ξ̂_i(k)
= y_i(k+1) − β^T (G_i x(k) − ϵ_i(k))
= C_i (A x(k) + w(k)) + v_i(k+1) + β^T ϵ_i(k) − (C_i A − [0, C_i^s A^s]) x(k)
= β^T ϵ_i(k) + C_i^s A^s x^s(k) + C_i w(k) + v_i(k+1).   (72)

As proved in (71), cov(ϵ_i(k)) is bounded. Moreover, it follows from (9) that C_i^s A^s x^s(k) is a linear combination of the stable parts of x(k). Also, the covariances of w(k) and v_i(k+1) are bounded by Q and R_i, respectively. We thus conclude that z_i(k) is stable, i.e., the covariance of z_i(k) is always bounded.

APPENDIX D
PROOF OF LEMMA 5

For the proof of Lemma 5, we need the following result:

**Lemma 7.** Given any vector w ∈ R^n, suppose (S^T, v) is controllable; then there exists a polynomial ϕ of degree at most n − 1 such that w can be decomposed as

w^T = v^T ϕ(S).   (73)

Proof. Suppose ϕ(S) = α_0 I + α_1 S + ... + α_{n−1} S^{n−1}. We thus rewrite (73) as

w = [v, S^T v, ..., (S^{n−1})^T v] [α_0; ...; α_{n−1}].   (74)

Since (S^T, v) is controllable, the first matrix on the RHS of the equation has column rank n, and hence the above equation is always solvable. We therefore complete the proof.

Now we are ready to prove Lemma 5. Notice that any matrix W can be decomposed as

W = [w_1^T; ...; w_n^T] = e_1 w_1^T + e_2 w_2^T + ... + e_n w_n^T.   (75)

Since (S^T, β) is controllable, (24) can be concluded by applying Lemma 7 to (75).

APPENDIX E
PROOF OF THEOREM 1

To begin with, we note that the following relation holds true at any k ≥ 0:

F_i S^k = (Σ_{j=1}^n H_j p_ij(S)) S^k = Σ_{j=1}^n H_j S^k p_ij(S),   (76)

where the last equality holds as S is commutable with any polynomial of itself. Then let us consider the output of system (22):

x̂(k+1) = Σ_{t=0}^k F (I_m ⊗ S)^t (I_m ⊗ 1_n) z(k−t)
= Σ_{t=0}^k Σ_{i=1}^m F_i S^t 1_n z_i(k−t)
= Σ_{t=0}^k Σ_{i=1}^m Σ_{j=1}^n H_j S^t p_ij(S) 1_n z_i(k−t)
= Σ_{t=0}^k Σ_{j=1}^n H_j S^t (Σ_{i=1}^m p_ij(S) 1_n z_i(k−t))
= Σ_{t=0}^k H (I_n ⊗ S)^t T z(k−t) = x̃(k+1).   (77)

Notice that (27) has z(k) as its input and x̃(k) as its output. As proved above, given any z(k), (22) and (27) yield the same output, i.e., x̃(k+1) = x̂(k+1). Hence, we conclude that the two systems have identical transfer functions. The proof is thus completed.

APPENDIX F
PROOF OF THEOREM 3

For simplicity, we first define aggregated vectors and matrices as below:

η(k) ≜ col(η_1(k), ..., η_m(k)), ξ̂(k) ≜ col(ξ̂_1(k), ..., ξ̂_m(k)), L_η ≜ col(L̃_1, ..., L̃_m).   (78)
Then, we can rewrite (31) in matrix form as

η(k+1) = (I_m ⊗ S̃) η(k) + L_η z(k) − [I_m ⊗ (B̃Γ̃)](L_G ⊗ I_mn) η(k)
= [I_m ⊗ S̃ − L_G ⊗ (B̃Γ̃)] η(k) + L_η z(k).   (79)

Next let us denote the average state of all agents as

η̄(k) ≜ (1/m) Σ_{i=1}^m η_i(k) = (1/m)(1_m^T ⊗ I_mn) η(k).   (80)

Since 1_m^T L_G = 0, it holds that

η̄(k+1) = (1/m)(1_m^T ⊗ I_mn) {[I_m ⊗ S̃ − L_G ⊗ (B̃Γ̃)] η(k) + L_η z(k)}
= S̃ η̄(k) + (1/m)(1_m^T ⊗ I_mn) L_η z(k).   (81)

Furthermore, we define the state deviation of each sensor as δ_i(k) ≜ η_i(k) − η̄(k) and then stack them as an aggregated vector δ(k) ≜ col(δ_1(k), ..., δ_m(k)). Combining (79) and (81) yields the dynamic equation of δ(k):

δ(k+1) = [I_m ⊗ S̃ − L_G ⊗ (B̃Γ̃)] δ(k) + L_δ z(k),   (82)

where

L_δ ≜ [(I_m − (1/m) 1_m 1_m^T) ⊗ I_mn] L_η.   (83)

Recall that the Laplacian matrix of an undirected graph is symmetric. Therefore, we can always find a unitary matrix Φ ≜ [(1/√m) 1_m, φ_2, ..., φ_m] such that L_G is diagonalized as

diag(0, µ_2, ..., µ_m) = Φ^T L_G Φ.   (84)

Using the property of the Kronecker product yields

(Φ ⊗ I_mn)^T [I_m ⊗ S̃ − L_G ⊗ (B̃Γ̃)] (Φ ⊗ I_mn) = diag(S̃, S̃ − µ_2 B̃Γ̃, ..., S̃ − µ_m B̃Γ̃).   (85)

Denote

δ̃(k) ≜ (Φ ⊗ I_mn)^T δ(k).   (86)

One has

δ̃(k+1) = A_δ̃ δ̃(k) + L_δ̃ z(k),   (87)

where A_δ̃ ≜ diag(S̃, S̃ − µ_2 B̃Γ̃, ..., S̃ − µ_m B̃Γ̃) and L_δ̃ ≜ [(Φ^T − (1/m) Φ^T 1_m 1_m^T) ⊗ I_mn] L_η.

We next study the stability of the above system. To proceed, let us partition the state into two parts, i.e., δ̃(k) = [δ̃_1^T(k), δ̃_2^T(k)]^T, where δ̃_1(k) ∈ R^{mn} is a vector consisting of the first mn entries of δ̃(k) and satisfies

δ̃_1(k) = (1/√m) Σ_{i=1}^m δ_i(k) = (1/√m) Σ_{i=1}^m (η_i(k) − η̄(k)) = 0.   (88)

Therefore, δ̃_1(k) is stable. Moreover, it holds that

δ̃_2(k+1) = diag(S̃ − µ_2 B̃Γ̃, ..., S̃ − µ_m B̃Γ̃) δ̃_2(k) + L̃_δ̃ z(k),   (89)

where L̃_δ̃ consists of the last (m²n − mn) rows of L_δ̃. In view of Lemma 6, diag(S̃ − µ_2 B̃Γ̃, ..., S̃ − µ_m B̃Γ̃) is Schur. Recalling Lemma 4, z(k) is also stable. We therefore conclude that (89) is stable, which further implies the stability of (87).

On the other hand, one derives from (72) that

z(k) = C w(k) + v(k+1) + (I_m ⊗ β^T) ϵ(k) + C^s A^s x^s(k),   (90)

where ϵ(k) ≜ col(ϵ_1(k), ..., ϵ_m(k)) and C^s ≜ col(C_1^s, ..., C_m^s).

Recalling (71), it follows that

ϵ(k+1) = (I_m ⊗ Λ) ϵ(k) + W_ϵ w(k) + V_ϵ v(k+1) + A_ϵ x^s(k),   (91)

where

W_ϵ ≜ col(G_1 − 1_n C_1, ..., G_m − 1_n C_m), V_ϵ ≜ −(I_m ⊗ 1_n), A_ϵ ≜ −(I_m ⊗ 1_n) C^s A^s.

By combining the above dynamics with (9), one derives that

[δ̃(k+1); ϵ(k+1); x^s(k+1)] = A_r [δ̃(k); ϵ(k); x^s(k)] + [L_δ̃ C; W_ϵ; J] w(k) + [L_δ̃; V_ϵ; 0] v(k+1),   (92)

where

A_r = [A_δ̃, L_δ̃ (I_m ⊗ β^T), L_δ̃ C^s A^s; 0, I_m ⊗ Λ, A_ϵ; 0, 0, A^s].

Notice that the above system is stable. Hence, we calculate the covariance on both sides in steady state. It holds that W_r, the steady-state covariance, is the unique solution of the Lyapunov equation below:

W_r = A_r W_r A_r^T + [L_δ̃ C; W_ϵ; J] Q [L_δ̃ C; W_ϵ; J]^T + [L_δ̃; V_ϵ; 0] R [L_δ̃; V_ϵ; 0]^T.   (93)

In view of (86), it holds that

δ(k) = [Φ ⊗ I_mn, 0, 0] [δ̃(k); ϵ(k); x^s(k)] = Φ_δ [δ̃(k); ϵ(k); x^s(k)],   (94)

where

Φ_δ ≜ [Φ ⊗ I_mn, 0, 0].   (95)
-----
Moreover, let us denote
_e¯i(k) ≜_ _x˘i(k) −_ _xˆ(k),_ (96)
which is the bias from local estimate ˘xi(k) to optimal Kalman
one. Combining (16) and (35) yields
By Cauchy-Schwarz inequality, it holds for any i, j that
�
E[κ[T]i _[κ][j][]][ ≤]_
�
E[κ[T]i _[κ][i][]]_ E[κ[T]j _[κ][j][]][.]_ (106)
_xˆ(k) = F_
_m_
�
_ηi(k)._ (97)
_i=1_
One thus has
_e¯i(k) = mF_ (ηi(k) − _η¯(k)) = mFδi(k)._ (98)
Stacking such errors from all sensors together yields
�
_._ (99)
The proof is thus completed.
We next prove Theorem 4. Applying similar arguments to
Theorem 2, it is easy to see from the consistency condition
(49) that the average of local estimates coincides with the
optimal Kalman filter. We hence focus on the analysis of
estimation error covariance.
Let us denote δi(k) ≜ _ηi(k)−1/m_ [�]i[m]=1 _[η][i][(][k][)][ and][ ϖ][i][(][k][)][ ≜]_
_ωi(k) −_ 1/m [�]i[m]=1 _[ω][i][(][k][)][. Moreover, we define]_
_δ(k) ≜_ col(δ1(k), · · ·, δm(k)),
_ϖ(k) ≜_ col(ϖ1(k), · · ·, ϖm(k)).
It hence follows from (48) that
�δ(k + 1)� �D(k) _J (k)��_ _δ(k)_ � �Lδ�
= + _z(k),_
_ϖ(k)_ _ϖ(k_ 1) 0
_B�_ _A�_ _−_
(107)
where Lδ is defined in (83), and
_D(k) ≜_ _Im ⊗_ _S[˜] −L(k) ⊗_ ( B[˜]Γ[˜]B),
_J (k) ≜_ _−L(k) ⊗_ ( B[˜]Γ[˜]A), _A[�] ≜_ _Im ⊗A,_ _B[�] ≜_ _Im ⊗B,_
_e¯(k) = (Im ⊗_ _mF_ )δ(k) = (Im ⊗ _mF_ )Φδ
�δ˜(k)
_ϵ(k)_
Therefore, in steady state, the covariance of ¯e(k) can be
calculated as
_W¯_ = [(Im ⊗ _mF_ )Φδ]Wr[(Im ⊗ _mF_ )Φδ][T] _._ (100)
Finally, for any sensor i, let us denote its estimation error as
_e˘i(k) = ˘xi(k) −_ _x(k)_
= (˘xi(k) − _xˆ(k)) + (ˆx(k) −_ _x(k))_ (101)
= ¯ei(k) + ˆe(k),
where ˆe(k) is the estimation error of Kalman filter. Since
Kalman filter is optimal, ¯ei(k) is orthogonal to ˆe(k).
By defining ˘e(k) ≜ col(˘e1(k), · · ·, ˘em(k)), we therefore
have
_e˘(k) = ¯e(k) + 1m ⊗eˆ(k)._ (102)
Calculating the covariance of both sides yields
_W˘_ = ¯W + (1m 1[T]m[)][ ⊗] _[P,]_ (103)
where _W[˘]_ is the steady-state covariance of ˘e(k) and P is given
in (5). Notice that the above calculation also indicates the
boundedness of cov(˘e(k)) at any time.
APPENDIX G
PROOF OF COROLLARY 1
As proved in Appendix-F, one can exactly calculate _W[¯]_ by
solving Lyapunov equations (93) and (100). The result is thus
obvious by invoking (103).
APPENDIX H
PROOF OF THEOREM 4

To proceed, let us introduce the following lemma:

**Lemma 8.** Given any random variables κ_1, ..., κ_τ, it follows that

E[||Σ_{i=1}^τ κ_i||²] ≤ (Σ_{i=1}^τ √(E[||κ_i||²]))².   (104)

Proof. In order to prove (104), it is equivalent to show that

Σ_{i=1}^τ Σ_{j=1}^τ E[κ_i^T κ_j] ≤ Σ_{i=1}^τ Σ_{j=1}^τ √(E[κ_i^T κ_i]) √(E[κ_j^T κ_j]).   (105)

By the Cauchy–Schwarz inequality, it holds for any i, j that

E[κ_i^T κ_j] ≤ √(E[κ_i^T κ_i]) √(E[κ_j^T κ_j]).   (106)

The proof is thus completed.

We next prove Theorem 4. Applying similar arguments as in Theorem 2, it is easy to see from the consistency condition (49) that the average of the local estimates coincides with the optimal Kalman filter. We hence focus on the analysis of the estimation error covariance.

Let us denote δ_i(k) ≜ η_i(k) − (1/m) Σ_{i=1}^m η_i(k) and ϖ_i(k) ≜ ω_i(k) − (1/m) Σ_{i=1}^m ω_i(k). Moreover, we define

δ(k) ≜ col(δ_1(k), ..., δ_m(k)), ϖ(k) ≜ col(ϖ_1(k), ..., ϖ_m(k)).

It hence follows from (48) that

[δ(k+1); ϖ(k)] = [D(k), J(k); B̂, Â] [δ(k); ϖ(k−1)] + [L_δ; 0] z(k),   (107)

where L_δ is defined in (83), and

D(k) ≜ I_m ⊗ S̃ − L(k) ⊗ (B̃Γ̃B), J(k) ≜ −L(k) ⊗ (B̃Γ̃A), Â ≜ I_m ⊗ A, B̂ ≜ I_m ⊗ B,

with L(k) ≜ {L_ij(k)} being the (random) Laplacian matrix with respect to the weights {a_ij γ_ij(k)}; namely,

L_ij(k) ≜ −a_ij γ_ij(k) for j ≠ i, and L_ii(k) ≜ Σ_{l=1}^m a_il γ_il(k).   (108)

For simplicity, let

Q(k) ≜ [D(k), J(k); B̂, Â].   (109)

Since δ_i(0) = 0 and ϖ_i(0) = 0 hold for any i, it follows that

[δ(k+1); ϖ(k)] = Σ_{t=0}^k Q(k, t+1) [L_δ; 0] z(t),   (110)

where the transition matrix is defined as Q(k, s) = Q(k) Q(k−1) ··· Q(s) for k ≥ s, and Q(k, s) = I for k < s.

Then consider the update of any agent i. From the above equation, we conclude that

δ_i(k+1) = Σ_{t=0}^k Π_i(k, t+1) z(t),   (111)

where Π_i(k, t+1) refers to the i-th (block) row of the matrix Q(k, t+1) [L_δ; 0]. Namely, the consensus error of agent i, i.e., δ_i(k+1), is caused by the sequence of residuals {z(t)}, t ≤ k. For simplicity, we denote

κ_i(k, t) ≜ Π_i(k, t+1) z(t).

Since cov(z(t)) is bounded at any time, in view of (50), the following statement holds for any t ≤ k:

E[||κ_i(k, t)||²] ≤ c ρ^{k−t}.   (112)

Therefore, one has that

cov(δ_i(k+1)) = E[||δ_i(k+1)||²] = E[||Σ_{t=0}^k κ_i(k, t)||²] ≤ (Σ_{t=0}^k √(E[||κ_i(k, t)||²]))² ≤ (Σ_{t=0}^k √(c ρ^{k−t}))² = c (1 − (√ρ)^{k+1})² / (1 − √ρ)²,   (113)

where the first inequality holds by using Lemma 8. Since ρ ∈ (0, 1), combining the above results with (98) and (101) yields that the estimation error is stable.

**Remark 9.** It is noted that the reformulation (20) with stable input z_i(k) is essential to establish the stability of the local estimators. To be concrete, the stability of (111) is guaranteed under the bounded input, which is key to proving the boundedness of the estimation error covariance, as can be observed from (98)–(103). On the other hand, if an unstable input, e.g., y_i(k) as in (15), is applied, we cannot conclude the stability of the local estimator even using exponentially converging synchronization algorithms.
REFERENCES

[1] M. V. Subbotin and R. S. Smith, "Design of distributed decentralized estimators for formations with fixed and stochastic communication topologies," Automatica, vol. 45, no. 11, pp. 2491–2501, 2009.
[2] L. Xie, D.-H. Choi, S. Kar, and H. V. Poor, "Fully distributed state estimation for wide-area monitoring systems," IEEE Transactions on Smart Grid, vol. 3, no. 3, pp. 1154–1169, 2012.
[3] Z.-Q. Luo, "Universal decentralized estimation in a bandwidth constrained sensor network," IEEE Transactions on Information Theory, vol. 51, no. 6, pp. 2210–2219, 2005.
[4] T. T. Vu and A. R. Rahmani, "Distributed consensus-based Kalman filter estimation and control of formation flying spacecraft: Simulation and validation," in Proceedings of the AIAA Guidance, Navigation, and Control Conference, 2015, p. 1553.
[5] B. Jia, K. D. Pham, E. Blasch, D. Shen, Z. Wang, and G. Chen, "Cooperative space object tracking using space-based optical sensors via consensus-based filters," IEEE Transactions on Aerospace and Electronic Systems, vol. 52, no. 4, pp. 1908–1936, 2016.
[6] B. D. Anderson and J. B. Moore, Optimal Filtering. Courier Corporation, 2012.
[7] Y. Bar-Shalom and L. Campo, "The effect of the common process noise on the two-sensor fused-track covariance," IEEE Transactions on Aerospace and Electronic Systems, no. 6, pp. 803–805, 1986.
[8] K. H. Kim, "Development of track to track fusion algorithms," in Proceedings of 1994 American Control Conference. IEEE, 1994, pp. 1037–1041.
[9] S.-L. Sun and Z.-L. Deng, "Multi-sensor optimal information fusion Kalman filter," Automatica, vol. 40, no. 6, pp. 1017–1023, 2004.
[10] B. Chen, G. Hu, D. W. Ho, and L. Yu, "Distributed Kalman filtering for time-varying discrete sequential systems," Automatica, vol. 99, pp. 228–236, 2019.
[11] R. Olfati-Saber, "Distributed Kalman filtering for sensor networks," in Proceedings of the 46th IEEE Conference on Decision and Control. IEEE, 2007, pp. 5492–5498.
[12] R. Olfati-Saber, "Distributed Kalman filter with embedded consensus filters," in Proceedings of the 44th IEEE Conference on Decision and Control. IEEE, 2005, pp. 8179–8184.
[13] R. Olfati-Saber, "Kalman-consensus filter: Optimality, stability, and performance," in Proceedings of the 48th IEEE Conference on Decision and Control (CDC) held jointly with the 28th Chinese Control Conference. IEEE, 2009, pp. 7036–7042.
[14] G. Battistelli, L. Chisci, G. Mugnai, A. Farina, and A. Graziano, "Consensus-based linear and nonlinear filtering," IEEE Transactions on Automatic Control, vol. 60, no. 5, pp. 1410–1415, 2014.
[15] W. Li and Y. Jia, "Consensus-based distributed multiple model UKF for jump Markov nonlinear systems," IEEE Transactions on Automatic Control, vol. 57, no. 1, pp. 227–233, 2011.
[16] G. Battistelli and L. Chisci, "Stability of consensus extended Kalman filter for distributed state estimation," Automatica, vol. 68, pp. 169–178, 2016.
[17] S. Del Favero and S. Zampieri, "Distributed estimation through randomized gossip Kalman filter," in Proceedings of the 48th IEEE Conference on Decision and Control (CDC) held jointly with the 28th Chinese Control Conference. IEEE, 2009, pp. 7049–7054.
[18] S. Kar and J. M. Moura, "Gossip and distributed Kalman filtering: Weak consensus under weak detectability," IEEE Transactions on Signal Processing, vol. 59, no. 4, pp. 1766–1784, 2010.
[19] K. Ma, S. Wu, Y. Wei, and W. Zhang, "Gossip-based distributed tracking in networks of heterogeneous agents," IEEE Communications Letters, vol. 21, no. 4, pp. 801–804, 2016.
[20] F. S. Cattivelli and A. H. Sayed, "Diffusion strategies for distributed Kalman filtering and smoothing," IEEE Transactions on Automatic Control, vol. 55, no. 9, pp. 2069–2084, 2010.
[21] J. Hu, L. Xie, and C. Zhang, "Diffusion Kalman filtering based on covariance intersection," IEEE Transactions on Signal Processing, vol. 60, no. 2, pp. 891–902, 2011.
[22] F. S. Cattivelli, C. G. Lopes, and A. H. Sayed, "Diffusion strategies for distributed Kalman filtering: Formulation and performance analysis," Proc. Cognitive Information Processing, pp. 36–41, 2008.
[23] M. Farina, G. Ferrari-Trecate, and R. Scattolini, "Distributed moving horizon estimation for linear constrained systems," IEEE Transactions on Automatic Control, vol. 55, no. 11, pp. 2462–2475, 2010.
[24] A. Haber and M. Verhaegen, "Moving horizon estimation for large-scale interconnected systems," IEEE Transactions on Automatic Control, vol. 58, no. 11, pp. 2834–2847, 2013.
[25] G. Battistelli and L. Chisci, "Kullback–Leibler average, consensus on probability densities, and distributed state estimation with guaranteed stability," Automatica, vol. 50, no. 3, pp. 707–718, 2014.
[26] L. Chen, P. O. Arambel, and R. K. Mehra, "Estimation under unknown correlation: Covariance intersection revisited," IEEE Transactions on Automatic Control, vol. 47, no. 11, pp. 1879–1882, 2002.
[27] X. He, W. Xue, and H. Fang, "Consistent distributed state estimation with global observability over sensor network," Automatica, vol. 92, pp. 162–172, 2018.
[28] S. Das and J. M. Moura, "Consensus+innovations distributed Kalman filter with optimized gains," IEEE Transactions on Signal Processing, vol. 65, no. 2, pp. 467–481, 2016.
[29] G. Battistelli, L. Chisci, and D. Selvi, "A distributed Kalman filter with event-triggered communication and guaranteed stability," Automatica, vol. 93, pp. 75–82, 2018.
[30] L. Shi, P. Cheng, and J. Chen, "Sensor data scheduling for optimal state estimation with communication energy constraint," Automatica, vol. 47, no. 8, pp. 1693–1698, 2011.
[31] K. You and L. Xie, "Network topology and communication data rate for consensusability of discrete-time multi-agent systems," IEEE Transactions on Automatic Control, vol. 56, no. 10, pp. 2262–2275, 2011.
[32] K. You, Z. Li, and L. Xie, "Consensus condition for linear multi-agent systems over randomly switching topologies," Automatica, vol. 49, no. 10, pp. 3125–3132, 2013.
[33] L. Xu, Y. Mo, and L. Xie, "Distributed consensus over Markovian packet loss channels," IEEE Transactions on Automatic Control, vol. 65, no. 1, pp. 279–286, 2019.
[34] G. Gu, L. Marinovici, and F. L. Lewis, "Consensusability of discrete-time dynamic multiagent systems," IEEE Transactions on Automatic Control, vol. 57, no. 8, pp. 2085–2089, 2011.
[35] F. Amato, M. Ariola, and P. Dorato, "Finite-time control of linear systems subject to parametric uncertainties and disturbances," Automatica, vol. 37, no. 9, pp. 1459–1463, 2001.
[36] Y. Su and J. Huang, "Two consensus problems for discrete-time multi-agent systems with switching network topology," Automatica, vol. 48, no. 9, pp. 1988–1997, 2012.
[37] Y. Mo and E. Garone, "Secure dynamic state estimation via local estimators," in 2016 IEEE 55th Conference on Decision and Control (CDC). IEEE, 2016, pp. 5073–5078.
[38] X. Yang, J. Yan, Y. Mo, and K. You, "A distributed implementation of steady-state Kalman filter," in 2021 40th Chinese Control Conference (CCC). IEEE, 2021, pp. 5154–5159.
[39] X. Mao, X. Miao, Y. He, X.-Y. Li, and Y. Liu, "CitySee: Urban CO2 monitoring with sensors," in 2012 Proceedings IEEE INFOCOM. IEEE, 2012, pp. 1611–1619.
[40] L. Parolini, B. Sinopoli, B. H. Krogh, and Z. Wang, "A cyber–physical systems approach to data center modeling and control for energy efficiency," Proceedings of the IEEE, vol. 100, no. 1, pp. 254–268, 2011.
[41] Y. Mo, R. Ambrosino, and B. Sinopoli, "Sensor selection strategies for state estimation in energy constrained wireless sensor networks," Automatica, vol. 47, no. 7, pp. 1330–1338, 2011.
[42] Y. Mo, R. Ambrosino, and B. Sinopoli, "Network energy minimization via sensor selection and topology control," IFAC Proceedings Volumes, vol. 42, no. 20, pp. 174–179, 2009.
[43] W. Li, G. Wei, D. W. Ho, and D. Ding, "A weightedly uniform detectability for sensor networks," IEEE Transactions on Neural Networks and Learning Systems, vol. 29, no. 11, pp. 5790–5796, 2018.
[44] Z. Li and Y. Mo, "Efficient secure state estimation against sparse integrity attack for system with non-derogatory dynamics," arXiv preprint arXiv:2106.03066, 2021.