diff --git a/DESCRIPTION b/DESCRIPTION
index 5c8076c..f36cf78 100644
--- a/DESCRIPTION
+++ b/DESCRIPTION
@@ -1,7 +1,7 @@
Package: netrics
Title: Many Ways to Measure and Classify Membership for Networks, Nodes, and Ties
-Version: 0.2.1
-Date: 2026-04-04
+Version: 0.2.2
+Date: 2026-04-24
Description: Many tools for calculating network, node, or tie
marks, measures, motifs and memberships of many different types of networks.
Marks identify structural positions, measures quantify network properties,
diff --git a/NEWS.md b/NEWS.md
index 97389bc..00a7dc4 100644
--- a/NEWS.md
+++ b/NEWS.md
@@ -1,3 +1,16 @@
+# netrics 0.2.2
+
+## Package
+
+- Updated logos
+
+## Tutorials
+
+- Updated centrality tutorial
+- Updated community tutorial
+- Updated position tutorial
+- Updated topology tutorial
+
# netrics 0.2.1
## Package
diff --git a/inst/netrics.png b/inst/netrics.png
new file mode 100644
index 0000000..0c2ba6c
Binary files /dev/null and b/inst/netrics.png differ
diff --git a/inst/tutorials/tutorial3/centrality.Rmd b/inst/tutorials/tutorial3/centrality.Rmd
index 3af1ad4..215becf 100644
--- a/inst/tutorials/tutorial3/centrality.Rmd
+++ b/inst/tutorials/tutorial3/centrality.Rmd
@@ -134,7 +134,7 @@ rowSums(mat) == colSums(mat)
```{r degreesum-hint-2, purl = FALSE}
# Or by using a built in command in manynet like this:
-node_degree(ison_brandes, normalized = FALSE)
+node_by_degree(ison_brandes, normalized = FALSE)
```
```{r degreesum-solution}
@@ -143,7 +143,7 @@ mat <- as_matrix(ison_brandes)
degrees <- rowSums(mat)
rowSums(mat) == colSums(mat)
# You can also just use a built in command in manynet though:
-node_degree(ison_brandes, normalized = FALSE)
+node_by_degree(ison_brandes, normalized = FALSE)
```
```{r degreesum-Q, echo=FALSE, purl = FALSE}
@@ -184,7 +184,7 @@ though there are more elaborate ways to do this in base and grid graphics.
```{r distrib-solution}
# distribution of degree centrality scores of nodes
-plot(node_degree(ison_brandes))
+plot(node_by_degree(ison_brandes))
```
What's plotted here by default is both the degree distribution as a histogram,
@@ -218,7 +218,7 @@ question("What can the degree distribution tell us?",
## Other centralities
Other measures of centrality can be a little trickier to calculate by hand.
-Fortunately, we can use functions from `{manynet}` to help calculate the
+Fortunately, we can use functions from `{netrics}` to help calculate the
`r gloss("betweenness")`, `r gloss("closeness")`, and `r gloss("eigenvector")` centralities for each node in the network.
Let's collect the vectors of these centralities for the `ison_brandes` dataset:
@@ -227,27 +227,27 @@ Let's collect the vectors of these centralities for the `ison_brandes` dataset:
```
```{r micent-hint-1, purl = FALSE}
-# Use the node_betweenness() function to calculate the
+# Use the node_by_betweenness() function to calculate the
# betweenness centralities of nodes in a network
-node_betweenness(ison_brandes)
+node_by_betweenness(ison_brandes)
```
```{r micent-hint-2, purl = FALSE}
-# Use the node_closeness() function to calculate the
+# Use the node_by_closeness() function to calculate the
# closeness centrality of nodes in a network
-node_closeness(ison_brandes)
+node_by_closeness(ison_brandes)
```
```{r micent-hint-3, purl = FALSE}
-# Use the node_eigenvector() function to calculate
+# Use the node_by_eigenvector() function to calculate
# the eigenvector centrality of nodes in a network
-node_eigenvector(ison_brandes)
+node_by_eigenvector(ison_brandes)
```
```{r micent-solution}
-node_betweenness(ison_brandes)
-node_closeness(ison_brandes)
-node_eigenvector(ison_brandes)
+node_by_betweenness(ison_brandes)
+node_by_closeness(ison_brandes)
+node_by_eigenvector(ison_brandes)
```
What is returned here are vectors of betweenness, closeness, and eigenvector
@@ -301,7 +301,7 @@ Try to answer the following questions for yourself:
- what does Bonacich mean when he says that `r gloss("power")` and influence are not the same thing?
- can you think of a real-world example when an actor might be central but not powerful, or powerful but not central?
-Note that all centrality measures in `{manynet}` return normalized
+Note that all centrality measures in `{netrics}` return normalized
scores by default --
for the raw scores, include `normalized = FALSE` in the function as an extra argument.
@@ -312,9 +312,9 @@ Now, can you create degree distributions for each of these?
```
```{r otherdist-solution}
-plot(node_betweenness(ison_brandes))
-plot(node_closeness(ison_brandes))
-plot(node_eigenvector(ison_brandes))
+plot(node_by_betweenness(ison_brandes))
+plot(node_by_closeness(ison_brandes))
+plot(node_by_eigenvector(ison_brandes))
```
## Plotting centrality
@@ -335,19 +335,19 @@ we can highlight which node or nodes hold the maximum score in red.
```{r ggid-solution}
# plot the network, highlighting the node with the highest centrality score with a different colour
ison_brandes %>%
- mutate_nodes(color = node_is_max(node_degree())) %>%
+ mutate_nodes(color = node_is_max(node_by_degree())) %>%
graphr(node_color = "color")
ison_brandes %>%
- mutate_nodes(color = node_is_max(node_betweenness())) %>%
+ mutate_nodes(color = node_is_max(node_by_betweenness())) %>%
graphr(node_color = "color")
ison_brandes %>%
- mutate_nodes(color = node_is_max(node_closeness())) %>%
+ mutate_nodes(color = node_is_max(node_by_closeness())) %>%
graphr(node_color = "color")
ison_brandes %>%
- mutate_nodes(color = node_is_max(node_eigenvector())) %>%
+ mutate_nodes(color = node_is_max(node_by_eigenvector())) %>%
graphr(node_color = "color")
```
@@ -361,19 +361,19 @@ What can you see?
```{r ggid_twomode-solution}
ison_brandes2 %>%
- add_node_attribute("color", node_is_max(node_degree(ison_brandes2))) %>%
+ add_node_attribute("color", node_is_max(node_by_degree(ison_brandes2))) %>%
graphr(node_color = "color", layout = "bipartite")
ison_brandes2 %>%
- add_node_attribute("color", node_is_max(node_betweenness(ison_brandes2))) %>%
+ add_node_attribute("color", node_is_max(node_by_betweenness(ison_brandes2))) %>%
graphr(node_color = "color", layout = "bipartite")
ison_brandes2 %>%
- add_node_attribute("color", node_is_max(node_closeness(ison_brandes2))) %>%
+ add_node_attribute("color", node_is_max(node_by_closeness(ison_brandes2))) %>%
graphr(node_color = "color", layout = "bipartite")
ison_brandes2 %>%
- add_node_attribute("color", node_is_max(node_eigenvector(ison_brandes2))) %>%
+ add_node_attribute("color", node_is_max(node_by_eigenvector(ison_brandes2))) %>%
graphr(node_color = "color", layout = "bipartite")
```
@@ -391,7 +391,7 @@ question("Select all that are true for the two-mode Brandes network.",
-`{manynet}` also implements network `r gloss("centralization")` functions.
+`{netrics}` also implements network `r gloss("centralization")` functions.
Here we are no longer interested in the level of the node,
but in the level of the whole network,
so the syntax replaces `node_` with `net_`:
@@ -401,10 +401,10 @@ so the syntax replaces `node_` with `net_`:
```
```{r centzn-solution}
-net_degree(ison_brandes)
-net_betweenness(ison_brandes)
-net_closeness(ison_brandes)
-print(net_eigenvector(ison_brandes), digits = 5)
+net_by_degree(ison_brandes)
+net_by_betweenness(ison_brandes)
+net_by_closeness(ison_brandes)
+print(net_by_eigenvector(ison_brandes), digits = 5)
```
By default, scores are printed up to 3 decimal places,
@@ -428,18 +428,18 @@ but fortunately the `{patchwork}` package is here to help.
```{r multiplot-solution}
ison_brandes <- ison_brandes %>%
- add_node_attribute("degree", node_is_max(node_degree(ison_brandes))) %>%
- add_node_attribute("betweenness", node_is_max(node_betweenness(ison_brandes))) %>%
- add_node_attribute("closeness", node_is_max(node_closeness(ison_brandes))) %>%
- add_node_attribute("eigenvector", node_is_max(node_eigenvector(ison_brandes)))
+ add_node_attribute("degree", node_is_max(node_by_degree(ison_brandes))) %>%
+ add_node_attribute("betweenness", node_is_max(node_by_betweenness(ison_brandes))) %>%
+ add_node_attribute("closeness", node_is_max(node_by_closeness(ison_brandes))) %>%
+ add_node_attribute("eigenvector", node_is_max(node_by_eigenvector(ison_brandes)))
gd <- graphr(ison_brandes, node_color = "degree") +
- ggtitle("Degree", subtitle = round(net_degree(ison_brandes), 2))
+ ggtitle("Degree", subtitle = round(net_by_degree(ison_brandes), 2))
gc <- graphr(ison_brandes, node_color = "closeness") +
- ggtitle("Closeness", subtitle = round(net_closeness(ison_brandes), 2))
+ ggtitle("Closeness", subtitle = round(net_by_closeness(ison_brandes), 2))
gb <- graphr(ison_brandes, node_color = "betweenness") +
- ggtitle("Betweenness", subtitle = round(net_betweenness(ison_brandes), 2))
+ ggtitle("Betweenness", subtitle = round(net_by_betweenness(ison_brandes), 2))
ge <- graphr(ison_brandes, node_color = "eigenvector") +
- ggtitle("Eigenvector", subtitle = round(net_eigenvector(ison_brandes), 2))
+ ggtitle("Eigenvector", subtitle = round(net_by_eigenvector(ison_brandes), 2))
(gd | gb) / (gc | ge)
# ggsave("brandes-centralities.pdf")
```
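The hunks above apply one mechanical rename: node-level measures move from `node_*()` to `node_by_*()`, and network-level centralization from `net_*()` to `net_by_*()`. A minimal sketch of the post-rename calls, assuming `{netrics}` 0.2.2 exports these functions and bundles the `ison_brandes` dataset as the tutorial implies:

```r
# Sketch only: assumes {netrics} >= 0.2.2 exports the renamed functions
# shown in the diff and ships the ison_brandes tutorial data.
library(netrics)

# Node-level measures: node_*() -> node_by_*()
node_by_degree(ison_brandes, normalized = FALSE)  # raw degree scores
node_by_betweenness(ison_brandes)                 # normalized by default

# Network-level centralization: net_*() -> net_by_*()
net_by_degree(ison_brandes)
```

Any downstream script still calling the old `node_degree()`-style names would break after this release, so the same mechanical substitution is needed there.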
diff --git a/inst/tutorials/tutorial3/centrality.html b/inst/tutorials/tutorial3/centrality.html
index 6ee3cc1..d3ab251 100644
--- a/inst/tutorials/tutorial3/centrality.html
+++ b/inst/tutorials/tutorial3/centrality.html
@@ -13,7 +13,7 @@
-
+
# Or by using a built in command in manynet like this:
-node_degree(ison_brandes, normalized = FALSE)
+node_by_degree(ison_brandes, normalized = FALSE)
# distribution of degree centrality scores of nodes
-plot(node_degree(ison_brandes))
+plot(node_by_degree(ison_brandes))
What’s plotted here by default is both the degree distribution as a histogram, as well as a density plot overlaid on it in red.
@@ -276,7 +276,7 @@
Other measures of centrality can be a little trickier to calculate by
-hand. Fortunately, we can use functions from {manynet} to
+hand. Fortunately, we can use functions from {netrics} to
help calculate the
betweenness ,
@@ -294,30 +294,30 @@
-# Use the node_betweenness() function to calculate the
+# Use the node_by_betweenness() function to calculate the
# betweenness centralities of nodes in a network
-node_betweenness(ison_brandes)
+node_by_betweenness(ison_brandes)
-# Use the node_closeness() function to calculate the
+# Use the node_by_closeness() function to calculate the
# closeness centrality of nodes in a network
-node_closeness(ison_brandes)
+node_by_closeness(ison_brandes)
-# Use the node_eigenvector() function to calculate
+# Use the node_by_eigenvector() function to calculate
# the eigenvector centrality of nodes in a network
-node_eigenvector(ison_brandes)
+node_by_eigenvector(ison_brandes)
-node_betweenness(ison_brandes)
-node_closeness(ison_brandes)
-node_eigenvector(ison_brandes)
+node_by_betweenness(ison_brandes)
+node_by_closeness(ison_brandes)
+node_by_eigenvector(ison_brandes)
What is returned here are vectors of betweenness, closeness, and eigenvector scores for the nodes in the network. But what do they
@@ -354,7 +354,7 @@
-Note that all centrality measures in {manynet} return
+Note that all centrality measures in {netrics} return
normalized scores by default – for the raw scores, include
normalized = FALSE in the function as an extra
argument.
-plot(node_betweenness(ison_brandes))
-plot(node_closeness(ison_brandes))
-plot(node_eigenvector(ison_brandes))
+plot(node_by_betweenness(ison_brandes))
+plot(node_by_closeness(ison_brandes))
+plot(node_by_eigenvector(ison_brandes))
# plot the network, highlighting the node with the highest centrality score with a different colour
ison_brandes %>%
- mutate_nodes(color = node_is_max(node_degree())) %>%
+ mutate_nodes(color = node_is_max(node_by_degree())) %>%
graphr(node_color = "color")
ison_brandes %>%
- mutate_nodes(color = node_is_max(node_betweenness())) %>%
+ mutate_nodes(color = node_is_max(node_by_betweenness())) %>%
graphr(node_color = "color")
ison_brandes %>%
- mutate_nodes(color = node_is_max(node_closeness())) %>%
+ mutate_nodes(color = node_is_max(node_by_closeness())) %>%
graphr(node_color = "color")
ison_brandes %>%
- mutate_nodes(color = node_is_max(node_eigenvector())) %>%
+ mutate_nodes(color = node_is_max(node_by_eigenvector())) %>%
graphr(node_color = "color")
How neat! Try it with the two-mode version. What can you see?
@@ -419,19 +419,19 @@ison_brandes2 %>%
- add_node_attribute("color", node_is_max(node_degree(ison_brandes2))) %>%
+ add_node_attribute("color", node_is_max(node_by_degree(ison_brandes2))) %>%
graphr(node_color = "color", layout = "bipartite")
ison_brandes2 %>%
- add_node_attribute("color", node_is_max(node_betweenness(ison_brandes2))) %>%
+ add_node_attribute("color", node_is_max(node_by_betweenness(ison_brandes2))) %>%
graphr(node_color = "color", layout = "bipartite")
ison_brandes2 %>%
- add_node_attribute("color", node_is_max(node_closeness(ison_brandes2))) %>%
+ add_node_attribute("color", node_is_max(node_by_closeness(ison_brandes2))) %>%
graphr(node_color = "color", layout = "bipartite")
ison_brandes2 %>%
- add_node_attribute("color", node_is_max(node_eigenvector(ison_brandes2))) %>%
+ add_node_attribute("color", node_is_max(node_by_eigenvector(ison_brandes2))) %>%
graphr(node_color = "color", layout = "bipartite")
-{manynet} also implements network
+{netrics} also implements network
centralization functions. Here we are no longer interested
in the level of the node, but in the level of the whole network, so the
@@ -459,10 +459,10 @@
-net_degree(ison_brandes)
-net_betweenness(ison_brandes)
-net_closeness(ison_brandes)
-print(net_eigenvector(ison_brandes), digits = 5)
+net_by_degree(ison_brandes)
+net_by_betweenness(ison_brandes)
+net_by_closeness(ison_brandes)
+print(net_by_eigenvector(ison_brandes), digits = 5)
By default, scores are printed up to 3 decimal places, but this can be modified and, in any case, the unrounded values are retained
@@ -484,18 +484,18 @@
ison_brandes <- ison_brandes %>%
- add_node_attribute("degree", node_is_max(node_degree(ison_brandes))) %>%
- add_node_attribute("betweenness", node_is_max(node_betweenness(ison_brandes))) %>%
- add_node_attribute("closeness", node_is_max(node_closeness(ison_brandes))) %>%
- add_node_attribute("eigenvector", node_is_max(node_eigenvector(ison_brandes)))
+ add_node_attribute("degree", node_is_max(node_by_degree(ison_brandes))) %>%
+ add_node_attribute("betweenness", node_is_max(node_by_betweenness(ison_brandes))) %>%
+ add_node_attribute("closeness", node_is_max(node_by_closeness(ison_brandes))) %>%
+ add_node_attribute("eigenvector", node_is_max(node_by_eigenvector(ison_brandes)))
gd <- graphr(ison_brandes, node_color = "degree") +
- ggtitle("Degree", subtitle = round(net_degree(ison_brandes), 2))
+ ggtitle("Degree", subtitle = round(net_by_degree(ison_brandes), 2))
gc <- graphr(ison_brandes, node_color = "closeness") +
- ggtitle("Closeness", subtitle = round(net_closeness(ison_brandes), 2))
+ ggtitle("Closeness", subtitle = round(net_by_closeness(ison_brandes), 2))
gb <- graphr(ison_brandes, node_color = "betweenness") +
- ggtitle("Betweenness", subtitle = round(net_betweenness(ison_brandes), 2))
+ ggtitle("Betweenness", subtitle = round(net_by_betweenness(ison_brandes), 2))
ge <- graphr(ison_brandes, node_color = "eigenvector") +
- ggtitle("Eigenvector", subtitle = round(net_eigenvector(ison_brandes), 2))
+ ggtitle("Eigenvector", subtitle = round(net_by_eigenvector(ison_brandes), 2))
(gd | gb) / (gc | ge)
# ggsave("brandes-centralities.pdf")
@@ -622,12 +647,30 @@
@@ -764,7 +764,7 @@ There is no one single best community detection algorithm.
Instead there are several, each with their strengths and weaknesses.
Since this is a rather small network, we'll focus on the following methods:
walktrap, edge betweenness, and fast greedy.
-(Others are included in `{manynet}`/`{igraph}`)
+(Others are included in `{netrics}`/`{igraph}`)
As you use them, consider how they portray communities and consider which one(s)
afford a sensible view of the social world as cohesively organized.
@@ -819,13 +819,13 @@ which(friend_wt == 2)
```{r walk-hint-3, purl = FALSE}
# resulting in a modularity of
-net_modularity(friends, friend_wt)
+net_by_modularity(friends, friend_wt)
```
```{r walk-solution}
friend_wt <- node_in_walktrap(friends, times=50)
# results in a modularity of
-net_modularity(friends, friend_wt)
+net_by_modularity(friends, friend_wt)
```
We can also visualise the clusters on the original network
@@ -857,9 +857,9 @@ graphr(friends,
node_color = "walk_comm",
node_group = "walk_comm") +
ggtitle("Walktrap",
- subtitle = round(net_modularity(friends, friend_wt), 3))
+ subtitle = round(net_by_modularity(friends, friend_wt), 3))
# the function `round()` rounds the values to a specified number of decimal places
-# here, we are telling it to round the net_modularity score to 3 decimal places,
+# here, we are telling it to round the net_by_modularity score to 3 decimal places,
# but the score is exactly 0.27 so only two decimal places are printed.
```
@@ -874,7 +874,7 @@ graphr(friends,
node_color = "walk_comm",
node_group = "walk_comm") +
ggtitle("Walktrap",
- subtitle = round(net_modularity(friends, friend_wt), 3))
+ subtitle = round(net_by_modularity(friends, friend_wt), 3))
```
This can be helpful when polygons overlap to better identify membership
@@ -924,7 +924,7 @@ graphr(friends,
node_color = "eb_comm",
node_group = "eb_comm") +
ggtitle("Edge-betweenness",
- subtitle = round(net_modularity(friends, friend_eb), 3))
+ subtitle = round(net_by_modularity(friends, friend_eb), 3))
```
```{r ebplot-solution}
@@ -934,7 +934,7 @@ graphr(friends,
node_color = "eb_comm",
node_group = "eb_comm") +
ggtitle("Edge-betweenness",
- subtitle = round(net_modularity(friends, friend_eb), 3))
+ subtitle = round(net_by_modularity(friends, friend_eb), 3))
```
For more on this algorithm, see M Newman and M Girvan: Finding and
@@ -959,7 +959,7 @@ although I personally find it both useful and in many cases quite "accurate".
```{r fg-hint-1, purl = FALSE}
friend_fg <- node_in_greedy(friends)
friend_fg # Does this result in a different community partition?
-net_modularity(friends, friend_fg) # Compare this to the edge betweenness procedure
+net_by_modularity(friends, friend_fg) # Compare this to the edge betweenness procedure
```
```{r fg-hint-2, purl = FALSE}
@@ -970,14 +970,14 @@ graphr(friends,
node_color = "fg_comm",
node_group = "fg_comm") +
ggtitle("Fast-greedy",
- subtitle = round(net_modularity(friends, friend_fg), 3))
+ subtitle = round(net_by_modularity(friends, friend_fg), 3))
#
```
```{r fg-solution}
friend_fg <- node_in_greedy(friends)
friend_fg # Does this result in a different community partition?
-net_modularity(friends, friend_fg) # Compare this to the edge betweenness procedure
+net_by_modularity(friends, friend_fg) # Compare this to the edge betweenness procedure
# Again, we can visualise these communities in different ways:
friends <- friends %>%
@@ -986,7 +986,7 @@ graphr(friends,
node_color = "fg_comm",
node_group = "fg_comm") +
ggtitle("Fast-greedy",
- subtitle = round(net_modularity(friends, friend_fg), 3))
+ subtitle = round(net_by_modularity(friends, friend_fg), 3))
```
See A Clauset, MEJ Newman, C Moore:
@@ -1005,7 +1005,7 @@ question("What is the difference between communities and components?",
### Communities
-Lastly, `{manynet}` includes a function to run through and find the membership
+Lastly, `{netrics}` includes a function to run through and find the membership
assignment that maximises `r gloss("modularity")` across any of the applicable community
detection procedures.
This is helpfully called `node_in_community()`.
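The same rename touches the modularity helper used throughout the community tutorial: `net_modularity()` becomes `net_by_modularity()`, while the detection functions themselves (`node_in_walktrap()`, `node_in_greedy()`, `node_in_community()`) keep their names. A hedged sketch, assuming a `friends` network object set up as in the tutorial's earlier chunks:

```r
# Sketch only: assumes a 'friends' network object as constructed in the
# tutorial's setup chunks, and the renamed {netrics} modularity function.
friend_wt <- node_in_walktrap(friends, times = 50)  # membership vector (name unchanged)
net_by_modularity(friends, friend_wt)               # was net_modularity()
```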
diff --git a/inst/tutorials/tutorial4/community.html b/inst/tutorials/tutorial4/community.html
index fda2a0e..37264f3 100644
--- a/inst/tutorials/tutorial4/community.html
+++ b/inst/tutorials/tutorial4/community.html
@@ -13,7 +13,7 @@
-
+
# calculating network density manually according to equation
net_ties(tasks)/(net_nodes(tasks)*(net_nodes(tasks)-1))
-but we can also just use the {manynet} function for
+but we can also just use the {netrics} function for
calculating the density, which always uses the equation appropriate for
the type of network…
-net_density(tasks)
+net_by_density(tasks)
-Note that the various measures in {manynet} print
+Note that the various measures in {netrics} print
results to three decimal points by default, but the underlying result
retains the same recurrence. So same result…
First, let’s calculate
reciprocity in the task network. While one could do this
-by hand, it’s more efficient to do this using the {manynet}
+by hand, it’s more efficient to do this using the {netrics}
package. Can you guess the correct name of the function?
-net_reciprocity(tasks)
+net_by_reciprocity(tasks)
# this function calculates the amount of reciprocity in the whole network
Wow, this seems quite high based on what we observed visually! But if
@@ -366,7 +366,7 @@
tasks %>% mutate_ties(rec = tie_is_reciprocated(tasks)) %>% graphr(edge_color = "rec")
-net_indegree(tasks)
+net_by_indegree(tasks)
So we can see that indeed there are very few asymmetric ties, and yet
@@ -388,7 +388,7 @@
-net_transitivity(tasks)
+net_by_transitivity(tasks)
# this function calculates the amount of transitivity in the whole network
Let’s return to the question of closure. First try one of the closure
measures we have already treated that gives us a sense of shared
partners for one-mode networks. Then compare this with
-net_equivalency(), which can be used on the original
+net_by_equivalency(), which can be used on the original
two-mode network.
-# net_transitivity(): Calculate transitivity in a network
+# net_by_transitivity(): Calculate transitivity in a network
-net_transitivity(women_graph)
-net_transitivity(event_graph)
+net_by_transitivity(women_graph)
+net_by_transitivity(event_graph)
-# net_equivalency(): Calculate equivalence or reinforcement in a (usually two-mode) network
+# net_by_equivalency(): Calculate equivalence or reinforcement in a (usually two-mode) network
-net_equivalency(ison_southern_women)
+net_by_equivalency(ison_southern_women)
-net_transitivity(women_graph)
-net_transitivity(event_graph)
-net_equivalency(ison_southern_women)
+net_by_transitivity(women_graph)
+net_by_transitivity(event_graph)
+net_by_equivalency(ison_southern_women)
-net_components() function will return the number of
+net_by_components() function will return the number of
strong components for directed networks. For weak
components, you will need to first make the network
@@ -595,7 +595,7 @@
-net_components(friends)
+net_by_components(friends)
# note that friends is a directed network
# you can see this by calling the object 'friends'
# or by running `is_directed(friends)`
@@ -607,14 +607,14 @@ Components
# Note: to_undirected() returns an object with all tie direction removed,
# so any pair of nodes with at least one directed edge
# will be connected by an undirected edge in the new network.
-net_components(to_undirected(friends))
+net_by_components(to_undirected(friends))
# note that friends is a directed network
-net_components(friends)
-net_components(to_undirected(friends))
+net_by_components(friends)
+net_by_components(to_undirected(friends))
So we know how many components there are, but maybe we’re also
interested in which nodes are members of which components?
-node_components() returns a membership vector that can be
+node_in_component() returns a membership vector that can be
used to color nodes in graphr():
friends <- friends %>%
- mutate(weak_comp = node_components(to_undirected(friends)),
- strong_comp = node_components(friends))
-# node_components returns a vector of nodes' memberships to components in the network
+ mutate(weak_comp = node_in_component(to_undirected(friends)),
+ strong_comp = node_in_component(friends))
+# node_in_component returns a vector of nodes' memberships to components in the network
# here, we are adding the nodes' membership to components as an attribute in the network
# alternatively, we can also use the function `add_node_attribute()`
-# eg. `add_node_attribute(friends, "weak_comp", node_components(to_undirected(friends)))`
+# eg. `add_node_attribute(friends, "weak_comp", node_in_component(to_undirected(friends)))`
friends <- friends %>%
- mutate(weak_comp = node_components(to_undirected(friends)),
- strong_comp = node_components(friends))
+ mutate(weak_comp = node_in_component(to_undirected(friends)),
+ strong_comp = node_in_component(friends))
graphr(friends, node_color = "weak_comp") + ggtitle("Weak components") +
graphr(friends, node_color = "strong_comp") + ggtitle("Strong components")
-net_components(blogs)
-net_components(to_undirected(blogs))
+node_in_component(blogs)
+node_in_component(to_undirected(blogs))
-net_modularity(blogs, membership = node_in_partition(blogs))
+net_by_modularity(blogs, membership = node_in_partition(blogs))
Remember that modularity ranges between 1 and -1. How can we
@@ -783,7 +783,7 @@
graphr(blogs, node_color = "Leaning")
-net_modularity(blogs, membership = node_attribute(blogs, "Leaning"))
+net_by_modularity(blogs, membership = node_attribute(blogs, "Leaning"))
-{manynet}/{igraph}) As you use them, consider
+{netrics}/{igraph}) As you use them, consider
how they portray communities and consider which one(s) afford a sensible
view of the social world as cohesively organized.
# resulting in a modularity of
-net_modularity(friends, friend_wt)
+net_by_modularity(friends, friend_wt)
friend_wt <- node_in_walktrap(friends, times=50)
# results in a modularity of
-net_modularity(friends, friend_wt)
+net_by_modularity(friends, friend_wt)
We can also visualise the clusters on the original network
How does the following look? Plausible?
@@ -920,9 +920,9 @@
This can be helpful when polygons overlap to better identify membership
Or you can use node color and size to indicate other
@@ -996,7 +996,7 @@
For more on this algorithm, see M Newman and M Girvan: Finding and evaluating community structure in networks, Physical Review E 69, 026113
@@ -1037,7 +1037,7 @@
friend_fg <- node_in_greedy(friends)
friend_fg # Does this result in a different community partition?
-net_modularity(friends, friend_fg) # Compare this to the edge betweenness procedure
+net_by_modularity(friends, friend_fg) # Compare this to the edge betweenness procedure
friend_fg <- node_in_greedy(friends)
friend_fg # Does this result in a different community partition?
-net_modularity(friends, friend_fg) # Compare this to the edge betweenness procedure
+net_by_modularity(friends, friend_fg) # Compare this to the edge betweenness procedure
# Again, we can visualise these communities in different ways:
friends <- friends %>%
@@ -1066,7 +1066,7 @@ Fast Greedy
node_color = "fg_comm",
node_group = "fg_comm") +
ggtitle("Fast-greedy",
- subtitle = round(net_modularity(friends, friend_fg), 3))
+ subtitle = round(net_by_modularity(friends, friend_fg), 3))
See A Clauset, MEJ Newman, C Moore: Finding community structure in very large networks
Fast Greedy
-Lastly, {manynet} includes a function to run through and
+Lastly, {netrics} includes a function to run through and
find the membership assignment that maximises
modularity across any of the applicable community
@@ -1461,15 +1461,15 @@
-For this session, we're going to use the "ison_algebra" dataset included in the `{manynet}` package.
+For this session, we're going to use the "ison_algebra" dataset included in the `{netrics}` package.
Do you remember how to call the data?
Can you find out some more information about it via its help file?
@@ -204,7 +204,7 @@ there are no bridges.
```{r bridges, exercise = TRUE, exercise.setup = "objects-setup"}
sum(tie_is_bridge(friends))
-any(node_bridges(friends)>0)
+any(node_by_bridges(friends)>0)
```
### Constraint
@@ -212,7 +212,7 @@ any(node_bridges(friends)>0)
But some nodes do seem more deeply embedded in the network than others.
Let's take a look at which actors are least `r gloss("constrained", "constraint")`
by their position in the *task* network.
-`{manynet}` makes this easy enough with the `node_constraint()` function.
+`{netrics}` makes this easy enough with the `node_by_constraint()` function.
```{r objects-setup, purl=FALSE}
alge <- to_named(ison_algebra)
@@ -226,12 +226,12 @@ tasks <- to_uniplex(alge, "tasks")
```
```{r constraint-hint, purl = FALSE}
-node_constraint(____)
+node_by_constraint(____)
# Don't forget we want to look at which actors are least constrained by their position in the 'tasks' network
```
```{r constraint-solution}
-node_constraint(tasks)
+node_by_constraint(tasks)
```
This function returns a vector of constraint scores that can range between 0 and 1.
@@ -244,8 +244,8 @@ We can also identify the node with the minimum constraint score using `node_is_m
```{r constraintplot-hint-1, purl = FALSE}
tasks <- tasks %>%
- mutate(constraint = node_constraint(____),
- low_constraint = node_is_min(node_constraint(____)))
+ mutate(constraint = node_by_constraint(____),
+ low_constraint = node_is_min(node_by_constraint(____)))
# Don't forget, we are still looking at the 'tasks' network
```
@@ -264,8 +264,8 @@ graphr(tasks, node_size = "constraint", node_color = "low_constraint")
```{r constraintplot-solution}
tasks <- tasks %>%
- mutate(constraint = node_constraint(tasks),
- low_constraint = node_is_min(node_constraint(tasks)))
+ mutate(constraint = node_by_constraint(tasks),
+ low_constraint = node_is_min(node_by_constraint(tasks)))
graphr(tasks, node_size = "constraint", node_color = "low_constraint")
```
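Constraint follows the same pattern, `node_constraint()` becoming `node_by_constraint()`. A sketch of the whole pipeline from the solution chunk, assuming the `tasks` uniplex network built in the `objects-setup` chunk and the tidyverse-style verbs the tutorial already uses:

```r
# Sketch only: assumes 'tasks' is the uniplex task network from
# objects-setup, and mutate()/graphr() behave as in the tutorial.
tasks %>%
  mutate(constraint = node_by_constraint(tasks),                    # was node_constraint()
         low_constraint = node_is_min(node_by_constraint(tasks))) %>%
  graphr(node_size = "constraint", node_color = "low_constraint")
```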
@@ -318,7 +318,7 @@ a uniplex subgraph thereof.
### Finding structurally equivalent classes
-In `{manynet}`, finding how the nodes of a network can be partitioned
+In `{netrics}`, finding how the nodes of a network can be partitioned
into structurally equivalent classes can be as easy as:
```{r find-se, exercise = TRUE, exercise.setup = "data"}
@@ -337,8 +337,8 @@ how these classes are identified and how to interpret them.
### Step one: starting with a census
All equivalence classes are based on nodes' similarity across some profile of motifs.
-In `{manynet}`, we call these motif *censuses*.
-Any kind of census can be used, and `{manynet}` includes a few options,
+In `{netrics}`, we call these motif *censuses*.
+Any kind of census can be used, and `{netrics}` includes a few options,
but `node_in_structural()` is based off of the census of all the nodes' ties,
both outgoing and incoming ties, to characterise their relationships to tie partners.
@@ -347,24 +347,24 @@ both outgoing and incoming ties, to characterise their relationships to tie part
```
```{r construct-cor-hint-1, purl = FALSE}
-# Let's use the node_by_tie() function
+# Let's use the node_x_tie() function
# The function accepts an object such as a dataset
# Hint: Which dataset are we using in this tutorial?
-node_by_tie(____)
+node_x_tie(____)
```
```{r construct-cor-hint-2, purl = FALSE}
-node_by_tie(ison_algebra)
+node_x_tie(ison_algebra)
```
```{r construct-cor-hint-3, purl = FALSE}
# Now, let's get the dimensions of an object via the dim() function
-dim(node_by_tie(ison_algebra))
+dim(node_x_tie(ison_algebra))
```
```{r construct-cor-solution}
-node_by_tie(ison_algebra)
-dim(node_by_tie(ison_algebra))
+node_x_tie(ison_algebra)
+dim(node_x_tie(ison_algebra))
```
We can see that the result is a matrix of 16 rows and 96 columns,
@@ -379,7 +379,7 @@ what would you do if you wanted it to be binary?
```{r construct-binary-hint, purl = FALSE}
# we could convert the result using as.matrix, returning the ties
-as.matrix((node_by_tie(ison_algebra)>0)+0)
+as.matrix((node_x_tie(ison_algebra)>0)+0)
```
@@ -388,17 +388,17 @@ as.matrix((node_by_tie(ison_algebra)>0)+0)
# Note that this also reduces the total number of possible paths between nodes
ison_algebra %>%
select_ties(-type) %>%
- node_by_tie()
+ node_x_tie()
```
-Note that `node_by_tie()` does not need to be passed to `node_in_structural()` ---
+Note that `node_x_tie()` does not need to be passed to `node_in_structural()` ---
this is done automatically!
However, the more generic `node_in_equivalence()` is available and can be used
-with whichever census (`node_by_*()` output) is desired.
-Feel free to explore using some of the other censuses available in `{manynet}`,
+with whichever census (`node_x_*()` output) is desired.
+Feel free to explore using some of the other censuses available in `{netrics}`,
though some common ones are already used in the other equivalence convenience functions,
-e.g. `node_by_triad()` in `node_in_regular()`
-and `node_by_path()` in `node_in_automorphic()`.
+e.g. `node_x_triad()` in `node_in_regular()`
+and `node_x_path()` in `node_in_automorphic()`.
### Step two: growing a tree of similarity
@@ -413,7 +413,7 @@ so that help page should be consulted for more details.
By default `"euclidean"` is used.
Second, we can also set the type of clustering algorithm employed.
-By default, `{manynet}`'s equivalence functions use `r gloss("hierarchical clustering","hierclust")`, `"hier"`,
+By default, `{netrics}`'s equivalence functions use `r gloss("hierarchical clustering","hierclust")`, `"hier"`,
but for compatibility and enthusiasts, we also offer `"concor"`,
which implements a CONCOR (CONvergence of CORrelations) algorithm.
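The two steps described above can be sketched with base R alone; the 16-node profile matrix here is simulated, standing in for a real census:

```r
# Base-R sketch of the distance + clustering pipeline (all data made up)
set.seed(123)
profiles <- matrix(rnorm(16 * 6), nrow = 16,
                   dimnames = list(LETTERS[1:16], NULL))

d  <- dist(profiles, method = "euclidean")  # the default distance measure
hc <- hclust(d)                             # hierarchical clustering
plot(hc)  # a dendrogram comparable to the ones plotted in this tutorial
```

Swapping `method = "manhattan"` (or any other `dist()` method) into the first step is all it takes to change the distance measure, whereas CONCOR would replace the `hclust()` step entirely.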
@@ -438,7 +438,7 @@ question("Do you see any differences?",
allow_retry = TRUE)
```
-So plotting a `membership` vector from `{manynet}` returns a dendrogram
+So plotting a `membership` vector from `{netrics}` returns a dendrogram
with the names of the nodes on the _y_-axis and the distance between them on the _x_-axis.
Using the census as material, the distances between the nodes
are used to create a dendrogram of (dis)similarity among the nodes.
@@ -469,7 +469,7 @@ But where does this red line come from?
Or, more technically, how do we identify the number of clusters
into which to assign nodes?
-`{manynet}` includes several different ways of establishing `k`,
+`{netrics}` includes several different ways of establishing `k`,
or the number of clusters.
Remember, the further to the right the red line is
(the lower on the tree the cut point is)
@@ -507,7 +507,7 @@ then we might expect there to be a relatively rapid increase
in correlation as we move from, for example, 3 clusters to 4 clusters,
but a relatively small increase from, for example, 13 clusters to 14 clusters.
By identifying the inflection point in this line graph,
-`{manynet}` selects a number of clusters that represents a trade-off
+`{netrics}` selects a number of clusters that represents a trade-off
between fit and parsimony.
This is the `k = "elbow"` method.
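The elbow logic can be made concrete with a crude stand-in (not the package's actual implementation): compute a fit statistic for each candidate `k` and look for where the curve flattens. The profiles below are simulated with a "true" `k` of 2:

```r
# Within-cluster sum of squares for each candidate k (toy data)
set.seed(42)
profiles <- rbind(matrix(rnorm(24, mean = 0), ncol = 3),
                  matrix(rnorm(24, mean = 5), ncol = 3))
hc <- hclust(dist(profiles))

wss <- sapply(1:8, function(k) {
  cl <- cutree(hc, k)
  sum(sapply(split(seq_len(nrow(profiles)), cl), function(idx) {
    centred <- scale(profiles[idx, , drop = FALSE], scale = FALSE)
    sum(centred^2)  # squared deviations from the cluster's own centroid
  }))
})
wss  # falls steeply from k = 1 to k = 2, then flattens: the elbow is at 2
```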
@@ -535,12 +535,12 @@ Either is probably fine here,
and there is much debate around how to select the number of clusters anyway.
However, the silhouette method seems to do a better job of identifying
how unique the 16th node is.
-The silhouette method is also the default in `{manynet}`.
+The silhouette method is also the default in `{netrics}`.
Note that there is a somewhat hidden parameter here, `range`.
Since testing across all possible numbers of clusters can get
computationally expensive (not to mention uninterpretable) for large networks,
-`{manynet}` only considers up to 8 clusters by default.
+`{netrics}` only considers up to 8 clusters by default.
This however can be modified to be higher or lower, e.g. `range = 16`.
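Silhouette-based selection over a default-style range of up to 8 clusters can be sketched with the recommended `{cluster}` package, again on simulated toy data:

```r
# Average silhouette width for each candidate k in 2:8 (toy data)
library(cluster)
set.seed(42)
profiles <- rbind(matrix(rnorm(24, mean = 0), ncol = 3),
                  matrix(rnorm(24, mean = 5), ncol = 3))
d  <- dist(profiles)
hc <- hclust(d)

avg_sil <- sapply(2:8, function(k)
  mean(silhouette(cutree(hc, k), d)[, "sil_width"]))
best_k <- (2:8)[which.max(avg_sil)]
best_k  # the cleanly separated toy data favours k = 2
```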
Finally, one last option is `k = "strict"`,
@@ -597,14 +597,14 @@ but this can be tweaked by assigning some other summary statistic as `FUN = `.
```
```{r summ-hint, purl = FALSE}
-# Let's wrap node_by_tie inside the summary() function
+# Let's wrap node_x_tie inside the summary() function
# and pass it a membership result
-summary(node_by_tie(____),
+summary(node_x_tie(____),
membership = ____)
```
```{r summ-solution}
-summary(node_by_tie(alge),
+summary(node_x_tie(alge),
membership = node_in_structural(alge))
```
diff --git a/inst/tutorials/tutorial5/position.html b/inst/tutorials/tutorial5/position.html
index 54a6232..8426eef 100644
--- a/inst/tutorials/tutorial5/position.html
+++ b/inst/tutorials/tutorial5/position.html
@@ -13,7 +13,7 @@
-
+

For this session, we’re going to use the “ison_algebra” dataset
-included in the {manynet} package. Do you remember how to
+included in the {netrics} package. Do you remember how to
call the data? Can you find out some more information about it via its
help file?
sum(tie_is_bridge(friends))
-any(node_bridges(friends)>0)
+any(node_by_bridges(friends)>0)
-network. {manynet} makes this easy enough with the
-node_constraint() function.
+network. {netrics} makes this easy enough with the
+node_by_constraint() function.
-node_constraint(____)
+node_by_constraint(____)
# Don't forget we want to look at which actors are least constrained by their position in the 'tasks' network
-node_constraint(tasks)
+node_by_constraint(tasks)
This function returns a vector of constraint scores that can range between 0 and 1. Let's graph the network again, sizing the nodes
@@ -286,8 +286,8 @@
tasks <- tasks %>%
- mutate(constraint = node_constraint(____),
- low_constraint = node_is_min(node_constraint(____)))
+ mutate(constraint = node_by_constraint(____),
+ low_constraint = node_is_min(node_by_constraint(____)))
# Don't forget, we are still looking at the 'tasks' network
tasks <- tasks %>%
- mutate(constraint = node_constraint(tasks),
- low_constraint = node_is_min(node_constraint(tasks)))
+ mutate(constraint = node_by_constraint(tasks),
+ low_constraint = node_is_min(node_by_constraint(tasks)))
graphr(tasks, node_size = "constraint", node_color = "low_constraint")
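Assuming `node_by_constraint()` implements Burt's (1992) constraint measure, the same quantity can be cross-checked with `{igraph}`'s `constraint()`; the four-node graph below is hypothetical, not the tasks network:

```r
# Burt's constraint via {igraph} on a toy star-with-a-chord graph
library(igraph)
g <- make_graph(~ A - B, A - C, A - D, B - C)
cons <- constraint(g)
round(cons, 2)
# A reaches three alters, only two of whom know each other,
# so A should come out as the least constrained node
names(cons)[which.min(cons)]
```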
Why minimum? Because constraint measures how well connected each
@@ -355,7 +355,7 @@
-In {manynet}, finding how the nodes of a network can be
+In {netrics}, finding how the nodes of a network can be
partitioned into structurally equivalent classes can be as easy as:
All equivalence classes are based on nodes’ similarity across some
-profile of motifs. In {manynet}, we call these motif
+profile of motifs. In {netrics}, we call these motif
censuses. Any kind of census can be used, and
-{manynet} includes a few options, but
+{netrics} includes a few options, but
node_in_structural() is based off of the census of all the
nodes’ ties, both outgoing and incoming ties, to characterise their
relationships to tie partners.
-# Let's use the node_by_tie() function
+# Let's use the node_x_tie() function
# The function accepts an object such as a dataset
# Hint: Which dataset are we using in this tutorial?
-node_by_tie(____)
+node_x_tie(____)
-node_by_tie(ison_algebra)
+node_x_tie(ison_algebra)
# Now, let's get the dimensions of an object via the dim() function
-dim(node_by_tie(ison_algebra))
+dim(node_x_tie(ison_algebra))
-node_by_tie(ison_algebra)
-dim(node_by_tie(ison_algebra))
+node_x_tie(ison_algebra)
+dim(node_x_tie(ison_algebra))
We can see that the result is a matrix of 16 rows and 96 columns, because we want to catalogue or take a census of all the different
@@ -428,7 +428,7 @@
# we could convert the result using as.matrix, returning the ties
-as.matrix((node_by_tie(ison_algebra)>0)+0)
+as.matrix((node_x_tie(ison_algebra)>0)+0)
-Note that node_by_tie() does not need to be passed to
+Note that node_x_tie() does not need to be passed to
node_in_structural() — this is done automatically! However,
the more generic node_in_equivalence() is available and can
-be used with whichever census (node_by_*() output) is
+be used with whichever census (node_x_*() output) is
desired. Feel free to explore using some of the other censuses available
-in {manynet}, though some common ones are already used in
+in {netrics}, though some common ones are already used in
the other equivalence convenience functions,
-e.g. node_by_triad() in node_in_regular() and
-node_by_path() in node_in_automorphic().
+e.g. node_x_triad() in node_in_regular() and
+node_x_path() in node_in_automorphic().
"euclidean" is used.
Second, we can also set the type of clustering algorithm employed. By
-default, {manynet}’s equivalence functions use
+default, {netrics}’s equivalence functions use
hierarchical clustering , "hier", but for
compatibility and enthusiasts, we also offer "concor",
@@ -494,7 +494,7 @@
So plotting a membership vector from
-{manynet} returns a dendrogram with the names of the nodes
+{netrics} returns a dendrogram with the names of the nodes
on the y-axis and the distance between them on the
x-axis. Using the census as material, the distances between the
nodes are used to create a dendrogram of (dis)similarity among the nodes.
@@ -522,7 +522,7 @@
But where does this red line come from? Or, more technically, how do we identify the number of clusters into which to assign nodes?
-{manynet} includes several different ways of
+{netrics} includes several different ways of
establishing k, or the number of clusters. Remember, the
further to the right the red line is (the lower on the tree the cut
point is) the more dissimilar we’re allowing nodes in the same cluster
@@ -556,7 +556,7 @@
-point in this line graph, {manynet} selects a number of
+point in this line graph, {netrics} selects a number of
clusters that represents a trade-off between fit and parsimony. This is
the k = "elbow" method.
The other option is to evaluate a candidate for k based
@@ -582,11 +582,11 @@
-{manynet}.
+{netrics}.
Note that there is a somewhat hidden parameter here,
range. Since testing across all possible numbers of
clusters can get computationally expensive (not to mention
-uninterpretable) for large networks, {manynet} only
+uninterpretable) for large networks, {netrics} only
considers up to 8 clusters by default. This however can be modified to
be higher or lower, e.g. range = 16.
Finally, one last option is k = "strict", which only
@@ -645,15 +645,15 @@
-# Let's wrap node_by_tie inside the summary() function
+# Let's wrap node_x_tie inside the summary() function
# and pass it a membership result
-summary(node_by_tie(____),
+summary(node_x_tie(____),
membership = ____)
-summary(node_by_tie(alge),
+summary(node_x_tie(alge),
membership = node_in_structural(alge))
This node census produces 96 columns, \(16
@@ -1029,22 +1029,22 @@ Glossary
@@ -1062,29 +1062,29 @@ Glossary
@@ -1168,7 +1168,7 @@ Glossary
setup = "alge <- to_named(ison_algebra)\nfriends <- to_uniplex(alge, \"friends\")\nsocial <- to_uniplex(alge, \"social\")\ntasks <- to_uniplex(alge, \"tasks\")",
chunks = list(list(label = "objects-setup", code = "alge <- to_named(ison_algebra)\nfriends <- to_uniplex(alge, \"friends\")\nsocial <- to_uniplex(alge, \"social\")\ntasks <- to_uniplex(alge, \"tasks\")",
opts = list(label = "\"objects-setup\"", purl = "FALSE"),
- engine = "r"), list(label = "bridges", code = "sum(tie_is_bridge(friends))\nany(node_bridges(friends)>0)",
+ engine = "r"), list(label = "bridges", code = "sum(tie_is_bridge(friends))\nany(node_by_bridges(friends)>0)",
opts = list(label = "\"bridges\"", exercise = "TRUE",
exercise.setup = "\"objects-setup\""), engine = "r")),
code_check = NULL, error_check = NULL, check = NULL, solution = NULL,
@@ -1190,7 +1190,7 @@ Glossary
engine = "r", split = FALSE, include = TRUE, purl = TRUE,
max.print = 1000, label = "bridges", exercise = TRUE,
exercise.setup = "objects-setup", code = c("sum(tie_is_bridge(friends))",
- "any(node_bridges(friends)>0)"), out.width.px = 624,
+ "any(node_by_bridges(friends)>0)"), out.width.px = 624,
out.height.px = 384, params.src = "bridges, exercise = TRUE, exercise.setup = \"objects-setup\"",
fig.num = 0, exercise.df_print = "paged", exercise.checker = "NULL"),
engine = "r", version = "4"), class = c("r", "tutorial_exercise"
@@ -1236,7 +1236,7 @@ Glossary
opts = list(label = "\"constraint\"", exercise = "TRUE",
exercise.setup = "\"objects-setup\"", purl = "FALSE"),
engine = "r")), code_check = NULL, error_check = NULL,
- check = NULL, solution = structure("node_constraint(tasks)", chunk_opts = list(
+ check = NULL, solution = structure("node_by_constraint(tasks)", chunk_opts = list(
label = "constraint-solution")), tests = NULL, options = list(
eval = FALSE, echo = TRUE, results = "markup", tidy = FALSE,
tidy.opts = NULL, collapse = FALSE, prompt = FALSE, comment = NA,
@@ -1301,7 +1301,7 @@ Glossary
exercise.setup = "\"objects-setup\"", purl = "FALSE"),
engine = "r")), code_check = NULL, error_check = NULL,
check = NULL, solution = structure(c("tasks <- tasks %>% ",
- " mutate(constraint = node_constraint(tasks), ", " low_constraint = node_is_min(node_constraint(tasks)))",
+ " mutate(constraint = node_by_constraint(tasks), ", " low_constraint = node_is_min(node_by_constraint(tasks)))",
"graphr(tasks, node_size = \"constraint\", node_color = \"low_constraint\")"
), chunk_opts = list(label = "constraintplot-solution")),
tests = NULL, options = list(eval = FALSE, echo = TRUE, results = "markup",
@@ -1332,11 +1332,11 @@ Glossary
@@ -1482,8 +1482,8 @@ Glossary
engine = "r"), list(label = "construct-cor", code = "",
opts = list(label = "\"construct-cor\"", exercise = "TRUE",
exercise.setup = "\"data\"", purl = "FALSE"), engine = "r")),
- code_check = NULL, error_check = NULL, check = NULL, solution = structure(c("node_by_tie(ison_algebra)",
- "dim(node_by_tie(ison_algebra))"), chunk_opts = list(label = "construct-cor-solution")),
+ code_check = NULL, error_check = NULL, check = NULL, solution = structure(c("node_x_tie(ison_algebra)",
+ "dim(node_x_tie(ison_algebra))"), chunk_opts = list(label = "construct-cor-solution")),
tests = NULL, options = list(eval = FALSE, echo = TRUE, results = "markup",
tidy = FALSE, tidy.opts = NULL, collapse = FALSE, prompt = FALSE,
comment = NA, highlight = FALSE, size = "normalsize",
@@ -1547,7 +1547,7 @@ Glossary
exercise.setup = "\"data\"", purl = "FALSE"), engine = "r")),
code_check = NULL, error_check = NULL, check = NULL, solution = structure(c("# But it's easier to simplify the network by removing the classification into different types of ties.",
"# Note that this also reduces the total number of possible paths between nodes",
- "ison_algebra %>%", " select_ties(-type) %>%", " node_by_tie()"
+ "ison_algebra %>%", " select_ties(-type) %>%", " node_x_tie()"
), chunk_opts = list(label = "construct-binary-solution")),
tests = NULL, options = list(eval = FALSE, echo = TRUE, results = "markup",
tidy = FALSE, tidy.opts = NULL, collapse = FALSE, prompt = FALSE,
@@ -1642,11 +1642,11 @@ Glossary
@@ -1971,7 +1971,7 @@ Glossary
list(label = "summ", code = "", opts = list(label = "\"summ\"",
exercise = "TRUE", exercise.setup = "\"strplot\"",
purl = "FALSE"), engine = "r")), code_check = NULL,
- error_check = NULL, check = NULL, solution = structure(c("summary(node_by_tie(alge),",
+ error_check = NULL, check = NULL, solution = structure(c("summary(node_x_tie(alge),",
" membership = node_in_structural(alge))"), chunk_opts = list(
label = "summ-solution")), tests = NULL, options = list(
eval = FALSE, echo = TRUE, results = "markup", tidy = FALSE,
@@ -2198,12 +2198,12 @@ Glossary
-net_smallworld(generate_smallworld(50, 0.25))
+net_by_smallworld(generate_smallworld(50, 0.25))
-net_scalefree(generate_scalefree(50, 2))
+net_by_scalefree(generate_scalefree(50, 2))
lawfirm %>%
- mutate(ncn = node_kcoreness()) %>%
+ mutate(ncn = node_by_kcoreness()) %>%
graphr(node_color = "ncn")
@@ -639,8 +639,8 @@ treeleven <- create_tree(11, directed = TRUE)
-net_by_hierarchy(treeleven)
-rowMeans(net_by_hierarchy(treeleven))
+net_x_hierarchy(treeleven)
+rowMeans(net_x_hierarchy(treeleven))
We see here four different measures of hierarchy:
@@ -675,9 +675,9 @@ graphr(ison_emotions)
-net_by_hierarchy(ison_emotions)
+net_x_hierarchy(ison_emotions)
graphr(fict_thrones)
-net_by_hierarchy(fict_thrones)
+net_x_hierarchy(fict_thrones)
Actually, these two networks have the same average hierarchy score of around 0.525. But they have quite different profiles. Can you make sense
@@ -711,7 +711,7 @@
-net_connectedness(ison_adolescents)
+net_by_connectedness(ison_adolescents)
This measure gets at the proportion of dyads that can reach each other in the network. In this case, the proportion is 1, i.e. all nodes
@@ -745,7 +745,7 @@
-net_cohesion(ison_adolescents)
+net_by_cohesion(ison_adolescents)
-net_adhesion(ison_adolescents)
+net_by_adhesion(ison_adolescents)
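Cohesion and adhesion are classic connectivity measures: the minimum number of nodes (cohesion) or ties (adhesion) whose removal would disconnect the network. Assuming that is what these functions report, `{igraph}`'s connectivity functions illustrate the idea on a toy six-node ring:

```r
# Minimum node and tie cuts on a toy ring (illustrative cross-check,
# not the tutorial's own functions)
library(igraph)
ring <- make_ring(6)
vertex_connectivity(ring)  # cohesion: nodes to remove to disconnect -> 2
edge_connectivity(ring)    # adhesion: ties to remove to disconnect -> 2
```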
ison_adolescents |> mutate_ties(cut = tie_is_bridge(ison_adolescents)) |>
graphr(edge_color = "cut")
-ison_adolescents |> mutate_ties(coh = tie_cohesion(ison_adolescents)) |>
+ison_adolescents |> mutate_ties(coh = tie_by_cohesion(ison_adolescents)) |>
graphr(edge_size = "coh")
Where would you target your efforts if you wanted to fragment this
@@ -944,8 +944,6 @@
@@ -1226,8 +1221,7 @@