diff --git a/DESCRIPTION b/DESCRIPTION index 5c8076c..f36cf78 100644 --- a/DESCRIPTION +++ b/DESCRIPTION @@ -1,7 +1,7 @@ Package: netrics Title: Many Ways to Measure and Classify Membership for Networks, Nodes, and Ties -Version: 0.2.1 -Date: 2026-04-04 +Version: 0.2.2 +Date: 2026-04-24 Description: Many tools for calculating network, node, or tie marks, measures, motifs and memberships of many different types of networks. Marks identify structural positions, measures quantify network properties, diff --git a/NEWS.md b/NEWS.md index 97389bc..00a7dc4 100644 --- a/NEWS.md +++ b/NEWS.md @@ -1,3 +1,16 @@ +# netrics 0.2.2 + +## Package + +- Updated logos + +## Tutorials + +- Updated centrality tutorial +- Updated community tutorial +- Updated position tutorial +- Updated topology tutorial + # netrics 0.2.1 ## Package diff --git a/inst/netrics.png b/inst/netrics.png new file mode 100644 index 0000000..0c2ba6c Binary files /dev/null and b/inst/netrics.png differ diff --git a/inst/tutorials/tutorial3/centrality.Rmd b/inst/tutorials/tutorial3/centrality.Rmd index 3af1ad4..215becf 100644 --- a/inst/tutorials/tutorial3/centrality.Rmd +++ b/inst/tutorials/tutorial3/centrality.Rmd @@ -134,7 +134,7 @@ rowSums(mat) == colSums(mat) ```{r degreesum-hint-2, purl = FALSE} # Or by using a built in command in manynet like this: -node_degree(ison_brandes, normalized = FALSE) +node_by_degree(ison_brandes, normalized = FALSE) ``` ```{r degreesum-solution} @@ -143,7 +143,7 @@ mat <- as_matrix(ison_brandes) degrees <- rowSums(mat) rowSums(mat) == colSums(mat) # You can also just use a built in command in manynet though: -node_degree(ison_brandes, normalized = FALSE) +node_by_degree(ison_brandes, normalized = FALSE) ``` ```{r degreesum-Q, echo=FALSE, purl = FALSE} @@ -184,7 +184,7 @@ though there are more elaborate ways to do this in base and grid graphics. 
```{r distrib-solution} # distribution of degree centrality scores of nodes -plot(node_degree(ison_brandes)) +plot(node_by_degree(ison_brandes)) ``` What's plotted here by default is both the degree distribution as a histogram, @@ -218,7 +218,7 @@ question("What can the degree distribution tell us?", ## Other centralities Other measures of centrality can be a little trickier to calculate by hand. -Fortunately, we can use functions from `{manynet}` to help calculate the +Fortunately, we can use functions from `{netrics}` to help calculate the `r gloss("betweenness")`, `r gloss("closeness")`, and `r gloss("eigenvector")` centralities for each node in the network. Let's collect the vectors of these centralities for the `ison_brandes` dataset: @@ -227,27 +227,27 @@ Let's collect the vectors of these centralities for the `ison_brandes` dataset: ``` ```{r micent-hint-1, purl = FALSE} -# Use the node_betweenness() function to calculate the +# Use the node_by_betweenness() function to calculate the # betweenness centralities of nodes in a network -node_betweenness(ison_brandes) +node_by_betweenness(ison_brandes) ``` ```{r micent-hint-2, purl = FALSE} -# Use the node_closeness() function to calculate the +# Use the node_by_closeness() function to calculate the # closeness centrality of nodes in a network -node_closeness(ison_brandes) +node_by_closeness(ison_brandes) ``` ```{r micent-hint-3, purl = FALSE} -# Use the node_eigenvector() function to calculate +# Use the node_by_eigenvector() function to calculate # the eigenvector centrality of nodes in a network -node_eigenvector(ison_brandes) +node_by_eigenvector(ison_brandes) ``` ```{r micent-solution} -node_betweenness(ison_brandes) -node_closeness(ison_brandes) -node_eigenvector(ison_brandes) +node_by_betweenness(ison_brandes) +node_by_closeness(ison_brandes) +node_by_eigenvector(ison_brandes) ``` What is returned here are vectors of betweenness, closeness, and eigenvector @@ -301,7 +301,7 @@ Try to answer the following 
questions for yourself: - what does Bonacich mean when he says that `r gloss("power")` and influence are not the same thing? - can you think of a real-world example when an actor might be central but not powerful, or powerful but not central? -Note that all centrality measures in `{manynet}` return normalized +Note that all centrality measures in `{netrics}` return normalized scores by default -- for the raw scores, include `normalized = FALSE` in the function as an extra argument. @@ -312,9 +312,9 @@ Now, can you create degree distributions for each of these? ``` ```{r otherdist-solution} -plot(node_betweenness(ison_brandes)) -plot(node_closeness(ison_brandes)) -plot(node_eigenvector(ison_brandes)) +plot(node_by_betweenness(ison_brandes)) +plot(node_by_closeness(ison_brandes)) +plot(node_by_eigenvector(ison_brandes)) ``` ## Plotting centrality @@ -335,19 +335,19 @@ we can highlight which node or nodes hold the maximum score in red. ```{r ggid-solution} # plot the network, highlighting the node with the highest centrality score with a different colour ison_brandes %>% - mutate_nodes(color = node_is_max(node_degree())) %>% + mutate_nodes(color = node_is_max(node_by_degree())) %>% graphr(node_color = "color") ison_brandes %>% - mutate_nodes(color = node_is_max(node_betweenness())) %>% + mutate_nodes(color = node_is_max(node_by_betweenness())) %>% graphr(node_color = "color") ison_brandes %>% - mutate_nodes(color = node_is_max(node_closeness())) %>% + mutate_nodes(color = node_is_max(node_by_closeness())) %>% graphr(node_color = "color") ison_brandes %>% - mutate_nodes(color = node_is_max(node_eigenvector())) %>% + mutate_nodes(color = node_is_max(node_by_eigenvector())) %>% graphr(node_color = "color") ``` @@ -361,19 +361,19 @@ What can you see? 
```{r ggid_twomode-solution} ison_brandes2 %>% - add_node_attribute("color", node_is_max(node_degree(ison_brandes2))) %>% + add_node_attribute("color", node_is_max(node_by_degree(ison_brandes2))) %>% graphr(node_color = "color", layout = "bipartite") ison_brandes2 %>% - add_node_attribute("color", node_is_max(node_betweenness(ison_brandes2))) %>% + add_node_attribute("color", node_is_max(node_by_betweenness(ison_brandes2))) %>% graphr(node_color = "color", layout = "bipartite") ison_brandes2 %>% - add_node_attribute("color", node_is_max(node_closeness(ison_brandes2))) %>% + add_node_attribute("color", node_is_max(node_by_closeness(ison_brandes2))) %>% graphr(node_color = "color", layout = "bipartite") ison_brandes2 %>% - add_node_attribute("color", node_is_max(node_eigenvector(ison_brandes2))) %>% + add_node_attribute("color", node_is_max(node_by_eigenvector(ison_brandes2))) %>% graphr(node_color = "color", layout = "bipartite") ``` @@ -391,7 +391,7 @@ question("Select all that are true for the two-mode Brandes network.", -`{manynet}` also implements network `r gloss("centralization")` functions. +`{netrics}` also implements network `r gloss("centralization")` functions. Here we are no longer interested in the level of the node, but in the level of the whole network, so the syntax replaces `node_` with `net_`: @@ -401,10 +401,10 @@ so the syntax replaces `node_` with `net_`: ``` ```{r centzn-solution} -net_degree(ison_brandes) -net_betweenness(ison_brandes) -net_closeness(ison_brandes) -print(net_eigenvector(ison_brandes), digits = 5) +net_by_degree(ison_brandes) +net_by_betweenness(ison_brandes) +net_by_closeness(ison_brandes) +print(net_by_eigenvector(ison_brandes), digits = 5) ``` By default, scores are printed up to 3 decimal places, @@ -428,18 +428,18 @@ but fortunately the `{patchwork}` package is here to help. 
```{r multiplot-solution} ison_brandes <- ison_brandes %>% - add_node_attribute("degree", node_is_max(node_degree(ison_brandes))) %>% - add_node_attribute("betweenness", node_is_max(node_betweenness(ison_brandes))) %>% - add_node_attribute("closeness", node_is_max(node_closeness(ison_brandes))) %>% - add_node_attribute("eigenvector", node_is_max(node_eigenvector(ison_brandes))) + add_node_attribute("degree", node_is_max(node_by_degree(ison_brandes))) %>% + add_node_attribute("betweenness", node_is_max(node_by_betweenness(ison_brandes))) %>% + add_node_attribute("closeness", node_is_max(node_by_closeness(ison_brandes))) %>% + add_node_attribute("eigenvector", node_is_max(node_by_eigenvector(ison_brandes))) gd <- graphr(ison_brandes, node_color = "degree") + - ggtitle("Degree", subtitle = round(net_degree(ison_brandes), 2)) + ggtitle("Degree", subtitle = round(net_by_degree(ison_brandes), 2)) gc <- graphr(ison_brandes, node_color = "closeness") + - ggtitle("Closeness", subtitle = round(net_closeness(ison_brandes), 2)) + ggtitle("Closeness", subtitle = round(net_by_closeness(ison_brandes), 2)) gb <- graphr(ison_brandes, node_color = "betweenness") + - ggtitle("Betweenness", subtitle = round(net_betweenness(ison_brandes), 2)) + ggtitle("Betweenness", subtitle = round(net_by_betweenness(ison_brandes), 2)) ge <- graphr(ison_brandes, node_color = "eigenvector") + - ggtitle("Eigenvector", subtitle = round(net_eigenvector(ison_brandes), 2)) + ggtitle("Eigenvector", subtitle = round(net_by_eigenvector(ison_brandes), 2)) (gd | gb) / (gc | ge) # ggsave("brandes-centralities.pdf") ``` diff --git a/inst/tutorials/tutorial3/centrality.html b/inst/tutorials/tutorial3/centrality.html index 6ee3cc1..d3ab251 100644 --- a/inst/tutorials/tutorial3/centrality.html +++ b/inst/tutorials/tutorial3/centrality.html @@ -13,7 +13,7 @@ - + Centrality @@ -205,7 +205,7 @@

Degree centrality

data-completion="1" data-diagnostics="1" data-startover="1" data-lines="0" data-pipe="|>">
# Or by using a built in command in manynet like this:
-node_degree(ison_brandes, normalized = FALSE)
+node_by_degree(ison_brandes, normalized = FALSE)
Degree centrality degrees <- rowSums(mat) rowSums(mat) == colSums(mat) # You can also just use a built in command in manynet though: -node_degree(ison_brandes, normalized = FALSE) +node_by_degree(ison_brandes, normalized = FALSE)
@@ -252,7 +252,7 @@

Degree distributions

data-completion="1" data-diagnostics="1" data-startover="1" data-lines="0" data-pipe="|>">
# distribution of degree centrality scores of nodes
-plot(node_degree(ison_brandes))
+plot(node_by_degree(ison_brandes))

What’s plotted here by default is both the degree distribution as a histogram, as well as a density plot overlaid on it in red.

@@ -276,7 +276,7 @@

Degree distributions

Other centralities

Other measures of centrality can be a little trickier to calculate by -hand. Fortunately, we can use functions from {manynet} to +hand. Fortunately, we can use functions from {netrics} to help calculate the betweenness , @@ -294,30 +294,30 @@

Other centralities

-
# Use the node_betweenness() function to calculate the
+
# Use the node_by_betweenness() function to calculate the
 # betweenness centralities of nodes in a network
-node_betweenness(ison_brandes)
+node_by_betweenness(ison_brandes)
-
# Use the node_closeness() function to calculate the 
+
# Use the node_by_closeness() function to calculate the 
 # closeness centrality of nodes in a network
-node_closeness(ison_brandes)
+node_by_closeness(ison_brandes)
-
# Use the node_eigenvector() function to calculate 
+
# Use the node_by_eigenvector() function to calculate 
 # the eigenvector centrality of nodes in a network
-node_eigenvector(ison_brandes)
+node_by_eigenvector(ison_brandes)
-
node_betweenness(ison_brandes)
-node_closeness(ison_brandes)
-node_eigenvector(ison_brandes)
+
node_by_betweenness(ison_brandes)
+node_by_closeness(ison_brandes)
+node_by_eigenvector(ison_brandes)

What is returned here are vectors of betweenness, closeness, and eigenvector scores for the nodes in the network. But what do they @@ -354,7 +354,7 @@

Other centralities

  • can you think of a real-world example when an actor might be central but not powerful, or powerful but not central?
  • -

    Note that all centrality measures in {manynet} return +

    Note that all centrality measures in {netrics} return normalized scores by default – for the raw scores, include normalized = FALSE in the function as an extra argument.

    @@ -367,9 +367,9 @@

    Other centralities

    -
    plot(node_betweenness(ison_brandes))
    -plot(node_closeness(ison_brandes))
    -plot(node_eigenvector(ison_brandes))
    +
    plot(node_by_betweenness(ison_brandes))
    +plot(node_by_closeness(ison_brandes))
    +plot(node_by_eigenvector(ison_brandes))
    @@ -392,19 +392,19 @@

    Plotting centrality

    data-lines="0" data-pipe="|>">
    # plot the network, highlighting the node with the highest centrality score with a different colour
     ison_brandes %>%
    -  mutate_nodes(color = node_is_max(node_degree())) %>%
    +  mutate_nodes(color = node_is_max(node_by_degree())) %>%
       graphr(node_color = "color")
     
     ison_brandes %>%
    -  mutate_nodes(color = node_is_max(node_betweenness())) %>%
    +  mutate_nodes(color = node_is_max(node_by_betweenness())) %>%
       graphr(node_color = "color")
     
     ison_brandes %>%
    -  mutate_nodes(color = node_is_max(node_closeness())) %>%
    +  mutate_nodes(color = node_is_max(node_by_closeness())) %>%
       graphr(node_color = "color")
     
     ison_brandes %>%
    -  mutate_nodes(color = node_is_max(node_eigenvector())) %>%
    +  mutate_nodes(color = node_is_max(node_by_eigenvector())) %>%
       graphr(node_color = "color")

    How neat! Try it with the two-mode version. What can you see?

    @@ -419,19 +419,19 @@

    Plotting centrality

    data-diagnostics="1" data-startover="1" data-lines="0" data-pipe="|>">
    ison_brandes2 %>%
    -  add_node_attribute("color", node_is_max(node_degree(ison_brandes2))) %>%
    +  add_node_attribute("color", node_is_max(node_by_degree(ison_brandes2))) %>%
       graphr(node_color = "color", layout = "bipartite")
     
     ison_brandes2 %>%
    -  add_node_attribute("color", node_is_max(node_betweenness(ison_brandes2))) %>%
    +  add_node_attribute("color", node_is_max(node_by_betweenness(ison_brandes2))) %>%
       graphr(node_color = "color", layout = "bipartite")
     
     ison_brandes2 %>%
    -  add_node_attribute("color", node_is_max(node_closeness(ison_brandes2))) %>%
    +  add_node_attribute("color", node_is_max(node_by_closeness(ison_brandes2))) %>%
       graphr(node_color = "color", layout = "bipartite")
     
     ison_brandes2 %>%
    -  add_node_attribute("color", node_is_max(node_eigenvector(ison_brandes2))) %>%
    +  add_node_attribute("color", node_is_max(node_by_eigenvector(ison_brandes2))) %>%
       graphr(node_color = "color", layout = "bipartite")
    @@ -446,7 +446,7 @@

    Plotting centrality

    Centralization

    -

    {manynet} also implements network +

    {netrics} also implements network centralization functions. Here we are no longer interested in the level of the node, but in the level of the whole network, so the @@ -459,10 +459,10 @@

    Centralization

    -
    net_degree(ison_brandes)
    -net_betweenness(ison_brandes)
    -net_closeness(ison_brandes)
    -print(net_eigenvector(ison_brandes), digits = 5)
    +
    net_by_degree(ison_brandes)
    +net_by_betweenness(ison_brandes)
    +net_by_closeness(ison_brandes)
    +print(net_by_eigenvector(ison_brandes), digits = 5)

    By default, scores are printed up to 3 decimal places, but this can be modified and, in any case, the unrounded values are retained @@ -484,18 +484,18 @@

    Centralization

    data-completion="1" data-diagnostics="1" data-startover="1" data-lines="0" data-pipe="|>">
    ison_brandes <- ison_brandes %>%
    -  add_node_attribute("degree", node_is_max(node_degree(ison_brandes))) %>%
    -  add_node_attribute("betweenness", node_is_max(node_betweenness(ison_brandes))) %>%
    -  add_node_attribute("closeness", node_is_max(node_closeness(ison_brandes))) %>%
    -  add_node_attribute("eigenvector", node_is_max(node_eigenvector(ison_brandes)))
    +  add_node_attribute("degree", node_is_max(node_by_degree(ison_brandes))) %>%
    +  add_node_attribute("betweenness", node_is_max(node_by_betweenness(ison_brandes))) %>%
    +  add_node_attribute("closeness", node_is_max(node_by_closeness(ison_brandes))) %>%
    +  add_node_attribute("eigenvector", node_is_max(node_by_eigenvector(ison_brandes)))
     gd <- graphr(ison_brandes, node_color = "degree") + 
    -  ggtitle("Degree", subtitle = round(net_degree(ison_brandes), 2))
    +  ggtitle("Degree", subtitle = round(net_by_degree(ison_brandes), 2))
     gc <- graphr(ison_brandes, node_color = "closeness") + 
    -  ggtitle("Closeness", subtitle = round(net_closeness(ison_brandes), 2))
    +  ggtitle("Closeness", subtitle = round(net_by_closeness(ison_brandes), 2))
     gb <- graphr(ison_brandes, node_color = "betweenness") + 
    -  ggtitle("Betweenness", subtitle = round(net_betweenness(ison_brandes), 2))
    +  ggtitle("Betweenness", subtitle = round(net_by_betweenness(ison_brandes), 2))
     ge <- graphr(ison_brandes, node_color = "eigenvector") + 
    -  ggtitle("Eigenvector", subtitle = round(net_eigenvector(ison_brandes), 2))
    +  ggtitle("Eigenvector", subtitle = round(net_by_eigenvector(ison_brandes), 2))
     (gd | gb) / (gc | ge)
     # ggsave("brandes-centralities.pdf")
    @@ -586,9 +586,34 @@

    Glossary

    @@ -622,12 +647,30 @@

    Glossary

    @@ -770,23 +851,23 @@

    Glossary

    @@ -814,9 +895,28 @@

    Glossary

    @@ -881,24 +981,24 @@

    Glossary

    @@ -926,17 +1026,36 @@

    Glossary

    @@ -1085,14 +1204,32 @@

    Glossary

    @@ -1258,18 +1433,37 @@

    Glossary

    @@ -1387,22 +1600,22 @@

    Glossary

    @@ -1429,12 +1642,30 @@

    Glossary

    diff --git a/inst/tutorials/tutorial4/community.Rmd b/inst/tutorials/tutorial4/community.Rmd index 6c30f87..609ce41 100644 --- a/inst/tutorials/tutorial4/community.Rmd +++ b/inst/tutorials/tutorial4/community.Rmd @@ -237,7 +237,7 @@ respectively: net_ties(tasks)/(net_nodes(tasks)*(net_nodes(tasks)-1)) ``` -but we can also just use the `{manynet}` function for calculating the density, +but we can also just use the `{netrics}` function for calculating the density, which always uses the equation appropriate for the type of network... ```{r dens, exercise=TRUE, exercise.setup = "separatingnets", purl = FALSE} @@ -245,10 +245,10 @@ which always uses the equation appropriate for the type of network... ``` ```{r dens-solution} -net_density(tasks) +net_by_density(tasks) ``` -Note that the various measures in `{manynet}` print results to three decimal points +Note that the various measures in `{netrics}` print results to three decimal points by default, but the underlying result retains the same recurrence. So same result... @@ -286,7 +286,7 @@ First, let's calculate `r gloss("reciprocity")` in the task network. While one could do this by hand, -it's more efficient to do this using the `{manynet}` package. +it's more efficient to do this using the `{netrics}` package. Can you guess the correct name of the function? ```{r recip, exercise=TRUE, exercise.setup = "separatingnets", purl = FALSE} @@ -294,7 +294,7 @@ Can you guess the correct name of the function? ``` ```{r recip-solution} -net_reciprocity(tasks) +net_by_reciprocity(tasks) # this function calculates the amount of reciprocity in the whole network ``` @@ -305,7 +305,7 @@ reciprocated and not. ```{r recip-explanation, exercise = TRUE} tasks %>% mutate_ties(rec = tie_is_reciprocated(tasks)) %>% graphr(edge_color = "rec") -net_indegree(tasks) +net_by_indegree(tasks) ``` So we can see that indeed there are very few asymmetric ties, @@ -325,7 +325,7 @@ Again, can you guess the correct name of this function? 
``` ```{r trans-solution} -net_transitivity(tasks) +net_by_transitivity(tasks) # this function calculates the amount of transitivity in the whole network ``` @@ -460,7 +460,7 @@ interested in the correlation of ties between nodes. Let's return to the question of closure. First try one of the closure measures we have already treated that gives us a sense of shared partners for one-mode networks. -Then compare this with `net_equivalency()`, which can be used on the original +Then compare this with `net_by_equivalency()`, which can be used on the original two-mode network. ```{r twomode-cohesion, exercise=TRUE, exercise.setup = "easyway", purl = FALSE} @@ -468,22 +468,22 @@ two-mode network. ``` ```{r twomode-cohesion-hint-1, purl = FALSE} -# net_transitivity(): Calculate transitivity in a network +# net_by_transitivity(): Calculate transitivity in a network -net_transitivity(women_graph) -net_transitivity(event_graph) +net_by_transitivity(women_graph) +net_by_transitivity(event_graph) ``` ```{r twomode-cohesion-hint-2, purl = FALSE} -# net_equivalency(): Calculate equivalence or reinforcement in a (usually two-mode) network +# net_by_equivalency(): Calculate equivalence or reinforcement in a (usually two-mode) network -net_equivalency(ison_southern_women) +net_by_equivalency(ison_southern_women) ``` ```{r twomode-cohesion-solution} -net_transitivity(women_graph) -net_transitivity(event_graph) -net_equivalency(ison_southern_women) +net_by_transitivity(women_graph) +net_by_transitivity(event_graph) +net_by_equivalency(ison_southern_women) ``` ```{r equil-interp, echo=FALSE, purl = FALSE} @@ -514,7 +514,7 @@ Now let's look at the friendship network, 'friends'. We're interested here in how many `r gloss("components", "component")` there are. -By default, the `net_components()` function will +By default, the `net_by_components()` function will return the number of _strong_ components for directed networks. 
For _weak_ components, you will need to first make the network `r gloss("undirected")`. @@ -534,7 +534,7 @@ question("Weak components...", ``` ```{r comp-no-hint-1, purl = FALSE} -net_components(friends) +net_by_components(friends) # note that friends is a directed network # you can see this by calling the object 'friends' # or by running `is_directed(friends)` @@ -545,13 +545,13 @@ net_components(friends) # Note: to_undirected() returns an object with all tie direction removed, # so any pair of nodes with at least one directed edge # will be connected by an undirected edge in the new network. -net_components(to_undirected(friends)) +net_by_components(to_undirected(friends)) ``` ```{r comp-no-solution} # note that friends is a directed network -net_components(friends) -net_components(to_undirected(friends)) +net_by_components(friends) +net_by_components(to_undirected(friends)) ``` ```{r comp-interp, echo = FALSE, purl = FALSE} @@ -572,7 +572,7 @@ question("How many components are there?", So we know how many components there are, but maybe we're also interested in which nodes are members of which components? 
-`node_components()` returns a membership vector +`node_in_component()` returns a membership vector that can be used to color nodes in `graphr()`: ```{r comp-memb, exercise=TRUE, exercise.setup = "separatingnets", purl = FALSE} @@ -581,12 +581,12 @@ that can be used to color nodes in `graphr()`: ```{r comp-memb-hint-1, purl = FALSE} friends <- friends %>% - mutate(weak_comp = node_components(to_undirected(friends)), - strong_comp = node_components(friends)) -# node_components returns a vector of nodes' memberships to components in the network + mutate(weak_comp = node_in_component(to_undirected(friends)), + strong_comp = node_in_component(friends)) +# node_in_component returns a vector of nodes' memberships to components in the network # here, we are adding the nodes' membership to components as an attribute in the network # alternatively, we can also use the function `add_node_attribute()` -# eg. `add_node_attribute(friends, "weak_comp", node_components(to_undirected(friends)))` +# eg. `add_node_attribute(friends, "weak_comp", node_in_component(to_undirected(friends)))` ``` ```{r comp-memb-hint-2, purl = FALSE} @@ -598,8 +598,8 @@ graphr(friends, node_color = "strong_comp") + ggtitle("Strong components") ```{r comp-memb-solution} friends <- friends %>% - mutate(weak_comp = node_components(to_undirected(friends)), - strong_comp = node_components(friends)) + mutate(weak_comp = node_in_component(to_undirected(friends)), + strong_comp = node_in_component(friends)) graphr(friends, node_color = "weak_comp") + ggtitle("Weak components") + graphr(friends, node_color = "strong_comp") + ggtitle("Strong components") ``` @@ -648,8 +648,8 @@ Since there are many isolates, there will be many components, even if we look at weak components and not just strong components. 
```{r blogcomp, exercise = TRUE, exercise.setup = "blogsize"} -net_components(blogs) -net_components(to_undirected(blogs)) +net_by_components(blogs) +net_by_components(to_undirected(blogs)) ``` ### Giant component @@ -698,7 +698,7 @@ The most common measure of the fit of a community assignment in a network is `r gloss("modularity")`. ```{r blogmod, exercise=TRUE, exercise.setup = "blogtogiant"} -net_modularity(blogs, membership = node_in_partition(blogs)) +net_by_modularity(blogs, membership = node_in_partition(blogs)) ``` Remember that modularity ranges between 1 and -1. @@ -727,7 +727,7 @@ but is otherwise quite flexible. ```{r blogmodassign, exercise=TRUE, exercise.setup = "blogtogiant", warning=FALSE, fig.width=9} graphr(blogs, node_color = "Leaning") -net_modularity(blogs, membership = node_attribute(blogs, "Leaning")) +net_by_modularity(blogs, membership = node_attribute(blogs, "Leaning")) ``` gif of Chevy Chase saying plot twist @@ -764,7 +764,7 @@ There is no one single best community detection algorithm. Instead there are several, each with their strengths and weaknesses. Since this is a rather small network, we'll focus on the following methods: walktrap, edge betweenness, and fast greedy. -(Others are included in `{manynet}`/`{igraph}`) +(Others are included in `{netrics}`/`{igraph}`) As you use them, consider how they portray communities and consider which one(s) afford a sensible view of the social world as cohesively organized. 
@@ -819,13 +819,13 @@ which(friend_wt == 2) ```{r walk-hint-3, purl = FALSE} # resulting in a modularity of -net_modularity(friends, friend_wt) +net_by_modularity(friends, friend_wt) ``` ```{r walk-solution} friend_wt <- node_in_walktrap(friends, times=50) # results in a modularity of -net_modularity(friends, friend_wt) +net_by_modularity(friends, friend_wt) ``` We can also visualise the clusters on the original network @@ -857,9 +857,9 @@ graphr(friends, node_color = "walk_comm", node_group = "walk_comm") + ggtitle("Walktrap", - subtitle = round(net_modularity(friends, friend_wt), 3)) + subtitle = round(net_by_modularity(friends, friend_wt), 3)) # the function `round()` rounds the values to a specified number of decimal places -# here, we are telling it to round the net_modularity score to 3 decimal places, +# here, we are telling it to round the net_by_modularity score to 3 decimal places, # but the score is exactly 0.27 so only two decimal places are printed. ``` @@ -874,7 +874,7 @@ graphr(friends, node_color = "walk_comm", node_group = "walk_comm") + ggtitle("Walktrap", - subtitle = round(net_modularity(friends, friend_wt), 3)) + subtitle = round(net_by_modularity(friends, friend_wt), 3)) ``` This can be helpful when polygons overlap to better identify membership @@ -924,7 +924,7 @@ graphr(friends, node_color = "eb_comm", node_group = "eb_comm") + ggtitle("Edge-betweenness", - subtitle = round(net_modularity(friends, friend_eb), 3)) + subtitle = round(net_by_modularity(friends, friend_eb), 3)) ``` ```{r ebplot-solution} @@ -934,7 +934,7 @@ graphr(friends, node_color = "eb_comm", node_group = "eb_comm") + ggtitle("Edge-betweenness", - subtitle = round(net_modularity(friends, friend_eb), 3)) + subtitle = round(net_by_modularity(friends, friend_eb), 3)) ``` For more on this algorithm, see M Newman and M Girvan: Finding and @@ -959,7 +959,7 @@ although I personally find it both useful and in many cases quite "accurate". 
```{r fg-hint-1, purl = FALSE} friend_fg <- node_in_greedy(friends) friend_fg # Does this result in a different community partition? -net_modularity(friends, friend_fg) # Compare this to the edge betweenness procedure +net_by_modularity(friends, friend_fg) # Compare this to the edge betweenness procedure ``` ```{r fg-hint-2, purl = FALSE} @@ -970,14 +970,14 @@ graphr(friends, node_color = "fg_comm", node_group = "fg_comm") + ggtitle("Fast-greedy", - subtitle = round(net_modularity(friends, friend_fg), 3)) + subtitle = round(net_by_modularity(friends, friend_fg), 3)) # ``` ```{r fg-solution} friend_fg <- node_in_greedy(friends) friend_fg # Does this result in a different community partition? -net_modularity(friends, friend_fg) # Compare this to the edge betweenness procedure +net_by_modularity(friends, friend_fg) # Compare this to the edge betweenness procedure # Again, we can visualise these communities in different ways: friends <- friends %>% @@ -986,7 +986,7 @@ graphr(friends, node_color = "fg_comm", node_group = "fg_comm") + ggtitle("Fast-greedy", - subtitle = round(net_modularity(friends, friend_fg), 3)) + subtitle = round(net_by_modularity(friends, friend_fg), 3)) ``` See A Clauset, MEJ Newman, C Moore: @@ -1005,7 +1005,7 @@ question("What is the difference between communities and components?", ### Communities -Lastly, `{manynet}` includes a function to run through and find the membership +Lastly, `{netrics}` includes a function to run through and find the membership assignment that maximises `r gloss("modularity")` across any of the applicable community detection procedures. This is helpfully called `node_in_community()`. diff --git a/inst/tutorials/tutorial4/community.html b/inst/tutorials/tutorial4/community.html index fda2a0e..37264f3 100644 --- a/inst/tutorials/tutorial4/community.html +++ b/inst/tutorials/tutorial4/community.html @@ -13,7 +13,7 @@ - + Cohesion and Community @@ -299,7 +299,7 @@

    Density

    # calculating network density manually according to equation
     net_ties(tasks)/(net_nodes(tasks)*(net_nodes(tasks)-1))
    -

    but we can also just use the {manynet} function for +

    but we can also just use the {netrics} function for calculating the density, which always uses the equation appropriate for the type of network…

    Density
    -
    net_density(tasks)
    +
    net_by_density(tasks)
    -

    Note that the various measures in {manynet} print +

    Note that the various measures in {netrics} print results to three decimal points by default, but the underlying result retains the same recurrence. So same result…

    @@ -345,7 +345,7 @@

    Reciprocity

    First, let’s calculate reciprocity in the task network. While one could do this -by hand, it’s more efficient to do this using the {manynet} +by hand, it’s more efficient to do this using the {netrics} package. Can you guess the correct name of the function?

    Reciprocity
    -
    net_reciprocity(tasks)
    +
    net_by_reciprocity(tasks)
     # this function calculates the amount of reciprocity in the whole network

    Wow, this seems quite high based on what we observed visually! But if @@ -366,7 +366,7 @@

    Reciprocity

    data-completion="1" data-diagnostics="1" data-startover="1" data-lines="0" data-pipe="|>">
    tasks %>% mutate_ties(rec = tie_is_reciprocated(tasks)) %>% graphr(edge_color = "rec")
    -net_indegree(tasks)
    +net_by_indegree(tasks)

    So we can see that indeed there are very few asymmetric ties, and yet @@ -388,7 +388,7 @@

    Transitivity

    -
    net_transitivity(tasks)
    +
    net_by_transitivity(tasks)
     # this function calculates the amount of transitivity in the whole network
    @@ -521,7 +521,7 @@

    Closure in two-mode networks

    Let’s return to the question of closure. First try one of the closure measures we have already treated that gives us a sense of shared partners for one-mode networks. Then compare this with -net_equivalency(), which can be used on the original +net_by_equivalency(), which can be used on the original two-mode network.

-# net_transitivity(): Calculate transitivity in a network
+# net_by_transitivity(): Calculate transitivity in a network

-net_transitivity(women_graph)
-net_transitivity(event_graph)
+net_by_transitivity(women_graph)
+net_by_transitivity(event_graph)

-# net_equivalency(): Calculate equivalence or reinforcement in a (usually two-mode) network
+# net_by_equivalency(): Calculate equivalence or reinforcement in a (usually two-mode) network

-net_equivalency(ison_southern_women)
+net_by_equivalency(ison_southern_women)

-net_transitivity(women_graph)
-net_transitivity(event_graph)
-net_equivalency(ison_southern_women)
+net_by_transitivity(women_graph)
+net_by_transitivity(event_graph)
+net_by_equivalency(ison_southern_women)
    @@ -573,7 +573,7 @@

    Components

here in how many components there are. By default, the
-net_components() function will return the number of
+net_by_components() function will return the number of
 strong components for directed networks. For weak components, you will need to first make the network

@@ -595,7 +595,7 @@

    Components

-net_components(friends)
+net_by_components(friends)
     # note that friends is a directed network
     # you can see this by calling the object 'friends'
     # or by running `is_directed(friends)`
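Weak components can be counted by hand with a simple flood fill over a symmetrised adjacency matrix. A base-R sketch on a hypothetical directed network with two weak components (assumed data, not `friends`):

```r
# Hypothetical directed adjacency matrix: weak components {1,2,3} and {4,5}
mat <- matrix(0, 5, 5)
mat[1, 2] <- 1; mat[2, 3] <- 1; mat[4, 5] <- 1
# For weak components, ignore tie direction by symmetrising first
sym <- (mat + t(mat)) > 0
membership <- rep(0, nrow(sym))
comp <- 0
for (i in seq_len(nrow(sym))) {
  if (membership[i] == 0) {
    comp <- comp + 1
    frontier <- i
    while (length(frontier) > 0) {
      membership[frontier] <- comp
      # expand to neighbours of the frontier not yet assigned to a component
      frontier <- which(colSums(sym[frontier, , drop = FALSE]) > 0 &
                          membership == 0)
    }
  }
}
comp        # number of weak components: 2
membership  # component each node belongs to: 1 1 1 2 2
```

Strong components would additionally require mutual reachability along directed paths, which is why directed networks can have more strong than weak components.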
    @@ -607,14 +607,14 @@

    Components

# Note: to_undirected() returns an object with all tie direction removed,
# so any pair of nodes with at least one directed edge
# will be connected by an undirected edge in the new network.
-net_components(to_undirected(friends))
    +net_by_components(to_undirected(friends))
    # note that friends is a directed network
    -net_components(friends)
    -net_components(to_undirected(friends))
+net_by_components(friends)
+net_by_components(to_undirected(friends))
    @@ -626,7 +626,7 @@

    Components

So we know how many components there are, but maybe we’re also interested in which nodes are members of which components?
-node_components() returns a membership vector that can be
+node_in_component() returns a membership vector that can be
 used to color nodes in graphr():

Components
    friends <- friends %>% 
    -  mutate(weak_comp = node_components(to_undirected(friends)),
    -         strong_comp = node_components(friends))
    -# node_components returns a vector of nodes' memberships to components in the network
    +  mutate(weak_comp = node_in_component(to_undirected(friends)),
    +         strong_comp = node_in_component(friends))
    +# node_in_component returns a vector of nodes' memberships to components in the network
     # here, we are adding the nodes' membership to components as an attribute in the network
     # alternatively, we can also use the function `add_node_attribute()`
    -# eg. `add_node_attribute(friends, "weak_comp", node_components(to_undirected(friends)))`
    +# eg. `add_node_attribute(friends, "weak_comp", node_in_component(to_undirected(friends)))`
Components
    friends <- friends %>% 
    -  mutate(weak_comp = node_components(to_undirected(friends)),
    -         strong_comp = node_components(friends))
    +  mutate(weak_comp = node_in_component(to_undirected(friends)),
    +         strong_comp = node_in_component(friends))
     graphr(friends, node_color = "weak_comp") + ggtitle("Weak components") +
     graphr(friends, node_color = "strong_comp") + ggtitle("Strong components")
    @@ -703,8 +703,8 @@

    Factions

-net_components(blogs)
-net_components(to_undirected(blogs))
+node_in_component(blogs)
+node_in_component(to_undirected(blogs))
    @@ -760,7 +760,7 @@

    Modularity

-net_modularity(blogs, membership = node_in_partition(blogs))
+net_by_modularity(blogs, membership = node_in_partition(blogs))

Remember that modularity ranges between -1 and 1. How can we

@@ -783,7 +783,7 @@

    Modularity

    graphr(blogs, node_color = "Leaning")
    -net_modularity(blogs, membership = node_attribute(blogs, "Leaning"))
    +net_by_modularity(blogs, membership = node_attribute(blogs, "Leaning"))
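The modularity score being computed here can also be worked out by hand from its defining formula, Q = (1/2m) Σᵢⱼ [Aᵢⱼ − kᵢkⱼ/2m] δ(cᵢ, cⱼ). A base-R sketch on a hypothetical network of two disconnected triangles (assumed data, not `blogs`):

```r
# Hypothetical undirected network: two disconnected triangles
mat <- matrix(0, 6, 6)
for (e in list(c(1, 2), c(2, 3), c(1, 3), c(4, 5), c(5, 6), c(4, 6))) {
  mat[e[1], e[2]] <- 1
  mat[e[2], e[1]] <- 1
}
membership <- c(1, 1, 1, 2, 2, 2)  # one community per triangle
deg <- rowSums(mat)
m2 <- sum(mat)  # 2m: twice the number of undirected ties
# Q = (1/2m) * sum_ij [A_ij - k_i * k_j / 2m] * delta(c_i, c_j)
same <- outer(membership, membership, "==")
Q <- sum((mat - outer(deg, deg) / m2) * same) / m2
Q  # 0.5 for this perfectly separated partition
```

A partition that cuts across the triangles instead would push Q toward zero or below, which is what makes modularity useful for comparing candidate memberships.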

    gif of Chevy Chase saying plot twist

    @@ -818,7 +818,7 @@

    Communities

there are several, each with their strengths and weaknesses. Since this is a rather small network, we’ll focus on the following methods: walktrap, edge betweenness, and fast greedy. (Others are included in
-{manynet}/{igraph}) As you use them, consider
+{netrics}/{igraph}) As you use them, consider
 how they portray communities and consider which one(s) afford a sensible view of the social world as cohesively organized.

    @@ -877,14 +877,14 @@

    Walktrap

    # resulting in a modularity of 
    -net_modularity(friends, friend_wt)
    +net_by_modularity(friends, friend_wt)
    friend_wt <- node_in_walktrap(friends, times=50)
     # results in a modularity of 
    -net_modularity(friends, friend_wt)
    +net_by_modularity(friends, friend_wt)

We can also visualise the clusters on the original network. How does the following look? Plausible?

    @@ -920,9 +920,9 @@

    Walktrap

node_color = "walk_comm",
node_group = "walk_comm") +
 ggtitle("Walktrap",
-        subtitle = round(net_modularity(friends, friend_wt), 3))
+        subtitle = round(net_by_modularity(friends, friend_wt), 3))
# the function `round()` rounds the values to a specified number of decimal places
-# here, we are telling it to round the net_modularity score to 3 decimal places,
+# here, we are telling it to round the net_by_modularity score to 3 decimal places,
# but the score is exactly 0.27 so only two decimal places are printed.

This can be helpful when polygons overlap to better identify membership. Or you can use node color and size to indicate other

@@ -996,7 +996,7 @@

    Edge Betweenness

node_color = "eb_comm",
node_group = "eb_comm") +
 ggtitle("Edge-betweenness",
-        subtitle = round(net_modularity(friends, friend_eb), 3))
+        subtitle = round(net_by_modularity(friends, friend_eb), 3))

For more on this algorithm, see M Newman and M Girvan: Finding and evaluating community structure in networks, Physical Review E 69, 026113

@@ -1037,7 +1037,7 @@

    Fast Greedy

    friend_fg <- node_in_greedy(friends)
     friend_fg # Does this result in a different community partition?
    -net_modularity(friends, friend_fg) # Compare this to the edge betweenness procedure
    +net_by_modularity(friends, friend_fg) # Compare this to the edge betweenness procedure
node_color = "fg_comm",
node_group = "fg_comm") +
 ggtitle("Fast-greedy",
-        subtitle = round(net_modularity(friends, friend_fg), 3))
+        subtitle = round(net_by_modularity(friends, friend_fg), 3))

Fast Greedy
    friend_fg <- node_in_greedy(friends)
     friend_fg # Does this result in a different community partition?
    -net_modularity(friends, friend_fg) # Compare this to the edge betweenness procedure
    +net_by_modularity(friends, friend_fg) # Compare this to the edge betweenness procedure
     
     # Again, we can visualise these communities in different ways:
     friends <- friends %>% 
    @@ -1066,7 +1066,7 @@ 

    Fast Greedy

node_color = "fg_comm",
node_group = "fg_comm") +
 ggtitle("Fast-greedy",
-        subtitle = round(net_modularity(friends, friend_fg), 3))
+        subtitle = round(net_by_modularity(friends, friend_fg), 3))

See A Clauset, MEJ Newman, C Moore: Finding community structure in very large networks

    Communities

-Lastly, {manynet} includes a function to run through and
+Lastly, {netrics} includes a function to run through and
 find the membership assignment that maximises modularity across any of the applicable community

@@ -1461,15 +1461,15 @@

    Glossary

    @@ -1592,7 +1592,7 @@

    Glossary

    list(label = "dens", code = "", opts = list(label = "\"dens\"", exercise = "TRUE", exercise.setup = "\"separatingnets\"", purl = "FALSE"), engine = "r")), code_check = NULL, - error_check = NULL, check = NULL, solution = structure("net_density(tasks)", chunk_opts = list( + error_check = NULL, check = NULL, solution = structure("net_by_density(tasks)", chunk_opts = list( label = "dens-solution")), tests = NULL, options = list( eval = FALSE, echo = TRUE, results = "markup", tidy = FALSE, tidy.opts = NULL, collapse = FALSE, prompt = FALSE, comment = NA, @@ -1619,11 +1619,11 @@

    Glossary

    @@ -1679,7 +1679,7 @@

    Glossary

    list(label = "recip", code = "", opts = list(label = "\"recip\"", exercise = "TRUE", exercise.setup = "\"separatingnets\"", purl = "FALSE"), engine = "r")), code_check = NULL, - error_check = NULL, check = NULL, solution = structure(c("net_reciprocity(tasks)", + error_check = NULL, check = NULL, solution = structure(c("net_by_reciprocity(tasks)", "# this function calculates the amount of reciprocity in the whole network" ), chunk_opts = list(label = "recip-solution")), tests = NULL, options = list(eval = FALSE, echo = TRUE, results = "markup", @@ -1738,7 +1738,7 @@

    Glossary

    "tasks <- to_uniplex(ison_algebra, \"tasks\")"), chunk_opts = list( label = "setup", include = FALSE, purl = FALSE, eval = TRUE)), setup = NULL, chunks = list(list(label = "recip-explanation", - code = "tasks %>% mutate_ties(rec = tie_is_reciprocated(tasks)) %>% graphr(edge_color = \"rec\")\nnet_indegree(tasks)", + code = "tasks %>% mutate_ties(rec = tie_is_reciprocated(tasks)) %>% graphr(edge_color = \"rec\")\nnet_by_indegree(tasks)", opts = list(label = "\"recip-explanation\"", exercise = "TRUE"), engine = "r")), code_check = NULL, error_check = NULL, check = NULL, solution = NULL, tests = NULL, options = list( @@ -1759,7 +1759,7 @@

    Glossary

    engine = "r", split = FALSE, include = TRUE, purl = TRUE, max.print = 1000, label = "recip-explanation", exercise = TRUE, code = c("tasks %>% mutate_ties(rec = tie_is_reciprocated(tasks)) %>% graphr(edge_color = \"rec\")", - "net_indegree(tasks)"), out.width.px = 624, out.height.px = 384, + "net_by_indegree(tasks)"), out.width.px = 624, out.height.px = 384, params.src = "recip-explanation, exercise = TRUE", fig.num = 0, exercise.df_print = "paged", exercise.checker = "NULL"), engine = "r", version = "4"), class = c("r", "tutorial_exercise" @@ -1806,7 +1806,7 @@

    Glossary

    list(label = "trans", code = "", opts = list(label = "\"trans\"", exercise = "TRUE", exercise.setup = "\"separatingnets\"", purl = "FALSE"), engine = "r")), code_check = NULL, - error_check = NULL, check = NULL, solution = structure(c("net_transitivity(tasks)", + error_check = NULL, check = NULL, solution = structure(c("net_by_transitivity(tasks)", "# this function calculates the amount of transitivity in the whole network" ), chunk_opts = list(label = "trans-solution")), tests = NULL, options = list(eval = FALSE, echo = TRUE, results = "markup", @@ -1835,30 +1835,30 @@

    Glossary

    @@ -2195,8 +2195,8 @@

    Glossary

    label = "\"twomode-cohesion\"", exercise = "TRUE", exercise.setup = "\"easyway\"", purl = "FALSE"), engine = "r")), code_check = NULL, error_check = NULL, - check = NULL, solution = structure(c("net_transitivity(women_graph)", - "net_transitivity(event_graph)", "net_equivalency(ison_southern_women)" + check = NULL, solution = structure(c("net_by_transitivity(women_graph)", + "net_by_transitivity(event_graph)", "net_by_equivalency(ison_southern_women)" ), chunk_opts = list(label = "twomode-cohesion-solution")), tests = NULL, options = list(eval = FALSE, echo = TRUE, results = "markup", tidy = FALSE, tidy.opts = NULL, collapse = FALSE, prompt = FALSE, @@ -2226,28 +2226,28 @@

    Glossary

    @@ -2330,7 +2330,7 @@

    Glossary

    exercise = "TRUE", exercise.setup = "\"separatingnets\"", purl = "FALSE"), engine = "r")), code_check = NULL, error_check = NULL, check = NULL, solution = structure(c("# note that friends is a directed network", - "net_components(friends)", "net_components(to_undirected(friends))" + "net_by_components(friends)", "net_by_components(to_undirected(friends))" ), chunk_opts = list(label = "comp-no-solution")), tests = NULL, options = list(eval = FALSE, echo = TRUE, results = "markup", tidy = FALSE, tidy.opts = NULL, collapse = FALSE, prompt = FALSE, @@ -2360,19 +2360,19 @@

    Glossary

    @@ -2429,8 +2429,8 @@

    Glossary

    exercise = "TRUE", exercise.setup = "\"separatingnets\"", purl = "FALSE"), engine = "r")), code_check = NULL, error_check = NULL, check = NULL, solution = structure(c("friends <- friends %>% ", - " mutate(weak_comp = node_components(to_undirected(friends)),", - " strong_comp = node_components(friends))", "graphr(friends, node_color = \"weak_comp\") + ggtitle(\"Weak components\") +", + " mutate(weak_comp = node_in_component(to_undirected(friends)),", + " strong_comp = node_in_component(friends))", "graphr(friends, node_color = \"weak_comp\") + ggtitle(\"Weak components\") +", "graphr(friends, node_color = \"strong_comp\") + ggtitle(\"Strong components\")" ), chunk_opts = list(label = "comp-memb-solution")), tests = NULL, options = list(eval = FALSE, echo = TRUE, results = "markup", @@ -2460,20 +2460,20 @@

    Glossary

    @@ -2649,7 +2649,7 @@

    Glossary

    setup = "# This is a large network\nnet_nodes(irps_blogs)\n# Let's concentrate on just a sample of 240 \n# (by deleting a random sample of 1250)\nblogs <- delete_nodes(irps_blogs, sample(1:1490, 1250))\ngraphr(blogs)", chunks = list(list(label = "blogsize", code = "# This is a large network\nnet_nodes(irps_blogs)\n# Let's concentrate on just a sample of 240 \n# (by deleting a random sample of 1250)\nblogs <- delete_nodes(irps_blogs, sample(1:1490, 1250))\ngraphr(blogs)", opts = list(label = "\"blogsize\"", exercise = "TRUE"), - engine = "r"), list(label = "blogcomp", code = "net_components(blogs)\nnet_components(to_undirected(blogs))", + engine = "r"), list(label = "blogcomp", code = "node_in_component(blogs)\nnode_in_component(to_undirected(blogs))", opts = list(label = "\"blogcomp\"", exercise = "TRUE", exercise.setup = "\"blogsize\""), engine = "r")), code_check = NULL, error_check = NULL, check = NULL, solution = NULL, @@ -2670,8 +2670,8 @@

    Glossary

    message = TRUE, render = NULL, ref.label = NULL, child = NULL, engine = "r", split = FALSE, include = TRUE, purl = TRUE, max.print = 1000, label = "blogcomp", exercise = TRUE, - exercise.setup = "blogsize", code = c("net_components(blogs)", - "net_components(to_undirected(blogs))"), out.width.px = 624, + exercise.setup = "blogsize", code = c("node_in_component(blogs)", + "node_in_component(to_undirected(blogs))"), out.width.px = 624, out.height.px = 384, params.src = "blogcomp, exercise = TRUE, exercise.setup = \"blogsize\"", fig.num = 0, exercise.df_print = "paged", exercise.checker = "NULL"), engine = "r", version = "4"), class = c("r", "tutorial_exercise" @@ -2848,7 +2848,7 @@

    Glossary

    engine = "r"), list(label = "blogtogiant", code = "blogs <- blogs %>% to_giant()\nsum(node_is_isolate(blogs))\ngraphr(blogs)", opts = list(label = "\"blogtogiant\"", exercise = "TRUE", warning = "FALSE", fig.width = "9", exercise.setup = "\"blogsize\""), - engine = "r"), list(label = "blogmod", code = "net_modularity(blogs, membership = node_in_partition(blogs))", + engine = "r"), list(label = "blogmod", code = "net_by_modularity(blogs, membership = node_in_partition(blogs))", opts = list(label = "\"blogmod\"", exercise = "TRUE", exercise.setup = "\"blogtogiant\""), engine = "r")), code_check = NULL, error_check = NULL, check = NULL, solution = NULL, @@ -2869,7 +2869,7 @@

    Glossary

    message = TRUE, render = NULL, ref.label = NULL, child = NULL, engine = "r", split = FALSE, include = TRUE, purl = TRUE, max.print = 1000, label = "blogmod", exercise = TRUE, - exercise.setup = "blogtogiant", code = "net_modularity(blogs, membership = node_in_partition(blogs))", + exercise.setup = "blogtogiant", code = "net_by_modularity(blogs, membership = node_in_partition(blogs))", out.width.px = 624, out.height.px = 384, params.src = "blogmod, exercise=TRUE, exercise.setup = \"blogtogiant\"", fig.num = 0, exercise.df_print = "paged", exercise.checker = "NULL"), engine = "r", version = "4"), class = c("r", "tutorial_exercise" @@ -2879,22 +2879,22 @@

    Glossary

    @@ -2949,7 +2949,7 @@

    Glossary

    engine = "r"), list(label = "blogtogiant", code = "blogs <- blogs %>% to_giant()\nsum(node_is_isolate(blogs))\ngraphr(blogs)", opts = list(label = "\"blogtogiant\"", exercise = "TRUE", warning = "FALSE", fig.width = "9", exercise.setup = "\"blogsize\""), - engine = "r"), list(label = "blogmodassign", code = "graphr(blogs, node_color = \"Leaning\")\nnet_modularity(blogs, membership = node_attribute(blogs, \"Leaning\"))", + engine = "r"), list(label = "blogmodassign", code = "graphr(blogs, node_color = \"Leaning\")\nnet_by_modularity(blogs, membership = node_attribute(blogs, \"Leaning\"))", opts = list(label = "\"blogmodassign\"", exercise = "TRUE", exercise.setup = "\"blogtogiant\"", warning = "FALSE", fig.width = "9"), engine = "r")), code_check = NULL, @@ -2972,7 +2972,7 @@

    Glossary

    engine = "r", split = FALSE, include = TRUE, purl = TRUE, max.print = 1000, label = "blogmodassign", exercise = TRUE, exercise.setup = "blogtogiant", code = c("graphr(blogs, node_color = \"Leaning\")", - "net_modularity(blogs, membership = node_attribute(blogs, \"Leaning\"))" + "net_by_modularity(blogs, membership = node_attribute(blogs, \"Leaning\"))" ), out.width.px = 864, out.height.px = 384, params.src = "blogmodassign, exercise=TRUE, exercise.setup = \"blogtogiant\", warning=FALSE, fig.width=9", fig.num = 0, exercise.df_print = "paged", exercise.checker = "NULL"), engine = "r", version = "4"), class = c("r", "tutorial_exercise" @@ -3087,7 +3087,7 @@

    Glossary

    opts = list(label = "\"walk\"", exercise = "TRUE", exercise.setup = "\"separatingnets\""), engine = "r")), code_check = NULL, error_check = NULL, check = NULL, solution = structure(c("friend_wt <- node_in_walktrap(friends, times=50)", - "# results in a modularity of ", "net_modularity(friends, friend_wt)" + "# results in a modularity of ", "net_by_modularity(friends, friend_wt)" ), chunk_opts = list(label = "walk-solution")), tests = NULL, options = list(eval = FALSE, echo = TRUE, results = "markup", tidy = FALSE, tidy.opts = NULL, collapse = FALSE, prompt = FALSE, @@ -3163,7 +3163,7 @@

    Glossary

    "graphr(friends, node_group = \"walk_comm\")", "# or both!", "graphr(friends,", " node_color = \"walk_comm\",", " node_group = \"walk_comm\") +", " ggtitle(\"Walktrap\",", - " subtitle = round(net_modularity(friends, friend_wt), 3))" + " subtitle = round(net_by_modularity(friends, friend_wt), 3))" ), chunk_opts = list(label = "walkplot-solution")), tests = NULL, options = list(eval = FALSE, echo = TRUE, results = "markup", tidy = FALSE, tidy.opts = NULL, collapse = FALSE, prompt = FALSE, @@ -3302,7 +3302,7 @@

    Glossary

    check = NULL, solution = structure(c("friends <- friends %>% ", " mutate(eb_comm = friend_eb)", "graphr(friends,", " node_color = \"eb_comm\",", " node_group = \"eb_comm\") +", " ggtitle(\"Edge-betweenness\",", - " subtitle = round(net_modularity(friends, friend_eb), 3))" + " subtitle = round(net_by_modularity(friends, friend_eb), 3))" ), chunk_opts = list(label = "ebplot-solution")), tests = NULL, options = list(eval = FALSE, echo = TRUE, results = "markup", tidy = FALSE, tidy.opts = NULL, collapse = FALSE, prompt = FALSE, @@ -3370,11 +3370,11 @@

    Glossary

    purl = "FALSE"), engine = "r")), code_check = NULL, error_check = NULL, check = NULL, solution = structure(c("friend_fg <- node_in_greedy(friends)", "friend_fg # Does this result in a different community partition?", - "net_modularity(friends, friend_fg) # Compare this to the edge betweenness procedure", + "net_by_modularity(friends, friend_fg) # Compare this to the edge betweenness procedure", "", "# Again, we can visualise these communities in different ways:", "friends <- friends %>% ", " mutate(fg_comm = friend_fg)", "graphr(friends,", " node_color = \"fg_comm\",", " node_group = \"fg_comm\") +", - " ggtitle(\"Fast-greedy\",", " subtitle = round(net_modularity(friends, friend_fg), 3))" + " ggtitle(\"Fast-greedy\",", " subtitle = round(net_by_modularity(friends, friend_fg), 3))" ), chunk_opts = list(label = "fg-solution")), tests = NULL, options = list(eval = FALSE, echo = TRUE, results = "markup", tidy = FALSE, tidy.opts = NULL, collapse = FALSE, prompt = FALSE, @@ -3401,16 +3401,16 @@

    Glossary

    @@ -3495,38 +3495,38 @@

    Glossary

    @@ -3603,12 +3603,12 @@

    Glossary

    diff --git a/inst/tutorials/tutorial5/position.Rmd b/inst/tutorials/tutorial5/position.Rmd index 06e21d0..2d967fd 100644 --- a/inst/tutorials/tutorial5/position.Rmd +++ b/inst/tutorials/tutorial5/position.Rmd @@ -55,7 +55,7 @@ tasks <- to_uniplex(ison_algebra, "tasks") gif of rick and morty characters hanging out with themselves -For this session, we're going to use the "ison_algebra" dataset included in the `{manynet}` package. +For this session, we're going to use the "ison_algebra" dataset included in the `{netrics}` package. Do you remember how to call the data? Can you find out some more information about it via its help file? @@ -204,7 +204,7 @@ there are no bridges. ```{r bridges, exercise = TRUE, exercise.setup = "objects-setup"} sum(tie_is_bridge(friends)) -any(node_bridges(friends)>0) +any(node_by_bridges(friends)>0) ``` ### Constraint @@ -212,7 +212,7 @@ any(node_bridges(friends)>0) But some nodes do seem more deeply embedded in the network than others. Let's take a look at which actors are least `r gloss("constrained", "constraint")` by their position in the *task* network. -`{manynet}` makes this easy enough with the `node_constraint()` function. +`{netrics}` makes this easy enough with the `node_by_constraint()` function. ```{r objects-setup, purl=FALSE} alge <- to_named(ison_algebra) @@ -226,12 +226,12 @@ tasks <- to_uniplex(alge, "tasks") ``` ```{r constraint-hint, purl = FALSE} -node_constraint(____) +node_by_constraint(____) # Don't forget we want to look at which actors are least constrained by their position in the 'tasks' network ``` ```{r constraint-solution} -node_constraint(tasks) +node_by_constraint(tasks) ``` This function returns a vector of constraint scores that can range between 0 and 1. 
@@ -244,8 +244,8 @@ We can also identify the node with the minimum constraint score using `node_is_m ```{r constraintplot-hint-1, purl = FALSE} tasks <- tasks %>% - mutate(constraint = node_constraint(____), - low_constraint = node_is_min(node_constraint(____))) + mutate(constraint = node_by_constraint(____), + low_constraint = node_is_min(node_by_constraint(____))) # Don't forget, we are still looking at the 'tasks' network ``` @@ -264,8 +264,8 @@ graphr(tasks, node_size = "constraint", node_color = "low_constraint") ```{r constraintplot-solution} tasks <- tasks %>% - mutate(constraint = node_constraint(tasks), - low_constraint = node_is_min(node_constraint(tasks))) + mutate(constraint = node_by_constraint(tasks), + low_constraint = node_is_min(node_by_constraint(tasks))) graphr(tasks, node_size = "constraint", node_color = "low_constraint") ``` @@ -318,7 +318,7 @@ a uniplex subgraph thereof. ### Finding structurally equivalent classes -In `{manynet}`, finding how the nodes of a network can be partitioned +In `{netrics}`, finding how the nodes of a network can be partitioned into structurally equivalent classes can be as easy as: ```{r find-se, exercise = TRUE, exercise.setup = "data"} @@ -337,8 +337,8 @@ how these classes are identified and how to interpret them. ### Step one: starting with a census All equivalence classes are based on nodes' similarity across some profile of motifs. -In `{manynet}`, we call these motif *censuses*. -Any kind of census can be used, and `{manynet}` includes a few options, +In `{netrics}`, we call these motif *censuses*. +Any kind of census can be used, and `{netrics}` includes a few options, but `node_in_structural()` is based off of the census of all the nodes' ties, both outgoing and incoming ties, to characterise their relationships to tie partners. 
@@ -347,24 +347,24 @@ both outgoing and incoming ties, to characterise their relationships to tie part ``` ```{r construct-cor-hint-1, purl = FALSE} -# Let's use the node_by_tie() function +# Let's use the node_x_tie() function # The function accepts an object such as a dataset # Hint: Which dataset are we using in this tutorial? -node_by_tie(____) +node_x_tie(____) ``` ```{r construct-cor-hint-2, purl = FALSE} -node_by_tie(ison_algebra) +node_x_tie(ison_algebra) ``` ```{r construct-cor-hint-3, purl = FALSE} # Now, let's get the dimensions of an object via the dim() function -dim(node_by_tie(ison_algebra)) +dim(node_x_tie(ison_algebra)) ``` ```{r construct-cor-solution} -node_by_tie(ison_algebra) -dim(node_by_tie(ison_algebra)) +node_x_tie(ison_algebra) +dim(node_x_tie(ison_algebra)) ``` We can see that the result is a matrix of 16 rows and 96 columns, @@ -379,7 +379,7 @@ what would you do if you wanted it to be binary? ```{r construct-binary-hint, purl = FALSE} # we could convert the result using as.matrix, returning the ties -as.matrix((node_by_tie(ison_algebra)>0)+0) +as.matrix((node_x_tie(ison_algebra)>0)+0) ``` @@ -388,17 +388,17 @@ as.matrix((node_by_tie(ison_algebra)>0)+0) # Note that this also reduces the total number of possible paths between nodes ison_algebra %>% select_ties(-type) %>% - node_by_tie() + node_x_tie() ``` -Note that `node_by_tie()` does not need to be passed to `node_in_structural()` --- +Note that `node_x_tie()` does not need to be passed to `node_in_structural()` --- this is done automatically! However, the more generic `node_in_equivalence()` is available and can be used -with whichever census (`node_by_*()` output) is desired. -Feel free to explore using some of the other censuses available in `{manynet}`, +with whichever census (`node_x_*()` output) is desired. +Feel free to explore using some of the other censuses available in `{netrics}`, though some common ones are already used in the other equivalence convenience functions, -e.g. 
`node_by_triad()` in `node_in_regular()` -and `node_by_path()` in `node_in_automorphic()`. +e.g. `node_x_triad()` in `node_in_regular()` +and `node_x_path()` in `node_in_automorphic()`. ### Step two: growing a tree of similarity @@ -413,7 +413,7 @@ so that help page should be consulted for more details. By default `"euclidean"` is used. Second, we can also set the type of clustering algorithm employed. -By default, `{manynet}`'s equivalence functions use `r gloss("hierarchical clustering","hierclust")`, `"hier"`, +By default, `{netrics}`'s equivalence functions use `r gloss("hierarchical clustering","hierclust")`, `"hier"`, but for compatibility and enthusiasts, we also offer `"concor"`, which implements a CONCOR (CONvergence of CORrelations) algorithm. @@ -438,7 +438,7 @@ question("Do you see any differences?", allow_retry = TRUE) ``` -So plotting a `membership` vector from `{manynet}` returns a dendrogram +So plotting a `membership` vector from `{netrics}` returns a dendrogram with the names of the nodes on the _y_-axis and the distance between them on the _x_-axis. Using the census as material, the distances between the nodes is used to create a dendrogram of (dis)similarity among the nodes. @@ -469,7 +469,7 @@ But where does this red line come from? Or, more technically, how do we identify the number of clusters into which to assign nodes? -`{manynet}` includes several different ways of establishing `k`, +`{netrics}` includes several different ways of establishing `k`, or the number of clusters. Remember, the further to the right the red line is (the lower on the tree the cut point is) @@ -507,7 +507,7 @@ then we might expect there to be a relatively rapid increase in correlation as we move from, for example, 3 clusters to 4 clusters, but a relatively small increase from, for example, 13 clusters to 14 clusters. 
By identifying the inflection point in this line graph, -`{manynet}` selects a number of clusters that represents a trade-off +`{netrics}` selects a number of clusters that represents a trade-off between fit and parsimony. This is the `k = "elbow"` method. @@ -535,12 +535,12 @@ Either is probably fine here, and there is much debate around how to select the number of clusters anyway. However, the silhouette method seems to do a better job of identifying how unique the 16th node is. -The silhouette method is also the default in `{manynet}`. +The silhouette method is also the default in `{netrics}`. Note that there is a somewhat hidden parameter here, `range`. Since testing across all possible numbers of clusters can get computationally expensive (not to mention uninterpretable) for large networks, -`{manynet}` only considers up to 8 clusters by default. +`{netrics}` only considers up to 8 clusters by default. This however can be modified to be higher or lower, e.g. `range = 16`. Finally, one last option is `k = "strict"`, @@ -597,14 +597,14 @@ but this can be tweaked by assigning some other summary statistic as `FUN = `. ``` ```{r summ-hint, purl = FALSE} -# Let's wrap node_by_tie inside the summary() function +# Let's wrap node_x_tie inside the summary() function # and pass it a membership result -summary(node_by_tie(____), +summary(node_x_tie(____), membership = ____) ``` ```{r summ-solution} -summary(node_by_tie(alge), +summary(node_x_tie(alge), membership = node_in_structural(alge)) ``` diff --git a/inst/tutorials/tutorial5/position.html b/inst/tutorials/tutorial5/position.html index 54a6232..8426eef 100644 --- a/inst/tutorials/tutorial5/position.html +++ b/inst/tutorials/tutorial5/position.html @@ -13,7 +13,7 @@ - + Position and Equivalence @@ -114,7 +114,7 @@

    Setting up

    gif of rick and morty characters hanging out with themselves

For this session, we’re going to use the “ison_algebra” dataset
-included in the {manynet} package. Do you remember how to
+included in the {netrics} package. Do you remember how to
 call the data? Can you find out some more information about it via its help file?

Bridges
    sum(tie_is_bridge(friends))
    -any(node_bridges(friends)>0)
    +any(node_by_bridges(friends)>0)
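A bridge is a tie whose removal disconnects part of the network, so it can be detected by hand: delete the tie and check whether the component count goes up. A base-R sketch on a hypothetical undirected network (a triangle with one pendant tie, assumed data rather than `friends`):

```r
# Hypothetical undirected adjacency matrix: triangle 1-2-3 plus pendant tie 3-4
mat <- matrix(0, 4, 4)
mat[1, 2] <- mat[2, 1] <- 1
mat[2, 3] <- mat[3, 2] <- 1
mat[1, 3] <- mat[3, 1] <- 1
mat[3, 4] <- mat[4, 3] <- 1

# Count connected components with a simple flood fill
n_components <- function(adj) {
  membership <- rep(0, nrow(adj))
  comp <- 0
  for (i in seq_len(nrow(adj))) {
    if (membership[i] == 0) {
      comp <- comp + 1
      frontier <- i
      while (length(frontier) > 0) {
        membership[frontier] <- comp
        frontier <- which(colSums(adj[frontier, , drop = FALSE] > 0) > 0 &
                            membership == 0)
      }
    }
  }
  comp
}

# A tie is a bridge if deleting it increases the number of components
is_bridge <- function(adj, i, j) {
  cut <- adj
  cut[i, j] <- 0
  cut[j, i] <- 0
  n_components(cut) > n_components(adj)
}
is_bridge(mat, 3, 4)  # TRUE: removing it strands node 4
is_bridge(mat, 1, 2)  # FALSE: the triangle stays connected without it
</imports>
```

Ties inside the triangle are redundant, so none of them is a bridge; only the pendant tie is.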
    @@ -254,8 +254,8 @@

    Constraint

others. Let’s take a look at which actors are least constrained by their position in the task
-network. {manynet} makes this easy enough with the
-node_constraint() function.
+network. {netrics} makes this easy enough with the
+node_by_constraint() function.

    @@ -264,13 +264,13 @@

    Constraint

-node_constraint(____)
+node_by_constraint(____)
 # Don't forget we want to look at which actors are least constrained by their position in the 'tasks' network

-node_constraint(tasks)
+node_by_constraint(tasks)

This function returns a vector of constraint scores that can range between 0 and 1. Let’s graph the network again, sizing the nodes

@@ -286,8 +286,8 @@

    Constraint

    data-diagnostics="1" data-startover="1" data-lines="0" data-pipe="|>">
    tasks <- tasks %>% 
    -  mutate(constraint = node_constraint(____),
    -         low_constraint = node_is_min(node_constraint(____)))
    +  mutate(constraint = node_by_constraint(____),
    +         low_constraint = node_is_min(node_by_constraint(____)))
     
     # Don't forget, we are still looking at the 'tasks' network
    @@ -312,8 +312,8 @@

    Constraint

    data-diagnostics="1" data-startover="1" data-lines="0" data-pipe="|>">
    tasks <- tasks %>% 
    -  mutate(constraint = node_constraint(tasks), 
    -         low_constraint = node_is_min(node_constraint(tasks)))
    +  mutate(constraint = node_by_constraint(tasks), 
    +         low_constraint = node_is_min(node_by_constraint(tasks)))
     graphr(tasks, node_size = "constraint", node_color = "low_constraint")

    Why minimum? Because constraint measures how well connected each
@@ -355,7 +355,7 @@

    Structural Equivalence

    Finding structurally equivalent classes

-
-In {manynet}, finding how the nodes of a network can be
+
+In {netrics}, finding how the nodes of a network can be
 partitioned into structurally equivalent classes can be as easy as:

    Finding structurally equivalent classes class="section level3">

    Step one: starting with a census

    All equivalence classes are based on nodes’ similarity across some
-profile of motifs. In {manynet}, we call these motif
+profile of motifs. In {netrics}, we call these motif
 censuses. Any kind of census can be used, and
-{manynet} includes a few options, but
+{netrics} includes a few options, but
 node_in_structural() is based off of the census of all
 the nodes’ ties, both outgoing and incoming ties, to characterise their
 relationships to tie partners.

    @@ -390,28 +390,28 @@

    Step one: starting with a census

    -
    # Let's use the node_by_tie() function
    +
    # Let's use the node_x_tie() function
     # The function accepts an object such as a dataset
     # Hint: Which dataset are we using in this tutorial?
    -node_by_tie(____)
    +node_x_tie(____)
    -
    node_by_tie(ison_algebra)
    +
    node_x_tie(ison_algebra)
    # Now, let's get the dimensions of an object via the dim() function
    -dim(node_by_tie(ison_algebra))
    +dim(node_x_tie(ison_algebra))
    -
    node_by_tie(ison_algebra)
    -dim(node_by_tie(ison_algebra))
    +
    node_x_tie(ison_algebra)
    +dim(node_x_tie(ison_algebra))

    We can see that the result is a matrix of 16 rows and 96 columns,
 because we want to catalogue or take a census of all the different
@@ -428,7 +428,7 @@

    Step one: starting with a census

    data-diagnostics="1" data-startover="1" data-lines="0" data-pipe="|>">
    # we could convert the result using as.matrix, returning the ties 
    -as.matrix((node_by_tie(ison_algebra)>0)+0)
    +as.matrix((node_x_tie(ison_algebra)>0)+0)
    Step one: starting with a census
 # Note that this also reduces the total number of possible paths between nodes
 ison_algebra %>% 
   select_ties(-type) %>% 
-  node_by_tie()
+  node_x_tie()
    -

-Note that node_by_tie() does not need to be passed to
+
+Note that node_x_tie() does not need to be passed to
 node_in_structural() — this is done automatically!
 However, the more generic node_in_equivalence() is available and can
-be used with whichever census (node_by_*() output) is
+be used with whichever census (node_x_*() output) is
 desired. Feel free to explore using some of the other censuses available
-in {manynet}, though some common ones are already used in
+in {netrics}, though some common ones are already used in
 the other equivalence convenience functions,
-e.g. node_by_triad() in node_in_regular() and
-node_by_path() in node_in_automorphic().
+e.g. node_x_triad() in node_in_regular() and
+node_x_path() in node_in_automorphic().

    @@ -464,7 +464,7 @@

    Step two: growing a tree of similarity

    help page should be consulted for more details. By default "euclidean" is used.

    Second, we can also set the type of clustering algorithm employed. By
-default, {manynet}’s equivalence functions use
+default, {netrics}’s equivalence functions use
 hierarchical clustering, "hier", but for
 compatibility and enthusiasts, we also offer "concor",
@@ -494,7 +494,7 @@

    Step two: growing a tree of similarity

    So plotting a membership vector from
-{manynet} returns a dendrogram with the names of the nodes
+{netrics} returns a dendrogram with the names of the nodes
 on the y-axis and the distance between them on the x-axis. Using the
 census as material, the distances between the nodes is used to create a
 dendrogram of (dis)similarity among the nodes.
@@ -522,7 +522,7 @@

    Step three: identifying the number of clusters

    to the branches (clusters) present at that cut-point.

    But where does this red line come from? Or, more technically, how do we identify the number of clusters into which to assign nodes?

    -

-{manynet} includes several different ways of
+
+{netrics} includes several different ways of
 establishing k, or the number of clusters.
 Remember, the further to the right the red line is (the lower on the
 tree the cut point is) the more dissimilar we’re allowing nodes in the
 same cluster
@@ -556,7 +556,7 @@

    Step three: identifying the number of clusters

    be a relatively rapid increase in correlation as we move from, for
 example, 3 clusters to 4 clusters, but a relatively small increase from,
 for example, 13 clusters to 14 clusters. By identifying the inflection
-point in this line graph, {manynet} selects a number of
+point in this line graph, {netrics} selects a number of
 clusters that represents a trade-off between fit and parsimony. This is
 the k = "elbow" method.

    The other option is to evaluate a candidate for k based
@@ -582,11 +582,11 @@

    Step three: identifying the number of clusters

    around how to select the number of clusters anyway. However, the
 silhouette method seems to do a better job of identifying how unique the
 16th node is. The silhouette method is also the default in
-{manynet}.

    +{netrics}.

    Note that there is a somewhat hidden parameter here,
 range. Since testing across all possible numbers of
 clusters can get computationally expensive (not to mention
-uninterpretable) for large networks, {manynet} only
+uninterpretable) for large networks, {netrics} only
 considers up to 8 clusters by default. This however can be modified to
 be higher or lower, e.g. range = 16.

    Finally, one last option is k = "strict", which only
@@ -645,15 +645,15 @@

    Summarising profiles

    -
    # Let's wrap node_by_tie inside the summary() function
    +
    # Let's wrap node_x_tie inside the summary() function
     # and pass it a membership result
    -summary(node_by_tie(____),
    +summary(node_x_tie(____),
             membership = ____)
    -
    summary(node_by_tie(alge),
    +
    summary(node_x_tie(alge),
             membership = node_in_structural(alge))

    This node census produces 96 columns, \(16
@@ -1029,22 +1029,22 @@

    Glossary

    @@ -1062,29 +1062,29 @@

    Glossary

    @@ -1168,7 +1168,7 @@

    Glossary

    setup = "alge <- to_named(ison_algebra)\nfriends <- to_uniplex(alge, \"friends\")\nsocial <- to_uniplex(alge, \"social\")\ntasks <- to_uniplex(alge, \"tasks\")",
     chunks = list(list(label = "objects-setup", code = "alge <- to_named(ison_algebra)\nfriends <- to_uniplex(alge, \"friends\")\nsocial <- to_uniplex(alge, \"social\")\ntasks <- to_uniplex(alge, \"tasks\")",
         opts = list(label = "\"objects-setup\"", purl = "FALSE"),
-        engine = "r"), list(label = "bridges", code = "sum(tie_is_bridge(friends))\nany(node_bridges(friends)>0)",
+        engine = "r"), list(label = "bridges", code = "sum(tie_is_bridge(friends))\nany(node_by_bridges(friends)>0)",
         opts = list(label = "\"bridges\"", exercise = "TRUE",
             exercise.setup = "\"objects-setup\""), engine = "r")),
     code_check = NULL, error_check = NULL, check = NULL, solution = NULL,
@@ -1190,7 +1190,7 @@

    Glossary

    engine = "r", split = FALSE, include = TRUE, purl = TRUE,
     max.print = 1000, label = "bridges", exercise = TRUE,
     exercise.setup = "objects-setup", code = c("sum(tie_is_bridge(friends))",
-    "any(node_bridges(friends)>0)"), out.width.px = 624,
+    "any(node_by_bridges(friends)>0)"), out.width.px = 624,
     out.height.px = 384, params.src = "bridges, exercise = TRUE, exercise.setup = \"objects-setup\"",
     fig.num = 0, exercise.df_print = "paged", exercise.checker = "NULL"),
     engine = "r", version = "4"), class = c("r", "tutorial_exercise"
@@ -1236,7 +1236,7 @@

    Glossary

    opts = list(label = "\"constraint\"", exercise = "TRUE",
         exercise.setup = "\"objects-setup\"", purl = "FALSE"),
     engine = "r")), code_check = NULL, error_check = NULL,
-    check = NULL, solution = structure("node_constraint(tasks)", chunk_opts = list(
+    check = NULL, solution = structure("node_by_constraint(tasks)", chunk_opts = list(
         label = "constraint-solution")), tests = NULL, options = list(
         eval = FALSE, echo = TRUE, results = "markup", tidy = FALSE,
         tidy.opts = NULL, collapse = FALSE, prompt = FALSE, comment = NA,
@@ -1301,7 +1301,7 @@

    Glossary

    exercise.setup = "\"objects-setup\"", purl = "FALSE"),
     engine = "r")), code_check = NULL, error_check = NULL,
     check = NULL, solution = structure(c("tasks <- tasks %>% ",
-    "  mutate(constraint = node_constraint(tasks), ", "         low_constraint = node_is_min(node_constraint(tasks)))",
+    "  mutate(constraint = node_by_constraint(tasks), ", "         low_constraint = node_is_min(node_by_constraint(tasks)))",
     "graphr(tasks, node_size = \"constraint\", node_color = \"low_constraint\")"
     ), chunk_opts = list(label = "constraintplot-solution")),
     tests = NULL, options = list(eval = FALSE, echo = TRUE, results = "markup",
@@ -1332,11 +1332,11 @@

    Glossary

    @@ -1482,8 +1482,8 @@

    Glossary

    engine = "r"), list(label = "construct-cor", code = "",
     opts = list(label = "\"construct-cor\"", exercise = "TRUE",
         exercise.setup = "\"data\"", purl = "FALSE"), engine = "r")),
-    code_check = NULL, error_check = NULL, check = NULL, solution = structure(c("node_by_tie(ison_algebra)",
-    "dim(node_by_tie(ison_algebra))"), chunk_opts = list(label = "construct-cor-solution")),
+    code_check = NULL, error_check = NULL, check = NULL, solution = structure(c("node_x_tie(ison_algebra)",
+    "dim(node_x_tie(ison_algebra))"), chunk_opts = list(label = "construct-cor-solution")),
     tests = NULL, options = list(eval = FALSE, echo = TRUE, results = "markup",
     tidy = FALSE, tidy.opts = NULL, collapse = FALSE, prompt = FALSE,
     comment = NA, highlight = FALSE, size = "normalsize",
@@ -1547,7 +1547,7 @@

    Glossary

    exercise.setup = "\"data\"", purl = "FALSE"), engine = "r")),
     code_check = NULL, error_check = NULL, check = NULL, solution = structure(c("# But it's easier to simplify the network by removing the classification into different types of ties.",
     "# Note that this also reduces the total number of possible paths between nodes",
-    "ison_algebra %>%", "  select_ties(-type) %>%", "  node_by_tie()"
+    "ison_algebra %>%", "  select_ties(-type) %>%", "  node_x_tie()"
     ), chunk_opts = list(label = "construct-binary-solution")),
     tests = NULL, options = list(eval = FALSE, echo = TRUE, results = "markup",
     tidy = FALSE, tidy.opts = NULL, collapse = FALSE, prompt = FALSE,
@@ -1642,11 +1642,11 @@

    Glossary

    @@ -1971,7 +1971,7 @@

    Glossary

    list(label = "summ", code = "", opts = list(label = "\"summ\"",
         exercise = "TRUE", exercise.setup = "\"strplot\"",
         purl = "FALSE"), engine = "r")), code_check = NULL,
-    error_check = NULL, check = NULL, solution = structure(c("summary(node_by_tie(alge),",
+    error_check = NULL, check = NULL, solution = structure(c("summary(node_x_tie(alge),",
     "        membership = node_in_structural(alge))"), chunk_opts = list(
         label = "summ-solution")), tests = NULL, options = list(
         eval = FALSE, echo = TRUE, results = "markup", tidy = FALSE,
@@ -2198,12 +2198,12 @@

    Glossary

diff --git a/inst/tutorials/tutorial6/topology.Rmd b/inst/tutorials/tutorial6/topology.Rmd
index 711c176..2be6e10 100644
--- a/inst/tutorials/tutorial6/topology.Rmd
+++ b/inst/tutorials/tutorial6/topology.Rmd
@@ -298,7 +298,7 @@ but with a rewiring probability of 0.25:
 ```
 
 ```{r smallwtest-solution}
-net_smallworld(generate_smallworld(50, 0.25))
+net_by_smallworld(generate_smallworld(50, 0.25))
 ```
 
 #### Scale-free graphs
@@ -346,7 +346,7 @@ comes from a power-law distribution.
 ```
 
 ```{r scaleftest-solution}
-net_scalefree(generate_scalefree(50, 2))
+net_by_scalefree(generate_scalefree(50, 2))
 ```
 
 ## Core-Periphery
@@ -431,14 +431,14 @@ one on the left and one on the right.
 
 But is it really all that much of a core-periphery structure?
 We can establish how correlated our network is compared to
-a core-periphery model of the same dimension using `net_core()`.
+a core-periphery model of the same dimension using `net_by_core()`.
 
 ```{r netcore, exercise=TRUE, purl = FALSE, exercise.setup="gnet"}
 
 ```
 
 ```{r netcore-solution}
-net_core(lawfirm, node_is_core(lawfirm))
+net_by_core(lawfirm, node_is_core(lawfirm))
 ```
 
 ```{r corecorr-qa, echo=FALSE, purl = FALSE}
@@ -504,12 +504,12 @@ question("There a statistically significant association between the core assignm
 An alternative route is to identify 'core' nodes depending on their
 `r gloss("k-coreness","kcoreness")`.
 In `{manynet}`, we can return nodes _k_-coreness
-with `node_coreness()` instead of
+with `node_by_kcoreness()` instead of
 the `node_is_core()` used for core-periphery.
 
 ```{r nodecoren, exercise=TRUE, purl = FALSE, exercise.setup="gnet"}
 lawfirm %>% 
-  mutate(ncn = node_kcoreness()) %>% 
+  mutate(ncn = node_by_kcoreness()) %>% 
   graphr(node_color = "ncn")
 ```
 
@@ -572,12 +572,12 @@ Ok, so let's now take a closer look at how to investigate
 the degree of hierarchy in a given network.
 The classic example would be to look at a tree network,
 like the one constructed earlier.
-Let's try the function `net_by_hierarchy()` on a tree network of 12 nodes.
+Let's try the function `net_x_hierarchy()` on a tree network of 12 nodes.
 
 ```{r treeh, exercise = TRUE, purl = FALSE}
 treeleven <- create_tree(11, directed = TRUE)
-net_by_hierarchy(treeleven)
-rowMeans(net_by_hierarchy(treeleven))
+net_x_hierarchy(treeleven)
+rowMeans(net_x_hierarchy(treeleven))
 ```
 
 We see here four different measures of hierarchy:
@@ -606,9 +606,9 @@ graphr(fict_thrones)
 ```
 
 ```{r hierarchy-solution}
 graphr(ison_emotions)
-net_by_hierarchy(ison_emotions)
+net_x_hierarchy(ison_emotions)
 graphr(fict_thrones)
-net_by_hierarchy(fict_thrones)
+net_x_hierarchy(fict_thrones)
 ```
 
 Actually, these two networks have the same average hierarchy score of around 0.525.
@@ -642,7 +642,7 @@ First, we might be interested in whether the network is `r gloss("connected","co
 ```
 
 ```{r connected-solution}
-net_connectedness(ison_adolescents)
+net_by_connectedness(ison_adolescents)
 ```
 
 This measure gets at the proportion of dyads that can reach each other in the network.
@@ -675,7 +675,7 @@ or how many dropped nodes it would take to (further) fragment the network.
 ```
 
 ```{r cohesion-solution}
-net_cohesion(ison_adolescents)
+net_by_cohesion(ison_adolescents)
 ```
 
 ```{r cohesion-qa, echo=FALSE, purl = FALSE}
@@ -740,7 +740,7 @@ Here we are interested in identifying which ties are `r gloss("bridges","bridge"
 ```
 
 ```{r tieside-solution}
-net_adhesion(ison_adolescents)
+net_by_adhesion(ison_adolescents)
 ison_adolescents |> mutate_ties(cut = tie_is_bridge(ison_adolescents)) |> 
   graphr(edge_color = "cut")
 ```
@@ -754,7 +754,7 @@ This is called (rather confusingly) tie cohesion.
 ```
 
 ```{r tiecoh-solution}
-ison_adolescents |> mutate_ties(coh = tie_cohesion(ison_adolescents)) |> 
+ison_adolescents |> mutate_ties(coh = tie_by_cohesion(ison_adolescents)) |> 
   graphr(edge_size = "coh")
 ```
 
diff --git a/inst/tutorials/tutorial6/topology.html b/inst/tutorials/tutorial6/topology.html
index 3fc5d78..09805b1 100644
--- a/inst/tutorials/tutorial6/topology.html
+++ b/inst/tutorials/tutorial6/topology.html
@@ -13,7 +13,7 @@
 
-
+
 
 Topology and Resilience
 
@@ -380,7 +380,7 @@

    Small-world graphs

    -
    net_smallworld(generate_smallworld(50, 0.25))
    +
    net_by_smallworld(generate_smallworld(50, 0.25))
    @@ -431,7 +431,7 @@

    Scale-free graphs

    -
    net_scalefree(generate_scalefree(50, 2))
    +
    net_by_scalefree(generate_scalefree(50, 2))
    @@ -584,7 +584,7 @@

    Coreness

    data-completion="1" data-diagnostics="1" data-startover="1" data-lines="0" data-pipe="|>">
    lawfirm %>% 
    -  mutate(ncn = node_kcoreness()) %>% 
    +  mutate(ncn = node_by_kcoreness()) %>% 
       graphr(node_color = "ncn")
    @@ -639,8 +639,8 @@

    Measuring GTDH

    data-diagnostics="1" data-startover="1" data-lines="0" data-pipe="|>">
    treeleven <- create_tree(11, directed = TRUE)
    -net_by_hierarchy(treeleven)
    -rowMeans(net_by_hierarchy(treeleven))
+net_x_hierarchy(treeleven)
+rowMeans(net_x_hierarchy(treeleven))

    We see here four different measures of hierarchy:

    @@ -675,9 +675,9 @@

    Comparing GTDH

    data-completion="1" data-diagnostics="1" data-startover="1" data-lines="0" data-pipe="|>">
    graphr(ison_emotions)
    -net_by_hierarchy(ison_emotions)
    +net_x_hierarchy(ison_emotions)
     graphr(fict_thrones)
    -net_by_hierarchy(fict_thrones)
    +net_x_hierarchy(fict_thrones)

    Actually, these two networks have the same average hierarchy score
 of around 0.525. But they have quite different profiles. Can you make sense
@@ -711,7 +711,7 @@

    How cohesive is the network?

    -
    net_connectedness(ison_adolescents)
    +
    net_by_connectedness(ison_adolescents)

    This measure gets at the proportion of dyads that can reach each
 other in the network. In this case, the proportion is 1, i.e. all nodes
@@ -745,7 +745,7 @@

    How cohesive is the network?

    -
    net_cohesion(ison_adolescents)
    +
    net_by_cohesion(ison_adolescents)
    @@ -812,7 +812,7 @@

    Identifying bridges

    -
    net_adhesion(ison_adolescents)
    +
    net_by_adhesion(ison_adolescents)
     ison_adolescents |> mutate_ties(cut = tie_is_bridge(ison_adolescents)) |> 
       graphr(edge_color = "cut")
    @@ -827,7 +827,7 @@

    Identifying bridges

    -
    ison_adolescents |> mutate_ties(coh = tie_cohesion(ison_adolescents)) |> 
    +
    ison_adolescents |> mutate_ties(coh = tie_by_cohesion(ison_adolescents)) |> 
       graphr(edge_size = "coh")

    Where would you target your efforts if you wanted to fragment this
@@ -944,8 +944,6 @@

    Glossary

    @@ -1226,8 +1221,7 @@

    Glossary

    @@ -1729,8 +1716,7 @@

    Glossary

    @@ -2014,8 +2000,7 @@

    Glossary

    @@ -2118,8 +2104,7 @@

    Glossary

    @@ -2197,22 +2183,22 @@

    Glossary

    @@ -2276,8 +2262,7 @@

    Glossary

    @@ -2429,8 +2414,7 @@

    Glossary

    @@ -2515,8 +2499,7 @@

    Glossary

    @@ -2631,8 +2614,7 @@

    Glossary

diff --git a/man/figures/logo.png b/man/figures/logo.png
index c12b8cd..0c2ba6c 100644
Binary files a/man/figures/logo.png and b/man/figures/logo.png differ