+
+{% endfor %}
+
+{% if paginator.total_pages > 1 %}
+
+{% endif %}
diff --git a/_posts/2017-01-01-advanced-examples.md b/_posts/2017-01-01-advanced-examples.md
deleted file mode 100644
index 785d05464b8..00000000000
--- a/_posts/2017-01-01-advanced-examples.md
+++ /dev/null
@@ -1,63 +0,0 @@
----
-title: "Advanced examples"
-mathjax: true
-layout: post
-categories: media
----
-
-
-
-
-## MathJax
-
-You can enable MathJax by setting `mathjax: true` on a page or globally in the `_config.yml`. Some examples:
-
-[Euler's formula](https://en.wikipedia.org/wiki/Euler%27s_formula) relates the complex exponential function to the trigonometric functions.
-
-$$ e^{i\theta}=\cos(\theta)+i\sin(\theta) $$
-
-The [Euler-Lagrange](https://en.wikipedia.org/wiki/Lagrangian_mechanics) differential equation is the fundamental equation of calculus of variations.
-
-$$ \frac{\mathrm{d}}{\mathrm{d}t} \left ( \frac{\partial L}{\partial \dot{q}} \right ) = \frac{\partial L}{\partial q} $$
-
-The [Schrödinger equation](https://en.wikipedia.org/wiki/Schr%C3%B6dinger_equation) describes how the quantum state of a quantum system changes with time.
-
-$$ i\hbar\frac{\partial}{\partial t} \Psi(\mathbf{r},t) = \left [ \frac{-\hbar^2}{2\mu}\nabla^2 + V(\mathbf{r},t)\right ] \Psi(\mathbf{r},t) $$
-
-## Code
-
-Embed code by putting `{{ "{% highlight language " }}%}` `{{ "{% endhighlight " }}%}` blocks around it. Adding the parameter `linenos` will show source lines besides the code.
-
-{% highlight c %}
-
-static void asyncEnabled(Dict* args, void* vAdmin, String* txid, struct Allocator* requestAlloc)
-{
- struct Admin* admin = Identity_check((struct Admin*) vAdmin);
- int64_t enabled = admin->asyncEnabled;
- Dict d = Dict_CONST(String_CONST("asyncEnabled"), Int_OBJ(enabled), NULL);
- Admin_sendMessage(&d, txid, admin);
-}
-
-{% endhighlight %}
-
-## Gists
-
-With the `jekyll-gist` plugin, which is preinstalled on Github Pages, you can embed gists simply by using the `gist` command:
-
-
-
-## Images
-
-Upload an image to the *assets* folder and embed it with `)`. Keep in mind that the path needs to be adjusted if Jekyll is run inside a subfolder.
-
-A wrapper `div` with the class `large` can be used to increase the width of an image or iframe.
-
-
-
-[Flower](https://unsplash.com/photos/iGrsa9rL11o) by Tj Holowaychuk
-
-## Embedded content
-
-You can also embed a lot of stuff, for example from YouTube, using the `embed.html` include.
-
-{% include embed.html url="https://www.youtube.com/embed/_C0A5zX-iqM" %}
diff --git a/_posts/2017-02-01-markdown-examples.md b/_posts/2017-02-01-markdown-examples.md
deleted file mode 100644
index 072a700e0ea..00000000000
--- a/_posts/2017-02-01-markdown-examples.md
+++ /dev/null
@@ -1,85 +0,0 @@
----
-title: "Markdown examples"
-layout: post
----
-
-Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.
-
-Curabitur pretium tincidunt lacus. Nulla gravida orci a odio. Nullam varius, turpis et commodo pharetra, est eros bibendum elit, nec luctus magna felis sollicitudin mauris. Integer in mauris eu nibh euismod gravida. Duis ac tellus et risus vulputate vehicula. Donec lobortis risus a elit.
-
-
-## Heading Two (h2)
-
-### Heading Three (h3)
-
-#### Heading Four (h4)
-
-##### Heading Five (h5)
-
-###### Heading Six (h6)
-
-
-## Blockquotes
-
-### Single line
-
-> My mom always said life was like a box of chocolates. You never know what you're gonna get.
-
-### Multiline
-
-> What do you get when you cross an insomniac, an unwilling agnostic and a dyslexic?
->
-> You get someone who stays up all night torturing himself mentally over the question of whether or not there's a dog.
->
-> – _Hal Incandenza_
-
-## Horizontal Rule
-
----
-
-## Table
-
-| Title 1 | Title 2 | Title 3 | Title 4 |
-|------------------|------------------|-----------------|-----------------|
-| First entry | Second entry | Third entry | Fourth entry |
-| Fifth entry | Sixth entry | Seventh entry | Eight entry |
-| Ninth entry | Tenth entry | Eleventh entry | Twelfth entry |
-| Thirteenth entry | Fourteenth entry | Fifteenth entry | Sixteenth entry |
-
-## Code
-
-Source code can be included by fencing the code with three backticks. Syntax highlighting works automatically when specifying the language after the backticks.
-
-````
-```javascript
-function foo () {
- return "bar";
-}
-```
-````
-
-This would be rendered as:
-
-```javascript
-function foo () {
- return "bar";
-}
-```
-
-## Lists
-
-### Unordered
-
-* First item
-* Second item
-* Third item
- * First nested item
- * Second nested item
-
-### Ordered
-
-1. First item
-2. Second item
-3. Third item
- 1. First nested item
- 2. Second nested item
diff --git a/_posts/2017-03-01-welcome-to-jekyll.md b/_posts/2017-03-01-welcome-to-jekyll.md
deleted file mode 100644
index 0b02eb48b78..00000000000
--- a/_posts/2017-03-01-welcome-to-jekyll.md
+++ /dev/null
@@ -1,25 +0,0 @@
----
-title: "Welcome to Jekyll"
-layout: post
----
-
-You’ll find this post in your `_posts` directory. Go ahead and edit it and re-build the site to see your changes. You can rebuild the site in many different ways, but the most common way is to run `jekyll serve`, which launches a web server and auto-regenerates your site when a file is updated.
-
-
-To add new posts, simply add a file in the `_posts` directory that follows the convention `YYYY-MM-DD-name-of-post.ext` and includes the necessary front matter. Take a look at the source for this post to get an idea about how it works.
-
-Jekyll also offers powerful support for code snippets:
-
-{% highlight ruby %}
-def print_hi(name)
- puts "Hi, #{name}"
-end
-print_hi('Tom')
-#=> prints 'Hi, Tom' to STDOUT.
-{% endhighlight %}
-
-Check out the [Jekyll docs][jekyll-docs] for more info on how to get the most out of Jekyll. File all bugs/feature requests at [Jekyll’s GitHub repo][jekyll-gh]. If you have questions, you can ask them on [Jekyll Talk][jekyll-talk].
-
-[jekyll-docs]: http://jekyllrb.com/docs/home
-[jekyll-gh]: https://github.com/jekyll/jekyll
-[jekyll-talk]: https://talk.jekyllrb.com/
diff --git a/_posts/2021-12-02-UR5-Pick-And-Place-task.md b/_posts/2021-12-02-UR5-Pick-And-Place-task.md
new file mode 100644
index 00000000000..f92bb9205e9
--- /dev/null
+++ b/_posts/2021-12-02-UR5-Pick-And-Place-task.md
@@ -0,0 +1,60 @@
+---
+title: "Inverse Kinematics for UR5 Pick and Place task"
+mathjax: true
+layout: post
+categories: media
+excerpt_img_url: ../assets/UR5moveAndPlace/homeConfig.png
+---
+
+
+
+Move and Place is a very common industrial application of robotic arms. In my first semester of the Robotics program at JHU, I worked with two other graduate students to implement an inverse kinematics solution for a move-and-place application of the UR5 arm.
+
+
+
+## Overview and Motivation
+
+Move and Place is a very common industrial application of robotic arms. In my first semester of the Robotics program at JHU, I worked with two other graduate students to implement an inverse kinematics solution for a move-and-place application of the UR5 arm.
+
+- Languages: MATLAB
+- Software: ROS
+
+## Approach
+
+For given start and target locations, the trajectory was planned using Inverse Kinematics Control, Resolved Rate Control, and Transpose Jacobian Control.
+
+For the inverse kinematics control, we designed a path by calculating a screw path from the starting position to the end position, and calculating frames along this path at specified intervals. Inverse kinematics was used to get the joint positions at each interval, which were then passed to the ur5_interface object to command the robot to each joint position.
+
+The inverse kinematics problem deals with solving the equation $$g_{st}\ =\ g_{d}$$, where $$g_{st}$$ can be obtained from the forward kinematics map and $$g_d$$ is the desired configuration. Both $$g_{st}$$ and $$g_{d}\ \in\ SE(3)$$. The problem might have a single solution, multiple solutions, or no solution. A function was written that returns the 8 possible sets of generalized coordinates for any given homogeneous transformation. The algorithm used to find the transformation required to move gradually from the start configuration to the final configuration is as follows:
+
+- The initial and the final homogeneous transformations are decoupled into their respective rotation and translation part.
+  $$g_{initial}\ =\ (R_{initial},\ p_{initial})\ \text{ and }\ g_{final}\ =\ (R_{final},\ p_{final})$$
+- A straight-line path in Cartesian space is used to find the translation and rotation at any instant of time. The time interval considered is $$t \in [0,\ 1]$$, in increments of 0.1.
+$$
+\begin{matrix}
+p(t) & = & p_{initial}\ +\ t(p_{final}\ -\ p_{initial}) \\
+R(t) & = & R_{initial}e^{\log(R_{initial}^{T} R_{final})t} \\
+\end{matrix}
+$$
+- So at t = 0, $$p(t)\ =\ p_{initial}$$ , $$R(t)\ =\ R_{initial}$$ and at t = 1, $$p(t)\ =\ p_{final}$$ , $$R(t)\ =\ R_{final}$$
+- The initial and final configurations change as required by the move and place function.
+- The $$g_{final}$$ thus computed is used to find the possible solution to the inverse kinematics problem where $$\theta\ =\ f^{-1}(x)$$ where $$f(x)$$ is $$g_{final}g_{t}$$. Here $$g_{t}$$ is the transformation used to compensate the offset between the tool frame and the gripper frame.
+- The solutions of the inverse kinematics are passed to a function that checks for the best possible solution, considering each set of solutions as $$\theta\ =\ [\theta_{1},\ \theta_{2},\ \theta_{3},\ \theta_{4},\ \theta_{5},\ \theta_{6}]^T$$
+  - Based on the UR5 configuration, we observed that large values of $$\theta_{1}$$ and $$\theta_{2}$$ increase the chances of collision with the table. So any $$\theta_{1}$$, $$\theta_{2}$$ values that differed from the previous values of $$\theta_{1}$$ and $$\theta_{2}$$ by more than 30° were eliminated.
+ - Then the other values of the generalized coordinates were checked in a similar manner.
+ - These configurations were also checked for singularities at $$\theta_{3}$$ and $$\theta_{5}$$.
+  - Any configuration resulting in a homogeneous transformation that would cause the gripper to collide with the table is changed to a cut-off transformation in order to avoid collisions.
+  - The set of joint-space coordinates that cleared all these checks was used to move the robot in each iteration.
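The straight-line/screw interpolation above can be sketched in a few lines. The project itself was implemented in MATLAB; this is an illustrative Python/NumPy version using Rodrigues' formula for the exponential and logarithm on SO(3), with hypothetical start and end poses:

```python
import numpy as np

def hat(w):
    # Skew-symmetric matrix of a 3-vector.
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]])

def log_SO3(R):
    # Rotation vector (axis * angle) of a rotation matrix.
    theta = np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0))
    if np.isclose(theta, 0.0):
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return theta * w / (2 * np.sin(theta))

def exp_SO3(w):
    # Rodrigues' formula.
    theta = np.linalg.norm(w)
    if np.isclose(theta, 0.0):
        return np.eye(3)
    K = hat(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def interpolate_pose(R0, p0, R1, p1, t):
    # p(t) = p0 + t (p1 - p0);  R(t) = R0 exp(log(R0^T R1) t)
    p_t = p0 + t * (p1 - p0)
    R_t = R0 @ exp_SO3(t * log_SO3(R0.T @ R1))
    return R_t, p_t
```

Evaluating `interpolate_pose` at t = 0, 0.1, ..., 1 yields the intermediate frames that an IK routine would then convert to joint positions.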
+
+Rviz was used to visualize the control algorithm.
+
+## Result
+
+Moving to target location | Moving back to start location
+:-----------------------------------------------:|:------------------------------------------------:
+ | 
+
+## Challenges
+The solution proposed is not a robust way to check for singularities in the joint-angle solutions.
+
+
diff --git a/_posts/2022-05-10-iver-trajectory-generation.md b/_posts/2022-05-10-iver-trajectory-generation.md
new file mode 100644
index 00000000000..bb6ac55bf08
--- /dev/null
+++ b/_posts/2022-05-10-iver-trajectory-generation.md
@@ -0,0 +1,75 @@
+---
+title: "Trajectory Generation and Tracking of an Iver3 AUV"
+youtubeId1: jUiR1kPK5To
+mathjax: true
+layout: post
+categories: media
+excerpt_img_url: ../assets/iverNonlinearControl/Iver3HydroLab.png
+---
+
+
+
+## Overview and Motivation
+
+Various nonlinear control strategies are beneficial for dealing with the dynamic uncertainties of highly nonlinear, complex systems. One such system is the Iver3 Autonomous Underwater Vehicle (AUV). During the second semester of my master's, I picked up this project along with another graduate student to test nonlinear control strategies for the Iver3 AUV.
+
+The objective of the project is to generate a smooth trajectory to a desired goal position and to track the generated
+trajectory using feedback control for an Iver3 AUV model. Seabed exploration is the practical scenario considered
+to simulate the environment. Efforts were made to design a path planner that finds a smooth path in the presence
+of obstacles and to design a controller that tracks the path found. The two different parts of the problem are
+trajectory generation and trajectory tracking.
+
+- Languages: MATLAB
+
+## Approach
+
+The model is an underactuated, nonlinear system. For the purpose of the project, a simplified 5-DOF model has been
+considered. The AUV has three translational degrees of freedom in $$(x,\ y,\ z)$$ and two rotational degrees of freedom
+in pitch $$(\theta)$$ and yaw $$(\psi)$$. The independent fins of the Iver3 model generate rotational motion whereas the thruster
+produces surge velocity in the x-direction. The control inputs to the system are the normalized driving input $$\delta q$$ ,
+normalized rudder input $$\delta r$$ , and the thruster rate $$\delta u$$.
+
+The dynamic model of the system has already been developed by the [Dynamical Systems and Control Lab](https://dscl.lcsr.jhu.edu/) at JHU. One such model was used to test non linear control strategies for a trajectory generation and tracking problem. The states of the system are:
+
+| Variable                     | Description       |
+|------------------------------|-------------------|
+| $$(p_x,\ p_y,\ p_z)^{T}$$    | position vector   |
+| $$\theta$$                   | pitch             |
+| $$\psi$$                     | yaw               |
+| $$u_{v}$$                    | surge velocity    |
+| $$q$$                        | pitch rate        |
+| $$r$$                        | yaw rate          |
+| $$T$$ | thrust force |
+
+### Trajectory generation
+#### Differential Flatness
+The differential flatness property can be quite useful for generating smooth trajectories for highly nonlinear systems. The flat outputs we chose for this system are $$y\ =\ (p_x,\ p_y,\ p_z)^{T}$$ because we want the AUV to go from its initial position to a desired final 3D position. To prove that this is a flat output for the nonlinear system, all the states and controls should be expressed as functions of the flat outputs and their derivatives.
+
+$$
+\begin{matrix}
+x & = & \phi(y,\ \dot{y},\ \ddot{y},\ \dots,\ y^{(b)}) \\
+u & = & \alpha(y,\ \dot{y},\ \ddot{y},\ \dots,\ y^{(c)}) \\
+\end{matrix}
+$$
+
+Once these conditions were verified, we generated the trajectory in the flat output space using a polynomial basis function of the form $$\lambda(t)\ =\ (t^{5},\ t^{4},\ t^{3},\ t^{2},\ t,\ 1)^{T}$$.
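Fitting this quintic basis amounts to solving a small linear system for the six coefficients. The project used MATLAB; the Python/NumPy sketch below assumes rest-to-rest boundary conditions (zero velocity and acceleration at both ends), applied per flat-output channel:

```python
import numpy as np

def quintic_coeffs(y0, yf, T):
    # Basis lambda(t) = (t^5, t^4, t^3, t^2, t, 1)^T. Solve for coefficients
    # matching position, velocity and acceleration at t = 0 and t = T
    # (rest-to-rest boundary conditions assumed here).
    def rows(t):
        return np.array([
            [t**5,      t**4,      t**3,    t**2, t, 1],  # position
            [5 * t**4,  4 * t**3,  3 * t**2, 2 * t, 1, 0],  # velocity
            [20 * t**3, 12 * t**2, 6 * t,    2,     0, 0],  # acceleration
        ])
    A = np.vstack([rows(0.0), rows(T)])
    b = np.array([y0, 0, 0, yf, 0, 0], dtype=float)
    return np.linalg.solve(A, b)

def quintic_eval(c, t):
    # Evaluate the polynomial c . lambda(t).
    return c @ np.array([t**5, t**4, t**3, t**2, t, 1.0])
```

Running one solve per channel of $$y = (p_x, p_y, p_z)^T$$ gives the smooth flat-output trajectory that the tracking controller then follows.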
+
+### Trajectory Tracking
+The equations for the control inputs were solved using feedback linearization. The output states for this problem are $$y\ =\ (p_{x},\ p_{y},\ p_{z})^{T}$$. With this in mind, we can check whether the system satisfies the input-output linearization condition and obtain a control input of the form $$u\ =\ a(x)\ +\ b(x)v$$, where $$v\ =\ y_{d}^{(3)}\ -\ k_{1}(y\ -\ y_{d})-k_{2}(\dot{y}\ -\ \dot{y_{d}})\ -\ k_{3}(\ddot{y}\ -\ \ddot{y_{d}})$$ is the virtual input derived from the computed torque control law.
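The virtual-input law can be illustrated on a toy output channel. The project applied it to the full 5-DOF AUV model in MATLAB; this Python sketch applies the same law to a scalar triple integrator $$y^{(3)} = v$$ with assumed gains:

```python
# Assumed gains: s^3 + k3 s^2 + k2 s + k1 = (s + 2)^3, so the error dynamics are stable.
k1, k2, k3 = 8.0, 12.0, 6.0

def virtual_input(y, ydot, yddot, yd, yd_dot, yd_ddot, yd_dddot):
    # v = y_d^(3) - k1 (y - y_d) - k2 (ydot - yd_dot) - k3 (yddot - yd_ddot)
    return yd_dddot - k1 * (y - yd) - k2 * (ydot - yd_dot) - k3 * (yddot - yd_ddot)

# Toy channel y''' = v, tracking the constant reference y_d = 0 from a
# perturbed initial condition, integrated with forward Euler.
y, ydot, yddot = 1.0, 0.0, 0.0
dt = 1e-3
for _ in range(10_000):  # 10 s of simulated time
    v = virtual_input(y, ydot, yddot, 0.0, 0.0, 0.0, 0.0)
    y, ydot, yddot = y + dt * ydot, ydot + dt * yddot, yddot + dt * v
```

With Hurwitz gains the tracking error decays to zero, which is the behavior the feedback-linearized AUV outputs exhibit along the generated trajectory.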
+
+### Modified RRT Path Planner
+For a dynamic environment with obstacles, it is not ideal to generate the entire trajectory at the beginning of the task. To address this, an RRT path planner was used to find a path from the initial position to the desired position. When RRT finds a new configuration from the current configuration, the current configuration is set as the initial configuration for the polynomial-based trajectory generator and the new configuration is set as the desired configuration. If RRT finds that this segment is collision free, a smooth trajectory is generated using differential-flatness-based trajectory generation, avoiding the sharp corners of a typical RRT path. These trajectory segments are stored, and an overall trajectory is assembled once the tree reaches the desired configuration. In order to target the goal, the algorithm also has a random chance to attempt to reach the goal instead of choosing a random point.
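A minimal 2D version of this goal-biased RRT can be sketched as follows. This is not the project's MATLAB implementation; the workspace, obstacle, step size and goal-bias probability are all assumed values, and the smooth per-segment regeneration is only indicated in a comment:

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed so the sketch is reproducible

START, GOAL = np.array([0.0, 0.0]), np.array([9.0, 9.0])
STEP, GOAL_BIAS = 1.0, 0.2
obstacles = [(np.array([5.0, 5.0]), 1.5)]  # (center, radius): an assumed scene

def collision_free(a, b):
    # Sample along the segment and require every sample to clear all obstacles.
    for s in np.linspace(0.0, 1.0, 20):
        p = a + s * (b - a)
        if any(np.linalg.norm(p - c) <= r for c, r in obstacles):
            return False
    return True

def rrt(max_iters=5000):
    nodes, parent = [START], {0: None}
    for _ in range(max_iters):
        # Random chance to aim straight at the goal instead of a random point.
        target = GOAL if rng.random() < GOAL_BIAS else rng.uniform(0.0, 10.0, 2)
        i = min(range(len(nodes)), key=lambda k: np.linalg.norm(nodes[k] - target))
        d = target - nodes[i]
        new = nodes[i] + STEP * d / max(np.linalg.norm(d), 1e-9)
        if not collision_free(nodes[i], new):
            continue
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if np.linalg.norm(new - GOAL) < STEP and collision_free(new, GOAL):
            # Walk back to the root; in the project, each edge would seed the
            # flat-output trajectory generator to smooth the segment.
            path, j = [GOAL], len(nodes) - 1
            while j is not None:
                path.append(nodes[j])
                j = parent[j]
            return path[::-1]
    return None
```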
+
+## Result
+
+Trajectory generation and tracking was done for three different initial conditions with perturbations for the 3D model of the system. The goal was for the system states $$(y_1(t),\ y_2 (t),\ y_3(t))$$ to follow $$(y_{1d}(t),\ y_{2d}(t),\ y_{3d}(t))$$. Here are the results for a slight perturbation in all degrees of freedom of the system.
+
+{% include youtubePlayer.html id=page.youtubeId1 %}
+
+Errors in position | Thrust velocity for the system
+:---------------------------------------------------:|:---------------------------------------------------------:
+ | 
+
+## Challenges
+- The solution does not consider constraints on the control and path. However, various optimization tools can be used for the same trajectory generation problem to include constraints and cost functional.
+- The feedback linearization method might not be as robust as other nonlinear methods such as backstepping and lyapunov redesign.
diff --git a/_posts/2022-10-02-rocket-landing.md b/_posts/2022-10-02-rocket-landing.md
new file mode 100644
index 00000000000..bc6c914574d
--- /dev/null
+++ b/_posts/2022-10-02-rocket-landing.md
@@ -0,0 +1,31 @@
+---
+title: "Optimal Control for Rocket Landing Problem"
+mathjax: true
+layout: post
+categories: media
+excerpt_img_url: ../assets/rocketLanding/ConvergenceWindow.png
+---
+
+## Overview and Motivation
+
+Reusable rockets pose an incredibly complex landing-sequence problem. For this project, I worked with another graduate student, focusing on the final stage of landing; the objective is to apply optimal control strategies to minimize the fuel consumed.
+
+- Languages: MATLAB, Python
+- Framework: ACADO Toolkit
+
+## Approach
+
+Since an elaborate dynamical model of the rocket is extremely complex, we start with a basic 2-DOF dynamic model. In this system, the states are the position, $$[x,\ y]$$, velocity, $$(v)$$, and mass, $$(m)$$, and the control input, $$[u_{T},\ u_{\theta}]$$, is the thrust and the angle of attack.
+Constraints were put on the control inputs, velocity, mass, and the angle of attack. Boundary conditions were specified for the final position and angle of attack, and for the initial velocity and angle of attack (AOA). The optimization problem was solved by using the final mass as the Mayer term in ACADO through its MATLAB interface.
+
+Once convergence was achieved, this was extended to a 3-DOF system. Since the problem has both path and control constraints, Sequential Quadratic Programming (SQP) was used to solve this Nonlinear Programming (NLP) problem.
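A point-mass model of the kind described can be sketched as below. This is only an assumed toy version (the constants and control sequence are made up, and the constraints and SQP solve handled by ACADO are not shown); it illustrates why maximizing the final mass as a Mayer term minimizes fuel use:

```python
import numpy as np

# Assumed constants for a toy model (not the project's values).
G, ISP = 9.81, 300.0  # gravity [m/s^2], specific impulse [s]

def dynamics(state, u_T, u_theta):
    # state = [x, y, vx, vy, m]; u_T is thrust, u_theta the thrust angle from vertical.
    x, y, vx, vy, m = state
    ax = (u_T / m) * np.sin(u_theta)
    ay = (u_T / m) * np.cos(u_theta) - G
    mdot = -u_T / (ISP * G)  # fuel burn rate: maximizing final mass minimizes fuel
    return np.array([vx, vy, ax, ay, mdot])

def euler_rollout(state, controls, dt):
    # Forward-Euler propagation under a sequence of (u_T, u_theta) inputs.
    for u_T, u_theta in controls:
        state = state + dt * dynamics(state, u_T, u_theta)
    return state
```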
+
+## Result
+
+Rocket velocities | Convergence for the 3-DOF problem
+:-----------------------------------------:|:-------------------------:
+ | 
+
+## Challenges
+- The initial position, integration tolerance, and tight constraints played a significant role in the convergence of the NLP problem. This was particularly challenging because of the constraints on thrust, AOA, mass and velocities.
+- This project does not address the dynamics of pitch angle and heading angle.
diff --git a/_posts/2022-12-25-ribbonFinFish.md b/_posts/2022-12-25-ribbonFinFish.md
new file mode 100644
index 00000000000..d3f2c23939a
--- /dev/null
+++ b/_posts/2022-12-25-ribbonFinFish.md
@@ -0,0 +1,102 @@
+---
+title: "A Robust Strategy to Track the Fin of a Weakly Electric Knifefish"
+youtubeId1: Jx1Ayr9Fp4Q
+youtubeId2: 3LgBlrvGDVA
+mathjax: true
+layout: post
+categories: media
+excerpt_img_url: ../assets/ribbonFin/FinPoints.png
+---
+
+{% include youtubePlayer.html id=page.youtubeId1 %}
+Doris, a glass knifefish
+
+## Overview and Motivation
+
+My interest in studying unique locomotion motivated me to join the [Locomotion in Mechanical and Biological Systems Lab](https://limbs.lcsr.jhu.edu/), where I studied methods to track the ribbon fin of a glass knifefish using [DeepLabCut](http://www.mackenziemathislab.org/deeplabcut) (DLC).
+The objective of the project is to effectively track the motion of the undulatory ribbon fin of a glass knifefish, Eigenmannia virescens.
+This fish's swimming motions are unique and interesting from a locomotion perspective because of the complex mechanics of the two counter propagating waves produced by the fish, one starting from the head and the other starting from the tail.
+These waves help the fish swim forward and backward without moving its body too much.
+
+Tracking the fin is a challenging problem because the number of waves produced at each time step might vary, and tracking a point on the wave is not equivalent to tracking a point on the fin. This idea, a lot of trial and error, and coordinate frame transformations were used to track the fin efficiently.
+
+- Languages: MATLAB
+- Framework: Deep Lab Cut (DLC)
+
+## Approach
+
+### Head and Body Center Orientation
+Two stationary points that are easy to track are the body center and the head. Along with the head, the body center, the pectoral fins,
+the tail, and a point on the shuttle were tracked. The fish naturally follows the shuttle and tries to stay inside it.
+But even for stationary-shuttle experiments, the position of the head changes with each frame. Keeping
+the same coordinates for the head position in all frames simplifies tracking. Frames were extracted from
+the analysed DLC video. The head position of the first frame was used as the reference position. Head positions
+from all other frames were moved to this point as described in the following equations:
+
+$$
+\begin{matrix}
+\Delta x & = & x_{ref} - x \\
+\Delta y & = & y_{ref} - y \\
+x_{h_{new}} & = & x_h + \Delta x \\
+y_{h_{new}} & = & y_h + \Delta y \\
+\end{matrix}
+$$
+
+Frame translation | Frame rotation
+:------------------------------------:|:-------------------------:
+ | 
+
+Another important aspect is to orient the body center on the same line as the head. This implies that the
+x-coordinates of the body center and head might be different but the y-coordinates have to be very similar. This
+translation and rotation can be done using coordinate frame transformations. The frame is translated to move
+the head position to the reference point, and then the frame is rotated clockwise if the angle between the
+head and the body center is positive and anticlockwise if this angle is negative.
+
+$$
+\begin{matrix}
+\Delta \theta & = & \tan^{-1}\left(\frac{y_h - y_b}{x_h - x_b}\right) \\
+l & = & \sqrt{(x_{h_{new}} - x_{b_{new}})^{2}+(y_{h_{new}} - y_{b_{new}})^{2}} \\
+(x_{b_{new}}, y_{b_{new}}) & = & (x_{b_{new}},y_{b_{new}} \pm l \Delta \theta) \\
+\end{matrix}
+$$
+
+Overlap check after rotating the initial frame | Overlap check after the transformation
+:----------------------------------------------:|:-------------------------:
+ | 
+
+### Line generation for tracking
+DLC collects the pixel coordinates marked during labeling. These coordinates are used to train the network, so
+it is crucial to feed and label the right data to obtain accurate results. Lines were previously used to track the fin points;
+my contribution was to draw all the lines in every frame so that all points could be labeled at once. Starting from a fixed distance from the head position, 56 lines were drawn perpendicular to the line joining the head and the body center. This was to avoid errors in labeling and to make sure that points were being tracked accurately.
+For the videos chosen, 56 lines covered all of the fin and the body of the fish. It is
+important to note that fixing the head at the reference position helps in drawing the lines at fixed distances
+from this position. The fin line of the fish was also tracked to interpolate the data correctly for data analysis.
+The same strategy for labeling and training was also used to track almost 56 points on the body line.
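Generating the perpendicular line grid reduces to a little vector geometry. A Python sketch (the project used MATLAB; the offsets, spacing, and line length are assumed values, not the calibrated ones):

```python
import numpy as np

def tracking_lines(head, body, n_lines=56, start_offset=1.0, spacing=0.5, half_len=2.0):
    # Unit vector along the head-to-body-center axis and its perpendicular.
    axis = (body - head) / np.linalg.norm(body - head)
    perp = np.array([-axis[1], axis[0]])
    lines = []
    for k in range(n_lines):
        # Line centers at fixed distances from the (reference-fixed) head.
        center = head + (start_offset + k * spacing) * axis
        lines.append((center - half_len * perp, center + half_len * perp))
    return lines
```

Because the head is fixed at the reference position in every frame, the same 56 lines land on the same body locations, so labels along them track fin points rather than wave crests.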
+
+## DLC details
+- DeepLabCut version 2.2.3 was used for this project
+- Videos with 56 lines on each frame were used for fin tracking
+- 150 frames were extracted for labeling from each video using the k-means clustering algorithm built into DLC
+- The cluster step chosen was 1
+- A ResNet-50 network was used to train the labeled dataset, and the augmentation method used was imgaug
+- Maximum iterations were set to 25,000
+
+## Result
+The points on the fin and the body line were tracked with more than 95% accuracy on average. However, there is a dynamically moving nodal point where the
+waves cancel out; DLC could not track this point accurately since it was not predictable.
+
+{% include youtubePlayer.html id=page.youtubeId2 %}
+
+The datasets from the tracking algorithms were used to find wave parameters such as wavelength, frequency,
+time period and amplitude. A conversion factor was found based on the length of the actual fish in cm and its
+length in pixels from the data. This conversion factor was used to plot and analyse the data in cm. Curve fitting tools in MATLAB were used to fit the body-line and fin data. The raw data for PCA was obtained by taking the x-coordinates of the fin data
+as the rows, with each column representing the complete fin at a given frame.
+
+Testing the fin and bodyline data for selected frames | Checking the visibility of nodal point
+:-----------------------------------------------------:|:-------------------------:
+ | 
+
+## Challenges
+- The fish is 6.5-7 cm long on average, and it was very challenging to capture clear videos of the transparent fin
+- The head and body-center positions change in every frame
+- Feeding the right data to DLC - solved with the line generation approach
diff --git a/_posts/2023-05-09-UR5-trajectory-generation.md b/_posts/2023-05-09-UR5-trajectory-generation.md
new file mode 100644
index 00000000000..2b14f7db36b
--- /dev/null
+++ b/_posts/2023-05-09-UR5-trajectory-generation.md
@@ -0,0 +1,36 @@
+---
+title: "UR5 trajectory generation using OROCOS real-time toolkit"
+youtubeId1: feqXhstn2l8
+mathjax: true
+layout: post
+categories: media
+excerpt_img_url: ../assets/ur5.png
+---
+
+## Overview and Motivation
+
+The goal was to create dynamically reconfigurable parameters that set the desired configuration of the robot for the trajectory generator in real time.
+
+- Languages: C++, Python
+- Framework: ROS2
+- Library : OROCOS Toolchain, KDL, Reflexxes
+
+## Approach
+
+The URDF description from Universal Robots was used to solve the inverse kinematics problem. The ROS parameters were exposed to RQT so they could be adjusted from a GUI. The dynamically reconfigured parameters were the desired Cartesian position, $$[x,\ y,\ z,\ roll,\ pitch,\ yaw]$$, the desired joint angles, $$[\theta_{1},\ \theta_{2},\ \theta_{3},\ \theta_{4},\ \theta_{5},\ \theta_{6}]$$, and a velocity scaling parameter to adjust the velocity of the robot from 0-100%.
+
+An RTT component was implemented with two operations: one triggers a new joint
+trajectory that moves the robot, from wherever it is, to the desired position;
+the other produces a Cartesian trajectory that moves the
+robot from its current position to the desired Cartesian frame. Inverse kinematics was computed at every time step using the KDL library.
+
+## Result
+
+The algorithm was tested on a real UR5.
+
+{% include youtubePlayer.html id=page.youtubeId1 %}
+
+## Challenges
+- The `rcl_interfaces` library was used to declare the parameters that need to be dynamically reconfigured. It was challenging to keep track of the changing variables - solved using a callback that checked for changed variables.
+- The `/robot_description` topic publishes the description only once, when it is initiated. This was a challenge because by the time the deployer starts running, it would have missed the description that was published - solved by adding a delay to the `robot_description_node` in the launch file.
+- Vibrations occurred when running the algorithm on the real UR5 - solved by increasing the number of steps in the IK solver method.
diff --git a/_posts/2023-05-17-turtlebotTeaming.md b/_posts/2023-05-17-turtlebotTeaming.md
new file mode 100644
index 00000000000..f1ac385def4
--- /dev/null
+++ b/_posts/2023-05-17-turtlebotTeaming.md
@@ -0,0 +1,52 @@
+---
+title: "Turtlebot teaming using ROS2"
+youtubeId1: smgqKGkIvUg
+youtubeId2: p8ss_OPU6lk
+mathjax: true
+layout: post
+categories: media
+excerpt_img_url: ../assets/MapRealTime.png
+---
+
+| Map in Rviz | Real world setting |
+| ----------------------------- | ---------------------------- |
+|  |  |
+
+
+## Overview and Motivation
+
+To facilitate a successful teaming operation between two mobile robots, several subtasks must be accomplished, including mapping, navigation, sensor calibration, and real-time communication. In my final semester at JHU, I worked with 3 other graduate students to implement and integrate these submodules to achieve a collaborative teaming task involving two Turtlebots.
+
+This project utilizes ROS2 actions and the navigation stack to enable cooperative teamwork between two Turtlebots. The task involves Turtlebot3 Burger moving from one end of a platform to the other. However, the platform consists of moving blocks, and one of the blocks is strategically positioned, creating a gap. When Burger reaches this specific spot on the platform, it sends a request to Waffle. Subsequently, Waffle responds by pushing the platform to close the gap, allowing Burger to pass through smoothly.
+
+- Languages: C++, Python
+- Framework: ROS2
+- Library : Aruco marker library, Nav2
+
+## Approach
+
+Burger uses a camera and ArUco markers to know its location in the world, whereas Waffle uses a lidar to map the world and get to Burger. Waffle drives around the world using teleop and uses the SLAM Toolbox to create the map.
+
+ROS2 Nav2 was used to calibrate the camera. Burger and Waffle use ROS2 actions to communicate. These actions have a goal, a result, and feedback. Actions were chosen instead of services since actions give the ability to cancel requests. Burger is the action client in this project and sends a 2D goal pose to Waffle as soon as it reaches the gap on the platform. Waffle is the action server and can return feedback while it is performing the task of going from its current pose to the target pose, along with a result regarding the status of the task.
+
+
+A built-in Adaptive Monte Carlo Localizer was used to estimate the current 2D pose of Waffle. The laser model used was of the likelihood-field type. A good estimate of the initial pose, $$[x, y, \theta]^{T}$$, where $$\theta$$ is the orientation, was fed to the localizer. Parameters such as the process and sensor noise covariances were tuned so that the particle filter eventually converges.
+
+Existing turtlebot3 robot descriptions were used to visualize the task in simulation. Ignition Gazebo was used to render the bots in the simulation environment.
+
+Ignition world | World rendered in Rviz
+:---------------------------:|:-------------------------:
+ | 
+
+## Result
+
+{% include youtubePlayer.html id=page.youtubeId1 %}
+Real time simulation
+{% include youtubePlayer.html id=page.youtubeId2 %}
+Multibot teaming
+
+## Challenges
+- Remapping topics for the multirobot environment
+- Camera calibration and communication between ArUco marker node and Burger's camera node
+
diff --git a/_posts/2023-05-23-NSAID.md b/_posts/2023-05-23-NSAID.md
new file mode 100644
index 00000000000..f2ae30f8f24
--- /dev/null
+++ b/_posts/2023-05-23-NSAID.md
@@ -0,0 +1,37 @@
+---
+title: "Model based Adaptive Identification for Fault Isolation and Detection in Iver3 AUV"
+mathjax: true
+layout: post
+categories: media
+excerpt_img_url: ../assets/NSAID/controls.png
+---
+
+
+
+## Overview and Motivation
+
+This is an ongoing research project at the [Dynamical Systems and Control Lab](https://dscl.lcsr.jhu.edu/) at JHU. The research focuses on developing adaptive identification of both plant and actuator parameters for an underactuated, nonlinear 5-DOF Iver3 AUV model. The applications are in model-based fault detection and isolation, navigation, and control.
+
+- Languages: MATLAB
+
+## My contribution
+
+I worked in collaboration with Annie Mao, a PhD student in the lab.
+
+- The complex dynamics of the AUV, involving the drag and lift coefficients, the effect of angle of attack on these coefficients, the relationship between fin angles and pitch angle, buoyancy parameters, and parameters contributing to hydrodynamic drag, were studied and analyzed. These dynamics were later used to design control inputs that yield physically feasible values of states such as thruster input, pitch angle, angle of attack, and linear and angular velocities.
+- Another aspect of this project I am working on is tuning the adaptive controller gains to achieve parameter convergence and checking the persistent-excitation conditions on the control inputs. For an underactuated system, this is an open research question in the field of adaptive systems. In total, the system has 32 dynamic parameters, including mass, inertia, drag, buoyancy and actuator parameters.
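The link between persistent excitation and parameter convergence can be illustrated with a standard gradient-based adaptive identifier for a system that is linear in its parameters, $$y = \phi^{T}\theta$$. This is a generic textbook sketch in Python (the project itself uses MATLAB and a far richer 32-parameter model); the two-parameter regressor below is a made-up example.

```python
import numpy as np

def adaptive_identification(phi_seq, y_seq, gain=1.0, dt=0.01):
    """Gradient adaptation law  d(theta_hat)/dt = -gain * phi * e
    for the linear-in-parameters model y = phi^T theta.

    phi_seq: (T, n) regressor history; y_seq: (T,) measured outputs.
    The estimates converge to the true parameters only when phi is
    persistently exciting -- the condition checked on the control
    inputs in the project described above.
    """
    theta_hat = np.zeros(phi_seq.shape[1])
    for phi, y in zip(phi_seq, y_seq):
        e = phi @ theta_hat - y            # prediction error
        theta_hat -= dt * gain * phi * e   # Euler step of the adaptation law
    return theta_hat

# Toy example: two parameters and a persistently exciting regressor
# (two sinusoids at distinct frequencies).
theta_true = np.array([2.0, -1.0])
t = np.arange(0.0, 200.0, 0.01)
phi_seq = np.column_stack([np.sin(t), np.cos(0.5 * t)])
y_seq = phi_seq @ theta_true
theta_hat = adaptive_identification(phi_seq, y_seq, gain=2.0)
```

With a constant regressor (a single operating point) the same code would leave some parameter directions unidentified, which is what makes the persistent-excitation check for an underactuated AUV nontrivial.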
+
+## Results so far
+
+Control inputs following boundary conditions |
+:----------------------------------------------:|
+ |
+
+3DOF states for the above control inputs | 3DOF velocities following boundary conditions
+:----------------------------------------------:|:-----------------------------------------------:
+ | 
+
+## Challenges
+- The complete Iver3 model is complex to study and analyze.
+- Achieving parameter convergence through gain tuning for 32 dynamic parameters is a very challenging problem.
+
diff --git a/_sass/minima/_layout.scss b/_sass/minima/_layout.scss
new file mode 100644
index 00000000000..216264a9cf8
--- /dev/null
+++ b/_sass/minima/_layout.scss
@@ -0,0 +1,260 @@
+/**
+ * Site header
+ */
+.site-header {
+ border-top: 5px solid $grey-color-dark;
+ border-bottom: 1px solid $grey-color-light;
+ min-height: $spacing-unit * 1.865;
+
+ // Positioning context for the mobile navigation icon
+ position: relative;
+}
+
+.site-title {
+ @include relative-font-size(1.625);
+ font-weight: 300;
+ line-height: $base-line-height * $base-font-size * 2.25;
+ letter-spacing: -1px;
+ margin-bottom: 0;
+ float: left;
+
+ &,
+ &:visited {
+ color: $grey-color-dark;
+ }
+}
+
+.site-nav {
+ float: right;
+ line-height: $base-line-height * $base-font-size * 2.25;
+
+ .nav-trigger {
+ display: none;
+ }
+
+ .menu-icon {
+ display: none;
+ }
+
+ .page-link {
+ color: $text-color;
+ line-height: $base-line-height;
+
+ // Gaps between nav items, but not on the last one
+ &:not(:last-child) {
+ margin-right: 20px;
+ }
+ }
+
+ @include media-query($on-palm) {
+ position: absolute;
+ top: 9px;
+ right: $spacing-unit / 2;
+ background-color: $background-color;
+ border: 1px solid $grey-color-light;
+ border-radius: 5px;
+ text-align: right;
+
+ label[for="nav-trigger"] {
+ display: block;
+ float: right;
+ width: 36px;
+ height: 36px;
+ z-index: 2;
+ cursor: pointer;
+ }
+
+ .menu-icon {
+ display: block;
+ float: right;
+ width: 36px;
+ height: 26px;
+ line-height: 0;
+ padding-top: 10px;
+ text-align: center;
+
+ > svg {
+ fill: $grey-color-dark;
+ }
+ }
+
+ input ~ .trigger {
+ clear: both;
+ display: none;
+ }
+
+ input:checked ~ .trigger {
+ display: block;
+ padding-bottom: 5px;
+ }
+
+ .page-link {
+ display: block;
+ padding: 5px 10px;
+
+ &:not(:last-child) {
+ margin-right: 0;
+ }
+ margin-left: 20px;
+ }
+ }
+}
+
+
+
+/**
+ * Site footer
+ */
+.site-footer {
+ border-top: 1px solid $grey-color-light;
+ padding: $spacing-unit 0;
+}
+
+.footer-heading {
+ @include relative-font-size(1.125);
+ margin-bottom: $spacing-unit / 2;
+}
+
+.contact-list,
+.social-media-list {
+ list-style: none;
+ margin-left: 0;
+}
+
+.footer-col-wrapper {
+ @include relative-font-size(0.9375);
+ color: $grey-color;
+ margin-left: -$spacing-unit / 2;
+ @extend %clearfix;
+}
+
+.footer-col {
+ float: left;
+ margin-bottom: $spacing-unit / 2;
+ padding-left: $spacing-unit / 2;
+}
+
+.footer-col-1 {
+ width: -webkit-calc(35% - (#{$spacing-unit} / 2));
+ width: calc(35% - (#{$spacing-unit} / 2));
+}
+
+.footer-col-2 {
+ width: -webkit-calc(20% - (#{$spacing-unit} / 2));
+ width: calc(20% - (#{$spacing-unit} / 2));
+}
+
+.footer-col-3 {
+ width: -webkit-calc(45% - (#{$spacing-unit} / 2));
+ width: calc(45% - (#{$spacing-unit} / 2));
+}
+
+@include media-query($on-laptop) {
+ .footer-col-1,
+ .footer-col-2 {
+ width: -webkit-calc(50% - (#{$spacing-unit} / 2));
+ width: calc(50% - (#{$spacing-unit} / 2));
+ }
+
+ .footer-col-3 {
+ width: -webkit-calc(100% - (#{$spacing-unit} / 2));
+ width: calc(100% - (#{$spacing-unit} / 2));
+ }
+}
+
+@include media-query($on-palm) {
+ .footer-col {
+ float: none;
+ width: -webkit-calc(100% - (#{$spacing-unit} / 2));
+ width: calc(100% - (#{$spacing-unit} / 2));
+ }
+}
+
+
+
+/**
+ * Page content
+ */
+.page-content {
+ padding: $spacing-unit 0;
+ flex: 1;
+}
+
+.page-heading {
+ @include relative-font-size(2);
+}
+
+.post-list-heading {
+ @include relative-font-size(1.75);
+}
+
+.post-list {
+ margin-left: 0;
+ list-style: none;
+
+ > li {
+ margin-bottom: $spacing-unit;
+ }
+}
+
+.post-meta {
+ font-size: $small-font-size;
+ color: $grey-color;
+}
+
+.post-link {
+ display: block;
+ @include relative-font-size(1.5);
+}
+
+
+
+/**
+ * Posts
+ */
+.post-header {
+ margin-bottom: $spacing-unit;
+}
+
+.post-title {
+ @include relative-font-size(2.625);
+ letter-spacing: -1px;
+ line-height: 1;
+
+ @include media-query($on-laptop) {
+ @include relative-font-size(2.25);
+ }
+}
+
+.post-content {
+ margin-bottom: $spacing-unit;
+
+ h2 {
+ @include relative-font-size(2);
+
+ @include media-query($on-laptop) {
+ @include relative-font-size(1.75);
+ }
+ }
+
+ h3 {
+ @include relative-font-size(1.625);
+
+ @include media-query($on-laptop) {
+ @include relative-font-size(1.375);
+ }
+ }
+
+ h4 {
+ @include relative-font-size(1.25);
+
+ @include media-query($on-laptop) {
+ @include relative-font-size(1.125);
+ }
+ }
+}
+
+/* Cover Image */
+.hero {
+ background: url('../home-feature.jpg');
+}
diff --git a/aboutMe.md b/aboutMe.md
new file mode 100644
index 00000000000..12487862467
--- /dev/null
+++ b/aboutMe.md
@@ -0,0 +1,14 @@
+---
+title: ""
+permalink: "/about/"
+layout: page
+---
+
+
+
+I am a roboticist primarily focused on dynamics and controls in robotics, with a master's degree in Robotics from the Johns Hopkins University and a bachelor's degree in Electrical and Electronics Engineering from Amrita Vishwa Vidyapeetham. My passion lies in developing robust and adaptive control schemes that allow robots to learn from their environment and dynamically adjust their control parameters based on feedback. The majority of my work has involved implementing control algorithms for trajectory generation, motion planning and parameter identification in systems such as manipulators, mobile robots and underwater robots. I also have experience in designing wiring-harness schematics, testing Li-Ion battery quality, and working with battery management systems.
+
+On this website, you will find a collection of my projects, research and related work.
+
+
+
diff --git a/assets/CroppedRealTime.png b/assets/CroppedRealTime.png
new file mode 100644
index 00000000000..4de93f3dc62
Binary files /dev/null and b/assets/CroppedRealTime.png differ
diff --git a/assets/MapRealTime.png b/assets/MapRealTime.png
new file mode 100644
index 00000000000..133b8e1d78a
Binary files /dev/null and b/assets/MapRealTime.png differ
diff --git a/assets/Maze_Setup.jpg b/assets/Maze_Setup.jpg
new file mode 100644
index 00000000000..ca99c78b567
Binary files /dev/null and b/assets/Maze_Setup.jpg differ
diff --git a/assets/NSAID/NSAID_pic.jpg b/assets/NSAID/NSAID_pic.jpg
new file mode 100644
index 00000000000..ab2e6c1d4fa
Binary files /dev/null and b/assets/NSAID/NSAID_pic.jpg differ
diff --git a/assets/NSAID/controls.png b/assets/NSAID/controls.png
new file mode 100644
index 00000000000..52811bcb31b
Binary files /dev/null and b/assets/NSAID/controls.png differ
diff --git a/assets/NSAID/empty.txt b/assets/NSAID/empty.txt
new file mode 100644
index 00000000000..8b137891791
--- /dev/null
+++ b/assets/NSAID/empty.txt
@@ -0,0 +1 @@
+
diff --git a/assets/NSAID/states.png b/assets/NSAID/states.png
new file mode 100644
index 00000000000..d33987266c0
Binary files /dev/null and b/assets/NSAID/states.png differ
diff --git a/assets/NSAID/velocities.png b/assets/NSAID/velocities.png
new file mode 100644
index 00000000000..b9b406c2658
Binary files /dev/null and b/assets/NSAID/velocities.png differ
diff --git a/assets/UR5moveAndPlace/empty.txt b/assets/UR5moveAndPlace/empty.txt
new file mode 100644
index 00000000000..8b137891791
--- /dev/null
+++ b/assets/UR5moveAndPlace/empty.txt
@@ -0,0 +1 @@
+
diff --git a/assets/UR5moveAndPlace/homeConfig.png b/assets/UR5moveAndPlace/homeConfig.png
new file mode 100644
index 00000000000..b3954bd5957
Binary files /dev/null and b/assets/UR5moveAndPlace/homeConfig.png differ
diff --git a/assets/UR5moveAndPlace/startLocation.png b/assets/UR5moveAndPlace/startLocation.png
new file mode 100644
index 00000000000..3d7d84f21ec
Binary files /dev/null and b/assets/UR5moveAndPlace/startLocation.png differ
diff --git a/assets/UR5moveAndPlace/targetLocation.png b/assets/UR5moveAndPlace/targetLocation.png
new file mode 100644
index 00000000000..4ec1022998f
Binary files /dev/null and b/assets/UR5moveAndPlace/targetLocation.png differ
diff --git a/assets/home-feature.jpg b/assets/home-feature.jpg
new file mode 100644
index 00000000000..09b8e37b355
Binary files /dev/null and b/assets/home-feature.jpg differ
diff --git a/assets/iverNonlinearControl/Config2_traj.jpg b/assets/iverNonlinearControl/Config2_traj.jpg
new file mode 100644
index 00000000000..1e140601f06
Binary files /dev/null and b/assets/iverNonlinearControl/Config2_traj.jpg differ
diff --git a/assets/iverNonlinearControl/Error_config2.jpg b/assets/iverNonlinearControl/Error_config2.jpg
new file mode 100644
index 00000000000..d2f6c724772
Binary files /dev/null and b/assets/iverNonlinearControl/Error_config2.jpg differ
diff --git a/assets/iverNonlinearControl/Iver3HydroLab.png b/assets/iverNonlinearControl/Iver3HydroLab.png
new file mode 100644
index 00000000000..5e007867868
Binary files /dev/null and b/assets/iverNonlinearControl/Iver3HydroLab.png differ
diff --git a/assets/iverNonlinearControl/Thrust_Vel_config2.jpg b/assets/iverNonlinearControl/Thrust_Vel_config2.jpg
new file mode 100644
index 00000000000..bdd3c95b062
Binary files /dev/null and b/assets/iverNonlinearControl/Thrust_Vel_config2.jpg differ
diff --git a/assets/iverNonlinearControl/empty.text b/assets/iverNonlinearControl/empty.text
new file mode 100644
index 00000000000..8b137891791
--- /dev/null
+++ b/assets/iverNonlinearControl/empty.text
@@ -0,0 +1 @@
+
diff --git a/assets/ribbonFin/2Fins.png b/assets/ribbonFin/2Fins.png
new file mode 100644
index 00000000000..215ffa5fd92
Binary files /dev/null and b/assets/ribbonFin/2Fins.png differ
diff --git a/assets/ribbonFin/AfterRotation.png b/assets/ribbonFin/AfterRotation.png
new file mode 100644
index 00000000000..d33529dc5d9
Binary files /dev/null and b/assets/ribbonFin/AfterRotation.png differ
diff --git a/assets/ribbonFin/BodyLine.png b/assets/ribbonFin/BodyLine.png
new file mode 100644
index 00000000000..7b075bb56ae
Binary files /dev/null and b/assets/ribbonFin/BodyLine.png differ
diff --git a/assets/ribbonFin/FinPoints.png b/assets/ribbonFin/FinPoints.png
new file mode 100644
index 00000000000..851805d1bff
Binary files /dev/null and b/assets/ribbonFin/FinPoints.png differ
diff --git a/assets/ribbonFin/FinsBodies.png b/assets/ribbonFin/FinsBodies.png
new file mode 100644
index 00000000000..d2244ff4ec6
Binary files /dev/null and b/assets/ribbonFin/FinsBodies.png differ
diff --git a/assets/ribbonFin/NewRot.png b/assets/ribbonFin/NewRot.png
new file mode 100644
index 00000000000..526bd011680
Binary files /dev/null and b/assets/ribbonFin/NewRot.png differ
diff --git a/assets/ribbonFin/NewTrans.png b/assets/ribbonFin/NewTrans.png
new file mode 100644
index 00000000000..1f3b700ee2e
Binary files /dev/null and b/assets/ribbonFin/NewTrans.png differ
diff --git a/assets/ribbonFin/TRansVsRot.png b/assets/ribbonFin/TRansVsRot.png
new file mode 100644
index 00000000000..f202f8dca91
Binary files /dev/null and b/assets/ribbonFin/TRansVsRot.png differ
diff --git a/assets/ribbonFin/VideoMoreClear.png b/assets/ribbonFin/VideoMoreClear.png
new file mode 100644
index 00000000000..bcae6e2d948
Binary files /dev/null and b/assets/ribbonFin/VideoMoreClear.png differ
diff --git a/assets/ribbonFin/empty.text b/assets/ribbonFin/empty.text
new file mode 100644
index 00000000000..8b137891791
--- /dev/null
+++ b/assets/ribbonFin/empty.text
@@ -0,0 +1 @@
+
diff --git a/assets/rocketLanding/ConvergenceWindow.png b/assets/rocketLanding/ConvergenceWindow.png
new file mode 100644
index 00000000000..3780a7b5f61
Binary files /dev/null and b/assets/rocketLanding/ConvergenceWindow.png differ
diff --git a/assets/rocketLanding/Traj.jpg b/assets/rocketLanding/Traj.jpg
new file mode 100644
index 00000000000..59de13c0110
Binary files /dev/null and b/assets/rocketLanding/Traj.jpg differ
diff --git a/assets/rocketLanding/Velocities.jpg b/assets/rocketLanding/Velocities.jpg
new file mode 100644
index 00000000000..cad7577b3f4
Binary files /dev/null and b/assets/rocketLanding/Velocities.jpg differ
diff --git a/assets/rocketLanding/empty.text b/assets/rocketLanding/empty.text
new file mode 100644
index 00000000000..8b137891791
--- /dev/null
+++ b/assets/rocketLanding/empty.text
@@ -0,0 +1 @@
+
diff --git a/assets/sim_gazebo.png b/assets/sim_gazebo.png
new file mode 100644
index 00000000000..7026495aeab
Binary files /dev/null and b/assets/sim_gazebo.png differ
diff --git a/assets/sim_rviz.png b/assets/sim_rviz.png
new file mode 100644
index 00000000000..10390572062
Binary files /dev/null and b/assets/sim_rviz.png differ
diff --git a/assets/ur5.png b/assets/ur5.png
new file mode 100644
index 00000000000..d58cf0be95e
Binary files /dev/null and b/assets/ur5.png differ
diff --git a/assets/websiteProfile_small.png b/assets/websiteProfile_small.png
new file mode 100644
index 00000000000..d0d8e786480
Binary files /dev/null and b/assets/websiteProfile_small.png differ
diff --git a/index.md b/index.md
new file mode 100644
index 00000000000..64c027c36b1
--- /dev/null
+++ b/index.md
@@ -0,0 +1,8 @@
+---
+layout: page
+title: ""
+---
+
+Hi! I am Nivya. I am a roboticist specialized in Automation and Control. I love solving challenging control problems in robotics. Check out my portfolio!
+
+
diff --git a/index.html b/portfolio.md
similarity index 88%
rename from index.html
rename to portfolio.md
index 0b3329cdd2f..7b6d0592b52 100644
--- a/index.html
+++ b/portfolio.md
@@ -1,6 +1,6 @@
---
layout: default
-title: "Home"
+title: "About me"
---
{% if site.show_excerpts %}