Soft mechanical systems comprise stretchable skins, tissue-like appendages, fibers, and fluids, and they exploit material deformation to transmit forces or motion to perform a mechanical task. Such systems may possess infinite degrees of freedom with only finite modes of actuation and sensing, which creates challenges in modeling, design, and control. This thesis explores the use of surrogate models to approximate the complex physics relating the inputs and outputs of a soft mechanical system built from a ubiquitous soft building block known as Fiber Reinforced Elastomeric Enclosures (FREEs). To this end, the thesis is divided into two parts: the first investigates reduced-order models for design, and the second investigates a reinforcement learning (RL) framework for control.

The reduced-order models for design are motivated by the need for repeated, quick, and accurate evaluation of system performance. Two mechanics-based models are investigated: (a) a Pseudo Rigid Body (PRB) model with lumped spring and link elements, and (b) a Homogenized Strain-Induced (HSI) model that can be implemented in a finite element framework. The parameters of the two models are fit either directly to experiments on FREE prototypes or to a robust high-fidelity finite element model. These models capture fundamental design insights by isolating a dyad building block of contracting FREEs that can be configured to obtain either large stroke (displacement) or large force. Furthermore, the thesis proposes a novel building-block-based design framework in which soft FREE actuators are systematically integrated into a compliant system to meet a given motion requirement. The design process is deemed useful for shape-morphing adaptive structures such as airfoils, soft skins, and wearable devices for the upper extremities.

Soft robotic systems such as manipulators are challenging to control because of their flexibility and their ability to undergo large spatial deformations that depend on the external load. The second part of this work focuses on controlling a unique soft continuum arm, the BR2 manipulator, using reinforcement learning. The BR2 manipulator has a parallel architecture with combined bending and torsional modes, and its inherent asymmetry precludes well-defined analytical models of its forward kinematics. Two RL-based frameworks are evaluated on the BR2 manipulator, and their efficacy in carrying out position control using simple state feedback is reported. The results highlight the external-load invariance of the learned control policies, which is a significant factor for deformable continuum arms in applications involving pick-and-place operations. The manipulator is deemed useful for berry harvesting and other agricultural applications.
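As a rough illustration of the surrogate-fitting step described above (fitting reduced-order model parameters to experimental or finite-element data), the sketch below performs a least-squares fit of a lumped-spring stiffness to force-stroke data. The data values, the linear spring form, and the names (prb_force, k, c) are hypothetical placeholders for illustration only, not the thesis's actual PRB formulation.

    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical force-stroke data for a contracting FREE prototype
    # (placeholder values; in practice these would come from experiments or FEA).
    stroke = np.linspace(0.0, 10.0, 11)            # stroke in mm
    force = np.array([0.0, 4.8, 9.1, 13.9, 18.2,   # transmitted force in N
                      23.1, 27.4, 32.0, 36.3, 41.1, 45.2])

    def prb_force(x, k, c):
        # Lumped-spring surrogate: force for a given stroke x,
        # with effective stiffness k (N/mm) and offset c (N).
        # A full PRB model would also include link kinematics; this
        # sketch keeps only the lumped-spring behaviour.
        return k * x + c

    params, _ = curve_fit(prb_force, stroke, force)
    k_fit, c_fit = params
    print(f"fitted stiffness k = {k_fit:.2f} N/mm, offset c = {c_fit:.2f} N")

The fitted parameters could then be reused in repeated design evaluations without rerunning the high-fidelity model.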
Surrogate models for the design and control of soft mechanical systems