Abstract:
With the advancement of electronic design automation, continuous-flow microfluidic biochips have become one of the most promising platforms for biochemical experiments. Such a chip manipulates fluid samples at the microliter or nanoliter scale using internal microvalves and microchannels, and can thus automatically perform basic biochemical operations such as mixing and detection. To realize the intended bioassay, the microvalves deployed inside the chip are usually managed by a multiplexer-based control logic, in which valves receive control signals from a core input through control channels for accurate switching. Since biochemical reactions typically require high sensitivity, the length of the control path connecting each valve needs to be reduced to ensure prompt signal propagation and thus a low signal propagation delay. In addition, to reduce the fabrication cost of the chip, a vital issue in the logic architecture design is how to effectively reduce the total channel length within the control logic. To address these issues, this paper proposes a deep reinforcement learning-based control logic routing algorithm that minimizes both the signal propagation delay and the total control channel length, thereby automatically constructing an efficient control channel network. The algorithm employs a Dueling Deep Q-Network as the agent of the deep reinforcement learning framework to evaluate the tradeoff between signal propagation delay and total channel length. Moreover, diagonal channel routing is introduced for the first time in control logic, fundamentally improving the efficiency of valve switching operations and reducing the fabrication cost of the chip. Experimental results demonstrate that the proposed algorithm effectively constructs high-performance and low-cost control logic architectures.
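As a rough illustration of the kind of agent described above (a minimal sketch, not the paper's exact implementation), the code below shows a Dueling Deep Q-Network in PyTorch, where the Q-value is decomposed into a state-value stream and an advantage stream, together with a hypothetical reward that trades off signal propagation delay against total channel length. The state encoding, action set, network sizes, and the weighting factor `alpha` are assumptions introduced only for illustration.

```python
# Minimal Dueling DQN sketch, assuming a PyTorch environment; the routing
# state encoding, action set, and reward weighting `alpha` are illustrative
# assumptions, not the paper's exact design.
import torch
import torch.nn as nn


class DuelingQNetwork(nn.Module):
    def __init__(self, state_dim: int, num_actions: int, hidden: int = 128):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        # State-value stream V(s) and advantage stream A(s, a).
        self.value = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                   nn.Linear(hidden, 1))
        self.advantage = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                       nn.Linear(hidden, num_actions))

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        h = self.feature(state)
        v = self.value(h)       # shape: (batch, 1)
        a = self.advantage(h)   # shape: (batch, num_actions)
        # Dueling aggregation: Q(s, a) = V(s) + (A(s, a) - mean_a A(s, a)).
        return v + a - a.mean(dim=1, keepdim=True)


def routing_reward(delay: float, channel_length: float, alpha: float = 0.5) -> float:
    """Hypothetical reward trading off propagation delay vs. channel length."""
    return -(alpha * delay + (1.0 - alpha) * channel_length)


if __name__ == "__main__":
    net = DuelingQNetwork(state_dim=16, num_actions=8)
    q_values = net(torch.randn(4, 16))        # batch of 4 routing states
    greedy_actions = q_values.argmax(dim=1)   # greedy routing moves
    print(q_values.shape, greedy_actions.tolist())
```

The dueling decomposition lets the agent estimate how good a routing state is independently of the relative merit of each routing move, which is one common motivation for choosing this architecture when a single scalar reward must balance two competing objectives such as delay and channel length.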