This paper presents two new methods for fast VVC intra-picture encoding. Both build on an approach that uses a CNN for block-adaptive parameter estimation, where the estimated parameters restrict the multi-type tree (MTT) partitionings tested by the encoder. The methods aim to improve this approach by imposing further constraints through additional parameters. However, adding parameters increases the time required for training data generation exponentially, which raises the question of which parameters to add and how. To explore further partitioning restrictions, the first method adds parameters that control the block sizes from which the MTT can start. Although this leads to four parameters, we exploit the fact that some of their combinations are invalid. To investigate whether testing fewer prediction and transform modes is feasible, the second method adds a single parameter that restricts their number jointly. The paper evaluates hypothetical and actual encoding time reductions for VTM-10.2. The first method outperforms both our second method and existing methods: the encoding time decreases by 50% with a bit rate increase of 0.7%.